Replies: 1 comment 1 reply
Hey Bernardo! Thanks for reaching out! We love the idea! So much, in fact, that we've already implemented some of it 😄

As a first way of integrating with NannyML, we've implemented an export of our results to databases such as PostgreSQL, MySQL, SQL Server, ... You can then have Grafana connect to that database and retrieve the metrics from there. Even better news: we have some fully containerized examples for you to browse at https://github.com/NannyML/examples.

I had thought about going the OpenTelemetry and Prometheus route as well, but it seemed like a much more specialized use case, so I prioritized the good old database instead. That approach does come with some additional operational complexity, since it requires running an additional database. Just having NannyML expose metrics might be a lot simpler to deploy in a context like Kubernetes. Are there any reasons for you to look at Prometheus and OpenTelemetry specifically? Or does the database export take care of your current needs?

As for the implementation suggestion: I think a subcommand for ... This is not a huge deal, but it probably requires communication between threads etc., so it might be more complex than it seems at first sight.

Love to get your thoughts on this!
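To make the database-export flow above concrete, here is a minimal sketch of the idea: monitoring results land in a relational table, and Grafana panels query that table. SQLite stands in for PostgreSQL/MySQL/SQL Server so the example is self-contained, and the table layout and row values are illustrative assumptions, not NannyML's actual export schema.

```python
# Sketch of the database-export flow, assuming a simple metrics table.
# SQLite is used here only so the example runs standalone; the real export
# would target PostgreSQL, MySQL, SQL Server, etc.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE metrics (
           model_name TEXT,
           metric_name TEXT,
           value REAL,
           start_timestamp TEXT,
           end_timestamp TEXT
       )"""
)

# In the real flow these rows would come from a monitoring run's results
# (values below are made up for illustration).
rows = [
    ("churn_model", "estimated_roc_auc", 0.91, "2023-01-01", "2023-01-08"),
    ("churn_model", "estimated_roc_auc", 0.87, "2023-01-08", "2023-01-15"),
]
conn.executemany("INSERT INTO metrics VALUES (?, ?, ?, ?, ?)", rows)

# A Grafana panel would run a query much like this one against the database.
latest = conn.execute(
    "SELECT value FROM metrics ORDER BY end_timestamp DESC LIMIT 1"
).fetchone()
print(latest[0])  # value from the most recent window
```

The operational cost mentioned above is visible here: the database is an extra piece of infrastructure that has to be provisioned and kept running alongside the monitoring job.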
Hey! I would love to contribute to this feature, but I could not find any pull request associated with this topic. If there is one, I would love to check it out.
If there isn't, I would love to hear your ideas on how to implement this. In broad lines, I am thinking of a `nml` CLI to build a Docker image with the instrumented code... Does this sound good? Does this sound bad? I would love to discuss.
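For the "expose metrics instead of exporting them" alternative raised in this thread, here is a rough sketch of what a Prometheus-scrapable endpoint inside the instrumented process could look like. The endpoint path, metric name, and value are illustrative assumptions, not an existing NannyML feature; it also shows why the reply above mentions cross-thread communication, since the HTTP server runs in its own thread.

```python
# Hedged sketch: serve monitoring metrics in Prometheus text format on
# /metrics. Metric names and values are made up for illustration.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# In a real setup the monitoring loop would update this from another thread.
METRICS = {"nannyml_estimated_roc_auc": 0.91}

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = "".join(f"{name} {value}\n" for name, value in METRICS.items())
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):  # keep the example quiet
        pass

# Port 0 asks the OS for a free port; the server runs in a daemon thread.
server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A Prometheus server would scrape this URL on a schedule.
with urlopen(f"http://127.0.0.1:{server.server_port}/metrics") as resp:
    text = resp.read().decode()
print(text)
server.shutdown()
```

Deployment-wise this fits Kubernetes well: no extra database to run, just a container port annotated for scraping.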