Add confidence intervals for benchmarks #158
Comments
Hello, I would love to work on this issue. Please can I be assigned? :)
@Zeesky-code sure go ahead :)
I'm sorry for the delay on this issue; I haven't had time to work on it yet. To avoid holding it up, I'll unassign myself and give someone else the opportunity to take over. Sorry for any inconvenience caused :(
Can I take a shot at it?
@gavincrawford Sure, go ahead! :) EDIT: The benchmark visualization page code can be found in this repo: https://github.com/boa-dev/boa-dev.github.io/
I'll transfer the issue to the other repo, since it doesn't make sense to have it here anymore.
Looking for some clarification on the "confidence intervals". Unless I'm missing something, those aren't present in the data hosted on the data repository, meaning displaying them isn't possible without a revision of the code that updates those JSON files. Other than that, I've gotten the filler plugin up and running, with the absolute difference from the average used as the fill parameter, just for demonstration purposes. Thoughts?
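For reference, a rough sketch of what that "absolute difference from the average" demonstration band could look like. The shape of the hosted benchmark JSON and the field names are assumed here purely for illustration, not taken from the actual data repository:

```ts
// Illustrative only: the shape of the hosted benchmark JSON is assumed.
interface BenchPoint {
  commit: string; // commit hash the benchmark ran against
  value: number;  // reported timing, e.g. nanoseconds per iteration
}

// Build a placeholder "band" around each point using the absolute difference
// from the series average, since real confidence bounds are not in the data yet.
function demoBand(points: BenchPoint[]): { upper: number[]; lower: number[] } {
  const mean = points.reduce((sum, p) => sum + p.value, 0) / points.length;
  const upper = points.map((p) => p.value + Math.abs(p.value - mean));
  const lower = points.map((p) => p.value - Math.abs(p.value - mean));
  return { upper, lower };
}
```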
Stale. I don't think this issue can be completed unless someone clarifies where this data should be pulled from, as it isn't present in the source we're using right now. I'm unassigning myself until further clarification.
Will mark it as blocked: I realized we previously had confidence intervals, but after moving to another benchmarking test suite we no longer have that data.
Currently, our benchmark graphs are a bit messy: they show scattered points and a huge amount of noise. Criterion gives us nice confidence intervals that we could use. Have a look at how to implement them; you might find inspiration here: chartjs/Chart.js#6899
In the process, we might want to clean up that page a bit: stop using big "dots" in the graphs, and maybe show up to two graphs per row so the page isn't so long. It might also be worth adding some explanation of what each benchmark checks.
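For whoever picks this up: a minimal sketch of the fill-between-datasets approach discussed in chartjs/Chart.js#6899, assuming the per-commit lower/upper confidence bounds from Criterion were available alongside the means. The element id, colours, and variable names are placeholders, not the actual page code:

```ts
import Chart from 'chart.js/auto';

// Hypothetical inputs: per-commit mean timings plus lower/upper confidence
// bounds, all in the same unit (e.g. nanoseconds per iteration).
declare const commits: string[];
declare const means: number[];
declare const lowerBound: number[];
declare const upperBound: number[];

const ctx = document.getElementById('benchmark-chart') as HTMLCanvasElement;

new Chart(ctx, {
  type: 'line',
  data: {
    labels: commits,
    datasets: [
      // The mean line, drawn with small point markers instead of big dots.
      { label: 'mean', data: means, borderColor: '#3677c9', pointRadius: 1, fill: false },
      // Upper bound: invisible line that only serves as a fill target.
      { label: 'upper', data: upperBound, borderColor: 'transparent', pointRadius: 0, fill: false },
      // Lower bound: fills up to the previous dataset (the upper bound),
      // producing a shaded confidence band via the built-in filler plugin.
      {
        label: 'lower',
        data: lowerBound,
        borderColor: 'transparent',
        backgroundColor: 'rgba(54, 119, 201, 0.2)',
        pointRadius: 0,
        fill: '-1',
      },
    ],
  },
  options: {
    plugins: { legend: { display: false } },
  },
});
```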