
Treat hyperfine parameterized benchmarks as separate benchmarks #472

Open
epompeii opened this issue Aug 1, 2024 · 2 comments
epompeii commented Aug 1, 2024

Currently, hyperfine parameterized benchmarks are all given the same benchmark name.

Parameterized benchmarks should include the parameter name and value in their Bencher benchmark name. They may also be set as tags, once that is implemented: #240

epompeii commented Aug 2, 2024

Correction: parameterized benchmarks are all given the same name only when the --command-name option is used:

-n, --command-name
    Give a meaningful name to a command. This can be specified multiple times if several commands are benchmarked.

See this comment for an example: #471 (comment)

The parameters are available in the JSON output:

"parameters": {
  "time": "0"
}

epompeii commented Dec 9, 2024

Also, when the --parameter-list option is used, only a single result is reported, not one per parameter value as I would have expected.
