
ci: Track main branch benchmarks with Bencher #1441

Merged 3 commits into tailcallhq:main on Mar 18, 2024

Conversation

@epompeii (Contributor) commented Mar 14, 2024

Summary:
@tusharmath this PR adds tracking of the micro-benchmarks on the main branch.
Once things are up and running, the results will be visible on this public perf page: https://bencher.dev/perf/tailcall

In order for things to work in the tailcallhq/tailcall repo, you will need to:

  1. Sign up for Bencher Cloud.
  2. Accept the invite to join the tailcall organization that I have created (I will send it to you).
  3. Create an API token.
  4. Add BENCHER_API_TOKEN as a repository secret (Repo -> Settings -> Secrets and variables -> Actions -> New repository secret); the sketch below shows how the workflow consumes it.
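
Roughly, the new step looks like this (the adapter and benchmark command here are illustrative, so see the diff for the exact flags):

```yaml
# Install the Bencher CLI, then wrap the benchmark command with `bencher run`.
# `--adapter rust_criterion` and `cargo bench` are illustrative, not the exact diff.
- uses: bencherdev/bencher@main
- name: Track main branch benchmarks with Bencher
  run: |
    bencher run \
      --project tailcall \
      --token '${{ secrets.BENCHER_API_TOKEN }}' \
      --branch main \
      --testbed benchmarking-runner \
      --adapter rust_criterion \
      cargo bench
```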

How do you have the benchmarking-runner set up?
Due to the custom benchmarking-runner used, it is a bit difficult for me to show you things working pre-merge.
The run just gets skipped: https://github.com/epompeii/tailcall/actions/runs/8281704757
If you have any advice on how to make this work on my end, please let me know, and I will try to implement it. I don't want to break your build!

Future work would include transitioning from the custom script currently used to Bencher for relative benchmarking and PR comments; a rough sketch of what that could look like follows.
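
Such a PR-comparison step could look roughly like the following; this is only an illustration of Bencher's PR-comment support, not part of this PR, and the branch wiring is an assumption:

```yaml
# Hypothetical future step (not in this PR): run benchmarks on the PR branch
# and let Bencher post a comparison comment via the GitHub token.
- name: Compare PR benchmarks with Bencher
  run: |
    bencher run \
      --project tailcall \
      --token '${{ secrets.BENCHER_API_TOKEN }}' \
      --branch '${{ github.head_ref }}' \
      --testbed benchmarking-runner \
      --adapter rust_criterion \
      --github-actions '${{ secrets.GITHUB_TOKEN }}' \
      cargo bench
```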

Issue Reference(s):
Relates to: #436
Completes: #1300

Build & Testing:

  • I ran cargo test successfully.
  • I have run ./lint.sh --mode=fix to fix all linting issues raised by ./lint.sh --mode=check.

Checklist:

  • I have added relevant unit & integration tests.
  • I have updated the documentation accordingly.
  • I have performed a self-review of my code.
  • PR follows the naming convention of <type>(<optional scope>): <title>

Summary by CodeRabbit

  • Chores
    • Improved the GitHub workflow by integrating Bencher CLI for running benchmarks with specific configurations.

coderabbitai bot commented Mar 14, 2024

Walkthrough

The update introduces a step in the benchmarking workflow to set up the Bencher CLI tool, followed by executing benchmarks through Bencher with designated configurations and an authentication token. This enhancement aims to streamline benchmark tests, ensuring they are performed efficiently and securely.

Changes

| File(s) | Summary |
| --- | --- |
| .github/workflows/benchmark.yml | Added steps to install the Bencher CLI and run benchmarks with configurations and an auth token. |

🐇✨
In the realm of code, where benchmarks dwell,
A rabbit hopped in, casting a spell.
With a flick and a click, Bencher took its place,
Commands lined up, in a seamless embrace.
"Let's measure," it whispered, under the moon,
For performance, we'll be attuned.
🌟🐾


coderabbitai bot left a comment

Review Status

Actionable comments generated: 0

Configuration used: CodeRabbit UI

Commits reviewed: files that changed from the base of the PR, between f5e0fa4 and 052e700.
Files selected for processing (1)
  • .github/workflows/benchmark.yml (1 hunks)
Additional comments: 1
.github/workflows/benchmark.yml (1)
  • 86-100: The integration of Bencher CLI is a significant addition to the benchmarking workflow. Here are a few observations and recommendations:
  1. Use of bencherdev/bencher@main: It's generally recommended to pin actions to a specific version or commit to avoid unexpected changes. Consider using a tagged version of bencherdev/bencher if available (see the sketch after this list).
  2. Security of BENCHER_API_TOKEN: Ensure that the BENCHER_API_TOKEN secret is properly set up in the repository settings. This is crucial for authentication with Bencher Cloud.
  3. Clarity on --testbed parameter: The --testbed parameter is set to benchmarking-runner. Ensure that this value accurately represents the testbed environment used by Bencher for benchmarking.
  4. Documentation and Comments: Adding comments explaining the purpose of each parameter and step, especially for custom scripts like json_to_md.rs and criterion_compare.rs, would improve maintainability and clarity for future contributors.

Overall, the integration seems well-executed, but consider the above points for refinement.
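
To make the pinning suggestion concrete, a minimal sketch; the tag shown is hypothetical, so check the bencherdev/bencher releases for a real one:

```yaml
# Instead of tracking a moving branch:
- uses: bencherdev/bencher@main
# pin to a release tag or a full commit SHA (this tag is hypothetical):
- uses: bencherdev/bencher@v0.4.0
```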

@epompeii epompeii changed the title Track main branch benchmarks with Bencher ci: Track main branch benchmarks with Bencher Mar 14, 2024
@alankritdabral (Contributor) commented:

> Due to the custom benchmarking-runner used, it is a bit difficult for me to show you things working pre-merge.

You can try using ubuntu-latest instead of benchmarking-runner just for testing purposes, as shown below.
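
For illustration, roughly what that temporary change would look like in the workflow; the job name here is assumed, not taken from the actual file:

```yaml
jobs:
  benchmark:  # job name assumed for illustration
    # Swap the custom self-hosted runner for a GitHub-hosted one while testing:
    runs-on: ubuntu-latest  # instead of: runs-on: benchmarking-runner
```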

@tusharmath (Contributor) commented:

Added the token on GitHub. Should I merge this PR? Can we also track macro benchmarks?

@epompeii (Contributor, Author) commented Mar 15, 2024

> Added the token on GitHub. Should I merge this PR?

I just sent you an invite to the tailcall organization on Bencher.
Once you accept the invite (should be in the email you use for GitHub) everything should work.
Definitely your call, but this should be pretty straightforward.

> Can we also track macro benchmarks?

We could wait for bencherdev/bencher#347 to be implemented.
Though I think it would be best to just get this first step merged and then we can come back and add the macro benchmarks once the wrk adapter is implemented.

github-actions bot commented:

Action required: PR inactive for 2 days.
Status update or closure in 5 days.

github-actions bot added the label "state: inactive" (No current action needed/possible; issue fixed, out of scope, or superseded) Mar 17, 2024
tusharmath enabled auto-merge (squash) March 18, 2024 18:36
tusharmath merged commit e0a7f6c into tailcallhq:main Mar 18, 2024
26 checks passed
amitksingh1490 added a commit that referenced this pull request Mar 19, 2024
epompeii mentioned this pull request Apr 14, 2024