feat: enables selection and execution of specific e2e benchmark tests #3595
Conversation
Walkthrough

The changes introduce the ability to select and run specific end-to-end (e2e) benchmark tests in the Celestia project. A new structure and logging mechanism have been implemented to handle multiple tests, providing better modularity and control over the test execution process.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Main as main.go
    participant Throughput as throughput.go
    participant Logger
    User->>Main: Run test with arguments
    Main->>Logger: Initialize logger
    Main->>Main: Process arguments
    alt Specific test
        Main->>Throughput: Execute specified test
    else All tests
        Main->>Main: Loop through all tests
        Main->>Throughput: Execute each test
    end
    Throughput->>Logger: Output test result
    Logger-->>User: Display result
```
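The argument-handling flow in the diagram can be sketched in Go roughly as below. This is a minimal, self-contained illustration only, not the PR's actual main.go: the `tests` registry, the `runTest` helper, and the `E2EThroughput` stub are assumed names introduced for the example.

```go
package main

import (
	"log"
	"os"
)

// testFunc is the signature shared by all benchmark tests in this sketch.
type testFunc func(logger *log.Logger) error

// tests maps a test name to its implementation. In the real benchmark suite
// the throughput test lives in throughput.go; here it is stubbed below.
// (Illustrative registry, not the PR's actual identifiers.)
var tests = map[string]testFunc{
	"E2EThroughput": E2EThroughput,
}

func main() {
	logger := log.New(os.Stdout, "test-e2e-benchmark: ", log.LstdFlags)

	// If a test name was passed as an argument, run only that test;
	// otherwise loop through and run every registered test.
	if len(os.Args) > 1 {
		name := os.Args[1]
		fn, ok := tests[name]
		if !ok {
			logger.Fatalf("unknown test: %s", name)
		}
		runTest(logger, name, fn)
		return
	}
	for name, fn := range tests {
		runTest(logger, name, fn)
	}
}

// runTest executes a single test and logs its result.
func runTest(logger *log.Logger, name string, fn testFunc) {
	logger.Printf("--- RUN: %s", name)
	if err := fn(logger); err != nil {
		logger.Fatalf("--- FAIL: %s: %v", name, err)
	}
	logger.Printf("--- PASS: %s", name)
}

// E2EThroughput stands in for the throughput benchmark; the real one is
// defined in throughput.go and drives an actual network.
func E2EThroughput(logger *log.Logger) error {
	logger.Println("running throughput benchmark (stub)")
	return nil
}
```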
Assessment against linked issues
Actionable comments posted: 2
👍
Personally, I don't think we should run all benchmarks by default. Unlike the correctness e2e tests, I think which benchmarks to run should always be explicitly specified. It doesn't really matter at the moment, since we just have a single benchmark.
Closes #3588
Now, you can pass the test name as an argument:
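For example, an invocation could look like the following; the package path and the `E2EThroughput` test name are assumptions for illustration, not taken from the PR itself:

```shell
go run ./test/e2e/benchmark E2EThroughput
```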
And at the end you will see the test result in the log output.