ci: add performance script to PR jobs #3274
base: main
Conversation
- currently requires running tests to set up the db (waiting to avoid actually running tests and spinning down tests/env) - loses the ability to run k6 against rafiki w/ telemetry
This reverts commit 1a6c0e6.
- in preparation for switching the performance test to target testenv - ensured integration tests pass
- supports reuse in the setup script for performance tests, which will allow running the performance test against testenv - also bumps the graphql package version across the monorepo. added to test-lib, which caused some type errors in backend and auth due to a version mismatch
- not sure if the config output by the script will reflect the options passed in, probably not. The config in the GH PR comment may not be accurate.
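One possible way to make the reported config match what was actually passed is to have the run script forward its flags as k6 environment variables and prefer those in the summary. A sketch only; the variable names here are made up, not part of this PR:

```js
// In the k6 script: prefer values forwarded by the run script
// (e.g. `k6 run -e VUS=4 -e DURATION=1m ...`) over the static `options`
// export, since CLI flags change the actual run but not the exported object.
const reportedVus = __ENV.VUS || options.vus
const reportedDuration = __ENV.DURATION || options.duration
```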
```yaml
- name: Post/Update Performance Test Results as PR Comment
  if: always()
  uses: actions/github-script@v7
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    script: |
```
I am inlining the JavaScript here because I found it easier than passing everything in from the actions context, and it was simpler for development. We can move this to a file and run it with node if we prefer, though.
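For reference, an upsert-style script of the kind described here might look roughly like this. A sketch only: the marker, comment body, and file name are illustrative, not the exact script in this PR:

```js
// Runs inside actions/github-script: `github`, `context`, and `require`
// are provided by the action. Reads the saved k6 summary and upserts a
// single PR comment identified by a hidden marker.
const fs = require('fs')

const marker = '<!-- performance-test-results -->'
const summary = fs.readFileSync('k6-test-summary.txt', 'utf8')
const body = `${marker}\n### Performance Test Results\n\n${summary}`

// Look for a previous results comment on this PR.
const { data: comments } = await github.rest.issues.listComments({
  owner: context.repo.owner,
  repo: context.repo.repo,
  issue_number: context.issue.number
})
const existing = comments.find((c) => c.body.includes(marker))

if (existing) {
  // Update in place so the PR only ever has one results comment.
  await github.rest.issues.updateComment({
    owner: context.repo.owner,
    repo: context.repo.repo,
    comment_id: existing.id,
    body
  })
} else {
  await github.rest.issues.createComment({
    owner: context.repo.owner,
    repo: context.repo.repo,
    issue_number: context.issue.number,
    body
  })
}
```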
```yaml
- name: Run performance tests
  id: perf_test
  run: |
    pnpm --filter performance run-tests:testenv -k --vus 4 --duration 1m | tee performance.log
  continue-on-error: true # Don't fail here, allows showing logs
```
```yaml
- name: Print MASE logs
  if: always()
  run: cat mases.log

- name: Post/Update Performance Test Results as PR Comment
```
There is a grafana/k6 action that runs the tests, exports to Grafana Cloud, and leaves a comment here with a link:
https://grafana.com/blog/2024/07/15/performance-testing-with-grafana-k6-and-github-actions/
I opted for doing it more manually as a start since it was simpler to get going. Even with that action, we might need to manually write/edit a comment if we want the actual metrics in the PR comment (it looks like the action's comment only leaves a link). That could probably be mitigated by setting good thresholds (fail if iterations < x, etc.), using the job purely for pass/fail, and venturing into Grafana for more details when it fails (sketched below).
The free tier does appear fairly generous, and it probably gives us a lot for free, like a centralized view of the history of runs. Open to reworking if we want that, but I think this gives us what we're looking for for now.
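A minimal sketch of what those thresholds could look like in the k6 script. The metric names are k6 built-ins; the limits are placeholders, not tuned values:

```js
// Illustrative thresholds only: the run fails if the error rate, latency,
// or throughput crosses these limits, so the CI job can be pure pass/fail.
export const options = {
  thresholds: {
    http_req_failed: ['rate<0.01'],   // under 1% failed requests
    http_req_duration: ['p(95)<500'], // 95% of requests under 500ms
    iterations: ['rate>5']            // at least 5 iterations per second
  }
}
```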
This is the comment generated: #3274 (comment)
It just upserts the comment with the latest results and test logs.
```yaml
- name: Run performance tests
  id: perf_test
  run: |
    pnpm --filter performance run-tests:testenv -k --vus 4 --duration 1m | tee performance.log
```
What do we want the test config to be?
I opted for 4 VUs and 1m after testing some different values. I think we want to:
- Not make the duration longer than necessary. I don't want to extend the time CI takes to finish if we can help it, so I preferred more VUs for a shorter duration to some extent. There may still be room to increase the duration if we want; this is taking ~3m while the docker builds take 4-5m.
- Reliably complete the tests without failures as a baseline, while approaching the point where we do see issues. I saw some errors at 5 VUs, so I backed off.
I think the hardware is determined by runs-on: ubuntu-22.04 and should always be the same across runs.
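For context, the flags passed in the run step amount to roughly this k6 configuration (CLI flags override whatever the script exports, so this is just the equivalent shape):

```js
// Equivalent k6 options for the CI run: 4 virtual users for 1 minute.
export const options = {
  vus: 4,
  duration: '1m'
}
```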
```js
export function handleSummary(data) {
  const requestsPerSecond = data.metrics.http_reqs.values.rate
  const iterationsPerSecond = data.metrics.iterations.values.rate
  const failedRequests = data.metrics.http_req_failed.values.passes
  const failureRate = data.metrics.http_req_failed.values.rate
  const requests = data.metrics.http_reqs.values.count

  const summaryText = `
**Test Configuration**:
- VUs: ${options.vus}
- Duration: ${options.duration}

**Test Metrics**:
- Requests/s: ${requestsPerSecond.toFixed(2)}
- Iterations/s: ${iterationsPerSecond.toFixed(2)}
- Failed Requests: ${failureRate.toFixed(2)}% (${failedRequests} of ${requests})
`

  return {
    // Preserve standard output w/ textSummary
    stdout: textSummary(data, { enableColors: false }),
    'k6-test-summary.txt': summaryText // saves to file
  }
}
```
This is how we're saving the results for use in the comment. The stdout key preserves the normal output, and k6-test-summary.txt saves the details we want to report.
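One note on the snippet: textSummary is not a k6 built-in, so this assumes an import from the k6 jslib along these lines (the pinned version is illustrative):

```js
// textSummary comes from the k6 jslib, not from k6 core.
import { textSummary } from 'https://jslib.k6.io/k6-summary/0.0.1/index.js'
```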
Lots of changes here, but it basically just makes the target environment configurable.
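As a rough sketch of what "configurable" means on the k6 side, assuming the host names from the env files below are passed through as environment variables (the fallback values here are illustrative, not the exact wiring in this PR):

```js
// Hosts come from the selected config file (local.env / test.env) and are
// read in the k6 script via __ENV; the fallbacks are only illustrative.
const C9_BACKEND_HOST = __ENV.C9_BACKEND_HOST || 'cloud-nine-wallet-backend'
const HLB_BACKEND_HOST = __ENV.HLB_BACKEND_HOST || 'happy-life-bank-backend'
const C9_AUTH_HOST = __ENV.C9_AUTH_HOST || 'cloud-nine-wallet-auth'
```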
test/performance/README.md
and a process for the Mock Account Servicing Entities:

```sh
pnpm --filter perfromance start-mases
```
typo: perfromance -> performance
test/performance/config/local.env
```
@@ -0,0 +1,8 @@
C9_BACKEND_HOST="cloud-nine-wallet-backend"
HLB_BACKEND_HOST="happy-life-bank-backend"
C9_AUTH_HOST="cloud-nine-wallet-backend"
```
Isn't the auth host here supposed to be (cloud-nine-wallet/happy-life-bank)-auth?
Yes, fixed. Thanks.
test/performance/config/test.env
```
@@ -0,0 +1,8 @@
C9_BACKEND_HOST="cloud-nine-wallet-test-backend"
HLB_BACKEND_HOST="happy-life-bank-test-backend"
C9_AUTH_HOST="cloud-nine-wallet-test-backend"
```
Changes proposed in this pull request

- MockASE handles seeding and hosting the integration endpoints required for rafiki to properly function (rates, webhooks). It lives in the integration tests in main, so I moved it out for re-use when running performance tests.
- Performance tests in main run against our localenv. I changed this to be configurable by parameterizing the run script and test and reading from a config to get the correct environment details (network name, urls). So now you can run performance tests locally against the testenv without interfering with your localenv, and you can still run against localenv if you want to use the grafana dashboard.

Context

fixes: #3240

Checklist

- fixes #number
- user-docs label (if necessary)