Pi0 and photon #39
Conversation
We don't have the operational capacity to add more pipelines like this. @veprbl Typical jobs (data: trace.json, viewable in https://ui.perfetto.dev): These occupy limited job slots for long periods. It is leading us to lag behind, sometimes by 8 hours, as pipelines try to catch up. That is causing people to ignore benchmarks entirely. Before we add more pipelines, one or more of several things need to happen:
Note in the waterfall graphs above that it takes until 3h45 after the start of the pipeline before the benchmarks even start, and until 6h30 before they complete. That's the backlog I'm talking about... We used to start benchmarks after 30 minutes and complete within 2 hours, with most results available early and only 18x275 DIS NC taking a long time.
Please move these to detector_benchmarks. Thanks.
Regarding performance: most of the items are out of scope for this PR. For this we just need to push the simulation time down as much as possible; hopefully that won't make the benchmark completely useless. We should be able to scale those statistics up later by following some of the steps that @wdconinc mentioned.
@wdconinc To be fair though, the benchmarks in this pull request are expected to run considerably faster than the lambda and sigma ones, since they have electromagnetic rather than hadronic showers, and a larger fraction of the generated events are used in the analysis samples (so the simulated statistics don't need to be as high). These two new benchmarks take a little over an hour running locally on my laptop (which is probably slower than the CI); once the benchmark runs here I can determine how much to reduce the statistics to make it take, say, 30 or 15 minutes. Does this sound reasonable?
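The "determine how much to reduce the statistics" step above is simple linear scaling, since wall time is roughly proportional to event count for a fixed per-event cost. A minimal sketch, with all numbers illustrative (the PR does not state the actual event counts):

```shell
#!/bin/bash
# Scale the benchmark's event count down to hit a target wall time.
# Assumes runtime scales linearly with the number of simulated events.
NEVENTS_OLD=10000   # hypothetical current event count
MEASURED_MIN=60     # hypothetical measured wall time, minutes
TARGET_MIN=30       # desired wall time, minutes
NEVENTS_NEW=$(( NEVENTS_OLD * TARGET_MIN / MEASURED_MIN ))
echo "$NEVENTS_NEW"
```

With the numbers above this halves the statistics; halving the target again (15 minutes) would quarter them.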
…marks into pi0_and_photon
reduce number of events in photon benchmark
reduced the range of momenta of the pi0s in the benchmark
changed the script to remove braces around NEVENTS_GEN
remove absolute file locations
changed directory for mkdir
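The commit "remove braces around NEVENTS_GEN" above is consistent with a clash between bash's `${VAR}` syntax and another templating layer that also interprets `{NAME}` placeholders; the actual mechanism is not stated in the PR, so the following is a hypothetical sketch only:

```shell
#!/bin/bash
# Hypothetical sketch: in plain bash, $NEVENTS_GEN and ${NEVENTS_GEN}
# expand identically, so dropping the braces is safe and avoids any
# collision with a tool that treats {NEVENTS_GEN} as its own placeholder.
NEVENTS_GEN=1000   # hypothetical event count
# before: some_simulation --numberOfEvents ${NEVENTS_GEN}
# after:
echo "would generate $NEVENTS_GEN events"
```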
It's about half an hour for each of these two ZDC electromagnetic-calorimetry benchmarks. I will see what I can do about reducing the stats on the other benchmarks (lambda and sigma in the ZDC, and neutron in the insert) without diminishing their quality; if something can be done, it will be in a future pull request.
Half an hour for the longest job, or half a core-hour for all jobs even if they run simultaneously?
Half an hour for the longest job. Let me know if this is sufficient; if not, I will reduce statistics by a factor of 2.
OK, I will move this to detector_benchmarks.
Briefly, what does this PR introduce?
Adds photon and pi0 benchmarks
What kind of change does this PR introduce?
Please check if this PR fulfills the following:
Does this PR introduce breaking changes? What changes might users need to make to their code?
No.
Does this PR change default behavior?
Yes. Adds new benchmarks.