
Re-calibration on x86 and update cost parameters #1248

Merged
merged 8 commits into stellar:main
Nov 29, 2023

Conversation

jayz22
Contributor

@jayz22 jayz22 commented Nov 23, 2023

What

Resolves #1246. Also made a minor fix changing the plain `*` to `saturating_mul` for map charges (mentioned in #1141).
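A minimal sketch of what the `saturating_mul` fix guards against (the function name and charge logic here are illustrative, not the actual budget code): a plain `*` on `u64` panics on overflow in debug builds and silently wraps in release builds, whereas `saturating_mul` clamps at `u64::MAX` so an oversized charge simply exhausts the budget.

```rust
// Illustrative sketch, not the actual map-charge code: clamp instead of
// wrapping when the per-entry charge multiplication would overflow.
fn charge_for_map(entries: u64, cost_per_entry: u64) -> u64 {
    entries.saturating_mul(cost_per_entry)
}

fn main() {
    assert_eq!(charge_for_map(3, 10), 30);
    // Overflow saturates at u64::MAX rather than wrapping around.
    assert_eq!(charge_for_map(u64::MAX, 2), u64::MAX);
    println!("ok");
}
```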

These are the first calibration results on x86; previous results were obtained on M1. Overall, most of the cost characteristics stay the same, with some notable changes:

  • The VmInstantiation cpu cost has gone down significantly, which explains the cpu numbers in most test observations dropping by more than half.
  • The arithmetic-related cost centers have gone up; this includes {U,I}{256,128} arithmetic and cryptography functions. This was due to the host observation overhead, which has since been fixed.
  • InvokeVmFunction now assumes the maximum arg size (=32).
  • ValSer has been calibrated using closer-to-worst-case examples.

The mem-related cost parameters are using analytical inputs (not calibrated), same as before.
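The cost parameters above feed a linear cost model. The sketch below illustrates the general shape (the struct and field names are assumptions for illustration, not the actual `budget.rs` types, and the parameter values are made up): each cost center charges a constant term plus a linear term scaled by the input size.

```rust
// Illustrative linear cost model; names and numbers are hypothetical.
struct CostParams {
    const_term: u64,
    lin_term: u64,
}

impl CostParams {
    // Total charge for `input` units, saturating to avoid overflow.
    fn charge(&self, input: u64) -> u64 {
        self.const_term
            .saturating_add(self.lin_term.saturating_mul(input))
    }
}

fn main() {
    // Made-up parameters, purely to show the shape of the model.
    let params = CostParams { const_term: 1000, lin_term: 10 };
    assert_eq!(params.charge(50), 1500);
    println!("ok");
}
```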

raw output

Why

[TODO: Why this change is being made. Include any context required to understand the why.]

Known limitations

@jayz22 jayz22 marked this pull request as draft November 28, 2023 22:23
@jayz22 jayz22 marked this pull request as ready for review November 29, 2023 01:24
@jayz22
Contributor Author

jayz22 commented Nov 29, 2023

Thanks @dmkozh and @graydon for the review. I've refreshed the PR again after your comments. Now it should be ready to go again.

One thing that's been updated since your last review is the calibration of XDR conversions (ValSer, ValDeser). Previously we were treating them like a mem op (using the same analytical numbers as memcpy), because they had been calibrated against a not-too-worst-case input: (de)serializing a large bytes array. I've since updated the samples to a deeply nested structure that tries to resemble the worst case, and have updated the parameters accordingly. The cpu cost parameters are taken directly from calibration; the memory cost parameters are still analytical but based on the calibration result (~a factor of 3 bytes per input byte). I think this is the best we can do, but curious what you guys think.
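The analytical memory parameter described above can be sketched as follows (the function name and the exact constant are assumptions for illustration; the ~3x expansion factor comes from the calibration result mentioned in the comment):

```rust
// Illustrative sketch: memory charged for (de)serialization is modeled
// analytically as a fixed expansion factor per input byte, with the
// factor (~3) taken from the calibration result. Hypothetical code.
fn val_ser_mem_bytes(input_len: u64) -> u64 {
    const EXPANSION_FACTOR: u64 = 3; // ~3 bytes per input byte (calibrated)
    input_len.saturating_mul(EXPANSION_FACTOR)
}

fn main() {
    assert_eq!(val_ser_mem_bytes(100), 300);
    // Saturates instead of overflowing on pathological inputs.
    assert_eq!(val_ser_mem_bytes(u64::MAX), u64::MAX);
    println!("ok");
}
```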

Contributor

@graydon graydon left a comment


Great work, thanks!

soroban-env-host/src/budget.rs — 3 review threads (outdated, resolved)
@graydon graydon added this pull request to the merge queue Nov 29, 2023
Merged via the queue into stellar:main with commit c29b5a1 Nov 29, 2023
10 checks passed
Development

Successfully merging this pull request may close: Recalibration before final build

3 participants