
[pull] master from paritytech:master #75

Open · wants to merge 39 commits into base: master

Conversation


pull[bot] commented Jan 26, 2025

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.1)

Can you help keep this open source service alive? 💖 Please sponsor : )

pull[bot] added the ⤵️ pull label Jan 26, 2025
dmitry-markin and others added 28 commits January 27, 2025 12:29
…rd-compatible) (#7344)

Revert #7011 and replace
it with a backward-compatible solution suitable for backporting to a
release branch.

### Review notes
It's easier to review this PR per commit: the first commit is just a
revert, so it's enough to review only the second one, which is almost a
one-liner.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
closes #5978

---------

Co-authored-by: command-bot <>
Co-authored-by: Michal Kucharczyk <[email protected]>
…mni Node compatibility (#6529)

# Description

This PR adds development chain specs for the minimal and parachain
templates.
[#6334](#6334)


## Integration

This PR adds development chain specs for the minimal and parachain
template runtimes, ensuring synchronization with runtime code. It
updates the zombienet-omni-node.toml and zombienet.toml files to include
valid chain spec paths, simplifying the zombienet configuration for the
parachain and minimal templates.

## Review Notes

1. Overview of Changes:
- Added development chain specs for use in the minimal and parachain
template.
- Updated the zombienet-omni-node.toml and zombienet.toml files in the
minimal and parachain templates to include paths to the new dev chain
specs.

2. Integration Guidance:
**NB: Follow the templates' READMEs from the polkadot-SDK master branch.
Please build the binaries and runtimes based on the polkadot-SDK master
branch.**
- Ensure you have set up your runtimes `parachain-template-runtime` and
`minimal-template-runtime`
- Ensure you have installed the required nodes, i.e.
`parachain-template-node` and `minimal-template-node`
- Set up [Zombienet](https://paritytech.github.io/zombienet/intro.html)
- For running the parachains, you will need to install `polkadot`
(`cargo install --path polkadot`); remember to build it from the
polkadot-SDK master branch.
- Inside the minimal or parachain template folder, run the commands for
`Zombienet with Omni Node`, `Zombienet with
minimal-template-node` or `Zombienet with parachain-template-node`

Leftover TODOs:
* [ ] Test the syncing of chain specs with runtime's code.

---------

Signed-off-by: EleisonC <[email protected]>
Co-authored-by: Iulian Barbu <[email protected]>
Co-authored-by: Alexander Samusev <[email protected]>
# Description
Migrated polkadot-runtime-parachains slots benchmarking to the new
benchmarking syntax v2.
This is part of #6202

---------

Co-authored-by: Giuseppe Re <[email protected]>
Co-authored-by: seemantaggarwal <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
Resolves (partially):
#7148 (see _Problem 1 -
`ShouldExecute` tuple implementation and `Deny` filter tuple_)

This PR changes the behavior of `DenyThenTry` from the pattern
`DenyIfAllMatch` to `DenyIfAnyMatch` for the tuple.

I would expect the latter to be the right behavior, so the fix is made in
place, but we could also add a dedicated impl and leave the legacy
behavior untouched.
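
For illustration only, here is a minimal sketch of the semantic change, modelling the filters as plain closures rather than the real `ShouldExecute` tuple types:

```rust
// Illustrative only: models the old vs. new tuple semantics with plain
// closures instead of the actual `ShouldExecute` trait and XCM types.
fn deny_if_all<M>(filters: &[Box<dyn Fn(&M) -> bool>], msg: &M) -> bool {
    // old behavior: the tuple denies only if every member denies
    filters.iter().all(|deny| deny(msg))
}

fn deny_if_any<M>(filters: &[Box<dyn Fn(&M) -> bool>], msg: &M) -> bool {
    // new behavior: the tuple denies as soon as any member denies
    filters.iter().any(|deny| deny(msg))
}

fn main() {
    let filters: Vec<Box<dyn Fn(&u32) -> bool>> = vec![
        Box::new(|m: &u32| *m == 1), // denies message 1
        Box::new(|m: &u32| *m == 2), // denies message 2
    ];
    // Message 1 is denied by only one filter: the old semantics let it
    // through, the new semantics deny it.
    assert!(!deny_if_all(&filters, &1));
    assert!(deny_if_any(&filters, &1));
}
```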

## TODO
- [x] add unit-test for `DenyReserveTransferToRelayChain`
- [x] add test and investigate/check `DenyThenTry` as discussed
[here](#6838 (comment))
and update documentation if needed

---------

Co-authored-by: Branislav Kontur <[email protected]>
Co-authored-by: Francisco Aguirre <[email protected]>
Co-authored-by: command-bot <>
Co-authored-by: Clara van Staden <[email protected]>
Co-authored-by: Adrian Catangiu <[email protected]>
…aling (#6983)

On top of #6757

Fixes #6858 by bumping
the `PARENT_SEARCH_DEPTH` constant to a larger value (30) and adds a
zombienet-sdk test that exercises the 12-core scenario.

This is a node-side limit that restricts the number of allowed pending
availability candidates when choosing the parent parablock during
authoring.
This limit is rather redundant, as the parachain runtime already
restricts the unincluded segment length to the configured value in the
[FixedVelocityConsensusHook](https://github.com/paritytech/polkadot-sdk/blob/88d900afbff7ebe600dfe5e3ee9f87fe52c93d1f/cumulus/pallets/aura-ext/src/consensus_hook.rs#L35)
(which ideally should be equal to this `PARENT_SEARCH_DEPTH`).

For 12 cores, a value of 24 should be enough, but I bumped it to 30 to
have some extra buffer.
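
A rough sketch (names and the surrounding logic are made up for illustration, not the actual cumulus code) of how such a node-side depth limit bounds parent selection during authoring:

```rust
// Hypothetical sketch: the real constant lives in the parachain consensus
// code; the function below is illustrative only.
const PARENT_SEARCH_DEPTH: usize = 30; // bumped to cover the 12-core scenario

/// Keep at most `PARENT_SEARCH_DEPTH` pending-availability ancestors as
/// potential parents when authoring the next block.
fn potential_parents(pending_ancestry: &[u64]) -> &[u64] {
    let take = pending_ancestry.len().min(PARENT_SEARCH_DEPTH);
    &pending_ancestry[..take]
}

fn main() {
    let ancestry: Vec<u64> = (0..40).collect();
    assert_eq!(potential_parents(&ancestry).len(), PARENT_SEARCH_DEPTH);
}
```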

There are two other potential ways of fixing this:
- remove this constant altogether, as the parachain runtime already
makes those guarantees. I chose not to do this, as it can't hurt to have
an extra safeguard
- set this value to be equal to the unincluded segment size. This value
however is not exposed to the node-side and would require a new runtime
API, which seems overkill for a redundant check.

---------

Co-authored-by: Javier Viola <[email protected]>
This PR changes how we call runtime API methods with more than 6
arguments: They are no longer spilled to the stack but packed into
registers instead. Pointers are 32 bits wide, so we can pack two of them
into a single 64-bit register. Since we mostly pass pointers, this
technique effectively increases the number of arguments we can pass
using the available registers.

To make this work for `instantiate` too we now pass the code hash and
the call data in the same buffer, akin to how the `create` family
opcodes work in the EVM. The code hash is fixed in size, implying the
start of the constructor call data.
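
A minimal sketch of the packing trick, assuming the low 32 bits hold the first pointer and the high 32 bits the second (the actual ABI layout may differ):

```rust
// Illustrative only: pack two 32-bit guest pointers into one 64-bit register
// value and unpack them on the other side.
fn pack(ptr_a: u32, ptr_b: u32) -> u64 {
    (ptr_a as u64) | ((ptr_b as u64) << 32)
}

fn unpack(reg: u64) -> (u32, u32) {
    (reg as u32, (reg >> 32) as u32)
}

fn main() {
    let packed = pack(0x1000, 0x2000);
    assert_eq!(unpack(packed), (0x1000, 0x2000));
}
```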

---------

Signed-off-by: xermicus <[email protected]>
Signed-off-by: Cyrill Leutwiler <[email protected]>
Co-authored-by: command-bot <>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Alexander Theißen <[email protected]>
Closes #216.

This PR allows pallets to define a `view_functions` impl like so:

```rust
#[pallet::view_functions]
impl<T: Config> Pallet<T>
where
	T::AccountId: From<SomeType1> + SomeAssociation1,
{
	/// Query value no args.
	pub fn get_value() -> Option<u32> {
		SomeValue::<T>::get()
	}

	/// Query value with args.
	pub fn get_value_with_arg(key: u32) -> Option<u32> {
		SomeMap::<T>::get(key)
	}
}
```
### `QueryId`

Each view function is uniquely identified by a `QueryId`, which for this
implementation is generated by:

```twox_128(pallet_name) ++ twox_128("fn_name(fnarg_types) -> return_ty")```

The prefix `twox_128(pallet_name)` is the same as the storage prefix for pallets and takes into account multiple instances of the same pallet.

The suffix is generated from the fn type signature, so it is guaranteed to be unique for that pallet impl. For one of the view fns in the example above it would be `twox_128("get_value_with_arg(u32) -> Option<u32>")`. It is a known limitation that only the type names themselves are taken into account: in the case of type aliases the signature may have the same underlying types but a different id; for generics the concrete types may be different but the signatures will remain the same.
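
For illustration, a sketch of this id construction, assuming the `twox_128` helper from `sp_core` is available as a dependency; the `query_id` helper itself is hypothetical:

```rust
// Sketch of the QueryId construction described above. The prefix routes to
// the pallet, the suffix routes to the specific view function.
use sp_core::hashing::twox_128;

/// Hypothetical helper: concatenates the pallet prefix and the signature suffix.
fn query_id(pallet_name: &str, fn_signature: &str) -> [u8; 32] {
    let prefix = twox_128(pallet_name.as_bytes());
    let suffix = twox_128(fn_signature.as_bytes());
    let mut id = [0u8; 32];
    id[..16].copy_from_slice(&prefix);
    id[16..].copy_from_slice(&suffix);
    id
}

fn main() {
    // For the example view function above, the suffix hashes the textual signature.
    let id = query_id("SomePallet", "get_value_with_arg(u32) -> Option<u32>");
    assert_eq!(id.len(), 32);
}
```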

The existing Runtime `Call` dispatchables are addressed by their concatenated indices `pallet_index ++ call_index`, and the dispatching is handled by the SCALE decoding of the `RuntimeCallEnum::PalletVariant(PalletCallEnum::dispatchable_variant(payload))`. For `view_functions` the runtime/pallet generated enum structure is replaced by implementing the `DispatchQuery` trait on the outer (runtime) scope, dispatching to a pallet based on the id prefix, and the inner (pallet) scope dispatching to the specific function based on the id suffix.

Future implementations could also modify/extend this scheme and its routing to support pallet-agnostic queries.

### Executing externally

These view functions can be executed externally via the system runtime api:

```rust
pub trait ViewFunctionsApi<QueryId, Query, QueryResult, Error> where
	QueryId: codec::Codec,
	Query: codec::Codec,
	QueryResult: codec::Codec,
	Error: codec::Codec,
{
	/// Execute a view function query.
	fn execute_query(query_id: QueryId, query: Query) -> Result<QueryResult, Error>;
}
```
### `XCQ`
Currently there is work going on by @xlc to implement [`XCQ`](https://github.com/open-web3-stack/XCQ/) which may eventually supersede this work.

It may be that we still need the fixed function local query dispatching in addition to XCQ, in the same way that we have chain specific runtime dispatchables and XCM.

I have kept this in mind and the high level query API is agnostic to the underlying query dispatch and execution. I am just providing the implementation for the `view_function` definition.

### Metadata
Currently I am utilizing the `custom` section of the frame metadata, to avoid modifying the official metadata format until this is standardized.

### vs `runtime_api`
There are similarities with `runtime_apis`, some differences being:
- queries can be defined directly on pallets, so no need for boilerplate declarations and implementations
- no versioning, the `QueryId` will change if the signature changes. 
- possibility for queries to be executed from smart contracts (see below)

### Calling from contracts
Future work would be to add `weight` annotations to the view function queries, and a host function to `pallet_contracts` to allow executing these queries from contracts.

### TODO

- [x] Consistent naming (view functions pallet impl, queries, high level api?)
- [ ] End to end tests via `runtime_api`
- [ ] UI tests
- [x] Metadata tests
- [ ] Docs

---------

Co-authored-by: kianenigma <[email protected]>
Co-authored-by: James Wilson <[email protected]>
Co-authored-by: Giuseppe Re <[email protected]>
Co-authored-by: Guillaume Thiolliere <[email protected]>
The old error message was often confusing, because the real reason for
the error will be printed during inherent execution.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…al addresses (#7338)

Instead of using libp2p-provided external address candidates,
susceptible to address translation issues, use the litep2p-backend approach
based on confirming addresses observed by multiple peers as external.

Fixes #7207.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
- removed the old bench from cmd.py and left an alias for backward compatibility
- reverted the frame-weight-template, as the problem was that the umbrella
template wasn't picked correctly by the old benchmarks; in
frame-omni-bench it correctly identifies the dependencies and uses the
correct template

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
This PR modifies `named_reserve()` in frame-balances to use checked math
instead of defensive saturating math.

The use of saturating math relies on the assumption that the sum of the
values will always fit in `u128::MAX`. However, there is nothing
preventing the implementing pallet from passing a larger value which
overflows. This can happen if the implementing pallet does not validate
user input and instead relies on `named_reserve()` to return an error
(this saves an additional read).

This is not a security concern, as the method will subsequently return
an error thanks to `<Self as ReservableCurrency<_>>::reserve(who,
value)?;`. However, the `defensive_saturating_add` will panic in
`--all-features`, creating false positive crashes in fuzzing operations.
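
A simplified sketch of the difference, using bare `u128` arithmetic rather than the actual frame-balances internals:

```rust
// Simplified illustration: the real code operates on the pallet's reserve
// data, not bare u128s, and the defensive variant additionally debug-asserts.
fn reserve_saturating(total_reserved: u128, value: u128) -> u128 {
    // old approach: silently clamps at u128::MAX (and panics under debug
    // assertions via the defensive variant), assuming overflow cannot happen
    total_reserved.saturating_add(value)
}

fn reserve_checked(total_reserved: u128, value: u128) -> Result<u128, &'static str> {
    // new approach: overflow becomes an ordinary error the caller can handle
    total_reserved.checked_add(value).ok_or("arithmetic overflow")
}

fn main() {
    assert_eq!(reserve_saturating(u128::MAX, 1), u128::MAX);
    assert!(reserve_checked(u128::MAX, 1).is_err());
    assert_eq!(reserve_checked(1, 2), Ok(3));
}
```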

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
This PR implements the block author API method. Runtimes ought to
implement it such that it corresponds to the `coinbase` EVM opcode.

---------

Signed-off-by: xermicus <[email protected]>
Signed-off-by: Cyrill Leutwiler <[email protected]>
Co-authored-by: command-bot <>
Co-authored-by: Alexander Theißen <[email protected]>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
# Description

Migrating cumulus-pallet-session-benchmarking to the new benchmarking
syntax v2.
This is a part of #6202

---------

Co-authored-by: seemantaggarwal <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
This PR contains small fixes and backwards compatibility issues
identified during work on the larger PR:
#6906.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Related to:
#7295 (comment)

---------

Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: Adrian Catangiu <[email protected]>
…rks and testing (#7379)

# Description

Currently, benchmarks and tests on pallet_balances fail when the
insecure_zero_ed feature is enabled. This PR allows such benchmarks and
tests to run, taking into account the fact that accounts are
not deleted when their balance goes below a threshold.

# Checklist

* [x] My PR includes a detailed description as outlined in the
"Description" and its two subsections above.
* [x] My PR follows the [labeling requirements](

https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process
) of this project (at minimum one label for `T` required)
* External contributors: ask maintainers to put the right label on your
PR.
* [x] I have made corresponding changes to the documentation (if
applicable)
* [x] I have added tests that prove my fix is effective or that my
feature works (if applicable)


---------

Co-authored-by: Rodrigo Quelhas <[email protected]>
# Description

Close #7122.

This PR replaces the unmaintained `derivative` dependency with
`derive-where`.

## Integration

This PR doesn't change the public interfaces.

## Review Notes

The `derivative` crate, previously used to derive basic traits for
structs with generics or enums, is no longer actively maintained. It has
been replaced with the `derive-where` crate, which offers a more
straightforward syntax while providing the same features as
`derivative`.

---------

Co-authored-by: Guillaume Thiolliere <[email protected]>
- added 3 links for subweight comparison: now, the release from ~1 month ago,
and the release tag from ~3 months ago
- added the `--3way --ours` flags to `git apply` to resolve potential
conflicts
- stick to the weekly branch from start to end, to prevent a
race condition with conflicts
This PR modifies the fatxpool to use `tracing` instead of `log` for logging.
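
A hedged before/after sketch of the kind of change involved, assuming the `log` and `tracing` crates; the target name and call sites are illustrative, not the actual fatxpool code:

```rust
// Illustrative only: shows the general log -> tracing swap, not the exact
// call sites in the fork-aware transaction pool.
const LOG_TARGET: &str = "txpool";

fn old_style(tx_hash: &str) {
    log::debug!(target: LOG_TARGET, "submitting transaction {}", tx_hash);
}

fn new_style(tx_hash: &str) {
    // structured field instead of string interpolation
    tracing::debug!(target: LOG_TARGET, ?tx_hash, "submitting transaction");
}

fn main() {
    old_style("0xabc");
    new_style("0xabc");
}
```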

closes #5490

Polkadot address: 12GyGD3QhT4i2JJpNzvMf96sxxBLWymz4RdGCxRH5Rj5agKW

---------

Co-authored-by: Michal Kucharczyk <[email protected]>
…n to all backing groups (#6924)

## Issues
- [[#5049] Elastic scaling: zombienet
tests](#5049)
- [[#4526] Add zombienet tests for malicious
collators](#4526)

## Description
Modified the undying collator to include a malus mode, in which it
submits the same collation to all assigned backing groups.

## TODO
* [X] Implement malicious collator that submits the same collation to
all backing groups;
* [X] Avoid the core index check in the collation generation subsystem:
https://github.com/paritytech/polkadot-sdk/blob/master/polkadot/node/collation-generation/src/lib.rs#L552-L553;
* [X] Resolve the mismatch between the descriptor and the commitments
core index: #7104
* [X] Implement `duplicate_collations` test with zombienet-sdk;
* [X] Add PRdoc.
This should fix the error log related to PoV pre-dispatch weight being
lower than post-dispatch for `ParasInherent`:
```
ERROR tokio-runtime-worker runtime::frame-support: Post dispatch weight is greater than pre dispatch weight. Pre dispatch weight may underestimating the actual weight. Greater post dispatch weight components are ignored.
                                        Pre dispatch weight: Weight { ref_time: 47793353978, proof_size: 1019 },
                                        Post dispatch weight: Weight { ref_time: 5030321719, proof_size: 135395 }
```

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
This PR backports regular version bumps and prdoc reorganization from the
stable release branch back to master.
# Description

There is a small error (which slipped through reviews) in matrix
strategy expansion which results in errors like this:
https://github.com/paritytech/polkadot-sdk/actions/runs/13079943579/job/36501002368.

## Integration

N/A

## Review Notes

Need to fix this in master and then rerun it manually against
`stable2412-1`.

Signed-off-by: Iulian Barbu <[email protected]>
Part of #5079.

Removes all usage of the static async backing params, replacing them
with dynamically computed equivalent values (based on the claim queue
and scheduling lookahead).

Adds a new runtime API for querying the scheduling lookahead value. If
not present, it falls back to 3 (the default value that is backwards
compatible with the values we have on production networks for
`allowed_ancestry_len`).

Also resolves most of
#4447, removing code
that handles async backing not yet being enabled.
While doing this, I removed the support for collation protocol version 1
on collators, as it only worked for leaves not supporting async backing
(of which there are none).
I also unhooked the legacy v1 statement-distribution (for the same
reason as above). That subsystem is basically dead code now, so I had to
remove some of its tests as they would no longer pass (since the
subsystem no longer sends messages to the legacy variant). I did not
remove the entire legacy subsystem yet, as that would pollute this PR
too much. We can remove the entire v1 and v2 validation protocols in a
follow up PR.

In another PR: remove test files with names `prospective_parachains`
(it'd pollute this PR if we do now)

TODO:
- [x] add deprecation warnings
- [x] prdoc

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
mordamax and others added 10 commits February 3, 2025 12:25
…te contracts (#7414)

This PR changes the behavior of `instantiate` when the resulting
contract address already exists (because the caller tried to instantiate
the same contract with the same salt multiple times): Instead of
trapping the caller, return an error code.

Solidity allows `catch`ing this, which doesn't work if we are trapping
the caller. For example, the change makes the following snippet work:

```Solidity
try new Foo{salt: hex"00"}() returns (Foo) {
    // Instantiation was successful (contract address was free and constructor did not revert)
} catch {
    // This branch is expected to be taken if the instantiation failed because of a duplicate salt
}
```

`revive` PR: paritytech/revive#188

---------

Signed-off-by: Cyrill Leutwiler <[email protected]>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
# Description

Aligned `polkadot-omni-node` & `polkadot-parachain` versions. There is
one `NODE_VERSION` constant, in `polkadot-omni-node-lib`, used by both
binaries.

Closes #7276 .

## Integration

Node operators will know what versions of `polkadot-omni-node` &
`polkadot-parachain` they use since their versions will be kept in sync
with the stable release `polkadot` SemVer version.

## Review Notes

TODO:
- [x] update NODE_VERSION of `polkadot-omni-node-lib` when running
branch off workflow

---------

Signed-off-by: Iulian Barbu <[email protected]>
…7439)

# Description

Another small fix for sync-templates. We're copying the `polkadot-sdk`'s
`parachain-template` files (including the `parachain-template-docs`'s
Cargo.toml) to the directory where we're creating the workspace with all
`parachain-template` members crates, and workspace's toml. The error is
that in this directory for the workspace we first create the workspace's
Cargo.toml, and then copy the files of the `polkadot-sdk`'s
`parachain-template`, including the `Cargo.toml` of the
`parachain-template-docs` crate, which overwrites the workspace
Cargo.toml. In the end we delete that `Cargo.toml` (assuming it is the one
of the `parachain-template-docs` crate), forgetting that previously
there should've been a workspace Cargo.toml, which should still be kept
and committed to the template's repository.

The error happens here:
https://github.com/paritytech/polkadot-sdk/actions/runs/13111697690/job/36577834127

## Integration

N/A

## Review Notes

Once again, merging this into master requires re-running sync templates
based on latest version on master. Hopefully this will be the last issue
related to the workflow itself.

---------

Signed-off-by: Iulian Barbu <[email protected]>
Related to #7400 and
#7417

We need this in order to be able to update `parity-scale-codec` to the
latest version after it's released. That's because `parity-scale-codec`
added support for checking for duplicate indexes at compile time.
#### Description
During the 2s block investigation it turned out that the
[ForkAwareTxPool::register_listeners](https://github.com/paritytech/polkadot-sdk/blob/master/substrate/client/transaction-pool/src/fork_aware_txpool/fork_aware_txpool.rs#L1036)
call takes a significant amount of time.
```
register_listeners: at HashAndNumber { number: 12, hash: 0xe9a1...0b1d2 } took 200.041933ms
register_listeners: at HashAndNumber { number: 13, hash: 0x5eb8...a87c6 } took 264.487414ms
register_listeners: at HashAndNumber { number: 14, hash: 0x30cb...2e6ec } took 340.525566ms
register_listeners: at HashAndNumber { number: 15, hash: 0x0450...4f05c } took 405.686659ms
register_listeners: at HashAndNumber { number: 16, hash: 0xfa6f...16c20 } took 477.977836ms
register_listeners: at HashAndNumber { number: 17, hash: 0x5474...5d0c1 } took 483.046029ms
register_listeners: at HashAndNumber { number: 18, hash: 0x3ca5...37b78 } took 482.715468ms
register_listeners: at HashAndNumber { number: 19, hash: 0xbfcc...df254 } took 484.206999ms
register_listeners: at HashAndNumber { number: 20, hash: 0xd748...7f027 } took 414.635236ms
register_listeners: at HashAndNumber { number: 21, hash: 0x2baa...f66b5 } took 418.015897ms
register_listeners: at HashAndNumber { number: 22, hash: 0x5f1d...282b5 } took 423.342397ms
register_listeners: at HashAndNumber { number: 23, hash: 0x7a18...f2d03 } took 472.742939ms
register_listeners: at HashAndNumber { number: 24, hash: 0xc381...3fd07 } took 489.625557ms
```

This PR implements the idea outlined in #7071. Instead of having a
separate listener for every transaction in each view, we now use a
single stream of aggregated events per view, with each stream providing
events for all transactions in that view. Each event is represented as a
tuple: `(transaction-hash, transaction-status)`. This significantly reduces
the time required for `maintain`.
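
A toy sketch of the aggregated-events shape, using the `futures` crate; the types are stand-ins for the real transaction pool types:

```rust
// Toy model only: one aggregated stream per view yielding
// (transaction-hash, transaction-status) tuples, instead of one listener per
// transaction.
use futures::{executor::block_on, stream, StreamExt};

type TxHash = u64;

#[derive(Debug)]
enum TxStatus {
    Ready,
    InBlock,
}

fn main() {
    // A single view-wide stream of aggregated events.
    let view_events = stream::iter(vec![
        (1 as TxHash, TxStatus::Ready),
        (2, TxStatus::Ready),
        (1, TxStatus::InBlock),
    ]);

    // A MultiViewListener-style task would poll streams like this one and
    // route each (hash, status) pair to the external per-transaction listener.
    block_on(view_events.for_each(|(hash, status)| async move {
        println!("tx {hash:?}: {status:?}");
    }));
}
```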

#### Review Notes
- a single aggregated stream, provided by each individual view, delivers
events in the form of `(transaction-hash, transaction-status)`,
- `MultiViewListener` now has a task. This task is responsible for:
  - polling the stream map (which consists of the individual views' aggregated
streams) and the `controller_receiver`, which provides side-channel
[commands](https://github.com/paritytech/polkadot-sdk/blob/2b18e080cfcd6b56ee638c729f891154e566e52e/substrate/client/transaction-pool/src/fork_aware_txpool/multi_view_listener.rs#L68-L95)
(like `AddView` or `FinalizeTransaction`) sent from the _transaction
pool_,
  - dispatching individual transaction statuses and control commands into
the external (created via API, e.g. over RPC) listeners of individual
transactions,
- the external listener is responsible for the status handling _logic_ (e.g.
deduplication of events, or ignoring some of them) and for reporting
statuses to the external world (_this was not changed_),
- the level of debug messages was adjusted (per-tx messages shall be
_trace_).

Closes #7071

---------

Co-authored-by: Sebastian Kunert <[email protected]>
Remove the specific fee amount checks in integration tests, since the
amounts change every time weights are regenerated.
Found via
open-web3-stack/polkadot-ecosystem-tests#165.

Closes #7370 .

# Description

Some extrinsics from `pallet_nomination_pools` were not emitting events:
* `set_configs`
* `set_claim_permission`
* `set_metadata`
* `chill`
* `nominate`

## Integration

N/A

## Review Notes

N/A

---------

Co-authored-by: Ankan <[email protected]>
…_base_deposit` (#7230)

This PR is centered around a main fix regarding the base deposit and a
bunch of drive-by or related fixes that make sense to resolve in one
go. It could be broken down further, but I am constantly rebasing this PR
and would appreciate getting those fixes in as one.

**This adds a multi block migration to Westend AssetHub that wipes the
pallet state clean. This is necessary because of the changes to the
`ContractInfo` storage item. It will not delete the child storage
though. This will leave a tiny bit of garbage behind but won't cause any
problems. They will just be orphaned.**

## Record the deposit for immutable data into the `storage_base_deposit`

The `storage_base_deposit` is the total deposit a contract has to pay for
existing. It included the deposit for its own metadata and a deposit
proportional (< 1.0x) to the size of its code. However, the immutable
data size was not recorded there. This would lead to the situation where
on terminate this portion wouldn't be refunded and stayed locked in the
contract. It would also make the calculation of the deposit changes on
`set_code_hash` more complicated when it updates the immutable data (to
be done in #6985), because it wouldn't know how much was paid
before, since the storage prices could have changed in the meantime.

In order for this solution to work I needed to delay the deposit
calculation for a new contract until after the contract is done executing
its constructor, as only then do we know the immutable data size. Before, we
just charged this eagerly in `charge_instantiate` before executing the
constructor. Now, we merely send the ED as free balance before the
constructor in order to create the account. After the constructor is
done we calculate the contract base deposit and charge it. This will
make `set_code_hash` much easier to implement.

As a side effect it is now legal to call `set_immutable_data` multiple
times per constructor (even though I see no reason to do so). It simply
overrides the immutable data with the new value. The deposit accounting
will be done after the constructor returns (as mentioned above) instead
of when setting the immutable data.
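
A back-of-the-envelope sketch of the deposit composition after this change; the names and the flat per-byte pricing are illustrative, not the pallet's actual fields:

```rust
// Illustrative arithmetic only: after this change the base deposit charged at
// the end of the constructor also covers the immutable data bytes.
fn storage_base_deposit(
    contract_metadata_deposit: u128,
    code_deposit_share: u128, // proportional (< 1.0x) share of the code size deposit
    immutable_data_len: u128, // known only once the constructor has returned
    deposit_per_byte: u128,
) -> u128 {
    contract_metadata_deposit
        .saturating_add(code_deposit_share)
        .saturating_add(immutable_data_len.saturating_mul(deposit_per_byte))
}

fn main() {
    // With the immutable-data term included, terminating the contract can now
    // refund this part as well.
    assert_eq!(storage_base_deposit(100, 50, 32, 2), 100 + 50 + 32 * 2);
}
```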

## Don't pre-charge for reading immutable data

I noticed that we were pre-charging weight for the max allowable
immutable data when reading those values and then refunding after read.
This is not necessary as we know its length without reading the storage
as we store it out of band in contract metadata. This makes reading it
free. Less pre-charging, fewer problems.

## Remove delegate locking

Fixes #7092

This is also in the spirit of making #6985 easier to implement. The
locking complicates `set_code_hash` as we might need to block setting
the code hash when locks exist. Check #7092 for further rationale.

## Enforce "no terminate in constructor" eagerly

We used to enforce this rule after the contract execution returned. Now
we error out early in the host call. This makes it easier to be sure to
argue that a contract info still exists (wasn't terminated) when a
constructor successfully returns. All around, this is just much simpler
than dealing with this check later.

## Moved refcount functions to `CodeInfo`

They never really made sense to exist on `Stack`. But now with the
locking gone this makes even less sense. The refcount is stored inside
`CodeInfo`, so let's just move them there.

## Set `CodeHashLockupDepositPercent` for test runtime

The test runtime was setting `CodeHashLockupDepositPercent` to zero.
This was trivializing many code paths and excluding them from testing. I
set it to `30%`, which is our default value, and fixed up all the tests
that broke. This should give us confidence that the lockup deposit
collection works properly.

## Reworked the `MockExecutable` to have both a `deploy` and a `call`
entry point

This type, used for testing, could previously have only one of the two entry
points but not both. In order to fix `immutable_data_set_overrides` I needed
to add a new function `add_both` to `MockExecutable` that allows having both
entry points. Make sure to make use of it in the future :)

---------

Co-authored-by: command-bot <>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: PG Herveou <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: Oliver Tale-Yazdi <[email protected]>