forked from paritytech/polkadot-sdk
[pull] master from paritytech:master #18
Closed
- Update baseline for pallet_revive
- Update cmd pipeline name
- Fix compilation after renaming some of the benchmarks in pallet_revive. [Runtime Dev] Changed the "instr" benchmark so that it no longer returns too little weight. It is still bogus, but at least benchmarking should now work. (by @athei )

--------- Co-authored-by: GitHub Action <[email protected]> Co-authored-by: Alexander Theißen <[email protected]> Co-authored-by: Alexander Samusev <[email protected]> Co-authored-by: command-bot <>
This PR removes the requirement to set the `LaneId` in the relayer CLI configuration where it was not really necessary. --------- Co-authored-by: command-bot <>
Removing the shell node variant for the polkadot-parachain as discussed here: #5586 (comment) Resolves #5898
This PR adds the `stableYYMM-rcX` or `stableYYMM-X-rcX` tags to the docker images, so that they could be published with the new tag naming scheme. Closes: paritytech/release-engineering#224
Relates to: #5916 Relates to: polkadot-js/api#5976 --------- Co-authored-by: Javier Viola <[email protected]>
…pplicable (#5789)

# Description
The EthereumBlobExporter consumes the `dest` parameter when the destination is not `Here`. Subsequent exporters will receive a `None` value for the destination instead of the original destination value, which is incorrect. Closes #5788

## Integration
Minor fix related to the exporter behaviour.

## Review Notes
Verified that the tests `exporter_validate_with_invalid_dest_does_not_alter_destination` and `exporter_validate_with_invalid_universal_source_does_not_alter_universal_source` fail without the fix in the exporter.

--------- Co-authored-by: Adrian Catangiu <[email protected]>
Jaeger tracing went mostly unused and it created bigger problems like wasting CPU or memory leaks, so remove it entirely. Fixes: #4995 --------- Signed-off-by: Alexandru Gheorghe <[email protected]>
# Description
Closes [#5790](#5790). Useful for starting nodes based on minimal/solochain when doing development, or for testing omni node with less happy code paths. It is reusing the presets defined for the nodes' chain specs.

## Integration
Specifically useful for development/testing when generating chain-specs for `minimal` or `solochain` runtimes from the `templates` directories.

## Review Notes
Added `genesis_config_presets` modules for both minimal/solochain. I reused the presets defined in each node's `chain_spec` module correspondingly.

### PRDOC
Not sure who uses templates, maybe node devs and runtime devs at the start of their learning journey, but happy to get some guidance on how to write the prdoc if needed.

### Thinking out loud
I saw concerns around sharing functionality for such genesis config presets between the template chains. I think there might be a case for doing that, along the lines of this comment: #4739 (comment). I would add that `parachains-common::genesis_config_helpers` contains a few of the methods mentioned there, but I am unsure if using it as a dependency for templates is correct. The comment suggests there should be a `commons` crate concerning just `templates`, which I agree with to some degree, if we assume `cumulus` needs might be driven in directions that are not relevant to `templates` and vice versa. However, I am not so certain about this, so I would welcome some thoughts, since I am already seeing `parachains-common` being used in a few runtime implementations (https://crates.io/crates/parachains-common/reverse_dependencies?page=3), so it might already be a good candidate for the `common` logic.

--------- Signed-off-by: Iulian Barbu <[email protected]>
…#5917) The AllowTopLevelPaidExecutionFrom barrier allows ClearOrigin instructions before the expected BuyExecution instruction; it also allows messages without any origin-altering instructions. This commit enhances the barrier to also support messages that use AliasOrigin or DescendOrigin. This is sometimes desired in asset transfer XCM programs that need to run the inbound assets instructions using the origin chain's root origin, but then want to drop privileges for the rest of the program. Currently these programs drop privileges by clearing the origin completely, but that also unnecessarily limits the range of actions available to the rest of the program. Using DescendOrigin or AliasOrigin allows the sending chain to instruct the receiving chain what the deprivileged real origin is. See polkadot-fellows/RFCs#109 and polkadot-fellows/RFCs#122 for more details on how DescendOrigin and AliasOrigin could be used instead of ClearOrigin. --------- Signed-off-by: Adrian Catangiu <[email protected]>
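For illustration, here is a toy model of the relaxed barrier check described above. This is plain Rust, not the real `xcm-builder` `AllowTopLevelPaidExecutionFrom` implementation or the actual `ShouldExecute` signature; the instruction set is trimmed to the cases relevant here.

```rust
// Instructions that only deprivilege the origin (ClearOrigin, and now also
// DescendOrigin and AliasOrigin) may appear before the fee-paying
// BuyExecution instruction; anything else before payment is rejected.
enum Instruction {
    ClearOrigin,
    DescendOrigin,
    AliasOrigin,
    BuyExecution,
    Transact,
}

fn allows_paid_execution(program: &[Instruction]) -> bool {
    for instruction in program {
        match instruction {
            // Deprivileging instructions are tolerated before payment.
            Instruction::ClearOrigin
            | Instruction::DescendOrigin
            | Instruction::AliasOrigin => continue,
            // Execution is paid for before anything privileged runs: accept.
            Instruction::BuyExecution => return true,
            // Any other instruction before payment: reject.
            _ => return false,
        }
    }
    false
}

fn main() {
    use Instruction::*;
    assert!(allows_paid_execution(&[ClearOrigin, BuyExecution, Transact]));
    assert!(allows_paid_execution(&[DescendOrigin, BuyExecution, Transact]));
    assert!(allows_paid_execution(&[AliasOrigin, BuyExecution, Transact]));
    assert!(!allows_paid_execution(&[Transact, BuyExecution]));
}
```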
…eData/OutboundLaneData (#5921) For permissionless lanes, we add `lane_state` to the `InboundLaneData` and `OutboundLaneData` structs. However, for a period of time (until both BHK and BHP are upgraded to the same version), we need the relayer to function with runtimes where one has been migrated with `lane_state` and the other has not. This PR addresses the incompatibility by introducing wrapper structs for decoding without `lane_state`.
This PR introduces the concept of immutable storage data, used for [Solidity immutable variables](https://docs.soliditylang.org/en/latest/contracts.html#immutable). This is a minimal implementation. Immutable data is attached to a contract; to keep `ContractInfo` fixed in size, we only store the length there, and store the immutable data in a dedicated storage map instead, which comes at the cost of requiring an additional storage read (costly) for contracts using this feature. We discussed more optimal solutions not requiring any additional storage accesses internally, but they turned out to be non-trivial to implement. Another optimization benefiting multiple calls to the same contract in a single call stack would be to cache the immutable data in `Stack`. However, this potentially creates a DoS vulnerability (the attack vector is to call into as many contracts in a single stack as possible, where they all have maximum immutable data to fill the cache as efficiently as possible). So this either has to be guaranteed to be a non-issue by limits, or, more likely, to have some logic to bound the cache. Eventually, we should think about introducing the concept of warm and cold storage reads (akin to EVM). Since immutable variables are commonly used in contracts, this change is blocking our initial launch and we should only optimize it properly in follow-ups. This PR also disables the `set_code_hash` API (which isn't usable for Solidity contracts without pre-compiles anyways). With immutable storage attached to contracts, we now want to run the constructor of the new code hash to collect the immutable data during `set_code_hash`. This will be implemented in a follow up PR. --------- Signed-off-by: Cyrill Leutwiler <[email protected]> Signed-off-by: xermicus <[email protected]> Co-authored-by: command-bot <> Co-authored-by: Alexander Theißen <[email protected]> Co-authored-by: PG Herveou <[email protected]>
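A rough sketch of the storage layout described above, using a `BTreeMap` as a stand-in for the on-chain storage map; the real `pallet_revive` types and keys differ, this only shows the split between the inline length and the separately stored bytes.

```rust
use std::collections::BTreeMap;

type AccountId = u64;

// `ContractInfo` stays fixed-size because it holds only the length of the
// immutable data; reading the data itself costs an extra storage read.
struct ContractInfo {
    immutable_data_len: u32,
}

// Stand-in for the dedicated storage map keyed by the contract's account.
#[derive(Default)]
struct ImmutableDataOf {
    map: BTreeMap<AccountId, Vec<u8>>,
}

impl ImmutableDataOf {
    fn set(&mut self, contract: AccountId, data: Vec<u8>) -> u32 {
        let len = data.len() as u32;
        self.map.insert(contract, data);
        len
    }

    fn get(&self, contract: &AccountId) -> Option<&Vec<u8>> {
        self.map.get(contract)
    }
}

fn main() {
    let mut immutables = ImmutableDataOf::default();
    let len = immutables.set(1, vec![0xde, 0xad, 0xbe, 0xef]);
    let info = ContractInfo { immutable_data_len: len };
    assert_eq!(info.immutable_data_len as usize, immutables.get(&1).unwrap().len());
}
```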
Updated runners for CMD and Docs
Bump zombienet version. Including fixes for `ci` failures like https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/7511363 https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/7511379
This PR adds a new beefy metric to monitor the number of live beefy peers. Part of investigation of litep2p request failures: #4985 cc @paritytech/networking --------- Signed-off-by: Alexandru Vasile <[email protected]>
# Description
Adds Dwellir bootnodes to the `people-polkadot.json` spec file.
…ies group (#5863) Bumps the ci_dependencies group with 1 update: [docker/build-push-action](https://github.com/docker/build-push-action), from 6.7.0 to 6.8.0.

Release notes for v6.8.0 (sourced from [docker/build-push-action's releases](https://github.com/docker/build-push-action/releases)):
- Bump `@docker/actions-toolkit` from 0.37.1 to 0.38.0 in [docker/build-push-action#1230](https://redirect.github.com/docker/build-push-action/pull/1230)
- Full changelog: https://github.com/docker/build-push-action/compare/v6.7.0...v6.8.0

Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Bastian Köcher <[email protected]>
Closes #5045 and #5046

~~On top of https://github.com/paritytech/polkadot-sdk/pull/5362~~

TODO:
- [x] storage migration for allowed relay parents tracker
- [x] check session index
- [x] PRdoc
- [x] tests
- [x] ensure UMP queue cannot be abused with this change
- [x] Zombienet runtime upgrade test

--------- Signed-off-by: Andrei Sandu <[email protected]>
# Description

## What?
Make it possible for other pallets to implement their own logic when a slash on a balance occurs.

## Why?
In the [introduction of holds](paritytech/substrate#12951) @gavofyork said:
> Since Holds are designed to be infallibly slashed, this means that any logic using a Freeze must handle the possibility of the frozen amount being reduced, potentially to zero. A permissionless function should be provided in order to allow bookkeeping to be updated in this instance.

At Polimec we needed to find a way to reduce the vesting schedules of our users after a slash was made, and after talking to @kianenigma at the Web3Summit, we realized there was no easy way to implement this with the current traits, so we came up with this solution.

## How?
- First we abstract the `done_slash` function of `holds::Balanced` into its own trait that any pallet can implement.
- Then we add a config type in pallet-balances that accepts a callback tuple of all the pallets that implement this trait.
- Finally we implement `done_slash` for pallet-balances such that it calls the config type.

## Integration
The default implementation of `done_slash` is still an empty function, and the new config type of pallet-balances can be set to an empty tuple, so nothing changes by default.

## Review Notes
- I suggest focusing on the first commit, which contains the main logic changes.
- I also have a working implementation of `done_slash` for pallet_vesting; should I add it to this PR?
- If I run `cargo +nightly fmt --all` then I get changes to a lot of unrelated crates, so I'm not sure if I should run it to avoid the fmt failure of the CI.
- Should I hunt down references to fungible/fungibles documentation and update it accordingly?

**Polkadot address:** `15fj1UhQp8Xes7y7LSmDYTy349mXvUwrbNmLaP5tQKBxsQY1`

# Checklist
* [x] My PR includes a detailed description as outlined in the "Description" and its two subsections above.
* [x] My PR follows the [labeling requirements](https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process) of this project (at minimum one label for `T` required)
  * External contributors: ask maintainers to put the right label on your PR.
* [ ] I have made corresponding changes to the documentation (if applicable)

--------- Co-authored-by: Kian Paimani <[email protected]> Co-authored-by: command-bot <> Co-authored-by: Francisco Aguirre <[email protected]>
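A minimal sketch of the callback pattern described above, with illustrative names rather than the exact pallet-balances or fungible trait definitions: a trait carries the post-slash hook, the unit type is the do-nothing default, and a tuple of implementers fans the call out to every pallet listed in the runtime config.

```rust
trait OnSlash<AccountId, Balance: Copy> {
    fn done_slash(who: &AccountId, amount: Balance);
}

// Default: nothing happens, mirroring the empty default implementation.
impl<AccountId, Balance: Copy> OnSlash<AccountId, Balance> for () {
    fn done_slash(_who: &AccountId, _amount: Balance) {}
}

// Fan-out over a pair; FRAME generates impls like this for tuples via macros.
impl<AccountId, Balance: Copy, A, B> OnSlash<AccountId, Balance> for (A, B)
where
    A: OnSlash<AccountId, Balance>,
    B: OnSlash<AccountId, Balance>,
{
    fn done_slash(who: &AccountId, amount: Balance) {
        A::done_slash(who, amount);
        B::done_slash(who, amount);
    }
}

// Example subscriber: a vesting-like pallet shrinking its schedules on slash.
struct ReduceVesting;
impl OnSlash<u64, u128> for ReduceVesting {
    fn done_slash(who: &u64, amount: u128) {
        println!("reduce vesting of {who} by {amount}");
    }
}

fn main() {
    // The runtime would set the balances config type to a tuple like this.
    type OnSlashHandlers = (ReduceVesting, ());
    let who: u64 = 42;
    let amount: u128 = 1_000;
    <OnSlashHandlers as OnSlash<u64, u128>>::done_slash(&who, amount);
}
```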
Runtime side of #5048 Send the core selector ump signal in cumulus. Guarded by a compile time feature until nodes are upgraded to a version that includes #5423 for gracefully handling ump signals. --------- Co-authored-by: GitHub Action <[email protected]>
This PR updates the substrate-relay version for the bridges' Zombienet tests.
There are cases during warp sync or re-orgs, where we receive a notification with a block parent that was not reported in the past. This PR extends the tracking state to catch those cases and report a `Stop` event to the user. This PR adds a new state to the RPC-v2 chainHead to track which blocks have been reported. In the past we relied on the pinning mechanism to provide us details if a block is pinned or not. However, the pinning state keeps the minimal information around for pinning. Therefore, unpinning a block will cause the state to disappear. Closes: #5761 --------- Signed-off-by: Alexandru Vasile <[email protected]> Co-authored-by: Sebastian Kunert <[email protected]>
Seems to also need the actions permission, otherwise it errors when trying to backport a change to the yml files, like [here](https://github.com/paritytech/polkadot-sdk/actions/runs/11143649431/job/30969199054).
Update try-runtime-cli to 0.8.0 for MBM testing. --------- Signed-off-by: Oliver Tale-Yazdi <[email protected]>
closes #5942 Couldn't find any emissions of `Event::Issued` without an amount check other than the ones in this PR. Currently, we have: https://github.com/paritytech/polkadot-sdk/blob/4bda956d2c635c3926578741a19fbcc3de69cbb8/substrate/frame/balances/src/impl_currency.rs#L212-L220 and https://github.com/paritytech/polkadot-sdk/blob/4bda956d2c635c3926578741a19fbcc3de69cbb8/substrate/frame/balances/src/impl_currency.rs#L293-L306
This PR introduces a `VestedTransfer` Trait, which handles making a transfer while also applying a vesting schedule to that balance. This can be used in pallets like the Treasury pallet, where now we can easily introduce a `vested_spend` extrinsic as an alternative to giving all funds up front. We implement `()` for the `VestedTransfer` trait, which just returns an error, and allows anyone to opt out from needing to use or implement this trait. This PR also updates the logic of `do_vested_transfer` to remove the "pre-check" which was needed before we had a default transactional layer in FRAME. Finally, I also fixed up some bad formatting in the test.rs file. --------- Co-authored-by: Guillaume Thiolliere <[email protected]> Co-authored-by: Bastian Köcher <[email protected]>
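A hypothetical shape of the trait described above, not the exact frame_support definition (the real trait works with the pallet's balance and block number types and a richer error type): a vested transfer either moves the funds and installs a schedule, or fails; the unit type opts out by always returning an error.

```rust
type AccountId = u64;
type Balance = u128;
type BlockNumber = u32;

trait VestedTransfer {
    fn vested_transfer(
        from: &AccountId,
        to: &AccountId,
        amount: Balance,
        per_block: Balance,
        starting_block: BlockNumber,
    ) -> Result<(), &'static str>;
}

// Opt-out implementation: a runtime that does not support vesting can plug
// in `()` and any attempted vested spend simply errors out.
impl VestedTransfer for () {
    fn vested_transfer(
        _from: &AccountId,
        _to: &AccountId,
        _amount: Balance,
        _per_block: Balance,
        _starting_block: BlockNumber,
    ) -> Result<(), &'static str> {
        Err("vested transfers are not supported by this runtime")
    }
}

fn main() {
    // e.g. a treasury `vested_spend` path would bubble this error up.
    assert!(<() as VestedTransfer>::vested_transfer(&1, &2, 100, 10, 0).is_err());
}
```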
This PR adds **static** validation that prevents upload of code that:
1) Contains basic blocks larger than the specified limit (currently `200`)
2) Contains invalid instructions
3) Uses the `sbrk` instruction

Doing that statically at upload time (instead of at runtime) allows us to change the basic block limit or add instructions later without worrying about breaking old code. This is well worth the linear scan of the whole blob on deployment, in my opinion. Please note that those checks are not applied when existing code is just run (hot path).

Also some drive-by fixes:
- Remove superfluous `publish = true`
- Abort fixture build on warning and fix existing warnings
- Re-enable optimizations in fixture builds (should be fixed now in PolkaVM)
- Disable stripping for fixture builds (maybe we can get some line information on trap via `RUST_LOG`)

--------- Co-authored-by: command-bot <> Co-authored-by: PG Herveou <[email protected]>
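A toy sketch of such an upload-time scan; the real check operates on the PolkaVM blob and its actual instruction set, while `Instr` here is an invented stand-in. The point is that one linear pass at deployment rejects oversized basic blocks and forbidden instructions, so nothing needs re-validating on the hot execution path.

```rust
const BASIC_BLOCK_LIMIT: usize = 200;

enum Instr {
    Sbrk,         // forbidden outright
    BranchTarget, // starts a new basic block
    Other,
}

fn validate_upload(code: &[Instr]) -> Result<(), &'static str> {
    let mut block_len = 0usize;
    for instr in code {
        match instr {
            Instr::Sbrk => return Err("sbrk instruction is not allowed"),
            Instr::BranchTarget => block_len = 0,
            Instr::Other => {}
        }
        block_len += 1;
        if block_len > BASIC_BLOCK_LIMIT {
            return Err("basic block exceeds the configured limit");
        }
    }
    Ok(())
}

fn main() {
    use Instr::*;
    assert!(validate_upload(&[BranchTarget, Other, Other]).is_ok());
    assert!(validate_upload(&[Other, Sbrk]).is_err());
    let big_block: Vec<Instr> = (0..=BASIC_BLOCK_LIMIT).map(|_| Other).collect();
    assert!(validate_upload(&big_block).is_err());
}
```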
#5399) This is a no-op refactor of the staking pallet to move all `T::Currency` api calls under one module. A followup PR (#5501) will implement the Currency <> Fungible migration for the pallet. Introduces the new `asset` module that centralizes all interaction with `T::Currency`. This is an attempt to confine the staking logic changes to a minimal part of the codebase.

## Things of note
- `T::Currency::free_balance` in the current implementation includes both staked (locked) and liquid tokens (kinda sounds wrong to call it free then). This PR renames it to `stakeable_balance` (any better name suggestions?). With #5501, this will become `free balance that can be held/staked` + `already held/staked balance`.
Migrates pallet-nft-fractionalization to benchmarking v2 syntax. Part of: * #6202 --------- Co-authored-by: Giuseppe Re <[email protected]> Co-authored-by: GitHub Action <[email protected]> Co-authored-by: Bastian Köcher <[email protected]>
- Fix bare_eth_transact so that it estimates the transaction fee more precisely
- Add some context to the build.rs to make it easier to troubleshoot errors
- Add TransactionBuilder for the RPC tests
- Improve error messages, proxy the rpc error from the node and handle the reverted error message
- Add logs in ReceiptInfo

--------- Co-authored-by: GitHub Action <[email protected]>
Adds NoOp implementation for the `Polling` trait and updates benchmarks in `pallet-ranked-collective`. --------- Co-authored-by: Oliver Tale-Yazdi <[email protected]>
Part of: - #6202. --------- Co-authored-by: GitHub Action <[email protected]> Co-authored-by: Giuseppe Re <[email protected]>
… adaptable (#6425)

# Description
Resolves #6193

This PR introduces `ConstUint` as a replacement for existing constant getter types like `ConstU8`, `ConstU16`, etc., providing a more flexible and unified approach.

## Integration
This update is backward compatible, so developers can choose to adopt `ConstUint` in new implementations or continue using the existing types as needed.

## Review Notes
`ConstUint` is a convenient alternative to `ConstU8`, `ConstU16`, and similar types, particularly useful for configuring `DefaultConfig` in pallets. It enables configuring the underlying integer for a specific type without the need to update all dependent types, offering enhanced flexibility in type management.

# Checklist
* [x] My PR includes a detailed description as outlined in the "Description" and its two subsections above.
* [ ] My PR follows the [labeling requirements](https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process) of this project (at minimum one label for `T` required)
  * External contributors: ask maintainers to put the right label on your PR.
* [ ] I have made corresponding changes to the documentation (if applicable)
* [ ] I have added tests that prove my fix is effective or that my feature works (if applicable)
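A minimal sketch of the pattern, with illustrative names and bounds rather than the exact frame_support/sp-core definitions: one const-generic type can serve as a getter for any integer type the constant fits into, instead of a separate `ConstU8`/`ConstU16`/... per width.

```rust
struct ConstUint<const N: u128>;

trait Get<T> {
    fn get() -> T;
}

// The same constant can be materialized as any integer type it fits into.
impl<T: TryFrom<u128>, const N: u128> Get<T> for ConstUint<N> {
    fn get() -> T {
        T::try_from(N).ok().expect("constant out of range for target type")
    }
}

fn main() {
    // One type works wherever a u32 or a u64 getter is expected, so changing
    // the underlying integer of a config type does not ripple through.
    let as_u32: u32 = <ConstUint<16> as Get<u32>>::get();
    let as_u64: u64 = <ConstUint<16> as Get<u64>>::get();
    assert_eq!(as_u32 as u64, as_u64);
}
```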
Very tiny change that helps with debugging of transaction propagation by referring to the same type alias not only on the receiving side but also on the sending side, for symmetry.
This PR addresses an issue mentioned [here](#6424 (comment)). The problem was that when the prdoc file has two audiences but only one description, like in [prdoc_5660](https://github.com/paritytech/polkadot-sdk/blob/master/prdoc/1.16.0/pr_5660.prdoc), it was ignored by the template.
Added the `ExecuteWithOrigin` instruction according to the old XCM RFC 38: polkadot-fellows/xcm-format#38. This instruction allows you to descend into or clear the origin for an inner set of instructions and then return to the previous origin afterwards.

## TODO
- [x] Implementation
- [x] Unit tests
- [x] Integration tests
- [x] Benchmarks
- [x] PRDoc

## Future work
Modify the `WithComputedOrigin` barrier to allow, for example, fees to be paid with a descendant origin using this instruction.

--------- Signed-off-by: Adrian Catangiu <[email protected]> Co-authored-by: Adrian Catangiu <[email protected]> Co-authored-by: Andrii <[email protected]> Co-authored-by: Branislav Kontur <[email protected]> Co-authored-by: Joseph Zhao <[email protected]> Co-authored-by: Nazar Mokrynskyi <[email protected]> Co-authored-by: Bastian Köcher <[email protected]> Co-authored-by: Shawn Tabrizi <[email protected]> Co-authored-by: command-bot <>
This PR fixes an issue that I discovered when connecting to the RPC on localhost using cURL: cURL tries ipv6 before ipv4 when resolving `localhost`, which messed up the http host filter. It would connect to the address `[::1]:9944` with `host_header: localhost:9944`, but the ipv6 interface only whitelisted `[::1]:9944`, which this fixes. So let's whitelist all localhost interfaces to avoid such weird edge-cases.

### Behavior before this PR
```bash
$ polkadot --chain westend-dev &
$ curl -v \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":"id","method":"system_name"}' \
  http://localhost:9944
* Host localhost:9944 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
*   Trying [::1]:9944...
* Connected to localhost (::1) port 9944
> POST / HTTP/1.1
> Host: localhost:9944
> User-Agent: curl/8.5.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 50
>
< HTTP/1.1 403 Forbidden
< content-type: text/plain
< content-length: 41
< date: Tue, 12 Nov 2024 13:03:49 GMT
<
Provided Host header is not whitelisted.
* Connection #0 to host localhost left intact
```

### Behavior after this PR
```bash
$ polkadot --chain westend-dev &
$ curl -v \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":"id","method":"system_name"}' \
  http://localhost:9944
* Host localhost:9944 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
*   Trying [::1]:9944...
* Connected to localhost (::1) port 9944
> POST / HTTP/1.1
> Host: localhost:9944
> User-Agent: curl/8.5.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 50
>
< HTTP/1.1 200 OK
< content-type: application/json; charset=utf-8
< vary: origin, access-control-request-method, access-control-request-headers
< content-length: 54
< date: Tue, 12 Nov 2024 13:02:57 GMT
<
* Connection #0 to host localhost left intact
{"jsonrpc":"2.0","id":"id","result":"Parity Polkadot"}%
```

--------- Co-authored-by: GitHub Action <[email protected]> Co-authored-by: command-bot <>
- Breaking down the integration-test into multiple tests
- Fix tx hash to use expected keccak-256
- Add option to ethers.js example to connect to westend and use a private key

--------- Co-authored-by: GitHub Action <[email protected]>
# Description
The debug message was added to identify a potential memory leak. However, recent observations show that pruning works as expected. Therefore, it is best to remove this line, as it generates quite annoying logs.

## Integration
Doesn't affect downstream projects.

--------- Co-authored-by: GitHub Action <[email protected]>
…tionExtension::validate` (#6323)

## Meta
This PR is part of 4 PRs:
* #6323
* #6324
* #6325
* #6326

## Description
One goal of transaction extensions is to get rid of unsigned transactions. But unsigned transaction validation has access to the `TransactionSource`. The source is used for unsigned transactions that the node trusts and doesn't require to pay upfront. Instead of using the transaction source we could say: the transaction is valid if it is signed by the block author. Conceptually that should work, but it doesn't look so easy. This PR adds `TransactionSource` to the validate function for transaction extensions.
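A conceptual sketch only; the real trait is `TransactionExtension` in sp-runtime and its `validate` takes many more parameters, and the policy shown is hypothetical. The point is that validation now also sees where the transaction came from, so an extension can treat node-local submissions differently from ones gossiped over the network.

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum TransactionSource {
    InBlock,
    Local,
    External,
}

trait Extension {
    fn validate(&self, source: TransactionSource) -> Result<(), &'static str>;
}

// Example extension that only waives upfront payment for transactions the
// node itself produced or that are already in a block.
struct TrustLocalOnly;

impl Extension for TrustLocalOnly {
    fn validate(&self, source: TransactionSource) -> Result<(), &'static str> {
        match source {
            TransactionSource::Local | TransactionSource::InBlock => Ok(()),
            TransactionSource::External => {
                Err("externally propagated transactions must pay upfront")
            }
        }
    }
}

fn main() {
    let ext = TrustLocalOnly;
    assert!(ext.validate(TransactionSource::Local).is_ok());
    assert!(ext.validate(TransactionSource::InBlock).is_ok());
    assert!(ext.validate(TransactionSource::External).is_err());
}
```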
# Description
Part of #3326. Removes all pallet::getter occurrences from pallet-staking and replaces them with explicit implementations. Adds tests to verify that retrieval of the affected entities works as expected via the storage getters.

## Review Notes
1. Traits added to the `derive` attribute are used in tests (either directly or indirectly).
2. The getters had to be placed in a separate impl block since the other one is annotated with `#[pallet::call]`, which requires a `#[pallet::call_index(0)]` annotation on each function in that block. So I thought it's better to separate them.

--------- Co-authored-by: Dónal Murray <[email protected]> Co-authored-by: Guillaume Thiolliere <[email protected]>
- [x] Removing `without_storage_info` and adding bounds on the stored types for pallet `society` - issue #6289
- [x] Migrating to benchmarking V2 - #6202

--------- Co-authored-by: Guillaume Thiolliere <[email protected]> Co-authored-by: Muharem <[email protected]>
When using multiple instances of the same pallet, each instance was executed with the components of all instances, while actually each instance should only be executed with the components generated for that particular instance. The problem here was that in the runtime only the pallet name was used to determine whether a certain pallet should be benchmarked. When using instances, the pallet name is the same for both of these instances. The solution is to also take the instance name into account. The fix requires changing the `Benchmark` runtime api to also take the `instance`. The node side is written in a backwards compatible way to also support runtimes which do not yet support the `instance` parameter. --------- Co-authored-by: GitHub Action <[email protected]> Co-authored-by: clangenb <[email protected]> Co-authored-by: Adrian Catangiu <[email protected]>
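A toy illustration of the selection logic described above, not the real frame-benchmarking types; the pallet and instance names are only examples.

```rust
struct BenchmarkTarget<'a> {
    pallet: &'a str,
    instance: &'a str,
}

fn should_run(selected: &BenchmarkTarget, candidate: &BenchmarkTarget) -> bool {
    // Before the fix only the pallet name was compared, so one instance of a
    // multi-instance pallet also ran with the components generated for the
    // other instances. Comparing the instance name as well fixes that.
    selected.pallet == candidate.pallet && selected.instance == candidate.instance
}

fn main() {
    let council = BenchmarkTarget { pallet: "pallet_collective", instance: "Council" };
    let technical = BenchmarkTarget { pallet: "pallet_collective", instance: "TechnicalCommittee" };
    assert!(should_run(&council, &council));
    assert!(!should_run(&council, &technical));
}
```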
## Issue
#4859

## Description
This PR removes `libp2p` types in authority-discovery and replaces them with network backend agnostic types from `sc-network-types`. The `sc-network` interface is updated accordingly.

--------- Co-authored-by: Bastian Köcher <[email protected]> Co-authored-by: command-bot <> Co-authored-by: Dmitry Markin <[email protected]> Co-authored-by: Alexandru Vasile <[email protected]>
## Issue
[[#3421] backing: improve session buffering for runtime information](#3421)

## Description
In the current implementation of the backing module, certain pieces of information, which remain unchanged throughout a session, are fetched multiple times via runtime API calls. The goal of this task was to introduce a local cache to store such session-stable information and perform the runtime API call only once per session.

This PR implements caching specifically for the validators list, node features, executor parameters, minimum backing votes threshold, and validator-to-group mapping, which were previously fetched from the runtime or computed each time `PerRelayParentState` was built. Now, this information is cached and reused within the session.

## TODO
* [X] Create a separate struct for per-session caches;
* [X] Cache validators list;
* [X] Cache node features;
* [X] Cache executor parameters;
* [X] Cache minimum backing votes threshold;
* [X] Cache validator-to-group mapping;
* [X] Update tests to reflect these changes;
* [X] Add prdoc.

## For the next PR
Cache validator groups and any other session-stable data (if present).
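A minimal sketch of the per-session cache idea; the types are stand-ins rather than the real polkadot-node-core-backing ones. Session-stable data is fetched once per session index and reused for every relay parent in that session.

```rust
use std::collections::HashMap;

struct SessionInfo {
    validators: Vec<String>,
    minimum_backing_votes: u32,
}

#[derive(Default)]
struct PerSessionCache {
    sessions: HashMap<u32, SessionInfo>,
}

impl PerSessionCache {
    // `fetch` stands in for the runtime API calls; it only runs the first
    // time a given session index is seen.
    fn get_or_fetch(
        &mut self,
        session_index: u32,
        fetch: impl FnOnce() -> SessionInfo,
    ) -> &SessionInfo {
        self.sessions.entry(session_index).or_insert_with(fetch)
    }
}

fn main() {
    let mut cache = PerSessionCache::default();
    let mut runtime_api_calls = 0;
    for _relay_parent in 0..10 {
        let info = cache.get_or_fetch(7, || {
            runtime_api_calls += 1;
            SessionInfo { validators: vec!["alice".into()], minimum_backing_votes: 2 }
        });
        assert_eq!(info.validators.len(), 1);
        assert_eq!(info.minimum_backing_votes, 2);
    }
    // Ten relay parents in the same session, but only one runtime API call.
    assert_eq!(runtime_api_calls, 1);
}
```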
# Description
Add support to run networking protocol benchmarks with the litep2p backend. Now we can compare the work of both libp2p and litep2p backends for notifications and request-response protocols.

Next step: extract worker initialization from the benchmark loop.

### Example run on local machine
![benchmark results](https://github.com/user-attachments/assets/6bb9f90a-76a4-417e-b9d3-db27aa8a356f)

## Integration
Does not affect downstream projects.

## Review Notes
https://github.com/paritytech/polkadot-sdk/blob/d4d9502538e8a940b809ecc77843af3cea101e19/substrate/client/network/src/litep2p/service.rs#L510-L520
This method should be implemented to run request benchmarks.

--------- Co-authored-by: GitHub Action <[email protected]>
Found by @ggwpez Fix staking benchmark, error was introduced when migrating to v2: #6025 --------- Co-authored-by: GitHub Action <[email protected]>
…UncheckedExtrinsic` (#6418)

Follow up to #3685. Partially fixes #6403.

The main PR introduced bare support for the new extension version byte as well as extension weights and benchmarking. This PR:
- Removes the redundant extension version byte from the signed v4 extrinsic, previously unused and defaulted to 0.
- Adds the extension version byte to the inherited implication passed to `General` transactions.
- Whitelists the `pallet_authorship::Author`, `frame_system::Digest` and `pallet_transaction_payment::NextFeeMultiplier` storage items as they are read multiple times by extensions for each transaction, but are hot in memory and currently overestimate the weight.
- Whitelists the benchmark caller for `CheckEra` and `CheckGenesis` as the reads are performed for every transaction and overestimate the weight.
- Updates the umbrella frame weight template to work with the system extension changes.
- Plans on re-running the benchmarks at least for the `frame_system` extensions.

--------- Signed-off-by: georgepisaltu <[email protected]> Co-authored-by: command-bot <> Co-authored-by: gui <[email protected]>
# Description
Created a workflow to search for README.docify.md in the repo, and run `cargo build --features generate-readme` in the dir of the file (assuming it is related to a crate). If the git diff shows some output for the README.md, then the file update wasn't pushed on the branch, and the workflow fails. Closes #6331

## Integration
Downstream projects that want to adopt this README checking workflow should:
1. Copy the `.github/workflows/readme-check.yml` file to their repository
2. Ensure any `README.docify.md` files in their project follow the expected format
3. Implement the `generate-readme` feature flag in their Cargo.toml if not already present

## Review Notes
This PR adds a GitHub Actions workflow that automatically verifies README.md files are up-to-date with their corresponding README.docify.md sources. Key implementation details:
- The workflow runs on both PRs and pushes to main
- It finds all `README.docify.md` files recursively in the repository
- For each file found:
  - Builds the project with `--features generate-readme` in that directory
  - Checks if the README.md has any uncommitted changes
  - Fails if any README.md is out of sync

--------- Co-authored-by: Alexander Samusev <[email protected]> Co-authored-by: Iulian Barbu <[email protected]>
Set the logs_bloom in the transaction receipt --------- Co-authored-by: GitHub Action <[email protected]> Co-authored-by: Cyrill Leutwiler <[email protected]>
# Description
When using `TypeWithDefault<u32, ..>` as the default nonce provider to overcome the [replay attack](https://wiki.polkadot.network/docs/transaction-attacks#replay-attack) issue, it fails to compile because `TypeWithDefault<u32, ..>: TryFrom<u64>` is not satisfied (which is required by the trait `BaseArithmetic`). This is because the blanket implementation `TryFrom<U> for T where U: Into<T>` only yields `TryFrom<u16>` and `TryFrom<u8>` for `u32`, since only `u16` and `u8` implement `Into<u32>`, but `u64` does not.

This PR fixes the issue by adding `TryFrom<u16/u32/u64/u128>` and `From<u8/u16/u32/u64/u128>` impls (using a macro) for `TypeWithDefault<u8/u16/u32/u64/u128, ..>` and removing the blanket impl (otherwise the compiler complains about conflicting impls), such that `TypeWithDefault<u8/u16/u32/u64/u128, ..>: AtLeast8/16/32Bit` is satisfied.

## Integration
This PR adds support for more types to be used with `TypeWithDefault`. Existing code that used `u64` with `TypeWithDefault` should not be affected; a unit test is added to ensure that.

## Review Notes
This PR simply makes `TypeWithDefault<u8/u16/u32/u64/u128, ..>: AtLeast8/16/32Bit` satisfied.

--------- Signed-off-by: linning <[email protected]>
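A simplified sketch of the macro-based fix described above. The real `TypeWithDefault` lives in sp-runtime and also carries a default-provider type parameter, which is omitted here; only the idea of stamping out concrete `TryFrom` conversions per integer width is shown.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
struct TypeWithDefault<T>(T);

// Generate one concrete TryFrom impl per source integer type, replacing the
// std blanket impl that only covered widening conversions.
macro_rules! impl_try_from {
    ($inner:ty, $($from:ty),*) => {
        $(
            impl TryFrom<$from> for TypeWithDefault<$inner> {
                type Error = ();
                fn try_from(value: $from) -> Result<Self, ()> {
                    <$inner>::try_from(value).map(TypeWithDefault).map_err(|_| ())
                }
            }
        )*
    };
}

impl_try_from!(u32, u8, u16, u32, u64, u128);

fn main() {
    // Conversions from wider types now exist and fail gracefully on overflow.
    assert_eq!(TypeWithDefault::<u32>::try_from(7u64), Ok(TypeWithDefault(7u32)));
    assert!(TypeWithDefault::<u32>::try_from(u64::MAX).is_err());
}
```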
This PR updates the pallet to use the EVM 18-decimal balance in contract calls and host functions instead of the native balance. It also updates the js example to add the piggy-bank solidity contract that exposes the problem. --------- Co-authored-by: GitHub Action <[email protected]>
This PR updates the litep2p backend to version 0.8.1 from 0.8.0.
- Check the [litep2p updates forum post](https://forum.polkadot.network/t/litep2p-network-backend-updates/9973/3) for performance dashboards.
- Check the [litep2p release notes](paritytech/litep2p#288)

The v0.8.1 release includes key fixes that enhance the stability and performance of the litep2p library. The focus is on long-running stability and improvements to polling mechanisms.

### Long Running Stability Improvements
Addressed a bug in the connection limits functionality that incorrectly tracked connections due for rejection. This issue caused an artificial increase in inbound peers, which were not being properly removed from the connection limit count, and caused long-running nodes to reject all incoming connections, impacting overall stability. This fix ensures more accurate tracking and management of peer connections [#286](paritytech/litep2p#286).

### Polling implementation fixes
This release provides multiple fixes to the polling mechanism, improving how connections and events are processed:
- Resolved an overflow issue in TransportContext's polling index for streams, preventing potential crashes ([#283](paritytech/litep2p#283)).
- Fixed a delay in the manager's poll_next function that prevented immediate polling of newly added futures ([#287](paritytech/litep2p#287)).
- Corrected an issue where the listener did not return Poll::Ready(None) when it was closed, ensuring proper signal handling ([#285](paritytech/litep2p#285)).

### Fixed
- manager: Fix connection limits tracking of rejected connections ([#286](paritytech/litep2p#286))
- transport: Fix waking up on filtered events from `poll_next` ([#287](paritytech/litep2p#287))
- transports: Fix missing Poll::Ready(None) event from listener ([#285](paritytech/litep2p#285))
- manager: Avoid overflow on stream implementation for `TransportContext` ([#283](paritytech/litep2p#283))
- manager: Log when polling returns Ready(None) ([#284](paritytech/litep2p#284))

### Testing Done
Started kusama nodes running side by side with a higher number of inbound and outbound connections (500). We previously tested with peers bounded at 50. This testing filtered out the fixes included in the latest release. With this high-connection testing setup, litep2p outperforms libp2p in almost every domain, from performance to the warnings / errors encountered while operating the nodes.

TLDR: this is the version we need to test on kusama validators next.

- Litep2p

Repo | Count | Level | Triage report
-|-|-|-
polkadot-sdk | 409 | warn | Report .*: .* to .*. Reason: .*. Banned, disconnecting. ( Peer disconnected with inflight after backoffs. Banned, disconnecting. )
litep2p | 128 | warn | Refusing to add known address that corresponds to a different peer ID
litep2p | 54 | warn | inbound identify substream opened for peer who doesn't exist
polkadot-sdk | 7 | error | 💔 Called `on_validated_block_announce` with a bad peer ID .*
polkadot-sdk | 1 | warn | ❌ Error while dialing .*: .*
polkadot-sdk | 1 | warn | Report .*: .* to .*. Reason: .*. Banned, disconnecting. ( Invalid justification. Banned, disconnecting. )

- Libp2p

Repo | Count | Level | Triage report
-|-|-|-
polkadot-sdk | 1023 | warn | 💔 Ignored block \(#.* -- .*\) announcement from .* because all validation slots are occupied.
polkadot-sdk | 472 | warn | Report .*: .* to .*. Reason: .*. Banned, disconnecting. ( Unsupported protocol. Banned, disconnecting. )
polkadot-sdk | 379 | error | 💔 Called `on_validated_block_announce` with a bad peer ID .*
polkadot-sdk | 163 | warn | Report .*: .* to .*. Reason: .*. Banned, disconnecting. ( Invalid justification. Banned, disconnecting. )
polkadot-sdk | 116 | warn | Report .*: .* to .*. Reason: .*. Banned, disconnecting. ( Peer disconnected with inflight after backoffs. Banned, disconnecting. )
polkadot-sdk | 83 | warn | Report .*: .* to .*. Reason: .*. Banned, disconnecting. ( Same block request multiple times. Banned, disconnecting. )
polkadot-sdk | 4 | warn | Re-finalized block #.* \(.*\) in the canonical chain, current best finalized is #.*
polkadot-sdk | 2 | warn | Report .*: .* to .*. Reason: .*. Banned, disconnecting. ( Genesis mismatch. Banned, disconnecting. )
polkadot-sdk | 2 | warn | Report .*: .* to .*. Reason: .*. Banned, disconnecting. ( Not requested block data. Banned, disconnecting. )
polkadot-sdk | 2 | warn | Can't listen on .* because: .*
polkadot-sdk | 1 | warn | ❌ Error while dialing .*: .*

--------- Signed-off-by: Alexandru Vasile <[email protected]>
# Description
This PR is a simple fix consisting of adding a check to the process of decoding nodes of a storage proof to avoid panicking when receiving badly-constructed proofs, returning an error instead. This would close #6485.

## Integration
No changes have to be done downstream, and as such the version bump should be minor.

--------- Co-authored-by: Bastian Köcher <[email protected]>
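A minimal sketch of the defensive pattern described above, with hypothetical types; the real change is in the trie-node decoding path of the storage-proof code. Instead of indexing into untrusted proof bytes and panicking on malformed input, every access is checked and surfaces an error to the caller.

```rust
#[derive(Debug, PartialEq)]
enum ProofError {
    Truncated,
}

// Reads `len` bytes starting at `offset`, returning an error instead of
// panicking when a badly constructed proof is shorter than claimed.
fn read_node(proof: &[u8], offset: usize, len: usize) -> Result<&[u8], ProofError> {
    let end = offset.checked_add(len).ok_or(ProofError::Truncated)?;
    proof.get(offset..end).ok_or(ProofError::Truncated)
}

fn main() {
    let proof = vec![1u8, 2, 3];
    assert_eq!(read_node(&proof, 0, 2), Ok(&proof[0..2]));
    assert_eq!(read_node(&proof, 2, 10), Err(ProofError::Truncated));
}
```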
…#6302) Migrates pallet-nomination-pool-benchmarking to benchmarking syntax v2. Part of: * #6202 --------- Co-authored-by: GitHub Action <[email protected]> Co-authored-by: Guillaume Thiolliere <[email protected]> Co-authored-by: Giuseppe Re <[email protected]>
See Commits and Changes for more details.
Created by pull[bot]