Introduce a pool for storing OptimisticBlocks, joining them with chunks, executing them, and reusing cached results. #10584
The change is pretty big; however, I think it's important to merge it at once because it already gives a working example. I'll describe the two major changes, which should be sufficient for review.
OptimisticBlockChunksPool
Receives optimistic blocks (OBs), currently only from the block producer itself, and receives chunks from `ShardsManager`. Once both the OB and the chunks on top of some prev block have been received, it allows taking the ready OB.
Some primitive throttling and garbage collection is required to ensure that OBs are not executed many times and that the pool doesn't OOM when there are forks. For that purpose, we maintain `minimal_base_height` for chunks and `block_height_threshold` for blocks. Note that we don't remove chunks immediately, because if there is a block skip, the chunks should be reused to process the next OB.

This feature is independent, so I also implement simple unit tests for it.
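To make the structure concrete, here is a rough sketch of the pool's shape under the assumptions above; all type, field, and method names are illustrative, not necessarily the ones used in the PR:

```rust
use std::collections::HashMap;

// Simplified stand-ins for the real nearcore types.
type BlockHash = [u8; 32];
type BlockHeight = u64;
type ShardId = u64;

struct OptimisticBlock { prev_hash: BlockHash, height: BlockHeight }
#[derive(Clone)]
struct ShardChunk { prev_hash: BlockHash, height: BlockHeight, shard_id: ShardId }

/// Keeps OBs and chunks keyed by the hash of their prev block and hands out
/// an OB together with its chunks once everything for that prev block arrived.
struct OptimisticBlockChunksPool {
    num_shards: usize,
    blocks: HashMap<BlockHash, OptimisticBlock>,
    chunks: HashMap<BlockHash, HashMap<ShardId, ShardChunk>>,
    /// Chunks below this height are rejected (garbage collection for chunks).
    minimal_base_height: BlockHeight,
    /// OBs at or below this height are rejected, so an OB is not executed
    /// many times and the pool doesn't OOM when there are forks.
    block_height_threshold: BlockHeight,
}

impl OptimisticBlockChunksPool {
    fn add_block(&mut self, block: OptimisticBlock) {
        if block.height <= self.block_height_threshold {
            return;
        }
        self.blocks.insert(block.prev_hash, block);
    }

    fn add_chunk(&mut self, chunk: ShardChunk) {
        if chunk.height < self.minimal_base_height {
            return;
        }
        self.chunks.entry(chunk.prev_hash).or_default().insert(chunk.shard_id, chunk);
    }

    /// Returns the ready OB with its chunks once all shards are present.
    /// Chunks are intentionally *not* removed: after a block skip they can be
    /// reused to process the next OB built on the same prev block.
    fn take_ready(&mut self, prev_hash: &BlockHash) -> Option<(OptimisticBlock, Vec<ShardChunk>)> {
        let chunks = self.chunks.get(prev_hash)?;
        if chunks.len() < self.num_shards {
            return None;
        }
        let block = self.blocks.remove(prev_hash)?;
        Some((block, chunks.values().cloned().collect()))
    }
}
```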
Processing OB
As discussed before, the result of chunk execution on top of an OB doesn't impact any part of block processing and doesn't persist anything. It is simply put into a cache, which can be reused when the actual block is received.
This cache, however, needs a unique key to store results under. For that, I introduce `CachedShardUpdateKey`, which includes the necessary fields of the Block or OB, all the chunks, and the shard id (a shard index could also work). Note that we need the chunk hashes because they define the prev outgoing receipts, which in turn are used to generate the incoming receipts for our chunk.
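A minimal sketch of what such a key could look like, assuming hash-like placeholder types; the exact field set in the PR may differ:

```rust
type BlockHash = [u8; 32];
type ChunkHash = [u8; 32];
type ShardId = u64;

/// Unique key for the cached result of a shard update performed on top of a
/// Block or an OptimisticBlock. Chunk hashes must be part of the key because
/// they define the prev outgoing receipts, which become the incoming receipts
/// for this chunk.
#[derive(Clone, PartialEq, Eq, Hash)]
struct CachedShardUpdateKey {
    /// Fields shared between a Block and an OB, e.g. the prev block hash.
    prev_block_hash: BlockHash,
    /// Hashes of all chunks built on top of the prev block.
    chunk_hashes: Vec<ChunkHash>,
    /// Shard whose update result is cached (a shard index would also work).
    shard_id: ShardId,
}
```

The apply-chunk results can then live in something like a map from `CachedShardUpdateKey` to the shard update result, filled during OB processing and queried when the real block arrives.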
For execution, `BlocksInProcessing` is extended a bit to keep OBs as well, so that the number of parallel chunk executions for blocks and OBs is limited together. The cache population happens in `postprocess_optimistic_block`.
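A sketch of that shared limit, again with illustrative names and an arbitrary cap rather than the real nearcore constant:

```rust
use std::collections::HashSet;

type BlockHash = [u8; 32];

/// Illustrative cap on parallel chunk executions; not the real nearcore value.
const MAX_BLOCKS_IN_PROCESSING: usize = 5;

/// Tracks both regular blocks and optimistic blocks currently being applied,
/// so that they share a single limit on parallel chunk executions.
#[derive(Default)]
struct BlocksInProcessing {
    blocks: HashSet<BlockHash>,
    optimistic_blocks: HashSet<BlockHash>,
}

impl BlocksInProcessing {
    fn has_capacity(&self) -> bool {
        self.blocks.len() + self.optimistic_blocks.len() < MAX_BLOCKS_IN_PROCESSING
    }

    /// Admit an OB for execution only if the shared limit allows it.
    fn start_optimistic_block(&mut self, hash: BlockHash) -> bool {
        self.has_capacity() && self.optimistic_blocks.insert(hash)
    }

    /// Remove the OB once its chunks are applied; in the PR, this is the point
    /// (`postprocess_optimistic_block`) where the result cache is populated.
    fn finish_optimistic_block(&mut self, hash: &BlockHash) {
        self.optimistic_blocks.remove(hash);
    }
}
```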
Testing
Finally, we are also able to write `test_optimistic_block`. For now, I just check that there is at least one cache hit; let's think about more complex cases later.

I'll resolve merge conflicts later.