ENG-326: Full Node Commitments #58
Conversation
We'll use the working CI.
One comment about e2e tests; up to you whether we want to add them in this PR or a separate one.
@0xmovses for now, Mikhail will be working against a stub. e2e tests can come elsewhere.
In MonzaExecutor and SuzukaExecutor, return the Commitment value from execute_block. The underlying opt-executor is changed to produce the commitment from proof data.
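A minimal sketch of the changed interface. The `BlockCommitment` fields and the hashing stand-in are illustrative assumptions, not the real maptos types; the real opt-executor derives the commitment from proof data.

```rust
/// Illustrative commitment type; field names are assumptions, not the
/// real maptos types.
#[derive(Debug, Clone, PartialEq)]
pub struct BlockCommitment {
    pub height: u64,
    pub block_id: [u8; 32],
    pub commitment: [u8; 32],
}

pub struct Executor {
    height: u64,
}

impl Executor {
    pub fn new() -> Self {
        Executor { height: 0 }
    }

    /// Executes a block and returns its commitment, as the PR changes
    /// MonzaExecutor/SuzukaExecutor to do. A trivial transform of the
    /// block id stands in for the real proof-derived commitment.
    pub fn execute_block(&mut self, block_id: [u8; 32]) -> BlockCommitment {
        self.height += 1;
        let mut commitment = block_id;
        commitment.reverse();
        BlockCommitment { height: self.height, block_id, commitment }
    }
}
```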
…full-node-commitments
@mzabaluev Updated the stub with batching.
Oh, one more thing you can now expect that will be helpful...
In the contract and now the stub, I added a method. Here's the type signature: `async fn get_max_tolerable_block_height(&self) -> Result<u64, anyhow::Error>;`
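A sketch of what the stub might do behind that signature. This is synchronous for brevity (the real method is async and returns `anyhow::Result`), and the tolerance arithmetic is an assumption about the stub, not the contract's actual rule.

```rust
/// Simplified, synchronous stand-in for the async trait method above.
pub trait McrSettlementClientOperations {
    /// The highest block height for which the settlement layer will still
    /// accept a commitment.
    fn get_max_tolerable_block_height(&self) -> Result<u64, String>;
}

pub struct McrSettlementClientStub {
    /// Height of the last commitment the (stubbed) contract accepted.
    pub last_settled_height: u64,
    /// How far past settlement the node may run before it must wait
    /// (assumed stub parameter).
    pub tolerance: u64,
}

impl McrSettlementClientOperations for McrSettlementClientStub {
    fn get_max_tolerable_block_height(&self) -> Result<u64, String> {
        Ok(self.last_settled_height + self.tolerance)
    }
}
```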
In SuzukaPartialNode, instantiate the stub McrSettlementClient.
Moar data that the executor should be able to immediately provide upon executing the block.
Send the block commitments to the MCR contract via the settlement client. This is done in a separate task from the executor loop to avoid blocking it.
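The pattern can be sketched with std threads and a channel. The node itself uses tokio tasks and the MCR settlement client; here a `Vec` stands in for the client call, to keep the sketch self-contained.

```rust
use std::sync::mpsc;
use std::thread;

/// Spawns the settlement task. The executor loop sends each commitment's
/// height into the channel and immediately moves on; this task drains the
/// channel and "settles" each height (here: records it), so settlement
/// latency never blocks execution.
pub fn spawn_settlement_task(rx: mpsc::Receiver<u64>) -> thread::JoinHandle<Vec<u64>> {
    thread::spawn(move || {
        let mut settled = Vec::new();
        for height in rx {
            // Stand-in for posting the commitment via the MCR settlement client.
            settled.push(height);
        }
        settled
    })
}
```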
Add a comment to rename since suzuka needs this Nix-provided stuff as well.
let block_metadata = self.executor.build_block_metadata(
    HashValue::sha3_256_of(block_id.as_bytes()),
    block_timestamp
).await?;
let block_metadata_transaction = SignatureVerifiedTransaction::Valid(
    Transaction::BlockMetadata(
        block_metadata
    )
);
block_transactions.push(block_metadata_transaction);
Aha, I thought this would be somehow pulled through the DA, but it needs to be synthesized here.
Maybe even lower it into maptos-opt-executor, then?
Yeah, we may have to reevaluate at some point. But for now, we are going to assume that the executor can correctly decide the block height based on, well, the number of blocks it has seen.
This is, at least on the surface, a viable assumption: if it is an honest node, it should fetch all the blocks up from the DA (and, when we have state syncing, from other nodes that have done so).
However, if we do need to have it come up from the DA, the complexity we will have to deal with is that Celestia blob heights are continually pushed regardless of whether additional blobs have been added to a given namespace.
In theory, the sequencer could address this by keeping its own counter. But this could be problematic.
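Such a counter might look like this. This is a hypothetical sketch; the "problematic" part is everything it ignores, such as persisting the counter across restarts and keeping it consistent with what is actually on the DA.

```rust
/// Hypothetical sequencer-side height counter. Unlike Celestia blob
/// heights, it only advances when this namespace actually emits a block.
pub struct SequencerHeightCounter {
    next: u64,
}

impl SequencerHeightCounter {
    pub fn new(resume_from: u64) -> Self {
        SequencerHeightCounter { next: resume_from }
    }

    /// Assigns the next block height.
    pub fn assign(&mut self) -> u64 {
        let height = self.next;
        self.next += 1;
        height
    }
}
```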
No need to compute the commitment for the metadata workaround block.
@@ -329,7 +329,7 @@ impl Executor {
// Context has a reach-around to the db so the block height should
LGTM. Hopefully we'll have a productive conversation about where implementations are separate on Friday and Monday.
Scratch that. Build error.
Thanks for this! Just a few thoughts.
"--cfg tokio_unstable -C force-frame-pointers=yes -C force-unwind-tables=yes -C link-arg=/STACK:8000000"
else
"--cfg tokio_unstable -C force-frame-pointers=yes -C force-unwind-tables=yes";
👀 Would be great if Nix did handle config.toml. Seems subpar that it doesn't.
{
    tracing::debug!("Got transaction: {:?}", transaction)
}
👌
pub struct SuzukaPartialNode<T> {
    transaction_sender: Sender<SignedTransaction>,
    pub transaction_receiver: Receiver<SignedTransaction>,
    light_node_client: Arc<RwLock<LightNodeServiceClient<tonic::transport::Channel>>>,
Let's add in a doc comment for the struct and all its fields.
There should be a wider ticket for documenting our public APIs (when they are stabilized enough to avoid unnecessary churn). I have filed #81 to track this.
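For reference, the requested doc comments might look like this. Field types are simplified to std placeholders so the sketch stands alone; the real struct also holds the light node client, and the comment texts are assumptions about the fields' roles.

```rust
use std::sync::mpsc::{Receiver, Sender};

/// Placeholder for the real transaction type.
pub struct SignedTransaction;

/// A partial Suzuka node: reads blocks from the DA light node, executes
/// them, and settles the resulting commitments.
pub struct SuzukaPartialNode<T> {
    /// The executor used to run blocks read from the DA.
    executor: T,
    /// Sends transactions submitted via the API into the pipeline.
    transaction_sender: Sender<SignedTransaction>,
    /// Receives transactions to be batched and posted to the DA light node.
    pub transaction_receiver: Receiver<SignedTransaction>,
}
```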
fn bind_transaction_channel(&mut self) {
    self.executor.set_tx_channel(self.transaction_sender.clone());
}

pub fn bound(
pub fn bound<C>(
Any pub methods should have a doc comment.
@@ -110,9 +140,9 @@ impl <T : SuzukaExecutor + Send + Sync + Clone>SuzukaPartialNode<T> {
// receive transactions from the transaction channel and send them to be executed
// ! This assumes the m1 da light node is running sequencer mode
pub async fn read_blocks_from_da(&self) -> Result<(), anyhow::Error> {
Let's turn those comments into doc comments
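Converted, the comments above would become doc comments along these lines (body stubbed and signature simplified so the sketch compiles on its own; the `!` caveat moves into the doc text):

```rust
pub struct Node;

impl Node {
    /// Receives transactions from the transaction channel and sends them
    /// to be executed.
    ///
    /// Note: this assumes the m1 DA light node is running in sequencer mode.
    pub fn read_blocks_from_da(&self) -> Result<(), String> {
        // Body elided; the real method is async and returns anyhow::Result.
        Ok(())
    }
}
```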
executor.execute_block(FinalityMode::Opt, block).await?;
let commitment = executor.execute_block(FinalityMode::Opt, block).await?;

// TODO: test the commitment
Is this TODO outside the scope of this PR? Maybe let's make a tracking issue.
I've added some checks in fae30be
I'm not testing the commitment hash itself, as that would require replicating much of the internal code.
protocol-units/settlement/mcr/contracts/broadcast/DeployMCR.s.sol/1569/run-1716348078.json
protocol-units/settlement/mcr/contracts/broadcast/DeployMCR.s.sol/1569/run-latest.json
Don't use the "logging" feature to gate individual tracing macro expansions; tracing is designed not to need this. Remove the dependencies on env_logger and tracing-log, as these crates are no longer in use.
Test some properties of the BlockCommitment returned by a test from the execute_block method. There's no easy way to verify the commitment hash itself.
Rename to override_block_commitment for clarity.
To test dynamic behavior of the McrSettlementManager, add methods to pause and resume streaming of commitments.
Test dynamic behavior: when the client is not ready to process more incoming blocks, the manager should pause with blocks as well. Then when the client is ready to accept more, resume.
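The dynamic behavior described above can be sketched with a bounded channel, where a full channel models a client that is not ready and `try_send` stands in for the manager's non-blocking probe. This is a pattern sketch, not the McrSettlementManager's actual API.

```rust
use std::sync::mpsc;

/// Offers a commitment height to the settlement client. Returns false when
/// the client is not ready (channel full), telling the manager to pause.
pub fn try_submit(tx: &mpsc::SyncSender<u64>, height: u64) -> bool {
    tx.try_send(height).is_ok()
}
```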
a76e9ed to df9cb38
Included Liam's remarks on treating verifier errors as successful verification.
lgtm
Summary
review protocol-units, networks, scripts.
Add MCR commitment processing to Suzuka full node.
Changelog
Testing
Design
Entrypoints for understanding
MCR Settlement Client (Stub): https://github.com/movementlabsxyz/movement/tree/eng-326/full-node-commitments/protocol-units/settlement/mcr/client