
Refactor rollup initialization #1714

Open · wants to merge 12 commits into base: nightly

Conversation

@rakanalh (Contributor) commented on Jan 15, 2025

Description

The problem right now is that the config, along with many other dependencies, is threaded through

main -> rollup -> client (sequencer, fullnode etc.) -> da_block_handler

So introducing a component in the DA block handler that needs a piece of config or a new dependency, for example, requires updating the whole flow all the way down to that point.

This PR changes that by running the RPC server and the DA block handler as services alongside the client (sequencer, fullnode etc.), not from within it. This makes initialization much simpler.
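To make the new shape concrete, here is a minimal, self-contained sketch of "services alongside the client" (every name here, RollupConfig, DaBlockHandler, build_services and the rest, is an illustrative stand-in, not the actual citrea API; assumes tokio):

use tokio::task::JoinHandle;

struct RollupConfig; // the config slice each service actually needs
struct DaBlockHandler;
struct Node;

impl DaBlockHandler {
    async fn run(self) { /* consume L1 blocks */ }
}

impl Node {
    async fn run(self) { /* sequencer / fullnode main loop */ }
}

// One place builds everything a node needs, so a new dependency in the
// DA block handler only touches this function, not main -> rollup -> client.
fn build_services(_config: &RollupConfig) -> (DaBlockHandler, Node) {
    (DaBlockHandler, Node)
}

#[tokio::main]
async fn main() {
    let config = RollupConfig;
    let (da_handler, node) = build_services(&config);

    // The DA block handler is spawned as a sibling task, not from within
    // the node.
    let da_task: JoinHandle<()> = tokio::spawn(da_handler.run());

    node.run().await;
    da_task.abort();
}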

TODO

  • Fix clippy

@rakanalh rakanalh added the T - enhancement label on Jan 15, 2025
@eyusufatik (Member) left a comment:

left my initial comments, will need to read rollup/mod.rs a bit more in detail.

crates/common/src/da.rs (review thread, outdated, resolved)
Comment on lines 34 to 72
'block_sync: loop {
    // Fetch the latest finalized L1 block header, retrying on failure.
    let last_finalized_l1_block_header =
        match da_service.get_last_finalized_block_header().await {
            Ok(header) => header,
            Err(e) => {
                error!("Could not fetch last finalized L1 block header: {}", e);
                sleep(Duration::from_secs(2)).await;
                continue;
            }
        };

    let new_l1_height = last_finalized_l1_block_header.height();

    // Walk every height we have not yet processed.
    for block_number in l1_height + 1..=new_l1_height {
        let l1_block =
            match get_da_block_at_height(&da_service, block_number, l1_block_cache.clone())
                .await
            {
                Ok(block) => block,
                Err(e) => {
                    error!("Could not fetch L1 block at height {}: {}", block_number, e);
                    sleep(Duration::from_secs(2)).await;
                    continue 'block_sync;
                }
            };

        if block_number > l1_height {
            l1_height = block_number;
            l1_block_scan_histogram.record(
                Instant::now()
                    .saturating_duration_since(start)
                    .as_secs_f64(),
            );
            if let Err(e) = sender.send(l1_block).await {
                error!("Could not notify about L1 block: {}", e);
                continue 'block_sync;
            }
        }
    }
@eyusufatik (Member) commented:

I also want to refactor this part of the code

@rakanalh (Contributor, Author) replied:

What changes are you looking to have done?
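One possible direction, purely as an illustrative sketch (the helper name and shape are assumptions, not something this PR commits to): extract the fetch-and-retry pattern into a generic helper so the 'block_sync loop only sequences heights and sends blocks. Note that the original `continue 'block_sync` re-reads the finalized header instead of retrying the same height, so a faithful refactor would keep that distinction for the per-height fetch.

use std::time::Duration;
use tokio::time::sleep;

/// Retry an async fetch every 2 seconds until it succeeds, logging failures.
/// Hypothetical helper for illustration only.
async fn fetch_with_retry<T, E, F, Fut>(what: &str, mut fetch: F) -> T
where
    E: std::fmt::Display,
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<T, E>>,
{
    loop {
        match fetch().await {
            Ok(value) => return value,
            Err(e) => {
                eprintln!("Could not fetch {}: {}", what, e);
                sleep(Duration::from_secs(2)).await;
            }
        }
    }
}

// Usage would then look like:
// let header = fetch_with_retry("last finalized L1 block header", || {
//     da_service.get_last_finalized_block_header()
// })
// .await;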

crates/common/src/rpc/server.rs (review thread, outdated, resolved)
bin/citrea/src/cli.rs (review thread, outdated, resolved)
bin/citrea/src/rollup/mod.rs (review thread, resolved)
bin/citrea/src/rollup/mod.rs (review thread, outdated, resolved)
bin/citrea/src/rollup/mod.rs (review thread, outdated, resolved)
- L1BlockHandler for all node types has been moved
- RPC server is started for the batch prover

Remaining:
- Update spawning of the RPC server for sequencer / light client

This consistently calls `build_services`, which returns all the services that a specific node needs to spawn using the task manager.
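For reference, a hedged sketch of what a `build_services` that returns everything a node spawns could look like; the Service alias, the NodeKind enum, and the closure bodies are assumptions for illustration, not the PR's actual types:

use std::future::Future;
use std::pin::Pin;

// A service is just a boxed future the task manager can spawn.
type Service = Pin<Box<dyn Future<Output = ()> + Send>>;

enum NodeKind {
    Sequencer,
    FullNode,
    BatchProver,
    LightClientProver,
}

fn build_services(kind: NodeKind) -> Vec<Service> {
    let mut services: Vec<Service> = Vec::new();
    // Services every node type runs: the DA block handler and the RPC server.
    services.push(Box::pin(async { /* da_block_handler.run().await */ }));
    services.push(Box::pin(async { /* start_rpc_server(...).await */ }));
    // Plus the node kind's own main loop.
    match kind {
        NodeKind::Sequencer => services.push(Box::pin(async { /* sequencer loop */ })),
        NodeKind::BatchProver => services.push(Box::pin(async { /* prover loop */ })),
        NodeKind::LightClientProver | NodeKind::FullNode => {
            services.push(Box::pin(async { /* sync loop */ }))
        }
    }
    services
}

fn main() {
    let services = build_services(NodeKind::FullNode);
    println!("{} services to spawn", services.len());
}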
bin/citrea/src/cli.rs (review thread, outdated, resolved)
@rakanalh rakanalh marked this pull request as ready for review January 20, 2025 12:16
@auto-assign auto-assign bot requested a review from kpp January 20, 2025 12:16
@rakanalh rakanalh requested review from jfldde and eyusufatik January 20, 2025 12:18
Comment on lines +108 to +115
if let Some(sequencer_config) = sequencer_config {
    return Ok(NodeType::Sequencer(sequencer_config));
} else if let Some(batch_prover_config) = batch_prover_config {
    return Ok(NodeType::BatchProver(batch_prover_config));
} else if let Some(light_client_prover_config) = light_client_prover_config {
    return Ok(NodeType::LightClientProver(light_client_prover_config));
}
Ok(NodeType::FullNode)
A Contributor commented:

Wish we had it the other way round and passed a `--kind` arg or something. That would let us have a single config field (plus the rollup config) and not be implicit about which node kind is running.
We would match on the kind here and get the config accordingly.
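A sketch of what that suggestion could look like with clap (hypothetical; citrea's actual CLI and config types may differ):

use clap::{Parser, ValueEnum};

#[derive(Clone, Debug, ValueEnum)]
enum NodeKind {
    Sequencer,
    FullNode,
    BatchProver,
    LightClientProver,
}

#[derive(Parser)]
struct Args {
    /// Which node to run, instead of inferring it from which config is set.
    #[arg(long, value_enum)]
    kind: NodeKind,
    /// Single node-config path (the rollup config is passed separately).
    #[arg(long)]
    config: Option<std::path::PathBuf>,
}

fn main() {
    let args = Args::parse();
    // Match on the explicit kind and parse `--config` accordingly.
    match args.kind {
        NodeKind::Sequencer => { /* parse config as SequencerConfig */ }
        NodeKind::BatchProver => { /* parse as BatchProverConfig */ }
        NodeKind::LightClientProver => { /* parse as LightClientProverConfig */ }
        NodeKind::FullNode => { /* no node-specific config required */ }
    }
}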

storage_manager,
prover_storage,
soft_confirmation_channel.0,
rpc_module,
A Contributor commented:

Could remove it here since we return it as-is

use crate::RpcConfig;

/// Starts an RPC server with the provided RPC methods.
pub async fn start_rpc_server(
A Contributor commented:

IIRC the different nodes have subtle differences between them here
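One way to keep a single generic `start_rpc_server` while still allowing per-node differences is to have each node build and merge its own method table before handing it over. A self-contained sketch (the RpcModule here is a toy stand-in for illustration, not a real RPC library's type):

use std::collections::HashMap;

type Handler = fn(&str) -> String;

#[derive(Default)]
struct RpcModule {
    methods: HashMap<&'static str, Handler>,
}

impl RpcModule {
    fn register(&mut self, name: &'static str, h: Handler) {
        self.methods.insert(name, h);
    }
    // Node-specific modules get merged in by the caller.
    fn merge(&mut self, other: RpcModule) {
        self.methods.extend(other.methods);
    }
}

// Each node kind contributes only its own methods.
fn sequencer_methods() -> RpcModule {
    let mut m = RpcModule::default();
    m.register("sequencer_publishBatch", |_params: &str| "ok".to_string());
    m
}

fn main() {
    // Common methods first, then the node kind's extras; the server
    // itself stays node-agnostic.
    let mut module = RpcModule::default();
    module.merge(sequencer_methods());
    println!("{} methods registered", module.methods.len());
}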
