diff --git a/.github/workflows/image-build.yaml b/.github/workflows/image-build.yaml
index 8e326cb1c..5954d48f8 100644
--- a/.github/workflows/image-build.yaml
+++ b/.github/workflows/image-build.yaml
@@ -45,6 +45,7 @@ jobs:
           - blocklist-client
     runs-on: ubuntu-latest
+    environment: "Push to Docker"
     steps:
       ## Setup Docker for the builds
       - name: Docker setup
diff --git a/README.md b/README.md
index 59886c959..e119c243f 100644
--- a/README.md
+++ b/README.md
@@ -8,6 +8,10 @@
 - [sBTC Landing Page](https://sbtc.tech/)
 - [sBTC Docs](https://docs.stacks.co/concepts/sbtc)
 
+## Releases
+
+See [`RELEASE.md`](./RELEASE.md).
+
 ## Design Docs
 
 **All decisions are made and tracked via GitHub issues where they and their rationale can be verified publicly.** Due to sBTC's critical nature extensive research and planning has been done to ensure all funds remain secure on launch.
 
@@ -153,7 +157,7 @@ This GitHub Actions workflow, Cargo Vet, is designed to automate the vetting of
 
 - **Failure**: Check the GitHub Actions logs for errors and annotations about unvetted dependencies. Download the audit-suggestions.txt artifact from the "Artifacts" section of the GitHub Actions interface for a detailed report.
 
-- **Addressing Unvetted Dependencies**: Use the suggestions in the audit-suggestions.txt file to update your dependency audit policies in the supply-chain.toml file.
+- **Addressing Unvetted Dependencies**: Use the suggestions in the audit-suggestions.txt file to update your dependency audit policies in the supply-chain.toml file. You can check the suggestions offline by running this command:
diff --git a/RELEASE.md b/RELEASE.md
new file mode 100644
index 000000000..a91d72789
--- /dev/null
+++ b/RELEASE.md
@@ -0,0 +1,99 @@
+# sBTC: Release Process
+
+## Executive summary
+
+This release process targets the following goals:
+
+1. Ensure there exists a provable chain of trust connecting the source code to
+   the corresponding artifacts.
+1. Ensure there exists a clear _separation of duties_ between those who _write_
+   the code and those who _release_ it (and announce its release).
+
+It implements (1) by leveraging GitHub rulesets for branch/tag protection and
+artifact attestation, and (2) through the announcement process (described
+below).
+
+## [sBTC Core developers] Creating a new release
+
+An sBTC release is a tagged commit on the `main` branch.
+
+Any commit to `main` MUST require at least one review through a GitHub pull
+request. Before merging the pull request, all tests MUST pass.
+
+Tags MUST be named according to [semantic versioning][0].
+
+[GitHub rulesets][1] ensure that only a subset of sBTC core developers can
+create a `git tag`. Creating a tag SHOULD require a 4-eyes review (as of
+February 2025, this is not yet possible).
+
+Once a tag is created, a [GitHub deployment environment][2] will build and
+publish any corresponding artifacts. The deployment environment MUST require a
+review from a subset of sBTC core developers before executing. Using a
+deployment environment ensures that all credentials required to publish
+artifacts are gated behind the review process (e.g., Docker Hub credentials,
+until [OIDC identities are supported][4]).
+
+All artifacts MUST be [attested][3] so that their build provenance can be
+established. This way, downstream users (e.g., sBTC signers) will be able to
+cryptographically verify that an artifact (e.g., a Docker release) has been
+built and published through GitHub Actions.
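+
+For illustration only, a minimal sketch of what such an attestation step could
+look like in a GitHub Actions release job. The step names, the
+`docker/build-push-action` usage, and the `permissions` block shown here are
+assumptions for the example, not necessarily the exact contents of
+`image-build.yaml`:
+
+```yaml
+permissions:
+  id-token: write      # needed to request the OIDC token used for signing
+  attestations: write  # needed to store the attestation
+  contents: read
+
+steps:
+  - name: Build and push image
+    id: push
+    uses: docker/build-push-action@v6
+    with:
+      push: true
+      tags: blockstack/sbtc:signer-0.0.9-rc6
+
+  # Attest the pushed image so that `gh attestation verify` can later check
+  # its build provenance (see the verification example below).
+  - name: Attest build provenance
+    uses: actions/attest-build-provenance@v2
+    with:
+      subject-name: index.docker.io/blockstack/sbtc
+      subject-digest: ${{ steps.push.outputs.digest }}
+      push-to-registry: true
+```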
+
+All artifacts MUST be addressed through their cryptographic digest (e.g., the
+`git commit` or Docker image digest), in addition to their label (e.g., the
+`git tag`).
+
+To improve quality of life, the release notes MUST include breaking changes (if
+any), upgrade migrations (if any), and a link to the relevant artifacts (e.g.,
+Docker images).
+
+## [sBTC Comms] Announcing a new release
+
+After a new release has been created, the sBTC Comms team will inform the sBTC
+signers and provide the appropriate update instructions.
+
+All members of the sBTC Comms team MUST NOT participate in sBTC development
+(that is, they MUST NOT be part of the Core developers team). This ensures a
+clear separation of duties and, for instance, prevents a rogue core developer
+from "convincing" the sBTC signers into deploying a tampered release.
+
+At all times, there MUST be at least two members of the sBTC Comms team in any
+communications channel that includes an sBTC signer (similar to the 4-eyes
+process for releases).
+
+## [sBTC Signers] Deploying a new release
+
+Once sBTC Signers receive a release announcement from the sBTC Comms team, they
+MUST:
+
+1. Ensure the communication comes from a member of the sBTC Comms team.
+1. Carefully read the corresponding upgrade instructions.
+1. Verify the attestation of the attached artifacts.
+1. Execute the upgrade.
+1. Confirm the execution.
+
+The `gh` executable can quickly [verify attestations][5]:
+
+```bash
+> gh attestation verify oci://index.docker.io/blockstack/sbtc:signer-0.0.9-rc6 -R stacks-network/sbtc
+Loaded digest sha256:3bba86a5c2dfdbda61209dc728ab208406a909c8b5affba45da5bb4ccb27ad0d for oci://index.docker.io/blockstack/sbtc:signer-0.0.9-rc6
+Loaded 1 attestation from GitHub API
+
+The following policy criteria will be enforced:
+- OIDC Issuer must match:................... https://token.actions.githubusercontent.com
+- Source Repository Owner URI must match:... https://github.com/stacks-network
+- Source Repository URI must match:......... https://github.com/stacks-network/sbtc
+- Predicate type must match:................ https://slsa.dev/provenance/v1
+- Subject Alternative Name must match regex: (?i)^https://github.com/stacks-network/sbtc/
+
+✓ Verification succeeded!
+
+sha256:3bba86a5c2dfdbda61209dc728ab208406a909c8b5affba45da5bb4ccb27ad0d was attested by:
+REPO                 PREDICATE_TYPE                  WORKFLOW
+stacks-network/sbtc  https://slsa.dev/provenance/v1  .github/workflows/image-build.yaml@refs/tags/0.0.9-rc6
+```
+
+[0]: https://semver.org
+[1]: https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-rulesets/about-rulesets
+[2]: https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-deployments/managing-environments-for-deployment
+[3]: https://docs.github.com/en/actions/security-for-github-actions/using-artifact-attestations/using-artifact-attestations-to-establish-provenance-for-builds
+[4]: https://github.com/docker/roadmap/issues/314#issuecomment-2605945137
+[5]: https://docs.github.com/en/actions/security-for-github-actions/using-artifact-attestations/using-artifact-attestations-to-establish-provenance-for-builds#verifying-artifact-attestations-with-the-github-cli
diff --git a/protobufs/stacks/signer/v1/messages.proto b/protobufs/stacks/signer/v1/messages.proto
index 128036248..35ef81a8d 100644
--- a/protobufs/stacks/signer/v1/messages.proto
+++ b/protobufs/stacks/signer/v1/messages.proto
@@ -35,8 +35,7 @@ message SignerMessage {
 
 // A wsts message.
 message WstsMessage {
-  // The transaction ID this message relates to, will be a dummy ID for DKG messages
-  bitcoin.BitcoinTxid txid = 1 [deprecated = true];
+  reserved 1;
   // The wsts message
   oneof inner {
     // Tell signers to begin DKG by sending DKG public shares
@@ -63,14 +62,14 @@ message WstsMessage {
   oneof id {
     // If this WSTS message is related to a Bitcoin signing round, this field
     // will be set to the related Bitcoin transaction ID.
-    bitcoin.BitcoinTxid id_bitcoin_txid = 12;
+    bitcoin.BitcoinTxid sweep = 12;
     // If this WSTS message is related to a rotate-keys transaction, this field
     // will be set to the _new_ aggregate public key being verified.
-    crypto.PublicKey id_rotate_key = 13;
+    crypto.PublicKey dkg_verification = 13;
     // If this WSTS message is related to a DKG round, this field will be set
     // to the 32-byte id determined based on the coordinator public key and
     // block hash, set by the coordinator.
-    crypto.Uint256 id_dkg = 14;
+    crypto.Uint256 dkg = 14;
   }
 }
diff --git a/signer/src/bitcoin/validation.rs b/signer/src/bitcoin/validation.rs
index aabe356e3..ce782f74c 100644
--- a/signer/src/bitcoin/validation.rs
+++ b/signer/src/bitcoin/validation.rs
@@ -322,7 +322,6 @@ impl BitcoinPreSignRequest {
             tx_fee: Amount::from_sat(tx.tx_fee),
             reports,
             chain_tip_height: btc_ctx.chain_tip_height,
-            // If the cap is None, then we assume that it is unlimited.
             sbtc_limits: ctx.state().get_current_limits(),
         };
diff --git a/signer/src/bitcoin/zmq.rs b/signer/src/bitcoin/zmq.rs
index b722efc27..ee16270e9 100644
--- a/signer/src/bitcoin/zmq.rs
+++ b/signer/src/bitcoin/zmq.rs
@@ -50,11 +50,8 @@ pub struct BitcoinCoreMessageStream {
 }
 
 impl BitcoinCoreMessageStream {
-    /// Create a new one using the endpoint(s) in the config.
-    pub async fn new_from_endpoint<T>(endpoint: &str, _subscriptions: &[T]) -> Result<Self, Error>
-    where
-        T: AsRef<str>,
-    {
+    /// Create a new one using the given endpoint.
+    pub async fn new_from_endpoint(endpoint: &str) -> Result<Self, Error> {
         let inner_stream = tokio::time::timeout(Duration::from_secs(10), async {
             bitcoincore_zmq::subscribe_async_monitor(&[endpoint])
         })
diff --git a/signer/src/block_observer.rs b/signer/src/block_observer.rs
index abda5c20a..75d11dea7 100644
--- a/signer/src/block_observer.rs
+++ b/signer/src/block_observer.rs
@@ -17,6 +17,7 @@
 //! - Update signer set transactions
 //! - Set aggregate key transactions
 
+use std::collections::BTreeSet;
 use std::future::Future;
 use std::time::Duration;
 
@@ -29,6 +30,7 @@ use crate::context::SbtcLimits;
 use crate::context::SignerEvent;
 use crate::emily_client::EmilyInteract;
 use crate::error::Error;
+use crate::keys::PublicKey;
 use crate::metrics::Metrics;
 use crate::metrics::BITCOIN_BLOCKCHAIN;
 use crate::stacks::api::GetNakamotoStartHeight as _;
@@ -157,8 +159,9 @@ where
                 tracing::warn!(%error, "could not process stacks blocks");
             }
 
-            if let Err(error) = self.update_sbtc_limits().await {
-                tracing::warn!(%error, "could not update sBTC limits");
+            tracing::debug!("updating the signer state");
+            if let Err(error) = self.update_signer_state(block_hash).await {
+                tracing::warn!(%error, "could not update the signer state");
                 continue;
             }
 
@@ -577,6 +580,101 @@ impl BlockObserver {
         }
         Ok(())
     }
+
+    /// Update the `SignerState` object with current signer set and
+    /// aggregate key data.
+    ///
+    /// # Notes
+    ///
+    /// The query used for fetching the cached information can take quite a
+    /// lot of time to complete on mainnet, so this function runs it once and
+    /// updates the signer state so that the other event loops do not need to
+    /// execute it.
The cached information is: + /// + /// * The current signer set. It gets this information from the last + /// successful key-rotation contract call if it exists. If such a + /// contract call does not exist this function uses the latest DKG + /// shares, and if that doesn't exist it uses the bootstrap signing + /// set from the configuration. + /// * The current aggregate key. It gets this information from the last + /// successful key-rotation contract call if it exists, and from the + /// latest DKG shares if no such contract call can be found. + async fn set_signer_set_and_aggregate_key(&self, chain_tip: BlockHash) -> Result<(), Error> { + let (aggregate_key, public_keys) = + get_signer_set_and_aggregate_key(&self.context, chain_tip).await?; + + let state = self.context.state(); + if let Some(aggregate_key) = aggregate_key { + state.set_current_aggregate_key(aggregate_key); + } + + state.update_current_signer_set(public_keys); + Ok(()) + } + + /// Update the `SignerState` object with data that is unlikely to + /// change until the arrival of the next bitcoin block. + /// + /// # Notes + /// + /// The function updates the following: + /// * sBTC limits from Emily. + /// * The current signer set. + /// * The current aggregate key. + async fn update_signer_state(&self, chain_tip: BlockHash) -> Result<(), Error> { + tracing::info!("loading sbtc limits from Emily"); + self.update_sbtc_limits().await?; + + tracing::info!("updating the signer state with the current signer set"); + self.set_signer_set_and_aggregate_key(chain_tip).await + } +} + +/// Return the signing set that can make sBTC related contract calls along +/// with the current aggregate key to use for locking UTXOs on bitcoin. +/// +/// The aggregate key fetched here is the one confirmed on the canonical +/// Stacks blockchain as part of a `rotate-keys` contract call. It will be +/// the public key that is the result of a DKG run. If there are no +/// rotate-keys transactions on the canonical stacks blockchain, then we +/// fall back on the last known DKG shares row in our database, and return +/// None as the aggregate key if no DKG shares can be found, implying that +/// this signer has not participated in DKG. +#[tracing::instrument(skip_all)] +pub async fn get_signer_set_and_aggregate_key( + context: &C, + chain_tip: B, +) -> Result<(Option, BTreeSet), Error> +where + C: Context, + B: Into, +{ + let db = context.get_storage(); + let chain_tip = chain_tip.into(); + + // We are supposed to submit a rotate-keys transaction after running + // DKG, but that transaction may not have been submitted yet (if we + // have just run DKG) or it may not have been confirmed on the + // canonical Stacks blockchain. + // + // If the signers have already run DKG, then we know that all + // participating signers should have the same view of the latest + // aggregate key, so we can fall back on the stored DKG shares for + // getting the current aggregate key and associated signing set. + match db.get_last_key_rotation(&chain_tip).await? { + Some(last_key) => { + let aggregate_key = last_key.aggregate_key; + let signer_set = last_key.signer_set.into_iter().collect(); + Ok((Some(aggregate_key), signer_set)) + } + None => match db.get_latest_encrypted_dkg_shares().await? 
{ + Some(shares) => { + let signer_set = shares.signer_set_public_keys.into_iter().collect(); + Ok((Some(shares.aggregate_key), signer_set)) + } + None => Ok((None, context.config().signer.bootstrap_signing_set())), + }, + } } #[cfg(test)] diff --git a/signer/src/context/signer_state.rs b/signer/src/context/signer_state.rs index aefdd53a5..f1d87a656 100644 --- a/signer/src/context/signer_state.rs +++ b/signer/src/context/signer_state.rs @@ -1,5 +1,6 @@ //! Module for signer state +use std::collections::BTreeSet; use std::sync::{ atomic::{AtomicBool, AtomicU64, Ordering}, RwLock, @@ -18,6 +19,7 @@ use crate::keys::PublicKey; pub struct SignerState { current_signer_set: SignerSet, current_limits: RwLock, + current_aggregate_key: RwLock>, sbtc_contracts_deployed: AtomicBool, sbtc_bitcoin_start_height: AtomicU64, is_sbtc_bitcoin_start_height_set: AtomicBool, @@ -29,6 +31,38 @@ impl SignerState { &self.current_signer_set } + /// Return the public keys of the current signer set. + pub fn current_signer_public_keys(&self) -> BTreeSet { + self.current_signer_set + .get_signers() + .into_iter() + .map(|signer| signer.public_key) + .collect() + } + + /// Replace the current signer set with the given set of public keys. + pub fn update_current_signer_set(&self, public_keys: BTreeSet) { + self.current_signer_set.replace_signers(public_keys); + } + + /// Return the current aggregate key from the cache. + #[allow(clippy::unwrap_in_result)] + pub fn current_aggregate_key(&self) -> Option { + self.current_aggregate_key + .read() + .expect("BUG: Failed to acquire read lock") + .as_ref() + .copied() + } + + /// Set the current aggregate key to the given public key. + pub fn set_current_aggregate_key(&self, aggregate_key: PublicKey) { + self.current_aggregate_key + .write() + .expect("BUG: Failed to acquire write lock") + .replace(aggregate_key); + } + /// Get the current sBTC limits. pub fn get_current_limits(&self) -> SbtcLimits { // We should never fail to acquire a lock from the RwLock so that it panics. @@ -82,6 +116,7 @@ impl Default for SignerState { Self { current_signer_set: Default::default(), current_limits: RwLock::new(SbtcLimits::zero()), + current_aggregate_key: RwLock::new(None), sbtc_contracts_deployed: Default::default(), sbtc_bitcoin_start_height: Default::default(), is_sbtc_bitcoin_start_height_set: Default::default(), @@ -132,11 +167,6 @@ impl SbtcLimits { } } - /// Create a new `SbtcLimits` object without any limits - pub fn unlimited() -> Self { - Self::new(None, None, None, None, None) - } - /// Create a new `SbtcLimits` object with limits set to zero (fully constraining) pub fn zero() -> Self { Self::new( @@ -177,9 +207,21 @@ impl SbtcLimits { pub fn max_mintable_cap(&self) -> Amount { self.max_mintable_cap.unwrap_or(Amount::MAX_MONEY) } +} + +#[cfg(any(test, feature = "testing"))] +impl SbtcLimits { + /// Create a new `SbtcLimits` object without any limits + pub fn unlimited() -> Self { + Self { + total_cap: Some(Amount::MAX_MONEY), + per_deposit_minimum: Some(Amount::ZERO), + per_deposit_cap: Some(Amount::MAX_MONEY), + per_withdrawal_cap: Some(Amount::MAX_MONEY), + max_mintable_cap: Some(Amount::MAX_MONEY), + } + } - /// TODO: Document this - #[cfg(test)] /// Create a new Self with only the given deposit minimum and maximums /// set. pub fn new_per_deposit(min: u64, max: u64) -> Self { @@ -266,6 +308,38 @@ impl SignerSet { .insert(signer); } + /// Replace the current signer set with the given set of public keys. 
+    pub fn replace_signers(&self, public_keys: BTreeSet<PublicKey>) {
+        let inner_signer_set = self.get_signers();
+
+        // Get a guard for the peer IDs.
+        #[allow(clippy::expect_used)]
+        let mut inner_peer_ids = self
+            .peer_ids
+            .write()
+            .expect("BUG: Failed to acquire write lock");
+
+        // Get a guard for the set of Signer objects.
+        #[allow(clippy::expect_used)]
+        let mut inner_public_keys = self
+            .signers
+            .write()
+            .expect("BUG: Failed to acquire write lock");
+
+        // Remove the old signer set
+        for signer in inner_signer_set {
+            inner_peer_ids.remove(signer.peer_id());
+            inner_public_keys.remove(signer.public_key());
+        }
+
+        // Add the new signer set
+        for public_key in public_keys {
+            let signer = Signer::new(public_key);
+            inner_peer_ids.insert(signer.peer_id);
+            inner_public_keys.insert(signer);
+        }
+    }
+
     /// Remove a signer (public key) from the known active signer set.
     pub fn remove_signer(&self, signer: &PublicKey) {
         if self.is_signer(signer) {
diff --git a/signer/src/main.rs b/signer/src/main.rs
index 43d1028c2..0f47a886d 100644
--- a/signer/src/main.rs
+++ b/signer/src/main.rs
@@ -297,12 +297,10 @@ async fn run_block_observer(ctx: impl Context) -> Result<(), Error> {
 
     // TODO: Need to handle multiple endpoints, so some sort of
     // failover-stream-wrapper.
-    let stream = BitcoinCoreMessageStream::new_from_endpoint(
-        config.bitcoin.block_hash_stream_endpoints[0].as_str(),
-        &["hashblock"],
-    )
-    .await
-    .unwrap();
+    let endpoint = config.bitcoin.block_hash_stream_endpoints[0].as_str();
+    let stream = BitcoinCoreMessageStream::new_from_endpoint(endpoint)
+        .await
+        .unwrap();
 
     // TODO: We should have a new() method that builds from the context
     let block_observer = block_observer::BlockObserver {
diff --git a/signer/src/message.rs b/signer/src/message.rs
index 7d3d4d21f..05f30f180 100644
--- a/signer/src/message.rs
+++ b/signer/src/message.rs
@@ -228,31 +228,31 @@ pub struct BitcoinPreSignAck;
 #[derive(Debug, Clone, Copy, PartialEq)]
 pub enum WstsMessageId {
     /// The WSTS message is related to a Bitcoin transaction signing round.
-    BitcoinTxid(bitcoin::Txid),
+    Sweep(bitcoin::Txid),
     /// The WSTS message is related to a rotate key verification operation.
-    RotateKey(PublicKey),
+    DkgVerification(PublicKey),
     /// The WSTS message is related to a DKG round.
Dkg([u8; 32]), } impl From for WstsMessageId { fn from(txid: bitcoin::Txid) -> Self { - Self::BitcoinTxid(txid) + Self::Sweep(txid) } } impl From for WstsMessageId { fn from(txid: crate::storage::model::BitcoinTxId) -> Self { - Self::BitcoinTxid(txid.into()) + Self::Sweep(txid.into()) } } impl std::fmt::Display for WstsMessageId { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match self { - WstsMessageId::BitcoinTxid(txid) => write!(f, "bitcoin-txid({})", txid), - WstsMessageId::RotateKey(aggregate_key) => { - write!(f, "rotate-key({})", aggregate_key) + WstsMessageId::Sweep(txid) => write!(f, "sweep({})", txid), + WstsMessageId::DkgVerification(aggregate_key) => { + write!(f, "dkg-verification({})", aggregate_key) } WstsMessageId::Dkg(id) => { write!(f, "dkg({})", hex::encode(id)) diff --git a/signer/src/proto/convert.rs b/signer/src/proto/convert.rs index 1d5c79088..a8548c56e 100644 --- a/signer/src/proto/convert.rs +++ b/signer/src/proto/convert.rs @@ -1061,7 +1061,6 @@ impl TryFrom for SignatureShareResponse { } } impl From for proto::WstsMessage { - #[allow(deprecated)] fn from(value: WstsMessage) -> Self { let inner = match value.inner { wsts::net::Message::DkgBegin(inner) => { @@ -1095,21 +1094,14 @@ impl From for proto::WstsMessage { }; proto::WstsMessage { - txid: match value.id { - WstsMessageId::BitcoinTxid(txid) => { - Some(proto::BitcoinTxid::from(BitcoinTxId::from(txid))) - } - WstsMessageId::Dkg(_) => None, - WstsMessageId::RotateKey(_) => None, - }, id: Some(match value.id { - WstsMessageId::BitcoinTxid(txid) => { - wsts_message::Id::IdBitcoinTxid(proto::BitcoinTxid { - txid: Some(proto::Uint256::from(BitcoinTxId::from(txid).into_bytes())), - }) + WstsMessageId::Sweep(txid) => wsts_message::Id::Sweep(proto::BitcoinTxid { + txid: Some(proto::Uint256::from(BitcoinTxId::from(txid).into_bytes())), + }), + WstsMessageId::DkgVerification(pubkey) => { + wsts_message::Id::DkgVerification(pubkey.into()) } - WstsMessageId::RotateKey(pubkey) => wsts_message::Id::IdRotateKey(pubkey.into()), - WstsMessageId::Dkg(id) => wsts_message::Id::IdDkg(id.into()), + WstsMessageId::Dkg(id) => wsts_message::Id::Dkg(id.into()), }), inner: Some(inner), } @@ -1153,21 +1145,15 @@ impl TryFrom for WstsMessage { } }; - #[allow(deprecated)] Ok(WstsMessage { - id: match value.id { - Some(id) => match id { - wsts_message::Id::IdBitcoinTxid(txid) => { - WstsMessageId::BitcoinTxid(BitcoinTxId::try_from(txid)?.into()) - } - wsts_message::Id::IdRotateKey(pubkey) => { - WstsMessageId::RotateKey(PublicKey::try_from(pubkey)?) - } - wsts_message::Id::IdDkg(id) => WstsMessageId::Dkg(id.into()), - }, - None => WstsMessageId::BitcoinTxid( - BitcoinTxId::try_from(value.txid.required()?)?.into(), - ), + id: match value.id.required()? { + wsts_message::Id::Sweep(txid) => { + WstsMessageId::Sweep(BitcoinTxId::try_from(txid)?.into()) + } + wsts_message::Id::DkgVerification(pubkey) => { + WstsMessageId::DkgVerification(PublicKey::try_from(pubkey)?) + } + wsts_message::Id::Dkg(id) => WstsMessageId::Dkg(id.into()), }, inner, }) diff --git a/signer/src/proto/generated/stacks.signer.v1.rs b/signer/src/proto/generated/stacks.signer.v1.rs index a9d67ee4d..1bb154989 100644 --- a/signer/src/proto/generated/stacks.signer.v1.rs +++ b/signer/src/proto/generated/stacks.signer.v1.rs @@ -286,10 +286,6 @@ pub mod signer_message { /// A wsts message. 
#[derive(Clone, PartialEq, ::prost::Message)] pub struct WstsMessage { - /// The transaction ID this message relates to, will be a dummy ID for DKG messages - #[deprecated] - #[prost(message, optional, tag = "1")] - pub txid: ::core::option::Option, /// The wsts message #[prost(oneof = "wsts_message::Inner", tags = "2, 3, 4, 5, 6, 7, 8, 9, 10, 11")] pub inner: ::core::option::Option, @@ -343,16 +339,16 @@ pub mod wsts_message { /// If this WSTS message is related to a Bitcoin signing round, this field /// will be set to the related Bitcoin transaction ID. #[prost(message, tag = "12")] - IdBitcoinTxid(super::super::super::super::bitcoin::BitcoinTxid), + Sweep(super::super::super::super::bitcoin::BitcoinTxid), /// If this WSTS message is related to a rotate-keys transaction, this field /// will be set to the _new_ aggregate public key being verified. #[prost(message, tag = "13")] - IdRotateKey(super::super::super::super::crypto::PublicKey), + DkgVerification(super::super::super::super::crypto::PublicKey), /// If this WSTS message is related to a DKG round, this field will be set /// to the 32-byte id determined based on the coordinator public key and /// block hash, set by the coordinator. #[prost(message, tag = "14")] - IdDkg(super::super::super::super::crypto::Uint256), + Dkg(super::super::super::super::crypto::Uint256), } } /// Wraps an inner type with a public key and a signature, diff --git a/signer/src/storage/in_memory.rs b/signer/src/storage/in_memory.rs index e943d60b5..644a2a9ff 100644 --- a/signer/src/storage/in_memory.rs +++ b/signer/src/storage/in_memory.rs @@ -301,6 +301,18 @@ impl super::DbRead for SharedStore { .map(|block| block.block_hash)) } + async fn get_bitcoin_canonical_chain_tip_ref( + &self, + ) -> Result, Error> { + Ok(self + .lock() + .await + .bitcoin_blocks + .values() + .max_by_key(|block| (block.block_height, block.block_hash)) + .map(model::BitcoinBlockRef::from)) + } + async fn get_stacks_chain_tip( &self, bitcoin_chain_tip: &model::BitcoinBlockHash, diff --git a/signer/src/storage/mod.rs b/signer/src/storage/mod.rs index 95acbec3f..e276c2046 100644 --- a/signer/src/storage/mod.rs +++ b/signer/src/storage/mod.rs @@ -47,6 +47,11 @@ pub trait DbRead { &self, ) -> impl Future, Error>> + Send; + /// Get the bitcoin canonical chain tip. + fn get_bitcoin_canonical_chain_tip_ref( + &self, + ) -> impl Future, Error>> + Send; + /// Get the stacks chain tip, defined as the highest stacks block /// confirmed by the bitcoin chain tip. fn get_stacks_chain_tip( diff --git a/signer/src/storage/model.rs b/signer/src/storage/model.rs index c5825f7c6..8d934a79a 100644 --- a/signer/src/storage/model.rs +++ b/signer/src/storage/model.rs @@ -839,11 +839,12 @@ impl std::fmt::Display for BitcoinBlockHash { /// A struct that references a specific bitcoin block is identifier and its /// position in the blockchain. -#[derive(Debug, Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)] +#[derive(Debug, Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash, sqlx::FromRow)] #[cfg_attr(feature = "testing", derive(fake::Dummy))] pub struct BitcoinBlockRef { /// The height of the block in the bitcoin blockchain. #[cfg_attr(feature = "testing", dummy(faker = "0..u32::MAX as u64"))] + #[sqlx(try_from = "i64")] pub block_height: u64, /// Bitcoin block hash. It uniquely identifies the bitcoin block. 
     pub block_hash: BitcoinBlockHash,
diff --git a/signer/src/storage/postgres.rs b/signer/src/storage/postgres.rs
index ccf67d059..189019167 100644
--- a/signer/src/storage/postgres.rs
+++ b/signer/src/storage/postgres.rs
@@ -799,6 +799,22 @@ impl super::DbRead for PgStore {
         .map_err(Error::SqlxQuery)
     }
 
+    async fn get_bitcoin_canonical_chain_tip_ref(
+        &self,
+    ) -> Result<Option<model::BitcoinBlockRef>, Error> {
+        sqlx::query_as::<_, model::BitcoinBlockRef>(
+            "SELECT
+                block_hash
+              , block_height
+            FROM sbtc_signer.bitcoin_blocks
+            ORDER BY block_height DESC, block_hash DESC
+            LIMIT 1",
+        )
+        .fetch_optional(&self.0)
+        .await
+        .map_err(Error::SqlxQuery)
+    }
+
     async fn get_stacks_chain_tip(
         &self,
         bitcoin_chain_tip: &model::BitcoinBlockHash,
diff --git a/signer/src/testing/btc.rs b/signer/src/testing/btc.rs
index 5d00ed3e8..1d0cc8486 100644
--- a/signer/src/testing/btc.rs
+++ b/signer/src/testing/btc.rs
@@ -1,6 +1,7 @@
 //! Helper functions for the bitcoin module
 //!
 use bitcoin::Amount;
+use bitcoin::BlockHash;
 use bitcoin::OutPoint;
 use bitcoin::ScriptBuf;
 use bitcoin::Sequence;
@@ -10,8 +11,12 @@ use bitcoin::TxOut;
 use bitcoin::Witness;
 use emily_client::models::CreateDepositRequestBody;
+use futures::StreamExt as _;
+use tokio_stream::wrappers::ReceiverStream;
 
 use crate::bitcoin::utxo;
+use crate::bitcoin::zmq::BitcoinCoreMessageStream;
+use crate::error::Error;
 
 /// Return a transaction that is kinda like the signers' transaction,
 /// but it does not service any requests, and it does not have any
@@ -56,3 +61,29 @@ impl utxo::DepositRequest {
         }
     }
 }
+
+/// Create a new BlockHash stream for messages from bitcoin core over the
+/// ZMQ interface.
+///
+/// The returned object implements Stream + Send + Sync, which is sometimes
+/// needed in our integration tests.
+///
+/// # Notes
+///
+/// This function panics if it cannot establish a connection to bitcoin
+/// core within 10 seconds.
+pub async fn new_zmq_block_hash_stream(endpoint: &str) -> ReceiverStream> { + let zmq_stream = BitcoinCoreMessageStream::new_from_endpoint(endpoint) + .await + .unwrap(); + + let (sender, receiver) = tokio::sync::mpsc::channel(100); + tokio::spawn(async move { + let mut stream = zmq_stream.to_block_hash_stream(); + while let Some(block) = stream.next().await { + sender.send(block).await.unwrap(); + } + }); + + ReceiverStream::new(receiver) +} diff --git a/signer/src/testing/transaction_coordinator.rs b/signer/src/testing/transaction_coordinator.rs index 5f60de0b1..7737d103d 100644 --- a/signer/src/testing/transaction_coordinator.rs +++ b/signer/src/testing/transaction_coordinator.rs @@ -1019,8 +1019,9 @@ where .into(); let dkg_txid = testing::dummy::txid(&fake::Faker, rng); - let (aggregate_key, all_dkg_shares) = - signer_set.run_dkg(bitcoin_chain_tip, dkg_txid, rng).await; + let (aggregate_key, all_dkg_shares) = signer_set + .run_dkg(bitcoin_chain_tip, dkg_txid.into(), rng) + .await; let encrypted_dkg_shares = all_dkg_shares.first().unwrap(); diff --git a/signer/src/testing/transaction_signer.rs b/signer/src/testing/transaction_signer.rs index ca9fb4e3e..2a1f56b20 100644 --- a/signer/src/testing/transaction_signer.rs +++ b/signer/src/testing/transaction_signer.rs @@ -215,7 +215,9 @@ where coordinator_signer_info, self.signing_threshold, ); - let aggregate_key = coordinator.run_dkg(bitcoin_chain_tip, dummy_txid).await; + let aggregate_key = coordinator + .run_dkg(bitcoin_chain_tip, dummy_txid.into()) + .await; for handle in event_loop_handles.into_iter() { assert!(handle @@ -262,7 +264,7 @@ async fn run_dkg_and_store_results_for_signers<'s: 'r, 'r, S, Rng>( let dkg_txid = testing::dummy::txid(&fake::Faker, rng); let bitcoin_chain_tip = *chain_tip; let (_, all_dkg_shares) = testing_signer_set - .run_dkg(bitcoin_chain_tip, dkg_txid, rng) + .run_dkg(bitcoin_chain_tip, dkg_txid.into(), rng) .await; for (storage, encrypted_dkg_shares) in stores.into_iter().zip(all_dkg_shares) { diff --git a/signer/src/testing/wsts.rs b/signer/src/testing/wsts.rs index 62ca34aeb..5fad7f508 100644 --- a/signer/src/testing/wsts.rs +++ b/signer/src/testing/wsts.rs @@ -137,7 +137,7 @@ impl Coordinator { pub async fn run_dkg( &mut self, bitcoin_chain_tip: model::BitcoinBlockHash, - txid: bitcoin::Txid, + id: WstsMessageId, ) -> PublicKey { self.wsts_coordinator .move_to(coordinator::State::DkgPublicDistribute) @@ -148,10 +148,9 @@ impl Coordinator { .start_public_shares() .expect("failed to start public shares"); - self.send_packet(bitcoin_chain_tip, txid.into(), outbound) - .await; + self.send_packet(bitcoin_chain_tip, id, outbound).await; - match self.loop_until_result(bitcoin_chain_tip, txid.into()).await { + match self.loop_until_result(bitcoin_chain_tip, id).await { wsts::state_machine::OperationResult::Dkg(aggregate_key) => { PublicKey::try_from(&aggregate_key).expect("Got the point at infinity") } @@ -405,7 +404,7 @@ impl SignerSet { pub async fn run_dkg( &mut self, bitcoin_chain_tip: model::BitcoinBlockHash, - txid: bitcoin::Txid, + id: WstsMessageId, rng: &mut Rng, ) -> (PublicKey, Vec) { let mut signer_handles = Vec::new(); @@ -414,7 +413,7 @@ impl SignerSet { signer_handles.push(handle); } - let aggregate_key = self.coordinator.run_dkg(bitcoin_chain_tip, txid).await; + let aggregate_key = self.coordinator.run_dkg(bitcoin_chain_tip, id).await; for handle in signer_handles { let signer = handle.await.expect("signer crashed"); @@ -554,7 +553,9 @@ mod tests { let signer_info = generate_signer_info(&mut rng, 
num_signers); let mut signer_set = SignerSet::new(&signer_info, threshold, || network.connect()); - let (_, dkg_shares) = signer_set.run_dkg(bitcoin_chain_tip, txid, &mut rng).await; + let (_, dkg_shares) = signer_set + .run_dkg(bitcoin_chain_tip, txid.into(), &mut rng) + .await; assert_eq!(dkg_shares.len(), num_signers as usize); } diff --git a/signer/src/transaction_coordinator.rs b/signer/src/transaction_coordinator.rs index ed0495d12..e23eb45b5 100644 --- a/signer/src/transaction_coordinator.rs +++ b/signer/src/transaction_coordinator.rs @@ -289,9 +289,8 @@ where // we need to know the aggregate key for constructing bitcoin // transactions. We need to know the current signing set and the // current aggregate key. - let (maybe_aggregate_key, signer_public_keys) = self - .get_signer_set_and_aggregate_key(&bitcoin_chain_tip) - .await?; + let maybe_aggregate_key = self.context.state().current_aggregate_key(); + let signer_public_keys = self.context.state().current_signer_public_keys(); // If we are not the coordinator, then we have no business // coordinating DKG or constructing bitcoin and stacks @@ -313,15 +312,7 @@ where let should_coordinate_dkg = should_coordinate_dkg(&self.context, &bitcoin_chain_tip).await?; let aggregate_key = if should_coordinate_dkg { - let dkg_result = self.coordinate_dkg(&bitcoin_chain_tip).await?; - // TODO: in `run_dkg_from_scratch` test, `dkg_result` differs from - // value fetched from the db. Adding a temporary fix for the (probably) - // race condition, but we should address this properly. - self.get_signer_set_and_aggregate_key(&bitcoin_chain_tip) - .await - .ok() - .and_then(|res| res.0) - .unwrap_or(dkg_result) + self.coordinate_dkg(&bitcoin_chain_tip).await? } else { maybe_aggregate_key.ok_or(Error::MissingAggregateKey(*bitcoin_chain_tip))? }; @@ -774,7 +765,7 @@ where self.coordinate_signing_round( bitcoin_chain_tip, &mut coordinator_state_machine, - WstsMessageId::RotateKey(*aggregate_key), + WstsMessageId::DkgVerification(*aggregate_key), tap_sighash.as_byte_array(), SignatureType::Taproot(None), ).await @@ -1195,7 +1186,7 @@ where // set of the last DKG (either through the last rotate-keys // contract call or from the `dkg_shares` table) so we wind up // never changing the signing set. - let (_, signer_set) = self.get_signer_set_and_aggregate_key(chain_tip).await?; + let signer_set = self.context.state().current_signer_public_keys(); let mut state_machine = FireCoordinator::new(signer_set, self.threshold, self.private_key); @@ -1258,13 +1249,9 @@ where // this assumes that the signer set doesn't change for the duration of this call, // but we're already assuming that the bitcoin chain tip doesn't change // alternately we could hit the DB every time we get a new message - let (_, signer_set) = self - .get_signer_set_and_aggregate_key(bitcoin_chain_tip) - .await?; - + let signer_set = self.context.state().current_signer_public_keys(); tokio::pin!(signal_stream); - coordinator_state_machine.save(); // Let's get the next message from the network or the // TxSignerEventLoop. // @@ -1495,50 +1482,6 @@ where })) } - /// Return the signing set that can make sBTC related contract calls - /// along with the current aggregate key to use for locking UTXOs on - /// bitcoin. - /// - /// The aggregate key fetched here is the one confirmed on the - /// canonical Stacks blockchain as part of a `rotate-keys` contract - /// call. It will be the public key that is the result of a DKG run. 
If - /// there are no rotate-keys transactions on the canonical stacks - /// blockchain, then we fall back on the last known DKG shares row in - /// our database, and return None as the aggregate key if no DKG shares - /// can be found, implying that this signer has not participated in - /// DKG. - #[tracing::instrument(skip_all)] - pub async fn get_signer_set_and_aggregate_key( - &self, - bitcoin_chain_tip: &model::BitcoinBlockHash, - ) -> Result<(Option, BTreeSet), Error> { - let db = self.context.get_storage(); - - // We are supposed to submit a rotate-keys transaction after - // running DKG, but that transaction may not have been submitted - // yet (if we have just run DKG) or it may not have been confirmed - // on the canonical Stacks blockchain. - // - // If the signers have already run DKG, then we know that all - // participating signers should have the same view of the latest - // aggregate key, so we can fall back on the stored DKG shares for - // getting the current aggregate key and associated signing set. - match db.get_last_key_rotation(bitcoin_chain_tip).await? { - Some(last_key) => { - let aggregate_key = last_key.aggregate_key; - let signer_set = last_key.signer_set.into_iter().collect(); - Ok((Some(aggregate_key), signer_set)) - } - None => match db.get_latest_encrypted_dkg_shares().await? { - Some(shares) => { - let signer_set = shares.signer_set_public_keys.into_iter().collect(); - Ok((Some(shares.aggregate_key), signer_set)) - } - None => Ok((None, self.context.config().signer.bootstrap_signing_set())), - }, - } - } - fn pub_key(&self) -> PublicKey { PublicKey::from_private_key(&self.private_key) } diff --git a/signer/src/transaction_signer.rs b/signer/src/transaction_signer.rs index 8111a4257..7c5866171 100644 --- a/signer/src/transaction_signer.rs +++ b/signer/src/transaction_signer.rs @@ -25,6 +25,7 @@ use crate::keys::PublicKey; use crate::keys::PublicKeyXOnly; use crate::message; use crate::message::BitcoinPreSignAck; +use crate::message::Payload; use crate::message::StacksTransactionSignRequest; use crate::message::WstsMessageId; use crate::metrics::Metrics; @@ -247,7 +248,7 @@ where SignerEvent::TxCoordinator(TxCoordinatorEvent::MessageGenerated(msg)) | SignerEvent::P2P(P2PEvent::MessageReceived(msg)) => { if let Err(error) = self.handle_signer_message(&msg).await { - tracing::error!(%error, "error handling signer message"); + tracing::error!(%error, "error processing signer message"); } } _ => {} @@ -271,7 +272,7 @@ where } = chain_tip_report; let span = tracing::Span::current(); - span.record("chain_tip", tracing::field::display(chain_tip)); + span.record("chain_tip", tracing::field::display(chain_tip.block_hash)); tracing::trace!( %sender_is_coordinator, %chain_tip_status, @@ -280,34 +281,26 @@ where "handling message from signer" ); - match (&msg.inner.payload, sender_is_coordinator, chain_tip_status) { - ( - message::Payload::StacksTransactionSignRequest(request), - true, - ChainTipStatus::Canonical, - ) => { + let payload = &msg.inner.payload; + match (payload, sender_is_coordinator, chain_tip_status) { + (Payload::StacksTransactionSignRequest(request), true, ChainTipStatus::Canonical) => { self.handle_stacks_transaction_sign_request( request, - &msg.bitcoin_chain_tip, + &chain_tip, &msg.signer_public_key, ) .await?; } - (message::Payload::WstsMessage(wsts_msg), _, _) => { - self.handle_wsts_message( - wsts_msg, - &msg.bitcoin_chain_tip, - msg.signer_public_key, - &chain_tip_report, - ) - .await?; + (Payload::WstsMessage(wsts_msg), _, 
ChainTipStatus::Canonical) => { + self.handle_wsts_message(wsts_msg, msg.signer_public_key, &chain_tip_report) + .await?; } - (message::Payload::BitcoinPreSignRequest(requests), _, _) => { + (Payload::BitcoinPreSignRequest(requests), true, ChainTipStatus::Canonical) => { let instant = std::time::Instant::now(); let pre_validation_status = self - .handle_bitcoin_pre_sign_request(requests, &msg.bitcoin_chain_tip) + .handle_bitcoin_pre_sign_request(requests, &chain_tip) .await; let status = if pre_validation_status.is_ok() { @@ -333,9 +326,9 @@ where pre_validation_status?; } // Message types ignored by the transaction signer - (message::Payload::StacksTransactionSignature(_), _, _) - | (message::Payload::SignerDepositDecision(_), _, _) - | (message::Payload::SignerWithdrawalDecision(_), _, _) => (), + (Payload::StacksTransactionSignature(_), _, _) + | (Payload::SignerDepositDecision(_), _, _) + | (Payload::SignerWithdrawalDecision(_), _, _) => (), // Any other combination should be logged _ => { @@ -356,7 +349,7 @@ where let storage = self.context.get_storage(); let chain_tip = storage - .get_bitcoin_canonical_chain_tip() + .get_bitcoin_canonical_chain_tip_ref() .await? .ok_or(Error::NoChainTip)?; @@ -364,12 +357,12 @@ where .get_bitcoin_block(msg_bitcoin_chain_tip) .await? .is_some(); - let is_canonical = msg_bitcoin_chain_tip == &chain_tip; + let is_canonical = msg_bitcoin_chain_tip == &chain_tip.block_hash; - let signer_set = self.get_signer_public_keys(&chain_tip).await?; + let signer_set = self.context.state().current_signer_public_keys(); let sender_is_coordinator = crate::transaction_coordinator::given_key_is_coordinator( msg_sender, - &chain_tip, + &chain_tip.block_hash, &signer_set, ); @@ -396,22 +389,15 @@ where pub async fn handle_bitcoin_pre_sign_request( &mut self, request: &message::BitcoinPreSignRequest, - bitcoin_chain_tip: &model::BitcoinBlockHash, + chain_tip: &model::BitcoinBlockRef, ) -> Result<(), Error> { let db = self.context.get_storage_mut(); - let bitcoin_block = db - .get_bitcoin_block(bitcoin_chain_tip) - .await - .map_err(|_| Error::NoChainTip)? - .ok_or_else(|| Error::NoChainTip)?; - let (maybe_aggregate_key, _signer_set) = self - .get_signer_set_and_aggregate_key(bitcoin_chain_tip) - .await?; + let maybe_aggregate_key = self.context.state().current_aggregate_key(); let btc_ctx = BitcoinTxContext { - chain_tip: *bitcoin_chain_tip, - chain_tip_height: bitcoin_block.block_height, + chain_tip: chain_tip.block_hash, + chain_tip_height: chain_tip.block_height, context_window: self.context_window, signer_public_key: self.signer_public_key(), aggregate_key: maybe_aggregate_key.ok_or(Error::NoDkgShares)?, @@ -436,7 +422,7 @@ where db.write_bitcoin_withdrawals_outputs(&withdrawals_outputs) .await?; - self.send_message(BitcoinPreSignAck, bitcoin_chain_tip) + self.send_message(BitcoinPreSignAck, &chain_tip.block_hash) .await?; Ok(()) } @@ -445,12 +431,12 @@ where async fn handle_stacks_transaction_sign_request( &mut self, request: &StacksTransactionSignRequest, - bitcoin_chain_tip: &model::BitcoinBlockHash, + chain_tip: &model::BitcoinBlockRef, origin_public_key: &PublicKey, ) -> Result<(), Error> { let instant = std::time::Instant::now(); let validation_status = self - .assert_valid_stacks_tx_sign_request(request, bitcoin_chain_tip, origin_public_key) + .assert_valid_stacks_tx_sign_request(request, chain_tip, origin_public_key) .await; metrics::histogram!( @@ -470,7 +456,7 @@ where // We need to set the nonce in order to get the exact transaction // that we need to sign. 
- let wallet = SignerWallet::load(&self.context, bitcoin_chain_tip).await?; + let wallet = SignerWallet::load(&self.context, &chain_tip.block_hash).await?; wallet.set_nonce(request.nonce); let multi_sig = MultisigTx::new_tx(&request.contract_tx, &wallet, request.tx_fee); @@ -484,7 +470,7 @@ where let msg = message::StacksTransactionSignature { txid, signature }; - self.send_message(msg, bitcoin_chain_tip).await?; + self.send_message(msg, &chain_tip.block_hash).await?; Ok(()) } @@ -495,7 +481,7 @@ where pub async fn assert_valid_stacks_tx_sign_request( &self, request: &StacksTransactionSignRequest, - chain_tip: &model::BitcoinBlockHash, + chain_tip: &model::BitcoinBlockRef, origin_public_key: &PublicKey, ) -> Result<(), Error> { let db = self.context.get_storage(); @@ -511,12 +497,8 @@ where return Err(Error::ValidationSignerSet(request.aggregate_key)); } - let Some(block) = db.get_bitcoin_block(chain_tip).await? else { - return Err(Error::MissingBitcoinBlock(*chain_tip)); - }; - let req_ctx = ReqContext { - chain_tip: block.into(), + chain_tip: *chain_tip, context_window: self.context_window, origin: *origin_public_key, aggregate_key: request.aggregate_key, @@ -561,7 +543,6 @@ where pub async fn handle_wsts_message( &mut self, msg: &message::WstsMessage, - bitcoin_chain_tip: &model::BitcoinBlockHash, msg_public_key: PublicKey, chain_tip_report: &MsgChainTipReport, ) -> Result<(), Error> { @@ -573,6 +554,8 @@ where // Get the current tracing span. let span = tracing::Span::current(); + let MsgChainTipReport { chain_tip, .. } = chain_tip_report; + match &msg.inner { WstsNetMessage::DkgBegin(request) => { span.record(WSTS_DKG_ID, request.dkg_id); @@ -585,20 +568,20 @@ where return Ok(()); } - tracing::debug!("responding to dkg-begin"); + tracing::debug!("processing message"); // Assert that DKG should be allowed to proceed given the current state // and configuration. 
- assert_allow_dkg_begin(&self.context, bitcoin_chain_tip).await?; + assert_allow_dkg_begin(&self.context, chain_tip).await?; - let signer_public_keys = self.get_signer_public_keys(bitcoin_chain_tip).await?; + let signer_public_keys = self.context.state().current_signer_public_keys(); let state_machine = SignerStateMachine::new( signer_public_keys, self.threshold, self.signer_private_key, )?; - let id = StateMachineId::Dkg(*bitcoin_chain_tip); + let id = StateMachineId::Dkg(chain_tip.block_hash); self.wsts_state_machines.put(id, state_machine); if let Some(pause) = self.dkg_begin_pause { @@ -609,8 +592,8 @@ where tokio::time::sleep(pause).await; } - let id = StateMachineId::Dkg(*bitcoin_chain_tip); - self.relay_message(id, msg.id, &msg.inner, bitcoin_chain_tip) + let id = StateMachineId::Dkg(chain_tip.block_hash); + self.relay_message(id, msg.id, &msg.inner, &chain_tip.block_hash) .await?; } WstsNetMessage::DkgPrivateBegin(request) => { @@ -624,32 +607,32 @@ where return Ok(()); } - tracing::debug!("responding to dkg-private-begin"); + tracing::debug!("processing message"); - let id = StateMachineId::Dkg(*bitcoin_chain_tip); - self.relay_message(id, msg.id, &msg.inner, bitcoin_chain_tip) + let id = StateMachineId::Dkg(chain_tip.block_hash); + self.relay_message(id, msg.id, &msg.inner, &chain_tip.block_hash) .await?; } WstsNetMessage::DkgPublicShares(request) => { span.record(WSTS_DKG_ID, request.dkg_id); span.record(WSTS_SIGNER_ID, request.signer_id); - tracing::debug!("responding to dkg-public-shares"); + tracing::debug!("processing message"); - let id = StateMachineId::Dkg(*bitcoin_chain_tip); + let id = StateMachineId::Dkg(chain_tip.block_hash); self.validate_sender(&id, request.signer_id, &msg_public_key)?; - self.relay_message(id, msg.id, &msg.inner, bitcoin_chain_tip) + self.relay_message(id, msg.id, &msg.inner, &chain_tip.block_hash) .await?; } WstsNetMessage::DkgPrivateShares(request) => { span.record(WSTS_DKG_ID, request.dkg_id); span.record(WSTS_SIGNER_ID, request.signer_id); - tracing::debug!("responding to dkg-private-shares"); + tracing::debug!("processing message"); - let id = StateMachineId::Dkg(*bitcoin_chain_tip); + let id = StateMachineId::Dkg(chain_tip.block_hash); self.validate_sender(&id, request.signer_id, &msg_public_key)?; - self.relay_message(id, msg.id, &msg.inner, bitcoin_chain_tip) + self.relay_message(id, msg.id, &msg.inner, &chain_tip.block_hash) .await?; } WstsNetMessage::DkgEndBegin(request) => { @@ -663,10 +646,9 @@ where return Ok(()); } - tracing::debug!("responding to dkg-end-begin"); - - let id = StateMachineId::from(bitcoin_chain_tip); - self.relay_message(id, msg.id, &msg.inner, bitcoin_chain_tip) + tracing::debug!("processing message"); + let id = StateMachineId::from(&chain_tip.block_hash); + self.relay_message(id, msg.id, &msg.inner, &chain_tip.block_hash) .await?; } WstsNetMessage::DkgEnd(request) => { @@ -699,7 +681,7 @@ where return Ok(()); } - tracing::debug!(signature_type = ?request.signature_type, "responding to nonce-request"); + tracing::debug!(signature_type = ?request.signature_type, "processing message"); let db = self.context.get_storage(); @@ -710,7 +692,7 @@ where ); return Ok(()); } - WstsMessageId::BitcoinTxid(txid) => { + WstsMessageId::Sweep(txid) => { span.record("txid", txid.to_string()); tracing::info!( "responding to nonce-request for bitcoin transaction signing" @@ -740,7 +722,7 @@ where (id, accepted_sighash.public_key) } - WstsMessageId::RotateKey(key) => { + WstsMessageId::DkgVerification(key) => { // This is a rotate-key 
verification signing round. The // data provided by the coordinator for signing is // expected to be the current bitcoin chain tip block @@ -777,7 +759,7 @@ where } let (state_machine_id, _, mock_tx) = self - .ensure_rotate_key_state_machine(bitcoin_chain_tip, new_key) + .ensure_rotate_key_state_machine(&chain_tip.block_hash, new_key) .await?; let tap_sighash = mock_tx.compute_sighash()?; @@ -789,7 +771,7 @@ where } self.handle_rotate_key_message( - bitcoin_chain_tip, + &chain_tip.block_hash, state_machine_id, &msg.inner, ) @@ -808,7 +790,7 @@ where .await?; self.wsts_state_machines.put(id, state_machine); - self.relay_message(id, msg.id, &msg.inner, bitcoin_chain_tip) + self.relay_message(id, msg.id, &msg.inner, &chain_tip.block_hash) .await?; } WstsNetMessage::SignatureShareRequest(request) => { @@ -824,6 +806,8 @@ where return Ok(()); } + tracing::debug!(signature_type = ?request.signature_type, "processing message"); + let db = self.context.get_storage(); let id = match msg.id { @@ -831,7 +815,7 @@ where tracing::warn!("🔐 received signature-share-request for DKG round, which is not supported"); return Ok(()); } - WstsMessageId::BitcoinTxid(txid) => { + WstsMessageId::Sweep(txid) => { span.record("txid", txid.to_string()); tracing::info!( signature_type = ?request.signature_type, @@ -843,7 +827,7 @@ where accepted_sighash.sighash.into() } - WstsMessageId::RotateKey(key) => { + WstsMessageId::DkgVerification(key) => { // This is a rotate-key verification signing round. The // data provided by the coordinator for signing is // expected to be the current bitcoin chain tip block @@ -883,7 +867,7 @@ where ); let (state_machine_id, _, mock_tx) = self - .ensure_rotate_key_state_machine(bitcoin_chain_tip, new_key) + .ensure_rotate_key_state_machine(&chain_tip.block_hash, new_key) .await?; let tap_sighash = mock_tx.compute_sighash()?; @@ -895,7 +879,7 @@ where } self.handle_rotate_key_message( - bitcoin_chain_tip, + &chain_tip.block_hash, state_machine_id, &msg.inner, ) @@ -905,7 +889,7 @@ where }; let response = self - .relay_message(id, msg.id, &msg.inner, bitcoin_chain_tip) + .relay_message(id, msg.id, &msg.inner, &chain_tip.block_hash) .await; self.wsts_state_machines.pop(&id); @@ -917,7 +901,7 @@ where span.record("dkg_sign_id", request.sign_id); span.record("dkg_iter_id", request.sign_iter_id); - let WstsMessageId::RotateKey(key) = msg.id else { + let WstsMessageId::DkgVerification(key) = msg.id else { return Ok(()); }; @@ -931,7 +915,7 @@ where } let (state_machine_id, _, mock_tx) = self - .ensure_rotate_key_state_machine(bitcoin_chain_tip, new_key) + .ensure_rotate_key_state_machine(&chain_tip.block_hash, new_key) .await?; let tap_sighash = mock_tx.compute_sighash()?; @@ -940,7 +924,7 @@ where return Err(Error::InvalidSigningOperation); } - self.handle_rotate_key_message(bitcoin_chain_tip, state_machine_id, &msg.inner) + self.handle_rotate_key_message(&chain_tip.block_hash, state_machine_id, &msg.inner) .await?; } WstsNetMessage::SignatureShareResponse(request) => { @@ -949,17 +933,17 @@ where span.record("dkg_sign_id", request.sign_id); span.record("dkg_iter_id", request.sign_iter_id); - let WstsMessageId::RotateKey(key) = msg.id else { + let WstsMessageId::DkgVerification(key) = msg.id else { return Ok(()); }; let new_key = key.into(); let (state_machine_id, _, _) = self - .ensure_rotate_key_state_machine(bitcoin_chain_tip, new_key) + .ensure_rotate_key_state_machine(&chain_tip.block_hash, new_key) .await?; - self.handle_rotate_key_message(bitcoin_chain_tip, state_machine_id, 
&msg.inner) + self.handle_rotate_key_message(&chain_tip.block_hash, state_machine_id, &msg.inner) .await?; } } @@ -1266,73 +1250,6 @@ where Ok(()) } - /// Return the signing set that can make sBTC related contract calls - /// along with the current aggregate key to use for locking UTXOs on - /// bitcoin. - /// - /// The aggregate key fetched here is the one confirmed on the - /// canonical Stacks blockchain as part of a `rotate-keys` contract - /// call. It will be the public key that is the result of a DKG run. If - /// there are no rotate-keys transactions on the canonical stacks - /// blockchain, then we fall back on the last known DKG shares row in - /// our database, and return None as the aggregate key if no DKG shares - /// can be found, implying that this signer has not participated in - /// DKG. - #[tracing::instrument(skip_all)] - pub async fn get_signer_set_and_aggregate_key( - &self, - bitcoin_chain_tip: &model::BitcoinBlockHash, - ) -> Result<(Option, BTreeSet), Error> { - let db = self.context.get_storage(); - - // We are supposed to submit a rotate-keys transaction after - // running DKG, but that transaction may not have been submitted - // yet (if we have just run DKG) or it may not have been confirmed - // on the canonical Stacks blockchain. - // - // If the signers have already run DKG, then we know that all - // participating signers should have the same view of the latest - // aggregate key, so we can fall back on the stored DKG shares for - // getting the current aggregate key and associated signing set. - match db.get_last_key_rotation(bitcoin_chain_tip).await? { - Some(last_key) => { - let aggregate_key = last_key.aggregate_key; - let signer_set = last_key.signer_set.into_iter().collect(); - Ok((Some(aggregate_key), signer_set)) - } - None => match db.get_latest_encrypted_dkg_shares().await? { - Some(shares) => { - let signer_set = shares.signer_set_public_keys.into_iter().collect(); - Ok((Some(shares.aggregate_key), signer_set)) - } - None => Ok((None, self.context.config().signer.bootstrap_signing_set())), - }, - } - } - - /// Get the set of public keys for the current signing set. - /// - /// If there is a successful `rotate-keys` transaction in the database - /// then we should use that as the source of truth for the current - /// signing set, otherwise we fall back to the bootstrap keys in our - /// config. - #[tracing::instrument(skip_all)] - pub async fn get_signer_public_keys( - &self, - chain_tip: &model::BitcoinBlockHash, - ) -> Result, Error> { - let db = self.context.get_storage(); - - // Get the last rotate-keys transaction from the database on the - // canonical Stacks blockchain (which we identify using the - // canonical bitcoin blockchain). If we don't have such a - // transaction then get the bootstrap keys from our config. - match db.get_last_key_rotation(chain_tip).await? { - Some(last_key) => Ok(last_key.signer_set.into_iter().collect()), - None => Ok(self.context.config().signer.bootstrap_signing_set()), - } - } - fn signer_public_key(&self) -> PublicKey { PublicKey::from_private_key(&self.signer_private_key) } @@ -1342,17 +1259,11 @@ where /// based on the current state of the signer and the DKG configuration. 
async fn assert_allow_dkg_begin( context: &impl Context, - bitcoin_chain_tip: &model::BitcoinBlockHash, + bitcoin_chain_tip: &model::BitcoinBlockRef, ) -> Result<(), Error> { let storage = context.get_storage(); let config = context.config(); - // Get the bitcoin block at the chain tip so that we know the height - let bitcoin_chain_tip_block = storage - .get_bitcoin_block(bitcoin_chain_tip) - .await? - .ok_or(Error::NoChainTip)?; - // Get the number of DKG shares that have been stored let dkg_shares_entry_count = storage.get_encrypted_dkg_shares_count().await?; @@ -1383,7 +1294,7 @@ async fn assert_allow_dkg_begin( ); return Err(Error::DkgHasAlreadyRun); } - if bitcoin_chain_tip_block.block_height < dkg_min_height.get() { + if bitcoin_chain_tip.block_height < dkg_min_height.get() { tracing::warn!( ?dkg_min_bitcoin_block_height, %dkg_target_rounds, @@ -1423,7 +1334,7 @@ pub struct MsgChainTipReport { /// The status of the chain tip relative to the signers' perspective. pub chain_tip_status: ChainTipStatus, /// The bitcoin chain tip. - pub chain_tip: model::BitcoinBlockHash, + pub chain_tip: model::BitcoinBlockRef, } impl MsgChainTipReport { @@ -1541,14 +1452,17 @@ mod tests { } // Dummy chain tip hash which will be used to fetch the block height - let bitcoin_chain_tip: model::BitcoinBlockHash = Faker.fake(); + let bitcoin_chain_tip = model::BitcoinBlockRef { + block_hash: Faker.fake(), + block_height: chain_tip_height, + }; // Write a bitcoin block at the given height, simulating the chain tip. storage .write_bitcoin_block(&model::BitcoinBlock { block_height: chain_tip_height, parent_hash: Faker.fake(), - block_hash: bitcoin_chain_tip, + block_hash: bitcoin_chain_tip.block_hash, }) .await .unwrap(); @@ -1581,14 +1495,17 @@ mod tests { .unwrap(); // Dummy chain tip hash which will be used to fetch the block height. - let bitcoin_chain_tip: model::BitcoinBlockHash = Faker.fake(); + let bitcoin_chain_tip = model::BitcoinBlockRef { + block_hash: Faker.fake(), + block_height: 100, + }; // Write a bitcoin block at the given height, simulating the chain tip. storage .write_bitcoin_block(&model::BitcoinBlock { block_height: 100, parent_hash: Faker.fake(), - block_hash: bitcoin_chain_tip, + block_hash: bitcoin_chain_tip.block_hash, }) .await .unwrap(); @@ -1623,7 +1540,7 @@ mod tests { // Attempt to handle the DkgBegin message. This should fail using the // default settings, as the default settings allow only one DKG round. 
let result = signer - .handle_wsts_message(&msg, &bitcoin_chain_tip, Faker.fake(), &chain_tip_report) + .handle_wsts_message(&msg, Faker.fake(), &chain_tip_report) .await; // Assert that the DkgBegin message was not allowed to proceed and @@ -1691,7 +1608,7 @@ mod tests { // We shouldn't get an error as we stop to process the message early signer - .handle_wsts_message(&msg, &bitcoin_chain_tip, Faker.fake(), &chain_tip_report) + .handle_wsts_message(&msg, Faker.fake(), &chain_tip_report) .await .expect("expected success"); } @@ -1776,7 +1693,7 @@ mod tests { // We shouldn't get an error as we stop to process the message early signer - .handle_wsts_message(&msg, &bitcoin_chain_tip, Faker.fake(), &chain_tip_report) + .handle_wsts_message(&msg, Faker.fake(), &chain_tip_report) .await .expect("expected success"); } diff --git a/signer/src/wsts_state_machine.rs b/signer/src/wsts_state_machine.rs index 3c8d69552..f016bc17e 100644 --- a/signer/src/wsts_state_machine.rs +++ b/signer/src/wsts_state_machine.rs @@ -26,7 +26,6 @@ use wsts::state_machine::coordinator::fire; use wsts::state_machine::coordinator::frost; use wsts::state_machine::coordinator::Config; use wsts::state_machine::coordinator::Coordinator as _; -use wsts::state_machine::coordinator::SavedState; use wsts::state_machine::coordinator::State as WstsState; use wsts::state_machine::OperationResult; use wsts::state_machine::StateMachine as _; @@ -127,9 +126,6 @@ where /// Gets the coordinator configuration. fn get_config(&self) -> Config; - /// Save the state required to reconstruct the state machine. - fn save(&self) -> SavedState; - /// Create a new coordinator state machine from the given aggregate /// key. /// @@ -160,30 +156,12 @@ where self.process_packet(&packet) } - /// Process inbound messages - fn process_inbound_messages( - &mut self, - messages: &[Message], - ) -> Result<(Vec, Vec), Error> { - let packets = messages - .iter() - .map(Packet::from_message) - .collect::>(); - self.process_inbound_packets(&packets) - } - /// Process the given packet. fn process_packet( &mut self, packet: &Packet, ) -> Result<(Option, Option), Error>; - /// Process inbound packets - fn process_inbound_packets( - &mut self, - packets: &[Packet], - ) -> Result<(Vec, Vec), Error>; - /// Start a signing round with the given message and signature type. 
fn start_signing_round( &mut self, @@ -235,10 +213,6 @@ impl WstsCoordinator for FireCoordinator { self.0.get_config() } - fn save(&self) -> SavedState { - self.0.save() - } - async fn load( storage: &S, aggregate_key: PublicKeyXOnly, @@ -276,29 +250,6 @@ impl WstsCoordinator for FireCoordinator { Ok(coordinator) } - fn process_message( - &mut self, - message: &Message, - ) -> Result<(Option, Option), Error> { - let packet = Packet::from_message(message); - self.0 - .process_message(&packet) - .map_err(Error::wsts_coordinator) - } - - fn process_inbound_messages( - &mut self, - messages: &[Message], - ) -> Result<(Vec, Vec), Error> { - let packets = messages - .iter() - .map(Packet::from_message) - .collect::>(); - self.0 - .process_inbound_messages(&packets) - .map_err(Error::wsts_coordinator) - } - fn process_packet( &mut self, packet: &Packet, @@ -308,15 +259,6 @@ impl WstsCoordinator for FireCoordinator { .map_err(Error::wsts_coordinator) } - fn process_inbound_packets( - &mut self, - packets: &[Packet], - ) -> Result<(Vec, Vec), Error> { - self.0 - .process_inbound_messages(packets) - .map_err(Error::wsts_coordinator) - } - fn start_signing_round( &mut self, message: &[u8], @@ -371,10 +313,6 @@ impl WstsCoordinator for FrostCoordinator { self.0.get_config() } - fn save(&self) -> SavedState { - self.0.save() - } - async fn load( storage: &S, aggregate_key: PublicKeyXOnly, @@ -412,29 +350,6 @@ impl WstsCoordinator for FrostCoordinator { Ok(coordinator) } - fn process_message( - &mut self, - message: &Message, - ) -> Result<(Option, Option), Error> { - let packet = Packet::from_message(message); - self.0 - .process_message(&packet) - .map_err(Error::wsts_coordinator) - } - - fn process_inbound_messages( - &mut self, - messages: &[Message], - ) -> Result<(Vec, Vec), Error> { - let packets = messages - .iter() - .map(Packet::from_message) - .collect::>(); - self.0 - .process_inbound_messages(&packets) - .map_err(Error::wsts_coordinator) - } - fn process_packet( &mut self, packet: &Packet, @@ -444,15 +359,6 @@ impl WstsCoordinator for FrostCoordinator { .map_err(Error::wsts_coordinator) } - fn process_inbound_packets( - &mut self, - packets: &[Packet], - ) -> Result<(Vec, Vec), Error> { - self.0 - .process_inbound_messages(packets) - .map_err(Error::wsts_coordinator) - } - fn start_signing_round( &mut self, message: &[u8], diff --git a/signer/tests/integration/block_observer.rs b/signer/tests/integration/block_observer.rs index 2e47e6f0d..1270d9c04 100644 --- a/signer/tests/integration/block_observer.rs +++ b/signer/tests/integration/block_observer.rs @@ -1,3 +1,4 @@ +use std::collections::BTreeSet; use std::collections::HashSet; use std::ops::Deref; use std::sync::atomic::AtomicBool; @@ -18,20 +19,23 @@ use emily_client::apis::testing_api; use emily_client::models::CreateDepositRequestBody; use fake::Fake as _; use fake::Faker; -use futures::StreamExt; use rand::SeedableRng as _; use sbtc::testing::regtest; use sbtc::testing::regtest::Recipient; use signer::bitcoin::utxo::SbtcRequests; use signer::bitcoin::utxo::SignerBtcState; +use signer::block_observer::get_signer_set_and_aggregate_key; use signer::context::SbtcLimits; use signer::emily_client::EmilyClient; use signer::error::Error; +use signer::keys::PublicKey; use signer::keys::SignerScriptPubKey as _; use signer::stacks::api::TenureBlocks; use signer::storage::model; use signer::storage::model::BitcoinBlockHash; use signer::storage::model::EncryptedDkgShares; +use signer::storage::model::RotateKeysTransaction; +use 
signer::storage::model::StacksBlock; use signer::storage::model::TxOutput; use signer::storage::model::TxOutputType; use signer::storage::model::TxPrevout; @@ -41,7 +45,6 @@ use signer::storage::DbWrite; use signer::testing::stacks::DUMMY_SORTITION_INFO; use signer::testing::stacks::DUMMY_TENURE_INFO; -use signer::bitcoin::zmq::BitcoinCoreMessageStream; use signer::block_observer::BlockObserver; use signer::context::Context as _; use signer::context::SignerEvent; @@ -51,7 +54,7 @@ use signer::storage::DbRead as _; use signer::testing; use signer::testing::context::TestContext; use signer::testing::context::*; -use tokio_stream::wrappers::ReceiverStream; +use signer::testing::storage::model::TestData; use url::Url; use crate::setup::TestSweepSetup; @@ -67,7 +70,7 @@ pub const GET_POX_INFO_JSON: &str = /// that pass validation, regardless of when they were confirmed. #[test_case::test_case(1; "one block ago")] #[test_case::test_case(5; "five blocks ago")] -#[tokio::test] +#[test_log::test(tokio::test)] async fn load_latest_deposit_requests_persists_requests_from_past(blocks_ago: u64) { // We start with the typical setup with a fresh database and context // with a real bitcoin core client and a real connection to our @@ -146,24 +149,9 @@ async fn load_latest_deposit_requests_persists_requests_from_past(blocks_ago: u6 let start_flag = Arc::new(AtomicBool::new(false)); let flag = start_flag.clone(); - // We jump through all of these hoops to make sure that the block - // stream object is Send + Sync. - let zmq_stream = - BitcoinCoreMessageStream::new_from_endpoint(BITCOIN_CORE_ZMQ_ENDPOINT, &["hashblock"]) - .await - .unwrap(); - let (sender, receiver) = tokio::sync::mpsc::channel(100); - - tokio::spawn(async move { - let mut stream = zmq_stream.to_block_hash_stream(); - while let Some(block) = stream.next().await { - sender.send(block).await.unwrap(); - } - }); - let block_observer = BlockObserver { context: ctx.clone(), - bitcoin_blocks: ReceiverStream::new(receiver), + bitcoin_blocks: testing::btc::new_zmq_block_hash_stream(BITCOIN_CORE_ZMQ_ENDPOINT).await, }; // We need at least one receiver @@ -264,22 +252,9 @@ async fn link_blocks() { }) .await; - let zmq_stream = - BitcoinCoreMessageStream::new_from_endpoint(BITCOIN_CORE_ZMQ_ENDPOINT, &["hashblock"]) - .await - .unwrap(); - let (sender, receiver) = tokio::sync::mpsc::channel(100); - - tokio::spawn(async move { - let mut stream = zmq_stream.to_block_hash_stream(); - while let Some(block) = stream.next().await { - sender.send(block).await.unwrap(); - } - }); - let block_observer = BlockObserver { context: ctx.clone(), - bitcoin_blocks: ReceiverStream::new(receiver), + bitcoin_blocks: testing::btc::new_zmq_block_hash_stream(BITCOIN_CORE_ZMQ_ENDPOINT).await, }; let mut signal_rx = ctx.get_signal_receiver(); @@ -430,24 +405,9 @@ async fn block_observer_stores_donation_and_sbtc_utxos() { let start_flag = Arc::new(AtomicBool::new(false)); let flag = start_flag.clone(); - // We jump through all of these hoops to make sure that the block - // stream object is Send + Sync. 
- let zmq_stream = - BitcoinCoreMessageStream::new_from_endpoint(BITCOIN_CORE_ZMQ_ENDPOINT, &["hashblock"]) - .await - .unwrap(); - let (sender, receiver) = tokio::sync::mpsc::channel(1000); - - tokio::spawn(async move { - let mut stream = zmq_stream.to_block_hash_stream(); - while let Some(block) = stream.next().await { - sender.send(block).await.unwrap(); - } - }); - let block_observer = BlockObserver { context: ctx.clone(), - bitcoin_blocks: ReceiverStream::new(receiver), + bitcoin_blocks: testing::btc::new_zmq_block_hash_stream(BITCOIN_CORE_ZMQ_ENDPOINT).await, }; tokio::spawn(async move { @@ -747,24 +707,9 @@ async fn block_observer_handles_update_limits(deployed: bool, sbtc_limits: SbtcL let start_flag = Arc::new(AtomicBool::new(false)); let flag = start_flag.clone(); - // We jump through all of these hoops to make sure that the block - // stream object is Send + Sync. - let zmq_stream = - BitcoinCoreMessageStream::new_from_endpoint(BITCOIN_CORE_ZMQ_ENDPOINT, &["hashblock"]) - .await - .unwrap(); - let (sender, receiver) = tokio::sync::mpsc::channel(100); - - tokio::spawn(async move { - let mut stream = zmq_stream.to_block_hash_stream(); - while let Some(block) = stream.next().await { - sender.send(block).await.unwrap(); - } - }); - let block_observer = BlockObserver { context: ctx.clone(), - bitcoin_blocks: ReceiverStream::new(receiver), + bitcoin_blocks: testing::btc::new_zmq_block_hash_stream(BITCOIN_CORE_ZMQ_ENDPOINT).await, }; let mut signal_receiver = ctx.get_signal_receiver(); @@ -906,3 +851,323 @@ async fn next_headers_to_process_ignores_known_headers() { testing::storage::drop_db(db).await; } + +/// The [`get_signer_set_and_aggregate_key`] function is supposed to fetch +/// the "current" signing set and the aggregate key to use for bitcoin +/// transactions. It attempts to get the latest rotate-keys contract call +/// transaction confirmed on the canonical Stacks blockchain and falls back +/// to the DKG shares table if no such transaction can be found. +/// +/// This tests that we prefer the rotate-keys transaction when one is +/// available, but fall back to the DKG shares table otherwise. +#[tokio::test] +async fn get_signer_public_keys_and_aggregate_key_falls_back() { + let db = testing::storage::new_test_database().await; + + let mut rng = rand::rngs::StdRng::seed_from_u64(51); + + let ctx = TestContext::builder() + .with_storage(db.clone()) + .with_mocked_clients() + .build(); + + // We need stacks blocks for the rotate-keys transactions. + let test_params = testing::storage::model::Params { + num_bitcoin_blocks: 10, + num_stacks_blocks_per_bitcoin_block: 1, + num_deposit_requests_per_block: 0, + num_withdraw_requests_per_block: 0, + num_signers_per_request: 0, + consecutive_blocks: false, + }; + let test_data = TestData::generate(&mut rng, &[], &test_params); + test_data.write_to(&db).await; + + // We always need the chain tip. + let chain_tip = db.get_bitcoin_canonical_chain_tip().await.unwrap().unwrap(); + + // We have no rows in the DKG shares table and no rotate-keys + // transactions, so there should be no aggregate key, since that only + // happens after DKG, but we should always know the current signer set. + let (maybe_aggregate_key, signer_set) = get_signer_set_and_aggregate_key(&ctx, chain_tip) + .await + .unwrap(); + assert!(maybe_aggregate_key.is_none()); + assert!(!signer_set.is_empty()); + + // Alright, let's write some DKG shares into the database.
When we do + // that, the signer set should be whatever the signer set is + // in our DKG shares. + let shares: EncryptedDkgShares = Faker.fake_with_rng(&mut rng); + db.write_encrypted_dkg_shares(&shares).await.unwrap(); + + let (aggregate_key, signer_set) = get_signer_set_and_aggregate_key(&ctx, chain_tip) + .await + .unwrap(); + + let shares_signer_set: BTreeSet = + shares.signer_set_public_keys.iter().copied().collect(); + + assert_eq!(shares.aggregate_key, aggregate_key.unwrap()); + assert_eq!(shares_signer_set, signer_set); + + // Okay, now we write a rotate-keys transaction into the database. To do + // that we need the stacks chain tip and an entry in 3 different + // tables... + let stacks_chain_tip = db.get_stacks_chain_tip(&chain_tip).await.unwrap().unwrap(); + + let rotate_keys: RotateKeysTransaction = Faker.fake_with_rng(&mut rng); + let transaction = model::Transaction { + txid: rotate_keys.txid.into_bytes(), + tx: Vec::new(), + tx_type: model::TransactionType::RotateKeys, + block_hash: stacks_chain_tip.block_hash.into_bytes(), + }; + let tx = model::StacksTransaction { + txid: rotate_keys.txid, + block_hash: stacks_chain_tip.block_hash, + }; + + db.write_transaction(&transaction).await.unwrap(); + db.write_stacks_transaction(&tx).await.unwrap(); + db.write_rotate_keys_transaction(&rotate_keys) + .await + .unwrap(); + + // Alright, now that we have a rotate-keys transaction, we can check if + // it is preferred over the DKG shares table. + let (aggregate_key, signer_set) = get_signer_set_and_aggregate_key(&ctx, chain_tip) + .await + .unwrap(); + + let rotate_keys_signer_set: BTreeSet = + rotate_keys.signer_set.iter().copied().collect(); + + assert_eq!(rotate_keys.aggregate_key, aggregate_key.unwrap()); + assert_eq!(rotate_keys_signer_set, signer_set); + + testing::storage::drop_db(db).await; +} + +/// This test checks that the signer state is updated with the latest +/// sbtc limits, current signer set, and current aggregate key after the +/// block observer processes a bitcoin block. +#[tokio::test] +async fn block_observer_updates_state_after_observing_bitcoin_block() { + let mut rng = rand::rngs::StdRng::seed_from_u64(512); + // We start with the typical setup with a fresh database and context + // with a real bitcoin core client and a real connection to our + // database. + let (_, faucet) = regtest::initialize_blockchain(); + let db = testing::storage::new_test_database().await; + let mut ctx = TestContext::builder() + .with_storage(db.clone()) + .with_first_bitcoin_core_client() + .with_mocked_emily_client() + .with_mocked_stacks_client() + .build(); + + // We need to set up the stacks client as well. We use it to fetch + // information about the Stacks blockchain, so we need to prep it, even + // though it isn't necessary for our test.
+ ctx.with_stacks_client(|client| { + client + .expect_get_tenure_info() + .returning(|| Box::pin(std::future::ready(Ok(DUMMY_TENURE_INFO.clone())))); + client.expect_get_block().returning(|_| { + let response = Ok(NakamotoBlock { + header: NakamotoBlockHeader::empty(), + txs: Vec::new(), + }); + Box::pin(std::future::ready(response)) + }); + client + .expect_get_tenure() + .returning(|_| Box::pin(std::future::ready(TenureBlocks::nearly_empty()))); + client.expect_get_pox_info().returning(|| { + let response = serde_json::from_str::(GET_POX_INFO_JSON) + .map_err(Error::JsonSerialize); + Box::pin(std::future::ready(response)) + }); + client + .expect_get_sortition_info() + .returning(|_| Box::pin(std::future::ready(Ok(DUMMY_SORTITION_INFO.clone())))); + }) + .await; + + ctx.with_emily_client(|client| { + client + .expect_get_deposits() + .returning(|| Box::pin(std::future::ready(Ok(vec![])))); + + client + .expect_get_limits() + .returning(|| Box::pin(std::future::ready(Ok(SbtcLimits::unlimited())))); + }) + .await; + + // We only proceed with the test after the BlockObserver "process" has + // started, and we use this counter to notify us when that happens. + let start_flag = Arc::new(AtomicBool::new(false)); + let flag = start_flag.clone(); + + let block_observer = BlockObserver { + context: ctx.clone(), + bitcoin_blocks: testing::btc::new_zmq_block_hash_stream(BITCOIN_CORE_ZMQ_ENDPOINT).await, + }; + + // In this test the signer set public keys start empty. When running + // the signer binary the signer starts as the bootstrap signing set. + // Also, the sbtc limits start off as "zero" and then get updated by + // the block observer. + let state = ctx.state(); + assert_eq!(state.get_current_limits(), SbtcLimits::zero()); + assert!(state.current_signer_public_keys().is_empty()); + assert!(state.current_aggregate_key().is_none()); + + tokio::spawn(async move { + flag.store(true, Ordering::Relaxed); + block_observer.run().await + }); + + // Wait for the task to start. + while !start_flag.load(Ordering::SeqCst) { + tokio::time::sleep(Duration::from_millis(10)).await; + } + + // Let's generate a new block and wait for our block observer to send a + // BitcoinBlockObserved signal. + let chain_tip = faucet.generate_blocks(1).pop().unwrap().into(); + + ctx.wait_for_signal(Duration::from_secs(3), |signal| { + matches!( + signal, + SignerSignal::Event(SignerEvent::BitcoinBlockObserved) + ) + }) + .await + .unwrap(); + + // If we pass the above without panicking it should be fine, this is just a + // sanity check. + let db_chain_tip = db + .get_bitcoin_canonical_chain_tip() + .await + .expect("cannot get chain tip"); + assert_eq!(db_chain_tip, Some(chain_tip)); + + // There is no aggregate key since there aren't any key rotation + // contract calls and no DKG shares. But the current signer set should + // be the bootstrap signing set now. + let bootstrap_signing_set = ctx.config().signer.bootstrap_signing_set(); + assert_eq!(state.get_current_limits(), SbtcLimits::unlimited()); + assert!(state.current_aggregate_key().is_none()); + assert_eq!(state.current_signer_public_keys(), bootstrap_signing_set); + + // Okay now let's add in some DKG shares into the database. This should + // take precedence over what is configured as the bootstrap signing + // set. 
+ let mut dkg_shares: EncryptedDkgShares = Faker.fake_with_rng(&mut rng); + let mut public_keys: Vec = std::iter::repeat_with(|| Faker.fake_with_rng(&mut rng)) + .take(12) + .collect(); + public_keys.sort(); + dkg_shares.signer_set_public_keys = public_keys; + db.write_encrypted_dkg_shares(&dkg_shares).await.unwrap(); + + // Sanity check that the signing set in the DKG shares is different + // from the bootstrap signing set. + let dkg_public_keys = dkg_shares.signer_set_public_keys.iter().copied().collect(); + assert_ne!(dkg_public_keys, bootstrap_signing_set); + + // Let's generate a new block and wait for our block observer to send a + // BitcoinBlockObserved signal. Then, after we receive the signal that + // a bitcoin block has been observed, we check the signer state. + let chain_tip = faucet.generate_blocks(1).pop().unwrap().into(); + + ctx.wait_for_signal(Duration::from_secs(3), |signal| { + matches!( + signal, + SignerSignal::Event(SignerEvent::BitcoinBlockObserved) + ) + }) + .await + .unwrap(); + + // Check that the chain tip has been updated. + let db_chain_tip = db + .get_bitcoin_canonical_chain_tip() + .await + .expect("cannot get chain tip"); + assert_eq!(db_chain_tip, Some(chain_tip)); + + let dkg_aggregate_key = Some(dkg_shares.aggregate_key); + assert_eq!(state.get_current_limits(), SbtcLimits::unlimited()); + assert_eq!(state.current_aggregate_key(), dkg_aggregate_key); + assert_eq!(state.current_signer_public_keys(), dkg_public_keys); + + // Okay now we're going to show what happens if we have received a key + // rotation event. Such events take priority over DKG shares, even if + // the DKG shares are newer. So let's add such an event to the + // database. First we need a stacks block for the join. + let stacks_block = StacksBlock { + bitcoin_anchor: chain_tip, + ..Faker.fake_with_rng(&mut rng) + }; + + db.write_stacks_block(&stacks_block).await.unwrap(); + + let rotate_keys: RotateKeysTransaction = Faker.fake_with_rng(&mut rng); + let transaction = model::Transaction { + txid: rotate_keys.txid.into_bytes(), + tx: Vec::new(), + tx_type: model::TransactionType::RotateKeys, + block_hash: stacks_block.block_hash.into_bytes(), + }; + let tx = model::StacksTransaction { + txid: rotate_keys.txid, + block_hash: stacks_block.block_hash, + }; + + db.write_transaction(&transaction).await.unwrap(); + db.write_stacks_transaction(&tx).await.unwrap(); + db.write_rotate_keys_transaction(&rotate_keys) + .await + .unwrap(); + + // Let's add some DKG shares after the insertion of the rotate keys + // transaction. + let dkg_shares: EncryptedDkgShares = Faker.fake_with_rng(&mut rng); + db.write_encrypted_dkg_shares(&dkg_shares).await.unwrap(); + + // Let's generate a new block and wait for our block observer to send a + // BitcoinBlockObserved signal. + let chain_tip = faucet.generate_blocks(1).pop().unwrap().into(); + + ctx.wait_for_signal(Duration::from_secs(3), |signal| { + matches!( + signal, + SignerSignal::Event(SignerEvent::BitcoinBlockObserved) + ) + }) + .await + .unwrap(); + + let db_chain_tip = db + .get_bitcoin_canonical_chain_tip() + .await + .expect("cannot get chain tip"); + assert_eq!(db_chain_tip, Some(chain_tip)); + + // We expect the signer state to be the same as what is in the rotate + // keys event in the database.
+ let rotate_keys_aggregate_key = Some(rotate_keys.aggregate_key); + let rotate_keys_public_keys = rotate_keys.signer_set.iter().copied().collect(); + + assert_eq!(state.current_aggregate_key(), rotate_keys_aggregate_key); + assert_eq!(state.current_signer_public_keys(), rotate_keys_public_keys); + assert_ne!(rotate_keys_public_keys, dkg_public_keys); + assert_ne!(rotate_keys_aggregate_key, dkg_aggregate_key); + + testing::storage::drop_db(db).await; +} diff --git a/signer/tests/integration/emily.rs b/signer/tests/integration/emily.rs index d1cccc320..9a01875a3 100644 --- a/signer/tests/integration/emily.rs +++ b/signer/tests/integration/emily.rs @@ -99,8 +99,9 @@ where .into(); let dkg_txid = testing::dummy::txid(&fake::Faker, rng); - let (aggregate_key, all_dkg_shares) = - signer_set.run_dkg(bitcoin_chain_tip, dkg_txid, rng).await; + let (aggregate_key, all_dkg_shares) = signer_set + .run_dkg(bitcoin_chain_tip, dkg_txid.into(), rng) + .await; let encrypted_dkg_shares = all_dkg_shares.first().unwrap(); signer_set diff --git a/signer/tests/integration/postgres.rs b/signer/tests/integration/postgres.rs index eb1390583..7583180ab 100644 --- a/signer/tests/integration/postgres.rs +++ b/signer/tests/integration/postgres.rs @@ -1002,7 +1002,7 @@ async fn should_return_the_same_last_key_rotation_as_in_memory_store() { testing::wsts::SignerSet::new(&signer_info, threshold, || dummy_wsts_network.connect()); let dkg_txid = testing::dummy::txid(&fake::Faker, &mut rng); let (_, all_shares) = testing_signer_set - .run_dkg(chain_tip, dkg_txid, &mut rng) + .run_dkg(chain_tip, dkg_txid.into(), &mut rng) .await; let shares = all_shares.first().unwrap(); diff --git a/signer/tests/integration/transaction_coordinator.rs b/signer/tests/integration/transaction_coordinator.rs index fb1ec07fb..08b1669e0 100644 --- a/signer/tests/integration/transaction_coordinator.rs +++ b/signer/tests/integration/transaction_coordinator.rs @@ -31,7 +31,7 @@ use emily_client::apis::deposit_api; use emily_client::apis::testing_api; use fake::Fake as _; use fake::Faker; -use futures::StreamExt; +use futures::StreamExt as _; use lru::LruCache; use mockito; use rand::rngs::OsRng; @@ -73,10 +73,8 @@ use stacks_common::types::chainstate::SortitionId; use test_case::test_case; use test_log::test; use tokio_stream::wrappers::BroadcastStream; -use tokio_stream::wrappers::ReceiverStream; use url::Url; -use signer::bitcoin::zmq::BitcoinCoreMessageStream; use signer::block_observer::BlockObserver; use signer::context::Context; use signer::emily_client::EmilyClient; @@ -93,7 +91,6 @@ use signer::stacks::contracts::CompleteDepositV1; use signer::stacks::contracts::SMART_CONTRACTS; use signer::storage::model; use signer::storage::model::EncryptedDkgShares; -use signer::storage::model::RotateKeysTransaction; use signer::storage::DbRead as _; use signer::storage::DbWrite as _; use signer::testing; @@ -148,8 +145,9 @@ where .into(); let dkg_txid = testing::dummy::txid(&fake::Faker, rng); - let (aggregate_key, all_dkg_shares) = - signer_set.run_dkg(bitcoin_chain_tip, dkg_txid, rng).await; + let (aggregate_key, all_dkg_shares) = signer_set + .run_dkg(bitcoin_chain_tip, dkg_txid.into(), rng) + .await; let encrypted_dkg_shares = all_dkg_shares.first().unwrap(); signer_set @@ -473,11 +471,6 @@ async fn process_complete_deposit() { .expect_estimate_fees() .once() .returning(move |_, _, _| Box::pin(async move { Ok(25505) })); - - client - .expect_get_sbtc_total_supply() - .once() - .returning(move |_| Box::pin(async move { Ok(Amount::ZERO) })); }) 
.await; @@ -494,6 +487,17 @@ async fn process_complete_deposit() { let (aggregate_key, bitcoin_chain_tip) = run_dkg(&context, &mut rng, &mut testing_signer_set).await; + // When the signer binary starts up in main(), it sets the current + // signer set public keys in the context state using the values in the + // bootstrap_signing_set configuration parameter. Later, the aggregate + // key gets set in the block observer. We're not running a block + // observer in this test, nor are we going through main, so we manually + // update the state here. + let signer_set_public_keys = testing_signer_set.signer_keys().into_iter().collect(); + let state = context.state(); + state.update_current_signer_set(signer_set_public_keys); + state.set_current_aggregate_key(aggregate_key); + // Ensure we have a signers UTXO (as a donation, to not mess with the current // temporary `get_swept_deposit_requests` implementation) push_utxo_donation(&context, &aggregate_key, &setup.sweep_block_hash).await; @@ -777,122 +781,6 @@ async fn deploy_smart_contracts_coordinator( testing::storage::drop_db(db).await; } -/// The [`TxCoordinatorEventLoop::get_signer_set_and_aggregate_key`] -/// function is supposed to fetch the "current" signing set and the -/// aggregate key to use for bitcoin transactions. It attempts to get the -/// latest rotate-keys contract call transaction confirmed on the canonical -/// Stacks blockchain and falls back to the DKG shares table if no such -/// transaction can be found. -/// -/// This tests that we prefer rotate keys transactions if it's available -/// but will use the DKG shares behavior is indeed the case. -#[tokio::test] -async fn get_signer_public_keys_and_aggregate_key_falls_back() { - let db = testing::storage::new_test_database().await; - - let mut rng = rand::rngs::StdRng::seed_from_u64(51); - - let ctx = TestContext::builder() - .with_storage(db.clone()) - .with_mocked_clients() - .build(); - - let network = InMemoryNetwork::new(); - - ctx.state().set_sbtc_contracts_deployed(); // Skip contract deployment - let coord = TxCoordinatorEventLoop { - network: network.connect(), - context: ctx.clone(), - context_window: 10000, - private_key: ctx.config().signer.private_key, - signing_round_max_duration: Duration::from_secs(10), - bitcoin_presign_request_max_duration: Duration::from_secs(10), - threshold: 2, - dkg_max_duration: Duration::from_secs(10), - is_epoch3: true, - }; - - // We need stacks blocks for the rotate-keys transactions. - let test_params = testing::storage::model::Params { - num_bitcoin_blocks: 10, - num_stacks_blocks_per_bitcoin_block: 1, - num_deposit_requests_per_block: 0, - num_withdraw_requests_per_block: 0, - num_signers_per_request: 0, - consecutive_blocks: false, - }; - let test_data = TestData::generate(&mut rng, &[], &test_params); - test_data.write_to(&db).await; - - // We always need the chain tip. - let chain_tip = db.get_bitcoin_canonical_chain_tip().await.unwrap().unwrap(); - - // We have no rows in the DKG shares table and no rotate-keys - // transactions, so there should be no aggregate key, since that only - // happens after DKG, but we should always know the current signer set. - let (maybe_aggregate_key, signer_set) = coord - .get_signer_set_and_aggregate_key(&chain_tip) - .await - .unwrap(); - assert!(maybe_aggregate_key.is_none()); - assert!(!signer_set.is_empty()); - - // Alright, lets write some DKG shares into the database. When we do - // that the signer set should be considered whatever the signer set is - // from our DKG shares. 
- let shares: EncryptedDkgShares = Faker.fake_with_rng(&mut rng); - db.write_encrypted_dkg_shares(&shares).await.unwrap(); - - let (aggregate_key, signer_set) = coord - .get_signer_set_and_aggregate_key(&chain_tip) - .await - .unwrap(); - - let shares_signer_set: BTreeSet = - shares.signer_set_public_keys.iter().copied().collect(); - - assert_eq!(shares.aggregate_key, aggregate_key.unwrap()); - assert_eq!(shares_signer_set, signer_set); - - // Okay not we write a rotate-keys transaction into the database. To do - // that we need the stacks chain tip, and a something in 3 different - // tables... - let stacks_chain_tip = db.get_stacks_chain_tip(&chain_tip).await.unwrap().unwrap(); - - let rotate_keys: RotateKeysTransaction = Faker.fake_with_rng(&mut rng); - let transaction = model::Transaction { - txid: rotate_keys.txid.into_bytes(), - tx: Vec::new(), - tx_type: model::TransactionType::RotateKeys, - block_hash: stacks_chain_tip.block_hash.into_bytes(), - }; - let tx = model::StacksTransaction { - txid: rotate_keys.txid, - block_hash: stacks_chain_tip.block_hash, - }; - - db.write_transaction(&transaction).await.unwrap(); - db.write_stacks_transaction(&tx).await.unwrap(); - db.write_rotate_keys_transaction(&rotate_keys) - .await - .unwrap(); - - // Alright, now that we have a rotate-keys transaction, we can check if - // it is preferred over the DKG shares table. - let (aggregate_key, signer_set) = coord - .get_signer_set_and_aggregate_key(&chain_tip) - .await - .unwrap(); - - let rotate_keys_signer_set: BTreeSet = - rotate_keys.signer_set.iter().copied().collect(); - - assert_eq!(rotate_keys.aggregate_key, aggregate_key.unwrap()); - assert_eq!(rotate_keys_signer_set, signer_set); - - testing::storage::drop_db(db).await; -} - /// Test that we run DKG if the coordinator notices that DKG has not been /// run yet. /// @@ -948,6 +836,11 @@ async fn run_dkg_from_scratch() { let network = WanNetwork::default(); let mut signers: Vec<_> = Vec::new(); + let signer_set_public_keys: BTreeSet = signer_key_pairs + .iter() + .map(|kp| kp.public_key().into()) + .collect(); + for (kp, data) in iter { let broadcast_stacks_tx = broadcast_stacks_tx.clone(); let db = testing::storage::new_test_database().await; @@ -959,6 +852,15 @@ async fn run_dkg_from_scratch() { }) .build(); + // When the signer binary starts up in main(), it sets the current + // signer set public keys in the context state using the values in + // the bootstrap_signing_set configuration parameter. Later, the + // state gets updated in the block observer. We're not running a + // block observer in this test, nor are we going through main, so + // we manually update the state here. + ctx.state() + .update_current_signer_set(signer_set_public_keys.clone()); + ctx.with_stacks_client(|client| { client .expect_estimate_fees() @@ -1176,6 +1078,10 @@ async fn run_subsequent_dkg() { // The aggregate key we will use for the first DKG shares entry. let aggregate_key_1: PublicKey = Faker.fake_with_rng(&mut rng); + let signer_set_public_keys: BTreeSet = signer_key_pairs + .iter() + .map(|kp| kp.public_key().into()) + .collect(); for (kp, data) in iter { let broadcast_stacks_tx = broadcast_stacks_tx.clone(); @@ -1194,14 +1100,20 @@ async fn run_subsequent_dkg() { // This ensures that they participate in DKG. data.write_to(&db).await; + // When the signer binary starts up in main(), it sets the current + // signer set public keys in the context state using the values in + // the bootstrap_signing_set configuration parameter. 
Later, this + // state gets updated in the block observer. We're not running a + // block observer in this test, nor are we going through main, so + // we manually update the necessary state here. + ctx.state() + .update_current_signer_set(signer_set_public_keys.clone()); + // Write one DKG shares entry to the signer's database simulating that // DKG has been successfully run once. db.write_encrypted_dkg_shares(&EncryptedDkgShares { aggregate_key: aggregate_key_1, - signer_set_public_keys: signer_key_pairs - .iter() - .map(|kp| kp.public_key().into()) - .collect(), + signer_set_public_keys: signer_set_public_keys.iter().copied().collect(), status: DkgSharesStatus::Verified(data.bitcoin_blocks[0].clone().into()), ..Faker.fake() }) @@ -1615,22 +1527,10 @@ async fn sign_bitcoin_transaction() { ev.run().await }); - let zmq_stream = - BitcoinCoreMessageStream::new_from_endpoint(BITCOIN_CORE_ZMQ_ENDPOINT, &["hashblock"]) - .await - .unwrap(); - let (sender, receiver) = tokio::sync::mpsc::channel(100); - - tokio::spawn(async move { - let mut stream = zmq_stream.to_block_hash_stream(); - while let Some(block) = stream.next().await { - sender.send(block).await.unwrap(); - } - }); - let block_observer = BlockObserver { context: ctx.clone(), - bitcoin_blocks: ReceiverStream::new(receiver), + bitcoin_blocks: testing::btc::new_zmq_block_hash_stream(BITCOIN_CORE_ZMQ_ENDPOINT) + .await, }; let counter = start_count.clone(); tokio::spawn(async move { @@ -2058,22 +1958,10 @@ async fn sign_bitcoin_transaction_multiple_locking_keys() { ev.run().await }); - let zmq_stream = - BitcoinCoreMessageStream::new_from_endpoint(BITCOIN_CORE_ZMQ_ENDPOINT, &["hashblock"]) - .await - .unwrap(); - let (sender, receiver) = tokio::sync::mpsc::channel(100); - - tokio::spawn(async move { - let mut stream = zmq_stream.to_block_hash_stream(); - while let Some(block) = stream.next().await { - sender.send(block).await.unwrap(); - } - }); - let block_observer = BlockObserver { context: ctx.clone(), - bitcoin_blocks: ReceiverStream::new(receiver), + bitcoin_blocks: testing::btc::new_zmq_block_hash_stream(BITCOIN_CORE_ZMQ_ENDPOINT) + .await, }; let counter = start_count.clone(); tokio::spawn(async move { @@ -2650,22 +2538,10 @@ async fn skip_smart_contract_deployment_and_key_rotation_if_up_to_date() { ev.run().await }); - let zmq_stream = - BitcoinCoreMessageStream::new_from_endpoint(BITCOIN_CORE_ZMQ_ENDPOINT, &["hashblock"]) - .await - .unwrap(); - let (sender, receiver) = tokio::sync::mpsc::channel(100); - - tokio::spawn(async move { - let mut stream = zmq_stream.to_block_hash_stream(); - while let Some(block) = stream.next().await { - sender.send(block).await.unwrap(); - } - }); - let block_observer = BlockObserver { context: ctx.clone(), - bitcoin_blocks: ReceiverStream::new(receiver), + bitcoin_blocks: testing::btc::new_zmq_block_hash_stream(BITCOIN_CORE_ZMQ_ENDPOINT) + .await, }; let counter = start_count.clone(); tokio::spawn(async move { @@ -3205,6 +3081,10 @@ async fn test_conservative_initial_sbtc_limits() { // - We load the database with a bitcoin blocks going back to some // genesis block. 
// ========================================================================= + let signer_set_public_keys: BTreeSet = signer_key_pairs + .iter() + .map(|kp| kp.public_key().into()) + .collect(); let mut signers = Vec::new(); for kp in signer_key_pairs.iter() { let db = testing::storage::new_test_database().await; @@ -3222,6 +3102,15 @@ async fn test_conservative_initial_sbtc_limits() { .with_mocked_emily_client() .build(); + // When the signer binary starts up in main(), it sets the current + // signer set public keys in the context state using the values in + // the bootstrap_signing_set configuration parameter. Later, this + // state gets updated in the block observer. We're not running a + // block observer in this test, nor are we going through main, so + // we manually update the necessary state here. + ctx.state() + .update_current_signer_set(signer_set_public_keys.clone()); + let network = network.connect(&ctx); signers.push((ctx, db, kp, network)); @@ -3234,7 +3123,7 @@ async fn test_conservative_initial_sbtc_limits() { let dkg_txid = testing::dummy::txid(&fake::Faker, &mut rng); let (aggregate_key, encrypted_shares) = signer_set - .run_dkg(chain_tip_info.hash.into(), dkg_txid, &mut rng) + .run_dkg(chain_tip_info.hash.into(), dkg_txid.into(), &mut rng) .await; for ((_, db, _, _), dkg_shares) in signers.iter_mut().zip(&encrypted_shares) { @@ -3440,22 +3329,10 @@ async fn test_conservative_initial_sbtc_limits() { ev.run().await }); - let zmq_stream = - BitcoinCoreMessageStream::new_from_endpoint(BITCOIN_CORE_ZMQ_ENDPOINT, &["hashblock"]) - .await - .unwrap(); - let (sender, receiver) = tokio::sync::mpsc::channel(100); - - tokio::spawn(async move { - let mut stream = zmq_stream.to_block_hash_stream(); - while let Some(block) = stream.next().await { - sender.send(block).await.unwrap(); - } - }); - let block_observer = BlockObserver { context: ctx.clone(), - bitcoin_blocks: ReceiverStream::new(receiver), + bitcoin_blocks: testing::btc::new_zmq_block_hash_stream(BITCOIN_CORE_ZMQ_ENDPOINT) + .await, }; let counter = start_count.clone(); tokio::spawn(async move { diff --git a/signer/tests/integration/transaction_signer.rs b/signer/tests/integration/transaction_signer.rs index 7b3c30bea..6cbcc0867 100644 --- a/signer/tests/integration/transaction_signer.rs +++ b/signer/tests/integration/transaction_signer.rs @@ -12,6 +12,7 @@ use signer::bitcoin::utxo::RequestRef; use signer::bitcoin::utxo::Requests; use signer::bitcoin::utxo::UnsignedTransaction; use signer::bitcoin::validation::TxRequestIds; +use signer::block_observer::get_signer_set_and_aggregate_key; use signer::context::Context; use signer::context::SbtcLimits; use signer::error::Error; @@ -27,16 +28,15 @@ use signer::network::MessageTransfer; use signer::stacks::contracts::ContractCall; use signer::storage::model; use signer::storage::model::BitcoinBlockHash; +use signer::storage::model::BitcoinBlockRef; use signer::storage::model::BitcoinTxId; use signer::storage::model::BitcoinTxSigHash; -use signer::storage::model::RotateKeysTransaction; use signer::storage::model::SigHash; use signer::storage::model::StacksTxId; use signer::storage::DbRead as _; use signer::storage::DbWrite as _; use signer::testing; use signer::testing::context::*; -use signer::testing::storage::model::TestData; use signer::transaction_signer::ChainTipStatus; use signer::transaction_signer::MsgChainTipReport; use signer::transaction_signer::TxSignerEventLoop; @@ -50,102 +50,6 @@ use crate::setup::TestSignerSet; use crate::setup::TestSweepSetup; use 
crate::setup::TestSweepSetup2; -/// Test that [`TxSignerEventLoop::get_signer_public_keys`] falls back to -/// the bootstrap config if there is no rotate-keys transaction in the -/// database. -#[tokio::test] -async fn get_signer_public_keys_and_aggregate_key_falls_back() { - let db = testing::storage::new_test_database().await; - - let mut rng = rand::rngs::StdRng::seed_from_u64(51); - - let ctx = TestContext::builder() - .with_storage(db.clone()) - .with_mocked_clients() - .build(); - - let network = InMemoryNetwork::new(); - - let coord = TxSignerEventLoop { - network: network.connect(), - context: ctx.clone(), - context_window: 10000, - wsts_state_machines: LruCache::new(NonZeroUsize::new(100).unwrap()), - signer_private_key: ctx.config().signer.private_key, - threshold: 2, - rng: rand::rngs::StdRng::seed_from_u64(51), - dkg_begin_pause: None, - wsts_frost_state_machines: LruCache::new(NonZeroUsize::new(5).unwrap()), - wsts_frost_mock_txs: LruCache::new(NonZeroUsize::new(5).unwrap()), - }; - - // We need stacks blocks for the rotate-keys transactions. - let test_params = testing::storage::model::Params { - num_bitcoin_blocks: 10, - num_stacks_blocks_per_bitcoin_block: 1, - num_deposit_requests_per_block: 0, - num_withdraw_requests_per_block: 0, - num_signers_per_request: 0, - consecutive_blocks: false, - }; - let test_data = TestData::generate(&mut rng, &[], &test_params); - test_data.write_to(&db).await; - - // We always need the chain tip. - let chain_tip = db.get_bitcoin_canonical_chain_tip().await.unwrap().unwrap(); - - // We have no transactions in the database, just blocks header hashes - // and block heights. The `get_signer_public_keys` function falls back - // to the config for keys if no rotate-keys transaction can be found. - // So this function almost never errors. - let bootstrap_signer_set = coord.get_signer_public_keys(&chain_tip).await.unwrap(); - // We check that the signer set can form a valid wallet when we load - // the config. In particular, the signing set should not be empty. - assert!(!bootstrap_signer_set.is_empty()); - - let config_signer_set = ctx.config().signer.bootstrap_signing_set(); - assert_eq!(bootstrap_signer_set, config_signer_set); - - // Okay now we write a rotate-keys transaction into the database. To do - // that we need the stacks chain tip, and a something in 3 different - // tables... - let stacks_chain_tip = db.get_stacks_chain_tip(&chain_tip).await.unwrap().unwrap(); - - let rotate_keys: RotateKeysTransaction = Faker.fake_with_rng(&mut rng); - let transaction = model::Transaction { - txid: rotate_keys.txid.into_bytes(), - tx: Vec::new(), - tx_type: model::TransactionType::RotateKeys, - block_hash: stacks_chain_tip.block_hash.into_bytes(), - }; - let tx = model::StacksTransaction { - txid: rotate_keys.txid, - block_hash: stacks_chain_tip.block_hash, - }; - - db.write_transaction(&transaction).await.unwrap(); - db.write_stacks_transaction(&tx).await.unwrap(); - db.write_rotate_keys_transaction(&rotate_keys) - .await - .unwrap(); - - // Alright, now that we have a rotate-keys transaction, we can check if - // it is preferred over the config. 
- let signer_set: Vec = coord - .get_signer_public_keys(&chain_tip) - .await - .unwrap() - .into_iter() - .collect(); - - let mut rotate_keys_signer_set = rotate_keys.signer_set.clone(); - rotate_keys_signer_set.sort(); - - assert_eq!(rotate_keys_signer_set, signer_set); - - testing::storage::drop_db(db).await; -} - /// Test that [`TxSignerEventLoop::assert_valid_stacks_tx_sign_request`] /// errors when the signer is not in the signer set. #[tokio::test] @@ -167,8 +71,11 @@ async fn signing_set_validation_check_for_stacks_transactions() { let mut setup = TestSweepSetup::new_setup(rpc, faucet, 10000, &mut rng); // Let's get the blockchain data into the database. - let chain_tip: BitcoinBlockHash = setup.sweep_block_hash.into(); - backfill_bitcoin_blocks(&db, rpc, &chain_tip).await; + let chain_tip = BitcoinBlockRef { + block_hash: setup.sweep_block_hash.into(), + block_height: setup.sweep_block_height, + }; + backfill_bitcoin_blocks(&db, rpc, &chain_tip.block_hash).await; // This is all normal things that need to happen in order to pass // validation. @@ -247,9 +154,12 @@ pub async fn assert_should_be_able_to_handle_sbtc_requests() { // Create a test setup with a confirmed deposit transaction let setup = TestSweepSetup::new_setup(rpc, faucet, 10000, &mut rng); // Backfill the blockchain data into the database - let chain_tip: BitcoinBlockHash = setup.sweep_block_hash.into(); - backfill_bitcoin_blocks(&db, rpc, &chain_tip).await; - let bitcoin_block = db.get_bitcoin_block(&chain_tip).await.unwrap(); + let chain_tip = BitcoinBlockRef { + block_hash: setup.sweep_block_hash.into(), + block_height: setup.sweep_block_height, + }; + backfill_bitcoin_blocks(&db, rpc, &chain_tip.block_hash).await; + let bitcoin_block = db.get_bitcoin_block(&chain_tip.block_hash).await.unwrap(); let public_aggregate_key = setup.aggregated_signer.keypair.public_key().into(); @@ -262,6 +172,15 @@ pub async fn assert_should_be_able_to_handle_sbtc_requests() { setup.store_deposit_request(&db).await; setup.store_deposit_decisions(&db).await; + let (aggregate_key, signer_set_public_keys) = + get_signer_set_and_aggregate_key(&ctx, chain_tip.block_hash) + .await + .unwrap(); + + let state = ctx.state(); + state.set_current_aggregate_key(aggregate_key.unwrap()); + state.update_current_signer_set(signer_set_public_keys); + // Initialize the transaction signer event loop let network = WanNetwork::default(); @@ -293,7 +212,7 @@ pub async fn assert_should_be_able_to_handle_sbtc_requests() { let sbtc_state = signer::bitcoin::utxo::SignerBtcState { utxo: ctx .get_storage() - .get_signer_utxo(&chain_tip) + .get_signer_utxo(&chain_tip.block_hash) .await .unwrap() .unwrap(), @@ -320,9 +239,10 @@ pub async fn assert_should_be_able_to_handle_sbtc_requests() { let mut handle = network.connect(&ctx).spawn(); - let result = tx_signer + tx_signer .handle_bitcoin_pre_sign_request(&sbtc_context, &chain_tip) - .await; + .await + .unwrap(); // check if we are receving an Ack from the signer tokio::time::timeout(Duration::from_secs(2), async move { @@ -331,8 +251,6 @@ pub async fn assert_should_be_able_to_handle_sbtc_requests() { .await .unwrap(); - assert!(result.is_ok()); - // Check that the intentions to sign the requests sighashes // are stored in the database let (will_sign, _) = db @@ -406,7 +324,10 @@ async fn new_state_machine_per_valid_sighash() { let report = MsgChainTipReport { sender_is_coordinator: true, chain_tip_status: ChainTipStatus::Canonical, - chain_tip: BitcoinBlockHash::from([0; 32]), + chain_tip: BitcoinBlockRef { + 
block_hash: BitcoinBlockHash::from([0; 32]), + block_height: 0, + }, }; // The message that we will send is for the following sighash. We'll @@ -431,7 +352,7 @@ async fn new_state_machine_per_valid_sighash() { // Now for the nonce request message let mut nonce_request_msg = WstsMessage { - id: WstsMessageId::BitcoinTxid(*txid), + id: WstsMessageId::Sweep(*txid), inner: wsts::net::Message::NonceRequest(NonceRequest { dkg_id: 1, sign_id: 1, @@ -446,12 +367,7 @@ async fn new_state_machine_per_valid_sighash() { assert!(tx_signer.wsts_state_machines.is_empty()); tx_signer - .handle_wsts_message( - &nonce_request_msg, - &report.chain_tip, - msg_public_key, - &report, - ) + .handle_wsts_message(&nonce_request_msg, msg_public_key, &report) .await .unwrap(); @@ -474,12 +390,7 @@ async fn new_state_machine_per_valid_sighash() { }; let response = tx_signer - .handle_wsts_message( - &nonce_request_msg, - &report.chain_tip, - msg_public_key, - &report, - ) + .handle_wsts_message(&nonce_request_msg, msg_public_key, &report) .await; let id2 = StateMachineId::BitcoinSign(random_sighash); @@ -506,8 +417,19 @@ async fn max_one_state_machine_per_bitcoin_block_hash_for_dkg() { // Let's make sure that the database has the chain tip. let (rpc, _) = sbtc::testing::regtest::initialize_blockchain(); - let chain_tip: BitcoinBlockHash = rpc.get_best_block_hash().unwrap().into(); - backfill_bitcoin_blocks(&db, rpc, &chain_tip).await; + let headers = &rpc.get_chain_tips().unwrap()[0]; + let chain_tip = BitcoinBlockRef { + block_hash: headers.hash.into(), + block_height: headers.height, + }; + backfill_bitcoin_blocks(&db, rpc, &chain_tip.block_hash).await; + + let (_, signer_set_public_keys) = get_signer_set_and_aggregate_key(&ctx, chain_tip.block_hash) + .await + .unwrap(); + + ctx.state() + .update_current_signer_set(signer_set_public_keys); // Initialize the transaction signer event loop let network = WanNetwork::default(); @@ -528,7 +450,7 @@ async fn max_one_state_machine_per_bitcoin_block_hash_for_dkg() { // We need to convince the signer event loop that it should accept the // message that we are going to send it. DkgBegin messages are only // accepted from the coordinator on the canonical chain tip. - let report = MsgChainTipReport { + let mut report = MsgChainTipReport { sender_is_coordinator: true, chain_tip_status: ChainTipStatus::Canonical, chain_tip, @@ -547,13 +469,13 @@ async fn max_one_state_machine_per_bitcoin_block_hash_for_dkg() { assert!(tx_signer.wsts_state_machines.is_empty()); tx_signer - .handle_wsts_message(&dkg_begin_msg, &chain_tip, msg_public_key, &report) + .handle_wsts_message(&dkg_begin_msg, msg_public_key, &report) .await .unwrap(); // We should have a state machine associated with the current chain tip // request message that we just received. 
- let id1 = StateMachineId::from(&chain_tip); + let id1 = StateMachineId::from(&chain_tip.block_hash); let state_machine = tx_signer.wsts_state_machines.get(&id1).unwrap(); assert_eq!(state_machine.dkg_id, dkg_id); assert_eq!(tx_signer.wsts_state_machines.len(), 1); @@ -568,7 +490,7 @@ async fn max_one_state_machine_per_bitcoin_block_hash_for_dkg() { }; tx_signer - .handle_wsts_message(&dkg_begin_msg, &chain_tip, msg_public_key, &report) + .handle_wsts_message(&dkg_begin_msg, msg_public_key, &report) .await .unwrap(); @@ -578,16 +500,14 @@ async fn max_one_state_machine_per_bitcoin_block_hash_for_dkg() { // If we say the current chain tip is something else, a new state // machine will be created associated with that chain tip - let random_block: model::BitcoinBlock = Faker.fake_with_rng(&mut rng); - let chain_tip = random_block.block_hash; - db.write_bitcoin_block(&random_block).await.unwrap(); + report.chain_tip = Faker.fake_with_rng(&mut rng); tx_signer - .handle_wsts_message(&dkg_begin_msg, &chain_tip, msg_public_key, &report) + .handle_wsts_message(&dkg_begin_msg, msg_public_key, &report) .await .unwrap(); - let id2 = StateMachineId::from(&chain_tip); + let id2 = StateMachineId::from(&report.chain_tip.block_hash); let state_machine = tx_signer.wsts_state_machines.get(&id2).unwrap(); assert_eq!(state_machine.dkg_id, dkg_id); assert_eq!(tx_signer.wsts_state_machines.len(), 2); diff --git a/signer/tests/integration/zmq.rs b/signer/tests/integration/zmq.rs index 977230c5d..40f05f5c7 100644 --- a/signer/tests/integration/zmq.rs +++ b/signer/tests/integration/zmq.rs @@ -13,10 +13,9 @@ pub const BITCOIN_CORE_ZMQ_ENDPOINT: &str = "tcp://localhost:28332"; async fn block_stream_streams_blocks() { let (_, faucet) = regtest::initialize_blockchain(); - let stream = - BitcoinCoreMessageStream::new_from_endpoint(BITCOIN_CORE_ZMQ_ENDPOINT, &["rawblock"]) - .await - .unwrap(); + let stream = BitcoinCoreMessageStream::new_from_endpoint(BITCOIN_CORE_ZMQ_ENDPOINT) + .await + .unwrap(); let mut block_stream = stream.to_block_stream(); @@ -66,10 +65,9 @@ async fn block_stream_streams_blocks() { async fn block_hash_stream_streams_block_hashes() { let (_, faucet) = regtest::initialize_blockchain(); - let stream = - BitcoinCoreMessageStream::new_from_endpoint(BITCOIN_CORE_ZMQ_ENDPOINT, &["hashblock"]) - .await - .unwrap(); + let stream = BitcoinCoreMessageStream::new_from_endpoint(BITCOIN_CORE_ZMQ_ENDPOINT) + .await + .unwrap(); let mut block_hash_stream = stream.to_block_hash_stream();