Kubernetes configuration for bitcoin and electrumx #3573
Conversation
This is the kustomize base configuration for the bitcoin node. The configuration is based on the config stored in the tbtc v1 repo: https://github.com/keep-network/tbtc/tree/main/infrastructure/kube/keep-test and the code that is currently deployed to the cluster.
This is an overlay for the kustomize configuration of the bitcoin node running in the keep-test cluster. The configuration is based on the code that is currently deployed to the cluster.
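The base/overlay split could look roughly like this (a minimal sketch; the directory names and patch file are illustrative, not the actual repo layout):

```yaml
# overlays/keep-test/kustomization.yaml (hypothetical path)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: bitcoin-testnet
resources:
  - ../../base          # the shared bitcoind base configuration
patches:
  - path: bitcoind-testnet-patch.yaml   # cluster-specific overrides
```

The overlay only carries what differs from the base (namespace, network flags, sizing), so the base stays reusable for other clusters and networks.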
The server is already running in keep-test cluster.
This StorageClass is backed by SSD disks. It requires the Google Compute Engine Persistent Disk CSI Driver to be enabled on the cluster, see: https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver
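A sketch of such a StorageClass, assuming the GKE CSI driver is enabled (the class name here is illustrative; only the provisioner and `pd-ssd` type are GKE-documented values):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd   # assumed name
provisioner: pd.csi.storage.gke.io   # GCE PD CSI driver
parameters:
  type: pd-ssd                        # SSD-backed persistent disk
volumeBindingMode: WaitForFirstConsumer
```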
We expose the node internally to use it with electrumx. We don't need to expose it externally.
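Internal-only exposure usually means a plain ClusterIP Service; a minimal sketch (the selector label is assumed, the ports are the standard bitcoind testnet defaults):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bitcoind
spec:
  type: ClusterIP        # reachable only inside the cluster
  selector:
    app: bitcoind        # assumed pod label
  ports:
    - name: rpc
      port: 18332        # testnet RPC default
    - name: p2p
      port: 18333        # testnet P2P default
```

electrumx can then reach the node at `bitcoind.bitcoin-testnet.svc` without anything being exposed outside the cluster.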
We don't need this property in the bitcoind statefulset; it will be defined in the electrumx config.
The labels are set by the kustomization, and `podManagementPolicy: OrderedReady` is the default setting, so we don't need to specify it.
We don't need to run a wallet, so we set `-disablewallet=1`. `-txindex=1` is required by electrumx to index all transactions.
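In the container spec this would look roughly like the following (the image name and surrounding fields are placeholders; only the flags come from the comment above):

```yaml
containers:
  - name: bitcoind
    image: bitcoin-core      # placeholder image reference
    args:
      - -testnet
      - -disablewallet=1     # no wallet needed on this node
      - -txindex=1           # full transaction index, required by electrumx
```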
Estimated required storage based on the network:
- mainnet: 600 Gi (default)
- testnet: 40 Gi
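Sketch of the corresponding `volumeClaimTemplates` sizing for testnet (the storage class name is assumed; 600Gi would be used for mainnet):

```yaml
volumeClaimTemplates:
  - metadata:
      name: bitcoind-data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: ssd    # assumed SSD StorageClass name
      resources:
        requests:
          storage: 40Gi        # testnet estimate from above
```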
We published an image to the Docker registry and want to use it.
Security context is a good practice for running the nodes.
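A hedged example of such a pod security context; the UID/GID values are illustrative, not necessarily what the deployed manifests use:

```yaml
securityContext:
  runAsNonRoot: true   # refuse to start if the image runs as root
  runAsUser: 1000      # illustrative UID
  runAsGroup: 1000
  fsGroup: 1000        # makes the mounted volume group-writable
```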
We defined a separate namespace to manage all bitcoin-related configuration in the keep-test cluster for the test network. The namespace has a `-testnet` suffix to separate networks. We may want to run `-mainnet` or `-regtest` nodes in the keep-test cluster at some point.
The configuration uses kustomization templates with some modifications to switch to testnet.
The server now runs the bitcoind daemon instead of bcoin, which changed the error message.
Make a snapshot of the PVC from the first replica and use it to scale the statefulset to two replicas. Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/volume-snapshots#v1
Configuration of two bitcoind replicas based on the snapshot of the first replica data.
We run 3 replicas; replicas 1 and 2 are based on the snapshot taken from replica 0 after it synced.
This file helps with switching contexts when changing directories.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bitcoind-data-bitcoind-1
```
I am looking at the list of PVCs in GCP and I see `bitcoind-data-bitcoind-1` and `bitcoind-data-bitcoind-0`. To confirm my understanding: we are running two replicas of bitcoind, each with its own PVC. Is this correct? If so, why don't we have a `bitcoind-data-bitcoind-0-pvc.yaml` file here as well?
`bitcoind-data-bitcoind-0-pvc` got created automatically when we started the bitcoind statefulset with `volumeClaimTemplates` and just one replica. After the first replica synced, we created a snapshot of the `bitcoind-data-bitcoind-0` PVC and used it as a `dataSource` to create `bitcoind-data-bitcoind-1`.
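A sketch of that pre-populated PVC; the snapshot name is assumed, while the PVC name matches what the statefulset's `volumeClaimTemplates` would generate, so the statefulset adopts it when scaled to two replicas:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bitcoind-data-bitcoind-1   # name the statefulset expects for replica 1
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 40Gi
  dataSource:
    name: bitcoind-data-snapshot   # assumed VolumeSnapshot name
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
```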
```yaml
spec:
  volumeSnapshotClassName: bitcoind
  source:
    persistentVolumeClaimName: bitcoind-data-bitcoind-0
```
Why is it just for `-0`? From the PR, I understand that we first synced one instance, then used the snapshots to sync the second instance faster. Long-term, don't we want to have snapshots for both PVCs? If `-0` gets corrupted for whatever reason, we could restore it from `-1`.
We needed a snapshot of `bitcoind-data-bitcoind-0` to spin up the second replica. There was no point in creating a snapshot of `bitcoind-data-bitcoind-1`, as the two would be pretty close. After some time I plan to take a new snapshot and repeat this periodically.
```yaml
spec:
  volumeSnapshotClassName: electrumx
  source:
    persistentVolumeClaimName: electrumx-data-electrumx-0
```
Same question about generating snapshots as for bitcoind. Don't we need `-1` snapshots as well?
Answered here.
We don't need bcoin configuration as we switched to bitcoin core.
This PR adds Kubernetes manifests for the bitcoin node and electrumx server.
The configuration is based on the V1's config: https://github.com/keep-network/tbtc/tree/main/infrastructure/kube/keep-test with some improvements.
For V1 we were running bcoin; now we switch to bitcoind, due to bcoin-org/bcoin#1153.
Bitcoind and electrumx servers are running in dedicated Kubernetes namespaces:
- `bitcoin-testnet`
- `bitcoin`
We started with one replica for each server and after sync was done, we created snapshots that were used for PVC creation for other replicas.
Closes: #3590