Install Helm and add the Akash repo if not done previously by following the steps in this guide.
All steps in this section should be conducted from the Kubernetes control plane node on which Helm has been installed.
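Before proceeding, it can help to confirm that Helm is available and that kubectl is pointed at the intended cluster. This is an optional sanity check, not part of the official steps:

```bash
# Confirm Helm is installed and kubectl targets the cluster you intend to use
helm version --short
kubectl get nodes -o wide
```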
Rook has published the following Helm charts for the Ceph storage provider:
- Rook Ceph Operator: Starts the Ceph Operator, which will watch for Ceph CRs (custom resources)
- Rook Ceph Cluster: Creates Ceph CRs that the operator will use to configure the cluster
The Helm charts are intended to simplify deployment and upgrades.
- Note: if any issues are encountered during the Rook deployment, tear down the Rook-Ceph components via the steps listed here and begin anew.
- Deployment typically takes approximately 10 minutes to complete.
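Because the rollout takes several minutes, you may want to watch pod status while the charts install later in this section. This is an optional monitoring sketch:

```bash
# Watch Rook/Ceph pods come up in the rook-ceph namespace (Ctrl-C to stop)
kubectl -n rook-ceph get pods -w
```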
If you already have the `akash-rook` Helm chart installed, make sure to use the following documentation:
- Add the Rook repo to Helm

```bash
helm repo add rook-release https://charts.rook.io/release
```

- Expected/Example Result

```
# helm repo add rook-release https://charts.rook.io/release
"rook-release" has been added to your repositories
```
- Verify the Rook repo has been added

```bash
helm search repo rook-release --version v1.13.5
```

- Expected/Example Result

```
# helm search repo rook-release --version v1.13.5
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
rook-release/rook-ceph          v1.13.5         v1.13.5         File, Block, and Object Storage Services for yo...
rook-release/rook-ceph-cluster  v1.13.5         v1.13.5         Manages a single Ceph cluster namespace for Rook
```
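If the chart version does not show up, the local chart index may simply be stale; refreshing it is a reasonable first step (standard Helm behavior, not an Akash-specific requirement):

```bash
# Refresh the local Helm chart index, then search again
helm repo update
helm search repo rook-release --version v1.13.5
```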
The steps immediately below cover an all-in-one (testing) install of the Operator chart; scroll further for the PRODUCTION install.
For additional Operator chart values refer to this page.
For all-in-one deployments, you will likely want only one replica of the CSI provisioners.
- Add the following to the `rook-ceph-operator.values.yml` created in the subsequent step.
- By setting `provisionerReplicas` to `1`, you ensure that only a single replica of the CSI provisioner is deployed. It defaults to `2` when not explicitly set.
```yaml
csi:
  provisionerReplicas: 1
```
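After the Operator chart is installed, you can confirm the setting took effect by checking the replica count of the RBD provisioner deployment. The deployment name below follows Rook's usual naming; verify it in your own cluster:

```bash
# The RBD CSI provisioner should report a single ready replica
kubectl -n rook-ceph get deployment csi-rbdplugin-provisioner
```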
You can disable the default resource limits by using the following YAML config; this is useful when testing:
```bash
cat > rook-ceph-operator.values.yml << 'EOF'
resources:
csi:
  csiRBDProvisionerResource:
  csiRBDPluginResource:
  csiCephFSProvisionerResource:
  csiCephFSPluginResource:
  csiNFSProvisionerResource:
  csiNFSPluginResource:
EOF
```
```bash
helm install --create-namespace -n rook-ceph rook-ceph rook-release/rook-ceph --version 1.13.5 -f rook-ceph-operator.values.yml
```
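A quick way to confirm the release installed is to list Helm releases in the namespace; this is an optional check:

```bash
# The rook-ceph release should be listed with status "deployed"
helm -n rook-ceph list
```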
For the PRODUCTION install of the Operator chart, no customization is required by default.
- Install the Operator chart:
```bash
helm install --create-namespace -n rook-ceph rook-ceph rook-release/rook-ceph --version 1.13.5
```
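Either way you install the Operator chart, the operator pod should reach Running before you proceed to the Cluster chart. The label selector below is the one Rook applies to its operator pod, included here as a convenience check:

```bash
# Wait for the Rook operator pod to be Running/Ready
kubectl -n rook-ceph get pods -l app=rook-ceph-operator
```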
For additional Cluster chart values refer to this page.
For custom storage configuration refer to this example.
For a production multi-node setup, please skip this section and scroll further for PRODUCTION SETUP.
- Device Filter: Update `deviceFilter` to correspond with your specific disk configuration (see the disk-listing sketch after this list).
- Storage Class: Modify the `storageClass` name from `beta3` to an appropriate one, as outlined in the Storage Class Types table.
- Node Configuration: Under the `nodes` section, list the nodes designated for Ceph storage, replacing placeholders like `node1`, `node2`, etc., with your Kubernetes node names.
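To decide on an appropriate `deviceFilter` value, it helps to see how the kernel names the drives on each storage node; the command below is a generic Linux check, not part of the chart itself:

```bash
# Run on each storage node: list block devices and their names
# (e.g. nvme0n1 -> deviceFilter "^nvme.", sdb/sdc -> "^sd.")
lsblk -d -o NAME,SIZE,TYPE,MODEL
```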
When setting up an all-in-one production provider or a single storage node with multiple storage drives (minimum requirement: 3 drives, or 2 drives if `osdsPerDevice` is set to 2):

- Failure Domain: Set `failureDomain` to `osd`.
- Size Settings (see the sketch after this list):
  - The `size` and `osd_pool_default_size` should always be set to `osdsPerDevice + 1` when `failureDomain` is set to `osd`.
  - Set `min_size` and `osd_pool_default_min_size` to `2`.
  - Set `size` and `osd_pool_default_size` to `3`. Note: these can be set to `2` if you have a minimum of 3 drives and `osdsPerDevice` is `1`.
- Resource Allocation: To ensure Ceph services receive sufficient resources, comment out or remove the `resources:` field before execution.
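As a concrete illustration of the sizing rules above (an assumed example, not additional required configuration): a single node with two drives and `osdsPerDevice: "2"` would use `failureDomain: osd`, `size`/`osd_pool_default_size` of 3 (osdsPerDevice + 1), and `min_size`/`osd_pool_default_min_size` of 2. The relevant value overrides would look roughly like this:

```yaml
# Illustrative fragment only: one node, 2 drives, osdsPerDevice = 2,
# so size = osdsPerDevice + 1 = 3 and min_size = 2
configOverride: |
  [global]
  osd_pool_default_size = 3
  osd_pool_default_min_size = 2

cephBlockPools:
  - name: akash-deployments
    spec:
      failureDomain: osd
      replicated:
        size: 3
      parameters:
        min_size: "2"
```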
```bash
cat > rook-ceph-cluster.values.yml << 'EOF'
operatorNamespace: rook-ceph

configOverride: |
  [global]
  osd_pool_default_pg_autoscale_mode = on
  osd_pool_default_size = 1
  osd_pool_default_min_size = 1

cephClusterSpec:
  resources:

  mon:
    count: 1
  mgr:
    count: 1

  storage:
    useAllNodes: false
    useAllDevices: false
    deviceFilter: "^nvme."
    config:
      osdsPerDevice: "1"
    nodes:
      - name: "node1"
        config:

cephBlockPools:
  - name: akash-deployments
    spec:
      failureDomain: host
      replicated:
        size: 1
      parameters:
        min_size: "1"
        bulk: "true"

    storageClass:
      enabled: true
      name: beta3
      isDefault: true
      reclaimPolicy: Delete
      allowVolumeExpansion: true
      parameters:
        # RBD image format. Defaults to "2".
        imageFormat: "2"
        # RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
        imageFeatures: layering
        # The secrets contain Ceph admin credentials.
        csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
        csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
        csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
        csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
        csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
        csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
        # Specify the filesystem type of the volume. If not specified, csi-provisioner
        # will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
        # in hyperconverged settings where the volume is mounted on the same node as the osds.
        csi.storage.k8s.io/fstype: ext4

# Do not create default Ceph file systems, object stores
cephFileSystems:
cephObjectStores:

# Spawn rook-ceph-tools, useful for troubleshooting
toolbox:
  enabled: true
  resources:
EOF
```
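If your storage drives are SATA/SAS rather than NVMe, the `deviceFilter` regex needs to match their names. The fragment below is an assumed example of such an adjustment; verify the actual device names with `lsblk` as shown earlier:

```yaml
cephClusterSpec:
  storage:
    # Example only: match SATA/SAS devices (sda, sdb, ...) instead of NVMe
    deviceFilter: "^sd."
```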
- Device Filter: Update `deviceFilter` to match your disk specifications.
- Storage Class: Change the `storageClass` name from `beta3` to a suitable one, as specified in the Storage Class Types table.
- OSDs Per Device: Adjust `osdsPerDevice` according to the guidelines provided in the aforementioned table.
- Node Configuration: In the `nodes` section, add your nodes for Ceph storage, replacing `node1`, `node2`, etc., with the actual names of your Kubernetes nodes (see the sketch after this list).

For a setup involving a single storage node with multiple storage drives (minimum: 3 drives, or 2 drives if `osdsPerDevice` = 2):

- Failure Domain: Set `failureDomain` to `osd`.
- Size Settings:
  - The `size` and `osd_pool_default_size` should always be set to `osdsPerDevice + 1` when `failureDomain` is set to `osd`.
  - Set `min_size` and `osd_pool_default_min_size` to `2`.
  - Set `size` and `osd_pool_default_size` to `3`. Note: these can be set to `2` if you have a minimum of 3 drives and `osdsPerDevice` is `1`.
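The names under `nodes` must exactly match your Kubernetes node names; listing them first avoids a cluster that silently schedules no OSDs. This is a routine check rather than part of the chart values:

```bash
# The NAME column is what the `nodes:` entries must match
kubectl get nodes
```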
```bash
cat > rook-ceph-cluster.values.yml << 'EOF'
operatorNamespace: rook-ceph

configOverride: |
  [global]
  osd_pool_default_pg_autoscale_mode = on
  osd_pool_default_size = 3
  osd_pool_default_min_size = 2

cephClusterSpec:
  mon:
    count: 3
  mgr:
    count: 2

  storage:
    useAllNodes: false
    useAllDevices: false
    deviceFilter: "^nvme."
    config:
      osdsPerDevice: "2"
    nodes:
      - name: "node1"
        config:
      - name: "node2"
        config:
      - name: "node3"
        config:

cephBlockPools:
  - name: akash-deployments
    spec:
      failureDomain: host
      replicated:
        size: 3
      parameters:
        min_size: "2"
        bulk: "true"

    storageClass:
      enabled: true
      name: beta3
      isDefault: true
      reclaimPolicy: Delete
      allowVolumeExpansion: true
      parameters:
        # RBD image format. Defaults to "2".
        imageFormat: "2"
        # RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
        imageFeatures: layering
        # The secrets contain Ceph admin credentials.
        csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
        csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
        csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
        csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
        csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
        csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
        # Specify the filesystem type of the volume. If not specified, csi-provisioner
        # will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
        # in hyperconverged settings where the volume is mounted on the same node as the osds.
        csi.storage.k8s.io/fstype: ext4

# Do not create default Ceph file systems, object stores
cephFileSystems:
cephObjectStores:

# Spawn rook-ceph-tools, useful for troubleshooting
toolbox:
  enabled: true
  #resources:
EOF
```
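Before installing, you can render the chart locally to catch indentation or values mistakes in the file you just created; this dry-run is an optional safeguard:

```bash
# Render the Cluster chart with your values; errors here usually mean a YAML problem
helm template rook-ceph-cluster rook-release/rook-ceph-cluster \
  --version 1.13.5 -n rook-ceph -f rook-ceph-cluster.values.yml > /dev/null && echo "values OK"
```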
- Install the Cluster chart:

```bash
helm install --create-namespace -n rook-ceph rook-ceph-cluster \
  --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster --version 1.13.5 -f rook-ceph-cluster.values.yml
```
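Deployment takes roughly ten minutes; once the OSD pods are Running, overall Ceph health can be checked from the toolbox pod (enabled in the values file above):

```bash
# Watch pods come up, then query Ceph health from the toolbox
kubectl -n rook-ceph get pods
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph status
```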
This label is mandatory and is used by Akash's `inventory-operator` to discover the storageClass.

- Change `beta3` to the `storageClass` you picked earlier.
```bash
kubectl label sc beta3 akash.network=true
```
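To confirm the label was applied (a quick optional check):

```bash
# The storage class should appear when filtering by the akash.network=true label
kubectl get sc -l akash.network=true
```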
When running a single storage node or all-in-one, make sure to change the failure domain from `host` to `osd` for the `.mgr` pool.
```bash
# Open a shell in the rook-ceph-tools pod
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- bash

# Run these inside the toolbox shell
ceph osd crush rule create-replicated replicated_rule_osd default osd
ceph osd pool set .mgr crush_rule replicated_rule_osd
```
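Still inside the toolbox shell, you can verify that the new CRUSH rule exists and is now assigned to the `.mgr` pool; these are standard Ceph queries:

```bash
# Run inside the rook-ceph-tools shell
ceph osd crush rule ls
ceph osd pool get .mgr crush_rule
```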