diff --git a/.DS_Store b/.DS_Store
new file mode 100644
index 0000000..14367d8
Binary files /dev/null and b/.DS_Store differ
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..b113844
--- /dev/null
+++ b/README.md
@@ -0,0 +1,69 @@
+### OpenEBS
+
+OpenEBS is a Cloud Native, software-defined storage solution that enables us to take several storage options (disks, SSDs, cloud volumes, etc.) and use them to dynamically provision Kubernetes Persistent Volumes. This prevents cloud lock-in and enables custom storage classes per workload, replication, clones, and snapshots.
+
+* Container-attached and container-native storage on Kubernetes.
+* Each workload is provided with a dedicated storage controller.
+* Implements granular storage policies and isolation.
+* Runs completely in userspace, making it highly portable.
+* Volumes provisioned through OpenEBS are always containerized and represented as a pod.
+* OpenEBS is a collection of storage engines:
+  * [Jiva](./openebs/Jiva/README.md)
+  * [cStor](./openebs/cStor/README.md)
+  * [LocalPV-hostpath](./openebs/LocalPV/hostpath/README.md)
+  * [LocalPV-device](./openebs/LocalPV/device/README.md)
+
+## Engines Comparison
+
+![OpenEBS](./openebs/src/openebs.jpg)
+
+## Prerequisites (iSCSI client)
+
+* Depending on the Kubernetes provider or distribution, we may need to set up the iSCSI client.
+* https://docs.openebs.io/docs/next/prerequisites.html
+* Usually there is no need.
+
+## Default OpenEBS Setup on Kubernetes
+```
+kubectl create namespace openebs
+helm repo add openebs https://openebs.github.io/charts
+helm repo update
+helm install --namespace openebs openebs openebs/openebs
+```
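+
+To confirm the installation, list the control-plane pods and the default storage classes the chart creates (openebs-hostpath, openebs-device, openebs-jiva-default):
+
+```
+kubectl get pods -n openebs
+kubectl get sc
+```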
+
+## Important note
+* When we set up OpenEBS storage on the cluster, the cloud provider's default/standard storage class will not be used.
+* In all StatefulSets and in the Helm charts we have to set the storage class explicitly.
+
+### Node Device Manager (NDM)
+* NDM is an important component in the OpenEBS architecture. NDM treats block devices as resources that need to be monitored and managed just like other resources such as CPU, memory and network. It is a DaemonSet which runs on each node, detects attached block devices based on the filters, and loads them as BlockDevice custom resources into Kubernetes. These custom resources are aimed towards helping hyper-converged storage operators by providing abilities like:
+
+* Easy-to-access inventory of block devices available across the Kubernetes cluster.
+* Predicting failures on the disks to help with taking preventive actions.
+* Dynamically attaching/detaching disks to a storage pod, without restarting the corresponding NDM pod running on the node where the disk is attached/detached.
+* The NDM daemon runs in containers and, since it has to access the underlying storage devices, runs in privileged mode.
+
+* The Node Device Manager (NDM) is an important component of the OpenEBS control plane. Each node in the Kubernetes cluster runs an NDM DaemonSet pod, which is responsible for discovering new block storage devices and, if they match the filter, reporting them to the NDM operator to be registered as BlockDevice resources. NDM acts as the conduit between the control plane and the physical disks attached to each node. It maintains the inventory of registered block storage devices in the etcd database, which is the single source of truth for the cluster.
+
+## References
+* https://docs.openebs.io/docs/next/ndm.html
+
+### mayactl
+* mayactl is the command line tool for interacting with OpenEBS volumes and pools. It is not used or required while provisioning or managing OpenEBS volumes, but it is currently used while debugging and troubleshooting. OpenEBS volume and pool status can be obtained using the mayactl command.
+* To get access to the mayactl command line tool, you have to exec into the maya-apiserver pod on Kubernetes.
+
+## References
+* https://docs.openebs.io/docs/next/mayactl.html
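+
+For example (a sketch; the pod name is a placeholder, see the mayactl docs linked above for the full command set):
+
+```
+kubectl get pods -n openebs | grep maya-apiserver
+kubectl exec -it <maya-apiserver-pod> -n openebs -- mayactl volume list
+```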
+
+### Maya online
+* You can use the SaaS platform to monitor your OpenEBS storage layer, metrics and logs.
+
+## References
+* https://director.mayadata.io/
diff --git a/openebs/Jiva/README.md b/openebs/Jiva/README.md
new file mode 100644
index 0000000..40d1937
--- /dev/null
+++ b/openebs/Jiva/README.md
@@ -0,0 +1,32 @@
+### Jiva
+
+* Jiva is a lightweight storage engine that is recommended for low-capacity workloads.
+* OpenEBS provisions a Jiva volume with three replicas on three different nodes, so ensure that there are 3 nodes in the cluster. The data in each replica is stored in the local container storage of that replica itself.
+* The data is replicated and highly available, and is suitable for quick testing of OpenEBS and simple application PoCs.
+* If it is a single-node cluster, change the replica count accordingly and apply the modified YAML spec.
+* If it is a multi-node cluster but you want to bind the volume to a mount point on a specific node, specify the host in the Jiva StoragePool.
+
+## Provision with local attached or cloud storage
+
+* In this mode, the local disks on each node have to be formatted and mounted at a directory path.
+* All nodes must have the same mount point.
+* An additional StoragePool needs to be created:
+
+```
+apiVersion: openebs.io/v1alpha1
+kind: StoragePool
+metadata:
+  name: jiva-pool
+  type: hostdir
+spec:
+  host: k8s-federated-storage-pool-3leom
+  path: "/mnt/5g"
+```
+
+## Storage Class
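+
+A StorageClass then pins the replica count and the pool to use. This mirrors [jiva-sc.yaml](./jiva-sc.yaml) in this directory:
+
+```
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: openebs-jiva-single-node
+  annotations:
+    openebs.io/cas-type: jiva
+    cas.openebs.io/config: |
+      - name: ReplicaCount
+        value: "1"
+      - name: StoragePool
+        value: jiva-pool
+provisioner: openebs.io/provisioner-iscsi
+```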
+
+## References
+* https://docs.openebs.io/docs/next/jivaguide.html
\ No newline at end of file
diff --git a/openebs/Jiva/busybox-jiva-stateful.yaml b/openebs/Jiva/busybox-jiva-stateful.yaml
new file mode 100644
index 0000000..77bf193
--- /dev/null
+++ b/openebs/Jiva/busybox-jiva-stateful.yaml
@@ -0,0 +1,47 @@
+apiVersion: v1
+kind: Service
+metadata:
+  labels:
+    app: busyjiva
+  name: busyjiva
+spec:
+  clusterIP: None
+  selector:
+    app: busyjiva
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: busyjiva
+  labels:
+    app: busyjiva
+spec:
+  serviceName: busyjiva
+  replicas: 1
+  selector:
+    matchLabels:
+      app: busyjiva
+  template:
+    metadata:
+      labels:
+        app: busyjiva
+    spec:
+      containers:
+      - name: busyjiva
+        image: busybox
+        imagePullPolicy: IfNotPresent
+        command:
+        - sleep
+        - infinity
+        volumeMounts:
+        - name: busyjiva
+          mountPath: /busybox
+  volumeClaimTemplates:
+  - metadata:
+      name: busyjiva
+    spec:
+      accessModes: [ "ReadWriteOnce" ]
+      storageClassName: openebs-jiva-single-node
+      resources:
+        requests:
+          storage: 2Gi
\ No newline at end of file
diff --git a/openebs/Jiva/jiva-pool.yaml b/openebs/Jiva/jiva-pool.yaml
new file mode 100644
index 0000000..3f5f7d2
--- /dev/null
+++ b/openebs/Jiva/jiva-pool.yaml
@@ -0,0 +1,8 @@
+apiVersion: openebs.io/v1alpha1
+kind: StoragePool
+metadata:
+  name: jiva-pool
+  type: hostdir
+spec:
+  host: k8s-federated-storage-pool-3leoq
+  path: "/mnt/5g"
\ No newline at end of file
diff --git a/openebs/Jiva/jiva-sc.yaml b/openebs/Jiva/jiva-sc.yaml
new file mode 100644
index 0000000..04e01c8
--- /dev/null
+++ b/openebs/Jiva/jiva-sc.yaml
@@ -0,0 +1,12 @@
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: openebs-jiva-single-node
+  annotations:
+    openebs.io/cas-type: jiva
+    cas.openebs.io/config: |
+      - name: ReplicaCount
+        value: "1"
+      - name: StoragePool
+        value: jiva-pool
+provisioner: openebs.io/provisioner-iscsi
\ No newline at end of file
diff --git a/openebs/LocalPV/device/README.md b/openebs/LocalPV/device/README.md
new file mode 100644
index 0000000..44f6509
--- /dev/null
+++ b/openebs/LocalPV/device/README.md
@@ -0,0 +1,57 @@
+#### OpenEBS Local Persistent Volumes backed by Block Devices
+
+* The OpenEBS Dynamic Local PV provisioner can create Kubernetes Local Persistent Volumes using block devices available on the node to persist data, hereafter referred to as OpenEBS Local PV Device volumes.
+
+## Advantages compared to native Kubernetes Local Persistent Volumes
+
+* Dynamic volume provisioner, as opposed to a static provisioner.
+* Better management of the block devices used for creating Local PVs by OpenEBS NDM. NDM provides capabilities like discovering block device properties, setting up device pools/filters, metrics collection, and the ability to detect if block devices have moved across nodes.
+
+## Custom storageclass
+
+```
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: local-device
+  annotations:
+    openebs.io/cas-type: local
+    cas.openebs.io/config: |
+      - name: StorageType
+        value: device
+      # - name: FSType
+      #   value: xfs
+      # - name: BlockDeviceTag
+      #   value: "busybox-device"
+provisioner: openebs.io/local
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
+```
+
+## Notes
+
+* The volumeBindingMode MUST ALWAYS be set to WaitForFirstConsumer. volumeBindingMode: WaitForFirstConsumer instructs Kubernetes to initiate the creation of the PV only after the Pod using the PVC is scheduled to a node.
+* The FSType will take effect only if the underlying block device is not formatted. For instance, if the block device is formatted with Ext4, specifying XFS in the storage class will not clear Ext4 and format with XFS. If the block devices are already formatted, you can clear the filesystem information using `wipefs -f -a <device path>`. After the filesystem has been cleared, the NDM pod on the node needs to be restarted to update the BlockDevice.
+
+## (optional) Block Device Tagging
+* You can reserve specific block devices available on the nodes for the OpenEBS Dynamic Local Provisioner to pick up, using the NDM block device tagging feature. For example, if you would like local SSDs on your cluster for running XXX stateful application, you can tag a few devices in the cluster with a tag named XXX.
+
+```
+$ kubectl get blockdevices -n openebs
+NAME                                           NODENAME                           SIZE          CLAIMSTATE   STATUS     AGE
+blockdevice-024f756ac7e6443d7a6fd9b113a244c7   k8s-federated-storage-pool-3leof   10737418240   Unclaimed    Active     17h
+blockdevice-1edd039fbd32f20da9c566aacf2a619a   k8s-federated-storage-pool-3leof   4293901824    Unclaimed    Inactive   19h
+blockdevice-213ddf0c20169d58933de0d6e0d5cb86   k8s-federated-storage-pool-3leof   8588869120    Unclaimed    Inactive   18h
+blockdevice-59e2446a8c68b2d00fb5aea120252291   k8s-federated-storage-pool-3leoq   10737418240   Unclaimed    Active     17h
+blockdevice-683904e790125ebb099342ca92d9b655   k8s-federated-storage-pool-3leoq   483328        Unclaimed    Active     19h
+blockdevice-6bb13f92472a40f70caa39f44dc9aa9c   k8s-federated-storage-pool-3leom   53686025728   Claimed      Active     19h
+blockdevice-93b74e5c76521c795679c792cb81d72e   k8s-federated-storage-pool-3leoq   53686025728   Claimed      Active     19h
+```
+
+```
+$ kubectl label bd -n openebs blockdevice-024f756ac7e6443d7a6fd9b113a244c7 openebs.io/block-device-tag=busybox-device
+```
+
+```
+$ kubectl get blockdevices -n openebs --show-labels
+NAME                                           NODENAME                           SIZE          CLAIMSTATE   STATUS   AGE   LABELS
+blockdevice-024f756ac7e6443d7a6fd9b113a244c7   k8s-federated-storage-pool-3leof   10737418240   Unclaimed    Active   17h   kubernetes.io/hostname=k8s-federated-storage-pool-3leof,ndm.io/blockdevice-type=blockdevice,ndm.io/managed=true,openebs.io/block-device-tag=busybox-device
+```
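+
+To make a StorageClass provision only from the tagged devices, set BlockDeviceTag in the CAS config. This is a sketch based on the commented-out lines in the custom storage class above; the name local-device-tagged is illustrative:
+
+```
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: local-device-tagged
+  annotations:
+    openebs.io/cas-type: local
+    cas.openebs.io/config: |
+      - name: StorageType
+        value: device
+      - name: BlockDeviceTag
+        value: "busybox-device"
+provisioner: openebs.io/local
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
+```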
diff --git a/openebs/LocalPV/device/busybox-device-stateful.yaml b/openebs/LocalPV/device/busybox-device-stateful.yaml
new file mode 100644
index 0000000..35cf510
--- /dev/null
+++ b/openebs/LocalPV/device/busybox-device-stateful.yaml
@@ -0,0 +1,36 @@
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: busybox-device
+  labels:
+    app: busybox-device
+spec:
+  serviceName: busybox-device
+  replicas: 1
+  selector:
+    matchLabels:
+      app: busybox-device
+  template:
+    metadata:
+      labels:
+        app: busybox-device
+    spec:
+      containers:
+      - name: busybox-device
+        image: busybox
+        imagePullPolicy: IfNotPresent
+        command:
+        - sleep
+        - infinity
+        volumeMounts:
+        - name: busybox-device
+          mountPath: /mnt/busybox
+  volumeClaimTemplates:
+  - metadata:
+      name: busybox-device
+    spec:
+      accessModes: [ "ReadWriteOnce" ]
+      storageClassName: local-device-sc
+      resources:
+        requests:
+          storage: 4Gi
\ No newline at end of file
diff --git a/openebs/LocalPV/device/local-device-sc.yaml b/openebs/LocalPV/device/local-device-sc.yaml
new file mode 100644
index 0000000..bd17bc1
--- /dev/null
+++ b/openebs/LocalPV/device/local-device-sc.yaml
@@ -0,0 +1,12 @@
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: local-device-sc
+  annotations:
+    openebs.io/cas-type: local
+    cas.openebs.io/config: |
+      - name: StorageType
+        value: device
+provisioner: openebs.io/local
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
\ No newline at end of file
diff --git a/openebs/LocalPV/hostpath/README.md b/openebs/LocalPV/hostpath/README.md
new file mode 100644
index 0000000..48cf907
--- /dev/null
+++ b/openebs/LocalPV/hostpath/README.md
@@ -0,0 +1,46 @@
+#### OpenEBS Local PV Hostpath
+
+The OpenEBS Dynamic Local PV provisioner can create Kubernetes Local Persistent Volumes using a unique hostpath (directory) on the node to persist data.
+
+## Advantages compared to native Kubernetes hostpath volumes
+
+* OpenEBS Local PV Hostpath allows your applications to access the hostpath via StorageClass, PVC, and PV. This provides you the flexibility to change the PV providers without having to redesign your application YAML.
+* Data protection using Velero Backup and Restore.
+* Protection against hostpath security vulnerabilities, by masking the hostpath completely from the application YAML and pod.
+
+## Customize hostpath directory
+* By default, hostpath volumes are created under the /var/openebs/local directory.
+* You can change the BasePath value in the StorageClass YAML file:
+
+```
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: local-hostpath
+  annotations:
+    openebs.io/cas-type: local
+    cas.openebs.io/config: |
+      - name: StorageType
+        value: hostpath
+      - name: BasePath
+        value: /var/local-hostpath
+provisioner: openebs.io/local
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
+```
+
+## Notes
+
+* The volumeBindingMode MUST ALWAYS be set to WaitForFirstConsumer. volumeBindingMode: WaitForFirstConsumer instructs Kubernetes to initiate the creation of the PV only after the Pod using the PVC is scheduled to a node.
+* If the BasePath does not exist on the node, the OpenEBS Dynamic Local PV Provisioner will attempt to create the directory when the first Local Volume is scheduled onto that node. You MUST ensure that the value provided for BasePath is a valid absolute path.
+* As the Local PV storage classes use WaitForFirstConsumer, do not use nodeName in the Pod spec to specify node affinity. If nodeName is used in the Pod spec, the PVC will remain in Pending state.
+
+## Identifying the PV associated with a StatefulSet
+
+* To locate the Local PV provisioner pod (useful when checking provisioning logs):
+```
+kubectl get pods -n openebs -l openebs.io/component-name=openebs-localpv-provisioner
+```
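+
+To find the actual PV bound to the StatefulSet's claim, inspect the PVC and PV objects. A sketch using the busybox-hostpath StatefulSet from this directory (the PVC is named after the volumeClaimTemplate plus the pod ordinal, e.g. busybox-hostpath-busybox-hostpath-0):
+
+```
+kubectl get pvc
+kubectl get pv | grep busybox-hostpath
+```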
+
+## References
+* https://docs.openebs.io/docs/next/uglocalpv-hostpath.html
diff --git a/openebs/LocalPV/hostpath/busybox-hostpath-stateful.yaml b/openebs/LocalPV/hostpath/busybox-hostpath-stateful.yaml
new file mode 100644
index 0000000..24ee8c5
--- /dev/null
+++ b/openebs/LocalPV/hostpath/busybox-hostpath-stateful.yaml
@@ -0,0 +1,36 @@
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: busybox-hostpath
+  labels:
+    app: busybox-hostpath
+spec:
+  serviceName: busybox-hostpath
+  replicas: 1
+  selector:
+    matchLabels:
+      app: busybox-hostpath
+  template:
+    metadata:
+      labels:
+        app: busybox-hostpath
+    spec:
+      containers:
+      - name: busybox-hostpath
+        image: busybox
+        imagePullPolicy: IfNotPresent
+        command:
+        - sleep
+        - infinity
+        volumeMounts:
+        - name: busybox-hostpath
+          mountPath: /busybox
+  volumeClaimTemplates:
+  - metadata:
+      name: busybox-hostpath
+    spec:
+      accessModes: [ "ReadWriteOnce" ]
+      storageClassName: local-hostpath-sc
+      resources:
+        requests:
+          storage: 2Gi
\ No newline at end of file
diff --git a/openebs/LocalPV/hostpath/local-hostpath-sc.yaml b/openebs/LocalPV/hostpath/local-hostpath-sc.yaml
new file mode 100644
index 0000000..06d377b
--- /dev/null
+++ b/openebs/LocalPV/hostpath/local-hostpath-sc.yaml
@@ -0,0 +1,14 @@
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: local-hostpath-sc
+  annotations:
+    openebs.io/cas-type: local
+    cas.openebs.io/config: |
+      - name: StorageType
+        value: hostpath
+      - name: BasePath
+        value: /var/newpath
+provisioner: openebs.io/local
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
\ No newline at end of file
diff --git a/openebs/cStor/README.md b/openebs/cStor/README.md
new file mode 100644
index 0000000..1a712e7
--- /dev/null
+++ b/openebs/cStor/README.md
@@ -0,0 +1,184 @@
+#### cStor
+
+### Creating cStor Storage Pools
+
+* https://docs.openebs.io/docs/next/ugcstor.html#creating-cStor-storage-pools
+
+## 1 - Get the details of the blockdevices attached in the cluster.
+```
+kubectl get blockdevice -n openebs
+```
+
+* We must attach a volume to every worker node.
+* Identify block devices which are unclaimed, unmounted on the node, and do not contain any filesystem (raw devices).
+* We can see more details by describing the blockdevice.
+
+```
+NAME                                           NODENAME                       SIZE          CLAIMSTATE   STATUS   AGE
+blockdevice-1deee5ebf258da0aa3de3ace59f8fcb7   federated-storage-pool-3lotj   21473771008   Unclaimed    Active   20h
+blockdevice-5b449a1be25eb3a27fd8f71815b8a3a7   federated-storage-pool-3lotr   21473771008   Unclaimed    Active   20h
+blockdevice-7c3b0f65f2a0d118c8a61fa1bf829cc0   federated-storage-pool-3lotb   483328        Unclaimed    Active   21h
+blockdevice-c9a2f5a62a1342918c55425312fc712b   federated-storage-pool-3lotb   21473771008   Unclaimed    Active   20h
+blockdevice-f505a3f715a04e11d488daca15b5c1ac   federated-storage-pool-3lotj   483328        Unclaimed    Active   21h
+blockdevice-f9014cf39c86a73160d216e059c11ab3   federated-storage-pool-3lotr   483328        Unclaimed    Active   21h
+```
+
+## 2 - Create a StoragePoolClaim configuration.
+
+* Edit the pool configuration YAML (e.g. cstor-pool-striped-config.yaml) and add the block devices.
+
+```
+#Use the following YAML to create a cStor Storage Pool.
+apiVersion: openebs.io/v1alpha1
+kind: StoragePoolClaim
+metadata:
+  name: cstor-disk-pool
+  annotations:
+    cas.openebs.io/config: |
+      - name: PoolResourceRequests
+        value: |-
+          memory: 2Gi
+      - name: PoolResourceLimits
+        value: |-
+          memory: 4Gi
+spec:
+  name: cstor-disk-pool
+  type: disk
+  poolSpec:
+    poolType: striped
+  blockDevices:
+    blockDeviceList:
+    - blockdevice-14f3a461bdde1cd820dd3a7819e36c54 #Node federated-storage-pool-3lotr
+    - blockdevice-1deee5ebf258da0aa3de3ace59f8fcb7 #Node federated-storage-pool-3lotj
+    - blockdevice-96e849167d30baa806a144d134749220 #Node federated-storage-pool-3lotb
+---
+```
+
+* poolType represents how the data will be written to the disks on a given pool instance on a node. Supported values are striped, mirrored, raidz and raidz2.
+
+When poolType = mirrored, ensure the number of blockDevice CRs selected from each node is an even number. The data is striped across mirrors. For example, if 4x1TB blockDevices are selected on node1, the raw capacity of the pool instance of cstor-disk-pool on node1 is 2TB.
+
+```
+blockDevices:
+  blockDeviceList:
+  - blockdevice-14f3a461bdde1cd820dd3a7819e36c54 #Node federated-storage-pool-3lotr
+  - blockdevice-1deee5ebf258da0aa3de3ace59f8fcb7 #Node federated-storage-pool-3lotj
+  - blockdevice-5b449a1be25eb3a27fd8f71815b8a3a7 #Node federated-storage-pool-3lotr
+  - blockdevice-96e849167d30baa806a144d134749220 #Node federated-storage-pool-3lotb
+  - blockdevice-a6615702dd287d47f253d026abfcb235 #Node federated-storage-pool-3lotj
+  - blockdevice-c9a2f5a62a1342918c55425312fc712b #Node federated-storage-pool-3lotb
+```
+
+When poolType = striped, the number of blockDevice CRs from each node can be any number. The data is striped across each blockDevice. For example, if 4x1TB blockDevices are selected on node1, the raw capacity of the pool instance of cstor-disk-pool on node1 is 4TB.
+
+```
+blockDevices:
+  blockDeviceList:
+  - blockdevice-14f3a461bdde1cd820dd3a7819e36c54 #Node federated-storage-pool-3lotr
+  - blockdevice-1deee5ebf258da0aa3de3ace59f8fcb7 #Node federated-storage-pool-3lotj
+  - blockdevice-96e849167d30baa806a144d134749220 #Node federated-storage-pool-3lotb
+```
+
+When poolType = raidz, ensure that the number of blockDevice CRs selected from each node is 3, 5, 7, etc. The data is written with single parity. For example, if 3x1TB blockDevices are selected on node1, the raw capacity of the pool instance of cstor-disk-pool on node1 is 2TB; 1 disk is used for parity.
+
+When poolType = raidz2, ensure that the number of blockDevice CRs selected from each node is 6, 8, 10, etc. The data is written with dual parity. For example, if 6x1TB blockDevices are selected on node1, the raw capacity of the pool instance of cstor-disk-pool on node1 is 4TB; 2 disks are used for parity.
+
+## 3 - Apply the StoragePoolClaim configuration.
+```
+$ kubectl apply -f cstor-pool-mirrored-config.yaml
+```
+
+* Get the StoragePoolClaim:
+```
+$ kubectl get spc
+NAME                       AGE
+cstor-disk-pool-mirrored   29s
+```
+
+* Get the cStor pools:
+```
+$ kubectl get csp
+NAME                            ALLOCATED   FREE    CAPACITY   STATUS    READONLY   TYPE       AGE
+cstor-disk-pool-mirrored-4jec   272K        19.9G   19.9G      Healthy   false      mirrored   2m43s
+cstor-disk-pool-mirrored-iat3   272K        19.9G   19.9G      Healthy   false      mirrored   2m43s
+cstor-disk-pool-mirrored-zjp3   81.5K       19.9G   19.9G      Healthy   false      mirrored   2m43s
+```
+
+* We can see there is one pool instance per node, each running as a pod (3 pods for the 3 nodes):
+```
+$ kubectl get pods -n openebs
+NAME                                             READY   STATUS    RESTARTS   AGE
+cstor-disk-pool-mirrored-4jec-85c5fbdbb9-7gvjt   3/3     Running   0          13m
+cstor-disk-pool-mirrored-iat3-556b4d5554-7wxzc   3/3     Running   0          13m
+cstor-disk-pool-mirrored-zjp3-59ff464cdd-p796h   3/3     Running   0          13m
+```
+
+* Get the storage classes:
+```
+$ kubectl get sc
+NAME                        PROVISIONER                                                 RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+openebs-cstor-sc            openebs.io/provisioner-iscsi                                Delete          Immediate              false                  32s
+openebs-device              openebs.io/local                                            Delete          WaitForFirstConsumer   false                  22h
+openebs-hostpath            openebs.io/local                                            Delete          WaitForFirstConsumer   false                  22h
+openebs-jiva-default        openebs.io/provisioner-iscsi                                Delete          Immediate              false                  22h
+openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  22h
+```
+
+## 4 - Creating a cStor Storage Class
+```
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: openebs-cstor-sc
+  annotations:
+    openebs.io/cas-type: cstor
+    cas.openebs.io/config: |
+      - name: StoragePoolClaim
+        value: "cstor-disk-pool-mirrored"
+      - name: ReplicaCount
+        value: "3"
+provisioner: openebs.io/provisioner-iscsi
+```
+
+## 5 - Deploy a StatefulSet
+* In the StatefulSet, the storageClassName must reference the newly created StorageClass, as shown below.
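+
+The relevant part of busybox-cstor-stateful.yaml (included later in this repo):
+
+```
+  volumeClaimTemplates:
+  - metadata:
+      name: busybox
+    spec:
+      accessModes: [ "ReadWriteOnce" ]
+      storageClassName: openebs-cstor-sc
+      resources:
+        requests:
+          storage: 2Gi
+```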
+
+### Setting Pool Policies
+
+### Monitoring cStor
+
+* Prometheus and Grafana can be installed with Helm to monitor the cStor volumes.
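+
+The installs below assume the standard Grafana and Prometheus community chart repositories are registered, e.g.:
+
+```
+helm repo add grafana https://grafana.github.io/helm-charts
+helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+helm repo update
+```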
+
+```
+helm install grafana grafana/grafana
+
+kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
+
+helm install prometheus prometheus-community/prometheus --set alertmanager.enabled=false --set server.persistentVolume.storageClass=openebs-cstor-sc
+```
+
+* Use prometheus-server.default.svc.cluster.local as the Prometheus data source URL in Grafana.
+
+### Deleting or updating a storage pool using the same (or some of the same) disks
+
+* 1 - Terminate the StatefulSets that have an OpenEBS StorageClass in use.
+* 2 - Delete the PVCs binding the StatefulSets to the PVs.
+* 3 - Delete the StoragePoolClaim configuration to free the blockdevices.
+* 4 - Delete the StorageClass, because it is not usable anymore.
+
+### Notes
+
+* Noticed that the pod was always stuck in Terminating state after deletion of the StatefulSet.
+* Had to force the deletion of the pod and the PVC.
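+
+A common way to force the deletion (names are placeholders):
+
+```
+# Force-delete a pod stuck in Terminating
+kubectl delete pod <pod-name> --grace-period=0 --force
+
+# If the PVC also hangs, clear its finalizers before deleting it
+kubectl patch pvc <pvc-name> -p '{"metadata":{"finalizers":null}}'
+kubectl delete pvc <pvc-name>
+```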
+
+## Custom resources
+
+```
+kubectl get cvr -n openebs
+kubectl get cstorvolume -n openebs
+```
diff --git a/openebs/cStor/busybox-cstor-stateful.yaml b/openebs/cStor/busybox-cstor-stateful.yaml
new file mode 100644
index 0000000..fa3907e
--- /dev/null
+++ b/openebs/cStor/busybox-cstor-stateful.yaml
@@ -0,0 +1,47 @@
+apiVersion: v1
+kind: Service
+metadata:
+  labels:
+    app: busybox
+  name: busybox
+spec:
+  clusterIP: None
+  selector:
+    app: busybox
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: busybox
+  labels:
+    app: busybox
+spec:
+  serviceName: busybox
+  replicas: 1
+  selector:
+    matchLabels:
+      app: busybox
+  template:
+    metadata:
+      labels:
+        app: busybox
+    spec:
+      containers:
+      - name: busybox
+        image: busybox
+        imagePullPolicy: IfNotPresent
+        command:
+        - sleep
+        - infinity
+        volumeMounts:
+        - name: busybox
+          mountPath: /busybox
+  volumeClaimTemplates:
+  - metadata:
+      name: busybox
+    spec:
+      accessModes: [ "ReadWriteOnce" ]
+      storageClassName: openebs-cstor-sc
+      resources:
+        requests:
+          storage: 2Gi
\ No newline at end of file
diff --git a/openebs/cStor/cstor-pool-mirrored-config.yaml b/openebs/cStor/cstor-pool-mirrored-config.yaml
new file mode 100644
index 0000000..0a732db
--- /dev/null
+++ b/openebs/cStor/cstor-pool-mirrored-config.yaml
@@ -0,0 +1,27 @@
+#Use the following YAML to create a mirrored cStor Storage Pool.
+apiVersion: openebs.io/v1alpha1
+kind: StoragePoolClaim
+metadata:
+  name: cstor-disk-pool-mirrored
+  annotations:
+    cas.openebs.io/config: |
+      - name: PoolResourceRequests
+        value: |-
+          memory: 2Gi
+      - name: PoolResourceLimits
+        value: |-
+          memory: 4Gi
+spec:
+  name: cstor-disk-pool-mirrored
+  type: disk
+  poolSpec:
+    poolType: mirrored
+  blockDevices:
+    blockDeviceList:
+    - blockdevice-14f3a461bdde1cd820dd3a7819e36c54 #Node federated-storage-pool-3lotr
+    - blockdevice-1deee5ebf258da0aa3de3ace59f8fcb7 #Node federated-storage-pool-3lotj
+    - blockdevice-5b449a1be25eb3a27fd8f71815b8a3a7 #Node federated-storage-pool-3lotr
+    - blockdevice-96e849167d30baa806a144d134749220 #Node federated-storage-pool-3lotb
+    - blockdevice-a6615702dd287d47f253d026abfcb235 #Node federated-storage-pool-3lotj
+    - blockdevice-c9a2f5a62a1342918c55425312fc712b #Node federated-storage-pool-3lotb
+---
diff --git a/openebs/cStor/cstor-pool-striped-config.yaml b/openebs/cStor/cstor-pool-striped-config.yaml
new file mode 100644
index 0000000..fc5ab31
--- /dev/null
+++ b/openebs/cStor/cstor-pool-striped-config.yaml
@@ -0,0 +1,24 @@
+#Use the following YAML to create a striped cStor Storage Pool.
+apiVersion: openebs.io/v1alpha1
+kind: StoragePoolClaim
+metadata:
+  name: cstor-disk-pool-striped
+  annotations:
+    cas.openebs.io/config: |
+      - name: PoolResourceRequests
+        value: |-
+          memory: 2Gi
+      - name: PoolResourceLimits
+        value: |-
+          memory: 4Gi
+spec:
+  name: cstor-disk-pool-striped
+  type: disk
+  poolSpec:
+    poolType: striped
+  blockDevices:
+    blockDeviceList:
+    - blockdevice-6bb13f92472a40f70caa39f44dc9aa9c # k8s-federated-storage-pool-3leom 53686025728 Unclaimed Active 64s
+    - blockdevice-93b74e5c76521c795679c792cb81d72e # k8s-federated-storage-pool-3leoq 53686025728 Unclaimed Active 44s
+    - blockdevice-9dac4cb77614f548dd494cd17c0ca84b # k8s-federated-storage-pool-3leof 53686025728 Unclaimed Active 19s
+---
diff --git a/openebs/cStor/openebs-cstor-sc.yaml b/openebs/cStor/openebs-cstor-sc.yaml
new file mode 100644
index 0000000..682bf3c
--- /dev/null
+++ b/openebs/cStor/openebs-cstor-sc.yaml
@@ -0,0 +1,14 @@
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: openebs-cstor-sc
+  annotations:
+    openebs.io/cas-type: cstor
+    cas.openebs.io/config: |
+      - name: StoragePoolClaim
+        value: "cstor-disk-pool-striped"
+      - name: ReplicaCount
+        value: "3"
+      - name: VolumeMonitor
+        enabled: "true"
+provisioner: openebs.io/provisioner-iscsi
\ No newline at end of file
diff --git a/openebs/src/openebs.jpg b/openebs/src/openebs.jpg
new file mode 100644
index 0000000..9b2402f
Binary files /dev/null and b/openebs/src/openebs.jpg differ