First commit

grafino committed Oct 2, 2020
0 parents commit dfc518c
Showing 18 changed files with 665 additions and 0 deletions.
Binary file added .DS_Store
Binary file not shown.
69 changes: 69 additions & 0 deletions README.md
@@ -0,0 +1,69 @@
### OpenEBS

OpenEBS is a Cloud Native, software-defined storage solution that enables us to take several storage options (disks, SSDs, cloud volumes, etc.) and use them to dynamically provision Kubernetes Persistent Volumes. This prevents cloud lock-in and enables custom StorageClasses per workload, replication, clones, and snapshots.

* Container-attached and container-native storage on Kubernetes.
* Each workload is provided with a dedicated storage controller.
* Implements granular storage policies and isolation.
* Runs completely in userspace, making it highly portable.
* Volumes provisioned through OpenEBS are always containerized and represented as a pod.
* OpenEBS is a collection of storage engines:
  * [Jiva](./openebs/Jiva/README.md)
  * [cStor](./openebs/cStor/README.md)
  * [LocalPV-hostpath](./openebs/LocalPV/hostpath/README.md)
  * [LocalPV-device](./openebs/LocalPV/device/README.md)



## Engines Comparison

![OpenEBS](./openebs/src/openebs.jpg)

## Prerequisites (iSCSI client)

* Depending on the Kubernetes provider or solution, we may need to set up the iSCSI client (see the sketch below).
* https://docs.openebs.io/docs/next/prerequisites.html
* Usually no extra setup is needed.
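
A typical Ubuntu node setup might look like the sketch below (package and service names are distro-specific; check the linked prerequisites page for your platform):

```
# Install and enable the iSCSI initiator (Ubuntu/Debian example)
sudo apt-get update
sudo apt-get install -y open-iscsi
sudo systemctl enable --now iscsid

# Verify the initiator name and that the service is running
cat /etc/iscsi/initiatorname.iscsi
sudo systemctl status iscsid
```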

## Default OpenEBS Setup on Kubernetes
```
kubectl create namespace openebs
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs openebs openebs/openebs
```
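
To confirm the installation came up, a couple of standard kubectl checks (not part of the original notes):

```
# All OpenEBS control-plane and NDM pods should reach Running state
kubectl get pods -n openebs

# The default chart typically also creates a few openebs-* StorageClasses
kubectl get sc
```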

## Important note
* When we set up OpenEBS storage on the cluster, the cloud default/standard StorageClass will not work for these volumes.
* In all StatefulSets and in the Helm charts we have to set the StorageClass explicitly (see the sketch below).
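
For example, in a StatefulSet the class goes on the volumeClaimTemplates (this mirrors the busyjiva manifest added later in this commit); for Helm charts the value name differs per chart, so check its values file:

```
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: openebs-jiva-single-node   # an OpenEBS class instead of the cloud default
    resources:
      requests:
        storage: 2Gi
```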

### Node Device Manager(NDM)
* NDM is an important component of the OpenEBS architecture. It treats block devices as resources that need to be monitored and managed just like other resources such as CPU, memory, and network. It is a DaemonSet that runs on each node, detects the attached block devices based on the configured filters, and loads them into Kubernetes as BlockDevice custom resources. These custom resources are aimed at helping hyper-converged storage operators by providing abilities like:

* An easy-to-access inventory of block devices available across the Kubernetes cluster.
* Predicting disk failures to help with taking preventive actions.
* Dynamically attaching/detaching disks to a storage pod, without restarting the corresponding NDM pod running on the node where the disk is attached/detached.
* The NDM daemon runs in containers and, because it has to access the underlying storage devices, runs in privileged mode.

* The Node Device Manager (NDM) is an important component of the OpenEBS control plane. Each node in the Kubernetes cluster runs an NDM DaemonSet pod, which is responsible for discovering new block storage devices; if a device matches the filters, NDM reports it to the NDM operator to be registered as a BlockDevice resource. NDM acts as the conduit between the control plane and the physical disks attached to each node, and it maintains the inventory of registered block storage devices in the etcd database, which is the single source of truth for the cluster (see the example below).
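
A quick way to inspect that inventory (the same listing appears later in the LocalPV-device notes):

```
# BlockDevice custom resources registered by NDM
kubectl get blockdevices -n openebs

# Capacity, filesystem, node and state details for a single device
kubectl describe blockdevice <blockdevice-name> -n openebs
```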

## References
* https://docs.openebs.io/docs/next/ndm.html


### mayactl
* mayactl is the command line tool for interacting with OpenEBS volumes and pools. It is not used or required while provisioning or managing OpenEBS volumes, but it is currently used for debugging and troubleshooting; OpenEBS volume and pool status can be obtained with it.
* To get access to the mayactl command line tool, you have to log in or exec into the maya-apiserver pod on Kubernetes (see the sketch below).
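
A hedged sketch of that workflow — the pod label and mayactl subcommands follow the linked docs but can differ between OpenEBS releases:

```
# Find the maya-apiserver pod
kubectl get pods -n openebs -l openebs.io/component-name=maya-apiserver

# Exec into it and query volume status (mayactl --help lists the pool subcommands)
kubectl exec -it <maya-apiserver-pod> -n openebs -- mayactl volume list
```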


## References
* https://docs.openebs.io/docs/next/mayactl.html


### Maya online
* You can use this SaaS platform to monitor your OpenEBS storage layer, metrics, and logs.

## References
* https://director.mayadata.io/


32 changes: 32 additions & 0 deletions openebs/Jiva/README.md
@@ -0,0 +1,32 @@
### Jiva

* Jiva is a lightweight storage engine that is recommended for low-capacity workloads.
* OpenEBS provisions a Jiva volume with three replicas on three different nodes, so ensure that there are 3 nodes in the cluster. The data in each replica is stored in the local container storage of that replica itself.
* The data is replicated and highly available and is suitable for quick testing of OpenEBS and simple application PoCs.
* If it is a single-node cluster, change the replica count accordingly and apply the modified YAML spec.
* If it is a multi-node cluster but you want to bind the pool to a mount point on a specific node, specify the host in the Jiva StoragePool.


## Provision with local attached or cloud storage

* In this mode, the local disks on each node have to be formatted and mounted at a directory path.
* All nodes must use the same mount point.
* An additional StoragePool needs to be created.

```
apiVersion: openebs.io/v1alpha1
kind: StoragePool
metadata:
  name: jiva-pool
  type: hostdir
spec:
  host: k8s-federated-storage-pool-3leom
  path: "/mnt/5g"
```



## Storage Class
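
The StorageClass for this setup is added as openebs/Jiva/jiva-sc.yaml in this commit and is referenced by the busybox-jiva-stateful.yaml example — a single replica backed by the jiva-pool StoragePool:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva-single-node
  annotations:
    openebs.io/cas-type: jiva
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "1"
      - name: StoragePool
        value: jiva-pool
provisioner: openebs.io/provisioner-iscsi
```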

## References
* https://docs.openebs.io/docs/next/jivaguide.html
47 changes: 47 additions & 0 deletions openebs/Jiva/busybox-jiva-stateful.yaml
@@ -0,0 +1,47 @@
apiVersion: v1
kind: Service
metadata:
  labels:
    app: busyjiva
  name: busyjiva
spec:
  clusterIP: None
  selector:
    app: busyjiva
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: busyjiva
  labels:
    app: busyjiva
spec:
  serviceName: busyjiva
  replicas: 1
  selector:
    matchLabels:
      app: busyjiva
  template:
    metadata:
      labels:
        app: busyjiva
    spec:
      containers:
      - name: busyjiva
        image: busybox
        imagePullPolicy: IfNotPresent
        command:
        - sleep
        - infinity
        volumeMounts:
        - name: busyjiva
          mountPath: /busybox
  volumeClaimTemplates:
  - metadata:
      name: busyjiva
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: openebs-jiva-single-node
      resources:
        requests:
          storage: 2Gi
8 changes: 8 additions & 0 deletions openebs/Jiva/jiva-pool.yaml
@@ -0,0 +1,8 @@
apiVersion: openebs.io/v1alpha1
kind: StoragePool
metadata:
  name: jiva-pool
  type: hostdir
spec:
  host: k8s-federated-storage-pool-3leoq
  path: "/mnt/5g"
12 changes: 12 additions & 0 deletions openebs/Jiva/jiva-sc.yaml
@@ -0,0 +1,12 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva-single-node
  annotations:
    openebs.io/cas-type: jiva
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "1"
      - name: StoragePool
        value: jiva-pool
provisioner: openebs.io/provisioner-iscsi
57 changes: 57 additions & 0 deletions openebs/LocalPV/device/README.md
@@ -0,0 +1,57 @@
#### OpenEBS Local Persistent Volumes backed by Block Devices

* The OpenEBS Dynamic Local PV provisioner can create Kubernetes Local Persistent Volumes using block devices available on the node to persist data; these are hereafter referred to as OpenEBS Local PV Device volumes.

## Advantages compared to native Kubernetes Local Persistent Volumes

* A dynamic volume provisioner, as opposed to a static provisioner.
* Better management of the block devices used for creating Local PVs, via OpenEBS NDM. NDM provides capabilities like discovering block device properties, setting up device pools/filters, metrics collection, and the ability to detect whether block devices have moved across nodes.

## Custom StorageClass

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-device
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: device
      # - name: FSType
      #   value: xfs
      # - name: BlockDeviceTag
      #   value: "busybox-device"
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
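
A minimal PVC that consumes this class might look like the sketch below (the claim name is hypothetical; because of WaitForFirstConsumer the PV is created only once a pod using the claim is scheduled):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-device-claim   # hypothetical name
spec:
  storageClassName: local-device
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 5Gi
```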

## Notes

* The volumeBindingMode MUST ALWAYS be set to WaitForFirstConsumer. volumeBindingMode: WaitForFirstConsumer instructs Kubernetes to initiate the creation of the PV only after the Pod using the PVC is scheduled to a node.
* The FSType takes effect only if the underlying block device is not formatted. For instance, if the block device is formatted with Ext4, specifying XFS in the StorageClass will not clear Ext4 and reformat it with XFS. If the block devices are already formatted, you can clear the filesystem information using wipefs -f -a <device-path>. After the filesystem has been cleared, the NDM pod on the node needs to be restarted to update the BlockDevice (see the sketch after these notes).
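
A hedged sketch of that wipe-and-refresh step — the device path and NDM pod name are placeholders to be looked up on the affected node:

```
# Clear existing filesystem signatures on the device (destructive!)
sudo wipefs -f -a <device-path>

# Restart the NDM pod running on that node so the BlockDevice resource is refreshed;
# the DaemonSet recreates the pod automatically
kubectl get pods -n openebs -o wide
kubectl delete pod <ndm-pod-name> -n openebs
```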


## (optional) Block Device Tagging
* You can reserve block devices in the cluster so that the OpenEBS Dynamic Local Provisioner picks up only specific block devices available on the nodes. The NDM block device tagging feature is used to reserve the devices. For example, if you would like to reserve the local SSDs in your cluster for running a particular stateful application, you can tag a few devices in the cluster with a tag named after that application (busybox-device in the example below).


```
$ kubectl get blockdevices -n openebs
NAME NODENAME SIZE CLAIMSTATE STATUS AGE
blockdevice-024f756ac7e6443d7a6fd9b113a244c7 k8s-federated-storage-pool-3leof 10737418240 Unclaimed Active 17h
blockdevice-1edd039fbd32f20da9c566aacf2a619a k8s-federated-storage-pool-3leof 4293901824 Unclaimed Inactive 19h
blockdevice-213ddf0c20169d58933de0d6e0d5cb86 k8s-federated-storage-pool-3leof 8588869120 Unclaimed Inactive 18h
blockdevice-59e2446a8c68b2d00fb5aea120252291 k8s-federated-storage-pool-3leoq 10737418240 Unclaimed Active 17h
blockdevice-683904e790125ebb099342ca92d9b655 k8s-federated-storage-pool-3leoq 483328 Unclaimed Active 19h
blockdevice-6bb13f92472a40f70caa39f44dc9aa9c k8s-federated-storage-pool-3leom 53686025728 Claimed Active 19h
blockdevice-93b74e5c76521c795679c792cb81d72e k8s-federated-storage-pool-3leoq 53686025728 Claimed Active 19h
```

```
$ kubectl label bd -n openebs blockdevice-024f756ac7e6443d7a6fd9b113a244c7 openebs.io/block-device-tag=busybox-device
```


```
$ kubectl get blockdevices -n openebs --show-labels
NAME                                           NODENAME                           SIZE          CLAIMSTATE   STATUS   AGE   LABELS
blockdevice-024f756ac7e6443d7a6fd9b113a244c7   k8s-federated-storage-pool-3leof   10737418240   Unclaimed    Active   17h   kubernetes.io/hostname=k8s-federated-storage-pool-3leof,ndm.io/blockdevice-type=blockdevice,ndm.io/managed=true,openebs.io/block-device-tag=busybox-device
```
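
Once a device is tagged, the commented BlockDeviceTag lines from the custom StorageClass above can be enabled so that only tagged devices are considered for this class (a sketch based on those commented lines; the class name is hypothetical):

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-device-tagged   # hypothetical name
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: device
      - name: BlockDeviceTag
        value: "busybox-device"
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
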
36 changes: 36 additions & 0 deletions openebs/LocalPV/device/busybox-device-stateful.yaml
@@ -0,0 +1,36 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: busybox-device
  labels:
    app: busybox-device
spec:
  serviceName: busybox-device
  replicas: 1
  selector:
    matchLabels:
      app: busybox-device
  template:
    metadata:
      labels:
        app: busybox-device
    spec:
      containers:
      - name: busybox-device
        image: busybox
        imagePullPolicy: IfNotPresent
        command:
        - sleep
        - infinity
        volumeMounts:
        - name: busybox-device
          mountPath: /mnt/busybox
  volumeClaimTemplates:
  - metadata:
      name: busybox-device
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: local-device-sc
      resources:
        requests:
          storage: 4Gi
12 changes: 12 additions & 0 deletions openebs/LocalPV/device/local-device-sc.yaml
@@ -0,0 +1,12 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-device-sc
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: device
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
46 changes: 46 additions & 0 deletions openebs/LocalPV/hostpath/README.md
@@ -0,0 +1,46 @@
#### OpenEBS Local PV Hostpath

The OpenEBS Dynamic Local PV provisioner can create Kubernetes Local Persistent Volumes using a unique hostpath (directory) on the node to persist data.

## Advantages compared to native Kubernetes hostpath volumes

* OpenEBS Local PV Hostpath allows your applications to access a hostpath via StorageClass, PVC, and PV. This gives you the flexibility to change the PV provider without having to redesign your application YAML.
* Data protection using Velero Backup and Restore.
* Protection against hostpath security vulnerabilities, by masking the hostpath completely from the application YAML and pod.


## Customize hostpath directory
* By default, hostpath volumes will be created under the /var/openebs/local directory.
* You can change the BasePath value in the StorageClass YAML file.

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/local-hostpath
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

## Notes

* The volumeBindingMode MUST ALWAYS be set to WaitForFirstConsumer. volumeBindingMode: WaitForFirstConsumer instructs Kubernetes to initiate the creation of the PV only after the Pod using the PVC is scheduled to a node.
* If the BasePath does not exist on the node, the OpenEBS Dynamic Local PV Provisioner will attempt to create the directory when the first Local Volume is scheduled onto that node. You MUST ensure that the value provided for BasePath is a valid absolute path.
* As the Local PV storage classes use waitForFirstConsumer, do not use nodeName in the Pod spec to specify node affinity. If nodeName is used in the Pod spec, the PVC will remain in a Pending state.

## Identifying the PV associated with a StatefulSet

First locate the Local PV provisioner pod; its logs record the volumes it has provisioned:
```
kubectl get pods -n openebs -l openebs.io/component-name=openebs-localpv-provisioner
```
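
From there, a hedged sketch of tracing the actual objects — the PVC name follows the standard StatefulSet convention <volumeClaimTemplate>-<statefulset>-<ordinal> for the busybox-hostpath example:

```
# Provisioning events for Local PVs show up in the provisioner's log
kubectl logs -n openebs <openebs-localpv-provisioner-pod>

# The PVC created for the StatefulSet, and the PV bound to it
kubectl get pvc busybox-hostpath-busybox-hostpath-0
kubectl get pv $(kubectl get pvc busybox-hostpath-busybox-hostpath-0 -o jsonpath='{.spec.volumeName}')
```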


## References
* https://docs.openebs.io/docs/next/uglocalpv-hostpath.html
36 changes: 36 additions & 0 deletions openebs/LocalPV/hostpath/busybox-hostpath-stateful.yaml
@@ -0,0 +1,36 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: busybox-hostpath
  labels:
    app: busybox-hostpath
spec:
  serviceName: busybox-hostpath
  replicas: 1
  selector:
    matchLabels:
      app: busybox-hostpath
  template:
    metadata:
      labels:
        app: busybox-hostpath
    spec:
      containers:
      - name: busybox-hostpath
        image: busybox
        imagePullPolicy: IfNotPresent
        command:
        - sleep
        - infinity
        volumeMounts:
        - name: busybox-hostpath
          mountPath: /busybox
  volumeClaimTemplates:
  - metadata:
      name: busybox-hostpath
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: local-hostpath-sc
      resources:
        requests:
          storage: 2Gi
14 changes: 14 additions & 0 deletions openebs/LocalPV/hostpath/local-hostpath-sc.yaml
@@ -0,0 +1,14 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath-new
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/newpath
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer