Add example for Access Points
Adds a README and sample specs for exposing multiple independent data
sources on the same EFS volume by creating and mounting access points.
2uasimojo committed Apr 27, 2020
1 parent 0df5cfb commit 74272a9
Showing 3 changed files with 169 additions and 0 deletions.
1 change: 1 addition & 0 deletions docs/README.md
@@ -84,6 +84,7 @@ Before the example, you need to:
* [Accessing the filesystem from multiple pods](../examples/kubernetes/multiple_pods/README.md)
* [Consume EFS in StatefulSets](../examples/kubernetes/statefulset/README.md)
* [Mount subpath](../examples/kubernetes/volume_path/README.md)
* [Use Access Points](../examples/kubernetes/access_points/README.md)

## Development
Please go through [CSI Spec](https://github.com/container-storage-interface/spec/blob/master/spec.md) and [Kubernetes CSI Developer Documentation](https://kubernetes-csi.github.io/docs) to get some basic understanding of CSI driver before you start.
78 changes: 78 additions & 0 deletions examples/kubernetes/access_points/README.md
@@ -0,0 +1,78 @@
## EFS Access Points
Like [volume path mounts](../volume_path), mounting [EFS access points](https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html) allows you to expose separate data stores with independent ownership and permissions from a single EFS volume.
In this case, the separation is managed on the EFS side rather than the Kubernetes side.

**Note**: Because access point mounts require TLS, they are not supported by driver versions `0.3` and earlier.

### Create Access Points (in EFS)
Following [this doc](https://docs.aws.amazon.com/efs/latest/ug/create-access-point.html), create a separate access point for each independent data store you wish to expose in your cluster, tailoring the ownership and permissions as desired.
Note that there's no need to use different EFS volumes.
This example assumes you are using two access points.
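As a sketch, access points can also be created from the AWS CLI; the filesystem ID (`fs-12345678`), the UID/GID values, and the directory paths below are illustrative placeholders, not values from this example:

```shell
# Sketch only: fs-12345678, the UID/GID pairs, and the paths are placeholders.
# Create one access point per data store, each rooted at its own directory
# with its own POSIX owner.
aws efs create-access-point \
  --file-system-id fs-12345678 \
  --posix-user Uid=1000,Gid=1000 \
  --root-directory 'Path=/data-dir1,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=755}'
aws efs create-access-point \
  --file-system-id fs-12345678 \
  --posix-user Uid=1001,Gid=1001 \
  --root-directory 'Path=/data-dir2,CreationInfo={OwnerUid=1001,OwnerGid=1001,Permissions=755}'
```

Because all operations through an access point are performed as its `--posix-user`, this is how each data store gets independent ownership on the same volume.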

### Edit [Persistent Volume Spec](./specs/example.yaml)
```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv1
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  mountOptions:
    - tls
    - accesspoint=[AccessPointId]
  csi:
    driver: efs.csi.aws.com
    volumeHandle: [FileSystemId]
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv2
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  mountOptions:
    - tls
    - accesspoint=[AccessPointId]
  csi:
    driver: efs.csi.aws.com
    volumeHandle: [FileSystemId]
```
In each PersistentVolume, replace `[FileSystemId]` in `spec.csi.volumeHandle` and `[AccessPointId]` in the `accesspoint` option under `spec.mountOptions` with real values.
You can find these values using the AWS CLI:
```sh
>> aws efs describe-access-points --query 'AccessPoints[*].{"FileSystemId": FileSystemId, "AccessPointId": AccessPointId}'
```
If you are using the same underlying EFS volume, the `FileSystemId` will be the same in both PersistentVolume specs, but the `AccessPointId` will differ.
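If you prefer not to edit the spec by hand, the substitution can be sketched with `awk`; the IDs below are illustrative placeholders, and the program fills the first `[AccessPointId]` with the first ID and the second with the second:

```shell
# Illustrative IDs only; use the values reported by describe-access-points.
# The first [AccessPointId] occurrence gets ap1, the second gets ap2;
# every [FileSystemId] gets the same filesystem ID.
awk -v fsid=fs-12345678 -v ap1=fsap-0123456789abcdef0 -v ap2=fsap-0fedcba9876543210 '
  /\[AccessPointId\]/ { n++; sub(/\[AccessPointId\]/, (n == 1 ? ap1 : ap2)) }
  { gsub(/\[FileSystemId\]/, fsid); print }
' examples/kubernetes/access_points/specs/example.yaml > /tmp/example.yaml
```

The result lands in `/tmp/example.yaml`, leaving the checked-in template untouched.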

### Deploy the Example Application
Create the storage class, PVs, and persistent volume claims (PVCs):
```sh
>> kubectl apply -f examples/kubernetes/access_points/specs/example.yaml
```
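Before moving on, you can confirm that each claim bound to its PersistentVolume; the names below come from the example spec:

```shell
# Both PVs and both claims should report STATUS "Bound".
kubectl get pv efs-pv1 efs-pv2
kubectl get pvc efs-claim1 efs-claim2
```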

### Check that the EFS filesystem is used
After the objects are created, verify the pod is running:

```sh
>> kubectl get pods
```

You can also verify that data is being written to the EFS filesystem:

```sh
>> kubectl exec -ti efs-app -- tail -f /data-dir1/out.txt
>> kubectl exec -ti efs-app -- ls /data-dir2
```
90 changes: 90 additions & 0 deletions examples/kubernetes/access_points/specs/example.yaml
@@ -0,0 +1,90 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv1
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  mountOptions:
    - tls
    - accesspoint=fsap-068c22f0246419f75
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-e8a95a42
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv2
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  mountOptions:
    - tls
    - accesspoint=fsap-19f752f0068c22464
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-e8a95a42
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim1
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim2
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data-dir1/out.txt; sleep 5; done"]
      volumeMounts:
        - name: efs-volume-1
          mountPath: /data-dir1
        - name: efs-volume-2
          mountPath: /data-dir2
  volumes:
    - name: efs-volume-1
      persistentVolumeClaim:
        claimName: efs-claim1
    - name: efs-volume-2
      persistentVolumeClaim:
        claimName: efs-claim2
