The tests used in the ArgoCon Lightning Talk: Configuring Volumes for Parallel Workflow Reads and Writes - Lukonde Mwila, Amazon Web Services & Tim Collins, Pipekit
Comparison screenshots: NFS-server-provisioner vs. S3.
The talk can be found here.
The slide deck for this talk can be found here.
The tests require:
- A configured Workflow environment with S3 configured as an artifact repository.
- nfs-server-provisioner installed in the cluster, using `nfs` as the storage class name (see the sketch after this list).
- A minimum of 40Gi of storage (either ephemeral or mounted from a persistent volume).
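
As a quick check that the provisioner is working before running the tests, you can create a small throwaway PVC against the `nfs` storage class. The manifest below is a minimal sketch; the claim name and requested size are arbitrary examples and are not used by the tests themselves:

```yaml
# Minimal sketch: a throwaway PVC to confirm the nfs storage class provisions volumes.
# The claim name and size are illustrative only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-smoke-test
spec:
  accessModes:
    - ReadWriteMany        # nfs-server-provisioner supports RWX, which parallel steps rely on
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi
```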
Tests are found in the tests directory of this repo.
The workflow controller configmap configuration we used is in the workflows-config directory of this repo. More information on setting up an S3 artifact repository with Argo Workflows can be found in the Argo Workflows documentation.
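
For reference, the S3 artifact repository is declared in the `workflow-controller-configmap`. The snippet below is a sketch of the general shape of that configuration as described in the Argo Workflows documentation; the bucket, region, and secret names are placeholders, and the exact configuration used for these tests is the one in the workflows-config directory:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  artifactRepository: |
    s3:
      bucket: example-artifact-bucket   # placeholder bucket name
      endpoint: s3.amazonaws.com
      region: us-east-1                 # placeholder region
      accessKeySecret:                  # secret holding the S3 access key
        name: example-s3-credentials
        key: accessKey
      secretKeySecret:                  # secret holding the S3 secret key
        name: example-s3-credentials
        key: secretKey
```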
This example deploys nfs-server-provisioner using Argo CD and uses it in a simple CI workflow.
You can run the whole thing locally in a k3d cluster.
The example also includes the same workflow using MinIO as the artifact repository to pass data between steps. This lets you compare the setup differences between the two approaches and decide which best suits your own use cases.
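
To illustrate the NFS-backed variant, here is a minimal sketch of an Argo Workflow that provisions a ReadWriteMany volume from the `nfs` storage class via a volumeClaimTemplate and shares it between a writer step and two parallel readers. The step and template names are invented for this illustration; the real test workflows in the tests directory are more involved:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: nfs-parallel-example-
spec:
  entrypoint: main
  volumeClaimTemplates:
    - metadata:
        name: workdir
      spec:
        accessModes: ["ReadWriteMany"]   # RWX lets parallel steps mount the same volume
        storageClassName: nfs
        resources:
          requests:
            storage: 1Gi
  templates:
    - name: main
      steps:
        - - name: write
            template: writer
        - - name: read-a                 # these two steps run in parallel
            template: reader
          - name: read-b
            template: reader
    - name: writer
      container:
        image: alpine:3.19
        command: [sh, -c]
        args: ["echo 'shared data' > /work/data.txt"]
        volumeMounts:
          - name: workdir
            mountPath: /work
    - name: reader
      container:
        image: alpine:3.19
        command: [sh, -c]
        args: ["cat /work/data.txt"]
        volumeMounts:
          - name: workdir
            mountPath: /work
```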
The repo for the nfs-server-provisioner.
Pipekit is the control plane for Argo Workflows. Platform teams use Pipekit to manage data & CI pipelines at scale, while giving developers self-serve access to Argo. Pipekit's unified logging view, enterprise-grade RBAC, and multi-cluster management capabilities lower maintenance costs for platform teams while delivering a superior devex for Argo users. Sign up for a 30-day free trial at pipekit.io/signup.
Learn more about Pipekit's professional support for companies already using Argo at pipekit.io/services.