- Installing tools - helm and kubectl
- Configuring permissions for each stage of pipeline to talk to its cluster
- Configuring access to cloud platform
- Security considerations of lots of certs or whatever floating around there
- No visibility into the deployment without sending info back to CI/CD tool
- Tempting to put cluster configurations in app code repo
- Declarative - stored separately and idempotent
- Uniform and reproducible
- Keeps config away from data and code
- simplifies pipelines needing to determine what is code v. config
- Client operations do not need to know about storage or file structure
- `git commit` is the method for making changes - audit, security, etc. Furthermore, any local `kubectl` commands just get clobbered when GitOps refreshes
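The commit-based change flow can be sketched as follows; the repo, file names, and commit message below are hypothetical:

```shell
# Sketch: every cluster change is a commit, so `git log` is the audit trail.
# (Simulated in a throwaway local repo; a real setup pushes to the GitOps remote.)
cd "$(mktemp -d)"
git init -q .
git config user.email dev@example.com
git config user.name dev
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 2
EOF
git add . && git commit -qm "demo: set replicas to 2"
git log --oneline -- deployment.yaml   # who changed what, and when
```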
- Not a pipeline, but can be the "CD" part of it
- You are going to hate it at first
- forget to add files to `kustomization.yaml`
- patches are still ugly, still repetitive between env configs
- Enforcing common labels
- Scaling or resource requests for different environments
- Factor out Azure Resource/Object IDs or AWS ARNs
- Conditional or global application of shared YAML snippets
- Insertion of YAML into a key or a list index
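Several of these use cases fit in one overlay; a minimal sketch, assuming a hypothetical `base/` layout and a Deployment named `demo`:

```yaml
# overlays/prod/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
commonLabels:            # enforce common labels on every resource
  app.kubernetes.io/part-of: demo
  env: prod
patches:                 # bump replicas for this environment only
  - target:
      kind: Deployment
      name: demo
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 5
```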
- Dirty, manual `kubectl` changes against a cluster
- Rolling back a config change
  - can be configured with CRDs
  - `git revert`
- Complete recreation of cluster for experiments or DR
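A rollback walk-through, simulated in a throwaway local repo (file name and values hypothetical); GitOps then reconciles the cluster to the reverted state on its own:

```shell
# Roll back a bad config change with a full audit trail.
cd "$(mktemp -d)"
git init -q . && git config user.email ci@example.com && git config user.name ci
echo "replicas: 2" > deployment-values.yaml
git add . && git commit -qm "scale to 2"
echo "replicas: 50" > deployment-values.yaml
git commit -qam "scale to 50 (oops)"
git revert --no-edit HEAD        # undo the bad commit; history keeps both
cat deployment-values.yaml       # back to: replicas: 2
```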
- Namespace `<env>-<tool>`
- Some deployment
- Customize file served based on the environment and tool
Maybe:
- Ingress and associated DNS entry at `<tool>.<env>.host.com`
- cert-manager deployment in each NS to do SSL for us.
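A per-namespace Ingress matching that naming pattern might look like this; the issuer name, service name, and hosts are hypothetical, and it assumes cert-manager is watching the Ingress annotations:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  namespace: dev-flux
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # assumes this ClusterIssuer exists
spec:
  tls:
    - hosts:
        - flux.dev.host.com
      secretName: flux-dev-tls     # cert-manager fills this Secret
  rules:
    - host: flux.dev.host.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo
                port:
                  number: 80
```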
- Cluster control via branch policies rather than writing K8s roles
- I gave engineers read-only access everywhere minus secrets
- May be writing and creating artifacts that are not necessary with Helm
- Watching Image names and tags in a container repo
- Connections to Git and Helm repos
- Loads of CRDs that need permissions in your cloud and may operate in a somewhat opaque manner
- Envs handled w/ combo of branches and kustomize overlays
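The Git and Helm repo connections amount to a couple of source objects; URLs and intervals below are hypothetical, and API versions vary by Flux release:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: cluster-config
  namespace: flux-system
spec:
  interval: 1m                 # how often Flux polls the repo
  url: https://example.com/org/cluster-config.git
  ref:
    branch: main
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: bitnami
  namespace: flux-system
spec:
  interval: 30m
  url: https://charts.bitnami.com/bitnami
```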
The bad
- It's a lot of files - you will forget to add your file to `kustomization.yaml`
- Patching is ugly and verbose, strategic merge patches are often very redundant
- You will push YAML that won't compile or that violates KRM - up to you to hook in checks
- Mentally compiling paths seems like something that is easy to do, but it gets complicated
- "Where is such-and-such resource set?" is not always easy to answer
- Strict `kustomize` had me patching an indexed list of volumes, because app-specific config maps or secrets needed to be mounted in init containers and I couldn't think of a clever way to do it with the Helm chart I had.
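A sketch of the kind of indexed-list patch involved; the Deployment name, volume names, and mount path are hypothetical:

```yaml
# In kustomization.yaml: JSON6902 operations that append a volume and
# mount it into the first init container by list index.
patches:
  - target:
      kind: Deployment
      name: my-app
    patch: |-
      - op: add
        path: /spec/template/spec/volumes/-
        value:
          name: app-config
          configMap:
            name: app-config
      - op: add
        path: /spec/template/spec/initContainers/0/volumeMounts/-
        value:
          name: app-config
          mountPath: /etc/app
```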
The good
- Inventory control and pruning
- No tools or permissions to install in CI/CD agents (apart from writing to container reg)
- The `HelmRelease` CRD generally does the right thing out of the box. `watch kubectl get hr -A` is generally what I do for sanity
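A minimal `HelmRelease` sketch; the chart, version range, and values are hypothetical, and the API version varies by Flux release:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: demo
  namespace: dev-flux
spec:
  interval: 5m
  chart:
    spec:
      chart: nginx
      version: "15.x"          # track a semver range
      sourceRef:
        kind: HelmRepository
        name: bitnami          # assumes this source object exists
        namespace: flux-system
  values:
    replicaCount: 2
```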
- Flux's `Kustomization` CRD allows handy levers to reconcile itself and insert dependencies on other `Kustomization`s.
- Image updates
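The `Kustomization` levers, including the dependency ordering, might look like this (names and paths hypothetical):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  prune: true                  # inventory control: delete what leaves the repo
  sourceRef:
    kind: GitRepository
    name: cluster-config
  path: ./overlays/dev
  dependsOn:                   # reconcile only after this one succeeds
    - name: infrastructure
```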
- Envs handled w/ combo of branches and kustomize overlays
- SyncPolicy and SyncOptions seem cool
- create namespace
- self-healing and pruning can be set; default sync interval is 3 minutes
- webhook option between git repo and ArgoCD
- Handles plain YAML, Helm charts, and Kustomize
- Seems like image updates still happen via CI/CD, but Argo sucks them in
- Can send you an alert instead of reverting a manual change to what's in the repo
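An `Application` sketch pulling those sync options together; the repo URL, path, and namespace are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/cluster-config.git
    targetRevision: main
    path: overlays/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: dev-argocd
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from the repo
      selfHeal: true   # revert manual kubectl drift
    syncOptions:
      - CreateNamespace=true
```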
The good
- Inventory control and pruning
- What looks like a very comprehensive UI
- CRDs `AppProject` and `Application` - define a cluster and group of applications
- Seems easier to manage multiple clusters from a single cluster
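An `AppProject` groups and fences those applications; the repo pattern and namespaces below are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: demo-project
  namespace: argocd
spec:
  description: Group of demo applications
  sourceRepos:                 # only these repos may be deployed
    - https://example.com/org/*
  destinations:                # only into these clusters/namespaces
    - server: https://kubernetes.default.svc
      namespace: "dev-*"
```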
- Envs handled with function evaluations and in-line changes
The bad
The good
- Inventory control and pruning
- Validation is treated as a first-class task
- kpt has a nice built-in validator feature that would have saved me time before pushing commits to the GitOps repo. I used a pre-commit hook that just made sure it was valid YAML.
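Validators hang off the `Kptfile` pipeline and run during `kpt fn render`; the package name and function version below are hypothetical:

```yaml
apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: cluster-config
pipeline:
  validators:
    # schema-validate every rendered resource before it can be committed
    - image: gcr.io/kpt-fn/kubeval:v0.3
```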