User Guide
In this guide we'll walk you through how to install particle, create a helm chart and test it. We'll be using the default driver and provisioner, which are kind and helm.
Before we can do anything, we need to have some tools installed. Those will be helm and kubectl. kubectl is not necessary for particle to work correctly; we'll use it only to explore the created cluster. Let's install them with brew:
brew install kubectl helm
If you don't have brew, you'll have to install helm and kubectl as their documentation states.
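For reference, a quick non-brew route on Linux might look like the following (a sketch for linux/amd64; check the official helm and kubectl installation docs for your platform):
# Helm: official installer script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
# kubectl: download the latest stable binary and install it
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl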
Now, you'll need to download particle. Again, if you use brew:
brew tap little-angry-clouds/my-brews
brew install particle
If you don't, you can get the binary from GitHub's release page, decompress the file and move it to your PATH.
# You need curl and jq (https://stedolan.github.io/jq/)
latest=$(curl -s https://api.github.com/repos/little-angry-clouds/particle/releases/latest | jq -r ".assets[].browser_download_url" | grep linux_amd64)
# The grep already narrows the release assets to the linux_amd64 build; change it to match your OS and architecture
curl $latest -L -o particle.tar.gz
tar xzf particle.tar.gz
sudo mv particle /usr/local/bin
Once downloaded, you should add it to your PATH. The exact steps depend on your system, so make sure it's done correctly. Usually moving it to /usr/local/bin is enough.
sudo mv particle /usr/local/bin/
sudo chmod 550 /usr/local/bin/particle
Once installed, make sure the CLI works correctly by running:
particle version
You should see the CLI version.
Particle has a lot of dependencies. In the last section we've only seen the mandatory ones, but if you want to use other verifiers or linters, you'll need to install them as well. To make it easier to have all these packages in one place, there's a docker image with all of them at littleangryclouds/particle. It has the following tools installed:
- Multiple kubectl versions
- Helm 3
- Kind
- Bats
- Yamllint
- Kubeval
- kube-score
- kube-linter
- helmfile
It does not have an entrypoint, command or args, so you can use any of them freely. The complete docker command would be:
docker run -ti --volume $(pwd):/$(pwd | xargs -n1 basename) --volume /var/run/docker.sock:/var/run/docker.sock \
--volume /var/lib/docker/:/var/lib/docker/ --volume $HOME/.kube/kind-config:/root/.kube/config --net host \
--workdir /$(pwd | xargs -n1 basename) littleangryclouds/particle particle test
Now, that's way too long a command, so it's recommended to create an alias so that the dockerized particle behaves exactly like a native particle would:
alias particle="docker run -ti --volume $(pwd):/$(pwd | xargs -n1 basename) --volume /var/run/docker.sock:/var/run/docker.sock --volume /var/lib/docker/:/var/lib/docker/ --volume $HOME/.kube/kind-config:/root/.kube/config --net host --workdir /$(pwd | xargs -n1 basename) littleangryclouds/particle:v0.0.8 particle"
Keep in mind that this command mounts the host's docker socket to do docker in docker, and uses the host network.
Now that the CLI is installed locally, we'll begin to use it. It's time to create the helm chart. For example purposes, we'll call the chart nginx.
You could create it with helm create nginx, but you'd need to add some stuff manually. Luckily, particle wraps that command and does that stuff automatically:
particle init chart nginx
If there's no error, this command has created a directory named nginx. If you list it, you'll see it's pretty much the same as a chart created with helm. The main difference is the particle directory, which contains a default scenario with some default values.
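The layout should look roughly like this (a sketch based on the standard helm create output; the exact contents of the particle/ directory may vary between versions):
nginx/
├── Chart.yaml
├── charts/
├── particle/
│   └── default/        # the default scenario
├── templates/
└── values.yaml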
The particle.yml will contain something like:
driver:
  name: kind
provisioner:
  name: helm
lint: |-
  set -e
  helm lint
verifier:
  name: helm
At this point we have the skeleton created. We just need to add some example functionality and do some tests. We'll only do cosmetic changes, since this is not the point of this document.
Let's suppose that instead of adding some cool templates to the chart, we only want to change the name of the deployment. We edit templates/deployment.yaml and change it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: particle-{{ include "nginx.fullname" . }}
  # Prior: name: {{ include "nginx.fullname" . }}
  labels:
    {{- include "nginx.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
...
The next step is to test the chart. Testing implies doing a bunch of stuff. We'll see the steps in more depth later, but in a nutshell, testing implies running linters, installing the chart in a real cluster and verifying its state.
Particle is initialized with the default configuration, which means that all the testing steps are done with the helm binary.
For the sake of simplicity, in this example the testing steps are:
- Lint with helm lint
- Create the kubernetes cluster with kind
- Install the chart with helm upgrade --install
- Verify that the state is the desired one with helm test
- Destroy the kubernetes cluster
All this, and more, will happen when you execute particle test.
Here you can see a screencast of the execution of particle and the exploration of the cluster state with kubectl.
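You can reproduce roughly what the screencast walks through by running the steps individually instead of particle test, so the cluster stays up for inspection (the kubectl call assumes your kubeconfig already points at the kind cluster that particle creates):
particle create      # create the kind cluster
particle dependency  # add any helm repositories the chart needs
particle converge    # install the chart
kubectl get pods     # explore the cluster state
particle destroy     # tear the cluster down when you're done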
To add more tests, you may check helm's documentation.
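For reference, a chart scaffolded with helm create already ships a minimal test hook, which is what helm test runs. It looks roughly like this (the exact file generated by your helm version may differ):
# templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "nginx.fullname" . }}-test-connection"
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ include "nginx.fullname" . }}:{{ .Values.service.port }}']
  restartPolicy: Never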
And that's it! If you want to continue learning how particle works, make sure to check out the rest of the documentation.
At the time of writing, the defaults are as follows: the default driver is kind, the default provisioner is helm, the default verifier is helm and the default linter is helm. This is a minimal example:
---
driver:
  name: kind
provisioner:
  name: helm
linter: |-
  set -e
  helm lint
verifier: |-
  set -e
  helm test chart
dependency:
  name: helm
And this is a more complete example:
---
driver:
  name: kind
  values:
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
      - role: control-plane
        kubeadmConfigPatches:
          - |
            kind: InitConfiguration
            nodeRegistration:
              kubeletExtraArgs:
                node-labels: "ingress-ready=true"
        extraPortMappings:
          - containerPort: 80
            hostPort: 8080
            protocol: TCP
provisioner:
  name: helm
  values:
    ingress:
      enabled: true
      hosts:
        - paths:
            - /
lint: |-
  set -e
  helm lint
  yamllint values.yaml
  helm template . | kubeval
verifier: |-
  set -e
  bats particle/default/test.bats
dependency:
  name: helm
The driver section accepts the following parameters:
- name: The driver name. Required.
  - Supported values: kind
- kubernetes-version: The kubernetes version. The versions are kind's node tags. Optional.
- values: The configuration for the driver. When using kind, it would be the values for the kind cluster. Optional.
driver:
  name: kind
  kubernetes-version: v1.21.2
  values:
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
      - role: control-plane
        kubeadmConfigPatches:
          - |
            kind: InitConfiguration
            nodeRegistration:
              kubeletExtraArgs:
                node-labels: "ingress-ready=true"
        extraPortMappings:
          - containerPort: 80
            hostPort: 8080
            protocol: TCP
The provisioner section accepts the following parameters:
- name: The provisioner name. Required.
  - Supported values: helm
- values: The configuration for the provisioner. When using helm, it would be the values for the helm chart. Optional.
provisioner:
  name: helm
  values:
    ingress:
      enabled: true
The verifier doesn't accept any parameters, just a list of commands that are executed at the OS level. It's recommended that the first line be set -e; this way the step will fail when any of the commands returns a failure.
verifier: |-
  set -e
  helm test chart
The dependency section accepts the following parameters:
- name: The dependency manager name. Required.
- charts: A list of dependencies to install. Optional.
  - repository-name: The name of the helm repository.
  - repository-url: The URL of the helm repository.
dependency:
  name: helm
  charts:
    - repository-name: drone
      repository-url: https://drone.github.io/charts
You may see examples here.
The prepare section takes a list of charts to install, each with the following parameters:
- name: The name of the helm chart.
- values: The configuration for the preparation. When using helm, it would be the values for the helm chart. Optional.
- version: The version of the helm chart. Optional.
prepare:
  - name: drone/drone
    version: 0.1.7
    values:
      env:
        DRONE_SERVER_HOST: localhost
The linter doesn't accept any parameters, just a list of commands that are executed at the OS level. It's recommended that the first line be set -e; this way the step will fail when any of the commands returns a failure.
linter: |-
  set -e
  helm lint
You may see examples here.
In this section we'll see each command that particle supports and explain its function.
The cleanup command deletes whatever is installed in the kubernetes cluster; that is, everything that the converge and prepare commands install.
The converge command installs the main kubernetes manifests to the cluster. For example, when using the helm provisioner it will install the developed helm chart.
The create command creates the kubernetes cluster. For example, when using the kind driver it will create a kind cluster on your computer.
The dependency command adds the dependencies that the tests might need. For example, when using the helm provisioner it will add the helm repositories locally by executing helm repo add.
An example use case: if you're developing an application that connects to a MySQL database, this is where you'd declare the helm chart that you want to deploy before your application, as sketched below.
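A sketch of that use case, combining the dependency and prepare sections (the bitnami repository and the chart values here are just an illustration, not something particle ships with):
dependency:
  name: helm
  charts:
    - repository-name: bitnami
      repository-url: https://charts.bitnami.com/bitnami
prepare:
  - name: bitnami/mysql
    values:
      auth:
        rootPassword: example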
The destroy command destroys the kubernetes cluster. For example, when using the kind driver it will destroy the kind cluster on your computer.
The init command creates all the local files, from the kubernetes manifests to the particle.yml file. For example, when using the helm provisioner it basically wraps the helm create command.
The lint command checks that all the files are correctly linted. It's not integrated in particle, in the sense that it doesn't offer an API: you can pass it any number of linters, whichever you want, and they will all be executed as a shell script.
By default, it will execute helm lint.
You can see more in the linters section.
The prepare command installs the dependencies that the dependency command makes available. For example, when using the helm provisioner it will install all the declared charts.
The syntax command checks that particle.yml has the correct syntax. It basically validates its schema: if the file has keys that particle doesn't support, it will fail.
The verify command executes the tests that check that what's deployed in the cluster is in the desired state. It's not integrated in particle, in the sense that it doesn't offer an API: you can pass it any number of verifiers, whichever you want, and they will all be executed as a shell script.
By default, it will execute helm test.
You can see more in the verifiers section.
In this section we'll see how to use particle with different CI systems and different methods of installation.
In this example, we'll be testing a monorepo that contains multiple helm charts with a native installation of all the packages:
---
name: Tests
on: pull_request
jobs:
  particle:
    runs-on: ubuntu-20.04
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          submodules: recursive
          fetch-depth: 0
      - name: Install dependencies
        run: brew install bats-core kubectl little-angry-clouds/my-brews/particle
      - name: Run particle for every chart that has a change
        run: |
          changed_directories=$(git diff --diff-filter=AM --name-only remotes/origin/master HEAD | grep -vs ".github" || true)
          changed_directories=$(echo $changed_directories | xargs -n1 dirname | cut -d"/" -f1| sort -u | grep -sv "^.$")
          for directory in $changed_directories
          do
            # Run in a subshell so the working directory is restored between charts
            (cd "$directory" && particle test)
          done
You may check this repository to see a real example.
In this example we'll be testing a single helm chart using the docker image. To do so, you need to configure a docker service to avoid using the privileged flag on the CI job itself, which is always a no-go.
---
variables:
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_TLS_VERIFY: "true"
  DOCKER_CERT_PATH: "/certs/client"
  DOCKER_HOST: "tcp://docker:2376"

particle:
  services:
    - docker:20.10.8-dind
  image: littleangryclouds/particle:v0.0.8
  script:
    - apt update && apt install iproute2 -y
    - particle syntax -s gitlab
    - particle dependency -s gitlab
    - particle lint -s gitlab
    - particle cleanup -s gitlab
    - particle destroy -s gitlab
    - particle create -s gitlab
    - sed -i -E -e 's/127.0.0.1|0\.0\.0\.0|localhost/docker/g' "$HOME/.kube/config"
    - particle prepare -s gitlab
    - particle converge -s gitlab
    - particle verify -s gitlab
    - particle cleanup -s gitlab
    - particle destroy -s gitlab
  only:
    - push
This job will be triggered on every push. Instead of using the plain particle test command, every command is run explicitly. That's because it's necessary to modify the kubeconfig that kind creates: instead of pointing to localhost, as it does by default, it has to point to GitLab's docker service. This job will run on any GitLab Runner with docker and the following configuration:
[[runners]]
  name = "Docker runner"
  limit = 0
  output_limit = 4096
  url = "https://gitlab.com"
  environment = []
  token = "whatever"
  executor = "docker"
  [runners.docker]
    image = "ubuntu"
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    volumes = ["/certs/client", "/cache"]
    shm_size = 0
    privileged = true
    tls_verify = true
In short, it's necessary to set the privileged value to true, but it is restricted to the docker service. You may want to check GitLab's Docker runner documentation.
Yamllint is a linter for YAML files. It's not usable for the helm template files, since those are not plain YAML, but we can use it with the values.yaml and Chart.yaml files.
The quick way is:
sudo pip install yamllint
Check their documentation to see more ways.
driver:
  name: kind
provisioner:
  name: helm
lint: |-
  set -e
  yamllint values.yaml
  helm lint
verifier: |-
  set -e
  helm test nginx
dependency:
  name: helm
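Yamllint's defaults are fairly strict; if you want to relax them you can drop a .yamllint file next to the chart (a minimal sketch, not something particle requires):
---
extends: default
rules:
  line-length:
    max: 120
  document-start: disable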
Kubeval is a tool for validating a Kubernetes YAML or JSON configuration file. It does so using schemas generated from the Kubernetes OpenAPI specification, and therefore can validate schemas for multiple versions of Kubernetes.
If you have brew:
brew tap instrumenta/instrumenta
brew install kubeval
Check their documentation to see more ways.
driver:
  name: kind
provisioner:
  name: helm
lint: |-
  set -e
  helm template . | kubeval
  helm lint
verifier: |-
  set -e
  helm test nginx
dependency:
  name: helm
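If your chart renders custom resources or other kinds that kubeval has no schema for, validation will fail on them; kubeval's --ignore-missing-schemas flag skips those resources (whether you need it depends on your chart):
helm template . | kubeval --ignore-missing-schemas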
Kube-score is a tool that performs static code analysis of your Kubernetes object definitions. The output is a list of recommendations of what you can improve to make your application more secure and resilient.
Some of the rules are pretty opinionated, so keep that in mind.
If you have brew:
brew install kube-score/tap/kube-score
Check their documentation to see more ways.
The next example disables all the checks that would fail by default when creating an empty chart. It's your job to make your chart comply with these checks and then stop ignoring them.
---
driver:
  name: kind
provisioner:
  name: helm
lint: |-
  set -e
  yamllint values.yaml
  helm template . | kube-score score --ignore-test pod-probes --ignore-test \
    pod-networkpolicy --ignore-test container-resources --ignore-test \
    container-image-pull-policy --ignore-test container-security-context \
    --ignore-test container-image-tag --ignore-test container-security-context -
  helm lint
verifier: |-
  set -e
  helm test nginx
dependency:
  name: helm
You can see a list of the checks and their IDs here.
Kube-linter is a static analysis tool that checks Kubernetes YAML files and Helm charts for common correctness and security issues. If you have brew:
brew install kube-linter
Check their documentation to see more ways.
The next example disables all the checks that would fail by default when creating an empty chart. It's your job to make your chart comply with these checks and then stop ignoring them. To deactivate them, you should create a .kube-linter.yml file at the root of your directory with the following contents:
---
checks:
  addAllBuiltIn: true
  exclude:
    - "privileged-ports"
    - "default-service-account"
    - "no-liveness-probe"
    - "no-read-only-root-fs"
    - "no-readiness-probe"
    - "required-annotation-email"
    - "required-label-owner"
    - "run-as-non-root"
    - "unset-cpu-requirements"
    - "unset-memory-requirements"
And the following is the particle configuration:
---
driver:
  name: kind
provisioner:
  name: helm
lint: |-
  set -e
  yamllint values.yaml
  helm template . | kube-linter lint -
  helm lint
verifier: |-
  set -e
  helm test nginx
dependency:
  name: helm
You can see a list of the checks and their IDs here.
Bats (Bash Automated Testing System) is an automated testing framework for Bash; it provides an extensible way to write tests as shell scripts.
If you have brew:
brew install bats
Check their documentation to see more ways.
The next step is to install some useful bats libraries. It's a bit hacky, but it works; it's been tested.
#!/bin/bash
set -euo pipefail
# Install libraries
mkdir -p bats/lib/
git clone https://github.com/ztombol/bats-support bats/lib/bats-support
git clone https://github.com/ztombol/bats-assert bats/lib/bats-assert
detiktmp="$(mktemp -d detik.XXXXX)"
cd "${detiktmp}"
wget https://github.com/bats-core/bats-detik/archive/refs/heads/master.zip
unzip master.zip
cd ..
cp ${detiktmp}/bats-detik-master/lib/*.bash bats/lib/
rm -rf "${detiktmp}"
chmod +x bats/lib/*.bash
Save the following file at particle/default/test.bats:
# Beware, this test assumes that the chart is called "nginx" and the provisioner is helm
# It uses relative paths because of how bats loads its libraries
load "../../bats/lib/detik"
load "../../bats/lib/linter"
load "../../bats/lib/bats-support/load"
load "../../bats/lib/bats-assert/load"

DETIK_CLIENT_NAME="kubectl"

@test "verify the deployment" {
  run verify "there are 2 pods named 'nginx'"
  assert_success
  run verify "there is 1 service named 'nginx'"
  assert_success
}
Change the verifier configuration in your particle.yml file:
verifier: |-
  set -e
  bats particle/default/test.bats
If your test fails, particle won't show bats's error output by default. To see it, enable the --debug flag, as shown below.
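For example (assuming the flag is passed straight to the particle command; check particle --help for the exact usage):
particle test --debug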
There are more examples here. And this one is a real test that I use for my charts.