This document describes how zerotect can be obtained (securely) and configured, so you can build your own deployment recipes.
- "Trust Me" Quickstarts
- First-Principles Install
Everything described in this document is encapsulated in scripted recipes for various distributions and init systems. These are a great way to quickly install zerotect.
Since the scripts follow the curl-pipe-to-shell pattern, the rest of this document details how you can develop your own automation to deploy zerotect, depending on your level of trust (which may be zero trust).
To install zerotect:
curl -s -L https://github.com/polyverse/zerotect/releases/latest/download/install.sh | sh
To uninstall zerotect:
curl -s -L https://github.com/polyverse/zerotect/releases/latest/download/install.sh | sudo sh -s -- --uninstall
To get all supported options:
curl -s -L https://github.com/polyverse/zerotect/releases/latest/download/install.sh | sudo sh -s -- --help
This section deals with zerotect installation primitives (including, if necessary, compiling it from source yourself). This is especially important for security-conscious organizations that require a completely auditable trail.
Zerotect executables are posted in GitHub Releases.
The latest zerotect executable can be found here: https://github.com/polyverse/zerotect/releases/latest/download/zerotect
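For your own automation, a hedged helper like the following can derive the download URL for any release tag (the function name is illustrative; the URL patterns are standard GitHub Releases conventions):

```shell
# Derives the release-asset download URL for a given tag;
# 'latest' uses GitHub's latest-release redirect.
zerotect_release_url() {
  if [ "$1" = "latest" ]; then
    echo "https://github.com/polyverse/zerotect/releases/latest/download/zerotect"
  else
    echo "https://github.com/polyverse/zerotect/releases/download/$1/zerotect"
  fi
}
```

For example, `curl -s -L "$(zerotect_release_url latest)" -o zerotect` fetches the newest build.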
For transparency, you can study .travis.yml and the build logs to audit the pre-built binaries.
As part of the zerotect build process, container images are also built and published on GitHub, available here:
https://github.com/polyverse/zerotect/packages/199165
These are particularly useful for running as Sidecars (in Pods/Tasks) or DaemonSets (once-per-host).
More information on this usage can be found in the Cloud-Native World section.
For complete audit and assurance, you may compile zerotect from scratch. Zerotect is built in Rust.
On a system with Rust build tools available:
# clone this repository
git clone https://github.com/polyverse/zerotect.git
# Go to the repository root
cd zerotect
# Build
cargo build
All the regular Rust tooling and options work: cross-compilation, static linking, build profiles, and so forth. You may build it any way you wish.
DURABLE_ZEROTECT_LOCATION=/usr/local/bin
We recommend placing zerotect in the /usr/local/bin
directory. Since zerotect needs to run with higher privileges than a regular user, it is better not to place it under a user's home directory.
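As a minimal sketch of that placement, the helper below copies a built or downloaded binary into DURABLE_ZEROTECT_LOCATION with root-executable permissions (the function name and the optional DESTDIR staging prefix, useful for packaging, are illustrative):

```shell
# install_zerotect <binary>: place the binary with mode 0755.
# DESTDIR is an optional staging prefix; empty means install
# directly to /usr/local/bin (which requires root).
install_zerotect() {
  dest="${DESTDIR:-}${DURABLE_ZEROTECT_LOCATION:-/usr/local/bin}"
  mkdir -p "$dest"
  install -m 0755 "$1" "$dest/zerotect"
}
```

Run it as, for example, `sudo install_zerotect target/release/zerotect` after a source build.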
To ensure zerotect is running when you want it to run, and not running when you don't, you need to plan for some sort of lifecycle management. We present two main recommendations for running zerotect.
Since zerotect detects side-effects from the kernel, it is sufficient to run a single instance of zerotect per kernel. This means that traditional Linux "containers" (using cgroups and namespaces) do not need zerotect within them, so long as either the host is running it or a single container is running it.
However, "VM" containers such as Kata Containers, Firecracker VMs, and so forth warrant a zerotect instance per container, since they do not share the same kernel.
Zerotect needs to run once-per-kernel. Usually a kernel bootstraps and powers a rather complex system, and the system runs applications (and/or containers) on top of it.
In such cases, zerotect should be installed as a manageable service directly on the system.
Example 1: Some applications running on a host
application application application zerotect process directly
process 1 process 2 process 3 on kernel host/VM (not
containerized)
+--------------------------------------------------------------------------+
| |
| Linux Kernel |
| |
+--------------------------------------------------------------------------+
Example 2: Some containers running on a host
+------------+ +------------+ +------------+
| | | | | | zerotect process directly
| container1 | | container2 | | container3 | on kernel host/VM (not
| | | | | | containerized)
+------------+ +------------+ +------------+
+--------------------------------------------------------------------------+
| |
| Linux Kernel |
| |
+--------------------------------------------------------------------------+
Example 3: Some applications/containers coexisting on a host
+---------------+
| |
application | container 5 | application zerotect process directly
process 1 | | process 3 on kernel host/VM (not
+---------------+ containerized)
+--------------------------------------------------------------------------+
| |
| Linux Kernel |
| |
+--------------------------------------------------------------------------+
In all these cases, it helps to run zerotect using the init system (systemd, sysvinit, upstart, etc.)
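For the systemd case, a unit along these lines is one hedged illustration (the unit file is a sketch, not something zerotect ships; the binary and config paths follow this document's recommendations):

```ini
# /etc/systemd/system/zerotect.service (illustrative)
[Unit]
Description=Zerotect kernel side-effect detector
After=network.target

[Service]
# Binary placed per the DURABLE_ZEROTECT_LOCATION recommendation above
ExecStart=/usr/local/bin/zerotect --configfile /etc/zerotect/zerotect.toml
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now zerotect`; sysvinit and upstart have their own equivalents.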
It is also possible (and may even be desirable in some cases, such as running a DaemonSet on a Kubernetes cluster) to run zerotect as a privileged container, as in the model below.
The container itself then becomes a first-class service per host that must be managed through your preferred container-management tooling.
Example 4: zerotect as a privileged container
+-----------------------------------------+
| |
| zerotect in privileged container |
| OR sufficient access to read |
| /dev/kmsg |
| | |
+-----+-----------------------------------+
|
|
+--------------------------------------------------------------------------+
| | |
| | |
| v Linux Kernel |
| /dev/kmsg |
| |
+--------------------------------------------------------------------------+
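One hedged docker-compose sketch of the model above looks like this (the image path is an assumption based on the GitHub Packages link earlier; substitute the image you actually pull):

```yaml
# docker-compose.yml sketch (illustrative)
services:
  zerotect:
    image: docker.pkg.github.com/polyverse/zerotect/zerotect:latest  # assumed path
    privileged: true      # or, more narrowly, grant only the device:
    # devices:
    #   - /dev/kmsg:/dev/kmsg
    restart: always
```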
This leaves one question open: How is zerotect run within the container itself?
The recommended way to run zerotect in a container is directly at the entrypoint, so its maximum lifetime is that of the container. This is also very useful for testing and validating configuration options and parameters, as well as for controlled, on-demand execution.
$DURABLE_ZEROTECT_LOCATION/zerotect <options>
Zerotect's lifetime is that of your current context (shell, user session or host). It will not automatically start up when a host/container starts.
You may push the one-off, directly executed process to the background. A concrete example of this use is in online demos, where zerotect doesn't need to be durable long-term.
It also has application in a container, where you can spawn the zerotect process before the main blocking process is started, like so:
$DURABLE_ZEROTECT_LOCATION/zerotect <options> &
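In a container, that pattern typically lives in the entrypoint script. A hedged sketch, where the function name and ZEROTECT_BIN variable are illustrative (the --configfile option is documented later in this document):

```shell
# Background zerotect, then exec the main blocking process so it
# takes over the entrypoint's PID and receives signals directly.
run_with_zerotect() {
  "${ZEROTECT_BIN:-/usr/local/bin/zerotect}" --configfile /etc/zerotect/zerotect.toml &
  exec "$@"
}
```

The container's entrypoint would then invoke, for example, `run_with_zerotect my-main-process --its-args`.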
When iterating and testing, this lets you monitor un-orchestrated Docker containers (such as during Docker Desktop testing) quickly, without extra scaffolding.
If you're 100% Cloud-Native and your primitive is a Container, there are two primary ways to run zerotect as a container.
The first applies whenever you run containers orchestrated over "Nodes" (machines that you see and know about, on top of which your containers run), as with Kubernetes, Nomad, ECS, Cloud Run, OpenShift, or even plain config-management tools like Ansible/Chef/Puppet, using OCI (Docker) images purely as a packaging/deployment mechanism.
Following the principle of running one zerotect per kernel, we recommend running the zerotect container as a DaemonSet or the equivalent for your orchestrator.
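A hedged Kubernetes DaemonSet sketch of this recommendation (the image path is an assumption based on the GitHub Packages link above; adjust names and labels to your cluster):

```yaml
# daemonset.yaml sketch (illustrative)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: zerotect
spec:
  selector:
    matchLabels:
      app: zerotect
  template:
    metadata:
      labels:
        app: zerotect
    spec:
      containers:
      - name: zerotect
        image: docker.pkg.github.com/polyverse/zerotect/zerotect:latest  # assumed path
        securityContext:
          privileged: true   # needed to read /dev/kmsg on the node
```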
The second is mostly a subset of the first use case, but applies where containers are really VMs and do not share a kernel, or where you do not see the host (Azure Container Instances, AWS Fargate, etc.).
There are a number of isolation projects that make VMs look and feel like containers. These include (but are not limited to) KataContainers and Firecracker.
When multiple containers in a "Pod" or "Task" share the same kernel, it is useful to run zerotect as a sidecar within that Pod/Task.
While zerotect does take command-line parameters (documented in the main README.md), we do not recommend embedding CLI-based configuration options in your init configuration.
Instead, we recommend running it with a configuration file located under /etc/zerotect/:
$DURABLE_ZEROTECT_LOCATION/zerotect --configfile /etc/zerotect/zerotect.toml
When using a configuration file, no other command-line options are supported. To see all options available in a configuration file, read the Reference zerotect.toml file.