Incus
Incus is a modern system container and virtual machine manager developed and maintained by the same team that first created LXD. It is released under the Apache 2.0 license and run as a community-led open source project as part of the Linux Containers organization.
The Incus executor builds on top of Incus (6.0 LTS) and is under active development. Incus forms the basis for a clustered ISM deployment.
It uses the Incus REST API to manage Isabelle server instances as LXC containers. These containers are connected to ISM through native bridge networks. Mounts are realized by attaching bind-mounted disk devices at runtime.
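As a minimal illustration of the interface involved (the exact calls ISM makes are not listed here), the same REST API can be queried by hand over the local unix socket:

```sh
# Query the Incus REST API over the local unix socket.
# GET /1.0/instances lists all instances managed by this Incus daemon.
sudo curl --unix-socket /var/lib/incus/unix.socket http://incus/1.0/instances
```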
Development
To develop ISM with Incus you need a host container for running your ISM builds. This will require a working Incus installation on your development machine.
The flake in this repo provides images for an Incus development container. The image definition is contained in `nix/devcontainer.nix`; it is a NixOS 24.05 image with build tooling for ISM, Go development utilities, and isalink's utilities. To build and import the images, use `sudo sh nix/import-image.sh`.
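Once the import has finished, you can verify that the image is available; this assumes the script registers it under the `ism-dev` alias used in the launch command below:

```sh
# List imported images matching the ism-dev alias
sudo incus image list ism-dev
```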
The general workflow is supposed to be:
- Create the dev container once
- Work either in the container (with helix) or outside of it on the ISM code
- Compile ISM in the container with `make build`
- Run ISM in the container
The workflow for creating the dev container is:
- Launch a privileged container (for accessing the shared ISM repo): `sudo incus launch ism-dev dev -c security.privileged=true`
- Attach your local ISM repository to it (assuming it is in your current working directory): `sudo incus config device add dev ism disk source=$(pwd) path=/ism`
- Attach your host Incus socket to it: `sudo incus config device add dev socket disk source=/var/lib/incus path=/var/lib/incus`
- Enter a shell in the container: `sudo incus shell dev`
- In the shell, enter the ISM directory: `cd /ism`
- In the shell, build ISM: `make`
- In the shell, run ISM: `./bin/ism -v 4 -config ism.incus.yml -clean-network`
All changes to `/ism` in your `dev` container are synced via a read-write bind mount.
You can keep editing locally and only need to build (`make`) and run ISM in the container. However, you can also just edit inside the container.
Incus containers (if not ephemeral) also survive a reboot, so you can re-use this container.
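For reference, the standard Incus lifecycle commands apply to the dev container; these are plain Incus CLI commands, nothing ISM-specific:

```sh
sudo incus stop dev            # stop the dev container
sudo incus start dev           # start it again after a stop or host reboot
sudo incus delete dev --force  # remove the container entirely
```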
Storage
The Incus executor handles storage by creating disk devices with host paths on demand. These devices are reference counted and follow the same constraints/guarantees as the systemd executor's bind mounts.
There are two main differences between these devices and the mounts of the systemd executor:
- We don't have to manage the mounts themselves. We only call the API to create them, and they are automatically destroyed when an ephemeral instance is stopped.
- There is no intermediate `dataDir`; instead, we mount the source directories directly into the container. The destination path stays the same, `/data/<hash of alias>`. This is possible because we don't need to manage the mounts ourselves and can delegate mount namespace handling to Incus.
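Expressed with the Incus CLI, such a device corresponds roughly to the following; the instance name, source directory, and alias hash are illustrative placeholders, and ISM creates these devices through the API rather than the CLI:

```sh
# Attach a host directory as a disk device; Incus handles the mount itself.
# <instance>, /srv/theories and <hash-of-alias> are placeholders.
sudo incus config device add <instance> theories disk \
    source=/srv/theories path=/data/<hash-of-alias>
```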
Networking
Firewall rules
Incus is known to clash with Docker/Podman when running on the same host. This is mostly due to the firewall rules of the container management systems interfering with each other. Please consult the Incus documentation for information on how to configure your firewall, or refer to the Linux Containers discussion forum for guidance specific to your distribution.
Clash with NixOS Firewall
Your specific network setup may differ. Please check your already installed firewall rules before applying the rules below to any system.
Incus is known to clash with the default configuration of NixOS's firewall. You will have to both:
- set the Incus bridge as a trusted network interface: `networking.firewall.trustedInterfaces = [ "incusbr0" ];`
- allow traffic through the dynamic network interfaces (required for DHCP). This depends on your `executors.incus.prefix` configuration; in general, use:
networking.firewall.extraInputRules = ''
meta iifname { "veth*", "ism*" } accept
'';
networking.firewall.extraForwardRules = ''
meta iifname { "veth*", "ism*" } accept
'';
NOTE: The above settings weaken the security of your local firewall, as they are quite permissive. The intended use case here is to ease development. For production machines and/or security-sensitive devices, please tighten these rules.
You can usually track down the required ports/interface names with `tcpdump` and `ss`/`netstat`.
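For example, to watch DHCP traffic on the Incus bridge and to list listening sockets (the bridge name `incusbr0` is the Incus default and may differ on your setup):

```sh
# Watch DHCP requests/replies on the Incus bridge
sudo tcpdump -ni incusbr0 port 67 or port 68

# List listening TCP/UDP sockets together with the owning processes
sudo ss -tulpn
```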
It may also be useful to configure `executors.incus.prefix` to make ISM-managed network interfaces easily identifiable.
Why not Docker/Podman?
We actively tried to use Docker and Podman; however, their lack of runtime configuration for volumes and/or mounts made them unviable. The Incus executor was initially a port of an attempt at a Docker executor.