Hosting Linux containers and virtual machines on Raspberry Pi

Author: lch361
Creation date:
Word count: 5194

Single-board computers are often used as Linux servers. Raspberry Pi 4, for example, has enough system resources to run multiple applications in parallel. Often, these applications are run separately on different VMs or containers, in order to enhance security and maintainability. This article shows a way to set up your Raspberry Pi as a Linux container host and a hypervisor.

Table of contents

  1. Introduction
  2. Installing Alpine Linux on Raspberry Pi with ZFS root
    1. Booting Raspberry Pi into Alpine Linux boot media
    2. Upgrading Linux kernel and installing new modules on a boot media
    3. Replacing boot media with actual Alpine Linux installation
  3. Creating a local network for containers and VMs
  4. Creating and running containers via LXC
    1. Creating containers from templates
    2. Connecting container to the local network
    3. Using LXC with ZFS
    4. Using LXC with OpenRC
  5. Running virtual machines via QEMU
    1. Using EFI in AArch64 VMs via AAVMF
    2. Connecting VM to the local network
    3. Using QEMU with ZFS
    4. Connecting to QEMU VM’s console via pseudo-TTY
  6. Managing multiple LXC containers and QEMU VMs via Incus
    1. Setting up Incus from command line. Using existing ZFS pool and network interface
    2. Installing web UI
    3. Converting existing LXC containers to Incus
    4. Installing QEMU support for Incus
  7. Troubleshooting
    1. On boot, Linux didn’t import ZFS pool and mount root file system
    2. Incus instance creation failed: qemu-system-aarch64:: exit status 1
    3. Nested LXC containers fail to start in Incus
    4. Can’t mount file systems from hypervisor to Incus VM
    5. Command not found when trying to connect Incus terminal
  8. Tips and tricks
    1. Disabling useless WiFi on boot
    2. Saving disk space via ZFS compression
    3. Using one Syslog server for all Linux instances
    4. Installing and serving Incus documentation
  9. Conclusion
  10. References

Introduction

I have been configuring and maintaining many Linux services for a long time. It all started after getting my first VPS and a domain name. On that one remote Linux installation, the VPS, I had a website along with a couple of IM1 self–hosted applications.

Over time, my needs for services, resources and security have grown. I want to explore new applications that I haven’t used before. I want to be less constrained in terms of RAM and disk space. Finally, I want to make my system maintainable and robust.

To satisfy these needs, I used a Raspberry Pi 4 Model B at home. I installed Linux on it, established a VPN connection between the Raspberry Pi and the VPS, and hosted some services. I discovered containerization and successfully implemented it on Linux with ARM642. Today, this approach allows me to easily add new services, some of which are part of lch361.net, and it simplifies system management a lot (e.g. updating packages, privilege & resource separation).

Sometimes, installing and configuring several applications can be difficult, and my case was no exception. That is why I wrote this article: to share my experience, solutions, tips and tricks.

  • Chapter 2 discusses a Linux setup on Raspberry Pi, which gets complicated by installing ZFS as a root file system.
  • Chapter 3 explains the IP network setup suitable for connecting containers, VMs and host.
  • Chapter 4 introduces LXC, a containerization tool, and showcases some of its unique features.
  • Chapter 5 introduces QEMU, a virtualization tool, in similar fashion to Chapter 4.
  • Chapter 6 showcases Incus, a unified interface for managing containers and VMs, utilizing both LXC and QEMU and making use of concepts described in Chapter 4 and Chapter 5.
  • Chapter 7 lists troubles that I have encountered and solutions for them.
  • Chapter 8 lists miscellaneous advice regarding other chapters of this article.

Installing Alpine Linux on Raspberry Pi with ZFS root

As a kernel for the OS, Linux was chosen as the best-known server option. As a Linux distribution, Alpine Linux was chosen due to its spectacularly small disk size, simplicity and good Raspberry Pi support. As a file system, ZFS was chosen due to its flexibility and many advanced features useful for system administration, e.g. compression and snapshots.

Alpine Linux has a very flexible installation process, suitable for many use cases. The process, however, is not equally easy for all of them. All Alpine boot images use the setup-alpine utility, which only allows creating ext2, ext3, ext4, BTRFS and XFS file systems for root[1]. And because the Linux kernel doesn’t ship with ZFS out of the box, neither does any Alpine Linux boot image. This means that if we want Alpine Linux installed on a ZFS root, we have to perform additional installation steps and add ZFS support to the boot media. The following subchapters show how it’s done.

Booting Raspberry Pi into Alpine Linux boot media

  1. Download the Alpine Linux Raspberry Pi image from Alpine Linux downloads.
  2. Verify image integrity via its SHA-256 sum.
  3. Decompress the image.
  4. Plug the SD card into your desktop. Get its device file name.
  5. Copy the image to the SD card.
  6. Eject the SD card from your desktop.
Complete shell script doing all steps above
ALPINE_MIRROR=https://dl-cdn.alpinelinux.org
ALPINE_VERSION=3.23
ALPINE_PATCH=3
IMG_FILENAME=alpine-rpi-$ALPINE_VERSION.$ALPINE_PATCH-aarch64.img
IMG_URL="$ALPINE_MIRROR/alpine/v$ALPINE_VERSION/releases/aarch64/$IMG_FILENAME.gz"
wget "$IMG_URL"
wget "$IMG_URL.sha256" -O- | sha256sum -c
gzip -d "$IMG_FILENAME.gz"
SDCARD=/dev/mmcblk0
dd if="$IMG_FILENAME" of="$SDCARD" bs=4096 status=progress
eject "$SDCARD"

Upgrading Linux kernel and installing new modules on a boot media

If you plug the SD card from the previous chapter into Raspberry Pi, it will boot into a minimal Alpine Linux installation. It doesn’t have any default users, passwords or Internet connection, so beware of that and be ready to configure Raspberry Pi directly on the console via HDMI.

The installation image uses one FAT32 partition containing various packages, lots of device trees for many kinds of Raspberry Pi, the Raspberry Pi Linux kernel, an initramfs and a SquashFS “modloop” containing kernel modules and firmware. On boot, the initramfs installs packages from the boot media and mounts the modloop. The boot media is mounted read–only and the initramfs runs purely from tmpfs — that makes it safe to eject, modify or overwrite the boot media.

Most of the installation media listed on Alpine Linux downloads are actually in ISO 9660 format, which doesn’t allow write operations on the file system. That usually isn’t a problem for desktop installations, where a bootloader (usually UEFI) can boot from either an internal or a USB drive. Raspberry Pi, however, always boots from its SD card, meaning the card has to be both the boot media and the installation target. I guess Alpine Linux developers have thought of this situation as well; that’s why they used FAT32 instead of ISO 9660 for the file system. That makes our SD card a customizable boot device, which makes our life easier and allows us to do some useful stuff, as shown in this chapter[2].

Essentially, we just need to get ZFS kernel modules onto our boot media. Because all Linux kernel modules are stored in the SquashFS modloop, and that file system is read–only, adding a new module requires rebuilding the entire file system. We’ll use Alpine Linux’s own scripts to do that the right way[1].

  1. On Raspberry Pi’s console, log in as root; no password is required.
  2. Connect to the Internet.
    setup-interfaces
    ifup eth0  # If Raspberry Pi's connected via cable
    ifup wlan0  # If Raspberry Pi connects via WiFi
    
  3. Configure package repositories.
    setup-apkrepos
    
  4. Install packages for disk partitioning, making file system and making an initramfs. These will help us later.
    apk add sfdisk e2fsprogs mkinitfs
    
  5. Add new partition to the SD card3.
    sfdisk -fa /dev/mmcblk0 <<EOF
    start=1G,size=8G
    EOF
    
  6. Create a file system on the new partition. If you’re using ext4, you can turn off journaling to improve I/O speed.
    mkfs.ext4 -O ^has_journal /dev/mmcblk0p2
    
  7. Mount new file system somewhere.
    export TMPDIR=/tmp/update-kernel
    mkdir -p "$TMPDIR"
    mount /dev/mmcblk0p2 "$TMPDIR"
    
  8. Remount boot media read–write.
    mount -o remount,rw /media/mmcblk0p1
    
  9. Update the Linux kernel via Alpine Linux’s “update-kernel” script. This script needs at least 8 GB of temporary storage. Because most Raspberry Pis can’t spare that much free RAM, we’ll claim the space from the SD card via the previously exported TMPDIR variable[3].
    update-kernel -p zfs-rpi /media/mmcblk0p1/boot
    
  10. Reboot!
    reboot
    

Replacing boot media with actual Alpine Linux installation

In the previous chapter, Raspberry Pi was rebooted into the same Alpine Linux boot media, but with a new Linux kernel and the ZFS module available. We can use that to perform the real Alpine Linux installation, which will run not from RAM, but from the SD card. The instructions here are similar to the Alpine Linux wiki[4].

  1. On Raspberry Pi’s console, log in as root; no password is required.
  2. Start a regular Alpine Linux installation until the “Disk & Install” section, then press Ctrl+C.
    setup-alpine
    
  3. Install packages for disk partitioning, file systems and ZFS utilities. Load ZFS module for ZFS utilities to work.
    apk add sfdisk dosfstools zfs
    modprobe zfs
    
  4. Unmount the boot media.
    umount /media/mmcblk0p1
    
  5. Repartition SD card. Refresh device nodes after repartitioning.
    sfdisk --label dos /dev/mmcblk0 <<EOF
    - 300M c *
    - - - -
    EOF
    mdev -s
    

    This will create mmcblk0p1, a bootable FAT32 boot partition taking 300 MB; and mmcblk0p2, a Linux partition taking all of the remaining space.

  6. Create file systems.
    mkfs.vfat /dev/mmcblk0p1
    
  7. Create a ZFS pool. Configure the pool’s root dataset without mounting it.
    zpool create -f rpool /dev/mmcblk0p2
    zfs set -u mountpoint=/ compression=lz4 acltype=posixacl xattr=sa rpool
    

    The ZFS property configuration is optional; pick options according to your needs or preferences, aside from mountpoint, which must always be “/”.

  8. Mount the new Linux file system root.
    mkdir /media/rpool
    mount -t zfs rpool /media/rpool
    mkdir /media/rpool/boot
    mount /dev/mmcblk0p1 /media/rpool/boot
    
  9. Enable ZFS services. They’ll become enabled on the target system as well.
    rc-update add zfs-import sysinit
    rc-update add zfs-mount sysinit
    
  10. Complete the last step of setup-alpine.
    setup-disk /media/rpool
    

That should install the entire Alpine Linux system, including the Raspberry Pi Linux kernel with the necessary modules and a generated initramfs supporting ZFS. If everything went smoothly, a reboot should result in Raspberry Pi booting into Alpine Linux on a ZFS root. Mission accomplished!

Creating a local network for containers and VMs

Containers and VMs are often treated as independent servers. This means they not only have to have access to the Internet, but also have to be reachable from outside. This can be achieved by having multiple public IP addresses or by port forwarding from the router to the containers. Either way, containers and VMs have to be connected to the same LAN as Raspberry Pi. This chapter shows one easy approach to this.

LAN specified as a routing table
Destination    Subnet mask      Gateway
192.0.2.0      255.255.255.0    link-local
0.0.0.0        0.0.0.0          192.0.2.1

Feel free to use any IP addresses instead of example 192.0.2.0/24 subnet.

In achieving this topology, Linux’s network interfaces can help. They are abstractions over hardware or virtual network devices. For example, a bridge interface represents a switch, but it connects other interfaces instead of physical devices[5]. These interfaces, in turn, represent either the server’s NIC (eth0), container endpoints (veth0) or VM endpoints (tun0)[5]. That way, we can connect all containers and VMs on Raspberry Pi to the gateway as shown in the figure below:

[Figure: network interfaces on Linux hosts in the LAN]

We’ll start implementing this network with “br-lan”. Alpine Linux’s “openrc” package provides a service named /etc/init.d/networking. That service depends on the “ifupdown” program, which can bring interfaces up and down according to the configuration in the /etc/network/interfaces file. Therefore, to make our network interfaces persistent across boots, we have to edit this file.

Adding a bridge to /etc/network/interfaces, configuring Raspberry Pi's IP address
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static

auto br-lan
iface br-lan inet static
	address 192.0.2.2/24
	gateway 192.0.2.1
	bridge-ports eth0

It’s very important that the IP address is configured on the bridge, not on the NIC. The bridge will inherit its MAC address from its first slave, “eth0”. Also, eth0 is going to pass all incoming packets through to the bridge anyway, rendering IP configuration on eth0 useless.

At this point, we are ready to extend Raspberry Pi’s LAN onto containers and VMs. The methods of connecting them to the bridge are covered in later chapters.

Creating and running containers via LXC

LXC is used in this article as the system containerization tool. I like it: it’s simple, minimalistic, daemonless, lightweight4 yet functional. Containerization as I always imagined it!

On Alpine Linux, LXC is provided via the “lxc” package. Just one package to use LXC with its full feature set!

apk add lxc

Speaking of features, the following subchapters showcase some of the most useful ones with examples.

Creating containers from templates

It’s always better to start a container from a pre–created file system image than to add files from scratch. LXC can assist with this via templates[6]. Below are a couple of examples.

Creating container from Alpine Linux image, template "local"
# Installing dependency
apk add lxc-templates

# Downloading file system tree
ALPINE_MIRROR=https://dl-cdn.alpinelinux.org
ALPINE_VERSION=3.23
ALPINE_PATCH=3
FSTREE_FILENAME=alpine-minirootfs-$ALPINE_VERSION.$ALPINE_PATCH-aarch64.tar
wget "$ALPINE_MIRROR/alpine/v$ALPINE_VERSION/releases/aarch64/$FSTREE_FILENAME.gz"
gzip -d "$FSTREE_FILENAME.gz"
xz "$FSTREE_FILENAME"

# Creating container
lxc-create --name alpine-based-container --template local -- --fstree "$FSTREE_FILENAME.xz"

Even better, you don’t have to keep your template around when you can fetch it from a server:

Creating container from Alpine Linux image, template "download"
apk add lxc-download  # Installing dependency 
lxc-create --name alpine-based-container --template download -- --dist alpine --release 3.23 --arch arm64

Connecting container to the local network

To connect every new container to the bridge we configured in the previous chapter, add the following content to /etc/lxc/default.conf[7]:

lxc.net.0.type = veth
lxc.net.0.link = br-lan
lxc.net.0.flags = up
lxc.net.0.hwaddr = 10:66:6a:xx:xx:xx

This will create a virtual Ethernet pair for every container, connected to the “br-lan” bridge, with a random MAC address (each “x” digit is randomized). The virtual Ethernet endpoint on the container side will be named “eth0”.
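As a sketch of what that randomization produces, here is a tiny helper (random_mac is my own name, not an LXC command) that expands the 10:66:6a:xx:xx:xx template the same way:

```shell
# Hypothetical helper: fill the xx:xx:xx part of the hwaddr template
# with random hex octets, like LXC does when starting the container.
random_mac() {
    printf '10:66:6a:%02x:%02x:%02x\n' \
        $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))
}
random_mac
```

Each container gets its own address while keeping the 10:66:6a prefix, which makes LXC guests easy to spot in DHCP leases.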

Using LXC with ZFS

Create a ZFS dataset in the pool made in the previous chapter.

zfs create rpool/lxc

Now you can specify this dataset when creating containers[6]. This will allocate the container’s rootfs in a new ZFS file system under rpool/lxc.

lxc-create --name zfs-based-container --bdev zfs --zfsroot rpool/lxc ...

Deploying containers in ZFS gives two advantages:

  • File system size control:
    zfs set quota=20M rpool/lxc/zfs-based-container
    
  • File system snapshots5:
    zfs snapshot rpool/lxc/zfs-based-container@snap0
    

Using LXC with OpenRC

Alpine Linux has an “lxc-openrc” package, which allows starting LXC containers from the OpenRC init system.

To start a certain LXC container on boot:

  1. Symlink the OpenRC service.
    ln -s lxc /etc/init.d/lxc.foo
    rc-update add lxc.foo
    
  2. Maybe add dependencies on other LXC containers.
    ln -s lxc /etc/init.d/lxc.bar
    echo 'rc_need=lxc.bar' >> /etc/conf.d/lxc.foo
    rc-update add lxc.bar
    
  3. Start LXC containers from OpenRC:
    rc-service lxc.foo start
    

To start multiple containers on boot:

  1. Add this to LXC container’s config:
    lxc.start.auto = 1
    lxc.group = onboot
    
  2. Use OpenRC service.
    rc-update add lxc
    

Specifying dependencies between containers is less flexible with the single “lxc” service approach. However, you can still control the containers’ boot order:

echo 'lxc.start.order=1' >> /var/lib/lxc/foo/config
echo 'lxc.start.order=2' >> /var/lib/lxc/bar/config
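To double-check the resulting order without starting anything, you can read the keys back and sort them numerically, the way lxc-autostart would (the temp directory below stands in for /var/lib/lxc):

```shell
# Build two throwaway container configs and list them by lxc.start.order.
lxcpath=$(mktemp -d)
mkdir "$lxcpath/foo" "$lxcpath/bar"
echo 'lxc.start.order=1' >> "$lxcpath/foo/config"
echo 'lxc.start.order=2' >> "$lxcpath/bar/config"
for conf in "$lxcpath"/*/config; do
    order=$(sed -n 's/^lxc\.start\.order=//p' "$conf")
    printf '%s %s\n' "$order" "$(basename "$(dirname "$conf")")"
done | sort -n
```

This prints “1 foo” before “2 bar”, i.e. foo boots first.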

Running virtual machines via QEMU

QEMU is used in this article as the virtualization tool. It is a widely known interface to KVM, the Linux kernel’s hypervisor. Its interface is minimalistic and daemonless, but complex: even though QEMU can be used without a GUI, its CLI options are vast. The QEMU documentation is comprehensive, but I promise you don’t need to memorize every CLI option[8]. The relationship between QEMU and a VM manager like libvirt kinda resembles the relationship between assembly and the C programming language: everyone uses the second, but knowing the first can help solve hard problems. This is why this chapter exists, providing some examples of how to use QEMU on Alpine Linux, AArch64.

On Alpine Linux, QEMU system emulation is available via the “qemu-system-aarch64” package. Many other “qemu-system-*” packages are available for different CPU architectures, but it’s recommended to always emulate the native architecture, making use of KVM acceleration.

Some packages are provided in different repositories. For stable releases, Alpine Linux has two: “main” and “community”. All Alpine mirrors that I’ve seen provide both under similar URLs: all you have to do is change “main” at the end of the URL to “community”.

Enabling community repository and installing QEMU
cat /etc/apk/repositories | grep 'main$' | sed 's|main$|community|' >> /etc/apk/repositories
apk add qemu-system-aarch64
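To see what that pipeline does before touching the real /etc/apk/repositories, here is the same transformation run against a throwaway copy:

```shell
# Throwaway demo: for every repository URL ending in "main", append a
# matching "community" line (a temp file stands in for /etc/apk/repositories).
repos=$(mktemp)
echo 'https://dl-cdn.alpinelinux.org/alpine/v3.23/main' > "$repos"
grep 'main$' "$repos" | sed 's|main$|community|' >> "$repos"
cat "$repos"
```

The file ends up with both the main and the community URL, which is exactly what apk needs.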

Using EFI in AArch64 VMs via AAVMF

QEMU is a hardware emulator. Computers run on hardware, and they need firmware to start their OS. Therefore, QEMU VMs should use firmware (a BIOS) as well. You don’t want to pass the -kernel option to every VM you start, ain’t that right?

UEFI is a well-known standard for computer firmware, on both desktops and servers. EFI firmware can be built from EDK II, if you’d like to get involved in firmware development, of course. If not, there are packages with images built from EDK II; one of them on Alpine Linux is “ovmf”. Unfortunately, firmware is usually developed for one architecture only, because it contains architecture-specific (assembly) code, and OVMF is no exception. Luckily, for AArch64, an EFI firmware is built and shipped in the “aavmf” Alpine Linux package, which is what’s used in this chapter.

Using AAVMF in QEMU
apk add aavmf
qemu-system-aarch64 -bios /usr/share/AAVMF/QEMU_EFI.fd "$@"

"$@" here denotes “everything else”: all arguments given to a shell script.
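If "$@" is new to you, this toy function (show_args is my own, unrelated to QEMU) shows how it forwards every argument while preserving word boundaries:

```shell
# Each argument passed to show_args comes through as its own word;
# a wrapper script forwards its arguments to QEMU the same way.
show_args() { printf '[%s]' "$@"; echo; }
show_args -m 2048 -cpu host
```

This prints [-m][2048][-cpu][host]: four separate words, exactly as given.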

Now you can manage VMs just as if they were regular computers: with an EFI shell, the GRUB bootloader, etc.

Connecting VM to the local network

To connect a QEMU VM to the network described in the previous chapter, add the following two options to QEMU. The first option declares a network backend on the host; the second emulates a device connected to that network.

qemu-system-aarch64 -netdev bridge,br=br-lan,id=net0 -device e1000,netdev=net0 "$@"

Device e1000 is a simple 1 Gbit/s Ethernet adapter emulation. There are other options as well, e.g. virtio-net-pci if you like VirtIO.

On the host, a tapX device is created: a link–layer tunnel connected to br-lan and communicating with the emulated NIC in the VM.

Using QEMU with ZFS

ZFS allows allocating not only file systems in its pool, but also block volumes. While file systems have dnode structures, paths, attributes, etc., volumes are simply blocks of storage on the pool’s devices. This makes it possible to use ZFS volumes as efficiently as disk partitions: formatted with a file system like ext4 or a partition table like GPT. This chapter shows how to use ZFS volumes, created on the pool from the previous chapter, in QEMU VMs.

Actually, any file of sufficient size can be used as a block device: it can hold a file system, be mounted, be partitioned, etc. But that carries the overhead of the host file system, because all blocks are written to a file. On the contrary, creating ZFS volumes in a pool bypasses the file system layer and writes blocks directly to the devices.
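For comparison, here is the file-backed approach the paragraph mentions, sketched with a sparse file (the temp path is just for illustration, not a real VM disk):

```shell
# A sparse 4 GiB file: usable as a VM disk image, but every block of it
# goes through the host file system, unlike a ZFS volume.
img=$(mktemp)
truncate -s 4G "$img"
stat -c 'apparent size: %s bytes' "$img"
du -k "$img" | cut -f1   # blocks actually allocated: still near zero
```

The file presents a 4 GiB disk while allocating almost nothing up front; writes then pay the extra file-system indirection that a zvol avoids.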

  1. Create a ZFS block volume:
    zfs create -V 4G rpool/alpine-vm
    
  2. After creating it, a /dev/zdX device should appear corresponding to the new ZFS volume.
    ls /dev/zd*
    
  3. After finding the required device name, use it in QEMU as a raw hard drive.
    qemu-system-aarch64 -drive format=raw,file=/dev/zd0 "$@"
    

It is possible to give ZFS volume device nodes persistent names, making our work more convenient.

  1. Make sure udev daemon is running:
    apk add eudev
    rc-update add udev sysinit
    rc-update del mdev sysinit
    rc-service udev start
    

    On Alpine Linux, Busybox’s mdev is used by default. Both udev and mdev create device nodes dynamically. The first difference is that udev runs as a daemon, while mdev runs as a one-shot service: a daemon can react to hotplug events (e.g. new devices being added), while mdev has to be executed again. Another difference is in the rule syntax, which defines node names and other things.

  2. Install udev rules for ZFS block volumes:
    apk add zfs-udev
    
  3. If you created a ZFS volume before, reload udev rules and recreate devices:
    udevadm control --reload-rules
    udevadm trigger
    
    Or via OpenRC:
    rc-service udev-trigger restart
    
  4. Use volume in QEMU under a persistent name:
    qemu-system-aarch64 -drive format=raw,file=/dev/zvol/rpool/alpine-vm "$@"
    

Connecting to QEMU VM’s console via pseudo-TTY

QEMU, while being a purely CLI application, allows many ways to interact with launched VMs. This includes, but is not limited to: VNC, SPICE, desktop windows and serial consoles. The last option is perhaps the most useful when it comes to emulating headless servers.

Getting the VM’s console on the current TTY is achieved via the following option:

qemu-system-aarch64 -nographic "$@"

A serial console is created in the QEMU VM as a new device. Therefore, for Linux VMs, the EFI shell, GRUB and the Linux kernel should expect and use that device. If they don’t, you won’t see EFI logs, the GRUB menu or the Linux system whatsoever. The last one is by far the worst, so ensure that the kernel runs with the following option:

console=ttyAMA0,115200

If a Linux distribution provides VM images, like Alpine Linux downloads does, they will work out of the box, because they’re built with the QEMU console in mind.
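Appending that parameter can be scripted; cmdline.txt has to stay a single line, so sed is safer than echo >> here (a temp file stands in for the real /boot/cmdline.txt):

```shell
# Add the serial console parameter to the end of the one-line cmdline.
cmdline=$(mktemp)
echo 'root=ZFS=rpool rootfstype=zfs quiet' > "$cmdline"
sed -i 's/$/ console=ttyAMA0,115200/' "$cmdline"
cat "$cmdline"
```

echo >> would add a second line, which the Raspberry Pi bootloader ignores; sed keeps everything on the line the bootloader actually reads.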

If you want to detach QEMU from the current TTY (e.g. when daemonizing), the stdio console won’t be very useful. Luckily, you can get the same console on an entirely new pseudo-terminal (PTY)!

  1. Add following option:

    qemu-system-aarch64 -serial pty "$@"
    

    QEMU will print something like this:

    char device redirected to /dev/pts/1 (label serial0)
    
  2. Connect to PTY via any terminal communication program you like.

    With Minicom:

    apk add minicom
    minicom -D /dev/pts/1
    

    With Picocom:

    apk add picocom
    picocom /dev/pts/1
    

Managing multiple LXC containers and QEMU VMs via Incus

Because both LXC and QEMU are command line utilities, managing multiple instances with them involves typing several commands or many command arguments. This looks pretty cool every time you do it, but eventually becomes tiresome and error–prone. Monitoring instances is also troublesome, because a CLI was never a good UI for a real–time visual overview.

A simple solution to this problem is Incus — software managing both system containers (via LXC, as described in Chapter 4) and virtual machines (via QEMU, as described in Chapter 5) through a common API served on either a UNIX or a TCP socket.

Unlike LXC, which is daemonless, Incus always runs a daemon. On Raspberry Pi, I observed 60-150 MiB of memory usage by the daemon itself and 14-20 MiB for every LXC instance. Bare LXC is a much more lightweight solution, which is sometimes an important thing to consider.

On Alpine Linux, Incus is provided via the “incus” package from the “community” repository.

apk add incus
rc-service incusd start

The following subchapters demonstrate Incus’s capabilities and setup process.

Setting up Incus from command line. Using existing ZFS pool and network interface

  1. To control the Incus daemon, install a CLI client:
    apk add incus-client
    
  2. Begin basic Incus initialization:
    incus admin init
    
  3. To add the ZFS pool from the previous chapter, when asked about the storage pool, choose the following answers:
    Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
    Name of the new storage pool [default=default]: servers
    Name of the storage backend to use (dir, zfs) [default=zfs]: zfs
    Would you like to create a new zfs dataset under rpool/incus? (yes/no) [default=yes]: yes
    
  4. To connect containers and VMs to the network implemented in the previous chapter, when asked about the network bridge, choose the following:
    Would you like to create a new local network bridge? (yes/no) [default=yes]: no
    Would you like to use an existing bridge or host interface? (yes/no) [default=no]: yes
    Name of the existing bridge or host interface: br-lan
    
  5. To prepare to manage Incus via the web UI, choose the following:
    Would you like the server to be available over the network? (yes/no) [default=no]: yes
    Address to bind to (not including port) [default=all]: 
    Port to bind to [default=8443]: 
    
  6. Answer all other questions. The command should finish executing, which means that a new pool, a network device and a TCP port have been added to Incus.

Installing web UI

In step 5 of the previous chapter, Incus was bound to a TCP port, e.g. 8443. We can try to connect to it via HTTPS:

curl --insecure https://192.0.2.2:8443

But right now we’ll only get an API response:

{"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":["/1.0"]}

According to the documentation, Incus is actually capable of serving a web UI if provided with a special environment variable[9]. Here is how it’s done:

  1. There is one package in Alpine Linux’s official repositories for an Incus UI. However, it resides in the “testing” repository, so we need to enable that first:
    ALPINE_MIRROR=https://dl-cdn.alpinelinux.org
    echo "$ALPINE_MIRROR/alpine/edge/testing" >> /etc/apk/repositories
    
  2. Install the UI package:
    apk add incus-ui-canonical
    
  3. In the incusd service’s config, export INCUS_UI set to the path where “incus-ui-canonical” was installed.
    echo 'export INCUS_UI=/usr/share/incus-ui' >> /etc/conf.d/incusd
    
  4. Restart Incus.
    rc-service incusd restart
    

After these steps, if you open https://192.0.2.2:8443 in your browser, Incus will redirect you to the UI based on your user agent. Set up the login method that you prefer: TLS certificates are the simplest approach, while OpenID and OpenFGA can be used for external authentication and authorization respectively[9]. After logging in, you should see a UI like this:

[Figure: an example of the Incus UI]

Using the API is, of course, still possible, e.g. via curl.
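For example, the metadata field can be pulled out of the reply shown earlier with plain sed (jq would be nicer, but isn’t installed by default on Alpine):

```shell
# The JSON reply from the Incus API, copied from above.
response='{"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":["/1.0"]}'
# Extract the contents of the metadata array.
echo "$response" | sed -n 's/.*"metadata":\[\(.*\)\].*/\1/p'
```

This prints "/1.0", the only API version the server advertises.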

Managing Incus from the web UI is significantly easier and more enjoyable than via incus-client.

Converting existing LXC containers to Incus

What if you liked using LXC in the past, but then discovered Incus and enjoyed it even more? I personally recognized both the need to transition to Incus and the need to convert 10 LXC containers to Incus instances. This chapter shows how it can be done.

  1. Install tools for converting system containers to Incus instances.
    apk add incus-conversion
    
  2. Convert all LXC containers installed at /var/lib/lxc to Incus instances. Delete the source LXC containers afterwards.
    lxc-to-incus --all --delete
    
  3. If you configured some containers manually, be prepared that some LXC options don’t mean anything in Incus. You’ll see the following error message in such cases:
    Parsing LXC configuration
    Checking for unsupported LXC configuration keys
    Skipping container 'foo': Found unsupported config key 'lxc.group'
    
    Delete unsupported keys manually, then return to step 2.
  4. (Bonus) Delete incus-conversion if you’re planning to use LXC only from Incus.
    apk del incus-conversion
    
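Step 3’s manual cleanup can be scripted. A sketch, using a temp file in place of /var/lib/lxc/foo/config:

```shell
# Delete the unsupported lxc.group key, keep everything else.
config=$(mktemp)
printf 'lxc.start.auto = 1\nlxc.group = onboot\n' > "$config"
sed -i '/^lxc\.group/d' "$config"
cat "$config"
```

After the edit, only the supported keys remain and lxc-to-incus can be rerun.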

Installing QEMU support for Incus

While creating VM instances in the Incus UI is straightforward, additional dependencies are required that do not come with the “incus” package.

  1. Install dependencies. Package “incus-vm” is a little buggy in the sense that, no matter your CPU architecture, it will always install QEMU for x86_64, even though Incus always uses KVM. Therefore, install the right QEMU alongside it.
    apk add incus-vm qemu-system-aarch64
    

    Surprisingly, while “incus-vm” can’t figure out the proper QEMU to install, it does choose a proper EFI firmware: “ovmf” for x86_64 and “aavmf” for aarch64, mentioned previously in the QEMU chapter.

  2. Disable secure boot in the default profile (in the web UI: Configuration, Security policies, Enable secureboot (VMs only)). AAVMF doesn’t provide secure boot, so it has to be disabled in Incus.
  3. If you’re using a ZFS pool in Incus, make sure to install “zfs-udev”, as described in the previous chapter.

After all that, hopefully, it should be possible to create and start VM instances.

Troubleshooting

On boot, Linux didn’t import ZFS pool and mount root file system

The initramfs on Raspberry Pi drops into an emergency shell after failing to mount the root file system.

Check kernel’s arguments:

cat /proc/cmdline | grep root=ZFS=rpool
echo $?

If it returns 1, the bootloader is configured incorrectly.

From the same recovery shell, mount the first (/boot) partition. In the file cmdline.txt, remove all incorrect root options, then add the correct option for your ZFS root. Try starting the kernel again.

mount /dev/mmcblk0p1 /mnt
vi /mnt/cmdline.txt
cat /mnt/cmdline.txt | grep root=ZFS=rpool
reboot

Incus instance creation failed: qemu-system-aarch64:: exit status 1

I was only able to start a QEMU VM after installing the “qemu-hw-display-virtio-gpu-pci” package. I’m gonna guess it’s because Incus always provides a graphical overview of every VM, and always uses a VirtIO GPU on PCI for that.

Nested LXC containers fail to start in Incus

If Incus lists the warning “Couldn’t find the CGroup memory controller”, expect errors from LXC containers. With no memory controller in CGroup, memory limits for instances will be ignored. Also, if you try to run nested containers, you might get the following error message from LXC (even if you enabled nesting in Incus!):

Resource busy - Could not enable "+cpuset +cpu +io +pids" controllers in the unified cgroup 9

I noticed that, by default, the CGroup memory controller is for some reason disabled in the Raspberry Pi Linux kernel. Here are the contents of /proc/cmdline before the fix; notice the parameter “cgroup_disable”.

coherent_pool=1M 8250.nr_uarts=0 snd_bcm2835.enable_headphones=0 cgroup_disable=memory numa_policy=interleave nvme.max_host_mem_size_mb=0 bcm2708_fb.fbwidth=1184 bcm2708_fb.fbheight=624 bcm2708_fb.fbswap=1 vc_mem.mem_base=0x3ec00000 vc_mem.mem_size=0x40000000  root=ZFS=rpool modules=sd-mod,usb-storage,zfs quiet rootfstype=zfs

Appending “cgroup_enable=memory” to /boot/cmdline.txt and rebooting solved the issue: nested containers launched, and memory limits for instances were applied.
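The edit can be scripted the same way as the root= fix above. A sketch on a sample file in /tmp (on a real system, edit /boot/cmdline.txt; after the reboot, the controller should show up in /sys/fs/cgroup/cgroup.controllers, assuming cgroup v2):

```shell
# Sample cmdline with the offending parameter (abbreviated)
printf 'coherent_pool=1M cgroup_disable=memory root=ZFS=rpool quiet\n' > /tmp/cmdline.txt
# Drop cgroup_disable=memory and append cgroup_enable=memory
sed -i -e 's/cgroup_disable=memory //' -e 's/$/ cgroup_enable=memory/' /tmp/cmdline.txt
cat /tmp/cmdline.txt
```

Removing the disable parameter is not strictly required, since a later cgroup_enable wins, but it keeps the cmdline readable.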

Can’t mount file systems from hypervisor to Incus VM Anchor link

Unfortunately, that’s a problem with the “virtiofsd” package on AArch64. It should share host file systems with QEMU VMs by utilizing VirtIO, but instead the process just dies. There is an open issue for this problem. For now, if you really need to share file systems between the hypervisor and a VM, use NFS.

Command not found when trying to connect Incus terminal Anchor link

Check the instance’s logs for a message like this:

ERROR    attach - ../src/lxc/attach.c:lxc_attach_run_command:1841 - No such file or directory - Failed to exec "bash"

If an instance doesn’t have Bash (e.g. Alpine Linux), reconnect the terminal while specifying the sh command instead of bash.

Tips and tricks Anchor link

Disabling useless WiFi on boot Anchor link

A WiFi connection is rarely useful on servers, especially with the network configured in a previous chapter. If you’re not planning to use the WiFi interface, you can disable it at boot time.

echo 'dtoverlay=disable-wifi' >> /boot/config.txt

Or, if config.txt includes usercfg.txt:

echo 'dtoverlay=disable-wifi' >> /boot/usercfg.txt

On next reboot, Linux will start without wlan0 interface.

Saving disk space via ZFS compression Anchor link

ZFS has built-in compression, which compresses and decompresses blocks on the fly to save file system space. It can also give faster write speeds when the data is compressible, which is true for most volumes; especially for a Linux root volume, because ELF binaries compress well. I recommend enabling compression by default on every ZFS volume in rpool. The “LZ4” algorithm is one of the fastest in terms of (de)compression speed.

zfs set compression=lz4 rpool

Some volumes (including those created in Incus) are expected to hold only already-compressed data (e.g. a download cache or a package repository). In that case, turn compression off for them specifically:

zfs set compression=off rpool/incus/custom/...

I also recommend turning compression off for every block volume that holds a partition table or file system (e.g. a VM root disk), because file systems inside that volume can apply compression of their own.

zfs set compression=off rpool/incus/virtual-machines

Using one Syslog server for all Linux instances Anchor link

There is a very high chance that most, if not all, Incus instances will run a Unix-like OS. They will almost always have Syslog, and many programs use it. One Unix-like host usually runs a Syslog daemon for saving logs, a log rotation job, and the applications that send logs to the daemon. On a network, it can be easier to have every Syslog daemon forward logs to one remote host; only that host is then responsible for log rotation.

This chapter shows a centralized Syslog setup using Rsyslog on Raspberry Pi and Busybox’s syslog on Incus instances.

  1. On Raspberry Pi, install dependencies:
    apk add rsyslog logrotate
    
  2. Add the following lines to Rsyslog’s configuration, which enable listening for messages on a UDP socket.
    cat >> /etc/rsyslog.conf <<EOF
    module(load="imudp")
    input(
    	type="imudp"
    	address="192.0.2.2"
    	port="514"
    )
    EOF
    
  3. On every instance, configure Syslog daemon to send messages to remote host only.
    echo 'SYSLOGD_OPTS="-t -R 192.0.2.2"' >> /etc/conf.d/syslog
    
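The logrotate package installed in step 1 still needs a rule for the files Rsyslog writes. A minimal sketch, assuming Alpine’s default output file /var/log/messages and the OpenRC service name rsyslog; it is written to /tmp here for illustration, and on the real system you would install it as /etc/logrotate.d/rsyslog-central:

```shell
# Hypothetical logrotate rule for the centrally collected log file
cat > /tmp/rsyslog-central <<'EOF'
/var/log/messages {
    weekly
    rotate 8
    compress
    missingok
    postrotate
        rc-service rsyslog restart
    endscript
}
EOF
cat /tmp/rsyslog-central
```

Adjust the path and rotation schedule to match your rsyslog.conf and disk budget.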

Installing and serving Incus documentation Anchor link

The web UI installed in a previous chapter has many links to documentation that is supposed to be served by the Incus daemon itself; however, no Alpine Linux package provides that documentation yet. If there were a package like incus-doc, it would be possible to use the INCUS_DOCUMENTATION variable, just like with the web UI[9]. I made a merge request for incus-doc, though; if you build Incus from my branch with abuild, you’ll get Incus documentation of the same version as the Incus daemon.

Conclusion Anchor link

It is possible to host many system containers and virtual machines on a Raspberry Pi, just as on many other servers with Intel/AMD CPUs. While the process gets a little complex, it yields a cheap virtual hosting solution that can be used at home and is easy to maintain.

The support, however, is not ideal yet on AArch64 (e.g. no virtiofsd) and on Raspberry Pi (no nested KVM). A device running x86_64 would likely support virtualization better. However, because all the software used in this article is open source, there is a high probability that these problems will be noticed and fixed in the future, making Raspberry Pi a flawless virtual host.

References Anchor link

  1. Alpine Linux. "alpine-conf" (2025-12-03), ver. 3.21.0. Accessed: 2026-03-07.

  2. Alpine Linux. "Create a Bootable Device" (2025-02-05). Accessed: 2026-03-07.

  3. Alpine Linux. "Diskless mode, Upgrading a diskless system" (2025-02-28). Accessed: 2026-03-07.

  4. Alpine Linux. "Root on ZFS with native encryption" (2025-04-02). Accessed: 2026-03-07.

  5. Michail Litvak. iproute2. "ip-link(8)" (2012-12-13).

  6. Daniel Lezcano. "lxc-create(1)" (2021-06-03).

  7. Daniel Lezcano. "lxc.container.conf(5)" (2021-06-03).

  8. The QEMU Project Developers. "QEMU User Documentation" (2026), ver. 10.2.50. Accessed: 2026-03-11.

  9. Incus contributors. "Incus documentation" (2025-08-16), ver. 6.0.5. Accessed: 2026-03-14.


  1. Instant messaging.

  2. Raspberry Pi’s native CPU architecture.

  3. It’s smart to leave a gap between mmcblk0p1 and mmcblk0p2, leaving a possibility to grow the file system in case mmcblk0p1 runs out of free space during update-kernel.

  4. 1.17MiB memory usage overhead on every container running!

  5. This actually bypasses LXC’s limitation which only allows creating snapshots of a stopped container!