Introduction
I have been configuring and maintaining many Linux services for a long time. It all started after getting my first VPS and a domain name. On that single remote Linux installation, the VPS, I had a website along with a couple of self-hosted IM applications.
Over time, my needs for services, resources and security have grown. I want to explore new applications that I haven’t used before. I want to be less constrained in terms of RAM and disk space. Finally, I want to make my system maintainable and robust.
To satisfy these needs, I used a Raspberry Pi 4 Model B at home. I installed Linux on it, established a VPN connection between the Raspberry Pi and the VPS, and hosted some services. I discovered containerization and successfully implemented it on Linux with ARM64. Today, this approach allows me to easily add new services, some of which are part of lch361.net, and it simplifies system management a lot (e.g. updating packages, privilege & resource separation).
Sometimes, installing and configuring several applications can be difficult, and my case was no exception. This is why I wrote this article to share my experience, solutions, tips and tricks.
- Chapter 2 discusses a Linux setup on Raspberry Pi, which gets complicated by installing ZFS as a root file system.
- Chapter 3 explains the IP network setup suitable for connecting containers, VMs and host.
- Chapter 4 introduces LXC, a containerization tool, and showcases some of its unique features.
- Chapter 5 introduces QEMU, a virtualization tool, in similar fashion to Chapter 4.
- Chapter 6 showcases Incus, a unified interface for managing containers and VMs, utilizing both LXC and QEMU and making use of concepts described in Chapter 4 and Chapter 5.
- Chapter 7 lists troubles that I have encountered and solutions for them.
- Chapter 8 lists miscellaneous advice regarding other chapters of this article.
Installing Alpine Linux on Raspberry Pi with ZFS root
As the kernel for the OS, Linux was chosen as the most widely known server option. As the Linux distribution, Alpine Linux was chosen due to its spectacularly small disk footprint, simplicity and good Raspberry Pi support. As the file system, ZFS was chosen due to its flexibility and many advanced features useful for system administration, e.g. compression and snapshots.
Alpine Linux has a very flexible installation process, suitable for many use cases. The process, however, is not equally easy for all of them. All Alpine boot images use the setup-alpine utility, which only allows creating ext2, ext3, ext4, BTRFS and XFS file systems for root[1]. And because the Linux kernel doesn't ship with ZFS out of the box, neither does any Alpine Linux boot image.
This means that if we want to get Alpine Linux installed on a ZFS root, we have to perform additional installation steps and add ZFS support to the boot media. The following subchapters show how it's done.
Booting Raspberry Pi into Alpine Linux boot media
- Download Alpine Linux Raspberry Pi image from Alpine Linux downloads.
- Verify image integrity via sha256 sum.
- Decompress the image.
- Plug SD card to your desktop. Get its device file name.
- Copy the image to SD card.
- Eject the SD card from your desktop.
ALPINE_MIRROR=https://dl-cdn.alpinelinux.org
ALPINE_VERSION=3.23
ALPINE_PATCH=3
IMG_FILENAME=alpine-rpi-${ALPINE_VERSION}.${ALPINE_PATCH}-aarch64.img
IMG_URL="${ALPINE_MIRROR}/alpine/v${ALPINE_VERSION}/releases/aarch64/${IMG_FILENAME}.gz"
SDCARD=/dev/mmcblk0
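A minimal sketch of these steps, using the variables defined above. The checksum file name follows Alpine's usual convention; verify it against the mirror, and double-check the SD card device name before writing.

```sh
wget "$IMG_URL" "$IMG_URL.sha256"           # download image and checksum
sha256sum -c "$IMG_FILENAME.gz.sha256"      # verify image integrity
gunzip "$IMG_FILENAME.gz"                   # decompress the image

dd if="$IMG_FILENAME" of="$SDCARD" bs=4M status=progress   # copy image to SD card
sync
eject "$SDCARD"                             # if your card reader supports it
```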
Upgrading Linux kernel and installing new modules on a boot media
If you plug the SD card from the previous chapter into Raspberry Pi, it will boot into a minimal Alpine Linux installation. It doesn't have any default users, passwords or Internet connection, so be aware of that and be ready to configure Raspberry Pi directly on the console via HDMI.
The installation image uses one FAT32 partition containing various packages, device trees for many kinds of Raspberry Pi, the Raspberry Pi Linux kernel, an initramfs and a SquashFS "modloop" containing kernel modules and firmware. On boot, the initramfs installs packages from the boot media and mounts the modloop. The boot media is mounted read–only and the initramfs runs purely from tmpfs, which makes it safe to eject, modify or overwrite the boot media.
Most of the installation media listed on Alpine Linux downloads are actually in ISO 9660 format, which doesn't allow write operations on the file system. That usually isn't a problem on desktop installations, where a bootloader (usually UEFI) is able to boot from either an internal or a USB drive. Raspberry Pi, however, always boots from its SD card, meaning that the card has to become both the boot media and the installation target. I guess Alpine Linux developers have thought of this situation as well; that's why they used FAT32 instead of ISO 9660 for the file system. That classifies our SD card as a customizable boot device, which makes our life easier and allows us to do some useful stuff, as shown in this chapter[2].
Essentially, we just need to get the ZFS kernel modules onto our boot media. Because all Linux kernel modules are stored in the SquashFS, and that file system is read–only, adding a new module requires rebuilding the entire modloop image. We'll have to use Alpine Linux's own scripts to do that the right way[1].
- On Raspberry Pi's console, log in as root (no password required).
- Connect to the Internet.
- Configure package repositories.
- Install packages for disk partitioning, file system creation and initramfs generation. These will help us later.
- Add a new partition to the SD card.
- Create a file system on the new partition. If you're using ext4, you can turn off journaling to improve I/O speed.
- Mount the new file system somewhere.
- Remount the boot media read–write.
- Update the Linux kernel via Alpine Linux's "update-kernel" script. This script needs temporary storage, at least 8 GB of it. Because most Raspberry Pi models can't afford 8 GB of free RAM, we'll claim this space from the SD card via the previously exported TMPDIR variable[3]. (A sketch of all these steps follows the list.)
- Reboot!
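A rough sketch of these steps. Package names, the partition layout and the update-kernel invocation are assumptions based on the Alpine wiki[3]; adjust device names, the kernel flavor and the boot media path to your setup.

```sh
setup-interfaces                             # connect to the Internet
rc-service networking start
setup-apkrepos                               # configure package repositories
apk add sfdisk e2fsprogs mkinitfs            # partitioning, file system and initramfs tools

echo ',' | sfdisk --append /dev/mmcblk0      # new partition from the remaining free space
mdev -s                                      # refresh device nodes
mkfs.ext4 -O ^has_journal /dev/mmcblk0p2     # ext4 without journaling
mkdir -p /media/tmp
mount /dev/mmcblk0p2 /media/tmp

mount -o remount,rw /media/mmcblk0p1         # remount the boot media read-write
export TMPDIR=/media/tmp                     # update-kernel needs ~8 GB of scratch space
update-kernel --package zfs-rpi /media/mmcblk0p1   # rebuild kernel, initramfs and modloop
reboot
```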
Replacing boot media with actual Alpine Linux installation
In the previous chapter, the Raspberry Pi was rebooted into the same Alpine Linux boot media, but with a new Linux kernel and the ZFS module available. We can use that to perform the real Alpine Linux installation, which will run not from RAM, but from the SD card. The instructions here are similar to those on the Alpine Linux wiki[4].
- On Raspberry Pi's console, log in as root (no password required).
- Start performing a regular Alpine Linux installation until the "Disk & Install" section. Press Ctrl+C.
- Install packages for disk partitioning, file systems and ZFS utilities. Load the ZFS module for the ZFS utilities to work.
- Unmount the boot media.
- Repartition the SD card. Refresh device nodes after repartitioning. This will create mmcblk0p1, an EFI system partition taking 300 MB, and mmcblk0p2, a Linux file system partition taking all of the remaining space.
- Create file systems.
- Create a ZFS pool. Configure the ZFS volume without mounting it. ZFS configuration is optional: pick options according to your needs or preferences, aside from mountpoint, which must always be equal to "/".
- Mount the new Linux file system root.
- Enable ZFS services. They'll become enabled on the target system as well.
- Complete the last step of setup-alpine. (A sketch of all these steps follows the list.)
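A sketch of these steps, loosely following the Alpine "Root on ZFS" wiki[4]. The pool name, partition types and ZFS options below are assumptions; pick your own, keeping mountpoint equal to "/".

```sh
apk add zfs sfdisk dosfstools e2fsprogs      # partitioning, file systems, ZFS utilities
modprobe zfs                                 # load the ZFS module

umount /media/mmcblk0p1                      # unmount the boot media

printf ',300M,U\n,,L\n' | sfdisk /dev/mmcblk0   # 300 MB ESP + Linux partition for the rest
mdev -s                                      # refresh device nodes

mkfs.fat -F 32 /dev/mmcblk0p1                # file system for /boot

zpool create -f -O mountpoint=/ -O canmount=noauto rpool /dev/mmcblk0p2

mkdir -p /mnt/root
mount -t zfs -o zfsutil rpool /mnt/root      # mount the new Linux file system root
mkdir -p /mnt/root/boot
mount /dev/mmcblk0p1 /mnt/root/boot

rc-update add zfs-import sysinit             # ZFS services, copied onto the target system
rc-update add zfs-mount sysinit

setup-disk /mnt/root                         # the last step of setup-alpine
```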
That should install the entire Alpine Linux system, including the Raspberry Pi Linux kernel with the necessary modules and a generated initramfs supporting ZFS. If everything went smoothly, a reboot should result in Raspberry Pi booting into Alpine Linux on a ZFS root. Mission accomplished!
Creating a local network for containers and VMs
Containers and VMs are often treated as independent servers. This means they not only have to have access to the Internet, but also need to be reachable from the outside. This can be achieved by having multiple public IP addresses or by port forwarding from the router to the containers. Either way, containers and VMs have to be connected to the same LAN as the Raspberry Pi. This chapter shows one easy approach for this.
| Destination | Subnet mask | Gateway |
|---|---|---|
| 192.0.2.0 | 255.255.255.0 | link-local |
| 0.0.0.0 | 0.0.0.0 | 192.0.2.1 |
Feel free to use any IP addresses instead of the example 192.0.2.0/24 subnet.
In achieving this topology, Linux's network interfaces can help. They are abstractions over hardware or virtual network devices. For example, a bridge interface represents a switch, but it connects other interfaces instead of physical devices[5]. These interfaces, in turn, represent either the server's NIC (eth0), container endpoints (veth0) or VM endpoints (tun0)[5]. That way, we can connect all containers and VMs on Raspberry Pi to the gateway as shown in the figure below:
We'll start implementing this network from "br-lan". Alpine Linux's "openrc" package provides a service named /etc/init.d/networking. That service depends on the "ifupdown" program, which can bring certain interfaces up and down according to the configuration specified in the /etc/network/interfaces file. Therefore, in order to make our network interfaces persistent between boots, we have to edit this file.
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static

auto br-lan
iface br-lan inet static
    address 192.0.2.2/24
    gateway 192.0.2.1
    bridge-ports eth0
It's very important that the IP address is configured on the bridge, not on the NIC. The bridge will inherit its MAC address from its first slave, "eth0". Also, eth0 is going to pass all incoming packets through to the bridge anyway, rendering IP configuration on eth0 useless.
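To apply the configuration, the "networking" service has to run; a short sketch (on an installed Alpine system it usually already sits in the boot runlevel):

```sh
rc-update add networking boot
rc-service networking restart
```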
At this point, we are ready to extend Raspberry Pi's LAN onto containers and VMs. The methods of connecting them to the bridge will be covered in the following chapters.
Creating and running containers via LXC
LXC is used in this article as a system containerization tool. I like it: it's simple, minimalistic, daemonless and lightweight, yet functional. Containerization as I always imagined it!
On Alpine Linux, LXC is provided via the package "lxc". Just one package to use LXC with the full feature set!
Speaking of features, the following subchapters demonstrate some of the most useful ones with examples.
Creating containers from templates
It's always better to start a container from a pre–created file system image than to add files from scratch. LXC can assist in this via templates[6]. Below are a couple of examples.
# Installing dependency
# Downloading file system tree
ALPINE_MIRROR=https://dl-cdn.alpinelinux.org
ALPINE_VERSION=3.23
ALPINE_PATCH=3
FSTREE_FILENAME=alpine-minirootfs-${ALPINE_VERSION}.${ALPINE_PATCH}-aarch64.tar
# Creating container
Even better when you don’t have to keep your template around, and can fetch it from a server:
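One way to do that is LXC's "download" template, which fetches pre-built images from the image server. A minimal sketch: the container name, release and architecture below are examples, and the available combinations can be listed via the template's --list option.

```sh
lxc-create --name alpine-test --template download -- \
    --dist alpine --release 3.23 --arch arm64
```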
Connecting container to the local network
To connect every new container to the bridge we've configured in the previous chapter, add the following content to /etc/lxc/default.conf[7]:
lxc.net.0.type = veth
lxc.net.0.link = br-lan
lxc.net.0.flags = up
lxc.net.0.hwaddr = 10:66:6a:xx:xx:xx
This will create a virtual Ethernet pair for every container, connected to the "br-lan" bridge, with a random MAC address. The virtual Ethernet endpoint on the container side will be named "eth0".
Using LXC with ZFS
Create a ZFS volume in the pool made in the previous chapter. Now you can specify this ZFS volume when creating containers[6]. This will allocate the container's rootfs in a new ZFS volume under rpool/lxc, as shown in the sketch below.
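For example, with lxc-create's ZFS backing store (the parent dataset rpool/lxc and the container name are assumptions):

```sh
zfs create rpool/lxc                                  # parent dataset for containers

lxc-create --name alpine-test --template download \
    --bdev zfs --zfsroot rpool/lxc -- \
    --dist alpine --release 3.23 --arch arm64
```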
Deploying containers in ZFS gives two advantages, both shown in the example below:
- File system size control.
- File system snapshots.
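For example (dataset and snapshot names are assumptions):

```sh
# File system size control: cap the container's rootfs at 10 GiB
zfs set quota=10G rpool/lxc/alpine-test

# File system snapshots: snapshot before a risky change, roll back if it goes wrong
zfs snapshot rpool/lxc/alpine-test@before-upgrade
zfs rollback rpool/lxc/alpine-test@before-upgrade
```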
Using LXC with OpenRC
Alpine Linux has an "lxc-openrc" package, which allows starting LXC containers from the OpenRC init system.
To start a certain LXC container on boot:
- Symlink the OpenRC service.
- Optionally add dependencies on other LXC containers.
- Start the LXC container from OpenRC (see the sketch after this list).
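A sketch for a container named "alpine-test", assuming the symlink-per-container convention of Alpine's lxc init script:

```sh
ln -s lxc /etc/init.d/lxc.alpine-test      # per-container OpenRC service
rc-update add lxc.alpine-test default      # start it on boot

# optional dependency on another container, in /etc/conf.d/lxc.alpine-test:
#   rc_need="lxc.database"

rc-service lxc.alpine-test start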
To start multiple containers on boot:
- Add this to each LXC container's config:
  lxc.start.auto = 1
  lxc.group = "onboot"
- Use the OpenRC service (sketched below).
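A sketch of using the service; which groups get started on boot is controlled by the service's /etc/conf.d/lxc file (an assumption about Alpine's packaging):

```sh
rc-update add lxc default
rc-service lxc start
```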
Specifying dependencies between containers is less flexible when using the approach with one “lxc” service. However, you can still control containers’ boot order:
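For example, with these per-container configuration keys from lxc.container.conf(5) (values are illustrative):

```
lxc.start.order = 10    # containers are started according to this value
lxc.start.delay = 5     # wait 5 seconds after starting this container
```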
Running virtual machines via QEMU
QEMU is used in this article as the virtualization tool. It is a widely known interface to KVM, a Linux type 2 hypervisor. Its interface is minimalistic and daemonless, but complex: even though QEMU can be used without a GUI, its CLI options are vast. The QEMU documentation is comprehensive, but I promise you don't need to memorize every CLI option[8]. The relationship between QEMU and some VM manager, like libvirt, kind of resembles the relationship between assembly and the C programming language: everyone uses the latter, but knowing the former can help solve hard problems. This is why this chapter still exists, providing some examples of how to use QEMU on Alpine Linux, AArch64.
On Alpine Linux, QEMU system emulation is available via the "qemu-system-aarch64" package. Many other "qemu-system-*" packages are available for different CPU architectures, but it's recommended to always emulate the native architecture, making use of KVM acceleration.
Some packages are provided in different repositories. For stable releases, Alpine Linux has two: "main" and "community". All Alpine mirrors that I've seen provide both under similar URLs: all you have to do is change "main" at the end of the URL to "community".
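A sketch of enabling "community" and installing QEMU (version and mirror match the ones used earlier):

```sh
echo "https://dl-cdn.alpinelinux.org/alpine/v3.23/community" >> /etc/apk/repositories
apk update
apk add qemu-system-aarch64
```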
Using EFI in AArch64 VMs via AAVMF
QEMU is a hardware emulator. Computers run on hardware. Computers need firmware to start their OS. Therefore, QEMU VMs should use firmware (a BIOS) as well. You don't want to provide the -kernel option to every VM you start, do you?
UEFI is a well-known standard for computer firmware, on both desktops and servers. On Linux, EFI firmware can be built using EDK II, if you'd like to get involved in firmware development, of course. If not, there are packages for images built by EDK II. One of them is "ovmf" on Alpine Linux. Unfortunately, such firmware is usually only developed for one architecture, because firmware always uses assembly in its code, and OVMF is no exception. Luckily, for AArch64, an EFI firmware is built and shipped in the "aavmf" Alpine Linux package, which is what's used in this chapter.
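A minimal wrapper sketch. The firmware path is an assumption (check it with "apk info -L aavmf"), and the machine options are just sensible defaults for a KVM-accelerated AArch64 guest:

```sh
#!/bin/sh
# start-vm.sh: boot an AArch64 VM with the AAVMF EFI firmware
exec qemu-system-aarch64 \
    -machine virt -cpu host -accel kvm \
    -m 1024 -smp 2 \
    -bios /usr/share/AAVMF/QEMU_EFI.fd \
    "$@"
```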
"$@"here denotes “everything else”: all arguments given to a shell script.
Now you can manage VMs just as if they were regular computers: with an EFI shell, the GRUB bootloader, etc.
Connecting VM to the local network
To connect a QEMU VM to the network described in the previous chapter, add the following two options to QEMU. The first option declares a network backend on the host, the second emulates a device connected to it.
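A sketch of the two options. They rely on QEMU's bridge helper, which needs "allow br-lan" in /etc/qemu/bridge.conf; the MAC address is an example:

```sh
    -netdev bridge,id=lan0,br=br-lan \
    -device e1000,netdev=lan0,mac=10:66:6a:00:00:01
```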
The e1000 device is a simple 1 Gbit/s Ethernet card emulation. There are other options as well, e.g. virtio-net-pci if you like VirtIO.
On the host, a tapX device is created: a link–layer tunnel connected to br-lan and communicating with the emulated NIC in the VM.
Using QEMU with ZFS
ZFS allows allocating not only file systems in its pool, but also block volumes. While file systems have a dnode structure, paths, attributes, etc., volumes simply consist of blocks on the pool's devices. This allows using ZFS volumes as efficiently as disk partitions, formatted with a file system like ext4 or a partition table like GPT. This chapter shows how to use ZFS volumes, created on the pool from a previous chapter, in QEMU VMs.
Actually, any file of sufficient size can be used as a block device: it can hold a file system, be mounted, be partitioned, etc. But a file adds overhead, because all of its blocks are written through another file system. Creating ZFS volumes in a pool, on the contrary, bypasses the file system layer and writes blocks directly to the pool's devices.
- Create a ZFS block volume.
- After creating it, a /dev/zdX device should appear, corresponding to the new ZFS volume.
- After finding the required device name, use it in QEMU as a raw hard drive (see the example after this list).
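For example (volume name and size are assumptions):

```sh
zfs create -V 20G rpool/vm-test     # create a 20 GiB block volume

ls -l /dev/zd*                      # find the zdX node corresponding to it

# then pass it to QEMU as a raw drive:
#   -drive file=/dev/zd0,format=raw,if=virtio
```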
It is also possible to give ZFS volumes persistent device names, making our work more convenient.
- Make sure a udev daemon is running. On Alpine Linux, Busybox's mdev is used by default. Both udev and mdev are used for dynamic device node allocation. The first difference is that udev runs as a daemon, while mdev runs as a one-shot service. A daemon can react to hotplug events (e.g. new devices being added), while mdev has to be executed again. Another difference is in the rule syntax, which defines node names and other things.
- Install udev rules for ZFS block volumes.
- If you created a ZFS volume before, reload the udev rules and recreate devices. Or do it via OpenRC.
- Use the volume in QEMU under a persistent name (all of this is sketched after the list).
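A sketch of these steps; package and service names are Alpine's, and the /dev/zvol path is what the ZFS udev rules provide:

```sh
setup-devd udev                     # switch from mdev to the udev daemon
apk add zfs-udev                    # udev rules for ZFS block volumes

udevadm control --reload            # reload rules for an already-created volume
udevadm trigger
rc-service udev-trigger restart     # or via OpenRC

# use the persistent name in QEMU:
#   -drive file=/dev/zvol/rpool/vm-test,format=raw,if=virtio
```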
Connecting to QEMU VM’s console via pseudo-TTY
QEMU, while being a purely CLI application, allows many ways to interact with launched VMs. This includes, but is not limited to: VNC, SPICE, desktop windows and serial consoles. The last option is, perhaps, the most useful when it comes to emulating headless servers.
Getting the VM's console on the same TTY is achieved via the following option:
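One way to do it (a sketch, with graphical output disabled):

```sh
    -display none -serial mon:stdio
```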
A serial console is created in the QEMU VM as a new device. Therefore, for Linux VMs, the EFI shell, GRUB and the Linux kernel should expect and use that device. If not, you won't see EFI logs, the GRUB menu or the Linux system whatsoever. The last one is by far the worst, therefore ensure that the kernel runs with the following option:
console=ttyAMA0,115200
If a Linux distribution provides VM images, like Alpine Linux downloads, they will work out of the box, because they’re built with QEMU console in mind.
If you want to detach QEMU from the current TTY (e.g. when daemonizing), the stdio console won't be very useful. Luckily, you can get the same console, but on an entirely new pseudo-terminal (PTY)!
- Add the following option (shown in the sketch after this list). QEMU will print something like this:
  char device redirected to /dev/pts/1 (label serial0)
- Connect to the PTY via any terminal communication program you like, e.g. Minicom or Picocom (also in the sketch below).
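A sketch of both steps; the PTY path comes from QEMU's own output:

```sh
    -serial pty                     # instead of stdio

minicom -D /dev/pts/1               # with Minicom
picocom /dev/pts/1                  # with Picocom
```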
Managing multiple LXC containers and QEMU VMs via Incus
Because both LXC and QEMU are command line utilities, managing multiple instances with them involves typing several commands or many command arguments. This looks pretty cool every time you do it, but eventually becomes tiresome and error-prone. Monitoring instances is also troublesome, because a CLI was never a good UI for a real–time visual overview.
A simple solution to this problem is Incus: software that manages both system containers (LXC, as described in Chapter 4) and virtual machines (QEMU, as described in Chapter 5) via a common API served on either a UNIX or a TCP socket.
Unlike LXC, which is daemonless, Incus always runs a daemon. On Raspberry Pi, I observed a 60-150 MiB memory usage by the daemon itself and 14-20 MiB for every LXC instance. Bare LXC is a much more lightweight solution, sometimes an important thing to consider.
On Alpine Linux, Incus is provided via package “incus” from “community” repository.
The following subchapters demonstrate Incus’s capabilities and setup process.
Setting up Incus from command line. Using existing ZFS pool and network interface
- To control the Incus daemon, install a CLI client.
- Begin basic Incus initialization (both steps are sketched after this list).
- To add the ZFS pool created earlier, when asked about the storage pool, choose the following answers:
  Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
  Name of the new storage pool [default=default]: servers
  Name of the storage backend to use (dir, zfs) [default=zfs]: zfs
  Would you like to create a new zfs dataset under rpool/incus? (yes/no) [default=yes]: yes
- To connect containers and VMs to the network implemented earlier, when asked about the network bridge, choose the following:
  Would you like to create a new local network bridge? (yes/no) [default=yes]: no
  Would you like to use an existing bridge or host interface? (yes/no) [default=no]: yes
  Name of the existing bridge or host interface: br-lan
- To prepare to manage Incus via the web UI, choose the following:
  Would you like the server to be available over the network? (yes/no) [default=no]: yes
  Address to bind to (not including port) [default=all]:
  Port to bind to [default=8443]:
- Answer all other questions. The command should finish executing, which means that a new pool, a network device and a TCP port have been added to Incus.
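A sketch of the first two steps (package and service names are Alpine's):

```sh
apk add incus incus-client
rc-update add incusd default
rc-service incusd start

incus admin init                    # interactive initialization; answer as shown above
```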
Installing web UI
In the previous chapter, step 5, Incus was bound to some TCP port, e.g. 8443. We can try to connect to it via HTTPS:
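For example (the server certificate is self-signed at this point, so skip verification):

```sh
curl --insecure https://192.168.0.2:8443
```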
But right now we’ll only get an API response:
According to the documentation, Incus is actually capable of serving a web UI if provided with a special environment variable[9]. Here is how it’s done:
- There is one package in Alpine Linux's official repositories for an Incus UI. However, it resides in the "testing" repository, so we need to enable it.
- Install the UI package.
- In the incusd service's config, export INCUS_UI and set it to the path where "incus-ui-canonical" was installed.
- Restart Incus (all four steps are sketched below).
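A sketch of these steps. The installed UI path is an assumption; check it with "apk info -L incus-ui-canonical":

```sh
echo "https://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
apk update
apk add incus-ui-canonical

# in the incusd service's config (/etc/conf.d/incusd):
#   export INCUS_UI=/usr/share/incus/ui

rc-service incusd restart
```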
After these steps, if you enter https://192.168.0.2:8443 in your browser, Incus will redirect you to the UI, based on your user agent. Set up the login method that you prefer: TLS certificates are the simplest approach, while OpenID and OpenFGA can be used for external authentication and authorization respectively[9]. After logging in, you should see a UI like this:
Using the API is, of course, still possible, e.g. via curl. Managing Incus from the web UI, though, is going to be significantly easier and more enjoyable than via incus-client.
Converting existing LXC containers to Incus
What if you liked using LXC in the past, but discovered Incus and enjoyed it even more? I personally recognized both the need to transition to Incus and the need to convert 10 LXC containers to Incus instances. This chapter shows how it can be done.
- Install tools for converting system containers to Incus instances.
- Convert all LXC containers installed at /var/lib/lxc to Incus instances. Delete the source LXC containers afterwards (see the sketch after this list).
- If you did manual configuration of some containers, be prepared that some LXC options don't mean anything in Incus. You'll see the following error message on such an occasion:
  Parsing LXC configuration
  Checking for unsupported LXC configuration keys
  Skipping container 'foo': Found unsupported config key 'lxc.group'
  Delete unsupported keys manually, then return to step 2.
- (Bonus) Delete incus-conversion if you're planning to use LXC only from Incus.
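A sketch; check "lxc-to-incus --help" for the flags that select containers and delete the LXC originals:

```sh
apk add incus-conversion

lxc-to-incus --all                  # convert every container found under /var/lib/lxc

apk del incus-conversion            # (bonus) drop the tool once LXC is only used from Incus
```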
Installing QEMU support for Incus
While creating VM instances in the Incus UI is straightforward, additional dependencies are required that do not come with the "incus" package.
- Install dependencies (see the sketch after this list). The "incus-vm" package is a little buggy in the sense that, no matter your CPU architecture, it will always install QEMU for x86_64, even though Incus always uses KVM. Therefore, install the right QEMU alongside.
Surprisingly, while "incus-vm" can't figure out the proper QEMU to install, it does choose a proper EFI firmware: "ovmf" for x86_64 and "aavmf" for aarch64, mentioned previously in the QEMU chapter.
- Disable secure boot in default profile (in web UI: Configuration, Security policies, Enable secureboot (VMs only)). AAVMF doesn’t provide secure boot, therefore it’s necessary to disable it in Incus.
- If you’re using ZFS pool in Incus, make sure to install “zfs-udev”, as described in previous chapter.
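A sketch of the extra packages for a Raspberry Pi (aarch64) and of the secure boot setting; the latter can also be flipped from the CLI instead of the web UI:

```sh
apk add incus-vm qemu-system-aarch64 zfs-udev

incus profile set default security.secureboot=false
```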
After all that, hopefully, it should be possible to create and start VM instances.
Troubleshooting
On boot, Linux didn’t import ZFS pool and mount root file system
The initramfs on Raspberry Pi drops into an emergency shell after failing to mount the root file system.
Check the kernel's arguments:
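One way to check (a guess at what the original check looked like; grep exits with status 1 when the pattern is missing):

```sh
grep 'root=ZFS' /proc/cmdline; echo $?
```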
If it returns 1, the bootloader is configured incorrectly.
From the same recovery shell, mount the first (/boot) partition. In the file cmdline.txt, remove all incorrect root options, then add a correct option for your ZFS root. Try starting the kernel again.
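A sketch of the fix from the emergency shell; device name and root option are examples matching the setup above:

```sh
mkdir -p /media/boot
mount -t vfat /dev/mmcblk0p1 /media/boot
vi /media/boot/cmdline.txt   # drop wrong root= options, keep e.g. root=ZFS=rpool rootfstype=zfs
umount /media/boot
reboot -f
```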
Incus instance creation failed: qemu-system-aarch64:: exit status 1
I was only able to finally start a QEMU VM after installing the "qemu-hw-display-virtio-gpu-pci" package. My guess is that this is because Incus always provides a graphical overview of every VM, and always uses a VirtIO GPU on PCI for that.
Nested LXC containers fail to start in Incus
If Incus lists the warning: “Couldn’t find the CGroup memory controller”, expect errors from LXC containers. With no memory controller in CGroup, memory limits for instances will be ignored. Also, if you try to run nested containers, you might get the following error message from LXC (even if you enabled nesting in Incus!):
Resource busy - Could not enable "+cpuset +cpu +io +pids" controllers in the unified cgroup 9
I noticed that by default, for some reason, the cgroup memory controller is disabled in the Raspberry Pi Linux kernel. Here are the contents of /proc/cmdline before the fix. Notice the parameter "cgroup_disable".
coherent_pool=1M 8250.nr_uarts=0 snd_bcm2835.enable_headphones=0 cgroup_disable=memory numa_policy=interleave nvme.max_host_mem_size_mb=0 bcm2708_fb.fbwidth=1184 bcm2708_fb.fbheight=624 bcm2708_fb.fbswap=1 vc_mem.mem_base=0x3ec00000 vc_mem.mem_size=0x40000000 root=ZFS=rpool modules=sd-mod,usb-storage,zfs quiet rootfstype=zfs
Appending "cgroup_enable=memory" to /boot/cmdline.txt and rebooting solved the issue, allowing nested containers to launch and memory limits to be set for instances.
Can’t mount file systems from hypervisor to Incus VM
Unfortunately, that's a problem with the "virtiofsd" package on AArch64. It should share host file systems with QEMU VMs by utilizing VirtIO, but instead the process just dies. There is an issue for this problem. For now, if you really need to share file systems between a hypervisor and a VM, use NFS.
Command not found when trying to connect Incus terminal
Check the instance's logs for a message like this:
ERROR attach - ../src/lxc/attach.c:lxc_attach_run_command:1841 - No such file or directory - Failed to exec "bash"
If an instance doesn't have Bash (e.g. Alpine Linux), reconnect to the terminal while specifying the sh command instead of bash.
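From the CLI that would be, for example:

```sh
incus exec alpine-test -- sh
```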
Tips and tricks
Disabling useless WiFi on boot
A WiFi connection is rarely useful on servers, especially when using the network configured earlier. If you're not planning to use the WiFi interface, it is possible to disable it at boot time by adding one line to config.txt on the boot partition.
Or, if config.txt includes usercfg.txt, put the line there instead.
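The line in question is a standard Raspberry Pi firmware overlay:

```
dtoverlay=disable-wifi
```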
On next reboot, Linux will start without wlan0 interface.
Saving disk space via ZFS compression
ZFS has built-in compression, which compresses and decompresses blocks on the fly in order to save file system space. It can also give faster write speeds if the data is susceptible to compression. This is usually true for most volumes, and especially for a Linux root volume, because ELF binaries compress easily. I recommend enabling compression by default on every ZFS volume in rpool. The "LZ4" algorithm is one of the fastest in terms of (de)compression speed.
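For example:

```sh
zfs set compression=lz4 rpool       # child datasets inherit the setting
```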
Some volumes (including some created by Incus) are only supposed to hold already-compressed data (e.g. a download cache or a package repository). In that case, turn off compression for them specifically:
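For example (the dataset name is an assumption):

```sh
zfs set compression=off rpool/incus/cache
```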
I also recommend turning compression off for every block volume that’s used as a partition table/file system (e.g. VM root disk), because file systems in that volume can have a compression of their own.
Using one Syslog server for all Linux instances
There is a very high chance that most, if not all, Incus instances will run a Unix-like OS. They will almost always have Syslog, and many programs use it. A single Unix-like host usually runs a Syslog daemon for saving logs, a log rotation job, and applications which send logs to the daemon. On a network, it can be easier for each Syslog daemon to send logs to one remote host; only that host will then be responsible for log rotation.
This chapter shows a centralized Syslog setup using Rsyslog on Raspberry Pi
and Busybox’s syslog on Incus instances.
- On Raspberry Pi, install dependencies.
- Add the following lines to Rsyslog's configuration, which enable listening for messages on a UDP socket.
- On every instance, configure the Syslog daemon to send messages to the remote host only. (All three steps are sketched after this list.)
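A sketch of all three steps. The server address comes from the example network; the Rsyslog directives and the Busybox syslogd flag are the standard ones:

```sh
# 1. on the Raspberry Pi
apk add rsyslog
rc-update add rsyslog default

# 2. in /etc/rsyslog.conf: listen for messages on UDP port 514
#      module(load="imudp")
#      input(type="imudp" port="514")
rc-service rsyslog restart

# 3. on every instance: Busybox syslogd, send to the remote host only
#    in /etc/conf.d/syslog:
#      SYSLOGD_OPTS="-R 192.0.2.2"
rc-service syslog restart
```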
Installing and serving Incus documentation
The web UI installed in the previous chapter has many links to documentation that should be served by the Incus daemon itself; however, no package on Alpine Linux provides that documentation yet. If there were a package like incus-doc, it would be possible to use the INCUS_DOCUMENTATION variable, just like with the web UI[9]. I made a merge request for incus-doc though; if you build Incus from my branch with abuild, you'll get Incus documentation of the same version as the Incus daemon.
Conclusion
It is possible to host many system containers and virtual machines on a Raspberry Pi, just as on many other servers with Intel/AMD CPUs. While the process gets a little complex, it allows setting up a cheap virtual hosting solution that can be used at home and is easy to maintain.
The support, however, is not ideal yet on AArch64 (e.g. no virtiofsd) and Raspberry Pi (no nested KVM). Perhaps another device running x86_64 would support virtualization better. However, because all software used in this article is open source, there is a high probability that these problems will be noticed and fixed in the future, making Raspberry Pi a flawless virtual host.
References
Alpine Linux. "alpine-conf" (2025-12-03), ver. 3.21.0. Accessed: 2026-03-07.
Alpine Linux. "Create a Bootable Device" (2025-02-05). Accessed: 2026-03-07.
Alpine Linux. "Diskless mode, Upgrading a diskless system" (2025-02-28). Accessed: 2026-03-07.
Alpine Linux. "Root on ZFS with native encryption" (2025-04-02). Accessed: 2026-03-07.
Michail Litvak. iproute2. "ip-link(8)" (2012-12-13).
Daniel Lezcano. "lxc-create(1)" (2021-06-03).
Daniel Lezcano. "lxc.container.conf(5)" (2021-06-03).
The QEMU Project Developers. "QEMU User Documentation" (2026), ver. 10.2.50. Accessed: 2026-03-11.
Incus contributors. "Incus documentation" (2025-08-16), ver. 6.0.5. Accessed: 2026-03-14.