Virtual Desktop Infrastructure (VDI) is quite a buzzword in enterprise computing right now, and it’s something I’d like to experiment with more in my homelab. Essentially, it’s a new way to describe old-school terminal servers, but with modern features and marketing. The primary difference is that VDI normally implies each ‘seat’ is a virtual machine with its own dedicated resources, as opposed to a terminal session running on a shared server. With VDI, an admin can centralize all of the compute resources while the end devices only need to provide an interface (video / keyboard / mouse), and can also guarantee resources such as RAM or a GPU to each virtual desktop (something a terminal server does not do). This means the end devices can be significantly cheaper, since they aren’t doing much real work, although they now have to decode a video stream of the virtual desktop.

In my specific use case, I would like to use a Raspberry Pi attached to the back of the monitor as a general purpose PC in the kitchen. I could just use the Pi itself, or a more expensive device like a NUC, but I already have a Raspberry Pi B+ and a perfectly useful server, so putting compute resources on the server would be ideal for me. Plus, I’d like to expand my knowledge of the different methods for VDI over the next few months, and this is a good start.

My goals for the experiment:

  • Use a Raspberry Pi B+ as a usable general purpose web browsing desktop
  • Host the general purpose desktop on my lab server (Proxmox)
  • Boot the Raspberry Pi directly into the server session without needing to log in to the Pi or launch the session.
  • Must support both Linux and Windows targets


Video Form

I produced a video which covers some of these topics. While recording the video, I was able to find a Raspberry Pi 2 Model B; the article was written with an original Raspberry Pi, and the Pi 2’s performance is far more usable for general desktop work, though still not for video playback.

Setup Linux VM for testing

I’ll use Ubuntu 21.10 Desktop for this, since it’s the latest version of Ubuntu as of this writing. I created a new VM in Proxmox with pretty minimal hardware:

  • 4 CPUs
  • 2GB RAM
  • 32GB SCSI disk - emulated SSD, enable discard (TRIM support)
  • Graphics as SPICE
  • Add a device for audio, using ich9-intel-hda driver, with a SPICE backend
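For reference, the same VM could be created from the Proxmox host's shell with the qm tool. This is only a sketch: the VM ID (100), storage name (local-lvm), bridge, and ISO path are assumptions you'd adjust for your own host.

```shell
# Sketch: create the Ubuntu test VM from the Proxmox host's shell.
# VM ID, storage, bridge, and ISO path below are assumptions - adjust to taste.
qm create 100 \
  --name ubuntu-vdi \
  --cores 4 \
  --memory 2048 \
  --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:32,ssd=1,discard=on \
  --vga qxl \
  --audio0 device=ich9-intel-hda,driver=spice \
  --net0 virtio,bridge=vmbr0 \
  --cdrom local:iso/ubuntu-21.10-desktop-amd64.iso
```

Setting --vga qxl is the CLI equivalent of choosing SPICE graphics in the web UI, and the audio0 line matches the ich9-intel-hda / SPICE backend from the list above.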

After creating the VM, I booted it to run the installation. Since the graphics are SPICE, I need to use a SPICE client on my workstation (instead of the usual noVNC web app). I installed the Windows build of virt-viewer from the virt-manager project, which includes remote-viewer, a client designed for exactly this purpose. I selected a normal desktop install for Ubuntu and let it run. I had absolutely no issues with SPICE integration on my workstation: the Ubuntu VM integrated the mouse seamlessly with the host, and keyboard input went to the guest whenever the mouse was over the guest window, as expected.
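Connecting manually from a workstation is simple: clicking Console -> SPICE in the Proxmox web UI downloads a short-lived connection file, which you then open with remote-viewer. The filename below is an assumption (it's whatever your browser saves it as):

```shell
# The Proxmox web UI's Console -> SPICE button downloads a short-lived
# connection file (name assumed here as 'pve-spice.vv').
# Open it with remote-viewer before the embedded ticket expires:
remote-viewer pve-spice.vv
```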

Setup Windows VM for testing

I also wanted to prove that this approach to VDI works with Windows, so I installed Windows 10 64-bit in a VM as well. It’s not as happy with little RAM as Linux is, so it gets 4GB. My Proxmox test host isn’t exactly well endowed with memory. The rest of the setup is the same, emulated SSD, enable TRIM, graphics and sound as SPICE.

Windows installed fine with SPICE, but once it was installed and rebooted, the SPICE keyboard and mouse didn’t work. I stopped the VM, switched the graphics back to the default (VGA emulation), and rebooted it. Now, using noVNC, they worked again. With this, I was able to get Windows to boot, let it deal with a ton of updates (as is Windows’ way), and then it made me select my region and keyboard layout (all things Ubuntu did while it was copying files), and then it ‘had some important setup to do’ and left me waiting again. What a pain. Windows also really didn’t want me to set up an offline account, telling me what a ’limited experience’ it would be. I persevered and got my offline account on my test VM. The whole setup experience is maddening compared to Linux. Plus, Microsoft Edge demanded a setup wizard of its own. I don’t know how people live with this.

After all of this, I installed the qemu guest drivers and SPICE guest drivers and restarted the VM, re-enabling SPICE.

Setting Up Proxmox Authentication

We’re eventually going to need to authenticate with the Proxmox API to download the temporary authentication ticket for the SPICE proxy. The username and password have to be hardcoded into the shell script which will launch on boot on the Pi, so rather than embedding our Proxmox admin password there, we can create a new user just for this purpose and give it permission only to view the console of our VDI VMs. I’m using a single Proxmox host with built-in authentication; if you’re using a more complex authentication method like AD or LDAP, you’ll have to adapt this on your own.

First, I created a new role (‘VDIViewer’) which only has the VM.Console privilege. This means our user can view the console and do nothing else. Console access is quite powerful within the VM, but it grants no access to the host. You might also want to give this role the VM.Config.CDROM and VM.PowerMgmt privileges so users can insert their own virtual CDs and restart their VM when it dies or if they accidentally power it off, but that’s beyond the scope of this example.

Next, I created a new user in the pve (Proxmox VE Authentication Server) realm, and gave it a password.

Finally, I went to the VM(s) I wanted it to access, scrolled down to Permissions, and added the user to the VM with the role VDIViewer.

If I had a larger number of VMs or a pool of VMs and VDI users, I could also make a user group and pool, add all of the VMs to the pool, and add the user group with the role VDIViewer. Since I only have one of each, just adding the user directly to the VM is easiest.
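The same role/user/permission setup can also be sketched from the Proxmox host's shell with pveum. The names below match the example above; the password is a placeholder, and on older Proxmox releases the ACL subcommand is spelled 'pveum aclmod' rather than 'pveum acl modify'.

```shell
# Sketch: console-only VDI user via the Proxmox CLI (run on the PVE host).
# Role that can only open the VM console:
pveum role add VDIViewer --privs "VM.Console"

# User in the built-in pve realm (password here is a placeholder):
pveum user add vdiuser@pve --password 'changeme'

# Grant the role on VM 100 only:
pveum acl modify /vms/100 --users vdiuser@pve --roles VDIViewer
```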

Setup the Raspberry Pi

Raspberry Pi OS Lite Setup

All I had free at the time was a Raspberry Pi Model B+ rev 1.2 (the original single-core Pi, upgraded to the 40-pin header). Hopefully it can handle SPICE. I downloaded the Lite version of Raspberry Pi OS Bullseye (NOT the desktop version) and imaged it to a new SD card. I also enabled SSH by creating an empty file named ‘ssh’ in the boot partition. After this, the Pi booted up and was ready for me to start.

As with any new Pi, we need to run raspi-config and do all of the usual new-Pi setup:

sudo raspi-config

The important options here are:

  • System Options -> Hostname, set it to something unique (I used ‘vdiclient’ for this example)
  • System Options -> Password, change it from ‘raspberry’
  • System Options -> Boot / Auto Login, set to Console Autologin
  • System Options -> Network at Boot, set to Yes so it won’t log in until the network is available (since we kinda need the network to be a thin client)
  • Localization Options -> Timezone, set to the right time zone

Once finished, reboot like it asks. It should now auto-login on the physical console. If you need to use Wi-Fi, you can also configure that here; I use wired Ethernet.
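If you're setting up several of these, raspi-config also has a noninteractive mode that can script the same options. The function names below are assumptions taken from the raspi-config source, so verify them against your image before relying on them:

```shell
# Sketch: scripting the raspi-config options noninteractively.
# Function names assumed from the raspi-config source - verify on your image.
sudo raspi-config nonint do_hostname vdiclient
sudo raspi-config nonint do_boot_behaviour B2   # B2 = console autologin
```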

And, of course, we need to do updates!

sudo apt update
sudo apt upgrade

Minimal GUI Dependencies

Since I want to minimize the amount of software I install on this poor Pi, I’ve skipped the desktop environment and just have a console. However, I still need a tiny bit of graphical environment to run the SPICE client. So, I’m going to install an X server and a window manager, but skip the desktop environment and login manager. This means I will not have a graphical desktop (no desktop environment), and I will need to launch into the graphical environment from an already logged-in terminal (no login manager), but I can still launch programs which expect a working X session.

So, now I need to install all of this

#Install the X server and Openbox window manager
sudo apt install xserver-xorg x11-xserver-utils xinit openbox

We must be patient, but apt will take care of us. Then, we can continue on.

SPICE Client for the Pi

I now need to install the SPICE client and ideally test it out before setting it up to auto-run on boot. The SPICE client is ‘remote-viewer’, part of the ‘virt-viewer’ package, which itself is part of the ‘virt-manager’ project. In this case, we can install it from the Debian repo:

sudo apt install virt-viewer

Again, more patience. Unfortunately, we can’t test it yet since we don’t have a functional graphical environment, and we also don’t have a SPICE server to connect to.

Proxmox SPICE Proxy

After the viewer is installed, we need to get a configuration file for the SPICE client. Proxmox generates these, but each one contains a temporary auth ticket with a limited lifetime, so we need to download a fresh configuration file every time we launch the remote viewer. Proxmox has a script available for this which we will use and modify. We could just run the script as-is, but without an X environment running at this point, it won’t work. However, I did test the script as-is on a graphical version of Raspbian and it worked fine, so I trust it.

I made a modified version of this script which hardcodes everything that we need. We just need to call this new script when the graphical environment is ready.

nano thinclient.sh
#!/bin/bash
set -e

# Set auth options
PASSWORD='vdiuser'
USERNAME='vdiuser@pve'

# Set VM ID
VMID="100"

# Set Node
# This must either be a DNS address or name of the node in the cluster
NODE="pvehost"

# Proxy equals node if node is a DNS address
# Otherwise, you need to set the IP address of the node here
PROXY="$NODE"

#The rest of the script from Proxmox
NODE="${NODE%%\.*}"

DATA="$(curl -f -s -S -k --data-urlencode "username=$USERNAME" --data-urlencode "password=$PASSWORD" "https://$PROXY:8006/api2/json/access/ticket")"

echo "AUTH OK"

TICKET="${DATA//\"/}"
TICKET="${TICKET##*ticket:}"
TICKET="${TICKET%%,*}"
TICKET="${TICKET%%\}*}"

CSRF="${DATA//\"/}"
CSRF="${CSRF##*CSRFPreventionToken:}"
CSRF="${CSRF%%,*}"
CSRF="${CSRF%%\}*}"

curl -f -s -S -k -b "PVEAuthCookie=$TICKET" -H "CSRFPreventionToken: $CSRF" "https://$PROXY:8006/api2/spiceconfig/nodes/$NODE/qemu/$VMID/spiceproxy" -d "proxy=$PROXY" > spiceproxy

#Launch remote-viewer with spiceproxy file, in kiosk mode, quit on disconnect
#The run loop will get a new ticket and launch us again if we disconnect
exec remote-viewer -k --kiosk-quit on-disconnect spiceproxy

And of course, don’t forget to make it executable

chmod +x thinclient.sh
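If the string munging in the middle of the script looks mysterious, here is what it does to a hypothetical /access/ticket response. The JSON shape matches what the script expects; the ticket values themselves are made up for illustration:

```shell
# Demonstration of the ticket-extraction logic from thinclient.sh,
# run against a hypothetical (made-up) /access/ticket response.
DATA='{"data":{"ticket":"PVE:vdiuser@pve:4EEC61E8::abcd","CSRFPreventionToken":"4EEC61E8:xyz"},"success":1}'

TICKET="${DATA//\"/}"          # strip all double quotes
TICKET="${TICKET##*ticket:}"   # drop everything up to and including 'ticket:'
TICKET="${TICKET%%,*}"         # drop everything from the first comma on
TICKET="${TICKET%%\}*}"        # drop a trailing brace, if any

CSRF="${DATA//\"/}"
CSRF="${CSRF##*CSRFPreventionToken:}"
CSRF="${CSRF%%,*}"
CSRF="${CSRF%%\}*}"

echo "$TICKET"   # PVE:vdiuser@pve:4EEC61E8::abcd
echo "$CSRF"     # 4EEC61E8:xyz
```

Real Proxmox tickets are colon-separated with a base64 signature, which contains no commas or quotes, so this pure-bash parsing holds up without needing jq on the Pi.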

At this point, we are almost ready to test, but we still can’t launch the script without a running X server, so next we need to configure Openbox (our window manager).

Openbox Window Manager

Openbox is the window manager I’ve installed, so we need to create a startup script for it to run which will launch our one graphical program.

sudo nano /etc/xdg/openbox/autostart

Replace all of the contents with the new startup script:

#Allow exit of X server with ctrl+alt+backspace
#If you don't want to let the user terminate/restart, leave this out
#You can always `killall xinit` via SSH to return to a terminal
setxkbmap -option terminate:ctrl_alt_bksp

#Start the shell script we already wrote in our home directory
#Runloop restarts the thin client (new access token, new config file)
#if the session is terminated (i.e the VM is inaccessible or restarts)
#User will see a black screen with a cursor during this process
while true
do
    ~/thinclient.sh
done

And, the final moment we’ve all been waiting for, from the physical terminal (not the SSH one), start the X server:

startx --

Start On Boot

The final task is to make startx run on boot (specifically, when our console user logs in on tty1). This one is pretty simple: we just need to edit the Bash profile to run startx on the first virtual console only.

nano .bash_profile
[[ -z $DISPLAY && $XDG_VTNR -eq 1 ]] && startx --

Try rebooting to see if it all works:

sudo reboot

After all this, we have a working SPICE thin client running on a Raspberry Pi, with functional video, keyboard, and mouse.

Conclusions

I tested this with both my Ubuntu VM and my Windows 10 VM, and it worked correctly in both cases. The Pi 1 is far too slow to be usable with Windows 10, stuttering through all of the menu animations. Ubuntu was better, but still not a daily-driver experience. It’s still better than using the GUI on the Pi 1 itself, though. I’d imagine a newer Pi would work much better, but I’ve used all of my good ones for projects and am waiting on a few on backorder, so more testing will come once I can get hold of more Pis.

While filming the video for this blog post, I was able to swap in a Pi 2 from another project and test the setup with the quad-core version. It’s still not nearly as powerful as a Pi 4, but it makes general desktop usage completely workable on both Windows 10 and Ubuntu, although video playback and animations still struggle. Depending on your application, a Pi 3 or Pi 4 for each seat in a computer lab, or a minimal setup for kids’ homework, is very low cost and perfectly functional. On capable client hardware, the SPICE protocol can handle just about anything except low-latency gaming; the real limitation is the Pi 1’s and Pi 2’s limited CPU/GPU power for decoding the screen’s video stream.

I found that, with the Pi 2, the Ubuntu desktop experience was excellent aside from streaming video, but Windows desktop animations and especially the login screen would occasionally cause major stuttering in display updates. It’s also very possible that this is a Windows issue due to running without hardware graphics acceleration, and not a problem with the thin client at all.

I didn’t mention this in either the article or the video, but the SPICE protocol is also capable of USB forwarding, although there are some setup hoops on the Windows client side. I haven’t tested this yet, so it’s a topic for a future project. This setup should also support audio, but my minimal Raspberry Pi OS installation doesn’t have working audio yet, so audio on the Pi is another topic for a future project.