In previous posts, I’ve been building up a thin client / VDI infrastructure based on Proxmox-hosted virtual machines, using the SPICE protocol. This has gone well. However, the current setup boots the machine straight into a pure thin client mode, hardcoded to a specific VM and able to do nothing else. There has been some interest in some kind of launcher for selecting which VM to log in to, and I decided to find a solution. I started on the latest Raspberry Pi OS, but found that they’ve modified their display manager / desktop environment enough that it’s not exactly like any other distribution, so in this blog post I’m going to implement this on both Raspberry Pi OS and Debian with LXDE installed. Debian with LXDE is as close as I can get to the Pi’s environment, so the instructions stay similar.

This article is part of the Thin Client Series

This article was written with Raspberry Pi OS (32-bit) Bullseye release 2022-01-28 and Debian Bullseye release 11.2.0.

Sections

This page is really long, so here are pointers to individual sections

Video

Of course, I have a video to go along with this topic. Click the thumbnail to watch it on YouTube.

Setup for Raspberry Pi

Getting started, we need to use Raspberry Pi Imager to image an SD card with the latest version of Raspberry Pi OS. This time, we are starting from the full version, not lite, so we get the full graphical environment.

Once it’s installed, boot it up, go through the ‘Welcome to Raspberry Pi’ wizard (which sets your keyboard layout, timezone, etc.), and let it run updates. It might take a while.

Now, we have a few more configuration things to do. Open a terminal, and run sudo raspi-config. We need to configure the following things:

  • System Options -> Boot / Auto Login, set to Desktop instead of Desktop Autologin (we want to require a login)
  • Interface Options -> SSH and enable it (we will use it to configure the Virtual Session users)

Now reboot and continue with the software install below.

Setup for Debian

I started with the Debian Netinst image and selected ‘Install’ (NOT ‘Graphical Install’), so we will end up with a console environment and need to install the graphical bits later. Important bits for the installer:

  • Do not set a root password, so Debian installs sudo
  • The new user you create will be the administrative user (with sudo permissions), so choose something good, not something you’re going to want to use later as a virtual session user
  • Choose Guided -> Use Entire Disk for partition type
  • Choose ‘All files in one partition’

Eventually, it will ask you if you would like to install a desktop environment.

  • Deselect GNOME
  • Select LXDE
  • Select SSH server (so we can debug like we could on the Pi)

We have one last configuration bit to modify: editing the lightdm configuration to show the user list. Edit it with sudo nano /etc/lightdm/lightdm.conf. Scroll down until you find #greeter-hide-users=false and remove the #, so the user list is not hidden.
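If you’d rather make that change non-interactively (handy for scripted installs), a single sed can uncomment the setting. A minimal sketch, demonstrated here on a throwaway copy; on the real system the file is /etc/lightdm/lightdm.conf and the sed needs sudo:

```shell
# Demo on a temp file standing in for /etc/lightdm/lightdm.conf
demo=/tmp/lightdm.conf.demo
printf '%s\n' '[Seat:*]' '#greeter-hide-users=false' > "$demo"

# Uncomment the setting (the same edit you'd make by hand in nano)
sed -i 's/^#greeter-hide-users=false/greeter-hide-users=false/' "$demo"

# Show the now-active setting
grep '^greeter-hide-users' "$demo"
```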

Installing Dependencies

We already have a working graphical environment, so all that’s left is to update (just to be safe), install remote-viewer (part of the virt-viewer package), and then create our thin client script. We also need curl, which isn’t installed by default on Debian but is on Raspberry Pi OS.

sudo apt update
sudo apt upgrade
sudo apt install virt-viewer curl

Thin Client Script

We’re using basically the same script as the previous iteration of this project, except this time we are sharing the script across users and passing in the VM ID as an argument. In addition, at the end of the script we kill lxsession, which effectively logs us out: instead of the user being stuck in kiosk mode forever, any attempt to exit the thin client sends them back to the login screen. Since both Raspberry Pi OS and Debian use LXDE, the same script works for both.

Important note here. This script asks the Proxmox API to return a spiceproxy file. By default, that file will contain the DNS name of the Proxmox node currently running the VM (the name you set in the Proxmox installer). If that name does not resolve correctly on your network, you will need to add an argument to curl that forces the spiceproxy file to use the proxy address we specified instead ($PROXY, likely the IP address). I’ve left a commented-out line showing the alternate command, so replace the uncommented curl command with it if you need the proxy argument.

sudo nano /usr/local/bin/thinclient
#!/bin/bash
set -e

# Set auth options
PASSWORD='vdiuser'
USERNAME='vdiuser@pve'

# Set VM ID from the first and only argument
VMID="$1"

# Set Node
# This must either be a DNS address or name of the node in the cluster
NODE="pvehost"

# Proxy equals node if node is a DNS address
# Otherwise, you need to set the IP address of the node here
PROXY="$NODE"

#The rest of the script originated from a Proxmox example

#Strip the DNS name to get the node name
NODE="${NODE%%\.*}"

#Authenticate to the API and get a ticket and CSRF token
DATA="$(curl -f -s -S -k --data-urlencode "username=$USERNAME" --data-urlencode "password=$PASSWORD" "https://$PROXY:8006/api2/json/access/ticket")"

echo "AUTH OK"

#Extract the ticket and CSRF token from the returned data
TICKET="${DATA//\"/}"
TICKET="${TICKET##*ticket:}"
TICKET="${TICKET%%,*}"
TICKET="${TICKET%%\}*}"

CSRF="${DATA//\"/}"
CSRF="${CSRF##*CSRFPreventionToken:}"
CSRF="${CSRF%%,*}"
CSRF="${CSRF%%\}*}"

#Request a SPICE config file from the API
#Note that I've removed the proxy argument
#Without it, Proxmox points remote-viewer at the node that is currently running the VM,
#instead of the node we specified with PROXY. That only matters in clustered scenarios,
#and it doesn't hurt to leave the argument out.
#The alternate command is commented out below; swap it in if you need the proxy argument
curl -f -s -S -k -b "PVEAuthCookie=$TICKET" -H "CSRFPreventionToken: $CSRF" "https://$PROXY:8006/api2/spiceconfig/nodes/$NODE/qemu/$VMID/spiceproxy" -X POST > spiceproxy
#curl -f -s -S -k -b "PVEAuthCookie=$TICKET" -H "CSRFPreventionToken: $CSRF" "https://$PROXY:8006/api2/spiceconfig/nodes/$NODE/qemu/$VMID/spiceproxy" -X POST -d "proxy=$PROXY" > spiceproxy


#Launch remote-viewer with spiceproxy file, in full screen mode
#You can add USB passthrough options here if you'd like
#Not calling via exec, so the script continues after remote-viewer exits
remote-viewer -f spiceproxy

#Kill lxsession
#This is how LXDE is designed to logout, it's not a hack lol
killall lxsession

And of course, don’t forget to make it executable

sudo chmod +x /usr/local/bin/thinclient
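If you want to sanity-check the parameter-expansion parsing without touching a live server, you can run it against a mock response shaped like the JSON Proxmox returns from /access/ticket (the values below are made up):

```shell
# Mock response in the shape of Proxmox's /access/ticket JSON (values invented)
DATA='{"data":{"ticket":"PVE:vdiuser@pve:61A2B3C4::abcdef","CSRFPreventionToken":"61A2B3C4:deadbeef"}}'

# Same parameter-expansion parsing as in the thinclient script:
# strip quotes, cut everything before the field name, cut at the next delimiter
TICKET="${DATA//\"/}"
TICKET="${TICKET##*ticket:}"
TICKET="${TICKET%%,*}"
TICKET="${TICKET%%\}*}"

CSRF="${DATA//\"/}"
CSRF="${CSRF##*CSRFPreventionToken:}"
CSRF="${CSRF%%,*}"
CSRF="${CSRF%%\}*}"

echo "$TICKET"
echo "$CSRF"
```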

Creating Virtual Session Users

Since we are going to abuse the login system to select which thin client to launch, we need to create a new user on the local system for each VM we want to be directed to.

To do this, we use sudo adduser <username>. It will ask you for a password, and you must give the account one; if you don’t want users to need a password at login, we’ll handle that later. When it asks for user information, fill in the ‘Full Name’ field with a description of the VM. This field is what’s displayed in the login box, so you could set the full name to something like Windows 10 w/ GPU so users know which VM they are logging into.
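That ‘Full Name’ lands in the GECOS field (field 5 of /etc/passwd), which is where the greeter pulls the display name from. A simulated look at how it parses, using a made-up passwd entry for a hypothetical user win10:

```shell
# Hypothetical /etc/passwd entry for a virtual session user named win10
entry='win10:x:1001:1001:Windows 10 w/ GPU,,,:/home/win10:/bin/bash'

# Field 5 is GECOS; the display name is the part before the first comma
fullname="$(echo "$entry" | cut -d: -f5 | cut -d, -f1)"
echo "$fullname"
```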

Passwordless Login (optional)

To allow users to login without specifying a password, we’re going to modify the PAM configuration to automatically succeed if the user is part of the nopasswdlogin group. To do this, we need to modify /etc/pam.d/lightdm and add the following line right after #%PAM-1.0 at the top of the file:

auth    sufficient  pam_succeed_if.so user ingroup nopasswdlogin

And of course we need to actually create that group

sudo groupadd -r nopasswdlogin

Now, for each no-password virtual session user, add them to the group nopasswdlogin

sudo gpasswd -a <username> nopasswdlogin
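To convince yourself the group is wired up, `getent group nopasswdlogin` should list the members. The check pam_succeed_if performs boils down to a group membership lookup; here’s a simulated version of that test against a made-up group line (the usernames are invented):

```shell
# Made-up group line in the format getent group would print
groupline='nopasswdlogin:x:999:win10,win11gpu'

# Field 4 is the comma-separated member list
members="$(echo "$groupline" | cut -d: -f4)"

# Membership test, roughly what "user ingroup nopasswdlogin" decides on
user=win10
case ",$members," in
  *,"$user",*) result="passwordless" ;;
  *)           result="password required" ;;
esac
echo "$user: $result"
```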

Configuring LXDE Autostart

This is the step where we break everything and then fix it. Make sure SSH works so you can correct anything broken.

In short, when LXDE starts, it runs the autostart file for the system, followed by the autostart file for the user. The system autostart file launches things like the program that draws the desktop and its icons (pcmanfm), the program that displays the menu toolbar with useful buttons and widgets (lxpanel), and the screensaver (xscreensaver). For virtual session users, we don’t want any of these, since a thin client shouldn’t expose the underlying Linux system. However, we still need a way to launch these programs when a local user logs in.

In order to achieve this, we need to completely empty the system autostart file and put all of the usual system-wide LXDE utilities (pcmanfm, lxpanel, xscreensaver) into the user’s autostart file, but only for users which have local control. This means any new users you create will be trapped with a desktop background and no way to do anything until you create a user autostart file, so be careful with this process.
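For reference, a stock Debian LXDE system autostart typically looks something like this (exact contents vary by release and between Debian and Raspberry Pi OS, so treat it as an illustration rather than a file to copy):

```shell
@lxpanel --profile LXDE
@pcmanfm --desktop --profile LXDE
@xscreensaver -no-splash
```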

To do this, log in as the user that should keep local access (pi on Raspberry Pi OS, or the administrative user you created on Debian), back up the system autostart file in case we need it later, and then copy it into place as that user’s autostart file.

Commands for Raspberry Pi:

#Backup old system autostart file
sudo mv /etc/xdg/lxsession/LXDE-pi/autostart /etc/xdg/lxsession/LXDE-pi/autostart.bak
#Create new empty system autostart file
sudo touch /etc/xdg/lxsession/LXDE-pi/autostart
#Create configuration directory and parents for the local user
mkdir -p ~/.config/lxsession/LXDE-pi
#Copy the original system autostart as our new user autostart
cp /etc/xdg/lxsession/LXDE-pi/autostart.bak ~/.config/lxsession/LXDE-pi/autostart

Commands for Debian:

#Backup old system autostart file
sudo mv /etc/xdg/lxsession/LXDE/autostart /etc/xdg/lxsession/LXDE/autostart.bak
#Create new empty system autostart file
sudo touch /etc/xdg/lxsession/LXDE/autostart
#Create configuration directory and parents for the local user
mkdir -p ~/.config/lxsession/LXDE
#Copy the original system autostart as our new user autostart
cp /etc/xdg/lxsession/LXDE/autostart.bak ~/.config/lxsession/LXDE/autostart

Now, one at a time, we are going to create autostart files for each thin client user. Since you need to run these commands as the user in question, it’s easiest if you login over ssh as the virtual session user. Repeat this step for each virtual session user you have.

Commands for Raspberry Pi:

#Create folder where the autostart file goes
mkdir -p ~/.config/lxsession/LXDE-pi
#Create autostart file and edit it
nano ~/.config/lxsession/LXDE-pi/autostart

Commands for Debian:

#Create folder where the autostart file goes
mkdir -p ~/.config/lxsession/LXDE
#Create autostart file and edit it
nano ~/.config/lxsession/LXDE/autostart

Contents of the file:

@/usr/bin/bash /usr/local/bin/thinclient 100

Replace 100 with the VM ID you want this user to launch. This file is not a shell script and is not executed as one, so we need to call bash (with the full path!) and pass it the script to run, followed by the argument to pass through to the script.
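To see what that autostart line amounts to: lxsession word-splits it and runs the command directly, without a shell, which is why the interpreter and script paths are spelled out. Stripped down, the invocation is equivalent to the sketch below (using a throwaway stand-in for the thinclient script):

```shell
# Stand-in for /usr/local/bin/thinclient, just to show the argument flow
demo=/tmp/thinclient.demo
printf '%s\n' '#!/bin/bash' 'VMID="$1"' 'echo "would connect to VM $VMID"' > "$demo"

# Equivalent of the autostart line "@/usr/bin/bash /usr/local/bin/thinclient 100":
# bash runs the script, and 100 arrives inside it as $1
out="$(bash "$demo" 100)"
echo "$out"
```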

Conclusions

This is a great solution if you don’t want the thin client to be tied to a specific VM. Ideally we’d be able to dynamically clone VMs as commercial VDI solutions do, but at this point, each virtual session user is tied to a specific VM ID in Proxmox.