I’ve made many videos on Thin Clients before, all of them relying on Proxmox and the SPICE protocol. That works well when you control both the client and the hypervisor, and allows a lot of flexibility in the guest OS at the expense of flexibility at the client. If you want a remote-access / Bring-Your-Own-Device type solution, you probably care more about solid multi-platform client support than about mixing VM OSes or running with no software installation on the VM. To this end, I’ve set up a modern Linux terminal server, which allows many clients to simultaneously connect to their own Linux desktops remotely, from nearly any device OS in common use today.

This project is part of the Multiuser, Multidesktop, and Multiseat Megaproject.


This is a very long article, filled with many commands to copy and paste along with the reasoning behind them. Feel free to jump around to the sections relevant to you using the links below.


This project has a video! Click on the thumbnail to watch it.

Choosing a Distribution

I’ve chosen to base this install off Xubuntu 21.10 Impish Indri. I chose Xubuntu because it’s lighter weight than the GNOME or KDE desktops, and I’m hoping the reduced fluff will improve memory usage on systems with a large number of clients. I also enjoy XFCE as a generally simple and easy-to-use desktop environment. As for the release, I chose 21.10 as it’s the most recent release as of this writing, although support ends in just a few months as the 22.04 LTS drops very soon.

Even though I’m setting up a terminal server, I installed the Desktop version, since I’m going to need the full desktop environment and all of the user applications anyway. This also means the server’s physical console is now graphical, which isn’t really a huge deal, but you cannot log in to the physical console and virtual session at the same time, and the remote user cannot kick the physical console user.

Instead of using a hypervisor as I usually do, I’ve installed this on a bare metal server, since I’m trying to load it up heavily with clients. My test platform is an AMD Ryzen 3400G (4c/8t) APU with 32G of RAM and a 500G SATA SSD.

After installation I ran the usual software updates (sudo apt update && sudo apt upgrade) and installed OpenSSH server so I can comfortably access it remotely (sudo apt install openssh-server).

Install XRDP

First we need to install xrdp itself. Ubuntu has a package for it, so it’s pretty simple:

sudo apt install xrdp

At this point, you should be able to connect to the system using an existing user and get the XFCE desktop we know and love. However, you’ll notice a decently big problem - there is no sound.
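If the connection doesn’t work, it’s worth a quick sanity check that the service is actually running and listening on the standard RDP port. A sketch, assuming systemd and the stock package configuration:

```shell
# Confirm the xrdp service and its session manager are active
systemctl status xrdp xrdp-sesman --no-pager

# Confirm something is listening on the standard RDP port (3389/tcp)
ss -tlnp | grep 3389
```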

XRDP relies on an internal PulseAudio API which requires it to be compiled for the specific version of PulseAudio on the system, so we have to build the PulseAudio plugin ourselves. XRDP has a great guide on this here.

So, here’s all of the commands from their guide compressed into one block. It’ll ask for your sudo password and for you to confirm a few things. If you’re using ZFS, remotely mounting home directories via NFS/SMB, or have another unusual configuration of home mount points, skip down to where I have a more complicated set of instructions for you.

#Install packages and make sure git is installed
sudo apt install build-essential dpkg-dev libpulse-dev git autoconf libtool -y
#Change to home directory for git checkout
cd ~
#Clone repository
git clone https://github.com/neutrinolabs/pulseaudio-module-xrdp.git
#cd into repository
cd pulseaudio-module-xrdp
#Run the wrapper script they provide to fetch and build the PulseAudio sources
./scripts/install_pulseaudio_sources_apt_wrapper.sh
#Configure and make
./bootstrap && ./configure PULSE_DIR=~/pulseaudio.src
sudo make install

ZFS Quirk

If you’re using ZFS or remotely mounting the home directories, the script and the default schroot configuration do not play nicely. The script uses schroot to create a temporary environment to build PulseAudio, and it bind mounts /home into that environment to copy the source files it needs back out. However, with ZFS, each user’s home directory is a separate ZFS dataset (and therefore a separate mount point), so bind mounting /home doesn’t bring along any of the home directories. Oops.

The developer has fixed this, but the change has not yet been merged. Until it’s merged, we can download the change directly out of his branch and use it. Once he merges it, I’ll update this page and remove this section.

#Install packages and make sure git is installed
sudo apt install build-essential dpkg-dev libpulse-dev git autoconf libtool -y
#Change to home directory for git checkout
cd ~
#Clone repository
git clone https://github.com/neutrinolabs/pulseaudio-module-xrdp.git
#cd into repository
cd pulseaudio-module-xrdp
#Install curl
sudo apt install curl -y
#Download the modified script which hasn't yet been merged
curl https://raw.githubusercontent.com/matt335672/pulseaudio-module-xrdp/remove_home_dependency/scripts/install_pulseaudio_sources_apt_wrapper.sh > scripts/install_pulseaudio_sources_apt_wrapper.sh
#Make it executable
chmod +x scripts/install_pulseaudio_sources_apt_wrapper.sh
#Run the modified script
./scripts/install_pulseaudio_sources_apt_wrapper.sh
#Configure and make
./bootstrap && ./configure PULSE_DIR=~/pulseaudio.src
sudo make install

RDP Security Tips

RDP, with modern security settings to prevent protocol downgrade attacks, is generally considered a secure protocol on its own. User traffic is encrypted via TLS, and user sessions are safe from eavesdropping and information leakage. The RDP server, however, is not so lucky. Since RDP runs on a well-known port number, it’s not uncommon for bots to scan the entire internet for open RDP servers and try to connect with common Windows usernames and passwords. So not only is the protocol only as secure as the user’s password, but in many cases the RDP server will spawn a new remote session and then allow graphical login via that session, so bots will increase your CPU load by spawning a bunch of login screens.

For some sense of scale, shodan.io currently lists 3.8 million public-facing RDP servers, and lists the operating system, FQDN, and certificate info for many of them. You WILL be on this list if you open your server up to the internet.

Given all of this, I’d recommend against opening your RDP server directly to the internet. The xrdp developers have recently merged changes to log data specifically so fail2ban can block repeat abusers (as is common with SSH), but that release isn’t yet in Ubuntu and isn’t scheduled to be in the 22.04 LTS either. And even with fail2ban, you’re still letting an attacker connect to the server, start a graphical session, and then be disconnected for authentication failure, something which uses a lot more resources with xrdp than it does with SSH.

Using a proper remote-access VPN solution to your home/business network or a cloud relay point is good security practice anyway, and I’d feel comfortable leaving RDP exposed to users within my private home/business network without further protection.
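If the server should only ever be reachable from the local network, a host firewall rule is a simple belt-and-suspenders measure on top of that. A sketch using ufw, assuming a 192.168.1.0/24 LAN - adjust the subnet to match your own network:

```shell
# Allow RDP (3389/tcp) only from the local subnet
sudo ufw allow from 192.168.1.0/24 to any port 3389 proto tcp
# Deny RDP from everywhere else
sudo ufw deny 3389/tcp
# Enable the firewall (make sure you've allowed SSH first!)
sudo ufw enable
```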

Even with protocol-level protections, each user on the system is still a user on a shared system, so it’s possible for them to interact with the system and other users in potentially undesirable ways. Here are a few options to restrict user access a bit, though they aren’t a comprehensive security guide.

Restrict Users to Remote Access Group

By default, sesman (the xrdp session manager) will restrict access to only users in the group tsusers, but only if the group exists. Since the package doesn’t create it, it won’t exist, and any account can log in. Additionally, /etc/xrdp/sesman.ini has an option to disable root login, separate from the tsusers group.

It’s up to you to decide if you want to limit access or allow any user on the system to connect. Depending on how exposed this server is, allowing users with sudo permissions may be more dangerous than you’d like. This all ties into the security concerns you have for the system. You should absolutely edit sesman.ini to disable root access though.
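For reference, both settings live in the [Security] section of /etc/xrdp/sesman.ini. The option names below come from the stock sesman.ini; the values are my suggestion:

```ini
[Security]
AllowRootLogin=false
TerminalServerUsers=tsusers
```

Restart the session manager afterwards with sudo systemctl restart xrdp-sesman so the changes take effect.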

So, if you want to restrict user access, let’s create this group and add our VDI users to it:

sudo groupadd -r tsusers

To add a user, it’s pretty simple also:

sudo gpasswd -a <username> tsusers

Limiting Resources Per User

This section is optional. If you want to prevent a single user from hogging all of the system resources, then you can use Linux control groups to achieve this. Depending on how you set your limits, this won’t stop multiple users from really bogging down the system, so be aware of how many resources you actually want to give each user, but at the same time give them enough to do what they need successfully. If you occasionally have 10 users but normally have 1-2, setting the limits to 1/3 of the system capacity would let the normal users perform more processor intensive work (such as compiling) without being artificially limited to a fraction of an otherwise unused CPU.

Systemd has a reference on resource control for user slices here.

Systemd has a folder structure where we can limit resources per user - the drop-ins go in /etc/systemd/system/user-1000.slice.d for user id 1000. However, we can create defaults by placing a .conf file in the folder /etc/systemd/system/user-.slice.d/. Systemd recognizes the user- prefix as a default path when there is no user-specific file or folder.

So, we will create this default user slice configuration folder and a configuration file inside of it:

sudo mkdir -p /etc/systemd/system/user-.slice.d
sudo nano /etc/systemd/system/user-.slice.d/50-vdiusers.conf

And the contents - CPUQuota is a percentage relative to a single thread, so if you have a 4c/8t CPU, the maximum would be 800%. In this case, I’ve limited it to two threads. MemoryMax can be specified in M or G, and the OOM killer will come and reap processes from the user when they exceed their limit, as if the system only had that much memory and could not swap. There’s also a MemoryHigh option which starts slowing down processes once memory climbs above its threshold, if you’d like to set that to something less than MemoryMax.
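As an example matching the limits described above - two threads and a hard memory cap - the file could look like this (the 8G value is just an illustration; pick numbers that fit your hardware and user count):

```ini
[Slice]
# 200% = two full threads on a 4c/8t CPU (the maximum there would be 800%)
CPUQuota=200%
# Hard memory cap; the OOM killer reaps the user's processes above this
MemoryMax=8G
```

After saving, run sudo systemctl daemon-reload so systemd picks up the new drop-in; users may need to log out and back in for it to apply to their sessions.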


If you want to read the full systemd documentation on resource control, you can read it here. There are a lot of options there if you really want fine-grained control of resource usage by your users.
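If one particular user needs different limits from the default, a drop-in under that user’s numeric id overrides the same settings from user-.slice.d. A sketch for uid 1000, with hypothetical values giving them a bigger share:

```shell
# Per-user drop-in for uid 1000; same-named settings here win over user-.slice.d
sudo mkdir -p /etc/systemd/system/user-1000.slice.d
sudo tee /etc/systemd/system/user-1000.slice.d/50-vdiusers.conf > /dev/null <<'EOF'
[Slice]
CPUQuota=400%
MemoryMax=16G
EOF
sudo systemctl daemon-reload
```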

Since XRDP runs as a system service under its own user, its processes land in the system slice, not a user slice. This means xrdp’s work of compressing the graphical stream is not counted toward the user’s quotas, but it can add up, especially if the user is doing a lot of work with motion video or graphics.

Color Managed Device Error

This one is intermittent - after logging in over RDP, a polkit dialog may pop up asking for authentication to create a color managed device. On Ubuntu 21.10, we can fix this by adding a PolKit configuration file that allows users to manage their own color profiles. Note that the version of polkit matters here: version 0.105 ships with Ubuntu 21.10 and requires pkla files.

sudo nano /etc/polkit-1/localauthority/50-local.d/45-allow-colord.pkla

And the contents:

[Allow Colord all Users]
Identity=unix-user:*
Action=org.freedesktop.color-manager.create-device;org.freedesktop.color-manager.create-profile;org.freedesktop.color-manager.delete-device;org.freedesktop.color-manager.delete-profile;org.freedesktop.color-manager.modify-device;org.freedesktop.color-manager.modify-profile
ResultAny=no
ResultInactive=no
ResultActive=yes

For various reasons, Ubuntu has stuck with backporting security fixes to an ancient version of polkit instead of upgrading to a more modern release that uses the javascript configuration method, as they do not want to depend on mozjs (SpiderMonkey) for security-critical functions. Since polkit has just switched to a proper tiny embedded javascript engine, it’s very likely that the pkla syntax above will become obsolete within a few Ubuntu releases.


Closing Thoughts

I chose RDP over a VNC-based solution as the protocol is extremely well standardized and has very wide client support, including clients available for the usual Windows/macOS/Linux, but also iOS, iPadOS, Android, Android TV, and even Samsung’s Tizen OS for smart TVs. It’s also extremely easy to get a basic setup working. Performance in terms of number of users on a single system is good, since we aren’t relying on virtualization at all, and all users are able to efficiently share system resources.

This setup is good if you want to:

  • Centralize computing / storage in a bring-your-own device fashion
  • Get full desktop functionality out of an otherwise limited operating system (i.e. Android, iOS, Smart TVs)
  • Connect back to your Linux desktop while away from home, which also keeps sensitive data off your mobile devices while traveling in case they are lost / stolen / forced to be unlocked by customs and border patrol
  • Use less system overhead and fewer resources than VMs
  • Share a single GPU’s hardware acceleration across all users for transcoding (but not OpenGL, AFAIK)

It’s not a great solution if you want:

  • Windows (Microsoft offers this for $$$)
  • Complete multi-user and entire filesystem isolation for each session
  • Ephemeral clones of the entire system (VM), cleared after each user