This video started as the answer to a simple question - how can I self-host a service for my friends and family, behind cgnat, without requiring them to install any apps (like tunnels)? This video turned into a bunch of different ways to proxy IPv4 to IPv6, so you can receive IPv6 traffic natively and bring in legacy traffic from a VPS which does have public IPv4.

While I’m giving you a lot of different examples and methods here, you can mix and match a lot of them to fit your needs. For example, you can use snid for your TLS traffic (possibly listening on multiple ports for e.g. HTTPS and MQTTS), along with HAProxy or Tayga for the rest of your traffic. You can add in a Wireguard tunnel if you want, but since we are relaying to public IPv6s, it’s not needed.

Anyway, come along on this adventure!




Comparison Table

Feature Pure IPv6 Cloudflare SNID HAProxy Tayga Wireguard
Free / Open Source Software
Free (Hosting) ?1 ?2 ?2 ?2 ?2
Works with IPv6 Clients
Works with IPv4 Clients
Uses Standard Port Numbers ?3,5 ?3 ?3
Requires Server Name / DNS 4 ?5
Requires IPv6 Origin
Direct Route (via v6) ?5,6 ?6 ?6
Direct Route (via v4)
Local Certificates
Supports HTTP/3 (QUIC)
Supports TLS (non-HTTPS)
Supports TCP (non-TLS)
Supports UDP


  1. Cloudflare Tunnels are free but require sign-up with a credit card. They also have an unspecified bandwidth limit, as they are ‘designed for HTML / web sites and not streaming video’. Depending on your use case this may or may not be fine.
  2. I’m aware that Oracle has a free tier, but I don’t trust them at all. Also, to do NAT46 you need at least a routed /96 to your VPS, and not all cloud providers offer this (DigitalOcean in particular has awful IPv6 support). Try Hetzner or Mythic Beasts for good IPv6. ‘Good’ in this case means a /64 subnet routed (not on-link) to your server. You can tell it’s routed if the gateway IP is outside your /64: a routed setup will have a ‘far’ gateway (or a link-local one).
  3. With these methods, since they can’t read the Server Name Indication (SNI), you can only use the ‘standard’ port for one backend. This is a limitation of the server name not being transmitted over non-TLS protocols. You are of course free to use nonstandard ports for additional copies of the same service (e.g. multiple SSH servers).
  4. You must host your domain with Cloudflare to use Cloudflare Tunnels for free
  5. HAProxy supports both TLS SNI forwarding of TCP as well as pure TCP proxying
  6. Direct Routing via v6 is only possible if you use the same port number on the v4 gateway and v6 origin. If you are using multiple services on the same port on different IPv6 addresses, you will run into port limitations on the v4 address. TLS connections do not suffer from this, as the server name is specified in the connections. If your software supports SRV records, you can use multiple SRV records to point to the v6 origin + port and the v4 gateway + port.
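The SRV approach from footnote 6 might look like this in a zone file. This is purely a sketch with hypothetical names and ports; it only works if the client software actually performs SRV lookups:

```
; Clients that support SRV try the lower priority value first
; (the direct IPv6 origin), then fall back to the v4 gateway
_myservice._tcp.example.com. 3600 IN SRV 10 0 2222 origin-v6.example.com.
_myservice._tcp.example.com. 3600 IN SRV 20 0 2222 gateway-v4.example.com.
```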

Remember, you can mix and match these on the same VPS! For example, use snid on port 443 for TLS, plus HAProxy or Tayga on other ports! And, while I’m showing how to set these up in a VPS to route around CGNAT, you can also set most of these up at your non-CGNAT router as an IPv4 to IPv6 transition mechanism.

Option 1 - SNID

This comes to us from AGWA’s Blog Post, and you can find the binaries on his GitHub. He explains the concept in great detail there.

Anyway, here’s the systemd service I wrote to go with it. I installed the static binary (Golang is awesome!) into /usr/local/bin.

Download the binary, then chmod +x it so it’s executable. Nothing else to install!

Not to replicate his docs here, but here’s what I used for each option:

  • listen tcp: - Listen only on IPv4, since IPv6 will go direct. If you want to listen on multiple ports, add this flag multiple times (e.g. 443 for HTTPS and 3389 for RDP). Backend connections use the same destination port as the listener they came in on.
  • mode nat46 - Encode the whole damn IPv4 space into our IPv6 prefix. The VPS has a /64, so this is fine.
  • nat46-prefix 2001:db8::4646:0:0 - I’m already using suffix ::1 for my SSH management, so use 4646:: for NAT46
  • backend-cidr 2001:db8::/48 - Put in your home IPv6 prefix here. Prevents you from becoming an open proxy on the internet, only allows connections that are within this range.
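To illustrate what nat46 mode does, here is a quick sketch (illustration only, using the example prefix from this article and a documentation IPv4 address) of how a client’s IPv4 address gets embedded into the IPv6 prefix:

```shell
# Illustration only: nat46 embeds the client's IPv4 address into the
# /96 translation prefix - the four octets become the last 32 bits
v4="203.0.113.5"
IFS=. ; set -- $v4 ; unset IFS
printf '2601:db8:6969:420:4646::%02x%02x:%02x%02x\n' "$1" "$2" "$3" "$4"
# -> 2601:db8:6969:420:4646::cb00:7105
```

This is why the backend sees the real client IP even though the connection arrived over IPv4.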

So create /etc/systemd/system/snid.service:

[Unit]
Description=SNI TLS Proxy Daemon

[Service]
ExecStart=/usr/local/bin/snid -listen tcp: -mode nat46 -nat46-prefix 2601:db8:6969:420:4646:: -backend-cidr 2001:db8::/48

[Install]
WantedBy=multi-user.target


We also need to add a route for our whole /96 prefix to lo, so the kernel will accept packets for it. I added this line to my /etc/network/interfaces on Debian:

# control-alias eth0
iface eth0 inet6 static
    #Was /64, changed to /128 so we don't send packets on-link for other addresses
    address 2601:db8:6969:420::1/128
    dns-nameservers 2620:fe::fe 2620:fe::9
    gateway fe80::1
    #Add local route to the translation prefix
    post-up ip route add local 2601:db8:6969:420:4646::/96 dev lo
    post-down ip route del local 2601:db8:6969:420:4646::/96 dev lo

Of course, when we are done, we need to apply both of these:

  • systemctl daemon-reload every time you change the snid.service file
  • systemctl enable --now snid to enable and start it
  • systemctl restart snid if you change the service file
  • systemctl status snid to see how it’s going
  • journalctl -xeu snid to see how it’s going in more detail
  • ifdown eth0 && ifup eth0 to reload /etc/network/interfaces (or just reboot)

Option 2 - HAProxy

I’ve already made a video on this topic - you can find it here. That page goes through a lot of the theory if you are curious.

tl;dr install it with apt update && apt install haproxy -y

Then edit the config file /etc/haproxy/haproxy.cfg. Here are some example configs for you. I left the Debian defaults unmodified and added my sections at the end.

HTTP Redirect to HTTPS (Direct response to client)

# Listen on port 80, layer 7 (HTTP)
# Redirect everything to https
# That leaves the client to reconnect properly,
# and means we don't need to proxy HTTP, just HTTPS
frontend www
        mode http
        bind :80
        http-request redirect scheme https

HTTP Proxy to Origin (Layer 7)

# Layer 7 HTTP proxying (insecure), to backend servers
frontend www
        mode http
        bind :80
        # We are building the name of the backend from the 'host'
        # field in the request plus the literal '_http'
        # See backends for an example of how to name them
        use_backend %[req.hdr(host),lower,word(1,:)]_http

# Backends for HTTP
backend test1.apalrd.net_http
        mode http
        server test1_http 2601:40e:69:69:0:0:0:feed:80
backend test2.apalrd.net_http
        mode http
        server test2_http 2601:40e:69:69:0:0:0:beef:80

HTTPS Proxy to Origin (Layer 4 TLS Forwarding)

# Layer 4 TCP SNI proxy example
frontend www-tls
        # Layer 4 (TCP) mode
        mode tcp
        # Use TCPlog mode instead of HTTPlog
        option tcplog
        # Listen on TCP 443 (HTTP/1.1 and HTTP/2)
        bind :443

        # Wait for SSL Hello before forwarding
        tcp-request inspect-delay 5s
        tcp-request content accept if { req_ssl_hello_type 1 }

        # Select backends for each server
        # Similar method to above, but using '_tls' on the end
        use_backend %[req_ssl_sni,lower,word(1,:)]_tls

# Backends for TLS servers
backend test1.apalrd.net_tls
        mode tcp
        server test1_tls 2601:40e:69:69:0:0:0:feed:443
backend test2.apalrd.net_tls
        mode tcp
        server test2_tls 2601:40e:69:69:0:0:0:beef:443

TCP Proxy to Origin (Layer 4 Port Forward)

#Frontend for SSH on port 2222
frontend ssh
        mode tcp
        option tcplog
        bind :2222
        default_backend ssh_server
#This is 1 incoming port -> 1 outgoing port (no SNI with TCP)
backend ssh_server
        mode tcp
        server test1_ssh 2601:40e:69:69:0:0:0:feed:22

TCP Load Balance to Origin (Layer 4 Balancing)

#Frontend for RDP on port 3389. Works with any TCP protocol which can be load balanced.
#Other examples include databases, etc.
frontend rdp
        mode tcp
        bind :3389
        default_backend rdp_servers
#Choose one of the RDP servers based on least connections
backend rdp_servers
        mode tcp
        #Roundrobin is also a good option
        balance leastconn
        server test1_rdp 2601:40e:69:69:0:0:0:feed:3389
        server test2_rdp 2601:40e:69:69:0:0:0:feed:3389

Option 3 - v4 to v6 Port Forwarding with Tayga

Tayga is a tool which can directly translate IPv4 and IPv6 packets at layer 3, so it works for any higher layer protocol (TCP, UDP, and more). Using Tayga, we will create IPv4 -> IPv6 address mappings, and then we can ‘port forward’ from our one public IPv4 to multiple internal IPv6 addresses.

I’ve set up Tayga previously to do NAT64 (where clients access the IPv4 internet over IPv6); this is a different method of configuration. Anyway, here are the install steps:

# Install Tayga
apt update && apt install tayga -y
# Remove the old-ass init.d script
rm /etc/init.d/tayga
# Remove the old configuration file
rm /etc/tayga.conf

And a new configuration file (/etc/tayga.conf)! It’s pretty simple here. I’m using a private /24 to hold the translation addresses, so you can translate up to 252 hosts (256, minus network and broadcast, minus one each for Linux and Tayga). You can use a larger subnet if you want to translate more hosts. This subnet is only used within the VPS.
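As a quick check of that 252-host math:

```shell
# A /24 holds 2^(32-24) = 256 addresses; network, broadcast,
# the Linux host, and Tayga each consume one
echo $(( (1 << (32 - 24)) - 4 ))
# -> 252
```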

# The name of the tun device (leave as-is)
tun-device nat64

# Tayga's IPv4 address on the translation network

# IPv4 translation prefix - used as src addr in IPv6 after translation
# Pull a random /96 out of your VPS prefix for this
# Our /64 literally gives us enough to have 4 billion IPv4 internets
# Just don't overlap with snid if you are using that too
prefix 2a01:4f9:c010:919d:64::/96

# If you need to ping Tayga, take the ipv4-addr above and merge it with your prefix
# In this case, that would be:
# 2a01:4f9:c010:919d:64::c0a8:e902

# Map a single IPv4 on our translation v4 subnet to a public IPv6
# Remember we can't use the first (network) and last (broadcast) in the subnet
# And we also used the first and second real addresses for Linux and Tayga. So start at 3. 
map 2001:db8:6969:420::1
map 2001:db8:6969:421::6
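The ‘merge it with your prefix’ comment in the config above can be sketched like this (assuming an ipv4-addr of 192.168.233.2, which is what the commented result implies):

```shell
# Merge Tayga's IPv4 address into the /96 translation prefix:
# each octet becomes hex, filling the last 32 bits of the address
v4="192.168.233.2"
IFS=. ; set -- $v4 ; unset IFS
printf '2a01:4f9:c010:919d:64::%02x%02x:%02x%02x\n' "$1" "$2" "$3" "$4"
# -> 2a01:4f9:c010:919d:64::c0a8:e902
```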

And a modern systemd service unit (/etc/systemd/system/tayga.service) for this guy. Note that I’m setting up a bunch of port forwards here! We are doing ’normal’ iptables port forwarding from the public v4 address to the translation addresses, then letting Tayga translate them v4->v6.

[Unit]
Description=Tayga NAT64

[Service]
#Run Tayga in the foreground (-d) so systemd can supervise it
ExecStart=/usr/sbin/tayga --config /etc/tayga.conf -d

#Enable IP forwarding before start (no need to modify)
ExecStartPre=/bin/bash -c "echo 1 > /proc/sys/net/ipv4/conf/all/forwarding"
ExecStartPre=/bin/bash -c "echo 1 > /proc/sys/net/ipv6/conf/all/forwarding"

#Do port forwarding after start / undo it after stop
#Make sure you have an Add and a Del rule for each
ExecStartPost=iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 23  -j DNAT --to-destination
ExecStopPost=iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 23  -j DNAT --to-destination
#You can of course copy/paste that as many times as you want

#Configure tunnel interface after start
ExecStartPost=ip link set nat64 up
ExecStartPost=ip addr add dev nat64
#Update with the IP prefix you gave Tayga, take the first address
ExecStartPost=ip addr add 2a01:4f9:c010:919d:64::1/96 dev nat64
#No need to undo these, as the nat64 interface and its config are destroyed when tayga exits

[Install]
WantedBy=multi-user.target


Of course, when we are done, we need to apply the changes and start tayga

  • systemctl daemon-reload every time you change the tayga.service file (not after tayga.conf though)
  • systemctl enable --now tayga to enable and start it
  • systemctl restart tayga if you change the service file or tayga.conf
  • systemctl status tayga to see how it’s going
  • journalctl -xeu tayga to see how it’s going in more detail

Option 4 - Wireguard

Here I’m going to create a simple tunnel between my OPNsense system at home and my VPS. Then I can port forward across the tunnel.

First, install the WireGuard tools on the VPS (apt update && apt install wireguard-tools -y). Then create the tunnel on your OPNsense system, generate a peer config for the VPS there, and copy it to the VPS (e.g. as /etc/wireguard/wg0.conf).
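As a rough sketch of the VPS side of wg0.conf (addresses are placeholders I picked for illustration; the keys come from the config you generated on OPNsense) - note the VPS listens and has no Endpoint, since the home side is behind CGNAT and must dial out:

```ini
[Interface]
# Placeholder key - use the private key from your generated config
PrivateKey = <VPS-private-key>
Address = 192.168.99.1/24
ListenPort = 51820

[Peer]
PublicKey = <OPNsense-public-key>
# OPNsense's tunnel address; it initiates the connection from behind CGNAT
AllowedIPs = 192.168.99.2/32
```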

Finally, we will add all of our iptables rules into the wg0.conf so they run automatically when the tunnel goes up and down:

# Need to enable IP forwarding
PostUp = /bin/bash -c "echo 1 > /proc/sys/net/ipv4/conf/all/forwarding"
PostUp = /bin/bash -c "echo 1 > /proc/sys/net/ipv6/conf/all/forwarding"

# Do port forwarding after start / before stop
# Make sure you have an Add and a Del rule for each
PostUp=iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 23  -j DNAT --to-destination
PostDown=iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 23  -j DNAT --to-destination
# Do Masquerade so packets return the right way
PostUp=iptables -t nat -A POSTROUTING -o wg0 -p tcp --dport 23 -d -j SNAT --to-source
PostDown=iptables -t nat -D POSTROUTING -o wg0 -p tcp --dport 23 -d -j SNAT --to-source
# You can of course copy/paste that as many times as you want

Bonus - ASCIIMation Test Setup

In my TCP example I set up an ASCIImation server. I initially copied a project from GitHub, found it didn’t have IPv6 support, and patched it myself. Then I went to make a fork / PR and realized that someone had already made a v6 fork, so just use their fork.

Here it is

# Install Git
apt update && apt install git -y
# Clone repo somewhere
cd /var/lib
git clone
# Create a system user+group for us to have less permissions
adduser --quiet --system --group vader

I also wrote a systemd unit (/etc/systemd/system/asciimation.service) for it, running as the user account we just created. It’s bound to port 23, and binding to low-numbered ports normally requires elevated privileges, but systemd can grant that for us.

[Unit]
Description=ASCIImation server

[Service]
User=vader
#Allow binding the low-numbered port without running as root
AmbientCapabilities=CAP_NET_BIND_SERVICE
#Defaults to bind on :: on port 23, so no need to specify
ExecStart=/usr/bin/python3 /var/lib/ascii-telnet-server/  -f /var/lib/ascii-telnet-server/sample_movies/sw1.txt