How I Automated my Network with Ansible
Today, in the next episode of my Personal AS series, I have added a third POP (and a fourth router), which means I need to configure a whole POP all over again. That’s a lot of work, so I automated it with Ansible! Follow along to also see my first use of NetBox for automation.
Contents⌗
- Video
- Ansible - Inventory
- Ansible - Software Setup
- Ansible - Network Config
- Ansible - Routing Config
Video⌗
Ansible Inventory⌗
Here’s the final version of my Ansible inventory/hosts.yml file. Long term, I’d ideally move all of these attributes to custom fields in NetBox, or even pull the whole inventory from NetBox, instead of maintaining this file separately.
routers:
hosts:
waw-pe1.apalrd.fi:
router_id: 1
host: waw-pe1
nat64: true
pop: waw
backup: true
peers:
- lax-pe1.apalrd.fi
- sjy-p1.apalrd.fi
- grr-e1.apalrd.fi
lax-pe1.apalrd.fi:
router_id: 2
host: lax-pe1
nat64: true
pop: lax
backup: true
peers:
- waw-pe1.apalrd.fi
- sjy-p1.apalrd.fi
- grr-e1.apalrd.fi
sjy-p1.apalrd.fi:
router_id: 3
host: sjy-p1
peers:
- waw-pe1.apalrd.fi
- lax-pe1.apalrd.fi
- grr-e1.apalrd.fi
grr-e1.apalrd.fi:
router_id: 4
host: grr-e1
nat64: true
pop: grr
backup: true
peers:
- sjy-p1.apalrd.fi
- waw-pe1.apalrd.fi
- lax-pe1.apalrd.fi
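Since everything in this file mirrors data that could live in NetBox, the long-term replacement is the netbox.netbox.nb_inventory dynamic inventory plugin. A minimal sketch, assuming the routers get a matching role in NetBox and the per-host vars become custom fields (I haven’t wired this up yet, so treat the names as placeholders):

```yaml
#netbox_inv.yml - NetBox dynamic inventory sketch
plugin: netbox.netbox.nb_inventory
api_endpoint: https://docs.peach.apalrd.fi
#the token is read from the NETBOX_TOKEN environment variable if omitted here
validate_certs: false
#group VMs by their NetBox role, so a "router" role yields a routers group
group_by:
  - device_roles
#map NetBox custom fields onto the host vars the playbooks expect (assumed field names)
compose:
  router_id: custom_fields.router_id
  pop: custom_fields.pop
```

Then ansible-playbook -i netbox_inv.yml pulls the host list live from NetBox instead of this static file.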
Ansible Software Setup⌗
This is the playbook that installs all of the software (running it again will update everything!), enables backups, etc.
- hosts: "*"
tasks:
#Global package update
- name: apt upgrade
apt:
update_cache: yes
upgrade: yes
#Packages required to install package repos
- name: Pre required packages
ansible.builtin.apt:
pkg:
- apt-transport-https
- ca-certificates
- gpg
#Backup to PBS
- name: Proxmox apt key
ansible.builtin.get_url:
url: https://enterprise.proxmox.com/debian/proxmox-archive-keyring-trixie.gpg
dest: /usr/share/keyrings/proxmox-archive-keyring.gpg
when: backup is defined and backup
- name: Proxmox client repository
ansible.builtin.apt_repository:
repo: "deb [arch=amd64 signed-by=/usr/share/keyrings/proxmox-archive-keyring.gpg] http://download.proxmox.com/debian/pbs-client {{ ansible_distribution_release }} main"
state: present
when: backup is defined and backup
- name: Proxmox Backup Client install from packages
ansible.builtin.apt:
update_cache: yes
name: proxmox-backup-client
when: backup is defined and backup
- name: Copy Proxmox Backup Service
ansible.builtin.template:
src: proxmox-backup.service
dest: /etc/systemd/system/
owner: root
group: root
mode: '0644'
when: backup is defined and backup
- name: Copy Proxmox Backup Timer
ansible.builtin.template:
src: proxmox-backup.timer
dest: /etc/systemd/system/
owner: root
group: root
mode: '0644'
when: backup is defined and backup
- name: Enable backup timer
ansible.builtin.systemd_service:
name: proxmox-backup.timer
enabled: "{{ backup | default(false)}}"
daemon_reload: true
#Configuration for bird3
- name: bird3
block:
#Package install steps
- name: bird3 apt key
ansible.builtin.get_url:
url: https://pkg.labs.nic.cz/gpg
dest: /usr/share/keyrings/cznic-labs-pkg.gpg
- name: bird3 repo
ansible.builtin.apt_repository:
repo: "deb [signed-by=/usr/share/keyrings/cznic-labs-pkg.gpg] https://pkg.labs.nic.cz/bird3 {{ ansible_distribution_release }} main"
state: present
- name: Bird3 install from packages
ansible.builtin.apt:
update_cache: yes
name: bird3
#Tayga
- name: Tayga
when: nat64 is defined and nat64
block:
- name: tayga required packages
ansible.builtin.apt:
pkg:
- build-essential
- git
- name: Git checkout
ansible.builtin.git:
repo: 'https://github.com/apalrd/tayga'
dest: /root/tayga
force: true
- name: Compile
ansible.builtin.shell:
chdir: /root/tayga
cmd: make && make install WITH_SYSTEMD=1 LIVE=1
- name: Disable default nat64 instance
ansible.builtin.systemd_service:
name: tayga@default
enabled: False
state: stopped
- name: Tayga local service
ansible.builtin.systemd_service:
name: tayga@local
enabled: True
state: started
- name: Tayga wkpf service
ansible.builtin.systemd_service:
name: tayga@wkpf
enabled: True
state: started
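One caveat in the playbook above: the Compile step runs make on every play, so it is never idempotent. A sketch of a guard using the shell module’s creates argument — the installed binary path here is an assumption about where make install puts it:

```yaml
- name: Compile
  ansible.builtin.shell:
    chdir: /root/tayga
    cmd: make && make install WITH_SYSTEMD=1 LIVE=1
    #skip the build entirely if the binary already exists (path assumed)
    creates: /usr/local/sbin/tayga
```

The trade-off is that pulling a newer commit then requires deleting the binary to force a rebuild.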
Here’s my Proxmox Backup systemd service/timer, in case you want to use those too:
#/etc/systemd/system/proxmox-backup.service
[Unit]
Description=Run Backup to Proxmox Backup Server
After=network-online.target
[Service]
Environment=PBS_PASSWORD=your-api-key-here
Environment=PBS_REPOSITORY=user@pbs!apikey@pbs.apalrd.fi:repo
Type=oneshot
ExecStart=proxmox-backup-client backup root.pxar:/
[Install]
WantedBy=default.target
#/etc/systemd/system/proxmox-backup.timer
[Unit]
Description=Backup System Daily
RefuseManualStart=no
RefuseManualStop=no
[Timer]
#Run 540 seconds after boot for the first time
OnBootSec=540
#Run at midnight UTC
OnCalendar=*-*-* 00:00:00
Unit=proxmox-backup.service
[Install]
WantedBy=timers.target
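One thing to note with OnCalendar: if the machine happens to be off at midnight, that run is simply skipped. Setting Persistent=true (not in my config above, but worth considering) makes systemd fire the missed run as soon as the timer is next active:

```ini
[Timer]
OnBootSec=540
OnCalendar=*-*-* 00:00:00
#catch up on a run that was missed due to downtime
Persistent=true
```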
Ansible Network Config⌗
Here’s the two Ansible playbooks to configure systemd networkd:
#pop_conf_net.yml
- hosts: "*"
collections:
- netbox.netbox
- ansible.utils
vars:
netbox_url: "https://docs.peach.apalrd.fi"
netbox_token: 0DjI3ZTMQdQPEbti07i6KYnan8jOi0f2K9XS0Qm2
tasks:
#Query peer primary addresses
- name: Build peer_hosts list from inventory
set_fact:
peer_hosts: "{{ peer_hosts | default([]) + [hostvars[item].host] }}"
loop: "{{ peers }}"
when: hostvars[item].host is defined
- name: Query potential peers from Netbox VM list
set_fact:
all_vms: >-
{{
lookup(
'netbox.netbox.nb_lookup',
'virtual-machines',
api_endpoint=netbox_url,
token=netbox_token,
validate_certs=False
)
}}
- name: Build IPv6 map for requested VMs
set_fact:
peer_ipv6_map: >-
{{
peer_ipv6_map | default({}) |
combine({
item.value.name:
item.value.primary_ip6.address
| ansible.utils.ipaddr('address')
})
}}
loop: "{{ all_vms }}"
no_log: true
when:
- item.value.name in peer_hosts
- item.value.primary_ip6 is defined
- name: Fail if any peer is missing
fail:
msg: "No primary IPv6 found in NetBox for: {{ peer_hosts | difference(peer_ipv6_map.keys() | list) }}"
when: peer_hosts | difference(peer_ipv6_map.keys() | list) | length > 0
- name: Show peer IPv6 addresses
debug:
msg: "Primary IPv6 for {{ item }} is {{ peer_ipv6_map[item] }}"
loop: "{{peer_hosts}}"
- name: Get my own IPv6 address
set_fact:
my_ip: >-
{{
(all_vms | selectattr('value.name','equalto',host) | list)[0].value.primary_ip6.address
| ansible.utils.ipaddr('address')
}}
- name: Show my own IPv6 address
debug:
msg: "My IP is {{my_ip}}"
#Configuration for Tunnel Interfaces
- name: Tunnel Interfaces
ansible.builtin.include_tasks:
file: pop_conf_net_tunnel.yml
loop: '{{ peers }}'
loop_control:
loop_var: peer
#Configuration for Loopback Interface
- name: Loopback Addresses
notify: Reload Networkd
ansible.builtin.template:
src: loopback.network
dest: /etc/systemd/network/loopback.network
owner: root
group: root
mode: '0644'
#Restart handlers
handlers:
- name: Reload Networkd
ansible.builtin.shell:
cmd: networkctl reload
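These playbooks assume the netbox.netbox and ansible.utils collections are installed. A requirements file (the filename is just the usual Ansible convention) keeps that reproducible:

```yaml
#requirements.yml
collections:
  - name: netbox.netbox
  - name: ansible.utils
```

Install them with ansible-galaxy collection install -r requirements.yml.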
And a sub-playbook (really a task file, included once per peer) to set up a single tunnel:
#pop_conf_net_tunnel.yml
#Ansible script to manage one tunnel interface
- name: Tunnel to {{peer}}
block:
- name: Interface Template Netdev to {{peer}}
notify: Reload Networkd
ansible.builtin.template:
src: tunnel.netdev
dest: /etc/systemd/network/tun_{{hostvars[peer]['host']|replace("-", "_")}}.netdev
owner: root
group: root
mode: '0644'
- name: Interface Template Network to {{peer}}
notify: Reload Networkd
ansible.builtin.template:
src: tunnel.network
dest: /etc/systemd/network/tun_{{hostvars[peer]['host']|replace("-", "_")}}.network
owner: root
group: root
mode: '0644'
They also rely on two template files, one for the netdev and one for the network:
#tunnel.netdev
#Tunnel to {{peer}}
[NetDev]
Name=tun_{{hostvars[peer]['host']|replace("-", "_")}}
Kind=ip6tnl
[Tunnel]
Mode=ip6ip6
Local={{ my_ip }}
Remote={{ peer_ipv6_map[hostvars[peer]['host']] }}
TTL=64
Independent=True
#tunnel.network
#Tunnel to {{peer}}
[Match]
Name=tun_{{hostvars[peer]['host']|replace("-", "_")}}
[Network]
Address=fe80::{{router_id}}/64
ConfigureWithoutCarrier=yes
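The loopback.network template referenced by the playbook isn’t shown above; reconstructed from the /128 loopback route advertised in the BIRD config, it looks something like this (a sketch, so double-check against your own addressing):

```ini
#loopback.network (sketch - contents inferred from the addressing used elsewhere)
[Match]
Name=lo
[Network]
#router loopback address, matching the /128 advertised into the IGP
Address=2a0f:b240:1000::{{router_id}}/128
```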
Ansible Routing Config⌗
Here’s the playbook that generates my templated BIRD (and Tayga) config:
- hosts: "*"
collections:
- netbox.netbox
- ansible.utils
vars:
netbox_url: "https://docs.peach.apalrd.fi"
netbox_token: 0DjI3ZTMQdQPEbti07i6KYnan8jOi0f2K9XS0Qm2
netbox_validate_certs: False
tasks:
#Build prefix lists
- name: Query NetBox for bgp-adv-dfz-ac prefixes
set_fact:
dfz_ac_prefixes: >-
{{
query(
'netbox.netbox.nb_lookup',
'prefixes',
api_endpoint=netbox_url,
token=netbox_token,
validate_certs=netbox_validate_certs | default(true),
api_filter="tag=bgp-adv-dfz-ac"
)
| map(attribute='value')
| map(attribute='prefix')
| list
}}
when: pop is defined
- name: Show prefixes for AC
debug:
msg: "Prefix for AC: {{ item }}"
loop: "{{dfz_ac_prefixes}}"
when: pop is defined
- name: Query NetBox for pop specific prefixes
set_fact:
dfz_pop_prefixes: >-
{{
query(
'netbox.netbox.nb_lookup',
'prefixes',
api_endpoint=netbox_url,
token=netbox_token,
validate_certs=netbox_validate_certs | default(true),
api_filter="tag=bgp-adv-dfz-" ~ pop
)
| map(attribute='value')
| map(attribute='prefix')
| list
}}
when: pop is defined
- name: Show prefixes for POP
debug:
msg: "Prefix for POP: {{ item }}"
loop: "{{dfz_pop_prefixes}}"
when: pop is defined
#Configuration for bird3
- name: Bird3 copy common defs
notify: Reload Bird
ansible.builtin.template:
src: bird-defs.conf
dest: /etc/bird/defs.conf
owner: root
group: root
mode: '0644'
- name: Bird3 template IGP peers
notify: Reload Bird
ansible.builtin.template:
src: bird-igp.conf
dest: /etc/bird/igp.conf
owner: root
group: root
mode: '0644'
- name: Bird3 template iBGP peers
notify: Reload Bird
ansible.builtin.template:
src: bird-ibgp.conf
dest: /etc/bird/ibgp.conf
owner: root
group: root
mode: '0644'
- name: Bird3 template exports
notify: Reload Bird
ansible.builtin.template:
src: bird-exports.conf
dest: /etc/bird/exports.conf
owner: root
group: root
mode: '0644'
- name: Bird3 node-specific file
notify: Reload Bird
ansible.builtin.copy:
src: bird-{{host}}.conf
dest: /etc/bird/bird.conf
owner: root
group: root
mode: '0644'
#Configuration for Tayga
- name: Tayga configuration for local nat
notify: Restart Tayga Local
when: nat64 is defined and nat64
ansible.builtin.template:
src: tayga_local.conf
dest: /etc/tayga/local.conf
owner: root
group: root
mode: '0644'
- name: Tayga configuration for wkpf nat
notify: Restart Tayga Wkpf
when: nat64 is defined and nat64
ansible.builtin.template:
src: tayga_wkpf.conf
dest: /etc/tayga/wkpf.conf
owner: root
group: root
mode: '0644'
#Restart handlers
handlers:
- name: Reload Bird
ansible.builtin.shell:
cmd: birdc configure
- name: Restart Tayga Local
ansible.builtin.systemd_service:
name: tayga@local
state: restarted
- name: Restart Tayga Wkpf
ansible.builtin.systemd_service:
name: tayga@wkpf
state: restarted
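The Reload Bird handler applies whatever is on disk, syntax errors included. BIRD also has a configure check command that parses the config without applying it; a hedged variant of the handler (verify that birdc’s exit status actually reflects the check result on your version before relying on it):

```yaml
handlers:
  - name: Reload Bird
    ansible.builtin.shell:
      #parse the new config first; only apply it if the check passes
      cmd: birdc configure check && birdc configure
```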
It depends on a bunch of other files, which are Jinja templates:
#bird-defs.conf
# My AS
define MY_AS = 201726;
# IP addresses of routers
define IP_MINE = 2a0f:b240:1000::{{router_id}};
define IP_WAW_PE1 = 2a0f:b240:1000::1;
define IP_LAX_PE1 = 2a0f:b240:1000::2;
define IP_SJY_P1 = 2a0f:b240:1000::3;
define IP_GRR_E1 = 2a0f:b240:1000::4;
# Where the route entered
define BGP_ENTER_MINE = (ro, MY_AS, {{router_id}});
define BGP_ENTER_WAW = (ro, MY_AS,1);
define BGP_ENTER_LAX = (ro, MY_AS,2);
define BGP_ENTER_SJY = (ro, MY_AS,3);
define BGP_ENTER_GRR = (ro, MY_AS,4);
# Redistribute this route to DFZ
define BGP_TO_DFZ_MINE = (ro, MY_AS, {{ 420+router_id }});
define BGP_TO_DFZ_ALL = (ro, MY_AS,420);
define BGP_TO_DFZ_WAW = (ro, MY_AS,421);
define BGP_TO_DFZ_LAX = (ro, MY_AS,422);
define BGP_TO_DFZ_GRR = (ro, MY_AS,424);
# IGP-related
define BGP_TO_IGP = (ro, MY_AS,667);
define BGP_TO_HELL = (ro, MY_AS,666);
# This route is via nat64
define BGP_NAT64 = (ro, MY_AS,64);
#bird-exports.conf
# This file is generated by a template!
# Routes for export
protocol static adv_v6 {
ipv6;
# My own loopback address goes into the IGP
route 2a0f:b240:1000::{{router_id}}/128 via "lo" {
bgp_ext_community.add(BGP_TO_IGP);
};
{% if pop is defined %}
{% for prefix in dfz_ac_prefixes %}
# Routes which will be advertised as anycast
route {{prefix}} blackhole {
bgp_ext_community.add(BGP_TO_DFZ_ALL);
bgp_ext_community.add(BGP_ENTER_MINE);
};
{% endfor %}
# Routes which will be advertised from this pop
{% for prefix in dfz_pop_prefixes %}
route {{prefix}} blackhole {
bgp_ext_community.add(BGP_TO_DFZ_MINE);
bgp_ext_community.add(BGP_ENTER_MINE);
};
{% endfor %}
{% endif %}
{% if nat64 is defined and nat64 %}
# Routes for nat64
route 64:ff9b:1:{{router_id}}::/64 via "nat64" {
bgp_ext_community.add(BGP_NAT64);
bgp_ext_community.add(BGP_ENTER_MINE);
};
route 64:ff9b::/64 via "nat64wkpf" {
bgp_ext_community.add(BGP_NAT64);
bgp_ext_community.add(BGP_ENTER_MINE);
};
{% endif %}
};
#bird-igp.conf
# This file is generated by a template!
# It configures the IGP (OSPFv3) functionality for a given node
protocol ospf v3 ospf6 {
ipv6 {
import all;
export filter {
# only send routes which we specifically want going into the IGP
# and which are static routes from this router, not via iBGP
if (proto = "adv_v6" && bgp_ext_community ~ [ BGP_TO_IGP ]) then accept;
reject;
};
};
# All OSPF is in area zero
area 0 {
{% for peer in peers %}
# Interface for neighbor {{peer}}
interface "tun_{{hostvars[peer]['host']|replace("-", "_")}}" {
type ptmp;
check link no;
cost 10;
hello 5;
dead 20;
neighbors {
fe80::{{hostvars[peer]['router_id']}};
};
};
{% endfor %}
};
};
#bird-ibgp.conf
# This file is generated by a template!
# It configures iBGP peerings for a given node (full mesh)
# BGP locally within AS
filter to_bgp_local {
#For routes which we advertised
if proto = "adv_v6" then {
bgp_origin = ORIGIN_INCOMPLETE;
bgp_local_pref = 100;
bgp_next_hop = IP_MINE;
accept;
}
#For routes which came from the DFZ, set our own nexthop
if proto = "dfz_v6" || proto = "dfz_v4" then {
bgp_med = 100;
bgp_next_hop = IP_MINE;
accept;
}
#For routes which came from the anywhere else, redistribute as-is
accept;
};
{% for peer in peers %}
protocol bgp bgp_local_{{hostvars[peer]['host']|replace("-", "_")}} {
local as MY_AS;
neighbor 2a0f:b240:1000::{{hostvars[peer]['router_id']}} as MY_AS;
multihop;
source address 2a0f:b240:1000::{{router_id}};
ipv6 {
import filter from_bgp_local;
export filter to_bgp_local;
};
};
{% endfor %}
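The import filter from_bgp_local is referenced above but not shown; a minimal placeholder that at least makes the config parse (a real deployment would likely filter more carefully):

```
filter from_bgp_local {
  #accept everything learned from iBGP peers; tighten as needed
  accept;
};
```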
I also use Tayga; here is the config for the local instance (tayga@local):
#tun device
tun-device nat64
#prefix
prefix 64:ff9b:1:{{router_id}}::/96
ipv4-addr 192.0.0.8
#dynamic pool is apipa space
dynamic-pool 169.254.0.0/20
udp-cksum-mode fwd
log drop reject icmp self dyn
tun-up yes
tun-route 169.254.0.0/20
tun-route 64:ff9b:1:{{router_id}}::/64
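The wkpf instance is nearly identical. A sketch reconstructed from the nat64wkpf route in the BIRD exports - the ipv4-addr and dynamic pool here are assumptions, picked so the two instances don’t overlap:

```
#tun device for the well-known-prefix instance
tun-device nat64wkpf
#the NAT64 well-known prefix (RFC 6052)
prefix 64:ff9b::/96
ipv4-addr 192.0.0.9
#separate dynamic pool from the local instance (value assumed)
dynamic-pool 169.254.16.0/20
udp-cksum-mode fwd
log drop reject icmp self dyn
tun-up yes
tun-route 169.254.16.0/20
tun-route 64:ff9b::/64
```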
