LXC – Networking
5 March, 2021, by Administrator


I’ve been hearing a lot about virtualization lately, with Docker at the top of the list. I wanted to understand these technologies a bit better, so I started with LXC, which stands for Linux Containers; the goal of the project was (and still is) to provide tools for lightweight virtualization.

Contrary to machine virtualization, which creates a whole isolated system (VMware and VirtualBox, for example), LXC uses containers: they share portions of the host kernel and even parts of the host operating system, which means containers have less overhead than ‘heavy’ virtualization. On the other hand, containers do not guarantee as much isolation (and hence security) as machine virtualization does. As always, both have pros and cons, but that is not the topic of this article.

When I started this journey into virtualization, I had no background in system administration or networking, and even though setting up an LXC container is really easy, I had a lot of trouble connecting the containers to the internet. That is the topic of this article.

Requirements

You will obviously need LXC; I’ll let you use your favorite package manager to get the latest version. We will then create an Ubuntu container with the following command (on Arch, there was an issue with debootstrap and the Ubuntu keyrings; I had to run yaourt -S ubuntu-keyring and create a symlink for ‘gpg1v’: sudo ln -s /usr/bin/gpgv /usr/bin/gpg1v to solve it):

sudo lxc-create --name natContainer -t ubuntu

This will simply create a container named natContainer with Ubuntu on it. We do not need to start it or do anything with it for now, but if you want more details on LXC basics, I recommend the DigitalOcean guide.
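To check that the creation went well, you can list the containers known to the host; at this point natContainer should show up as STOPPED:

sudo lxc-ls --fancy # Lists containers with their state and IP addresses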

By default, containers do not have any internet connection; we will need to create a bridge interface between them and the host so they can be reached from the ‘outside world’. In fact, we aren’t going to create a ‘real’ bridge attached to a physical interface (which would behave like a virtual switch); instead, we will be doing NAT for the container (see the ServerFault post about bridges and NAT).

The main advantage of this technique is that the bridge is not linked to any existing interface, and since wireless interfaces can’t (as far as I know) be bridged, it will work the same way almost anywhere.

As I said, we need a bridge interface; we’ll see two ways of creating one:

  • The first one with netctl, a profile-based network manager
  • The other one with systemd-networkd, the networking tool provided by systemd to set up network configuration, including for containers (which is exactly our case)

I’m using my server as the host; its main interface is eth0 and its IP is my public IP, which is 5.196.95.238 at the time I’m writing this article, but we don’t really need it at all (as we are not doing a bridged connection). What we want is to create a subnet on 192.168.100.0/24 and connect our container(s) to it (containers will behave like computers in a local network).

Our previously created container will be at 192.168.100.10 (we are only interested in static configuration) and will run an SSH server on port 22 (with the default ubuntu/ubuntu login); the goal of this article is to establish an SSH connection to it from my laptop. Here is the network schema (as you can see, the NAT interface makes my host behave like a router).
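Roughly, in text form (using the interface names we will create below):

laptop ---internet---> eth0 (5.196.95.238)
                         |   host, doing NAT / routing
                       lxcbr0 (192.168.100.1)
                         |
                       veth0 (192.168.100.10)   natContainer, sshd on port 22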

If you already have some networking knowledge, you might want to skip the next two sections; they are meant to be a very brief introduction to some network-related elements we’ll need afterwards.

NAT Bridge

A NAT bridge is a virtual interface which isn’t connected to any existing interface (sometimes described as a ‘standalone interface’); it basically creates a private subnet to which we can connect our VMs and containers. In fact, NATting is really common in home networks; it is one of the main purposes of your router. By default, devices on such a private network can’t reach (or be reached from) the outside world, but they can reach the other machines on the same private network.

When a NAT is set up (on the gateway), all outgoing packets are ‘parsed’ by the NAT and inserted into a lookup table; this table stores the source IP and port and the destination IP and port. With that, when the NAT receives the response, it can redirect the packet to the correct device on the private network. NAT itself is not the topic of this article, so, as always, if you’re interested, there are a lot of great resources out there.
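If you are curious, you can peek at this lookup table on the host once the NAT is in place (see the Routing section below); this is just one way to do it, and it assumes the conntrack-tools package is installed:

sudo conntrack -L --src-nat # Connections that went through our source NAT
sudo iptables -t nat -L -n -v # The NAT rules themselves, with packet counters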

One thing you need to keep in mind is that devices behind a NAT can’t be reached from the internet (by default), which is the main difference with the ‘Host Bridge’.

Host Bridge

Contrary to the NAT bridge, this type of bridge is actually ‘connected’ to an existing physical interface (eth0 in my case). Devices connected to this kind of bridge behave as if they were on the same network as the host: they can reach the other machines on the network, but also the internet. If a public IP is assigned to a device on the network, it will even be accessible from the internet, without adding any routing! In this case the bridge behaves exactly as a switch does. In this article we will focus on the NAT bridge (there might be another article about host bridges and virtualization soon).
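For reference, here is a minimal sketch of what creating a host bridge with iproute2 could look like (not used in this article; the interface names are just examples, and remember this only works with a wired interface):

sudo ip link add name br0 type bridge # Create the bridge itself
sudo ip link set eth0 master br0 # Enslave the physical interface to it
sudo ip link set br0 up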

Creating a bridge using netctl

As I said, netctl is a profile-based network manager; to create our new bridge interface, we will add a new profile named lxcbridge in /etc/netctl/:

## sudo vim /etc/netctl/lxcbridge
Description="LXC Bridge"
Interface=lxcbr0
Connection=bridge
IP=static
Address='192.168.100.1/24'
SkipForwardingDelay=yes

This profile will create the lxcbr0 bridge interface with a static IP. To bring the interface up, we still need to load the profile:

sudo netctl start lxcbridge
sudo netctl enable lxcbridge # To start the interface at boot

You can check the interface’s parameters with a simple ip addr:

74: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether 3e:11:8b:90:0e:a0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 192.168.100.255 scope global lxcbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::3c11:8bff:fe90:ea0/64 scope link 
       valid_lft forever preferred_lft forever

Creating a bridge using networkd

Creating a new interface with networkd requires adding a file to /etc/systemd/network/, namely lxcbridge.netdev:

[NetDev]
Name=lxcbr0
Kind=bridge

This file only describes the virtual network device; we then need to configure it using a second file, lxcbridge.network, in the same directory:

[Match]
Name=lxcbr0

[Network]
Address=192.168.100.1/24
Gateway=192.168.100.1
# Very important: without IP forwarding there is no routing and the network stays isolated
IPForward=ipv4

We then need to bring this interface up:

sudo ip link del dev lxcbr0 # We make sure to delete the interface if it already exists
sudo systemctl stop systemd-networkd # We could also restart it, but I've had issues with restarting services
sudo systemctl start systemd-networkd

You should now be able to see your new interface:

# ip addr
128: lxcbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether fa:d3:06:37:22:01 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 192.168.100.255 scope global lxcbr0
       valid_lft forever preferred_lft forever
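You can also ask networkd itself about the interface (networkctl ships with systemd):

networkctl list # lxcbr0 should appear in the list of links
networkctl status lxcbr0 # Shows the state and addresses of the bridge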

LXC Config

To enable the network in our LXC container, we’ll have to add some lines to our container’s configuration. The configuration file for the container is located at /var/lib/lxc/natContainer/config:

# sudo vim /var/lib/lxc/natContainer/config
lxc.network.type = veth
lxc.network.name = veth0
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.veth.pair = veth0-natContainer
lxc.network.ipv4 = 192.168.100.10/24
lxc.network.ipv4.gateway = 192.168.100.1

The network type, veth, specifies that we want a virtual Ethernet pair. The name is the adapter’s name inside the LXC container.

The link is the host bridge it attaches to (the one we created earlier), and the pair key is the name of the LXC interface as seen from the host’s point of view. We then set a static IP for our container (192.168.100.10) as well as the gateway (the host in this case).
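Once the container is started (see the next section), a quick sanity check from the host is to list the interfaces enslaved to the bridge; the host side of the container’s veth pair should show up there:

ip link show master lxcbr0 # Should list the host side of the container's veth pair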

Routing

With the previous configuration, if you start the container, it will be inside the private network, meaning that every container can ping the others (and the host).

## On the host
sudo lxc-start --name natContainer
sudo lxc-console --name natContainer
## Log in to the container
ping 192.168.100.1

But unfortunately, we still can’t reach the internet:

## ping google.fr
ping: unknown host google.fr

We need to add some rules on our host to enable packet forwarding; we will use iptables for that:

sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE # Masquerade (source-NAT) the traffic leaving through eth0, so replies find their way back
sudo sysctl -w net.ipv4.ip_forward=1 # Enable IPv4 forwarding
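Note that both settings are lost on reboot. On Arch, one way (among others) to make them persistent is roughly the following; paths and service names may differ on your distribution:

echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/40-ip-forward.conf
sudo sh -c 'iptables-save > /etc/iptables/iptables.rules' # Loaded at boot by Arch's iptables.service
sudo systemctl enable iptables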

You’ll also need to set up a DNS server on the gateway (the host in our case); I’ll use dnsmasq for this.

sudo pacman -S dnsmasq
sudo dnsmasq
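Since we gave the container a static address, it also has to know where to find that DNS server; a minimal way to do it (assuming nothing else manages the file inside the container) is to point its resolver at the gateway:

## Inside the container
echo 'nameserver 192.168.100.1' | sudo tee /etc/resolv.conf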

That’s it! You should now be able to reach any IP from inside your container: the NAT is working! However, even though the NAT works, we can’t reach the container from the internet yet, meaning that we still can’t establish an SSH connection to it; we need to add a new rule to our iptables to redirect incoming traffic to the container:

sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 31000 -j DNAT --to 192.168.100.10:22 # Forward TCP port 31000 on the host to port 22 on the container
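If your host’s FORWARD chain policy is not ACCEPT, you may also need to explicitly allow this forwarded traffic; something along these lines should do, to be adjusted to your own firewall rules:

sudo iptables -A FORWARD -i eth0 -o lxcbr0 -p tcp -d 192.168.100.10 --dport 22 -j ACCEPT # Incoming SSH, after the DNAT
sudo iptables -A FORWARD -i lxcbr0 -o eth0 -j ACCEPT # Replies and outgoing traffic from the container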

From my laptop, I should now be able to connect to my server’s IP (5.196.95.238) on port 31000 and get an SSH shell on the container!

ssh ubuntu@5.196.95.238 -p 31000
ubuntu@5.196.95.238's password: 
ubuntu@ex0ns:~$ whoami
ubuntu
ubuntu@ex0ns:~$ 

This is the end of this quick overview of LXC and networking.

PS: I finally have a PGP key, see my contact page.

