- Provides complete isolation for containers on a host
- Provides the service that is running inside the container not only to other co-located containers, but also to remote hosts
Docker Container Networks
To provide the service that is running inside a container in a secure manner, it is important to have control over the networks your applications are running on. To see how container networks achieve that, we will examine them from the following perspectives:
- Network modes
- Default networks vs user-defined networks
- Packet Forwarding and Filtering (Netfilter)
- Port mappings
- Bridge (veth Interface)
- DNS Configuration
- IP address
- Providing ways to configure any of the container's network interfaces to support services on different containers
- Providing ways to expose and publish a port on the container (also mapping it to a port on the host)
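As a concrete example of the last point, publishing a container port can be sketched with the docker CLI (the nginx image, container name, and port numbers here are illustrative):

```shell
# Run an nginx container (illustrative image) and publish container port 80
# on host port 8080; Docker installs the corresponding NAT rule on the host.
docker run -d --name web -p 8080:80 nginx

# Ask Docker which host address/port container port 80 is mapped to.
docker port web 80        # e.g. 0.0.0.0:8080

# The service is now reachable from the host (and from remote hosts) on 8080.
curl -s http://localhost:8080 >/dev/null && echo "reachable"
```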
When you install Docker, it creates three networks automatically: bridge, host, and none [34,37].
bridge
- Represents the docker0 network (a virtual Ethernet bridge) present in all Docker installations.
- Each container's network interface is attached to the bridge, and network address translation (NAT) is used when containers need to make themselves visible to the Docker host and beyond.
- Unless you specify otherwise with the docker run --net=&lt;NETWORK&gt; option, the Docker daemon connects containers to this network by default.
- Docker does not support automatic service discovery on the default bridge network.
- Supports the use of port mapping and docker run --link to allow communications between containers in the docker0 network.
host
- Adds the container to the host's network stack. You'll find the network configuration inside the container is identical to the host's.
- Because containers deployed in host mode share the host's network stack (and thus its IP address), you can't run the same service on the same port in different containers on the same host.
- In this mode, you no longer get port mapping.
none
- Tells Docker to put the container in its own network stack but not to configure any of the container's network interfaces.
- This allows you to create a custom network configuration.
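The three modes above can be exercised directly from the command line (busybox is used here purely as an illustrative image):

```shell
# Default: attach to the docker0 bridge; the container gets its own eth0 + IP.
docker run --rm --net=bridge busybox ip addr show eth0

# Host mode: share the host's network stack; 'ip addr' lists the host's interfaces.
docker run --rm --net=host busybox ip addr

# None mode: own network stack, but only the loopback interface is configured.
docker run --rm --net=none busybox ip addr
```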
Default Networks vs User-Defined Networks
Besides the default networks, you can create your own user-defined networks that better isolate containers. Docker provides some default network drivers for creating these networks. The easiest user-defined network to create is a bridge network. This network is similar to the historical, default docker0 network. After you create the network, you can launch containers on it using the docker run --net=&lt;NETWORK&gt; option.
You can read [34, 37] for more details.
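A minimal sketch of creating and using a user-defined bridge network (the network name my_bridge and the busybox container are illustrative):

```shell
# Create a user-defined bridge network (the name 'my_bridge' is arbitrary).
docker network create --driver bridge my_bridge

# Launch a container attached to that network instead of the default bridge.
docker run -d --name db --net=my_bridge busybox sleep 3600

# Inspect the network to see its subnet, gateway, and attached containers.
docker network inspect my_bridge
```

Unlike the default bridge, user-defined bridge networks provide automatic DNS-based service discovery between containers attached to them.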
Packet Forwarding and Filtering
Whether a container can talk to the outside world is governed by two factors:
- Whether the host machine is forwarding its IP packets
- In order for a remote host to consume a container's service, the Docker host must act like a router, forwarding traffic to the network associated with the ethernet bridge.
- IP packet forwarding is governed by the ip_forward kernel parameter (net.ipv4.ip_forward) on the Docker host
- Many using Docker will want ip_forward to be on, to at least make communication possible between containers and the wider world.
- Whether the host's iptables rules permit that particular connection
- Docker will never make changes to your host's iptables rules if you set --iptables=false when the daemon starts. Otherwise the Docker server will append forwarding rules to the DOCKER filter chain.
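On a Linux host, the forwarding setting can be checked (and, as root, changed) with sysctl:

```shell
# Check whether the host currently forwards IPv4 packets (1 = on, 0 = off).
cat /proc/sys/net/ipv4/ip_forward

# To turn forwarding on (requires root):
#   sysctl -w net.ipv4.ip_forward=1
# The Docker daemon's default of --ip-forward=true enables this automatically.
```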
Netfilter offers various functions and operations for packet filtering, network address translation (NAT), and port translation. Together these provide the functionality required for directing packets through a network, as well as the ability to prohibit packets from reaching sensitive locations within it.
Bridge (veth Interface)
To show information on the bridge and its attached ports (or interfaces), you do:
# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.56847afe9799 no veth33957e0
# ip link list
3: docker0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 9000 qdisc noqueue state UP
    link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
11: veth33957e0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 9000 qdisc noqueue master docker0 state UP
    link/ether 3e:01:d1:0f:24:b8 brd ff:ff:ff:ff:ff:ff
13: veth6cee79b: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 9000 qdisc noqueue master docker0 state UP
    link/ether fa:aa:84:15:82:5a brd ff:ff:ff:ff:ff:ff
Note that there are two containers on the host, hence two veth interfaces are shown. These virtual interfaces work in pairs:
- eth0 in the container
- Will have an IPv4 address
- For all purposes, it looks like a normal interface.
- veth interface in the host
- Won't have an IPv4 address
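One way to match a host-side veth to its in-container peer is via interface indexes; a sketch (the container name mycontainer is hypothetical):

```shell
# Inside a running container (name 'mycontainer' is hypothetical), eth0 is
# reported as eth0@ifN, where N is the ifindex of its host-side veth peer.
docker exec mycontainer ip -o link show eth0

# On the host, list the veth interfaces (enslaved to docker0) with their indexes.
ip -o link show type veth
```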
DNS Configuration
How can Docker supply each container with a hostname and DNS configuration without having to build a custom image with the hostname written inside? Its trick is to overlay three crucial /etc files inside the container with virtual files where it can write fresh information. You can see this by running mount inside a container:
/dev/mapper/vg--docker-dockerVolume on /etc/resolv.conf type btrfs ...
/dev/mapper/vg--docker-dockerVolume on /etc/hostname type btrfs ...
/dev/mapper/vg--docker-dockerVolume on /etc/hosts type btrfs ...
This arrangement allows Docker to do clever things like keep resolv.conf up to date across all containers when the host machine receives new configuration over DHCP later.
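Because these files are bind-mounted rather than baked into the image, they can be set per container at run time (the DNS server address and names below are only examples):

```shell
# Override the container's DNS server and search domain at run time;
# Docker writes these into the container's virtual /etc/resolv.conf.
docker run --rm --dns 8.8.8.8 --dns-search example.com busybox cat /etc/resolv.conf

# Set the hostname; Docker writes it into /etc/hostname and /etc/hosts.
docker run --rm --hostname myapp busybox cat /etc/hostname
```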
With DHCP, computers request IP addresses and networking parameters automatically from a DHCP server, reducing the need for a network administrator or a user to configure these settings manually. For resource-constrained routers and firewalls, dnsmasq is often used for its small footprint. Dnsmasq provides network infrastructure for small networks: DNS, DHCP, router advertisement, and network boot.
References
- The TCP Maximum Segment Size and Related Topics
- Jumbo/Giant Frame Support on Catalyst Switches Configuration Example
- Ethernet Jumbo Frames
- IP Fragmentation: How to Avoid It? (Xml and More)
- The Great Jumbo Frames Debate
- Resolve IP Fragmentation, MTU, MSS, and PMTUD Issues with GRE and IPSEC
- Sites with Broken/Working PMTUD
- Path MTU Discovery
- TCP headers
- bad TCP checksums
- MSS performance consideration
- Understanding Routing Table
- route (Linux man page)
- Docker should set host-side veth MTU #4378
- Add MTU to lxc conf to make host and container MTU match
- Xen Networking
- TCP parameter settings (/proc/sys/net/ipv4)
- Change the MTU of a network interface
- tcp_base_mss, tcp_mtu_probing, etc