Docker is a containerization platform that uses OS-level virtualization to package software applications and their dependencies into reusable units called containers. Docker containers can run on any host with Docker or an equivalent container runtime installed, whether that's your local laptop or a server in a remote cloud.

Docker includes a networking system for managing communications between containers, your Docker host, and the outside world. Several different network types are supported, facilitating a variety of common use cases. In this article, we’ll explain how Docker networks function, and then cover the basics of using them with container deployments.

Docker Network Types

Docker networks configure communications between neighboring containers and external services. Containers must be connected to a Docker network to receive any network connectivity. The communication routes available to the container depend on its network connections.

Docker comes with five built-in network drivers that implement core networking functionality:

  1. bridge
  2. host
  3. overlay
  4. IPvLAN
  5. macvlan

1. bridge

Bridge networks create a software-based bridge between your host and the container. Containers connected to the network can communicate with each other, but they’re isolated from those outside the network. Each container in the network is assigned its own IP address. Because the network’s bridged to your host, containers are also able to communicate on your LAN and the internet. They will not appear as physical devices on your LAN, however.

2. host

Containers that use the host network mode share your host’s network stack without any isolation. They aren’t allocated their own IP addresses, and port binds are published directly to your host’s network interface. This means a container process that listens on port 80 will bind to <your_host_ip>:80.

3. overlay

Overlay networks are distributed networks that span multiple Docker hosts. The network allows all the containers running on any of the hosts to communicate with each other without requiring OS-level routing support. Overlay networks implement the networking for Docker Swarm clusters, but you can also use them when you’re running two separate instances of Docker Engine with containers that must directly contact each other. This lets you build your own Swarm-like environments.

4. IPvLAN

IPvLAN is an advanced driver that offers precise control over the IPv4 and IPv6 addresses assigned to your containers, as well as layer 2 and 3 VLAN tagging and routing. This driver is useful when you’re integrating containerized services with an existing physical network. IPvLAN networks are assigned their own interfaces, which offer performance benefits over bridge-based networking.

5. macvlan

macvlan is another advanced option that allows containers to appear as physical devices on your network. It works by assigning each container in the network a unique MAC address. This network type requires you to dedicate one of your host’s physical interfaces to the virtual network. The wider network must also be appropriately configured to support the potentially large number of MAC addresses that could be created by an active Docker host running many containers.

Which Network Type Should I Use?

Bridge networks are the most suitable option for the majority of scenarios you’ll encounter. Containers in the network can communicate with each other using their own IP addresses and DNS names. They also have access to your host’s network, so they can reach the internet and your LAN.

Host networks are best when you want to bind ports directly to your host’s interfaces and aren’t concerned about network isolation. They allow containerized apps to function similarly to network services running directly on your host.

Overlay networks are required when containers on different Docker hosts need to communicate directly with each other. These networks let you set up your own distributed environments for high availability.

Macvlan networks are useful in situations where containers must appear as a physical device on your host’s network, such as when they run an application that monitors network traffic. IPvLAN networks are an advanced option when you have specific requirements around container IP addresses, tags, and routing.

Docker also supports third-party network plugins, which expand the networking system with additional operating modes. These include Kuryr, which implements networking using OpenStack Neutron, and Weave, an overlay network with an emphasis on service discovery, security, and fault tolerance.

Finally, Docker networking is always optional at the container level: setting a container’s network to none completely disables its networking stack. The container will be unable to reach its neighbors, your host’s services, or the internet. This helps improve security by sandboxing applications that aren’t expected to require connectivity.

How Docker Networking Works

Docker uses your host’s network stack to implement its networking system. It works by manipulating iptables rules to route traffic to your containers. This also provides isolation between Docker networks and your host.

iptables is the standard Linux packet filtering tool. Rules added to iptables define how traffic is routed as it passes through your host’s network stack. Docker networks add filtering rules that direct matching traffic to your container’s application. The rules are configured automatically, so you don’t need to interact with iptables manually.
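You can see this in action on a Linux Docker host. Docker Engine maintains its own chains in the NAT table, including one named DOCKER for port-publishing rules; a quick way to inspect them (requires root) is:

```shell
# Show the NAT rules Docker has created for published container ports.
# The DOCKER chain is managed by Docker Engine itself on Linux hosts.
sudo iptables -t nat -L DOCKER -n
```

The output is empty until you publish a container port, at which point a DNAT rule appears mapping the host port to the container's IP.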

Docker containers are assigned their own network namespace, a Linux kernel feature that provides isolated virtual network environments. Containers also create virtual network interfaces on your host that allow them to communicate outside their namespace using your host’s network.

The details of how Docker networking is implemented are relatively complex and low-level. Docker abstracts them away from end users, providing a seamless container networking experience that’s predictable and effective. However, more information is available in Docker’s documentation.

Docker Networking vs. VM Networking

Docker’s networking model provides isolated virtualized environments to containers. These networks satisfy common use cases but have some key differences compared to the virtual networks created by traditional VMs.

Whereas Docker achieves network isolation using namespaces and iptables rules, VMs typically run a separate networking stack for each virtual machine. There are also differences in terminology that can cause confusion: what Docker calls a “bridge” network is similar to a NAT-based network in most VM solutions.

Generally, VMs can support a broader range of network topologies than Docker natively permits. However, Docker includes all the tools to create the network solution you require, whether by using macvlan to assign container addresses on your physical network or by using a third-party plugin to implement missing network models.


Using Docker Networks

Ready to explore Docker networking? Here’s how to use networks to manage container communications. To follow along with this tutorial, you’ll need to open three terminal windows.

Creating Networks

To create a new network, use the docker network create command. You can specify the driver to use, such as bridge or host, by setting the -d flag. A bridge network will be created if you omit the flag.

Run the following in your first terminal window:
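The exact command isn't shown above, so here's a minimal sketch; demo-network is an assumed name that the rest of this walkthrough reuses:

```shell
# Create a new network called demo-network.
# -d bridge is the default driver, shown explicitly for clarity.
docker network create -d bridge demo-network
```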

The ID of the created network is emitted to your terminal. The new network isn’t useful yet because no containers have been connected to it.

Connecting Containers to Networks

You can attach new containers to a network by setting the --network flag with your docker run command. Run this command in your second terminal window:
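As a sketch (the container name container1 and the busybox image are illustrative choices):

```shell
# Start an interactive container attached to demo-network
docker run -it --rm --name container1 --network demo-network busybox:latest
```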

Next, open your third terminal window and start another Ubuntu container, this time without the --network flag:
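A matching sketch for the second container; the name container2 is referenced later in this tutorial:

```shell
# Start a second container with no --network flag;
# it joins Docker's default bridge network instead of demo-network
docker run -it --rm --name container2 busybox:latest
```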

Now try communicating between the two containers, using their names:
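For example, from the shell inside container1:

```shell
# Try to reach container2 by its name from inside container1.
# This fails: the name can't be resolved because the two containers
# aren't connected to a common network.
ping container2
```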

The containers aren’t in the same network yet, so they can’t directly communicate with each other.

Use your first terminal window to join container2 to the network:
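The command to do this, assuming the demo-network name used earlier:

```shell
# Connect the running container2 to demo-network without restarting it
docker network connect demo-network container2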

The containers now share a network, which allows them to discover each other:
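Repeating the earlier test from inside container1 now succeeds:

```shell
# Name resolution and connectivity now work across the shared network
ping -c 3 container2
```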

Using Host Networking

Bridge networks are what you’ll most commonly use to connect your containers. Let’s also explore the capabilities of host networks, where containers attach directly to your host’s interfaces. You can enable host networking for a container by connecting it to the built-in host network:
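A sketch of such a command, using NGINX as the example workload (the container name nginx-host is an arbitrary choice):

```shell
# Run NGINX attached to the built-in host network.
# No -p/--publish flags are needed in host mode.
docker run -d --name nginx-host --network host nginx:latest
```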

NGINX listens on port 80 by default. Because the container’s using a host network, you can access your NGINX server on your host’s localhost:80 outside the container, even though no ports have been explicitly bound:
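You can verify this from your host's own shell:

```shell
# Fetch the NGINX default page via the host's network stack
curl http://localhost:80
```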

Disabling Networking

When a container’s networking is disabled, it will have no connectivity available – either to other containers, or your wider network. Disable networking by attaching your container to the none network:
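For example (again using busybox as an assumed throwaway image):

```shell
# Start a container with its networking stack disabled.
# Inside it, any outbound request (e.g. ping 8.8.8.8) will fail.
docker run -it --rm --network none busybox:latest
```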

This lets you easily sandbox unknown services.

Removing Containers from Networks

Docker lets you freely manage network connections without restarting your containers. In the previous section, you saw how to connect a container after its creation; it’s also possible to remove containers from networks they no longer need to participate in:
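Continuing the earlier example, you could detach container2 like so:

```shell
# Remove container2 from demo-network while it keeps running
docker network disconnect demo-network container2
```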

Any changes you make will apply immediately.

Managing Networks

You can list all your Docker networks with the network ls command:
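```shell
# List every network known to this Docker host
docker network ls
```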

The output includes the built-in networks bridge, host, and none, as well as the networks you’ve created.

To delete a network, disconnect or stop all the containers that use it, then pass the network’s ID or name to network rm:
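Using the demo-network name from earlier as the example:

```shell
# Remove the network once no running containers are attached to it
docker network rm demo-network
```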

You can automatically delete all unused networks using the network prune command:
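```shell
# Delete every network that no container is currently using.
# Docker prompts for confirmation; pass -f to skip the prompt.
docker network prune
```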

Using Networks With Docker Compose

You can use networks with Docker Compose services too. It’s often possible to avoid manual networking configuration when you’re using Compose, because the services in your stack are automatically added to a shared bridge network:
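The original Compose file isn't shown, so here's a minimal sketch with two services, app and mysql; the images and environment values are illustrative assumptions:

```yaml
services:
  app:
    # Placeholder application image; any long-running image works here
    image: nginx:latest
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
```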

Use Compose to deploy the stack:
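```shell
# Start the stack's services in the background
docker compose up -d
```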

As shown by the following output, Compose has created a network for your stack that includes both containers:
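The command that produced that output wasn't included; listing the networks again reveals the one Compose created. Its name is derived from your project directory, so <project>_default below is a placeholder:

```shell
docker network ls
# The stack's shared bridge network appears as <project>_default
```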

This means your application container can communicate directly with the neighboring mysql database container:
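One way to confirm this, assuming the app and mysql service names from the Compose file sketch above:

```shell
# From inside the app container, resolve the mysql service's
# name via Docker's embedded DNS server
docker compose exec app getent hosts mysql
```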


Adding Extra Networks

You can define additional networks inside your Compose file. Name the network in the top-level networks field, then connect your services by referencing the network in the service-level networks field:
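A sketch of such a file; the service names app, mysql, and helper come from this tutorial, while the images and the network names db and tools are assumptions:

```yaml
services:
  app:
    image: nginx:latest
    networks:
      - db
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
    networks:
      - db
  helper:
    image: busybox:latest
    command: sleep infinity
    # helper shares no network with mysql, so it can't reach the database
    networks:
      - tools

networks:
  db:
  tools:
```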

This example overrides the behavior of the default stack-level network. Now only the app service can communicate with mysql; helper is unable to reach the database because the two services don’t share a network.

Key Points

Docker’s networking system provides flexible options for managing communication between containers, their neighbors, and your Docker host. Containers within a network are able to reach each other by name or IP address.

Networking is implemented by a set of pluggable drivers that accommodate the most common use cases. Networks rely on your host’s networking stack, but are isolated using namespaces. The separation is weaker than the virtual networking model used by VMs, although containers can still appear as physical network devices when you attach them to a macvlan network.

Looking for more Docker guides and tutorials? Check out our other articles on the Spacelift blog.

Also check out how easily Spacelift integrates with third-party tools. You can install and configure them before and after runner phases, or simply bring your own Docker image and leverage them directly.
