Docker Containers: A Complete Guide

January 30, 2024 by admin

What Is a Docker Container?

A Docker container is a lightweight, standalone, executable software package that includes everything needed to run a piece of software: the code, runtime, libraries, environment variables, and configuration files. Docker containers are created from Docker images, read-only templates that are in turn built from text files called Dockerfiles.

Docker containers are known for their portability. Containerized software will run the same, irrespective of the infrastructure. This consistency facilitates collaborative coding and eliminates the "it works on my machine" problem. Docker containers provide a consistent environment for development, testing, and deployment, accelerating the software development life cycle. This is why they have become an essential part of the DevOps toolchain.

Benefits of Running Docker Containers

Here are some of the key benefits of Docker containers compared to traditional software deployment methods:

  • Isolation: Each Docker container operates in a self-contained environment, separate from the host system and other containers. This ensures consistent behavior across different stages of development.
  • Resource efficiency: Containers share the host system’s kernel, unlike virtual machines, which require their own operating systems. This leads to reduced system overhead, allowing for more containers to run on a single host machine.
  • Rapid deployment and scalability: Containers package all necessary components to run an application, simplifying the process of moving from development to production. This speeds up the software development pipeline.
  • Compatibility with microservices: Docker containers are well-suited for microservices architectures, where applications are composed of loosely coupled, independently deployable components. Containers make it easier to develop, deploy, and scale these individual components.

Docker Architecture and Components


[Figure: Docker architecture and components. Source: Docker]

Docker Daemon

The Docker Daemon, also known as dockerd, is responsible for all the tasks related to building, running, and managing Docker containers. When you execute a Docker command, the Docker Daemon is the entity that actually carries out the task.

The Docker Daemon works in the background, listening for Docker API requests and managing Docker objects such as images, containers, networks, and volumes. It can also communicate with other daemons to manage Docker services.

When the Docker Daemon starts, it checks configurations, command-line options, and environment variables. It uses all these inputs to configure itself, set up logging, authorize clients, set up network bridges, and more.
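
For example, on a systemd-based Linux host you can check the daemon's status and adjust its behavior through a configuration file. A minimal sketch (paths and options vary by platform):

    # Check whether the daemon is running (systemd-based Linux):
    sudo systemctl status docker

    # The daemon reads options from /etc/docker/daemon.json, for example:
    # { "log-driver": "json-file", "log-opts": { "max-size": "10m" } }

    # Restart the daemon to apply configuration changes:
    sudo systemctl restart docker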

Docker Client

The Docker Client, often just termed ‘Docker’, is the primary way users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The Docker Client can communicate with more than one daemon.

The Docker Client and Daemon can run on the same host, or you can connect a Docker Client to a remote Docker Daemon. They communicate through a REST API, over UNIX sockets or a network interface.

It’s worth noting that the Docker Client is not limited to the command-line interface. Several GUI-based clients, such as Docker Desktop, provide a more user-friendly way to interact with Docker.
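
For example, the client can be pointed at a remote daemon with the -H flag or the DOCKER_HOST environment variable. A minimal sketch (the hostname is a placeholder):

    # Run a single command against a remote daemon over SSH:
    docker -H ssh://user@remote-host ps

    # Or set the default daemon for all subsequent commands:
    export DOCKER_HOST=ssh://user@remote-host
    docker info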

Docker Objects

When you use Docker, you’re interacting with objects: images, containers, networks, volumes, plugins, and other data. The main types of objects are listed below, followed by a short example that combines them:

  • Docker Images: These are read-only templates with instructions for creating a Docker container. They are the building blocks of a Docker container and can be built by you or by other Docker users. They can also be shared, allowing you to build on the work of others, or use public images to create your own containers.
  • Docker Containers: Runnable instances of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can also connect a container to one or more networks, attach storage, or even create a new image based on its current state.
  • Docker Networks: Manage communication between Docker containers, and between containers and the outside world.
  • Docker Volumes: Provide persistent storage for the data your containers create and use.
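
A short sketch tying these objects together (the names are illustrative):

    # Create a user-defined network and a named volume:
    docker network create app_net
    docker volume create app_data

    # Run a container attached to both:
    docker run -d --name web --network app_net -v app_data:/usr/share/nginx/html nginx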

Docker Registries

A Docker Registry is a storage and distribution system for Docker Images. When you use the docker pull or docker run commands, the required images are pulled from a registry.

There are public registries such as Docker Hub, where you can find millions of images shared by other community members. You can also use private registries to store images created and maintained by your organization.

When you use the docker push command, your image is pushed to the configured registry, making it available for others. Registries provide versioning and labeling, detailed manifests, and rich APIs that allow automation and tight integration into your workflow.
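
A typical push/pull round trip looks like this (the registry address and image name are placeholders):

    # Tag a local image for a specific registry, then push it:
    docker tag my_image:1.0 registry.example.com/team/my_image:1.0
    docker push registry.example.com/team/my_image:1.0

    # Anyone with access to the registry can now pull it:
    docker pull registry.example.com/team/my_image:1.0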

Docker Compose

Docker Compose is a tool for defining and managing multi-container Docker applications. It lets you use a YAML file to define the services that make up your application so they can be run together in an isolated environment. This means you can define an entire complex application, including all its dependent services, in a single file, and then run it with a single command.

Docker Compose works by reading a file, typically called docker-compose.yml, and then uses the variables and settings defined in that file to create and manage your application services. With Docker Compose, you can manage your application lifecycle from start to finish. You can start, stop and rebuild services, view the status of running services, stream the log output of running services, and run a one-off command on a service.
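
For illustration, a minimal docker-compose.yml for a two-service application might look like this (the images and settings are placeholders):

    services:
      web:
        image: nginx:alpine
        ports:
          - "8080:80"
        depends_on:
          - db
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example

Running docker compose up -d starts both services, docker compose logs -f streams their output, and docker compose down tears the environment down again.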

Learn more in our detailed guide to Docker architecture (coming soon)

Docker vs. Virtual Machine

Virtual Machines (VMs) and Docker containers both serve the purpose of isolating an application and its dependencies, but they do so in different ways:

  • VMs provide a high level of isolation by emulating a computer’s hardware. Each VM has its own full operating system, which results in significant resource overhead and longer startup times. VMs are portable in principle, but moving them between systems is constrained by hypervisor and hardware dependencies.
  • Docker containers are more resource-efficient, as they share the host system’s kernel and isolate only the application processes. This leads to quicker startup and less resource usage. Docker containers are more portable because they encapsulate all application dependencies.

There are also significant differences when it comes to ongoing management:

  • VMs are managed via hypervisors like VMware and Hyper-V, while Docker containers are managed by the Docker runtime. VMs are often configured through GUI-based tools, and scaling them typically requires more manual intervention and resources.
  • Docker uses text-based files like Dockerfiles for configuration. Networking is usually simpler with Docker, where containers can share a single network stack, than in a VM setup. Docker containers are also easier to scale and distribute, especially with orchestration platforms like Kubernetes.

VMs are still widely used, and are the underlying technology used by most cloud computing providers. However, containers, and technologies like Docker and Kubernetes, are widely recognized as the future of application deployment.

Docker vs. Kubernetes

Docker is an open-source platform that helps developers to automate the deployment, scaling, and management of applications. It allows them to package an application and its dependencies into a container, which can be easily transported and run on any system that supports Docker.

Kubernetes is a container orchestration system for automating application deployment, scaling, and management. It was designed by Google and is now maintained by the Cloud Native Computing Foundation. Kubernetes provides a platform for running and managing containers at scale, across multiple hosts. It ensures that the containers are running as expected, and can automatically replace containers that fail, kill containers that don’t respond to health checks, and automatically scale containers according to application requirements.

While Docker focuses on automating the deployment of applications inside containers, Kubernetes focuses on automating the deployment, scaling, and operations of containers across clusters of hosts. They are often used together in a containerized environment.

Learn more in our detailed guide to Docker vs Kubernetes (coming soon)

Docker Quick Start

This section will help you install Docker on your machine and use the basic command line operations.

Installing Docker

In order to start using Docker containers, you first need to install Docker on your system. The installation process varies depending on your operating system.

Docker installation on Ubuntu

Docker Engine comes bundled with Docker Desktop for Linux. This is the easiest and quickest way to get started.

Otherwise, you can install Docker on Ubuntu using the apt repository. Before you install Docker Engine for the first time on a new host machine, you need to set up the Docker repository:

    # Add Docker's official GPG key:
    sudo apt-get update
    sudo apt-get install ca-certificates curl gnupg
    sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    sudo chmod a+r /etc/apt/keyrings/docker.gpg
    # Add the repository to Apt sources:
    echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
      $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    sudo apt-get update

To install the latest version of Docker, run:

    $ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Docker installation on Mac

To install Docker on Mac, visit the Docker Hub website and download Docker Desktop for Mac, choosing the version for Mac with Apple Silicon or Mac with Intel chip as appropriate. Once downloaded, open the installer and follow the instructions to install Docker Desktop.

After the installation process is complete, you should see a Docker icon in the top status bar indicating that Docker is running.

Docker installation on Windows

To install Docker on Windows, download Docker Desktop for Windows from the Docker Hub website. Once downloaded, run the installer and follow the instructions to install Docker Desktop.

After the installation is complete, you should see a Docker icon in the system tray indicating that Docker is running.

Verifying your Docker installation works

Verify that the Docker Engine installation is successful by running the hello-world image. On Ubuntu, the command is:

    $ sudo docker run hello-world

This command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits.

Basic Commands

Understanding the basic Docker commands is essential for working with containers. Here’s a quick guide to the most commonly used Docker commands:

docker --version

This command displays the installed Docker version. It helps verify that Docker is installed correctly on your machine.

Syntax:

    docker --version

Example:

    $ docker --version
    Docker version 20.10.7, build f0df350

docker pull

This command fetches a Docker image from a registry like Docker Hub.

Syntax:

    docker pull [OPTIONS] NAME[:TAG|@DIGEST]

Example:

    $ docker pull ubuntu:latest

docker run

Creates and starts a new container from a specified image.

Syntax:

    docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Example:

    $ docker run -d -p 8080:80 nginx

docker ps

Lists all running containers, showing their IDs, names, and other useful information.

Syntax:

    docker ps [OPTIONS]

Example:

    $ docker ps
    CONTAINER ID   IMAGE                        COMMAND               CREATED          STATUS          PORTS           NAMES
    ca5534a51dd0   ubuntu:22.04                 bash                  17 seconds ago   Up 16 seconds   3300-3310/tcp   webapp
    9ca9747b2331   crosbymichael/redis:latest   /redis-server --dir   33 minutes ago   Up 33 minutes   6379/tcp        redis,webapp/db

docker exec

Runs a command in a running container.

Syntax:

    docker exec [OPTIONS] CONTAINER COMMAND [ARG...]

Example:

    $ docker exec -it my_container /bin/bash

docker stop

Stops one or more running containers.

Syntax:

    docker stop [OPTIONS] CONTAINER [CONTAINER...]

Example:

    $ docker stop my_container

docker push

Pushes an image to a Docker registry such as Docker Hub.

Syntax:

    docker push [OPTIONS] NAME[:TAG]

Example:

    $ docker push my_image:latest

docker build

Builds a Docker image from a Dockerfile and a given context. The Dockerfile can be referenced as a PATH, URL, or read from STDIN.

Syntax:

    docker build [OPTIONS] PATH | URL | -

Example:

    ## Build with PATH
    $ docker build .
    ## Build with URL
    $ docker build github.com/creack/docker-firefox
    ## Read a Dockerfile from STDIN (with/without context)
    $ docker build - < Dockerfile
    $ docker build - < context.tar.gz

Learn more in our detailed Docker tutorial (coming soon)

Best Practices for Using Docker Containers

When implementing Docker Containers, you can follow these best practices to ensure your applications perform optimally, are secure, and are manageable.

Use Official Images

It’s always best to use official images when working with Docker Containers. These are images that Docker itself, or the organization responsible for a given software, maintains. You can trust that these images are secure and optimized for use in a Docker environment. They are thoroughly tested and updated regularly to fix any security vulnerabilities or bugs. Using official images can also save you time and effort, as you won’t need to build or maintain them yourself.

Scan Images for Vulnerabilities

Just like any other software, Docker images can have vulnerabilities. But the impact of these vulnerabilities can be much greater, because they affect all the containers created from the image. Therefore, it’s crucial to regularly scan your Docker images for vulnerabilities.

Several tools can help you with this, including Docker’s own image analysis tool, Docker Scout. These tools identify known vulnerabilities in your images and provide recommendations on how to fix them. It’s a good idea to integrate image scanning into your software delivery pipeline.
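
For example, assuming the Docker Scout CLI plugin is available in your installation, a scan can be as simple as (the image name is a placeholder):

    # List known CVEs in a local image:
    docker scout cves my_image:latest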

Minimize Container Size

The larger the size of the container, the more resources it will need to run. A large container also takes more time to deploy and to start. Prefer images based on Alpine Linux if possible, as these are usually lightweight. Additionally, try to remove unnecessary files and components from your images. This will make your containers more efficient and reduce the attack surface for potential attackers.
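
A common technique is a multi-stage build: compile in a full-featured image, then copy only the resulting artifact into a slim runtime image. A sketch (the Go application and version tags are illustrative):

    # Build stage: full toolchain
    FROM golang:1.22-alpine AS build
    WORKDIR /src
    COPY . .
    RUN go build -o /bin/app .

    # Runtime stage: ship only the compiled binary
    FROM alpine:3.19
    COPY --from=build /bin/app /bin/app
    ENTRYPOINT ["/bin/app"]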

One Process per Container

The principle of one process per container is a key aspect of the ‘microservices’ architecture, which Docker containers support. This practice ensures that each container does a single job, which makes it easier to manage and scale your application.

When a single container is responsible for a single process, troubleshooting becomes simpler. If a process fails, you know exactly where to look. It also allows for better resource allocation. Each container can be given exactly what it needs to perform its task, and nothing more.

Use Version Tags

Tagging allows you to specify different versions of your images. This is particularly important in a CI/CD pipeline, where you may need to roll back to an earlier version of your application quickly. Always use explicit version numbers in your tags, and avoid relying on the latest tag, as it can lead to unpredictable behavior in your deployments.
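
In practice, this means building and pushing each release under an explicit version (the image name and registry are placeholders):

    # Tag the build explicitly instead of relying on latest:
    docker build -t registry.example.com/myapp:1.4.2 .
    docker push registry.example.com/myapp:1.4.2

Rolling back is then just a matter of deploying the previous tag.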

Configure Logging

Logging is a fundamental part of any application’s lifecycle, and it’s even more crucial when using containers. Proper logging can help you diagnose problems, track performance, and understand how your application is being used.

By default, Docker sends all logs to a JSON file on the host machine. However, this might not be the most efficient or practical solution. Depending on the nature of your application and the environment in which it’s running, you may want to configure Docker to send logs to a central logging service or a log management tool.
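
For example, the log driver and its options can be set per container (or globally in the daemon configuration). This sketch caps the default JSON-file logs so they cannot grow without bound:

    # Rotate logs: keep at most three 10 MB files for this container
    docker run -d --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 nginx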

Implement Health Checks

Health checks are a way of ensuring your Docker Containers are running correctly. They are small tests that check the internal state of your containers. If a container fails a health check, Docker marks it as unhealthy, and an orchestrator or restart policy can then replace it or take other corrective action. Implementing health checks in your containers will increase the reliability and uptime of your applications.
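
A health check can be declared directly in the Dockerfile. A sketch assuming the container serves HTTP on port 80 and has curl installed:

    # Probe the web server every 30s; mark the container unhealthy after 3 failed attempts
    HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
      CMD curl -f http://localhost/ || exit 1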

Manage Secrets Securely

In containerized environments, ‘secrets’ refer to sensitive data such as database passwords, API keys, and other credentials. It’s crucial to manage these secrets securely to prevent unauthorized access to your applications and data. Docker provides a secrets management system, built into Swarm mode, that allows you to store and manage your secrets securely. If you need something more robust, use a dedicated secret management system like HashiCorp Vault.
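
A minimal sketch of Docker’s built-in secrets system (the secret value and service are illustrative; the official postgres image can read credentials from a _FILE environment variable):

    # Create a secret from stdin (requires Swarm mode):
    echo "s3cr3t-password" | docker secret create db_password -

    # The secret is mounted into the service at /run/secrets/db_password:
    docker service create --name db \
      --secret db_password \
      -e POSTGRES_PASSWORD_FILE=/run/secrets/db_password \
      postgres:16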

Secure Networking

By default, Docker provides a few networking modes to choose from, including bridge, host, and none. It’s a good idea to avoid using the host networking mode unless absolutely necessary. This mode gives the container full access to the host’s network stack, which can lead to security issues. Instead, consider using bridge mode, which isolates the container’s network stack from the host.
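
For example, a user-defined bridge network keeps containers off the host’s network stack while still letting them reach each other by name (the names are illustrative):

    # Create an isolated bridge network and attach a container to it:
    docker network create --driver bridge backend
    docker run -d --name api --network backend nginx
    # Other containers on "backend" can now reach this one as "api"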

Running Docker Containers in the Cloud

Docker containers are known as a ‘cloud native’ technology, and are highly compatible with a cloud computing environment. Let’s see a few ways to run Docker containers in the cloud.

Running Docker Containers in a Virtual Machine / Instance

A simple way to run containers in the cloud is to start a virtual machine (VM), known as a machine instance in some cloud providers, and run a Docker container inside it. This involves:

  • Spinning up the VM (for example, an Amazon EC2 instance or Azure VM)
  • Connecting to the VM via SSH or other remote connection method
  • Installing Docker on it
  • Using the Docker command line to build and run your containers

This method allows you to maintain the isolation and consistency of your applications, as well as leverage the scalability and flexibility of the cloud infrastructure. The downside is you have to manage everything yourself—the VMs, the Docker runtime, and your individual containers. This is not feasible for larger deployments.
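
As a sketch, after provisioning an Ubuntu instance the whole setup can be as short as this (the hostname is a placeholder; see the installation section above for the full repository-based setup):

    # Connect to the instance:
    ssh ubuntu@my-instance.example.com

    # Install Docker with the convenience script and run a container:
    curl -fsSL https://get.docker.com | sudo sh
    sudo docker run -d -p 80:80 nginx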

Running Docker Containers Using a Managed Cloud Service

Another way to run Docker containers in the cloud is by using a managed cloud service such as AWS Elastic Beanstalk, Google Cloud Run, or Azure Container Instances. These services simplify the process of deploying and managing Docker containers by providing a fully managed runtime environment.

The advantage of this approach is that you don’t need to manage the underlying infrastructure. You just package your applications into Docker images, and then deploy them to the managed service.

The process of deploying a Docker container to a managed service generally involves creating a new service, configuring the service to use your Docker image, and then deploying the service. The managed service takes care of scheduling and running your Docker containers, as well as scaling them to meet the demand.
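
As one concrete example, deploying a pushed image to Google Cloud Run takes a single command (the service name, image, and region are placeholders):

    gcloud run deploy my-service \
      --image gcr.io/my-project/my_image:1.0 \
      --region us-central1 \
      --allow-unauthenticated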

While this method is very convenient, it comes at a cost—you need to pay for computing resources on an ongoing basis, and costs can sometimes be difficult to predict. Managed container services also provide limited flexibility and control over the underlying infrastructure—for example, you may not be able to access the server running your containers.

Running Docker Containers with Kubernetes

Kubernetes is a powerful open-source platform for managing containerized workloads and services. It provides a framework for automating deployment, scaling, and management of applications across clusters of hosts. Kubernetes can run on various cloud platforms, making it a popular choice for running Docker containers in the cloud.

With Kubernetes, you can define your applications using Kubernetes manifests, which describe the desired state of your application. Kubernetes then ensures that the actual state of your application matches the desired state.

To run Docker containers with Kubernetes, you need to create Docker images of your applications, push them to a Docker registry, and then use Kubernetes to deploy the images. Kubernetes creates a pod for each instance of your application, ensuring that your application is highly available and can scale to meet demand.
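
A minimal Deployment manifest illustrates the desired-state model (the image name is a placeholder):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: registry.example.com/web:1.0
            ports:
            - containerPort: 80

Applying it with kubectl apply -f deployment.yaml asks Kubernetes to keep three replicas running, recreating pods as needed.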

The downside of this approach is that Kubernetes is complex to learn and set up, and requires expertise to maintain on an ongoing basis. Also, while Kubernetes itself is free, there is a high infrastructure cost involved in running entire Kubernetes clusters in the cloud.

Running Docker Containers with Acorn: A Free Cloud Sandbox

You can convert your Docker Compose files to Acornfiles for easy deployment on our cloud platform. Check out how to do this here.

Click here to try deploying ViewTube, one of our pre-built Docker apps, in Acorn (account registration required).
