Getting Started with Docker and Docker Compose on Ubuntu

Written by: Bagus Facsi Aginsa
Published at: 13 May 2026


You have a Node.js app that needs PostgreSQL and Redis. You install them all on your development machine. A teammate tries to run your code and spends two hours fighting version mismatches. You deploy to staging and something behaves differently because the OS is slightly different. You wonder if there is a better way.

There is. It is called Docker.

In this tutorial, you will install Docker and Docker Compose on Ubuntu, learn the concepts and commands you actually need, and run a realistic multi-container application using a single docker-compose.yml file. By the end, you will have the foundation to containerize any project and run it consistently anywhere.


What Is Docker?

Docker is a platform that packages your application and everything it needs to run (the code, runtime, libraries, and config files) into a single unit called a container. Containers are isolated from each other and from the host, but they share the host OS kernel, which makes them much lighter than virtual machines.

The key difference from a VM: a VM virtualizes the hardware and runs a full OS. A container virtualizes only the application layer and shares the kernel. You can run dozens of containers on a single machine that would struggle to run three VMs.

Core Concepts

Image: a read-only template used to create containers. Think of it as a blueprint. Images are built from a Dockerfile and stored in a registry like Docker Hub.

Container: a running instance of an image. You can start, stop, restart, and delete containers without touching the underlying image.

Docker Hub: the default public registry where images are published. When you run docker pull nginx, it downloads the official Nginx image from Docker Hub.

Volume: persistent storage that lives outside the container’s filesystem. Data written to a volume survives container restarts and deletions.

Network: Docker containers can talk to each other over virtual networks. By default, containers on the same Compose network can reach each other by service name.

What Is Docker Compose?

Docker Compose is a tool for defining and running multi-container applications. Instead of typing long docker run commands with dozens of flags, you describe your entire stack in a YAML file called docker-compose.yml. Then you bring everything up with one command.

Compose is ideal for local development, CI environments, and small-to-medium production deployments.


Prerequisites

  • Ubuntu 22.04 or 24.04 (these steps work on both)
  • A non-root user with sudo privileges
  • Basic Linux command-line familiarity
  • An internet connection (to pull Docker packages and images)

No prior Docker experience is required.


Step 1: Install Docker Engine

Ubuntu’s default apt repository ships an older version of Docker. Use Docker’s official repository instead to get the current stable release.

First, remove any old Docker packages that might already be installed:

sudo apt remove docker docker-engine docker.io containerd runc

Install the packages needed to add Docker’s repository:

sudo apt update
sudo apt install -y ca-certificates curl gnupg lsb-release

Add Docker’s official GPG key so apt can verify the packages:

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

Add the Docker repository to apt sources:

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Now install Docker Engine along with the Compose plugin:

sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io \
  docker-buildx-plugin docker-compose-plugin

Verify the installation:

sudo docker run hello-world

You should see a message that says “Hello from Docker!”. This confirms Docker pulled an image, created a container, ran it, and streamed the output back.


Step 2: Run Docker Without sudo

By default, the Docker daemon runs as root, and its socket is only accessible with sudo. To run Docker commands as your normal user, add yourself to the docker group:

sudo usermod -aG docker $USER

Log out and back in (or run newgrp docker in your current session) to apply the group change:

newgrp docker

Confirm it works:

docker ps

You should see an empty table of running containers, with no sudo needed. If the command still fails, log out of the machine completely and back in; newgrp only applies the group change to the current shell session.


Step 3: Learn the Essential Docker Commands

Before jumping into Compose, get comfortable with the core Docker commands. These are the ones you will use every day.

Pull an image from Docker Hub:

docker pull nginx:1.25

The part after the colon is the tag; it pins the version. Without a tag, Docker defaults to latest, which can cause surprises when the upstream image is updated.

Run a container:

docker run -d -p 8080:80 --name my-nginx nginx:1.25
  • -d runs it in detached mode (background)
  • -p 8080:80 maps port 8080 on the host to port 80 inside the container
  • --name my-nginx gives the container a friendly name

Open http://localhost:8080 in a browser and you will see the Nginx welcome page served from inside the container.
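
You can also check from the terminal with curl (installed earlier in Step 1):

curl http://localhost:8080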

List running containers:

docker ps

Add -a to also see stopped containers.
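
docker ps -a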

View logs:

docker logs my-nginx

Add -f to follow (tail) the log in real time.
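
docker logs -f my-nginx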

Execute a command inside a running container:

docker exec -it my-nginx bash

This opens an interactive bash shell inside the container. Type exit to leave.

Stop and remove a container:

docker stop my-nginx
docker rm my-nginx

List downloaded images:

docker images

Remove an image:

docker rmi nginx:1.25
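
To reclaim disk space from stopped containers, unused networks, and dangling images in one go:

docker system prune

Docker asks for confirmation before deleting anything.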

Step 4: Write a Dockerfile

A Dockerfile is a script that tells Docker how to build your image. Let’s create a simple one to understand how images are made before moving to Compose.

Create a project directory:

mkdir ~/myapp && cd ~/myapp

Create a minimal Node.js application:

cat > app.js << 'EOF'
const http = require('http');
const server = http.createServer((req, res) => {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello from Docker!\n');
});
server.listen(3000, () => console.log('Listening on port 3000'));
EOF

Create the Dockerfile:

FROM node:20-alpine

WORKDIR /app

COPY app.js .

EXPOSE 3000

CMD ["node", "app.js"]

Breaking this down line by line:

  • FROM node:20-alpine starts from the official Node.js 20 image based on Alpine Linux (a minimal 5 MB base, much smaller than the full Debian-based image)
  • WORKDIR /app sets the working directory inside the container; subsequent instructions run from here
  • COPY app.js . copies your source file into the container
  • EXPOSE 3000 documents that the container listens on port 3000 (it does not actually publish the port; that happens at docker run time)
  • CMD ["node", "app.js"] sets the command that runs when the container starts

Build the image:

docker build -t myapp:1.0 .

Run it:

docker run -d -p 3000:3000 --name myapp myapp:1.0
curl http://localhost:3000

Output:

Hello from Docker!
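
Before moving on, stop and remove this container:

docker stop myapp
docker rm myapp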

Step 5: Build a Multi-Container Stack with Docker Compose

Real applications rarely run as a single service. Let’s build something closer to production: a web application that uses PostgreSQL as its database and Redis as a cache, all defined in one docker-compose.yml.

Create a fresh directory for this project:

mkdir ~/webapp && cd ~/webapp

Create the docker-compose.yml file:

services:
  web:
    image: nginx:1.25-alpine
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro
    depends_on:
      - db
      - cache
    networks:
      - appnet
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: appdb
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: changeme
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - appnet
    restart: unless-stopped

  cache:
    image: redis:7-alpine
    command: redis-server --save 60 1 --loglevel warning
    volumes:
      - redisdata:/data
    networks:
      - appnet
    restart: unless-stopped

volumes:
  pgdata:
  redisdata:

networks:
  appnet:

Let’s walk through this file:

services: defines each container in the stack. Each key (web, db, cache) becomes the service name and also the hostname other services use to reach it over the internal network.

image: the Docker image to use. Always pin a specific tag in production; nginx:1.25-alpine is more predictable than nginx:latest.

ports: maps host_port:container_port. Only web publishes a port to the outside world. The database and cache are only reachable from within the appnet network.

volumes: pgdata and redisdata are named volumes, managed by Docker and persisted on disk. The web service mounts a local ./html directory so you can edit HTML files without rebuilding the image.

depends_on: tells Compose to start db and cache before web. Note that it only waits for the container to start, not for the service inside to be ready. For health-check-based readiness, see the Best Practices section.

environment: sets environment variables inside the container. For real projects, move these to a .env file and reference them with ${VARIABLE} syntax so secrets do not live in the Compose file itself.

networks: appnet is a custom bridge network. All services on it can reach each other by service name. The Postgres container is reachable at db:5432, Redis at cache:6379.

restart: unless-stopped automatically restarts the container if it crashes, unless you explicitly stopped it with docker compose stop.

Create a simple HTML file so Nginx has something to serve:

mkdir html
echo "<h1>Hello from Docker Compose</h1>" > html/index.html

Step 6: Start, Use, and Manage the Stack

Bring up the entire stack:

docker compose up -d

The first run will pull all three images. Subsequent runs use the cached layers and start in seconds.

Check that all three services are running:

docker compose ps
NAME            IMAGE                  COMMAND                  STATUS          PORTS
webapp-cache-1  redis:7-alpine         "docker-entrypoint.s…"   Up 2 minutes
webapp-db-1     postgres:16-alpine     "docker-entrypoint.s…"   Up 2 minutes
webapp-web-1    nginx:1.25-alpine      "/docker-entrypoint.…"   Up 2 minutes    0.0.0.0:8080->80/tcp

Open http://localhost:8080 in your browser or run:

curl http://localhost:8080

Verify PostgreSQL is running by opening a psql session in the db container and listing its databases:

docker compose exec db psql -U appuser -d appdb -c "\l"

Verify Redis is accessible:

docker compose exec cache redis-cli ping

Output:

PONG

View logs from all services at once:

docker compose logs -f

Or just from one service:

docker compose logs -f db

Stop everything (containers are stopped but not removed; volumes are kept):

docker compose stop

Start again:

docker compose start

Tear down containers and networks (volumes are kept):

docker compose down

Tear down everything including volumes (destroys all data):

docker compose down -v

Use this only when you want a completely clean slate.


Common Mistakes and Troubleshooting

Port already in use:

Error starting userland proxy: listen tcp 0.0.0.0:8080: bind: address already in use

Something is already listening on port 8080 on the host. Find it with sudo lsof -i :8080 and either stop it or change the host port in docker-compose.yml to something like 8081:80.
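
For example:

sudo lsof -i :8080

If lsof is not installed, sudo ss -tlnp | grep 8080 does the same job.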

Container exits immediately:

Run docker compose logs servicename to see why. Common causes: a bad environment variable, a missing file that was supposed to be mounted, or the entrypoint script failing.

“permission denied” errors inside a container:

This usually means the process inside the container is running as a user that does not have permission on the mounted volume. Check the image docs for the default user, or chown the host directory to match.
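
For example, check which user the containerized process runs as, then match the host directory's ownership (the UID 1000 below is only an illustration; use whatever the first command reports):

docker compose exec web id
sudo chown -R 1000:1000 ./html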

Data not persisting after docker compose down:

Make sure you are using named volumes (like pgdata:/var/lib/postgresql/data) and not bind mounts that point to a temporary location. Do not run docker compose down -v unless you intend to delete the data.
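
You can list the named volumes to confirm they exist; Compose prefixes them with the project directory name, so in this tutorial you should see webapp_pgdata and webapp_redisdata:

docker volume ls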

Service can’t reach another service by name:

All services must be on the same network. If you forget to add a service to networks: - appnet, it cannot reach the others by hostname. Double-check your network config and run docker compose down && docker compose up -d after any network changes.
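
To see which containers are actually attached to a network (Compose also prefixes the network name with the project directory, so appnet here becomes webapp_appnet):

docker network inspect webapp_appnet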

Image not updated after editing Dockerfile:

When you use a custom image built from a Dockerfile, docker compose up does not rebuild it automatically. Run docker compose up -d --build to force a rebuild.


Best Practices

Never hardcode secrets in docker-compose.yml. Use a .env file in the same directory:

# .env
POSTGRES_PASSWORD=supersecret

Reference it in Compose with:

environment:
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}

Add .env to .gitignore so it never ends up in source control.

Pin image versions. image: postgres:latest will silently upgrade to a new major version the next time you pull. Use postgres:16-alpine (or even a specific digest) to keep builds reproducible.
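
To find the digest of an image you have already pulled:

docker images --digests postgres

You can then pin it in Compose as image: postgres:16-alpine@sha256:<digest>.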

Use health checks for service dependencies. depends_on only waits for the container to start, not for the app inside to be ready. PostgreSQL takes a few seconds to initialize. Use a health check so dependent services wait properly:

db:
  image: postgres:16-alpine
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U appuser -d appdb"]
    interval: 5s
    timeout: 5s
    retries: 5

Then in the dependent service:

web:
  depends_on:
    db:
      condition: service_healthy

Keep your images small. Use Alpine-based images where possible. Remove build tools and temporary files inside the same RUN layer to avoid bloating the final image. Multi-stage builds (a more advanced topic) let you build in a fat image and copy only the binary to a minimal runtime image.
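
As a sketch of the idea, here is what a multi-stage Dockerfile might look like for a Node.js project that compiles to a dist/ directory (this assumes your package.json defines a build script; adapt it to your own project):

# Stage 1: install all dependencies and compile the app
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: a clean runtime image with only production dependencies
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/app.js"]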

Do not run containers as root. Many official images include a dedicated non-root user, but containers frequently default to root. For custom images, add a USER instruction in your Dockerfile:

RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

Use .dockerignore. Just like .gitignore, a .dockerignore file prevents COPY . . from copying your node_modules, .git directory, or local .env file into the image:

.git
node_modules
.env
*.log

Conclusion

You now have Docker and Docker Compose installed on Ubuntu, you understand the core concepts, and you have run a realistic three-service stack from a single docker-compose.yml file.

The workflow you just built is the same one used for local development, CI pipelines, and many production deployments. The patterns are consistent whether you are running two containers or twenty.

From here, good next steps are:

  • Write a real Dockerfile for your own application and publish it to Docker Hub or a private registry like Harbor
  • Add Prometheus monitoring to your stack; Docker exposes container metrics through cAdvisor, which feeds into the monitoring setup covered in Set Up Prometheus and Grafana on Ubuntu
  • Move to Kubernetes when your application outgrows what Compose can handle; Kubernetes manages containers at scale with built-in service discovery, autoscaling, and rolling deployments