Automate Your Server Configuration using Ansible on Ubuntu

Written by: Bagus Facsi Aginsa
Published on: 11 May 2026


You manage three servers. Then five. Then ten. Every time you need to install a package, change a config file, or restart a service, you SSH into each machine and repeat the same steps, copy-pasting commands, hoping you did not miss a server, hoping you typed everything correctly. When something goes wrong at 2 AM, you have no record of what the state of each machine was supposed to be.

This is the problem Ansible solves. It lets you describe the desired state of your servers in plain YAML files, then applies that configuration to as many machines as you want in a single command.

In this tutorial, you will install Ansible on Ubuntu, set up a basic inventory, run ad-hoc commands against remote servers, and write a real playbook that installs and configures Nginx across multiple hosts. By the end, you will have a working automation setup you can build on for any future task.


What is Ansible?

Ansible is an open-source IT automation tool written in Python. It is used for configuration management, application deployment, infrastructure provisioning, and anything that involves making changes to systems.

What makes Ansible stand out from other automation tools is that it is agentless. There is no daemon running on the machines you manage, no special software to install on each server. Ansible connects over plain SSH (or WinRM for Windows), pushes small Python scripts called modules to the target machine, executes them, and then removes them. The managed servers need nothing more than SSH access and Python installed, which is true of virtually every Linux server by default.

The core concepts you need to know before starting:

  • Control node: the machine where Ansible is installed and from which you run commands. This is your laptop, jump box, or CI runner.
  • Managed nodes (also called hosts): the servers that Ansible configures. Ansible connects to these over SSH.
  • Inventory: a file that lists your managed nodes, optionally grouped. Ansible needs to know which machines to target.
  • Module: a reusable unit of work. apt installs packages, copy transfers files, service manages daemons, and so on. There are thousands of built-in modules.
  • Task: a single call to a module with specific arguments (e.g., “install nginx using the apt module”).
  • Playbook: a YAML file that contains one or more plays. Each play targets a group of hosts and runs a list of tasks in order. This is where you define what you want done on which servers.
  • Role: a structured way to organize playbooks, variables, templates, and files into reusable packages. Useful when your playbooks grow.

Prerequisites

Before you start, you need:

  • One Ubuntu 22.04 (or 24.04) machine as the control node. This is where Ansible runs. It can be your local machine or any Linux server.
  • One or more Ubuntu servers as managed nodes. They do not need Ansible installed. For this tutorial, having at least two managed nodes makes the examples more realistic, but one works fine too.
  • SSH key-based authentication between your control node and managed nodes. Password-based SSH works too, but key-based is what you will use in production.
  • Python 3 on managed nodes. Ubuntu 22.04 and later ship with Python 3 by default.
  • sudo access on the managed nodes for the user Ansible connects as.

If you do not have SSH key-based auth set up yet, run this on your control node for each managed node:

ssh-keygen -t ed25519 -C "ansible"
ssh-copy-id [email protected]
ssh-copy-id [email protected]

Test that it works without a password prompt:

ssh [email protected] "echo connected"

Step 1: Install Ansible on the Control Node

Ansible is available from Ubuntu’s default repositories, but the version there often lags behind. The official PPA maintained by the Ansible project gives you a more current release.

sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install -y ansible

Verify the installation:

ansible --version

You should see output similar to:

ansible [core 2.17.x]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/youruser/.ansible/plugins/modules', ...]
  python version = 3.12.x
  ...

The important thing is that it runs without errors. The exact version number does not matter much.


Step 2: Create a Project Directory

Rather than working out of /etc/ansible (which is the global default), it is better practice to keep your Ansible project in its own directory. Ansible looks for an ansible.cfg in the current directory before falling back to the global one, which makes per-project settings easy.

mkdir ~/ansible-demo
cd ~/ansible-demo

All files you create in the rest of this tutorial go inside this directory.


Step 3: Write Your Inventory File

The inventory file tells Ansible which machines to manage. Create a file called inventory.ini:

[webservers]
web1 ansible_host=192.168.1.101
web2 ansible_host=192.168.1.102

[webservers:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=~/.ssh/id_ed25519

Breaking this down:

  • [webservers] defines a group named webservers. Groups let you target a subset of your machines.
  • web1 and web2 are the alias names you will use in playbooks. ansible_host is the actual IP or hostname.
  • [webservers:vars] sets variables that apply to every host in the webservers group. Here, you tell Ansible which user to connect as and which SSH key to use.

Adjust the IPs, user, and key path to match your actual setup.

You can have as many groups as you need, for example [databases], [loadbalancers], [staging]. A host can also belong to multiple groups.
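For illustration, a larger inventory might look like this (the extra hosts and IP addresses here are hypothetical):

```ini
[webservers]
web1 ansible_host=192.168.1.101
web2 ansible_host=192.168.1.102

[databases]
db1 ansible_host=192.168.1.201

[staging]
web2
db1
```

Here web2 and db1 are referenced by their aliases in [staging], so each belongs to two groups and can be targeted either way.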


Step 4: Test Connectivity with an Ad-Hoc Command

Before writing any playbook, verify that Ansible can reach your servers. Ad-hoc commands are one-off commands you run directly from the terminal without a playbook, useful for quick checks and one-time tasks.

The ping module checks that Ansible can connect and that Python is available on the target:

ansible webservers -i inventory.ini -m ping

If everything is configured correctly, you will see:

web1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
web2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

This is not an ICMP ping — it is Ansible connecting over SSH and running a small Python script. "changed": false means the task made no changes to the system, which is expected for a connectivity check.

A few more useful ad-hoc commands to get familiar with the syntax:

Check disk space on all webservers:

ansible webservers -i inventory.ini -m command -a "df -h /"

Check uptime:

ansible webservers -i inventory.ini -m command -a "uptime"

Install a package (the -b flag means “become root via sudo”):

ansible webservers -i inventory.ini -m apt -a "name=htop state=present" -b

The -m apt -a "name=htop state=present" tells Ansible to use the apt module to ensure htop is installed. This is already more powerful than a raw apt install command because Ansible checks whether htop is already installed before doing anything — it is idempotent, meaning running it ten times has the same result as running it once.
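To make the idea of idempotency concrete, here is a rough illustration in plain Python (this is not actual Ansible module code, just a sketch of the check-before-act pattern that idempotent modules follow):

```python
def ensure_line(path, line):
    """Idempotently ensure that `line` is present in the file at `path`.

    Returns True if the file had to be changed, False if it was already
    in the desired state -- mirroring Ansible's changed/ok reporting.
    """
    try:
        with open(path) as f:
            existing = f.read().splitlines()
    except FileNotFoundError:
        existing = []
    if line in existing:
        return False  # already in the desired state: report "ok"
    with open(path, "a") as f:
        f.write(line + "\n")
    return True  # state was modified: report "changed"
```

Calling ensure_line twice with the same arguments changes the file once and then does nothing, which is exactly the ok/changed behavior you see in Ansible's output.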


Step 5: Write Your First Playbook

Ad-hoc commands work for quick tasks, but playbooks are where Ansible’s real power shows. A playbook is a YAML file that describes a complete configuration scenario.

Create a file called site.yml:

---
- name: Configure web servers
  hosts: webservers
  become: true

  tasks:
    - name: Update apt cache
      apt:
        update_cache: true
        cache_valid_time: 3600

    - name: Install Nginx
      apt:
        name: nginx
        state: present

    - name: Ensure Nginx is started and enabled
      service:
        name: nginx
        state: started
        enabled: true

    - name: Deploy custom Nginx index page
      copy:
        dest: /var/www/html/index.html
        content: |
          <html>
          <body>
            <h1>Configured by Ansible</h1>
            <p>Server: {{ inventory_hostname }}</p>
          </body>
          </html>
        owner: www-data
        group: www-data
        mode: '0644'

Walk through what this does:

  • hosts: webservers — this play runs on every host in the webservers group.
  • become: true — all tasks in this play run with sudo (equivalent to -b in ad-hoc mode).
  • The first task updates the apt cache, but only if the cache is older than an hour (cache_valid_time: 3600 in seconds). This avoids a slow apt update every single run.
  • The second task installs Nginx using the apt module. state: present means “ensure it is installed”; Ansible skips this if Nginx is already installed.
  • The third task ensures Nginx is running and set to start on boot using the service module.
  • The fourth task deploys a custom index.html using the copy module. Notice `{{ inventory_hostname }}` — this is a Jinja2 template variable that Ansible replaces with each server’s inventory name at runtime, so the page on web1 will show “web1” and the page on web2 will show “web2”.

Run the playbook:

ansible-playbook -i inventory.ini site.yml

The output shows each task and its result:

PLAY [Configure web servers] ***************************************************

TASK [Gathering Facts] *********************************************************
ok: [web1]
ok: [web2]

TASK [Update apt cache] ********************************************************
ok: [web1]
ok: [web2]

TASK [Install Nginx] ***********************************************************
changed: [web1]
changed: [web2]

TASK [Ensure Nginx is started and enabled] *************************************
changed: [web1]
changed: [web2]

TASK [Deploy custom Nginx index page] ******************************************
changed: [web1]
changed: [web2]

PLAY RECAP *********************************************************************
web1                       : ok=5    changed=3    unreachable=0    failed=0
web2                       : ok=5    changed=3    unreachable=0    failed=0

changed means the task made a modification. Run the playbook a second time without changing anything:

ansible-playbook -i inventory.ini site.yml

This time every task shows ok instead of changed. Ansible detected that Nginx is already installed, running, and that the index file already has the right content — so it did nothing. This is idempotency in action, and it is one of the most important properties of good automation. You can safely run your playbook as many times as you want without worrying about breaking things.
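The recap from that second run should look something like this:

```
PLAY RECAP *********************************************************************
web1                       : ok=5    changed=0    unreachable=0    failed=0
web2                       : ok=5    changed=0    unreachable=0    failed=0
```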


Step 6: Use Variables and Handlers

Hardcoding values directly in tasks gets unwieldy fast. Ansible supports variables at many levels. Here is a more organized version of the playbook that uses variables and adds a handler.

A handler is a special task that only runs when notified by another task. The classic use case is restarting a service only when its configuration file changes — there is no need to restart Nginx every single run.

Update site.yml:

---
- name: Configure web servers
  hosts: webservers
  become: true

  vars:
    nginx_port: 80
    site_root: /var/www/html

  tasks:
    - name: Update apt cache
      apt:
        update_cache: true
        cache_valid_time: 3600

    - name: Install Nginx
      apt:
        name: nginx
        state: present

    - name: Ensure Nginx is started and enabled
      service:
        name: nginx
        state: started
        enabled: true

    - name: Deploy Nginx site configuration
      template:
        src: templates/nginx-site.conf.j2
        dest: /etc/nginx/sites-available/default
        owner: root
        group: root
        mode: '0644'
      notify: Reload Nginx

    - name: Deploy custom index page
      copy:
        dest: "{{ site_root }}/index.html"
        content: |
          <html>
          <body>
            <h1>Configured by Ansible</h1>
            <p>Server: {{ inventory_hostname }}</p>
          </body>
          </html>
        owner: www-data
        group: www-data
        mode: '0644'

  handlers:
    - name: Reload Nginx
      service:
        name: nginx
        state: reloaded

Create the templates directory and the Nginx config template:

mkdir templates

Create templates/nginx-site.conf.j2:

server {
    listen {{ nginx_port }};
    server_name _;

    root {{ site_root }};
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

The .j2 extension signals that this is a Jinja2 template. Ansible renders it by replacing {{ nginx_port }} and {{ site_root }} with the variable values before copying the result to the target server.

Now when you run the playbook, the “Deploy Nginx site configuration” task will notify the Reload Nginx handler only when the config file changes. If the config has not changed, the handler never runs — Nginx is not reloaded unnecessarily.


Step 7: Use a Separate Variables File

As your project grows, it is cleaner to store variables in their own file rather than embedding them in the playbook. Create vars/main.yml:

mkdir vars

Then put the variables in vars/main.yml:

nginx_port: 80
site_root: /var/www/html

Then replace the vars: section in site.yml with a vars_files: section that loads the file:

  vars_files:
    - vars/main.yml

This keeps your playbook logic separate from your configuration values — a structure that pays off when you have staging and production environments with different variable values.
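Ansible also loads variables automatically from a group_vars/ directory placed next to the playbook or inventory, one file per group name. A sketch of that layout for this project (assuming the webservers group from Step 3):

```
ansible-demo/
├── inventory.ini
├── site.yml
├── templates/
│   └── nginx-site.conf.j2
└── group_vars/
    └── webservers.yml   # loaded automatically for hosts in [webservers]
```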


Common Mistakes and Troubleshooting

SSH connection refused or timeout

If ansible webservers -i inventory.ini -m ping fails with a connection error, check that:

  • The IP addresses in your inventory are reachable: ping 192.168.1.101
  • SSH is running on the managed node: ssh [email protected]
  • The correct SSH user and key are specified in [webservers:vars]

Run Ansible with verbose output to see exactly what SSH command it is using:

ansible webservers -i inventory.ini -m ping -vvv

“sudo: a password is required”

This means your remote user requires a password for sudo. Either configure passwordless sudo on the managed node, or pass the sudo password to Ansible:

ansible-playbook -i inventory.ini site.yml -b --ask-become-pass

In production, set up passwordless sudo for the Ansible user using a file in /etc/sudoers.d/.
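A minimal drop-in for that, assuming the connection user is ubuntu, looks like this (validate it with visudo -cf /etc/sudoers.d/ansible before relying on it):

```
# /etc/sudoers.d/ansible  (assumes the Ansible connection user is "ubuntu")
ubuntu ALL=(ALL) NOPASSWD:ALL
```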

YAML indentation errors

Ansible playbooks are YAML, and YAML is sensitive to indentation. Always use spaces, never tabs. Two-space indentation is the convention. If a playbook fails to parse, run it through a YAML linter:

pip3 install yamllint
yamllint site.yml

“Changed” every run when it should not be

If a task shows changed on every run, it means Ansible cannot detect idempotency for that operation. Common culprits are the command and shell modules, which always report changed unless you add changed_when: false or use creates to check for an existing file. Prefer purpose-built modules (apt, copy, template, service) over raw command/shell whenever possible.
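For example, a command task can be guarded with creates so it runs only when its output file is missing (the key path below is illustrative):

```yaml
- name: Generate a private key only if it does not exist yet
  command: openssl genrsa -out /etc/ssl/private/site.key 2048
  args:
    creates: /etc/ssl/private/site.key
```

Once the file exists, Ansible skips the command and reports ok instead of changed, restoring idempotent behavior.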

Template not found

If you get an error like “could not find or access templates/nginx-site.conf.j2”, remember that Ansible resolves template paths relative to the playbook file. Make sure the templates/ directory is in the same directory as site.yml, and that the filename matches exactly.


Best Practices

Use key-based SSH authentication. Never store passwords in your inventory or playbooks. If you must handle secrets (passwords, API keys, database credentials), use Ansible Vault to encrypt them:

ansible-vault create vars/secrets.yml

Keep your playbooks idempotent. Design every task so it can run repeatedly without side effects. Prefer modules that have idempotency built in. When you must use shell, add appropriate guards with creates, removes, or when conditions.

Use inventory groups logically. Group hosts by role (webservers, databases) and by environment (production, staging). A host can belong to multiple groups. This lets you target specific subsets precisely.

Store your Ansible project in version control. Your playbooks are infrastructure code. Keep them in Git. This gives you a history of every change made to your infrastructure, which is invaluable when debugging.

Limit blast radius during development. Use --limit to target a single host when testing a new playbook before running it across your entire fleet:

ansible-playbook -i inventory.ini site.yml --limit web1

Use --check mode before applying changes. The --check flag runs the playbook in dry-run mode — it shows what would change without actually changing anything:

ansible-playbook -i inventory.ini site.yml --check

Prefer roles for reusable automation. Once your playbooks grow beyond a few tasks, break them into roles using ansible-galaxy init role-name. Roles enforce a consistent directory structure and are easy to share across projects. Ansible Galaxy (galaxy.ansible.com) hosts thousands of community roles you can use directly.
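For instance, ansible-galaxy init nginx generates a skeleton like this (exact contents can vary slightly between Ansible versions):

```
nginx/
├── README.md
├── defaults/main.yml    # lowest-precedence role variables
├── files/               # static files for copy tasks
├── handlers/main.yml
├── meta/main.yml        # role metadata and dependencies
├── tasks/main.yml       # the role's entry-point task list
├── templates/           # Jinja2 templates
├── tests/
└── vars/main.yml        # higher-precedence role variables
```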


Conclusion

You now have a working Ansible setup. You installed Ansible on a control node, defined an inventory of managed servers, ran ad-hoc commands to verify connectivity, and wrote a complete playbook that installs Nginx, deploys a configuration template, and handles service reloads correctly. You also saw how variables, handlers, and template files keep playbooks organized and maintainable.

The key takeaway is idempotency: your playbooks describe desired state, not a sequence of imperative commands. You can run them against a freshly provisioned server to bring it to a known good state, or against an already-configured server to verify and correct any drift. That property is what makes infrastructure-as-code reliable at scale.

From here, good next steps are:

  • Ansible Vault — encrypt sensitive variables like passwords and API keys
  • Roles — structure larger projects with reusable, shareable units of automation
  • Ansible Galaxy — use community roles for common tasks like setting up Docker, PostgreSQL, or fail2ban
  • Dynamic inventory — connect Ansible to AWS EC2, GCP, or other APIs to generate your inventory automatically instead of maintaining a static file
  • AWX / Ansible Tower — the open-source web UI for scheduling playbooks, managing credentials, and giving your team controlled access to run automation without direct SSH access

If you are already managing your servers with Nginx, Redis, or Kubernetes (as covered in earlier posts on this blog), wrapping those setups in Ansible playbooks is a natural next step — and it means the next time you provision a server, you get from bare Ubuntu to fully configured in minutes.