How to Set Up Prometheus and Grafana Monitoring on Ubuntu

Written by: Bagus Facsi Aginsa
Published at: 09 May 2026


You deployed a server, your application is running, and everything seems fine. Then one morning you wake up to a flood of complaints. The service was slow for hours — maybe it ran out of memory, maybe the disk filled up, maybe CPU was pegged at 100%. You have no idea when it started or why, because you had no visibility into what was happening.

This is the problem that monitoring solves. And the most widely adopted open-source monitoring stack for Linux systems today is Prometheus + Grafana.

In this tutorial, you will install and configure Prometheus to collect metrics from your Ubuntu server, expose those metrics through Node Exporter, and visualize everything in Grafana with a production-ready dashboard. By the end, you will have a working monitoring setup that shows CPU usage, memory, disk I/O, and network traffic — all in real time.


What Are Prometheus and Grafana?

Before touching the terminal, it helps to understand what these tools actually do and how they relate to each other.

Prometheus is a time-series database and monitoring system. It works by scraping — periodically pulling metrics from target endpoints over HTTP. Those endpoints expose metrics in a simple text format. Prometheus stores the scraped data and provides a query language called PromQL to query it.

Node Exporter is a Prometheus exporter — a lightweight agent that runs on your Linux machine and exposes hundreds of OS-level metrics (CPU, memory, disk, filesystem, network) on an HTTP endpoint that Prometheus can scrape.

Grafana is a visualization platform. It connects to Prometheus as a data source and lets you build dashboards with graphs, gauges, tables, and alerts. Grafana does not store data itself — it queries Prometheus and renders the results.

The data flow looks like this:

Linux Server
  ├── Node Exporter  (exposes metrics on :9100)
  │       ↑ scraped every 15s by
  ├── Prometheus     (stores metrics, queryable via :9090)
  │       ↑ queried by
  └── Grafana        (visualizes dashboards on :3000)

All three components will run on the same Ubuntu server in this tutorial. In production, you would typically run Prometheus and Grafana on a dedicated monitoring server and point them at multiple Node Exporters across your fleet.


Prerequisites

  • Ubuntu 20.04, 22.04, or 24.04 with at least 2 GB of RAM and 10 GB of free disk space
  • Root or sudo access
  • Ports 9090, 9100, and 3000 open in your firewall (or accessible from your local machine)
  • Basic comfort with the Linux terminal — editing files with nano, running commands with sudo

In the examples below, the server IP address is 192.168.1.10. Replace it with your actual server IP wherever it appears.


Step 1: Create System Users for Prometheus and Node Exporter

Running services as dedicated, unprivileged users is a security best practice. It limits the blast radius if a service is ever compromised.

sudo useradd --no-create-home --shell /bin/false prometheus
sudo useradd --no-create-home --shell /bin/false node_exporter

The --no-create-home flag skips creating a home directory (these accounts are not for human login), and --shell /bin/false prevents interactive shell access.


Step 2: Install Prometheus

Prometheus is distributed as a pre-compiled binary. Find the latest release at the Prometheus downloads page, then download it:

cd /tmp
curl -LO https://github.com/prometheus/prometheus/releases/download/v2.52.0/prometheus-2.52.0.linux-amd64.tar.gz
tar xvf prometheus-2.52.0.linux-amd64.tar.gz
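The version number appears in several commands above (and in the cleanup step below), so it is easy to typo. As a convenience, you can pin the release in one shell variable so an upgrade only touches one line — a sketch, with the download itself left commented out; 2.52.0 is simply the release used throughout this tutorial, so check the downloads page for the current one:

```shell
# Pin the Prometheus release once; every later command derives from it.
PROM_VERSION="2.52.0"
TARBALL="prometheus-${PROM_VERSION}.linux-amd64.tar.gz"
URL="https://github.com/prometheus/prometheus/releases/download/v${PROM_VERSION}/${TARBALL}"
echo "$URL"
# cd /tmp && curl -LO "$URL" && tar xvf "$TARBALL"
```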

Move the binaries to /usr/local/bin/:

sudo mv prometheus-2.52.0.linux-amd64/prometheus /usr/local/bin/
sudo mv prometheus-2.52.0.linux-amd64/promtool /usr/local/bin/

Set ownership so only the prometheus user can run them:

sudo chown prometheus:prometheus /usr/local/bin/prometheus
sudo chown prometheus:prometheus /usr/local/bin/promtool

Create the directories Prometheus needs for its configuration and data:

sudo mkdir /etc/prometheus
sudo mkdir /var/lib/prometheus
sudo chown prometheus:prometheus /etc/prometheus
sudo chown prometheus:prometheus /var/lib/prometheus

Copy the default console files (web UI templates) from the extracted archive:

sudo mv prometheus-2.52.0.linux-amd64/consoles /etc/prometheus/
sudo mv prometheus-2.52.0.linux-amd64/console_libraries /etc/prometheus/
sudo chown -R prometheus:prometheus /etc/prometheus/consoles
sudo chown -R prometheus:prometheus /etc/prometheus/console_libraries

Clean up:

rm -rf /tmp/prometheus-2.52.0.linux-amd64*

Step 3: Configure Prometheus

Create the main configuration file:

sudo nano /etc/prometheus/prometheus.yml

Paste the following content:

global:
  scrape_interval: 15s
  evaluation_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets: []

rule_files: []

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: "node_exporter"
    static_configs:
      - targets: ["localhost:9100"]

A quick explanation of the key fields:

  • scrape_interval: 15s — Prometheus polls each target every 15 seconds. Lower values give finer resolution but increase storage and CPU usage.
  • evaluation_interval: 15s — How often Prometheus evaluates alerting rules (not used here yet, but you will add rules later).
  • The scrape_configs section defines what to monitor. The first job (prometheus) makes Prometheus monitor itself. The second job (node_exporter) will scrape the Node Exporter you install in the next step.

Set the correct ownership:

sudo chown prometheus:prometheus /etc/prometheus/prometheus.yml

Step 4: Create the Prometheus Systemd Service

Running Prometheus as a systemd service means it starts automatically on boot and can be managed with standard systemctl commands.

sudo nano /etc/systemd/system/prometheus.service

Paste the following content:
[Unit]
Description=Prometheus Monitoring System
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/var/lib/prometheus/ \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --storage.tsdb.retention.time=15d

[Install]
WantedBy=multi-user.target

The --storage.tsdb.retention.time=15d flag tells Prometheus to keep 15 days of metric history. Adjust this based on your disk space. A busy server with many targets can accumulate gigabytes of data quickly.
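To put a rough number on "adjust this based on your disk space", you can estimate the retention window's footprint from the scrape settings. This is a back-of-envelope sketch: the ~2 bytes per compressed sample is a commonly cited rule of thumb for Prometheus's TSDB, and 1,500 series per target is an assumed figure for a default Node Exporter — your counts will differ:

```shell
# Rough disk estimate: samples = targets * series * scrapes/day * days.
TARGETS=2            # prometheus itself + one node_exporter
SERIES=1500          # assumed series per target (varies by host)
INTERVAL=15          # scrape_interval in seconds
DAYS=15              # --storage.tsdb.retention.time
BYTES_PER_SAMPLE=2   # assumed average after TSDB compression
TOTAL=$(( TARGETS * SERIES * (86400 / INTERVAL) * DAYS * BYTES_PER_SAMPLE ))
echo "approx $(( TOTAL / 1024 / 1024 )) MiB over ${DAYS} days"
```

Under these assumptions a single monitored host stays well under a gigabyte for 15 days; the number grows linearly with targets and with shorter scrape intervals.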

Reload systemd, start Prometheus, and enable it to run at boot:

sudo systemctl daemon-reload
sudo systemctl start prometheus
sudo systemctl enable prometheus

Verify it started successfully:

sudo systemctl status prometheus

You should see Active: active (running). Open a browser and navigate to http://192.168.1.10:9090 — the Prometheus web UI should load.


Step 5: Install Node Exporter

Node Exporter is the agent that exposes your server’s OS metrics. Download it from the Node Exporter releases page:

cd /tmp
curl -LO https://github.com/prometheus/node_exporter/releases/download/v1.8.0/node_exporter-1.8.0.linux-amd64.tar.gz
tar xvf node_exporter-1.8.0.linux-amd64.tar.gz
sudo mv node_exporter-1.8.0.linux-amd64/node_exporter /usr/local/bin/
sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter
rm -rf /tmp/node_exporter-1.8.0.linux-amd64*

Create the systemd service:

sudo nano /etc/systemd/system/node_exporter.service

Paste the following content:
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target

Start and enable it:

sudo systemctl daemon-reload
sudo systemctl start node_exporter
sudo systemctl enable node_exporter
sudo systemctl status node_exporter

You can verify that Node Exporter is exposing metrics by running:

curl -s http://localhost:9100/metrics | head -30

You should see output like:

# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.5901e-05
...
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 12345.67
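Because the metrics page is plain text, ordinary Unix tools work on it. A small sketch that filters one metric family out of a saved sample — against a live server, you would replace the `SAMPLE` variable with `$(curl -s localhost:9100/metrics)`:

```shell
# A saved three-line excerpt of node_exporter output (stand-in for curl).
SAMPLE='# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 12345.67'

# Keep only the sample lines for one metric family (HELP/TYPE lines
# start with '#', so anchoring on the metric name drops them).
MATCH=$(printf '%s\n' "$SAMPLE" | grep '^node_cpu_seconds_total')
echo "$MATCH"
```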

Step 6: Verify Prometheus Is Scraping Node Exporter

In the Prometheus web UI at http://192.168.1.10:9090, click Status → Targets. You should see two targets:

  • prometheus (1/1 up)
  • node_exporter (1/1 up)

If a target shows DOWN, it means Prometheus cannot reach that endpoint. Check that the service is running and the port is not blocked by a firewall.

You can also run a test query. In the Prometheus UI, click Graph, type the following PromQL expression, and hit Execute:

100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

This gives you the CPU usage percentage. If you see a number, Prometheus is working correctly.
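The query is plain counter arithmetic: rate() takes the increase of the idle counter over the 5-minute window and divides by elapsed time, giving the fraction of time the CPU was idle, and the outer "100 - ... * 100" inverts that into a busy percentage. The same math on two made-up counter samples taken 300 seconds apart:

```shell
IDLE_T0=12000.0   # node_cpu_seconds_total{mode="idle"} at time t0
IDLE_T1=12270.0   # same counter 5 minutes (300 s) later

# idle fraction = counter delta / elapsed seconds; usage = 100 - idle%.
awk -v a="$IDLE_T0" -v b="$IDLE_T1" \
  'BEGIN { idle = (b - a) / 300; printf "CPU usage: %.1f%%\n", 100 - idle * 100 }'
```

Here the CPU accumulated 270 idle seconds out of 300, so usage comes out at 10%.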


Step 7: Install Grafana

Grafana is available via its official APT repository. Add the repository and install:

sudo apt-get install -y apt-transport-https software-properties-common wget
sudo mkdir -p /etc/apt/keyrings
wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt-get update
sudo apt-get install -y grafana

(The older apt-key method is deprecated on recent Ubuntu releases, so the repository key goes into a dedicated keyring and is referenced with signed-by instead.)

Start and enable Grafana:

sudo systemctl start grafana-server
sudo systemctl enable grafana-server
sudo systemctl status grafana-server

Grafana runs on port 3000. Open http://192.168.1.10:3000 in your browser. The default login credentials are:

  • Username: admin
  • Password: admin

Grafana will immediately prompt you to change the password. Do it — do not leave the default credentials in place.


Step 8: Connect Grafana to Prometheus

Now you need to tell Grafana where to find your Prometheus data.

  1. In the Grafana sidebar, go to Connections → Data Sources
  2. Click Add data source
  3. Select Prometheus
  4. In the URL field, enter http://localhost:9090
  5. Leave everything else at defaults
  6. Click Save & Test

You should see a green message: “Successfully queried the Prometheus API.”
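Clicking through the UI is the quickest route, but Grafana can also load the data source from a provisioning file at startup, which is convenient when you rebuild servers from scripts. A minimal sketch — the filename is arbitrary, but the directory is Grafana's standard provisioning path:

```yaml
# /etc/grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
```

Restart grafana-server after adding the file and the data source appears without any clicking.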


Step 9: Import a Pre-Built Dashboard

Building dashboards from scratch takes time. The Grafana community shares thousands of dashboards at grafana.com/grafana/dashboards. The most popular dashboard for Node Exporter is Node Exporter Full (Dashboard ID: 1860).

To import it:

  1. In the Grafana sidebar, go to Dashboards → Import
  2. In the Import via grafana.com field, type 1860 and click Load
  3. Select your Prometheus data source from the dropdown
  4. Click Import

Within seconds, you will see a full dashboard showing CPU usage, memory usage, disk I/O, filesystem space, network throughput, and more — all pulling live data from your server.


Common Mistakes and Troubleshooting

Prometheus target shows DOWN

Check whether the service is running:

sudo systemctl status node_exporter

Check whether the port is listening:

sudo ss -tlnp | grep 9100

If the port is not open, the service failed to start. Check the logs:

sudo journalctl -u node_exporter -n 50

“Connection refused” when opening the web UI

Your firewall may be blocking the port. If you are using ufw:

sudo ufw allow 9090/tcp
sudo ufw allow 9100/tcp
sudo ufw allow 3000/tcp

Grafana shows “No data” on a dashboard

This usually means the time range selected in the dashboard is wrong. The default range is “Last 1 hour” — if you just installed everything, there may not be 1 hour of data yet. Change the range to “Last 15 minutes” and the graphs should populate.

Prometheus disk usage grows too fast

The default retention in this tutorial is 15 days. If disk space is a concern, lower it:

--storage.tsdb.retention.time=7d

Or set a size limit instead of a time limit:

--storage.tsdb.retention.size=5GB

YAML configuration errors

Prometheus will fail to start if prometheus.yml has YAML syntax errors. Validate it with:

promtool check config /etc/prometheus/prometheus.yml

Best Practices

Restrict port access. Node Exporter on port 9100 exposes detailed system information and should not be publicly accessible. In this tutorial Prometheus scrapes it over localhost, so no extra rule is needed yet — but once exporters run on other machines, use a firewall so only your Prometheus server can reach them:

sudo ufw allow from 192.168.1.10 to any port 9100 proto tcp

Use authentication for Prometheus and Grafana. By default, Prometheus has no authentication. In production, put Nginx in front of Prometheus as a reverse proxy with HTTP Basic Auth. Grafana has built-in user management — create individual user accounts instead of sharing the admin password.

Set up alerting. Grafana supports alerting natively (since Grafana 8). You can define alert rules on any panel and route notifications to Slack, PagerDuty, email, or Webhook. Add an alert for when disk usage exceeds 80% or memory usage stays above 90% for more than 5 minutes. These simple alerts catch the most common production incidents.

Monitor your monitoring stack. Prometheus scrapes itself by default (the prometheus job in your config). Add alerts for Prometheus too — alert when it cannot reach a target, or when its own memory usage spikes.

Use descriptive labels when adding more targets. When you start monitoring multiple servers, the targets list in prometheus.yml gets long. Use the labels field to tag each target:

scrape_configs:
  - job_name: "node_exporter"
    static_configs:
      - targets: ["10.0.0.1:9100"]
        labels:
          env: production
          role: web
      - targets: ["10.0.0.2:9100"]
        labels:
          env: production
          role: database

These labels become queryable in PromQL and Grafana, letting you filter dashboards by environment or role.
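Once the labels are in place, any query can filter on them. For example, the CPU usage query from Step 6 restricted to the production web tier (label names as in the snippet above):

```promql
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle", env="production", role="web"}[5m])) * 100)
```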

Separate long-term storage from the monitoring server. For serious production use, consider running Prometheus with remote write to a long-term storage backend like Thanos or Mimir. This gives you years of metric history without managing disk on a single machine.


Conclusion

You now have a working monitoring stack: Node Exporter collecting OS metrics, Prometheus scraping and storing them, and Grafana rendering live dashboards. In about 30 minutes of setup, you have the visibility that would have caught the slow-morning incident from the introduction before it turned into a flood of complaints.

From here, good next steps are:

  • Add more exporters. Prometheus has exporters for MySQL, Redis, Nginx, Kubernetes, RabbitMQ, and hundreds of other services. Search exporterhub.io for the one that matches your stack.
  • Set up Alertmanager. Prometheus ships with a companion called Alertmanager that handles routing, grouping, and deduplication of alerts. It integrates with Slack, PagerDuty, and email out of the box.
  • Explore PromQL. Learning PromQL lets you write custom queries and build dashboards tailored to your application. Start with the official PromQL basics guide.
  • Try Grafana Loki. Once you have metrics under control, logs are the natural next step. Loki is Grafana’s log aggregation tool and integrates natively with Grafana dashboards, so you can correlate a CPU spike with the exact log line that caused it.

Monitoring is not something you set up once and forget — it is a practice. The more you instrument, the better your intuition becomes about what “normal” looks like, and the faster you can diagnose when things go wrong.