You set up Prometheus and Grafana to watch your server’s CPU, memory, and disk. Metrics look healthy. But then a service crashes and you have no idea why — because metrics tell you what happened, but logs tell you why. If you are SSHing into each server and running grep on /var/log/syslog, you already know the pain.
This is exactly the problem that Grafana Loki solves. Loki is a log aggregation system built specifically to work alongside Prometheus and Grafana. Instead of replacing your logging stack with something heavy and expensive like Elasticsearch, Loki takes a lightweight, label-based approach that most developers find much easier to operate.
In this tutorial, you will install Loki and Promtail on an Ubuntu server, configure Promtail to ship system logs into Loki, and then query those logs directly inside Grafana. By the end, you will have a working centralized log setup that you can actually use in production.
What Are Loki and Promtail?
Before running any commands, it helps to understand how these two components fit together.
Grafana Loki is the log aggregation backend. It receives log streams, indexes only the labels attached to those streams (not the full log content), and stores the actual log lines in compressed chunks. Because it does not full-text-index the log bodies, it uses far less memory and disk than Elasticsearch. The trade-off is that searching requires you to provide at least one label to narrow the stream before filtering by content.
Promtail is the log shipping agent. It runs on each machine you want to collect logs from, tails log files (just like tail -f), attaches labels (such as job, host, and filename), and ships those labeled streams to Loki. Think of it as the Loki equivalent of Prometheus’s Node Exporter — a lightweight agent that runs on the edge.
LogQL is Loki’s query language. If you know PromQL from Prometheus, LogQL will feel familiar. A basic query looks like this:
{job="varlogs"} |= "error"
This means: give me all log lines from streams labeled job="varlogs" that contain the word “error”.
The data flow looks like this:
Log files (/var/log/*)
↓
Promtail (tail + label + ship)
↓
Loki (receive + store)
↓
Grafana (query + visualize)
If you already have Grafana running from a previous Prometheus setup, you only need to add Loki as a second data source. Everything else stays the same.
Prerequisites
Before starting, make sure you have:
- Ubuntu 22.04 or 24.04 (commands are tested on both)
- A user with sudo privileges
- At least 1 GB of RAM and 10 GB of free disk space
- Grafana already installed (or you can install it fresh during this tutorial)
- Basic comfort with the Linux command line and text editors like nano or vim
You do not need Prometheus running to follow this tutorial. Loki and Promtail are completely independent of the Prometheus stack.
Step 1: Download and Install Loki
Loki is distributed as a single binary. This tutorial installs it straight from the GitHub releases page, which is the simplest way to get the current version without adding a package repository.
First, check the latest version on the Grafana Loki releases page. At the time of writing, version 3.x is current. Adjust the version number below if a newer one is available.
# Set the version you want to install
LOKI_VERSION="3.4.2"
# Download the Loki binary
wget "https://github.com/grafana/loki/releases/download/v${LOKI_VERSION}/loki-linux-amd64.zip"
# Unzip and make it executable
unzip loki-linux-amd64.zip
chmod +x loki-linux-amd64
# Move it to a system-wide location
sudo mv loki-linux-amd64 /usr/local/bin/loki
Verify the binary works:
loki --version
You should see output like loki, version 3.4.2.
Step 2: Create Loki’s Configuration File
Loki needs a configuration file that tells it where to store data, how to handle incoming streams, and what port to listen on.
Create a directory for Loki’s configuration and data:
sudo mkdir -p /etc/loki
sudo mkdir -p /var/lib/loki
Create the configuration file:
sudo nano /etc/loki/loki-config.yaml
Paste in the following configuration:
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

common:
  instance_addr: 127.0.0.1
  path_prefix: /var/lib/loki
  storage:
    filesystem:
      chunks_directory: /var/lib/loki/chunks
      rules_directory: /var/lib/loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

schema_config:
  configs:
    - from: 2024-01-01
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://localhost:9093
Here is what the key sections do:
- auth_enabled: false — disables multi-tenant authentication. For a single-server setup this is fine; you can enable it later if needed.
- server.http_listen_port: 3100 — the port that Promtail and Grafana will talk to.
- common.path_prefix — the root directory for all Loki data files.
- schema_config — defines how Loki organizes and indexes data internally. The v13 schema with tsdb storage is the current recommended configuration.
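Before moving on, it is worth validating the file. Recent Loki releases accept a -verify-config flag that parses the configuration and exits without starting the server; if your version predates the flag, briefly running the binary in the foreground and watching for configuration errors works just as well:
loki -config.file=/etc/loki/loki-config.yaml -verify-config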
Step 3: Run Loki as a Systemd Service
Running Loki manually in a terminal is fine for testing, but you want it to start automatically on boot and restart if it crashes. For this, you create a systemd service unit.
First, create a dedicated system user for Loki (no login shell needed):
sudo useradd --no-create-home --shell /bin/false loki
sudo chown -R loki:loki /var/lib/loki
sudo chown -R loki:loki /etc/loki
Now create the service file:
sudo nano /etc/systemd/system/loki.service
Paste in:
[Unit]
Description=Loki Log Aggregation System
After=network.target
[Service]
User=loki
Group=loki
Type=simple
ExecStart=/usr/local/bin/loki -config.file=/etc/loki/loki-config.yaml
Restart=on-failure
RestartSec=5s
[Install]
WantedBy=multi-user.target
Enable and start the service:
sudo systemctl daemon-reload
sudo systemctl enable loki
sudo systemctl start loki
Check that Loki is running:
sudo systemctl status loki
You should see active (running). Also confirm Loki is listening on port 3100:
curl -s http://localhost:3100/ready
If Loki is healthy it will respond with ready.
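One caveat: the ingester takes a short while to join its ring after startup, so /ready may briefly report that Loki is not ready. If the check fails right after a restart, wait a few seconds and retry, or watch the startup logs:
journalctl -u loki -f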
Step 4: Download and Install Promtail
Promtail is also distributed as a single binary and lives on the GitHub releases page alongside Loki. Use the same version number for compatibility.
LOKI_VERSION="3.4.2"
wget "https://github.com/grafana/loki/releases/download/v${LOKI_VERSION}/promtail-linux-amd64.zip"
unzip promtail-linux-amd64.zip
chmod +x promtail-linux-amd64
sudo mv promtail-linux-amd64 /usr/local/bin/promtail
Verify the installation:
promtail --version
Step 5: Configure Promtail
Promtail’s configuration tells it which log files to tail, what labels to attach, and where to send the logs.
Create the configuration directory:
sudo mkdir -p /etc/promtail
Create the configuration file:
sudo nano /etc/promtail/promtail-config.yaml
Paste in the following:
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /var/lib/promtail/positions.yaml

clients:
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: varlogs
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          host: ubuntu-server
          __path__: /var/log/*.log

  - job_name: syslog
    static_configs:
      - targets:
          - localhost
        labels:
          job: syslog
          host: ubuntu-server
          __path__: /var/log/syslog
Here is what each section does:
- positions.filename — Promtail records the last byte position it read in each log file here. If Promtail restarts, it picks up from where it left off instead of replaying the entire file.
- clients.url — the Loki push endpoint. Since Loki and Promtail are on the same machine, we use localhost:3100.
- scrape_configs — defines what to tail. Each entry has a job_name and labels. The special __path__ label uses a glob pattern to match files. Labels like job and host are what you use to filter logs in Grafana.
Create the positions directory and set permissions:
sudo mkdir -p /var/lib/promtail
sudo useradd --no-create-home --shell /bin/false promtail
sudo usermod -aG adm promtail
sudo chown -R promtail:promtail /var/lib/promtail
sudo chown -R promtail:promtail /etc/promtail
The adm group is important — on Ubuntu, most log files in /var/log/ are readable by the adm group. Without it, Promtail cannot read most system logs and will report permission denied errors instead of shipping them.
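Before wiring Promtail into systemd, you can sanity-check both the permissions and the configuration. The first command below simply reads a log file as the promtail user; the second uses Promtail's dry-run mode, which prints what it would ship instead of sending anything to Loki (stop it with Ctrl+C):
# Should print the first line of syslog, not a permission error
sudo -u promtail head -n 1 /var/log/syslog
# Tail and label logs locally without pushing to Loki
sudo -u promtail promtail -config.file=/etc/promtail/promtail-config.yaml -dry-run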
Step 6: Run Promtail as a Systemd Service
sudo nano /etc/systemd/system/promtail.service
Paste in:
[Unit]
Description=Promtail Log Shipping Agent
After=network.target
[Service]
User=promtail
Group=promtail
Type=simple
ExecStart=/usr/local/bin/promtail -config.file=/etc/promtail/promtail-config.yaml
Restart=on-failure
RestartSec=5s
[Install]
WantedBy=multi-user.target
Enable and start it:
sudo systemctl daemon-reload
sudo systemctl enable promtail
sudo systemctl start promtail
Check the status:
sudo systemctl status promtail
Promtail exposes its own status page — open this in a browser to verify it is tailing files correctly:
http://your-server-ip:9080/targets
You should see your configured log paths listed with their current read positions.
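You can also ask Loki itself which labels it has indexed so far, which confirms data is arriving end to end:
curl -s http://localhost:3100/loki/api/v1/labels
A healthy setup returns a JSON list that includes the labels you configured, such as job and host.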
Step 7: Add Loki as a Grafana Data Source
If you already have Grafana running, open it in your browser (usually at http://your-server-ip:3000).
- In the left sidebar, click Connections → Data sources.
- Click Add data source.
- Search for Loki and select it.
- In the URL field, enter http://localhost:3100.
- Scroll down and click Save & test.
Grafana will run a quick health check against Loki. If everything is working, you will see a green Data source successfully connected message.
Step 8: Query Logs with LogQL in Grafana
Now for the payoff. In Grafana’s left sidebar, click the Explore icon (the compass).
From the data source dropdown at the top, select Loki.
In the query box, type:
{job="varlogs"}
Click Run query (or press Shift+Enter). You should see log lines from /var/log/*.log streaming in.
Try filtering for errors:
{job="syslog"} |= "error"
Or use regex matching to find authentication failures:
{job="syslog"} |~ "Failed password|authentication failure"
You can also use label filters with pipelines to parse structured logs:
{job="varlogs"} | pattern `<_> <level> <_>` | level = "ERROR"
LogQL is powerful, but to start you can accomplish most everyday tasks with just the |= (contains) and != (does not contain) line filters.
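Beyond filtering, LogQL can also turn log streams into metrics you can graph or alert on. For example, this counts error lines per stream over a five-minute sliding window:
count_over_time({job="syslog"} |= "error" [5m])
Wrap it in sum by (host) (...) once multiple servers are shipping logs, and you get an error-rate chart per machine.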
Common Mistakes and Troubleshooting
Promtail cannot read log files
If you see permission denied errors in Promtail’s logs, the promtail user does not have read access to the log files. The fix is to add it to the adm group:
sudo usermod -aG adm promtail
sudo systemctl restart promtail
Loki returns “no results” even though Promtail is running
Check whether Promtail is actually shipping data by looking at its own logs:
journalctl -u promtail -f
If Promtail's log is free of errors and the targets page shows advancing read positions, logs are flowing. If you see connection refused errors, make sure Loki is running and listening on port 3100.
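To separate a Loki problem from a Promtail problem, you can push a test line straight into Loki with curl, using the same API Promtail does. The job="curltest" label here is just an arbitrary marker so you can find the line again:
curl -s -X POST http://localhost:3100/loki/api/v1/push \
  -H "Content-Type: application/json" \
  -d "{\"streams\":[{\"stream\":{\"job\":\"curltest\"},\"values\":[[\"$(date +%s%N)\",\"test line from curl\"]]}]}"
If {job="curltest"} then shows up in Grafana but your system logs do not, the problem is on the Promtail side.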
Grafana shows “Unable to connect” when testing the Loki data source
Make sure you are using http://localhost:3100 and not https. Loki in this configuration does not use TLS. If Grafana is running in Docker or on a different machine, replace localhost with the actual IP address of the Loki server.
Promtail position file fills up the disk
The positions file only stores byte offsets, so it stays tiny. If disk usage is growing, check /var/lib/loki/chunks/ instead. You may need to add retention settings to Loki’s configuration (see Best Practices below).
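A quick way to confirm where the space is actually going:
sudo du -sh /var/lib/promtail /var/lib/loki/chunks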
Old logs are not appearing in Grafana
Promtail records how far it has read each file in its positions file and only ships lines past that point. If you want to (re-)ingest a file's existing content from the beginning, delete the positions file and restart Promtail:
sudo systemctl stop promtail
sudo rm /var/lib/promtail/positions.yaml
sudo systemctl start promtail
Be careful doing this in production — it will re-ingest everything from the beginning.
Best Practices
Set up log retention in Loki
Without retention, Loki will store logs forever and eventually fill your disk. Add a retention period to loki-config.yaml under the limits_config section:
limits_config:
  retention_period: 30d

compactor:
  working_directory: /var/lib/loki/compactor
  retention_enabled: true
  delete_request_store: filesystem
Note that in Loki 3.x, enabling retention on the compactor also requires delete_request_store to be set; filesystem matches the storage backend used in this tutorial.
Thirty days is a reasonable default for most setups. Adjust based on your disk capacity and compliance requirements.
Use structured logging in your applications
Loki is far more useful when your application emits JSON logs. Instead of:
2026-05-09 10:34:22 ERROR failed to connect to database
Emit:
{"time":"2026-05-09T10:34:22Z","level":"error","msg":"failed to connect to database","db":"postgres","attempt":3}
You can then parse those fields in LogQL and filter by level, db, or any other field without grep-style string matching.
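For example, the json parser in LogQL promotes each field of a JSON log line to a filterable label. Assuming an application labeled job="myapp" (a placeholder; use whatever job label your Promtail config assigns) emits the line above, this query keeps only postgres errors:
{job="myapp"} | json | level="error" | db="postgres"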
Add resource limits to the systemd services
Prevent Loki from consuming unbounded memory by adding limits to the service unit:
[Service]
MemoryMax=512M
CPUQuota=50%
(MemoryMax is the cgroup v2 setting; the older MemoryLimit name is deprecated on current Ubuntu releases.)
Restart the service after editing the unit file with sudo systemctl daemon-reload && sudo systemctl restart loki.
Protect Loki behind a reverse proxy for remote access
The default Loki setup has no authentication. If you need to access it from outside the server, put it behind an Nginx reverse proxy with HTTP basic auth or integrate it with Grafana’s built-in auth proxy. Never expose port 3100 directly to the internet.
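As a minimal sketch, an Nginx server block in front of Loki could look like the following. The server name and certificate handling are placeholders, and the password file is created with htpasswd -c /etc/nginx/.htpasswd youruser (from the apache2-utils package):
server {
    listen 443 ssl;
    server_name loki.example.com;

    # ssl_certificate and ssl_certificate_key directives go here

    location / {
        auth_basic "Loki";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass http://127.0.0.1:3100;
    }
}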
Monitor Loki itself with Prometheus
Loki exposes Prometheus metrics at http://localhost:3100/metrics. Add it as a scrape target in your prometheus.yml to track ingestion rates, chunk counts, and query latency — so you know if your logging pipeline itself is healthy.
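A minimal pair of scrape jobs in prometheus.yml covers both components; Promtail serves its own metrics on the HTTP port from its server block, 9080 in this tutorial:
scrape_configs:
  - job_name: loki
    static_configs:
      - targets: ["localhost:3100"]
  - job_name: promtail
    static_configs:
      - targets: ["localhost:9080"]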
Conclusion
You now have a working log aggregation pipeline on Ubuntu. Loki is receiving logs, Promtail is shipping them from /var/log/, and Grafana is ready to query and visualize them. The key things you set up:
- Loki as a lightweight log storage backend running on port 3100
- Promtail as a log tailing agent attached to system log files
- Both services managed by systemd, running under dedicated low-privilege users
- Loki connected to Grafana as a data source
From here, a natural next step is to add Promtail to your other servers — point each one at the same central Loki instance and use the host label to tell them apart. You can also explore Loki alerting rules, which let you trigger alerts based on log content (for example, fire an alert when more than 10 authentication failures appear per minute), and pipe those alerts through Prometheus Alertmanager or Grafana’s own alerting system.
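On each additional server, the Promtail config stays the same except for two things: the client URL points at the central Loki instance instead of localhost, and the host label identifies the machine. Replace the placeholder with your Loki server's address:
clients:
  - url: http://<loki-server-ip>:3100/loki/api/v1/push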
The combination of Prometheus for metrics and Loki for logs, both displayed in Grafana, gives you observability coverage that would have required a much larger and more expensive toolset just a few years ago.