Running a local AI model with Ollama is powerful, but the terminal is not the most comfortable place to have a long conversation with an LLM. Typing prompts into ollama run, reading streamed text, and scrolling back through previous responses gets old quickly. You want a proper chat interface that remembers your conversations, lets you switch models from a dropdown, and works from any browser on your network.
Open WebUI is that interface. It is a self-hosted, full-featured web application that gives you a ChatGPT-style chat experience backed entirely by your own local Ollama models. No data leaves your network, no API keys, no subscriptions. It connects to the Ollama API running on your machine (or server) and wraps it in a polished browser UI.
If you have already set up Ollama and want a better way to interact with your models, this tutorial is for you. If you have not yet set up Ollama, check out Ollama vs LM Studio: Choosing the Right Tool to Run Local LLMs on Ubuntu first, then come back here.
What Open WebUI Actually Is
Open WebUI (formerly known as Ollama WebUI) is an open-source project built with SvelteKit and Python (FastAPI). It runs as a web application that your browser connects to. On the backend, Open WebUI proxies requests to the Ollama API and adds its own layer of features on top:
- Persistent conversation history stored in a local SQLite database
- Multi-user support with separate accounts and conversation histories
- Model switcher to change between your downloaded Ollama models mid-session
- System prompt management to save and reuse custom personas or instructions
- Document ingestion (RAG) to upload files and have the model answer questions about them
- Image generation via AUTOMATIC1111 or ComfyUI if you have those set up
- OpenAI API compatibility so you can point it at OpenAI’s servers too, if you want
For most people, the most valuable features are the conversation history and the model switcher. Everything else is a bonus.
Prerequisites
Before starting, make sure you have:
- Ubuntu 20.04, 22.04, or 24.04
- Ollama installed and running (systemctl status ollama should show active (running))
- At least one model already pulled (run ollama pull llama3.2:3b if you have not done this)
- A user with sudo privileges
- Either Docker and Docker Compose installed, or Python 3.11+
You do not need a GPU to follow this tutorial, but inference will be slow on CPU-only machines for models larger than 3B parameters.
Check that Ollama is responding before you continue:
curl http://localhost:11434/api/tags
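A healthy response is a JSON object listing your downloaded models, something like this (trimmed for readability; your model list will differ):

{"models":[{"name":"llama3.2:3b","modified_at":"...","size":...}]}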
If you get a connection error instead, start Ollama first:
systemctl start ollama
Two Ways to Install Open WebUI
Open WebUI supports two installation methods: Docker and pip (Python). Docker is the official, recommended approach because it handles all dependencies and avoids Python version conflicts. The pip method is useful on servers where Docker is not available or where you want to avoid the container overhead.
This tutorial covers both. Pick the one that fits your environment.
Method 1: Install with Docker (Recommended)
Step 1: Install Docker
If you already have Docker installed, skip this step. Otherwise:
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
newgrp docker
The usermod command adds your current user to the docker group so you can run Docker commands without sudo. The newgrp command applies the group change to your current shell session without logging out.
Verify it works:
docker --version
docker run hello-world
Step 2: Run Open WebUI
The Open WebUI Docker image is published to GitHub Container Registry. Run it with the following command:
docker run -d \
--name open-webui \
--restart always \
-p 3000:8080 \
-v open-webui:/app/backend/data \
--add-host=host.docker.internal:host-gateway \
ghcr.io/open-webui/open-webui:main
Here is what each flag does:
- -d runs the container detached (in the background)
- --name open-webui sets the container name for easy management
- --restart always restarts the container automatically if it crashes or the machine reboots
- -p 3000:8080 exposes the container's port 8080 as port 3000 on your host
- -v open-webui:/app/backend/data creates a named Docker volume to persist your conversations and settings
- --add-host=host.docker.internal:host-gateway lets the container reach Ollama running on the host machine at http://host.docker.internal:11434
The image is about 1.5 GB and will take a couple of minutes to download the first time.
Step 3: Verify the Container Is Running
docker ps
You should see the open-webui container in the list with status Up. Check the logs if you want to see the startup output:
docker logs open-webui
Look for a line like Uvicorn running on http://0.0.0.0:8080. That confirms the backend is ready.
Step 4: Open the Browser
Navigate to http://localhost:3000 (or replace localhost with your server’s IP address if you are accessing it from another machine).
The first time you visit, Open WebUI will prompt you to create an admin account. This is the account with full access to all settings. Enter an email address and password, then click Create Admin Account. After that, you will be taken directly to the chat interface.
Step 5: Confirm Ollama Connection
In the top left of the chat interface, there is a model selector dropdown. Click it. If Ollama is connected, you will see the models you downloaded listed there (for example, llama3.2:3b). Select one and type a message to confirm everything is working end to end.
If the dropdown is empty, go to Settings → Connections and verify that the Ollama URL is set to http://host.docker.internal:11434. This is the address that Docker uses to reach the host machine.
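You can also test that route from inside the container. The backend is Python, so the image ships a python3 interpreter, and a quick connectivity check without installing anything extra looks like this:

docker exec open-webui python3 -c \
  "import urllib.request; print(urllib.request.urlopen('http://host.docker.internal:11434/api/version').read().decode())"

A small JSON blob with Ollama's version number means the container can reach the host; a connection error points at the binding issue covered in the troubleshooting section below.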
Method 2: Install with pip (No Docker)
If you prefer not to use Docker, Open WebUI can be installed directly as a Python package.
Step 1: Ensure Python 3.11 is Available
Open WebUI requires Python 3.11 or newer. Check your version:
python3 --version
On Ubuntu 22.04, the default Python is 3.10. Install 3.11:
sudo apt update
sudo apt install python3.11 python3.11-venv python3.11-dev -y
On Ubuntu 24.04, Python 3.12 is the default and works fine.
Step 2: Create a Virtual Environment
Installing into a virtual environment keeps Open WebUI’s dependencies isolated from your system Python:
python3.11 -m venv ~/open-webui-venv
source ~/open-webui-venv/bin/activate
Your terminal prompt will change to show (open-webui-venv), indicating that the virtual environment is active.
Step 3: Install Open WebUI
pip install open-webui
This installs the package along with all its Python dependencies. It will take a few minutes.
Step 4: Start Open WebUI
open-webui serve
The first time you run this command, it will download the frontend assets and set up the local SQLite database. This takes about a minute. Once you see Uvicorn running on http://0.0.0.0:8080, the server is ready.
By default it binds to port 8080. Access it at http://localhost:8080.
Because Ollama runs directly on the host (not inside Docker), no special network configuration is needed. Open WebUI will connect to http://localhost:11434 automatically.
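If you need different behavior, two knobs are worth knowing: current releases of the serve command accept --host and --port flags (run open-webui serve --help to confirm on your version), and the OLLAMA_BASE_URL environment variable overrides where Open WebUI looks for Ollama. For example, to match the Docker setup's port 3000 and point at an Ollama instance on another machine (the IP here is illustrative):

OLLAMA_BASE_URL=http://192.168.1.50:11434 open-webui serve --host 0.0.0.0 --port 3000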
Step 5: Run as a systemd Service
Running open-webui serve in a terminal works for development, but for production you want it running as a background service that starts automatically on boot. Create a systemd unit file:
sudo nano /etc/systemd/system/open-webui.service
Paste the following, replacing bagus with your actual username:
[Unit]
Description=Open WebUI
After=network.target ollama.service
Wants=ollama.service
[Service]
Type=simple
User=bagus
WorkingDirectory=/home/bagus
ExecStart=/home/bagus/open-webui-venv/bin/open-webui serve
Restart=on-failure
RestartSec=5
Environment="HOME=/home/bagus"
Environment="DATA_DIR=/home/bagus/.open-webui"
[Install]
WantedBy=multi-user.target
The After=ollama.service and Wants=ollama.service lines ensure systemd starts Ollama before Open WebUI, which prevents startup errors when the machine boots.
Enable and start the service:
sudo systemctl daemon-reload
sudo systemctl enable open-webui
sudo systemctl start open-webui
Check that it is running:
systemctl status open-webui
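If the status shows failed, the service logs usually say why:

journalctl -u open-webui -n 50 --no-pager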
Exposing Open WebUI Behind Nginx
By default, Open WebUI serves plain HTTP on port 3000 (Docker) or 8080 (pip). That is fine on a trusted LAN, but if you want a proper domain name, TLS, and a standard port, put it behind an Nginx reverse proxy.
This section assumes you have Nginx installed. If you do not, install it:
sudo apt install nginx -y
Create a new Nginx server block:
sudo nano /etc/nginx/sites-available/open-webui
Paste the following configuration, replacing webui.example.com with your actual domain or server IP (and, if you used the pip install, changing the proxy_pass port from 3000 to 8080):
server {
listen 80;
server_name webui.example.com;
client_max_body_size 50M;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 300s;
proxy_connect_timeout 75s;
}
}
Three settings here are non-obvious:
- proxy_http_version 1.1 with the Upgrade and Connection headers: Open WebUI uses WebSockets for streaming model responses. Without these, the streamed text will not render in real time; you will see a blank screen until the full response completes.
- proxy_read_timeout 300s: large models or long prompts can take more than the default 60-second timeout. 300 seconds gives you a comfortable buffer.
- client_max_body_size 50M: needed if you plan to use the document upload (RAG) feature.
Enable the site and reload Nginx:
sudo ln -s /etc/nginx/sites-available/open-webui /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
For HTTPS, use Certbot to obtain a free Let’s Encrypt certificate. If you have not done this before, the process is straightforward and covered in Secure Nginx with Let’s Encrypt SSL Using Certbot on Ubuntu.
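The short version, assuming your domain already resolves to this server:

sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d webui.example.com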
Managing Open WebUI
Update the Docker Image
Open WebUI releases new versions frequently. To update:
docker pull ghcr.io/open-webui/open-webui:main
docker stop open-webui
docker rm open-webui
Then re-run the original docker run command from Step 2. Your conversations and settings are stored in the open-webui Docker volume and will survive the container recreation.
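Alternatively, the Open WebUI docs describe a one-shot update using the Watchtower container, which pulls the new image and recreates open-webui with its existing flags:

docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --run-once open-webui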
Update the pip Installation
With the virtual environment active:
source ~/open-webui-venv/bin/activate
pip install --upgrade open-webui
sudo systemctl restart open-webui
Pull New Models from the UI
You do not have to drop to the terminal every time you want a new model. In Open WebUI, go to Settings → Models and use the model pull field to download models directly. Type the model name (for example qwen2.5:7b) and click the download button. Open WebUI sends the pull command to the Ollama API and shows you a progress bar.
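Under the hood this is just a call to Ollama's pull endpoint. The equivalent request from the terminal looks like this:

curl http://localhost:11434/api/pull -d '{"model": "qwen2.5:7b"}'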
Add a Second User
Navigate to Settings → Admin Panel → Users and click Add User. You can create additional accounts with the User role, which gives them access to chat but not to admin settings. Each user has their own conversation history.
Common Mistakes and Troubleshooting
Open WebUI cannot connect to Ollama (Docker installation)
The most common cause is a missing or incorrect --add-host flag. The container needs to reach Ollama on the host, and host.docker.internal is the hostname that resolves to the host's IP from inside a Docker container. Verify it is set:
docker inspect -f '{{ .HostConfig.ExtraHosts }}' open-webui

The output should be [host.docker.internal:host-gateway]. If it is empty, remove the container and re-run it with the correct flags.
Also check that Ollama is listening on an interface the container can reach. By default, Ollama binds to 127.0.0.1:11434 and accepts only loopback connections, but traffic from the container arrives via the Docker bridge gateway, not loopback, so those connections get refused. Set the OLLAMA_HOST environment variable so Ollama listens on all interfaces:
sudo systemctl edit ollama
Add:
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Then restart Ollama:
sudo systemctl daemon-reload
sudo systemctl restart ollama
Streamed responses do not show in real time behind Nginx
The cause is almost always missing WebSocket proxy headers. Make sure your Nginx config includes:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
Without these, Nginx downgrades to HTTP/1.0 and buffers the response, so you see nothing until the full completion is done.
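If streaming is still bursty after adding those headers, disabling proxy buffering inside the location block is an optional further tweak (not included in the config above):

proxy_buffering off;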
The page loads but the chat box does nothing
Open your browser's developer tools (F12) and check the Console and Network tabs for errors. If you see WebSocket connection failures, the Nginx proxy headers are likely the issue. If you see 502 errors, the backend is not running; check docker logs open-webui or systemctl status open-webui.
First admin account was created with the wrong email
Open WebUI does not ship a CLI subcommand for resetting the admin account. The documented recovery path is to overwrite the stored bcrypt password hash directly in the SQLite database (webui.db) that lives in the data volume; a sketch follows. For the pip installation, the same database sits under your DATA_DIR (default ~/.open-webui) and can be edited with sqlite3 directly.
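Here is a sketch of that reset for the Docker installation, assuming the default auth table schema (email and password columns) and the open-webui volume name; verify the exact steps against the password-reset page in the official docs for your version:

# Generate a bcrypt hash of the new password (htpasswd is in apache2-utils)
htpasswd -bnBC 10 "" 'newpassword' | tr -d ':\n'

# Write the hash into the auth table inside the data volume
# Note: escape any $ characters in the pasted hash, since the outer quotes are double quotes
docker run --rm -v open-webui:/data alpine sh -c \
  "apk add --no-cache sqlite >/dev/null && \
   sqlite3 /data/webui.db \
   \"UPDATE auth SET password='PASTE_HASH_HERE' WHERE email='[email protected]';\""

Stop the container before touching the database and start it again afterward.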
The pip installation errors out during pip install open-webui
Make sure you have the python3.11-dev package installed; some dependencies require the Python headers to compile native extensions. Also ensure you have enough disk space; the full installation uses about 2 GB.
Best Practices
Create a dedicated system user for the pip service. Instead of running Open WebUI as your personal user, create a webui user with no login shell and place the virtual environment and data directory under its home. This limits what the process can access if something goes wrong.
Back up the data volume regularly. For Docker, the named volume open-webui stores your entire database. Back it up by copying its contents:
docker run --rm \
-v open-webui:/data \
-v /backup:/backup \
alpine tar czf /backup/open-webui-$(date +%F).tar.gz /data
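Restoring is the reverse: stop the container, unpack the archive back into the volume, and start it again (the filename below is a placeholder for your backup's date):

docker stop open-webui
docker run --rm \
  -v open-webui:/data \
  -v /backup:/backup \
  alpine tar xzf /backup/open-webui-YYYY-MM-DD.tar.gz -C /
docker start open-webui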
For the pip installation, back up the DATA_DIR directory (default ~/.open-webui).
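A matching one-liner for the pip setup, assuming the default data directory:

tar czf ~/open-webui-backup-$(date +%F).tar.gz -C ~ .open-webui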
Do not expose Open WebUI directly to the internet without authentication. If you have set Open WebUI to allow new user signups, anyone who reaches the URL can create an account and use your compute resources. Either disable signup in Settings → Admin Panel → General and use invite-only registration, or gate the application behind your VPN. See Connect Remote Servers with ZeroTier on Ubuntu for a lightweight private network option.
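For the Docker installation, signup can also be switched off at the container level via the ENABLE_SIGNUP environment variable (supported in current releases); add this flag to the docker run command from Step 2:

-e ENABLE_SIGNUP=false \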
Use the system prompt feature consistently. If you interact with models for a specific domain (writing, code review, infrastructure), create a saved system prompt for that context and switch to it at the start of each session. It removes the need to repeat context in every conversation.
Keep Open WebUI updated. The project is actively developed and releases patch versions that fix bugs and security issues. The :main tag tracks the latest stable build, so running the update procedure monthly is enough to stay current.
Conclusion
You now have a full-featured chat interface running locally, backed by your own Ollama models with no data leaving your machine. Open WebUI gives you persistent conversations, model switching, and document uploads without the subscription fees or privacy tradeoffs.
The setup you have at this point is:
- Ollama running as a systemd daemon, managing and serving your local models
- Open WebUI running as a persistent service, giving you a browser-based chat interface
- Optionally, Nginx in front of Open WebUI with TLS, making the interface accessible securely from anywhere on your network
From here, the logical next steps are:
- Explore the RAG feature: Upload your own documents and ask the model questions about them. Open WebUI handles chunking and retrieval automatically.
- Try different models: Smaller models like phi3:mini are fast and surprisingly capable for structured tasks. Larger models like llama3.1:70b (if your hardware supports it) push the quality ceiling much higher.
- Connect to OpenAI: Open WebUI can also proxy to the OpenAI API. You can switch between local Ollama models and cloud models from the same interface, which is useful when you need a task done faster or at higher quality than your hardware can deliver.
- Set up model-specific system prompts: Create different “assistants” for different workflows and save them as presets you can load instantly.