Crafting Conversations with Ollama-WebUI: Your Server, Your Rules

Empower your server with Ollama-WebUI, where conversations become an art. This guide unveils the steps to customize and deploy it for a tailored conversational experience.

Enhance your conversational AI experience with Ollama-WebUI—a powerful web interface for Ollama that combines intuitive design with robust features. This guide will walk you through the deployment process, ensuring a seamless setup on your own server.


Ollama-WebUI boasts a range of features designed to elevate your conversational AI interactions:

  • Intuitive Interface: Inspired by ChatGPT for a user-friendly experience.
  • Responsive Design: Seamlessly usable on desktop and mobile devices.
  • Swift Responsiveness: Enjoy fast and responsive performance.
  • Effortless Setup: Hassle-free installation using Docker.
  • Code Syntax Highlighting: Enhanced code readability.
  • Full Markdown and LaTeX Support: Comprehensive formatting capabilities.
  • Download/Delete Models: Manage models directly from the web UI.
  • Multiple Model Support: Switch between different chat models.
  • Multi-Model Conversations: Chat with several models simultaneously.
  • OpenAI Model Integration: Utilize OpenAI models alongside Ollama.
  • Regeneration History Access: Revisit and explore your entire regeneration history.
  • Chat History: Access and manage your conversation history.
  • Import/Export Chat History: Move chat data in and out of the platform.
  • Voice Input Support: Interact with models through voice; send voice input automatically after 3 seconds of silence.
  • Fine-Tuned Control with Advanced Parameters: Adjust parameters for a tailored conversation.
  • Auth Header Support: Enhance security with Authorization headers.
  • External Ollama Server Connection: Link to an external Ollama server hosted on a different address.
  • Backend Reverse Proxy Support: Strengthen security with direct communication between Ollama Web UI backend and Ollama.
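
Several of the features above (model management, multi-model chat) are built on the REST API that the Ollama server exposes on port 11434. As a minimal sketch, here is how a script could query the same model list the web UI shows; the host is an assumption, so point it at your own server:

```python
import json
from urllib import request

# GET /api/tags lists the models available on an Ollama server.
# The host below is a placeholder for your own deployment.
def list_models_request(host="http://localhost:11434"):
    """Build (but do not send) the request for the model list."""
    return request.Request(f"{host}/api/tags", method="GET")

req = list_models_request()
print(req.full_url)  # http://localhost:11434/api/tags

# Against a running server:
# with request.urlopen(req) as resp:
#     models = [m["name"] for m in json.loads(resp.read())["models"]]
```

The request is built separately from being sent, so you can inspect or log it before talking to a live server.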

Related repositories:

  • jmorganca/ollama on GitHub: Get up and running with Llama 2 and other large language models locally.
  • ollama-webui/ollama-webui on GitHub: ChatGPT-Style Web UI Client for Ollama 🦙.

Deployment Steps

Installing Both Ollama and Ollama Web UI Using Docker Compose

If you don't have Ollama installed yet, follow these steps:

  1. Clone the repository:

    git clone https://github.com/ollama-webui/ollama-webui.git
    cd ollama-webui
  2. Edit compose.yaml:

    nano compose.yaml

    Modify compose.yaml to enable GPU support and to expose the Ollama API outside the container stack if needed.


version: '3.8'

services:
  ollama:
    # Uncomment below for GPU support
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities:
    #             - gpu
    volumes:
      - ./ollama:/root/.ollama
    # Uncomment below to expose Ollama API outside the container stack
    # ports:
    #   - 11434:11434
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    image: ollama/ollama:latest

  # Uncomment below for WIP: Auth support
  # ollama-webui-db:
  #   image: mongo
  #   container_name: ollama-webui-db
  #   restart: always
  #   # Make sure to change the username/password!
  #   environment:
  #     MONGO_INITDB_ROOT_USERNAME: root
  #     MONGO_INITDB_ROOT_PASSWORD: example

  ollama-webui:
    build:
      context: .
      args:
        OLLAMA_API_BASE_URL: '/ollama/api'
      dockerfile: Dockerfile
    image: ollama-webui:latest
    container_name: ollama-webui
    depends_on:
      - ollama
      # Uncomment below for WIP: Auth support
      # - ollama-webui-db
    ports:
      - 3000:8080
    environment:
      - "OLLAMA_API_BASE_URL=http://ollama:11434/api"
      # Uncomment below for WIP: Auth support
      # - "WEBUI_AUTH=TRUE"
      # - "WEBUI_DB_URL=mongodb://root:example@ollama-webui-db:27017/"
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped
  3. Build and start the stack, then access the web UI at localhost:3000:

    sudo docker compose up -d

    Once you have verified that the web UI works, you can stop the containers and comment out the build block in compose.yaml so that subsequent starts reuse the built image.
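
With the stack running, the web UI reaches Ollama through OLLAMA_API_BASE_URL (http://ollama:11434/api inside the Compose network). If you uncommented the ports block to expose 11434, you can hit the same API directly. A minimal, non-streaming sketch of a /api/generate call; the model name and host here are assumptions:

```python
import json
from urllib import request

# Build a POST request for Ollama's /api/generate endpoint.
# "stream": False asks for a single JSON response instead of chunks.
def build_generate_request(prompt, model="llama2",
                           host="http://localhost:11434"):
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")
    return request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("Why is the sky blue?")
print(req.full_url)  # http://localhost:11434/api/generate

# Against a running server:
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```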

Configuring Nginx

  1. Add user authentication via .htpasswd:

    sudo htpasswd -c /etc/nginx/.htpasswd username
  2. Open the Nginx configuration file:

    sudo nano /etc/nginx/conf.d/ollama.conf
  3. Edit ollama.conf:

server {
  listen 80;
  server_name your.domain.com;

  if ($host = your.domain.com) {
    return 301 https://$host$request_uri;
  }
  return 404;
}

server {
  listen 443 ssl http2;
  server_name your.domain.com;

  ssl_certificate       /your/path/fullchain.pem;
  ssl_certificate_key   /your/path/privkey.pem;
  ssl_session_timeout 1d;
  ssl_session_cache shared:MozSSL:10m;
  ssl_session_tickets off;

  client_max_body_size 10G;

  ssl_protocols         TLSv1.2 TLSv1.3;
  ssl_prefer_server_ciphers off;

  add_header Content-Security-Policy upgrade-insecure-requests;

  location / {
    # Set the path to your password file
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.htpasswd;

    # Forward requests to the Ollama-WebUI container
    proxy_pass http://localhost:3000;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_next_upstream off;
  }
}

    Replace your.domain.com and the certificate paths with your own domain and certificate locations.
  4. Bring up the container:

    sudo docker compose up -d
  5. Restart Nginx:

    sudo systemctl restart nginx
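
Behind the scenes, nginx's auth_basic challenge is answered with an Authorization header of the form "Basic " + base64("user:password"). As a sketch of how a script (rather than a browser) would authenticate through the proxy; the domain and credentials below are placeholders:

```python
import base64
from urllib import request

# Build an HTTPS request carrying HTTP Basic credentials,
# matching the users created with htpasswd above.
def basic_auth_request(url, user, password):
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return request.Request(url, headers={"Authorization": f"Basic {token}"})

req = basic_auth_request("https://your.domain.com/", "username", "secret")
print(req.get_header("Authorization"))  # Basic dXNlcm5hbWU6c2VjcmV0

# Against your live deployment:
# with request.urlopen(req) as resp:
#     print(resp.status)
```

Note that Basic auth only encodes, not encrypts, the credentials, which is why the config above redirects all traffic to HTTPS.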

Now, your Ollama-WebUI is deployed and secured with Nginx. Enjoy a feature-rich conversational AI experience on your own server!


Model: Dolphin-2.2-Mistral-7B

Copyright statement: Unless otherwise stated, all articles on this blog adopt the CC BY-NC-SA 4.0 license agreement. For non-commercial reprints and citations, please indicate the author: Henry, and original article URL. For commercial reprints, please contact the author for authorization.