How to Install and Use Ollama WebUI on Windows

Open WebUI (formerly Ollama WebUI) is a self-hosted web interface that gives your locally deployed Ollama LLMs a ChatGPT-like chat experience. Let's set it up on Windows.

What is Open WebUI (Formerly Ollama WebUI)?

Open WebUI is an extensible, feature-rich, and user-friendly self-hosted WebUI designed to operate entirely offline. It supports various LLM runners, including Ollama and OpenAI-compatible APIs.

Just follow the five steps below to get up and running.

System Requirements

Windows 10 64-bit: Home or Pro 21H2 (build 19044) or higher, or Enterprise or Education 21H2 (build 19044) or higher.

Windows 11 64-bit: Home or Pro version 21H2 or higher, or Enterprise or Education version 21H2 or higher.

WSL version 1.1.3.0 or later, with the WSL 2 feature enabled in Windows.

An 8-core 64-bit processor, 16 GB+ of RAM, and an Nvidia graphics card with 4 GB+ of VRAM.

Docker Desktop, the latest version
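Before installing Docker Desktop, you can confirm the WSL prerequisite from a terminal on Windows. A quick sketch (the `command -v` guard just makes it safe to paste into any shell):

```shell
# Check the installed WSL version (needs 1.1.3.0 or later for Docker Desktop).
if command -v wsl >/dev/null 2>&1; then
  wsl --version                  # prints the WSL, kernel, and Windows versions
  wsl --set-default-version 2    # make WSL 2 the default for new distros
  WSL_CHECK="ok"
else
  WSL_CHECK="wsl not found -- run this on Windows, or run 'wsl --install' first"
  echo "$WSL_CHECK"
fi
```

If `wsl --version` reports an older release, `wsl --update` brings it current.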

5 Steps to Install and Use Ollama WebUI

Setting up Ollama and Ollama WebUI on a Windows computer is a straightforward way to start experimenting with local AI models. This detailed guide walks you through each step and provides examples to ensure a smooth launch.

Step 1 - Install Ollama
[Screenshot: installing Ollama on Windows]

Download Ollama from https://ollama.com/download/windows, then right-click the downloaded OllamaSetup.exe file and run the installer as administrator. Once installation completes, Ollama is ready to use, and an Ollama icon appears in the system tray.

To run Ollama and start using its AI models, you would normally work from a Windows terminal. We'll skip that here and instead install a WebUI for a better experience.
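If you do want to verify the install from a terminal first, a minimal check looks like this (the guard simply reports if the CLI isn't on PATH):

```shell
# Confirm the Ollama CLI is installed and see which models are available.
if command -v ollama >/dev/null 2>&1; then
  ollama --version    # confirm the CLI is on PATH
  ollama list         # models already downloaded (empty on a fresh install)
  OLLAMA_CHECK="ok"
else
  OLLAMA_CHECK="ollama not found on PATH -- rerun the installer"
  echo "$OLLAMA_CHECK"
fi
```

From here, `ollama run llama2` would open an interactive chat in the terminal, but the WebUI below is far more pleasant.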

Step 2 - Install Ollama WebUI

Run the Docker command below to deploy the open-webui container on your local machine. If Ollama is running on the same computer, use this command:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

To connect to Ollama on another server, set OLLAMA_BASE_URL to that server's URL. So if Ollama is on a different server, use this command:

docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
[Screenshot: installing Ollama WebUI with Docker]

Note: When using Docker to install Open WebUI, make sure to include -v open-webui:/app/backend/data in your Docker command. This mounts the database volume and prevents data loss when the container is recreated.
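A quick way to confirm the container came up is to check its status and recent logs (a sketch, assuming the container name open-webui from the command above):

```shell
# Verify the Open WebUI container is running and look at its startup output.
if command -v docker >/dev/null 2>&1; then
  CONTAINER_STATE=$(docker ps --filter "name=open-webui" --format '{{.Status}}')
  echo "open-webui: ${CONTAINER_STATE:-not running}"
  docker logs --tail 20 open-webui 2>/dev/null    # recent startup logs
else
  CONTAINER_STATE="docker not found on PATH -- is Docker Desktop running?"
  echo "$CONTAINER_STATE"
fi
```

A status beginning with "Up" means the container is healthy; because of --restart always, Docker will also bring it back up after reboots.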

Step 3 - Sign in to Open WebUI

After installation, you can access Open WebUI at http://localhost:3000. On first use, create an account by clicking "Sign up".

[Screenshot: signing in to Open WebUI]

Once registered, you will be taken to the Open WebUI home page.

[Screenshot: Open WebUI home page]
Step 4 - Pull a model from Ollama.com

Click the settings icon in the upper-right corner to open the settings window, as shown in the figure below. Enter a model tag (e.g. llama2:7b or gemma:2b), click the download button to its right, and wait for the model to finish downloading.
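Equivalently, you can pull models with the Ollama CLI; anything downloaded this way also appears in Open WebUI's model list (a sketch, with the usual PATH guard):

```shell
# Pull models from ollama.com's library via the CLI instead of the settings UI.
if command -v ollama >/dev/null 2>&1; then
  ollama pull gemma:2b     # download the model weights
  ollama list              # verify gemma:2b now appears with its size
  PULL_CHECK="ok"
else
  PULL_CHECK="ollama not found on PATH"
  echo "$PULL_CHECK"
fi
```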

[Screenshot: managing Ollama models in Open WebUI]
Step 5 - Select a model and Enjoy your AI chat

As shown below, we have downloaded the gemma:2b and llama2:7b models; select gemma:2b.

[Screenshot: selecting the gemma:2b model]

Enter the prompt "What is the future of AI?" and press Enter to send the message.
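Under the hood, Open WebUI talks to Ollama's REST API. If you're curious, you can send the same prompt directly to the API on Ollama's default port, 11434 (a sketch; the trailing fallback just reports if no server is listening):

```shell
# Query Ollama's /api/generate endpoint directly with the same prompt.
PROMPT_JSON='{"model": "gemma:2b", "prompt": "What is the future of AI?", "stream": false}'
if command -v curl >/dev/null 2>&1; then
  curl -s http://localhost:11434/api/generate -d "$PROMPT_JSON" \
    || echo "no Ollama server responding at localhost:11434"
else
  echo "curl not found on PATH"
fi
```

The response is a JSON object whose "response" field holds the model's answer.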

[Screenshot: chatting with the gemma:2b model]

Conclusion

In this tutorial, we covered the basics of getting started with Ollama WebUI on Windows. Ollama stands out for its ease of use, automatic hardware acceleration, and comprehensive model library, and Open WebUI is what makes it a valuable tool for anyone interested in artificial intelligence and machine learning.