Open WebUI is an extensible, feature-rich, and user-friendly self-hosted WebUI designed to operate entirely offline. It supports various LLM runners, including Ollama and OpenAI-compatible APIs.
Ollama WebUI (now known as Open WebUI) pairs your locally deployed LLMs with a ChatGPT-like web interface. Let's set up that interface for the models you run with Ollama. Just follow these five steps to get up and running.
Windows 10 64-bit: Home, Pro, Enterprise, or Education, version 21H2 (build 19044) or higher.
Windows 11 64-bit: Home, Pro, Enterprise, or Education, version 21H2 or higher.
WSL version 1.1.3.0 or later, with the WSL 2 feature turned on in Windows (a quick check is shown after this list).
An 8-core 64-bit processor, 16 GB+ RAM, and an NVIDIA graphics card with 4 GB+ VRAM.
The latest version of Docker Desktop.
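If you're not sure whether WSL 2 is enabled, the commands below offer a minimal check when run from an elevated PowerShell prompt; these are standard WSL flags, though the exact output varies by Windows build:

wsl --version                 # print the installed WSL version
wsl --install                 # install WSL with WSL 2 as the default on a fresh system
wsl --set-default-version 2   # make WSL 2 the default for new distributions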
Digging deeper into Ollama and Ollama WebUI on a Windows computer is an exciting journey into the world of artificial intelligence and machine learning. This detailed guide walks you through each step and provides examples to ensure a smooth launch.
Download Ollama from https://ollama.com/download/windows, then right-click the downloaded OllamaSetup.exe file and run the installer as administrator. Once the installation is complete, Ollama is ready to use on your Windows system, and an Ollama icon appears in the system tray at the bottom of the desktop.
To run Ollama and start using its AI models from the command line, you'd normally open a Windows terminal; a quick sanity check is shown below. Beyond that, we'll skip the terminal workflow and see how to install the WebUI for a better experience.
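As a minimal sketch, assuming the installer has put ollama on your PATH, you can verify the install and chat with a model directly in PowerShell (llama2:7b is just an example tag; any model from the Ollama library works):

ollama --version      # confirm Ollama is installed and on the PATH
ollama run llama2:7b  # pull the model on first use, then open an interactive chat (type /bye to exit)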
Run the docker command below to deploy the open-webui container on your local machine. If Ollama is on the same computer, use this command:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
To connect to Ollama on another server, change OLLAMA_BASE_URL to that server's URL. So if Ollama is on a different server, use this command:
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
Note: When using Docker to install Open WebUI, make sure to include -v open-webui:/app/backend/data in your Docker command. This step is crucial, as it mounts your database on a named volume and prevents data loss when the container is removed or updated.
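Once the command returns, you can confirm the container is healthy with standard Docker commands (the name open-webui below matches the --name flag used above):

docker ps --filter name=open-webui   # should list the container as Up, with port 3000 published
docker logs -f open-webui            # follow the startup logs; press Ctrl+C to stop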
After installation, you can access Open WebUI at http://localhost:3000. On your first visit you need to register by clicking "Sign up". Once registered, you will be taken to the Open WebUI home page.
Click the settings icon in the upper-right corner to open the settings window, as shown in the figure below. Enter a model tag (e.g. llama2:7b, gemma:2b), click the download button on the right, and wait for the model to finish downloading.
As shown below, we have downloaded the gemma:2b and llama2:7b models; select gemma:2b.
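If you prefer the terminal, the same models can also be pulled with the Ollama CLI; since Open WebUI talks to the same Ollama instance, they will show up in its model list as well (gemma:2b is simply the tag used in this walkthrough):

ollama pull gemma:2b   # download the model without starting a chat
ollama list            # list all locally available models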
Enter the prompt "What is the future of AI?" and press Enter to send the message.
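Under the hood, the WebUI forwards this chat to Ollama's REST API on port 11434. As a rough sketch, you can query the same endpoint directly to confirm the model answers outside the browser (shown in a Unix-style shell such as Git Bash or WSL, since PowerShell needs extra quote escaping):

curl http://localhost:11434/api/generate -d '{"model": "gemma:2b", "prompt": "What is the future of AI?", "stream": false}'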
In this tutorial, we covered the basics of getting started with Ollama WebUI on Windows. Ollama stands out for its ease of use, automatic hardware acceleration, and access to a comprehensive model library, and Ollama WebUI is what turns that combination into a valuable tool for anyone interested in artificial intelligence and machine learning.
Professional GPU VPS - A4000
Advanced GPU Dedicated Server - A4000
Advanced GPU Dedicated Server - A5000
Enterprise GPU Dedicated Server - RTX A6000
Enterprise GPU Dedicated Server - A40
Multi-GPU Dedicated Server - 3xRTX A5000
Multi-GPU Dedicated Server - 3xRTX A6000
If you can't find a suitable GPU plan, need a customized GPU server, or have ideas for cooperation, please leave me a message. We will get back to you within 36 hours.