How to Run Llama 3 with Ollama

Meta Llama 3 is the state-of-the-art openly available LLM, released in both 8B and 70B parameter sizes. Let's see how to run Llama 3 with Ollama.

What's Llama 3?

Meta Llama 3: The most capable openly available LLM to date

Llama 3 is a large language model developed by Meta AI, a research laboratory that focuses on natural language processing (NLP) and other AI-related areas.

What makes Llama 3 special is its ability to understand and respond to a wide range of topics and questions, often with a high degree of accuracy and coherence. It's been trained on a massive dataset of text from the internet and can adapt to different contexts and styles.

Llama 3 has many potential applications, such as chatbots, virtual assistants, language translation, and content generation. It's an exciting development in the field of AI.

Key features of Llama 3

Conversational dialogue: Llama 3 can engage in natural-sounding conversations, using context and understanding to respond to questions and statements.

Knowledge retrieval: It can draw on the broad knowledge learned during training to provide accurate information on a wide range of topics.

Common sense: Llama 3 has been designed to understand common sense and real-world concepts, making its responses more relatable and human-like.

Fine-tuned and optimized: Llama 3 instruction-tuned models are fine-tuned and optimized for dialogue/chat use cases and outperform many of the available open-source chat models on common benchmarks.

[Benchmark charts: Meta Llama 3 Instruct model performance; Meta Llama 3 pre-trained model performance]

The most capable model

Llama 3 represents a large improvement over Llama 2 and other openly available models:

Trained on a dataset seven times larger than Llama 2

Double the context length: 8K tokens, up from 4K in Llama 2

Encodes language much more efficiently using a larger token vocabulary with 128K tokens

Less than one-third of the false “refusals” compared to Llama 2

How to run Llama 3 with Ollama

Llama 3 is now available to run using Ollama. To get started, download Ollama and then run Llama 3.

CLI

Open the terminal and run ollama run llama3

The initial release of Llama 3 includes two sizes, 8B and 70B parameters:

# 8B Parameters
ollama run llama3:8b

# 70B Parameters
ollama run llama3:70b
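
If you prefer to download the weights ahead of time, ollama pull fetches a model without starting a chat session. This is optional; ollama run downloads the model automatically on first use.

# Download the 8B model without starting a session
ollama pull llama3:8b

# Start it later
ollama run llama3:8b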

API

Example using curl:

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?"
}'
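
By default, the generate endpoint streams the response back as a series of JSON objects. To receive a single JSON reply instead, the request can set stream to false:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'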

Model variants

Instruct is fine-tuned for chat/dialogue use cases. Example:

ollama run llama3
ollama run llama3:70b

Pre-trained is the base model. Example:

ollama run llama3:text
ollama run llama3:70b-text
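
Either variant can also be given a prompt directly on the command line for a one-off, non-interactive response, which is a quick way to compare them:

# The instruct model answers a question
ollama run llama3 "Why is the sky blue?"

# The base (text) model simply continues the given text
ollama run llama3:text "The sky is blue because"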
