Posted on: November 24, 2024 at 02:30 AM

Setting up Ollama

1. Introduction

Ollama is a tool that makes it easy to run Large Language Models (LLMs) on your own machine. It supports a wide range of open-source models and exposes a REST API for integration with other applications.

Key Features

  • Run LLMs in local environment
  • Support for various models (Llama 2, Mistral, CodeLlama, etc.)
  • Simple installation and usage
  • REST API support
  • Cross-platform support (macOS, Linux, Windows)
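
For a quick first taste after installation (covered next), a model can be run interactively or with a one-off prompt straight from the terminal:

ollama run mistral "Why is the sky blue?"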

2. Installation and Setup

macOS Installation

brew install ollama
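
After installation, start the Ollama server before pulling or chatting with models; it listens on http://localhost:11434 by default:

ollama serve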

Installing Models

You can download the models you want with the following commands:

ollama pull llama2
ollama pull mistral
ollama pull codellama
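
Models are also published under tags for specific parameter sizes. For example, pulling the 13B variant of Llama 2 (tag names follow the Ollama model library):

ollama pull llama2:13b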

Checking Installed Models

ollama list
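
Models you no longer need can be removed to free disk space:

ollama rm codellama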

3. Using with Python

Installing Python Package

pip install ollama

Basic Usage Examples

import ollama

# Simple chat example
response = ollama.chat(model='mistral', messages=[
    {
        'role': 'user',
        'content': 'Why is the sky blue?',
    },
])
print(response['message']['content'])

# Getting streaming response
for chunk in ollama.chat(
    model='llama2',
    messages=[{'role': 'user', 'content': 'Write a short poem about programming'}],
    stream=True,
):
    print(chunk['message']['content'], end='', flush=True)
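
The package also provides a Client class for talking to a server on a non-default address. A minimal sketch, assuming a server reachable at the hypothetical host below:

from ollama import Client

# Hypothetical server address; adjust to wherever Ollama is running
client = Client(host='http://192.168.1.50:11434')

response = client.chat(model='mistral', messages=[
    {'role': 'user', 'content': 'Why is the sky blue?'},
])
print(response['message']['content'])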

4. Using with JavaScript

Installing Node.js Package

npm install ollama

Basic Usage Examples

import ollama from 'ollama';

// Simple chat example
const response = await ollama.chat({
  model: 'mistral',
  messages: [{
    role: 'user',
    content: 'Why is the sky blue?'
  }]
});
console.log(response.message.content);

// Getting streaming response
const stream = await ollama.chat({
  model: 'llama2',
  messages: [{
    role: 'user',
    content: 'Write a short poem about programming'
  }],
  stream: true
});

for await (const chunk of stream) {
  process.stdout.write(chunk.message.content);
}

5. Using REST API

Ollama is also accessible through its REST API:

curl http://localhost:11434/api/chat -d '{
  "model": "llama2",
  "messages": [
    {
      "role": "user",
      "content": "Why is the sky blue?"
    }
  ]
}'
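
By default, /api/chat streams the response back as a series of JSON objects. To receive a single complete response instead, disable streaming:

curl http://localhost:11434/api/chat -d '{
  "model": "llama2",
  "messages": [
    {
      "role": "user",
      "content": "Why is the sky blue?"
    }
  ],
  "stream": false
}'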

6. Model Customization

You can create custom models using a Modelfile:

FROM llama2
# Sampling temperature: higher values are more creative, lower more deterministic
PARAMETER temperature 0.7
# Nucleus sampling: only consider the most likely tokens with cumulative probability 0.9
PARAMETER top_p 0.9
SYSTEM You are a helpful AI assistant that specializes in programming.

Creating a custom model:

ollama create custom-assistant -f Modelfile
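
The custom model can then be run like any built-in one:

ollama run custom-assistant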

7. Conclusion

Ollama is an excellent tool for running and managing LLMs in a local environment. It can be customized for various use cases and integrated into different applications through its REST API.
