
Getting Started with Ollama

Posted on: November 5, 2023 at 05:06 AM


Welcome to the world of Ollama, where the power of language models is brought directly to your local environment. Whether you’re a seasoned developer, an AI enthusiast, or a curious beginner, this post will guide you through the basics of setting up and running your first language model with Ollama.

Why Choose Ollama?

Ollama stands out in the AI landscape for its commitment to open-source principles, privacy, and local control. It’s designed for those who prefer to keep their data close and their costs lower than what cloud platforms demand. With Ollama, you’re not just using a tool; you’re joining a movement that values freedom, collaboration, and accessibility in AI.


Before diving into Ollama, ensure you have the following:

- A supported platform: macOS, Linux, or Windows via WSL2
- A terminal and basic command-line familiarity
- A few gigabytes of free disk space for model files

macOS

Ollama is pretty awesome and has been included in the Homebrew package manager for macOS, which is my preferred way of installing things on my Mac.



To install with Homebrew simply run:

brew install ollama

Install into Applications from Zip

# Download the macOS package
curl -O https://ollama.com/download/Ollama-darwin.zip
# Unzip and install into /Applications
unzip Ollama-darwin.zip -d /Applications


Windows

Coming soon! Stay tuned for updates.

Linux & WSL2

# Run the installation script
curl -fsSL https://ollama.com/install.sh | sh

For detailed instructions, visit the manual install guide.

Running Your First Model

With Ollama installed, let’s run our first model. Open your terminal and type:

ollama run llama2

This simple command fires up the Llama 2 model, and you’re ready to interact with it directly from your command line.
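Beyond the interactive prompt, a running Ollama instance also exposes a local REST API on port 11434. Here is a minimal sketch of a generate request; the prompt text and the temporary file path are just examples:

```shell
# Build a JSON request for Ollama's local REST API (default port 11434).
cat > /tmp/ollama_request.json <<'EOF'
{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}
EOF

# Send it to the running server (uncomment once Ollama is up):
# curl http://localhost:11434/api/generate -d @/tmp/ollama_request.json
```

With "stream" set to false, the server returns a single JSON object containing the full response instead of streaming tokens one at a time.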

Running as a Service

You can also run Ollama as a background service with Homebrew, which keeps models available from startup for local applications like Obsidian and other integrations. Start it with:

brew services start ollama
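To confirm the service actually came up, you can probe Ollama's default port (11434) with curl. A quick sketch, assuming curl is available on your system:

```shell
# Check whether the Ollama service is listening on its default port (11434).
# --max-time makes the probe fail fast when nothing is running.
if curl -s --max-time 2 http://localhost:11434 >/dev/null 2>&1; then
  echo "Ollama is up"
else
  echo "Ollama is not running; start it with: brew services start ollama"
fi
```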

Exploring the Model Library

Ollama boasts a rich library of open-source models. You can browse the collection in the Ollama model library and choose the one that fits your needs. Here’s how to run a different model:

# Replace 'model_name' with the actual name of the model you wish to run
ollama run model_name

Customizing Your Experience

Ollama allows you to import models and even customize prompts to tailor the AI’s responses. Here’s a quick example of customizing a prompt:

# Pull the base model
ollama pull llama2
# Create a Modelfile with your custom prompt
cat > Modelfile <<'EOF'
FROM llama2
PARAMETER temperature 1
SYSTEM "Your custom prompt here"
EOF
# Create and run your custom model
ollama create my_custom_model -f Modelfile
ollama run my_custom_model
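Modelfiles support more than a system prompt. A slightly fuller sketch, where the parameter values are illustrative rather than recommendations:

```
# A slightly fuller Modelfile (values are illustrative)
FROM llama2
# Lower temperature = more focused, deterministic output
PARAMETER temperature 0.7
# Context window size in tokens
PARAMETER num_ctx 4096
SYSTEM "You are a concise assistant that answers in bullet points."
```

After editing a Modelfile, rerun ollama create with the same model name to rebuild it; ollama run then picks up the new version.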

Next Steps

Congratulations! You’ve just taken your first steps into the local AI world with Ollama. From here, you can explore more complex configurations, integrate Ollama with other applications, or even contribute to its development.

Remember, Ollama is more than software; it’s a community. Join the Ollama Discord to connect with other users, share your experiences, and get support.

Happy modeling!