Ollama 0.4 is released with support for running Meta's Llama 3.2 Vision models locally

Ollama now supports tool calling with popular models run locally
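Tool calling works through Ollama's `/api/chat` endpoint: the request carries a `tools` array of OpenAI-style JSON function definitions, and the model may answer with `tool_calls` instead of plain text. A minimal sketch of the request payload follows; the `get_weather` function and its parameters are hypothetical, used only to illustrate the shape:

```python
import json

# Hypothetical tool definition in the OpenAI-style schema that
# Ollama's /api/chat endpoint accepts in its "tools" array.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative example function
        "description": "Return the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# Request body for POST http://localhost:11434/api/chat
request_body = {
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [get_weather_tool],
    "stream": False,
}

print(json.dumps(request_body, indent=2))
```

If the model decides to call the tool, the response message contains a `tool_calls` entry naming the function and its arguments; the client runs the function and feeds the result back as a `tool`-role message.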

Google announces Firebase Genkit with Ollama support

Llama 3 feels significantly less censored than its predecessor

Run Llama 3 locally with a 1M token context window
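The context window is set per request through the `num_ctx` option (or `/set parameter num_ctx` in the interactive CLI). A sketch of a generate-request payload, assuming a long-context Llama 3 variant such as `llama3-gradient` has already been pulled; the exact model tag may differ:

```python
import json

# Request body for POST http://localhost:11434/api/generate.
# "llama3-gradient" is a long-context Llama 3 variant on the Ollama
# registry; substitute whatever long-context model you have pulled.
request_body = {
    "model": "llama3-gradient",
    "prompt": "Summarize the following document: ...",
    "options": {
        # num_ctx sets the context window in tokens; values this large
        # require correspondingly large amounts of RAM/VRAM.
        "num_ctx": 1048576,
    },
    "stream": False,
}

print(json.dumps(request_body, indent=2))
```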

Embedding models
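Embedding models are served through Ollama's `/api/embeddings` endpoint, which takes a model name and a prompt and returns a vector under the `"embedding"` key. The sketch below shows the payload shape plus a plain cosine-similarity helper for comparing returned vectors; the toy vectors stand in for real embedding responses:

```python
import json
import math

# Payload for POST http://localhost:11434/api/embeddings; the response
# has the form {"embedding": [...]}. "nomic-embed-text" is one of the
# embedding models available in the Ollama registry.
request_body = {
    "model": "nomic-embed-text",
    "prompt": "The sky is blue because of Rayleigh scattering",
}

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for two embedding responses.
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.3]
print(cosine_similarity(v1, v2))  # ≈ 1.0 for identical vectors
```

Comparing embeddings this way is the core of retrieval: embed the documents once, embed the query at search time, and rank by similarity.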

Ollama now supports AMD graphics cards

Run Llama 2 uncensored locally

Ollama is now available on Windows in preview