Ollama

Description

Ollama is a local model server that runs open LLMs on your own hardware and exposes a simple HTTP API. It's the backbone for privacy-first AI: prompts and data stay on your machines.
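
As a quick illustration of that HTTP API, here is a minimal Python sketch that sends a prompt to a local Ollama instance. It assumes the default endpoint http://localhost:11434 and that a model named llama3 has already been pulled; both are example values, not requirements.

    # Minimal sketch: query a locally running Ollama over its HTTP API.
    # Assumes the default endpoint http://localhost:11434 and that the
    # "llama3" model has already been pulled (model name is an example).
    import json
    import urllib.request

    def generate(prompt: str, model: str = "llama3") -> str:
        payload = json.dumps({
            "model": model,
            "prompt": prompt,
            "stream": False,  # return one JSON object instead of a stream
        }).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    if __name__ == "__main__":
        print(generate("Why does data locality matter for privacy?"))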

Overview

After the first model pull, Ollama serves models to clients like Open WebUI (for chat) and Flowise (for workflows). Models are cached locally for quick reuse and can run fully offline when required.
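
That first pull can itself go through the API. The sketch below uses Ollama's /api/pull route and again assumes the default endpoint; passing "stream": False requests a single final status object rather than the default stream of progress lines.

    # Minimal sketch of a model pull over the HTTP API, assuming the
    # default endpoint http://localhost:11434 ("llama3" is an example).
    import json
    import urllib.request

    def pull(model: str = "llama3") -> dict:
        payload = json.dumps({"model": model, "stream": False}).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/pull",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())  # e.g. {"status": "success"}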

Features

  • Run popular open models (chat, code, embeddings) locally (embeddings sketch after this list)
  • Simple, predictable HTTP API for developers
  • Local caching to avoid repeated downloads
  • Works seamlessly with Open WebUI and Flowise
  • Offline-capable for air-gapped deployments
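
For the embeddings feature above, a client can call Ollama's embeddings endpoint. This sketch assumes the default endpoint and uses nomic-embed-text as an example embedding model; substitute any embedding-capable model you have pulled.

    # Minimal sketch: request an embedding vector from Ollama, assuming
    # the default endpoint and a pulled embedding model (nomic-embed-text
    # here is an example).
    import json
    import urllib.request

    def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
        payload = json.dumps({"model": model, "prompt": text}).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/embeddings",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["embedding"]

    vector = embed("privacy-first AI keeps data on your own machines")
    print(len(vector))  # dimensionality depends on the model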

Further Resources