Ollama
Description
Ollama is a local model server that runs open LLMs on your hardware and exposes a simple HTTP API. It’s the backbone for privacy-first AI: prompts and data stay on your machines.
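For a feel of that API, here is a minimal sketch that asks a locally served model for a completion. It assumes Ollama is listening on its default address (localhost:11434) and that a model named "llama3" has already been pulled; both are assumptions you should adjust to your deployment.

```python
import json
import urllib.request

# Assumes the default Ollama endpoint and an already-pulled model named
# "llama3" -- change both to match your setup.
payload = json.dumps({
    "model": "llama3",
    "prompt": "Explain what a local model server is in one sentence.",
    "stream": False,  # request a single JSON object instead of a stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])  # the model's completion text
```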
Overview
After the first model pull, Ollama serves models to clients like Open WebUI (for chat) and Flowise (for workflows). Models are cached locally for quick reuse and can run fully offline when required.
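To check which models the local cache already holds (for example, before cutting the machine off from the network), you can query the tags endpoint; the default localhost:11434 address is again an assumption.

```python
import json
import urllib.request

# /api/tags lists the models cached by the local Ollama instance.
with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    tags = json.load(resp)

for model in tags.get("models", []):
    print(model["name"], model.get("size", "?"), "bytes")
```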
Features
- Run popular open models (chat, code, embeddings) locally (see the embeddings sketch after this list)
- Simple, predictable HTTP API for developers
- Local caching to avoid repeated downloads
- Works seamlessly with Open WebUI and Flowise
- Offline-capable for air-gapped deployments
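Embeddings work through the same HTTP API. The sketch below is a minimal example, assuming the default endpoint; the model name "nomic-embed-text" is illustrative and must already be pulled on your instance.

```python
import json
import urllib.request

# Assumes the default endpoint and an embedding-capable model; the model
# name "nomic-embed-text" is illustrative and must already be pulled.
payload = json.dumps({
    "model": "nomic-embed-text",
    "prompt": "Ollama keeps prompts and data on your own hardware.",
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/embeddings",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    embedding = json.load(resp)["embedding"]

print(len(embedding), "dimensions")  # vector length depends on the model
```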
Further Resources
- Ollama — https://ollama.com
- Ollama Model Library — https://ollama.com/library