feat(ai): introduce dedicated AI roles and wiring; clean up legacy AI stack

• Add svc-ai category under roles and load it in constructor stage

• Create new 'svc-ai-ollama' role (vars, tasks, compose, meta, README) and dedicated network

• Refactor former AI stack into separate app roles: web-app-flowise and web-app-openwebui

• Add web-app-minio role; adjust its config (no central DB), meta (fa-database icon, run_after), docker-compose networks include, and volume key

• Provide user-focused READMEs for Flowise, OpenWebUI, MinIO, Ollama

• Networks: add subnets for web-app-openwebui, web-app-flowise, web-app-minio; rename web-app-ai → svc-ai-ollama

• Ports: rename ai_* keys to web-app-openwebui / web-app-flowise; keep minio_api/minio_console

• Add group_vars/all/17_ai.yml (OLLAMA_BASE_LOCAL_URL, OLLAMA_LOCAL_ENABLED)

• Replace hardcoded include paths with path_join in multiple roles (svc-db-postgres, sys-service, sys-stk-front-proxy, sys-stk-full-stateful, sys-svc-webserver, web-svc-cdn, web-app-keycloak); see the path_join sketch below

• Remove obsolete web-app-ai templates/vars/env; split Flowise into its own role

• Minor config cleanups (CSP flags to {}, central_database=false)

https://chatgpt.com/share/68d15cb8-cf18-800f-b853-78962f751f81
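
As a hedged illustration of the path_join cleanup (the concrete paths are not part of this excerpt, so the one below is hypothetical):

```yaml
# Before: hardcoded, concatenated include path (hypothetical example)
- ansible.builtin.include_tasks: "{{ playbook_dir }}/roles/sys-service/tasks/main.yml"

# After: the same path composed with the path_join filter
- ansible.builtin.include_tasks: "{{ [ playbook_dir, 'roles', 'sys-service', 'tasks', 'main.yml' ] | path_join }}"
```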
2025-09-22 18:39:40 +02:00
parent aeab7e7358
commit 5d1210d651
44 changed files with 530 additions and 204 deletions


@@ -0,0 +1,24 @@
# Open WebUI
## Description
**Open WebUI** provides a clean, fast chat interface for working with local AI models (e.g., via Ollama). It delivers a ChatGPT-like experience on your own infrastructure to keep prompts and data private.
## Overview
End users access a web page, pick a model, and start chatting. Conversations remain on your servers. Admins can enable strict offline behavior so no external network calls occur. The UI can also point at OpenAI-compatible endpoints if needed.
## Features
* Familiar multi-chat interface with quick model switching
* Supports local backends (Ollama) and OpenAI-compatible APIs
* Optional **offline mode** for air-gapped environments
* File/paste input for summaries and extraction (model-dependent)
* Suitable for teams: predictable, private, reproducible
## Further Resources
* Open WebUI — [https://openwebui.com](https://openwebui.com)
* Ollama — [https://ollama.com](https://ollama.com)


@@ -0,0 +1,41 @@
features:
  matomo: true
  css: true
  desktop: true
  central_database: false
  logout: true
  javascript: false
server:
  domains:
    canonical:
      openwebui: "chat.ai.{{ PRIMARY_DOMAIN }}"
    aliases: []
  csp:
    flags: {}
    #script-src-elem:
    #  unsafe-inline: true
    #script-src:
    #  unsafe-inline: true
    #  unsafe-eval: true
    #style-src:
    #  unsafe-inline: true
    whitelist:
      font-src: []
      connect-src: []
docker:
  services:
    openwebui:
      backup:
        no_stop_required: true
      image: ghcr.io/open-webui/open-webui
      version: main
      name: open-webui
      offline_mode: false
      hf_hub_offline: false
    redis:
      enabled: false
    database:
      enabled: false
  volumes:
    openwebui: ai_openwebui_data
credentials: {}
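
The README notes the UI can also point at OpenAI-compatible endpoints; the `connect-src` whitelist above is where such an endpoint would be allowed. A hypothetical entry, not part of this commit:

```yaml
csp:
  whitelist:
    connect-src:
      - "https://api.openai.com"
```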


@@ -0,0 +1,28 @@
---
galaxy_info:
  author: "Kevin Veen-Birkenbach"
  description: "Installs Open WebUI — a clean, fast chat interface for local/private AI models (e.g., via Ollama)."
  license: "Infinito.Nexus NonCommercial License"
  license_url: "https://s.infinito.nexus/license"
  company: |
    Kevin Veen-Birkenbach
    Consulting & Coaching Solutions
    https://www.veen.world
  galaxy_tags:
    - ai
    - llm
    - chat
    - privacy
    - self-hosted
    - offline
    - openwebui
    - ollama
  repository: "https://s.infinito.nexus/code"
  issue_tracker_url: "https://s.infinito.nexus/issues"
  documentation: "https://s.infinito.nexus/code/"
  logo:
    class: "fa-solid fa-comments"
  run_after:
    - web-app-keycloak
    - web-app-matomo
dependencies: []


@@ -0,0 +1,13 @@
---
- name: "Install Ollama Dependency"
  include_role:
    name: svc-ai-ollama
  vars:
    flush_handlers: true
  when:
    - run_once_svc_ai_ollama is not defined
    - OLLAMA_LOCAL_ENABLED | bool

- name: "load docker, proxy for '{{ application_id }}'"
  include_role:
    name: sys-stk-full-stateless
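
The `run_once_svc_ai_ollama` guard above implies the Ollama role marks itself as executed; a minimal sketch of that counterpart, assuming the repo's usual run-once-fact convention:

```yaml
# Last task of svc-ai-ollama (assumed): set the fact the guard checks
- name: "Mark svc-ai-ollama as executed"
  ansible.builtin.set_fact:
    run_once_svc_ai_ollama: true
```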


@@ -0,0 +1,30 @@
{% include 'roles/docker-compose/templates/base.yml.j2' %}
  ollama:
{% include 'roles/docker-container/templates/base.yml.j2' %}
    image: {{ OLLAMA_IMAGE }}:{{ OLLAMA_VERSION }}
    container_name: {{ OLLAMA_CONTAINER }}
    expose:
      - "{{ OLLAMA_PORT }}"
    volumes:
      - ollama_models:/root/.ollama
{% include 'roles/docker-container/templates/networks.yml.j2' %}

  openwebui:
{% include 'roles/docker-container/templates/base.yml.j2' %}
    image: {{ OPENWEBUI_IMAGE }}:{{ OPENWEBUI_VERSION }}
    container_name: {{ OPENWEBUI_CONTAINER }}
    depends_on:
      - ollama
    ports:
      - "127.0.0.1:{{ OPENWEBUI_PORT_PUBLIC }}:8080"
    volumes:
      - openwebui_data:/app/backend/data
{% include 'roles/docker-container/templates/networks.yml.j2' %}

{% include 'roles/docker-compose/templates/networks.yml.j2' %}

{% include 'roles/docker-compose/templates/volumes.yml.j2' %}
  ollama_models:
    name: {{ OLLAMA_VOLUME }}
  openwebui_data:
    name: {{ OPENWEBUI_VOLUME }}
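
Note the exposure model: Open WebUI is published only on the host loopback (for the front proxy), while Ollama stays on the internal network. A minimal smoke test as an Ansible task, assuming the role's vars are in scope; `/health` is Open WebUI's built-in health endpoint:

```yaml
- name: "Check Open WebUI answers on the loopback port"
  ansible.builtin.uri:
    url: "http://127.0.0.1:{{ OPENWEBUI_PORT_PUBLIC }}/health"
    status_code: 200
```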


@@ -0,0 +1,5 @@
# Open WebUI
OLLAMA_BASE_URL={{ OLLAMA_BASE_LOCAL_URL }}
OFFLINE_MODE={{ OPENWEBUI_OFFLINE_MODE | ternary(1, 0) }}
HF_HUB_OFFLINE={{ OPENWEBUI_HF_HUB_OFFLINE | ternary(1, 0) }}
ENABLE_PERSISTENT_CONFIG=False
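
With both offline flags left at `false`, the template renders roughly as follows (the base URL is an assumed value; 11434 is Ollama's default port):

```
OLLAMA_BASE_URL=http://ollama:11434
OFFLINE_MODE=0
HF_HUB_OFFLINE=0
ENABLE_PERSISTENT_CONFIG=False
```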


@@ -0,0 +1,17 @@
# General
application_id: "web-app-openwebui"
# Docker
docker_pull_git_repository: false
docker_compose_file_creation_enabled: true
# Open WebUI
# https://openwebui.com/
OPENWEBUI_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.openwebui.version') }}"
OPENWEBUI_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.openwebui.image') }}"
OPENWEBUI_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.openwebui.name') }}"
OPENWEBUI_OFFLINE_MODE: "{{ applications | get_app_conf(application_id, 'docker.services.openwebui.offline_mode') }}"
OPENWEBUI_HF_HUB_OFFLINE: "{{ applications | get_app_conf(application_id, 'docker.services.openwebui.hf_hub_offline') }}"
OPENWEBUI_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.openwebui') }}"
OPENWEBUI_PORT_PUBLIC: "{{ ports.localhost.http[application_id] }}"
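
For orientation, the `get_app_conf` lookups resolve against the `config/main.yml` shown earlier:

```yaml
# Resolved values, given the config above:
# OPENWEBUI_VERSION   -> "main"
# OPENWEBUI_IMAGE     -> "ghcr.io/open-webui/open-webui"
# OPENWEBUI_CONTAINER -> "open-webui"
# OPENWEBUI_VOLUME    -> "ai_openwebui_data"
```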