194 Commits

6fcf6a1ab6 feat(keycloak): add automation service account client support
Introduce a confidential service-account client (Option A) to replace user-based
kcadm sessions. The client is created automatically, granted realm-admin role,
and used for all subsequent Keycloak updates. Includes improved error handling
for HTTP 401 responses.

Discussion: https://chatgpt.com/share/68e01da3-39fc-800f-81be-2d0c8efd81a1
2025-10-03 21:02:16 +02:00
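For context, a minimal sketch of the client-credentials flow such a service-account client uses. The token endpoint path is standard Keycloak; the base URL, client name, secret handling, and 401 handling shown here are illustrative assumptions:

```python
import requests

KEYCLOAK_URL = "https://keycloak.example.org"  # assumption: your Keycloak base URL
REALM = "master"                               # assumption: realm the client lives in
CLIENT_ID = "automation-client"                # assumption: the generated client's name
CLIENT_SECRET = "change-me"                    # in practice injected from a vault

def get_admin_token() -> str:
    """OAuth2 client-credentials grant against Keycloak's token endpoint."""
    resp = requests.post(
        f"{KEYCLOAK_URL}/realms/{REALM}/protocol/openid-connect/token",
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        },
        timeout=10,
    )
    if resp.status_code == 401:
        raise RuntimeError("service-account credentials rejected (HTTP 401)")
    resp.raise_for_status()
    return resp.json()["access_token"]
```

Unlike a user-based kcadm session, the token can be refreshed non-interactively on every run.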
4d9890406e fix(sys-ctl-hlth-csp): ensure '--' separator is added when passing ignore list to checkcsp
Updated README to reflect correct usage with '--', adjusted script.py to always append separator, and simplified task template handling for consistency.

Ref: https://chatgpt.com/share/68dfc69b-7c94-800f-871b-3525deb8e374
2025-10-03 20:50:49 +02:00
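A sketch of why the '--' separator matters when handing a domain list to a CLI like checkcsp: '--' stops option parsing, so list entries can never be mistaken for flags. The exact checkcsp argument shape is an assumption here:

```python
import subprocess

def run_checkcsp(base_args: list[str], ignore_domains: list[str]) -> int:
    """Append '--' before the ignore list so its entries stay positional
    (exact checkcsp CLI shape is an assumption for illustration)."""
    cmd = ["checkcsp", *base_args]
    if ignore_domains:
        cmd += ["--", *ignore_domains]
    return subprocess.run(cmd, check=False).returncode
```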
59b652958f feat(sys-ctl-hlth-csp): add support for ignoring network block domains
Introduced new variable HEALTH_CSP_IGNORE_NETWORK_BLOCKS_FROM (list, default [])
to suppress network block reports (e.g., ORB) from specific external domains.
Updated script.py to accept and forward the flag, extended systemd exec command
in tasks, added defaults, and documented usage in README.

Ref: https://chatgpt.com/share/68dfc69b-7c94-800f-871b-3525deb8e374
2025-10-03 15:23:57 +02:00
a327adf8db Removed failing healthcheck 2025-10-02 20:02:10 +02:00
7a38cb90fb Added correct resources for baserow 2025-10-02 19:59:04 +02:00
9d6cf03f5b Fix: Replace unsupported /dev/tcp healthcheck with onboard PHP socket check for websocket service
Replaced the previous shell-based /dev/tcp healthcheck with a PHP fsockopen() test to ensure compatibility with minimal base images. This avoids dependency on missing tools like nc or curl and provides a reliable onboard check.

Conversation: https://chatgpt.com/share/68deb8ec-d920-800f-bd35-2869544fe30f
2025-10-02 19:40:13 +02:00
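The actual healthcheck is PHP's fsockopen(); the same reachability probe is sketched here in Python for illustration (service name and port are assumptions):

```python
import socket
import sys

def check_tcp(host: str, port: int, timeout: float = 3.0) -> bool:
    """Succeed iff a TCP connection opens, mirroring the fsockopen() probe."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 'websocket' and 8080 are illustrative placeholders.
    sys.exit(0 if check_tcp("websocket", 8080) else 1)
```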
9439ac7f76 fix(web-app-xwiki): raise XWiki container resources and align YAML formatting
- Set cpus=1.0, mem_reservation=1g, mem_limit=2g, pids_limit=1024
- Keep LTS image/tag templating and Postgres type
- Normalize spacing/alignment for readability

Reason: Tomcat/XWiki needs >1 GB; the previous low limits caused slow boots and 502s while the upstream was not yet ready.
Conversation: https://chatgpt.com/share/68de5266-c8a0-800f-bfbc-de85262de53e
2025-10-02 12:22:49 +02:00
23353ac878 infra(sys-service): centralize async control + pre-deploy backup safeguard
- Added MODE_BACKUP to trigger backup before the rest of the deployment

- sys-ctl-bkp-docker-2-loc: force linear sync and force flush when MODE_BACKUP is true

- Unified name resolution via system_service_name across handlers and tasks

- Introduced system_service_force_linear_sync and system_service_force_flush (rename from system_force_flush)

- Drive async/poll via system_service_async/system_service_poll using omit when disabled

- Propagated per-role overrides (cleanup, repair, cert tasks) for clarity and safety

- Minor formatting and consistency cleanups

Why: Ensure the backup runs before the deployment routine to safeguard data integrity.

Refs: Conversation https://chatgpt.com/share/68de4c41-b6e4-800f-85cd-ce6949097b5e
Signed-off-by: Kevin Veen-Birkenbach <kevin@veen.world>
2025-10-02 11:58:23 +02:00
8beda2d45d fix(svc-db-postgres): pin Postgres version to 17-3.5, add entity_name var, and dynamically resolve major version for dev package
- Changed default Docker image version from 'latest' to '17-3.5' in config
- Introduced entity_name var for consistent lookups
- Added POSTGRES_VERSION and POSTGRES_VERSION_MAJOR extraction
- Updated Dockerfile to install postgresql-server-dev-<major> with default fallback to 'all'
- Minor YAML formatting improvements

Ref: https://chatgpt.com/share/68de40b4-2eb8-800f-ab5b-11cc873c3604
2025-10-02 11:07:17 +02:00
5773409bd7 Changed nextcloud domain to next.cloud.primary_domain 2025-10-02 09:19:32 +02:00
b3ea962338 Implemented sleeping time for server 2025-10-02 09:08:32 +02:00
b9fbf92461 proxy(cors): make ACAO opt-in; remove hardcoded default
Stop forcing Access-Control-Allow-Origin to $scheme://$host. This default broke Element (element.infinito.nexus) -> Synapse (matrix.infinito.nexus) CORS and blocked login. Now ACAO is only set when 'aca_origin' is provided; otherwise we defer to the upstream app (e.g., Synapse) to emit correct CORS headers. Also convert top comments to Jinja block comment.

Discussion & debugging details: https://chatgpt.com/share/68de2236-4aec-800f-adc5-d025922c8753
2025-10-02 08:57:14 +02:00
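The opt-in behavior reduces to a small conditional; a sketch of the decision logic, with the variable name aca_origin taken from the commit message:

```python
def cors_headers(aca_origin: str | None) -> dict[str, str]:
    """Emit Access-Control-Allow-Origin only when explicitly configured;
    without 'aca_origin', the upstream app's own CORS headers pass through."""
    return {"Access-Control-Allow-Origin": aca_origin} if aca_origin else {}
```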
6824e444b0 Changed bitnami images to legacy. See https://github.com/bitnami/containers/issues/83267. 2025-10-02 07:31:20 +02:00
5cdcc18a99 Fix PeerTube OIDC plugin automation
- Store oidc_settings as proper YAML dict with correct keys
- Ensure plugin is installed only if missing
- Update DB settings as jsonb and enforce enabled/uninstalled state
- Add CLI enforcement for plugin activation
- Correct task conditions (enable/disable logic) with boolean filters

Ref: https://chatgpt.com/share/68dd1d16-9b34-800f-b2bf-a3fe058f25b1
2025-10-01 14:23:07 +02:00
e7702948b8 EspoCRM role: custom image + single data volume + runtime flag setter
• Build a custom image and replace upstream entrypoint with docker-entrypoint-custom.sh (strict fail on flag script).

• Introduce set_flags.php and wire via ESPOCRM_SET_FLAGS_SCRIPT; apply flags at container start; clear cache afterwards.

• Keep exactly one Docker volume (data:/var/www/html/); drop separate custom/extensions mounts.

• Compose: use custom image, add healthchecks & depends_on for daemon/websocket; keep service healthy gating.

• Ansible: deploy scripts, build & up via handlers; patch siteUrl as www-data; run upgrade non-fatal; always run flag setter.

• Vars/Env: add ESPO_INIT_* toggles and ESPOCRM_SET_FLAGS_SCRIPT; refactor variables for scripts & custom image paths.

Conversation context: https://chatgpt.com/share/68dd1992-020c-800f-bcf5-2db60cb4aab2
2025-10-01 14:08:09 +02:00
09a4c243d7 Add centralized include for Access-Control-Allow headers across proxy/service Nginx templates and align ACA vars for simpleicons task.
Ref: https://chatgpt.com/share/68dbf59c-f424-800f-aa27-787db52e260f
2025-09-30 17:22:28 +02:00
1d5a50abf2 Optimized path building 2025-09-30 16:36:28 +02:00
0d99c7f297 Nextcloud: refactor Talk → HPB, switch to bridge mode, and template cleanups
- Change Talk (HPB) network_mode from host → bridge and drop TURN relay range mapping
- Remove obsolete nginx restart handler; rely on 'docker compose up' notify
- Fix spreed task condition to use HPB standalone flag
- docker-compose.yml.j2: parameterize service names, use NEXTCLOUD_*_SERVICE vars, align host-gateway condition with HPB, tidy ports/expose/network blocks
- env.j2/nginx configs: rename TALK_* → HPB_* variables and locations; use templated NEXTCLOUD_SERVICE for php upstream
- vars: introduce entity_name; centralize *SERVICE keys; rename all Talk vars to HPB; adjust whiteboard keys; compute URLs/JSON configs accordingly
- spreed plugin vars: point to HPB signaling/STUN/TURN and internal secret

Ref: https://chatgpt.com/share/68db9f41-16ec-800f-9cdf-7530862f89aa
2025-09-30 12:52:15 +02:00
0a17e54d8c Nextcloud: set conservative Docker resource limits and template cleanups
- Add CPU/memory/PID limits for redis, database, proxy, cron, talk, whiteboard
- Keep nextcloud service unchanged except existing settings
- Normalize service_name templating and indentation in docker-compose.yml.j2
- Mount Janus config for Talk via volume

Ref: https://chatgpt.com/share/68db9f41-16ec-800f-9cdf-7530862f89aa
2025-09-30 11:54:14 +02:00
bf94338845 Nextcloud/Nginx: wire Talk signaling WS location via reusable snippet
Conditionally include the generic WebSocket proxy block for NEXTCLOUD_TALK_SIGNALING_ENABLED. Set location_ws to '^~ <location>' and ws_port to NEXTCLOUD_PORT, then include roles/sys-svc-proxy/templates/location/ws.conf.j2. This enables proper Upgrade/Connection headers and disables buffering for the signaling path.

Context: https://chatgpt.com/share/68db9f41-16ec-800f-9cdf-7530862f89aa
2025-09-30 11:17:54 +02:00
5d42b78b3d Nextcloud: extend CSP for Talk & disable keeporsweep
CSP: add cloud.<PRIMARY_DOMAIN> to connect-src and frame-src (both HTTP and WS) and allow worker-src 'blob:' for web workers used by Talk/Collabora.

Apps: disable keeporsweep (installation no longer possible) and document reason.

Context: https://chatgpt.com/share/68db9f41-16ec-800f-9cdf-7530862f89aa
2025-09-30 11:15:32 +02:00
26a1992d84 Nextcloud/Talk: add Janus config & fix WebSocket proxying
Nginx: define 'map $http_upgrade $connection_upgrade' once in http{} and reuse; drop duplicate map from ws_generic vhost; tidy ws location headers/spacing. Nextcloud: add WS location for standalone signaling; render & mount Janus config (NAT 1:1, ICE enforce/ignore lists, libnice hardening); extend CSP (connect-src/frame-src for cloud & collabora, worker-src blob:); disable keeporsweep app; replace nginx reload handler with compose up; add NEXTCLOUD_HOST_JANUS_CONF_PATH and related vars.

Context: https://chatgpt.com/share/68db9f41-16ec-800f-9cdf-7530862f89aa
2025-09-30 11:14:15 +02:00
2439beb95a Added correct minio http statuscodes 2025-09-29 17:29:29 +02:00
251f7b227d Add healthchecks for all Taiga services, fix RabbitMQ env var names, and define TAIGA_HOSTNAME
Details:
- Implemented healthchecks for taiga, async, rabbitmq, front, events, protected, and gateway
- Corrected RabbitMQ env variables (RABBITMQ_DEFAULT_USER/PASS/VHOST/ERLANG_COOKIE)
- Added TAIGA_HOSTNAME for backend service

See: https://chatgpt.com/share/68da9d6b-b164-800f-bcb7-410b40219a1e
2025-09-29 17:09:42 +02:00
3fbb9c38a8 Solved coturn volume bug 2025-09-29 15:33:50 +02:00
29e8b3a590 Deactivated recording for Big Blue Button 2025-09-29 15:23:48 +02:00
27b89d8fb6 Taiga: refactor service naming & resource limits
Add CPU/memory/pids limits for taiga, async, front, gateway, events, async-rabbitmq, events-rabbitmq, manager, and protected. Align manager service usage (was taiga-manage) in admin tasks and inits compose. Switch to variable-driven service names (TAIGA_* vars), add container_name patterns, normalize volume mappings via TAIGA_VOLUME_STATIC/MEDIA, fix depends_on to use TAIGA_* vars, and set RabbitMQ hostnames from vars. Remove obsolete Development.md.

Conversation reference: https://chatgpt.com/share/68da83b7-0cb4-800f-9702-d8a2d4ebea71
2025-09-29 15:04:12 +02:00
55f2d15e93 Fix coturn container/volume separation: use COTURN_CONTAINER for container_name and map COTURN_VOLUME to /var/lib/coturn
Details:
- Removed anonymous volume
- Renamed container_name variable to COTURN_CONTAINER
- Added dedicated COTURN_VOLUME with _data suffix for persistence
- Mount COTURN_VOLUME into /var/lib/coturn

Reference: https://chatgpt.com/share/68da6f12-b238-800f-932b-e37c8a50dddd
2025-09-29 13:36:30 +02:00
aa19a97ed6 CORS/CSP hardening & centralization
- Add reusable Nginx include: roles/sys-svc-proxy/templates/headers/access_control_allow.conf.j2
  (dynamic ACAO/credentials/methods/headers via role vars)
- Set global 'Vary: Origin' in nginx.conf.j2 to prevent cache poisoning
- CSP: allow Simple Icons via connect-src when feature is enabled
- Front proxy: rename vars to lowercase + flush handlers after config deploy
- Desktop: gate & load Simple Icons role; inject brand logos when enabled
- Bluesky + Logout: replace inline CORS with centralized include
- Simpleicons: public CORS (ACAO='*', no credentials), keep GET/OPTIONS, allow headers
- Taiga: adjust canonical domain to taiga.kanban.{{ PRIMARY_DOMAIN }}
- LibreTranslate: remove unused images/versions keys

Fixes: https://open.project.infinito.nexus/projects/cymais/work_packages/342/activity
Discussion: https://chatgpt.com/share/68da5e27-ffd4-800f-91a3-0ef103058d44
2025-09-29 12:23:58 +02:00
c06d1c4d17 Refactor yay update handling:
- Move AUR update task into dev-yay role
- Centralize defaults (AUR_HELPER, AUR_BUILDER_USER, etc.)
- Remove separate update-yay role (redundant)

See conversation with ChatGPT https://chatgpt.com/share/68da3219-6d78-800f-92ad-0a5061bac8be and related work item:
https://open.project.infinito.nexus/projects/cymais/work_packages/341/activity
2025-09-29 09:16:02 +02:00
66f294537d Replaced fixed 'web' service call for exec with 'ESPOCRM_SERVICE' variable for exec call 2025-09-28 15:52:44 +02:00
a9097a3ec3 web-app-espocrm: add resource limits, init/stop settings and cleanups
- Added CPU, memory and PID limits for espocrm, daemon and websocket services
- Enabled init process and graceful stop (SIGTERM, 30s) in docker-compose
- Adjusted env template (removed forced True/default flags)
- Introduced entity_name/ESPOCRM_SERVICE in vars for service naming
- Minor cleanup of get_app_conf defaults

Ref: https://chatgpt.com/share/68d937ce-9c34-800f-9136-54baed9c91c7
2025-09-28 15:50:28 +02:00
fc59c64273 Nextcloud Talk: fix virtual-background web check by
- adding explicit MIME types for .wasm and .tflite in internal Nginx
- relaxing CSP (script-src: allow 'unsafe-eval') for WebAssembly
- removing obsolete turnserver draft.
Details: https://chatgpt.com/share/68d7dd39-50b8-800f-ab59-cfb1d3cf07cb
2025-09-27 14:49:42 +02:00
dbbb3510f3 Refactor TURN/STUN config:
- Removed ?transport=udp from Nextcloud Talk TURN server definitions
- Dropped --no-tcp-relay to allow TCP fallback
- Removed invalid UDP mapping on TLS port
- Introduced switch between REST secret auth and lt-cred-mech via COTURN_USER_AUTH_ENABLED
- Added user_auth_enabled flag in coturn config for flexibility

See: https://chatgpt.com/share/68d7d601-3558-800f-bc84-00d7e8fc3243
2025-09-27 14:18:29 +02:00
eb3bf543a4 Removed turn and stun protocol prefix 2025-09-27 13:58:46 +02:00
4f5602c791 Nextcloud Talk: fix TURN/STUN config
- Removed duplicate Admin Manual link in README
- Fixed turnserver.config.php draft return syntax
- Unified onboard port handling in docker-compose and env
- Updated vars to define STUN/TURN configs with correct schemas
- Ensured spreed plugin config serializes clean JSON arrays

Ref: https://chatgpt.com/share/68d7cfa2-7378-800f-9ecf-09b6bb768f13
2025-09-27 13:51:17 +02:00
75d476267e Optimized Nextcloud variables 2025-09-27 12:14:57 +02:00
c3e5db7f2e Cleaned up LDAP entries to keep it more clean 2025-09-27 11:30:39 +02:00
dfd2d243b7 Enabled recordings for BBB because https://github.com/bigbluebutton/bigbluebutton/issues/9202 was solved 2025-09-27 11:28:07 +02:00
78ad2ea4b6 nextcloud(spreed): output valid JSON via to_json for signaling/stun/turn; keep internal_secret plain https://chatgpt.com/share/68d75f71-6de8-800f-854c-207771c8d883 2025-09-27 05:52:32 +02:00
c362e160fc Nextcloud: switch Talk to host networking; update proxy routing and compose; centralize Talk secrets & spreed config; remove Greenlight block
Conversation: https://chatgpt.com/share/68d74e25-c068-800f-ae20-d0e34ac8ee12
2025-09-27 05:03:48 +02:00
a044028e03 Nextcloud Talk integration cleanup: unify secrets and signaling config
- Replace inline get_app_conf secrets in env.j2 with dedicated vars (TURN, signaling, internal)
- Correctly model signaling_servers as object {servers, secret} in spreed.yml
- Use UDP stun_turn port instead of TLS for transport=udp
- Add fallback logic for standalone Coturn role in main.yml
- Remove obsolete Greenlight section from BBB override

Ref: https://chatgpt.com/share/68d74e25-c068-800f-ae20-d0e34ac8ee12
2025-09-27 04:39:11 +02:00
7405883b48 BigBlueButton & Nextcloud:
- Switch to custom BBB Docker repository
- Externalize Coturn and Collabora by default
- Add dedicated 03_dependencies.yml for dependency handling
- Improve env templating with lowercased feature flags
- Add conditional healthcheck for Greenlight
- Refactor TURN/STUN/relay handling with role variable _BBB_COTURN_ROLE
- Extend Collabora/Greenlight dependency wiring in override file
- Nextcloud Talk: refine vars and enable/disable logic with separate plugin/service flags, add network_mode support and conditional nginx proxy block

Ref: https://chatgpt.com/share/68d741ff-a544-800f-9e81-a565e0bab0eb
2025-09-27 03:46:57 +02:00
85db0a40db Refactor Coturn port configuration: unify STUN and TURN into stun_turn and stun_turn_tls, update vars, docker-compose template, and add robust healthcheck [https://chatgpt.com/share/68d73a2d-ef34-800f-90d2-1628822ca541] 2025-09-27 03:14:53 +02:00
8af39c32ec Override docker conf variables from parents 2025-09-27 02:41:13 +02:00
31e86ac0fc Optimized networks 2025-09-27 02:21:19 +02:00
4d223f1784 feat(web-svc-coturn): add configurable network_mode (default host) and adjust credential generation
- Introduced `COTURN_NETWORK_MODE` to support both host and bridge modes
- Updated docker-compose template to skip port publishing in host mode
- Changed user_password credential algorithm to random_hex for stronger randomness
- Set default network_mode: host in config

Ref: https://chatgpt.com/share/68d72a50-c36c-800f-9367-32c4ae520000
2025-09-27 02:05:48 +02:00
926def3d01 web-svc-coturn: Add resource limits and fix docker-compose template
- Set CPU, memory reservation/limit, and PID limit for coturn
- Ensure docker_compose_file_creation_enabled and disable git repo pulling
- Move certificate mounts to volumes and fix env var interpolation in command
- Correct realm and user formatting

See: https://chatgpt.com/share/66f65f18-799c-800a-95f4-b6b26511e9cb
2025-09-27 01:40:37 +02:00
083b7d2914 Refactor relay port ranges: set coturn to 20000–39999, BigBlueButton to 40000–49999, and Nextcloud to 50000–59999
See: https://chatgpt.com/share/68d6f6c2-f7c0-800f-bc2e-10876afff4a8
2025-09-26 22:26:44 +02:00
73a38e0b2b Refactor TURN/STUN handling:
- Split internal/external Coturn for BBB and Nextcloud
- Added dedicated relay port ranges per app
- Updated env and compose overrides for coturn
- Ensure coturn role is loaded conditionally
- Standardize credential/env passing for coturn
@See https://chatgpt.com/share/68d6f376-4878-800f-b4f7-62822caa49ea
2025-09-26 22:11:55 +02:00
e3c0880e98 Fix semi-stateless run_once label and load both docker-compose & openresty handlers
See: https://chatgpt.com/share/68d6f2f3-59a4-800f-b20e-ed1c7df9e2ff
2025-09-26 22:09:39 +02:00
a817d964e4 refactor(front-stack): introduce sys-stk-front-base and semi-stateless stack; improve coturn role docs
- Extract common HTTPS + Cloudflare + handler bootstrap into new role sys-stk-front-base
- Update sys-stk-front-proxy, web-svc-cdn, web-svc-file, web-svc-html to depend on sys-stk-front-base
- Add new sys-stk-semi-stateless role combining front-base + back-stateless
- Update web-svc-coturn to use sys-stk-semi-stateless and rewrite README/meta with detailed Coturn description
- Unify sys-util-csp-cert README heading

Ref: ChatGPT conversation https://chatgpt.com/share/68d6cea2-3570-800f-acb3-c3277317f17b
2025-09-26 20:25:53 +02:00
7572134e9d Removed leftover 2025-09-26 19:39:38 +02:00
97af4990aa refactor(webserver): rename roles and update references
- Rename sys-svc-webserver -> sys-svc-webserver-core
- Rename sys-stk-front-pure -> sys-svc-webserver-https
- Update includes, run_once flags, and docs across:
  * sys-ctl-mtn-cert-renew
  * sys-front-inj-*
  * sys-stk-front-proxy
  * sys-svc-certs
  * sys-svc-cln-domains
  * web-opt-rdr-*
  * web-svc-*
- Remove redundant webserver include in web-opt-rdr-www
- Fix documentation links

Ref: ChatGPT conversation https://chatgpt.com/share/68d6cea2-3570-800f-acb3-c3277317f17b
2025-09-26 19:34:42 +02:00
b6d0535173 Cleaned up comment 2025-09-26 18:53:46 +02:00
27d33435f8 fix(bbb): align TURN/STUN configuration with shared coturn service
- added entity_name to vars for consistent docker.service lookup
- switched docker_repository_* vars to use entity_name dynamically
- introduced BBB_TURN_DOMAIN, BBB_TURN_PORT, and BBB_STUN_PORT
  → fallback to web-svc-coturn when BBB_COTURN_ENABLED is false
- updated env.j2 to use new BBB_TURN_* vars instead of hardcoded domain/ports
- cleaned up obsolete comments and spacing

Conversation: https://chatgpt.com/share/68d6c4a8-d524-800f-9592-e8a3407cd721
2025-09-26 18:53:21 +02:00
3cc4014edf feat(coturn): add dedicated web-svc-coturn role with schema, ports, network, and docker-compose template
- registered subnet 192.168.104.48/28 for coturn in group_vars/all/09_networks.yml
- defined public ports for stun/turn and relay port range in group_vars/all/10_ports.yml
- removed obsolete TODO.md and env.j2 from role
- added schema/main.yml with credentials validation (user_password, auth_secret)
- refactored tasks to load sys-stk-back-stateless instead of sys-stk-full-stateful
- implemented docker-compose.yml.j2 with auth-secret + lt-cred-mech and TLS config
- restructured vars/main.yml with docker, ports, credentials, and certificates
- updated config/main.yml.j2 with canonical domain and service definitions

Conversation: https://chatgpt.com/share/68d6c4a8-d524-800f-9592-e8a3407cd721
2025-09-26 18:52:13 +02:00
63da669c33 Removed unnecessary 'sys-svc-proxy' wrapper tasks 2025-09-26 18:50:12 +02:00
fb04a4c7a0 Restructured Front Proxy variables 2025-09-26 18:43:09 +02:00
2968ac7f0a Removed unnecessary sys-svc-webserver import 2025-09-26 18:22:46 +02:00
1daa53017e Refactor BigBlueButton role:
- Aligned schema/main.yml credential definitions with consistent spacing
- Changed PostgreSQL secret to use random_hex_32 instead of bcrypt
- Improved administrator creation logic in tasks/02_administrator.yml:
  * First try with primary password
  * Retry with starred password if OIDC is enabled
  * Fallback to user:set_admin_role if both fail
See: https://chatgpt.com/share/68d6aa34-19cc-800f-828a-a5121fda589f
2025-09-26 16:59:28 +02:00
9082443753 Refactor docker compose exec usage
Introduce centralized variables:
- docker_compose_command_base
- docker_compose_command_exec

Replaced hardcoded 'docker compose exec' with '{{ docker_compose_command_exec }}'
across multiple roles (BigBlueButton, EspoCRM, Friendica, Listmonk, Mailu, Matrix, OpenProject).
Ensures consistent environment file loading and reduces duplicated code.

Details: https://chatgpt.com/share/68d6a276-19d0-800f-839d-d191d97f7c41
2025-09-26 16:26:17 +02:00
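A hedged sketch of what such a centralized exec wrapper can look like. The exact flags stored in docker_compose_command_base/exec are not shown in the commit, so --project-directory, --env-file, and -T are assumptions:

```python
import shlex

def compose_exec(project_dir: str, env_file: str, service: str, *command: str) -> str:
    """Build a 'docker compose exec' call so every role loads the same env file."""
    base = (
        f"docker compose --project-directory {shlex.quote(project_dir)}"
        f" --env-file {shlex.quote(env_file)}"
    )
    return f"{base} exec -T {service} " + " ".join(shlex.quote(c) for c in command)
```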
bcee1fecdf feat(inventory): add random_hex_32 generator
feat(bbb/schema): auto-generate etherpad_api_key; set fsesl_password to alphanumeric_32
test(unit): add InventoryManager tests (Option B) expecting feature-generated creds as plain strings
docs: full autocreation of credentials for BigBlueButton now enabled
See: https://chatgpt.com/share/68d69ee8-3fd4-800f-9209-60026b338934
2025-09-26 16:11:05 +02:00
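The generator itself is a one-liner; a sketch assuming 'random_hex_32' means 32 hexadecimal characters, i.e. 16 random bytes:

```python
import secrets

def random_hex_32() -> str:
    """Return 32 hex characters (16 cryptographically random bytes)."""
    return secrets.token_hex(16)
```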
0602148caa bbb: pin mediasoup to IPv4-only and single worker via compose override
Set MS_WORKERS=1, MS_ENABLE_IPV6=false, and MS_WEBRTC_LISTEN_IPS to announce only EXTERNAL_IPv4 for webrtc-sfu. Helps avoid mediasoup router init issues seen when IPv6 is present.

Context/conversation: https://chatgpt.com/share/68d69a0e-22b0-800f-890b-13721a35f51b
2025-09-26 15:50:28 +02:00
cbfb991e79 Hardened BBB Version 2025-09-26 15:21:01 +02:00
fa7b1400bd Created mail account for blackhole to prevent delivery failure messages 2025-09-26 15:11:33 +02:00
c7cae93597 Optimized IP6 deactivation 2025-09-26 13:46:55 +02:00
6ea0d09f14 bbb: WIP—stabilize env/compose wiring & prep SFU override
Context: debugging mediasoup/WebRTC failures caused by empty/interpolated vars (EXTERNAL_IPv4, etc.).
- Normalize config/main.yml (ip6_enabled flag, enable greenlight/coturn) and tidy formatting.
- Extend vars/main.yml with BBB_* switches (IPv6, Greenlight, Coturn), TURN/Coturn cert paths.
- env.j2: wire secrets & toggles, guard IPv6 via BBB_IP6_ENABLED, switch LDAP/OIDC to role flags, add TURN/STUN, and general cleanup.
- tasks/main.yml: use BBB_* fact names, robust path joins, write docker-compose.override.yml, and notify compose on env/override changes.
- tasks/01_docker-compose.yml: reference new BBB_DOCKER_COMPOSE_* facts.
- Add templates/docker-compose.override.yml.j2 (placeholder for SFU overrides to avoid bad defaults during runs).
Rationale: make the Compose bring-up deterministic (no empty interpolations), paving the way to set MS_WEBRTC_LISTEN_IPS in the override without risk.

Chat reference: debugging thread with GPT-5 Thinking on 2025-09-26 https://chatgpt.com/share/68d59d98-4388-800f-a627-07b6a603d0b2.
2025-09-26 12:49:12 +02:00
5e4cda0ac9 Documented docker-compose.override.yml 2025-09-26 12:14:08 +02:00
1d29617f85 Added creation of docker-compose.override.yml file 2025-09-26 12:03:47 +02:00
7c5ad8e6a1 Optimized XWIKI Nextcloud Bridge 2025-09-26 09:35:14 +02:00
a26538d1b3 web-app-openproject: upgrade to OpenProject 15
- bumped image version from 14 to 15
- removed dedicated migration task (now handled by upstream entrypoints)
- renamed tasks for cleaner numbering:
  * 02_settings.yml → 01_settings.yml
  * 03_ldap.yml → 02_ldap.yml
  * 04_admin.yml → 03_admin.yml

Ref: https://chatgpt.com/share/68d57770-2430-800f-ae53-e7eda6993a8d
2025-09-25 19:39:45 +02:00
f55b0ca797 web-app-openproject: migrate from OpenProject 13 to 14
- updated base image from openproject/community:13 to openproject/openproject:14
- added dedicated migration task (db:migrate + schema cache clear)
- moved settings, ldap, and admin tasks to separate files
- adjusted docker-compose template to use OPENPROJECT_WEB_SERVICE / OPENPROJECT_SEEDER_SERVICE variables
- replaced postinstall.sh with precompile-assets.sh
- ensured depends_on uses variable-based service names

Ref: https://chatgpt.com/share/68d57770-2430-800f-ae53-e7eda6993a8d
2025-09-25 19:10:46 +02:00
6f3522dc28 fix(csp): resolve all CSP-related issues and extend webserver health checks
- Added _normalize_codes to support lists of valid HTTP status codes
- Updated web_health_expectations to handle multiple codes, deduplication, and fallback logic
- Extended unit tests with coverage for list/default combinations, invalid values, and alias behavior
- Fixed Flowise CSP flags and whitelist entries
- Adjusted Flowise, MinIO, and Pretix docker service resource limits
- Updated docker-compose templates with explicit service_name
- Corrected MinIO status_codes to 301 redirects

All CSP errors fixed.

See details: https://chatgpt.com/share/68d557ad-fc10-800f-b68b-0411d20ea6eb
2025-09-25 18:05:41 +02:00
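A sketch of the normalization described here; function name, fallback value, and validation range are assumptions based on the commit message:

```python
def normalize_codes(codes, default=(200,)):
    """Accept a single status code or a list; return a deduplicated list of
    ints, falling back to the default when nothing valid remains."""
    if codes is None:
        codes = []
    if not isinstance(codes, (list, tuple)):
        codes = [codes]
    seen, result = set(), []
    for code in codes:
        try:
            value = int(code)
        except (TypeError, ValueError):
            continue
        if 100 <= value <= 599 and value not in seen:
            seen.add(value)
            result.append(value)
    return result or list(default)
```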
5186eb5714 Optimized OpenProject and CSP rules 2025-09-25 14:47:28 +02:00
73bcdcaf45 Deactivated proxying of bluesky web domain 2025-09-25 13:31:18 +02:00
9e402c863f Optimized Bluesky API redirect domain 2025-09-25 13:29:45 +02:00
84865d61b8 Install swapfile tool correctly 2025-09-25 13:16:13 +02:00
423850d3e6 Refactor svc-opt-swapfile role: move core logic into 01_core.yml, simplify tasks/main.yml, and integrate swapfile setup into sys-svc-docker/01_core.yml to prevent OOM failures. See https://chatgpt.com/share/68d518f2-ba0c-800f-8a3a-c6b045763ac6 2025-09-25 12:27:13 +02:00
598f4e854a Increase OpenProject container resources
- Raised web service to 3 CPUs, 3–4 GB RAM, 2048 pids
- Raised worker service to 2 CPUs, 2–3 GB RAM, 2048 pids
- Increased cache mem_reservation to 512m
- Adjusted formatting for proxy service

Ref: https://chatgpt.com/share/68d513c1-8c10-800f-bf57-351754e3f5c2
2025-09-25 12:05:03 +02:00
1f99a6b84b Refactor: force early evaluation of BlueSky redirect_domain_mappings before include_role
Ensures that redirect_domain_mappings is resolved via set_fact
before passing it into the web-opt-rdr-domains role.
See: https://chatgpt.com/share/68d51125-14f4-800f-be6a-a7be3faeb028
2025-09-25 11:55:13 +02:00
189aaaa9ec Deactivated OpenProject LDAP Administrator Flag 2025-09-25 11:10:46 +02:00
ca52dcda43 Refactor OpenProject role:
- Add CPU, memory and PID limits to all services in config/main.yml to prevent OOM
- Replace old LDAP admin bootstrap with new 02_admin.yml using OPENPROJECT_ADMINISTRATOR_* vars
- Standardize variable names (uppercase convention)
- Fix HTTPS/HSTS port check (443 instead of 433)
- Allow docker_restart_policy override in base.yml.j2
- Cleanup redundant LDAP admin runner in 01_ldap.yml
See: https://chatgpt.com/share/68d40c6e-ab9c-800f-a4a0-d9338d8c1b32
2025-09-24 17:22:47 +02:00
4f59e8e48b Added cdn.jsdelivr.net to connect-src for web-app-desktop 2025-09-24 15:35:11 +02:00
a993c153dd fix(docker-container): ensure service_name and context are passed correctly to resource.yml.j2 by switching from lookup() to include with indent filter
Ref: https://chatgpt.com/share/68d3db3d-b6b4-800f-be4b-24ac50005552
2025-09-24 13:51:44 +02:00
8d6ebb4693 Mailu/Redis: add explicit service resource limits & clamav_db volume
- use lookup(template) for redis resource injection
- add cpus/mem/pids configs for all Mailu services
- switch antivirus to dedicated clamav_db volume
- add MAILU_CLAMAV_VOLUME var
- cleanup set service_name per service in docker-compose template
https://chatgpt.com/share/68d3d69b-06f0-800f-8c4d-4a74471ab961
2025-09-24 13:31:54 +02:00
567babfdfc Fix CPU resource calculation by enforcing a minimum of 0.5 cores per container using list-based max filter. See: https://chatgpt.com/share/68d3d645-e4c4-800f-8910-b6b27bb408e7 2025-09-24 13:30:32 +02:00
18e5f001d0 Mailu: disable hardened_malloc LD_PRELOAD (set to empty) to prevent /proc/cpuinfo PermissionError in socrate startup
Details: https://chatgpt.com/share/68d3ba3b-783c-800f-bf3d-0b0ef1296f93
2025-09-24 11:31:44 +02:00
7d9cb5820f feat(jvm): add robust JVM sizing filters and apply across Confluence/Jira
Introduce filter_plugins/jvm_filters.py with jvm_max_mb/jvm_min_mb. Derive Xmx/Xms from docker mem_limit/mem_reservation using safe rules: Xmx=min(70% limit, limit-1024MB, 12288MB), floored at 1024MB; Xms=min(Xmx/2, reservation, Xmx), floored at 512MB. Parse human-readable sizes (k/m/g/t) with binary units.

Wire filters into roles: set JVM_MINIMUM_MEMORY/JVM_MAXIMUM_MEMORY via filters; stop relying on host RAM. Keep env templates simple and stable.

Add unit tests under tests/unit/filter_plugins/test_jvm_filters.py covering typical sizes, floors, caps, invalid inputs, and entity-name derivation.

Ref: https://chatgpt.com/share/68d3b9f6-8d18-800f-aa8d-8a743ddf164d
2025-09-24 11:29:40 +02:00
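The sizing rules are spelled out in the message; a Python sketch reconstructing them (the real implementation lives in filter_plugins/jvm_filters.py and may differ in detail):

```python
UNITS = {"k": 1024, "m": 1024**2, "g": 1024**3, "t": 1024**4}

def parse_mb(size) -> int:
    """Parse human-readable sizes like '2g' or '512m' (binary units) into MB."""
    size = str(size).strip().lower()
    if size and size[-1] in UNITS:
        return int(float(size[:-1]) * UNITS[size[-1]] // 1024**2)
    return int(size) // 1024**2  # bare numbers treated as bytes

def jvm_max_mb(mem_limit) -> int:
    """Xmx = min(70% of limit, limit - 1024 MB, 12288 MB), floored at 1024 MB."""
    limit = parse_mb(mem_limit)
    return max(min(int(limit * 0.7), limit - 1024, 12288), 1024)

def jvm_min_mb(mem_limit, mem_reservation) -> int:
    """Xms = min(Xmx/2, reservation, Xmx), floored at 512 MB."""
    xmx = jvm_max_mb(mem_limit)
    return max(min(xmx // 2, parse_mb(mem_reservation), xmx), 512)
```

For example, a 4g mem_limit yields Xmx = min(2867, 3072, 12288) = 2867 MB, independent of host RAM.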
c181c7f6cd fix(webserver): ensure numeric casting for worker_processes and worker_connections
- Cast WEBSERVER_CPUS_EFFECTIVE to float before comparison to avoid
  'AnsibleUnsafeText < int' type errors.
- Ensure correct numeric coercion for pids_limit values.
- This prevents runtime templating errors when rendering nginx config.

Ref: https://chatgpt.com/share/68d3b047-56ac-800f-a73f-2fb144dbb7c4
2025-09-24 10:48:23 +02:00
929cddec0e Refactor resource_filter to delegate default handling to get_app_conf and update unittests accordingly https://chatgpt.com/share/68d3ad6d-76b4-800f-b04e-5e1fb70b44f3 2025-09-24 10:46:21 +02:00
9ba0efc1a1 Refactor resource configuration:
- Introduce new resource_filter plugin (mandatory hard_default, auto entity_name fallback)
- Replace get_app_conf calls with resource_filter in resource.yml.j2
- Add WEBSERVER_CPUS_EFFECTIVE, WEBSERVER_WORKER_PROCESSES, WEBSERVER_WORKER_CONNECTIONS to 05_webserver.yml
- Update Nginx templates (sys-svc-webserver, web-app-magento, web-app-nextcloud) to use new vars
- Extend svc-prx-openresty config with cpus/mem limits
- Add unit tests for resource_filter

Details: https://chatgpt.com/share/68d3a493-9a5c-800f-8cd2-bd2e7a3e3fda
2025-09-24 09:58:30 +02:00
9bf77e1e35 mastodon: tighten resources, robust exec tasks, and env defaults
- resources: per-service cpus/mem/pids for mastodon/streaming/sidekiq/redis/db
- compose: rename service key to "mastodon" (was: web), set service_name blocks
- tasks(01_setup): run rails db:migrate via docker exec (non-tty, login shell)
- tasks(02_administrator): healthchecks for 'mastodon', sed with absolute path,
  tootctl as user 'mastodon' (non-tty), optional re-health wait
- env.j2: add RAILS_ENV={{ ENVIRONMENT | default('production') }}
- resource.yml.j2: fix get_app_conf path (service_name default spacing)
- docs: remove outdated Installation/Administration files

Context: https://chatgpt.com/share/68d332a0-ae98-800f-b418-c0d0262eaa2e
2025-09-24 01:52:18 +02:00
426ba32c11 feat(services): add CPU/RAM/PIDs defaults for heavy roles and align service names
Add per-service resource overrides (cpus, mem_reservation, mem_limit, pids_limit) for ollama, mariadb, postgres, confluence, gitlab, jira, keycloak, nextcloud; light formatting fixes in wordpress.

Rename service keys from generic 'application/web' to concrete names (jira, confluence, gitlab, keycloak) and update compose templates accordingly.

Jira: introduce JIRA_STORAGE_PATH and switch mounts/README accordingly.

https://chatgpt.com/share/68d2d96c-9bf4-800f-bbec-d4f2c0051c06
2025-09-23 21:43:50 +02:00
ff7b7aeb2d feat(filters): add active_docker_container_count filter and use it for fair resource splits
Compute per-container CPU/RAM shares based on active services (web-/svc-*, enabled=true or undefined). Cast host facts to numbers, add safe min=1, and output compose-ready values. Include robust unit test.

Also: include resource.yml.j2 in base template and minor formatting tidy-up.

https://chatgpt.com/share/68d2d96c-9bf4-800f-bbec-d4f2c0051c06
2025-09-23 21:35:12 +02:00
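A sketch of the fair-split idea; the real filter counts enabled web-/svc-* services, and the rounding and memory format here are assumptions:

```python
def container_shares(total_cpus: float, total_mem_mb: int, active_containers: int) -> dict:
    """Split host resources evenly across active containers, clamping the
    divisor to at least 1, and emit compose-ready values."""
    n = max(int(active_containers), 1)
    return {
        "cpus": round(float(total_cpus) / n, 2),
        "mem_limit": f"{int(total_mem_mb) // n}m",
    }
```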
c523d8d8d4 Casted WWW_REDIRECT_ENABLED to bool 2025-09-23 19:18:22 +02:00
12d05ef013 Bluesky: add redirects for deactivated web/view domains to BLUESKY_API_DOMAIN via web-opt-rdr-domains
Ref: https://chatgpt.com/share/68d2cf5f-4a88-800f-a739-485580d84566
2025-09-23 18:48:47 +02:00
3cbf37d774 Added correct health status code for minio api 2025-09-23 18:34:59 +02:00
fc99c72f86 Optimized Swapfiles variables and enabled async 2025-09-23 18:34:18 +02:00
3211dd7cea Optimized README.md 2025-09-23 13:47:46 +02:00
c07a9835fc Updated Flowise Credentials 2025-09-23 12:48:43 +02:00
f4cf55b3c8 Open WebUI OIDC & proxy fixes + Ollama preload + async-safe pull
- svc-ai-ollama:
  - Add preload_models (llama3, mistral, nomic-embed-text)
  - Pre-pull task: loop_var=model, async-safe changed_when/failed_when

- sys-svc-proxy (OpenResty):
  - Forward Authorization header
  - Ensure proxy_pass_request_headers on

- web-app-openwebui:
  - ADMIN_EMAIL from users.administrator.email
  - Request RBAC group scope in OAUTH_SCOPES

Ref: ChatGPT support (2025-09-23) — https://chatgpt.com/share/68d20588-2584-800f-aed4-26ce710c69c4
2025-09-23 04:27:46 +02:00
1b91ddeac2 Optimized flowise 2025-09-23 03:03:11 +02:00
b638d00d73 Removed unnecessary MINIO_OIDC_POLICY_NAME_SAFE 2025-09-23 03:02:40 +02:00
75c36a1d71 web-app-minio: manage OIDC policy via containerized mc and fix policy JSON
- Use dockerized mc with MC_HOST_minio (stateless), no temp files/dirs
- Create only RAW policy name with slash to match Keycloak claim
- Split policy: s3:* on S3 ARNs; admin:* on Resource "*"
- Add mc vars (image, MC_HOST components) to vars/main.yml
- Remove unused Ollama dependency block from tasks

Refs: ChatGPT conversation → https://chatgpt.com/share/68d1eab9-a35c-800f-aa81-76fb2101bd93
2025-09-23 02:33:35 +02:00
7a119c3175 Deactivated CSS for Open WebUI 2025-09-23 02:21:59 +02:00
3e6193ffce Solved ollama network bug 2025-09-23 02:21:20 +02:00
9d8e06015f Added whitespace 2025-09-23 00:59:55 +02:00
5daf3387bf web-app-minio: enable OIDC integration and policy handling
- Added OIDC and LDAP feature flags in config
- Introduced API/Console URL vars for proxy alignment
- Implemented automatic MinIO policy creation for OIDC admin group
- Replaced static env.J2 with dynamic env.j2 (OIDC-aware)
- Added policy.json.j2 template with full admin rights
- Cleaned up tasks to use stdin instead of file for mc policy apply

Ref: https://chatgpt.com/share/68d1d3ef-ca84-800f-abe2-11ab70e20c4e
2025-09-23 00:56:11 +02:00
6da7f28370 Optimized whitespace 2025-09-23 00:51:23 +02:00
208848579d svc-db-openldap: make LDIF import idempotent, unify container var, and tidy role
- Add handlers/main.yml to load memberof/refint modules and import groups via docker exec
- Use OPENLDAP_CONTAINER consistently (replace OPENLDAP_NAME)
- Rename tasks/ldifs_creation.yml -> tasks/_ldifs_creation.yml and update includes
- Drop default param from get_app_conf calls; add explicit meta: flush_handlers
- docker-compose: honor OPENLDAP_NETWORK_EXPOSE_LOCAL | bool; minor formatting
- env template: formatting/comments consistency
- Remove unused 01_rbac_group.ldif.j2; rename 02_rbac_roles -> 01_rbac_roles and fix filter to LDAP
- vars: rename OPENLDAP_NAME -> OPENLDAP_CONTAINER; prune LDIF schema type

Conversation: https://chatgpt.com/share/68d1d25d-e788-800f-bfb6-13b1f5bc6121
2025-09-23 00:49:57 +02:00
d8c73e9fc3 Renamed to correct handler 2025-09-23 00:37:26 +02:00
10b20cc3c4 tests: treat mixed Jinja in notify/package_notify as wildcard regex; ignore pure Jinja; add reverse check so all notify targets map to existing handlers. See: https://chatgpt.com/share/68d1cf5a-f7e8-800f-910c-a2215d06c2a4 2025-09-23 00:36:50 +02:00
790c184e66 feat(web-app-openwebui): add bootstrap admin configuration via ADMIN_EMAIL
Introduce ADMIN_EMAIL and SHOW_ADMIN_DETAILS options to bootstrap the first
administrator account on fresh installations. This ensures at least one admin
exists without manual database intervention.

Conversation: https://chatgpt.com/share/68d18e02-d6b8-800f-aaab-920c61b9284a
2025-09-22 21:41:32 +02:00
93d165fa4c Solved CSP issue 2025-09-22 21:22:35 +02:00
1f3abb95af Moved handler reloading one level higher, as required 2025-09-22 21:07:34 +02:00
7ca3a73f21 Normalized OpenLDAP variables 2025-09-22 21:02:24 +02:00
08720a43c1 feat(web-app-openwebui): enable OIDC role-based admin mapping
Activate ENABLE_OAUTH_ROLE_MANAGEMENT and configure OAUTH_ROLES_CLAIM from
RBAC.GROUP.CLAIM. Define OAUTH_ADMIN_ROLES dynamically based on RBAC group
and application administrator naming convention.

Conversation: https://chatgpt.com/share/68d18e02-d6b8-800f-aaab-920c61b9284a
2025-09-22 20:27:01 +02:00
1baed62078 Removed ollama dependency because it's managed via Ansible rather than as a docker compose dependency 2025-09-22 20:22:54 +02:00
963e1aea21 Removed ollama from openwebui 2025-09-22 20:15:33 +02:00
a819a05737 Activated network for svc-ai-ollama 2025-09-22 20:12:34 +02:00
4cb58bec0f Added correct portmapping for ollama 2025-09-22 20:09:01 +02:00
002f45d1df Added LDAP draft for Open WebUI - deactivated, just a PoC, because OIDC is preferred anyway 2025-09-22 20:02:36 +02:00
cbc4dad1d1 Removed a stray ':' 2025-09-22 20:00:55 +02:00
70d395ed15 feat(web-app-openwebui): add OIDC support via env.j2 with feature flag
Enables OIDC login by adding feature flag (features.oidc), rendering OIDC-related
environment variables, and introducing OPENWEBUI_OIDC_ENABLED.

Conversation: https://chatgpt.com/share/68d18e02-d6b8-800f-aaab-920c61b9284a
2025-09-22 19:57:55 +02:00
e20a709f04 Solved wrong image bug for minio 2025-09-22 19:56:24 +02:00
d129f71cef Added Ollama network 2025-09-22 19:19:44 +02:00
4cb428274a Add new 'Artificial Intelligence' portfolio menu category for AI tools (Ollama, OpenWebUI, Flowise, MinIO, Qdrant, LiteLLM) 🤖
Details: Introduced dedicated AI category with proper description, tags, and robot icon to group AI-related applications.

Reference: https://chatgpt.com/share/68d183ea-04dc-800f-97c9-2e83d0ca3753
2025-09-22 19:14:36 +02:00
97e2d440b2 Normalized OpenLDAP constants 2025-09-22 19:08:11 +02:00
588cd1959f Added local_ai configuration feature 2025-09-22 18:56:38 +02:00
5d1210d651 feat(ai): introduce dedicated AI roles and wiring; clean up legacy AI stack
• Add svc-ai category under roles and load it in constructor stage

• Create new 'svc-ai-ollama' role (vars, tasks, compose, meta, README) and dedicated network

• Refactor former AI stack into separate app roles: web-app-flowise and web-app-openwebui

• Add web-app-minio role; adjust config (no central DB), meta (fa-database, run_after), compose networks include, volume key

• Provide user-focused READMEs for Flowise, OpenWebUI, MinIO, Ollama

• Networks: add subnets for web-app-openwebui, web-app-flowise, web-app-minio; rename web-app-ai → svc-ai-ollama

• Ports: rename ai_* keys to web-app-openwebui / web-app-flowise; keep minio_api/minio_console

• Add group_vars/all/17_ai.yml (OLLAMA_BASE_LOCAL_URL, OLLAMA_LOCAL_ENABLED)

• Replace hardcoded include paths with path_join in multiple roles (svc-db-postgres, sys-service, sys-stk-front-proxy, sys-stk-full-stateful, sys-svc-webserver, web-svc-cdn, web-app-keycloak)

• Remove obsolete web-app-ai templates/vars/env; split Flowise into its own role

• Minor config cleanups (CSP flags to {}, central_database=false)

https://chatgpt.com/share/68d15cb8-cf18-800f-b853-78962f751f81
2025-09-22 18:40:20 +02:00
aeab7e7358 Improve CSP configuration test: validate section types safely and include role/file path in error output
See ChatGPT conversation: https://chatgpt.com/share/68d1762d-7930-800f-bba5-55f1de7446b1
2025-09-22 18:16:01 +02:00
fa6bb67a66 Removed whitespace in templates 2025-09-22 16:28:57 +02:00
3dc2fbd47c refactor(objstore): extract MinIO into dedicated role 'web-app-minio' and adjust AI role
• Rename ports: web-app-ai_minio_* → web-app-minio_* in group_vars

• Remove MinIO from web-app-ai (service, volumes, ENV)

• Add new role web-app-minio (config, tasks, compose, env, vars) incl. front-proxy matrix

• AI role: front-proxy loop via matrix; unify domain/port vars (OPENWEBUI/Flowise *_PORT_PUBLIC/_PORT_INTERNAL, *_DOMAIN)

• Update compose templates accordingly

Ref: https://chatgpt.com/share/68d15cb8-cf18-800f-b853-78962f751f81
2025-09-22 16:27:51 +02:00
4b56ab3d18 Normalized Nextcloud port variable mapping 2025-09-22 16:20:32 +02:00
8e934677ff refactor(nextcloud): introduce NEXTCLOUD_INTERNAL_OCC_COMMAND for consistency
Details:
- Added NEXTCLOUD_INTERNAL_OCC_COMMAND to centralize occ path handling
- Updated NEXTCLOUD_DOCKER_EXEC_OCC to reuse internal occ command
- Replaced hardcoded occ path in docker-compose healthchecks with variable
- Improves maintainability and avoids duplication

See: https://chatgpt.com/share/68d14d85-3d80-800f-9d1d-fcf6bb8ce449
2025-09-22 15:35:26 +02:00
0a927f49a2 refactor(nextcloud): use path_join for config/occ paths to avoid double slashes
Details:
- NEXTCLOUD_DOCKER_CONF_DIRECTORY, NEXTCLOUD_DOCKER_CONFIG_FILE, NEXTCLOUD_DOCKER_CONF_ADD_PATH
  now built with path_join instead of string concat
- NEXTCLOUD_DOCKER_EXEC_OCC now uses path_join for occ command
- makes path handling more robust and consistent

See: https://chatgpt.com/share/68d14d85-3d80-800f-9d1d-fcf6bb8ce449
2025-09-22 15:22:41 +02:00
e6803e5614 refactor(ansible): normalize include_role syntax and unify host config paths via path_join
- Remove stray spaces after include_role: across many roles to ensure clean YAML and
  consistent linting/formatting.
- Listmonk:
  - Introduce LISTMONK_CONFIG_HOST = [ docker_compose.directories.config, 'config.toml' ] | path_join
  - Use that var in the template task (dest) and the docker-compose volume mount
- Matrix:
  - Build MATRIX_SYNAPSE_CONFIG_PATH_HOST, MATRIX_SYNAPSE_LOG_PATH_HOST, and
    MATRIX_ELEMENT_CONFIG_PATH_HOST via path_join
- Mobilizon:
  - Build mobilizon_host_conf_exs_file via path_join
  - Keep get_app_conf strictness unchanged (defaults to True in our filter), so behavior
    remains strict even though the explicit third arg was dropped
- Simpleicons:
  - Build server.js and package.json host paths via path_join
- Numerous web-app roles (Confluence, Discourse, EspoCRM, Friendica, Funkwhale, Gitea,
  GitLab, Jenkins, Joomla, Listmonk, Mailu, Mastodon, Matomo, Matrix, MediaWiki,
  Mobilizon, Moodle, Nextcloud, OpenProject, Peertube, Pixelfed, Pretix, Roulette Wheel,
  Snipe-IT, Syncope, Taiga, WordPress, XWiki, Yourls) and web-svc roles (coturn,
  libretranslate, simpleicons) updated for consistent include_role formatting

Why:
- path_join avoids double slashes and missing separators across different config roots
- Consistent include_role: formatting improves readability and prevents linter noise

Ref:
- Conversation: https://chatgpt.com/share/68d14711-727c-800f-b454-7dc4c3c1f4cb
2025-09-22 14:55:25 +02:00
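Ansible's path_join filter wraps Python's os.path.join; a quick illustration of the double-slash failure mode that naive concatenation has and path_join avoids (the Listmonk config path mirrors the example above):

```python
import posixpath

for root in ("/opt/listmonk/config", "/opt/listmonk/config/"):
    naive = root + "/" + "config.toml"            # yields '//' when root already ends in '/'
    joined = posixpath.join(root, "config.toml")  # always exactly one separator
    print(f"{naive!r} -> {joined!r}")
```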
6cf6c74802 Inverted docker_compose_skipp_file_creation to avoid double negation 2025-09-22 13:40:28 +02:00
734b8764f2 Optimized web-app-ai draft 2025-09-22 13:35:13 +02:00
3edb66f444 Merge branch 'master' of github.com:kevinveenbirkenbach/infinito-nexus 2025-09-22 11:17:40 +02:00
181b2d0542 Minor optimizations 2025-09-22 11:17:31 +02:00
78ebf4d075 Added draft base for AI assistant 2025-09-22 11:14:50 +02:00
d523629cdd Refactor docker-compose templates: replace {% include 'build.yml.j2' %} with lookup() + indent for proper YAML embedding. Also adjusted build.yml.j2 to remove leading spaces. See: https://chatgpt.com/share/68ce584a-a430-800f-8e2a-0f96884cc8d1 2025-09-20 09:31:49 +02:00
08ac8b6a9d Explicit activated async for creating of parent DNS entries 2025-09-20 09:30:16 +02:00
79db2419a6 fix(Makefile, playbook.yml): ensure Ansible syntax-check has access to group_vars and clean up playbook formatting
- Add all group_vars/all/*.yml as extra-vars (-e @file) in Makefile syntax-check
- Use consistent quoting in playbook.yml for SOFTWARE_NAME and host_type templating

Ref: https://chatgpt.com/share/68cdee8a-4e88-800f-bf62-bed66dbbb417
2025-09-20 02:00:25 +02:00
c424afa935 Fix CLI workflow and container startup
- Updated GitHub Actions workflow to call `infinito make ...` inside container
- Simplified Dockerfile CMD to run `infinito --help` and keep container alive
- Adjusted docker-compose.yml to use explicit image name

See: https://chatgpt.com/share/68cde606-c3f8-800f-8ac5-fc035386da87
2025-09-20 01:24:20 +02:00
974a83fe6e web-app-bluesky: enable custom AppView domain and refactor DNS records
- Un-commented `view.bluesky.{{ PRIMARY_DOMAIN }}` in config to allow
  explicit AppView domain definition.
- Reworked `03_dns.yml` to build `cloudflare_records` list programmatically,
  including conditional addition of AppView records only if the domain is
  not `api.bsky.app`.
- Improved AAAA handling with `| default('')` and proper ternary
  expressions for `present/absent`.
- Updated `vars/main.yml` to remove default port fallback for
  `BLUESKY_VIEW_PORT`.

Refs: https://chatgpt.com/share/68cdde1d-1bd4-800f-a4bb-319372752fcd
2025-09-20 00:50:31 +02:00
0168167769 Docker: introduce docker-compose setup and simplify CMD
- Replaced ENTRYPOINT/CMD with a single CMD ["infinito --help"] in Dockerfile
- Added docker-compose.yml with service 'infinito', port bindings, volumes, networks
- Added env.sample for BIND_IP, SUBNET, GATEWAY defaults

See conversation: https://chatgpt.com/share/68cda4d5-1fe0-800f-a7f7-191cb8b70d84
2025-09-19 21:22:45 +02:00
1c7152ceb2 Solved build bug 2025-09-19 20:51:06 +02:00
2a98b265bc Reduced port exposure to local-only for better encapsulation 2025-09-19 19:43:16 +02:00
14d1362dc8 Removed alias from bookwyrm 2025-09-19 19:14:55 +02:00
a4a8061998 Refactor: unify Docker build config via build.yml.j2 include
Replaced duplicated inline build definitions in multiple docker-compose.yml.j2
templates with a shared include (roles/docker-container/templates/build.yml.j2).
This ensures consistent use of pull_policy: never and Dockerfile context across
services (Postgres, Bookwyrm, Bridgy Fed, Chess, Confluence, Jira, Moodle,
OpenProject, Pretix, Roulette Wheel, WordPress, XWiki, Simpleicons).

Conversation: https://chatgpt.com/share/68cd8f35-b764-800f-9b00-2c837103d2fb
2025-09-19 19:13:44 +02:00
96ded68ef4 Refactor DNS handling and add solo record support
- Added 'solo' flag support for A/AAAA, CNAME/MX/TXT, and SRV records in sys-dns-cloudflare-records.
- Simplified sys-svc-dns: removed NS management tasks and CLOUDFLARE_NAMESERVERS default.
- Renamed 03_apex.yml back to 02_apex.yml, adjusted AAAA task name.
- Updated web-app-bluesky DNS tasks: marked critical records with 'solo'.
- Updated web-app-mailu DNS tasks: removed cleanup block, enforced 'solo' on all records.
- Adjusted constructor stage to call domain_mappings with AUTO_BUILD_ALIASES parameter.

Conversation: https://chatgpt.com/share/68cd20d8-9ba8-800f-b070-f7294f072c40
2025-09-19 15:29:11 +02:00
2d8967d559 Added www. alias for desktop as default 2025-09-19 14:55:40 +02:00
5e616d3962 web: general domain cleanup (canonical/aliases normalization)
- Normalize domain blocks across apps:
  - Add explicit 'aliases: []' everywhere (no implicit aliases)
  - Standardize canonical subdomains for consistency:
    * Bluesky: web/api under *.bluesky.<PRIMARY_DOMAIN>
    * EspoCRM: espo.crm.<PRIMARY_DOMAIN>
    * Gitea:   tea.git.<PRIMARY_DOMAIN>
    * GitLab:  lab.git.<PRIMARY_DOMAIN>
    * Joomla:  joomla.cms.<PRIMARY_DOMAIN>
    * Magento: magento.shop.<PRIMARY_DOMAIN>
    * OpenProject: open.project.<PRIMARY_DOMAIN>
    * Pretix:  ticket.shop.<PRIMARY_DOMAIN>
    * Taiga:   kanban.project.<PRIMARY_DOMAIN>
  - Remove legacy/duplicate aliases and use empty list instead
  - Fix 'alias' -> 'aliases' where applicable

Context: preparing for AUTO_BUILD_ALIASES=False and deterministic redirect mapping.

Ref: conversation https://chatgpt.com/share/68cd512c-c878-800f-bdf2-81737adf7e0e
2025-09-19 14:51:56 +02:00
0f85d27a4d filter/domain_redirect_mappings: add auto_build_alias parameter
- Extend filter signature with auto_build_alias flag to control automatic
  default→canonical alias creation
- group_vars/all: introduce AUTO_BUILD_ALIASES variable for global toggle
- Update unit tests: adjust calls to new signature and add dedicated
  test cases for auto_build_aliases=False

Ref: conversation https://chatgpt.com/share/68cd512c-c878-800f-bdf2-81737adf7e0e
2025-09-19 14:49:02 +02:00
c6677ca61b tests: ignore Jinja variables inside raw blocks in variable definitions check
- Added regex masking to skip {{ var }} usages inside {% raw %}…{% endraw %} blocks.
- Simplified code by removing redundant comments.
- Cleaned up task file for XWiki role by removing outdated note.

Ref: https://chatgpt.com/share/68cd2558-e92c-800f-a80a-a79d3c81476e
2025-09-19 11:42:01 +02:00
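A sketch of the masking approach; the regexes are illustrative, and the real test may scan for different patterns:

```python
import re

RAW_BLOCK = re.compile(r"{%\s*raw\s*%}.*?{%\s*endraw\s*%}", re.DOTALL)
JINJA_VAR = re.compile(r"{{\s*([a-zA-Z_][a-zA-Z0-9_]*)")

def used_variables(template_text: str) -> set[str]:
    """Mask {% raw %}…{% endraw %} spans before scanning, so literal
    '{{ var }}' examples inside raw blocks are not counted as usages."""
    masked = RAW_BLOCK.sub("", template_text)
    return set(JINJA_VAR.findall(masked))
```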
83ce88a048 Solved all open test issues 2025-09-19 11:32:58 +02:00
7d150fa021 DNS & certs refactor:
- Switch certbot flag from MODE_TEST → MODE_DUMMY in dedicated certs
- Add sys-svc-dns defaults for CLOUDFLARE_NAMESERVERS
- Introduce 02_nameservers.yml for NS cleanup + enforce, adjust task ordering (apex now 03_apex.yml)
- Enforce quoting for Bluesky and Mailu TXT records
- Add cleanup of MX/TXT/DMARC/DKIM in Mailu role
- Normalize no_log handling in Nextcloud plugin
- Simplify async conditionals in Collabora role
Conversation: https://chatgpt.com/share/68cd20d8-9ba8-800f-b070-f7294f072c40
2025-09-19 11:22:51 +02:00
2806aab89e Removed deadlock between sys-ctl-bkp-docker-2-loc and sys-ctl-cln-faild-bkps - the timer now handles cleanup exclusively 2025-09-19 11:21:18 +02:00
61772d5916 Solved testing mode bug 2025-09-19 11:18:29 +02:00
a10ba78a5a Bluesky: update Ansible patches to use new geolocation module path
Replaced hardcoded path to src/state/geolocation.tsx with variable BLUESKY_GEOLOCATION_PATH pointing to src/state/geolocation/index.tsx.
This ensures BAPP_CONFIG_URL and IPCC_URL replacements work with the updated Bluesky code structure.

Ref: https://chatgpt.com/share/68cb16d5-d698-800f-97e5-cc7d9016f27c
2025-09-17 22:15:30 +02:00
6854acf204 Used database type instead of database host for postgres 2025-09-17 20:53:48 +02:00
54d4eeb1ab Fix network alias assignment for DB services
Ensure that the database host alias is only attached to the database
containers themselves, not to dependent application containers. This
avoids DNS collisions where multiple containers expose the same alias
(e.g. 'postgres') on the same network, which led to connection refused
errors in XWiki.

See conversation: https://chatgpt.com/share/68cae4e5-94e4-800f-b291-d2acdb36af21
2025-09-17 18:42:36 +02:00
52fb7accac Temporarily disabled unnecessary variables to make debugging easier and solved OIDC bugs 2025-09-17 17:45:46 +02:00
d4c62dbf72 docker-container: ensure explicit network alias for DB services
Added explicit aliases in the networks configuration for database containers
(Postgres/MariaDB). This guarantees that the configured 'database_host' is always
resolvable across external networks, fixing intermittent 'UnknownHostException'
issues when restarting dependent services (e.g., Confluence).

Ref: https://chatgpt.com/share/68cabfac-8618-800f-bcf4-609fdff432ed
2025-09-17 16:26:02 +02:00
9ef4f91ec4 Added debug properties for xwiki but they don't seem to have any relevant effect 2025-09-17 15:07:45 +02:00
5bc635109a mediawiki: normalize LocalSettings.php base settings (clean+append once); fail if missing
oidc.php: autologin/localLogin templated via vars; optionally disable wgPasswordAttemptThrottle when 'web-svc-logout' present

vars: set defaults (AUTOLOGIN=true, LOCALLOGIN=false); use path_join/url_join for clean paths/URLs

Context: https://chatgpt.com/share/68caaf41-d098-800f-beb0-a473ff08c9c5
2025-09-17 14:53:53 +02:00
efb5488cfc Optimized variables 2025-09-17 13:16:57 +02:00
1dceabfd46 Added proxy conf variables for xwiki 2025-09-17 07:19:29 +02:00
c64ac0b4dc web-app-xwiki: verify extensions via Groovy page + new filter
- Added new filter 'xwiki_extension_status' (strips HTML, handles &nbsp;) -> returns 200/404
- Introduced checker tasks (_check_extension_via_groovy.yml) instead of REST probe
- Added early assert: superadmin login before extension installation
- Collect and assert probe results in 04_extensions.yml
- Set OIDC extension version to 'latest' (empty string)

https://chatgpt.com/share/68ca36cb-ac38-800f-8281-8dea480b6676
2025-09-17 06:20:28 +02:00
e94aac1d78 Removed non-existent plugin 2025-09-17 05:18:03 +02:00
c274c1a5d4 refactor(xwiki): move extension installer logic into static Groovy file and switch to plugins dict
- Added 'plugins' section in config/main.yml to declare enabled extensions in a structured way
- Introduced new static file 'files/extension_installer_b64.groovy' that decodes Base64 JSON of requested plugins
- Simplified 04_extensions.yml: now builds installer code from static file and removed hardcoded OIDC/LDAP checks
- Dropped redundant XWIKI_EXT_* variables in vars/main.yml
- Added XWIKI_PLUGINS fact to collect enabled plugin items from config/main.yml

This refactor makes extension installation more generic, easier to unit test, and extendable beyond OIDC/LDAP.

See: https://chatgpt.com/share/68ca25e3-cbc4-800f-a45e-2b152369811a
2025-09-17 05:08:02 +02:00
62493ac5a9 XWiki: increase installer execution timeout and add retries
The task 'XWIKI | Execute installer page' now uses:
- timeout: 300 (allow up to 5 min per request)
- retries: 20
- delay: 15
- until: condition

This prevents early failures during the first Distribution Wizard bootstrap when hundreds of extensions are still being installed.

Context: https://chatgpt.com/share/68ca0f18-2124-800f-a70d-df1811966107
2025-09-17 03:30:40 +02:00
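The same retries/delay/until pattern, sketched outside Ansible (the URL is a placeholder):

```python
import time
import requests

def execute_installer(url: str, retries: int = 20, delay: int = 15,
                      timeout: int = 300) -> None:
    """Poll the installer page until it responds successfully, mirroring the
    task's timeout/retries/delay/until settings."""
    for _ in range(retries):
        try:
            if requests.get(url, timeout=timeout).ok:
                return
        except requests.RequestException:
            pass  # transient failure while extensions are still installing
        time.sleep(delay)
    raise TimeoutError(f"installer page did not succeed after {retries} attempts")
```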
cc2b9d476f Added blob csp rule for xwiki 2025-09-17 02:47:01 +02:00
d9c527e2e2 Changed handler order 2025-09-17 02:36:17 +02:00
eafdacc378 Optimized CSP for XWIKI 2025-09-17 02:33:28 +02:00
c93ec6d43a feat(web-app-xwiki): install OIDC/LDAP via temporary Groovy page (PUT→execute→verify→delete)
Replace REST jobs flow with services.extension.install executed from a transient XWiki.InstallExtensions page.
- Build wishlist from Ansible vars; print machine-readable markers; assert success.
- Execute from XWiki space; delete page afterwards; fix delete changed_when.
- Use Jinja raw + indent for clean macro embedding.

https://chatgpt.com/share/68c9ebf5-f5e0-800f-9b80-372b4b31e772
2025-09-17 01:00:25 +02:00
0839b8e37f fix(xwiki): enable superadmin flag in xwiki.cfg and always force Distribution Wizard
- Added 'xwiki.superadmin=1' alongside the password in 'xwiki.cfg' to properly activate the superadmin account during bootstrap.
- Simplified 'xwiki.properties': Distribution Wizard config is now always present instead of conditional on the superadmin switch.
- Ensures that the Distribution Wizard ('distribution.wizard.enabled=true') and flavor bootstrap run automatically on first startup.
- This fixes the issue where REST endpoints (/rest/jobs, /repositories) stayed at 404 because the DW never executed.

Ref: https://chat.openai.com/share/7a5d58d2-8e91-4e34-8fa0-8b7d62494e4a
2025-09-16 23:53:14 +02:00
def6dc96d8 fix(xwiki): enable superadmin flag in xwiki.cfg and always force Distribution Wizard
- Added 'xwiki.superadmin=1' alongside the password in 'xwiki.cfg' to properly activate the superadmin account during bootstrap.
- Simplified 'xwiki.properties': Distribution Wizard config is now always present instead of conditional on the superadmin switch.
- Ensures that the Distribution Wizard ('distribution.wizard.enabled=true') and flavor bootstrap run automatically on first startup.
- This fixes the issue where REST endpoints (/rest/jobs, /repositories) stayed at 404 because the DW never executed.

Ref: https://chat.openai.com/share/7a5d58d2-8e91-4e34-8fa0-8b7d62494e4a
2025-09-16 23:30:07 +02:00
364f4799bc Intermediate commit: XWiki OIDC integration 2025-09-16 20:16:19 +02:00
6eb4ba45f7 Removed installjobrequest.xml.j2 2025-09-16 19:57:32 +02:00
0566c426c9 Refactored administrator page variables 2025-09-16 19:57:07 +02:00
9ce73b9c71 Harmonized saving path 2025-09-16 19:12:08 +02:00
83936edf73 fix(xwiki): use proper InstallRequest XML format for extension installation
- Replace custom <request> with class='org.xwiki.extension.job.InstallRequest'
- Use loop over extensions_to_install to build <extensionId> list
- Move namespace into <namespaces><string>wiki:xwiki</string>
- Remove unused <id>/<jobType> from root
- Ensure installDependencies, interactive, verbose inside request
- Fixes issue where server echoed <rest><list/> instead of actual extensions
2025-09-16 15:25:34 +02:00
40ecbc5466 Added correct extension install logic to prevent overwrites 2025-09-16 14:53:37 +02:00
b18b3b104c Implemented performance switch for Front Proxy 2025-09-16 13:58:46 +02:00
2f992983f4 xwiki: install/verify via REST Job API; add 'xwiki_job_id' filter; refactor extension probe; remove invalid /extensions/{id} verify; README wording
Context: fixed 404 on 'Verify OIDC extension is installed' by polling jobstatus and parsing job id via filter plugin.
Conversation: https://chatgpt.com/share/68c435b7-96c0-800f-b7d6-b3fe99b443e0
2025-09-12 17:01:37 +02:00
d7d8578b13 fix(xwiki): correct extension.repositories format to id:type:url
Changed repository definition from 'maven:xwiki-public ...' to 'xwiki-public:maven:...'
so that the XWiki Extension Manager can correctly register Maven repositories.
This resolves the 'Unsupported repository type [central]' error and allows OIDC extension installation.

Details: https://chatgpt.com/share/68c42c4f-fda4-800f-a003-c16bcc9bd2a3
2025-09-12 16:21:23 +02:00
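The id:type:url convention is easy to check; a small Python sketch (hypothetical helper and URL, not part of the role):

    def parse_repository(entry):
        # maxsplit=2 keeps the colons inside the URL intact
        repo_id, repo_type, url = entry.split(":", 2)
        return {"id": repo_id, "type": repo_type, "url": url}

    repo = parse_repository("xwiki-public:maven:https://example.org/maven")
    assert repo["type"] == "maven"  # the type field, not a URL fragment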
f106d5ec36 web-app-xwiki: admin bootstrap & REST/extension install fixes
• Guard admin tasks via XWIKI_SSO_ENABLED
• Create admin using XWikiUsers object API
• Wait for REST without DW redirect
• Install OIDC/LDAP via /rest/jobs (+verify)
• Mount xwiki.cfg/properties under Tomcat WEB-INF
• Build REST URLs with url_join; enable DW auto bootstrap + repos

https://chatgpt.com/share/68c42502-a5cc-800f-b05a-a1dbe48f014d
2025-09-12 15:50:30 +02:00
53b3a3a7b1 Deactivated LDAP by default 2025-09-12 14:13:13 +02:00
f576b42579 XWiki: two-phase bootstrap + extension install before enabling auth; add XOR validation
- Add 02_validation.yml to prevent OIDC+LDAP enabled simultaneously
- Introduce _flush_config.yml with switches (OIDC/LDAP/superadmin)
- Bootstrap with native+superadmin → create admin → install extensions (superadmin) → enable final auth
- Refactor REST vars (XWIKI_REST_BASE, XWIKI_REST_XWIKI, XWIKI_REST_EXTENSION_INSTALL)
- Update templates to use switch vars; gate OIDC block in properties
- Idempotent REST readiness waits

Conversation: https://chatgpt.com/share/68c40c1e-2b3c-800f-b59f-8d37baa9ebb2
2025-09-12 14:04:02 +02:00
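The XOR validation reduces to a single guard; a minimal Python sketch (names are illustrative):

    def validate_auth_switches(oidc_enabled, ldap_enabled):
        # Fail fast when both auth backends are switched on at once
        if oidc_enabled and ldap_enabled:
            raise ValueError("OIDC and LDAP must not be enabled simultaneously")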
b0f10aa0d0 Removed unnecessary 'just up' handler 2025-09-12 13:21:29 +02:00
440 changed files with 5884 additions and 2140 deletions


@@ -21,12 +21,12 @@ jobs:
- name: Clean build artifacts
run: |
docker run --rm infinito:latest make clean
docker run --rm infinito:latest infinito make clean
- name: Generate project outputs
run: |
docker run --rm infinito:latest make build
docker run --rm infinito:latest infinito make build
- name: Run tests
run: |
docker run --rm infinito:latest make test
docker run --rm infinito:latest infinito make test


@@ -59,11 +59,4 @@ RUN INFINITO_PATH=$(pkgmgr path infinito) && \
ln -sf "$INFINITO_PATH"/main.py /usr/local/bin/infinito && \
chmod +x /usr/local/bin/infinito
# 10) Run integration tests
# This needed to be deactivated because it doesn't work with the GitHub workflow
#RUN INFINITO_PATH=$(pkgmgr path infinito) && \
# cd "$INFINITO_PATH" && \
# make test
ENTRYPOINT ["infinito"]
CMD ["--help"]
CMD sh -c "infinito --help && exec tail -f /dev/null"


@@ -73,7 +73,7 @@ messy-test:
@echo "🧪 Running Python tests…"
PYTHONPATH=. python -m unittest discover -s tests
@echo "📑 Checking Ansible syntax…"
ansible-playbook playbook.yml --syntax-check
ansible-playbook -i localhost, -c local $(foreach f,$(wildcard group_vars/all/*.yml),-e @$(f)) playbook.yml --syntax-check
install: build
@echo "⚙️ Install complete."


@@ -17,6 +17,7 @@ def run_ansible_playbook(
password_file=None,
verbose=0,
skip_build=False,
skip_tests=False,
logs=False
):
start_time = datetime.datetime.now()
@@ -56,9 +57,8 @@ def run_ansible_playbook(
except subprocess.CalledProcessError:
print("\n❌ Inventory validation failed. Deployment aborted.\n", file=sys.stderr)
sys.exit(1)
# Tests are controlled via MODE_TEST
if modes.get("MODE_TEST", False):
if not skip_tests:
print("\n🧪 Running tests (make messy-test)...\n")
subprocess.run(["make", "messy-test"], check=True)
@@ -255,6 +255,12 @@ def main():
action="store_true",
help="Skip running 'make build' before deployment.",
)
parser.add_argument(
"-t",
"--skip-tests",
action="store_true",
help="Skip running 'make messy-tests' before deployment.",
)
parser.add_argument(
"-i",
"--id",
@@ -301,6 +307,7 @@ def main():
password_file=args.password_file,
verbose=args.verbose,
skip_build=args.skip_build,
skip_tests=args.skip_tests,
logs=args.logs,
)

docker-compose.yml (new file, 60 lines)

@@ -0,0 +1,60 @@
version: "3.9"
services:
infinito:
build:
context: .
dockerfile: Dockerfile
network: host
pull_policy: never
container_name: infinito_nexus
image: infinito_nexus
restart: unless-stopped
volumes:
- data:/var/lib/docker/volumes/
- backups:/Backups/
- letsencrypt:/etc/letsencrypt/
ports:
# --- Mail services (classic + secure) ---
- "${BIND_IP:-127.0.0.1}:25:25" # SMTP
- "${BIND_IP:-127.0.0.1}:110:110" # POP3
- "${BIND_IP:-127.0.0.1}:143:143" # IMAP
- "${BIND_IP:-127.0.0.1}:465:465" # SMTPS
- "${BIND_IP:-127.0.0.1}:587:587" # Submission (SMTP)
- "${BIND_IP:-127.0.0.1}:993:993" # IMAPS (bound to public IP)
- "${BIND_IP:-127.0.0.1}:995:995" # POP3S
- "${BIND_IP:-127.0.0.1}:4190:4190" # Sieve (ManageSieve)
# --- Web / API services ---
- "${BIND_IP:-127.0.0.1}:80:80" # HTTP
- "${BIND_IP:-127.0.0.1}:443:443" # HTTPS
- "${BIND_IP:-127.0.0.1}:8448:8448" # Matrix federation port
# --- TURN / STUN (UDP + TCP) ---
- "${BIND_IP:-127.0.0.1}:3478-3480:3478-3480/udp" # TURN/STUN UDP
- "${BIND_IP:-127.0.0.1}:3478-3480:3478-3480" # TURN/STUN TCP
# --- Streaming / RTMP ---
- "${BIND_IP:-127.0.0.1}:1935:1935" # Peertube
# --- Custom / application ports ---
- "${BIND_IP:-127.0.0.1}:2201:2201" # Gitea
- "${BIND_IP:-127.0.0.1}:2202:2202" # Gitlab
- "${BIND_IP:-127.0.0.1}:2203:22" # SSH
- "${BIND_IP:-127.0.0.1}:33552:33552"
# --- Consecutive ranges ---
- "${BIND_IP:-127.0.0.1}:48081-48083:48081-48083"
- "${BIND_IP:-127.0.0.1}:48087:48087"
volumes:
data:
backups:
letsencrypt:
networks:
default:
driver: bridge
ipam:
driver: default
config:
- subnet: ${SUBNET:-172.30.0.0/24}
gateway: ${GATEWAY:-172.30.0.1}

env.sample (new file, 3 lines)

@@ -0,0 +1,3 @@
BIND_IP=127.0.0.1
SUBNET=172.30.0.0/24
GATEWAY=172.30.0.1


@@ -0,0 +1,79 @@
# -*- coding: utf-8 -*-
"""
Ansible filter to count active docker services for current host.
Active means:
- application key is in group_names
- application key matches prefix regex (default: ^(web-|svc-).* )
- under applications[app]['docker']['services'] each service is counted if:
- 'enabled' is True, OR
- 'enabled' is missing/undefined (treated as active)
Returns an integer. If ensure_min_one=True, returns at least 1.
"""
import re
from typing import Any, Dict, Mapping, Iterable
def _is_mapping(x: Any) -> bool:
# be liberal: Mapping covers dict-like; fallback to dict check
try:
return isinstance(x, Mapping)
except Exception:
return isinstance(x, dict)
def active_docker_container_count(applications: Mapping[str, Any],
group_names: Iterable[str],
prefix_regex: str = r'^(web-|svc-).*',
ensure_min_one: bool = False) -> int:
if not _is_mapping(applications):
return 1 if ensure_min_one else 0
group_set = set(group_names or [])
try:
pattern = re.compile(prefix_regex)
except re.error:
pattern = re.compile(r'^(web-|svc-).*') # fallback
count = 0
for app_key, app_val in applications.items():
# host selection + name prefix
if app_key not in group_set:
continue
if not pattern.match(str(app_key)):
continue
docker = app_val.get('docker') if _is_mapping(app_val) else None
services = docker.get('services') if _is_mapping(docker) else None
if not _is_mapping(services):
# sometimes roles define a single service name string; ignore
continue
for _svc_name, svc_cfg in services.items():
if not _is_mapping(svc_cfg):
# allow shorthand like: service: {} or image string -> counts as enabled
count += 1
continue
enabled = svc_cfg.get('enabled', True)
if isinstance(enabled, bool):
if enabled:
count += 1
else:
# non-bool enabled -> treat "truthy" as enabled
if bool(enabled):
count += 1
if ensure_min_one and count < 1:
return 1
return count
class FilterModule(object):
def filters(self):
return {
# usage: {{ applications | active_docker_container_count(group_names) }}
'active_docker_container_count': active_docker_container_count,
}
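A quick usage example of the filter above (made-up inventory data):

    applications = {
        "web-app-xwiki": {"docker": {"services": {"xwiki": {"enabled": True}, "cron": {}}}},
        "svc-db-postgres": {"docker": {"services": {"postgres": {"enabled": False}}}},
    }
    group_names = ["web-app-xwiki", "svc-db-postgres"]
    # xwiki counts 2 (explicitly enabled + missing flag), postgres counts 0
    assert active_docker_container_count(applications, group_names) == 2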


@@ -158,26 +158,31 @@ class FilterModule(object):
for directive in directives:
tokens = ["'self'"]
# 1) Load flags (includes defaults from get_csp_flags)
# Load flags (includes defaults from get_csp_flags)
flags = self.get_csp_flags(applications, application_id, directive)
tokens += flags
# 2) Allow fetching from internal CDN by default for selected directives
# Allow fetching from internal CDN by default for selected directives
if directive in ['script-src-elem', 'connect-src', 'style-src-elem']:
tokens.append(get_url(domains, 'web-svc-cdn', web_protocol))
# 3) Matomo integration if feature is enabled
# Matomo integration if feature is enabled
if directive in ['script-src-elem', 'connect-src']:
if self.is_feature_enabled(applications, matomo_feature_name, application_id):
tokens.append(get_url(domains, 'web-app-matomo', web_protocol))
# 4) ReCaptcha integration (scripts + frames) if feature is enabled
# Simpleicons integration if feature is enabled
if directive in ['connect-src']:
if self.is_feature_enabled(applications, 'simpleicons', application_id):
tokens.append(get_url(domains, 'web-svc-simpleicons', web_protocol))
# ReCaptcha integration (scripts + frames) if feature is enabled
if self.is_feature_enabled(applications, 'recaptcha', application_id):
if directive in ['script-src-elem', 'frame-src']:
tokens.append('https://www.gstatic.com')
tokens.append('https://www.google.com')
# 5) Frame ancestors handling (desktop + logout support)
# Frame ancestors handling (desktop + logout support)
if directive == 'frame-ancestors':
if self.is_feature_enabled(applications, 'desktop', application_id):
# Allow being embedded by the desktop app domain (and potentially its parent)
@@ -189,10 +194,10 @@ class FilterModule(object):
tokens.append(get_url(domains, 'web-svc-logout', web_protocol))
tokens.append(get_url(domains, 'web-app-keycloak', web_protocol))
# 6) Custom whitelist entries
# Custom whitelist entries
tokens += self.get_csp_whitelist(applications, application_id, directive)
# 7) Add inline content hashes ONLY if final tokens do NOT include 'unsafe-inline'
# Add inline content hashes ONLY if final tokens do NOT include 'unsafe-inline'
# (Check tokens, not flags, to include defaults and later modifications.)
if "'unsafe-inline'" not in tokens:
for snippet in self.get_csp_inline_content(applications, application_id, directive):
@@ -201,7 +206,7 @@ class FilterModule(object):
# Append directive
parts.append(f"{directive} {' '.join(tokens)};")
# 8) Static img-src directive (kept permissive for data/blob and any host)
# Static img-src directive (kept permissive for data/blob and any host)
parts.append("img-src * data: blob:;")
return ' '.join(parts)


@@ -7,7 +7,7 @@ class FilterModule(object):
def filters(self):
return {'domain_mappings': self.domain_mappings}
def domain_mappings(self, apps, PRIMARY_DOMAIN):
def domain_mappings(self, apps, primary_domain, auto_build_alias):
"""
Build a flat list of redirect mappings for all apps:
- source: each alias domain
@@ -43,7 +43,7 @@ class FilterModule(object):
domains_cfg = cfg.get('server',{}).get('domains',{})
entry = domains_cfg.get('canonical')
if entry is None:
canonical_map[app_id] = [default_domain(app_id, PRIMARY_DOMAIN)]
canonical_map[app_id] = [default_domain(app_id, primary_domain)]
elif isinstance(entry, dict):
canonical_map[app_id] = list(entry.values())
elif isinstance(entry, list):
@@ -61,11 +61,11 @@ class FilterModule(object):
alias_map[app_id] = []
continue
if isinstance(domains_cfg, dict) and not domains_cfg:
alias_map[app_id] = [default_domain(app_id, PRIMARY_DOMAIN)]
alias_map[app_id] = [default_domain(app_id, primary_domain)]
continue
aliases = parse_entry(domains_cfg, 'aliases', app_id) or []
default = default_domain(app_id, PRIMARY_DOMAIN)
default = default_domain(app_id, primary_domain)
has_aliases = 'aliases' in domains_cfg
has_canonical = 'canonical' in domains_cfg
@@ -74,7 +74,7 @@ class FilterModule(object):
aliases.append(default)
elif has_canonical:
canon = canonical_map.get(app_id, [])
if default not in canon and default not in aliases:
if default not in canon and default not in aliases and auto_build_alias:
aliases.append(default)
alias_map[app_id] = aliases
@@ -84,7 +84,7 @@ class FilterModule(object):
mappings = []
for app_id, sources in alias_map.items():
canon_list = canonical_map.get(app_id, [])
target = canon_list[0] if canon_list else default_domain(app_id, PRIMARY_DOMAIN)
target = canon_list[0] if canon_list else default_domain(app_id, primary_domain)
for src in sources:
if src == target:
# skip self-redirects


@@ -4,7 +4,7 @@ class FilterModule(object):
def filters(self):
return {'generate_all_domains': self.generate_all_domains}
def generate_all_domains(self, domains_dict, include_www=True):
def generate_all_domains(self, domains_dict, include_www:bool=True):
"""
Transform a dict of domains (values: str, list, dict) into a flat list,
optionally add 'www.' prefixes, dedupe and sort alphabetically.


@@ -20,9 +20,10 @@ def get_docker_paths(application_id: str, path_docker_compose_instances: str) ->
'config': f"{base}config/",
},
'files': {
'env': f"{base}.env/env",
'docker_compose': f"{base}docker-compose.yml",
'dockerfile': f"{base}Dockerfile",
'env': f"{base}.env/env",
'docker_compose': f"{base}docker-compose.yml",
'docker_compose_override': f"{base}docker-compose.override.yml",
'dockerfile': f"{base}Dockerfile",
}
}


@@ -0,0 +1,77 @@
from __future__ import annotations
import sys, os, re
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from ansible.errors import AnsibleFilterError
from module_utils.config_utils import get_app_conf
from module_utils.entity_name_utils import get_entity_name
_UNIT_RE = re.compile(r'^\s*(\d+(?:\.\d+)?)\s*([kKmMgGtT]?[bB]?)?\s*$')
_FACTORS = {
'': 1, 'b': 1,
'k': 1024, 'kb': 1024,
'm': 1024**2, 'mb': 1024**2,
'g': 1024**3, 'gb': 1024**3,
't': 1024**4, 'tb': 1024**4,
}
def _to_bytes(v: str) -> int:
if v is None:
raise AnsibleFilterError("jvm_filters: size value is None")
s = str(v).strip()
m = _UNIT_RE.match(s)
if not m:
raise AnsibleFilterError(f"jvm_filters: invalid size '{v}'")
num, unit = m.group(1), (m.group(2) or '').lower()
try:
val = float(num)
except ValueError as e:
raise AnsibleFilterError(f"jvm_filters: invalid numeric size '{v}'") from e
factor = _FACTORS.get(unit)
if factor is None:
raise AnsibleFilterError(f"jvm_filters: unknown unit in '{v}'")
return int(val * factor)
def _to_mb(v: str) -> int:
return max(0, _to_bytes(v) // (1024 * 1024))
def _svc(app_id: str) -> str:
return get_entity_name(app_id)
def _mem_limit_mb(apps: dict, app_id: str) -> int:
svc = _svc(app_id)
raw = get_app_conf(apps, app_id, f"docker.services.{svc}.mem_limit")
mb = _to_mb(raw)
if mb <= 0:
raise AnsibleFilterError(f"jvm_filters: mem_limit for '{svc}' must be > 0 MB (got '{raw}')")
return mb
def _mem_res_mb(apps: dict, app_id: str) -> int:
svc = _svc(app_id)
raw = get_app_conf(apps, app_id, f"docker.services.{svc}.mem_reservation")
mb = _to_mb(raw)
if mb <= 0:
raise AnsibleFilterError(f"jvm_filters: mem_reservation for '{svc}' must be > 0 MB (got '{raw}')")
return mb
def jvm_max_mb(apps: dict, app_id: str) -> int:
"""Xmx = min( floor(0.7*limit), limit-1024, 12288 ) with floor at 1024 MB."""
limit_mb = _mem_limit_mb(apps, app_id)
c1 = (limit_mb * 7) // 10
c2 = max(0, limit_mb - 1024)
c3 = 12288
return max(1024, min(c1, c2, c3))
def jvm_min_mb(apps: dict, app_id: str) -> int:
"""Xms = min( floor(Xmx/2), mem_reservation, Xmx ) with floor at 512 MB."""
xmx = jvm_max_mb(apps, app_id)
res = _mem_res_mb(apps, app_id)
return max(512, min(xmx // 2, res, xmx))
class FilterModule(object):
def filters(self):
return {
"jvm_max_mb": jvm_max_mb,
"jvm_min_mb": jvm_min_mb,
}
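Worked example of the two formulas, assuming a hypothetical 8g limit and 6g reservation:

    limit_mb, reservation_mb = 8192, 6144
    xmx = max(1024, min((limit_mb * 7) // 10,   # 70% of the limit -> 5734
                        limit_mb - 1024,        # keep 1 GB headroom -> 7168
                        12288))                 # absolute cap
    xms = max(512, min(xmx // 2, reservation_mb, xmx))
    print(xmx, xms)                             # 5734 2867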


@@ -0,0 +1,40 @@
# filter_plugins/resource_filter.py
from __future__ import annotations
import sys, os
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from module_utils.config_utils import get_app_conf, AppConfigKeyError, ConfigEntryNotSetError # noqa: F401
from module_utils.entity_name_utils import get_entity_name
from ansible.errors import AnsibleFilterError
def resource_filter(
applications: dict,
application_id: str,
key: str,
service_name: str,
hard_default,
):
"""
Lookup order:
1) docker.services.<service_name or get_entity_name(application_id)>.<key>
2) hard_default (mandatory)
- service_name may be "" → will resolve to get_entity_name(application_id).
- hard_default is mandatory (no implicit None).
- required=False always.
"""
try:
primary_service = service_name if service_name != "" else get_entity_name(application_id)
return get_app_conf(applications, application_id, f"docker.services.{primary_service}.{key}", False, hard_default)
except (AppConfigKeyError, ConfigEntryNotSetError) as e:
raise AnsibleFilterError(str(e))
class FilterModule(object):
def filters(self):
return {
"resource_filter": resource_filter,
}
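A simplified sketch of the lookup order, with get_app_conf reduced to plain dict access (illustrative only):

    def lookup(applications, app_id, key, service, default):
        # 1) docker.services.<service>.<key>, 2) hard default
        svc = applications.get(app_id, {}).get("docker", {}).get("services", {}).get(service, {})
        return svc.get(key, default)

    apps = {"web-app-xwiki": {"docker": {"services": {"xwiki": {"cpus": "1.0"}}}}}
    assert lookup(apps, "web-app-xwiki", "cpus", "xwiki", 0.5) == "1.0"      # explicit value wins
    assert lookup(apps, "web-app-xwiki", "pids_limit", "xwiki", 512) == 512  # falls back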


@@ -32,8 +32,10 @@ WEBSOCKET_PROTOCOL: "{{ 'wss' if WEB_PROTOCOL == 'https' else 'ws' }}"
# WWW redirect to non-WWW domains enabled
WWW_REDIRECT_ENABLED: "{{ ('web-opt-rdr-www' in group_names) | bool }}"
AUTO_BUILD_ALIASES: False # If enabled, an alias domain is created for each web application based on the entity name; recommended to set to false to save domain space
# Domain
PRIMARY_DOMAIN: "localhost" # Primary Domain of the server
PRIMARY_DOMAIN: "localhost" # Primary Domain of the server
DNS_PROVIDER: cloudflare # The DNS Provider\Registrar for the domain
@@ -58,7 +60,7 @@ DOCKER_WHITELISTET_ANON_VOLUMES: []
# Async Configuration
ASYNC_ENABLED: "{{ not MODE_DEBUG | bool }}" # Activate async, deactivated for debugging
ASYNC_TIME: "{{ 300 if ASYNC_ENABLED | bool else omit }}" # Run for mnax 5min
ASYNC_TIME: "{{ 300 if ASYNC_ENABLED | bool else omit }}" # Run for max 5min
ASYNC_POLL: "{{ 0 if ASYNC_ENABLED | bool else 10 }}" # Don't wait for task
# default value if not set via CLI (-e) or in playbook vars
@@ -84,4 +86,4 @@ _applications_nextcloud_oidc_flavor: >-
RBAC:
GROUP:
NAME: "/roles" # Name of the group which holds the RBAC roles
CLAIM: "groups" # Name of the claim containing the RBAC groups
CLAIM: "groups" # Name of the claim containing the RBAC groups


@@ -1,9 +1,10 @@
# Mode
# The following modes can be combined with each other
MODE_TEST: false # Executes test routines instead of productive routines
MODE_DUMMY: false # Executes dummy/test routines instead of productive routines
MODE_UPDATE: true # Executes updates
MODE_DEBUG: false # This enables debugging in ansible and in the apps, You SHOULD NOT enable this on production servers
MODE_RESET: false # Cleans up all Infinito.Nexus files. It's necessary to run the whole playbook and not partial roles when using this function.
MODE_CLEANUP: "{{ MODE_DEBUG | bool }}" # Cleanup unused files and configurations
MODE_ASSERT: "{{ MODE_DEBUG | bool }}" # Executes validation tasks during the run.
MODE_BACKUP: true # Executes the Backup before the deployment


@@ -29,4 +29,31 @@ NGINX:
IMAGE: "/tmp/cache_nginx_image/" # Directory which nginx uses to cache images
USER: "http" # Default nginx user in ArchLinux
# Effective CPUs (float) across proxy and the current app
WEBSERVER_CPUS_EFFECTIVE: >-
{{
[
(applications | resource_filter('svc-prx-openresty', 'cpus', service_name | default(''), RESOURCE_CPUS)) | float,
(applications | resource_filter(application_id, 'cpus', service_name | default(''), RESOURCE_CPUS)) | float
] | min
}}
# Nginx requires an integer for worker_processes:
# - if cpus < 1 → 1
# - else → floor to int
WEBSERVER_WORKER_PROCESSES: >-
{{
1 if (WEBSERVER_CPUS_EFFECTIVE | float) < 1
else (WEBSERVER_CPUS_EFFECTIVE | float | int)
}}
# worker_connections from pids_limit (use the smaller one), with correct key/defaults
WEBSERVER_WORKER_CONNECTIONS: >-
{{
[
(applications | resource_filter('svc-prx-openresty', 'pids_limit', service_name | default(''), RESOURCE_PIDS_LIMIT)) | int,
(applications | resource_filter(application_id, 'pids_limit', service_name | default(''), RESOURCE_PIDS_LIMIT)) | int
] | min
}}
# @todo It probably makes sense to distinguish between target and source mount paths, so that the config files can be stored in the openresty volumes folder
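The derivation in plain Python, with made-up per-service values:

    proxy_cpus, app_cpus = 2.0, 0.8     # hypothetical resource_filter results
    proxy_pids, app_pids = 1024, 512
    cpus_effective = min(proxy_cpus, app_cpus)                           # 0.8
    worker_processes = 1 if cpus_effective < 1 else int(cpus_effective)  # 1
    worker_connections = min(proxy_pids, app_pids)                       # 512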


@@ -6,12 +6,12 @@ SYS_TIMER_ALL_ENABLED: "{{ MODE_DEBUG }}" # Runtime Var
## Server Tact Variables
HOURS_SERVER_AWAKE: "0..23" # Ours in which the server is "awake" (100% working). Rest of the time is reserved for maintanance
HOURS_SERVER_AWAKE: "6..23" # Ours in which the server is "awake" (100% working). Rest of the time is reserved for maintanance
RANDOMIZED_DELAY_SEC: "5min" # Random delay for systemd timers to avoid peak loads.
## Timeouts for all services
SYS_TIMEOUT_DOCKER_RPR_HARD: "10min"
SYS_TIMEOUT_DOCKER_RPR_SOFT: "{{ SYS_TIMEOUT_DOCKER_RPR_HARD }}"
SYS_TIMEOUT_DOCKER_RPR_SOFT: "{{ SYS_TIMEOUT_DOCKER_RPR_HARD }}"
SYS_TIMEOUT_CLEANUP_SERVICES: "15min"
SYS_TIMEOUT_DOCKER_UPDATE: "20min"
SYS_TIMEOUT_STORAGE_OPTIMIZER: "{{ SYS_TIMEOUT_DOCKER_UPDATE }}"


@@ -104,6 +104,14 @@ defaults_networks:
subnet: 192.168.103.224/28
web-app-xwiki:
subnet: 192.168.103.240/28
web-app-openwebui:
subnet: 192.168.104.0/28
web-app-flowise:
subnet: 192.168.104.16/28
web-app-minio:
subnet: 192.168.104.32/28
web-svc-coturn:
subnet: 192.168.104.48/28
# /24 Networks / 254 Usable Clients
web-app-bigbluebutton:
@@ -116,3 +124,5 @@ defaults_networks:
subnet: 192.168.201.0/24
svc-db-openldap:
subnet: 192.168.202.0/24
svc-ai-ollama:
subnet: 192.168.203.0/24 # Big network to bridge applications into ai


@@ -75,21 +75,34 @@ ports:
web-app-bluesky_view: 8051
web-app-magento: 8052
web-app-bridgy-fed: 8053
web-app-xwiki: 8054
web-app-xwiki: 8054
web-app-openwebui: 8055
web-app-flowise: 8056
web-app-minio_api: 8057
web-app-minio_console: 8058
web-app-bigbluebutton: 48087 # This port is predefined by bbb. @todo Try to change this to an 8XXX port
public:
# The following ports should be changed to 22 on the subdomain via stream mapping
ssh:
web-app-gitea: 2201
web-app-gitlab: 2202
web-app-gitea: 2201
web-app-gitlab: 2202
ldaps:
svc-db-openldap: 636
stun:
web-app-bigbluebutton: 3478 # Not sure whether this belongs here or in the localhost section
# Occupied by BBB: 3479
web-app-nextcloud: 3480
turn:
web-app-bigbluebutton: 5349 # Not sure whether this belongs here or in the localhost section
web-app-nextcloud: 5350 # Not used yet
svc-db-openldap: 636
stun_turn:
web-app-bigbluebutton: 3478 # Not sure whether this belongs here or in the localhost section
# Occupied by BBB: 3479
web-app-nextcloud: 3480
web-svc-coturn: 3481
stun_turn_tls:
web-app-bigbluebutton: 5349 # Not sure whether this belongs here or in the localhost section
web-app-nextcloud: 5350 # Not used yet
web-svc-coturn: 5351
federation:
web-app-matrix_synapse: 8448
relay_port_ranges:
web-svc-coturn_start: 20000
web-svc-coturn_end: 39999
web-app-bigbluebutton_start: 40000
web-app-bigbluebutton_end: 49999
web-app-nextcloud_start: 50000
web-app-nextcloud_end: 59999

group_vars/all/17_ai.yml (new file, 3 lines)

@@ -0,0 +1,3 @@
# URL of Local Ollama Container
OLLAMA_BASE_LOCAL_URL: "http://{{ applications | get_app_conf('svc-ai-ollama', 'docker.services.ollama.name') }}:{{ applications | get_app_conf('svc-ai-ollama', 'docker.services.ollama.port') }}"
OLLAMA_LOCAL_ENABLED: "{{ applications | get_app_conf(application_id, 'features.local_ai') }}"


@@ -0,0 +1,47 @@
# Host resources
RESOURCE_HOST_CPUS: "{{ ansible_processor_vcpus | int }}"
RESOURCE_HOST_MEM: "{{ (ansible_memtotal_mb | int) // 1024 }}"
# Reserve for OS
RESOURCE_HOST_RESERVE_CPU: 2
RESOURCE_HOST_RESERVE_MEM: 4
# Available for apps
RESOURCE_AVAIL_CPUS: "{{ (RESOURCE_HOST_CPUS | int) - (RESOURCE_HOST_RESERVE_CPU | int) }}"
RESOURCE_AVAIL_MEM: "{{ (RESOURCE_HOST_MEM | int) - (RESOURCE_HOST_RESERVE_MEM | int) }}"
# Count active docker services (only roles starting with web- or svc-; service counts if enabled==true OR enabled is undefined)
RESOURCE_ACTIVE_DOCKER_CONTAINER_COUNT: >-
{{
applications
| active_docker_container_count(group_names, '^(web-|svc-).*', ensure_min_one=True)
}}
# Per-container fair share (numbers!), later we append 'g' only for the string fields in compose
RESOURCE_CPUS_NUM: >-
{{
[
(
((RESOURCE_AVAIL_CPUS | float) / (RESOURCE_ACTIVE_DOCKER_CONTAINER_COUNT | float))
| round(2)
),
0.5
] | max
}}
RESOURCE_MEM_RESERVATION_NUM: >-
{{
(((RESOURCE_AVAIL_MEM | float) / (RESOURCE_ACTIVE_DOCKER_CONTAINER_COUNT | float)) * 0.7)
| round(1)
}}
RESOURCE_MEM_LIMIT_NUM: >-
{{
(((RESOURCE_AVAIL_MEM | float) / (RESOURCE_ACTIVE_DOCKER_CONTAINER_COUNT | float)) * 1.0)
| round(1)
}}
# Final strings with units for compose defaults (keep numbers above for math elsewhere if needed)
RESOURCE_CPUS: "{{ RESOURCE_CPUS_NUM }}"
RESOURCE_MEM_RESERVATION: "{{ RESOURCE_MEM_RESERVATION_NUM }}g"
RESOURCE_MEM_LIMIT: "{{ RESOURCE_MEM_LIMIT_NUM }}g"
RESOURCE_PIDS_LIMIT: 512
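Worked example for a hypothetical 8-vCPU / 32 GB host running six active services:

    host_cpus, host_mem_gb = 8, 32
    avail_cpus = host_cpus - 2                           # OS CPU reserve
    avail_mem = host_mem_gb - 4                          # OS memory reserve
    count = 6                                            # active web-/svc- services
    cpus = max(round(avail_cpus / count, 2), 0.5)        # 1.0
    mem_reservation = round(avail_mem / count * 0.7, 1)  # 3.3 -> "3.3g"
    mem_limit = round(avail_mem / count * 1.0, 1)        # 4.7 -> "4.7g"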


@@ -142,7 +142,8 @@ class InventoryManager:
"""
if algorithm == "random_hex":
return secrets.token_hex(64)
if algorithm == "random_hex_32":
return secrets.token_hex(32)
if algorithm == "sha256":
return hashlib.sha256(secrets.token_bytes(32)).hexdigest()
if algorithm == "sha1":


@@ -1,10 +1,10 @@
- name: Execute {{ SOFTWARE_NAME }} Play
- name: "Execute {{ SOFTWARE_NAME }} Play"
hosts: all
tasks:
- name: "Load 'constructor' tasks"
include_tasks: "tasks/stages/01_constructor.yml"
- name: "Load '{{host_type}}' tasks"
include_tasks: "tasks/stages/02_{{host_type}}.yml"
- name: "Load '{{ host_type }}' tasks"
include_tasks: "tasks/stages/02_{{ host_type }}.yml"
- name: "Load 'destructor' tasks"
include_tasks: "tasks/stages/03_destructor.yml"
become: true


@@ -148,6 +148,11 @@ roles:
description: "Network setup (DNS, Let's Encrypt HTTP, WireGuard, etc.)"
icon: "fas fa-globe"
invokable: true
ai:
title: "AI Services"
description: "Core AI building blocks—model serving, OpenAI-compatible gateways, vector databases, orchestration, and chat UIs."
icon: "fas fa-brain"
invokable: true
user:
title: "Users & Access"
description: "User accounts & access control"


@@ -5,8 +5,8 @@
- name: Link homefolders to cloud
ansible.builtin.file:
src: "{{nextcloud_cloud_directory}}{{item}}"
dest: "{{nextcloud_user_home_directory}}{{item}}"
src: "{{nextcloud_cloud_directory}}{{ item }}"
dest: "{{nextcloud_user_home_directory}}{{ item }}"
owner: "{{ users[desktop_username].username }}"
group: "{{ users[desktop_username].username }}"
state: link


@@ -1,11 +1,11 @@
---
- name: Setup locale.gen
template:
template:
src: locale.gen.j2
dest: /etc/locale.gen
- name: Setup locale.conf
template:
template:
src: locale.conf.j2
dest: /etc/locale.conf


@@ -0,0 +1,4 @@
AUR_HELPER: yay
AUR_BUILDER_USER: aur_builder
AUR_BUILDER_GROUP: wheel
AUR_BUILDER_SUDOERS_PATH: /etc/sudoers.d/11-install-aur_builder


@@ -6,42 +6,53 @@
- dev-git
- dev-base-devel
- name: install yay
- name: Install yay build prerequisites
community.general.pacman:
name:
- base-devel
- patch
state: present
- name: Create the `aur_builder` user
- name: Create the AUR builder user
become: true
ansible.builtin.user:
name: aur_builder
name: "{{ AUR_BUILDER_USER }}"
create_home: yes
group: wheel
group: "{{ AUR_BUILDER_GROUP }}"
- name: Allow the `aur_builder` user to run `sudo pacman` without a password
- name: Allow AUR builder to run pacman without password
become: true
ansible.builtin.lineinfile:
path: /etc/sudoers.d/11-install-aur_builder
line: 'aur_builder ALL=(ALL) NOPASSWD: /usr/bin/pacman'
path: "{{ AUR_BUILDER_SUDOERS_PATH }}"
line: '{{ AUR_BUILDER_USER }} ALL=(ALL) NOPASSWD: /usr/bin/pacman'
create: yes
validate: 'visudo -cf %s'
- name: Clone yay from AUR
become: true
become_user: aur_builder
become_user: "{{ AUR_BUILDER_USER }}"
git:
repo: https://aur.archlinux.org/yay.git
dest: /home/aur_builder/yay
dest: "/home/{{ AUR_BUILDER_USER }}/yay"
clone: yes
update: yes
- name: Build and install yay
become: true
become_user: aur_builder
become_user: "{{ AUR_BUILDER_USER }}"
shell: |
cd /home/aur_builder/yay
cd /home/{{ AUR_BUILDER_USER }}/yay
makepkg -si --noconfirm
args:
creates: /usr/bin/yay
- name: Upgrade the system using yay, acting only on AUR packages
become: true
become_user: "{{ AUR_BUILDER_USER }}"
kewlfft.aur.aur:
upgrade: yes
use: "{{ AUR_HELPER }}"
aur_only: yes
when: MODE_UPDATE | bool
- include_tasks: utils/run_once.yml


@@ -1,5 +1,3 @@
- block:
- include_tasks: 01_core.yml
- set_fact:
run_once_dev_yay: true
when: run_once_dev_yay is not defined


@@ -1,3 +1,3 @@
docker_compose_skipp_file_creation: false # If set to true the file creation will be skipped
docker_pull_git_repository: false # Activates docker repository download and routine
docker_compose_flush_handlers: false # Set to true in the vars/main.yml of the including role to autoflush after docker compose routine
docker_compose_file_creation_enabled: true # If set to false the file creation will be skipped
docker_pull_git_repository: false # Activates docker repository download and routine
docker_compose_flush_handlers: false # Set to true in the vars/main.yml of the including role to autoflush after docker compose routine


@@ -9,7 +9,6 @@
listen:
- docker compose up
- docker compose restart
- docker compose just up
when: MODE_ASSERT | bool
- name: docker compose pull
@@ -41,9 +40,8 @@
listen:
- docker compose up
- docker compose restart
- docker compose just up
- name: Build docker compose
- name: Build docker compose
shell: |
set -euo pipefail
docker compose build || {
@@ -77,7 +75,6 @@
DOCKER_CLIENT_TIMEOUT: 600
listen:
- docker compose up
- docker compose just up # @todo replace later just up by up when code is refactored, build atm is also listening to up
- name: docker compose restart
command:


@@ -1,15 +1,18 @@
- name: Set default docker_repository_path
set_fact:
docker_repository_path: "{{docker_compose.directories.services}}repository/"
docker_repository_path: "{{ [ docker_compose.directories.services, 'repository/' ] | path_join }}"
- name: pull docker repository
git:
repo: "{{ docker_repository_address }}"
dest: "{{ docker_repository_path }}"
version: "{{ docker_repository_branch | default('main') }}"
depth: 1
update: yes
recursive: yes
repo: "{{ docker_repository_address }}"
dest: "{{ docker_repository_path }}"
version: "{{ docker_repository_branch | default('main') }}"
single_branch: yes
depth: 1
update: yes
recursive: yes
force: yes
accept_hostkey: yes
notify:
- docker compose build
- docker compose up


@@ -6,8 +6,8 @@
- "{{ application_id | abs_role_path_by_application_id }}/templates/Dockerfile.j2"
- "{{ application_id | abs_role_path_by_application_id }}/files/Dockerfile"
notify:
- docker compose up
- docker compose build
- docker compose up
register: create_dockerfile_result
failed_when:
- create_dockerfile_result is failed
@@ -28,6 +28,21 @@
- env_template is failed
- "'Could not find or access' not in env_template.msg"
- name: "Create (optional) '{{ docker_compose.files.docker_compose_override }}'"
template:
src: "{{ item }}"
dest: "{{ docker_compose.files.docker_compose_override }}"
mode: '770'
force: yes
notify: docker compose up
register: docker_compose_override_template
loop:
- "{{ application_id | abs_role_path_by_application_id }}/templates/docker-compose.override.yml.j2"
- "{{ application_id | abs_role_path_by_application_id }}/files/docker-compose.override.yml"
failed_when:
- docker_compose_override_template is failed
- "'Could not find or access' not in docker_compose_override_template.msg"
- name: "Create (obligatoric) '{{ docker_compose.files.docker_compose }}'"
template:
src: "docker-compose.yml.j2"


@@ -24,7 +24,7 @@
include_tasks: "04_files.yml"
- name: "Ensure that {{ docker_compose.directories.instance }} is up"
include_tasks: "05_ensure_up.yml"
when: not docker_compose_skipp_file_creation | bool
when: docker_compose_file_creation_enabled | bool
- name: "flush docker compose for '{{ application_id }}'"
meta: flush_handlers


@@ -1,5 +1,6 @@
{# This template needs to be included in docker-compose.yml #}
networks:
{# Central RDMS-Database Network #}
{% if
(applications | get_app_conf(application_id, 'features.central_database', False) and database_type is defined) or
application_id in ['svc-db-mariadb','svc-db-postgres']
@@ -7,6 +8,7 @@ networks:
{{ applications | get_app_conf('svc-db-' ~ database_type, 'docker.network') }}:
external: true
{% endif %}
{# Central LDAP Network #}
{% if
applications | get_app_conf(application_id, 'features.ldap', False) and
applications | get_app_conf('svc-db-openldap', 'network.docker', False)
@@ -14,7 +16,13 @@ networks:
{{ applications | get_app_conf('svc-db-openldap', 'docker.network') }}:
external: true
{% endif %}
{% if not application_id.startswith('svc-db-') %}
{# Central AI Network #}
{% if applications | get_app_conf(application_id, 'features.local_ai', False) %}
{{ applications | get_app_conf('svc-ai-ollama', 'docker.network') }}:
external: true
{% endif %}
{# Default Network #}
{% if not application_id.startswith('svc-db-') and not application_id.startswith('svc-ai-') %}
default:
{% if
application_id in networks.local and
@@ -25,7 +33,7 @@ networks:
ipam:
driver: default
config:
- subnet: {{networks.local[application_id].subnet}}
- subnet: {{ networks.local[application_id].subnet }}
{% endif %}
{% endif %}
{{ "\n" }}


@@ -1,4 +1,6 @@
users:
blackhole:
description: "Everything sent to this user will disappear"
username: "blackhole"
username: "blackhole"
roles:
- mail-bot


@@ -1,2 +1,4 @@
# @See https://chatgpt.com/share/67a23d18-fb54-800f-983c-d6d00752b0b4
docker_compose: "{{ application_id | get_docker_paths(PATH_DOCKER_COMPOSE_INSTANCES) }}"
docker_compose: "{{ application_id | get_docker_paths(PATH_DOCKER_COMPOSE_INSTANCES) }}"
docker_compose_command_base: "docker compose --env-file {{ docker_compose.files.env }}"
docker_compose_command_exec: "{{ docker_compose_command_base }} exec"


@@ -1,11 +1,13 @@
{# Base for docker services #}
restart: {{ DOCKER_RESTART_POLICY }}
restart: {{ docker_restart_policy | default(DOCKER_RESTART_POLICY) }}
{% if application_id | has_env %}
env_file:
- "{{ docker_compose.files.env }}"
{% endif %}
logging:
driver: journald
{% filter indent(4) %}
{% include 'roles/docker-container/templates/resource.yml.j2' %}
{% endfilter %}
{{ "\n" }}


@@ -0,0 +1,6 @@
{# Integrate this into service sections that are built from a Dockerfile #}
pull_policy: never
build:
context: .
dockerfile: Dockerfile
{# pass Arguments here #}


@@ -1,15 +1,25 @@
{# This template needs to be included in docker-compose.yml containers #}
networks:
{# Central RDMS-Database Network #}
{% if
(applications | get_app_conf(application_id, 'features.central_database', False) and database_type is defined) or
application_id in ['svc-db-mariadb','svc-db-postgres']
%}
{{ applications | get_app_conf('svc-db-' ~ database_type, 'docker.network') }}:
{% if application_id in ['svc-db-mariadb','svc-db-postgres'] %}
aliases:
- {{ database_type }}
{% endif %}
{% endif %}
{# Central LDAP Network #}
{% if applications | get_app_conf(application_id, 'features.ldap', False) and applications | get_app_conf('svc-db-openldap', 'network.docker') %}
{{ applications | get_app_conf('svc-db-openldap', 'docker.network') }}:
{% endif %}
{% if application_id != 'svc-db-openldap' %}
{# Central AI Network #}
{% if applications | get_app_conf(application_id, 'features.local_ai', False) %}
{{ applications | get_app_conf('svc-ai-ollama', 'docker.network') }}:
{% endif %}
{% if not application_id.startswith('svc-db-') and not application_id.startswith('svc-ai-') %}
default:
{% endif %}
{{ "\n" }}


@@ -0,0 +1,4 @@
cpus: {{ applications | resource_filter(application_id, 'cpus', service_name | default(''), RESOURCE_CPUS) }}
mem_reservation: {{ applications | resource_filter(application_id, 'mem_reservation', service_name | default(''), RESOURCE_MEM_RESERVATION) }}
mem_limit: {{ applications | resource_filter(application_id, 'mem_limit', service_name | default(''), RESOURCE_MEM_LIMIT) }}
pids_limit: {{ applications | resource_filter(application_id, 'pids_limit', service_name | default(''), RESOURCE_PIDS_LIMIT) }}


@@ -4,7 +4,7 @@
run_once_pkgmgr_install: true
when: run_once_pkgmgr_install is not defined
- name: update {{ package_name }}
- name: "update {{ package_name }}"
ansible.builtin.shell: |
source ~/.venvs/pkgmgr/bin/activate
pkgmgr update {{ package_name }} --dependencies --clone-mode https


@@ -0,0 +1,23 @@
# Ollama
## Description
**Ollama** is a local model server that runs open LLMs on your hardware and exposes a simple HTTP API. It's the backbone for privacy-first AI: prompts and data stay on your machines.
## Overview
After the first model pull, Ollama serves models to clients like Open WebUI (for chat) and Flowise (for workflows). Models are cached locally for quick reuse and can run fully offline when required.
## Features
* Run popular open models (chat, code, embeddings) locally
* Simple, predictable HTTP API for developers
* Local caching to avoid repeated downloads
* Works seamlessly with Open WebUI and Flowise
* Offline-capable for air-gapped deployments
## Further Resources
* Ollama — [https://ollama.com](https://ollama.com)
* Ollama Model Library — [https://ollama.com/library](https://ollama.com/library)
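A minimal client call against Ollama's documented /api/generate endpoint (the port matches this role's config; the model name is just an example):

    import json, urllib.request

    req = urllib.request.Request(
        "http://127.0.0.1:11434/api/generate",
        data=json.dumps({"model": "llama3", "prompt": "Say hi", "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])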


@@ -0,0 +1,22 @@
features:
local_ai: true # Needs to be set so that the network is loaded
docker:
services:
ollama:
backup:
no_stop_required: true
image: ollama/ollama
version: latest
name: ollama
port: 11434
cpus: "4.0"
mem_reservation: "6g"
mem_limit: "8g"
pids_limit: 2048
volumes:
models: "ollama_models"
network: "ollama"
preload_models:
- "llama3:latest"
- "mistral:latest"
- "nomic-embed-text:latest"


@@ -0,0 +1,25 @@
---
galaxy_info:
author: "Kevin Veen-Birkenbach"
description: "Installs Ollama — a local model server for running open LLMs with a simple HTTP API."
license: "Infinito.Nexus NonCommercial License"
license_url: "https://s.infinito.nexus/license"
company: |
Kevin Veen-Birkenbach
Consulting & Coaching Solutions
https://www.veen.world
galaxy_tags:
- ai
- llm
- inference
- offline
- privacy
- self-hosted
- ollama
repository: "https://s.infinito.nexus/code"
issue_tracker_url: "https://s.infinito.nexus/issues"
documentation: "https://s.infinito.nexus/code/"
logo:
class: "fa-solid fa-microchip"
run_after: []
dependencies: []


@@ -0,0 +1,38 @@
- name: create docker network for Ollama, so that other applications can access it
community.docker.docker_network:
name: "{{ OLLAMA_NETWORK }}"
state: present
ipam_config:
- subnet: "{{ networks.local[application_id].subnet }}"
- name: Include dependency 'sys-svc-docker'
include_role:
name: sys-svc-docker
when: run_once_sys_svc_docker is not defined
- name: "include docker-compose role"
include_role:
name: docker-compose
vars:
docker_compose_flush_handlers: true
- name: Pre-pull Ollama models
vars:
_cmd: "docker exec -i {{ OLLAMA_CONTAINER }} ollama pull {{ model }}"
shell: "{{ _cmd }}"
register: pull_result
loop: "{{ OLLAMA_PRELOAD_MODELS }}"
loop_control:
loop_var: model
async: "{{ ASYNC_TIME if ASYNC_ENABLED | bool else omit }}"
poll: "{{ ASYNC_POLL if ASYNC_ENABLED | bool else omit }}"
changed_when: >
(not (ASYNC_ENABLED | bool)) and (
'downloaded' in (pull_result.stdout | default('')) or
'pulling manifest' in (pull_result.stdout | default(''))
)
failed_when: >
(pull_result.rc | default(0)) != 0 and
('up to date' not in (pull_result.stdout | default('')))
- include_tasks: utils/run_once.yml
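The changed/failed rules of the pre-pull task, condensed into a Python sketch (illustrative only):

    def classify_pull(rc, stdout, async_enabled):
        # changed only when run synchronously and output shows download activity
        changed = (not async_enabled) and (
            "downloaded" in stdout or "pulling manifest" in stdout
        )
        # a non-zero rc is tolerated when the model is already current
        failed = rc != 0 and "up to date" not in stdout
        return changed, failed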


@@ -0,0 +1,5 @@
- block:
- include_tasks: 01_core.yml
vars:
flush_handlers: true
when: run_once_svc_ai_ollama is not defined


@@ -0,0 +1,17 @@
{% include 'roles/docker-compose/templates/base.yml.j2' %}
ollama:
{% include 'roles/docker-container/templates/base.yml.j2' %}
image: {{ OLLAMA_IMAGE }}:{{ OLLAMA_VERSION }}
container_name: {{ OLLAMA_CONTAINER }}
expose:
- "{{ OLLAMA_PORT }}"
volumes:
- ollama_models:/root/.ollama
{% include 'roles/docker-container/templates/networks.yml.j2' %}
{% include 'roles/docker-compose/templates/networks.yml.j2' %}
{% include 'roles/docker-compose/templates/volumes.yml.j2' %}
ollama_models:
name: {{ OLLAMA_VOLUME }}


@@ -0,0 +1,16 @@
# General
application_id: "svc-ai-ollama"
# Docker
docker_compose_flush_handlers: true
# Ollama
# https://ollama.com/
OLLAMA_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.ollama.version') }}"
OLLAMA_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.ollama.image') }}"
OLLAMA_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.ollama.name') }}"
OLLAMA_PORT: "{{ applications | get_app_conf(application_id, 'docker.services.ollama.port') }}"
OLLAMA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.models') }}"
OLLAMA_NETWORK: "{{ applications | get_app_conf(application_id, 'docker.network') }}"
OLLAMA_PRELOAD_MODELS: "{{ applications | get_app_conf(application_id, 'preload_models') }}"


@@ -1,11 +1,16 @@
docker:
services:
mariadb:
version: "latest"
image: "mariadb"
name: "mariadb"
version: "latest"
image: "mariadb"
name: "mariadb"
backup:
database_routine: true
# Performance variables aren't used yet, but will be once a Dockerfile is implemented
cpus: "2.0"
mem_reservation: "2g"
mem_limit: "4g"
pids_limit: 1024
network: "mariadb"
volumes:
data: "mariadb_data"


@@ -5,7 +5,7 @@ network:
docker:
services:
openldap:
image: "bitnami/openldap"
image: "bitnamilegacy/openldap"
name: "openldap"
version: "latest"
network: "openldap"


@@ -0,0 +1,40 @@
- name: Load memberof module from file in OpenLDAP container
shell: >
docker exec -i {{ OPENLDAP_CONTAINER }} ldapmodify -Y EXTERNAL -H ldapi:/// -f "{{ [OPENLDAP_LDIF_PATH_DOCKER, 'configuration/01_member_of_configuration.ldif' ] | path_join }}"
listen:
- "Import configuration LDIF files"
# @todo Remove the following ignore errors when setting up a new server
# Just here because debugging would take too much time
ignore_errors: true
- name: Refint Module Activation for OpenLDAP
shell: >
docker exec -i {{ OPENLDAP_CONTAINER }} ldapadd -Y EXTERNAL -H ldapi:/// -f "{{ [ OPENLDAP_LDIF_PATH_DOCKER, 'configuration/02_member_of_configuration.ldif' ] | path_join }}"
listen:
- "Import configuration LDIF files"
register: ldapadd_result
failed_when: ldapadd_result.rc not in [0, 68]
# @todo Remove the following ignore errors when setting up a new server
# Just here because debugging would take too much time
ignore_errors: true
- name: Refint Overlay Configuration for OpenLDAP
shell: >
docker exec -i {{ OPENLDAP_CONTAINER }} ldapmodify -Y EXTERNAL -H ldapi:/// -f "{{ [ OPENLDAP_LDIF_PATH_DOCKER, 'configuration/03_member_of_configuration.ldif' ] | path_join }}"
listen:
- "Import configuration LDIF files"
register: ldapadd_result
failed_when: ldapadd_result.rc not in [0, 68]
# @todo Remove the following ignore errors when setting up a new server
# Just here because debugging would take too much time
ignore_errors: true
- name: "Import users, groups, etc. to LDAP"
shell: >
docker exec -i {{ OPENLDAP_CONTAINER }} ldapadd -x -D "{{ LDAP.DN.ADMINISTRATOR.DATA }}" -w "{{ LDAP.BIND_CREDENTIAL }}" -c -f "{{ [ OPENLDAP_LDIF_PATH_DOCKER, 'groups', (item | basename | regex_replace('\.j2$', '')) ] | path_join }}"
register: ldapadd_result
changed_when: "'adding new entry' in ldapadd_result.stdout"
failed_when: ldapadd_result.rc not in [0, 20, 68, 65]
listen:
- "Import groups LDIF files"
loop: "{{ query('fileglob', role_path ~ '/templates/ldif/groups/*.j2') | sort }}"


@@ -3,7 +3,7 @@
- name: "Query available LDAP databases"
shell: |
docker exec {{ openldap_name }} \
docker exec {{ OPENLDAP_CONTAINER }} \
ldapsearch -Y EXTERNAL -H ldapi:/// -LLL -b cn=config "(olcDatabase=*)" dn
register: ldap_databases
@@ -27,13 +27,13 @@
- name: "Generate hash for Database Admin password"
shell: |
docker exec {{ openldap_name }} \
docker exec {{ OPENLDAP_CONTAINER }} \
slappasswd -s "{{ LDAP.BIND_CREDENTIAL }}"
register: database_admin_pw_hash
- name: "Reset Database Admin password in LDAP (olcRootPW)"
shell: |
docker exec -i {{ openldap_name }} ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
docker exec -i {{ OPENLDAP_CONTAINER }} ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
dn: {{ data_backend_dn }}
changetype: modify
replace: olcRootPW
@@ -42,13 +42,13 @@
- name: "Generate hash for Configuration Admin password"
shell: |
docker exec {{ openldap_name }} \
docker exec {{ OPENLDAP_CONTAINER }} \
slappasswd -s "{{ applications | get_app_conf(application_id, 'credentials.administrator_password', True) }}"
register: config_admin_pw_hash
- name: "Reset Configuration Admin password in LDAP (olcRootPW)"
shell: |
docker exec -i {{ openldap_name }} ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
docker exec -i {{ OPENLDAP_CONTAINER }} ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
dn: {{ config_backend_dn }}
changetype: modify
replace: olcRootPW


@@ -4,7 +4,7 @@
- name: Ensure LDAP users exist
community.general.ldap_entry:
dn: "{{ LDAP.USER.ATTRIBUTES.ID }}={{ item.key }},{{ LDAP.DN.OU.USERS }}"
server_uri: "{{ openldap_server_uri }}"
server_uri: "{{ OPENLDAP_SERVER_URI }}"
bind_dn: "{{ LDAP.DN.ADMINISTRATOR.DATA }}"
bind_pw: "{{ LDAP.BIND_CREDENTIAL }}"
objectClass: "{{ LDAP.USER.OBJECTS.STRUCTURAL }}"
@@ -30,7 +30,7 @@
- name: Ensure required objectClass values and mail address are present
community.general.ldap_attrs:
dn: "{{ LDAP.USER.ATTRIBUTES.ID }}={{ item.key }},{{ LDAP.DN.OU.USERS }}"
server_uri: "{{ openldap_server_uri }}"
server_uri: "{{ OPENLDAP_SERVER_URI }}"
bind_dn: "{{ LDAP.DN.ADMINISTRATOR.DATA }}"
bind_pw: "{{ LDAP.BIND_CREDENTIAL }}"
attributes:
@@ -46,7 +46,7 @@
- name: "Ensure container for application roles exists"
community.general.ldap_entry:
dn: "{{ LDAP.DN.OU.ROLES }}"
server_uri: "{{ openldap_server_uri }}"
server_uri: "{{ OPENLDAP_SERVER_URI }}"
bind_dn: "{{ LDAP.DN.ADMINISTRATOR.DATA }}"
bind_pw: "{{ LDAP.BIND_CREDENTIAL }}"
objectClass: organizationalUnit


@@ -1,6 +1,6 @@
- name: Gather all users with their current objectClass list
community.general.ldap_search:
server_uri: "{{ openldap_server_uri }}"
server_uri: "{{ OPENLDAP_SERVER_URI }}"
bind_dn: "{{ LDAP.DN.ADMINISTRATOR.DATA }}"
bind_pw: "{{ LDAP.BIND_CREDENTIAL }}"
dn: "{{ LDAP.DN.OU.USERS }}"
@@ -14,16 +14,16 @@
- name: Add only missing auxiliary classes
community.general.ldap_attrs:
server_uri: "{{ openldap_server_uri }}"
server_uri: "{{ OPENLDAP_SERVER_URI }}"
bind_dn: "{{ LDAP.DN.ADMINISTRATOR.DATA }}"
bind_pw: "{{ LDAP.BIND_CREDENTIAL }}"
dn: "{{ item.dn }}"
attributes:
objectClass: "{{ missing_auxiliary }}"
state: present
async: "{{ ASYNC_TIME if ASYNC_ENABLED | bool else omit }}"
poll: "{{ ASYNC_POLL if ASYNC_ENABLED | bool else omit }}"
loop: "{{ ldap_users_with_classes.results }}"
async: "{{ ASYNC_TIME if ASYNC_ENABLED | bool else omit }}"
poll: "{{ ASYNC_POLL if ASYNC_ENABLED | bool else omit }}"
loop: "{{ ldap_users_with_classes.results }}"
loop_control:
label: "{{ item.dn }}"
vars:


@@ -1,7 +1,7 @@
- name: "Create LDIF files at {{ openldap_ldif_host_path }}{{ folder }}"
- name: "Create LDIF files at {{ OPENLDAP_LDIF_PATH_HOST }}{{ folder }}"
template:
src: "{{ item }}"
dest: "{{ openldap_ldif_host_path }}{{ folder }}/{{ item | basename | regex_replace('\\.j2$', '') }}"
dest: "{{ OPENLDAP_LDIF_PATH_HOST }}{{ folder }}/{{ item | basename | regex_replace('\\.j2$', '') }}"
mode: "0770"
loop: >-
{{


@@ -1,25 +1,25 @@
---
- name: "include docker-compose role"
include_role:
include_role:
name: docker-compose
- name: Create {{ domains | get_domain(application_id) }}.conf if LDAP is exposed to internet
template:
template:
src: "nginx.stream.conf.j2"
dest: "{{ NGINX.DIRECTORIES.STREAMS }}{{ domains | get_domain(application_id) }}.conf"
notify: restart openresty
when: applications | get_app_conf(application_id, 'network.public', True) | bool
when: OPENLDAP_NETWORK_SWITCH_PUBLIC | bool
- name: Remove {{ domains | get_domain(application_id) }}.conf if LDAP is not exposed to internet
file:
path: "{{ NGINX.DIRECTORIES.STREAMS }}{{ domains | get_domain(application_id) }}.conf"
state: absent
when: not applications | get_app_conf(application_id, 'network.public', True) | bool
when: not OPENLDAP_NETWORK_SWITCH_PUBLIC | bool
- name: create docker network for LDAP, so that other applications can access it
community.docker.docker_network:
name: "{{ openldap_network }}"
name: "{{ OPENLDAP_NETWORK }}"
state: present
ipam_config:
- subnet: "{{ networks.local[application_id].subnet }}"
@@ -37,23 +37,23 @@
- name: "Reset LDAP Credentials"
include_tasks: 01_credentials.yml
when:
- applications | get_app_conf(application_id, 'network.local')
- applications | get_app_conf(application_id, 'provisioning.credentials', True)
- OPENLDAP_NETWORK_SWITCH_LOCAL | bool
- applications | get_app_conf(application_id, 'provisioning.credentials')
- name: "create directory {{openldap_ldif_host_path}}{{item}}"
- name: "create directory {{ OPENLDAP_LDIF_PATH_HOST }}{{ item }}"
file:
path: "{{openldap_ldif_host_path}}{{item}}"
path: "{{ OPENLDAP_LDIF_PATH_HOST }}{{ item }}"
state: directory
mode: "0755"
loop: "{{openldap_ldif_types}}"
loop: "{{ OPENLDAP_LDIF_TYPES }}"
- name: "Import LDIF Configuration"
include_tasks: ldifs_creation.yml
include_tasks: _ldifs_creation.yml
loop:
- configuration
loop_control:
loop_var: folder
when: applications | get_app_conf(application_id, 'provisioning.configuration', True)
when: applications | get_app_conf(application_id, 'provisioning.configuration')
- name: flush LDIF handlers
meta: flush_handlers
@@ -66,20 +66,22 @@
- name: "Include Schemas (if enabled)"
include_tasks: 02_schemas.yml
when: applications | get_app_conf(application_id, 'provisioning.schemas', True)
when: applications | get_app_conf(application_id, 'provisioning.schemas')
- name: "Import LDAP Entries (if enabled)"
include_tasks: 03_users.yml
when: applications | get_app_conf(application_id, 'provisioning.users', True)
when: applications | get_app_conf(application_id, 'provisioning.users')
- name: "Import LDIF Data (if enabled)"
include_tasks: ldifs_creation.yml
include_tasks: _ldifs_creation.yml
loop:
- groups
loop_control:
loop_var: folder
when: applications | get_app_conf(application_id, 'provisioning.groups', True)
when: applications | get_app_conf(application_id, 'provisioning.groups')
- meta: flush_handlers
- name: "Add Objects to all users"
include_tasks: 04_update.yml
when: applications | get_app_conf(application_id, 'provisioning.update', True)
when: applications | get_app_conf(application_id, 'provisioning.update')


@@ -13,9 +13,9 @@
- "( 1.3.6.1.4.1.99999.2 NAME '{{ LDAP.USER.OBJECTS.AUXILIARY.NEXTCLOUD_USER }}' DESC 'Auxiliary class for Nextcloud attributes' AUXILIARY MAY ( {{ LDAP.USER.ATTRIBUTES.NEXTCLOUD_QUOTA }} ) )"
command: >
ldapsm
-s {{ openldap_server_uri }}
-D '{{ openldap_bind_dn }}'
-W '{{ openldap_bind_pw }}'
-s {{ OPENLDAP_SERVER_URI }}
-D '{{ OPENLDAP_BIND_DN }}'
-W '{{ OPENLDAP_BIND_PW }}'
-n {{ schema_name }}
{% for at in attribute_defs %}
-a "{{ at }}"


@@ -21,9 +21,9 @@
command: >
ldapsm
-s {{ openldap_server_uri }}
-D '{{ openldap_bind_dn }}'
-W '{{ openldap_bind_pw }}'
-s {{ OPENLDAP_SERVER_URI }}
-D '{{ OPENLDAP_BIND_DN }}'
-W '{{ OPENLDAP_BIND_PW }}'
-n {{ schema_name }}
{% for at in attribute_defs %}
-a "{{ at }}"


@@ -1,20 +1,20 @@
{% include 'roles/docker-compose/templates/base.yml.j2' %}
application:
image: "{{ openldap_image }}:{{ openldap_version }}"
container_name: "{{ openldap_name }}"
image: "{{ OPENLDAP_IMAGE }}:{{ OPENLDAP_VERSION }}"
container_name: "{{ OPENLDAP_CONTAINER }}"
{% include 'roles/docker-container/templates/base.yml.j2' %}
{% if openldap_network_expose_local %}
{% if OPENLDAP_NETWORK_EXPOSE_LOCAL | bool %}
ports:
- 127.0.0.1:{{ports.localhost.ldap['svc-db-openldap']}}:{{openldap_docker_port_open}}
- 127.0.0.1:{{ports.localhost.ldap['svc-db-openldap']}}:{{ OPENLDAP_DOCKER_PORT_OPEN }}
{% endif %}
volumes:
- 'data:/bitnami/openldap'
- '{{openldap_ldif_host_path}}:{{ openldap_ldif_docker_path }}:ro'
- '{{ OPENLDAP_LDIF_PATH_HOST }}:{{ OPENLDAP_LDIF_PATH_DOCKER }}:ro'
healthcheck:
test: >
bash -c '
ldapsearch -x -H ldap://localhost:{{ openldap_docker_port_open }} \
ldapsearch -x -H ldap://localhost:{{ OPENLDAP_DOCKER_PORT_OPEN }} \
-D "{{ LDAP.DN.ADMINISTRATOR.DATA }}" -w "{{ LDAP.BIND_CREDENTIAL }}" -b "{{ LDAP.DN.ROOT }}" > /dev/null \
&& ldapsearch -Y EXTERNAL -H ldapi:/// \
-b cn=config "(&(objectClass=olcOverlayConfig)(olcOverlay=memberof))" \
@@ -24,6 +24,6 @@
{% include 'roles/docker-compose/templates/volumes.yml.j2' %}
data:
name: "{{ openldap_volume }}"
name: "{{ OPENLDAP_VOLUME }}"
{% include 'roles/docker-compose/templates/networks.yml.j2' %}


@@ -3,24 +3,24 @@
# GENERAL
## Admin (Data)
LDAP_ADMIN_USERNAME= {{ applications | get_app_conf(application_id, 'users.administrator.username') }} # LDAP database admin user.
LDAP_ADMIN_PASSWORD= {{ LDAP.BIND_CREDENTIAL }} # LDAP database admin password.
LDAP_ADMIN_USERNAME= {{ applications | get_app_conf(application_id, 'users.administrator.username') }} # LDAP database admin user.
LDAP_ADMIN_PASSWORD= {{ LDAP.BIND_CREDENTIAL }} # LDAP database admin password.
## Users
LDAP_USERS= ' ' # Comma separated list of LDAP users to create in the default LDAP tree. Default: user01,user02
LDAP_PASSWORDS= ' ' # Comma separated list of passwords to use for LDAP users. Default: bitnami1,bitnami2
LDAP_ROOT= {{ LDAP.DN.ROOT }} # LDAP baseDN (or suffix) of the LDAP tree. Default: dc=example,dc=org
LDAP_USERS= ' ' # Comma separated list of LDAP users to create in the default LDAP tree. Default: user01,user02
LDAP_PASSWORDS= ' ' # Comma separated list of passwords to use for LDAP users. Default: bitnami1,bitnami2
LDAP_ROOT= {{ LDAP.DN.ROOT }} # LDAP baseDN (or suffix) of the LDAP tree. Default: dc=example,dc=org
## Admin (Config)
LDAP_ADMIN_DN= {{LDAP.DN.ADMINISTRATOR.DATA}}
LDAP_ADMIN_DN= {{ LDAP.DN.ADMINISTRATOR.DATA }}
LDAP_CONFIG_ADMIN_ENABLED= yes
LDAP_CONFIG_ADMIN_USERNAME= {{ applications | get_app_conf(application_id, 'users.administrator.username') }}
LDAP_CONFIG_ADMIN_PASSWORD= {{ applications | get_app_conf(application_id, 'credentials.administrator_password') }}
# Network
LDAP_PORT_NUMBER= {{openldap_docker_port_open}} # Route to default port
LDAP_ENABLE_TLS= no # Using nginx proxy for tls
LDAP_LDAPS_PORT_NUMBER= {{openldap_docker_port_secure}} # Port used for TLS secure traffic. Privileged port is supported (e.g. 636). Default: 1636 (non-privileged port).
LDAP_PORT_NUMBER= {{ OPENLDAP_DOCKER_PORT_OPEN }} # Route to default port
LDAP_ENABLE_TLS= no # Using nginx proxy for tls
LDAP_LDAPS_PORT_NUMBER= {{ OPENLDAP_DOCKER_PORT_SECURE }} # Port used for TLS secure traffic. Privileged port is supported (e.g. 636). Default: 1636 (non-privileged port).
# Security
LDAP_ALLOW_ANON_BINDING= no # Allow anonymous bindings to the LDAP server. Default: yes.
LDAP_ALLOW_ANON_BINDING= no # Allow anonymous bindings to the LDAP server. Default: yes.


@@ -1,30 +0,0 @@
{#
@todo: activate
{% for dn, entry in (applications | build_ldap_role_entries(users, ldap)).items() %}
dn: {{ dn }}
{% for oc in entry.objectClass %}
objectClass: {{ oc }}
{% endfor %}
{% if entry.ou is defined %}
ou: {{ entry.ou }}
{% else %}
cn: {{ entry.cn }}
{% endif %}
{% if entry.gidNumber is defined %}
gidNumber: {{ entry.gidNumber }}
{% endif %}
description: {{ entry.description }}
{% if entry.memberUid is defined %}
{% for uid in entry.memberUid %}
memberUid: {{ uid }}
{% endfor %}
{% endif %}
{% if entry.member is defined %}
{% for m in entry.member %}
member: {{ m }}
{% endfor %}
{% endif %}
{% endfor %}
#}

View File

@@ -1,4 +1,4 @@
{% for dn, entry in (applications | build_ldap_role_entries(users, ldap)).items() %}
{% for dn, entry in (applications | build_ldap_role_entries(users, LDAP)).items() %}
dn: {{ dn }}
{% for oc in entry.objectClass %}

View File

@@ -1,24 +1,27 @@
application_id: "svc-db-openldap"
# LDAP Variables
openldap_docker_port_secure: 636
openldap_docker_port_open: 389
openldap_server_uri: "ldap://127.0.0.1:{{ ports.localhost.ldap[application_id] }}"
openldap_bind_dn: "{{ LDAP.DN.ADMINISTRATOR.CONFIGURATION }}"
openldap_bind_pw: "{{ applications | get_app_conf(application_id, 'credentials.administrator_password', True) }}"
OPENLDAP_DOCKER_PORT_SECURE: 636
OPENLDAP_DOCKER_PORT_OPEN: 389
OPENLDAP_SERVER_URI: "ldap://127.0.0.1:{{ ports.localhost.ldap[application_id] }}"
OPENLDAP_BIND_DN: "{{ LDAP.DN.ADMINISTRATOR.CONFIGURATION }}"
OPENLDAP_BIND_PW: "{{ applications | get_app_conf(application_id, 'credentials.administrator_password') }}"
# LDIF Variables
openldap_ldif_host_path: "{{ docker_compose.directories.volumes }}ldif/"
openldap_ldif_docker_path: "/tmp/ldif/"
openldap_ldif_types:
OPENLDAP_LDIF_PATH_HOST: "{{ docker_compose.directories.volumes }}ldif/"
OPENLDAP_LDIF_PATH_DOCKER: "/tmp/ldif/"
OPENLDAP_LDIF_TYPES:
- configuration
- groups
- schema # Possibly no longer needed; this is now set up via tasks
openldap_name: "{{ applications | get_app_conf(application_id, 'docker.services.openldap.name', True) }}"
openldap_image: "{{ applications | get_app_conf(application_id, 'docker.services.openldap.image', True) }}"
openldap_version: "{{ applications | get_app_conf(application_id, 'docker.services.openldap.version', True) }}"
openldap_volume: "{{ applications | get_app_conf(application_id, 'docker.volumes.data', True) }}"
openldap_network: "{{ applications | get_app_conf(application_id, 'docker.network', True) }}"
# Container
OPENLDAP_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.openldap.name') }}"
OPENLDAP_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.openldap.image') }}"
OPENLDAP_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.openldap.version') }}"
OPENLDAP_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
OPENLDAP_NETWORK: "{{ applications | get_app_conf(application_id, 'docker.network') }}"
openldap_network_expose_local: "{{ applications | get_app_conf(application_id, 'network.public', True) | bool or applications | get_app_conf(application_id, 'network.local') | bool }}"
# Network
OPENLDAP_NETWORK_SWITCH_PUBLIC: "{{ applications | get_app_conf(application_id, 'network.public') }}"
OPENLDAP_NETWORK_SWITCH_LOCAL: "{{ applications | get_app_conf(application_id, 'network.local') }}"
OPENLDAP_NETWORK_EXPOSE_LOCAL: "{{ OPENLDAP_NETWORK_SWITCH_PUBLIC | bool or OPENLDAP_NETWORK_SWITCH_LOCAL | bool }}"

View File

@@ -2,13 +2,17 @@ docker:
services:
postgres:
# PostGIS is necessary for Mobilizon
image: postgis/postgis
name: postgres
image: postgis/postgis
name: postgres
# Please set a version in your inventory file!
# Rolling-release tags aren't recommended
version: "latest"
version: "17-3.5"
backup:
database_routine: true
cpus: "2.0"
mem_reservation: "4g"
mem_limit: "6g"
pids_limit: 1024
volumes:
data: "postgres_data"
network: "postgres"
data: "postgres_data"
network: "postgres"

View File

@@ -5,7 +5,7 @@
flush_handlers: true
when: run_once_svc_db_postgres is not defined
- include_tasks: "{{ playbook_dir }}/tasks/utils/load_handlers.yml"
- include_tasks: "{{ [ playbook_dir, 'tasks/utils/load_handlers.yml' ] | path_join }}"
# Necessary because docker handlers are overridden by the condition
vars:
handler_role_name: "docker-compose"

View File

@@ -5,7 +5,7 @@ RUN apt-get update \
&& apt-get install -y --no-install-recommends \
build-essential \
git \
postgresql-server-dev-all \
postgresql-server-dev-{{ POSTGRES_VERSION_MAJOR | default('all', true) }} \
&& git clone https://github.com/pgvector/pgvector.git /tmp/pgvector \
&& cd /tmp/pgvector \
&& make \

View File

@@ -3,10 +3,7 @@
postgres:
container_name: "{{ POSTGRES_CONTAINER }}"
image: "{{ POSTGRES_CUSTOM_IMAGE_NAME }}"
build:
context: .
dockerfile: Dockerfile
pull_policy: never
{{ lookup('template', 'roles/docker-container/templates/build.yml.j2') | indent(4) }}
command:
- "postgres"
- "-c"

View File

@@ -1,5 +1,6 @@
# General
application_id: svc-db-postgres
entity_name: "{{ application_id | get_entity_name }}"
# Docker
docker_compose_flush_handlers: true
@@ -9,11 +9,12 @@ database_type: "{{ application_id | get_entity_name }}"
## Postgres
POSTGRES_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
POSTGRES_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.postgres.name') }}"
POSTGRES_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.postgres.image') }}"
POSTGRES_SUBNET: "{{ networks.local['svc-db-postgres'].subnet }}"
POSTGRES_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.' ~ entity_name ~ '.name') }}"
POSTGRES_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.' ~ entity_name ~ '.image') }}"
POSTGRES_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.' ~ entity_name ~ '.version') }}"
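# Major version is the leading digits of the pinned tag, e.g. "17-3.5" -> "17"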
POSTGRES_VERSION_MAJOR: "{{ POSTGRES_VERSION | regex_replace('^([0-9]+).*', '\\1') }}"
POSTGRES_NETWORK_NAME: "{{ applications | get_app_conf(application_id, 'docker.network') }}"
POSTGRES_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.postgres.version') }}"
POSTGRES_SUBNET: "{{ networks.local['svc-db-postgres'].subnet }}"
POSTGRES_PASSWORD: "{{ applications | get_app_conf(application_id, 'credentials.POSTGRES_PASSWORD') }}"
POSTGRES_PORT: "{{ database_port | default(ports.localhost.database[ application_id ]) }}"
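# True only when the calling role supplied database_username, database_password and database_name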
POSTGRES_INIT: "{{ database_username is defined and database_password is defined and database_name is defined }}"

View File

@@ -16,4 +16,5 @@
retries: 30
networks:
- default
{{ lookup('template', 'roles/docker-container/templates/resource.yml.j2',vars={'service_name':'redis'}) | indent(4) }}
{{ "\n" }}

View File

@@ -0,0 +1,14 @@
- name: "Install '{{ SWAPFILE_PKG }}'"
include_role:
name: pkgmgr-install
vars:
package_name: "{{ SWAPFILE_PKG }}"
when: run_once_pkgmgr_install is not defined
- name: Execute the swapfile creation script
shell: "{{ SWAPFILE_PKG }} '{{ SWAPFILE_SIZE }}'"
become: true
async: "{{ ASYNC_TIME if ASYNC_ENABLED | bool else omit }}"
poll: "{{ ASYNC_POLL if ASYNC_ENABLED | bool else omit }}"
- include_tasks: utils/run_once.yml

View File

@@ -1,17 +1,3 @@
- block:
- name: Include dependency 'pkgmgr-install'
include_role:
name: pkgmgr-install
when: run_once_pkgmgr_install is not defined
- include_tasks: utils/run_once.yml
- include_tasks: 01_core.yml
when: run_once_svc_opt_swapfile is not defined
- name: "pkgmgr install"
include_role:
name: pkgmgr-install
vars:
package_name: swap-forge
- name: Execute create swapfile script
shell: swap-forge "{{swapfile_size}}"
become: true

View File

@@ -1,2 +1,4 @@
application_id: "svc-opt-swapfile"
swapfile_size: "{{ applications | get_app_conf(application_id, 'swapfile_size') }}"
SWAPFILE_SIZE: "{{ applications | get_app_conf(application_id, 'swapfile_size') }}"
SWAPFILE_PKG: "swap-forge"

View File

@@ -1,7 +1,10 @@
docker:
services:
openresty:
name: "openresty"
name: "openresty"
cpus: 0.5
mem_reservation: 1g
mem_limit: 2g
volumes:
www: "/var/www/"
nginx: "/etc/nginx/"

View File

@@ -1,6 +1,6 @@
- block:
- name: "For '{{ application_id }}': Load docker-compose"
include_role:
include_role:
name: docker-compose
vars:
docker_compose_flush_handlers: true

View File

@@ -5,21 +5,23 @@
- sys-ctl-alm-telegram
- sys-ctl-alm-email
vars:
flush_handlers: true
system_service_timer_enabled: false
system_service_copy_files: true
system_service_tpl_exec_start: "{{ system_service_script_exec }} %I"
system_service_tpl_on_failure: ""
flush_handlers: true
system_service_timer_enabled: false
system_service_copy_files: true
system_service_tpl_exec_start: "{{ system_service_script_exec }} %I"
system_service_tpl_on_failure: ""
system_service_force_linear_sync: false
- name: "Include core service for '{{ system_service_id }}'"
include_role:
name: sys-service
vars:
flush_handlers: true
system_service_timer_enabled: false
system_service_copy_files: true
system_service_tpl_exec_start: "{{ system_service_script_exec }} %I"
system_service_tpl_on_failure: "" # No on failure needed, because it's anyhow the default on failure procedure
flush_handlers: true
system_service_timer_enabled: false
system_service_copy_files: true
system_service_tpl_exec_start: "{{ system_service_script_exec }} %I"
system_service_tpl_on_failure: "" # No on failure needed, because it's anyhow the default on failure procedure
system_service_force_linear_sync: false
- name: Assert '{{ system_service_id }}'
block:

View File

@@ -19,10 +19,12 @@
vars:
system_service_copy_files: false
system_service_timer_enabled: true
system_service_force_linear_sync: true
system_service_force_flush: "{{ MODE_BACKUP | bool }}"
system_service_on_calendar: "{{ SYS_SCHEDULE_BACKUP_DOCKER_TO_LOCAL }}"
system_service_tpl_exec_start_pre: '/usr/bin/python {{ PATH_SYSTEM_LOCK_SCRIPT }} {{ SYS_SERVICE_GROUP_MANIPULATION | join(" ") }} --ignore {{ SYS_SERVICE_BACKUP_DOCKER_2_LOC }} --timeout "{{ SYS_TIMEOUT_BACKUP_SERVICES }}"'
system_service_tpl_exec_start: "/bin/sh -c '{{ BKP_DOCKER_2_LOC_EXEC }}'"
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }} {{ SYS_SERVICE_CLEANUP_BACKUPS_FAILED }}"
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
# system_service_tpl_exec_start_post: "/usr/bin/systemctl start {{ SYS_SERVICE_CLEANUP_BACKUPS }}" # Not possible to use because it would cause a deadlock. Kept for documentation purposes
- include_tasks: utils/run_once.yml

View File

@@ -12,6 +12,7 @@
system_service_tpl_exec_start: dockreap --no-confirmation
system_service_tpl_exec_start_pre: "" # Anonymous volumes can allways be removed. It isn't necessary to wait for any service to stop.
system_service_copy_files: false
system_service_force_linear_sync: false
- include_tasks: utils/run_once.yml
when:

View File

@@ -20,6 +20,7 @@
system_service_tpl_exec_start: "{{ system_service_script_exec }} --backups-folder-path {{ BACKUPS_FOLDER_PATH }} --maximum-backup-size-percent {{SIZE_PERCENT_MAXIMUM_BACKUP}}"
system_service_tpl_exec_start_pre: '/usr/bin/python {{ PATH_SYSTEM_LOCK_SCRIPT }} {{ SYS_SERVICE_GROUP_MANIPULATION | join(" ") }} --ignore {{ SYS_SERVICE_GROUP_CLEANUP | join(" ") }} --timeout "{{ SYS_TIMEOUT_BACKUP_SERVICES }}"'
system_service_copy_files: true
system_service_force_linear_sync: false
- include_tasks: utils/run_once.yml
vars:

View File

@@ -14,6 +14,7 @@
- include_role:
name: sys-service
vars:
system_service_timer_enabled: true
system_service_on_calendar: "{{ SYS_SCHEDULE_CLEANUP_CERTS }}"
system_service_copy_files: false
system_service_timer_enabled: true
system_service_on_calendar: "{{ SYS_SCHEDULE_CLEANUP_CERTS }}"
system_service_copy_files: false
system_service_force_linear_sync: false

View File

@@ -14,3 +14,4 @@
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_tpl_exec_start: "{{ system_service_script_exec }} {{ SIZE_PERCENT_CLEANUP_DISC_SPACE }}"
system_service_tpl_exec_start_pre: '/usr/bin/python {{ PATH_SYSTEM_LOCK_SCRIPT }} {{ SYS_SERVICE_GROUP_MANIPULATION | join(" ") }} --ignore {{ SYS_SERVICE_GROUP_CLEANUP | join(" ") }} --timeout "{{ SYS_TIMEOUT_BACKUP_SERVICES }}"'
system_service_force_linear_sync: false

View File

@@ -19,7 +19,7 @@
system_service_on_calendar: "{{ SYS_SCHEDULE_CLEANUP_FAILED_BACKUPS }}"
system_service_copy_files: false
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_tpl_exec_start: '/bin/sh -c "{{ CLEANUP_FAILED_BACKUPS_PKG }} --all --workers {{ CLEANUP_FAILED_BACKUPS_WORKERS }} --yes"'
system_service_tpl_exec_start_pre: '/usr/bin/python {{ PATH_SYSTEM_LOCK_SCRIPT }} {{ SYS_SERVICE_GROUP_MANIPULATION | join(" ") }} --ignore {{ SYS_SERVICE_GROUP_CLEANUP| join(" ") }} --timeout "{{ SYS_TIMEOUT_CLEANUP_SERVICES }}"'
system_service_tpl_exec_start: '/bin/sh -c "{{ CLEANUP_FAILED_BACKUPS_PKG }} --all --workers {{ CLEANUP_FAILED_BACKUPS_WORKERS }} --yes"'
system_service_force_linear_sync: false
- include_tasks: utils/run_once.yml

View File

@@ -14,6 +14,32 @@ Designed for Archlinux systems, this role periodically checks whether web resour
- **Domain Extraction:** Parses all `.conf` files in the NGINX config folder to determine the list of domains to check.
- **Automated Execution:** Registers a systemd service and timer for recurring health checks.
- **Error Notification:** Integrates with `sys-ctl-alm-compose` for alerting on failure.
- **Ignore List Support:** Optional variable to suppress network block reports from specific external domains.
## Configuration
### Variables
- **`HEALTH_CSP_IGNORE_NETWORK_BLOCKS_FROM`** (list, default: `[]`)
Optional list of domains whose network block failures (e.g., ORB) should be ignored during CSP checks.
Example:
```yaml
HEALTH_CSP_IGNORE_NETWORK_BLOCKS_FROM:
- pxscdn.com
- cdn.example.org
```
This will run the CSP checker with:
```bash
checkcsp start --short --ignore-network-blocks-from pxscdn.com cdn.example.org -- <domains...>
```
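The standalone `--` emitted by the wrapper script marks the end of the ignore list, so the crawled domains are never consumed by the multi-value flag.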
### Systemd Integration
The role registers a systemd service and timer that periodically execute the CSP crawler against all NGINX domains.
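For illustration, here is a sketch of the resulting `ExecStart` with the example ignore list above; `system_service_script_exec` and the NGINX servers directory are placeholders rendered by the role, not literal values:
```bash
# Hypothetical rendered ExecStart; the flag accepts zero or more domains
# (argparse nargs="*"), so an empty ignore list is harmless.
{{ system_service_script_exec }} \
  --nginx-config-dir={{ NGINX.DIRECTORIES.HTTP.SERVERS }} \
  --ignore-network-blocks-from pxscdn.com cdn.example.org
```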
## License
@@ -24,4 +50,4 @@ Infinito.Nexus NonCommercial License
Kevin Veen-Birkenbach
Consulting & Coaching Solutions
[https://www.veen.world](https://www.veen.world)
[https://www.veen.world](https://www.veen.world)

View File

@@ -0,0 +1,5 @@
# List of domains whose network block failures (e.g., ORB) should be ignored
# during CSP checks. This is useful for suppressing known external resources
# (e.g., third-party CDNs) that cannot be influenced but otherwise cause
# unnecessary alerts in the crawler reports.
HEALTH_CSP_IGNORE_NETWORK_BLOCKS_FROM: []

View File

@@ -21,11 +21,20 @@ def extract_domains(config_path):
print(f"Directory {config_path} not found.", file=sys.stderr)
return None
def run_checkcsp(domains):
def run_checkcsp(domains, ignore_network_blocks_from):
"""
Executes the 'checkcsp' command with the given domains.
Executes the 'checkcsp' command with the given domains and optional ignores.
"""
cmd = ["checkcsp", "start", "--short"] + domains
cmd = ["checkcsp", "start", "--short"]
# pass through ignore list only if not empty
if ignore_network_blocks_from:
cmd.append("--ignore-network-blocks-from")
cmd.extend(ignore_network_blocks_from)
cmd.append("--")
cmd += domains
try:
result = subprocess.run(cmd, check=True)
return result.returncode
@@ -45,6 +54,12 @@ def main():
required=True,
help="Directory containing NGINX .conf files"
)
parser.add_argument(
"--ignore-network-blocks-from",
nargs="*",
default=[],
help="Optional: one or more domains whose network block failures should be ignored"
)
args = parser.parse_args()
domains = extract_domains(args.nginx_config_dir)
@@ -55,7 +70,7 @@ def main():
print("No domains found to check.")
sys.exit(0)
rc = run_checkcsp(domains)
rc = run_checkcsp(domains, args.ignore_network_blocks_from)
sys.exit(rc)
if __name__ == "__main__":

View File

@@ -18,6 +18,9 @@
system_service_timer_enabled: true
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_tpl_timeout_start_sec: "{{ CURRENT_PLAY_DOMAINS_ALL | timeout_start_sec_for_domains }}"
system_service_tpl_exec_start: "{{ system_service_script_exec }} --nginx-config-dir={{ NGINX.DIRECTORIES.HTTP.SERVERS }}"
system_service_tpl_exec_start: >-
{{ system_service_script_exec }}
--nginx-config-dir={{ NGINX.DIRECTORIES.HTTP.SERVERS }}
--ignore-network-blocks-from {{ HEALTH_CSP_IGNORE_NETWORK_BLOCKS_FROM | join(' ') }}
- include_tasks: utils/run_once.yml

View File

@@ -1,4 +1,3 @@
# roles/sys-ctl-hlth-webserver/filter_plugins/web_health_expectations.py
import os
import sys
from collections.abc import Mapping
@@ -94,6 +93,26 @@ def _normalize_selection(group_names):
raise ValueError("web_health_expectations: 'group_names' must be provided and non-empty")
return sel
def _normalize_codes(x):
"""
Accepts:
- single code (int or str)
- list/tuple/set of codes
Returns a de-duplicated list of valid ints (100..599) in original order.
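Example: _normalize_codes([200, "200", 404]) -> [200, 404]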
"""
if x is None:
return []
if isinstance(x, (list, tuple, set)):
out = []
seen = set()
for v in x:
c = _valid_http_code(v)
if c is not None and c not in seen:
seen.add(c)
out.append(c)
return out
c = _valid_http_code(x)
return [c] if c is not None else []
def web_health_expectations(applications, www_enabled: bool = False, group_names=None, redirect_maps=None):
"""Produce a **flat mapping**: domain -> [expected_status_codes].
@@ -138,17 +157,15 @@ def web_health_expectations(applications, www_enabled: bool = False, group_names=None, redirect_maps=None):
sc_map = {}
if isinstance(sc_raw, Mapping):
for k, v in sc_raw.items():
code = _valid_http_code(v)
if code is not None:
sc_map[str(k)] = code
codes = _normalize_codes(v)
if codes:
sc_map[str(k)] = codes
if isinstance(canonical_raw, Mapping) and canonical_raw:
for key, domains in canonical_raw.items():
domains_list = _to_list(domains, allow_mapping=False)
code = _valid_http_code(sc_map.get(key))
if code is None:
code = _valid_http_code(sc_map.get("default"))
expected = [code] if code is not None else list(DEFAULT_OK)
codes = sc_map.get(key) or sc_map.get("default")
expected = list(codes) if codes else list(DEFAULT_OK)
for d in domains_list:
if d:
expectations[d] = expected
@@ -156,8 +173,8 @@ def web_health_expectations(applications, www_enabled: bool = False, group_names
for d in _to_list(canonical_raw, allow_mapping=True):
if not d:
continue
code = _valid_http_code(sc_map.get("default"))
expectations[d] = [code] if code is not None else list(DEFAULT_OK)
codes = sc_map.get("default")
expectations[d] = list(codes) if codes else list(DEFAULT_OK)
for d in aliases:
if d:

View File

@@ -23,7 +23,7 @@
system_service_tpl_exec_start: >
{{ system_service_script_exec }}
--web-protocol {{ WEB_PROTOCOL }}
--expectations '{{ applications | web_health_expectations(www_enabled=WWW_REDIRECT_ENABLED, group_names=group_names) | to_json }}'
--expectations '{{ applications | web_health_expectations(www_enabled=WWW_REDIRECT_ENABLED | bool, group_names=group_names) | to_json }}'
system_service_suppress_flush: true # The healthcheck only works after all routines have passed
- include_tasks: utils/run_once.yml

View File

@@ -8,8 +8,9 @@
- include_role:
name: sys-service
vars:
system_service_state: restarted
system_service_on_calendar: "{{ SYS_SCHEDULE_MAINTANANCE_LETSENCRYPT_DEPLOY }}"
persistent: "true"
system_service_timer_enabled: true
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_state: restarted
system_service_on_calendar: "{{ SYS_SCHEDULE_MAINTANANCE_LETSENCRYPT_DEPLOY }}"
persistent: "true"
system_service_timer_enabled: true
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_force_linear_sync: false

View File

@@ -3,7 +3,7 @@
name: '{{ item }}'
loop:
- sys-svc-certbot
- sys-svc-webserver
- sys-svc-webserver-core
- sys-ctl-alm-compose
- name: install certbot
@@ -15,8 +15,9 @@
- include_role:
name: sys-service
vars:
system_service_copy_files: false
system_service_on_calendar: "{{ SYS_SCHEDULE_MAINTANANCE_LETSENCRYPT_RENEW }}"
persistent: true
system_service_timer_enabled: true
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_copy_files: false
system_service_on_calendar: "{{ SYS_SCHEDULE_MAINTANANCE_LETSENCRYPT_RENEW }}"
persistent: true
system_service_timer_enabled: true
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_force_linear_sync: false

View File

@@ -12,9 +12,10 @@
- include_role:
name: sys-service
vars:
system_service_suppress_flush: true # Takes very long; better to wait for the timed service to fail than to run it on every play
system_service_copy_files: false
system_service_on_calendar: "{{ SYS_SCHEDULE_REPAIR_BTRFS_AUTO_BALANCER }}"
system_service_timer_enabled: true
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_tpl_exec_start: "/bin/sh -c 'btrfs-auto-balancer 90 10'"
system_service_suppress_flush: true # Takes very long; better to wait for the timed service to fail than to run it on every play
system_service_copy_files: false
system_service_on_calendar: "{{ SYS_SCHEDULE_REPAIR_BTRFS_AUTO_BALANCER }}"
system_service_timer_enabled: true
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_tpl_exec_start: "/bin/sh -c 'btrfs-auto-balancer 90 10'"
system_service_force_linear_sync: true

View File

@@ -12,5 +12,6 @@
system_service_tpl_exec_start: '{{ system_service_script_exec }} {{ PATH_DOCKER_COMPOSE_INSTANCES }}'
system_service_tpl_exec_start_post: "/usr/bin/systemctl start {{ SYS_SERVICE_CLEANUP_ANONYMOUS_VOLUMES }}"
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_force_linear_sync: true
- include_tasks: utils/run_once.yml

View File

@@ -10,5 +10,6 @@
system_service_tpl_exec_start_pre: "/usr/bin/python {{ PATH_SYSTEM_LOCK_SCRIPT }} {{ SYS_SERVICE_GROUP_MANIPULATION | join(' ') }} --ignore {{ SYS_SERVICE_GROUP_CLEANUP| join(' ') }} {{ SYS_SERVICE_REPAIR_DOCKER_SOFT }} --timeout '{{ SYS_TIMEOUT_DOCKER_RPR_SOFT }}'"
system_service_tpl_exec_start: >
/bin/sh -c '{{ system_service_script_exec }} --manipulation-string "{{ SYS_SERVICE_GROUP_MANIPULATION | join(" ") }}" {{ PATH_DOCKER_COMPOSE_INSTANCES }}'
system_service_force_linear_sync: true
- include_tasks: utils/run_once.yml

View File

@@ -17,6 +17,7 @@
proxied: "{{ item.proxied | default(false) }}"
ttl: "{{ item.ttl | default(1) }}"
state: "{{ item.state | default('present') }}"
solo: "{{ item.solo | default(false) }}"
loop: "{{ cloudflare_records | selectattr('type','in',['A','AAAA']) | list }}"
loop_control: { label: "{{ item.type }} {{ item.name }} -> {{ item.content }}" }
async: "{{ cloudflare_async_enabled | ternary(cloudflare_async_time, omit) }}"
@@ -48,6 +49,7 @@
ttl: "{{ item.ttl | default(1) }}"
priority: "{{ (item.type == 'MX') | ternary(item.priority | default(10), omit) }}"
state: "{{ item.state | default('present') }}"
solo: "{{ item.solo | default(false) }}"
loop: "{{ cloudflare_records | selectattr('type','in',['CNAME','MX','TXT']) | list }}"
loop_control: { label: "{{ item.type }} {{ item.name }} -> {{ item.value }}" }
async: "{{ cloudflare_async_enabled | ternary(cloudflare_async_time, omit) }}"
@@ -83,6 +85,7 @@
value: "{{ item.value }}"
ttl: "{{ item.ttl | default(1) }}"
state: "{{ item.state | default('present') }}"
solo: "{{ item.solo | default(false) }}"
loop: "{{ cloudflare_records | selectattr('type','equalto','SRV') | list }}"
loop_control: { label: "SRV {{ item.service }}.{{ item.proto }} {{ item.name }} -> {{ item.value }}:{{ item.port }}" }
ignore_errors: "{{ item.ignore_errors | default(true) }}"

View File

@@ -3,7 +3,10 @@
include_role:
name: sys-dns-cloudflare-records
vars:
cloudflare_records: "{{ SYN_DNS_WILDCARD_RECORDS }}"
cloudflare_records: "{{ SYN_DNS_WILDCARD_RECORDS }}"
cloudflare_async_enabled: "{{ ASYNC_ENABLED | bool }}"
cloudflare_async_time: "{{ ASYNC_TIME }}"
cloudflare_async_poll: "{{ ASYNC_POLL }}"
when: DNS_PROVIDER == 'cloudflare'
- include_tasks: utils/run_once.yml

View File

@@ -41,9 +41,9 @@
when: inj_enabled.logout
- block:
- name: Include dependency 'sys-svc-webserver'
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver
when: run_once_sys_svc_webserver is not defined
name: sys-svc-webserver-core
when: run_once_sys_svc_webserver_core is not defined
- include_tasks: utils/run_once.yml
when: run_once_sys_front_inj_all is not defined

View File

@@ -1,7 +1,7 @@
- name: Include dependency 'sys-svc-webserver'
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver
when: run_once_sys_svc_webserver is not defined
name: sys-svc-webserver-core
when: run_once_sys_svc_webserver_core is not defined
- name: Generate color palette with colorscheme-generator
set_fact:

View File

@@ -1,8 +1,8 @@
- block:
- name: Include dependency 'sys-svc-webserver'
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver
when: run_once_sys_svc_webserver is not defined
name: sys-svc-webserver-core
when: run_once_sys_svc_webserver_core is not defined
- include_tasks: 01_deploy.yml
- include_tasks: utils/run_once.yml
when: run_once_sys_front_inj_desktop is not defined

View File

@@ -1,9 +1,9 @@
- block:
- name: Include dependency 'sys-svc-webserver'
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver
when: run_once_sys_svc_webserver is not defined
name: sys-svc-webserver-core
when: run_once_sys_svc_webserver_core is not defined
- include_tasks: utils/run_once.yml
when: run_once_sys_front_inj_javascript is not defined

View File

@@ -1,8 +1,8 @@
- name: Include dependency 'sys-svc-webserver'
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver
name: sys-svc-webserver-core
when:
- run_once_sys_svc_webserver is not defined
- run_once_sys_svc_webserver_core is not defined
- name: "deploy the logout.js"
include_tasks: "02_deploy.yml"

Some files were not shown because too many files have changed in this diff.