548 Commits

Author SHA1 Message Date
2f46b99e4e XWiki: add diagnostic and modern AuthService handling
- Added 05_set_authservice.yml to set XWikiPreferences.authenticationService
  to modern component hints (standard, oidc, ldap).
- Added _auth_diag.yml to introspect registered AuthService components and
  verify the active preference.
- Updated docker-compose.yml.j2 to use -Dxwiki.authentication.authservice
  instead of deprecated authclass syntax.
- Temporarily included AuthDiag task in 01_core.yml for runtime verification.

Context: https://chatgpt.com/share/69005d88-6bf8-800f-af41-73b0e5dc9c13
2025-10-28 07:07:42 +01:00
295ae7e477 Solved MediaWiki CSP bug which prevented OIDC login 2025-10-27 20:33:07 +01:00
c67ccc1df6 Used path_join @ web-app-friendica 2025-10-26 15:48:28 +01:00
cb483f60d1 Optimized for easier debugging 2025-10-25 12:52:17 +02:00
2be73502ca Solved tests 2025-10-25 11:46:36 +02:00
57d5269b07 CSP (Safari-safe): merge -elem/-attr into base; respect explicit disables; no mirror-back; header only for documents/workers
- Add CSP3 support for style/script: include -elem and -attr directives
- Base (style-src, script-src) now unions elem/attr (CSP2/Safari fallback)
- Respect explicit base disables (e.g. style-src.unsafe-inline: false)
- Hashes only when 'unsafe-inline' absent in the final base tokens
- Nginx: set CSP only for HTML/worker via header_filter_by_lua_block; drop for subresources
- Remove per-location header_filter; keep body_filter only
- Update app role flags to *-attr where appropriate; extend desktop CSS sources
- Add comprehensive unit tests for union/explicit-disable/no-mirror-back

Ref: https://chatgpt.com/share/68f87a0a-cebc-800f-bb3e-8c8ab4dee8ee
2025-10-22 13:53:06 +02:00
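The -elem/-attr union described above boils down to a token merge with explicit-disable precedence. A minimal Python sketch, assuming a simple token-list representation (the function name, arguments, and data shapes are illustrative, not the role's actual filter API):

```python
# Minimal sketch of the described CSP2/Safari fallback: the base directive
# (e.g. style-src) becomes the union of its own tokens plus the -elem/-attr
# tokens, explicit disables win, and hashes are only emitted when the final
# base contains no 'unsafe-inline'. All names here are illustrative.

def merge_csp_base(base_tokens, elem_tokens, attr_tokens,
                   explicit_disables=None, hashes=None):
    explicit_disables = explicit_disables or {}   # e.g. {"unsafe-inline": False}
    merged = []
    for token in list(base_tokens) + list(elem_tokens) + list(attr_tokens):
        if explicit_disables.get(token.strip("'"), True) is False:
            continue                              # respect explicit base disables
        if token not in merged:
            merged.append(token)                  # union without mirroring back
    if hashes and "'unsafe-inline'" not in merged:
        merged.extend(h for h in hashes if h not in merged)
    return merged

# Example: style-src unions style-src-elem, 'unsafe-inline' stays disabled,
# so the hash is allowed into the final base.
print(merge_csp_base(["'self'"], ["https://cdn.example"], [],
                     explicit_disables={"unsafe-inline": False},
                     hashes=["'sha256-abc'"]))
```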
1eefdea050 Solved CSP errors for MiniQR 2025-10-22 12:49:22 +02:00
561160504e Add new web-app-mini-qr role
- Introduced new role 'web-app-mini-qr' to deploy the lightweight, self-hosted Mini-QR application.
- Added dedicated subnet and localhost port mapping (8059) in group_vars.
- Ensured proper dependency structure and run_once handling in MIG role.
- Included upstream reference and a temporary CSP whitelist entry for clarity.ms, with removal tracking.
- Added README.md and meta information following the Infinito.Nexus web-app schema.

See: https://chatgpt.com/share/68f890ab-5960-800f-85f8-ba30bd4350fe
2025-10-22 10:07:35 +02:00
9a4bf91276 feat(nextcloud): enable custom Alpine-based Whiteboard image with Chromium & ffmpeg support
- Added role tasks to deploy templated Dockerfile for Whiteboard service
- Configured build context and custom image name (nextcloud_whiteboard_custom)
- Increased PID limits and shm_size for stable recording
- Adjusted user ID variable naming consistency
- Integrated path_join for service directory variables
- Fixed build permissions (install as root, revert to nobody)

Reference: ChatGPT conversation https://chatgpt.com/share/68f771c6-0e98-800f-99ca-9e367f4cd0c2
2025-10-21 13:44:11 +02:00
468b6e734c Deactivated whiteboard 2025-10-20 21:17:06 +02:00
83cb94b6ff Refactored Redis resource include macro and increased memory limits
- Replaced deprecated lookup(vars=...) in svc-db-redis with macro-based include (Ansible/Jinja safe)
- Redis now uses higher resource values (1 CPU, 1G reserved, 8G max, 512 pids)
- Enables stable Whiteboard operation with >3.5 GB Redis memory usage
- Related conversation: https://chatgpt.com/share/68f67a00-d598-800f-a6be-ee5987e66fba
2025-10-20 20:08:38 +02:00
6857295969 Fix variable definition test to recognize block-style Jinja 'set ... endset' statements
This update extends the regex to detect block-style variable definitions such as:
  {% set var %} ... {% endset %}
Previously, only inline 'set var =' syntax was recognized, causing false positives
like '_snippet' being flagged as undefined in Jinja templates.

Reference: https://chatgpt.com/share/68f6799a-eb80-800f-ab5c-7c196d4c4661
2025-10-20 20:04:40 +02:00
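A minimal sketch of the extended detection, assuming a single regex scan over the template source; the pattern and variable names below are illustrative rather than the project's actual test code:

```python
import re

# Match inline "{% set var = ... %}" as well as block-style
# "{% set var %} ... {% endset %}" definitions (illustrative pattern).
SET_PATTERN = re.compile(r"{%-?\s*set\s+([A-Za-z_][A-Za-z0-9_]*)\s*(?:=|-?%})")

template = """
{% set _snippet %}
  <p>Hello</p>
{% endset %}
{% set inline_var = 42 %}
"""

defined = set(SET_PATTERN.findall(template))
print(defined)  # {'_snippet', 'inline_var'} — block-style vars no longer flagged
```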
8ab398f679 nextcloud:whiteboard: wait for Redis before start (depends_on: service_healthy) to prevent early SocketClosedUnexpectedlyError
Context: added depends_on on redis for the Whiteboard service so websockets don’t crash when Redis isn’t ready yet. See discussion: https://chatgpt.com/share/68f65a3e-aa54-800f-a1a7-e6878775fd7e
2025-10-20 17:50:47 +02:00
31133ddd90 Enhancement: Fix for Nextcloud Whiteboard recording and collaboration server
- Added Chromium headless flags and writable font cache/tmp volumes
- Enabled WebSocket proxy forwarding for /whiteboard/
- Verified and adjusted CSP and frontend integration
- Added Whiteboard-related variables and volumes in main.yml

See ChatGPT conversation (20 Oct 2025):
https://chatgpt.com/share/68f655e1-fa3c-800f-b35f-4f875dfed4fd
2025-10-20 17:31:59 +02:00
783b1e152d Added numpy 2025-10-20 11:03:44 +02:00
eca567fefd Made Gitea LDAP source independent of the primary domain 2025-10-18 10:54:39 +02:00
905f461ee8 Add basic healthcheck to oauth2-proxy container template using binary version check for distroless compatibility
Reference: https://chatgpt.com/share/68f35550-4248-800f-9c6a-dbd49a48592e
2025-10-18 10:52:58 +02:00
9f0b259ba9 Merge branch 'master' of github.com:kevinveenbirkenbach/infinito-nexus 2025-10-18 09:41:18 +02:00
06e4323faa Added Ansible environment 2025-10-17 23:07:43 +02:00
3d99226f37 Refactor BigBlueButton and backup task structure:
- Moved database seed variables from vars/main.yml to task-level include in BigBlueButton
- Simplified core include logic in sys-ctl-bkp-docker-2-loc
- Ensured clean conditional for BKP_DOCKER_2_LOC_DB_ENABLED
See: https://chatgpt.com/share/68f216f7-62d8-800f-94e3-c82e4418e51b (in German)
2025-10-17 12:14:39 +02:00
73ba09fbe2 Optimize SSH connection performance by disabling GSSAPI authentication and reverse DNS lookups
- Added 'GSSAPIAuthentication no' to prevent unnecessary Kerberos negotiation delays.
- Added 'UseDNS no' to skip reverse DNS resolution during SSH login, improving connection speed.
- Both changes improve SSH responsiveness, especially in non-domain environments.

Reference: https://chatgpt.com/share/68efc179-1a10-800f-9656-1e8731b40546
2025-10-15 18:37:09 +02:00
01ea9b76ce Enable pipelining globally and modernize SSH settings
- Activated pipelining in [defaults] for better performance.
- Replaced deprecated 'scp_if_ssh' with 'transfer_method'.
- Flattened multi-line ssh_args for compatibility.
- Verified configuration parsing as discussed in https://chatgpt.com/share/68efc179-1a10-800f-9656-1e8731b40546
2025-10-15 17:45:16 +02:00
c22acf202f Solved bugs 2025-10-15 17:03:57 +02:00
61e138c1a6 Optimize OpenLDAP container resources for up to 5k users (1.25 CPU / 1.5GB RAM / 1024 PIDs). See https://chatgpt.com/share/68ef7228-4028-800f-8986-54206a51b9c1 2025-10-15 12:06:51 +02:00
07c8e036ec Deactivated changed_when because it is not trackable anyway 2025-10-15 10:27:12 +02:00
0b36059cd2 feat(web-app-gitea): add optional Redis integration for caching, sessions, and queues
This update introduces conditional Redis support for Gitea, allowing connection
to either a local or centralized Redis instance depending on configuration.
Includes resource limits for the Redis service and corresponding environment
variables for cache, session, and queue backends.

Reference: ChatGPT conversation on centralized vs per-app Redis architecture (2025-10-15).
https://chatgpt.com/share/68ef5930-49c8-800f-b6b8-069e6fefda01
2025-10-15 10:20:18 +02:00
d76e384ae3 Enhance CertUtils to return the newest matching certificate and add comprehensive unit tests
- Added run_openssl_dates() to extract notBefore/notAfter timestamps.
- Modified mapping logic to store multiple cert entries per SAN with metadata.
- find_cert_for_domain() now selects the newest certificate based on notBefore and mtime.
- Exact SAN matches take precedence over wildcard matches.
- Added new unit tests (test_cert_utils_newest.py) verifying freshness logic, fallback handling, and wildcard behavior.

Reference: https://chatgpt.com/share/68ef4b4c-41d4-800f-9e50-5da4b6be1105
2025-10-15 09:21:00 +02:00
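The selection rule (exact SAN beats wildcard, then newest notBefore, then mtime) can be sketched as follows; the entry layout and function signature are assumptions for illustration, not the real CertUtils API:

```python
from datetime import datetime

# Illustrative certificate entries: SANs plus notBefore and file mtime metadata.
certs = [
    {"path": "a/fullchain.pem", "sans": ["cloud.example.org"],
     "not_before": datetime(2025, 9, 1), "mtime": 100},
    {"path": "b/fullchain.pem", "sans": ["*.example.org"],
     "not_before": datetime(2025, 10, 1), "mtime": 200},
    {"path": "c/fullchain.pem", "sans": ["cloud.example.org"],
     "not_before": datetime(2025, 10, 1), "mtime": 300},
]

def find_cert_for_domain(domain, entries):
    """Exact SAN matches beat wildcards; then newest notBefore, then mtime."""
    wildcard = "*." + domain.split(".", 1)[1]
    for candidates in (
        [e for e in entries if domain in e["sans"]],      # exact matches first
        [e for e in entries if wildcard in e["sans"]],    # wildcard fallback
    ):
        if candidates:
            return max(candidates, key=lambda e: (e["not_before"], e["mtime"]))
    return None

print(find_cert_for_domain("cloud.example.org", certs)["path"])  # 'c/fullchain.pem'
```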
e6f4f3a6a4 feat(cli/build/defaults): ensure deterministic alphabetical sorting for applications and users
- Added sorting by application key and user key before YAML output.
- Ensures stable and reproducible file generation across runs.
- Added comprehensive unit tests verifying key order and output stability.

See: https://chatgpt.com/share/68ef4778-a848-800f-a50b-a46a3b878797
2025-10-15 09:04:39 +02:00
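A minimal sketch of the stable-output idea using PyYAML, assuming a plain dict structure for applications and users (the data shape is illustrative):

```python
import yaml  # PyYAML

defaults = {
    "users": {"zulu": {"uid": 1002}, "alpha": {"uid": 1001}},
    "applications": {"web-app-nextcloud": {}, "web-app-gitea": {}},
}

# Sort application and user keys alphabetically before dumping so repeated
# runs emit byte-for-byte identical files (illustrative of the build step).
stable = {
    section: {key: defaults[section][key] for key in sorted(defaults[section])}
    for section in sorted(defaults)
}
print(yaml.safe_dump(stable, sort_keys=False, default_flow_style=False))
```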
a80b26ed9e Moved bbb database seeding 2025-10-15 08:50:21 +02:00
45ec7b0ead Optimized include text 2025-10-15 08:39:37 +02:00
ec396d130c Optimized time schedule 2025-10-15 08:37:51 +02:00
93c2fbedd7 Added setting of timezone 2025-10-15 02:24:25 +02:00
d006f0ba5e Optimized schedule 2025-10-15 02:13:13 +02:00
dd43722e02 Raised memory for baserow 2025-10-14 21:59:10 +02:00
05d7ddc491 svc-bkp-rmt-2-loc: migrate pull script to Python + add unit tests; lock down backup-provider ACLs
- Replace Bash pull-specific-host.sh with Python pull-specific-host.py (argparse, identical logic)
- Update role vars and runner template to call python script
- Add __init__.py files for test discovery/imports
- Add unittest: tests/unit/roles/svc-bkp-rmt-2-loc/files/test_pull_specific_host.py (mocks subprocess/os/time; covers success, no types, find-fail, retry-exhaustion)
- Backup provider SSH wrapper: align allowed ls path (backup-docker-to-local)
- Split user role tasks: 01_core (sudoers), 02_permissions_ssh (SSH keys + wrapper), 03_permissions_folders (ownership + default ACLs + depth-limited chown/chmod)
- Ensure default ACLs grant rwx to 'backup' and none to group/other; keep sudo rsync working

Ref: ChatGPT discussion (2025-10-14) — https://chatgpt.com/share/68ee920a-9b98-800f-8806-ddcfe0255149
2025-10-14 20:10:49 +02:00
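A rough sketch of the argparse-plus-retry shape described above; the flag names, retry counts, and rsync invocation are illustrative assumptions, not the actual pull-specific-host.py interface:

```python
import argparse
import subprocess
import time

def pull_with_retries(host, backup_type, retries=3, delay=30):
    """Retry a pull until success or exhaustion (illustrative logic only)."""
    for attempt in range(1, retries + 1):
        result = subprocess.run(
            ["rsync", "-a", f"backup@{host}:{backup_type}/", "./"],
            capture_output=True, text=True)
        if result.returncode == 0:
            return True
        print(f"attempt {attempt} failed: {result.stderr.strip()}")
        time.sleep(delay)
    return False

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Pull backups from a specific host")
    parser.add_argument("host")
    parser.add_argument("--types", nargs="*", default=["backup-docker-to-local"])
    args = parser.parse_args()
    ok = all(pull_with_retries(args.host, t) for t in args.types)
    raise SystemExit(0 if ok else 1)
```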
e54436821c Refactor sys-front-inj-all dependencies handling
Moved CDN and logout role inclusions into a dedicated '01_dependencies.yml' file for better modularity and reusability.
Added variable injection support via 'vars:' to allow flexible configuration like 'proxy_extra_configuration'.

See: https://chatgpt.com/share/68ee880d-cd80-800f-8dda-9e981631a5c7
2025-10-14 19:27:56 +02:00
ed73a37795 Improve get_app_conf robustness and add skip_missing_app parameter support
- Added new optional parameter 'skip_missing_app' to get_app_conf() in module_utils/config_utils.py to safely return defaults when applications are missing.
- Updated group_vars/all/00_general.yml and roles/web-app-nextcloud/config/main.yml to include skip_missing_app=True in all Nextcloud-related calls.
- Added comprehensive unit tests under tests/unit/module_utils/test_config_utils.py covering missing app handling, schema enforcement, nested lists, and index edge cases.

Ref: https://chatgpt.com/share/68ee6b5c-6db0-800f-bc20-d51470d7b39f
2025-10-14 17:25:37 +02:00
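The described behaviour, sketched: when the application key is absent and skip_missing_app is set, the default is returned instead of raising. The signature below only approximates the real module_utils/config_utils.py API:

```python
class AppConfigKeyError(KeyError):
    pass

def get_app_conf(applications, application_id, key, strict=True,
                 default=None, skip_missing_app=False):
    """Resolve dotted keys; optionally tolerate a missing application entirely."""
    if application_id not in applications:
        if skip_missing_app:
            return default            # safe fallback instead of failing the play
        raise AppConfigKeyError(f"Application '{application_id}' not found")
    node = applications[application_id]
    for part in key.split("."):
        if not isinstance(node, dict) or part not in node:
            if strict:
                raise AppConfigKeyError(f"Missing key '{key}' in '{application_id}'")
            return default
        node = node[part]
    return node

apps = {"web-app-nextcloud": {"features": {"oidc": True}}}
print(get_app_conf(apps, "web-app-nextcloud", "features.oidc"))                    # True
print(get_app_conf(apps, "web-app-talk", "features.oidc", skip_missing_app=True))  # None
```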
adff9271fd Solved rmt backup bugs 2025-10-14 16:29:42 +02:00
2f0fb2cb69 Merged network definitions before application definitions 2025-10-14 15:52:28 +02:00
6abf2629e0 Removed false - 2025-10-13 19:03:23 +02:00
6a8e0f38d8 fix(web-svc-collabora): add required Docker capabilities and resource limits for Collabora Jails
- Added security_opt (seccomp=unconfined, apparmor=unconfined) and cap_add (MKNOD, SYS_CHROOT, SETUID, SETGID, FOWNER)
  to allow Collabora's sandbox (coolmount/systemplate) to mount and chroot properly
- Increased resource limits (2 CPUs, 2 GB RAM, 2048 PIDs) to prevent document timeout and OOM issues
- Resolves 'coolmount: Operation not permitted' and systemplate performance warnings

Refs: https://chatgpt.com/share/68ed03cd-1afc-800f-904e-d1c1cb133914
2025-10-13 15:52:50 +02:00
ae618cbf19 refactor(web-app-desktop, web-app-discourse): improve initialization handling and HTTP readiness check
- Added HTTP readiness check for Desktop application to ensure all logos can be downloaded during initialization
- Introduced 'http_port' variable for better readability
- Simplified role execution structure by moving run_once inclusion into core task file
- Adjusted docker compose handler flushing behavior
- Applied consistent structure to Discourse role

See: https://chatgpt.com/share/68ed02aa-b44c-800f-a125-de8600b102d4
2025-10-13 15:48:26 +02:00
c835ca8f2c refactor(front-injection): stabilize run_once flow and explicit web-service loading
- sys-front-inj-all: load web-svc-cdn and web-svc-logout once; reinitialize inj_enabled after services; move run_once block to top; reorder injections.
- sys-front-inj-css: move run_once call into 01_core; fix app_style_present default; simplify main.
- sys-front-inj-desktop/js/matomo: deactivate per-role run_once blocks; keep utils/run_once where appropriate.
- sys-front-inj-logout: switch to files/logout.js + copy; update head_sub mtime lookup; mark set_fact tasks unchanged.
- sys-svc-cdn: inline former 01_core tasks into main; ensure shared/vendor dirs and set run_once in guarded block; remove 01_core.yml.

Rationale: prevent cascading 'false_condition: run_once_sys_svc_cdn is not defined' skips by setting run-once facts only after the necessary tasks and avoiding parent-scope guards; improves determinism and handler flushing.

Conversation: https://chatgpt.com/share/68ecfaa5-94a0-800f-b1b6-2b969074651f
2025-10-13 15:12:23 +02:00
087175a3c7 Solved mailu token bug 2025-10-13 10:50:59 +02:00
3da645f3b8 Mailu/MSMTP: split token mgmt, idempotent reload, safer guards
• Rename: 02_create-user.yml → 02_manage_user.yml; 03_create-token.yml → 03a_manage_user_token.yml + 03b_create_user_token.yml
• Only (re)run sys-svc-msmtp when no-reply token exists; set run_once_sys_svc_msmtp=true in 01_core
• Reset by setting run_once_sys_svc_msmtp=false after creating no-reply token; then include sys-svc-msmtp
• Harden when-guards (no '{{ }}' in when, safe .get lookups)
• Minor formatting and failed_when readability

Conversation: https://chatgpt.com/share/68ebd196-a264-800f-a215-3a89d0f96c79
2025-10-12 18:05:00 +02:00
a996e2190f feat(logout): wire injector to web-svc-logout and add robust CORS/CSP for /logout
- sys-front-inj-logout: depend on web-svc-logout (run-once guarded) and simplify task flow.
- web-svc-logout: align feature flags/formatting and extend CSP:
  - add cdn.jsdelivr.net to connect/script/style and quote values.
- Nginx: move CORS config into logout-proxy.conf.j2 with dynamic vars:
  - Access-Control-Allow-Origin set to canonical logout origin,
  - Allow-Credentials=true,
  - Allow-Methods=GET, OPTIONS,
  - basic headers list (Accept, Authorization),
  - cache disabled for /logout responses.
- Drop obsolete CORS var passing from 01_core.yml; headers now templated at proxy layer.

Prepares clean cross-origin logout orchestration from https://logout.veen.world.

Refs: ChatGPT discussion – https://chatgpt.com/share/68ebb75f-0170-800f-93c5-e5cb438b8ed4
2025-10-12 16:16:47 +02:00
7dccffd52d Optimized variable whitespacing 2025-10-12 14:37:45 +02:00
853f2c3e2d Added mastodon to disc space cleanup script 2025-10-12 14:37:19 +02:00
b2978a3141 Changed mailu token name 2025-10-12 14:36:13 +02:00
0e0b703ccd Added cleanup to mastodon 2025-10-12 14:03:28 +02:00
0b86b2f057 Set MODE_CLEANUP default true and solved tld localhost user name bug 2025-10-12 01:15:52 +02:00
80e048a274 Fix: make EspoCRM entrypoint POSIX-compliant and remove illegal 'pipefail' usage
The previous entrypoint script used 'set -euo pipefail', which caused runtime errors
because /bin/sh (Dash/BusyBox) does not support 'pipefail'. This commit makes the
entrypoint fully POSIX-safe, adds robust fallbacks for missing scripts, and improves
logging. Also removes a trailing newline in the navigator Docker Compose template.

Related ChatGPT discussion: https://chatgpt.com/share/68eab0b7-7a64-800f-a8aa-e7d7262a262e
2025-10-11 21:33:07 +02:00
2610aec293 Deactivated cakeday plugin because it's an onboard plugin 2025-10-11 18:23:38 +02:00
07db162368 Reformatted navigation role 2025-10-11 18:04:58 +02:00
a526d1adc4 Solved Keycloak Master Email Configuration Update settings 2025-10-11 16:57:36 +02:00
ca95079111 Added Email Configuration for Keycloak Master Realm 2025-10-11 16:45:50 +02:00
e410d66cb4 Add health check for Keycloak container and grant global 'admin' realm role to permanent admin user
This update waits for the Keycloak container to become healthy before attempting login and replaces the old realm-management based role assignment with the global 'admin' realm role.
See: https://chatgpt.com/share/68e99953-e988-800f-8b82-9ffb14c11910
2025-10-11 01:40:48 +02:00
ab48cf522f fix(keycloak): make permanent admin creation idempotent and fix password command
- prevent task failure when 'User exists with same username'
- remove invalid '--temporary false' flag from set-password command
- ensure realm-admin role grant remains idempotent

See: https://chatgpt.com/share/68e99271-fdb0-800f-a8ad-11c15d02a670
2025-10-11 01:10:57 +02:00
41c12bdc12 Refactor Keycloak permanent admin creation:
- Remove jq dependency to avoid container command errors
- Use username-based operations instead of user ID lookups
- Improve idempotency and portability across environments

See: https://chatgpt.com/share/68e98e77-9b3c-800f-8393-71b0be22cb46
2025-10-11 00:54:04 +02:00
aae463b602 Optimized keycloak setup procedures 2025-10-11 00:20:27 +02:00
bb50551533 Restructured Keycloak accounts 2025-10-10 23:40:58 +02:00
098099b41e Refactored web-app-keycloak 2025-10-10 22:45:26 +02:00
0a7d767252 Added contains to filter 2025-10-10 22:16:23 +02:00
d88599f76c Added 'to_nice_json' exception 2025-10-10 22:16:22 +02:00
4d9890406e fix(sys-ctl-hlth-csp): ensure '--' separator is added when passing ignore list to checkcsp
Updated README to reflect correct usage with '--', adjusted script.py to always append separator, and simplified task template handling for consistency.

Ref: https://chatgpt.com/share/68dfc69b-7c94-800f-871b-3525deb8e374
2025-10-03 20:50:49 +02:00
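A hedged sketch of the argv assembly this fix implies: the ignore list is forwarded and '--' is always appended before the positional domains so they are never parsed as options. Flag and command names here are illustrative, not checkcsp's documented interface:

```python
def build_command(binary, domains, ignore_network_blocks_from=()):
    """Assemble an argv list with an unconditional '--' separator (illustrative)."""
    cmd = [binary]
    for domain in ignore_network_blocks_from:
        cmd += ["--ignore-network-blocks-from", domain]   # hypothetical flag name
    cmd.append("--")                                      # separator added unconditionally
    cmd += list(domains)
    return cmd

print(build_command("checkcsp", ["app.example.org"], ["cdn.jsdelivr.net"]))
```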
59b652958f feat(sys-ctl-hlth-csp): add support for ignoring network block domains
Introduced new variable HEALTH_CSP_IGNORE_NETWORK_BLOCKS_FROM (list, default [])
to suppress network block reports (e.g., ORB) from specific external domains.
Updated script.py to accept and forward the flag, extended systemd exec command
in tasks, added defaults, and documented usage in README.

Ref: https://chatgpt.com/share/68dfc69b-7c94-800f-871b-3525deb8e374
2025-10-03 15:23:57 +02:00
a327adf8db Removed failing healthcheck 2025-10-02 20:02:10 +02:00
7a38cb90fb Added correct resources for baserow 2025-10-02 19:59:04 +02:00
9d6cf03f5b Fix: Replace unsupported /dev/tcp healthcheck with onboard PHP socket check for websocket service
Replaced the previous shell-based /dev/tcp healthcheck with a PHP fsockopen() test to ensure compatibility with minimal base images. This avoids dependency on missing tools like nc or curl and provides a reliable onboard check.

Conversation: https://chatgpt.com/share/68deb8ec-d920-800f-bd35-2869544fe30f
2025-10-02 19:40:13 +02:00
9439ac7f76 fix(web-app-xwiki): raise XWiki container resources and align YAML formatting
- Set cpus=1.0, mem_reservation=1g, mem_limit=2g, pids_limit=1024
- Keep LTS image/tag templating and Postgres type
- Normalize spacing/alignment for readability

Reason: Tomcat/XWiki needs >1 GB; low limits caused slow boots/502 upstream not ready.
Conversation: https://chatgpt.com/share/68de5266-c8a0-800f-bfbc-de85262de53e
2025-10-02 12:22:49 +02:00
23353ac878 infra(sys-service): centralize async control + pre-deploy backup safeguard
- Added MODE_BACKUP to trigger backup before the rest of the deployment

- sys-ctl-bkp-docker-2-loc: force linear sync and force flush when MODE_BACKUP is true

- Unified name resolution via system_service_name across handlers and tasks

- Introduced system_service_force_linear_sync and system_service_force_flush (rename from system_force_flush)

- Drive async/poll via system_service_async/system_service_poll using omit when disabled

- Propagated per-role overrides (cleanup, repair, cert tasks) for clarity and safety

- Minor formatting and consistency cleanups

Why: Ensure the backup runs before the deployment routine to safeguard data integrity.

Refs: Conversation https://chatgpt.com/share/68de4c41-b6e4-800f-85cd-ce6949097b5e
Signed-off-by: Kevin Veen-Birkenbach <kevin@veen.world>
2025-10-02 11:58:23 +02:00
8beda2d45d fix(svc-db-postgres): pin Postgres version to 17-3.5, add entity_name var, and dynamically resolve major version for dev package
- Changed default Docker image version from 'latest' to '17-3.5' in config
- Introduced entity_name var for consistent lookups
- Added POSTGRES_VERSION and POSTGRES_VERSION_MAJOR extraction
- Updated Dockerfile to install postgresql-server-dev-<major> with default fallback to 'all'
- Minor YAML formatting improvements

Ref: https://chatgpt.com/share/68de40b4-2eb8-800f-ab5b-11cc873c3604
2025-10-02 11:07:17 +02:00
5773409bd7 Changed nextcloud domain to next.cloud.primary_domain 2025-10-02 09:19:32 +02:00
b3ea962338 Implemented sleeping time for server 2025-10-02 09:08:32 +02:00
b9fbf92461 proxy(cors): make ACAO opt-in; remove hardcoded default
Stop forcing Access-Control-Allow-Origin to $scheme://$host. This default broke Element (element.infinito.nexus) -> Synapse (matrix.infinito.nexus) CORS and blocked login. Now ACAO is only set when 'aca_origin' is provided; otherwise we defer to the upstream app (e.g., Synapse) to emit correct CORS headers. Also convert top comments to Jinja block comment.

Discussion & debugging details: https://chatgpt.com/share/68de2236-4aec-800f-adc5-d025922c8753
2025-10-02 08:57:14 +02:00
6824e444b0 Changed bitnami images to legacy. See https://github.com/bitnami/containers/issues/83267. 2025-10-02 07:31:20 +02:00
5cdcc18a99 Fix PeerTube OIDC plugin automation
- Store oidc_settings as proper YAML dict with correct keys
- Ensure plugin is installed only if missing
- Update DB settings as jsonb and enforce enabled/uninstalled state
- Add CLI enforcement for plugin activation
- Correct task conditions (enable/disable logic) with boolean filters

Ref: https://chatgpt.com/share/68dd1d16-9b34-800f-b2bf-a3fe058f25b1
2025-10-01 14:23:07 +02:00
e7702948b8 EspoCRM role: custom image + single data volume + runtime flag setter
• Build a custom image and replace upstream entrypoint with docker-entrypoint-custom.sh (strict fail on flag script).

• Introduce set_flags.php and wire via ESPOCRM_SET_FLAGS_SCRIPT; apply flags at container start; clear cache afterwards.

• Keep exactly one Docker volume (data:/var/www/html/); drop separate custom/extensions mounts.

• Compose: use custom image, add healthchecks & depends_on for daemon/websocket; keep service healthy gating.

• Ansible: deploy scripts, build & up via handlers; patch siteUrl as www-data; run upgrade non-fatal; always run flag setter.

• Vars/Env: add ESPO_INIT_* toggles and ESPOCRM_SET_FLAGS_SCRIPT; refactor variables for scripts & custom image paths.

Conversation context: https://chatgpt.com/share/68dd1992-020c-800f-bcf5-2db60cb4aab2
2025-10-01 14:08:09 +02:00
09a4c243d7 Add centralized include for Access-Control-Allow headers across proxy/service Nginx templates and align ACA vars for simpleicons task.
Ref: https://chatgpt.com/share/68dbf59c-f424-800f-aa27-787db52e260f
2025-09-30 17:22:28 +02:00
1d5a50abf2 Optimized path building 2025-09-30 16:36:28 +02:00
0d99c7f297 Nextcloud: refactor Talk → HPB, switch to bridge mode, and template cleanups
- Change Talk (HPB) network_mode from host → bridge and drop TURN relay range mapping
- Remove obsolete nginx restart handler; rely on 'docker compose up' notify
- Fix spreed task condition to use HPB standalone flag
- docker-compose.yml.j2: parameterize service names, use NEXTCLOUD_*_SERVICE vars, align host-gateway condition with HPB, tidy ports/expose/network blocks
- env.j2/nginx configs: rename TALK_* → HPB_* variables and locations; use templated NEXTCLOUD_SERVICE for php upstream
- vars: introduce entity_name; centralize *SERVICE keys; rename all Talk vars to HPB; adjust whiteboard keys; compute URLs/JSON configs accordingly
- spreed plugin vars: point to HPB signaling/STUN/TURN and internal secret

Ref: https://chatgpt.com/share/68db9f41-16ec-800f-9cdf-7530862f89aa
2025-09-30 12:52:15 +02:00
0a17e54d8c Nextcloud: set conservative Docker resource limits and template cleanups
- Add CPU/memory/PID limits for redis, database, proxy, cron, talk, whiteboard
- Keep nextcloud service unchanged except existing settings
- Normalize service_name templating and indentation in docker-compose.yml.j2
- Mount Janus config for Talk via volume

Ref: https://chatgpt.com/share/68db9f41-16ec-800f-9cdf-7530862f89aa
2025-09-30 11:54:14 +02:00
bf94338845 Nextcloud/Nginx: wire Talk signaling WS location via reusable snippet
Conditionally include the generic WebSocket proxy block for NEXTCLOUD_TALK_SIGNALING_ENABLED. Set location_ws to '^~ <location>' and ws_port to NEXTCLOUD_PORT, then include roles/sys-svc-proxy/templates/location/ws.conf.j2. This enables proper Upgrade/Connection headers and disables buffering for the signaling path.

Context: https://chatgpt.com/share/68db9f41-16ec-800f-9cdf-7530862f89aa
2025-09-30 11:17:54 +02:00
5d42b78b3d Nextcloud: extend CSP for Talk & disable keeporsweep
CSP: add cloud.<PRIMARY_DOMAIN> to connect-src and frame-src (both HTTP and WS) and allow worker-src 'blob:' for web workers used by Talk/Collabora.

Apps: disable keeporsweep (installation no longer possible) and document reason.

Context: https://chatgpt.com/share/68db9f41-16ec-800f-9cdf-7530862f89aa
2025-09-30 11:15:32 +02:00
26a1992d84 Nextcloud/Talk: add Janus config & fix WebSocket proxying
Nginx: define 'map $http_upgrade $connection_upgrade' once in http{} and reuse; drop duplicate map from ws_generic vhost; tidy ws location headers/spacing. Nextcloud: add WS location for standalone signaling; render & mount Janus config (NAT 1:1, ICE enforce/ignore lists, libnice hardening); extend CSP (connect-src/frame-src for cloud & collabora, worker-src blob:); disable keeporsweep app; replace nginx reload handler with compose up; add NEXTCLOUD_HOST_JANUS_CONF_PATH and related vars.

Context: https://chatgpt.com/share/68db9f41-16ec-800f-9cdf-7530862f89aa
2025-09-30 11:14:15 +02:00
2439beb95a Added correct minio http statuscodes 2025-09-29 17:29:29 +02:00
251f7b227d Add healthchecks for all Taiga services, fix RabbitMQ env var names, and define TAIGA_HOSTNAME
Details:
- Implemented healthchecks for taiga, async, rabbitmq, front, events, protected, and gateway
- Corrected RabbitMQ env variables (RABBITMQ_DEFAULT_USER/PASS/VHOST/ERLANG_COOKIE)
- Added TAIGA_HOSTNAME for backend service

See: https://chatgpt.com/share/68da9d6b-b164-800f-bcb7-410b40219a1e
2025-09-29 17:09:42 +02:00
3fbb9c38a8 Solved coturn volume bug 2025-09-29 15:33:50 +02:00
29e8b3a590 Deactivated recording for Big Blue Button 2025-09-29 15:23:48 +02:00
27b89d8fb6 Taiga: refactor service naming & resource limits
Add CPU/memory/pids limits for taiga, async, front, gateway, events, async-rabbitmq, events-rabbitmq, manager, and protected. Align manager service usage (was taiga-manage) in admin tasks and inits compose. Switch to variable-driven service names (TAIGA_* vars), add container_name patterns, normalize volume mappings via TAIGA_VOLUME_STATIC/MEDIA, fix depends_on to use TAIGA_* vars, and set RabbitMQ hostnames from vars. Remove obsolete Development.md.

Conversation reference: https://chatgpt.com/share/68da83b7-0cb4-800f-9702-d8a2d4ebea71
2025-09-29 15:04:12 +02:00
55f2d15e93 Fix coturn container/volume separation: use COTURN_CONTAINER for container_name and map COTURN_VOLUME to /var/lib/coturn
Details:
- Removed anonymous volume
- Renamed container_name variable to COTURN_CONTAINER
- Added dedicated COTURN_VOLUME with _data suffix for persistence
- Mount COTURN_VOLUME into /var/lib/coturn

Reference: https://chatgpt.com/share/68da6f12-b238-800f-932b-e37c8a50dddd
2025-09-29 13:36:30 +02:00
aa19a97ed6 CORS/CSP hardening & centralization
- Add reusable Nginx include: roles/sys-svc-proxy/templates/headers/access_control_allow.conf.j2
  (dynamic ACAO/credentials/methods/headers via role vars)
- Set global 'Vary: Origin' in nginx.conf.j2 to prevent cache poisoning
- CSP: allow Simple Icons via connect-src when feature is enabled
- Front proxy: rename vars to lowercase + flush handlers after config deploy
- Desktop: gate & load Simple Icons role; inject brand logos when enabled
- Bluesky + Logout: replace inline CORS with centralized include
- Simpleicons: public CORS (ACAO='*', no credentials), keep GET/OPTIONS, allow headers
- Taiga: adjust canonical domain to taiga.kanban.{{ PRIMARY_DOMAIN }}
- LibreTranslate: remove unused images/versions keys

Fixes: https://open.project.infinito.nexus/projects/cymais/work_packages/342/activity
Discussion: https://chatgpt.com/share/68da5e27-ffd4-800f-91a3-0ef103058d44
2025-09-29 12:23:58 +02:00
c06d1c4d17 Refactor yay update handling:
- Move AUR update task into dev-yay role
- Centralize defaults (AUR_HELPER, AUR_BUILDER_USER, etc.)
- Remove separate update-yay role (redundant)

See conversation with ChatGPT https://chatgpt.com/share/68da3219-6d78-800f-92ad-0a5061bac8be and related work item:
https://open.project.infinito.nexus/projects/cymais/work_packages/341/activity
2025-09-29 09:16:02 +02:00
66f294537d Replaced fixed 'web' service call for exec with 'ESPOCRM_SERVICE' variable for exec call 2025-09-28 15:52:44 +02:00
a9097a3ec3 web-app-espocrm: add resource limits, init/stop settings and cleanups
- Added CPU, memory and PID limits for espocrm, daemon and websocket services
- Enabled init process and graceful stop (SIGTERM, 30s) in docker-compose
- Adjusted env template (removed forced True/default flags)
- Introduced entity_name/ESPOCRM_SERVICE in vars for service naming
- Minor cleanup of get_app_conf defaults

Ref: https://chatgpt.com/share/68d937ce-9c34-800f-9136-54baed9c91c7
2025-09-28 15:50:28 +02:00
fc59c64273 Nextcloud Talk: fix virtual-background web check by
- adding explicit MIME types for .wasm and .tflite in internal Nginx
- relaxing CSP (script-src: allow 'unsafe-eval') for WebAssembly
- removing obsolete turnserver draft.
Details: https://chatgpt.com/share/68d7dd39-50b8-800f-ab59-cfb1d3cf07cb
2025-09-27 14:49:42 +02:00
dbbb3510f3 Refactor TURN/STUN config:
- Removed ?transport=udp from Nextcloud Talk TURN server definitions
- Dropped --no-tcp-relay to allow TCP fallback
- Removed invalid UDP mapping on TLS port
- Introduced switch between REST secret auth and lt-cred-mech via COTURN_USER_AUTH_ENABLED
- Added user_auth_enabled flag in coturn config for flexibility

See: https://chatgpt.com/share/68d7d601-3558-800f-bc84-00d7e8fc3243
2025-09-27 14:18:29 +02:00
eb3bf543a4 Removed turn and stun protocol prefix 2025-09-27 13:58:46 +02:00
4f5602c791 Nextcloud Talk: fix TURN/STUN config
- Removed duplicate Admin Manual link in README
- Fixed turnserver.config.php draft return syntax
- Unified onboard port handling in docker-compose and env
- Updated vars to define STUN/TURN configs with correct schemas
- Ensured spreed plugin config serializes clean JSON arrays

Ref: https://chatgpt.com/share/68d7cfa2-7378-800f-9ecf-09b6bb768f13
2025-09-27 13:51:17 +02:00
75d476267e Optimized Nextcloud variables 2025-09-27 12:14:57 +02:00
c3e5db7f2e Cleaned up LDAP entries to keep them tidy 2025-09-27 11:30:39 +02:00
dfd2d243b7 Enabled recordings for BBB because https://github.com/bigbluebutton/bigbluebutton/issues/9202 was solved 2025-09-27 11:28:07 +02:00
78ad2ea4b6 nextcloud(spreed): output valid JSON via to_json for signaling/stun/turn; keep internal_secret plain https://chatgpt.com/share/68d75f71-6de8-800f-854c-207771c8d883 2025-09-27 05:52:32 +02:00
c362e160fc Nextcloud: switch Talk to host networking; update proxy routing and compose; centralize Talk secrets & spreed config; remove Greenlight block
Conversation: https://chatgpt.com/share/68d74e25-c068-800f-ae20-d0e34ac8ee12
2025-09-27 05:03:48 +02:00
a044028e03 Nextcloud Talk integration cleanup: unify secrets and signaling config
- Replace inline get_app_conf secrets in env.j2 with dedicated vars (TURN, signaling, internal)
- Correctly model signaling_servers as object {servers, secret} in spreed.yml
- Use UDP stun_turn port instead of TLS for transport=udp
- Add fallback logic for standalone Coturn role in main.yml
- Remove obsolete Greenlight section from BBB override

Ref: https://chatgpt.com/share/68d74e25-c068-800f-ae20-d0e34ac8ee12
2025-09-27 04:39:11 +02:00
7405883b48 BigBlueButton & Nextcloud:
- Switch to custom BBB Docker repository
- Externalize Coturn and Collabora by default
- Add dedicated 03_dependencies.yml for dependency handling
- Improve env templating with lowercased feature flags
- Add conditional healthcheck for Greenlight
- Refactor TURN/STUN/relay handling with role variable _BBB_COTURN_ROLE
- Extend Collabora/Greenlight dependency wiring in override file
- Nextcloud Talk: refine vars and enable/disable logic with separate plugin/service flags, add network_mode support and conditional nginx proxy block

Ref: https://chatgpt.com/share/68d741ff-a544-800f-9e81-a565e0bab0eb
2025-09-27 03:46:57 +02:00
85db0a40db Refactor Coturn port configuration: unify STUN and TURN into stun_turn and stun_turn_tls, update vars, docker-compose template, and add robust healthcheck [https://chatgpt.com/share/68d73a2d-ef34-800f-90d2-1628822ca541] 2025-09-27 03:14:53 +02:00
8af39c32ec Override docker conf variables from parents 2025-09-27 02:41:13 +02:00
31e86ac0fc Optimized networks 2025-09-27 02:21:19 +02:00
4d223f1784 feat(web-svc-coturn): add configurable network_mode (default host) and adjust credential generation
- Introduced `COTURN_NETWORK_MODE` to support both host and bridge modes
- Updated docker-compose template to skip port publishing in host mode
- Changed user_password credential algorithm to random_hex for stronger randomness
- Set default network_mode: host in config

Ref: https://chatgpt.com/share/68d72a50-c36c-800f-9367-32c4ae520000
2025-09-27 02:05:48 +02:00
926def3d01 web-svc-coturn: Add resource limits and fix docker-compose template
- Set CPU, memory reservation/limit, and PID limit for coturn
- Ensure docker_compose_file_creation_enabled and disable git repo pulling
- Move certificate mounts to volumes and fix env var interpolation in command
- Correct realm and user formatting

See: https://chatgpt.com/share/66f65f18-799c-800a-95f4-b6b26511e9cb
2025-09-27 01:40:37 +02:00
083b7d2914 Refactor relay port ranges: set coturn to 20000–39999, BigBlueButton to 40000–49999, and Nextcloud to 50000–59999
See: https://chatgpt.com/share/68d6f6c2-f7c0-800f-bc2e-10876afff4a8
2025-09-26 22:26:44 +02:00
73a38e0b2b Refactor TURN/STUN handling:
- Split internal/external Coturn for BBB and Nextcloud
- Added dedicated relay port ranges per app
- Updated env and compose overrides for coturn
- Ensure coturn role is loaded conditionally
- Standardize credential/env passing for coturn
@See https://chatgpt.com/share/68d6f376-4878-800f-b4f7-62822caa49ea
2025-09-26 22:11:55 +02:00
e3c0880e98 Fix semi-stateless run_once label and load both docker-compose & openresty handlers
See: https://chatgpt.com/share/68d6f2f3-59a4-800f-b20e-ed1c7df9e2ff
2025-09-26 22:09:39 +02:00
a817d964e4 refactor(front-stack): introduce sys-stk-front-base and semi-stateless stack; improve coturn role docs
- Extract common HTTPS + Cloudflare + handler bootstrap into new role sys-stk-front-base
- Update sys-stk-front-proxy, web-svc-cdn, web-svc-file, web-svc-html to depend on sys-stk-front-base
- Add new sys-stk-semi-stateless role combining front-base + back-stateless
- Update web-svc-coturn to use sys-stk-semi-stateless and rewrite README/meta with detailed Coturn description
- Unify sys-util-csp-cert README heading

Ref: ChatGPT conversation https://chatgpt.com/share/68d6cea2-3570-800f-acb3-c3277317f17b
2025-09-26 20:25:53 +02:00
7572134e9d Removed leftover 2025-09-26 19:39:38 +02:00
97af4990aa refactor(webserver): rename roles and update references
- Rename sys-svc-webserver -> sys-svc-webserver-core
- Rename sys-stk-front-pure -> sys-svc-webserver-https
- Update includes, run_once flags, and docs across:
  * sys-ctl-mtn-cert-renew
  * sys-front-inj-*
  * sys-stk-front-proxy
  * sys-svc-certs
  * sys-svc-cln-domains
  * web-opt-rdr-*
  * web-svc-*
- Remove redundant webserver include in web-opt-rdr-www
- Fix documentation links

Ref: ChatGPT conversation https://chatgpt.com/share/68d6cea2-3570-800f-acb3-c3277317f17b
2025-09-26 19:34:42 +02:00
b6d0535173 Cleaned up comment 2025-09-26 18:53:46 +02:00
27d33435f8 fix(bbb): align TURN/STUN configuration with shared coturn service
- added entity_name to vars for consistent docker.service lookup
- switched docker_repository_* vars to use entity_name dynamically
- introduced BBB_TURN_DOMAIN, BBB_TURN_PORT, and BBB_STUN_PORT
  → fallback to web-svc-coturn when BBB_COTURN_ENABLED is false
- updated env.j2 to use new BBB_TURN_* vars instead of hardcoded domain/ports
- cleaned up obsolete comments and spacing

Conversation: https://chatgpt.com/share/68d6c4a8-d524-800f-9592-e8a3407cd721
2025-09-26 18:53:21 +02:00
3cc4014edf feat(coturn): add dedicated web-svc-coturn role with schema, ports, network, and docker-compose template
- registered subnet 192.168.104.48/28 for coturn in group_vars/all/09_networks.yml
- defined public ports for stun/turn and relay port range in group_vars/all/10_ports.yml
- removed obsolete TODO.md and env.j2 from role
- added schema/main.yml with credentials validation (user_password, auth_secret)
- refactored tasks to load sys-stk-back-stateless instead of sys-stk-full-stateful
- implemented docker-compose.yml.j2 with auth-secret + lt-cred-mech and TLS config
- restructured vars/main.yml with docker, ports, credentials, and certificates
- updated config/main.yml.j2 with canonical domain and service definitions

Conversation: https://chatgpt.com/share/68d6c4a8-d524-800f-9592-e8a3407cd721
2025-09-26 18:52:13 +02:00
63da669c33 Removed unnecessary 'sys-svc-proxy' wrapper tasks 2025-09-26 18:50:12 +02:00
fb04a4c7a0 Restructured Front Proxy variables 2025-09-26 18:43:09 +02:00
2968ac7f0a Removed unnecessary sys-svc-webserver import 2025-09-26 18:22:46 +02:00
1daa53017e Refactor BigBlueButton role:
- Aligned schema/main.yml credential definitions with consistent spacing
- Changed PostgreSQL secret to use random_hex_32 instead of bcrypt
- Improved administrator creation logic in tasks/02_administrator.yml:
  * First try with primary password
  * Retry with starred password if OIDC is enabled
  * Fallback to user:set_admin_role if both fail
See: https://chatgpt.com/share/68d6aa34-19cc-800f-828a-a5121fda589f
2025-09-26 16:59:28 +02:00
9082443753 Refactor docker compose exec usage
Introduce centralized variables:
- docker_compose_command_base
- docker_compose_command_exec

Replaced hardcoded 'docker compose exec' with '{{ docker_compose_command_exec }}'
across multiple roles (BigBlueButton, EspoCRM, Friendica, Listmonk, Mailu, Matrix, OpenProject).
Ensures consistent environment file loading and reduces duplicated code.

Details: https://chatgpt.com/share/68d6a276-19d0-800f-839d-d191d97f7c41
2025-09-26 16:26:17 +02:00
bcee1fecdf feat(inventory): add random_hex_32 generator
feat(bbb/schema): auto-generate etherpad_api_key; set fsesl_password to alphanumeric_32
test(unit): add InventoryManager tests (Option B) expecting feature-generated creds as plain strings
docs: full autocreation of credentials for BigBlueButton now enabled
See: https://chatgpt.com/share/68d69ee8-3fd4-800f-9209-60026b338934
2025-09-26 16:11:05 +02:00
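A 32-character hex credential like the one described can be produced with the standard library; whether the inventory manager's generator works exactly this way is an assumption:

```python
import secrets

def random_hex_32() -> str:
    """16 random bytes → 32 lowercase hex characters, usable as a generated secret."""
    return secrets.token_hex(16)

print(len(random_hex_32()))  # 32
```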
0602148caa bbb: pin mediasoup to IPv4-only and single worker via compose override
Set MS_WORKERS=1, MS_ENABLE_IPV6=false, and MS_WEBRTC_LISTEN_IPS to announce only EXTERNAL_IPv4 for webrtc-sfu. Helps avoid mediasoup router init issues seen when IPv6 is present.

Context/conversation: https://chatgpt.com/share/68d69a0e-22b0-800f-890b-13721a35f51b
2025-09-26 15:50:28 +02:00
cbfb991e79 Hardened BBB Version 2025-09-26 15:21:01 +02:00
fa7b1400bd Created mail account for blackhole to prevent delivery failure messages 2025-09-26 15:11:33 +02:00
c7cae93597 Optimized IPv6 deactivation 2025-09-26 13:46:55 +02:00
6ea0d09f14 bbb: WIP—stabilize env/compose wiring & prep SFU override
Context: debugging mediasoup/WebRTC failures caused by empty/interpolated vars (EXTERNAL_IPv4, etc.).
- Normalize config/main.yml (ip6_enabled flag, enable greenlight/coturn) and tidy formatting.
- Extend vars/main.yml with BBB_* switches (IPv6, Greenlight, Coturn), TURN/Coturn cert paths.
- env.j2: wire secrets & toggles, guard IPv6 via BBB_IP6_ENABLED, switch LDAP/OIDC to role flags, add TURN/STUN, and general cleanup.
- tasks/main.yml: use BBB_* fact names, robust path joins, write docker-compose.override.yml, and notify compose on env/override changes.
- tasks/01_docker-compose.yml: reference new BBB_DOCKER_COMPOSE_* facts.
- Add templates/docker-compose.override.yml.j2 (placeholder for SFU overrides to avoid bad defaults during runs).
Rationale: make Compose bring-ups deterministic (no empty interpolations), paving the way to set MS_WEBRTC_LISTEN_IPS in the override without risk.

Chat reference: debugging thread with GPT-5 Thinking on 2025-09-26 https://chatgpt.com/share/68d59d98-4388-800f-a627-07b6a603d0b2.
2025-09-26 12:49:12 +02:00
5e4cda0ac9 Documented docker-compose.override.yml 2025-09-26 12:14:08 +02:00
1d29617f85 Added creation of docker-compose.override.yml file 2025-09-26 12:03:47 +02:00
7c5ad8e6a1 Optimized XWIKI Nextcloud Bridge 2025-09-26 09:35:14 +02:00
a26538d1b3 web-app-openproject: upgrade to OpenProject 15
- bumped image version from 14 to 15
- removed dedicated migration task (now handled by upstream entrypoints)
- renamed tasks for cleaner numbering:
  * 02_settings.yml → 01_settings.yml
  * 03_ldap.yml → 02_ldap.yml
  * 04_admin.yml → 03_admin.yml

Ref: https://chatgpt.com/share/68d57770-2430-800f-ae53-e7eda6993a8d
2025-09-25 19:39:45 +02:00
f55b0ca797 web-app-openproject: migrate from OpenProject 13 to 14
- updated base image from openproject/community:13 to openproject/openproject:14
- added dedicated migration task (db:migrate + schema cache clear)
- moved settings, ldap, and admin tasks to separate files
- adjusted docker-compose template to use OPENPROJECT_WEB_SERVICE / OPENPROJECT_SEEDER_SERVICE variables
- replaced postinstall.sh with precompile-assets.sh
- ensured depends_on uses variable-based service names

Ref: https://chatgpt.com/share/68d57770-2430-800f-ae53-e7eda6993a8d
2025-09-25 19:10:46 +02:00
6f3522dc28 fix(csp): resolve all CSP-related issues and extend webserver health checks
- Added _normalize_codes to support lists of valid HTTP status codes
- Updated web_health_expectations to handle multiple codes, deduplication, and fallback logic
- Extended unit tests with coverage for list/default combinations, invalid values, and alias behavior
- Fixed Flowise CSP flags and whitelist entries
- Adjusted Flowise, MinIO, and Pretix docker service resource limits
- Updated docker-compose templates with explicit service_name
- Corrected MinIO status_codes to 301 redirects

All CSP errors fixed.

See details: https://chatgpt.com/share/68d557ad-fc10-800f-b68b-0411d20ea6eb
2025-09-25 18:05:41 +02:00
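A sketch of the normalization described above: accept a single code or a list, deduplicate, drop invalid values, and fall back to a default. The function name follows the commit, but the body is an assumption:

```python
def _normalize_codes(value, fallback=(200,)):
    """Accept an int, str, or list of status codes; dedupe and drop invalid entries."""
    if value is None:
        return list(fallback)
    if not isinstance(value, (list, tuple, set)):
        value = [value]
    codes = []
    for item in value:
        try:
            code = int(item)
        except (TypeError, ValueError):
            continue                      # ignore invalid values
        if 100 <= code <= 599 and code not in codes:
            codes.append(code)
    return codes or list(fallback)

print(_normalize_codes(301))                       # [301]
print(_normalize_codes(["200", 200, "abc", 301]))  # [200, 301]
print(_normalize_codes(None))                      # [200]
```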
5186eb5714 Optimized OpenProject and CSP rules 2025-09-25 14:47:28 +02:00
73bcdcaf45 Deactivated proxying of bluesky web domain 2025-09-25 13:31:18 +02:00
9e402c863f Optimized Bluesky API redirect domain 2025-09-25 13:29:45 +02:00
84865d61b8 Install swapfile tool correctly 2025-09-25 13:16:13 +02:00
423850d3e6 Refactor svc-opt-swapfile role: move core logic into 01_core.yml, simplify tasks/main.yml, and integrate swapfile setup into sys-svc-docker/01_core.yml to prevent OOM failures. See https://chatgpt.com/share/68d518f2-ba0c-800f-8a3a-c6b045763ac6 2025-09-25 12:27:13 +02:00
598f4e854a Increase OpenProject container resources
- Raised web service to 3 CPUs, 3–4 GB RAM, 2048 pids
- Raised worker service to 2 CPUs, 2–3 GB RAM, 2048 pids
- Increased cache mem_reservation to 512m
- Adjusted formatting for proxy service

Ref: https://chatgpt.com/share/68d513c1-8c10-800f-bf57-351754e3f5c2
2025-09-25 12:05:03 +02:00
1f99a6b84b Refactor: force early evaluation of BlueSky redirect_domain_mappings before include_role
Ensures that redirect_domain_mappings is resolved via set_fact
before passing it into the web-opt-rdr-domains role.
See: https://chatgpt.com/share/68d51125-14f4-800f-be6a-a7be3faeb028
2025-09-25 11:55:13 +02:00
189aaaa9ec Deactivated OpenProject LDAP Administrator Flag 2025-09-25 11:10:46 +02:00
ca52dcda43 Refactor OpenProject role:
- Add CPU, memory and PID limits to all services in config/main.yml to prevent OOM
- Replace old LDAP admin bootstrap with new 02_admin.yml using OPENPROJECT_ADMINISTRATOR_* vars
- Standardize variable names (uppercase convention)
- Fix HTTPS/HSTS port check (443 instead of 433)
- Allow docker_restart_policy override in base.yml.j2
- Cleanup redundant LDAP admin runner in 01_ldap.yml
See: https://chatgpt.com/share/68d40c6e-ab9c-800f-a4a0-d9338d8c1b32
2025-09-24 17:22:47 +02:00
4f59e8e48b Added cdn.jsdelivr.net to connect-src for web-app-desktop 2025-09-24 15:35:11 +02:00
a993c153dd fix(docker-container): ensure service_name and context are passed correctly to resource.yml.j2 by switching from lookup() to include with indent filter
Ref: https://chatgpt.com/share/68d3db3d-b6b4-800f-be4b-24ac50005552
2025-09-24 13:51:44 +02:00
8d6ebb4693 Mailu/Redis: add explicit service resource limits & clamav_db volume
- use lookup(template) for redis resource injection
- add cpus/mem/pids configs for all Mailu services
- switch antivirus to dedicated clamav_db volume
- add MAILU_CLAMAV_VOLUME var
- cleanup set service_name per service in docker-compose template
https://chatgpt.com/share/68d3d69b-06f0-800f-8c4d-4a74471ab961
2025-09-24 13:31:54 +02:00
567babfdfc Fix CPU resource calculation by enforcing a minimum of 0.5 cores per container using list-based max filter. See: https://chatgpt.com/share/68d3d645-e4c4-800f-8910-b6b27bb408e7 2025-09-24 13:30:32 +02:00
18e5f001d0 Mailu: disable hardened_malloc LD_PRELOAD (set to empty) to prevent /proc/cpuinfo PermissionError in socrate startup
Details: https://chatgpt.com/share/68d3ba3b-783c-800f-bf3d-0b0ef1296f93
2025-09-24 11:31:44 +02:00
7d9cb5820f feat(jvm): add robust JVM sizing filters and apply across Confluence/Jira
Introduce filter_plugins/jvm_filters.py with jvm_max_mb/jvm_min_mb. Derive Xmx/Xms from docker mem_limit/mem_reservation using safe rules: Xmx=min(70% limit, limit-1024MB, 12288MB), floored at 1024MB; Xms=min(Xmx/2, reservation, Xmx), floored at 512MB. Parse human-readable sizes (k/m/g/t) with binary units.

Wire filters into roles: set JVM_MINIMUM_MEMORY/JVM_MAXIMUM_MEMORY via filters; stop relying on host RAM. Keep env templates simple and stable.

Add unit tests under tests/unit/filter_plugins/test_jvm_filters.py covering typical sizes, floors, caps, invalid inputs, and entity-name derivation.

Ref: https://chatgpt.com/share/68d3b9f6-8d18-800f-aa8d-8a743ddf164d
2025-09-24 11:29:40 +02:00
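The stated sizing rules are concrete enough to sketch directly; parsing and rounding details of the real filter_plugins/jvm_filters.py may differ:

```python
def _to_mb(size):
    """Parse '2g', '512m', '1024k', '1t' (binary units) into whole megabytes."""
    s = str(size).strip().lower()
    if s[-1] in "kmgt":
        value, unit = float(s[:-1]), s[-1]
    else:
        value, unit = float(s), "m"
    factor = {"k": 1 / 1024, "m": 1, "g": 1024, "t": 1024 * 1024}[unit]
    return int(value * factor)

def jvm_max_mb(mem_limit):
    """Xmx = min(70% of limit, limit - 1024 MB, 12288 MB), floored at 1024 MB."""
    limit = _to_mb(mem_limit)
    xmx = min(int(limit * 0.7), limit - 1024, 12288)
    return max(xmx, 1024)

def jvm_min_mb(mem_limit, mem_reservation):
    """Xms = min(Xmx / 2, reservation, Xmx), floored at 512 MB."""
    xmx = jvm_max_mb(mem_limit)
    xms = min(xmx // 2, _to_mb(mem_reservation), xmx)
    return max(xms, 512)

print(jvm_max_mb("8g"), jvm_min_mb("8g", "4g"))  # 5734 2867
```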
c181c7f6cd fix(webserver): ensure numeric casting for worker_processes and worker_connections
- Cast WEBSERVER_CPUS_EFFECTIVE to float before comparison to avoid
  'AnsibleUnsafeText < int' type errors.
- Ensure correct numeric coercion for pids_limit values.
- This prevents runtime templating errors when rendering nginx config.

Ref: https://chatgpt.com/share/68d3b047-56ac-800f-a73f-2fb144dbb7c4
2025-09-24 10:48:23 +02:00
929cddec0e Refactor resource_filter to delegate default handling to get_app_conf and update unittests accordingly https://chatgpt.com/share/68d3ad6d-76b4-800f-b04e-5e1fb70b44f3 2025-09-24 10:46:21 +02:00
9ba0efc1a1 Refactor resource configuration:
- Introduce new resource_filter plugin (mandatory hard_default, auto entity_name fallback)
- Replace get_app_conf calls with resource_filter in resource.yml.j2
- Add WEBSERVER_CPUS_EFFECTIVE, WEBSERVER_WORKER_PROCESSES, WEBSERVER_WORKER_CONNECTIONS to 05_webserver.yml
- Update Nginx templates (sys-svc-webserver, web-app-magento, web-app-nextcloud) to use new vars
- Extend svc-prx-openresty config with cpus/mem limits
- Add unit tests for resource_filter

Details: https://chatgpt.com/share/68d3a493-9a5c-800f-8cd2-bd2e7a3e3fda
2025-09-24 09:58:30 +02:00
9bf77e1e35 mastodon: tighten resources, robust exec tasks, and env defaults
- resources: per-service cpus/mem/pids for mastodon/streaming/sidekiq/redis/db
- compose: rename service key to "mastodon" (was: web), set service_name blocks
- tasks(01_setup): run rails db:migrate via docker exec (non-tty, login shell)
- tasks(02_administrator): healthchecks for 'mastodon', sed with absolute path,
  tootctl as user 'mastodon' (non-tty), optional re-health wait
- env.j2: add RAILS_ENV={{ ENVIRONMENT | default('production') }}
- resource.yml.j2: fix get_app_conf path (service_name default spacing)
- docs: remove outdated Installation/Administration files

Context: https://chatgpt.com/share/68d332a0-ae98-800f-b418-c0d0262eaa2e
2025-09-24 01:52:18 +02:00
426ba32c11 feat(services): add CPU/RAM/PIDs defaults for heavy roles and align service names
Add per-service resource overrides (cpus, mem_reservation, mem_limit, pids_limit) for ollama, mariadb, postgres, confluence, gitlab, jira, keycloak, nextcloud; light formatting fixes in wordpress.

Rename service keys from generic 'application/web' to concrete names (jira, confluence, gitlab, keycloak) and update compose templates accordingly.

Jira: introduce JIRA_STORAGE_PATH and switch mounts/README accordingly.

https://chatgpt.com/share/68d2d96c-9bf4-800f-bbec-d4f2c0051c06
2025-09-23 21:43:50 +02:00
ff7b7aeb2d feat(filters): add active_docker_container_count filter and use it for fair resource splits
Compute per-container CPU/RAM shares based on active services (web-/svc-*, enabled=true or undefined). Cast host facts to numbers, add safe min=1, and output compose-ready values. Include robust unit test.

Also: include resource.yml.j2 in base template and minor formatting tidy-up.

https://chatgpt.com/share/68d2d96c-9bf4-800f-bbec-d4f2c0051c06
2025-09-23 21:35:12 +02:00
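The fair-split idea, sketched under the assumption that applications is a dict keyed by role name: count web-*/svc-* apps that are enabled (or lack an enabled flag), floor at one, and divide host resources by that count:

```python
def active_docker_container_count(applications):
    """Count web-*/svc-* apps that are enabled or have no 'enabled' flag at all."""
    count = 0
    for name, conf in applications.items():
        if not (name.startswith("web-") or name.startswith("svc-")):
            continue
        if (conf or {}).get("enabled", True):
            count += 1
    return max(count, 1)                     # safe minimum of 1

apps = {
    "web-app-nextcloud": {"enabled": True},
    "svc-db-postgres": {},                   # no flag → counts as active
    "web-app-matrix": {"enabled": False},
    "util-something": {},                    # wrong prefix → ignored
}
active = active_docker_container_count(apps)
host_cpus, host_mem_mb = 8.0, 32768
print(active, round(host_cpus / active, 2), host_mem_mb // active)  # 2 4.0 16384
```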
c523d8d8d4 Cast WWW_REDIRECT_ENABLED to bool 2025-09-23 19:18:22 +02:00
12d05ef013 Bluesky: add redirects for deactivated web/view domains to BLUESKY_API_DOMAIN via web-opt-rdr-domains
Ref: https://chatgpt.com/share/68d2cf5f-4a88-800f-a739-485580d84566
2025-09-23 18:48:47 +02:00
3cbf37d774 Added correct health status code for minio api 2025-09-23 18:34:59 +02:00
fc99c72f86 Optimized Swapfiles variables and enabled async 2025-09-23 18:34:18 +02:00
3211dd7cea Optimized README.md 2025-09-23 13:47:46 +02:00
c07a9835fc Updated Flowise Credentials 2025-09-23 12:48:43 +02:00
f4cf55b3c8 Open WebUI OIDC & proxy fixes + Ollama preload + async-safe pull
- svc-ai-ollama:
  - Add preload_models (llama3, mistral, nomic-embed-text)
  - Pre-pull task: loop_var=model, async-safe changed_when/failed_when

- sys-svc-proxy (OpenResty):
  - Forward Authorization header
  - Ensure proxy_pass_request_headers on

- web-app-openwebui:
  - ADMIN_EMAIL from users.administrator.email
  - Request RBAC group scope in OAUTH_SCOPES

Ref: ChatGPT support (2025-09-23) — https://chatgpt.com/share/68d20588-2584-800f-aed4-26ce710c69c4
2025-09-23 04:27:46 +02:00
1b91ddeac2 Optimized flowise 2025-09-23 03:03:11 +02:00
b638d00d73 Removed unnecessary MINIO_OIDC_POLICY_NAME_SAFE 2025-09-23 03:02:40 +02:00
75c36a1d71 web-app-minio: manage OIDC policy via containerized mc and fix policy JSON
- Use dockerized mc with MC_HOST_minio (stateless), no temp files/dirs
- Create only RAW policy name with slash to match Keycloak claim
- Split policy: s3:* on S3 ARNs; admin:* on Resource "*"
- Add mc vars (image, MC_HOST components) to vars/main.yml
- Remove unused Ollama dependency block from tasks

Refs: ChatGPT conversation → https://chatgpt.com/share/68d1eab9-a35c-800f-aa81-76fb2101bd93
2025-09-23 02:33:35 +02:00
7a119c3175 Deactivated CSS for Open WebUI 2025-09-23 02:21:59 +02:00
3e6193ffce Solved ollama network bug 2025-09-23 02:21:20 +02:00
9d8e06015f Added whitespaces 2025-09-23 00:59:55 +02:00
5daf3387bf web-app-minio: enable OIDC integration and policy handling
- Added OIDC and LDAP feature flags in config
- Introduced API/Console URL vars for proxy alignment
- Implemented automatic MinIO policy creation for OIDC admin group
- Replaced static env.J2 with dynamic env.j2 (OIDC-aware)
- Added policy.json.j2 template with full admin rights
- Cleaned up tasks to use stdin instead of file for mc policy apply

Ref: https://chatgpt.com/share/68d1d3ef-ca84-800f-abe2-11ab70e20c4e
2025-09-23 00:56:11 +02:00
6da7f28370 Optimized whitespacing 2025-09-23 00:51:23 +02:00
208848579d svc-db-openldap: make LDIF import idempotent, unify container var, and tidy role
- Add handlers/main.yml to load memberof/refint modules and import groups via docker exec
- Use OPENLDAP_CONTAINER consistently (replace OPENLDAP_NAME)
- Rename tasks/ldifs_creation.yml -> tasks/_ldifs_creation.yml and update includes
- Drop default param from get_app_conf calls; add explicit meta: flush_handlers
- docker-compose: honor OPENLDAP_NETWORK_EXPOSE_LOCAL | bool; minor formatting
- env template: formatting/comments consistency
- Remove unused 01_rbac_group.ldif.j2; rename 02_rbac_roles -> 01_rbac_roles and fix filter to LDAP
- vars: rename OPENLDAP_NAME -> OPENLDAP_CONTAINER; prune LDIF schema type

Conversation: https://chatgpt.com/share/68d1d25d-e788-800f-bfb6-13b1f5bc6121
2025-09-23 00:49:57 +02:00
d8c73e9fc3 Renamed to correct handler 2025-09-23 00:37:26 +02:00
10b20cc3c4 tests: treat mixed Jinja in notify/package_notify as wildcard regex; ignore pure Jinja; add reverse check so all notify targets map to existing handlers. See: https://chatgpt.com/share/68d1cf5a-f7e8-800f-910c-a2215d06c2a4 2025-09-23 00:36:50 +02:00
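A sketch of the matching rules described in that test change: pure Jinja notify targets are ignored, mixed Jinja becomes a wildcard regex, and plain literals must name an existing handler. The regexes and handler names are illustrative:

```python
import re

handlers = {"docker compose up", "docker compose restart", "openresty reload"}

def notify_matches(notify, handlers):
    """Pure Jinja targets are skipped; mixed Jinja turns into a wildcard regex;
    plain literals must match an existing handler name exactly."""
    if re.fullmatch(r"\s*{{.*}}\s*", notify):
        return True                                    # pure Jinja: ignore
    if "{{" in notify:
        literals = re.split(r"{{.*?}}", notify)        # keep the literal fragments
        pattern = ".*".join(re.escape(part) for part in literals)
        return any(re.fullmatch(pattern, h) for h in handlers)
    return notify in handlers                          # plain string: exact match

print(notify_matches("docker compose {{ compose_action }}", handlers))  # True (wildcard)
print(notify_matches("{{ dynamic_handler }}", handlers))                # True (ignored)
print(notify_matches("nginx reload", handlers))                         # False
```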
790c184e66 feat(web-app-openwebui): add bootstrap admin configuration via ADMIN_EMAIL
Introduce ADMIN_EMAIL and SHOW_ADMIN_DETAILS options to bootstrap the first
administrator account on fresh installations. This ensures at least one admin
exists without manual database intervention.

Conversation: https://chatgpt.com/share/68d18e02-d6b8-800f-aaab-920c61b9284a
2025-09-22 21:41:32 +02:00
93d165fa4c Solved CSP issue 2025-09-22 21:22:35 +02:00
1f3abb95af Moved handler reloading one level higher (required) 2025-09-22 21:07:34 +02:00
7ca3a73f21 Normalized OpenLDAP variables 2025-09-22 21:02:24 +02:00
08720a43c1 feat(web-app-openwebui): enable OIDC role-based admin mapping
Activate ENABLE_OAUTH_ROLE_MANAGEMENT and configure OAUTH_ROLES_CLAIM from
RBAC.GROUP.CLAIM. Define OAUTH_ADMIN_ROLES dynamically based on RBAC group
and application administrator naming convention.

Conversation: https://chatgpt.com/share/68d18e02-d6b8-800f-aaab-920c61b9284a
2025-09-22 20:27:01 +02:00
1baed62078 Removed Ollama dependency because it's managed via Ansible rather than as a Docker Compose dependency 2025-09-22 20:22:54 +02:00
963e1aea21 Removed ollama from openwebui 2025-09-22 20:15:33 +02:00
a819a05737 Activated network for svc-ai-ollama 2025-09-22 20:12:34 +02:00
4cb58bec0f Added correct portmapping for ollama 2025-09-22 20:09:01 +02:00
002f45d1df Added LDAP draft for Open WebUI - Deactivated just PoC, because OIDC is anyhow prefered 2025-09-22 20:02:36 +02:00
cbc4dad1d1 Removed wrong : 2025-09-22 20:00:55 +02:00
70d395ed15 feat(web-app-openwebui): add OIDC support via env.j2 with feature flag
Enables OIDC login by adding feature flag (features.oidc), rendering OIDC-related
environment variables, and introducing OPENWEBUI_OIDC_ENABLED.

Conversation: https://chatgpt.com/share/68d18e02-d6b8-800f-aaab-920c61b9284a
2025-09-22 19:57:55 +02:00
e20a709f04 Solved wrong image bug for minio 2025-09-22 19:56:24 +02:00
d129f71cef Added Ollama network 2025-09-22 19:19:44 +02:00
4cb428274a Add new 'Artificial Intelligence' portfolio menu category for AI tools (Ollama, OpenWebUI, Flowise, MinIO, Qdrant, LiteLLM) 🤖
Details: Introduced dedicated AI category with proper description, tags, and robot icon to group AI-related applications.

Reference: https://chatgpt.com/share/68d183ea-04dc-800f-97c9-2e83d0ca3753
2025-09-22 19:14:36 +02:00
97e2d440b2 Normalized OpenLDAP constants 2025-09-22 19:08:11 +02:00
588cd1959f Added local_ai configuration feature 2025-09-22 18:56:38 +02:00
5d1210d651 feat(ai): introduce dedicated AI roles and wiring; clean up legacy AI stack
• Add svc-ai category under roles and load it in constructor stage

• Create new 'svc-ai-ollama' role (vars, tasks, compose, meta, README) and dedicated network

• Refactor former AI stack into separate app roles: web-app-flowise and web-app-openwebui

• Add web-app-minio role; adjust config (no central DB), meta (fa-database, run_after), compose networks include, volume key

• Provide user-focused READMEs for Flowise, OpenWebUI, MinIO, Ollama

• Networks: add subnets for web-app-openwebui, web-app-flowise, web-app-minio; rename web-app-ai → svc-ai-ollama

• Ports: rename ai_* keys to web-app-openwebui / web-app-flowise; keep minio_api/minio_console

• Add group_vars/all/17_ai.yml (OLLAMA_BASE_LOCAL_URL, OLLAMA_LOCAL_ENABLED)

• Replace hardcoded include paths with path_join in multiple roles (svc-db-postgres, sys-service, sys-stk-front-proxy, sys-stk-full-stateful, sys-svc-webserver, web-svc-cdn, web-app-keycloak)

• Remove obsolete web-app-ai templates/vars/env; split Flowise into its own role

• Minor config cleanups (CSP flags to {}, central_database=false)

https://chatgpt.com/share/68d15cb8-cf18-800f-b853-78962f751f81
2025-09-22 18:40:20 +02:00
aeab7e7358 Improve CSP configuration test: validate section types safely and include role/file path in error output
See ChatGPT conversation: https://chatgpt.com/share/68d1762d-7930-800f-bba5-55f1de7446b1
2025-09-22 18:16:01 +02:00
fa6bb67a66 Removed whitespaces in templates: 2025-09-22 16:28:57 +02:00
3dc2fbd47c refactor(objstore): extract MinIO into dedicated role 'web-app-minio' and adjust AI role
• Rename ports: web-app-ai_minio_* → web-app-minio_* in group_vars

• Remove MinIO from web-app-ai (service, volumes, ENV)

• Add new role web-app-minio (config, tasks, compose, env, vars) incl. front-proxy matrix

• AI role: front-proxy loop via matrix; unify domain/port vars (OPENWEBUI/Flowise *_PORT_PUBLIC/_PORT_INTERNAL, *_DOMAIN)

• Update compose templates accordingly

Ref: https://chatgpt.com/share/68d15cb8-cf18-800f-b853-78962f751f81
2025-09-22 16:27:51 +02:00
4b56ab3d18 Normalized Nextcloud port variable mapping 2025-09-22 16:20:32 +02:00
8e934677ff refactor(nextcloud): introduce NEXTCLOUD_INTERNAL_OCC_COMMAND for consistency
Details:
- Added NEXTCLOUD_INTERNAL_OCC_COMMAND to centralize occ path handling
- Updated NEXTCLOUD_DOCKER_EXEC_OCC to reuse internal occ command
- Replaced hardcoded occ path in docker-compose healthchecks with variable
- Improves maintainability and avoids duplication

See: https://chatgpt.com/share/68d14d85-3d80-800f-9d1d-fcf6bb8ce449
2025-09-22 15:35:26 +02:00
0a927f49a2 refactor(nextcloud): use path_join for config/occ paths to avoid double slashes
Details:
- NEXTCLOUD_DOCKER_CONF_DIRECTORY, NEXTCLOUD_DOCKER_CONFIG_FILE, NEXTCLOUD_DOCKER_CONF_ADD_PATH
  now built with path_join instead of string concat
- NEXTCLOUD_DOCKER_EXEC_OCC now uses path_join for occ command
- makes path handling more robust and consistent

See: https://chatgpt.com/share/68d14d85-3d80-800f-9d1d-fcf6bb8ce449
2025-09-22 15:22:41 +02:00
e6803e5614 refactor(ansible): normalize include_role syntax and unify host config paths via path_join
- Remove stray spaces after include_role: across many roles to ensure clean YAML and
  consistent linting/formatting.
- Listmonk:
  - Introduce LISTMONK_CONFIG_HOST = [ docker_compose.directories.config, 'config.toml' ] | path_join
  - Use that var in the template task (dest) and the docker-compose volume mount
- Matrix:
  - Build MATRIX_SYNAPSE_CONFIG_PATH_HOST, MATRIX_SYNAPSE_LOG_PATH_HOST, and
    MATRIX_ELEMENT_CONFIG_PATH_HOST via path_join
- Mobilizon:
  - Build mobilizon_host_conf_exs_file via path_join
  - Keep get_app_conf strictness unchanged (defaults to True in our filter), so behavior
    remains strict even though the explicit third arg was dropped
- Simpleicons:
  - Build server.js and package.json host paths via path_join
- Numerous web-app roles (Confluence, Discourse, EspoCRM, Friendica, Funkwhale, Gitea,
  GitLab, Jenkins, Joomla, Listmonk, Mailu, Mastodon, Matomo, Matrix, MediaWiki,
  Mobilizon, Moodle, Nextcloud, OpenProject, Peertube, Pixelfed, Pretix, Roulette Wheel,
  Snipe-IT, Syncope, Taiga, WordPress, XWiki, Yourls) and web-svc roles (coturn,
  libretranslate, simpleicons) updated for consistent include_role formatting

Why:
- path_join avoids double slashes and missing separators across different config roots
- Consistent include_role: formatting improves readability and prevents linter noise

Ref:
- Conversation: https://chatgpt.com/share/68d14711-727c-800f-b454-7dc4c3c1f4cb
2025-09-22 14:55:25 +02:00
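
The path_join rationale above comes down to separator normalization. A minimal sketch of the behavior (Ansible's path_join filter is essentially os.path.join; the config root below is a hypothetical example, not a value from the repo):

```python
# Naive concatenation can emit double slashes or miss a separator entirely;
# os.path.join (what the path_join filter wraps) normalizes the boundary.
import os

config_root = "/opt/docker/listmonk/config/"          # hypothetical config root
print(config_root + "/" + "config.toml")              # .../config//config.toml  (double slash)
print(os.path.join(config_root, "config.toml"))       # .../config/config.toml
```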
6cf6c74802 Inverted docker_compose_skipp_file_creation to avoid double negation 2025-09-22 13:40:28 +02:00
734b8764f2 Optimized web-app-ai draft 2025-09-22 13:35:13 +02:00
3edb66f444 Merge branch 'master' of github.com:kevinveenbirkenbach/infinito-nexus 2025-09-22 11:17:40 +02:00
181b2d0542 Minor optimizations 2025-09-22 11:17:31 +02:00
78ebf4d075 Added draft base for AI assistant 2025-09-22 11:14:50 +02:00
d523629cdd Refactor docker-compose templates: replace {% include 'build.yml.j2' %} with lookup() + indent for proper YAML embedding. Also adjusted build.yml.j2 to remove leading spaces. See: https://chatgpt.com/share/68ce584a-a430-800f-8e2a-0f96884cc8d1 2025-09-20 09:31:49 +02:00
08ac8b6a9d Explicitly activated async for creation of parent DNS entries 2025-09-20 09:30:16 +02:00
79db2419a6 fix(Makefile, playbook.yml): ensure Ansible syntax-check has access to group_vars and clean up playbook formatting
- Add all group_vars/all/*.yml as extra-vars (-e @file) in Makefile syntax-check
- Use consistent quoting in playbook.yml for SOFTWARE_NAME and host_type templating

Ref: https://chatgpt.com/share/68cdee8a-4e88-800f-bf62-bed66dbbb417
2025-09-20 02:00:25 +02:00
c424afa935 Fix CLI workflow and container startup
- Updated GitHub Actions workflow to call `infinito make ...` inside container
- Simplified Dockerfile CMD to run `infinito --help` and keep container alive
- Adjusted docker-compose.yml to use explicit image name

See: https://chatgpt.com/share/68cde606-c3f8-800f-8ac5-fc035386da87
2025-09-20 01:24:20 +02:00
974a83fe6e web-app-bluesky: enable custom AppView domain and refactor DNS records
- Un-commented `view.bluesky.{{ PRIMARY_DOMAIN }}` in config to allow
  explicit AppView domain definition.
- Reworked `03_dns.yml` to build `cloudflare_records` list programmatically,
  including conditional addition of AppView records only if the domain is
  not `api.bsky.app`.
- Improved AAAA handling with `| default('')` and proper ternary
  expressions for `present/absent`.
- Updated `vars/main.yml` to remove default port fallback for
  `BLUESKY_VIEW_PORT`.

Refs: https://chatgpt.com/share/68cdde1d-1bd4-800f-a4bb-319372752fcd
2025-09-20 00:50:31 +02:00
0168167769 Docker: introduce docker-compose setup and simplify CMD
- Replaced ENTRYPOINT/CMD with a single CMD ["infinito --help"] in Dockerfile
- Added docker-compose.yml with service 'infinito', port bindings, volumes, networks
- Added env.sample for BIND_IP, SUBNET, GATEWAY defaults

See conversation: https://chatgpt.com/share/68cda4d5-1fe0-800f-a7f7-191cb8b70d84
2025-09-19 21:22:45 +02:00
1c7152ceb2 Solved build bug 2025-09-19 20:51:06 +02:00
2a98b265bc Reduced port exposure to localhost for better encapsulation 2025-09-19 19:43:16 +02:00
14d1362dc8 Removed alias from bookwyrm 2025-09-19 19:14:55 +02:00
a4a8061998 Refactor: unify Docker build config via build.yml.j2 include
Replaced duplicated inline build definitions in multiple docker-compose.yml.j2
templates with a shared include (roles/docker-container/templates/build.yml.j2).
This ensures consistent use of pull_policy: never and Dockerfile context across
services (Postgres, Bookwyrm, Bridgy Fed, Chess, Confluence, Jira, Moodle,
OpenProject, Pretix, Roulette Wheel, WordPress, XWiki, Simpleicons).

Conversation: https://chatgpt.com/share/68cd8f35-b764-800f-9b00-2c837103d2fb
2025-09-19 19:13:44 +02:00
96ded68ef4 Refactor DNS handling and add solo record support
- Added 'solo' flag support for A/AAAA, CNAME/MX/TXT, and SRV records in sys-dns-cloudflare-records.
- Simplified sys-svc-dns: removed NS management tasks and CLOUDFLARE_NAMESERVERS default.
- Renamed 03_apex.yml back to 02_apex.yml, adjusted AAAA task name.
- Updated web-app-bluesky DNS tasks: marked critical records with 'solo'.
- Updated web-app-mailu DNS tasks: removed cleanup block, enforced 'solo' on all records.
- Adjusted constructor stage to call domain_mappings with AUTO_BUILD_ALIASES parameter.

Conversation: https://chatgpt.com/share/68cd20d8-9ba8-800f-b070-f7294f072c40
2025-09-19 15:29:11 +02:00
2d8967d559 added www. alias for desktop as default 2025-09-19 14:55:40 +02:00
5e616d3962 web: general domain cleanup (canonical/aliases normalization)
- Normalize domain blocks across apps:
  - Add explicit 'aliases: []' everywhere (no implicit aliases)
  - Standardize canonical subdomains for consistency:
    * Bluesky: web/api under *.bluesky.<PRIMARY_DOMAIN>
    * EspoCRM: espo.crm.<PRIMARY_DOMAIN>
    * Gitea:   tea.git.<PRIMARY_DOMAIN>
    * GitLab:  lab.git.<PRIMARY_DOMAIN>
    * Joomla:  joomla.cms.<PRIMARY_DOMAIN>
    * Magento: magento.shop.<PRIMARY_DOMAIN>
    * OpenProject: open.project.<PRIMARY_DOMAIN>
    * Pretix:  ticket.shop.<PRIMARY_DOMAIN>
    * Taiga:   kanban.project.<PRIMARY_DOMAIN>
  - Remove legacy/duplicate aliases and use empty list instead
  - Fix 'alias' -> 'aliases' where applicable

Context: preparing for AUTO_BUILD_ALIASES=False and deterministic redirect mapping.

Ref: conversation https://chatgpt.com/share/68cd512c-c878-800f-bdf2-81737adf7e0e
2025-09-19 14:51:56 +02:00
0f85d27a4d filter/domain_redirect_mappings: add auto_build_alias parameter
- Extend filter signature with auto_build_alias flag to control automatic
  default→canonical alias creation
- group_vars/all: introduce AUTO_BUILD_ALIASES variable for global toggle
- Update unit tests: adjust calls to new signature and add dedicated
  test cases for auto_build_aliases=False

Ref: conversation https://chatgpt.com/share/68cd512c-c878-800f-bdf2-81737adf7e0e
2025-09-19 14:49:02 +02:00
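
A rough sketch of how an auto_build_alias flag can gate the automatic default→canonical alias creation described in the commit above. The dict shapes and function body are assumptions for illustration, not the repository's filter:

```python
def domain_redirect_mappings(domains, primary_domain, auto_build_alias=True):
    """Build redirect source→target pairs; the flag controls the automatic aliases."""
    mappings = []
    for app_id, spec in domains.items():
        canonical = spec["canonical"][0]
        for alias in spec.get("aliases", []):           # explicit aliases always redirect
            mappings.append({"source": alias, "target": canonical})
        if auto_build_alias:                             # optional default alias <app>.<primary>
            default = f"{app_id}.{primary_domain}"
            if default != canonical:
                mappings.append({"source": default, "target": canonical})
    return mappings

print(domain_redirect_mappings(
    {"web-app-gitea": {"canonical": ["tea.git.example.com"], "aliases": []}},
    "example.com",
    auto_build_alias=False,
))  # [] -> nothing is generated when the global toggle is off
```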
c6677ca61b tests: ignore Jinja variables inside raw blocks in variable definitions check
- Added regex masking to skip {{ var }} usages inside {% raw %}…{% endraw %} blocks.
- Simplified code by removing redundant comments.
- Cleaned up task file for XWiki role by removing outdated note.

Ref: https://chatgpt.com/share/68cd2558-e92c-800f-a80a-a79d3c81476e
2025-09-19 11:42:01 +02:00
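
A sketch of the masking idea: blank out {% raw %}…{% endraw %} regions before scanning for {{ var }} usages so raw examples never count. The regexes here are illustrative, not the test's exact patterns:

```python
import re

RAW_BLOCK = re.compile(r"{%-?\s*raw\s*-?%}.*?{%-?\s*endraw\s*-?%}", re.DOTALL)
VAR_USAGE = re.compile(r"{{\s*([a-zA-Z_][a-zA-Z0-9_]*)")

def used_variables(template_text):
    # replace each raw block with spaces of equal length, then collect names
    masked = RAW_BLOCK.sub(lambda m: " " * len(m.group(0)), template_text)
    return set(VAR_USAGE.findall(masked))

text = "{{ real_var }} {% raw %}{{ ignored_var }}{% endraw %}"
print(used_variables(text))  # {'real_var'}
```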
83ce88a048 Solved all open test issues 2025-09-19 11:32:58 +02:00
7d150fa021 DNS & certs refactor:
- Switch certbot flag from MODE_TEST → MODE_DUMMY in dedicated certs
- Add sys-svc-dns defaults for CLOUDFLARE_NAMESERVERS
- Introduce 02_nameservers.yml for NS cleanup + enforce, adjust task ordering (apex now 03_apex.yml)
- Enforce quoting for Bluesky and Mailu TXT records
- Add cleanup of MX/TXT/DMARC/DKIM in Mailu role
- Normalize no_log handling in Nextcloud plugin
- Simplify async conditionals in Collabora role
Conversation: https://chatgpt.com/share/68cd20d8-9ba8-800f-b070-f7294f072c40
2025-09-19 11:22:51 +02:00
2806aab89e Removed deadlock between sys-ctl-bkp-docker-2-loc and sys-ctl-cln-faild-bkps - the timer now handles cleanup exclusively 2025-09-19 11:21:18 +02:00
61772d5916 Solved testing mode bug 2025-09-19 11:18:29 +02:00
a10ba78a5a Bluesky: update Ansible patches to use new geolocation module path
Replaced hardcoded path to src/state/geolocation.tsx with variable BLUESKY_GEOLOCATION_PATH pointing to src/state/geolocation/index.tsx.
This ensures BAPP_CONFIG_URL and IPCC_URL replacements work with the updated Bluesky code structure.

Ref: https://chatgpt.com/share/68cb16d5-d698-800f-97e5-cc7d9016f27c
2025-09-17 22:15:30 +02:00
6854acf204 Used database type instead of database host for postgres 2025-09-17 20:53:48 +02:00
54d4eeb1ab Fix network alias assignment for DB services
Ensure that the database host alias is only attached to the database
containers themselves, not to dependent application containers. This
avoids DNS collisions where multiple containers expose the same alias
(e.g. 'postgres') on the same network, which led to connection refused
errors in XWiki.

See conversation: https://chatgpt.com/share/68cae4e5-94e4-800f-b291-d2acdb36af21
2025-09-17 18:42:36 +02:00
52fb7accac Disabled unnecessary variables temporary to make debugging easier and solved oidc bugs 2025-09-17 17:45:46 +02:00
d4c62dbf72 docker-container: ensure explicit network alias for DB services
Added explicit aliases in the networks configuration for database containers
(Postgres/MariaDB). This guarantees that the configured 'database_host' is always
resolvable across external networks, fixing intermittent 'UnknownHostException'
issues when restarting dependent services (e.g., Confluence).

Ref: https://chatgpt.com/share/68cabfac-8618-800f-bcf4-609fdff432ed
2025-09-17 16:26:02 +02:00
9ef4f91ec4 Added debug properties for xwiki but they don't seem to have any relevant effect 2025-09-17 15:07:45 +02:00
5bc635109a mediawiki: normalize LocalSettings.php base settings (clean+append once); fail if missing
oidc.php: autologin/localLogin templated via vars; optionally disable wgPasswordAttemptThrottle when 'web-svc-logout' present

vars: set defaults (AUTOLOGIN=true, LOCALLOGIN=false); use path_join/url_join for clean paths/URLs

Context: https://chatgpt.com/share/68caaf41-d098-800f-beb0-a473ff08c9c5
2025-09-17 14:53:53 +02:00
efb5488cfc Optimized variables 2025-09-17 13:16:57 +02:00
1dceabfd46 Added proxy conf variables for xwiki 2025-09-17 07:19:29 +02:00
c64ac0b4dc web-app-xwiki: verify extensions via Groovy page + new filter
- Added new filter 'xwiki_extension_status' (strips HTML, handles &nbsp;) -> returns 200/404
- Introduced checker tasks (_check_extension_via_groovy.yml) instead of REST probe
- Added early assert: superadmin login before extension installation
- Collect and assert probe results in 04_extensions.yml
- Set OIDC extension version to 'latest' (empty string)

https://chatgpt.com/share/68ca36cb-ac38-800f-8281-8dea480b6676
2025-09-17 06:20:28 +02:00
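
A sketch of what a filter like xwiki_extension_status has to do: strip the HTML the Groovy page returns, normalize &nbsp;, and map a success marker to 200 or 404. The marker format below is an assumption, not the role's actual output:

```python
import html
import re

def xwiki_extension_status(page_body, extension_id):
    text = re.sub(r"<[^>]+>", " ", page_body)          # drop HTML tags
    text = html.unescape(text).replace("\xa0", " ")    # handle &nbsp;
    # assumed machine-readable marker printed by the Groovy probe page
    return 200 if f"INSTALLED:{extension_id}" in text else 404

print(xwiki_extension_status("<p>INSTALLED:org.xwiki.contrib.oidc&nbsp;</p>",
                             "org.xwiki.contrib.oidc"))   # 200
```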
e94aac1d78 Removed non-existent plugin 2025-09-17 05:18:03 +02:00
c274c1a5d4 refactor(xwiki): move extension installer logic into static Groovy file and switch to plugins dict
- Added 'plugins' section in config/main.yml to declare enabled extensions in a structured way
- Introduced new static file 'files/extension_installer_b64.groovy' that decodes Base64 JSON of requested plugins
- Simplified 04_extensions.yml: now builds installer code from static file and removed hardcoded OIDC/LDAP checks
- Dropped redundant XWIKI_EXT_* variables in vars/main.yml
- Added XWIKI_PLUGINS fact to collect enabled plugin items from config/main.yml

This refactor makes extension installation more generic, easier to unit test, and extendable beyond OIDC/LDAP.

See: https://chatgpt.com/share/68ca25e3-cbc4-800f-a45e-2b152369811a
2025-09-17 05:08:02 +02:00
62493ac5a9 XWiki: increase installer execution timeout and add retries
The task 'XWIKI | Execute installer page' now uses:
- timeout: 300 (allow up to 5 min per request)
- retries: 20
- delay: 15
- until: condition

This prevents early failures during the first Distribution Wizard bootstrap when hundreds of extensions are still being installed.

Context: https://chatgpt.com/share/68ca0f18-2124-800f-a70d-df1811966107
2025-09-17 03:30:40 +02:00
cc2b9d476f Added blob csp rule for xwiki 2025-09-17 02:47:01 +02:00
d9c527e2e2 Changed handler order 2025-09-17 02:36:17 +02:00
eafdacc378 Optimized CSP for XWIKI 2025-09-17 02:33:28 +02:00
c93ec6d43a feat(web-app-xwiki): install OIDC/LDAP via temporary Groovy page (PUT→execute→verify→delete)
Replace REST jobs flow with services.extension.install executed from a transient XWiki.InstallExtensions page.
- Build wishlist from Ansible vars; print machine-readable markers; assert success.
- Execute from XWiki space; delete page afterwards; fix delete changed_when.
- Use Jinja raw + indent for clean macro embedding.

https://chatgpt.com/share/68c9ebf5-f5e0-800f-9b80-372b4b31e772
2025-09-17 01:00:25 +02:00
0839b8e37f fix(xwiki): enable superadmin flag in xwiki.cfg and always force Distribution Wizard
- Added 'xwiki.superadmin=1' alongside the password in 'xwiki.cfg' to properly activate the superadmin account during bootstrap.
- Simplified 'xwiki.properties': Distribution Wizard config is now always present instead of conditional on the superadmin switch.
- Ensures that the Distribution Wizard ('distribution.wizard.enabled=true') and flavor bootstrap run automatically on first startup.
- This fixes the issue where REST endpoints (/rest/jobs, /repositories) stayed at 404 because the DW never executed.

Ref: https://chat.openai.com/share/7a5d58d2-8e91-4e34-8fa0-8b7d62494e4a
2025-09-16 23:53:14 +02:00
def6dc96d8 fix(xwiki): enable superadmin flag in xwiki.cfg and always force Distribution Wizard
- Added 'xwiki.superadmin=1' alongside the password in 'xwiki.cfg' to properly activate the superadmin account during bootstrap.
- Simplified 'xwiki.properties': Distribution Wizard config is now always present instead of conditional on the superadmin switch.
- Ensures that the Distribution Wizard ('distribution.wizard.enabled=true') and flavor bootstrap run automatically on first startup.
- This fixes the issue where REST endpoints (/rest/jobs, /repositories) stayed at 404 because the DW never executed.

Ref: https://chat.openai.com/share/7a5d58d2-8e91-4e34-8fa0-8b7d62494e4a
2025-09-16 23:30:07 +02:00
364f4799bc Intermediate commit: XWiki OIDC integration 2025-09-16 20:16:19 +02:00
6eb4ba45f7 Removed installjobrequest.xml.j2 2025-09-16 19:57:32 +02:00
0566c426c9 Refactored administrator page variables 2025-09-16 19:57:07 +02:00
9ce73b9c71 Harmonized saving path 2025-09-16 19:12:08 +02:00
83936edf73 fix(xwiki): use proper InstallRequest XML format for extension installation
- Replace custom <request> with class='org.xwiki.extension.job.InstallRequest'
- Use loop over extensions_to_install to build <extensionId> list
- Move namespace into <namespaces><string>wiki:xwiki</string>
- Remove unused <id>/<jobType> from root
- Ensure installDependencies, interactive, verbose inside request
- Fixes issue where server echoed <rest><list/> instead of actual extensions
2025-09-16 15:25:34 +02:00
40ecbc5466 Added correct extension install logic to prevent overwriting 2025-09-16 14:53:37 +02:00
b18b3b104c Implemented performance switch for Front Proxy 2025-09-16 13:58:46 +02:00
2f992983f4 xwiki: install/verify via REST Job API; add 'xwiki_job_id' filter; refactor extension probe; remove invalid /extensions/{id} verify; README wording
Context: fixed 404 on 'Verify OIDC extension is installed' by polling jobstatus and parsing job id via filter plugin.
Conversation: https://chatgpt.com/share/68c435b7-96c0-800f-b7d6-b3fe99b443e0
2025-09-12 17:01:37 +02:00
d7d8578b13 fix(xwiki): correct extension.repositories format to id:type:url
Changed repository definition from 'maven:xwiki-public ...' to 'xwiki-public:maven:...'
so that the XWiki Extension Manager can correctly register Maven repositories.
This resolves the 'Unsupported repository type [central]' error and allows OIDC extension installation.

Details: https://chatgpt.com/share/68c42c4f-fda4-800f-a003-c16bcc9bd2a3
2025-09-12 16:21:23 +02:00
f106d5ec36 web-app-xwiki: admin bootstrap & REST/extension install fixes
• Guard admin tasks via XWIKI_SSO_ENABLED
• Create admin using XWikiUsers object API
• Wait for REST without DW redirect
• Install OIDC/LDAP via /rest/jobs (+verify)
• Mount xwiki.cfg/properties under Tomcat WEB-INF
• Build REST URLs with url_join; enable DW auto bootstrap + repos

https://chatgpt.com/share/68c42502-a5cc-800f-b05a-a1dbe48f014d
2025-09-12 15:50:30 +02:00
53b3a3a7b1 Deactivated LDAP by default 2025-09-12 14:13:13 +02:00
f576b42579 XWiki: two-phase bootstrap + extension install before enabling auth; add XOR validation
- Add 02_validation.yml to prevent OIDC+LDAP enabled simultaneously
- Introduce _flush_config.yml with switches (OIDC/LDAP/superadmin)
- Bootstrap with native+superadmin → create admin → install extensions (superadmin) → enable final auth
- Refactor REST vars (XWIKI_REST_BASE, XWIKI_REST_XWIKI, XWIKI_REST_EXTENSION_INSTALL)
- Update templates to use switch vars; gate OIDC block in properties
- Idempotent REST readiness waits

Conversation: https://chatgpt.com/share/68c40c1e-2b3c-800f-b59f-8d37baa9ebb2
2025-09-12 14:04:02 +02:00
b0f10aa0d0 Removed unnecessary just up 2025-09-12 13:21:29 +02:00
a6a2be4373 Optimized Listmonk variables 2025-09-12 13:04:06 +02:00
b7a7be4737 Fix XWiki automation bootstrap:
- Accept HTTP 302 (Distribution Wizard redirects) in REST readiness and extension checks
- Treat 302 as missing admin user during bootstrap
- Move superadmin password to xwiki.cfg (correct location)
- Disable automatic Distribution Wizard start in xwiki.properties
- Standardize run_once includes for postgres, cdn, and xwiki roles

See: https://chatgpt.com/share/68c3a67b-80b4-800f-8a90-ebdcd4abb86c
2025-09-12 06:50:24 +02:00
2d71c461de web-app-xwiki: add SuperAdmin bootstrap support
- Added schema entry for superadminpassword
- Added vars for XWIKI_SUPERADMIN_USERNAME/PASSWORD
- Extended xwiki.properties.j2 to configure superadminpassword
- Added 02_bootstrap_admin.yml to create XWiki admin via REST using SuperAdmin
- Updated REST URLs to use XWIKI_REST_GENERAL
- Enabled CSP flag unsafe-inline

Conversation: https://chatgpt.com/share/68c39ddb-e9cc-800f-b32f-9d4c1e09e43e
2025-09-12 06:13:34 +02:00
07b7c6484f xwiki: switch to PostgreSQL and remove custom Hibernate override
Config: set database.type=postgres; use image tag lts-<dbtype>-tomcat; make DB_TYPE templated; derive database_type from app config.

Cleanup: delete hibernate.cfg.xml template and volume mounts; remove XWIKI_HOST_HIBERNATE_PATH; stop rendering hibernate.cfg.xml.

web-svc-cdn: run_once task fix.

Context: troubleshooting on 2025-09-12. Conversation link: https://chatgpt.com/share/68c3978e-77cc-800f-beda-19220f70855f
2025-09-12 05:46:45 +02:00
cce33373ba sys-svc-dns: add apex A/AAAA records for SYS_SVC_DNS_BASE_DOMAINS via task_include
This update introduces apex (@) A and optional AAAA records for all base SLD domains.
The tasks were moved into a new 02_apex.yml file and are looped using
SYS_SVC_DNS_BASE_DOMAINS. CAA record loops were updated accordingly.
See details: https://chatgpt.com/share/68c385c3-1804-800f-8c78-8614bc853f77
2025-09-12 04:30:59 +02:00
fcc9dc71ef Removed solved Todos 2025-09-12 04:05:02 +02:00
1b42ca46e8 Removed sys-dns-cloudflare-records from web-opt-rdr-www because it's covered by other tasks 2025-09-12 03:55:52 +02:00
ce8958cc01 sys-dns-wildcards: always create apex wildcard (*.apex); use explicit_domains for CURRENT_PLAY_DOMAINS_ALL list; update README and unit tests. Ref: https://chatgpt.com/share/68c37a74-7468-800f-a612-765bbbd442de 2025-09-12 03:47:37 +02:00
7e5990aa16 deploy(cli): auto-generate MODE_* flags from 01_modes.yml; remove legacy skip flags/params; drive cleanup via MODE_CLEANUP; validation via MODE_ASSERT; tests via MODE_TEST; drop MODE_BACKUP from 01_modes.yml. Ref: https://chatgpt.com/share/68c3725f-43a0-800f-9bb0-eb7cbf77ac24 2025-09-12 03:08:18 +02:00
60ef36456a Optimized variables 2025-09-12 02:41:33 +02:00
3a8b9cc958 Deactivated proxy for wildcards 2025-09-12 02:20:59 +02:00
a1a956585c Moved utils/run_once.yml to core 2025-09-12 02:20:26 +02:00
1a1f185265 Cast to bool to ensure it's interpreted correctly 2025-09-12 02:19:47 +02:00
57ca6adaec MediaWiki: runtime patch for LocalSettings.php (URL, DB, lang) + safe quoting
- Add 03_patch_settings.yml to sync $wgServer/$wgCanonicalServer, DB vars, and language
- Use single-quoted PHP strings with proper escaping; idempotent grep guards
- Wire task into main.yml; rename 03_admin→04_admin and 04_extensions→05_extensions

Ref: https://chatgpt.com/share/68c3649a-e830-800f-a059-fc8eda8f76bb
2025-09-12 02:09:33 +02:00
a0c2245bbd Refactor web-opt-rdr-www:
- Split Cloudflare edge redirect into _01 and _02 task files
- Wrap Cloudflare routines in a conditional block on DNS_PROVIDER
- Preserve origin vs edge flavor handling
Conversation: https://chatgpt.com/share/68c3609b-5624-800f-b5fa-69def6032dca
2025-09-12 01:52:13 +02:00
206b3eadbc refactor(dns): replace sys-dns-parent-hosts with sys-dns-wildcards; emit only *.parent wildcards from CURRENT_PLAY_DOMAINS_ALL
Rename filter parent_build_records→wildcard_records; create only wildcard (*.parent) A/AAAA records (no base/apex); switch to CURRENT_PLAY_DOMAINS_ALL; update vars to SYN_DNS_WILDCARD_RECORDS; adjust role/task names, defaults, and docs; add unittest expecting *.a.b from www.a.b.example.com. See: https://chatgpt.com/share/68c35dc1-7170-800f-8fbe-772e61780597
2025-09-12 01:40:06 +02:00
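
A sketch of the wildcard derivation described above: every play domain contributes a *.parent record, and bare SLDs are skipped because only wildcards (no base/apex) are emitted here. The record dict shape is an assumption:

```python
def wildcard_records(domains, ip4, proxied=False):
    records, seen = [], set()
    for domain in domains:
        labels = domain.split(".")
        if len(labels) < 3:                  # bare SLD: no parent to wildcard
            continue
        name = "*." + ".".join(labels[1:])   # www.a.b.example.com -> *.a.b.example.com
        if name not in seen:
            seen.add(name)
            records.append({"type": "A", "name": name, "content": ip4, "proxied": proxied})
    return records

print([r["name"] for r in wildcard_records(["www.a.b.example.com"], "203.0.113.10")])
# ['*.a.b.example.com']
```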
feee3fd71f Fix false negatives in integration test for unused vars
Updated tests/integration/test_vars_usage_in_yaml.py:
- Variables immediately followed by '(' are now treated as function calls,
  not as set variables. This prevents false errors.
- Fixed detection of redirect_domain_mappings so it is no longer flagged
  as unused.

See: https://chatgpt.com/share/68c3542d-f44c-800f-a483-b3e43739f315
2025-09-12 00:59:14 +02:00
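
The function-call heuristic from the fix above is easy to sketch: a name directly followed by '(' is treated as a call rather than a variable. Illustrative regex, not the test's actual implementation:

```python
import re

NAME = re.compile(r"{{\s*([a-zA-Z_]\w*)\s*(\()?")

def jinja_names(text):
    variables, calls = set(), set()
    for name, paren in NAME.findall(text):
        (calls if paren else variables).add(name)
    return variables, calls

print(jinja_names("{{ redirect_domain_mappings }} {{ lookup('env', 'X') }}"))
# ({'redirect_domain_mappings'}, {'lookup'})
```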
39e745049b Revert "Removed incorrect flavor cloud for hetzner"
This reverts commit db034553a3.
2025-09-12 00:43:46 +02:00
db034553a3 Removed incorrect flavor cloud for hetzner 2025-09-12 00:41:18 +02:00
f7e661bcca Todos solved and removed 2025-09-12 00:37:47 +02:00
d5f1ae0288 Revert "Removed default filter"
This reverts commit 7cfe97ab50.
2025-09-12 00:12:46 +02:00
3c3083481e Replaced CURRENT_PLAY_DOMAINS with CURRENT_PLAY_DOMAINS_ALL 2025-09-12 00:04:40 +02:00
7cfe97ab50 Removed default filter 2025-09-12 00:00:43 +02:00
a552ea175d feat(dns): add sys-svc-dns role and extend parent DNS handling
Introduce sys-svc-dns to bootstrap Cloudflare DNS prerequisites. Validates CLOUDFLARE_API_TOKEN, (optionally) manages CAA for base SLDs, and delegates parent record creation to sys-dns-parent-hosts. Wired into sys-stk-front-pure.

sys-dns-parent-hosts: new parent_dns filter builds A/AAAA for each parent host and wildcard children (*.parent). Supports dict/list inputs for CURRENT_PLAY_DOMAINS, optional IPv6, proxied flag, and optional *.apex. Exposes a single parent_build_records entry point.

Let’s Encrypt role cleanup: remove DNS/CAA management from sys-svc-letsencrypt; it now focuses on webroot challenge config and renew timer. Fixed path joins and run_once guards.

Tests: update unit tests to allow wildcard outputs and dict-based CURRENT_PLAY_DOMAINS. Add generate_base_sld_domains filter. Documentation updates for both roles.

Conversation: https://chatgpt.com/share/68c342f7-d20c-800f-b61f-cefeebcf1cd8
2025-09-11 23:47:27 +02:00
dc16b7d21c Removed refresh systemctl service listener for systemctl daemon 2025-09-11 22:37:16 +02:00
54797aa65b Suppress flushing of CSP and webserver health checks during setup because tests will fail if procedures haven't finished 2025-09-11 22:31:24 +02:00
a6e42bff9b Optimized more run_once routines for performance 2025-09-11 22:16:42 +02:00
58cf63c040 Removed deadlock and optimized run_once settings for performance 2025-09-11 21:48:56 +02:00
682ea6d7f2 Removed unnecessary --dirval-cmd dirval 2025-09-11 21:16:10 +02:00
486729d57d Removed directory-validator dependencies because it's installed via pkgmgr 2025-09-11 21:01:23 +02:00
5342f70b03 Solved wrong variable bugs :) 2025-09-11 20:58:02 +02:00
d40a275d70 feat(sys-ctl-cln-faild-bkps): migrate role to cleanback CLI (systemd oneshot) and derive workers from Ansible facts
- install via pkgmgr (CLEANUP_FAILED_BACKUPS_PKG=cleanback)
- run: cleanback --all --dirval-cmd dirval --workers {{ CLEANUP_FAILED_BACKUPS_WORKERS }} --timeout {{ CLEANBACK_TIMEOUT_SECONDS }} --yes
- remove obsolete systemctl template and path set_fact logic
- keep task variable names intact; no defaults for runtime knobs
- update README to reflect new behavior

Conversation: https://chatgpt.com/share/68c309bf-8818-800f-84d9-c4aa74a4544c
2025-09-11 20:30:29 +02:00
3224e24d76 Refactor systemd handling
- sys-ctl-rpr-btrfs-balancer: suppress service flush for btrfs balancer (too expensive to run each play)
- sys-daemon: replace raw systemctl calls with ansible.builtin.systemd (daemon_reload, daemon_reexec)
- sys-service: split handler into 'Enable systemctl service' and 'Set systemctl service state', add become, async/poll, suppress flush guard

Conversation: https://chatgpt.com/share/68c2f7a6-6fe4-800f-9d79-3e3b0ab4a563
2025-09-11 18:24:21 +02:00
4539817c16 Mount hibernate.cfg.xml directly into Tomcat WEB-INF to ensure validationQuery is applied and avoid PROCESS privilege errors. See https://chatgpt.com/share/68c2c4dd-beec-800f-b44a-9c84494491f8 2025-09-11 15:23:52 +02:00
1a377f1eb4 fix(categories): remove unused update-pkgmgr subcategory from categories.yml
See https://chatgpt.com/share/68c2c757-7224-800f-b0c5-4750c60af7ef
2025-09-11 14:58:34 +02:00
a356566822 Optimized categories for desktop 2025-09-11 14:48:54 +02:00
5af6c0ef1b Removed update-pip 2025-09-11 14:48:22 +02:00
71276f3e5a Add custom hibernate.cfg.xml to XWiki role to use SELECT 1 as validation query (avoids PROCESS privilege requirement in MariaDB). See https://chatgpt.com/share/68c2c4dd-beec-800f-b44a-9c84494491f8 2025-09-11 14:47:41 +02:00
d5d7a7dffb feat(nextcloud): add automatic installation of XWiki Nextcloud app when 'web-app-xwiki' is in group_names
Ref: https://chatgpt.com/share/68c2bf97-b740-800f-a058-260f17aa131b
2025-09-11 14:26:40 +02:00
dcd1545093 Merge branch 'master' of github.com:kevinveenbirkenbach/infinito-nexus 2025-09-11 14:17:49 +02:00
04778a4fcc Activated LDAP for XWiki 2025-09-11 14:17:32 +02:00
e57f3bfdc1 feat(menu): add Games, Sales, and Customer Relationship Management categories; rename IAM to User Management; update tags (see conversation: https://chatgpt.com/share/68c2bc3c-2178-800f-9c88-c2faf08989dd) 2025-09-11 14:11:02 +02:00
cbfb096cdb Refactor web health checker & domain expectations (filter-based)
- Move all domain→expected-status mapping to filter `web_health_expectations`.
- Require explicit app selection via non-empty `group_names`; only those apps are included.
- Add `www_enabled` flag (wired via `WWW_REDIRECT_ENABLED`) to generate/force www.* → 301.
- Support `redirect_maps` to include manual redirects (sources forced to 301), independent of app selection.
- Aliases always 301; canonicals use per-key override or `server.status_codes.default`, else [200,302,301].
- Remove legacy fallbacks (`server.status_codes.home` / `landingpage`).
- Wire filter output into systemd ExecStart script as JSON expectations.
- Normalize various templates to use `to_json` and minor spacing fixes.
- Update app configs (e.g., YOURLS default=301; Confluence default=302; Bluesky web=405; MediaWiki/Confluence canonical/aliases).
- Constructor now uses `WWW_REDIRECT_ENABLED` for domain generation.

Tests:
- Add comprehensive unit tests for filter: selection by group, keyed/default codes, aliases, www handling, redirect_maps, input sanitization.
- Add unit tests for the standalone checker script (JSON parsing, OK/mismatch counting, sanitization).

See conversation: https://chatgpt.com/share/68c2b93e-de58-800f-8c16-ea05755ba776
2025-09-11 13:58:16 +02:00
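
A condensed sketch of the expectation rules listed above (aliases 301, per-key override or configured default else [200, 302, 301], optional www.* forcing, manual redirect sources forced to 301). Input shapes are assumptions; the real filter reads the application configs:

```python
def web_health_expectations(apps, group_names, www_enabled=False, redirect_maps=None):
    expectations = {}
    for app_id, cfg in apps.items():
        if app_id not in group_names:                  # only explicitly selected apps
            continue
        default = cfg.get("status_codes", {}).get("default", [200, 302, 301])
        for canonical in cfg.get("canonical", []):
            expectations[canonical] = default if isinstance(default, list) else [default]
            if www_enabled:
                expectations["www." + canonical] = [301]
        for alias in cfg.get("aliases", []):
            expectations[alias] = [301]
    for source in (redirect_maps or []):
        expectations[source] = [301]                   # manual redirects always 301
    return expectations

print(web_health_expectations(
    {"web-app-gitea": {"canonical": ["tea.git.example.com"], "aliases": []}},
    group_names=["web-app-gitea"], www_enabled=True))
```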
6418a462ec XWiki: LDAP/OIDC admin mapping, config mounts, and REST installs
- LDAP: move settings to xwiki.cfg; enable trylocal (1/0), group_mapping to XWiki.XWikiAdminGroup,
  and mode_group_sync=always.
- OIDC: add groups claim request (oidc.userinfoclaims), map provider group to XWiki.XWikiAdminGroup,
  and use space-separated scopes.
- Compose: mount xwiki.cfg and xwiki.properties into /usr/local/xwiki.
- Extensions: wait for REST readiness; pre-check OIDC/LDAP extensions (URL-encoded IDs);
  install via REST job only if missing.
- Vars: strict mappings to LDAP.* and OIDC.* (no defaults), add XWIKI_ADMIN_GROUP and derived DNs.
- Config: expose ldap.local_enabled; tidy meta tags; README grammar update.

Conversation: https://chatgpt.com/share/68c2b8ad-4814-800f-b377-065f967998db
2025-09-11 13:55:53 +02:00
8bc6e1f921 Optimized xwiki integration 2025-09-11 12:29:39 +02:00
91ce097a0a feat(sys-service): migrate cleanup/backup services to generic units; harden disk-space cleanup
Services: add SYS_SERVICE_CLEANUP_BACKUPS and SYS_SERVICE_CLEANUP_DISC_SPACE in group vars.

sys-ctl-bkp-docker-2-loc: switch to sys-service; add ExecStartPre lock; ExecStartPost triggers backup cleanup; OnFailure → cleanup-failed; fix shell quoting.

sys-ctl-cln-bkps: switch to sys-service; pass CLI args via ExecStart; add ExecStartPre lock; set OnFailure; copy files; remove role-specific service template.

sys-ctl-cln-disc-space: switch to sys-service; enable timer; set OnFailure; provide ExecStart/ExecStartPre; copy files; remove role-specific service template.

script.sh (disc-space): non-interactive docker exec; consistent threshold message (use parameter); guard docker/pacman via command checks; robust container check; fix typo; use POSIX '='.

svc-opt-keyboard-color: minor formatting cleanup.

sys-ctl-hlth-disc-space: chain OnFailure to cleanup-disc-space service.

Context: ChatGPT conversation (Sep 10, 2025, Europe/Berlin) — https://chatgpt.com/share/68c1982e-bdc8-800f-bf13-a8b9f084f90e
2025-09-10 17:24:56 +02:00
79c623d8db Add initial XWiki role draft
- Added web-app-xwiki draft role with config, vars, templates, and docs
- Registered new network and port for XWiki
- Adjusted MediaWiki canonical domain to media.wiki

https://chatgpt.com/share/68c18c65-a008-800f-8d62-b695df2c6fa1
2025-09-10 16:34:37 +02:00
445c94788e Refactor: consolidate pkgmgr updates and remove legacy roles
Details:
- Added pkgmgr update task directly in pkgmgr role (pkgmgr pull --all)
- Removed deprecated update-pkgmgr role and references
- Removed deprecated update-pip role and references
- Simplified update-compose by dropping update-pkgmgr include

https://chatgpt.com/share/68bbeff1-27a0-800f-bef3-03ab597595fd
2025-09-06 10:46:39 +02:00
aac9704e8b Refactor: remove legacy update-docker role and references
Details:
- Removed update-docker role (README, meta, vars, tasks, script)
- Cleaned references from group_vars, update-compose, and docs
- Adjusted web-app-matrix role (removed @todo pointing to update-docker)
- Updated administrator guide (update-docker no longer mentioned)

Ref: https://chatgpt.com/share/68bbeff1-27a0-800f-bef3-03ab597595fd
2025-09-06 10:32:33 +02:00
a57a5f8828 Refactor: remove Python-based Listmonk upgrade logic and implement upgrade as Ansible task
Details:
- Removed upgrade_listmonk() function and related calls from update-docker script
- Added dedicated Ansible task in web-app-listmonk role to run non-interactive DB/schema upgrade
- Conditional execution via MODE_UPDATE

Ref: https://chatgpt.com/share/68bbeff1-27a0-800f-bef3-03ab597595fd
2025-09-06 10:25:41 +02:00
90843726de keycloak: update realm mail settings to use smtp_server.json.j2 (SPOT); merge via kc_merge_path; fix display name and SSL handling
See: https://chatgpt.com/share/68bb0b25-96bc-800f-8ff7-9ca8d7c7af11
2025-09-05 18:09:33 +02:00
d25da76117 Solved wrong variable bug 2025-09-05 17:30:08 +02:00
d48a1b3c0a Solved missing variable bugs. Role is not fully implemented; need to pause development on it for the moment 2025-09-05 17:07:15 +02:00
2839d2e1a4 Intermediate commit: Magento implementation 2025-09-05 17:01:13 +02:00
00c99e58e9 Cleaned up bridgy fed 2025-09-04 17:09:35 +02:00
904040589e Added correct variables and health check 2025-09-04 15:13:10 +02:00
9f3d300bca Removed unnecessary handlers 2025-09-04 14:04:53 +02:00
9e253a2d09 Bluesky: Patch hardcoded IPCC_URL and proxy /ipcc
- Added Ansible replace task to override IPCC_URL in geolocation.tsx to same-origin '/ipcc'
- Extended Nginx extra_locations.conf to proxy /ipcc requests to https://bsky.app/ipcc
- Ensures frontend avoids CORS errors when fetching IP geolocation

See: https://chatgpt.com/share/68b97be3-0278-800f-9ee0-94389ca3ac0c
2025-09-04 13:45:57 +02:00
49120b0dcf Added more CSP headers 2025-09-04 13:36:35 +02:00
b6f91ab9d3 changed database_user to database_username 2025-09-04 12:45:22 +02:00
77e8e7ed7e Magento 2.4.8 refactor:
- Switch to split containers (markoshust/magento-php:8.2-fpm + magento-nginx:latest)
- Disable central DB; use app-local MariaDB and pin to 11.4
- Composer bootstrap of Magento in php container (Adobe repo keys), idempotent via creates
- Make setup:install idempotent; run as container user 'app'
- Wire OpenSearch (security disabled) and depends_on ordering
- Add credentials schema (adobe_public_key/adobe_private_key)
- Update vars for php/nginx/search containers + MAGENTO_USER
- Remove legacy docs (Administration.md, Upgrade.md)
Context: changes derived from our ChatGPT session about getting Magento 2.4.8 running with MariaDB 11.4.
Conversation: https://chatgpt.com/share/68b8dc30-361c-800f-aa69-88df514cb160
2025-09-04 12:45:03 +02:00
32bc17e0c3 Optimized whitespacing 2025-09-04 12:41:11 +02:00
e294637cb6 Changed db config path attribut 2025-09-04 12:34:13 +02:00
577767bed6 sys-svc-rdbms: Refactor database service templates and add version support for Magento
- Unified Jinja2 variable spacing in tasks and templates
- Introduced database_image and database_version variables in vars/database.yml
- Updated mariadb.yml.j2 and postgres.yml.j2 to use {{ database_image }}:{{ database_version }}
- Ensured env file paths and includes are consistent
- Prepared support for versioned database images (needed for Magento deployment)

Ref: https://chatgpt.com/share/68b96a9d-c100-800f-856f-cd23d1eda2ed
2025-09-04 12:32:34 +02:00
e77f8da510 Added debug options to mastodon 2025-09-04 11:50:14 +02:00
4738b263ec Added docker_volume_path filter_plugin 2025-09-04 11:49:40 +02:00
0a588023a7 feat(bluesky): fix CORS by serving /config same-origin and pinning BAPP_CONFIG_URL
- Add `server.config_upstream_url` default in `roles/web-app-bluesky/config/main.yml`
  to define upstream for /config (defaults to https://ip.bsky.app/config).
- Introduce front-proxy injection `extra_locations.conf.j2` that:
  - proxies `/config` to the upstream,
  - sets SNI and correct Host header,
  - normalizes CORS headers for same-origin consumption.
- Wire the proxy injection only for the Web domain in
  `roles/web-app-bluesky/tasks/main.yml` via `proxy_extra_configuration`.
- Force fresh social-app checkout and patch
  `src/state/geolocation.tsx` to `const BAPP_CONFIG_URL = '/config'`
  in `roles/web-app-bluesky/tasks/02_social_app.yml`; notify `docker compose build` and `up`.
- Tidy and re-group PDS env in `roles/web-app-bluesky/templates/env.j2` (no functional change).
- Add vars in `roles/web-app-bluesky/vars/main.yml`:
  - `BLUESKY_FRONT_PROXY_CONTENT` (renders the extra locations),
  - `BLUESKY_CONFIG_UPSTREAM_URL` (reads `server.config_upstream_url`).

Security/Scope:
- Only affects the Bluesky web frontend (same-origin `/config`); PDS/API and AppView remain unchanged.

Refs:
- Conversation: https://chatgpt.com/share/68b8dd3a-2100-800f-959e-1495f6320aab
2025-09-04 02:29:10 +02:00
d2fa90774b Added fediverse bridge draft 2025-09-04 02:26:27 +02:00
0e72dcbe36 feat(magento): switch to ghcr.io/alexcheng1982/docker-magento2:2.4.6-p3; update Compose/Env/Tasks/Docs
• Docs: updated to MAGENTO_VOLUME; removed Installation/User_Administration guides
• Compose: volume path → /var/www/html; switched variables to MAGENTO_*/MYSQL_*/OPENSEARCH_*
• Env: new variable set + APACHE_SERVERNAME
• Task: setup:install via docker compose exec (multiline form)
• Schema: removed obsolete credentials definition
Link: https://chatgpt.com/share/68b8dc30-361c-800f-aa69-88df514cb160
2025-09-04 02:25:49 +02:00
4f8ce598a9 Mastodon: allow internal chess host & refactor var names; OpenLDAP: safer get_app_conf
- Add ALLOWED_PRIVATE_ADDRESSES to .env (from svc-db-postgres) to handle 422 Mastodon::PrivateNetworkAddressError
- Switch docker-compose to MASTODON_* variables and align vars/main.yml
- Always run 01_setup.yml during deployment (removed conditional flag)
- OpenLDAP: remove implicit True default on network.local to avoid unintended truthy behavior

Context: chess.infinito.nexus resolved to 192.168.200.30 (private IP) from Mastodon; targeted allowlist unblocks federation lookups.

Ref: https://chat.openai.com/share/REPLACE_WITH_THIS_CONVERSATION_LINK
2025-09-03 21:44:47 +02:00
3769e66d8d Updated CSP for bluesky 2025-09-03 20:55:21 +02:00
33a5fadf67 web-app-chess: fix Corepack/Yarn EACCES and switch to ARG-driven Dockerfile
• Add roles/web-app-chess/files/Dockerfile using build ARGs (CHESS_VERSION, CHESS_REPO_URL, CHESS_REPO_REF, CHESS_ENTRYPOINT_REL, CHESS_ENTRYPOINT_INT, CHESS_APP_DATA_DIR, CONTAINER_PORT). Enable Corepack/Yarn as root in the runtime stage to avoid EACCES on /usr/local/bin symlinks, then drop privileges to 'node'.

• Delete Jinja-based templates/Dockerfile.j2; docker-compose now passes former Jinja vars via build.args. • Update templates/docker-compose.yml.j2 to forward all required build args. • Update config/main.yml: add CSP flag 'script-src-elem: unsafe-inline'.

Ref: https://chatgpt.com/share/68b88d3d-3bd8-800f-9723-e8df0cdc37e2
2025-09-03 20:47:50 +02:00
699a6b6f1e feat(web-app-magento): add Magento role + network/ports
- add role files (docs, vars, config, tasks, schema, templates)

- networks: add web-app-magento 192.168.103.208/28

- ports: add localhost http 8052

Conversation: https://chatgpt.com/share/68b8820f-f864-800f-8819-da509b99cee2
2025-09-03 20:00:01 +02:00
61c29eee60 web-app-chess: build/runtime hardening & feature enablement
Build: use Yarn 4 via Corepack; immutable install with inline builds.

Runtime: enable Corepack as user 'node', use project-local cache (/app/.yarn/cache), add curl; fix ownership.

Entrypoint: generate keys in correct dir; run 'yarn install --immutable --inline-builds' before migrations; wait for Postgres.

Config: enable matomo/css/desktop; notify 'docker compose build' on entrypoint changes.

Docs: rename README title to 'Chess'.

Ref: ChatGPT conversation (2025-09-03) — https://chatgpt.com/share/68b88126-7a6c-800f-acae-ae61ed577f46
2025-09-03 19:56:13 +02:00
d5204fb5c2 Removed unnecessary env loading 2025-09-03 17:41:53 +02:00
751615b1a4 Changed 09_ports.yml to 10_ports.yml 2025-09-03 17:41:14 +02:00
e2993d2912 Added more CSP urls for bluesky 2025-09-03 17:31:29 +02:00
24b6647bfb Corrected variable 2025-09-03 17:30:31 +02:00
d2dc2eab5f web-app-bluesky: refactor role, add Cloudflare DNS integration, split tasks
Changes: add AppView port; add CSP whitelist; new tasks (01_pds, 02_social_app, 03_dns); switch templates to BLUESKY_* vars; update docker-compose and env; TCP healthcheck; remove admin_password from schema.

Conversation context: https://chatgpt.com/share/68b85276-e0ec-800f-90ec-480a1d528593
2025-09-03 16:37:35 +02:00
a1130e33d7 web-app-chess: refactor runtime & entrypoint
- Move entrypoint to files/ and deploy via copy
- Parameterize APP_KEY_FILE, data dir, and entrypoint paths
- Require explicit PORT/PG envs (remove fallbacks)
- Drop stray header from config/main.yml
- Dockerfile: use templated data dir & entrypoint; keep node user
- Compose: set custom image, adjust volume mapping
- env: derive APP_SCHEME from WEB_PROTOCOL; NODE_ENV from ENVIRONMENT
- tasks: add 01_core and simplify main to include it

Ref: https://chatgpt.com/share/68b851c5-4dd8-800f-8e9e-22b985597b8f
2025-09-03 16:34:04 +02:00
df122905eb mailu: include base defaults for oletools (env_file/LD_PRELOAD)
Add base include to oletools service so it inherits env_file (LD_PRELOAD=/usr/lib/libhardened_malloc.so) and other defaults. Fixes crash: PermissionError: '/proc/cpuinfo' during hardened_malloc compatibility probe when LD_PRELOAD was absent. Aligns oletools with other Mailu services.

Refs: ChatGPT discussion – https://chatgpt.com/share/68b837ba-c9cc-800f-b5d9-62b60d6fafd9
2025-09-03 14:42:50 +02:00
d093a22d61 Added correct CSP for JIRA 2025-09-03 11:35:24 +02:00
5e550ce3a3 sys-ctl-rpr-docker-soft: switch to STRICT label mode and adapt tests
- script.py now resolves docker-compose project and working_dir strictly from container labels
- removed container-name fallback logic
- adjusted sys-ctl-hlth-docker-container to include sys-ctl-rpr-docker-soft
- cleaned up sys-svc-docker dependencies
- updated unit tests to mock docker inspect and os.path.isfile for STRICT mode

Conversation: https://chatgpt.com/share/68b80927-b800-800f-a909-0fe8d110fd0e
2025-09-03 11:24:14 +02:00
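
STRICT label mode boils down to trusting only the labels Docker Compose itself writes onto containers. A sketch of the resolution step, assuming docker is on PATH; error handling is simplified compared to the role's script:

```python
import json
import subprocess

def compose_context(container_id):
    out = subprocess.check_output(["docker", "inspect", container_id])
    labels = json.loads(out)[0]["Config"]["Labels"] or {}
    try:
        return (labels["com.docker.compose.project"],
                labels["com.docker.compose.project.working_dir"])
    except KeyError as missing:
        # no container-name fallback anymore: an unlabeled container is an error
        raise RuntimeError(f"container lacks compose label {missing}")
```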
0ada12e3ca Enabled rpr service via failed health check instead of timer 2025-09-03 10:46:46 +02:00
1a5ce4a7fa web-app-bookwyrm, web-app-confluence:
- Fix BookWyrm email SSL/TLS handling (use ternary without 'not' for clarity)
- Add truststore_enabled flag in Confluence config and vars
- Wire JVM_SUPPORT_RECOMMENDED_ARGS to disable UPM signature check if truststore is disabled
- Add placeholder style.css.j2 for Confluence

See conversation: https://chatgpt.com/share/68b80024-7100-800f-a2fe-ba8b9f5cec05
2025-09-03 10:45:41 +02:00
a9abb3ce5d Added unsafe-eval csp to jira 2025-09-03 09:43:07 +02:00
71ceb339fc Fix Confluence & BookWyrm setup:
- Add docker compose build trigger in docker-compose tasks
- Cleanup svc-prx-openresty vars
- Enable unsafe-inline CSP flags for BookWyrm, Confluence, Jira to allow Atlassian inline scripts
- Generalize CONFLUENCE_HOME usage in vars, env and docker-compose
- Ensure confluence-init.properties written with correct home
- Add JVM_SUPPORT_RECOMMENDED_ARGS to pass atlassian.home
- Update README to reference {{ CONFLUENCE_HOME }}

See: https://chatgpt.com/share/68b7582a-aeb8-800f-a14f-e98c5b4e6c70
2025-09-02 22:49:02 +02:00
61bba3d2ef feat(bookwyrm): production-ready runtime + Redis wiring
- Dockerfile: build & install gunicorn wheels
- compose: run initdb before start; use `python -m gunicorn`
- env: add POSTGRES_* and BookWyrm Redis aliases (BROKER/ACTIVITY/CACHE) + CACHE_URL
- vars: add cache URL, DB indices, and URL aliases for Redis

Ref: https://chatgpt.com/share/68b7492b-3200-800f-80c4-295bc3233d68
2025-09-02 21:45:11 +02:00
0bde4295c7 Implemented correct confluence version 2025-09-02 17:01:58 +02:00
8059f272d5 Refactor Confluence and Jira env templates to use official Atlassian ATL_* database variables instead of unused custom placeholders. Ensures containers connect directly to PostgreSQL without relying on CONFLUENCE_DATABASE_* or JIRA_DATABASE_* vars. See conversation: https://chatgpt.com/share/68b6ddfd-3c44-800f-a57e-244dbd7ceeb5 2025-09-02 14:07:38 +02:00
7c814e6e83 BookWyrm: update Dockerfile and env handling
- Remove ARG BOOKWYRM_VERSION default, use Jinja variable directly
- Add proper SMTP environment variables mapping (EMAIL_HOST, EMAIL_PORT, TLS/SSL flags, user, password, default_from)
- Ensure env.j2 uses BookWyrm-expected names only
Ref: ChatGPT conversation 2025-09-02 https://chatgpt.com/share/68b6dc73-3784-800f-9a7e-340be498a412
2025-09-02 14:01:04 +02:00
d760c042c2 Atlassian JVM sizing: cast memory vars to int before floor-division
Apply |int to TOTAL_MB and dependent values to prevent 'unsupported operand type(s) for //' during templating in Confluence and Jira roles.

Context: discussion on 2025-09-02 — https://chatgpt.com/share/68b6d386-4490-800f-9bad-aa7be1571ebe
2025-09-02 13:22:59 +02:00
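
The failure mode behind the |int cast is plain Python: Jinja's // is Python's floor division, which refuses string operands. A tiny illustration (the fact value is hypothetical):

```python
total_mb = "7821"                  # memory fact arriving as a string
try:
    xmx = total_mb // 2            # TypeError: unsupported operand type(s) for //
except TypeError as err:
    print(err)

xmx = int(total_mb) // 2           # equivalent of "{{ (TOTAL_MB | int) // 2 }}" in the templates
print(xmx)                         # 3910
```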
6cac8085a8 feat(web-app-chess): add castling.club role with ports, networks, and build setup
- Added network subnet (192.168.103.192/28) and port 8050 for web-app-chess
- Replaced stub README with usability-focused description of castling.club
- Implemented config, vars, meta, and tasks for web-app-chess
- Added Dockerfile, docker-compose.yml, env, and docker-entrypoint.sh templates
- Integrated entrypoint asset placement
- Updated meta to reflect usability and software features

Ref: https://chatgpt.com/share/68b6c65a-3de8-800f-86b2-a110920cd50e
2025-09-02 13:21:15 +02:00
3a83f3d14e Refactor BookWyrm role: switch to source-built Dockerfile, update README/meta for usability, add env improvements (ALLOWED_HOSTS, Redis vars, Celery broker), and pin version v0.7.5. See https://chatgpt.com/share/68b6d273-abc4-800f-ad3f-e1a5b9f8dad0 2025-09-02 13:18:32 +02:00
61d852c508 Added ports and networks for bookwyrm, jira, confluence 2025-09-02 12:08:20 +02:00
188b098503 Confluence/Jira roles: add READMEs, switch to custom images, proxy/JVM envs, and integer-safe heap sizing
Confluence: README added; demo disables OIDC/LDAP; Dockerfile overlay; docker-compose now uses CONFLUENCE_CUSTOM_IMAGE and DB depends include; env.j2 adds ATL_* and JVM_*; vars use integer math (//) for Xmx/Xms and expose CUSTOM_IMAGE.

Jira: initial role skeleton with README, config/meta/tasks; Dockerfile overlay; docker-compose using JIRA_CUSTOM_IMAGE and DB depends include; env.j2 with proxy + JVM envs; vars with integer-safe memory sizing.

Context: https://chatgpt.com/share/68b6b592-2250-800f-b68e-b37ae98dbe70
2025-09-02 12:07:34 +02:00
bc56940e55 Implement initial BookWyrm role
- Removed obsolete TODO.md
- Added config/main.yml with service, feature, CSP, and registration settings
- Added schema/main.yml defining vaulted SECRET_KEY (alphanumeric)
- Added tasks/main.yml to load stateful stack
- Added Dockerfile.j2 ensuring data/media dirs
- Added docker-compose.yml.j2 with application, worker, redis, volumes
- Added env.j2 with registration, secrets, DB, Redis, OIDC support
- Extended vars/main.yml with BookWyrm variables and OIDC, Docker, Redis settings
- Updated meta/main.yml with logo and run_after dependencies

Ref: https://chatgpt.com/share/68b6c060-3a0c-800f-89f8-e114a16a4a80
2025-09-02 12:03:11 +02:00
5dfc2efb5a Used port variable 2025-09-02 11:59:50 +02:00
7f9dc65b37 Add README.md files for web-app-bookwyrm, web-app-postmarks, and web-app-socialhome roles
Introduce integration test to ensure all web-app-* roles contain a README.md (required for Web App Desktop visibility)

See: https://chatgpt.com/share/68b6be49-7b78-800f-a3ff-bf922b4b083f
2025-09-02 11:52:34 +02:00
163a925096 fix(docker-compose): proper lock path + robust pull for buildable services
- Store pull lock under ${PATH_DOCKER_COMPOSE_PULL_LOCK_DIR}/<hash>.lock so global cleanup removes it reliably
- If any service defines `build:`, run `docker compose build --pull` before pulling
- Use `docker compose pull --ignore-buildable` when supported; otherwise tolerate pull failures for locally built images

This prevents failures when images are meant to be built locally (e.g., custom images) and ensures lock handling is consistent.

Ref: https://chatgpt.com/share/68b6b592-2250-800f-b68e-b37ae98dbe70
2025-09-02 11:15:28 +02:00
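
A sketch of that pull strategy (subprocess calls only; the check for whether --ignore-buildable is supported is omitted, and the service dict is assumed to be the parsed compose file):

```python
import subprocess

def refresh_images(compose_services, project_dir):
    has_buildable = any("build" in svc for svc in compose_services.values())
    if has_buildable:
        # build locally defined images first, pulling their base layers
        subprocess.run(["docker", "compose", "build", "--pull"], cwd=project_dir, check=True)
    result = subprocess.run(["docker", "compose", "pull", "--ignore-buildable"], cwd=project_dir)
    if result.returncode != 0 and not has_buildable:
        raise RuntimeError("docker compose pull failed")   # only fatal without local builds
```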
a8c88634b5 cleanup: remove unused handlers and add integration test for unused handlers
Removed obsolete handlers from roles (VirtualBox, backup-to-USB, OpenLDAP)
and introduced an integration test under tests/integration/test_handlers_invoked.py
that ensures all handlers defined in roles/*/handlers are actually notified
somewhere in the code base. This keeps the repository clean by preventing
unused or forgotten handlers from accumulating.

Ref: https://chatgpt.com/share/68b6b28e-4388-800f-87d2-34dfb34b8d36
2025-09-02 11:02:30 +02:00
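
A sketch of how such a check can work: gather handler names from roles/*/handlers and fail on any name that never shows up outside the handler files. Requires PyYAML; globbing and matching are simplified compared to the real test:

```python
import glob
import os
import yaml

def unnotified_handlers(repo_root):
    handlers = set()
    for path in glob.glob(os.path.join(repo_root, "roles/*/handlers/*.yml")):
        with open(path) as fh:
            for task in (yaml.safe_load(fh) or []):
                if isinstance(task, dict) and task.get("name"):
                    handlers.add(task["name"])

    corpus = []
    for path in glob.glob(os.path.join(repo_root, "roles/**/*.*"), recursive=True):
        if f"{os.sep}handlers{os.sep}" in path or not path.endswith((".yml", ".yaml", ".j2")):
            continue
        with open(path, errors="ignore") as fh:
            corpus.append(fh.read())
    text = "\n".join(corpus)
    return {name for name in handlers if name not in text}   # candidates for removal
```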
ce3fe1cd51 Nextcloud: integrate Talk & Whiteboard; adjust ports & healthchecks
- Enable Spreed (Talk); signaling via /standalone-signaling/
- STUN/TURN: move STUN to 3480 (3479 occupied by BBB), keep TURN 5350 reserved
- docker-compose: expose internal WS ports; explicit TURN port mapping
- Healthchecks: add nc-based TCP checks (roles/docker-container/templates/healthcheck/nc.yml.j2)
- Nginx: location proxy to talk:8081
- Schema: add talk_* secrets (turn/signaling/internal)
- Plugins: configure spreed/whiteboard via vars/*; remove old task files
- Ports matrix (group_vars/all/09_ports.yml) updated/commented

Conversation: https://chatgpt.com/share/68b61a6a-e1dc-800f-b793-4aa600bc0166
2025-09-02 00:13:23 +02:00
7ca8b7c71d feat(nextcloud): integrate Talk & Whiteboard; refactor to NEXTCLOUD_* vars; full-stack setup
config(ports): add Nextcloud websocket port (4003); canonical domains (nextcloud/talk/whiteboard)

refactor: unify get_app_conf usage & Jinja spacing; migrate paths/handlers to new NEXTCLOUD_* vars

feat(plugins): split plugin routines; configure Whiteboard via occ (URL + JWT)

fix(oidc): use NEXTCLOUD_URL for logout; correct LDAP attribute mappings; add OIDC flavor switch

feat: Whiteboard container & reverse-proxy location; Talk STUN/WS ports; Redis URL for Whiteboard

chore: drop obsolete TODO; minor cleanups in oauth2-proxy, matrix, peertube, pgadmin, phpldapadmin, pixelfed, phpmyadmin

security(schema): Bluesky jwt_secret now base64_prefixed_32; add Nextcloud whiteboard_jwt_secret

db: normalize postgres image tag templating; central DB host checks spacing fixes

ops: add full-stack bootstrap (certs, proxy, volumes); internal nginx config reload handler update

refs: https://chatgpt.com/share/68b5f5b7-8d64-800f-b001-1241f818dc0e
2025-09-01 21:37:02 +02:00
110381e80c Refactored peertube role and implemented config volume 2025-09-01 18:19:50 +02:00
b02d88adc0 Refactored server roles for better readability 2025-09-01 18:08:35 +02:00
b7065837df MediaWiki: switch feature.css to false and add custom Vector 2022 override stylesheet
See: https://chatgpt.com/share/68b5b925-f418-800f-8f84-de744dd2d093
2025-09-01 17:18:12 +02:00
c98a2378c4 Added is defined condition 2025-09-01 17:05:30 +02:00
4ae3cee36c web-svc-logout: merge logout domains into CSP connect-src and refactor task flow
• Add tasks/01_core.yml to set applications[application_id].server.csp.whitelist['connect-src'] = LOGOUT_CONNECT_SRC_NEW.

• Switch tasks/main.yml to include 01_core.yml (run-once guard preserved).

• Update templates/env.j2 to emit LOGOUT_DOMAINS as a comma-separated list.

• Rework vars/main.yml: compute LOGOUT_DOMAINS, derive LOGOUT_ORIGINS with WEB_PROTOCOL, read connect-src via the get_app_conf filter, and merge/dedupe (unique).

Rationale: ensure CSP allows cross-domain logout requests for all configured services.

Conversation: https://chatgpt.com/share/68b5b07d-b208-800f-b6b2-f26934607c8a
2025-09-01 16:41:33 +02:00
b834f0c95c Implemented config image for pretix 2025-09-01 16:20:04 +02:00
9f734dff17 web-app-pretix: fix healthcheck and allowed hosts
- Add Host header to curl healthcheck when container_hostname is defined
- Use PRETIX_PRETIX_ALLOWED_HOSTS to fix Django 400 Bad Request during healthcheck
- Centralize PRETIX_HOSTNAME from container_hostname var
- Add Redis broker/result backend config for Celery

See: https://chatgpt.com/share/68b59c42-c0fc-800f-9bfb-f1137c59b3de
2025-09-01 15:15:04 +02:00
6fa4d00547 Refactor CDN and run_once handling
- Move run_once include from main.yml to 01_core.yml in desk-gnome-caffeine and desk-ssh
- Introduce sys-svc-cdn/01_core.yml to handle shared/vendor dirs once and role dirs per run
- Replace cdn.* with cdn_paths_all.* across inj roles
- Split cdn_dirs into cdn_dirs_role and CDN_DIRS_GLOBAL
- Ensure cdn_urls uses cdn_paths_all

Details: https://chatgpt.com/share/68b58d64-1e28-800f-8907-36926a9e9a9b
2025-09-01 14:11:36 +02:00
7254667186 Nextcloud: make app:update more robust by retrying once with retries/until (fixes transient migration errors)
See: https://chatgpt.com/share/68b57e29-4420-800f-b326-b34d09fa64b5
2025-09-01 13:06:44 +02:00
aaedaab3da refactor(web-app-mediawiki): unify debug & oidc handling via _ensure_require, introduce host-side prep, switch to bind mounts
- Removed obsolete Installation.md, TODO.md, 02_debug.yml, 05_oidc.yml and legacy debug enable/disable tasks
- Added 01_prep.yml to render debug.php/oidc.php on host side before container start
- Introduced _ensure_require.yml for generic require_once management in LocalSettings.php
- Renamed 01_install.yml -> 02_install.yml to align with new numbering
- Updated docker-compose.yml.j2 to bind-mount mw-local into /opt/mw-local
- Adjusted vars/main.yml to define MEDIAWIKI_LOCAL_MOUNT_DIR and MEDIAWIKI_LOCAL_PATH
- Templates debug.php.j2 and oidc.php.j2 now gated by MODE_DEBUG and MEDIAWIKI_OIDC_ENABLED
- main.yml now orchestrates prep, install, debug, extensions, oidc require, admin consistently

Ref: https://chatgpt.com/share/68b57db2-efcc-800f-a733-aca952298437
2025-09-01 13:04:57 +02:00
7791bd8c04 Implement filter checks: ensure all defined filters are used and remove dead code
Integration tests added/updated:
- tests/integration/test_filters_usage.py: AST-based detection of filter definitions (FilterModule.filters), robust Jinja detection ({{ ... }}, {% ... %}, {% filter ... %}), plus Python call tracking; fails if a filter is used only under tests/.
- tests/integration/test_filters_are_defined.py: inverse check — every filter used in .yml/.yaml/.j2/.jinja2/.tmpl must be defined locally. Scans only inside Jinja blocks and ignores pipes inside strings (e.g., lookup('pipe', "... | grep ... | awk ...")) to avoid false positives like trusted_hosts, woff/woff2, etc.

Bug fixes & robustness:
- Build regexes without %-string formatting to avoid ValueError from literal '%' in Jinja tags.
- Strip quoted strings in usage analysis so sed/grep/awk pipes are not miscounted as filters.
- Prevent self-matches in the defining file.

Cleanup / removal of dead code:
- Removed unused filter plugins and related unit tests:
  * filter_plugins/alias_domains_map.py
  * filter_plugins/get_application_id.py
  * filter_plugins/load_configuration.py
  * filter_plugins/safe.py
  * filter_plugins/safe_join.py
  * roles/svc-db-openldap/filter_plugins/build_ldap_nested_group_entries.py
  * roles/sys-ctl-bkp-docker-2-loc/filter_plugins/dict_to_cli_args.py
  * corresponding tests under tests/unit/*
- roles/svc-db-postgres/filter_plugins/split_postgres_connections.py: dropped no-longer-needed list_postgres_roles API; adjusted tests.

Misc:
- sys-stk-front-proxy/defaults/main.yml: clarified valid vhost_flavour values (comma-separated).

Ref: https://chatgpt.com/share/68b56bac-c4f8-800f-aeef-6708dbb44199
2025-09-01 11:47:51 +02:00
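A rough sketch of the AST-based detection mentioned in the commit above, assuming each filter plugin exposes a FilterModule class whose filters() method returns a dict; function and path names here are illustrative, not the test's actual API.

```python
import ast
from pathlib import Path

def defined_filters(plugin_file: Path) -> set:
    """Collect filter names appearing as dict keys in FilterModule.filters()."""
    tree = ast.parse(plugin_file.read_text(encoding="utf-8"))
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef) and node.name == "FilterModule":
            for func in node.body:
                if isinstance(func, ast.FunctionDef) and func.name == "filters":
                    for ret in ast.walk(func):
                        if isinstance(ret, ast.Return) and isinstance(ret.value, ast.Dict):
                            for key in ret.value.keys:
                                if isinstance(key, ast.Constant) and isinstance(key.value, str):
                                    names.add(key.value)
    return names

# Usage idea: union over all filter_plugins/*.py, then search the repo's Jinja blocks
# for each name and fail if a filter is only ever referenced under tests/.
```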
34b3f3b0ad Optimized healthcheck link for web-app-yourls 2025-09-01 10:54:08 +02:00
94fe58b5da safe_join: raise ValueError on None parameters and update tests
Changed safe_join to raise ValueError if base or tail is None instead of returning 'None/path'.
Adjusted unit tests accordingly to expect exceptions for None inputs and kept empty-string handling valid.

Ref: https://chatgpt.com/share/68b55850-e854-800f-9702-09ea956b8dc4
2025-09-01 10:25:08 +02:00
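A small sketch of the changed contract, assuming safe_join concatenates a base and a tail with a single slash; the exact normalization in the (later removed) plugin may have differed.

```python
def safe_join(base, tail):
    """Join base and tail with exactly one '/', rejecting None inputs."""
    if base is None or tail is None:
        raise ValueError("safe_join: 'base' and 'tail' must not be None")
    return f"{str(base).rstrip('/')}/{str(tail).lstrip('/')}"

assert safe_join("https://example.org", "/path") == "https://example.org/path"
assert safe_join("", "") == "/"   # empty strings stay accepted in this sketch
```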
9feb766e6f Replaced style-src-elem with style-src 2025-09-01 10:14:03 +02:00
231fd567b3 feat(frontend): rename inj roles to sys-front-*, add sys-svc-cdn, cache-busting lookup
Introduce sys-svc-cdn (cdn_paths/cdn_urls/cdn_dirs) and ensure CDN directories + latest symlink.

Rename sys-srv-web-inj-* → sys-front-inj-*; update includes/templates; serve shared/per-app CSS & JS via CDN.

Add lookup_plugins/local_mtime_qs.py for mtime-based cache busting; split CSS into default.css/bootstrap.css + optional per-app style.css.

CSP: use style-src-elem; drop unsafe-inline for styles. Services: fix SYS_SERVICE_ALL_ENABLED bool and controlled flush.

BREAKING CHANGE: role names changed; replace includes and references accordingly.

Conversation: https://chatgpt.com/share/68b55494-9ec4-800f-b559-44707029141d
2025-09-01 10:10:23 +02:00
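A hedged sketch of the mtime-based cache-busting idea behind lookup_plugins/local_mtime_qs.py, assuming the lookup turns a local file's modification time into a ?v=<mtime> query string; the real plugin's arguments and parameter name may differ.

```python
import os

def mtime_query_string(path: str, param: str = "v") -> str:
    """Return '?v=<mtime>' for an existing file so asset URLs change whenever the file does."""
    mtime = int(os.path.getmtime(path))   # raises FileNotFoundError for missing assets
    return f"?{param}={mtime}"

# e.g. href="/css/default.css{{ lookup('local_mtime_qs', css_path) }}" would render
# something like /css/default.css?v=1724140800
```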
3f8e7c1733 Refactor CSP filter:
- Move default 'unsafe-inline' for style-src and style-src-elem into get_csp_flags
- Ensure hashes are only added if 'unsafe-inline' not in final tokens
- Improve comments and structure
- Extend unit tests to cover default flags, overrides, and final-token logic
See: https://chatgpt.com/share/68b54520-5cfc-800f-9bac-45093740df78
2025-09-01 09:03:22 +02:00
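A condensed sketch of the final-token rule described above, assuming the filter assembles a token list per directive and only then decides whether inline hashes may be appended; names and token formatting are illustrative.

```python
def finalize_directive(tokens, hashes):
    """Append CSP hashes only when 'unsafe-inline' is absent from the final tokens;
    adding a hash alongside 'unsafe-inline' would make browsers ignore the latter."""
    if "'unsafe-inline'" in tokens:
        return list(tokens)
    return list(tokens) + list(hashes)

print(finalize_directive(["'self'", "'unsafe-inline'"], ["'sha256-abc123='"]))
# ["'self'", "'unsafe-inline'"]  (hashes dropped)
print(finalize_directive(["'self'"], ["'sha256-abc123='"]))
# ["'self'", "'sha256-abc123='"]
```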
3bfab9ef8e feat(filter_plugins/url_join): add query parameter support
- Support query elements starting with '?' or '&'
  * First query element normalized to '?', subsequent to '&'
  * Each query element must be exactly one 'key=value' pair
  * Query elements may only appear after path elements
  * Once query starts, no more path elements are allowed
- Extend test suite with success and failure cases for query handling

See: https://chatgpt.com/share/68b537ea-d198-800f-927a-940c4de832f2
2025-09-01 08:16:22 +02:00
f1870c07be refactor(filter_plugins/url_join): enforce mandatory scheme and raise specific AnsibleFilterError messages
Improved url_join filter:
- Requires first element to contain a valid '<scheme>://'
- Raises specific errors for None, empty list, wrong type, missing scheme,
  extra schemes in later parts, or string conversion failures
- Provides clearer error messages with index context in parts

See: https://chatgpt.com/share/68b537ea-d198-800f-927a-940c4de832f2
2025-09-01 08:06:48 +02:00
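A compact sketch of the rules from the two url_join commits above, with AnsibleFilterError simplified to ValueError; the real filter raises AnsibleFilterError with index context and may normalize differently.

```python
def url_join(parts):
    """Join URL parts: the first part must carry '<scheme>://', later parts are path
    segments, and '?'/'&' parts become query pairs that may only follow the path."""
    if not parts:
        raise ValueError("url_join: parts must be a non-empty list")
    first = str(parts[0])
    if "://" not in first:
        raise ValueError("url_join: first element must contain '<scheme>://'")
    scheme, rest = first.split("://", 1)
    path = [rest.strip("/")] if rest else []
    query = []
    for idx, part in enumerate(parts[1:], start=1):
        text = str(part)
        if "://" in text:
            raise ValueError(f"url_join: unexpected extra scheme in part {idx}")
        if text.startswith(("?", "&")):
            pair = text[1:]
            if pair.count("=") != 1:
                raise ValueError(f"url_join: part {idx} must be exactly one 'key=value' pair")
            query.append(pair)
        else:
            if query:
                raise ValueError(f"url_join: path element at index {idx} after query start")
            path.append(text.strip("/"))
    url = f"{scheme}://" + "/".join(p for p in path if p)
    if query:
        url += "?" + "&".join(query)   # first query element becomes '?', the rest '&'
    return url

print(url_join(["https://example.org", "api", "v1", "?page=2", "&limit=50"]))
# https://example.org/api/v1?page=2&limit=50
```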
d0cec9a7d4 CSP filters: add explicit style-src-elem handling and improve unit tests
See ChatGPT conversation: https://chatgpt.com/share/68b4a82c-e0c8-800f-9273-9165ce1aa8d6
2025-08-31 21:53:39 +02:00
1dbd714a56 yourls: move container_port/healthcheck to vars; listen on 8080
• Removed hardcoded container_port/container_healthcheck from docker-compose.yml.j2
• Added container_port=8080 and container_healthcheck to vars/main.yml
• Rationale: current image listens on 8080; centralizes settings in vars

Ref: https://chatgpt.com/share/68b4a69d-e4b0-800f-a4f8-6c8e4fc55ee4
2025-08-31 21:48:24 +02:00
3a17b2979e Refactor CSP filters to use get_url for domain resolution and update tests to check CSP directives order-independently. See: https://chatgpt.com/share/68b49e5c-6774-800f-9d8e-a3f980799c08 2025-08-31 21:11:57 +02:00
bb0530c2ac Optimized yourls variables and healthcheck 2025-08-31 20:38:02 +02:00
aa2eb53776 fix(csp): always include internal CDN in script-src/connect-src and update tests accordingly
See ChatGPT conversation: https://chatgpt.com/share/68b492b8-847c-800f-82a9-fb890d4add7f
2025-08-31 20:22:05 +02:00
5f66c1a622 feat(postgres): add split_postgres_connections filter and average pool fact
Compute POSTGRES_ALLOWED_AVG_CONNECTIONS once and propagate to app roles (gitlab, mastodon, listmonk, matrix, pretix, mobilizon, openproject, discourse). Fix docker-compose postgres command (-c flags split). Add unit tests. Minor env/locale tweaks and includes.

Conversation: https://chatgpt.com/share/68b48e72-cc28-800f-9c21-270cbc17d82a
2025-08-31 20:04:14 +02:00
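A minimal sketch of the averaging idea, assuming POSTGRES_ALLOWED_AVG_CONNECTIONS is simply the server's max_connections divided across the consuming applications; the filter name comes from the commit, everything else is illustrative.

```python
def split_postgres_connections(max_connections: int, app_names) -> int:
    """Split the Postgres max_connections budget evenly across apps using the central DB."""
    apps = list(app_names)
    if not apps:
        raise ValueError("split_postgres_connections: need at least one application")
    return max(1, max_connections // len(apps))

apps = ["gitlab", "mastodon", "listmonk", "matrix", "pretix", "mobilizon", "openproject", "discourse"]
print(split_postgres_connections(200, apps))  # 25 connections per application pool
```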
b3dfb8bf22 Fix: Resolved Discourse plugin bug and unified variable/path handling
- Discourse: fixed 'DISCOURSE_CONTAINERS_DIR' and 'DISCOURSE_APPLICATION_YML_DEST'
- Nextcloud: improved plugin enable/configure tasks formatting
- WordPress: unified OIDC, msmtp, and upload.ini variables and tasks
- General: aligned spacing and switched to path_join for consistency
2025-08-29 20:53:36 +02:00
db642c1c39 refactor(schedule): unify service timeouts, rename 08_timer.yml → 08_schedule.yml, fix docker repair/update timeouts, raise WP upload limit
See https://chatgpt.com/share/68b1deb9-2534-800f-b28f-7f19925b1fa7
2025-08-29 19:09:28 +02:00
2fccebbd1f Enforce uppercase README.md and TODO.md filenames
- Renamed all Readme.md → README.md
- Renamed all Todo.md → TODO.md
- Added integration test (tests/integration/test_filename_conventions.py) to automatically check naming convention.

Background:
Consistency in file naming (uppercase README.md and TODO.md) avoids issues with case-sensitive filesystems and ensures desktop cards (e.g. Pretix) are properly included.
Ref: https://chatgpt.com/share/68b1d135-c688-800f-9441-46a3cbfee175
2025-08-29 18:11:53 +02:00
c23fbd8ec4 Add new role web-app-confluence
Introduced a new Ansible role for deploying Atlassian Confluence within the Infinito.Nexus ecosystem.
The role follows the same structure as web-app-pretix and includes:

- Core variables, database config, OIDC integration.
- Docker service definitions, features (Matomo, CSS, OIDC, logout, central DB).
- Loads docker, db and proxy stack.
- Placeholder for schema definitions.
- Templates:
  - base for OIDC plugins/extensions,
  - service orchestration,
  - environment configuration.
- Metadata, license, company, logo (Font Awesome book-open icon).


Canonical domain is set to `confluence.{{ PRIMARY_DOMAIN }}`.
This role ensures Confluence integrates seamlessly with Keycloak OIDC and the Infinito.Nexus service stack.

Conversation: https://chatgpt.com/share/68b1d006-bbd4-800f-9d2e-9c8a8af2c00f
2025-08-29 18:07:01 +02:00
2999d9af77 web-app-pretix: fully implemented role
Summary:
- Replace draft with complete README (features, resources, credits).
- Remove obsolete Todo.md.
- Switch to custom image tag (PRETIX_IMAGE_CUSTOM) and install 'pretix-oidc' in Dockerfile.
- Drop unused 'config' volume; keep persistent 'data' only.
- Rename docker-compose service from 'application' to 'pretix' and use container_port.
- Use standard depends_on include for DB/Redis (dmbs_excl).
- Align vars to docker.services.pretix.* (image/version/name); add PRETIX_IMAGE_CUSTOM.

Breaking:
- Service key changed to 'pretix' under docker.services.
- 'config' volume removed from compose.

Status:
- Pretix role is now fully implemented and production-ready.

Reference:
- Conversation: https://chatgpt.com/share/68b1cb34-b7dc-800f-8b39-c183124972f2
2025-08-29 17:46:31 +02:00
2809ffb9f0 Added correct fa class and description for pretix 2025-08-29 17:02:50 +02:00
cb12114ce8 Added correct run_after for pretix 2025-08-29 16:47:21 +02:00
ba99e558f7 Improve SAN certbundle task logic and messages
- Fixed typo: 'seperat' → 'separate'
- Added more robust changed_when conditions (stdout + stderr, handle already-issued, rate-limit, service-down cases)
- Added explicit warnings for Let's Encrypt rate limits (exact set and generic)
- Improved readability of SAN encapsulation task with descriptive name

See conversation: https://chatgpt.com/share/68b1bc75-c3a0-800f-8861-fcf4f5f4a48c
2025-08-29 16:45:03 +02:00
2aed0f97d2 Enhance timeout_start_sec_for_domains filter to accept dict, list, or str
- Updated filter to handle dict (domain map), list (flattened domains), or single str inputs.
- Prevents duplicate 'www.' prefixes by checking prefix before adding.
- Adjusted unit tests:
  * Replaced old non-dict test with invalid type tests (int, None).
  * Added explicit tests for list and string input types.

See conversation: https://chatgpt.com/share/68b1ae9a-1ac0-800f-b49d-2915386a1a23
2025-08-29 15:57:00 +02:00
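A hedged sketch of the input normalization and timeout formula from the two commits around this filter, assuming the timeout scales linearly with the number of unique domains (a base value plus a per-domain increment); the actual constants are not shown in the commits.

```python
def timeout_start_sec_for_domains(domains, include_www=True, base=120, per_domain=25):
    """Accept a dict (domain map), list, or single str and derive a TimeoutStartSec value."""
    if isinstance(domains, dict):
        flat = [d for values in domains.values()
                for d in (values if isinstance(values, (list, tuple)) else [values])]
    elif isinstance(domains, (list, tuple)):
        flat = list(domains)
    elif isinstance(domains, str):
        flat = [domains]
    else:
        raise TypeError(f"Unsupported type for domains: {type(domains).__name__}")
    if include_www:
        # only prefix domains that are not already 'www.' to avoid duplicates
        flat += [f"www.{d}" for d in flat if not d.startswith("www.")]
    return base + per_domain * len(set(flat))

print(timeout_start_sec_for_domains({"app": ["example.org"], "cloud": "cloud.example.org"}))
# 220 with the assumed constants (4 unique domains)
```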
f36c7831b1 Implement dynamic TimeoutStartSec filter for domains and update roles
- Added new filter plugin 'timeout_start_sec_for_domains' to calculate TimeoutStartSec based on number of domains.
- Updated sys-ctl-hlth-csp and sys-ctl-hlth-webserver tasks to use the filter.
- Removed obsolete systemctl.service.j2 in sys-ctl-hlth-csp.
- Adjusted variable naming (CURRENT_PLAY_DOMAINS_ALL etc.) in multiple roles.
- Updated srv-letsencrypt and sys-svc-certs to use uppercase vars.
- Switched pretix role to sys-stk-full-stateful and removed leftover javascript.js.
- Added unittests for the new filter under tests/unit/filter_plugins.

See conversation: https://chatgpt.com/share/68b1ae9a-1ac0-800f-b49d-2915386a1a23
2025-08-29 15:44:31 +02:00
009bee531b Refactor role naming for TLS and proxy stack
- Renamed role `srv-tls-core` → `sys-svc-certs`
- Renamed role `srv-https-stack` → `sys-stk-front-pure`
- Renamed role `sys-stk-front` → `sys-stk-front-proxy`
- Updated all includes, READMEs, meta, and dependent roles accordingly

This improves clarity and consistency of naming conventions for certificate management and proxy orchestration.

See: https://chatgpt.com/share/68b19f2c-22b0-800f-ba9b-3f2c8fd427b0
2025-08-29 14:38:20 +02:00
4c7bb6d9db Solved path bugs and optimized them 2025-08-29 14:13:59 +02:00
092869b29a pretix: enable OIDC support
- add pretix-oidc plugin installation (Dockerfile, version 2.3.1 default)
- configure OIDC env vars (issuer, endpoints, client ID/secret, scopes, unique attribute)
- enable redis + database, add config/data volumes
- switch canonical domain to ticket.<PRIMARY_DOMAIN> with pretix.<PRIMARY_DOMAIN> alias
- mirror GitLab-style OIDC var structure for consistency

Implements pretix authentication via Keycloak/SSO.
See: https://chatgpt.com/share/68b19721-341c-800f-b372-527164474018
2025-08-29 14:04:03 +02:00
f4ea6c6c0f refactor(web-app-gitlab): restructure configuration and add OIDC support
- Added oidc feature flag in config
- Removed obsolete credentials schema (initial_root_password)
- Updated docker-compose.yml.j2 to use explicit GITLAB_* vars (image, version, container, volumes)
- Moved initial_root_password into vars/main.yml
- Introduced GITLAB_OMNIBUS_BASE and GITLAB_OMNIBUS_OIDC config lists
- Switched env.j2 to use GITLAB_OMNIBUS_ALL join

See conversation: https://chatgpt.com/share/68b1962c-3ee0-800f-a858-d4590ff6132a
2025-08-29 14:02:46 +02:00
3ed84717a7 Solved wireguard name bugs 2025-08-29 13:03:06 +02:00
1cfc2b7e23 Optimized mastodon url 2025-08-29 12:27:59 +02:00
01b9648650 Made OIDC secret UPPER 2025-08-29 12:27:29 +02:00
65d3b3040d Activated system_service_suppress_flush for journalctl service 2025-08-29 12:26:53 +02:00
28f7ac5aba Removed attendize because it isn't maintained anymore. Pretix is the successor. 2025-08-29 11:17:10 +02:00
19926b0c57 Optimized web-app-desktop variables 2025-08-29 11:04:52 +02:00
3a79d9d630 Optimized pkgmgr variables and removed 'Ensure main.py is executable' because it should be preset by repositories itself 2025-08-29 10:53:36 +02:00
983287a84a Finished mediawiki oidc implementation 2025-08-29 04:24:50 +02:00
dd9a9b6d84 feat(mediawiki): Refactor OIDC + debug; install Composer deps in-container; modularize role
Discussion: https://chatgpt.com/share/68b10c0a-c308-800f-93ac-2ffb386cf58b

- Split tasks into 01_install, 02_debug, 03_admin, 04_extensions, 05_oidc.
- Ensure unzip+git+composer on demand in the container; run Composer as www-data with COMPOSER_HOME=/tmp/composer.
- Idempotently unpack/install PluggableAuth & OpenIDConnect; run composer install only if vendor/ is missing.
- Add sanity check for Jumbojett\OpenIDConnectClient.
- Copy oidc.php only when changed and append a single require_once to LocalSettings.php.
- Use REL1_44-compatible numeric array for $wgPluggableAuth_Config; set $wgPluggableAuth_ButtonLabelMessage.
- Debug: add debug.php that logs to STDERR (visible via docker logs); toggle cleanly with MODE_DEBUG.
- Enable OIDC feature in config; add paths/OIDC/extension vars in vars/main.yml.

fix(services): include SYS_SERVICE_GROUP_CLEANUP in StartPre lock (ssd-hdd, docker-hard).

fix(desktop/joomla): simplify MODE_DEBUG templating.

chore: minor cleanups and renames.
2025-08-29 04:10:46 +02:00
23a2e081bf Optimized services 2025-08-29 01:11:06 +02:00
4cbd848026 Set SYS_TIMER_ALL_ENABLED by default to DEBUG_MODE 2025-08-29 01:06:09 +02:00
d67f660152 Enabled CSS and Desktop for Mediawiki 2025-08-29 00:46:29 +02:00
5c6349321b Removed MyBB role, because it's deprecated and Discourse takes over 2025-08-29 00:12:35 +02:00
af1ee64246 web-app-mediawiki: installer-driven bootstrap, DB readiness, idempotent admin; drop LocalSettings bind-mount
Tasks:
- Enable docker_compose_flush_handlers=true so services come up immediately.
- Add DB readiness guard via maintenance/sql.php (SELECT 1).
- Run maintenance/install.php on empty schema with robust changed_when/failed_when (merge stdout+stderr); keep secrets hidden.
- Run maintenance/update.php for migrations with neutral changed_when unless work is done.
- Make admin creation idempotent: tolerate 'already exists' and 'Account exists', keep async+no_log.

Config changes:
- Remove LocalSettings.php template and its host bind-mount from compose.
- Drop MediaWiki settings path variables and META namespace variable (unused after switch).

Result: First boot is fully automated (schema + admin), subsequent runs are cleanly idempotent.

Ref: ChatGPT conversation (Aug 28, 2025, Europe/Berlin) — https://chatgpt.com/share/68b0d2e1-9bc0-800f-81a5-db03ce0b81e3.
2025-08-29 00:07:00 +02:00
d96bfc64a6 added possibility to deactivate docker service loading for performance 2025-08-28 22:47:05 +02:00
6ea8301364 Refactor: migrate cmp/* and srv/* roles into sys-stk/* and sys-svc/* namespaces
- Removed obsolete 'cmp' category, introduced 'stk' category (fa-bars-staggered icon).
- Renamed roles:
  * cmp-db-docker → sys-stk-back-stateful
  * cmp-docker-oauth2 → sys-stk-back-stateless
  * srv-domain-provision → sys-stk-front
  * cmp-db-docker-proxy → sys-stk-full-stateful
  * cmp-docker-proxy → sys-stk-full-stateless
  * cmp-rdbms → sys-svc-rdbms
- Updated all include_role references, vars, templates and README.md files.
- Adjusted run_once comments and variable paths accordingly.
- Updated all web-app roles to use new sys-stk/* and sys-svc/* roles.

Conversation: https://chatgpt.com/share/68b0ba66-09f8-800f-86fc-76c47009d431
2025-08-28 22:23:09 +02:00
92f5bf6481 refactor(web-app-mybb): remove obsolete Installation.md, introduce schema for secret_pin, and rework task/vars handling
- Removed outdated Installation.md (manual plugin instructions no longer needed)
- Added schema/main.yml with validation for secret_pin
- Added config.php.j2 template to manage DB + admin config
- Refactored tasks/main.yml to deploy config.php instead of legacy docker-compose
- Removed setup-domain.yml (TLS/domain handling moved to core roles)
- Updated docker-compose.yml.j2 to mount config.php and use new vars
- Cleaned up vars/main.yml: standardized MYBB_* variable names, added MYBB_SECRET_PIN, config paths, and container port

See ChatGPT conversation: https://chatgpt.com/share/68b0ae26-93ec-800f-8785-0da7c9303090
2025-08-28 21:29:58 +02:00
58c17bf043 web-app-mediawiki: template-driven LocalSettings.php + admin automation; compose & config tweaks
Config & features:
- roles/web-app-mediawiki/config/main.yml:
  - Add sitename ('Wiki on {{ PRIMARY_DOMAIN | upper }}') and meta_namespace ('Meta')
  - Enable central_database feature and database service
  - Move volumes under docker.volumes (correct indentation)

Tasks & automation:
- roles/web-app-mediawiki/tasks/main.yml:
  - Avoid immediate compose handler flush (docker_compose_flush_handlers: false), then explicit meta: flush_handlers
  - Deploy templated LocalSettings.php to host path
  - Create admin via maintenance/createAndPromote.php (docker exec, idempotent changed_when/failed_when)

Templates:
- roles/web-app-mediawiki/templates/LocalSettings.php.j2:
  - Set $wgSitename, $wgMetaNamespace, $wgServer from MEDIAWIKI_*
  - DB settings (mysql, host:port, name, user, password)
  - Mail settings (EmergencyContact/PasswordSender)
  - Default skin: vector
  - Load basic extensions (ParserFunctions, Cite)
- roles/web-app-mediawiki/templates/docker-compose.yml.j2:
  - Switch to MEDIAWIKI_* vars, mount LocalSettings.php (ro)
  - Use container_port, include curl healthcheck
  - Fix volumes name to MEDIAWIKI_VOLUME

Vars:
- roles/web-app-mediawiki/vars/main.yml:
  - Restructure with MEDIAWIKI_* (sitename, meta_namespace, URL, image/version/container/volume)
  - Define SETTINGS host/dock paths, container_port, default user (www-data)
  - Admin bootstrap vars (name/password/email)

Misc:
- Add empty schema/main.yml placeholder for future validation

Refs: ChatGPT conversation (2025-08-28, Europe/Berlin). Link: https://chatgpt.com/share/68b0ace6-f8f4-800f-b7a7-a51a6c5260f1
2025-08-28 21:28:47 +02:00
6c2d5c52c8 Attached 'not (system_service_suppress_flush | bool)' directly to handler 2025-08-28 21:16:04 +02:00
b919f39e35 Made stop not required for the Joomla container 2025-08-28 21:15:07 +02:00
9f2cfe65af Remove non-functional Joomla LDAP integration
- Disabled LDAP feature flag (set to false by default, with comment)
- Removed ldapautocreate plugin (PHP + XML)
- Deleted LDAP helper tasks (01_ldap_files.yml, 05_ldap.yml, 07_diagnose.yml)
- Deleted LDAP CLI helper scripts (cli.php, diagnose.php, plugins.php, auth-trace.php)
- Removed LDAP configuration variables from vars/main.yml
- Removed LDAP environment variables from env.j2
- Removed LDAP-specific mounts from docker-compose.yml.j2
- Dropped php-ldap installation from Dockerfile
- Renamed task files for consistent numbering (02->01_install, 03->02_debug, 04->03_patch, 06->04_assert)

Reason: LDAP integration was removed because it was not functional.

Conversation: https://chatgpt.com/share/68b09373-7aa8-800f-8f2c-11e27123bad1
2025-08-28 19:36:12 +02:00
fe399c3967 Added all LDAP changes before removing, because it doesn't work. Will try to replace it with OIDC 2025-08-28 19:22:37 +02:00
ef801aa498 Joomla: Add LDAP autocreate plugin support
- Introduced autocreate_users feature flag in config/main.yml
- Added ldapautocreate.php and ldapautocreate.xml plugin files
- Implemented tasks/01_ldap_files.yml for plugin deployment
- Added tasks/05_ldap.yml to configure LDAP plugin and register ldapautocreate
- Renamed tasks for better structure (01→02, 02→03, etc.)
- Updated cli-ldap.php.j2 for clean parameter handling
- Mounted ldapautocreate plugin via docker-compose.yml.j2
- Extended vars/main.yml with LDAP autocreate configuration

Ref: https://chatgpt.com/share/68b0802f-bfd4-800f-b10a-57cf0c091f7e
2025-08-28 18:13:53 +02:00
18f3b1042f feat(web-app-joomla): reliable first-run install, safe debug toggler, DB patching, LDAP scaffolding
Why
- Fix flaky first-run installs and make config edits idempotent.
- Prepare LDAP support and allow optional inline CSP for UI.
- Improve observability and guard against broken configuration.php.

What
- config/main.yml: enable features.ldap; add CSP flags (allow inline style/script elem); minor spacing.
- tasks/: split into 01_install (wait for core, absolute CLI path), 02_debug (toggle $debug/$error_reporting safely), 03_patch (patch DB creds in configuration.php), 04_ldap (configure plugin via helper), 05_assert (optional php -l).
- templates/Dockerfile.j2: conditionally install/compile php-ldap (fallback to docker-php-ext-install with libsasl2-dev).
- templates/cli-ldap.php.j2: idempotently enable & configure Authentication - LDAP from env.
- templates/docker-compose.yml.j2: build custom image when LDAP is enabled; mount cli-ldap.php; pull_policy: never.
- templates/env.j2: add site/admin vars, MariaDB connector/env, full LDAP env.
- vars/main.yml: default to MariaDB (mysqli), add JOOMLA_* vars incl. JOOMLA_CONFIG_FILE.

Notes
- LDAP path implemented but NOT yet tested end-to-end.
- Ref: https://chatgpt.com/share/68b068a8-2aa4-800f-8cd1-56383561a9a8.
2025-08-28 16:33:45 +02:00
dece6228a4 Refactor docker-compose build logic and pull policy
- Added conditional '--pull' flag on retry in docker-compose build handler, tied to MODE_UPDATE
- Added 'pull_policy: never' to multiple docker-compose service templates to prevent unwanted image pulls
- Fixed minor formatting issues (e.g. Nextcloud volume spacing, WordPress desktop alignment)

Reference: https://chatgpt.com/share/68b0207a-4d9c-800f-b76f-9515885e5183
2025-08-28 11:25:35 +02:00
cb66fb2978 Refactor LDAP variable schema to use top-level constant LDAP and nested ALL-CAPS keys.
- Converted group_vars/all/13_ldap.yml from lower-case to ALL-CAPS nested keys.
- Updated all roles, tasks, templates, and filter_plugins to reference LDAP.* instead of ldap.*.
- Fixed Keycloak JSON templates to properly quote Jinja variables.
- Adjusted svc-db-openldap filter plugins and unit tests to handle new LDAP structure.
- Updated integration test to only check uniqueness of TOP-LEVEL ALL-CAPS constants, ignoring nested keys.

See: https://chatgpt.com/share/68b01017-efe0-800f-a508-7d7e2f1c8c8d
2025-08-28 10:15:48 +02:00
b9da6908ec keycloak(role): add realm support to generic updater
- Allow kc_object_kind='realm'
- Map endpoint to 'realms' and default lookup_field to 'id'
- Use realm-specific kcadm GET/UPDATE (no -r flag)
- Preserve immutables: id, realm
- Guard query-based ID resolution to non-realm objects

Context: fixing failure in 'Update REALM mail settings' task.
See: https://chatgpt.com/share/68affdb8-3d28-800f-8480-aa6a74000bf8
2025-08-28 08:57:29 +02:00
8baec17562 web-app-taiga: extract admin bootstrap into dedicated task; add robust upsert path
Add roles/web-app-taiga/tasks/01_administrator.yml to handle admin creation via 'createsuperuser' and, on failure, an upsert fallback using 'manage.py shell'. Ensures email, is_staff, is_superuser, is_active are set and password is updated when needed; emits CHANGED marker for idempotence.

Update roles/web-app-taiga/tasks/main.yml to include the new 01_administrator.yml task file, removing the inline admin logic for better separation of concerns.

Uses taiga-manage helper service and composes docker-compose.yml with docker-compose-inits.yml to inherit env/networks/volumes consistently.

Chat reference: https://chatgpt.com/share/68af7637-225c-800f-b670-2b948f5dea54
2025-08-27 23:58:37 +02:00
1401779a9d web-app-taiga: add manage/init flow and idempotent admin bootstrap; fix OIDC config and env quoting
config/main.yml: convert oidc from empty mapping to block; indent flavor under oidc; enable javascript feature.

tasks/main.yml: use path_join for taiga settings; create docker-compose-inits via TAIGA_DOCKER_COMPOSE_INIT_PATH; flush handlers; add idempotent createsuperuser via taiga-manage with async/poll and masked logs.

templates/docker-compose-inits.yml.j2: include compose/container base to inherit env and project settings.

templates/env.j2: quote WEB_PROTOCOL and WEBSOCKET_PROTOCOL.

templates/javascript.js.j2: add SSO warning include.

users/main.yml: add administrator email stub.

vars/main.yml: add js_application_name; restructure OIDC flavor flags; add compose PATH vars; expose TAIGA_SUPERUSER_* vars.

Chat reference: https://chatgpt.com/share/68af7637-225c-800f-b670-2b948f5dea54
2025-08-27 23:19:42 +02:00
707a3fc1d0 Optimized defaults for modes 2025-08-27 22:58:05 +02:00
d595d46e2e Solved unquoted bug 2025-08-27 22:30:03 +02:00
73d5651eea web-app-taiga: refactor OIDC gating + defaults
- Introduced dedicated variables in vars/main.yml:
  * TAIGA_FLAVOR_TAIGAIO
  * TAIGA_TAIGAIO_ENABLED
- Replaced inline Jinja2 get_app_conf checks with TAIGA_TAIGAIO_ENABLED for
  consistency in tasks, docker-compose template and env file.
- Adjusted env.j2 to use TAIGA_TAIGAIO_ENABLED instead of direct flavor checks.
- Enabled css by default (true instead of false).
- Cleaned up spacing/indentation in config and env.

This improves readability, reduces duplicated logic, and makes it easier to
maintain both OIDC flavors (robrotheram, taigaio).

Conversation: https://chatgpt.com/share/68af65b3-27c0-800f-964f-ff4f2d96ff5d
2025-08-27 22:08:35 +02:00
12a267827d Refactor websocket and Taiga variables
- Introduce WEBSOCKET_PROTOCOL derived from WEB_PROTOCOL (wss if https, else ws).
- Replace hardcoded websocket URLs in EspoCRM, Nextcloud and Taiga with {{ WEBSOCKET_PROTOCOL }}.
- Fix mautrix-imessage to use ws:// for internal synapse:8008.
- Standardize Pixelfed OIDC env spacing.
- Refactor Taiga variables to TAIGA_* naming convention and clean up EMAIL_BACKEND definition.

See: https://chatgpt.com/share/68af62fa-4dcc-800f-9aaf-cff746daab1e
2025-08-27 21:57:04 +02:00
c6cd6430bb Refactor Joomla role to new docker.* schema
- Move image definition from images.joomla to docker.services.joomla
- Add container name, container_port variable, and healthcheck
- Introduce JOOMLA_IMAGE, JOOMLA_VERSION, JOOMLA_CONTAINER, JOOMLA_VOLUME in vars
- Use volume mapping via docker.volumes.data

See: https://chatgpt.com/share/68af55a9-6514-800f-b6f7-1dc86356936e
2025-08-27 21:00:08 +02:00
67b2ebf001 Encapsulated code to pass performance tests 2025-08-27 20:58:00 +02:00
ebb6660473 Renamed Gitea variables 2025-08-27 20:49:35 +02:00
f62d09d8f1 Handle Let's Encrypt maintenance errors gracefully
- Extend certbundle task to ignore 'The service is down for maintenance or had an internal error'
  as a fatal failure.
- Add debug/warning output when this error occurs, so playbook does not stop but logs the issue.
- Ensure changed_when does not mark run as changed if only maintenance error was hit.

Ref: https://chatgpt.com/share/68af4e15-24cc-800f-b1dd-6a5f2380e35a
2025-08-27 20:28:25 +02:00
de159db918 web-app-wordpress: move msmtp configuration from Docker image to docker-compose mount
- Removed COPY of msmtp configuration from Dockerfile to avoid baking secrets/config into the image
- Added volume mount for host-side msmtp config ({{ WORDPRESS_HOST_MSMTP_CONF }}) in docker-compose.yml
- Keeps PHP upload.ini handling inside the image, but externalizes sensitive mail configuration
- Increases flexibility and avoids rebuilds when msmtp config changes

Ref: https://chatgpt.com/share/68af3c51-0544-800f-b76f-b2660c43addb
2025-08-27 19:12:03 +02:00
e2c2cf4bcf Updated sys-svc-msmtp execution condition 2025-08-27 18:12:49 +02:00
6e1e1ad5c5 Renamed pixelfed parameter 2025-08-27 18:11:31 +02:00
06baa4b03a Added correct validation handling 2025-08-27 18:10:49 +02:00
73e7fbdc8a refactor(web-app-wordpress): unify variable naming to uppercase WORDPRESS_* style
- Replaced all lowercase wordpress_* variables with uppercase WORDPRESS_* equivalents
- Ensured consistency across tasks, templates, and vars
- Improves readability and aligns with naming conventions

Conversation: https://chatgpt.com/share/68af29b5-8e7c-800f-bd12-48cc5956311c
2025-08-27 17:52:38 +02:00
bae2bc21ec Optimized system services included suppress option and solved bugs 2025-08-27 17:34:59 +02:00
a8f4dea9d2 Solved matrix name bug 2025-08-27 16:39:07 +02:00
5aaf2d28dc Refactor path handling, service conditions and dependencies
- Fixed incorrect filter usage in docker-compose handler (proper use of | path_join).
- Improved LetsEncrypt template by joining paths with filenames instead of appending manually.
- Enhanced sys-svc-msmtp task with an additional condition to only run if no-reply mailu_token exists.
- Updated Keycloak meta to depend on Mailu (ensuring token generation before setup).
- Refactored Keycloak import path variables to use path_join consistently.
- Adjusted Mailu meta dependency to run after Matomo instead of Keycloak.

See: https://chatgpt.com/share/68af13e6-edc0-800f-b76a-a5f427837173
2025-08-27 16:19:57 +02:00
5287bb4d74 Refactor Akaunting role and CSP handling
- Improved CSP filter to properly include web-svc-cdn and use protocol-aware domains
- Added Todo.md with redis and OIDC notes
- Enhanced Akaunting role config with CSP flags and redis option
- Updated schema to include app_key validation
- Reworked tasks to handle first-run marker logic cleanly
- Fixed docker-compose template (marker, healthcheck, setup flag)
- Expanded env.j2 with cache, email, proxy, and redis options
- Added javascript.js.j2 template for SSO warning
- Introduced structured vars for Akaunting role
- Removed deprecated update-repository-with-files.yml task

See conversation: https://chatgpt.com/share/68af00df-2c74-800f-90b6-6ac5b29acdcb
2025-08-27 14:58:44 +02:00
5446a1497e Optimized Attendize role. Role can be removed as soon as Pretix is implemented as the alternative tool 2025-08-27 12:27:55 +02:00
19889a8cfc fix(credentials, akaunting):
- update cli/create/credentials.py to handle vault literals correctly:
  * strip 'vault |' headers and keep only ANSIBLE_VAULT body
  * skip reprocessing keys added in same run (no duplicate confirmation prompts)
  * detect both 'vault' and 'ANSIBLE_VAULT' as already encrypted

Refs: https://chatgpt.com/share/68aed780-ad4c-800f-877d-aa4c40a47755
2025-08-27 12:02:36 +02:00
d9980c0d8f feat(baserow): add one-time SSO warning JavaScript
- Introduced a generic sso_warning.js.j2 template under
  templates/roles/web-app/templates/javascripts/
- Included this template in web-app-baserow/templates/javascript.js.j2
- Added new variable js_application_name in
  roles/web-app-baserow/vars/main.yml to make the warning
  application-specific
- Implemented cookie-based logic so the warning is only shown once
  per user (default: 365 days)

Reference: https://chatgpt.com/share/68aecdae-82d0-800f-b05e-f2cb680664f1
2025-08-27 11:19:59 +02:00
35206aaafd Solved undeclared docker compose variable bug 2025-08-26 22:35:41 +02:00
942e8c9c12 Updated Baserow CSP and variables for the new Infinito.Nexus structure 2025-08-26 22:20:31 +02:00
97f4045c68 Keycloak: align client attributes with realm dictionary
- Extended kc_force_attrs in tasks/main.yml to source 'publicClient',
  'serviceAccountsEnabled' and 'frontchannelLogout' directly from
  KEYCLOAK_DICTIONARY_REALM for consistency with import definitions.
- Updated default.json.j2 import template to set 'publicClient' to true.
- Public client mode is required so the frontend API of role 'web-app-desktop'
  can handle login/logout flows without client secret.

Ref: https://chatgpt.com/share/68ae0060-4fac-800f-9f02-22592a4087d3
2025-08-26 21:22:27 +02:00
c182ecf516 Refactor and cleanup OIDC, desktop, and web-app roles
- Improved OIDC variable definitions (12_oidc.yml)
- Added account/security/profile URLs
- Restructured web-app-desktop tasks and JS handling
- Introduced oidc.js and iframe.js with runtime loader
- Fixed nginx.conf, LDAP, and healthcheck templates spacing
- Improved Lua injection for CSP and snippets
- Fixed typos (WordPress, receive, etc.)
- Added silent-check-sso nginx location

Conversation: https://chatgpt.com/share/68ae0060-4fac-800f-9f02-22592a4087d3
2025-08-26 20:44:05 +02:00
ce033c370a Removed waiting for other services; otherwise it ends up stuck waiting for the hard restart service 2025-08-26 19:23:47 +02:00
a0477ad54c Switched OnFailure with StartPost 2025-08-26 19:10:41 +02:00
35c3681f55 sys-daemon & sys-service: align timeout handling
- Updated sys-daemon defaults:
  * Increased SYSTEMD_DEFAULT_TIMEOUT_START to 24h
  * Improved inline comments for clarity
- Changed sys-service vars:
  * Removed hardcoded 60s TimeoutStartSec
  * Now empty by default → inherits manager defaults from sys-daemon

See: https://chatgpt.com/share/68ade432-67f8-800f-b6c2-b8f87764479b
2025-08-26 18:48:45 +02:00
af97e71976 Fix: correct Docker Go template syntax in sys-ctl-rpr-docker-soft script
Replaced over-escaped '{{{{.Names}}}}' with proper '{{.Names}}'
in docker ps commands. This resolves 'failed to parse template:
unexpected "{" in command' errors during unhealthy/exited
container detection.

Reference: https://chatgpt.com/share/68addfd9-fa78-800f-abda-49161699e673
2025-08-26 18:25:25 +02:00
19a51fd718 Solved linebreak bug 2025-08-26 17:13:29 +02:00
b916173422 Renamed web-app-port-ui to web-app-desktop 2025-08-26 11:35:22 +02:00
9756a0f75f Extend repair scripts with env-file support and unit tests
- Added detect_env_file() to both sys-ctl-rpr-docker-soft and sys-ctl-rpr-docker-hard
  * prefer .env, fallback to .env/env
  * append --env-file parameter automatically
- Refactored soft script to use compose_cmd() for consistent command building
- Adjusted error recovery path in soft script to also respect env-file
- Extended unit tests for soft script to cover env-file priority and restart commands
- Added new unit tests for hard script verifying env-file priority, cwd handling,
  and --only filter logic

Ref: https://chatgpt.com/share/68ad7b30-7510-800f-8172-56f03a2f40f5
2025-08-26 11:15:59 +02:00
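A small Python sketch of the env-file detection added to both repair scripts, following the documented priority (.env first, then .env/env) and appending the result as --env-file; the function names come from the commit, the compose command assembly is illustrative.

```python
import os

def detect_env_file(project_dir: str):
    """Prefer '<dir>/.env'; fall back to '<dir>/.env/env'; return None when neither exists."""
    for candidate in (os.path.join(project_dir, ".env"),
                      os.path.join(project_dir, ".env", "env")):
        if os.path.isfile(candidate):
            return candidate
    return None

def compose_cmd(project_dir: str, *args: str):
    """Build a 'docker compose' command, appending --env-file when one was detected."""
    cmd = ["docker", "compose"]
    env_file = detect_env_file(project_dir)
    if env_file:
        cmd += ["--env-file", env_file]
    return cmd + list(args)

print(compose_cmd("/opt/docker/nextcloud", "restart"))
```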
e417bc19bd Refactor sys-ctl-rpr-docker-soft role to use standalone Python script with argparse and unittests
- Replace Jinja2 template (script.py.j2) with raw Python script (files/script.py)
- Add argparse options: --manipulation, --manipulation-string, --timeout
- Implement timeout handling in wait_while_manipulation_running
- Update systemd ExecStart/ExecStartPre handling in tasks/01_core.yml
- Remove obsolete systemctl.service.j2 and script.py.j2 templates
- Add unittest suite under tests/unit/roles/sys-ctl-rpr-docker-soft/files/test_script.py
- Mock docker and systemctl calls in tests for safe execution

Reference: ChatGPT conversation (see https://chatgpt.com/share/68ad770b-ea84-800f-b378-559cb61fc43a)
2025-08-26 10:58:17 +02:00
7ad14673e1 sys-service: add ExecStartPost support and adjust health/repair roles
- extended generic systemctl template to support ExecStartPost
- health-docker-volumes: run main script with whitelist, trigger both compose alarm and cleanup on failure
- repair-docker-hard: added ExecStartPre lock, ExecStart, and ExecStartPost to trigger compose alarm always, plus cleanup on failure
- removed obsolete role-specific systemctl.service.j2 templates
- improved consistency across vars and defaults

See: https://chatgpt.com/share/68ad6cb8-c164-800f-96b6-a45c6c7779b3
2025-08-26 10:15:35 +02:00
eb781dbf8b fix(keycloak/ldap): make userObjectClasses JSON-safe and exclude posixAccount
- Render userObjectClasses via `tojson` (and trim) to avoid invalid control
  characters and ensure valid realm import parsing.
- Introduce KEYCLOAK_LDAP_USER_OBJECT_CLASSES in vars; exclude `posixAccount`
  for Keycloak’s LDAP config while keeping it for Ansible-managed UNIX users.
- Update UserStorageProvider template to use the new variable.

Rationale:
Keycloak must not require `posixAccount` on every LDAP user. We keep
`posixAccount` structural for Ansible provisioning, but filter it out for
Keycloak to prevent sync/import errors on entries without POSIX attributes.

Touched:
- roles/web-app-keycloak/templates/import/components/org.keycloak.storage.UserStorageProvider.json.j2
- roles/web-app-keycloak/vars/main.yml

Refs: conversation https://chatgpt.com/share/68aa1ef0-3658-800f-bdf4-5b57131d03b4
2025-08-23 22:05:26 +02:00
6016da6f1f Optimized bbb variables 2025-08-23 19:21:07 +02:00
8b2f0ac47b refactor(web-app-espocrm): improve config patching and container vars
- Replace `ESPOCRM_NAME` with `ESPOCRM_CONTAINER` for clarity and consistency.
- Drop unused `ESPOCRM_CONFIG_FILE_PUBLIC`, rely only on `config-internal.php`.
- Make DB credential patching idempotent using `grep` + `sed` checks.
- Replace direct sed edits for maintenance/cron/cache with EspoCRM ConfigWriter.
- Add fallback execution as root if www-data user cannot write config.
- Clear EspoCRM cache only when config changes and in update mode.
- Remove obsolete OIDC scopes inline task (now handled via env/vars).
- Fix docker-compose template to use `ESPOCRM_CONTAINER`.

This refactor makes the EspoCRM role more robust, idempotent, and aligned
with EspoCRM’s official ConfigWriter mechanism.

See conversation: https://chatgpt.com/share/68a87820-12f8-800f-90d6-01ba97a1b279
2025-08-22 16:01:48 +02:00
9d6d64e11d Renamed espocrm data volume 2025-08-22 14:49:25 +02:00
f1a2967a37 Implemented sys-svc-cln-anon-volumes as a service so that it can be triggered after sys-ctl-rpr-docker-hard 2025-08-22 14:48:50 +02:00
95a2172fff Corrected link 2025-08-22 09:23:40 +02:00
dc3f4e05a8 sys-ctl-rpr-docker-hard: Refactor restart script with argparse & update systemd ExecStart
- Removed unused soft restart function and switched to argparse-based CLI.
- Added --only argument to selectively restart subdirectories.
- Updated systemctl service template to pass PATH_DOCKER_COMPOSE_INSTANCES as argument.
- Ensures service unit correctly invokes the Python script with target path.

See conversation: https://chatgpt.com/share/68a771d9-5fd8-800f-a410-08132699cc3a
2025-08-21 21:22:29 +02:00
e33944cda2 Solved service ignore parameter bugs 2025-08-21 21:04:21 +02:00
efa68cc1e0 sys-ctl: make service file generation deterministic and simplify ignore logic
- Added '| sort' to all service group lists and backup routine lists to ensure
  deterministic ordering and stable checksums across Ansible runs.
- Adjusted systemctl templates to use a single service variable
  ('SYS_SERVICE_BACKUP_RMT_2_LOC') instead of rejecting dynamic list entries,
  making the ignore logic simpler and more predictable.
- Fixed minor whitespace inconsistencies in Jinja templates to avoid
  unnecessary changes.

This change was made to prevent spurious 'changed' states in Ansible caused by
non-deterministic list order and to reduce complexity in service definitions.

See discussion: https://chatgpt.com/share/68a74c20-6300-800f-a44e-da43ae2f3dea
2025-08-21 18:43:17 +02:00
79e702a3ab web-svc-collabora: localize vars, adjust CSP, fix systemd perms; refactor role composition
- sys-service:
  - Set explicit ownership and permissions for generated unit files:
    owner=root, group=root, mode=0644. Prevents drift and makes idempotence
    predictable when handlers reload/refresh systemd.

- web-svc-collabora:
  - Move cmp-docker-proxy include into tasks/01_core.yml and run it
    before Nginx config generation. Use public: true only to initialize the
    proxy/compose context and docker_compose_flush_handlers: true to ensure
    timely handler execution.
  - Define role-local variables domain and http_port in vars/main.yml
    and use {{ domain }} for the Nginx server file path. These values MUST
    be defined locally because they cannot be reliably imported via
    public: true — other roles may override them later in the play, leading
    to leakage and nondeterministic behavior. Localizing avoids precedence
    conflicts without resorting to host-wide set_fact.
  - CSP adjusted: add server.security.flags.style-src.unsafe-inline: true
    to accommodate Collabora’s inline styles (requested as “csr” in notes).
  - Minor variable alignment/cleanup and TODO note for future refactor.

- Housekeeping:
  - Rename task title to reflect {{ domain }} usage.

Refs:
- Discussion and rationale in this chat https://chatgpt.com/share/68a731aa-d394-800f-9eb4-2499f45ed54b (2025-08-21, Europe/Berlin).
2025-08-21 16:48:37 +02:00
9180182d5b Optimized variables 2025-08-21 16:27:10 +02:00
535094d15d Added more update tasks for ESPOCRM config 2025-08-21 16:23:08 +02:00
658003f5b9 Added test user entry 2025-08-21 09:56:50 +02:00
3ff783df17 Updated mailu move docs 2025-08-21 09:49:36 +02:00
3df511aee9 Changed constructor order: emails need to be defined before users 2025-08-20 18:54:44 +02:00
c27d16322b Optimized variables 2025-08-20 18:17:13 +02:00
7a6e273ea4 In between commit, updated matrix and optimized mailu 2025-08-20 17:51:17 +02:00
384beae7c1 Added task to update default email settings 2025-08-20 16:41:53 +02:00
ad7e61e8b1 Set default buffer levels for the basic proxy conf, which are necessary for OIDC login 2025-08-20 15:56:32 +02:00
fa46523433 Update trusted domains for matomo 2025-08-20 15:35:08 +02:00
f4a380d802 Optimized alarm and system handlers 2025-08-20 15:17:04 +02:00
42d6c1799b sys-service: add systemd_directive filter and refactor service template
Introduced custom filter plugin to render optional systemd directives, refactored template to loop over directives, and adjusted default vars (TimeoutStartSec, RuntimeMaxSec handling).

Details: see ChatGPT conversation
https://chatgpt.com/share/68a5a730-6344-800f-b9a3-dc62d5902e9b
2025-08-20 12:46:07 +02:00
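A hedged guess at what the systemd_directive filter does, assuming it renders 'Key=value' only when the value is non-empty, so optional directives such as TimeoutStartSec or RuntimeMaxSec can be left blank and inherit the manager defaults; the real filter's signature may differ.

```python
def systemd_directive(value, key):
    """Render an optional systemd directive; empty or None values produce no line at all."""
    if value is None or str(value).strip() == "":
        return ""
    return f"{key}={value}"

# The unit template can then loop over a directive map and skip empty lines:
for key, value in {"TimeoutStartSec": "", "RuntimeMaxSec": "1h"}.items():
    line = systemd_directive(value, key)
    if line:
        print(line)          # only 'RuntimeMaxSec=1h' is emitted
```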
8608d89653 Implemented correct template for collabora 2025-08-20 09:07:33 +02:00
a4f39ac732 Renamed webserver roles to more speakable names 2025-08-20 08:54:17 +02:00
9cfb8f3a60 Different optimations for collabora 2025-08-20 08:34:12 +02:00
3e5344a46c Optimized Collabora CSP for Nextcloud 2025-08-20 07:03:02 +02:00
ec07d1a20b Added logic to start docker compose pull just once per directory 2025-08-20 07:02:27 +02:00
594d9417d1 handlers(docker): add once-per-directory docker compose pull with lockfile
- Introduced a new handler 'docker compose pull' that runs only once per
  {{ docker_compose.directories.instance }} directory by using a lock
  file under /run/ansible/compose-pull.
- Ensures idempotency by marking the task as changed only when a pull
  was actually executed.
- Restricted execution with 'when: MODE_UPDATE | bool'.
- Improves update workflow by avoiding redundant docker pulls during
  the same Ansible run.

Reference: ChatGPT discussion
https://chatgpt.com/share/68a55151-959c-800f-8b70-160ffe43e776
2025-08-20 06:42:49 +02:00
dc125e4843 Solved path bug 2025-08-20 06:18:52 +02:00
39a54294dd Moved update commands to nextcloud role 2025-08-20 06:07:33 +02:00
a57fe718de Optimized spacing 2025-08-20 05:49:35 +02:00
b6aec5fe33 Optimized features 2025-08-20 05:39:49 +02:00
de07d890dc Solved 'sys-ctl-bkp-docker-2-loc' bug 2025-08-20 05:25:24 +02:00
e27f355697 Solved tabulator bug 2025-08-20 05:02:16 +02:00
790762d397 Renamed some web apps to web services 2025-08-20 05:00:24 +02:00
4ce681e643 Add integration test: ensure roles including 'sys-service' define system_service_id
This test scans all roles for tasks including:
  - include_role:
      name: sys-service

If present, the role must define a non-empty 'system_service_id' in vars/main.yml.
Helps enforce consistency and prevent misconfiguration.

Ref: https://chatgpt.com/share/68a536e5-c384-800f-937a-f9d91249950c
2025-08-20 04:46:27 +02:00
55cf3d0d8e Solved unit performance tests 2025-08-20 04:35:46 +02:00
2708b67751 Optimized webserver on failure 2025-08-20 04:12:42 +02:00
f477ee3731 Deactivated redis, moved version to correct place for web-svc-collabora 2025-08-20 03:40:37 +02:00
6d70f78989 fix(domain-filters): support dependency expansion via seed param
- Added missing 'Iterable' import in 'canonical_domains_map' to avoid NameError.
- Introduced 'seed' parameter so the filter can start traversal from current play apps
  while still emitting canonical domains for discovered dependencies (e.g. web-svc-collabora).
- Updated 01_constructor.yml to pass full 'applications' and a clean 'seed' list
  (using dict2items → key) instead of '.keys()' method calls, fixing integration
  test error: 'reference to application keys is invalid'.

This resolves issues where collabora domains were missing and integration tests failed.

Ref: https://chatgpt.com/share/68a51f9b-3924-800f-a41b-803d8dd10397
2025-08-20 03:07:14 +02:00
b867a52471 Refactor and extend role dependency resolution:
- Introduced module_utils/role_dependency_resolver.py with full support for include_role, import_role, meta dependencies, and run_after.
- Refactored cli/build/tree.py to use RoleDependencyResolver (added toggles for include/import/dependencies/run_after).
- Extended filter_plugins/canonical_domains_map.py with optional 'recursive' mode (ignores run_after by design).
- Updated roles/web-app-nextcloud to properly include Collabora dependency.
- Added comprehensive unittests under tests/unit/module_utils for RoleDependencyResolver.

Ref: https://chatgpt.com/share/68a519c8-8e54-800f-83c0-be38546620d9
2025-08-20 02:42:07 +02:00
78ee3e3c64 Deactivated on_failure for telegram and email 2025-08-20 01:20:06 +02:00
d7ece2a8c3 Optimized message 2025-08-20 01:03:07 +02:00
3794aa87b0 Optimized spacing 2025-08-20 01:02:29 +02:00
4cf996b1bb Removed old collabora 2025-08-20 01:02:11 +02:00
79517b2fe9 Optimized spacing 2025-08-20 01:01:32 +02:00
a84ee1240a Optimized collabora name 2025-08-20 01:00:51 +02:00
7019b307c5 Optimized collabora draft 2025-08-20 01:00:20 +02:00
838a8fc7a1 Solved svc-opt-ssd-hdd path bug 2025-08-19 21:50:55 +02:00
95aba805c0 Removed variable which leads to bugs in other contexts 2025-08-19 20:50:08 +02:00
0856c340c7 Removed unnecessary logic 2025-08-19 20:35:02 +02:00
b90a2f6c87 sys-ctl-alm-{email,telegram}: unescape instance names before alerts
Use `systemd-escape --unescape` to restore human-readable unit identifiers in
Telegram and Email alerts. Also ensure Telegram messages are URL-encoded and
Email status checks try both raw and escaped forms for robustness.

Fixes issue where slashes were shown as dashes in notifications.

Context: see ChatGPT conversation
https://chatgpt.com/share/68a4c171-db08-800f-8399-7e07f237a441
2025-08-19 20:25:15 +02:00
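A sketch of the unescaping and URL-encoding steps described above, using the real systemd-escape CLI; the actual alert scripts are shell, so this Python wrapper is only illustrative.

```python
import subprocess
import urllib.parse

def unescape_unit_instance(instance: str) -> str:
    """Turn a systemd-escaped instance name back into its human-readable form."""
    return subprocess.run(
        ["systemd-escape", "--unescape", instance],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

def telegram_safe(message: str) -> str:
    """URL-encode the message before it is passed to the Telegram API."""
    return urllib.parse.quote(message, safe="")

name = unescape_unit_instance("opt-docker-nextcloud")   # -> 'opt/docker/nextcloud'
print(telegram_safe(f"Service failed: {name}"))
```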
98e045196b Removed cleanup service lock 2025-08-19 19:06:58 +02:00
a10dd402b8 refactor: improve service handling and introduce MODE_ASSERT
- Improved get_service_name filter plugin (clearer suffix handling, consistent var names).
- Added MODE_ASSERT flag to optionally execute validation/assertion tasks.
- Fixed systemd unit handling: consistent use of %I instead of %i, correct escaping of instance names.
- Unified on_failure behavior and alarm composer scripts.
- Cleaned up redundant logging, handlers, and debug config.
- Strengthened sys-service template resolution with assert (only active when MODE_ASSERT).
- Simplified timer and suffix handling with get_service_name filter.
- Hardened sensitive tasks with no_log.
- Added conditional asserts across roles (Keycloak, DNS, Mailu, Discourse, etc.).

These changes improve consistency, safety, and validation across the automation stack.

Conversation: https://chatgpt.com/share/68a4ae28-483c-800f-b2f7-f64c7124c274
2025-08-19 19:02:52 +02:00
6e538eabc8 Enhance tree builder: detect include_role dependencies from tasks/*.yml
- Added logic to scan each role’s tasks/*.yml files for include_role usage
- Supports:
  * loop/with_items with literal strings → adds each role
  * patterns with variables inside literals (e.g. svc-db-{{database_type}}) → expanded to glob and matched
  * pure variable-only names ({{var}}) → ignored
  * pure literal names → added directly
- Merges discovered dependencies under graphs["dependencies"]["include_role"]
- Added dedicated unit test covering looped includes, glob patterns, pure literals, and ignoring pure variables

See ChatGPT conversation (https://chatgpt.com/share/68a4ace0-7268-800f-bd32-b475c5c9ba1d) for context.
2025-08-19 19:00:03 +02:00
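A rough sketch of the include_role scan, assuming plain YAML task files and the rules listed in the commit (literal names added, variable-only names ignored, mixed literals expanded to a glob); the real implementation also handles loop/with_items and merges results into graphs["dependencies"]["include_role"].

```python
import glob
import re
import yaml  # PyYAML

VAR_ONLY = re.compile(r"^\s*\{\{[^}]+\}\}\s*$")

def include_role_deps(task_file: str, roles_path: str = "roles") -> set:
    """Collect role names referenced via include_role in one tasks/*.yml file."""
    deps = set()
    with open(task_file, encoding="utf-8") as fh:
        tasks = yaml.safe_load(fh) or []
    for task in tasks:
        if not isinstance(task, dict):
            continue
        spec = task.get("include_role") or task.get("ansible.builtin.include_role")
        if not isinstance(spec, dict):
            continue
        name = spec.get("name", "")
        if not name or VAR_ONLY.match(name):
            continue                             # pure '{{ var }}' names are ignored
        if "{{" in name:                         # e.g. svc-db-{{ database_type }}
            pattern = re.sub(r"\{\{[^}]+\}\}", "*", name)
            deps.update(p.split("/")[-1] for p in glob.glob(f"{roles_path}/{pattern}"))
        else:
            deps.add(name)                       # pure literal names are added directly
    return deps
```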
82cc24a7f5 Added reset condition for openresty 2025-08-19 17:48:02 +02:00
26b392ea76 refactor!: replace sys-systemctl with sys-service, add sys-daemon, and rename systemctl_* → system_service_* across repo
- Swap role includes: sys-systemctl → sys-service in all roles
- Rename variables everywhere: systemctl_* → system_service_* (incl. systemctl_id → system_service_id)
- Templates: ExecStart now uses {{ system_service_script_exec }}; add optional RuntimeMaxSec via SYS_SERVICE_DEFAULT_RUNTIME
- Move SYS_SERVICE defaults into roles/sys-service/defaults (remove SYS_SERVICE_ALL_ENABLED & SYS_SERVICE_DEFAULT_STATE from group_vars/07_services.yml)
- Tidy group_vars/all/08_timer.yml formatting
- Introduce roles/sys-daemon:
  - default manager timeouts (timeouts.conf)
  - optional purge of /etc/systemd/system.conf.d
  - validation via systemd-analyze verify
  - handlers for daemon-reload & daemon-reexec
- Refactor sys-timer to system_service_* variables (docs and templates updated)
- Move filter_plugins/filetype.py under sys-service
- Update meta/README to point to official systemd docs
- Touch many roles (backup/cleanup/health/repair/certs/nginx/csp/wireguard/ssd-hdd/keyboard/update-docker/alarm compose/email/telegram/etc.) to new naming

BREAKING CHANGE:
- Role path/name change: use `sys-service` instead of `sys-systemctl`
- All `systemctl_*` vars are now `system_service_*` (e.g., on_calendar, state, timer_enabled, script_exec, id)
- If you have custom templates, adopt RuntimeMaxSec and new variable names

Chat context: https://chatgpt.com/share/68a47568-312c-800f-af3f-e98575446327
2025-08-19 15:00:44 +02:00
b49fdc509e Refactor alarm compose service and systemctl templates
- Fixed bug where not both alarm services (email + telegram) were triggered.
- Removed direct OnFailure references for email and telegram,
  now handled by unified compose service.
- Introduced 01_core.yml in sys-ctl-alm-compose to structure
  role execution (subservices → core service → test run).
- Added configurable variables SYSTEMCTL_ALARM_COMPOSER_SUBSERVICES
  and SYSTEMCTL_ALARM_COMPOSER_DUMMY_MESSAGE.
- Replaced dedicated @.service template with generic systemctl template
  using systemctl_tpl_* variables for flexibility.
- Updated script.sh.j2 to collect exit codes and print clear errors.
- Fixed typos and streamlined vars in sys-systemctl.

See conversation: https://chatgpt.com/share/68a46172-7c3c-800f-a69c-0cb9edd6839f
2025-08-19 13:35:39 +02:00
b1e8339283 Added /bin/systemctl start {{ SYS_SERVICE_CLEANUP_BACKUPS_OLD }} 2025-08-19 12:56:25 +02:00
f5db786878 Restart and activate all services and timer when in debug mode 2025-08-19 12:20:19 +02:00
7ef20474a0 Renamed sys-ctl-cln-backups to sys-ctl-cln-bkps 2025-08-19 12:15:33 +02:00
83b9f697ab Encapsulated again to see output in journald 2025-08-19 11:25:07 +02:00
dd7b5e844c Removed /bin/sh -c encapsulation and fixed wrong --ignore names 2025-08-19 11:15:36 +02:00
da01305cac Replaced {{ systemctl_id | get_service_script_path(...) }} with systemctl_script_exec 2025-08-19 10:56:46 +02:00
1082caddae refactor(sys-ctl-alm-compose, sys-timer-cln-bkps):
- update alarm compose unit to run email/telegram notifiers independently via multiple ExecStart lines
- ensure cleanup backup dependencies are included before timer setup with handler flush
conversation: https://chatgpt.com/share/68a43429-c0cc-800f-9cc9-9a5ae258dc50
2025-08-19 10:22:38 +02:00
242347878d Moved email host and domain SPOT to SYSTEM_EMAIL constant 2025-08-19 10:00:35 +02:00
f46aabe884 Moved healthcheck to the end so that it is setup after email configuration 2025-08-19 09:46:12 +02:00
d3cc187c3b Made System Email Variables UPPER 2025-08-19 09:34:18 +02:00
0a4b9bc8e4 Generated service names with function 2025-08-19 02:01:15 +02:00
2887e54cca Solved path bug 2025-08-19 01:48:43 +02:00
630fd43382 refactor(services): unify service/timer runtime control and cleanup handling
- Introduce SYS_SERVICE_ALL_ENABLED and SYS_TIMER_ALL_ENABLED runtime flags
- Add SYS_SERVICE_DEFAULT_STATE for consistent default handling
- Ensure all on-failure service names use lowercase software_name
- Load sys-svc-cln-anon-volumes role during Docker cleanup
- Allow forced service refresh when SYS_SERVICE_ALL_ENABLED is true
- Replace ACTIVATE_ALL_TIMERS with SYS_TIMER_ALL_ENABLED
- Use SYS_SERVICE_DEFAULT_STATE in sys-systemctl vars
- Remove redundant MIG build job fail check

Related to service/timer process control refactoring.
2025-08-19 01:27:37 +02:00
3114a7b586 solved missing vars bug 2025-08-19 01:01:09 +02:00
34d771266a Solved path bug 2025-08-19 00:46:47 +02:00
73b7d2728e Solved timer bug 2025-08-19 00:33:00 +02:00
fc4df980c5 Solved empty entry bug 2025-08-18 23:54:23 +02:00
763b43b44c Implemented dynamic script path to sys-ctl-cln-disc-space 2025-08-18 23:50:28 +02:00
db860e6ae3 Adapted load order 2025-08-18 23:47:14 +02:00
2ba486902f Deactivated file copying for sys-ctl-cln-faild-bkps 2025-08-18 23:34:06 +02:00
7848226f83 Optimized service configuration for alerts 2025-08-18 23:28:41 +02:00
185f37af52 Refactor systemctl service handling with @ support
- Unified variable naming: system_service_id → systemctl_id
- Added automatic removal of trailing '@' for role directory resolution
- Improved first_found search: prefer target role, fallback to sys-systemctl defaults
- Split template resolution logic to avoid undefined variable errors
- Added assertion in sys-timer to forbid '@' in systemctl_id
- Corrected default systemctl.service.j2 template description
- Cleaned up path handling and script directory generation

Context: conversation about fixing template resolution and @ handling
https://chatgpt.com/share/68a39994-1bb0-800f-a219-109e643c3efb
2025-08-18 23:22:46 +02:00
b9461026a6 refactor: improve get_service_name suffix handling and handler usage
- Updated filter_plugins/get_service_name.py:
  * Default suffix handling: auto-select .service (no '@') or .timer (with '@')
  * Explicit False disables suffix entirely
  * Explicit string suffix still supported
- Updated sys-systemctl handler to use new filter instead of SYS_SERVICE_SUFFIX
- Extended unit tests to cover new suffix behavior

Ref: https://chat.openai.com/share/8c2de9e6-daa0-44dd-ae13-d7a7d8d8b6d9
2025-08-18 22:36:31 +02:00
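A compact sketch of the suffix rules as the commit describes them, assuming the filter also lowercases the software name (per the earlier get_service_name commit); suffix=False disables the suffix, an explicit string overrides it, and the default depends on whether the unit name ends with '@'.

```python
def get_service_name(software_name: str, suffix=None) -> str:
    """Build a systemd unit name: lowercase, keep a trailing '@' for templated units,
    default to '.service' (or '.timer' for '@' units), and allow suffix=False for none."""
    base = software_name.lower()
    if suffix is False:
        return base
    if isinstance(suffix, str):
        return base + suffix
    return base + (".timer" if base.endswith("@") else ".service")

print(get_service_name("SYS-CTL-CLN-BKPS"))          # sys-ctl-cln-bkps.service
print(get_service_name("sys-ctl-alm-compose@"))      # sys-ctl-alm-compose@.timer
print(get_service_name("sys-ctl-bkp-docker-2-loc", suffix=False))
```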
bf63e01b98 refactor(systemd-services): migrate SYS_SERVICE_SUFFIX usage to get_service_name filter
Replaced all hardcoded service name concatenations with the new get_service_name filter.
This ensures consistency, proper lowercase formatting, and correct handling of '@' suffixed units.

Added unittests for the filter (normal, custom suffix, '@'-units, and lowercase normalization).

Context: see ChatGPT discussion https://chatgpt.com/share/68a38beb-b9bc-800f-b7ed-cdd2b64b2604
2025-08-18 22:24:33 +02:00
4a600ac531 Added get_service_name 2025-08-18 22:10:52 +02:00
dc0bb555c1 Added another group_names validation 2025-08-18 21:37:07 +02:00
5adce08aea Optimized variable names 2025-08-18 21:26:46 +02:00
2569abc0be Refactor systemctl services and timers
- Unified service templates into generic systemctl templates
- Introduced reusable filter plugins for script path handling
- Updated path variables and service/timer definitions
- Migrated roles (backup, cleanup, repair, etc.) to use systemctl role
- Added sys-daemon role for core systemd cleanup
- Simplified timer handling via sys-timer role

Note: This is a large refactor and some errors may still exist. Further testing and adjustments will be needed.
2025-08-18 21:22:16 +02:00
3a839cfe37 Refactor systemctl services and categories due to alarm bugs
This commit restructures systemctl service definitions and category mappings.

Motivation: Alarm-related bugs revealed inconsistencies in service and role handling.

Preparation step: lays the groundwork for fixing the alarm issues by aligning categories, roles, and service templates.
2025-08-18 13:35:43 +02:00
29f50da226 Add custom Ansible filter plugin get_category_entries
This commit introduces a new Ansible filter plugin named
'get_category_entries', which returns all role names under the
roles/ directory that start with a given prefix.

Additionally, unit tests (unittest framework) have been added under
tests/unit/filterplugins/ to ensure correct behavior, including:

- Returns empty list when roles/ directory is missing
- Correctly filters and sorts by prefix
- Ignores non-directory entries
- Supports custom roles_path argument
- Returns all roles when prefix is empty

Reference: https://chatgpt.com/share/68a2f1ab-1fe8-800f-b22a-28c1c95802c2
2025-08-18 11:27:26 +02:00
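A minimal sketch matching the behaviors the unit tests above list; the filter name is from the commit, the implementation details are assumptions.

```python
import os

def get_category_entries(prefix: str, roles_path: str = "roles"):
    """Return sorted role directory names under roles_path that start with prefix."""
    if not os.path.isdir(roles_path):
        return []                      # missing roles/ directory -> empty list
    return sorted(
        entry for entry in os.listdir(roles_path)
        if entry.startswith(prefix) and os.path.isdir(os.path.join(roles_path, entry))
    )

print(get_category_entries("web-app-"))   # e.g. ['web-app-confluence', 'web-app-pretix', ...]
print(get_category_entries(""))           # empty prefix returns all roles
```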
1453 changed files with 22574 additions and 9918 deletions

View File

@@ -21,12 +21,12 @@ jobs:
- name: Clean build artifacts
run: |
docker run --rm infinito:latest make clean
docker run --rm infinito:latest infinito make clean
- name: Generate project outputs
run: |
docker run --rm infinito:latest make build
docker run --rm infinito:latest infinito make build
- name: Run tests
run: |
docker run --rm infinito:latest make test
docker run --rm infinito:latest infinito make test

View File

@@ -59,11 +59,4 @@ RUN INFINITO_PATH=$(pkgmgr path infinito) && \
ln -sf "$INFINITO_PATH"/main.py /usr/local/bin/infinito && \
chmod +x /usr/local/bin/infinito
# 10) Run integration tests
# This needed to be deactivated because it doesn't work with the GitHub workflow
#RUN INFINITO_PATH=$(pkgmgr path infinito) && \
# cd "$INFINITO_PATH" && \
# make test
ENTRYPOINT ["infinito"]
CMD ["--help"]
CMD sh -c "infinito --help && exec tail -f /dev/null"

View File

@@ -73,7 +73,7 @@ messy-test:
@echo "🧪 Running Python tests…"
PYTHONPATH=. python -m unittest discover -s tests
@echo "📑 Checking Ansible syntax…"
ansible-playbook playbook.yml --syntax-check
ansible-playbook -i localhost, -c local $(foreach f,$(wildcard group_vars/all/*.yml),-e @$(f)) playbook.yml --syntax-check
install: build
@echo "⚙️ Install complete."

View File

View File

@@ -1,5 +1,6 @@
[defaults]
# --- Performance & Behavior ---
pipelining = True
forks = 25
strategy = linear
gathering = smart
@@ -14,19 +15,14 @@ stdout_callback = yaml
callbacks_enabled = profile_tasks,timer
# --- Plugin paths ---
filter_plugins = ./filter_plugins
filter_plugins = ./filter_plugins
lookup_plugins = ./lookup_plugins
module_utils = ./module_utils
[ssh_connection]
# Multiplexing: safer socket path in HOME instead of /tmp
ssh_args = -o ControlMaster=auto -o ControlPersist=20s -o ControlPath=~/.ssh/ansible-%h-%p-%r \
-o ServerAliveInterval=15 -o ServerAliveCountMax=3 -o StrictHostKeyChecking=accept-new \
-o PreferredAuthentications=publickey,password,keyboard-interactive
# Pipelining boosts speed; works fine if sudoers does not enforce "requiretty"
ssh_args = -o ControlMaster=auto -o ControlPersist=20s -o ControlPath=~/.ssh/ansible-%h-%p-%r -o ServerAliveInterval=15 -o ServerAliveCountMax=3 -o StrictHostKeyChecking=accept-new -o PreferredAuthentications=publickey,password,keyboard-interactive
pipelining = True
scp_if_ssh = smart
transfer_method = smart
[persistent_connection]
connect_timeout = 30

View File

@@ -1,3 +0,0 @@
# Todo
- Test this script. It's just a draft. Checkout https://chatgpt.com/c/681d9e2b-7b28-800f-aef8-4f1427e9021d
- Solve bugs in show_vault_variables.py

View File

@@ -83,6 +83,13 @@ class DefaultsGenerator:
print(f"Error during rendering: {e}", file=sys.stderr)
sys.exit(1)
# Sort applications by application key for stable output
apps = result.get("defaults_applications", {})
if isinstance(apps, dict) and apps:
result["defaults_applications"] = {
k: apps[k] for k in sorted(apps.keys())
}
# Write output
self.output_file.parent.mkdir(parents=True, exist_ok=True)
with self.output_file.open("w", encoding="utf-8") as f:

View File

@@ -189,7 +189,7 @@ def parse_args():
def main():
args = parse_args()
primary_domain = '{{ PRIMARY_DOMAIN }}'
primary_domain = '{{ SYSTEM_EMAIL.DOMAIN }}'
become_pwd = '{{ lookup("password", "/dev/null length=42 chars=ascii_letters,digits") }}'
try:
@@ -220,6 +220,10 @@ def main():
print(f"Error building user entries: {e}", file=sys.stderr)
sys.exit(1)
# Sort users by key for deterministic output
if isinstance(users, dict) and users:
users = OrderedDict(sorted(users.items()))
# Convert OrderedDict into plain dict for YAML
default_users = {'default_users': users}
plain_data = dictify(default_users)

View File

@@ -102,8 +102,10 @@ def find_cycle(roles):
def topological_sort(graph, in_degree, roles=None):
"""
Perform topological sort on the dependency graph.
If `roles` is provided, on error it will include detailed debug info.
If a cycle is detected, raise an Exception with detailed debug info.
"""
from collections import deque
queue = deque([r for r, d in in_degree.items() if d == 0])
sorted_roles = []
local_in = dict(in_degree)
@@ -117,28 +119,26 @@ def topological_sort(graph, in_degree, roles=None):
queue.append(nbr)
if len(sorted_roles) != len(in_degree):
# Something went wrong: likely a cycle
cycle = find_cycle(roles or {})
if roles is not None:
if cycle:
header = f"Circular dependency detected: {' -> '.join(cycle)}"
else:
header = "Circular dependency detected among the roles!"
unsorted = [r for r in in_degree if r not in sorted_roles]
unsorted = [r for r in in_degree if r not in sorted_roles]
detail_lines = ["Unsorted roles and their dependencies:"]
header = "❌ Dependency resolution failed"
if cycle:
reason = f"Circular dependency detected: {' -> '.join(cycle)}"
else:
reason = "Unresolved dependencies among roles (possible cycle or missing role)."
details = []
if unsorted:
details.append("Unsorted roles and their declared run_after dependencies:")
for r in unsorted:
deps = roles.get(r, {}).get('run_after', [])
detail_lines.append(f" - {r} depends on {deps!r}")
details.append(f" - {r} depends on {deps!r}")
detail_lines.append("Full dependency graph:")
detail_lines.append(f" {dict(graph)!r}")
graph_repr = f"Full dependency graph: {dict(graph)!r}"
raise Exception("\n".join([header] + detail_lines))
else:
if cycle:
raise Exception(f"Circular dependency detected: {' -> '.join(cycle)}")
else:
raise Exception("Circular dependency detected among the roles!")
raise Exception("\n".join([header, reason] + details + [graph_repr]))
return sorted_roles
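
For context, a minimal, self-contained sketch of the same pattern (Kahn's algorithm plus a readable error when a cycle blocks the sort); names and message wording are illustrative, not the repository's API:

from collections import deque

def topo_sort_or_fail(graph):
    """graph: {node: [nodes it must run before]}; raises with debug detail if a cycle remains."""
    in_degree = {n: 0 for n in graph}
    for deps in graph.values():
        for d in deps:
            in_degree[d] = in_degree.get(d, 0) + 1
    queue = deque(n for n, d in in_degree.items() if d == 0)
    ordered = []
    while queue:
        n = queue.popleft()
        ordered.append(n)
        for nbr in graph.get(n, []):
            in_degree[nbr] -= 1
            if in_degree[nbr] == 0:
                queue.append(nbr)
    if len(ordered) != len(in_degree):
        unsorted = [n for n in in_degree if n not in ordered]
        raise Exception("\n".join([
            "❌ Dependency resolution failed",
            "Unresolved dependencies among roles (possible cycle or missing role).",
            f"Unsorted: {unsorted!r}",
            f"Full dependency graph: {graph!r}",
        ]))
    return ordered

# topo_sort_or_fail({"a": ["b"], "b": ["a"]}) raises and lists both roles as unsorted.
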

View File

@@ -5,10 +5,10 @@ import json
from typing import Dict, Any
from cli.build.graph import build_mappings, output_graph
from module_utils.role_dependency_resolver import RoleDependencyResolver
def find_roles(roles_dir: str):
"""Yield (role_name, role_path) for every subfolder in roles_dir."""
for entry in os.listdir(roles_dir):
path = os.path.join(roles_dir, entry)
if os.path.isdir(path):
@@ -16,46 +16,31 @@ def find_roles(roles_dir: str):
def main():
# default roles dir is ../../roles relative to this script
script_dir = os.path.dirname(os.path.abspath(__file__))
default_roles_dir = os.path.abspath(os.path.join(script_dir, '..', '..', 'roles'))
default_roles_dir = os.path.abspath(os.path.join(script_dir, "..", "..", "roles"))
parser = argparse.ArgumentParser(
description="Generate all graphs for each role and write meta/tree.json"
)
parser.add_argument(
'-d', '--role_dir',
default=default_roles_dir,
help=f"Path to roles directory (default: {default_roles_dir})"
)
parser.add_argument(
'-D', '--depth',
type=int,
default=0,
help="Max recursion depth (>0) or <=0 to stop on cycle"
)
parser.add_argument(
'-o', '--output',
choices=['yaml', 'json', 'console'],
default='json',
help="Output format"
)
parser.add_argument(
'-p', '--preview',
action='store_true',
help="Preview graphs to console instead of writing files"
)
parser.add_argument(
'-s', '--shadow-folder',
type=str,
default=None,
help="If set, writes tree.json to this shadow folder instead of the role's actual meta/ folder"
)
parser.add_argument(
'-v', '--verbose',
action='store_true',
help="Enable verbose logging"
)
parser.add_argument("-d", "--role_dir", default=default_roles_dir,
help=f"Path to roles directory (default: {default_roles_dir})")
parser.add_argument("-D", "--depth", type=int, default=0,
help="Max recursion depth (>0) or <=0 to stop on cycle")
parser.add_argument("-o", "--output", choices=["yaml", "json", "console"],
default="json", help="Output format")
parser.add_argument("-p", "--preview", action="store_true",
help="Preview graphs to console instead of writing files")
parser.add_argument("-s", "--shadow-folder", type=str, default=None,
help="If set, writes tree.json to this shadow folder instead of the role's actual meta/ folder")
parser.add_argument("-v", "--verbose", action="store_true", help="Enable verbose logging")
# Toggles
parser.add_argument("--no-include-role", action="store_true", help="Do not scan include_role")
parser.add_argument("--no-import-role", action="store_true", help="Do not scan import_role")
parser.add_argument("--no-dependencies", action="store_true", help="Do not read meta/main.yml dependencies")
parser.add_argument("--no-run-after", action="store_true",
help="Do not read galaxy_info.run_after from meta/main.yml")
args = parser.parse_args()
if args.verbose:
@@ -65,6 +50,8 @@ def main():
print(f"Preview mode: {args.preview}")
print(f"Shadow folder: {args.shadow_folder}")
resolver = RoleDependencyResolver(args.role_dir)
for role_name, role_path in find_roles(args.role_dir):
if args.verbose:
print(f"Processing role: {role_name}")
@@ -75,24 +62,43 @@ def main():
max_depth=args.depth
)
# Direct deps (depth=1) captured separately for the buckets
inc_roles, imp_roles = resolver._scan_tasks(role_path)
meta_deps = resolver._extract_meta_dependencies(role_path)
run_after = set()
if not args.no_run_after:
run_after = resolver._extract_meta_run_after(role_path)
if any([not args.no_include_role and inc_roles,
not args.no_import_role and imp_roles,
not args.no_dependencies and meta_deps,
not args.no_run_after and run_after]):
deps_root = graphs.setdefault("dependencies", {})
if not args.no_include_role and inc_roles:
deps_root["include_role"] = sorted(inc_roles)
if not args.no_import_role and imp_roles:
deps_root["import_role"] = sorted(imp_roles)
if not args.no_dependencies and meta_deps:
deps_root["dependencies"] = sorted(meta_deps)
if not args.no_run_after and run_after:
deps_root["run_after"] = sorted(run_after)
graphs["dependencies"] = deps_root
if args.preview:
for key, data in graphs.items():
if args.verbose:
print(f"Previewing graph '{key}' for role '{role_name}'")
output_graph(data, 'console', role_name, key)
output_graph(data, "console", role_name, key)
else:
# Decide on output folder
if args.shadow_folder:
tree_file = os.path.join(
args.shadow_folder, role_name, 'meta', 'tree.json'
)
tree_file = os.path.join(args.shadow_folder, role_name, "meta", "tree.json")
else:
tree_file = os.path.join(role_path, 'meta', 'tree.json')
tree_file = os.path.join(role_path, "meta", "tree.json")
os.makedirs(os.path.dirname(tree_file), exist_ok=True)
with open(tree_file, 'w') as f:
with open(tree_file, "w", encoding="utf-8") as f:
json.dump(graphs, f, indent=2)
print(f"Wrote {tree_file}")
if __name__ == '__main__':
if __name__ == "__main__":
main()

View File

@@ -1,14 +1,29 @@
#!/usr/bin/env python3
"""
Selectively add & vault NEW credentials in your inventory, preserving comments
and formatting. Existing values are left untouched unless --force is used.
Usage example:
infinito create credentials \
--role-path roles/web-app-akaunting \
--inventory-file host_vars/echoserver.yml \
--vault-password-file .pass/echoserver.txt \
--set credentials.database_password=mysecret
"""
import argparse
import subprocess
import sys
from pathlib import Path
import yaml
from typing import Dict, Any
from module_utils.manager.inventory import InventoryManager
from module_utils.handler.vault import VaultHandler, VaultScalar
from module_utils.handler.yaml import YamlHandler
from yaml.dumper import SafeDumper
from typing import Dict, Any, Union
from ruamel.yaml import YAML
from ruamel.yaml.comments import CommentedMap
from module_utils.manager.inventory import InventoryManager
from module_utils.handler.vault import VaultHandler # uses your existing handler
# ---------- helpers ----------
def ask_for_confirmation(key: str) -> bool:
"""Prompt the user for confirmation to overwrite an existing value."""
@@ -18,35 +33,117 @@ def ask_for_confirmation(key: str) -> bool:
return confirmation == 'y'
def main():
def ensure_map(node: CommentedMap, key: str) -> CommentedMap:
"""
Ensure node[key] exists and is a mapping (CommentedMap) for round-trip safety.
"""
if key not in node or not isinstance(node.get(key), CommentedMap):
node[key] = CommentedMap()
return node[key]
def _is_ruamel_vault(val: Any) -> bool:
"""Detect if a ruamel scalar already carries the !vault tag."""
try:
return getattr(val, 'tag', None) == '!vault'
except Exception:
return False
def _is_vault_encrypted(val: Any) -> bool:
"""
Detect if value is already a vault string or a ruamel !vault scalar.
Accept both '$ANSIBLE_VAULT' and '!vault' markers.
"""
if _is_ruamel_vault(val):
return True
if isinstance(val, str) and ("$ANSIBLE_VAULT" in val or "!vault" in val):
return True
return False
def _vault_body(text: str) -> str:
"""
Return only the vault body starting from the first line that contains
'$ANSIBLE_VAULT'. If not found, return the original text.
Also strips any leading '!vault |' header if present.
"""
lines = text.splitlines()
for i, ln in enumerate(lines):
if "$ANSIBLE_VAULT" in ln:
return "\n".join(lines[i:])
return text
def _make_vault_scalar_from_text(text: str) -> Any:
"""
Build a ruamel object representing a literal block scalar tagged with !vault
by parsing a tiny YAML snippet. This avoids depending on yaml_set_tag().
"""
body = _vault_body(text)
indented = " " + body.replace("\n", "\n ") # proper block scalar indentation
snippet = f"v: !vault |\n{indented}\n"
y = YAML(typ="rt")
return y.load(snippet)["v"]
def to_vault_block(vault_handler: VaultHandler, value: Union[str, Any], label: str) -> Any:
"""
Return a ruamel scalar tagged as !vault. If the input value is already
vault-encrypted (string contains $ANSIBLE_VAULT or is a !vault scalar), reuse/wrap.
Otherwise, encrypt plaintext via ansible-vault.
"""
# Already a ruamel !vault scalar → reuse
if _is_ruamel_vault(value):
return value
# Already an encrypted string (may include '!vault |' or just the header)
if isinstance(value, str) and ("$ANSIBLE_VAULT" in value or "!vault" in value):
return _make_vault_scalar_from_text(value)
# Plaintext → encrypt now
snippet = vault_handler.encrypt_string(str(value), label)
return _make_vault_scalar_from_text(snippet)
def parse_overrides(pairs: list[str]) -> Dict[str, str]:
"""
Parse --set key=value pairs into a dict.
Supports both 'credentials.key=val' and 'key=val' (short) forms.
"""
out: Dict[str, str] = {}
for pair in pairs:
k, v = pair.split("=", 1)
out[k.strip()] = v.strip()
return out
# ---------- main ----------
def main() -> int:
parser = argparse.ArgumentParser(
description="Selectively vault credentials + become-password in your inventory."
description="Selectively add & vault NEW credentials in your inventory, preserving comments/formatting."
)
parser.add_argument("--role-path", required=True, help="Path to your role")
parser.add_argument("--inventory-file", required=True, help="Host vars file to update")
parser.add_argument("--vault-password-file", required=True, help="Vault password file")
parser.add_argument(
"--role-path", required=True, help="Path to your role"
)
parser.add_argument(
"--inventory-file", required=True, help="Host vars file to update"
)
parser.add_argument(
"--vault-password-file", required=True, help="Vault password file"
)
parser.add_argument(
"--set", nargs="*", default=[], help="Override values key.subkey=VALUE"
"--set", nargs="*", default=[],
help="Override values key[.subkey]=VALUE (applied to NEW keys; with --force also to existing)"
)
parser.add_argument(
"-f", "--force", action="store_true",
help="Force overwrite without confirmation"
help="Allow overrides to replace existing values (will ask per key unless combined with --yes)"
)
parser.add_argument(
"-y", "--yes", action="store_true",
help="Non-interactive: assume 'yes' for all overwrite confirmations when --force is used"
)
args = parser.parse_args()
# Parse overrides
overrides = {
k.strip(): v.strip()
for pair in args.set for k, v in [pair.split("=", 1)]
}
overrides = parse_overrides(args.set)
# Initialize inventory manager
# Initialize inventory manager (provides schema + app_id + vault)
manager = InventoryManager(
role_path=Path(args.role_path),
inventory_path=Path(args.inventory_file),
@@ -54,62 +151,90 @@ def main():
overrides=overrides
)
# Load existing credentials to preserve
existing_apps = manager.inventory.get("applications", {})
existing_creds = {}
if manager.app_id in existing_apps:
existing_creds = existing_apps[manager.app_id].get("credentials", {}).copy()
# 1) Load existing inventory with ruamel (round-trip)
yaml_rt = YAML(typ="rt")
yaml_rt.preserve_quotes = True
# Apply schema (may generate defaults)
updated_inventory = manager.apply_schema()
with open(args.inventory_file, "r", encoding="utf-8") as f:
data = yaml_rt.load(f) # CommentedMap or None
if data is None:
data = CommentedMap()
# Restore existing database_password if present
apps = updated_inventory.setdefault("applications", {})
app_block = apps.setdefault(manager.app_id, {})
creds = app_block.setdefault("credentials", {})
if "database_password" in existing_creds:
creds["database_password"] = existing_creds["database_password"]
# 2) Get schema-applied structure (defaults etc.) for *non-destructive* merge
schema_inventory: Dict[str, Any] = manager.apply_schema()
# Store original plaintext values
original_plain = {key: str(val) for key, val in creds.items()}
# 3) Ensure structural path exists
apps = ensure_map(data, "applications")
app_block = ensure_map(apps, manager.app_id)
creds = ensure_map(app_block, "credentials")
for key, raw_val in list(creds.items()):
# Skip if already vaulted
if isinstance(raw_val, VaultScalar) or str(raw_val).lstrip().startswith("$ANSIBLE_VAULT"):
# 4) Determine defaults we could add
schema_apps = schema_inventory.get("applications", {})
schema_app_block = schema_apps.get(manager.app_id, {})
schema_creds = schema_app_block.get("credentials", {}) if isinstance(schema_app_block, dict) else {}
# 5) Add ONLY missing credential keys
newly_added_keys = set()
for key, default_val in schema_creds.items():
if key in creds:
# existing → do not touch (preserve plaintext/vault/formatting/comments)
continue
# Determine plaintext
plain = original_plain.get(key, "")
if key in overrides and (args.force or ask_for_confirmation(key)):
plain = overrides[key]
# Value to use for the new key
# Priority: --set exact key → default from schema → empty string
ov = overrides.get(f"credentials.{key}", None)
if ov is None:
ov = overrides.get(key, None)
# Encrypt the plaintext
encrypted = manager.vault_handler.encrypt_string(plain, key)
lines = encrypted.splitlines()
indent = len(lines[1]) - len(lines[1].lstrip())
body = "\n".join(line[indent:] for line in lines[1:])
creds[key] = VaultScalar(body)
# Vault top-level become password if present
if "ansible_become_password" in updated_inventory:
val = str(updated_inventory["ansible_become_password"])
if val.lstrip().startswith("$ANSIBLE_VAULT"):
updated_inventory["ansible_become_password"] = VaultScalar(val)
if ov is not None:
value_for_new_key: Union[str, Any] = ov
else:
snippet = manager.vault_handler.encrypt_string(
val, "ansible_become_password"
if _is_vault_encrypted(default_val):
# Schema already provides a vault value → take it as-is
creds[key] = to_vault_block(manager.vault_handler, default_val, key)
newly_added_keys.add(key)
continue
value_for_new_key = "" if default_val is None else str(default_val)
# Insert as !vault literal (encrypt if needed)
creds[key] = to_vault_block(manager.vault_handler, value_for_new_key, key)
newly_added_keys.add(key)
# 6) ansible_become_password: only add if missing;
# never rewrite an existing one unless --force (+ confirm/--yes) and override provided.
if "ansible_become_password" not in data:
val = overrides.get("ansible_become_password", None)
if val is not None:
data["ansible_become_password"] = to_vault_block(
manager.vault_handler, val, "ansible_become_password"
)
lines = snippet.splitlines()
indent = len(lines[1]) - len(lines[1].lstrip())
body = "\n".join(line[indent:] for line in lines[1:])
updated_inventory["ansible_become_password"] = VaultScalar(body)
else:
if args.force and "ansible_become_password" in overrides:
do_overwrite = args.yes or ask_for_confirmation("ansible_become_password")
if do_overwrite:
data["ansible_become_password"] = to_vault_block(
manager.vault_handler, overrides["ansible_become_password"], "ansible_become_password"
)
# Write back to file
# 7) Overrides for existing credential keys (only with --force)
if args.force:
for ov_key, ov_val in overrides.items():
# Accept both 'credentials.key' and bare 'key'
key = ov_key.split(".", 1)[1] if ov_key.startswith("credentials.") else ov_key
if key in creds:
# If we just added it in this run, don't ask again or rewrap
if key in newly_added_keys:
continue
if args.yes or ask_for_confirmation(key):
creds[key] = to_vault_block(manager.vault_handler, ov_val, key)
# 8) Write back with ruamel (preserve formatting & comments)
with open(args.inventory_file, "w", encoding="utf-8") as f:
yaml.dump(updated_inventory, f, sort_keys=False, Dumper=SafeDumper)
yaml_rt.dump(data, f)
print(f"Inventory selectively vaulted: {args.inventory_file}")
print(f"Added new credentials without touching existing formatting/comments: {args.inventory_file}")
return 0
if __name__ == "__main__":
main()
sys.exit(main())

View File

@@ -11,8 +11,8 @@ sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')
from module_utils.entity_name_utils import get_entity_name
# Paths to the group-vars files
PORTS_FILE = './group_vars/all/09_ports.yml'
NETWORKS_FILE = './group_vars/all/10_networks.yml'
PORTS_FILE = './group_vars/all/10_ports.yml'
NETWORKS_FILE = './group_vars/all/09_networks.yml'
ROLE_TEMPLATE_DIR = './templates/roles/web-app'
ROLES_DIR = './roles'

View File

@@ -5,6 +5,9 @@ import subprocess
import os
import datetime
import sys
import re
from typing import Optional, Dict, Any, List
def run_ansible_playbook(
inventory,
@@ -13,21 +16,20 @@ def run_ansible_playbook(
allowed_applications=None,
password_file=None,
verbose=0,
skip_tests=False,
skip_validation=False,
skip_build=False,
cleanup=False,
skip_tests=False,
logs=False
):
start_time = datetime.datetime.now()
print(f"\n▶️ Script started at: {start_time.isoformat()}\n")
if cleanup:
# Cleanup is now handled via MODE_CLEANUP
if modes.get("MODE_CLEANUP", False):
cleanup_command = ["make", "clean-keep-logs"] if logs else ["make", "clean"]
print("\n🧹 Cleaning up project (" + " ".join(cleanup_command) +")...\n")
print("\n🧹 Cleaning up project (" + " ".join(cleanup_command) + ")...\n")
subprocess.run(cleanup_command, check=True)
else:
print("\n⚠️ Skipping build as requested.\n")
print("\n⚠️ Skipping cleanup as requested.\n")
if not skip_build:
print("\n🛠️ Building project (make messy-build)...\n")
@@ -38,26 +40,24 @@ def run_ansible_playbook(
script_dir = os.path.dirname(os.path.realpath(__file__))
playbook = os.path.join(os.path.dirname(script_dir), "playbook.yml")
# Inventory validation step
if not skip_validation:
# Inventory validation is controlled via MODE_ASSERT
if modes.get("MODE_ASSERT", None) is False:
print("\n⚠️ Skipping inventory validation as requested.\n")
elif "MODE_ASSERT" not in modes or modes["MODE_ASSERT"] is True:
print("\n🔍 Validating inventory before deployment...\n")
try:
subprocess.run(
[sys.executable,
os.path.join(script_dir, "validate/inventory.py"),
os.path.dirname(inventory)
[
sys.executable,
os.path.join(script_dir, "validate", "inventory.py"),
os.path.dirname(inventory),
],
check=True
check=True,
)
except subprocess.CalledProcessError:
print(
"\n❌ Inventory validation failed. Deployment aborted.\n",
file=sys.stderr
)
print("\n❌ Inventory validation failed. Deployment aborted.\n", file=sys.stderr)
sys.exit(1)
else:
print("\n⚠️ Skipping inventory validation as requested.\n")
if not skip_tests:
print("\n🧪 Running tests (make messy-test)...\n")
subprocess.run(["make", "messy-test"], check=True)
@@ -93,25 +93,136 @@ def run_ansible_playbook(
duration = end_time - start_time
print(f"⏱️ Total execution time: {duration}\n")
def validate_application_ids(inventory, app_ids):
"""
Abort the script if any application IDs are invalid, with detailed reasons.
"""
from module_utils.valid_deploy_id import ValidDeployId
validator = ValidDeployId()
invalid = validator.validate(inventory, app_ids)
if invalid:
print("\n❌ Detected invalid application_id(s):\n")
for app_id, status in invalid.items():
reasons = []
if not status['in_roles']:
if not status["in_roles"]:
reasons.append("not defined in roles (infinito)")
if not status['in_inventory']:
if not status["in_inventory"]:
reasons.append("not found in inventory file")
print(f" - {app_id}: " + ", ".join(reasons))
sys.exit(1)
MODE_LINE_RE = re.compile(
r"""^\s*(?P<key>[A-Z0-9_]+)\s*:\s*(?P<value>.+?)\s*(?:#\s*(?P<cmt>.*))?\s*$"""
)
def _parse_bool_literal(text: str) -> Optional[bool]:
t = text.strip().lower()
if t in ("true", "yes", "on"):
return True
if t in ("false", "no", "off"):
return False
return None
def load_modes_from_yaml(modes_yaml_path: str) -> List[Dict[str, Any]]:
"""
Parse group_vars/all/01_modes.yml line-by-line to recover:
- name (e.g., MODE_TEST)
- default (True/False/None if templated/unknown)
- help (from trailing # comment, if present)
"""
modes = []
if not os.path.exists(modes_yaml_path):
raise FileNotFoundError(f"Modes file not found: {modes_yaml_path}")
with open(modes_yaml_path, "r", encoding="utf-8") as fh:
for line in fh:
line = line.rstrip()
if not line or line.lstrip().startswith("#"):
continue
m = MODE_LINE_RE.match(line)
if not m:
continue
key = m.group("key")
val = m.group("value").strip()
cmt = (m.group("cmt") or "").strip()
if not key.startswith("MODE_"):
continue
default_bool = _parse_bool_literal(val)
modes.append(
{
"name": key,
"default": default_bool,
"help": cmt or f"Toggle {key}",
}
)
return modes
def add_dynamic_mode_args(
parser: argparse.ArgumentParser, modes_meta: List[Dict[str, Any]]
) -> Dict[str, Dict[str, Any]]:
"""
Add argparse options based on modes metadata.
Returns a dict mapping mode name -> { 'dest': <argparse_dest>, 'default': <bool/None>, 'kind': 'bool_true'|'bool_false'|'explicit' }.
"""
spec: Dict[str, Dict[str, Any]] = {}
for m in modes_meta:
name = m["name"]
default = m["default"]
desc = m["help"]
short = name.replace("MODE_", "").lower()
if default is True:
opt = f"--skip-{short}"
dest = f"skip_{short}"
help_txt = desc or f"Skip/disable {short} (default: enabled)"
parser.add_argument(opt, action="store_true", help=help_txt, dest=dest)
spec[name] = {"dest": dest, "default": True, "kind": "bool_true"}
elif default is False:
opt = f"--{short}"
dest = short
help_txt = desc or f"Enable {short} (default: disabled)"
parser.add_argument(opt, action="store_true", help=help_txt, dest=dest)
spec[name] = {"dest": dest, "default": False, "kind": "bool_false"}
else:
opt = f"--{short}"
dest = short
help_txt = desc or f"Set {short} explicitly (true/false). If omitted, keep inventory default."
parser.add_argument(opt, choices=["true", "false"], help=help_txt, dest=dest)
spec[name] = {"dest": dest, "default": None, "kind": "explicit"}
return spec
def build_modes_from_args(
spec: Dict[str, Dict[str, Any]], args_namespace: argparse.Namespace
) -> Dict[str, Any]:
"""
Using the argparse results and the spec, compute the `modes` dict to pass to Ansible.
"""
modes: Dict[str, Any] = {}
for mode_name, info in spec.items():
dest = info["dest"]
kind = info["kind"]
val = getattr(args_namespace, dest, None)
if kind == "bool_true":
modes[mode_name] = False if val else True
elif kind == "bool_false":
modes[mode_name] = True if val else False
else:
if val is not None:
modes[mode_name] = True if val == "true" else False
return modes
def main():
parser = argparse.ArgumentParser(
description="Run the central Ansible deployment script to manage infrastructure, updates, and tests."
@@ -119,87 +230,74 @@ def main():
parser.add_argument(
"inventory",
help="Path to the inventory file (INI or YAML) containing hosts and variables."
help="Path to the inventory file (INI or YAML) containing hosts and variables.",
)
parser.add_argument(
"-l", "--limit",
help="Restrict execution to a specific host or host group from the inventory."
"-l",
"--limit",
help="Restrict execution to a specific host or host group from the inventory.",
)
parser.add_argument(
"-T", "--host-type",
"-T",
"--host-type",
choices=["server", "desktop"],
default="server",
help="Specify whether the target is a server or a personal computer. Affects role selection and variables."
help="Specify whether the target is a server or a personal computer. Affects role selection and variables.",
)
parser.add_argument(
"-r", "--reset", action="store_true",
help="Reset all Infinito.Nexus files and configurations, and run the entire playbook (not just individual roles)."
"-p",
"--password-file",
help="Path to the file containing the Vault password. If not provided, prompts for the password interactively.",
)
parser.add_argument(
"-t", "--test", action="store_true",
help="Run test routines instead of production tasks. Useful for local testing and CI pipelines."
"-B",
"--skip-build",
action="store_true",
help="Skip running 'make build' before deployment.",
)
parser.add_argument(
"-u", "--update", action="store_true",
help="Enable the update procedure to bring software and roles up to date."
"-t",
"--skip-tests",
action="store_true",
help="Skip running 'make messy-test' before deployment.",
)
parser.add_argument(
"-b", "--backup", action="store_true",
help="Perform a full backup of critical data and configurations before the update process."
)
parser.add_argument(
"-c", "--cleanup", action="store_true",
help="Clean up unused files and outdated configurations after all tasks are complete. Also cleans up the repository before the deployment procedure."
)
parser.add_argument(
"-d", "--debug", action="store_true",
help="Enable detailed debug output for Ansible and this script."
)
parser.add_argument(
"-p", "--password-file",
help="Path to the file containing the Vault password. If not provided, prompts for the password interactively."
)
parser.add_argument(
"-s", "--skip-tests", action="store_true",
help="Skip running 'make test' even if tests are normally enabled."
)
parser.add_argument(
"-V", "--skip-validation", action="store_true",
help="Skip inventory validation before deployment."
)
parser.add_argument(
"-B", "--skip-build", action="store_true",
help="Skip running 'make build' before deployment."
)
parser.add_argument(
"-i", "--id",
"-i",
"--id",
nargs="+",
default=[],
dest="id",
help="List of application_id's for partial deploy. If not set, all application IDs defined in the inventory will be executed."
help="List of application_id's for partial deploy. If not set, all application IDs defined in the inventory will be executed.",
)
parser.add_argument(
"-v", "--verbose", action="count", default=0,
help="Increase verbosity level. Multiple -v flags increase detail (e.g., -vvv for maximum log output)."
"-v",
"--verbose",
action="count",
default=0,
help="Increase verbosity level. Multiple -v flags increase detail (e.g., -vvv for maximum log output).",
)
parser.add_argument(
"--logs", action="store_true",
help="Keep the CLI logs during cleanup command"
"--logs",
action="store_true",
help="Keep the CLI logs during cleanup command",
)
# ---- Dynamically add mode flags from group_vars/all/01_modes.yml ----
script_dir = os.path.dirname(os.path.realpath(__file__))
repo_root = os.path.dirname(script_dir)
modes_yaml_path = os.path.join(repo_root, "group_vars", "all", "01_modes.yml")
modes_meta = load_modes_from_yaml(modes_yaml_path)
modes_spec = add_dynamic_mode_args(parser, modes_meta)
args = parser.parse_args()
validate_application_ids(args.inventory, args.id)
modes = {
"MODE_RESET": args.reset,
"MODE_TEST": args.test,
"MODE_UPDATE": args.update,
"MODE_BACKUP": args.backup,
"MODE_CLEANUP": args.cleanup,
"MODE_LOGS": args.logs,
"MODE_DEBUG": args.debug,
"host_type": args.host_type
}
# Build modes from dynamic args
modes = build_modes_from_args(modes_spec, args)
# Additional non-dynamic flags
modes["MODE_LOGS"] = args.logs
modes["host_type"] = args.host_type
run_ansible_playbook(
inventory=args.inventory,
@@ -208,11 +306,9 @@ def main():
allowed_applications=args.id,
password_file=args.password_file,
verbose=args.verbose,
skip_tests=args.skip_tests,
skip_validation=args.skip_validation,
skip_build=args.skip_build,
cleanup=args.cleanup,
logs=args.logs
skip_tests=args.skip_tests,
logs=args.logs,
)
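
To illustrate the dynamic mode flags, a small standalone sketch of how a hypothetical group_vars/all/01_modes.yml line maps to a CLI option under the rules above; the example lines are invented, only the regex shape matches the script:

import re

# Same pattern shape as MODE_LINE_RE above: KEY: value  # help text
MODE_LINE_RE = re.compile(
    r"""^\s*(?P<key>[A-Z0-9_]+)\s*:\s*(?P<value>.+?)\s*(?:#\s*(?P<cmt>.*))?\s*$"""
)

example_lines = [
    "MODE_TEST: false        # Run test routines instead of production tasks",
    "MODE_CLEANUP: true      # Clean up the repository before deployment",
    "MODE_DEBUG: '{{ false }}'  # Templated default -> explicit true/false option",
]

for line in example_lines:
    m = MODE_LINE_RE.match(line)
    key, val = m.group("key"), m.group("value").strip().lower()
    short = key.replace("MODE_", "").lower()
    if val in ("true", "yes", "on"):
        print(f"{key}: default enabled  -> --skip-{short}")
    elif val in ("false", "no", "off"):
        print(f"{key}: default disabled -> --{short}")
    else:
        print(f"{key}: unknown default  -> --{short} true|false")
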

View File

@@ -228,7 +228,7 @@ def parse_meta_dependencies(role_dir: str) -> List[str]:
def sanitize_run_once_var(role_name: str) -> str:
"""
Generate run_once variable name from role name.
Example: 'sys-srv-web-inj-logout' -> 'run_once_sys_srv_web_inj_logout'
Example: 'sys-front-inj-logout' -> 'run_once_sys_front_inj_logout'
"""
return "run_once_" + role_name.replace("-", "_")

60
docker-compose.yml Normal file
View File

@@ -0,0 +1,60 @@
version: "3.9"
services:
infinito:
build:
context: .
dockerfile: Dockerfile
network: host
pull_policy: never
container_name: infinito_nexus
image: infinito_nexus
restart: unless-stopped
volumes:
- data:/var/lib/docker/volumes/
- backups:/Backups/
- letsencrypt:/etc/letsencrypt/
ports:
# --- Mail services (classic + secure) ---
- "${BIND_IP:-127.0.0.1}:25:25" # SMTP
- "${BIND_IP:-127.0.0.1}:110:110" # POP3
- "${BIND_IP:-127.0.0.1}:143:143" # IMAP
- "${BIND_IP:-127.0.0.1}:465:465" # SMTPS
- "${BIND_IP:-127.0.0.1}:587:587" # Submission (SMTP)
- "${BIND_IP:-127.0.0.1}:993:993" # IMAPS (bound to public IP)
- "${BIND_IP:-127.0.0.1}:995:995" # POP3S
- "${BIND_IP:-127.0.0.1}:4190:4190" # Sieve (ManageSieve)
# --- Web / API services ---
- "${BIND_IP:-127.0.0.1}:80:80" # HTTP
- "${BIND_IP:-127.0.0.1}:443:443" # HTTPS
- "${BIND_IP:-127.0.0.1}:8448:8448" # Matrix federation port
# --- TURN / STUN (UDP + TCP) ---
- "${BIND_IP:-127.0.0.1}:3478-3480:3478-3480/udp" # TURN/STUN UDP
- "${BIND_IP:-127.0.0.1}:3478-3480:3478-3480" # TURN/STUN TCP
# --- Streaming / RTMP ---
- "${BIND_IP:-127.0.0.1}:1935:1935" # Peertube
# --- Custom / application ports ---
- "${BIND_IP:-127.0.0.1}:2201:2201" # Gitea
- "${BIND_IP:-127.0.0.1}:2202:2202" # Gitlab
- "${BIND_IP:-127.0.0.1}:2203:22" # SSH
- "${BIND_IP:-127.0.0.1}:33552:33552"
# --- Consecutive ranges ---
- "${BIND_IP:-127.0.0.1}:48081-48083:48081-48083"
- "${BIND_IP:-127.0.0.1}:48087:48087"
volumes:
data:
backups:
letsencrypt:
networks:
default:
driver: bridge
ipam:
driver: default
config:
- subnet: ${SUBNET:-172.30.0.0/24}
gateway: ${GATEWAY:-172.30.0.1}

View File

@@ -15,8 +15,8 @@ Follow these guides to install and configure Infinito.Nexus:
- **Networking & VPN** - Configure `WireGuard`, `OpenVPN`, and `Nginx Reverse Proxy`.
## Managing & Updating Infinito.Nexus 🔄
- Regularly update services using `update-docker`, `update-pacman`, or `update-apt`.
- Monitor system health with `sys-hlth-btrfs`, `sys-hlth-webserver`, and `sys-hlth-docker-container`.
- Automate system maintenance with `sys-lock`, `sys-cln-bkps-service`, and `sys-rpr-docker-hard`.
- Regularly update services using `update-pacman`, or `update-apt`.
- Monitor system health with `sys-ctl-hlth-btrfs`, `sys-ctl-hlth-webserver`, and `sys-ctl-hlth-docker-container`.
- Automate system maintenance with `sys-lock`, `sys-ctl-cln-bkps`, and `sys-ctl-rpr-docker-hard`.
For more details, refer to the specific guides above.

3
env.sample Normal file
View File

@@ -0,0 +1,3 @@
BIND_IP=127.0.0.1
SUBNET=172.30.0.0/24
GATEWAY=172.30.0.1

View File

@@ -0,0 +1,79 @@
# -*- coding: utf-8 -*-
"""
Ansible filter to count active docker services for current host.
Active means:
- application key is in group_names
- application key matches prefix regex (default: ^(web-|svc-).* )
- under applications[app]['docker']['services'] each service is counted if:
- 'enabled' is True, OR
- 'enabled' is missing/undefined (treated as active)
Returns an integer. If ensure_min_one=True, returns at least 1.
"""
import re
from typing import Any, Dict, Mapping, Iterable
def _is_mapping(x: Any) -> bool:
# be liberal: Mapping covers dict-like; fallback to dict check
try:
return isinstance(x, Mapping)
except Exception:
return isinstance(x, dict)
def active_docker_container_count(applications: Mapping[str, Any],
group_names: Iterable[str],
prefix_regex: str = r'^(web-|svc-).*',
ensure_min_one: bool = False) -> int:
if not _is_mapping(applications):
return 1 if ensure_min_one else 0
group_set = set(group_names or [])
try:
pattern = re.compile(prefix_regex)
except re.error:
pattern = re.compile(r'^(web-|svc-).*') # fallback
count = 0
for app_key, app_val in applications.items():
# host selection + name prefix
if app_key not in group_set:
continue
if not pattern.match(str(app_key)):
continue
docker = app_val.get('docker') if _is_mapping(app_val) else None
services = docker.get('services') if _is_mapping(docker) else None
if not _is_mapping(services):
# sometimes roles define a single service name string; ignore
continue
for _svc_name, svc_cfg in services.items():
if not _is_mapping(svc_cfg):
# allow shorthand like: service: {} or image string -> counts as enabled
count += 1
continue
enabled = svc_cfg.get('enabled', True)
if isinstance(enabled, bool):
if enabled:
count += 1
else:
# non-bool enabled -> treat "truthy" as enabled
if bool(enabled):
count += 1
if ensure_min_one and count < 1:
return 1
return count
class FilterModule(object):
def filters(self):
return {
# usage: {{ applications | active_docker_container_count(group_names) }}
'active_docker_container_count': active_docker_container_count,
}
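
A small usage sketch of the filter above, called directly from Python; the module path and application names are assumptions for illustration only:

# Assumes the module above is saved as filter_plugins/active_docker_container_count.py
from filter_plugins.active_docker_container_count import active_docker_container_count

applications = {
    "web-app-nextcloud": {"docker": {"services": {
        "nextcloud": {"enabled": True},
        "cron": {},                       # no 'enabled' key -> treated as active
        "whiteboard": {"enabled": False}, # explicitly disabled -> not counted
    }}},
    "svc-db-postgres": {"docker": {"services": {"postgres": {"enabled": True}}}},
    "sys-ctl-backup": {"docker": {"services": {"backup": {"enabled": True}}}},  # prefix not matched
}
group_names = ["web-app-nextcloud", "svc-db-postgres"]  # this host runs only these

print(active_docker_container_count(applications, group_names))  # -> 3
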

View File

@@ -1,86 +0,0 @@
from ansible.errors import AnsibleFilterError
class FilterModule(object):
def filters(self):
return {'alias_domains_map': self.alias_domains_map}
def alias_domains_map(self, apps, PRIMARY_DOMAIN):
"""
Build a map of application IDs to their alias domains.
- If no `domains` key → []
- If `domains` exists but is an empty dict → return the original cfg
- Explicit `aliases` are used (default appended if missing)
- If only `canonical` defined and it doesn't include default, default is added
- Invalid types raise AnsibleFilterError
"""
def parse_entry(domains_cfg, key, app_id):
if key not in domains_cfg:
return None
entry = domains_cfg[key]
if isinstance(entry, dict):
values = list(entry.values())
elif isinstance(entry, list):
values = entry
else:
raise AnsibleFilterError(
f"Unexpected type for 'domains.{key}' in application '{app_id}': {type(entry).__name__}"
)
for d in values:
if not isinstance(d, str) or not d.strip():
raise AnsibleFilterError(
f"Invalid domain entry in '{key}' for application '{app_id}': {d!r}"
)
return values
def default_domain(app_id, primary):
return f"{app_id}.{primary}"
# 1) Precompute canonical domains per app (fallback to default)
canonical_map = {}
for app_id, cfg in apps.items():
domains_cfg = cfg.get('server',{}).get('domains',{})
entry = domains_cfg.get('canonical')
if entry is None:
canonical_map[app_id] = [default_domain(app_id, PRIMARY_DOMAIN)]
elif isinstance(entry, dict):
canonical_map[app_id] = list(entry.values())
elif isinstance(entry, list):
canonical_map[app_id] = list(entry)
else:
raise AnsibleFilterError(
f"Unexpected type for 'server.domains.canonical' in application '{app_id}': {type(entry).__name__}"
)
# 2) Build alias list per app
result = {}
for app_id, cfg in apps.items():
domains_cfg = cfg.get('server',{}).get('domains')
# no domains key → no aliases
if domains_cfg is None:
result[app_id] = []
continue
# empty domains dict → return the original cfg
if isinstance(domains_cfg, dict) and not domains_cfg:
result[app_id] = cfg
continue
# otherwise, compute aliases
aliases = parse_entry(domains_cfg, 'aliases', app_id) or []
default = default_domain(app_id, PRIMARY_DOMAIN)
has_aliases = 'aliases' in domains_cfg
has_canon = 'canonical' in domains_cfg
if has_aliases:
if default not in aliases:
aliases.append(default)
elif has_canon:
canon = canonical_map.get(app_id, [])
if default not in canon and default not in aliases:
aliases.append(default)
result[app_id] = aliases
return result

View File

@@ -4,45 +4,81 @@ import os
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from module_utils.entity_name_utils import get_entity_name
from module_utils.role_dependency_resolver import RoleDependencyResolver
from typing import Iterable
class FilterModule(object):
def filters(self):
return {'canonical_domains_map': self.canonical_domains_map}
def canonical_domains_map(self, apps, PRIMARY_DOMAIN):
def canonical_domains_map(
self,
apps,
PRIMARY_DOMAIN,
*,
recursive: bool = False,
roles_base_dir: str | None = None,
seed: Iterable[str] | None = None,
):
"""
Maps applications to their canonical domains, checking for conflicts
and ensuring all domains are valid and unique across applications.
Build { app_id: [canonical domains...] }.
Recursion follows only include_role, import_role and meta/main.yml:dependencies.
'run_after' is intentionally ignored here.
"""
if not isinstance(apps, dict):
raise AnsibleFilterError(f"'apps' must be a dict, got {type(apps).__name__}")
app_keys = set(apps.keys())
seed_keys = set(seed) if seed is not None else app_keys
if recursive:
roles_base_dir = roles_base_dir or os.path.join(os.getcwd(), "roles")
if not os.path.isdir(roles_base_dir):
raise AnsibleFilterError(
f"roles_base_dir '{roles_base_dir}' not found or not a directory."
)
resolver = RoleDependencyResolver(roles_base_dir)
discovered_roles = resolver.resolve_transitively(
start_roles=seed_keys,
resolve_include_role=True,
resolve_import_role=True,
resolve_dependencies=True,
resolve_run_after=False,
max_depth=None,
)
# all discovered roles that actually have config entries in `apps`
target_apps = discovered_roles & app_keys
else:
target_apps = seed_keys
result = {}
seen_domains = {}
for app_id, cfg in apps.items():
if app_id.startswith((
"web-",
"svc-db-" # Database services can also be exposed to the internet. It is just listening to the port, but the domain is used for port mapping
)):
if not isinstance(cfg, dict):
raise AnsibleFilterError(
f"Invalid configuration for application '{app_id}': "
f"expected a dict, got {cfg!r}"
for app_id in sorted(target_apps):
cfg = apps.get(app_id)
if cfg is None:
continue
if not str(app_id).startswith(("web-", "svc-db-")):
continue
if not isinstance(cfg, dict):
raise AnsibleFilterError(
f"Invalid configuration for application '{app_id}': expected dict, got {cfg!r}"
)
domains_cfg = cfg.get('server',{}).get('domains',{})
if not domains_cfg or 'canonical' not in domains_cfg:
self._add_default_domain(app_id, PRIMARY_DOMAIN, seen_domains, result)
continue
canonical_domains = domains_cfg['canonical']
self._process_canonical_domains(app_id, canonical_domains, seen_domains, result)
domains_cfg = cfg.get('server', {}).get('domains', {})
if not domains_cfg or 'canonical' not in domains_cfg:
self._add_default_domain(app_id, PRIMARY_DOMAIN, seen_domains, result)
continue
canonical_domains = domains_cfg['canonical']
self._process_canonical_domains(app_id, canonical_domains, seen_domains, result)
return result
def _add_default_domain(self, app_id, PRIMARY_DOMAIN, seen_domains, result):
"""
Add the default domain for an application if no canonical domains are defined.
Ensures the domain is unique across applications.
"""
entity_name = get_entity_name(app_id)
default_domain = f"{entity_name}.{PRIMARY_DOMAIN}"
if default_domain in seen_domains:
@@ -54,40 +90,21 @@ class FilterModule(object):
result[app_id] = [default_domain]
def _process_canonical_domains(self, app_id, canonical_domains, seen_domains, result):
"""
Process the canonical domains for an application, handling both lists and dicts,
and ensuring each domain is unique.
"""
if isinstance(canonical_domains, dict):
self._process_canonical_domains_dict(app_id, canonical_domains, seen_domains, result)
for _, domain in canonical_domains.items():
self._validate_and_check_domain(app_id, domain, seen_domains)
result[app_id] = canonical_domains.copy()
elif isinstance(canonical_domains, list):
self._process_canonical_domains_list(app_id, canonical_domains, seen_domains, result)
for domain in canonical_domains:
self._validate_and_check_domain(app_id, domain, seen_domains)
result[app_id] = list(canonical_domains)
else:
raise AnsibleFilterError(
f"Unexpected type for 'server.domains.canonical' in application '{app_id}': "
f"{type(canonical_domains).__name__}"
)
def _process_canonical_domains_dict(self, app_id, domains_dict, seen_domains, result):
"""
Process a dictionary of canonical domains for an application.
"""
for name, domain in domains_dict.items():
self._validate_and_check_domain(app_id, domain, seen_domains)
result[app_id] = domains_dict.copy()
def _process_canonical_domains_list(self, app_id, domains_list, seen_domains, result):
"""
Process a list of canonical domains for an application.
"""
for domain in domains_list:
self._validate_and_check_domain(app_id, domain, seen_domains)
result[app_id] = list(domains_list)
def _validate_and_check_domain(self, app_id, domain, seen_domains):
"""
Validate the domain and check if it has already been assigned to another application.
"""
if not isinstance(domain, str) or not domain.strip():
raise AnsibleFilterError(
f"Invalid domain entry in 'canonical' for application '{app_id}': {domain!r}"

View File

@@ -1,14 +1,32 @@
from ansible.errors import AnsibleFilterError
import hashlib
import base64
import sys, os
import sys
import os
# Ensure module_utils is importable when this filter runs from Ansible
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from module_utils.config_utils import get_app_conf
from module_utils.get_url import get_url
def _dedup_preserve(seq):
"""Return a list with stable order and unique items."""
seen = set()
out = []
for x in seq:
if x not in seen:
seen.add(x)
out.append(x)
return out
class FilterModule(object):
"""
Custom filters for Content Security Policy generation and CSP-related utilities.
Jinja filters for building a robust, CSP3-aware Content-Security-Policy header.
Safari/CSP2 compatibility is ensured by merging the -elem/-attr variants into the base
directives (style-src, script-src). We intentionally do NOT mirror back into -elem/-attr
to allow true CSP3 granularity on modern browsers.
"""
def filters(self):
@@ -16,10 +34,14 @@ class FilterModule(object):
'build_csp_header': self.build_csp_header,
}
# -------------------------------
# Helpers
# -------------------------------
@staticmethod
def is_feature_enabled(applications: dict, feature: str, application_id: str) -> bool:
"""
Return True if applications[application_id].features[feature] is truthy.
Returns True if applications[application_id].features[feature] is truthy.
"""
return get_app_conf(
applications,
@@ -31,6 +53,10 @@ class FilterModule(object):
@staticmethod
def get_csp_whitelist(applications, application_id, directive):
"""
Returns a list of additional whitelist entries for a given directive.
Accepts both scalar and list in config; always returns a list.
"""
wl = get_app_conf(
applications,
application_id,
@@ -47,28 +73,39 @@ class FilterModule(object):
@staticmethod
def get_csp_flags(applications, application_id, directive):
"""
Dynamically extract all CSP flags for a given directive and return them as tokens,
e.g., "'unsafe-eval'", "'unsafe-inline'", etc.
Returns CSP flag tokens (e.g., "'unsafe-eval'", "'unsafe-inline'") for a directive,
merging sane defaults with app config.
Defaults:
- For styles we enable 'unsafe-inline' by default (style-src, style-src-elem, style-src-attr),
because many apps rely on inline styles / style attributes.
- For scripts we do NOT enable 'unsafe-inline' by default.
"""
flags = get_app_conf(
default_flags = {}
if directive in ('style-src', 'style-src-elem', 'style-src-attr'):
default_flags = {'unsafe-inline': True}
configured = get_app_conf(
applications,
application_id,
'server.csp.flags.' + directive,
False,
{}
)
tokens = []
for flag_name, enabled in flags.items():
merged = {**default_flags, **configured}
tokens = []
for flag_name, enabled in merged.items():
if enabled:
tokens.append(f"'{flag_name}'")
return tokens
@staticmethod
def get_csp_inline_content(applications, application_id, directive):
"""
Return inline script/style snippets to hash for a given CSP directive.
Returns inline script/style snippets to hash for a given directive.
Accepts both scalar and list in config; always returns a list.
"""
snippets = get_app_conf(
applications,
@@ -86,7 +123,7 @@ class FilterModule(object):
@staticmethod
def get_csp_hash(content):
"""
Compute the SHA256 hash of the given inline content and return
Computes the SHA256 hash of the given inline content and returns
a CSP token like "'sha256-<base64>'".
"""
try:
@@ -96,6 +133,10 @@ class FilterModule(object):
except Exception as exc:
raise AnsibleFilterError(f"get_csp_hash failed: {exc}")
# -------------------------------
# Main builder
# -------------------------------
def build_csp_header(
self,
applications,
@@ -105,8 +146,27 @@ class FilterModule(object):
matomo_feature_name='matomo'
):
"""
Build the Content-Security-Policy header value dynamically based on application settings.
Inline hashes are read from applications[application_id].csp.hashes
Builds the Content-Security-Policy header value dynamically based on application settings.
Key points:
- CSP3-aware: supports base/elem/attr for styles and scripts.
- Safari/CSP2 fallback: base directives (style-src, script-src) always include
the union of their -elem/-attr variants.
- We do NOT mirror back into -elem/-attr; finer CSP3 rules remain effective
on modern browsers if you choose to use them.
- If the app explicitly disables a token on the *base* (e.g. style-src.unsafe-inline: false),
that token is removed from the merged base even if present in elem/attr.
- Inline hashes are added ONLY if that directive does NOT include 'unsafe-inline'.
- Whitelists/flags/hashes read from:
server.csp.whitelist.<directive>
server.csp.flags.<directive>
server.csp.hashes.<directive>
- “Smart defaults”:
* internal CDN for style/script elem and connect
* Matomo endpoints (if feature enabled) for script-elem/connect
* Simpleicons (if feature enabled) for connect
* reCAPTCHA (if feature enabled) for script-elem/frame-src
* frame-ancestors extended for desktop/logout/keycloak if enabled
"""
try:
directives = [
@@ -116,71 +176,121 @@ class FilterModule(object):
'frame-src',
'script-src',
'script-src-elem',
'script-src-attr',
'style-src',
'style-src-elem',
'style-src-attr',
'font-src',
'worker-src',
'manifest-src',
'media-src',
]
parts = []
tokens_by_dir = {}
explicit_flags_by_dir = {}
for directive in directives:
# Collect explicit flags (to later respect explicit "False" on base during merge)
explicit_flags = get_app_conf(
applications,
application_id,
'server.csp.flags.' + directive,
False,
{}
)
explicit_flags_by_dir[directive] = explicit_flags
tokens = ["'self'"]
# unsafe-eval / unsafe-inline flags
# 1) Flags (with sane defaults)
flags = self.get_csp_flags(applications, application_id, directive)
tokens += flags
# Matomo integration
if (
self.is_feature_enabled(applications, matomo_feature_name, application_id)
and directive in ['script-src-elem', 'connect-src']
):
matomo_domain = domains.get('web-app-matomo')[0]
if matomo_domain:
tokens.append(f"{web_protocol}://{matomo_domain}")
# 2) Internal CDN defaults for selected directives
if directive in ('script-src-elem', 'connect-src', 'style-src-elem', 'style-src'):
tokens.append(get_url(domains, 'web-svc-cdn', web_protocol))
# ReCaptcha integration: allow loading scripts from Google if feature enabled
# 3) Matomo (if enabled)
if directive in ('script-src-elem', 'connect-src'):
if self.is_feature_enabled(applications, matomo_feature_name, application_id):
tokens.append(get_url(domains, 'web-app-matomo', web_protocol))
# 4) Simpleicons (if enabled) typically used via connect-src (fetch)
if directive == 'connect-src':
if self.is_feature_enabled(applications, 'simpleicons', application_id):
tokens.append(get_url(domains, 'web-svc-simpleicons', web_protocol))
# 5) reCAPTCHA (if enabled) scripts + frames
if self.is_feature_enabled(applications, 'recaptcha', application_id):
if directive in ['script-src-elem',"frame-src"]:
if directive in ('script-src-elem', 'frame-src'):
tokens.append('https://www.gstatic.com')
tokens.append('https://www.google.com')
# Allow the loading of js from the cdn
if directive == 'script-src-elem':
if self.is_feature_enabled(applications, 'logout', application_id) or self.is_feature_enabled(applications, 'desktop', application_id):
domain = domains.get('web-svc-cdn')[0]
tokens.append(f"{domain}")
# 6) Frame ancestors (desktop + logout)
if directive == 'frame-ancestors':
# Enable loading via ancestors
if self.is_feature_enabled(applications, 'desktop', application_id):
domain = domains.get('web-app-port-ui')[0]
sld_tld = ".".join(domain.split(".")[-2:]) # yields "example.com"
tokens.append(f"{sld_tld}") # yields "*.example.com"
# Allow being embedded by the desktop app domain's site
domain = domains.get('web-app-desktop')[0]
sld_tld = ".".join(domain.split(".")[-2:]) # e.g., example.com
tokens.append(f"{sld_tld}")
if self.is_feature_enabled(applications, 'logout', application_id):
# Allow logout via infinito logout proxy
domain = domains.get('web-svc-logout')[0]
tokens.append(f"{domain}")
# Allow logout via keycloak app
domain = domains.get('web-app-keycloak')[0]
tokens.append(f"{domain}")
# whitelist
tokens.append(get_url(domains, 'web-svc-logout', web_protocol))
tokens.append(get_url(domains, 'web-app-keycloak', web_protocol))
# 7) Custom whitelist
tokens += self.get_csp_whitelist(applications, application_id, directive)
# only add hashes if 'unsafe-inline' is NOT in flags
if "'unsafe-inline'" not in flags:
# 8) Inline hashes (only if this directive does NOT include 'unsafe-inline')
if "'unsafe-inline'" not in tokens:
for snippet in self.get_csp_inline_content(applications, application_id, directive):
tokens.append(self.get_csp_hash(snippet))
parts.append(f"{directive} {' '.join(tokens)};")
tokens_by_dir[directive] = _dedup_preserve(tokens)
# static img-src
# ----------------------------------------------------------
# CSP3 families → ensure CSP2 fallback (Safari-safe)
# Merge style/script families so base contains union of elem/attr.
# Respect explicit disables on the base (e.g. unsafe-inline=False).
# Do NOT mirror back into elem/attr (keep granularity).
# ----------------------------------------------------------
def _strip_if_disabled(unioned_tokens, explicit_flags, name):
"""
Remove a token (e.g. 'unsafe-inline') from the unioned token list
if it is explicitly disabled in the base directive flags.
"""
if isinstance(explicit_flags, dict) and explicit_flags.get(name) is False:
tok = f"'{name}'"
return [t for t in unioned_tokens if t != tok]
return unioned_tokens
def merge_family(base_key, elem_key, attr_key):
base = tokens_by_dir.get(base_key, [])
elem = tokens_by_dir.get(elem_key, [])
attr = tokens_by_dir.get(attr_key, [])
union = _dedup_preserve(base + elem + attr)
# Respect explicit disables on the base
explicit_base = explicit_flags_by_dir.get(base_key, {})
# The most relevant flags for script/style:
for flag_name in ('unsafe-inline', 'unsafe-eval'):
union = _strip_if_disabled(union, explicit_base, flag_name)
tokens_by_dir[base_key] = union # write back only to base
merge_family('style-src', 'style-src-elem', 'style-src-attr')
merge_family('script-src', 'script-src-elem', 'script-src-attr')
# ----------------------------------------------------------
# Assemble header
# ----------------------------------------------------------
parts = []
for directive in directives:
if directive in tokens_by_dir:
parts.append(f"{directive} {' '.join(tokens_by_dir[directive])};")
# Keep permissive img-src for data/blob + any host (as before)
parts.append("img-src * data: blob:;")
return ' '.join(parts)
except Exception as exc:
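
To make the Safari/CSP2 fallback rule concrete, here is a tiny standalone illustration of the union-plus-explicit-disable behaviour described in the docstring above; it re-implements only that merge step with invented tokens, not the full filter:

def _dedup_preserve(seq):
    seen, out = set(), []
    for x in seq:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def merge_base(base, elem, attr, explicit_base_flags):
    """Union base/-elem/-attr tokens, then drop tokens explicitly disabled on the base."""
    union = _dedup_preserve(base + elem + attr)
    for flag in ("unsafe-inline", "unsafe-eval"):
        if explicit_base_flags.get(flag) is False:
            union = [t for t in union if t != f"'{flag}'"]
    return union

# Hypothetical app config: inline styles allowed on elements, but the app sets
# server.csp.flags.style-src.unsafe-inline: false on the base directive.
style_src      = ["'self'"]
style_src_elem = ["'self'", "'unsafe-inline'", "https://cdn.example.org"]
style_src_attr = ["'self'", "'unsafe-inline'"]

print(merge_base(style_src, style_src_elem, style_src_attr, {"unsafe-inline": False}))
# -> ["'self'", 'https://cdn.example.org']   (the -elem/-attr directives keep their own tokens)
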

View File

@@ -7,7 +7,7 @@ class FilterModule(object):
def filters(self):
return {'domain_mappings': self.domain_mappings}
def domain_mappings(self, apps, PRIMARY_DOMAIN):
def domain_mappings(self, apps, primary_domain, auto_build_alias):
"""
Build a flat list of redirect mappings for all apps:
- source: each alias domain
@@ -43,7 +43,7 @@ class FilterModule(object):
domains_cfg = cfg.get('server',{}).get('domains',{})
entry = domains_cfg.get('canonical')
if entry is None:
canonical_map[app_id] = [default_domain(app_id, PRIMARY_DOMAIN)]
canonical_map[app_id] = [default_domain(app_id, primary_domain)]
elif isinstance(entry, dict):
canonical_map[app_id] = list(entry.values())
elif isinstance(entry, list):
@@ -61,11 +61,11 @@ class FilterModule(object):
alias_map[app_id] = []
continue
if isinstance(domains_cfg, dict) and not domains_cfg:
alias_map[app_id] = [default_domain(app_id, PRIMARY_DOMAIN)]
alias_map[app_id] = [default_domain(app_id, primary_domain)]
continue
aliases = parse_entry(domains_cfg, 'aliases', app_id) or []
default = default_domain(app_id, PRIMARY_DOMAIN)
default = default_domain(app_id, primary_domain)
has_aliases = 'aliases' in domains_cfg
has_canonical = 'canonical' in domains_cfg
@@ -74,7 +74,7 @@ class FilterModule(object):
aliases.append(default)
elif has_canonical:
canon = canonical_map.get(app_id, [])
if default not in canon and default not in aliases:
if default not in canon and default not in aliases and auto_build_alias:
aliases.append(default)
alias_map[app_id] = aliases
@@ -84,7 +84,7 @@ class FilterModule(object):
mappings = []
for app_id, sources in alias_map.items():
canon_list = canonical_map.get(app_id, [])
target = canon_list[0] if canon_list else default_domain(app_id, PRIMARY_DOMAIN)
target = canon_list[0] if canon_list else default_domain(app_id, primary_domain)
for src in sources:
if src == target:
# skip self-redirects

View File

@@ -4,7 +4,7 @@ class FilterModule(object):
def filters(self):
return {'generate_all_domains': self.generate_all_domains}
def generate_all_domains(self, domains_dict, include_www=True):
def generate_all_domains(self, domains_dict, include_www:bool=True):
"""
Transform a dict of domains (values: str, list, dict) into a flat list,
optionally add 'www.' prefixes, dedupe and sort alphabetically.

View File

@@ -1,49 +0,0 @@
import os
import re
import yaml
from ansible.errors import AnsibleFilterError
def get_application_id(role_name):
"""
Jinja2/Ansible filter: given a role name, load its vars/main.yml and return the application_id value.
"""
# Construct path: assumes current working directory is project root
vars_file = os.path.join(os.getcwd(), 'roles', role_name, 'vars', 'main.yml')
if not os.path.isfile(vars_file):
raise AnsibleFilterError(f"Vars file not found for role '{role_name}': {vars_file}")
try:
# Read entire file content to avoid lazy stream issues
with open(vars_file, 'r', encoding='utf-8') as f:
content = f.read()
data = yaml.safe_load(content)
except Exception as e:
raise AnsibleFilterError(f"Error reading YAML from {vars_file}: {e}")
# Ensure parsed data is a mapping
if not isinstance(data, dict):
raise AnsibleFilterError(
f"Error reading YAML from {vars_file}: expected mapping, got {type(data).__name__}"
)
# Detect malformed YAML: no valid identifier-like keys
valid_key_pattern = re.compile(r'^[A-Za-z_][A-Za-z0-9_]*$')
if data and not any(valid_key_pattern.match(k) for k in data.keys()):
raise AnsibleFilterError(f"Error reading YAML from {vars_file}: invalid top-level keys")
if 'application_id' not in data:
raise AnsibleFilterError(f"Key 'application_id' not found in {vars_file}")
return data['application_id']
class FilterModule(object):
"""
Ansible filter plugin entry point.
"""
def filters(self):
return {
'get_application_id': get_application_id,
}

View File

@@ -0,0 +1,31 @@
# Custom Ansible filter to get all role names under "roles/" with a given prefix.
import os
def get_category_entries(prefix, roles_path="roles"):
"""
Returns a list of role names under the given roles_path
that start with the specified prefix.
:param prefix: String prefix to match role names.
:param roles_path: Path to the roles directory (default: 'roles').
:return: List of matching role names.
"""
if not os.path.isdir(roles_path):
return []
roles = []
for entry in os.listdir(roles_path):
full_path = os.path.join(roles_path, entry)
if os.path.isdir(full_path) and entry.startswith(prefix):
roles.append(entry)
return sorted(roles)
class FilterModule(object):
""" Custom filters for Ansible """
def filters(self):
return {
"get_category_entries": get_category_entries
}
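A minimal usage sketch; the exact result depends on the local roles/ tree, so the returned names are only illustrative:
get_category_entries("sys-ctl-cln-")
# -> e.g. ["sys-ctl-cln-anon-volumes", "sys-ctl-cln-bkps", "sys-ctl-cln-disc-space"]  (hypothetical)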

View File

@@ -20,9 +20,10 @@ def get_docker_paths(application_id: str, path_docker_compose_instances: str) ->
'config': f"{base}config/",
},
'files': {
'env': f"{base}.env/env",
'docker_compose': f"{base}docker-compose.yml",
'dockerfile': f"{base}Dockerfile",
'env': f"{base}.env/env",
'docker_compose': f"{base}docker-compose.yml",
'docker_compose_override': f"{base}docker-compose.override.yml",
'dockerfile': f"{base}Dockerfile",
}
}

View File

@@ -0,0 +1,37 @@
"""
Custom Ansible filter to build a systemctl unit name (always lowercase).
Rules:
- If `systemctl_id` ends with '@': drop the '@' and return
"{systemctl_id_without_at}.{software_name}@{suffix_handling}".
- Else: return "{systemctl_id}.{software_name}{suffix_handling}".
Suffix handling:
- Default "" → automatically pick:
- ".service" if no '@' in systemctl_id
- ".timer" if '@' in systemctl_id
- Explicit False → no suffix at all
- Any string → ".{suffix}" (lowercased)
"""
def get_service_name(systemctl_id, software_name, suffix=""):
sid = str(systemctl_id).strip().lower()
software_name = str(software_name).strip().lower()
# Determine suffix
if suffix is False:
sfx = "" # no suffix at all
elif suffix == "" or suffix is None:
sfx = ".service"
else:
sfx = str(suffix).strip().lower()
if sid.endswith("@"):
base = sid[:-1] # drop the trailing '@'
return f"{base}.{software_name}@{sfx}"
else:
return f"{sid}.{software_name}{sfx}"
class FilterModule(object):
def filters(self):
return {"get_service_name": get_service_name}

View File

@@ -0,0 +1,24 @@
# filter_plugins/get_service_script_path.py
# Custom Ansible filter to generate service script paths.
def get_service_script_path(systemctl_id, script_type):
"""
Build the path to a service script based on systemctl_id and type.
:param systemctl_id: The identifier of the system service.
:param script_type: The script type/extension (e.g., sh, py, yml).
:return: The full path string.
"""
if not systemctl_id or not script_type:
raise ValueError("Both systemctl_id and script_type are required")
return f"/opt/scripts/systemctl/{systemctl_id}/script.{script_type}"
class FilterModule(object):
""" Custom filters for Ansible """
def filters(self):
return {
"get_service_script_path": get_service_script_path
}
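For example, with an illustrative systemctl id:
get_service_script_path("sys-ctl-cln-bkps", "sh")
# -> "/opt/scripts/systemctl/sys-ctl-cln-bkps/script.sh"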

View File

@@ -0,0 +1,77 @@
from __future__ import annotations
import sys, os, re
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from ansible.errors import AnsibleFilterError
from module_utils.config_utils import get_app_conf
from module_utils.entity_name_utils import get_entity_name
_UNIT_RE = re.compile(r'^\s*(\d+(?:\.\d+)?)\s*([kKmMgGtT]?[bB]?)?\s*$')
_FACTORS = {
'': 1, 'b': 1,
'k': 1024, 'kb': 1024,
'm': 1024**2, 'mb': 1024**2,
'g': 1024**3, 'gb': 1024**3,
't': 1024**4, 'tb': 1024**4,
}
def _to_bytes(v: str) -> int:
if v is None:
raise AnsibleFilterError("jvm_filters: size value is None")
s = str(v).strip()
m = _UNIT_RE.match(s)
if not m:
raise AnsibleFilterError(f"jvm_filters: invalid size '{v}'")
num, unit = m.group(1), (m.group(2) or '').lower()
try:
val = float(num)
except ValueError as e:
raise AnsibleFilterError(f"jvm_filters: invalid numeric size '{v}'") from e
factor = _FACTORS.get(unit)
if factor is None:
raise AnsibleFilterError(f"jvm_filters: unknown unit in '{v}'")
return int(val * factor)
def _to_mb(v: str) -> int:
return max(0, _to_bytes(v) // (1024 * 1024))
def _svc(app_id: str) -> str:
return get_entity_name(app_id)
def _mem_limit_mb(apps: dict, app_id: str) -> int:
svc = _svc(app_id)
raw = get_app_conf(apps, app_id, f"docker.services.{svc}.mem_limit")
mb = _to_mb(raw)
if mb <= 0:
raise AnsibleFilterError(f"jvm_filters: mem_limit for '{svc}' must be > 0 MB (got '{raw}')")
return mb
def _mem_res_mb(apps: dict, app_id: str) -> int:
svc = _svc(app_id)
raw = get_app_conf(apps, app_id, f"docker.services.{svc}.mem_reservation")
mb = _to_mb(raw)
if mb <= 0:
raise AnsibleFilterError(f"jvm_filters: mem_reservation for '{svc}' must be > 0 MB (got '{raw}')")
return mb
def jvm_max_mb(apps: dict, app_id: str) -> int:
"""Xmx = min( floor(0.7*limit), limit-1024, 12288 ) with floor at 1024 MB."""
limit_mb = _mem_limit_mb(apps, app_id)
c1 = (limit_mb * 7) // 10
c2 = max(0, limit_mb - 1024)
c3 = 12288
return max(1024, min(c1, c2, c3))
def jvm_min_mb(apps: dict, app_id: str) -> int:
"""Xms = min( floor(Xmx/2), mem_reservation, Xmx ) with floor at 512 MB."""
xmx = jvm_max_mb(apps, app_id)
res = _mem_res_mb(apps, app_id)
return max(512, min(xmx // 2, res, xmx))
class FilterModule(object):
def filters(self):
return {
"jvm_max_mb": jvm_max_mb,
"jvm_min_mb": jvm_min_mb,
}
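A worked example of the sizing rules, for a hypothetical service with mem_limit '8g' and mem_reservation '2g':
#   limit_mb = 8192
#   Xmx = max(1024, min(8192*7//10, 8192-1024, 12288)) = min(5734, 7168, 12288) = 5734 MB
#   res_mb   = 2048
#   Xms = max(512, min(5734//2, 2048, 5734)) = 2048 MB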

View File

@@ -1,122 +0,0 @@
import os
import yaml
import re
from ansible.errors import AnsibleFilterError
# in-memory cache: application_id → (parsed_yaml, is_nested)
_cfg_cache = {}
def load_configuration(application_id, key):
if not isinstance(key, str):
raise AnsibleFilterError("Key must be a dotted-string, e.g. 'features.matomo'")
# locate roles/
here = os.path.dirname(__file__)
root = os.path.abspath(os.path.join(here, '..'))
roles_dir = os.path.join(root, 'roles')
if not os.path.isdir(roles_dir):
raise AnsibleFilterError(f"Roles directory not found at {roles_dir}")
# first time? load & cache
if application_id not in _cfg_cache:
config_path = None
# 1) primary: vars/main.yml declares it
for role in os.listdir(roles_dir):
mv = os.path.join(roles_dir, role, 'vars', 'main.yml')
if os.path.exists(mv):
try:
md = yaml.safe_load(open(mv)) or {}
except Exception:
md = {}
if md.get('application_id') == application_id:
cf = os.path.join(roles_dir, role, "config" , "main.yml")
if not os.path.exists(cf):
raise AnsibleFilterError(
f"Role '{role}' declares '{application_id}' but missing config/main.yml"
)
config_path = cf
break
# 2) fallback nested
if config_path is None:
for role in os.listdir(roles_dir):
cf = os.path.join(roles_dir, role, "config" , "main.yml")
if not os.path.exists(cf):
continue
try:
dd = yaml.safe_load(open(cf)) or {}
except Exception:
dd = {}
if isinstance(dd, dict) and application_id in dd:
config_path = cf
break
# 3) fallback flat
if config_path is None:
for role in os.listdir(roles_dir):
cf = os.path.join(roles_dir, role, "config" , "main.yml")
if not os.path.exists(cf):
continue
try:
dd = yaml.safe_load(open(cf)) or {}
except Exception:
dd = {}
# flat style: dict with all non-dict values
if isinstance(dd, dict) and not any(isinstance(v, dict) for v in dd.values()):
config_path = cf
break
if config_path is None:
return None
# parse once
try:
parsed = yaml.safe_load(open(config_path)) or {}
except Exception as e:
raise AnsibleFilterError(f"Error loading config/main.yml at {config_path}: {e}")
# detect nested vs flat
is_nested = isinstance(parsed, dict) and (application_id in parsed)
_cfg_cache[application_id] = (parsed, is_nested)
parsed, is_nested = _cfg_cache[application_id]
# pick base entry
entry = parsed[application_id] if is_nested else parsed
# resolve dotted key
key_parts = key.split('.')
for part in key_parts:
# Check if part has an index (e.g., domains.canonical[0])
match = re.match(r'([^\[]+)\[([0-9]+)\]', part)
if match:
part, index = match.groups()
index = int(index)
if isinstance(entry, dict) and part in entry:
entry = entry[part]
# Check if entry is a list and access the index
if isinstance(entry, list) and 0 <= index < len(entry):
entry = entry[index]
else:
raise AnsibleFilterError(
f"Index '{index}' out of range for key '{part}' in application '{application_id}'"
)
else:
raise AnsibleFilterError(
f"Key '{part}' not found under application '{application_id}'"
)
else:
if isinstance(entry, dict) and part in entry:
entry = entry[part]
else:
raise AnsibleFilterError(
f"Key '{part}' not found under application '{application_id}'"
)
return entry
class FilterModule(object):
def filters(self):
return {'load_configuration': load_configuration}

View File

@@ -0,0 +1,40 @@
# filter_plugins/resource_filter.py
from __future__ import annotations
import sys, os
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from module_utils.config_utils import get_app_conf, AppConfigKeyError, ConfigEntryNotSetError # noqa: F401
from module_utils.entity_name_utils import get_entity_name
from ansible.errors import AnsibleFilterError
def resource_filter(
applications: dict,
application_id: str,
key: str,
service_name: str,
hard_default,
):
"""
Lookup order:
1) docker.services.<service_name or get_entity_name(application_id)>.<key>
2) hard_default (mandatory)
- service_name may be "" → will resolve to get_entity_name(application_id).
- hard_default is mandatory (no implicit None).
- required=False always.
"""
try:
primary_service = service_name if service_name != "" else get_entity_name(application_id)
return get_app_conf(applications, application_id, f"docker.services.{primary_service}.{key}", False, hard_default)
except (AppConfigKeyError, ConfigEntryNotSetError) as e:
raise AnsibleFilterError(str(e))
class FilterModule(object):
def filters(self):
return {
"resource_filter": resource_filter,
}
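A minimal sketch of the lookup, assuming an `applications` dict as used elsewhere in this repo and a hypothetical hard default of 1.0:
# service_name "" falls back to get_entity_name(application_id)
cpus = resource_filter(applications, "svc-prx-openresty", "cpus", "", 1.0)
# -> docker.services.<entity name>.cpus if configured, otherwise the hard default 1.0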

View File

@@ -1,55 +0,0 @@
from jinja2 import Undefined
def safe_placeholders(template: str, mapping: dict = None) -> str:
"""
Format a template like "{url}/logo.png".
If mapping is provided (not None) and ANY placeholder is missing or maps to None/empty string, the function will raise KeyError.
If mapping is None, missing placeholders or invalid templates return empty string.
Numerical zero or False are considered valid values.
Any other formatting errors return an empty string.
"""
# Non-string templates yield empty
if not isinstance(template, str):
return ''
class SafeDict(dict):
def __getitem__(self, key):
val = super().get(key, None)
# Treat None or empty string as missing
if val is None or (isinstance(val, str) and val == ''):
raise KeyError(key)
return val
def __missing__(self, key):
raise KeyError(key)
silent = mapping is None
data = mapping or {}
try:
return template.format_map(SafeDict(data))
except KeyError:
if silent:
return ''
raise
except Exception:
return ''
def safe_var(value):
"""
Ansible filter: returns the value unchanged unless it's Undefined or None,
in which case returns an empty string.
Catches all exceptions and yields ''.
"""
try:
if isinstance(value, Undefined) or value is None:
return ''
return value
except Exception:
return ''
class FilterModule(object):
def filters(self):
return {
'safe_var': safe_var,
'safe_placeholders': safe_placeholders,
}
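A short behavioral sketch with hypothetical values:
safe_placeholders("{url}/logo.png", {"url": "https://example.com"})  # -> "https://example.com/logo.png"
safe_placeholders("{url}/logo.png")                                  # -> "" (mapping is None, so missing keys stay silent)
safe_var(None)                                                       # -> ""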

View File

@@ -1,28 +0,0 @@
"""
Ansible filter plugin that joins a base string and a tail path safely.
If the base is falsy (None, empty, etc.), returns an empty string.
"""
def safe_join(base, tail):
"""
Safely join base and tail into a path or URL.
- base: the base string. If falsy, returns ''.
- tail: the string to append. Leading/trailing slashes are handled.
- On any exception, returns ''.
"""
try:
if not base:
return ''
base_str = str(base).rstrip('/')
tail_str = str(tail).lstrip('/')
return f"{base_str}/{tail_str}"
except Exception:
return ''
class FilterModule(object):
def filters(self):
return {
'safe_join': safe_join,
}

View File

@@ -0,0 +1,67 @@
# filter_plugins/timeout_start_sec_for_domains.py (only the core logic changed)
from ansible.errors import AnsibleFilterError
class FilterModule(object):
def filters(self):
return {
"timeout_start_sec_for_domains": self.timeout_start_sec_for_domains,
}
def timeout_start_sec_for_domains(
self,
domains_dict,
include_www=True,
per_domain_seconds=25,
overhead_seconds=30,
min_seconds=120,
max_seconds=3600,
):
"""
Args:
domains_dict (dict | list[str] | str): Either the domain mapping dict
(values can be str | list[str] | dict[str,str]) or an already
flattened list of domains, or a single domain string.
include_www (bool): If true, add 'www.<domain>' for non-www entries.
...
"""
try:
# Local flattener for dict inputs (like your generate_all_domains source)
def _flatten_from_dict(domains_map):
flat = []
for v in (domains_map or {}).values():
if isinstance(v, str):
flat.append(v)
elif isinstance(v, list):
flat.extend(v)
elif isinstance(v, dict):
flat.extend(v.values())
return flat
# Accept dict | list | str
if isinstance(domains_dict, dict):
flat = _flatten_from_dict(domains_dict)
elif isinstance(domains_dict, list):
flat = list(domains_dict)
elif isinstance(domains_dict, str):
flat = [domains_dict]
else:
raise AnsibleFilterError(
"Expected 'domains_dict' to be dict | list | str."
)
if include_www:
base_unique = sorted(set(flat))
www_variants = [f"www.{d}" for d in base_unique if not str(d).lower().startswith("www.")]
flat.extend(www_variants)
unique_domains = sorted(set(flat))
count = len(unique_domains)
raw = overhead_seconds + per_domain_seconds * count
clamped = max(min_seconds, min(max_seconds, int(raw)))
return clamped
except AnsibleFilterError:
raise
except Exception as exc:
raise AnsibleFilterError(f"timeout_start_sec_for_domains failed: {exc}")

filter_plugins/url_join.py Normal file
View File

@@ -0,0 +1,146 @@
"""
Ansible filter plugin that safely joins URL components from a list.
- Requires a valid '<scheme>://' in the first element (any RFC-3986-ish scheme)
- Preserves the double slash after the scheme, collapses other duplicate slashes
- Supports query parts introduced by elements starting with '?' or '&'
* first query element uses '?', subsequent use '&' (regardless of given prefix)
* each query element must be exactly one 'key=value' pair
* query elements may only appear after path elements; once query starts, no more path parts
- Raises specific AnsibleFilterError messages for common misuse
"""
import re
from ansible.errors import AnsibleFilterError
_SCHEME_RE = re.compile(r'^([a-zA-Z][a-zA-Z0-9+.\-]*://)(.*)$')
_QUERY_PAIR_RE = re.compile(r'^[^&=?#]+=[^&?#]*$') # key=value (no '&', no extra '?' or '#')
def _to_str_or_error(obj, index):
"""Cast to str, raising a specific AnsibleFilterError with index context."""
try:
return str(obj)
except Exception as e:
raise AnsibleFilterError(
f"url_join: unable to convert part at index {index} to string: {e}"
)
def url_join(parts):
"""
Join a list of URL parts, URL-aware (scheme, path, query).
Args:
parts (list|tuple): URL segments. First element MUST include '<scheme>://'.
Path elements are plain strings.
Query elements must start with '?' or '&' and contain exactly one 'key=value'.
Returns:
str: Joined URL.
Raises:
AnsibleFilterError: with specific, descriptive messages.
"""
# --- basic input validation ---
if parts is None:
raise AnsibleFilterError("url_join: parts must be a non-empty list; got None")
if not isinstance(parts, (list, tuple)):
raise AnsibleFilterError(
f"url_join: parts must be a list/tuple; got {type(parts).__name__}"
)
if len(parts) == 0:
raise AnsibleFilterError("url_join: parts must be a non-empty list")
# --- first element must carry a scheme ---
first_raw = parts[0]
if first_raw is None:
raise AnsibleFilterError(
"url_join: first element must include a scheme like 'https://'; got None"
)
first_str = _to_str_or_error(first_raw, 0)
m = _SCHEME_RE.match(first_str)
if not m:
raise AnsibleFilterError(
"url_join: first element must start with '<scheme>://', e.g. 'https://example.com'; "
f"got '{first_str}'"
)
scheme = m.group(1) # e.g., 'https://', 'ftp://', 'myapp+v1://'
after_scheme = m.group(2).lstrip('/') # strip only leading slashes right after scheme
# --- iterate parts: collect path parts until first query part; then only query parts allowed ---
path_parts = []
query_pairs = []
in_query = False
for i, p in enumerate(parts):
if p is None:
# skip None silently (consistent with path_join-ish behavior)
continue
s = _to_str_or_error(p, i)
# disallow additional scheme in later parts
if i > 0 and "://" in s:
raise AnsibleFilterError(
f"url_join: only the first element may contain a scheme; part at index {i} "
f"looks like a URL with scheme ('{s}')."
)
# first element: replace with remainder after scheme and continue
if i == 0:
s = after_scheme
# check if this is a query element (starts with ? or &)
if s.startswith('?') or s.startswith('&'):
in_query = True
raw_pair = s[1:] # strip the leading ? or &
if raw_pair == '':
raise AnsibleFilterError(
f"url_join: query element at index {i} is empty; expected '?key=value' or '&key=value'"
)
# Disallow multiple pairs in a single element; enforce exactly one key=value
if '&' in raw_pair:
raise AnsibleFilterError(
f"url_join: query element at index {i} must contain exactly one 'key=value' pair "
f"without '&'; got '{s}'"
)
if not _QUERY_PAIR_RE.match(raw_pair):
raise AnsibleFilterError(
f"url_join: query element at index {i} must match 'key=value' (no extra '?', '&', '#'); got '{s}'"
)
query_pairs.append(raw_pair)
else:
# non-query element
if in_query:
# once query started, no more path parts allowed
raise AnsibleFilterError(
f"url_join: path element found at index {i} after query parameters started; "
f"query parts must come last"
)
# normal path part: strip slashes to avoid duplicate '/'
path_parts.append(s.strip('/'))
# normalize path: remove empty chunks
path_parts = [p for p in path_parts if p != '']
# --- build result ---
# path portion
if path_parts:
joined_path = "/".join(path_parts)
base = scheme + joined_path
else:
# no path beyond scheme
base = scheme
# query portion
if query_pairs:
base = base + "?" + "&".join(query_pairs)
return base
class FilterModule(object):
def filters(self):
return {
'url_join': url_join,
}
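Illustrative joins under hypothetical URLs, following the rules described in the docstring:
url_join(["https://example.com", "api", "v1"])
# -> "https://example.com/api/v1"
url_join(["https://example.com/", "/path/", "?page=1", "&sort=asc"])
# -> "https://example.com/path?page=1&sort=asc"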

View File

@@ -0,0 +1,21 @@
from ansible.errors import AnsibleFilterError
def docker_volume_path(volume_name: str) -> str:
"""
Returns the absolute filesystem path of a Docker volume.
Example:
"akaunting_data" -> "/var/lib/docker/volumes/akaunting_data/_data/"
"""
if not volume_name or not isinstance(volume_name, str):
raise AnsibleFilterError(f"Invalid volume name: {volume_name}")
return f"/var/lib/docker/volumes/{volume_name}/_data/"
class FilterModule(object):
"""Docker volume path filters."""
def filters(self):
return {
"docker_volume_path": docker_volume_path,
}

View File

@@ -1,6 +1,8 @@
SOFTWARE_NAME: "Infinito.Nexus" # Name of the software
# Deployment
ENVIRONMENT: "production" # Possible values: production, development
DEPLOYMENT_MODE: "single" # Use single, if you deploy on one server. Use cluster if you setup in cluster mode.
# If true, sensitive credentials will be masked or hidden from all Ansible task logs
# Recommended to set to true
@@ -20,26 +22,20 @@ HOST_TIME_FORMAT: "HH:mm"
HOST_THOUSAND_SEPARATOR: "."
HOST_DECIMAL_MARK: ","
# Deployment mode
DEPLOYMENT_MODE: "single" # Use single, if you deploy on one server. Use cluster if you setup in cluster mode.
# Web
WEB_PROTOCOL: "https" # Web protocol type. Use https or http. If you run locally you need to change it to http
WEB_PORT: "{{ 443 if WEB_PROTOCOL == 'https' else 80 }}" # Default port web applications will listen to
# Websocket
WEBSOCKET_PROTOCOL: "{{ 'wss' if WEB_PROTOCOL == 'https' else 'ws' }}"
# WWW redirect to non-WWW domains enabled
WWW_REDIRECT_ENABLED: "{{ ('web-opt-rdr-www' in group_names) | bool }}"
AUTO_BUILD_ALIASES: False # If enabled, it creates an alias domain for each web application based on the entity name; recommended to set to false to save domain space
# Domain
PRIMARY_DOMAIN: "localhost" # Primary Domain of the server
# Server Tact Variables
## Hours in which the server is "awake" (100% working). The rest of the time is reserved for maintenance
HOURS_SERVER_AWAKE: "0..23"
## Random delay for systemd timers to avoid peak loads.
RANDOMIZED_DELAY_SEC: "5min"
# Runtime Variables for Process Control
ACTIVATE_ALL_TIMERS: false # Activates all timers, independent of whether the handlers have been triggered
PRIMARY_DOMAIN: "localhost" # Primary Domain of the server
DNS_PROVIDER: cloudflare # The DNS provider/registrar for the domain
@@ -52,22 +48,19 @@ CERTBOT_CREDENTIALS_FILE: "{{ CERTBOT_CREDENTIALS_DIR }}/{{ CERT
CERTBOT_DNS_PROPAGATION_WAIT_SECONDS: 300 # How long should the script wait for DNS propagation before continuing
CERTBOT_FLAVOR: san # Possible options: san (recommended, with a DNS flavor like cloudflare or hetzner), wildcard (doesn't work with the www redirect), dedicated
# Path where Certbot stores challenge webroot files
LETSENCRYPT_WEBROOT_PATH: "/var/lib/letsencrypt/"
# Letsencrypt
LETSENCRYPT_WEBROOT_PATH: "/var/lib/letsencrypt/" # Path where Certbot stores challenge webroot files
LETSENCRYPT_BASE_PATH: "/etc/letsencrypt/" # Base directory containing Certbot configuration, account data, and archives
LETSENCRYPT_LIVE_PATH: "{{ LETSENCRYPT_BASE_PATH }}live/" # Symlink directory for the current active certificate and private key
# Base directory containing Certbot configuration, account data, and archives
LETSENCRYPT_BASE_PATH: "/etc/letsencrypt/"
# Symlink directory for the current active certificate and private key
LETSENCRYPT_LIVE_PATH: "{{ LETSENCRYPT_BASE_PATH }}live/"
## Docker Role Specific Parameters
DOCKER_RESTART_POLICY: "unless-stopped"
DOCKER_VARS_FILE: "{{ playbook_dir }}/roles/docker-compose/vars/docker-compose.yml"
## Docker
DOCKER_RESTART_POLICY: "unless-stopped" # Default restart parameter for docker containers
DOCKER_VARS_FILE: "{{ playbook_dir }}/roles/docker-compose/vars/docker-compose.yml" # File containing docker compose variables used by other services
DOCKER_WHITELISTET_ANON_VOLUMES: [] # Volumes which should be ignored during docker anonymous health check
# Async Configuration
ASYNC_ENABLED: "{{ not MODE_DEBUG | bool }}" # Activate async, deactivated for debugging
ASYNC_TIME: "{{ 300 if ASYNC_ENABLED | bool else omit }}" # Run for mnax 5min
ASYNC_TIME: "{{ 300 if ASYNC_ENABLED | bool else omit }}" # Run for max 5min
ASYNC_POLL: "{{ 0 if ASYNC_ENABLED | bool else 10 }}" # Don't wait for task
# default value if not set via CLI (-e) or in playbook vars
@@ -83,18 +76,15 @@ _applications_nextcloud_oidc_flavor: >-
False,
'oidc_login'
if applications
| get_app_conf('web-app-nextcloud','features.ldap',False, True)
else 'sociallogin'
| get_app_conf('web-app-nextcloud','features.ldap',False, True, True)
else 'sociallogin',
True
)
}}
# Systemctl
SYS_TIMER_SUFFIX: ".{{ SOFTWARE_NAME | lower }}.timer"
SYS_SERVICE_SUFFIX: ".{{ SOFTWARE_NAME | lower }}.service"
# Role-based access control
# @See https://en.wikipedia.org/wiki/Role-based_access_control
RBAC:
GROUP:
NAME: "/roles" # Name of the group which holds the RBAC roles
CLAIM: "groups" # Name of the claim containing the RBAC groups
CLAIM: "groups" # Name of the claim containing the RBAC groups

View File

@@ -1,9 +1,10 @@
# Mode
# The following modes can be combined with each other
MODE_TEST: false # Executes test routines instead of productive routines
MODE_UPDATE: true # Executes updates
MODE_BACKUP: true # Activates the backup before the update procedure
MODE_CLEANUP: true # Cleanup unused files and configurations
MODE_DEBUG: false # This enables debugging in ansible and in the apps, You SHOULD NOT enable this on production servers
MODE_RESET: false # Cleans up all Infinito.Nexus files. It's necessary to run the whole playbook (and not partial roles) when using this function.
MODE_DUMMY: false # Executes dummy/test routines instead of productive routines
MODE_UPDATE: true # Executes updates
MODE_DEBUG: false # This enables debugging in ansible and in the apps, You SHOULD NOT enable this on production servers
MODE_RESET: false # Cleans up all Infinito.Nexus files. It's necessary to run the whole playbook (and not partial roles) when using this function.
MODE_CLEANUP: true # Cleanup unused files and configurations
MODE_ASSERT: "{{ MODE_DEBUG | bool }}" # Executes validation tasks during the run.
MODE_BACKUP: true # Executes the Backup before the deployment

View File

@@ -0,0 +1,8 @@
# Email Configuration
DEFAULT_SYSTEM_EMAIL:
DOMAIN: "{{ PRIMARY_DOMAIN }}"
HOST: "mail.{{ PRIMARY_DOMAIN }}"
PORT: 465
TLS: true # true for TLS and false for SSL
START_TLS: false
SMTP: true

View File

@@ -1,9 +0,0 @@
# Email Configuration
default_system_email:
domain: "{{ PRIMARY_DOMAIN }}"
host: "mail.{{ PRIMARY_DOMAIN }}"
port: 465
tls: true # true for TLS and false for SSL
start_tls: false
smtp: true
# password: # Needs to be defined in inventory file

View File

@@ -1,38 +0,0 @@
# System maintenance Services
## Timeouts to wait for other services to stop
system_maintenance_lock_timeout_cleanup_services: "15min"
system_maintenance_lock_timeout_storage_optimizer: "10min"
system_maintenance_lock_timeout_backup_services: "1h"
system_maintenance_lock_timeout_heal_docker: "30min"
system_maintenance_lock_timeout_update_docker: "2min"
system_maintenance_lock_timeout_restart_docker: "{{system_maintenance_lock_timeout_update_docker}}"
## Services
### Defined Services for Backup Tasks
system_maintenance_backup_services:
- "sys-bkp-docker-2-loc"
- "svc-bkp-rmt-2-loc"
- "svc-bkp-loc-2-usb"
- "sys-bkp-docker-2-loc-everything"
### Defined Services for System Cleanup
system_maintenance_cleanup_services:
- "sys-cln-backups"
- "sys-cln-disc-space"
- "sys-cln-faild-bkps"
### Services that Manipulate the System
system_maintenance_manipulation_services:
- "sys-rpr-docker-soft"
- "update-docker"
- "svc-opt-ssd-hdd"
- "sys-rpr-docker-hard"
## Total System Maintenance Services
system_maintenance_services: "{{ system_maintenance_backup_services + system_maintenance_cleanup_services + system_maintenance_manipulation_services }}"
### Define Variables for Docker Volume Health services
whitelisted_anonymous_docker_volumes: []

View File

@@ -29,4 +29,31 @@ NGINX:
IMAGE: "/tmp/cache_nginx_image/" # Directory which nginx uses to cache images
USER: "http" # Default nginx user in ArchLinux
# Effective CPUs (float) across proxy and the current app
WEBSERVER_CPUS_EFFECTIVE: >-
{{
[
(applications | resource_filter('svc-prx-openresty', 'cpus', service_name | default(''), RESOURCE_CPUS)) | float,
(applications | resource_filter(application_id, 'cpus', service_name | default(''), RESOURCE_CPUS)) | float
] | min
}}
# Nginx requires an integer for worker_processes:
# - if cpus < 1 → 1
# - else → floor to int
WEBSERVER_WORKER_PROCESSES: >-
{{
1 if (WEBSERVER_CPUS_EFFECTIVE | float) < 1
else (WEBSERVER_CPUS_EFFECTIVE | float | int)
}}
# worker_connections from pids_limit (use the smaller one), with correct key/defaults
WEBSERVER_WORKER_CONNECTIONS: >-
{{
[
(applications | resource_filter('svc-prx-openresty', 'pids_limit', service_name | default(''), RESOURCE_PIDS_LIMIT)) | int,
(applications | resource_filter(application_id, 'pids_limit', service_name | default(''), RESOURCE_PIDS_LIMIT)) | int
] | min
}}
# @todo It probably makes sense to distinguish between target and source mount path, so that the config files can be stored in the openresty volumes folder

View File

@@ -0,0 +1,9 @@
# Path Variables for Key Directories and Scripts
PATH_ADMINISTRATOR_HOME: "/home/administrator/"
PATH_ADMINISTRATOR_SCRIPTS: "/opt/scripts/"
PATH_SYSTEMCTL_SCRIPTS: "{{ [ PATH_ADMINISTRATOR_SCRIPTS, 'systemctl' ] | path_join }}"
PATH_DOCKER_COMPOSE_INSTANCES: "/opt/docker/"
PATH_SYSTEM_LOCK_SCRIPT: "/opt/scripts/sys-lock.py"
PATH_SYSTEM_SERVICE_DIR: "/etc/systemd/system"
PATH_DOCKER_COMPOSE_PULL_LOCK_DIR: "/run/ansible/compose-pull/"

View File

@@ -1,6 +0,0 @@
# Path Variables for Key Directories and Scripts
PATH_ADMINISTRATOR_HOME: "/home/administrator/"
PATH_ADMINISTRATOR_SCRIPTS: "/opt/scripts/"
PATH_DOCKER_COMPOSE_INSTANCES: "/opt/docker/"
PATH_SYSTEM_LOCK_SCRIPT: "/opt/scripts/sys-lock.py"

View File

@@ -0,0 +1,53 @@
# Services
## Meta
SYS_SERVICE_SUFFIX: ".{{ SOFTWARE_NAME | lower }}.service"
## Names
SYS_SERVICE_CLEANUP_BACKUPS: "{{ 'sys-ctl-cln-bkps' | get_service_name(SOFTWARE_NAME) }}"
SYS_SERVICE_CLEANUP_BACKUPS_FAILED: "{{ 'sys-ctl-cln-faild-bkps' | get_service_name(SOFTWARE_NAME) }}"
SYS_SERVICE_CLEANUP_ANONYMOUS_VOLUMES: "{{ 'sys-ctl-cln-anon-volumes' | get_service_name(SOFTWARE_NAME) }}"
SYS_SERVICE_CLEANUP_DISC_SPACE: "{{ 'sys-ctl-cln-disc-space' | get_service_name(SOFTWARE_NAME) }}"
SYS_SERVICE_OPTIMIZE_DRIVE: "{{ 'svc-opt-ssd-hdd' | get_service_name(SOFTWARE_NAME) }}"
SYS_SERVICE_BACKUP_RMT_2_LOC: "{{ 'svc-bkp-rmt-2-loc' | get_service_name(SOFTWARE_NAME) }}"
SYS_SERVICE_BACKUP_DOCKER_2_LOC: "{{ 'sys-ctl-bkp-docker-2-loc' | get_service_name(SOFTWARE_NAME) }}"
SYS_SERVICE_REPAIR_DOCKER_SOFT: "{{ 'sys-ctl-rpr-docker-soft' | get_service_name(SOFTWARE_NAME) }}"
SYS_SERVICE_REPAIR_DOCKER_HARD: "{{ 'sys-ctl-rpr-docker-hard' | get_service_name(SOFTWARE_NAME) }}"
## On Failure
SYS_SERVICE_ON_FAILURE_COMPOSE: "{{ ('sys-ctl-alm-compose@') | get_service_name(SOFTWARE_NAME, False) }}%n.service"
## Groups
SYS_SERVICE_GROUP_BACKUPS: >
{{ (('sys-ctl-bkp-' | get_category_entries) + ('svc-bkp-' | get_category_entries))
| map('regex_replace', '$', SYS_SERVICE_SUFFIX) | list | sort }}
SYS_SERVICE_GROUP_CLEANUP: >
{{ ('sys-ctl-cln-' | get_category_entries)
| map('regex_replace', '$', SYS_SERVICE_SUFFIX) | list | sort }}
SYS_SERVICE_GROUP_REPAIR: >
{{ ('sys-ctl-rpr-' | get_category_entries)
| map('regex_replace', '$', SYS_SERVICE_SUFFIX) | list | sort }}
SYS_SERVICE_GROUP_OPTIMIZATION: >
{{ ('svc-opt-' | get_category_entries)
| map('regex_replace', '$', SYS_SERVICE_SUFFIX) | list | sort }}
SYS_SERVICE_GROUP_MAINTANANCE: >
{{ ('svc-mtn-' | get_category_entries)
| map('regex_replace', '$', SYS_SERVICE_SUFFIX) | list | sort }}
## Collection of services to manipulate the system
SYS_SERVICE_GROUP_MANIPULATION: >
{{
(
SYS_SERVICE_GROUP_BACKUPS +
SYS_SERVICE_GROUP_CLEANUP +
SYS_SERVICE_GROUP_REPAIR +
SYS_SERVICE_GROUP_OPTIMIZATION +
SYS_SERVICE_GROUP_MAINTANANCE
) | sort
}}

View File

@@ -1,29 +0,0 @@
## Schedule for Health Checks
on_calendar_health_btrfs: "*-*-* 00:00:00" # Check once per day the btrfs for errors
on_calendar_health_journalctl: "*-*-* 00:00:00" # Check once per day the journalctl for errors
on_calendar_health_disc_space: "*-*-* 06,12,18,00:00:00" # Check four times per day if there is sufficient disc space
on_calendar_health_docker_container: "*-*-* {{ HOURS_SERVER_AWAKE }}:00:00" # Check once per hour if the docker containers are healthy
on_calendar_health_docker_volumes: "*-*-* {{ HOURS_SERVER_AWAKE }}:15:00" # Check once per hour if the docker volumes are healthy
on_calendar_health_csp_crawler: "*-*-* {{ HOURS_SERVER_AWAKE }}:30:00" # Check once per hour if all CSP requirements are fulfilled and resources are available
on_calendar_health_nginx: "*-*-* {{ HOURS_SERVER_AWAKE }}:45:00" # Check once per hour if all webservices are available
on_calendar_health_msmtp: "*-*-* 00:00:00" # Check once per day SMTP Server
## Schedule for Cleanup Tasks
on_calendar_cleanup_backups: "*-*-* 00,06,12,18:30:00" # Cleanup backups every 6 hours, MUST be called before disc space cleanup
on_calendar_cleanup_disc_space: "*-*-* 07,13,19,01:30:00" # Cleanup disc space every 6 hours
on_calendar_cleanup_certs: "*-*-* 12,00:45:00" # Deletes and revokes unused certs
## Schedule for Backup Tasks
on_calendar_backup_docker_to_local: "*-*-* 03:30:00"
on_calendar_backup_remote_to_local: "*-*-* 21:30:00"
## Schedule for Maintenance Tasks
on_calendar_heal_docker: "*-*-* {{ HOURS_SERVER_AWAKE }}:30:00" # Heal unhealthy docker instances once per hour
on_calendar_renew_lets_encrypt_certificates: "*-*-* 12,00:30:00" # Renew Mailu certificates twice per day
on_calendar_deploy_certificates: "*-*-* 13,01:30:00" # Deploy letsencrypt certificates twice per day to docker containers
on_calendar_msi_keyboard_color: "*-*-* *:*:00" # Change the keyboard color every minute
on_calendar_cleanup_failed_docker: "*-*-* 12:00:00" # Clean up failed docker backups every noon
on_calendar_btrfs_auto_balancer: "Sat *-*-01..07 00:00:00" # Execute btrfs auto balancer every first Saturday of a month
on_calendar_restart_docker: "Sun *-*-* 08:00:00" # Restart docker instances every Sunday at 8:00 AM
on_calendar_nextcloud: "22" # Do nextcloud maintenance between 22:00 and 02:00

View File

@@ -0,0 +1,51 @@
# Service Timers
## Meta
SYS_TIMER_ALL_ENABLED: "{{ MODE_DEBUG }}" # Runtime Variables for Process Control - Activates all timers, independent of whether the handlers have been triggered
## Server Tact Variables
HOURS_SERVER_AWAKE: "6..23" # Hours in which the server is "awake" (100% working). The rest of the time is reserved for maintenance
RANDOMIZED_DELAY_SEC: "5min" # Random delay for systemd timers to avoid peak loads.
## Timeouts for all services
SYS_TIMEOUT_DOCKER_RPR_HARD: "10min"
SYS_TIMEOUT_DOCKER_RPR_SOFT: "{{ SYS_TIMEOUT_DOCKER_RPR_HARD }}"
SYS_TIMEOUT_CLEANUP_SERVICES: "15min"
SYS_TIMEOUT_DOCKER_UPDATE: "20min"
SYS_TIMEOUT_STORAGE_OPTIMIZER: "{{ SYS_TIMEOUT_DOCKER_UPDATE }}"
SYS_TIMEOUT_BACKUP_SERVICES: "60min"
## On Calendar
### Schedule for health checks
SYS_SCHEDULE_HEALTH_BTRFS: "*-*-* 00:00:00" # Check once per day the btrfs for errors
SYS_SCHEDULE_HEALTH_JOURNALCTL: "*-*-* 00:00:00" # Check once per day the journalctl for errors
SYS_SCHEDULE_HEALTH_DISC_SPACE: "*-*-* 06,12,18,00:00:00" # Check four times per day if there is sufficient disc space
SYS_SCHEDULE_HEALTH_DOCKER_CONTAINER: "*-*-* {{ HOURS_SERVER_AWAKE }}:00:00" # Check once per hour if the docker containers are healthy
SYS_SCHEDULE_HEALTH_DOCKER_VOLUMES: "*-*-* {{ HOURS_SERVER_AWAKE }}:00:00" # Check once per hour if the docker volumes are healthy
SYS_SCHEDULE_HEALTH_CSP_CRAWLER: "*-*-* {{ HOURS_SERVER_AWAKE }}:00:00" # Check once per hour if all CSP requirements are fulfilled and resources are available
SYS_SCHEDULE_HEALTH_NGINX: "*-*-* {{ HOURS_SERVER_AWAKE }}:00:00" # Check once per hour if all webservices are available
SYS_SCHEDULE_HEALTH_MSMTP: "*-*-* 00:00:00" # Check once per day SMTP Server
### Schedule for cleanup tasks
SYS_SCHEDULE_CLEANUP_CERTS: "*-*-* 20:00" # Deletes and revokes unused certs once per day
SYS_SCHEDULE_CLEANUP_FAILED_BACKUPS: "*-*-* 21:00" # Clean up failed docker backups once per day
SYS_SCHEDULE_CLEANUP_BACKUPS: "*-*-* 22:00" # Cleanup backups once per day, MUST be called before disc space cleanup
SYS_SCHEDULE_CLEANUP_DISC_SPACE: "*-*-* 23:00" # Cleanup disc space once per day
### Schedule for repair services
SYS_SCHEDULE_REPAIR_BTRFS_AUTO_BALANCER: "Sat *-*-01..07 00:00:00" # Execute btrfs auto balancer every first Saturday of a month
SYS_SCHEDULE_REPAIR_DOCKER_HARD: "Sun *-*-* 00:00:00" # Restart docker instances every Sunday
### Schedule for backup tasks
SYS_SCHEDULE_BACKUP_REMOTE_TO_LOCAL: "*-*-* 00:30:00" # Pull Backup of the previous day
SYS_SCHEDULE_BACKUP_DOCKER_TO_LOCAL: "*-*-* 01:00:00" # Backup the current day
### Schedule for Maintenance Tasks
SYS_SCHEDULE_MAINTANANCE_LETSENCRYPT_RENEW: "*-*-* 10,22:00:00" # Renew Mailu certificates twice per day
SYS_SCHEDULE_MAINTANANCE_LETSENCRYPT_DEPLOY: "*-*-* 11,23:00:00" # Deploy letsencrypt certificates twice per day to docker containers
SYS_SCHEDULE_MAINTANANCE_NEXTCLOUD: "21" # Do nextcloud maintenance between 21:00 and 01:00
### Animation
SYS_SCHEDULE_ANIMATION_KEYBOARD_COLOR: "*-*-* *:*:00" # Change the keyboard color every minute

View File

@@ -10,7 +10,7 @@ defaults_networks:
# /28 Networks, 14 Usable Ip Addresses
web-app-akaunting:
subnet: 192.168.101.0/28
web-app-attendize:
web-app-confluence:
subnet: 192.168.101.16/28
web-app-baserow:
subnet: 192.168.101.32/28
@@ -34,8 +34,8 @@ defaults_networks:
subnet: 192.168.101.176/28
web-app-listmonk:
subnet: 192.168.101.192/28
# Free:
# subnet: 192.168.101.208/28
web-app-jira:
subnet: 192.168.101.208/28
web-app-matomo:
subnet: 192.168.101.224/28
web-app-mastodon:
@@ -48,7 +48,7 @@ defaults_networks:
subnet: 192.168.102.16/28
web-app-moodle:
subnet: 192.168.102.32/28
web-app-mybb:
web-app-bookwyrm:
subnet: 192.168.102.48/28
web-app-nextcloud:
subnet: 192.168.102.64/28
@@ -84,11 +84,11 @@ defaults_networks:
subnet: 192.168.103.64/28
web-app-syncope:
subnet: 192.168.103.80/28
web-app-collabora:
web-svc-collabora:
subnet: 192.168.103.96/28
web-svc-simpleicons:
subnet: 192.168.103.112/28
web-app-libretranslate:
web-svc-libretranslate:
subnet: 192.168.103.128/28
web-app-pretix:
subnet: 192.168.103.144/28
@@ -96,6 +96,24 @@ defaults_networks:
subnet: 192.168.103.160/28
web-svc-logout:
subnet: 192.168.103.176/28
web-app-chess:
subnet: 192.168.103.192/28
web-app-magento:
subnet: 192.168.103.208/28
web-app-bridgy-fed:
subnet: 192.168.103.224/28
web-app-xwiki:
subnet: 192.168.103.240/28
web-app-openwebui:
subnet: 192.168.104.0/28
web-app-flowise:
subnet: 192.168.104.16/28
web-app-minio:
subnet: 192.168.104.32/28
web-svc-coturn:
subnet: 192.168.104.48/28
web-app-mini-qr:
subnet: 192.168.104.64/28
# /24 Networks / 254 Usable Clients
web-app-bigbluebutton:
@@ -108,3 +126,5 @@ defaults_networks:
subnet: 192.168.201.0/24
svc-db-openldap:
subnet: 192.168.202.0/24
svc-ai-ollama:
subnet: 192.168.203.0/24 # Big network to bridge applications into ai

View File

@@ -2,12 +2,12 @@ ports:
# Ports which are exposed to localhost
localhost:
database:
svc-db-postgres: 5432
svc-db-mariadb: 3306
svc-db-postgres: 5432
svc-db-mariadb: 3306
# https://developer.mozilla.org/de/docs/Web/API/WebSockets_API
websocket:
web-app-mastodon: 4001
web-app-espocrm: 4002
web-app-mastodon: 4001
web-app-espocrm: 4002
oauth2_proxy:
web-app-phpmyadmin: 4181
web-app-lam: 4182
@@ -26,7 +26,7 @@ ports:
web-app-gitea: 8002
web-app-wordpress: 8003
web-app-mediawiki: 8004
web-app-mybb: 8005
web-app-confluence: 8005
web-app-yourls: 8006
web-app-mailu: 8007
web-app-elk: 8008
@@ -36,7 +36,7 @@ ports:
web-app-funkwhale: 8012
web-app-roulette-wheel: 8013
web-app-joomla: 8014
web-app-attendize: 8015
web-app-jira: 8015
web-app-pgadmin: 8016
web-app-baserow: 8017
web-app-matomo: 8018
@@ -50,7 +50,7 @@ ports:
web-app-moodle: 8026
web-app-taiga: 8027
web-app-friendica: 8028
web-app-port-ui: 8029
web-app-desktop: 8029
web-app-bluesky_api: 8030
web-app-bluesky_web: 8031
web-app-keycloak: 8032
@@ -63,26 +63,47 @@ ports:
web-app-navigator: 8039
web-app-espocrm: 8040
web-app-syncope: 8041
web-app-collabora: 8042
web-svc-collabora: 8042
web-app-mobilizon: 8043
web-svc-simpleicons: 8044
web-app-libretranslate: 8045
web-svc-libretranslate: 8045
web-app-pretix: 8046
web-app-mig: 8047
web-svc-logout: 8048
web-app-bookwyrm: 8049
web-app-chess: 8050
web-app-bluesky_view: 8051
web-app-magento: 8052
web-app-bridgy-fed: 8053
web-app-xwiki: 8054
web-app-openwebui: 8055
web-app-flowise: 8056
web-app-minio_api: 8057
web-app-minio_console: 8058
web-app-mini-qr: 8059
web-app-bigbluebutton: 48087 # This port is predefined by bbb. @todo Try to change this to a 8XXX port
public:
# The following ports should be changed to 22 on the subdomain via stream mapping
ssh:
web-app-gitea: 2201
web-app-gitlab: 2202
web-app-gitea: 2201
web-app-gitlab: 2202
ldaps:
svc-db-openldap: 636
stun:
web-app-bigbluebutton: 3478 # Not sure if it's right placed here or if it should be moved to localhost section
web-app-nextcloud: 3479
turn:
web-app-bigbluebutton: 5349 # Not sure if it's right placed here or if it should be moved to localhost section
web-app-nextcloud: 5350 # Not used yet
svc-db-openldap: 636
stun_turn:
web-app-bigbluebutton: 3478 # Not sure whether this belongs here or should be moved to the localhost section
# Occupied by BBB: 3479
web-app-nextcloud: 3480
web-svc-coturn: 3481
stun_turn_tls:
web-app-bigbluebutton: 5349 # Not sure whether this belongs here or should be moved to the localhost section
web-app-nextcloud: 5350 # Not used yet
web-svc-coturn: 5351
federation:
web-app-matrix_synapse: 8448
relay_port_ranges:
web-svc-coturn_start: 20000
web-svc-coturn_end: 39999
web-app-bigbluebutton_start: 40000
web-app-bigbluebutton_end: 49999
web-app-nextcloud_start: 50000
web-app-nextcloud_end: 59999

View File

@@ -7,33 +7,40 @@
#############################################
# @see https://en.wikipedia.org/wiki/OpenID_Connect
## Helper Variables:
_oidc_client_realm: "{{ OIDC.CLIENT.REALM if OIDC.CLIENT is defined and OIDC.CLIENT.REALM is defined else SOFTWARE_NAME | lower }}"
_oidc_url: "{{
( OIDC.URL
if (OIDC is defined and OIDC.URL is defined)
else WEB_PROTOCOL ~ '://' ~ (domains | get_domain('web-app-keycloak'))
).rstrip('/')
}}"
_oidc_client_issuer_url: "{{ _oidc_url ~ '/realms/' ~ _oidc_client_realm }}"
_oidc_client_id: "{{ OIDC.CLIENT.ID if OIDC.CLIENT is defined and OIDC.CLIENT.ID is defined else SOFTWARE_NAME | lower }}"
# Helper Variables:
_oidc_client_realm: "{{ OIDC.CLIENT.REALM if OIDC.CLIENT is defined and OIDC.CLIENT.REALM is defined else SOFTWARE_NAME | lower }}"
_oidc_url: "{{
( OIDC.URL
if (OIDC is defined and OIDC.URL is defined)
else domains | get_url('web-app-keycloak', WEB_PROTOCOL)
).rstrip('/')
}}"
_oidc_client_issuer_url: "{{ _oidc_url ~ '/realms/' ~ _oidc_client_realm }}"
_oidc_client_id: "{{ OIDC.CLIENT.ID if OIDC.CLIENT is defined and OIDC.CLIENT.ID is defined else SOFTWARE_NAME | lower }}"
_oidc_account_url: "{{ _oidc_client_issuer_url ~ '/account' }}"
_oidc_protocol_oidc: "{{ _oidc_client_issuer_url ~ '/protocol/openid-connect' }}"
# Definition
defaults_oidc:
URL: "{{ _oidc_url }}"
CLIENT:
ID: "{{ _oidc_client_id }}" # Client identifier, typically matching your primary domain
# secret: # Client secret for authenticating with the OIDC provider (set in the inventory file). Recommended: more than 32 characters
REALM: "{{ _oidc_client_realm }}" # The realm to which the client belongs in the OIDC provider
ISSUER_URL: "{{ _oidc_client_issuer_url }}" # Base URL of the OIDC provider (issuer)
DISCOVERY_DOCUMENT: "{{ _oidc_client_issuer_url ~ '/.well-known/openid-configuration' }}" # URL for fetching the provider's configuration details
AUTHORIZE_URL: "{{ _oidc_client_issuer_url ~ '/protocol/openid-connect/auth' }}" # Endpoint to start the authorization process
TOKEN_URL: "{{ _oidc_client_issuer_url ~ '/protocol/openid-connect/token' }}" # Endpoint to exchange authorization codes for tokens
USER_INFO_URL: "{{ _oidc_client_issuer_url ~ '/protocol/openid-connect/userinfo' }}" # Endpoint to retrieve user information
LOGOUT_URL: "{{ _oidc_client_issuer_url ~ '/protocol/openid-connect/logout' }}" # Endpoint to log out the user
CHANGE_CREDENTIALS: "{{ _oidc_client_issuer_url ~ '/account/account-security/signing-in' }}" # URL for managing or changing user credentials
CERTS: "{{ _oidc_client_issuer_url ~ '/protocol/openid-connect/certs' }}" # JSON Web Key Set (JWKS)
ID: "{{ _oidc_client_id }}" # Client identifier, typically matching your primary domain
# SECRET: # Client secret for authenticating with the OIDC provider (set in the inventory file). Recommended: more than 32 characters
REALM: "{{ _oidc_client_realm }}" # The realm to which the client belongs in the OIDC provider
ISSUER_URL: "{{ _oidc_client_issuer_url }}" # Base URL of the OIDC provider (issuer)
DISCOVERY_DOCUMENT: "{{ _oidc_client_issuer_url ~ '/.well-known/openid-configuration' }}" # URL for fetching the provider's configuration details
AUTHORIZE_URL: "{{ _oidc_protocol_oidc ~ '/auth' }}" # Endpoint to start the authorization process
TOKEN_URL: "{{ _oidc_protocol_oidc ~ '/token' }}" # Endpoint to exchange authorization codes for tokens
USER_INFO_URL: "{{ _oidc_protocol_oidc ~ '/userinfo' }}" # Endpoint to retrieve user information
LOGOUT_URL: "{{ _oidc_protocol_oidc ~ '/logout' }}" # Endpoint to log out the user
CERTS: "{{ _oidc_protocol_oidc ~ '/certs' }}" # JSON Web Key Set (JWKS)
ACCOUNT:
URL: "{{ _oidc_account_url }}" # Entry point for the user settings console
PROFILE_URL: "{{ _oidc_account_url ~ '/#/personal-info' }}" # Section for managing personal information
SECURITY_URL: "{{ _oidc_account_url ~ '/#/security/signingin' }}" # Section for managing login and security settings
CHANGE_CREDENTIALS: "{{ _oidc_account_url ~ '/account-security/signing-in' }}" # URL for managing or changing user credentials
RESET_CREDENTIALS: "{{ _oidc_client_issuer_url ~ '/login-actions/reset-credentials?client_id=' ~ _oidc_client_id }}" # Password reset url
BUTTON_TEXT: "SSO Login ({{ PRIMARY_DOMAIN | upper }})" # Default button text
BUTTON_TEXT: "SSO Login ({{ PRIMARY_DOMAIN | upper }})" # Default button text
ATTRIBUTES:
# Attribute to identify the user
USERNAME: "preferred_username"

View File

@@ -14,22 +14,22 @@ _ldap_domain: "{{ PRIMARY_DOMAIN }}" # LDAP is just listening to
_ldap_user_id: "uid"
_ldap_filters_users_all: "(|(objectclass=inetOrgPerson))"
ldap:
LDAP:
# Distinguished Names (DN)
dn:
DN:
# -------------------------------------------------------------------------
# Base DN / Suffix
# This is the top-level naming context for your directory, used as the
# default search base for most operations (e.g. adding users, groups).
# Example: “dc=example,dc=com”
root: "{{ LDAP_DN_BASE }}"
administrator:
ROOT: "{{ LDAP_DN_BASE }}"
ADMINISTRATOR:
# -------------------------------------------------------------------------
# Data-Tree Administrator Bind DN
# The DN used to authenticate for regular directory operations under
# the data tree (adding users, modifying attributes, creating OUs, etc.).
# Typically: “cn=admin,dc=example,dc=com”
data: "cn={{ applications['svc-db-openldap'].users.administrator.username }},{{ LDAP_DN_BASE }}"
DATA: "cn={{ applications['svc-db-openldap'].users.administrator.username }},{{ LDAP_DN_BASE }}"
# -------------------------------------------------------------------------
# Config-Tree Administrator Bind DN
@@ -37,9 +37,9 @@ ldap:
# need to load or modify schema, overlays, modules, or other server-
# level settings.
# Typically: “cn=admin,cn=config”
configuration: "cn={{ applications['svc-db-openldap'].users.administrator.username }},cn=config"
CONFIGURATION: "cn={{ applications['svc-db-openldap'].users.administrator.username }},cn=config"
ou:
OU:
# -------------------------------------------------------------------------
# Organizational Units (OUs)
# Pre-created containers in the directory tree to logically separate entries:
@@ -47,9 +47,9 @@ ldap:
# groups: Contains organizational or business groups (e.g., departments, teams).
# roles: Contains application-specific RBAC roles
# (e.g., "cn=app1-user", "cn=yourls-admin").
users: "ou=users,{{ LDAP_DN_BASE }}"
groups: "ou=groups,{{ LDAP_DN_BASE }}"
roles: "ou=roles,{{ LDAP_DN_BASE }}"
USERS: "ou=users,{{ LDAP_DN_BASE }}"
GROUPS: "ou=groups,{{ LDAP_DN_BASE }}"
ROLES: "ou=roles,{{ LDAP_DN_BASE }}"
# -------------------------------------------------------------------------
# Additional Notes
@@ -59,17 +59,17 @@ ldap:
# for ordinary user/group operations, and vice versa.
# Password to access dn.bind
bind_credential: "{{ applications | get_app_conf('svc-db-openldap', 'credentials.administrator_database_password') }}"
server:
domain: "{{ _ldap_name if _ldap_docker_network_enabled else _ldap_domain }}" # Mapping for public or local access
port: "{{ _ldap_server_port }}"
uri: "{{ _ldap_protocol }}://{{ _ldap_name if _ldap_docker_network_enabled else _ldap_domain }}:{{ _ldap_server_port }}"
security: "" #TLS, SSL - Leave empty for none
network:
local: "{{ _ldap_docker_network_enabled }}" # Uses the application configuration to define if local network should be available or not
user:
objects:
structural:
BIND_CREDENTIAL: "{{ applications | get_app_conf('svc-db-openldap', 'credentials.administrator_database_password') }}"
SERVER:
DOMAIN: "{{ _ldap_name if _ldap_docker_network_enabled else _ldap_domain }}" # Mapping for public or local access
PORT: "{{ _ldap_server_port }}"
URI: "{{ _ldap_protocol }}://{{ _ldap_name if _ldap_docker_network_enabled else _ldap_domain }}:{{ _ldap_server_port }}"
SECURITY: "" #TLS, SSL - Leave empty for none
NETWORK:
LOCAL: "{{ _ldap_docker_network_enabled }}" # Uses the application configuration to define if local network should be available or not
USER:
OBJECTS:
STRUCTURAL:
- person # Structural Classes define the core identity of an entry:
# • Specify mandatory attributes (e.g. sn, cn)
# • Each entry must have exactly one structural class
@@ -77,26 +77,26 @@ ldap:
# (e.g. mail, employeeNumber)
- posixAccount # Provides UNIX account attributes (uidNumber, gidNumber,
# homeDirectory)
auxiliary:
nextloud_user: "nextcloudUser" # Auxiliary Classes attach optional attributes without
AUXILIARY:
NEXTCLOUD_USER: "nextcloudUser" # Auxiliary Classes attach optional attributes without
# changing the entry's structural role. Here they add
# nextcloudQuota and nextcloudEnabled for Nextcloud.
ssh_public_key: "ldapPublicKey" # Allows storing SSH public keys for services like Gitea.
attributes:
SSH_PUBLIC_KEY: "ldapPublicKey" # Allows storing SSH public keys for services like Gitea.
ATTRIBUTES:
# Attribute to identify the user
id: "{{ _ldap_user_id }}"
mail: "mail"
fullname: "cn"
firstname: "givenname"
surname: "sn"
ssh_public_key: "sshPublicKey"
nextcloud_quota: "nextcloudQuota"
filters:
users:
login: "(&{{ _ldap_filters_users_all }}({{_ldap_user_id}}=%{{_ldap_user_id}}))"
all: "{{ _ldap_filters_users_all }}"
rbac:
flavors:
ID: "{{ _ldap_user_id }}"
MAIL: "mail"
FULLNAME: "cn"
FIRSTNAME: "givenname"
SURNAME: "sn"
SSH_PUBLIC_KEY: "sshPublicKey"
NEXTCLOUD_QUOTA: "nextcloudQuota"
FILTERS:
USERS:
LOGIN: "(&{{ _ldap_filters_users_all }}({{_ldap_user_id}}=%{{_ldap_user_id}}))"
ALL: "{{ _ldap_filters_users_all }}"
RBAC:
FLAVORS:
# Valid values posixGroup, groupOfNames
- groupOfNames
# - posixGroup

View File

@@ -21,7 +21,7 @@ defaults_service_provider:
if 'web-app-bluesky' in group_names else '' }}
email: "{{ users.contact.username ~ '@' ~ PRIMARY_DOMAIN if 'web-app-mailu' in group_names else '' }}"
mastodon: "{{ '@' ~ users.contact.username ~ '@' ~ domains | get_domain('web-app-mastodon') if 'web-app-mastodon' in group_names else '' }}"
matrix: "{{ '@' ~ users.contact.username ~ ':' ~ domains['web-app-matrix'].synapse if 'web-app-matrix' in group_names else '' }}"
matrix: "{{ '@' ~ users.contact.username ~ ':' ~ applications | get_app_conf('web-app-matrix', 'server_name') if 'web-app-matrix' in group_names else '' }}"
peertube: "{{ '@' ~ users.contact.username ~ '@' ~ domains | get_domain('web-app-peertube') if 'web-app-peertube' in group_names else '' }}"
pixelfed: "{{ '@' ~ users.contact.username ~ '@' ~ domains | get_domain('web-app-pixelfed') if 'web-app-pixelfed' in group_names else '' }}"
phone: "+0 000 000 404"

View File

@@ -1,6 +1,5 @@
backups_folder_path: "/Backups/" # Path to the backups folder
BACKUPS_FOLDER_PATH: "/Backups/" # Path to the backups folder
# Storage Space-Related Configurations
size_percent_maximum_backup: 75 # Maximum storage space in percent for backups
size_percent_cleanup_disc_space: 85 # Threshold for triggering cleanup actions
size_percent_disc_space_warning: 90 # Warning threshold in percent for free disk space
SIZE_PERCENT_MAXIMUM_BACKUP: 75 # Maximum storage space in percent for backups
SIZE_PERCENT_CLEANUP_DISC_SPACE: 85 # Threshold for triggering cleanup actions

group_vars/all/17_ai.yml Normal file
View File

@@ -0,0 +1,3 @@
# URL of Local Ollama Container
OLLAMA_BASE_LOCAL_URL: "http://{{ applications | get_app_conf('svc-ai-ollama', 'docker.services.ollama.name') }}:{{ applications | get_app_conf('svc-ai-ollama', 'docker.services.ollama.port') }}"
OLLAMA_LOCAL_ENABLED: "{{ applications | get_app_conf(application_id, 'features.local_ai') }}"

View File

@@ -0,0 +1,47 @@
# Host resources
RESOURCE_HOST_CPUS: "{{ ansible_processor_vcpus | int }}"
RESOURCE_HOST_MEM: "{{ (ansible_memtotal_mb | int) // 1024 }}"
# Reserve for OS
RESOURCE_HOST_RESERVE_CPU: 2
RESOURCE_HOST_RESERVE_MEM: 4
# Available for apps
RESOURCE_AVAIL_CPUS: "{{ (RESOURCE_HOST_CPUS | int) - (RESOURCE_HOST_RESERVE_CPU | int) }}"
RESOURCE_AVAIL_MEM: "{{ (RESOURCE_HOST_MEM | int) - (RESOURCE_HOST_RESERVE_MEM | int) }}"
# Count active docker services (only roles starting with web- or svc-; service counts if enabled==true OR enabled is undefined)
RESOURCE_ACTIVE_DOCKER_CONTAINER_COUNT: >-
{{
applications
| active_docker_container_count(group_names, '^(web-|svc-).*', ensure_min_one=True)
}}
# Per-container fair share (numbers!), later we append 'g' only for the string fields in compose
RESOURCE_CPUS_NUM: >-
{{
[
(
((RESOURCE_AVAIL_CPUS | float) / (RESOURCE_ACTIVE_DOCKER_CONTAINER_COUNT | float))
| round(2)
),
0.5
] | max
}}
RESOURCE_MEM_RESERVATION_NUM: >-
{{
(((RESOURCE_AVAIL_MEM | float) / (RESOURCE_ACTIVE_DOCKER_CONTAINER_COUNT | float)) * 0.7)
| round(1)
}}
RESOURCE_MEM_LIMIT_NUM: >-
{{
(((RESOURCE_AVAIL_MEM | float) / (RESOURCE_ACTIVE_DOCKER_CONTAINER_COUNT | float)) * 1.0)
| round(1)
}}
# Final strings with units for compose defaults (keep numbers above for math elsewhere if needed)
RESOURCE_CPUS: "{{ RESOURCE_CPUS_NUM }}"
RESOURCE_MEM_RESERVATION: "{{ RESOURCE_MEM_RESERVATION_NUM }}g"
RESOURCE_MEM_LIMIT: "{{ RESOURCE_MEM_LIMIT_NUM }}g"
RESOURCE_PIDS_LIMIT: 512
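A worked example with hypothetical host facts (18 vCPUs, 36 GB RAM) and 8 active containers:
#   RESOURCE_AVAIL_CPUS = 18 - 2 = 16        RESOURCE_AVAIL_MEM = 36 - 4 = 32
#   RESOURCE_CPUS_NUM            = max(round(16 / 8, 2), 0.5) = 2.0
#   RESOURCE_MEM_RESERVATION_NUM = round((32 / 8) * 0.7, 1)   = 2.8  -> "2.8g"
#   RESOURCE_MEM_LIMIT_NUM       = round((32 / 8) * 1.0, 1)   = 4.0  -> "4.0g"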

View File

@@ -0,0 +1,53 @@
from __future__ import annotations
from ansible.plugins.lookup import LookupBase
from ansible.errors import AnsibleError
import os
class LookupModule(LookupBase):
"""
Return a cache-busting string based on the LOCAL file's mtime.
Usage (single path → string via Jinja):
{{ lookup('local_mtime_qs', '/path/to/file.css') }}
-> "?version=1712323456"
Options:
param (str): query parameter name (default: "version")
mode (str): "qs" (default) → returns "?<param>=<mtime>"
"epoch" → returns "<mtime>"
Multiple paths (returns list, one result per term):
{{ lookup('local_mtime_qs', '/a.js', '/b.js', param='v') }}
"""
def run(self, terms, variables=None, **kwargs):
if not terms:
return []
param = kwargs.get('param', 'version')
mode = kwargs.get('mode', 'qs')
if mode not in ('qs', 'epoch'):
raise AnsibleError("local_mtime_qs: 'mode' must be 'qs' or 'epoch'")
results = []
for term in terms:
path = os.path.abspath(os.path.expanduser(str(term)))
# Fail fast if path is missing or not a regular file
if not os.path.exists(path):
raise AnsibleError(f"local_mtime_qs: file does not exist: {path}")
if not os.path.isfile(path):
raise AnsibleError(f"local_mtime_qs: not a regular file: {path}")
try:
mtime = int(os.stat(path).st_mtime)
except OSError as e:
raise AnsibleError(f"local_mtime_qs: cannot stat '{path}': {e}")
if mode == 'qs':
results.append(f"?{param}={mtime}")
else: # mode == 'epoch'
results.append(str(mtime))
return results

View File

@@ -6,6 +6,7 @@ __metaclass__ = type
import os
import subprocess
import time
from datetime import datetime
class CertUtils:
_domain_cert_mapping = None
@@ -22,6 +23,30 @@ class CertUtils:
except subprocess.CalledProcessError:
return ""
@staticmethod
def run_openssl_dates(cert_path):
"""
Returns (not_before_ts, not_after_ts) as POSIX timestamps or (None, None) on failure.
"""
try:
output = subprocess.check_output(
['openssl', 'x509', '-in', cert_path, '-noout', '-startdate', '-enddate'],
universal_newlines=True
)
nb, na = None, None
for line in output.splitlines():
line = line.strip()
if line.startswith('notBefore='):
nb = line.split('=', 1)[1].strip()
elif line.startswith('notAfter='):
na = line.split('=', 1)[1].strip()
def _parse(openssl_dt):
# OpenSSL format example: "Oct 10 12:34:56 2025 GMT"
return int(datetime.strptime(openssl_dt, "%b %d %H:%M:%S %Y %Z").timestamp())
return (_parse(nb) if nb else None, _parse(na) if na else None)
except Exception:
return (None, None)
@staticmethod
def extract_sans(cert_text):
dns_entries = []
@@ -59,7 +84,6 @@ class CertUtils:
else:
return domain == san
@classmethod
def build_snapshot(cls, cert_base_path):
snapshot = []
@@ -82,6 +106,17 @@ class CertUtils:
@classmethod
def refresh_cert_mapping(cls, cert_base_path, debug=False):
"""
Build mapping: SAN -> list of entries
entry = {
'folder': str,
'cert_path': str,
'mtime': float,
'not_before': int|None,
'not_after': int|None,
'is_wildcard': bool
}
"""
cert_files = cls.list_cert_files(cert_base_path)
mapping = {}
for cert_path in cert_files:
@@ -90,46 +125,82 @@ class CertUtils:
continue
sans = cls.extract_sans(cert_text)
folder = os.path.basename(os.path.dirname(cert_path))
try:
mtime = os.stat(cert_path).st_mtime
except FileNotFoundError:
mtime = 0.0
nb, na = cls.run_openssl_dates(cert_path)
for san in sans:
if san not in mapping:
mapping[san] = folder
entry = {
'folder': folder,
'cert_path': cert_path,
'mtime': mtime,
'not_before': nb,
'not_after': na,
'is_wildcard': san.startswith('*.'),
}
mapping.setdefault(san, []).append(entry)
cls._domain_cert_mapping = mapping
if debug:
print(f"[DEBUG] Refreshed domain-to-cert mapping: {mapping}")
print(f"[DEBUG] Refreshed domain-to-cert mapping (counts): "
f"{ {k: len(v) for k, v in mapping.items()} }")
@classmethod
def ensure_cert_mapping(cls, cert_base_path, debug=False):
if cls._domain_cert_mapping is None or cls.snapshot_changed(cert_base_path):
cls.refresh_cert_mapping(cert_base_path, debug)
@staticmethod
def _score_entry(entry):
"""
Return tuple used for sorting newest-first:
(not_before or -1, mtime)
"""
nb = entry.get('not_before')
mtime = entry.get('mtime', 0.0)
return (nb if nb is not None else -1, mtime)
@classmethod
def find_cert_for_domain(cls, domain, cert_base_path, debug=False):
cls.ensure_cert_mapping(cert_base_path, debug)
exact_match = None
wildcard_match = None
candidates_exact = []
candidates_wild = []
for san, folder in cls._domain_cert_mapping.items():
for san, entries in cls._domain_cert_mapping.items():
if san == domain:
exact_match = folder
break
if san.startswith('*.'):
candidates_exact.extend(entries)
elif san.startswith('*.'):
base = san[2:]
if domain.count('.') == base.count('.') + 1 and domain.endswith('.' + base):
wildcard_match = folder
candidates_wild.extend(entries)
if exact_match:
if debug:
print(f"[DEBUG] Exact match for {domain} found in {exact_match}")
return exact_match
def _pick_newest(entries):
if not entries:
return None
# newest by (not_before, mtime)
best = max(entries, key=cls._score_entry)
return best
if wildcard_match:
if debug:
print(f"[DEBUG] Wildcard match for {domain} found in {wildcard_match}")
return wildcard_match
best_exact = _pick_newest(candidates_exact)
best_wild = _pick_newest(candidates_wild)
if best_exact and debug:
print(f"[DEBUG] Best exact match for {domain}: {best_exact['folder']} "
f"(not_before={best_exact['not_before']}, mtime={best_exact['mtime']})")
if best_wild and debug:
print(f"[DEBUG] Best wildcard match for {domain}: {best_wild['folder']} "
f"(not_before={best_wild['not_before']}, mtime={best_wild['mtime']})")
# Prefer exact if it exists; otherwise wildcard
chosen = best_exact or best_wild
if chosen:
return chosen['folder']
if debug:
print(f"[DEBUG] No certificate folder found for {domain}")
return None
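
A hedged sketch of the new selection behaviour with hypothetical mapping entries: the exact-SAN candidate is preferred over the wildcard one regardless of age, and within one group the highest (not_before, mtime) pair wins:

# Hypothetical entries as produced by refresh_cert_mapping().
exact_entries = [
    {'folder': 'example.com-1', 'not_before': 1_700_000_000, 'mtime': 10.0},
    {'folder': 'example.com-2', 'not_before': 1_760_000_000, 'mtime': 5.0},
]
wildcard_entries = [
    {'folder': 'wildcard.example.com', 'not_before': 1_790_000_000, 'mtime': 99.0},
]

def score(entry):
    nb = entry.get('not_before')
    return (nb if nb is not None else -1, entry.get('mtime', 0.0))

best_exact = max(exact_entries, key=score)      # example.com-2 (newer not_before)
best_wild = max(wildcard_entries, key=score)
chosen = best_exact or best_wild                # exact beats wildcard when present
print(chosen['folder'])                         # -> example.com-2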

View File

@@ -24,7 +24,7 @@ class ConfigEntryNotSetError(AppConfigKeyError):
pass
def get_app_conf(applications, application_id, config_path, strict=True, default=None):
def get_app_conf(applications, application_id, config_path, strict=True, default=None, skip_missing_app=False):
# Path to the schema file for this application
schema_path = os.path.join('roles', application_id, 'schema', 'main.yml')
@@ -133,6 +133,9 @@ def get_app_conf(applications, application_id, config_path, strict=True, default
try:
obj = applications[application_id]
except KeyError:
if skip_missing_app:
# Simply return default instead of failing
return default if default is not None else False
raise AppConfigKeyError(
f"Application ID '{application_id}' not found in applications dict.\n"
f"path_trace: {path_trace}\n"

View File

@@ -142,7 +142,8 @@ class InventoryManager:
"""
if algorithm == "random_hex":
return secrets.token_hex(64)
if algorithm == "random_hex_32":
return secrets.token_hex(32)
if algorithm == "sha256":
return hashlib.sha256(secrets.token_bytes(32)).hexdigest()
if algorithm == "sha1":

View File

@@ -0,0 +1,296 @@
import os
import fnmatch
import re
from typing import Dict, Set, Iterable, Tuple, Optional
import yaml
class RoleDependencyResolver:
_RE_PURE_JINJA = re.compile(r"\s*\{\{\s*[^}]+\s*\}\}\s*$")
def __init__(self, roles_dir: str):
self.roles_dir = roles_dir
# -------------------------- public API --------------------------
def resolve_transitively(
self,
start_roles: Iterable[str],
*,
resolve_include_role: bool = True,
resolve_import_role: bool = True,
resolve_dependencies: bool = True,
resolve_run_after: bool = False,
max_depth: Optional[int] = None,
) -> Set[str]:
to_visit = list(dict.fromkeys(start_roles))
visited: Set[str] = set()
depth: Dict[str, int] = {}
for r in to_visit:
depth[r] = 0
while to_visit:
role = to_visit.pop()
cur_d = depth.get(role, 0)
if role in visited:
continue
visited.add(role)
if max_depth is not None and cur_d >= max_depth:
continue
for dep in self.get_role_dependencies(
role,
resolve_include_role=resolve_include_role,
resolve_import_role=resolve_import_role,
resolve_dependencies=resolve_dependencies,
resolve_run_after=resolve_run_after,
):
if dep not in visited:
to_visit.append(dep)
depth[dep] = cur_d + 1
return visited
def get_role_dependencies(
self,
role_name: str,
*,
resolve_include_role: bool = True,
resolve_import_role: bool = True,
resolve_dependencies: bool = True,
resolve_run_after: bool = False,
) -> Set[str]:
role_path = os.path.join(self.roles_dir, role_name)
if not os.path.isdir(role_path):
return set()
deps: Set[str] = set()
if resolve_include_role or resolve_import_role:
includes, imports = self._scan_tasks(role_path)
if resolve_include_role:
deps |= includes
if resolve_import_role:
deps |= imports
if resolve_dependencies:
deps |= self._extract_meta_dependencies(role_path)
if resolve_run_after:
deps |= self._extract_meta_run_after(role_path)
return deps
# -------------------------- scanning helpers --------------------------
def _scan_tasks(self, role_path: str) -> Tuple[Set[str], Set[str]]:
tasks_dir = os.path.join(role_path, "tasks")
include_roles: Set[str] = set()
import_roles: Set[str] = set()
if not os.path.isdir(tasks_dir):
return include_roles, import_roles
all_roles = self._list_role_dirs(self.roles_dir)
candidates = []
for root, _, files in os.walk(tasks_dir):
for f in files:
if f.endswith(".yml") or f.endswith(".yaml"):
candidates.append(os.path.join(root, f))
for file_path in candidates:
try:
with open(file_path, "r", encoding="utf-8") as f:
docs = list(yaml.safe_load_all(f))
except Exception:
inc, imp = self._tolerant_scan_file(file_path, all_roles)
include_roles |= inc
import_roles |= imp
continue
for doc in docs or []:
if not isinstance(doc, list):
continue
for task in doc:
if not isinstance(task, dict):
continue
if "include_role" in task:
include_roles |= self._extract_from_task(task, "include_role", all_roles)
if "import_role" in task:
import_roles |= self._extract_from_task(task, "import_role", all_roles)
return include_roles, import_roles
def _extract_from_task(self, task: dict, key: str, all_roles: Iterable[str]) -> Set[str]:
roles: Set[str] = set()
spec = task.get(key)
if not isinstance(spec, dict):
return roles
name = spec.get("name")
loop_val = self._collect_loop_values(task)
if loop_val is not None:
for item in self._iter_flat(loop_val):
cand = self._role_from_loop_item(item, name_template=name)
if cand:
roles.add(cand)
if isinstance(name, str) and name.strip() and not self._is_pure_jinja_var(name):
pattern = self._jinja_to_glob(name) if ("{{" in name and "}}" in name) else name
self._match_glob_into(pattern, all_roles, roles)
return roles
if isinstance(name, str) and name.strip():
if "{{" in name and "}}" in name:
if self._is_pure_jinja_var(name):
return roles
pattern = self._jinja_to_glob(name)
self._match_glob_into(pattern, all_roles, roles)
else:
roles.add(name.strip())
return roles
def _collect_loop_values(self, task: dict):
for k in ("loop", "with_items", "with_list", "with_flattened"):
if k in task:
return task[k]
return None
def _iter_flat(self, value):
if isinstance(value, list):
for v in value:
if isinstance(v, list):
for x in v:
yield x
else:
yield v
def _role_from_loop_item(self, item, name_template=None) -> Optional[str]:
tmpl = (name_template or "").strip() if isinstance(name_template, str) else ""
if isinstance(item, str):
if tmpl in ("{{ item }}", "{{item}}") or not tmpl or "item" in tmpl:
return item.strip()
return None
if isinstance(item, dict):
for k in ("role", "name"):
v = item.get(k)
if isinstance(v, str) and v.strip():
if tmpl in (f"{{{{ item.{k} }}}}", f"{{{{item.{k}}}}}") or not tmpl or "item" in tmpl:
return v.strip()
return None
def _match_glob_into(self, pattern: str, all_roles: Iterable[str], out: Set[str]):
if "*" in pattern or "?" in pattern or "[" in pattern:
for r in all_roles:
if fnmatch.fnmatch(r, pattern):
out.add(r)
else:
out.add(pattern)
def test_jinja_mixed_name_glob_matching(self):
"""
include_role:
name: "prefix-{{ item }}-suffix"
loop: [x, y]
Existing roles: prefix-x-suffix, prefix-y-suffix, prefix-z-suffix
Expectation:
- NO raw loop items ('x', 'y') end up as roles
- Glob matching resolves to all three concrete roles
"""
make_role(self.roles_dir, "A")
for rn in ["prefix-x-suffix", "prefix-y-suffix", "prefix-z-suffix"]:
make_role(self.roles_dir, rn)
write(
os.path.join(self.roles_dir, "A", "tasks", "main.yml"),
"""
- name: jinja-mixed glob
include_role:
name: "prefix-{{ item }}-suffix"
loop:
- x
- y
"""
)
r = RoleDependencyResolver(self.roles_dir)
deps = r.get_role_dependencies("A")
# ensure no raw loop items leak into the results
self.assertNotIn("x", deps)
self.assertNotIn("y", deps)
# only the resolved role names should be present
self.assertEqual(
deps,
{"prefix-x-suffix", "prefix-y-suffix", "prefix-z-suffix"},
)
# -------------------------- meta helpers --------------------------
def _extract_meta_dependencies(self, role_path: str) -> Set[str]:
deps: Set[str] = set()
meta_main = os.path.join(role_path, "meta", "main.yml")
if not os.path.isfile(meta_main):
return deps
try:
with open(meta_main, "r", encoding="utf-8") as f:
meta = yaml.safe_load(f) or {}
raw_deps = meta.get("dependencies", [])
if isinstance(raw_deps, list):
for item in raw_deps:
if isinstance(item, str):
deps.add(item.strip())
elif isinstance(item, dict):
r = item.get("role")
if isinstance(r, str) and r.strip():
deps.add(r.strip())
except Exception:
pass
return deps
def _extract_meta_run_after(self, role_path: str) -> Set[str]:
deps: Set[str] = set()
meta_main = os.path.join(role_path, "meta", "main.yml")
if not os.path.isfile(meta_main):
return deps
try:
with open(meta_main, "r", encoding="utf-8") as f:
meta = yaml.safe_load(f) or {}
galaxy_info = meta.get("galaxy_info", {})
run_after = galaxy_info.get("run_after", [])
if isinstance(run_after, list):
for item in run_after:
if isinstance(item, str) and item.strip():
deps.add(item.strip())
except Exception:
pass
return deps
# -------------------------- small utils --------------------------
def _list_role_dirs(self, roles_dir: str) -> list[str]:
return [
d for d in os.listdir(roles_dir)
if os.path.isdir(os.path.join(roles_dir, d))
]
@classmethod
def _is_pure_jinja_var(cls, s: str) -> bool:
return bool(cls._RE_PURE_JINJA.fullmatch(s or ""))
@staticmethod
def _jinja_to_glob(s: str) -> str:
pattern = re.sub(r"\{\{[^}]+\}\}", "*", s or "")
pattern = re.sub(r"\*{2,}", "*", pattern)
return pattern.strip()
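
A short usage sketch, assuming a local roles/ directory: resolve_transitively() walks include_role/import_role statements plus meta dependencies, and _jinja_to_glob() turns templated names such as "prefix-{{ item }}-suffix" into globs so they can match concrete role directories:

resolver = RoleDependencyResolver("roles")

# Templated include_role names become glob patterns:
assert resolver._jinja_to_glob("prefix-{{ item }}-suffix") == "prefix-*-suffix"

# Transitive closure over include_role/import_role and meta dependencies
# (hypothetical start role; run_after edges are opt-in).
deps = resolver.resolve_transitively(
    ["web-app-nextcloud"],
    resolve_run_after=False,
    max_depth=None,
)
print(sorted(deps))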

View File

@@ -1,10 +1,10 @@
- name: Execute {{ SOFTWARE_NAME }} Play
- name: "Execute {{ SOFTWARE_NAME }} Play"
hosts: all
tasks:
- name: "Load 'constructor' tasks"
include_tasks: "tasks/stages/01_constructor.yml"
- name: "Load '{{host_type}}' tasks"
include_tasks: "tasks/stages/02_{{host_type}}.yml"
- name: "Load '{{ host_type }}' tasks"
include_tasks: "tasks/stages/02_{{ host_type }}.yml"
- name: "Load 'destructor' tasks"
include_tasks: "tasks/stages/03_destructor.yml"
become: true

View File

@@ -3,4 +3,7 @@ collections:
- name: community.general
- name: hetzner.hcloud
yay:
- python-simpleaudio
- python-simpleaudio
- python-numpy
pacman:
- ansible

View File

@@ -1,3 +0,0 @@
# Todos
- Use at all applications the ansible role name as application_id
- Implement filter_plugins/get_infinito_path.py

View File

@@ -1,4 +1,9 @@
roles:
docker:
title: "Docker Toolkit"
description: "Generic Docker helpers and utilities (compose wrappers, container tooling)."
icon: "fas fa-docker"
invokable: false
dev:
title: "Software Development Utilties"
invokable: false
@@ -6,26 +11,61 @@ roles:
title: "System"
description: "System near components. Will be automaticly called if necessary from other roles."
invokable: false
alm:
title: "Alerting"
description: "Notification handlers for system events"
icon: "fas fa-bell"
ctl:
title: "Control"
description: "Control layer for system lifecycle management—handling cleanup, monitoring, backups, alerting, maintenance, and repair tasks."
icon: "fas fa-cogs"
invokable: false
cln:
title: "Cleanup"
description: "Roles for cleaning up various system resources—old backups, unused certificates, temporary files, Docker volumes, disk caches, deprecated domains, and more."
icon: "fas fa-trash-alt"
cln:
title: "Cleanup"
description: "Roles for cleaning up various system resources—old backups, unused certificates, temporary files, Docker volumes, disk caches, deprecated domains, and more."
icon: "fas fa-trash-alt"
invokable: false
hlth:
title: "Monitoring"
description: "Roles for system monitoring and health checks—encompassing bot-style automated checks and core low-level monitors for logs, containers, disk usage, and more."
icon: "fas fa-chart-area"
invokable: false
bkp:
title: "Backup & Restore"
description: "Backup strategies & restore procedures"
icon: "fas fa-hdd"
invokable: false
alm:
title: "Alerting"
description: "Notification handlers for system events"
icon: "fas fa-bell"
invokable: false
mtn:
title: "Maintenance"
description: "Maintenance roles for certificates, system upkeep, and recurring operational tasks."
icon: "fas fa-tools"
invokable: false
rpr:
title: "Repair"
description: "Repair and recovery roles—handling hard/soft recovery of Docker, Btrfs balancers, and other low-level system fixes."
icon: "fas fa-wrench"
invokable: false
dns:
title: "DNS Automation"
description: "DNS providers, records, and rDNS management (Cloudflare, Hetzner, etc.)."
icon: "fas fa-network-wired"
invokable: false
hlth:
title: "Monitoring"
description: "Roles for system monitoring and health checks—encompassing bot-style automated checks and core low-level monitors for logs, containers, disk usage, and more."
icon: "fas fa-chart-area"
stk:
title: "Stack"
description: "Stack levels to setup the server"
icon: "fas fa-bars-staggered"
invokable: false
bkp:
title: "Backup & Restore"
description: "Backup strategies & restore procedures"
icon: "fas fa-hdd"
front:
title: "System Frontend Helpers"
description: "Frontend helpers for reverse-proxied apps (injection, shared assets, CDN plumbing)."
icon: "fas fa-wand-magic-sparkles"
invokable: false
inj:
title: "Injection"
description: "Composable HTML injection roles (CSS, JS, logout interceptor, analytics, desktop iframe) for Nginx/OpenResty via sub_filter/Lua with CDN-backed assets."
icon: "fas fa-filter"
invokable: false
update:
title: "Updates & Package Management"
description: "OS & package updates"
@@ -36,11 +76,6 @@ roles:
description: "Roles for installing and configuring hardware drivers—covering printers, graphics, input devices, and other peripheral support."
icon: "fas fa-microchip"
invokable: true
# core:
# title: "Core & System"
# description: "Fundamental system configuration"
# icon: "fas fa-cogs"
# invokable: true
gen:
title: "Generic"
description: "Helper roles & installers (git, locales, timer, etc.)"
@@ -66,20 +101,10 @@ roles:
description: "Utility roles for server-side configuration and management—covering corporate identity provisioning, network helpers, and other service-oriented toolkits."
icon: "fas fa-cogs"
invokable: true
srv:
title: "Server"
description: "General server roles for provisioning and managing server infrastructure—covering web servers, proxy servers, network services, and other backend components."
icon: "fas fa-server"
invokable: false
web:
title: "Webserver"
description: "Web-server roles for installing and configuring Nginx (core, TLS, injection filters, composer modules)."
icon: "fas fa-server"
invokable: false
proxy:
title: "Proxy Server"
description: "Proxy-server roles for virtual-host orchestration and reverse-proxy setups."
icon: "fas fa-project-diagram"
dev:
title: "Developer Utilities"
description: "Developer-centric server utilities and admin toolkits."
icon: "fas fa-code"
invokable: false
web:
title: "Web Infrastructure"
@@ -99,11 +124,6 @@ roles:
title: "Webserver Optimation"
description: "Tools which help to optimize webservers"
invokable: true
net:
title: "Network"
description: "Network setup (DNS, Let's Encrypt HTTP, WireGuard, etc.)"
icon: "fas fa-globe"
invokable: true
svc:
title: "Services"
description: "Infrastructure services like databases"
@@ -123,7 +143,21 @@ roles:
description: "Reverseproxy roles for routing and loadbalancing traffic to backend services"
icon: "fas fa-project-diagram"
invokable: true
net:
title: "Network"
description: "Network setup (DNS, Let's Encrypt HTTP, WireGuard, etc.)"
icon: "fas fa-globe"
invokable: true
ai:
title: "AI Services"
description: "Core AI building blocks—model serving, OpenAI-compatible gateways, vector databases, orchestration, and chat UIs."
icon: "fas fa-brain"
invokable: true
bkp:
title: "Backup Services"
description: "Service-level backup and recovery components—handling automated data snapshots, remote backups, synchronization services, and backup orchestration across databases, files, and containers."
icon: "fas fa-database"
invokable: true
user:
title: "Users & Access"
description: "User accounts & access control"

View File

@@ -1,11 +0,0 @@
# Database Docker with Web Proxy
This role builds on `cmp-db-docker` by adding a reverse-proxy frontend for HTTP access to your database service.
## Features
- **Database Composition**
Leverages the `cmp-db-docker` role to stand up your containerized database (PostgreSQL, MariaDB, etc.) with backups and user management.
- **Reverse Proxy**
Includes the `srv-proxy-6-6-domain` role to configure a proxy (e.g. nginx) for routing HTTP(S) traffic to your database UI or management endpoint.

View File

@@ -1 +0,0 @@
DATABASE_VARS_FILE: "{{ playbook_dir }}/roles/cmp-rdbms/vars/database.yml"

View File

@@ -1,14 +0,0 @@
# run_once_cmp_docker_proxy: deactivated
# Load the proxy first, so that openresty handlers are flushed before the main docker compose
- name: "For '{{ application_id }}': include role srv-proxy-6-6-domain"
include_role:
name: srv-proxy-6-6-domain
vars:
domain: "{{ domains | get_domain(application_id) }}"
http_port: "{{ ports.localhost.http[application_id] }}"
- name: "For '{{ application_id }}': Load cmp-docker-oauth2"
include_role:
name: cmp-docker-oauth2

View File

@@ -1 +0,0 @@
{% include 'roles/cmp-rdbms/templates/services/' + database_type + '.yml.j2' %}

View File

@@ -1,20 +0,0 @@
# Helper variables
_dbtype: "{{ (database_type | d('') | trim) }}"
_database_id: "{{ ('svc-db-' ~ _dbtype) if _dbtype else '' }}"
_database_central_name: "{{ (applications | get_app_conf(_database_id, 'docker.services.' ~ _dbtype ~ '.name', False, '')) if _dbtype else '' }}"
_database_consumer_id: "{{ database_application_id | d(application_id) }}"
_database_consumer_entity_name: "{{ _database_consumer_id | get_entity_name }}"
_database_central_enabled: "{{ (applications | get_app_conf(_database_consumer_id, 'features.central_database', False)) if _dbtype else False }}"
# Definition
database_name: "{{ _database_consumer_entity_name }}"
database_instance: "{{ _database_central_name if _database_central_enabled else database_name }}" # This could lead to bugs at dedicated database @todo cleanup
database_host: "{{ _database_central_name if _database_central_enabled else 'database' }}" # This could lead to bugs at dedicated database @todo cleanup
database_username: "{{ _database_consumer_entity_name }}"
database_password: "{{ applications | get_app_conf(_database_consumer_id, 'credentials.database_password', true) }}"
database_port: "{{ (ports.localhost.database[_database_id] | d('')) if _dbtype else '' }}"
database_env: "{{ docker_compose.directories.env }}{{ database_type }}.env"
database_url_jdbc: "jdbc:{{ database_type if database_type == 'mariadb' else 'postgresql' }}://{{ database_host }}:{{ database_port }}/{{ database_name }}"
database_url_full: "{{ database_type }}://{{ database_username }}:{{ database_password }}@{{ database_host }}:{{ database_port }}/{{ database_name }}"
database_volume: "{{ _database_consumer_entity_name ~ '_' if not _database_central_enabled }}{{ database_host }}"

View File

@@ -19,3 +19,5 @@
template:
src: caffeine.desktop.j2
dest: "{{auto_start_directory}}caffeine.desktop"
- include_tasks: utils/run_once.yml

View File

@@ -1,4 +1,3 @@
- block:
- include_tasks: 01_core.yml
- include_tasks: utils/run_once.yml
when: run_once_desk_gnome_caffeine is not defined

View File

@@ -9,4 +9,4 @@
community.general.pacman:
name: "libreoffice-{{ applications['desk-libreoffice'].flavor }}-{{ item }}"
state: present
loop: "{{libreoffice_languages}}"
loop: "{{ libreoffice_languages }}"

View File

@@ -5,8 +5,8 @@
- name: Link homefolders to cloud
ansible.builtin.file:
src: "{{nextcloud_cloud_directory}}{{item}}"
dest: "{{nextcloud_user_home_directory}}{{item}}"
src: "{{nextcloud_cloud_directory}}{{ item }}"
dest: "{{nextcloud_user_home_directory}}{{ item }}"
owner: "{{ users[desktop_username].username }}"
group: "{{ users[desktop_username].username }}"
state: link

View File

@@ -48,4 +48,6 @@
state: present
create: yes
mode: "0644"
become: false
become: false
- include_tasks: utils/run_once.yml

View File

@@ -1,4 +1,3 @@
- block:
- include_tasks: 01_core.yml
- include_tasks: utils/run_once.yml
when: run_once_desk_ssh is not defined

View File

@@ -1,4 +0,0 @@
---
- name: reload virtualbox kernel modules
become: true
command: vboxreload

View File

@@ -1,8 +1,14 @@
---
- name: Setup locale.gen
template: src=locale.gen dest=/etc/locale.gen
template:
src: locale.gen.j2
dest: /etc/locale.gen
- name: Setup locale.conf
template: src=locale.conf dest=/etc/locale.conf
template:
src: locale.conf.j2
dest: /etc/locale.conf
- name: Generate locales
shell: locale-gen
become: true

View File

@@ -1,2 +0,0 @@
LANG=en_US.UTF-8
LANGUAGE=en_US.UTF-8

View File

@@ -0,0 +1,2 @@
LANG={{ HOST_LL_CC }}.UTF-8
LANGUAGE={{ HOST_LL_CC }}.UTF-8

View File

@@ -127,7 +127,7 @@
#de_BE@euro ISO-8859-15
#de_CH.UTF-8 UTF-8
#de_CH ISO-8859-1
de_DE.UTF-8 UTF-8
#de_DE.UTF-8 UTF-8
#de_DE ISO-8859-1
#de_DE@euro ISO-8859-15
#de_IT.UTF-8 UTF-8

View File

@@ -0,0 +1,4 @@
AUR_HELPER: yay
AUR_BUILDER_USER: aur_builder
AUR_BUILDER_GROUP: wheel
AUR_BUILDER_SUDOERS_PATH: /etc/sudoers.d/11-install-aur_builder

View File

@@ -6,42 +6,53 @@
- dev-git
- dev-base-devel
- name: install yay
- name: Install yay build prerequisites
community.general.pacman:
name:
- base-devel
- patch
state: present
- name: Create the `aur_builder` user
- name: Create the AUR builder user
become: true
ansible.builtin.user:
name: aur_builder
name: "{{ AUR_BUILDER_USER }}"
create_home: yes
group: wheel
group: "{{ AUR_BUILDER_GROUP }}"
- name: Allow the `aur_builder` user to run `sudo pacman` without a password
- name: Allow AUR builder to run pacman without password
become: true
ansible.builtin.lineinfile:
path: /etc/sudoers.d/11-install-aur_builder
line: 'aur_builder ALL=(ALL) NOPASSWD: /usr/bin/pacman'
path: "{{ AUR_BUILDER_SUDOERS_PATH }}"
line: '{{ AUR_BUILDER_USER }} ALL=(ALL) NOPASSWD: /usr/bin/pacman'
create: yes
validate: 'visudo -cf %s'
- name: Clone yay from AUR
become: true
become_user: aur_builder
become_user: "{{ AUR_BUILDER_USER }}"
git:
repo: https://aur.archlinux.org/yay.git
dest: /home/aur_builder/yay
dest: "/home/{{ AUR_BUILDER_USER }}/yay"
clone: yes
update: yes
- name: Build and install yay
become: true
become_user: aur_builder
become_user: "{{ AUR_BUILDER_USER }}"
shell: |
cd /home/aur_builder/yay
cd /home/{{ AUR_BUILDER_USER }}/yay
makepkg -si --noconfirm
args:
creates: /usr/bin/yay
- name: Upgrade the system using yay (AUR packages only)
become: true
become_user: "{{ AUR_BUILDER_USER }}"
kewlfft.aur.aur:
upgrade: yes
use: "{{ AUR_HELPER }}"
aur_only: yes
when: MODE_UPDATE | bool
- include_tasks: utils/run_once.yml

View File

@@ -1,5 +1,3 @@
- block:
- include_tasks: 01_core.yml
- set_fact:
run_once_dev_yay: true
when: run_once_dev_yay is not defined

View File

@@ -20,7 +20,7 @@ To offer a centralized, extensible system for managing containerized application
- **Reset Logic:** Cleans previous Compose project files and data when `MODE_RESET` is enabled.
- **Handlers for Runtime Control:** Automatically builds, sets up, or restarts containers based on handlers.
- **Template-ready Service Files:** Predefined service base and health check templates.
- **Integration Support:** Compatible with `srv-proxy-7-4-core` and other Infinito.Nexus service roles.
- **Integration Support:** Compatible with `sys-svc-proxy` and other Infinito.Nexus service roles.
## Administration Tips

View File

@@ -1,3 +1,3 @@
docker_compose_skipp_file_creation: false # If set to true the file creation will be skipped
docker_pull_git_repository: false # Activates docker repository download and routine
docker_compose_flush_handlers: false # Set to true in the vars/main.yml of the including role to autoflush after docker compose routine
docker_compose_file_creation_enabled: true # If set to false the file creation will be skipped
docker_pull_git_repository: false # Activates docker repository download and routine
docker_compose_flush_handlers: false # Set to true in the vars/main.yml of the including role to autoflush after docker compose routine

View File

@@ -9,14 +9,44 @@
listen:
- docker compose up
- docker compose restart
- docker compose just up
when: MODE_ASSERT | bool
- name: Build docker compose
- name: docker compose pull
shell: |
set -euo pipefail
lock="{{ [ PATH_DOCKER_COMPOSE_PULL_LOCK_DIR, (docker_compose.directories.instance | hash('sha1')) ~ '.lock' ] | path_join }}"
if [ ! -e "$lock" ]; then
mkdir -p "$(dirname "$lock")"
if docker compose config | grep -qE '^[[:space:]]+build:'; then
docker compose build --pull
fi
if docker compose pull --help 2>/dev/null | grep -q -- '--ignore-buildable'; then
docker compose pull --ignore-buildable
else
docker compose pull || true
fi
: > "$lock"
echo "pulled"
fi
args:
chdir: "{{ docker_compose.directories.instance }}"
executable: /bin/bash
register: compose_pull
changed_when: "'pulled' in compose_pull.stdout"
environment:
COMPOSE_HTTP_TIMEOUT: 600
DOCKER_CLIENT_TIMEOUT: 600
when: MODE_UPDATE | bool
listen:
- docker compose up
- docker compose restart
- name: Build docker compose
shell: |
set -euo pipefail
docker compose build || {
echo "Retrying without cache and pulling bases...";
docker compose build --no-cache --pull;
docker compose build --no-cache{{ ' --pull' if MODE_UPDATE | bool else ''}};
}
args:
chdir: "{{ docker_compose.directories.instance }}"
@@ -45,7 +75,6 @@
DOCKER_CLIENT_TIMEOUT: 600
listen:
- docker compose up
- docker compose just up # @todo replace later just up by up when code is refactored, build atm is also listening to up
- name: docker compose restart
command:

View File

@@ -1,3 +1,8 @@
- name: Remove all docker compose pull locks
file:
path: "{{ PATH_DOCKER_COMPOSE_PULL_LOCK_DIR }}"
state: absent
- name: "Load docker container role"
include_role:
name: docker-container

View File

@@ -1,15 +1,18 @@
- name: Set default docker_repository_path
set_fact:
docker_repository_path: "{{docker_compose.directories.services}}repository/"
docker_repository_path: "{{ [ docker_compose.directories.services, 'repository/' ] | path_join }}"
- name: pull docker repository
git:
repo: "{{ docker_repository_address }}"
dest: "{{ docker_repository_path }}"
version: "{{ docker_repository_branch | default('main') }}"
depth: 1
update: yes
recursive: yes
repo: "{{ docker_repository_address }}"
dest: "{{ docker_repository_path }}"
version: "{{ docker_repository_branch | default('main') }}"
single_branch: yes
depth: 1
update: yes
recursive: yes
force: yes
accept_hostkey: yes
notify:
- docker compose build
- docker compose up

View File

@@ -5,7 +5,9 @@
loop:
- "{{ application_id | abs_role_path_by_application_id }}/templates/Dockerfile.j2"
- "{{ application_id | abs_role_path_by_application_id }}/files/Dockerfile"
notify: docker compose up
notify:
- docker compose build
- docker compose up
register: create_dockerfile_result
failed_when:
- create_dockerfile_result is failed
@@ -26,6 +28,21 @@
- env_template is failed
- "'Could not find or access' not in env_template.msg"
- name: "Create (optional) '{{ docker_compose.files.docker_compose_override }}'"
template:
src: "{{ item }}"
dest: "{{ docker_compose.files.docker_compose_override }}"
mode: '770'
force: yes
notify: docker compose up
register: docker_compose_override_template
loop:
- "{{ application_id | abs_role_path_by_application_id }}/templates/docker-compose.override.yml.j2"
- "{{ application_id | abs_role_path_by_application_id }}/files/docker-compose.override.yml"
failed_when:
- docker_compose_override_template is failed
- "'Could not find or access' not in docker_compose_override_template.msg"
- name: "Create (obligatoric) '{{ docker_compose.files.docker_compose }}'"
template:
src: "docker-compose.yml.j2"

View File

@@ -24,7 +24,7 @@
include_tasks: "04_files.yml"
- name: "Ensure that {{ docker_compose.directories.instance }} is up"
include_tasks: "05_ensure_up.yml"
when: not docker_compose_skipp_file_creation | bool
when: docker_compose_file_creation_enabled | bool
- name: "flush docker compose for '{{ application_id }}'"
meta: flush_handlers

View File

@@ -2,7 +2,7 @@
services:
{# Load Database #}
{% if applications | is_docker_service_enabled(application_id, 'database') %}
{% include 'roles/cmp-rdbms/templates/services/main.yml.j2' %}
{% include 'roles/sys-svc-rdbms/templates/services/main.yml.j2' %}
{% endif %}
{# Load Redis #}
{% if applications | is_docker_service_enabled(application_id, 'redis') or applications | get_app_conf(application_id, 'features.oauth2', False) %}

View File

@@ -1,5 +1,6 @@
{# This template needs to be included in docker-compose.yml #}
networks:
{# Central RDMS-Database Network #}
{% if
(applications | get_app_conf(application_id, 'features.central_database', False) and database_type is defined) or
application_id in ['svc-db-mariadb','svc-db-postgres']
@@ -7,6 +8,7 @@ networks:
{{ applications | get_app_conf('svc-db-' ~ database_type, 'docker.network') }}:
external: true
{% endif %}
{# Central LDAP Network #}
{% if
applications | get_app_conf(application_id, 'features.ldap', False) and
applications | get_app_conf('svc-db-openldap', 'network.docker', False)
@@ -14,7 +16,13 @@ networks:
{{ applications | get_app_conf('svc-db-openldap', 'docker.network') }}:
external: true
{% endif %}
{% if not application_id.startswith('svc-db-') %}
{# Central AI Network #}
{% if applications | get_app_conf(application_id, 'features.local_ai', False) %}
{{ applications | get_app_conf('svc-ai-ollama', 'docker.network') }}:
external: true
{% endif %}
{# Default Network #}
{% if not application_id.startswith('svc-db-') and not application_id.startswith('svc-ai-') %}
default:
{% if
application_id in networks.local and
@@ -25,7 +33,7 @@ networks:
ipam:
driver: default
config:
- subnet: {{networks.local[application_id].subnet}}
- subnet: {{ networks.local[application_id].subnet }}
{% endif %}
{% endif %}
{{ "\n" }}

View File

@@ -1,4 +1,6 @@
users:
blackhole:
description: "Everything what will be send to this user will disapear"
username: "blackhole"
username: "blackhole"
roles:
- mail-bot

Some files were not shown because too many files have changed in this diff.