Compare commits

...

125 Commits

Author SHA1 Message Date
445c94788e Refactor: consolidate pkgmgr updates and remove legacy roles
Details:
- Added pkgmgr update task directly in pkgmgr role (pkgmgr pull --all)
- Removed deprecated update-pkgmgr role and references
- Removed deprecated update-pip role and references
- Simplified update-compose by dropping update-pkgmgr include
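A minimal sketch of what the consolidated task could look like. The `pkgmgr pull --all` command and the MODE_UPDATE flag come from this and nearby commits; the task name and exact shape are assumptions:

```yaml
# roles/pkgmgr/tasks — hedged sketch, not the repo's literal task
- name: Pull updates for all pkgmgr-managed repositories
  ansible.builtin.command: pkgmgr pull --all
  when: MODE_UPDATE | bool      # only run during update mode (flag used elsewhere in this repo)
  changed_when: true            # a pull is always treated as a change here
```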

https://chatgpt.com/share/68bbeff1-27a0-800f-bef3-03ab597595fd
2025-09-06 10:46:39 +02:00
aac9704e8b Refactor: remove legacy update-docker role and references
Details:
- Removed update-docker role (README, meta, vars, tasks, script)
- Cleaned references from group_vars, update-compose, and docs
- Adjusted web-app-matrix role (removed @todo pointing to update-docker)
- Updated administrator guide (update-docker no longer mentioned)

Ref: https://chatgpt.com/share/68bbeff1-27a0-800f-bef3-03ab597595fd
2025-09-06 10:32:33 +02:00
a57a5f8828 Refactor: remove Python-based Listmonk upgrade logic and implement upgrade as Ansible task
Details:
- Removed upgrade_listmonk() function and related calls from update-docker script
- Added dedicated Ansible task in web-app-listmonk role to run non-interactive DB/schema upgrade
- Conditional execution via MODE_UPDATE
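A hedged sketch of such a task, assuming the compose service is named `listmonk` and the container ships the standard `./listmonk --upgrade --yes` non-interactive upgrade CLI; the compose-directory variable is an assumption:

```yaml
# roles/web-app-listmonk/tasks — sketch under the assumptions above
- name: Run non-interactive Listmonk DB/schema upgrade
  ansible.builtin.command:
    cmd: docker compose exec -T listmonk ./listmonk --upgrade --yes
    chdir: "{{ docker_compose.directories.instance }}"   # assumed compose dir variable
  when: MODE_UPDATE | bool
```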

Ref: https://chatgpt.com/share/68bbeff1-27a0-800f-bef3-03ab597595fd
2025-09-06 10:25:41 +02:00
90843726de keycloak: update realm mail settings to use smtp_server.json.j2 (SPOT); merge via kc_merge_path; fix display name and SSL handling
See: https://chatgpt.com/share/68bb0b25-96bc-800f-8ff7-9ca8d7c7af11
2025-09-05 18:09:33 +02:00
d25da76117 Solved wrong variable bug 2025-09-05 17:30:08 +02:00
d48a1b3c0a Solved missing variable bugs. Role is not fully implemented; pausing development on it for the moment 2025-09-05 17:07:15 +02:00
2839d2e1a4 Intermediate commit of the Magento implementation 2025-09-05 17:01:13 +02:00
00c99e58e9 Cleaned up bridgy fed 2025-09-04 17:09:35 +02:00
904040589e Added correct variables and health check 2025-09-04 15:13:10 +02:00
9f3d300bca Removed unnecessary handlers 2025-09-04 14:04:53 +02:00
9e253a2d09 Bluesky: Patch hardcoded IPCC_URL and proxy /ipcc
- Added Ansible replace task to override IPCC_URL in geolocation.tsx to same-origin '/ipcc'
- Extended Nginx extra_locations.conf to proxy /ipcc requests to https://bsky.app/ipcc
- Ensures frontend avoids CORS errors when fetching IP geolocation
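The override plausibly reduces to a single `ansible.builtin.replace` task of this shape; the checkout-path variable and handler name are assumptions (the handler pattern mirrors the BAPP_CONFIG_URL commit below):

```yaml
# Hedged sketch of the IPCC_URL patch
- name: Point IPCC_URL at the same-origin /ipcc proxy
  ansible.builtin.replace:
    path: "{{ bluesky_social_app_dir }}/src/state/geolocation.tsx"   # hypothetical var
    regexp: "const IPCC_URL = '[^']+'"
    replace: "const IPCC_URL = '/ipcc'"
  notify: docker compose build
```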

See: https://chatgpt.com/share/68b97be3-0278-800f-9ee0-94389ca3ac0c
2025-09-04 13:45:57 +02:00
49120b0dcf Added more CSP headers 2025-09-04 13:36:35 +02:00
b6f91ab9d3 changed database_user to database_username 2025-09-04 12:45:22 +02:00
77e8e7ed7e Magento 2.4.8 refactor:
- Switch to split containers (markoshust/magento-php:8.2-fpm + magento-nginx:latest)
- Disable central DB; use app-local MariaDB and pin to 11.4
- Composer bootstrap of Magento in php container (Adobe repo keys), idempotent via creates
- Make setup:install idempotent; run as container user 'app'
- Wire OpenSearch (security disabled) and depends_on ordering
- Add credentials schema (adobe_public_key/adobe_private_key)
- Update vars for php/nginx/search containers + MAGENTO_USER
- Remove legacy docs (Administration.md, Upgrade.md)
Context: changes derived from our ChatGPT session about getting Magento 2.4.8 running with MariaDB 11.4.
Conversation: https://chatgpt.com/share/68b8dc30-361c-800f-aa69-88df514cb160
2025-09-04 12:45:03 +02:00
32bc17e0c3 Optimized whitespacing 2025-09-04 12:41:11 +02:00
e294637cb6 Changed db config path attribute 2025-09-04 12:34:13 +02:00
577767bed6 sys-svc-rdbms: Refactor database service templates and add version support for Magento
- Unified Jinja2 variable spacing in tasks and templates
- Introduced database_image and database_version variables in vars/database.yml
- Updated mariadb.yml.j2 and postgres.yml.j2 to use {{ database_image }}:{{ database_version }}
- Ensured env file paths and includes are consistent
- Prepared support for versioned database images (needed for Magento deployment)
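In the compose templates this presumably reduces to a single versioned image reference, along these lines (the service key is an assumption):

```yaml
# templates/mariadb.yml.j2 — minimal sketch
database:
  image: "{{ database_image }}:{{ database_version }}"   # e.g. mariadb:11.4 for Magento
```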

Ref: https://chatgpt.com/share/68b96a9d-c100-800f-856f-cd23d1eda2ed
2025-09-04 12:32:34 +02:00
e77f8da510 Added debug options to mastodon 2025-09-04 11:50:14 +02:00
4738b263ec Added docker_volume_path filter_plugin 2025-09-04 11:49:40 +02:00
0a588023a7 feat(bluesky): fix CORS by serving /config same-origin and pinning BAPP_CONFIG_URL
- Add `server.config_upstream_url` default in `roles/web-app-bluesky/config/main.yml`
  to define upstream for /config (defaults to https://ip.bsky.app/config).
- Introduce front-proxy injection `extra_locations.conf.j2` that:
  - proxies `/config` to the upstream,
  - sets SNI and correct Host header,
  - normalizes CORS headers for same-origin consumption.
- Wire the proxy injection only for the Web domain in
  `roles/web-app-bluesky/tasks/main.yml` via `proxy_extra_configuration`.
- Force fresh social-app checkout and patch
  `src/state/geolocation.tsx` to `const BAPP_CONFIG_URL = '/config'`
  in `roles/web-app-bluesky/tasks/02_social_app.yml`; notify `docker compose build` and `up`.
- Tidy and re-group PDS env in `roles/web-app-bluesky/templates/env.j2` (no functional change).
- Add vars in `roles/web-app-bluesky/vars/main.yml`:
  - `BLUESKY_FRONT_PROXY_CONTENT` (renders the extra locations),
  - `BLUESKY_CONFIG_UPSTREAM_URL` (reads `server.config_upstream_url`).

Security/Scope:
- Only affects the Bluesky web frontend (same-origin `/config`); PDS/API and AppView remain unchanged.

Refs:
- Conversation: https://chatgpt.com/share/68b8dd3a-2100-800f-959e-1495f6320aab
2025-09-04 02:29:10 +02:00
d2fa90774b Added fediverse bridge draft 2025-09-04 02:26:27 +02:00
0e72dcbe36 feat(magento): switch to ghcr.io/alexcheng1982/docker-magento2:2.4.6-p3; update Compose/Env/Tasks/Docs
• Docs: updated to MAGENTO_VOLUME; removed Installation/User_Administration guides
• Compose: volume path → /var/www/html; switched variables to MAGENTO_*/MYSQL_*/OPENSEARCH_*
• Env: new variable set + APACHE_SERVERNAME
• Task: setup:install via docker compose exec (multiline form)
• Schema: removed obsolete credentials definition
Link: https://chatgpt.com/share/68b8dc30-361c-800f-aa69-88df514cb160
2025-09-04 02:25:49 +02:00
4f8ce598a9 Mastodon: allow internal chess host & refactor var names; OpenLDAP: safer get_app_conf
- Add ALLOWED_PRIVATE_ADDRESSES to .env (from svc-db-postgres) to handle 422 Mastodon::PrivateNetworkAddressError
- Switch docker-compose to MASTODON_* variables and align vars/main.yml
- Always run 01_setup.yml during deployment (removed conditional flag)
- OpenLDAP: remove implicit True default on network.local to avoid unintended truthy behavior

Context: chess.infinito.nexus resolved to 192.168.200.30 (private IP) from Mastodon; targeted allowlist unblocks federation lookups.

Ref: https://chat.openai.com/share/REPLACE_WITH_THIS_CONVERSATION_LINK
2025-09-03 21:44:47 +02:00
3769e66d8d Updated CSP for bluesky 2025-09-03 20:55:21 +02:00
33a5fadf67 web-app-chess: fix Corepack/Yarn EACCES and switch to ARG-driven Dockerfile
• Add roles/web-app-chess/files/Dockerfile using build ARGs (CHESS_VERSION, CHESS_REPO_URL, CHESS_REPO_REF, CHESS_ENTRYPOINT_REL, CHESS_ENTRYPOINT_INT, CHESS_APP_DATA_DIR, CONTAINER_PORT). Enable Corepack/Yarn as root in the runtime stage to avoid EACCES on /usr/local/bin symlinks, then drop privileges to 'node'.

• Delete Jinja-based templates/Dockerfile.j2; docker-compose now passes former Jinja vars via build.args. • Update templates/docker-compose.yml.j2 to forward all required build args. • Update config/main.yml: add CSP flag 'script-src-elem: unsafe-inline'.

Ref: https://chatgpt.com/share/68b88d3d-3bd8-800f-9723-e8df0cdc37e2
2025-09-03 20:47:50 +02:00
699a6b6f1e feat(web-app-magento): add Magento role + network/ports
- add role files (docs, vars, config, tasks, schema, templates)

- networks: add web-app-magento 192.168.103.208/28

- ports: add localhost http 8052

Conversation: https://chatgpt.com/share/68b8820f-f864-800f-8819-da509b99cee2
2025-09-03 20:00:01 +02:00
61c29eee60 web-app-chess: build/runtime hardening & feature enablement
Build: use Yarn 4 via Corepack; immutable install with inline builds.

Runtime: enable Corepack as user 'node', use project-local cache (/app/.yarn/cache), add curl; fix ownership.

Entrypoint: generate keys in correct dir; run 'yarn install --immutable --inline-builds' before migrations; wait for Postgres.

Config: enable matomo/css/desktop; notify 'docker compose build' on entrypoint changes.

Docs: rename README title to 'Chess'.

Ref: ChatGPT conversation (2025-09-03) — https://chatgpt.com/share/68b88126-7a6c-800f-acae-ae61ed577f46
2025-09-03 19:56:13 +02:00
d5204fb5c2 Removed unnecessary env loading 2025-09-03 17:41:53 +02:00
751615b1a4 Changed 09_ports.yml to 10_ports.yml 2025-09-03 17:41:14 +02:00
e2993d2912 Added more CSP urls for bluesky 2025-09-03 17:31:29 +02:00
24b6647bfb Corrected variable 2025-09-03 17:30:31 +02:00
d2dc2eab5f web-app-bluesky: refactor role, add Cloudflare DNS integration, split tasks
Changes: add AppView port; add CSP whitelist; new tasks (01_pds, 02_social_app, 03_dns); switch templates to BLUESKY_* vars; update docker-compose and env; TCP healthcheck; remove admin_password from schema.

Conversation context: https://chatgpt.com/share/68b85276-e0ec-800f-90ec-480a1d528593
2025-09-03 16:37:35 +02:00
a1130e33d7 web-app-chess: refactor runtime & entrypoint
- Move entrypoint to files/ and deploy via copy
- Parameterize APP_KEY_FILE, data dir, and entrypoint paths
- Require explicit PORT/PG envs (remove fallbacks)
- Drop stray header from config/main.yml
- Dockerfile: use templated data dir & entrypoint; keep node user
- Compose: set custom image, adjust volume mapping
- env: derive APP_SCHEME from WEB_PROTOCOL; NODE_ENV from ENVIRONMENT
- tasks: add 01_core and simplify main to include it

Ref: https://chatgpt.com/share/68b851c5-4dd8-800f-8e9e-22b985597b8f
2025-09-03 16:34:04 +02:00
df122905eb mailu: include base defaults for oletools (env_file/LD_PRELOAD)
Add base include to oletools service so it inherits env_file (LD_PRELOAD=/usr/lib/libhardened_malloc.so) and other defaults. Fixes crash: PermissionError: '/proc/cpuinfo' during hardened_malloc compatibility probe when LD_PRELOAD was absent. Aligns oletools with other Mailu services.
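A hedged sketch of the compose-template change, assuming the shared base is pulled in via a Jinja include; the include path is hypothetical, only the effect (inheriting `env_file` with LD_PRELOAD) comes from the commit text:

```yaml
# docker-compose.yml.j2 (Mailu) — sketch; include path is an assumption
oletools:
{% include 'templates/base.yml.j2' %}
  # base.yml.j2 contributes env_file (and with it LD_PRELOAD=/usr/lib/libhardened_malloc.so),
  # so the hardened_malloc compatibility probe no longer crashes on /proc/cpuinfo
```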

Refs: ChatGPT discussion – https://chatgpt.com/share/68b837ba-c9cc-800f-b5d9-62b60d6fafd9
2025-09-03 14:42:50 +02:00
d093a22d61 Added correct CSP for JIRA 2025-09-03 11:35:24 +02:00
5e550ce3a3 sys-ctl-rpr-docker-soft: switch to STRICT label mode and adapt tests
- script.py now resolves docker-compose project and working_dir strictly from container labels
- removed container-name fallback logic
- adjusted sys-ctl-hlth-docker-container to include sys-ctl-rpr-docker-soft
- cleaned up sys-svc-docker dependencies
- updated unit tests to mock docker inspect and os.path.isfile for STRICT mode

Conversation: https://chatgpt.com/share/68b80927-b800-800f-a909-0fe8d110fd0e
2025-09-03 11:24:14 +02:00
0ada12e3ca Enabled rpr service on failed health check instead of timer 2025-09-03 10:46:46 +02:00
1a5ce4a7fa web-app-bookwyrm, web-app-confluence:
- Fix BookWyrm email SSL/TLS handling (use ternary without 'not' for clarity)
- Add truststore_enabled flag in Confluence config and vars
- Wire JVM_SUPPORT_RECOMMENDED_ARGS to disable UPM signature check if truststore is disabled
- Add placeholder style.css.j2 for Confluence

See conversation: https://chatgpt.com/share/68b80024-7100-800f-a2fe-ba8b9f5cec05
2025-09-03 10:45:41 +02:00
a9abb3ce5d Added unsafe-eval csp to jira 2025-09-03 09:43:07 +02:00
71ceb339fc Fix Confluence & BookWyrm setup:
- Add docker compose build trigger in docker-compose tasks
- Cleanup svc-prx-openresty vars
- Enable unsafe-inline CSP flags for BookWyrm, Confluence, Jira to allow Atlassian inline scripts
- Generalize CONFLUENCE_HOME usage in vars, env and docker-compose
- Ensure confluence-init.properties written with correct home
- Add JVM_SUPPORT_RECOMMENDED_ARGS to pass atlassian.home
- Update README to reference {{ CONFLUENCE_HOME }}

See: https://chatgpt.com/share/68b7582a-aeb8-800f-a14f-e98c5b4e6c70
2025-09-02 22:49:02 +02:00
61bba3d2ef feat(bookwyrm): production-ready runtime + Redis wiring
- Dockerfile: build & install gunicorn wheels
- compose: run initdb before start; use `python -m gunicorn`
- env: add POSTGRES_* and BookWyrm Redis aliases (BROKER/ACTIVITY/CACHE) + CACHE_URL
- vars: add cache URL, DB indices, and URL aliases for Redis

Ref: https://chatgpt.com/share/68b7492b-3200-800f-80c4-295bc3233d68
2025-09-02 21:45:11 +02:00
0bde4295c7 Implemented correct confluence version 2025-09-02 17:01:58 +02:00
8059f272d5 Refactor Confluence and Jira env templates to use official Atlassian ATL_* database variables instead of unused custom placeholders. Ensures containers connect directly to PostgreSQL without relying on CONFLUENCE_DATABASE_* or JIRA_DATABASE_* vars. See conversation: https://chatgpt.com/share/68b6ddfd-3c44-800f-a57e-244dbd7ceeb5 2025-09-02 14:07:38 +02:00
7c814e6e83 BookWyrm: update Dockerfile and env handling
- Remove ARG BOOKWYRM_VERSION default, use Jinja variable directly
- Add proper SMTP environment variables mapping (EMAIL_HOST, EMAIL_PORT, TLS/SSL flags, user, password, default_from)
- Ensure env.j2 uses BookWyrm-expected names only
Ref: ChatGPT conversation 2025-09-02 https://chatgpt.com/share/68b6dc73-3784-800f-9a7e-340be498a412
2025-09-02 14:01:04 +02:00
d760c042c2 Atlassian JVM sizing: cast memory vars to int before floor-division
Apply |int to TOTAL_MB and dependent values to prevent 'unsupported operand type(s) for //' during templating in Confluence and Jira roles.
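The fix plausibly amounts to casting before the floor division, e.g. in the roles' vars (variable names and percentages are illustrative assumptions):

```yaml
# Hedged sketch — without |int, TOTAL_MB may arrive as a string and '//'
# raises "unsupported operand type(s) for //" during templating
JVM_XMX_MB: "{{ (TOTAL_MB | int) * 60 // 100 }}"
JVM_XMS_MB: "{{ (TOTAL_MB | int) * 30 // 100 }}"
```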

Context: discussion on 2025-09-02 — https://chatgpt.com/share/68b6d386-4490-800f-9bad-aa7be1571ebe
2025-09-02 13:22:59 +02:00
6cac8085a8 feat(web-app-chess): add castling.club role with ports, networks, and build setup
- Added network subnet (192.168.103.192/28) and port 8050 for web-app-chess
- Replaced stub README with usability-focused description of castling.club
- Implemented config, vars, meta, and tasks for web-app-chess
- Added Dockerfile, docker-compose.yml, env, and docker-entrypoint.sh templates
- Integrated entrypoint asset placement
- Updated meta to reflect usability and software features

Ref: https://chatgpt.com/share/68b6c65a-3de8-800f-86b2-a110920cd50e
2025-09-02 13:21:15 +02:00
3a83f3d14e Refactor BookWyrm role: switch to source-built Dockerfile, update README/meta for usability, add env improvements (ALLOWED_HOSTS, Redis vars, Celery broker), and pin version v0.7.5. See https://chatgpt.com/share/68b6d273-abc4-800f-ad3f-e1a5b9f8dad0 2025-09-02 13:18:32 +02:00
61d852c508 Added ports and networks for bookwyrm, jira, confluence 2025-09-02 12:08:20 +02:00
188b098503 Confluence/Jira roles: add READMEs, switch to custom images, proxy/JVM envs, and integer-safe heap sizing
Confluence: README added; demo disables OIDC/LDAP; Dockerfile overlay; docker-compose now uses CONFLUENCE_CUSTOM_IMAGE and DB depends include; env.j2 adds ATL_* and JVM_*; vars use integer math (//) for Xmx/Xms and expose CUSTOM_IMAGE.

Jira: initial role skeleton with README, config/meta/tasks; Dockerfile overlay; docker-compose using JIRA_CUSTOM_IMAGE and DB depends include; env.j2 with proxy + JVM envs; vars with integer-safe memory sizing.

Context: https://chatgpt.com/share/68b6b592-2250-800f-b68e-b37ae98dbe70
2025-09-02 12:07:34 +02:00
bc56940e55 Implement initial BookWyrm role
- Removed obsolete TODO.md
- Added config/main.yml with service, feature, CSP, and registration settings
- Added schema/main.yml defining vaulted SECRET_KEY (alphanumeric)
- Added tasks/main.yml to load stateful stack
- Added Dockerfile.j2 ensuring data/media dirs
- Added docker-compose.yml.j2 with application, worker, redis, volumes
- Added env.j2 with registration, secrets, DB, Redis, OIDC support
- Extended vars/main.yml with BookWyrm variables and OIDC, Docker, Redis settings
- Updated meta/main.yml with logo and run_after dependencies

Ref: https://chatgpt.com/share/68b6c060-3a0c-800f-89f8-e114a16a4a80
2025-09-02 12:03:11 +02:00
5dfc2efb5a Used port variable 2025-09-02 11:59:50 +02:00
7f9dc65b37 Add README.md files for web-app-bookwyrm, web-app-postmarks, and web-app-socialhome roles
Introduce integration test to ensure all web-app-* roles contain a README.md (required for Web App Desktop visibility)

See: https://chatgpt.com/share/68b6be49-7b78-800f-a3ff-bf922b4b083f
2025-09-02 11:52:34 +02:00
163a925096 fix(docker-compose): proper lock path + robust pull for buildable services
- Store pull lock under ${PATH_DOCKER_COMPOSE_PULL_LOCK_DIR}/<hash>.lock so global cleanup removes it reliably
- If any service defines `build:`, run `docker compose build --pull` before pulling
- Use `docker compose pull --ignore-buildable` when supported; otherwise tolerate pull failures for locally built images

This prevents failures when images are meant to be built locally (e.g., custom images) and ensures lock handling is consistent.
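As a rough sketch (lock handling omitted; the compose-directory variable is an assumption), the build-then-pull ordering could look like:

```yaml
# Hedged sketch of the robust pull flow for buildable services
- name: Build local images first, then pull the rest
  ansible.builtin.shell: |
    set -euo pipefail
    if docker compose config | grep -qE '^[[:space:]]*build:'; then
      docker compose build --pull
    fi
    docker compose pull --ignore-buildable || docker compose pull || true
  args:
    chdir: "{{ docker_compose.directories.instance }}"
```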

Ref: https://chatgpt.com/share/68b6b592-2250-800f-b68e-b37ae98dbe70
2025-09-02 11:15:28 +02:00
a8c88634b5 cleanup: remove unused handlers and add integration test for unused handlers
Removed obsolete handlers from roles (VirtualBox, backup-to-USB, OpenLDAP)
and introduced an integration test under tests/integration/test_handlers_invoked.py
that ensures all handlers defined in roles/*/handlers are actually notified
somewhere in the code base. This keeps the repository clean by preventing
unused or forgotten handlers from accumulating.

Ref: https://chatgpt.com/share/68b6b28e-4388-800f-87d2-34dfb34b8d36
2025-09-02 11:02:30 +02:00
ce3fe1cd51 Nextcloud: integrate Talk & Whiteboard; adjust ports & healthchecks
- Enable Spreed (Talk); signaling via /standalone-signaling/
- STUN/TURN: move STUN to 3480 (3479 occupied by BBB), keep TURN 5350 reserved
- docker-compose: expose internal WS ports; explicit TURN port mapping
- Healthchecks: add nc-based TCP checks (roles/docker-container/templates/healthcheck/nc.yml.j2)
- Nginx: location proxy to talk:8081
- Schema: add talk_* secrets (turn/signaling/internal)
- Plugins: configure spreed/whiteboard via vars/*; remove old task files
- Ports matrix (group_vars/all/09_ports.yml) updated/commented

Conversation: https://chatgpt.com/share/68b61a6a-e1dc-800f-b793-4aa600bc0166
2025-09-02 00:13:23 +02:00
7ca8b7c71d feat(nextcloud): integrate Talk & Whiteboard; refactor to NEXTCLOUD_* vars; full-stack setup
config(ports): add Nextcloud websocket port (4003); canonical domains (nextcloud/talk/whiteboard)

refactor: unify get_app_conf usage & Jinja spacing; migrate paths/handlers to new NEXTCLOUD_* vars

feat(plugins): split plugin routines; configure Whiteboard via occ (URL + JWT)

fix(oidc): use NEXTCLOUD_URL for logout; correct LDAP attribute mappings; add OIDC flavor switch

feat: Whiteboard container & reverse-proxy location; Talk STUN/WS ports; Redis URL for Whiteboard

chore: drop obsolete TODO; minor cleanups in oauth2-proxy, matrix, peertube, pgadmin, phpldapadmin, pixelfed, phpmyadmin

security(schema): Bluesky jwt_secret now base64_prefixed_32; add Nextcloud whiteboard_jwt_secret

db: normalize postgres image tag templating; central DB host checks spacing fixes

ops: add full-stack bootstrap (certs, proxy, volumes); internal nginx config reload handler update

refs: https://chatgpt.com/share/68b5f5b7-8d64-800f-b001-1241f818dc0e
2025-09-01 21:37:02 +02:00
110381e80c Refactored peertube role and implemented config volume 2025-09-01 18:19:50 +02:00
b02d88adc0 Refactored server roles for better readability 2025-09-01 18:08:35 +02:00
b7065837df MediaWiki: switch feature.css to false and add custom Vector 2022 override stylesheet
See: https://chatgpt.com/share/68b5b925-f418-800f-8f84-de744dd2d093
2025-09-01 17:18:12 +02:00
c98a2378c4 Added 'is defined' condition 2025-09-01 17:05:30 +02:00
4ae3cee36c web-svc-logout: merge logout domains into CSP connect-src and refactor task flow
• Add tasks/01_core.yml to set applications[application_id].server.csp.whitelist['connect-src'] = LOGOUT_CONNECT_SRC_NEW.

• Switch tasks/main.yml to include 01_core.yml (run-once guard preserved).

• Update templates/env.j2 to emit LOGOUT_DOMAINS as a comma-separated list.

• Rework vars/main.yml: compute LOGOUT_DOMAINS, derive LOGOUT_ORIGINS with WEB_PROTOCOL, read connect-src via the get_app_conf filter, and merge/dedupe (unique).

Rationale: ensure CSP allows cross-domain logout requests for all configured services.
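A hedged sketch of the merge in vars/main.yml; the `get_app_conf` filter signature is an assumption inferred from its use elsewhere in the repo:

```yaml
# web-svc-logout/vars — sketch only
LOGOUT_ORIGINS: "{{ LOGOUT_DOMAINS | map('regex_replace', '^', WEB_PROTOCOL ~ '://') | list }}"
LOGOUT_CONNECT_SRC_NEW: >-
  {{ ((applications | get_app_conf(application_id, 'server.csp.whitelist.connect-src', False, []))
      + LOGOUT_ORIGINS) | unique }}
```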

Conversation: https://chatgpt.com/share/68b5b07d-b208-800f-b6b2-f26934607c8a
2025-09-01 16:41:33 +02:00
b834f0c95c Implemented config image for pretix 2025-09-01 16:20:04 +02:00
9f734dff17 web-app-pretix: fix healthcheck and allowed hosts
- Add Host header to curl healthcheck when container_hostname is defined
- Use PRETIX_PRETIX_ALLOWED_HOSTS to fix Django 400 Bad Request during healthcheck
- Centralize PRETIX_HOSTNAME from container_hostname var
- Add Redis broker/result backend config for Celery

See: https://chatgpt.com/share/68b59c42-c0fc-800f-9bfb-f1137c59b3de
2025-09-01 15:15:04 +02:00
6fa4d00547 Refactor CDN and run_once handling
- Move run_once include from main.yml to 01_core.yml in desk-gnome-caffeine and desk-ssh
- Introduce sys-svc-cdn/01_core.yml to handle shared/vendor dirs once and role dirs per run
- Replace cdn.* with cdn_paths_all.* across inj roles
- Split cdn_dirs into cdn_dirs_role and CDN_DIRS_GLOBAL
- Ensure cdn_urls uses cdn_paths_all

Details: https://chatgpt.com/share/68b58d64-1e28-800f-8907-36926a9e9a9b
2025-09-01 14:11:36 +02:00
7254667186 Nextcloud: make app:update more robust by retrying once with retries/until (fixes transient migration errors)
See: https://chatgpt.com/share/68b57e29-4420-800f-b326-b34d09fa64b5
2025-09-01 13:06:44 +02:00
aaedaab3da refactor(web-app-mediawiki): unify debug & oidc handling via _ensure_require, introduce host-side prep, switch to bind mounts
- Removed obsolete Installation.md, TODO.md, 02_debug.yml, 05_oidc.yml and legacy debug enable/disable tasks
- Added 01_prep.yml to render debug.php/oidc.php on host side before container start
- Introduced _ensure_require.yml for generic require_once management in LocalSettings.php
- Renamed 01_install.yml -> 02_install.yml to align with new numbering
- Updated docker-compose.yml.j2 to bind-mount mw-local into /opt/mw-local
- Adjusted vars/main.yml to define MEDIAWIKI_LOCAL_MOUNT_DIR and MEDIAWIKI_LOCAL_PATH
- Templates debug.php.j2 and oidc.php.j2 now gated by MODE_DEBUG and MEDIAWIKI_OIDC_ENABLED
- main.yml now orchestrates prep, install, debug, extensions, oidc require, admin consistently

Ref: https://chatgpt.com/share/68b57db2-efcc-800f-a733-aca952298437
2025-09-01 13:04:57 +02:00
7791bd8c04 Implement filter checks: ensure all defined filters are used and remove dead code
Integration tests added/updated:
- tests/integration/test_filters_usage.py: AST-based detection of filter definitions (FilterModule.filters), robust Jinja detection ({{ ... }}, {% ... %}, {% filter ... %}), plus Python call tracking; fails if a filter is used only under tests/.
- tests/integration/test_filters_are_defined.py: inverse check — every filter used in .yml/.yaml/.j2/.jinja2/.tmpl must be defined locally. Scans only inside Jinja blocks and ignores pipes inside strings (e.g., lookup('pipe', "... | grep ... | awk ...")) to avoid false positives like trusted_hosts, woff/woff2, etc.

Bug fixes & robustness:
- Build regexes without %-string formatting to avoid ValueError from literal '%' in Jinja tags.
- Strip quoted strings in usage analysis so sed/grep/awk pipes are not miscounted as filters.
- Prevent self-matches in the defining file.

Cleanup / removal of dead code:
- Removed unused filter plugins and related unit tests:
  * filter_plugins/alias_domains_map.py
  * filter_plugins/get_application_id.py
  * filter_plugins/load_configuration.py
  * filter_plugins/safe.py
  * filter_plugins/safe_join.py
  * roles/svc-db-openldap/filter_plugins/build_ldap_nested_group_entries.py
  * roles/sys-ctl-bkp-docker-2-loc/filter_plugins/dict_to_cli_args.py
  * corresponding tests under tests/unit/*
- roles/svc-db-postgres/filter_plugins/split_postgres_connections.py: dropped no-longer-needed list_postgres_roles API; adjusted tests.

Misc:
- sys-stk-front-proxy/defaults/main.yml: clarified valid vhost_flavour values (comma-separated).

Ref: https://chatgpt.com/share/68b56bac-c4f8-800f-aeef-6708dbb44199
2025-09-01 11:47:51 +02:00
34b3f3b0ad Optimized healthcheck link for web-app-yourls 2025-09-01 10:54:08 +02:00
94fe58b5da safe_join: raise ValueError on None parameters and update tests
Changed safe_join to raise ValueError if base or tail is None instead of returning 'None/path'.
Adjusted unit tests accordingly to expect exceptions for None inputs and kept empty-string handling valid.

Ref: https://chatgpt.com/share/68b55850-e854-800f-9702-09ea956b8dc4
2025-09-01 10:25:08 +02:00
9feb766e6f replaced style-src-elem by style-src 2025-09-01 10:14:03 +02:00
231fd567b3 feat(frontend): rename inj roles to sys-front-*, add sys-svc-cdn, cache-busting lookup
Introduce sys-svc-cdn (cdn_paths/cdn_urls/cdn_dirs) and ensure CDN directories + latest symlink.

Rename sys-srv-web-inj-* → sys-front-inj-*; update includes/templates; serve shared/per-app CSS & JS via CDN.

Add lookup_plugins/local_mtime_qs.py for mtime-based cache busting; split CSS into default.css/bootstrap.css + optional per-app style.css.

CSP: use style-src-elem; drop unsafe-inline for styles. Services: fix SYS_SERVICE_ALL_ENABLED bool and controlled flush.

BREAKING CHANGE: role names changed; replace includes and references accordingly.

Conversation: https://chatgpt.com/share/68b55494-9ec4-800f-b559-44707029141d
2025-09-01 10:10:23 +02:00
3f8e7c1733 Refactor CSP filter:
- Move default 'unsafe-inline' for style-src and style-src-elem into get_csp_flags
- Ensure hashes are only added if 'unsafe-inline' not in final tokens
- Improve comments and structure
- Extend unit tests to cover default flags, overrides, and final-token logic
See: https://chatgpt.com/share/68b54520-5cfc-800f-9bac-45093740df78
2025-09-01 09:03:22 +02:00
3bfab9ef8e feat(filter_plugins/url_join): add query parameter support
- Support query elements starting with '?' or '&'
  * First query element normalized to '?', subsequent to '&'
  * Each query element must be exactly one 'key=value' pair
  * Query elements may only appear after path elements
  * Once query starts, no more path elements are allowed
- Extend test suite with success and failure cases for query handling
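A hedged usage example of the rules above (values are illustrative):

```yaml
api_endpoint: "{{ ['https://api.example.org', 'v1', 'items', '?page=1', '&per_page=50'] | url_join }}"
# → https://api.example.org/v1/items?page=1&per_page=50
```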

See: https://chatgpt.com/share/68b537ea-d198-800f-927a-940c4de832f2
2025-09-01 08:16:22 +02:00
f1870c07be refactor(filter_plugins/url_join): enforce mandatory scheme and raise specific AnsibleFilterError messages
Improved url_join filter:
- Requires first element to contain a valid '<scheme>://'
- Raises specific errors for None, empty list, wrong type, missing scheme,
  extra schemes in later parts, or string conversion failures
- Provides clearer error messages with index context in parts

See: https://chatgpt.com/share/68b537ea-d198-800f-927a-940c4de832f2
2025-09-01 08:06:48 +02:00
d0cec9a7d4 CSP filters: add explicit style-src-elem handling and improve unit tests
See ChatGPT conversation: https://chatgpt.com/share/68b4a82c-e0c8-800f-9273-9165ce1aa8d6
2025-08-31 21:53:39 +02:00
1dbd714a56 yourls: move container_port/healthcheck to vars; listen on 8080
• Removed hardcoded container_port/container_healthcheck from docker-compose.yml.j2
• Added container_port=8080 and container_healthcheck to vars/main.yml
• Rationale: current image listens on 8080; centralizes settings in vars

Ref: https://chatgpt.com/share/68b4a69d-e4b0-800f-a4f8-6c8e4fc55ee4
2025-08-31 21:48:24 +02:00
3a17b2979e Refactor CSP filters to use get_url for domain resolution and update tests to check CSP directives order-independently. See: https://chatgpt.com/share/68b49e5c-6774-800f-9d8e-a3f980799c08 2025-08-31 21:11:57 +02:00
bb0530c2ac Optimized yourls variables and healthcheck 2025-08-31 20:38:02 +02:00
aa2eb53776 fix(csp): always include internal CDN in script-src/connect-src and update tests accordingly
See ChatGPT conversation: https://chatgpt.com/share/68b492b8-847c-800f-82a9-fb890d4add7f
2025-08-31 20:22:05 +02:00
5f66c1a622 feat(postgres): add split_postgres_connections filter and average pool fact
Compute POSTGRES_ALLOWED_AVG_CONNECTIONS once and propagate to app roles (gitlab, mastodon, listmonk, matrix, pretix, mobilizon, openproject, discourse). Fix docker-compose postgres command (-c flags split). Add unit tests. Minor env/locale tweaks and includes.
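A hedged sketch of how the fact and filter might be wired; only the filter and fact names come from the commit, the arguments and consumer list are assumptions:

```yaml
# group_vars sketch — divide the server-wide connection budget across consumers
POSTGRES_ALLOWED_AVG_CONNECTIONS: >-
  {{ POSTGRES_MAX_CONNECTIONS | split_postgres_connections(postgres_consumer_roles | length) }}
# per-app env then simply reuses the shared fact, e.g.:
# DB_POOL: "{{ POSTGRES_ALLOWED_AVG_CONNECTIONS }}"
```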

Conversation: https://chatgpt.com/share/68b48e72-cc28-800f-9c21-270cbc17d82a
2025-08-31 20:04:14 +02:00
b3dfb8bf22 Fix: Resolved Discourse plugin bug and unified variable/path handling
- Discourse: fixed 'DISCOURSE_CONTAINERS_DIR' and 'DISCOURSE_APPLICATION_YML_DEST'
- Nextcloud: improved plugin enable/configure tasks formatting
- WordPress: unified OIDC, msmtp, and upload.ini variables and tasks
- General: aligned spacing and switched to path_join for consistency
2025-08-29 20:53:36 +02:00
db642c1c39 refactor(schedule): unify service timeouts, rename 08_timer.yml → 08_schedule.yml, fix docker repair/update timeouts, raise WP upload limit
See https://chatgpt.com/share/68b1deb9-2534-800f-b28f-7f19925b1fa7
2025-08-29 19:09:28 +02:00
2fccebbd1f Enforce uppercase README.md and TODO.md filenames
- Renamed all Readme.md → README.md
- Renamed all Todo.md → TODO.md
- Added integration test (tests/integration/test_filename_conventions.py) to automatically check naming convention.

Background:
Consistency in file naming (uppercase README.md and TODO.md) avoids issues with case-sensitive filesystems and ensures desktop cards (e.g. Pretix) are properly included.
Ref: https://chatgpt.com/share/68b1d135-c688-800f-9441-46a3cbfee175
2025-08-29 18:11:53 +02:00
c23fbd8ec4 Add new role web-app-confluence
Introduced a new Ansible role for deploying Atlassian Confluence within the Infinito.Nexus ecosystem.
The role follows the same structure as web-app-pretix and includes:

- Core variables, database config, OIDC integration.
- Docker service definitions, features (Matomo, CSS, OIDC, logout, central DB).
- Loads the docker, db and proxy stack.
- Placeholder for schema definitions.
- Templates:
  - base for OIDC plugins/extensions,
  - service orchestration,
  - environment configuration.
- Metadata, license, company, logo (Font Awesome book-open icon).

Canonical domain is set to `confluence.{{ PRIMARY_DOMAIN }}`.
This role ensures Confluence integrates seamlessly with Keycloak OIDC and the Infinito.Nexus service stack.

Conversation: https://chatgpt.com/share/68b1d006-bbd4-800f-9d2e-9c8a8af2c00f
2025-08-29 18:07:01 +02:00
2999d9af77 web-app-pretix: fully implemented role
Summary:
- Replace draft with complete README (features, resources, credits).
- Remove obsolete Todo.md.
- Switch to custom image tag (PRETIX_IMAGE_CUSTOM) and install 'pretix-oidc' in Dockerfile.
- Drop unused 'config' volume; keep persistent 'data' only.
- Rename docker-compose service from 'application' to 'pretix' and use container_port.
- Use standard depends_on include for DB/Redis (dmbs_excl).
- Align vars to docker.services.pretix.* (image/version/name); add PRETIX_IMAGE_CUSTOM.

Breaking:
- Service key changed to 'pretix' under docker.services.
- 'config' volume removed from compose.

Status:
- Pretix role is now fully implemented and production-ready.

Reference:
- Conversation: https://chatgpt.com/share/68b1cb34-b7dc-800f-8b39-c183124972f2
2025-08-29 17:46:31 +02:00
2809ffb9f0 Added correct fa class and description for pretix 2025-08-29 17:02:50 +02:00
cb12114ce8 Added correct run_after for pretix 2025-08-29 16:47:21 +02:00
ba99e558f7 Improve SAN certbundle task logic and messages
- Fixed typo: 'seperat' → 'separate'
- Added more robust changed_when conditions (stdout + stderr, handle already-issued, rate-limit, service-down cases)
- Added explicit warnings for Let's Encrypt rate limits (exact set and generic)
- Improved readability of SAN encapsulation task with descriptive name

See conversation: https://chatgpt.com/share/68b1bc75-c3a0-800f-8861-fcf4f5f4a48c
2025-08-29 16:45:03 +02:00
2aed0f97d2 Enhance timeout_start_sec_for_domains filter to accept dict, list, or str
- Updated filter to handle dict (domain map), list (flattened domains), or single str inputs.
- Prevents duplicate 'www.' prefixes by checking prefix before adding.
- Adjusted unit tests:
  * Replaced old non-dict test with invalid type tests (int, None).
  * Added explicit tests for list and string input types.
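A hedged usage sketch; the template and unit names are assumptions, while CURRENT_PLAY_DOMAINS_ALL is named in the related commit below:

```yaml
# Usage sketch — the filter accepts a dict, a list, or a single domain string
- name: Render webserver health check with domain-scaled TimeoutStartSec
  ansible.builtin.template:
    src: sys-ctl-hlth-webserver.service.j2          # hypothetical template name
    dest: /etc/systemd/system/sys-ctl-hlth-webserver.service
  vars:
    TIMEOUT_START_SEC: "{{ CURRENT_PLAY_DOMAINS_ALL | timeout_start_sec_for_domains }}"
```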

See conversation: https://chatgpt.com/share/68b1ae9a-1ac0-800f-b49d-2915386a1a23
2025-08-29 15:57:00 +02:00
f36c7831b1 Implement dynamic TimeoutStartSec filter for domains and update roles
- Added new filter plugin 'timeout_start_sec_for_domains' to calculate TimeoutStartSec based on number of domains.
- Updated sys-ctl-hlth-csp and sys-ctl-hlth-webserver tasks to use the filter.
- Removed obsolete systemctl.service.j2 in sys-ctl-hlth-csp.
- Adjusted variable naming (CURRENT_PLAY_DOMAINS_ALL etc.) in multiple roles.
- Updated srv-letsencrypt and sys-svc-certs to use uppercase vars.
- Switched pretix role to sys-stk-full-stateful and removed leftover javascript.js.
- Added unittests for the new filter under tests/unit/filter_plugins.

See conversation: https://chatgpt.com/share/68b1ae9a-1ac0-800f-b49d-2915386a1a23
2025-08-29 15:44:31 +02:00
009bee531b Refactor role naming for TLS and proxy stack
- Renamed role `srv-tls-core` → `sys-svc-certs`
- Renamed role `srv-https-stack` → `sys-stk-front-pure`
- Renamed role `sys-stk-front` → `sys-stk-front-proxy`
- Updated all includes, READMEs, meta, and dependent roles accordingly

This improves clarity and consistency of naming conventions for certificate management and proxy orchestration.

See: https://chatgpt.com/share/68b19f2c-22b0-800f-ba9b-3f2c8fd427b0
2025-08-29 14:38:20 +02:00
4c7bb6d9db Solved path bugs and optimized path handling 2025-08-29 14:13:59 +02:00
092869b29a pretix: enable OIDC support
- add pretix-oidc plugin installation (Dockerfile, version 2.3.1 default)
- configure OIDC env vars (issuer, endpoints, client ID/secret, scopes, unique attribute)
- enable redis + database, add config/data volumes
- switch canonical domain to ticket.<PRIMARY_DOMAIN> with pretix.<PRIMARY_DOMAIN> alias
- mirror GitLab-style OIDC var structure for consistency

Implements pretix authentication via Keycloak/SSO.
See: https://chatgpt.com/share/68b19721-341c-800f-b372-527164474018
2025-08-29 14:04:03 +02:00
f4ea6c6c0f refactor(web-app-gitlab): restructure configuration and add OIDC support
- Added oidc feature flag in config
- Removed obsolete credentials schema (initial_root_password)
- Updated docker-compose.yml.j2 to use explicit GITLAB_* vars (image, version, container, volumes)
- Moved initial_root_password into vars/main.yml
- Introduced GITLAB_OMNIBUS_BASE and GITLAB_OMNIBUS_OIDC config lists
- Switched env.j2 to use GITLAB_OMNIBUS_ALL join

See conversation: https://chatgpt.com/share/68b1962c-3ee0-800f-a858-d4590ff6132a
2025-08-29 14:02:46 +02:00
3ed84717a7 Solved wireguard name bugs 2025-08-29 13:03:06 +02:00
1cfc2b7e23 Optimized mastodon url 2025-08-29 12:27:59 +02:00
01b9648650 Made OIDC secret UPPER 2025-08-29 12:27:29 +02:00
65d3b3040d Activated system_service_suppress_flush for journalctl service 2025-08-29 12:26:53 +02:00
28f7ac5aba Removed attendize because it isn't maintained anymore. Pretix is the successor. 2025-08-29 11:17:10 +02:00
19926b0c57 Optimized web-app-desktop variables 2025-08-29 11:04:52 +02:00
3a79d9d630 Optimized pkgmgr variables and removed 'Ensure main.py is executable' because it should be set by the repositories themselves 2025-08-29 10:53:36 +02:00
983287a84a Finished mediawiki oidc implementation 2025-08-29 04:24:50 +02:00
dd9a9b6d84 feat(mediawiki): Refactor OIDC + debug; install Composer deps in-container; modularize role
Discussion: https://chatgpt.com/share/68b10c0a-c308-800f-93ac-2ffb386cf58b

- Split tasks into 01_install, 02_debug, 03_admin, 04_extensions, 05_oidc.
- Ensure unzip+git+composer on demand in the container; run Composer as www-data with COMPOSER_HOME=/tmp/composer.
- Idempotently unpack/install PluggableAuth & OpenIDConnect; run composer install only if vendor/ is missing.
- Add sanity check for Jumbojett\OpenIDConnectClient.
- Copy oidc.php only when changed and append a single require_once to LocalSettings.php.
- Use REL1_44-compatible numeric array for $wgPluggableAuth_Config; set $wgPluggableAuth_ButtonLabelMessage.
- Debug: add debug.php that logs to STDERR (visible via docker logs); toggle cleanly with MODE_DEBUG.
- Enable OIDC feature in config; add paths/OIDC/extension vars in vars/main.yml.

fix(services): include SYS_SERVICE_GROUP_CLEANUP in StartPre lock (ssd-hdd, docker-hard).

fix(desktop/joomla): simplify MODE_DEBUG templating.

chore: minor cleanups and renames.
2025-08-29 04:10:46 +02:00
23a2e081bf Optimized services 2025-08-29 01:11:06 +02:00
4cbd848026 Set SYS_TIMER_ALL_ENABLED by default to DEBUG_MODE 2025-08-29 01:06:09 +02:00
d67f660152 Enabled CSS and Desktop for Mediawiki 2025-08-29 00:46:29 +02:00
5c6349321b Removed MyBB role, because it's deprecated and Discourse takes over 2025-08-29 00:12:35 +02:00
af1ee64246 web-app-mediawiki: installer-driven bootstrap, DB readiness, idempotent admin; drop LocalSettings bind-mount
Tasks:
- Enable docker_compose_flush_handlers=true so services come up immediately.
- Add DB readiness guard via maintenance/sql.php (SELECT 1).
- Run maintenance/install.php on empty schema with robust changed_when/failed_when (merge stdout+stderr); keep secrets hidden.
- Run maintenance/update.php for migrations with neutral changed_when unless work is done.
- Make admin creation idempotent: tolerate 'already exists' and 'Account exists', keep async+no_log.
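The readiness guard plausibly looks like this; the container variable is an assumption, while `maintenance/sql.php --query` is a standard MediaWiki maintenance entry point:

```yaml
# Hedged sketch of the DB readiness guard
- name: Wait until MediaWiki can reach its database
  ansible.builtin.command: >
    docker exec {{ MEDIAWIKI_CONTAINER }}
    php maintenance/sql.php --query 'SELECT 1;'
  register: mw_db_ready
  retries: 30
  delay: 5
  until: mw_db_ready.rc == 0
  changed_when: false
```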

Config changes:
- Remove LocalSettings.php template and its host bind-mount from compose.
- Drop MediaWiki settings path variables and META namespace variable (unused after switch).

Result: First boot is fully automated (schema + admin), subsequent runs are cleanly idempotent.

Ref: ChatGPT conversation (Aug 28, 2025, Europe/Berlin) — https://chatgpt.com/share/68b0d2e1-9bc0-800f-81a5-db03ce0b81e3.
2025-08-29 00:07:00 +02:00
d96bfc64a6 Added possibility to deactivate Docker service loading for performance 2025-08-28 22:47:05 +02:00
6ea8301364 Refactor: migrate cmp/* and srv/* roles into sys-stk/* and sys-svc/* namespaces
- Removed obsolete 'cmp' category, introduced 'stk' category (fa-bars-staggered icon).
- Renamed roles:
  * cmp-db-docker → sys-stk-back-stateful
  * cmp-docker-oauth2 → sys-stk-back-stateless
  * srv-domain-provision → sys-stk-front
  * cmp-db-docker-proxy → sys-stk-full-stateful
  * cmp-docker-proxy → sys-stk-full-stateless
  * cmp-rdbms → sys-svc-rdbms
- Updated all include_role references, vars, templates and README.md files.
- Adjusted run_once comments and variable paths accordingly.
- Updated all web-app roles to use new sys-stk/* and sys-svc/* roles.

Conversation: https://chatgpt.com/share/68b0ba66-09f8-800f-86fc-76c47009d431
2025-08-28 22:23:09 +02:00
92f5bf6481 refactor(web-app-mybb): remove obsolete Installation.md, introduce schema for secret_pin, and rework task/vars handling
- Removed outdated Installation.md (manual plugin instructions no longer needed)
- Added schema/main.yml with validation for secret_pin
- Added config.php.j2 template to manage DB + admin config
- Refactored tasks/main.yml to deploy config.php instead of legacy docker-compose
- Removed setup-domain.yml (TLS/domain handling moved to core roles)
- Updated docker-compose.yml.j2 to mount config.php and use new vars
- Cleaned up vars/main.yml: standardized MYBB_* variable names, added MYBB_SECRET_PIN, config paths, and container port

See ChatGPT conversation: https://chatgpt.com/share/68b0ae26-93ec-800f-8785-0da7c9303090
2025-08-28 21:29:58 +02:00
58c17bf043 web-app-mediawiki: template-driven LocalSettings.php + admin automation; compose & config tweaks
Config & features:
- roles/web-app-mediawiki/config/main.yml:
  - Add sitename ('Wiki on {{ PRIMARY_DOMAIN | upper }}') and meta_namespace ('Meta')
  - Enable central_database feature and database service
  - Move volumes under docker.volumes (correct indentation)

Tasks & automation:
- roles/web-app-mediawiki/tasks/main.yml:
  - Avoid immediate compose handler flush (docker_compose_flush_handlers: false), then explicit meta: flush_handlers
  - Deploy templated LocalSettings.php to host path
  - Create admin via maintenance/createAndPromote.php (docker exec, idempotent changed_when/failed_when)

Templates:
- roles/web-app-mediawiki/templates/LocalSettings.php.j2:
  - Set $wgSitename, $wgMetaNamespace, $wgServer from MEDIAWIKI_*
  - DB settings (mysql, host:port, name, user, password)
  - Mail settings (EmergencyContact/PasswordSender)
  - Default skin: vector
  - Load basic extensions (ParserFunctions, Cite)
- roles/web-app-mediawiki/templates/docker-compose.yml.j2:
  - Switch to MEDIAWIKI_* vars, mount LocalSettings.php (ro)
  - Use container_port, include curl healthcheck
  - Fix volumes name to MEDIAWIKI_VOLUME

Vars:
- roles/web-app-mediawiki/vars/main.yml:
  - Restructure with MEDIAWIKI_* (sitename, meta_namespace, URL, image/version/container/volume)
  - Define SETTINGS host/dock paths, container_port, default user (www-data)
  - Admin bootstrap vars (name/password/email)

Misc:
- Add empty schema/main.yml placeholder for future validation

Refs: ChatGPT conversation (2025-08-28, Europe/Berlin). Link: https://chatgpt.com/share/68b0ace6-f8f4-800f-b7a7-a51a6c5260f1
2025-08-28 21:28:47 +02:00
6c2d5c52c8 Attached 'not (system_service_suppress_flush | bool)' directly to handler 2025-08-28 21:16:04 +02:00
b919f39e35 Made stop optional for the Joomla container 2025-08-28 21:15:07 +02:00
9f2cfe65af Remove non-functional Joomla LDAP integration
- Disabled LDAP feature flag (set to false by default, with comment)
- Removed ldapautocreate plugin (PHP + XML)
- Deleted LDAP helper tasks (01_ldap_files.yml, 05_ldap.yml, 07_diagnose.yml)
- Deleted LDAP CLI helper scripts (cli.php, diagnose.php, plugins.php, auth-trace.php)
- Removed LDAP configuration variables from vars/main.yml
- Removed LDAP environment variables from env.j2
- Removed LDAP-specific mounts from docker-compose.yml.j2
- Dropped php-ldap installation from Dockerfile
- Renamed task files for consistent numbering (02->01_install, 03->02_debug, 04->03_patch, 06->04_assert)

Reason: LDAP integration was removed because it was not functional.

Conversation: https://chatgpt.com/share/68b09373-7aa8-800f-8f2c-11e27123bad1
2025-08-28 19:36:12 +02:00
fe399c3967 Added all LDAP changes before removing them, because the integration doesn't work. Will try to replace it with OIDC 2025-08-28 19:22:37 +02:00
ef801aa498 Joomla: Add LDAP autocreate plugin support
- Introduced autocreate_users feature flag in config/main.yml
- Added ldapautocreate.php and ldapautocreate.xml plugin files
- Implemented tasks/01_ldap_files.yml for plugin deployment
- Added tasks/05_ldap.yml to configure LDAP plugin and register ldapautocreate
- Renamed tasks for better structure (01→02, 02→03, etc.)
- Updated cli-ldap.php.j2 for clean parameter handling
- Mounted ldapautocreate plugin via docker-compose.yml.j2
- Extended vars/main.yml with LDAP autocreate configuration

Ref: https://chatgpt.com/share/68b0802f-bfd4-800f-b10a-57cf0c091f7e
2025-08-28 18:13:53 +02:00
18f3b1042f feat(web-app-joomla): reliable first-run install, safe debug toggler, DB patching, LDAP scaffolding
Why
- Fix flaky first-run installs and make config edits idempotent.
- Prepare LDAP support and allow optional inline CSP for UI.
- Improve observability and guard against broken configuration.php.

What
- config/main.yml: enable features.ldap; add CSP flags (allow inline style/script elem); minor spacing.
- tasks/: split into 01_install (wait for core, absolute CLI path), 02_debug (toggle $debug/$error_reporting safely), 03_patch (patch DB creds in configuration.php), 04_ldap (configure plugin via helper), 05_assert (optional php -l).
- templates/Dockerfile.j2: conditionally install/compile php-ldap (fallback to docker-php-ext-install with libsasl2-dev).
- templates/cli-ldap.php.j2: idempotently enable & configure Authentication - LDAP from env.
- templates/docker-compose.yml.j2: build custom image when LDAP is enabled; mount cli-ldap.php; pull_policy: never.
- templates/env.j2: add site/admin vars, MariaDB connector/env, full LDAP env.
- vars/main.yml: default to MariaDB (mysqli), add JOOMLA_* vars incl. JOOMLA_CONFIG_FILE.

Notes
- LDAP path implemented but NOT yet tested end-to-end.
- Ref: https://chatgpt.com/share/68b068a8-2aa4-800f-8cd1-56383561a9a8.
2025-08-28 16:33:45 +02:00
dece6228a4 Refactor docker-compose build logic and pull policy
- Added conditional '--pull' flag on retry in docker-compose build handler, tied to MODE_UPDATE
- Added 'pull_policy: never' to multiple docker-compose service templates to prevent unwanted image pulls
- Fixed minor formatting issues (e.g. Nextcloud volume spacing, WordPress desktop alignment)

Reference: https://chatgpt.com/share/68b0207a-4d9c-800f-b76f-9515885e5183
2025-08-28 11:25:35 +02:00
cb66fb2978 Refactor LDAP variable schema to use top-level constant LDAP and nested ALL-CAPS keys.
- Converted group_vars/all/13_ldap.yml from lower-case to ALL-CAPS nested keys.
- Updated all roles, tasks, templates, and filter_plugins to reference LDAP.* instead of ldap.*.
- Fixed Keycloak JSON templates to properly quote Jinja variables.
- Adjusted svc-db-openldap filter plugins and unit tests to handle new LDAP structure.
- Updated integration test to only check uniqueness of TOP-LEVEL ALL-CAPS constants, ignoring nested keys.

See: https://chatgpt.com/share/68b01017-efe0-800f-a508-7d7e2f1c8c8d
2025-08-28 10:15:48 +02:00
b9da6908ec keycloak(role): add realm support to generic updater
- Allow kc_object_kind='realm'
- Map endpoint to 'realms' and default lookup_field to 'id'
- Use realm-specific kcadm GET/UPDATE (no -r flag)
- Preserve immutables: id, realm
- Guard query-based ID resolution to non-realm objects

Context: fixing failure in 'Update REALM mail settings' task.
See: https://chatgpt.com/share/68affdb8-3d28-800f-8480-aa6a74000bf8
2025-08-28 08:57:29 +02:00
8baec17562 web-app-taiga: extract admin bootstrap into dedicated task; add robust upsert path
Add roles/web-app-taiga/tasks/01_administrator.yml to handle admin creation via 'createsuperuser' and, on failure, an upsert fallback using 'manage.py shell'. Ensures email, is_staff, is_superuser, is_active are set and password is updated when needed; emits CHANGED marker for idempotence.

Update roles/web-app-taiga/tasks/main.yml to include the new 01_administrator.yml task file, removing the inline admin logic for better separation of concerns.

Uses taiga-manage helper service and composes docker-compose.yml with docker-compose-inits.yml to inherit env/networks/volumes consistently.

Chat reference: https://chatgpt.com/share/68af7637-225c-800f-b670-2b948f5dea54
2025-08-27 23:58:37 +02:00
1401779a9d web-app-taiga: add manage/init flow and idempotent admin bootstrap; fix OIDC config and env quoting
config/main.yml: convert oidc from empty mapping to block; indent flavor under oidc; enable javascript feature.

tasks/main.yml: use path_join for taiga settings; create docker-compose-inits via TAIGA_DOCKER_COMPOSE_INIT_PATH; flush handlers; add idempotent createsuperuser via taiga-manage with async/poll and masked logs.

templates/docker-compose-inits.yml.j2: include compose/container base to inherit env and project settings.

templates/env.j2: quote WEB_PROTOCOL and WEBSOCKET_PROTOCOL.

templates/javascript.js.j2: add SSO warning include.

users/main.yml: add administrator email stub.

vars/main.yml: add js_application_name; restructure OIDC flavor flags; add compose PATH vars; expose TAIGA_SUPERUSER_* vars.

Chat reference: https://chatgpt.com/share/68af7637-225c-800f-b670-2b948f5dea54
2025-08-27 23:19:42 +02:00
707a3fc1d0 Optimized defaults for modes 2025-08-27 22:58:05 +02:00
d595d46e2e Solved unquoted bug 2025-08-27 22:30:03 +02:00
667 changed files with 8114 additions and 4972 deletions

View File

@@ -11,7 +11,7 @@ sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
 from module_utils.entity_name_utils import get_entity_name

 # Paths to the group-vars files
-PORTS_FILE = './group_vars/all/09_ports.yml'
+PORTS_FILE = './group_vars/all/10_ports.yml'
 NETWORKS_FILE = './group_vars/all/09_networks.yml'
 ROLE_TEMPLATE_DIR = './templates/roles/web-app'
 ROLES_DIR = './roles'

View File

@@ -228,7 +228,7 @@ def parse_meta_dependencies(role_dir: str) -> List[str]:
 def sanitize_run_once_var(role_name: str) -> str:
     """
     Generate run_once variable name from role name.
-    Example: 'sys-srv-web-inj-logout' -> 'run_once_sys_srv_web_inj_logout'
+    Example: 'sys-front-inj-logout' -> 'run_once_sys_front_inj_logout'
     """
     return "run_once_" + role_name.replace("-", "_")

View File

@@ -15,7 +15,7 @@ Follow these guides to install and configure Infinito.Nexus:
 - **Networking & VPN** - Configure `WireGuard`, `OpenVPN`, and `Nginx Reverse Proxy`.

 ## Managing & Updating Infinito.Nexus 🔄

-- Regularly update services using `update-docker`, `update-pacman`, or `update-apt`.
+- Regularly update services using `update-pacman`, or `update-apt`.
 - Monitor system health with `sys-ctl-hlth-btrfs`, `sys-ctl-hlth-webserver`, and `sys-ctl-hlth-docker-container`.
 - Automate system maintenance with `sys-lock`, `sys-ctl-cln-bkps`, and `sys-ctl-rpr-docker-hard`.

View File

@@ -1,86 +0,0 @@
-from ansible.errors import AnsibleFilterError
-
-class FilterModule(object):
-    def filters(self):
-        return {'alias_domains_map': self.alias_domains_map}
-
-    def alias_domains_map(self, apps, PRIMARY_DOMAIN):
-        """
-        Build a map of application IDs to their alias domains.
-        - If no `domains` key → []
-        - If `domains` exists but is an empty dict → return the original cfg
-        - Explicit `aliases` are used (default appended if missing)
-        - If only `canonical` defined and it doesn't include default, default is added
-        - Invalid types raise AnsibleFilterError
-        """
-        def parse_entry(domains_cfg, key, app_id):
-            if key not in domains_cfg:
-                return None
-            entry = domains_cfg[key]
-            if isinstance(entry, dict):
-                values = list(entry.values())
-            elif isinstance(entry, list):
-                values = entry
-            else:
-                raise AnsibleFilterError(
-                    f"Unexpected type for 'domains.{key}' in application '{app_id}': {type(entry).__name__}"
-                )
-            for d in values:
-                if not isinstance(d, str) or not d.strip():
-                    raise AnsibleFilterError(
-                        f"Invalid domain entry in '{key}' for application '{app_id}': {d!r}"
-                    )
-            return values
-
-        def default_domain(app_id, primary):
-            return f"{app_id}.{primary}"
-
-        # 1) Precompute canonical domains per app (fallback to default)
-        canonical_map = {}
-        for app_id, cfg in apps.items():
-            domains_cfg = cfg.get('server',{}).get('domains',{})
-            entry = domains_cfg.get('canonical')
-            if entry is None:
-                canonical_map[app_id] = [default_domain(app_id, PRIMARY_DOMAIN)]
-            elif isinstance(entry, dict):
-                canonical_map[app_id] = list(entry.values())
-            elif isinstance(entry, list):
-                canonical_map[app_id] = list(entry)
-            else:
-                raise AnsibleFilterError(
-                    f"Unexpected type for 'server.domains.canonical' in application '{app_id}': {type(entry).__name__}"
-                )
-
-        # 2) Build alias list per app
-        result = {}
-        for app_id, cfg in apps.items():
-            domains_cfg = cfg.get('server',{}).get('domains')
-            # no domains key → no aliases
-            if domains_cfg is None:
-                result[app_id] = []
-                continue
-            # empty domains dict → return the original cfg
-            if isinstance(domains_cfg, dict) and not domains_cfg:
-                result[app_id] = cfg
-                continue
-            # otherwise, compute aliases
-            aliases = parse_entry(domains_cfg, 'aliases', app_id) or []
-            default = default_domain(app_id, PRIMARY_DOMAIN)
-            has_aliases = 'aliases' in domains_cfg
-            has_canon = 'canonical' in domains_cfg
-            if has_aliases:
-                if default not in aliases:
-                    aliases.append(default)
-            elif has_canon:
-                canon = canonical_map.get(app_id, [])
-                if default not in canon and default not in aliases:
-                    aliases.append(default)
-            result[app_id] = aliases
-
-        return result

View File

@@ -1,10 +1,14 @@
from ansible.errors import AnsibleFilterError
import hashlib
import base64
import sys, os
import sys
import os
# Ensure module_utils is importable when this filter runs from Ansible
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from module_utils.config_utils import get_app_conf
from module_utils.get_url import get_url
class FilterModule(object):
"""
@@ -16,10 +20,14 @@ class FilterModule(object):
'build_csp_header': self.build_csp_header,
}
# -------------------------------
# Helpers
# -------------------------------
@staticmethod
def is_feature_enabled(applications: dict, feature: str, application_id: str) -> bool:
"""
Return True if applications[application_id].features[feature] is truthy.
Returns True if applications[application_id].features[feature] is truthy.
"""
return get_app_conf(
applications,
@@ -31,6 +39,10 @@ class FilterModule(object):
@staticmethod
def get_csp_whitelist(applications, application_id, directive):
"""
Returns a list of additional whitelist entries for a given directive.
Accepts both scalar and list in config; always returns a list.
"""
wl = get_app_conf(
applications,
application_id,
@@ -47,28 +59,37 @@ class FilterModule(object):
@staticmethod
def get_csp_flags(applications, application_id, directive):
"""
Dynamically extract all CSP flags for a given directive and return them as tokens,
e.g., "'unsafe-eval'", "'unsafe-inline'", etc.
Returns CSP flag tokens (e.g., "'unsafe-eval'", "'unsafe-inline'") for a directive,
merging sane defaults with app config.
Default: 'unsafe-inline' is enabled for style-src and style-src-elem.
"""
flags = get_app_conf(
# Defaults that apply to all apps
default_flags = {}
if directive in ('style-src', 'style-src-elem'):
default_flags = {'unsafe-inline': True}
configured = get_app_conf(
applications,
application_id,
'server.csp.flags.' + directive,
False,
{}
)
tokens = []
for flag_name, enabled in flags.items():
# Merge defaults with configured flags (configured overrides defaults)
merged = {**default_flags, **configured}
tokens = []
for flag_name, enabled in merged.items():
if enabled:
tokens.append(f"'{flag_name}'")
return tokens
@staticmethod
def get_csp_inline_content(applications, application_id, directive):
"""
Return inline script/style snippets to hash for a given CSP directive.
Returns inline script/style snippets to hash for a given directive.
Accepts both scalar and list in config; always returns a list.
"""
snippets = get_app_conf(
applications,
@@ -86,7 +107,7 @@ class FilterModule(object):
@staticmethod
def get_csp_hash(content):
"""
Compute the SHA256 hash of the given inline content and return
Computes the SHA256 hash of the given inline content and returns
a CSP token like "'sha256-<base64>'".
"""
try:
@@ -96,6 +117,10 @@ class FilterModule(object):
except Exception as exc:
raise AnsibleFilterError(f"get_csp_hash failed: {exc}")
# -------------------------------
# Main builder
# -------------------------------
def build_csp_header(
self,
applications,
@@ -105,80 +130,80 @@ class FilterModule(object):
matomo_feature_name='matomo'
):
"""
Build the Content-Security-Policy header value dynamically based on application settings.
Inline hashes are read from applications[application_id].csp.hashes
Builds the Content-Security-Policy header value dynamically based on application settings.
- Flags (e.g., 'unsafe-eval', 'unsafe-inline') are read from server.csp.flags.<directive>,
with sane defaults applied in get_csp_flags (always 'unsafe-inline' for style-src and style-src-elem).
- Inline hashes are read from server.csp.hashes.<directive>.
- Whitelists are read from server.csp.whitelist.<directive>.
- Inline hashes are added only if the final tokens do NOT include 'unsafe-inline'.
"""
try:
directives = [
'default-src',
'connect-src',
'frame-ancestors',
'frame-src',
'script-src',
'script-src-elem',
'style-src',
'font-src',
'worker-src',
'manifest-src',
'media-src',
'default-src', # Fallback source list for content types not explicitly listed
'connect-src', # Allowed URLs for XHR, WebSockets, EventSource, fetch()
'frame-ancestors', # Who may embed this page
'frame-src', # Sources for nested browsing contexts (e.g., <iframe>)
'script-src', # Sources for script execution
'script-src-elem', # Sources for <script> elements
'style-src', # Sources for inline styles and <style>/<link> elements
'style-src-elem', # Sources for <style> and <link rel="stylesheet">
'font-src', # Sources for fonts
'worker-src', # Sources for workers
'manifest-src', # Sources for web app manifests
'media-src', # Sources for audio and video
]
parts = []
for directive in directives:
tokens = ["'self'"]
# unsafe-eval / unsafe-inline flags
# 1) Load flags (includes defaults from get_csp_flags)
flags = self.get_csp_flags(applications, application_id, directive)
tokens += flags
if directive in ['script-src-elem', 'connect-src']:
# Matomo integration
if self.is_feature_enabled(applications, matomo_feature_name, application_id):
matomo_domain = domains.get('web-app-matomo')[0]
if matomo_domain:
tokens.append(f"{web_protocol}://{matomo_domain}")
# Allow the loading of js from the cdn
if self.is_feature_enabled(applications, 'logout', application_id) or self.is_feature_enabled(applications, 'desktop', application_id):
domain = domains.get('web-svc-cdn')[0]
tokens.append(f"{web_protocol}://{domain}")
# 2) Allow fetching from internal CDN by default for selected directives
if directive in ['script-src-elem', 'connect-src', 'style-src-elem']:
tokens.append(get_url(domains, 'web-svc-cdn', web_protocol))
# ReCaptcha integration: allow loading scripts from Google if feature enabled
# 3) Matomo integration if feature is enabled
if directive in ['script-src-elem', 'connect-src']:
if self.is_feature_enabled(applications, matomo_feature_name, application_id):
tokens.append(get_url(domains, 'web-app-matomo', web_protocol))
# 4) ReCaptcha integration (scripts + frames) if feature is enabled
if self.is_feature_enabled(applications, 'recaptcha', application_id):
if directive in ['script-src-elem',"frame-src"]:
if directive in ['script-src-elem', 'frame-src']:
tokens.append('https://www.gstatic.com')
tokens.append('https://www.google.com')
# 5) Frame ancestors handling (desktop + logout support)
if directive == 'frame-ancestors':
# Enable loading via ancestors
if self.is_feature_enabled(applications, 'desktop', application_id):
# Allow being embedded by the desktop app domain (and potentially its parent)
domain = domains.get('web-app-desktop')[0]
sld_tld = ".".join(domain.split(".")[-2:]) # yields "example.com"
tokens.append(f"{sld_tld}") # yields "*.example.com"
sld_tld = ".".join(domain.split(".")[-2:]) # e.g., example.com
tokens.append(f"{sld_tld}")
if self.is_feature_enabled(applications, 'logout', application_id):
# Allow logout via infinito logout proxy
domain = domains.get('web-svc-logout')[0]
tokens.append(f"{web_protocol}://{domain}")
# Allow logout via keycloak app
domain = domains.get('web-app-keycloak')[0]
tokens.append(f"{web_protocol}://{domain}")
# whitelist
# Allow embedding via logout proxy and Keycloak app
tokens.append(get_url(domains, 'web-svc-logout', web_protocol))
tokens.append(get_url(domains, 'web-app-keycloak', web_protocol))
# 6) Custom whitelist entries
tokens += self.get_csp_whitelist(applications, application_id, directive)
# only add hashes if 'unsafe-inline' is NOT in flags
if "'unsafe-inline'" not in flags:
# 7) Add inline content hashes ONLY if final tokens do NOT include 'unsafe-inline'
# (Check tokens, not flags, to include defaults and later modifications.)
if "'unsafe-inline'" not in tokens:
for snippet in self.get_csp_inline_content(applications, application_id, directive):
tokens.append(self.get_csp_hash(snippet))
# Append directive
parts.append(f"{directive} {' '.join(tokens)};")
# static img-src
# 8) Static img-src directive (kept permissive for data/blob and any host)
parts.append("img-src * data: blob:;")
return ' '.join(parts)
except Exception as exc:
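# Sketch (annotation, not part of the diff): the shape of the assembled header,
# with hypothetical domains; each directive part ends with ';' and img-src is static.
parts = [
    "default-src 'self';",
    "script-src-elem 'self' https://cdn.example.com;",        # hypothetical CDN domain
    "img-src * data: blob:;",
]
header = ' '.join(parts)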

View File

@@ -1,49 +0,0 @@
import os
import re
import yaml
from ansible.errors import AnsibleFilterError
def get_application_id(role_name):
"""
Jinja2/Ansible filter: given a role name, load its vars/main.yml and return the application_id value.
"""
# Construct path: assumes current working directory is project root
vars_file = os.path.join(os.getcwd(), 'roles', role_name, 'vars', 'main.yml')
if not os.path.isfile(vars_file):
raise AnsibleFilterError(f"Vars file not found for role '{role_name}': {vars_file}")
try:
# Read entire file content to avoid lazy stream issues
with open(vars_file, 'r', encoding='utf-8') as f:
content = f.read()
data = yaml.safe_load(content)
except Exception as e:
raise AnsibleFilterError(f"Error reading YAML from {vars_file}: {e}")
# Ensure parsed data is a mapping
if not isinstance(data, dict):
raise AnsibleFilterError(
f"Error reading YAML from {vars_file}: expected mapping, got {type(data).__name__}"
)
# Detect malformed YAML: no valid identifier-like keys
valid_key_pattern = re.compile(r'^[A-Za-z_][A-Za-z0-9_]*$')
if data and not any(valid_key_pattern.match(k) for k in data.keys()):
raise AnsibleFilterError(f"Error reading YAML from {vars_file}: invalid top-level keys")
if 'application_id' not in data:
raise AnsibleFilterError(f"Key 'application_id' not found in {vars_file}")
return data['application_id']
class FilterModule(object):
"""
Ansible filter plugin entry point.
"""
def filters(self):
return {
'get_application_id': get_application_id,
}

View File

@@ -1,122 +0,0 @@
import os
import yaml
import re
from ansible.errors import AnsibleFilterError
# in-memory cache: application_id → (parsed_yaml, is_nested)
_cfg_cache = {}
def load_configuration(application_id, key):
if not isinstance(key, str):
raise AnsibleFilterError("Key must be a dotted-string, e.g. 'features.matomo'")
# locate roles/
here = os.path.dirname(__file__)
root = os.path.abspath(os.path.join(here, '..'))
roles_dir = os.path.join(root, 'roles')
if not os.path.isdir(roles_dir):
raise AnsibleFilterError(f"Roles directory not found at {roles_dir}")
# first time? load & cache
if application_id not in _cfg_cache:
config_path = None
# 1) primary: vars/main.yml declares it
for role in os.listdir(roles_dir):
mv = os.path.join(roles_dir, role, 'vars', 'main.yml')
if os.path.exists(mv):
try:
md = yaml.safe_load(open(mv)) or {}
except Exception:
md = {}
if md.get('application_id') == application_id:
cf = os.path.join(roles_dir, role, "config" , "main.yml")
if not os.path.exists(cf):
raise AnsibleFilterError(
f"Role '{role}' declares '{application_id}' but missing config/main.yml"
)
config_path = cf
break
# 2) fallback nested
if config_path is None:
for role in os.listdir(roles_dir):
cf = os.path.join(roles_dir, role, "config" , "main.yml")
if not os.path.exists(cf):
continue
try:
dd = yaml.safe_load(open(cf)) or {}
except Exception:
dd = {}
if isinstance(dd, dict) and application_id in dd:
config_path = cf
break
# 3) fallback flat
if config_path is None:
for role in os.listdir(roles_dir):
cf = os.path.join(roles_dir, role, "config" , "main.yml")
if not os.path.exists(cf):
continue
try:
dd = yaml.safe_load(open(cf)) or {}
except Exception:
dd = {}
# flat style: dict with all non-dict values
if isinstance(dd, dict) and not any(isinstance(v, dict) for v in dd.values()):
config_path = cf
break
if config_path is None:
return None
# parse once
try:
parsed = yaml.safe_load(open(config_path)) or {}
except Exception as e:
raise AnsibleFilterError(f"Error loading config/main.yml at {config_path}: {e}")
# detect nested vs flat
is_nested = isinstance(parsed, dict) and (application_id in parsed)
_cfg_cache[application_id] = (parsed, is_nested)
parsed, is_nested = _cfg_cache[application_id]
# pick base entry
entry = parsed[application_id] if is_nested else parsed
# resolve dotted key
key_parts = key.split('.')
for part in key_parts:
# Check if part has an index (e.g., domains.canonical[0])
match = re.match(r'([^\[]+)\[([0-9]+)\]', part)
if match:
part, index = match.groups()
index = int(index)
if isinstance(entry, dict) and part in entry:
entry = entry[part]
# Check if entry is a list and access the index
if isinstance(entry, list) and 0 <= index < len(entry):
entry = entry[index]
else:
raise AnsibleFilterError(
f"Index '{index}' out of range for key '{part}' in application '{application_id}'"
)
else:
raise AnsibleFilterError(
f"Key '{part}' not found under application '{application_id}'"
)
else:
if isinstance(entry, dict) and part in entry:
entry = entry[part]
else:
raise AnsibleFilterError(
f"Key '{part}' not found under application '{application_id}'"
)
return entry
class FilterModule(object):
def filters(self):
return {'load_configuration': load_configuration}

View File

@@ -1,55 +0,0 @@
from jinja2 import Undefined
def safe_placeholders(template: str, mapping: dict = None) -> str:
"""
Format a template like "{url}/logo.png".
If mapping is provided (not None) and ANY placeholder is missing or maps to None/empty string, the function will raise KeyError.
If mapping is None, missing placeholders or invalid templates return empty string.
Numerical zero or False are considered valid values.
Any other formatting errors return an empty string.
"""
# Non-string templates yield empty
if not isinstance(template, str):
return ''
class SafeDict(dict):
def __getitem__(self, key):
val = super().get(key, None)
# Treat None or empty string as missing
if val is None or (isinstance(val, str) and val == ''):
raise KeyError(key)
return val
def __missing__(self, key):
raise KeyError(key)
silent = mapping is None
data = mapping or {}
try:
return template.format_map(SafeDict(data))
except KeyError:
if silent:
return ''
raise
except Exception:
return ''
def safe_var(value):
"""
Ansible filter: returns the value unchanged unless it's Undefined or None,
in which case returns an empty string.
Catches all exceptions and yields ''.
"""
try:
if isinstance(value, Undefined) or value is None:
return ''
return value
except Exception:
return ''
class FilterModule(object):
def filters(self):
return {
'safe_var': safe_var,
'safe_placeholders': safe_placeholders,
}

View File

@@ -1,28 +0,0 @@
"""
Ansible filter plugin that joins a base string and a tail path safely.
If the base is falsy (None, empty, etc.), returns an empty string.
"""
def safe_join(base, tail):
"""
Safely join base and tail into a path or URL.
- base: the base string. If falsy, returns ''.
- tail: the string to append. Leading/trailing slashes are handled.
- On any exception, returns ''.
"""
try:
if not base:
return ''
base_str = str(base).rstrip('/')
tail_str = str(tail).lstrip('/')
return f"{base_str}/{tail_str}"
except Exception:
return ''
class FilterModule(object):
def filters(self):
return {
'safe_join': safe_join,
}

View File

@@ -0,0 +1,67 @@
# filter_plugins/timeout_start_sec_for_domains.py (only the core logic changed)
from ansible.errors import AnsibleFilterError
class FilterModule(object):
def filters(self):
return {
"timeout_start_sec_for_domains": self.timeout_start_sec_for_domains,
}
def timeout_start_sec_for_domains(
self,
domains_dict,
include_www=True,
per_domain_seconds=25,
overhead_seconds=30,
min_seconds=120,
max_seconds=3600,
):
"""
Args:
domains_dict (dict | list[str] | str): Either the domain mapping dict
(values can be str | list[str] | dict[str,str]) or an already
flattened list of domains, or a single domain string.
include_www (bool): If true, add 'www.<domain>' for non-www entries.
...
"""
try:
# Local flattener for dict inputs (like your generate_all_domains source)
def _flatten_from_dict(domains_map):
flat = []
for v in (domains_map or {}).values():
if isinstance(v, str):
flat.append(v)
elif isinstance(v, list):
flat.extend(v)
elif isinstance(v, dict):
flat.extend(v.values())
return flat
# Accept dict | list | str
if isinstance(domains_dict, dict):
flat = _flatten_from_dict(domains_dict)
elif isinstance(domains_dict, list):
flat = list(domains_dict)
elif isinstance(domains_dict, str):
flat = [domains_dict]
else:
raise AnsibleFilterError(
"Expected 'domains_dict' to be dict | list | str."
)
if include_www:
base_unique = sorted(set(flat))
www_variants = [f"www.{d}" for d in base_unique if not str(d).lower().startswith("www.")]
flat.extend(www_variants)
unique_domains = sorted(set(flat))
count = len(unique_domains)
raw = overhead_seconds + per_domain_seconds * count
clamped = max(min_seconds, min(max_seconds, int(raw)))
return clamped
except AnsibleFilterError:
raise
except Exception as exc:
raise AnsibleFilterError(f"timeout_start_sec_for_domains failed: {exc}")

filter_plugins/url_join.py Normal file
View File

@@ -0,0 +1,146 @@
"""
Ansible filter plugin that safely joins URL components from a list.
- Requires a valid '<scheme>://' in the first element (any RFC-3986-ish scheme)
- Preserves the double slash after the scheme, collapses other duplicate slashes
- Supports query parts introduced by elements starting with '?' or '&'
* first query element uses '?', subsequent ones use '&' (regardless of the given prefix)
* each query element must be exactly one 'key=value' pair
* query elements may only appear after path elements; once query starts, no more path parts
- Raises specific AnsibleFilterError messages for common misuse
"""
import re
from ansible.errors import AnsibleFilterError
_SCHEME_RE = re.compile(r'^([a-zA-Z][a-zA-Z0-9+.\-]*://)(.*)$')
_QUERY_PAIR_RE = re.compile(r'^[^&=?#]+=[^&?#]*$') # key=value (no '&', no extra '?' or '#')
def _to_str_or_error(obj, index):
"""Cast to str, raising a specific AnsibleFilterError with index context."""
try:
return str(obj)
except Exception as e:
raise AnsibleFilterError(
f"url_join: unable to convert part at index {index} to string: {e}"
)
def url_join(parts):
"""
Join a list of URL parts, URL-aware (scheme, path, query).
Args:
parts (list|tuple): URL segments. First element MUST include '<scheme>://'.
Path elements are plain strings.
Query elements must start with '?' or '&' and contain exactly one 'key=value'.
Returns:
str: Joined URL.
Raises:
AnsibleFilterError: with specific, descriptive messages.
"""
# --- basic input validation ---
if parts is None:
raise AnsibleFilterError("url_join: parts must be a non-empty list; got None")
if not isinstance(parts, (list, tuple)):
raise AnsibleFilterError(
f"url_join: parts must be a list/tuple; got {type(parts).__name__}"
)
if len(parts) == 0:
raise AnsibleFilterError("url_join: parts must be a non-empty list")
# --- first element must carry a scheme ---
first_raw = parts[0]
if first_raw is None:
raise AnsibleFilterError(
"url_join: first element must include a scheme like 'https://'; got None"
)
first_str = _to_str_or_error(first_raw, 0)
m = _SCHEME_RE.match(first_str)
if not m:
raise AnsibleFilterError(
"url_join: first element must start with '<scheme>://', e.g. 'https://example.com'; "
f"got '{first_str}'"
)
scheme = m.group(1) # e.g., 'https://', 'ftp://', 'myapp+v1://'
after_scheme = m.group(2).lstrip('/') # strip only leading slashes right after scheme
# --- iterate parts: collect path parts until first query part; then only query parts allowed ---
path_parts = []
query_pairs = []
in_query = False
for i, p in enumerate(parts):
if p is None:
# skip None silently (consistent with path_join-ish behavior)
continue
s = _to_str_or_error(p, i)
# disallow additional scheme in later parts
if i > 0 and "://" in s:
raise AnsibleFilterError(
f"url_join: only the first element may contain a scheme; part at index {i} "
f"looks like a URL with scheme ('{s}')."
)
# first element: replace with remainder after scheme and continue
if i == 0:
s = after_scheme
# check if this is a query element (starts with ? or &)
if s.startswith('?') or s.startswith('&'):
in_query = True
raw_pair = s[1:] # strip the leading ? or &
if raw_pair == '':
raise AnsibleFilterError(
f"url_join: query element at index {i} is empty; expected '?key=value' or '&key=value'"
)
# Disallow multiple pairs in a single element; enforce exactly one key=value
if '&' in raw_pair:
raise AnsibleFilterError(
f"url_join: query element at index {i} must contain exactly one 'key=value' pair "
f"without '&'; got '{s}'"
)
if not _QUERY_PAIR_RE.match(raw_pair):
raise AnsibleFilterError(
f"url_join: query element at index {i} must match 'key=value' (no extra '?', '&', '#'); got '{s}'"
)
query_pairs.append(raw_pair)
else:
# non-query element
if in_query:
# once query started, no more path parts allowed
raise AnsibleFilterError(
f"url_join: path element found at index {i} after query parameters started; "
f"query parts must come last"
)
# normal path part: strip slashes to avoid duplicate '/'
path_parts.append(s.strip('/'))
# normalize path: remove empty chunks
path_parts = [p for p in path_parts if p != '']
# --- build result ---
# path portion
if path_parts:
joined_path = "/".join(path_parts)
base = scheme + joined_path
else:
# no path beyond scheme
base = scheme
# query portion
if query_pairs:
base = base + "?" + "&".join(query_pairs)
return base
class FilterModule(object):
def filters(self):
return {
'url_join': url_join,
}
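# Usage sketch (annotation, not part of the diff):
url_join(["https://example.com", "api", "/v1/", "?page=2", "&sort=asc"])
# -> "https://example.com/api/v1?page=2&sort=asc"
# url_join(["example.com", "api"]) would raise AnsibleFilterError (no '<scheme>://')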

View File

@@ -0,0 +1,21 @@
from ansible.errors import AnsibleFilterError
def docker_volume_path(volume_name: str) -> str:
"""
Returns the absolute filesystem path of a Docker volume.
Example:
"akaunting_data" -> "/var/lib/docker/volumes/akaunting_data/_data/"
"""
if not volume_name or not isinstance(volume_name, str):
raise AnsibleFilterError(f"Invalid volume name: {volume_name}")
return f"/var/lib/docker/volumes/{volume_name}/_data/"
class FilterModule(object):
"""Docker volume path filters."""
def filters(self):
return {
"docker_volume_path": docker_volume_path,
}
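# Usage sketch (annotation, not part of the diff; mirrors the docstring example):
docker_volume_path("akaunting_data")  # -> "/var/lib/docker/volumes/akaunting_data/_data/"
# docker_volume_path("") would raise AnsibleFilterError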

View File

@@ -1,10 +1,10 @@
# Mode
# The following modes can be combined with each other
MODE_TEST: false # Executes test routines instead of productive routines
MODE_UPDATE: true # Executes updates
MODE_BACKUP: true # Activates the backup before the update procedure
MODE_CLEANUP: true # Cleanup unused files and configurations
MODE_DEBUG: false # This enables debugging in ansible and in the apps, You SHOULD NOT enable this on production servers
MODE_RESET: false # Cleans up all Infinito.Nexus files. It's necessary to run to whole playbook and not particial roles when using this function.
MODE_ASSERT: false # Executes validation tasks during the run.
MODE_TEST: false # Executes test routines instead of productive routines
MODE_UPDATE: true # Executes updates
MODE_DEBUG: false # This enables debugging in Ansible and in the apps. You SHOULD NOT enable this on production servers
MODE_RESET: false # Cleans up all Infinito.Nexus files. When using this mode it's necessary to run the whole playbook, not partial roles.
MODE_BACKUP: "{{ MODE_UPDATE }}" # Activates the backup before the update procedure
MODE_CLEANUP: "{{ MODE_DEBUG }}" # Cleanup unused files and configurations
MODE_ASSERT: "{{ MODE_DEBUG }}" # Executes validation tasks during the run.

View File

@@ -12,7 +12,6 @@ SYS_SERVICE_BACKUP_RMT_2_LOC: "{{ 'svc-bkp-rmt-2-loc' | get_se
SYS_SERVICE_BACKUP_DOCKER_2_LOC: "{{ 'sys-ctl-bkp-docker-2-loc' | get_service_name(SOFTWARE_NAME) }}"
SYS_SERVICE_REPAIR_DOCKER_SOFT: "{{ 'sys-ctl-rpr-docker-soft' | get_service_name(SOFTWARE_NAME) }}"
SYS_SERVICE_REPAIR_DOCKER_HARD: "{{ 'sys-ctl-rpr-docker-hard' | get_service_name(SOFTWARE_NAME) }}"
SYS_SERVICE_UPDATE_DOCKER: "{{ 'update-docker' | get_service_name(SOFTWARE_NAME) }}"
## On Failure
SYS_SERVICE_ON_FAILURE_COMPOSE: "{{ ('sys-ctl-alm-compose@') | get_service_name(SOFTWARE_NAME, False) }}%n.service"
@@ -46,8 +45,7 @@ SYS_SERVICE_GROUP_MANIPULATION: >
SYS_SERVICE_GROUP_CLEANUP +
SYS_SERVICE_GROUP_REPAIR +
SYS_SERVICE_GROUP_OPTIMIZATION +
SYS_SERVICE_GROUP_MAINTANANCE +
[ SYS_SERVICE_UPDATE_DOCKER ]
SYS_SERVICE_GROUP_MAINTANANCE
) | sort
}}

View File

@@ -2,20 +2,20 @@
# Service Timers
## Meta
SYS_TIMER_ALL_ENABLED: "{{ not MODE_DEBUG }}" # Runtime Variables for Process Control - Activates all timers, independend if the handlers had been triggered
SYS_TIMER_ALL_ENABLED: "{{ MODE_DEBUG }}" # Runtime Variables for Process Control - Activates all timers, independent of whether the handlers have been triggered
## Server Tact Variables
HOURS_SERVER_AWAKE: "0..23" # Ours in which the server is "awake" (100% working). Rest of the time is reserved for maintanance
RANDOMIZED_DELAY_SEC: "5min" # Random delay for systemd timers to avoid peak loads.
HOURS_SERVER_AWAKE: "0..23" # Hours in which the server is "awake" (100% working). The rest of the time is reserved for maintenance
RANDOMIZED_DELAY_SEC: "5min" # Random delay for systemd timers to avoid peak loads.
## Timeouts for all services
SYS_TIMEOUT_CLEANUP_SERVICES: "15min"
SYS_TIMEOUT_STORAGE_OPTIMIZER: "10min"
SYS_TIMEOUT_BACKUP_SERVICES: "1h"
SYS_TIMEOUT_HEAL_DOCKER: "30min"
SYS_TIMEOUT_UPDATE_DOCKER: "2min"
SYS_TIMEOUT_RESTART_DOCKER: "{{ SYS_TIMEOUT_UPDATE_DOCKER }}"
SYS_TIMEOUT_DOCKER_RPR_HARD: "10min"
SYS_TIMEOUT_DOCKER_RPR_SOFT: "{{ SYS_TIMEOUT_DOCKER_RPR_HARD }}"
SYS_TIMEOUT_CLEANUP_SERVICES: "15min"
SYS_TIMEOUT_DOCKER_UPDATE: "20min"
SYS_TIMEOUT_STORAGE_OPTIMIZER: "{{ SYS_TIMEOUT_DOCKER_UPDATE }}"
SYS_TIMEOUT_BACKUP_SERVICES: "60min"
## On Calendar
@@ -37,7 +37,6 @@ SYS_SCHEDULE_CLEANUP_FAILED_BACKUPS: "*-*-* 12:00:00"
### Schedule for repair services
SYS_SCHEDULE_REPAIR_BTRFS_AUTO_BALANCER: "Sat *-*-01..07 00:00:00" # Execute btrfs auto balancer every first Saturday of a month
SYS_SCHEDULE_REPAIR_DOCKER_SOFT: "*-*-* {{ HOURS_SERVER_AWAKE }}:30:00" # Heal unhealthy docker instances once per hour
SYS_SCHEDULE_REPAIR_DOCKER_HARD: "Sun *-*-* 08:00:00" # Restart docker instances every Sunday at 8:00 AM
### Schedule for backup tasks

View File

@@ -10,7 +10,7 @@ defaults_networks:
# /28 Networks, 14 Usable Ip Addresses
web-app-akaunting:
subnet: 192.168.101.0/28
web-app-attendize:
web-app-confluence:
subnet: 192.168.101.16/28
web-app-baserow:
subnet: 192.168.101.32/28
@@ -34,8 +34,8 @@ defaults_networks:
subnet: 192.168.101.176/28
web-app-listmonk:
subnet: 192.168.101.192/28
# Free:
# subnet: 192.168.101.208/28
web-app-jira:
subnet: 192.168.101.208/28
web-app-matomo:
subnet: 192.168.101.224/28
web-app-mastodon:
@@ -48,7 +48,7 @@ defaults_networks:
subnet: 192.168.102.16/28
web-app-moodle:
subnet: 192.168.102.32/28
web-app-mybb:
web-app-bookwyrm:
subnet: 192.168.102.48/28
web-app-nextcloud:
subnet: 192.168.102.64/28
@@ -96,6 +96,12 @@ defaults_networks:
subnet: 192.168.103.160/28
web-svc-logout:
subnet: 192.168.103.176/28
web-app-chess:
subnet: 192.168.103.192/28
web-app-magento:
subnet: 192.168.103.208/28
web-app-bridgy-fed:
subnet: 192.168.103.224/28
# /24 Networks / 254 Usable Clients
web-app-bigbluebutton:

View File

@@ -2,12 +2,12 @@ ports:
# Ports which are exposed to localhost
localhost:
database:
svc-db-postgres: 5432
svc-db-mariadb: 3306
svc-db-postgres: 5432
svc-db-mariadb: 3306
# https://developer.mozilla.org/de/docs/Web/API/WebSockets_API
websocket:
web-app-mastodon: 4001
web-app-espocrm: 4002
web-app-mastodon: 4001
web-app-espocrm: 4002
oauth2_proxy:
web-app-phpmyadmin: 4181
web-app-lam: 4182
@@ -26,7 +26,7 @@ ports:
web-app-gitea: 8002
web-app-wordpress: 8003
web-app-mediawiki: 8004
web-app-mybb: 8005
web-app-confluence: 8005
web-app-yourls: 8006
web-app-mailu: 8007
web-app-elk: 8008
@@ -36,7 +36,7 @@ ports:
web-app-funkwhale: 8012
web-app-roulette-wheel: 8013
web-app-joomla: 8014
web-app-attendize: 8015
web-app-jira: 8015
web-app-pgadmin: 8016
web-app-baserow: 8017
web-app-matomo: 8018
@@ -70,6 +70,11 @@ ports:
web-app-pretix: 8046
web-app-mig: 8047
web-svc-logout: 8048
web-app-bookwyrm: 8049
web-app-chess: 8050
web-app-bluesky_view: 8051
web-app-magento: 8052
web-app-bridgy-fed: 8053
web-app-bigbluebutton: 48087 # This port is predefined by BBB. @todo Try to change this to an 8XXX port
public:
# The following ports should be changed to 22 on the subdomain via stream mapping
@@ -80,9 +85,10 @@ ports:
svc-db-openldap: 636
stun:
web-app-bigbluebutton: 3478 # Not sure if it's right placed here or if it should be moved to localhost section
web-app-nextcloud: 3479
# Occupied by BBB: 3479
web-app-nextcloud: 3480
turn:
web-app-bigbluebutton: 5349 # Not sure if it's right placed here or if it should be moved to localhost section
web-app-nextcloud: 5350 # Not used yet
web-app-nextcloud: 5350 # Not used yet
federation:
web-app-matrix_synapse: 8448

View File

@@ -25,7 +25,7 @@ defaults_oidc:
URL: "{{ _oidc_url }}"
CLIENT:
ID: "{{ _oidc_client_id }}" # Client identifier, typically matching your primary domain
# secret: # Client secret for authenticating with the OIDC provider (set in the inventory file). Recommend greater then 32 characters
# SECRET: # Client secret for authenticating with the OIDC provider (set in the inventory file). Recommended to be longer than 32 characters
REALM: "{{ _oidc_client_realm }}" # The realm to which the client belongs in the OIDC provider
ISSUER_URL: "{{ _oidc_client_issuer_url }}" # Base URL of the OIDC provider (issuer)
DISCOVERY_DOCUMENT: "{{ _oidc_client_issuer_url ~ '/.well-known/openid-configuration' }}" # URL for fetching the provider's configuration details

View File

@@ -14,22 +14,22 @@ _ldap_domain: "{{ PRIMARY_DOMAIN }}" # LDAP is just listening to
_ldap_user_id: "uid"
_ldap_filters_users_all: "(|(objectclass=inetOrgPerson))"
ldap:
LDAP:
# Distinguished Names (DN)
dn:
DN:
# -------------------------------------------------------------------------
# Base DN / Suffix
# This is the top-level naming context for your directory, used as the
# default search base for most operations (e.g. adding users, groups).
# Example: “dc=example,dc=com”
root: "{{ LDAP_DN_BASE }}"
administrator:
ROOT: "{{ LDAP_DN_BASE }}"
ADMINISTRATOR:
# -------------------------------------------------------------------------
# Data-Tree Administrator Bind DN
# The DN used to authenticate for regular directory operations under
# the data tree (adding users, modifying attributes, creating OUs, etc.).
# Typically: “cn=admin,dc=example,dc=com”
data: "cn={{ applications['svc-db-openldap'].users.administrator.username }},{{ LDAP_DN_BASE }}"
DATA: "cn={{ applications['svc-db-openldap'].users.administrator.username }},{{ LDAP_DN_BASE }}"
# -------------------------------------------------------------------------
# Config-Tree Administrator Bind DN
@@ -37,9 +37,9 @@ ldap:
# need to load or modify schema, overlays, modules, or other server-
# level settings.
# Typically: “cn=admin,cn=config”
configuration: "cn={{ applications['svc-db-openldap'].users.administrator.username }},cn=config"
CONFIGURATION: "cn={{ applications['svc-db-openldap'].users.administrator.username }},cn=config"
ou:
OU:
# -------------------------------------------------------------------------
# Organizational Units (OUs)
# Pre-created containers in the directory tree to logically separate entries:
@@ -47,9 +47,9 @@ ldap:
# groups: Contains organizational or business groups (e.g., departments, teams).
# roles: Contains application-specific RBAC roles
# (e.g., "cn=app1-user", "cn=yourls-admin").
users: "ou=users,{{ LDAP_DN_BASE }}"
groups: "ou=groups,{{ LDAP_DN_BASE }}"
roles: "ou=roles,{{ LDAP_DN_BASE }}"
USERS: "ou=users,{{ LDAP_DN_BASE }}"
GROUPS: "ou=groups,{{ LDAP_DN_BASE }}"
ROLES: "ou=roles,{{ LDAP_DN_BASE }}"
# -------------------------------------------------------------------------
# Additional Notes
@@ -59,17 +59,17 @@ ldap:
# for ordinary user/group operations, and vice versa.
# Password to access dn.bind
bind_credential: "{{ applications | get_app_conf('svc-db-openldap', 'credentials.administrator_database_password') }}"
server:
domain: "{{ _ldap_name if _ldap_docker_network_enabled else _ldap_domain }}" # Mapping for public or locale access
port: "{{ _ldap_server_port }}"
uri: "{{ _ldap_protocol }}://{{ _ldap_name if _ldap_docker_network_enabled else _ldap_domain }}:{{ _ldap_server_port }}"
security: "" #TLS, SSL - Leave empty for none
network:
local: "{{ _ldap_docker_network_enabled }}" # Uses the application configuration to define if local network should be available or not
user:
objects:
structural:
BIND_CREDENTIAL: "{{ applications | get_app_conf('svc-db-openldap', 'credentials.administrator_database_password') }}"
SERVER:
DOMAIN: "{{ _ldap_name if _ldap_docker_network_enabled else _ldap_domain }}" # Mapping for public or local access
PORT: "{{ _ldap_server_port }}"
URI: "{{ _ldap_protocol }}://{{ _ldap_name if _ldap_docker_network_enabled else _ldap_domain }}:{{ _ldap_server_port }}"
SECURITY: "" #TLS, SSL - Leave empty for none
NETWORK:
LOCAL: "{{ _ldap_docker_network_enabled }}" # Uses the application configuration to define if local network should be available or not
USER:
OBJECTS:
STRUCTURAL:
- person # Structural Classes define the core identity of an entry:
# • Specify mandatory attributes (e.g. sn, cn)
# • Each entry must have exactly one structural class
@@ -77,26 +77,26 @@ ldap:
# (e.g. mail, employeeNumber)
- posixAccount # Provides UNIX account attributes (uidNumber, gidNumber,
# homeDirectory)
auxiliary:
nextloud_user: "nextcloudUser" # Auxiliary Classes attach optional attributes without
AUXILIARY:
NEXTCLOUD_USER: "nextcloudUser" # Auxiliary Classes attach optional attributes without
# changing the entry's structural role. Here they add
# nextcloudQuota and nextcloudEnabled for Nextcloud.
ssh_public_key: "ldapPublicKey" # Allows storing SSH public keys for services like Gitea.
attributes:
SSH_PUBLIC_KEY: "ldapPublicKey" # Allows storing SSH public keys for services like Gitea.
ATTRIBUTES:
# Attribute to identify the user
id: "{{ _ldap_user_id }}"
mail: "mail"
fullname: "cn"
firstname: "givenname"
surname: "sn"
ssh_public_key: "sshPublicKey"
nextcloud_quota: "nextcloudQuota"
filters:
users:
login: "(&{{ _ldap_filters_users_all }}({{_ldap_user_id}}=%{{_ldap_user_id}}))"
all: "{{ _ldap_filters_users_all }}"
rbac:
flavors:
ID: "{{ _ldap_user_id }}"
MAIL: "mail"
FULLNAME: "cn"
FIRSTNAME: "givenname"
SURNAME: "sn"
SSH_PUBLIC_KEY: "sshPublicKey"
NEXTCLOUD_QUOTA: "nextcloudQuota"
FILTERS:
USERS:
LOGIN: "(&{{ _ldap_filters_users_all }}({{_ldap_user_id}}=%{{_ldap_user_id}}))"
ALL: "{{ _ldap_filters_users_all }}"
RBAC:
FLAVORS:
# Valid values posixGroup, groupOfNames
- groupOfNames
# - posixGroup

View File

@@ -0,0 +1,53 @@
from __future__ import annotations
from ansible.plugins.lookup import LookupBase
from ansible.errors import AnsibleError
import os
class LookupModule(LookupBase):
"""
Return a cache-busting string based on the LOCAL file's mtime.
Usage (single path → string via Jinja):
{{ lookup('local_mtime_qs', '/path/to/file.css') }}
-> "?version=1712323456"
Options:
param (str): query parameter name (default: "version")
mode (str): "qs" (default) → returns "?<param>=<mtime>"
"epoch" → returns "<mtime>"
Multiple paths (returns list, one result per term):
{{ lookup('local_mtime_qs', '/a.js', '/b.js', param='v') }}
"""
def run(self, terms, variables=None, **kwargs):
if not terms:
return []
param = kwargs.get('param', 'version')
mode = kwargs.get('mode', 'qs')
if mode not in ('qs', 'epoch'):
raise AnsibleError("local_mtime_qs: 'mode' must be 'qs' or 'epoch'")
results = []
for term in terms:
path = os.path.abspath(os.path.expanduser(str(term)))
# Fail fast if path is missing or not a regular file
if not os.path.exists(path):
raise AnsibleError(f"local_mtime_qs: file does not exist: {path}")
if not os.path.isfile(path):
raise AnsibleError(f"local_mtime_qs: not a regular file: {path}")
try:
mtime = int(os.stat(path).st_mtime)
except OSError as e:
raise AnsibleError(f"local_mtime_qs: cannot stat '{path}': {e}")
if mode == 'qs':
results.append(f"?{param}={mtime}")
else: # mode == 'epoch'
results.append(str(mtime))
return results
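# Direct-invocation sketch (annotation, not part of the diff; assumes /tmp/app.css exists locally):
lm = LookupModule()
lm.run(['/tmp/app.css'], param='v')     # -> ["?v=<mtime>"]
lm.run(['/tmp/app.css'], mode='epoch')  # -> ["<mtime>"]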

View File

@@ -1,9 +1,4 @@
roles:
cmp:
title: "Compositions"
description: "Composition of other roles."
icon: "fas fa-sitemap"
invokable: false
docker:
title: "Docker Toolkit"
description: "Generic Docker helpers and utilities (compose wrappers, container tooling)."
@@ -56,6 +51,21 @@ roles:
description: "DNS providers, records, and rDNS management (Cloudflare, Hetzner, etc.)."
icon: "fas fa-network-wired"
invokable: false
stk:
title: "Stack"
description: "Stack levels to setup the server"
icon: "fas fa-bars-staggered"
invokable: false
front:
title: "System Frontend Helpers"
description: "Frontend helpers for reverse-proxied apps (injection, shared assets, CDN plumbing)."
icon: "fas fa-wand-magic-sparkles"
invokable: false
inj:
title: "Injection"
description: "Composable HTML injection roles (CSS, JS, logout interceptor, analytics, desktop iframe) for Nginx/OpenResty via sub_filter/Lua with CDN-backed assets."
icon: "fas fa-filter"
invokable: false
update:
title: "Updates & Package Management"
description: "OS & package updates"
@@ -101,21 +111,6 @@ roles:
description: "Developer-centric server utilities and admin toolkits."
icon: "fas fa-code"
invokable: false
srv:
title: "Server"
description: "General server roles for provisioning and managing server infrastructure—covering web servers, proxy servers, network services, and other backend components."
icon: "fas fa-server"
invokable: false
web:
title: "Webserver"
description: "Web-server roles for installing and configuring Nginx (core, TLS, injection filters, composer modules)."
icon: "fas fa-server"
invokable: false
proxy:
title: "Proxy Server"
description: "Proxy-server roles for virtual-host orchestration and reverse-proxy setups."
icon: "fas fa-project-diagram"
invokable: false
web:
title: "Web Infrastructure"
description: "Roles for managing web infrastructure—covering static content services and deployable web applications."

View File

@@ -1,11 +0,0 @@
# Database Docker with Web Proxy
This role builds on `cmp-db-docker` by adding a reverse-proxy frontend for HTTP access to your database service.
## Features
- **Database Composition**
Leverages the `cmp-db-docker` role to stand up your containerized database (PostgreSQL, MariaDB, etc.) with backups and user management.
- **Reverse Proxy**
Includes the `srv-domain-provision` role to configure a proxy (e.g. nginx) for routing HTTP(S) traffic to your database UI or management endpoint.

View File

@@ -1 +0,0 @@
DATABASE_VARS_FILE: "{{ playbook_dir }}/roles/cmp-rdbms/vars/database.yml"

View File

@@ -1 +0,0 @@
{% include 'roles/cmp-rdbms/templates/services/' + database_type + '.yml.j2' %}

View File

@@ -1,20 +0,0 @@
# Helper variables
_dbtype: "{{ (database_type | d('') | trim) }}"
_database_id: "{{ ('svc-db-' ~ _dbtype) if _dbtype else '' }}"
_database_central_name: "{{ (applications | get_app_conf(_database_id, 'docker.services.' ~ _dbtype ~ '.name', False, '')) if _dbtype else '' }}"
_database_consumer_id: "{{ database_application_id | d(application_id) }}"
_database_consumer_entity_name: "{{ _database_consumer_id | get_entity_name }}"
_database_central_enabled: "{{ (applications | get_app_conf(_database_consumer_id, 'features.central_database', False)) if _dbtype else False }}"
# Definition
database_name: "{{ _database_consumer_entity_name }}"
database_instance: "{{ _database_central_name if _database_central_enabled else database_name }}" # This could lead to bugs at dedicated database @todo cleanup
database_host: "{{ _database_central_name if _database_central_enabled else 'database' }}" # This could lead to bugs at dedicated database @todo cleanup
database_username: "{{ _database_consumer_entity_name }}"
database_password: "{{ applications | get_app_conf(_database_consumer_id, 'credentials.database_password', true) }}"
database_port: "{{ (ports.localhost.database[_database_id] | d('')) if _dbtype else '' }}"
database_env: "{{ docker_compose.directories.env }}{{ database_type }}.env"
database_url_jdbc: "jdbc:{{ database_type if database_type == 'mariadb' else 'postgresql' }}://{{ database_host }}:{{ database_port }}/{{ database_name }}"
database_url_full: "{{ database_type }}://{{ database_username }}:{{ database_password }}@{{ database_host }}:{{ database_port }}/{{ database_name }}"
database_volume: "{{ _database_consumer_entity_name ~ '_' if not _database_central_enabled }}{{ database_host }}"

View File

@@ -19,3 +19,5 @@
template:
src: caffeine.desktop.j2
dest: "{{auto_start_directory}}caffeine.desktop"
- include_tasks: utils/run_once.yml

View File

@@ -1,4 +1,3 @@
- block:
- include_tasks: 01_core.yml
- include_tasks: utils/run_once.yml
when: run_once_desk_gnome_caffeine is not defined

View File

@@ -48,4 +48,6 @@
state: present
create: yes
mode: "0644"
become: false
become: false
- include_tasks: utils/run_once.yml

View File

@@ -1,4 +1,3 @@
- block:
- include_tasks: 01_core.yml
- include_tasks: utils/run_once.yml
when: run_once_desk_ssh is not defined

View File

@@ -1,4 +0,0 @@
---
- name: reload virtualbox kernel modules
become: true
command: vboxreload

View File

@@ -1,8 +1,14 @@
---
- name: Setup locale.gen
template: src=locale.gen dest=/etc/locale.gen
template:
src: locale.gen.j2
dest: /etc/locale.gen
- name: Setup locale.conf
template: src=locale.conf dest=/etc/locale.conf
template:
src: locale.conf.j2
dest: /etc/locale.conf
- name: Generate locales
shell: locale-gen
become: true

View File

@@ -1,2 +0,0 @@
LANG=en_US.UTF-8
LANGUAGE=en_US.UTF-8

View File

@@ -0,0 +1,2 @@
LANG={{ HOST_LL_CC }}.UTF-8
LANGUAGE={{ HOST_LL_CC }}.UTF-8

View File

@@ -20,7 +20,7 @@ To offer a centralized, extensible system for managing containerized application
- **Reset Logic:** Cleans previous Compose project files and data when `MODE_RESET` is enabled.
- **Handlers for Runtime Control:** Automatically builds, sets up, or restarts containers based on handlers.
- **Template-ready Service Files:** Predefined service base and health check templates.
- **Integration Support:** Compatible with `srv-proxy-core` and other Infinito.Nexus service roles.
- **Integration Support:** Compatible with `sys-svc-proxy` and other Infinito.Nexus service roles.
## Administration Tips

View File

@@ -15,10 +15,17 @@
- name: docker compose pull
shell: |
set -euo pipefail
lock="{{ [ PATH_DOCKER_COMPOSE_PULL_LOCK_DIR | docker_compose.directories.instance ] | path_join | hash('sha1') }}"
lock="{{ [ PATH_DOCKER_COMPOSE_PULL_LOCK_DIR, (docker_compose.directories.instance | hash('sha1')) ~ '.lock' ] | path_join }}"
if [ ! -e "$lock" ]; then
mkdir -p "$(dirname "$lock")"
docker compose pull
if docker compose config | grep -qE '^[[:space:]]+build:'; then
docker compose build --pull
fi
if docker compose pull --help 2>/dev/null | grep -q -- '--ignore-buildable'; then
docker compose pull --ignore-buildable
else
docker compose pull || true
fi
: > "$lock"
echo "pulled"
fi
@@ -41,7 +48,7 @@
set -euo pipefail
docker compose build || {
echo "Retrying without cache and pulling bases...";
docker compose build --no-cache --pull;
docker compose build --no-cache{{ ' --pull' if MODE_UPDATE | bool else ''}};
}
args:
chdir: "{{ docker_compose.directories.instance }}"

View File

@@ -5,7 +5,9 @@
loop:
- "{{ application_id | abs_role_path_by_application_id }}/templates/Dockerfile.j2"
- "{{ application_id | abs_role_path_by_application_id }}/files/Dockerfile"
notify: docker compose up
notify:
- docker compose up
- docker compose build
register: create_dockerfile_result
failed_when:
- create_dockerfile_result is failed

View File

@@ -2,7 +2,7 @@
services:
{# Load Database #}
{% if applications | is_docker_service_enabled(application_id, 'database') %}
{% include 'roles/cmp-rdbms/templates/services/main.yml.j2' %}
{% include 'roles/sys-svc-rdbms/templates/services/main.yml.j2' %}
{% endif %}
{# Load Redis #}
{% if applications | is_docker_service_enabled(application_id, 'redis') or applications | get_app_conf(application_id, 'features.oauth2', False) %}

View File

@@ -3,6 +3,10 @@
- "CMD"
- "curl"
- "-f"
{% if container_hostname is defined %}
- "-H"
- "Host: {{ container_hostname }}"
{% endif %}
- "http://127.0.0.1{{ (":" ~ container_port) if container_port is defined else '' }}/{{ container_healthcheck | default('') }}"
interval: 1m
timeout: 10s

View File

@@ -0,0 +1,7 @@
healthcheck:
test: ["CMD-SHELL", "nc -z localhost {{ container_port }} || exit 1"]
interval: 30s
timeout: 3s
retries: 3
start_period: 10s
{{ "\n" }}

View File

@@ -16,29 +16,23 @@
- name: Create installation directory for Kevin's Package Manager
file:
path: "{{ pkgmgr_install_path }}"
path: "{{ PKGMGR_INSTALL_PATH }}"
state: directory
mode: '0755'
become: true
- name: Clone Kevin's Package Manager repository
git:
repo: "{{ pkgmgr_repo_url }}"
dest: "{{ pkgmgr_install_path }}"
repo: "{{ PKGMGR_REPO_URL }}"
dest: "{{ PKGMGR_INSTALL_PATH }}"
version: "HEAD"
force: yes
become: true
- name: Ensure main.py is executable
file:
path: "{{ pkgmgr_install_path }}/main.py"
mode: '0755'
become: true
- name: create config.yaml
template:
src: config.yaml.j2
dest: "{{ pkgmgr_config_path }}"
dest: "{{ PKGMGR_CONFIG_PATH }}"
become: true
- name: Run the Package Manager install command to create an alias for Kevin's package manager
@@ -46,6 +40,10 @@
source ~/.venvs/pkgmgr/bin/activate
make setup
args:
chdir: "{{ pkgmgr_install_path }}"
chdir: "{{ PKGMGR_INSTALL_PATH }}"
executable: /bin/bash
become: true
- name: "Update all repositories with pkgmgr"
command: "pkgmgr pull --all"
when: MODE_UPDATE | bool

View File

@@ -1,3 +1,3 @@
directories:
repositories: "{{repositories_directory}}"
binaries: "{{binaries_directory}}"
repositories: "{{ PKGMGR_REPOSITORIES_DIR }}"
binaries: "{{ PKGMGR_BINARIES_DIR }}"

View File

@@ -2,16 +2,16 @@
# Variables for Kevin's Package Manager installation
# The Git repository URL for Kevin's Package Manager
pkgmgr_repo_url: "https://github.com/kevinveenbirkenbach/package-manager.git"
# Directory which contains all Repositories managed by Kevin's Package Manager
repositories_directory: "/opt/Repositories/"
# The directory where the repository will be cloned
pkgmgr_install_path: "{{repositories_directory}}github.com/kevinveenbirkenbach/package-manager"
# File containing the configuration
pkgmgr_config_path: "{{pkgmgr_install_path}}/config/config.yaml"
PKGMGR_REPO_URL: "https://github.com/kevinveenbirkenbach/package-manager.git"
# The directory where executable aliases will be installed (ensure it's in your PATH)
binaries_directory: "/usr/local/bin"
PKGMGR_BINARIES_DIR: "/usr/local/bin"
# Directory which contains all Repositories managed by Kevin's Package Manager
PKGMGR_REPOSITORIES_DIR: "/opt/Repositories/"
# The directory where the repository will be cloned
PKGMGR_INSTALL_PATH: "{{ [ PKGMGR_REPOSITORIES_DIR, 'github.com/kevinveenbirkenbach/package-manager' ] | path_join }}"
# File containing the configuration
PKGMGR_CONFIG_PATH: "{{ [ PKGMGR_INSTALL_PATH, 'config/config.yaml' ] | path_join }}"

View File

@@ -1,9 +0,0 @@
# run_once_srv_composer: deactivated
- name: "include role sys-srv-web-inj-compose for '{{ domain }}'"
include_role:
name: sys-srv-web-inj-compose
- name: "include role srv-tls-core for '{{ domain }}'"
include_role:
name: srv-tls-core

View File

@@ -1,5 +0,0 @@
# default vhost flavour
vhost_flavour: "basic" # valid: basic | ws_generic
# build the full template path from the flavour
vhost_template_src: "roles/srv-proxy-core/templates/vhost/{{ vhost_flavour }}.conf.j2"

View File

@@ -1 +0,0 @@
configuration_destination: "{{ NGINX.DIRECTORIES.HTTP.SERVERS }}{{ domain }}.conf"

View File

@@ -1,4 +0,0 @@
- block:
- include_tasks: 01_core.yml
- include_tasks: utils/run_once.yml
when: run_once_srv_letsencrypt is not defined

View File

@@ -1,6 +0,0 @@
- name: "reload svc-bkp-loc-2-usb service"
systemd:
name: "{{ 'svc-bkp-loc-2-usb' | get_service_name(SOFTWARE_NAME) }}"
state: reloaded
daemon_reload: yes

View File

@@ -1,77 +0,0 @@
def build_ldap_nested_group_entries(applications, users, ldap):
"""
Builds structured LDAP role entries using the global `ldap` configuration.
Supports objectClasses: posixGroup (adds gidNumber, memberUid), groupOfNames (adds member).
Now nests roles under an application-level OU: application-id/role.
"""
result = {}
# Base DN components
role_dn_base = ldap["dn"]["ou"]["roles"]
user_dn_base = ldap["dn"]["ou"]["users"]
ldap_user_attr = ldap["user"]["attributes"]["id"]
# Supported objectClass flavors
flavors = ldap.get("rbac", {}).get("flavors", [])
for application_id, app_config in applications.items():
# Compute the DN for the application-level OU
app_ou_dn = f"ou={application_id},{role_dn_base}"
ou_entry = {
"dn": app_ou_dn,
"objectClass": ["top", "organizationalUnit"],
"ou": application_id,
"description": f"Roles for application {application_id}"
}
result[app_ou_dn] = ou_entry
# Standard roles with an extra 'administrator'
base_roles = app_config.get("rbac", {}).get("roles", {})
roles = {
**base_roles,
"administrator": {
"description": "Has full administrative access: manage themes, plugins, settings, and users"
}
}
group_id = app_config.get("group_id")
for role_name, role_conf in roles.items():
# Build CN under the application OU
cn = role_name
dn = f"cn={cn},{app_ou_dn}"
entry = {
"dn": dn,
"cn": cn,
"description": role_conf.get("description", ""),
"objectClass": ["top"] + flavors,
}
member_dns = []
member_uids = []
for username, user_conf in users.items():
if role_name in user_conf.get("roles", []):
member_dns.append(f"{ldap_user_attr}={username},{user_dn_base}")
member_uids.append(username)
if "posixGroup" in flavors:
entry["gidNumber"] = group_id
if member_uids:
entry["memberUid"] = member_uids
if "groupOfNames" in flavors and member_dns:
entry["member"] = member_dns
result[dn] = entry
return result
class FilterModule(object):
def filters(self):
return {
"build_ldap_nested_group_entries": build_ldap_nested_group_entries
}

View File

@@ -16,10 +16,10 @@ def build_ldap_role_entries(applications, users, ldap):
}
group_id = application_config.get("group_id")
user_dn_base = ldap["dn"]["ou"]["users"]
ldap_user_attr = ldap["user"]["attributes"]["id"]
role_dn_base = ldap["dn"]["ou"]["roles"]
flavors = ldap.get("rbac", {}).get("flavors", [])
user_dn_base = ldap["DN"]["OU"]["USERS"]
ldap_user_attr = ldap["USER"]["ATTRIBUTES"]["ID"]
role_dn_base = ldap["DN"]["OU"]["ROLES"]
flavors = ldap.get("RBAC").get("FLAVORS")
for role_name, role_conf in roles.items():
group_cn = f"{application_id}-{role_name}"

View File

@@ -1,55 +0,0 @@
- name: Load memberof module from file in OpenLDAP container
shell: >
docker exec -i {{ openldap_name }} ldapmodify -Y EXTERNAL -H ldapi:/// -f {{openldap_ldif_docker_path}}configuration/01_member_of_configuration.ldif
listen:
- "Import configuration LDIF files"
- "Import all LDIF files"
# @todo Remove the following ignore errors when setting up a new server
# Just here because debugging would take to much time
ignore_errors: true
- name: Refint Module Activation for OpenLDAP
shell: >
docker exec -i {{ openldap_name }} ldapadd -Y EXTERNAL -H ldapi:/// -f {{openldap_ldif_docker_path}}configuration/02_member_of_configuration.ldif
listen:
- "Import configuration LDIF files"
- "Import all LDIF files"
register: ldapadd_result
failed_when: ldapadd_result.rc not in [0, 68]
# @todo Remove the following ignore errors when setting up a new server
# Just here because debugging would take to much time
ignore_errors: true
- name: "Import schemas"
shell: >
docker exec -i {{ openldap_name }} ldapadd -Y EXTERNAL -H ldapi:/// -f "{{openldap_ldif_docker_path}}schema/{{ item | basename | regex_replace('\.j2$', '') }}"
register: ldapadd_result
changed_when: "'adding new entry' in ldapadd_result.stdout"
failed_when: ldapadd_result.rc not in [0, 80]
listen:
- "Import schema LDIF files"
- "Import all LDIF files"
loop: "{{ lookup('fileglob', role_path ~ '/templates/ldif/schema/*.j2', wantlist=True) }}"
- name: Refint Overlay Configuration for OpenLDAP
shell: >
docker exec -i {{ openldap_name }} ldapmodify -Y EXTERNAL -H ldapi:/// -f {{openldap_ldif_docker_path}}configuration/03_member_of_configuration.ldif
listen:
- "Import configuration LDIF files"
- "Import all LDIF files"
register: ldapadd_result
failed_when: ldapadd_result.rc not in [0, 68]
# @todo Remove the following ignore errors when setting up a new server
# Just here because debugging would take to much time
ignore_errors: true
- name: "Import users, groups, etc. to LDAP"
shell: >
docker exec -i {{ openldap_name }} ldapadd -x -D "{{ldap.dn.administrator.data}}" -w "{{ldap.bind_credential}}" -c -f "{{openldap_ldif_docker_path}}groups/{{ item | basename | regex_replace('\.j2$', '') }}"
register: ldapadd_result
changed_when: "'adding new entry' in ldapadd_result.stdout"
failed_when: ldapadd_result.rc not in [0, 20, 68, 65]
listen:
- "Import groups LDIF files"
- "Import all LDIF files"
loop: "{{ query('fileglob', role_path ~ '/templates/ldif/groups/*.j2') | sort }}"

View File

@@ -28,7 +28,7 @@
- name: "Generate hash for Database Admin password"
shell: |
docker exec {{ openldap_name }} \
slappasswd -s "{{ ldap.bind_credential }}"
slappasswd -s "{{ LDAP.BIND_CREDENTIAL }}"
register: database_admin_pw_hash
- name: "Reset Database Admin password in LDAP (olcRootPW)"

View File

@@ -3,11 +3,11 @@
###############################################################################
- name: Ensure LDAP users exist
community.general.ldap_entry:
dn: "{{ ldap.user.attributes.id }}={{ item.key }},{{ ldap.dn.ou.users }}"
dn: "{{ LDAP.USER.ATTRIBUTES.ID }}={{ item.key }},{{ LDAP.DN.OU.USERS }}"
server_uri: "{{ openldap_server_uri }}"
bind_dn: "{{ ldap.dn.administrator.data }}"
bind_pw: "{{ ldap.bind_credential }}"
objectClass: "{{ ldap.user.objects.structural }}"
bind_dn: "{{ LDAP.DN.ADMINISTRATOR.DATA }}"
bind_pw: "{{ LDAP.BIND_CREDENTIAL }}"
objectClass: "{{ LDAP.USER.OBJECTS.STRUCTURAL }}"
attributes:
uid: "{{ item.value.username }}"
sn: "{{ item.value.sn | default(item.key) }}"
@@ -29,12 +29,12 @@
###############################################################################
- name: Ensure required objectClass values and mail address are present
community.general.ldap_attrs:
dn: "{{ ldap.user.attributes.id }}={{ item.key }},{{ ldap.dn.ou.users }}"
dn: "{{ LDAP.USER.ATTRIBUTES.ID }}={{ item.key }},{{ LDAP.DN.OU.USERS }}"
server_uri: "{{ openldap_server_uri }}"
bind_dn: "{{ ldap.dn.administrator.data }}"
bind_pw: "{{ ldap.bind_credential }}"
bind_dn: "{{ LDAP.DN.ADMINISTRATOR.DATA }}"
bind_pw: "{{ LDAP.BIND_CREDENTIAL }}"
attributes:
objectClass: "{{ ldap.user.objects.structural }}"
objectClass: "{{ LDAP.USER.OBJECTS.STRUCTURAL }}"
mail: "{{ item.value.email }}"
state: exact
async: "{{ ASYNC_TIME if ASYNC_ENABLED | bool else omit }}"
@@ -45,10 +45,10 @@
- name: "Ensure container for application roles exists"
community.general.ldap_entry:
dn: "{{ ldap.dn.ou.roles }}"
dn: "{{ LDAP.DN.OU.ROLES }}"
server_uri: "{{ openldap_server_uri }}"
bind_dn: "{{ ldap.dn.administrator.data }}"
bind_pw: "{{ ldap.bind_credential }}"
bind_dn: "{{ LDAP.DN.ADMINISTRATOR.DATA }}"
bind_pw: "{{ LDAP.BIND_CREDENTIAL }}"
objectClass: organizationalUnit
attributes:
ou: roles

View File

@@ -1,22 +1,22 @@
- name: Gather all users with their current objectClass list
community.general.ldap_search:
server_uri: "{{ openldap_server_uri }}"
bind_dn: "{{ ldap.dn.administrator.data }}"
bind_pw: "{{ ldap.bind_credential }}"
dn: "{{ ldap.dn.ou.users }}"
bind_dn: "{{ LDAP.DN.ADMINISTRATOR.DATA }}"
bind_pw: "{{ LDAP.BIND_CREDENTIAL }}"
dn: "{{ LDAP.DN.OU.USERS }}"
scope: subordinate
filter: "{{ ldap.filters.users.all }}"
filter: "{{ LDAP.FILTERS.USERS.ALL }}"
attrs:
- dn
- objectClass
- "{{ ldap.user.attributes.id }}"
- "{{ LDAP.USER.ATTRIBUTES.ID }}"
register: ldap_users_with_classes
- name: Add only missing auxiliary classes
community.general.ldap_attrs:
server_uri: "{{ openldap_server_uri }}"
bind_dn: "{{ ldap.dn.administrator.data }}"
bind_pw: "{{ ldap.bind_credential }}"
bind_dn: "{{ LDAP.DN.ADMINISTRATOR.DATA }}"
bind_pw: "{{ LDAP.BIND_CREDENTIAL }}"
dn: "{{ item.dn }}"
attributes:
objectClass: "{{ missing_auxiliary }}"
@@ -28,7 +28,7 @@
label: "{{ item.dn }}"
vars:
missing_auxiliary: >-
{{ (ldap.user.objects.auxiliary.values() | list)
{{ (LDAP.USER.OBJECTS.AUXILIARY.values() | list)
| difference(item.objectClass | default([]))
}}
when: missing_auxiliary | length > 0

View File

@@ -37,7 +37,7 @@
- name: "Reset LDAP Credentials"
include_tasks: 01_credentials.yml
when:
- applications | get_app_conf(application_id, 'network.local', True)
- applications | get_app_conf(application_id, 'network.local')
- applications | get_app_conf(application_id, 'provisioning.credentials', True)
- name: "create directory {{openldap_ldif_host_path}}{{item}}"

View File

@@ -8,9 +8,9 @@
vars:
schema_name: "nextcloud"
attribute_defs:
- "( 1.3.6.1.4.1.99999.1 NAME '{{ ldap.user.attributes.nextcloud_quota }}' DESC 'Quota for Nextcloud' EQUALITY integerMatch ORDERING integerOrderingMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE )"
- "( 1.3.6.1.4.1.99999.1 NAME '{{ LDAP.USER.ATTRIBUTES.NEXTCLOUD_QUOTA }}' DESC 'Quota for Nextcloud' EQUALITY integerMatch ORDERING integerOrderingMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE )"
objectclass_defs:
- "( 1.3.6.1.4.1.99999.2 NAME 'nextcloudUser' DESC 'Auxiliary class for Nextcloud attributes' AUXILIARY MAY ( {{ ldap.user.attributes.nextcloud_quota }} ) )"
- "( 1.3.6.1.4.1.99999.2 NAME '{{ LDAP.USER.OBJECTS.AUXILIARY.NEXTCLOUD_USER }}' DESC 'Auxiliary class for Nextcloud attributes' AUXILIARY MAY ( {{ LDAP.USER.ATTRIBUTES.NEXTCLOUD_QUOTA }} ) )"
command: >
ldapsm
-s {{ openldap_server_uri }}

View File

@@ -8,16 +8,16 @@
vars:
schema_name: "openssh-lpk"
attribute_defs:
- "( 1.3.6.1.4.1.24552.1.1 NAME '{{ ldap.user.attributes.ssh_public_key }}' DESC 'OpenSSH Public Key' EQUALITY octetStringMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.40 )"
- "( 1.3.6.1.4.1.24552.1.1 NAME '{{ LDAP.USER.ATTRIBUTES.SSH_PUBLIC_KEY }}' DESC 'OpenSSH Public Key' EQUALITY octetStringMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.40 )"
- "( 1.3.6.1.4.1.24552.1.2 NAME 'sshFingerprint' DESC 'OpenSSH Public Key Fingerprint' EQUALITY octetStringMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.40 )"
objectclass_defs:
- >-
( 1.3.6.1.4.1.24552.2.1
NAME '{{ ldap.user.objects.auxiliary.ssh_public_key }}'
NAME '{{ LDAP.USER.OBJECTS.AUXILIARY.SSH_PUBLIC_KEY }}'
DESC 'Auxiliary class for OpenSSH public keys'
SUP top
AUXILIARY
MAY ( {{ ldap.user.attributes.ssh_public_key }} $ sshFingerprint ) )
MAY ( {{ LDAP.USER.ATTRIBUTES.SSH_PUBLIC_KEY }} $ sshFingerprint ) )
command: >
ldapsm

View File

@@ -10,12 +10,12 @@
{% endif %}
volumes:
- 'data:/bitnami/openldap'
- '{{openldap_ldif_host_path}}:{{openldap_ldif_docker_path}}:ro'
- '{{openldap_ldif_host_path}}:{{ openldap_ldif_docker_path }}:ro'
healthcheck:
test: >
bash -c '
ldapsearch -x -H ldap://localhost:{{ openldap_docker_port_open }} \
-D "{{ ldap.dn.administrator.data }}" -w "{{ ldap.bind_credential }}" -b "{{ ldap.dn.root }}" > /dev/null \
-D "{{ LDAP.DN.ADMINISTRATOR.DATA }}" -w "{{ LDAP.BIND_CREDENTIAL }}" -b "{{ LDAP.DN.ROOT }}" > /dev/null \
&& ldapsearch -Y EXTERNAL -H ldapi:/// \
-b cn=config "(&(objectClass=olcOverlayConfig)(olcOverlay=memberof))" \
| grep "olcOverlay:" | grep -q "memberof"

View File

@@ -4,15 +4,15 @@
# GENERAL
## Admin (Data)
LDAP_ADMIN_USERNAME= {{ applications | get_app_conf(application_id, 'users.administrator.username') }} # LDAP database admin user.
LDAP_ADMIN_PASSWORD= {{ldap.bind_credential}} # LDAP database admin password.
LDAP_ADMIN_PASSWORD= {{ LDAP.BIND_CREDENTIAL }} # LDAP database admin password.
## Users
LDAP_USERS= ' ' # Comma separated list of LDAP users to create in the default LDAP tree. Default: user01,user02
LDAP_PASSWORDS= ' ' # Comma separated list of passwords to use for LDAP users. Default: bitnami1,bitnami2
LDAP_ROOT= {{ldap.dn.root}} # LDAP baseDN (or suffix) of the LDAP tree. Default: dc=example,dc=org
LDAP_ROOT= {{ LDAP.DN.ROOT }} # LDAP baseDN (or suffix) of the LDAP tree. Default: dc=example,dc=org
## Admin (Config)
LDAP_ADMIN_DN= {{ldap.dn.administrator.data}}
LDAP_ADMIN_DN= {{LDAP.DN.ADMINISTRATOR.DATA}}
LDAP_CONFIG_ADMIN_ENABLED= yes
LDAP_CONFIG_ADMIN_USERNAME= {{ applications | get_app_conf(application_id, 'users.administrator.username') }}
LDAP_CONFIG_ADMIN_PASSWORD= {{ applications | get_app_conf(application_id, 'credentials.administrator_password') }}

View File

@@ -2,5 +2,5 @@ server {
listen {{ ports.public.ldaps['svc-db-openldap'] }}ssl;
proxy_pass 127.0.0.1:{{ ports.localhost.ldap['svc-db-openldap'] }};
{% include 'roles/srv-letsencrypt/templates/ssl_credentials.j2' %}
{% include 'roles/sys-svc-letsencrypt/templates/ssl_credentials.j2' %}
}

View File

@@ -4,7 +4,7 @@ application_id: "svc-db-openldap"
openldap_docker_port_secure: 636
openldap_docker_port_open: 389
openldap_server_uri: "ldap://127.0.0.1:{{ ports.localhost.ldap[application_id] }}"
openldap_bind_dn: "{{ ldap.dn.administrator.configuration }}"
openldap_bind_dn: "{{ LDAP.DN.ADMINISTRATOR.CONFIGURATION }}"
openldap_bind_pw: "{{ applications | get_app_conf(application_id, 'credentials.administrator_password', True) }}"
# LDIF Variables
@@ -21,4 +21,4 @@ openldap_version: "{{ applications | get_app_conf(application_id,
openldap_volume: "{{ applications | get_app_conf(application_id, 'docker.volumes.data', True) }}"
openldap_network: "{{ applications | get_app_conf(application_id, 'docker.network', True) }}"
openldap_network_expose_local: "{{ applications | get_app_conf(application_id, 'network.public', True) | bool or applications | get_app_conf(application_id, 'network.local', True) | bool }}"
openldap_network_expose_local: "{{ applications | get_app_conf(application_id, 'network.public', True) | bool or applications | get_app_conf(application_id, 'network.local') | bool }}"

View File

@@ -0,0 +1,44 @@
import os
import yaml
from ansible.errors import AnsibleFilterError
def _iter_role_vars_files(roles_dir):
if not os.path.isdir(roles_dir):
raise AnsibleFilterError(f"roles_dir not found: {roles_dir}")
for name in os.listdir(roles_dir):
role_path = os.path.join(roles_dir, name)
if not os.path.isdir(role_path):
continue
vars_main = os.path.join(role_path, "vars", "main.yml")
if os.path.isfile(vars_main):
yield vars_main
def _is_postgres_role(vars_file):
try:
with open(vars_file, "r", encoding="utf-8") as f:
data = yaml.safe_load(f) or {}
# only count roles with explicit database_type: postgres in VARS
return str(data.get("database_type", "")).strip().lower() == "postgres"
except Exception:
# ignore unreadable/broken YAML files quietly
return False
def split_postgres_connections(total_connections, roles_dir="roles"):
"""
Return an integer average: total_connections / number_of_roles_with_database_type_postgres.
Uses max(count, 1) to avoid division-by-zero.
"""
try:
total = int(total_connections)
except Exception:
raise AnsibleFilterError(f"total_connections must be int-like, got: {total_connections!r}")
count = sum(1 for vf in _iter_role_vars_files(roles_dir) if _is_postgres_role(vf))
denom = max(count, 1)
return max(1, total // denom)
class FilterModule(object):
def filters(self):
return {
"split_postgres_connections": split_postgres_connections
}
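
A minimal usage sketch for this filter, called as a plain Python function (the module path filter_plugins/split_postgres_connections.py is an assumption; adjust the import to the actual filename):

from filter_plugins.split_postgres_connections import split_postgres_connections

# Hypothetical roles tree in which exactly two roles declare
# `database_type: postgres` in their vars/main.yml.
avg = split_postgres_connections(200, roles_dir="roles")
assert avg == 100  # 200 total connections // 2 postgres roles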

View File

@@ -1,3 +1,7 @@
- name: Compute average allowed connections per Postgres app (once)
set_fact:
POSTGRES_ALLOWED_AVG_CONNECTIONS: "{{ (POSTGRES_MAX_CONNECTIONS | split_postgres_connections(playbook_dir ~ '/roles')) | int }}"
run_once: true
- name: Include dependency 'sys-svc-docker'
include_role:

View File

@@ -6,6 +6,20 @@
build:
context: .
dockerfile: Dockerfile
pull_policy: never
command:
- "postgres"
- "-c"
- "max_connections={{ POSTGRES_MAX_CONNECTIONS }}"
- "-c"
- "superuser_reserved_connections={{ POSTGRES_SUPERUSER_RESERVED_CONNECTIONS }}"
- "-c"
- "shared_buffers={{ POSTGRES_SHARED_BUFFERS }}"
- "-c"
- "work_mem={{ POSTGRES_WORK_MEM }}"
- "-c"
- "maintenance_work_mem={{ POSTGRES_MAINTENANCE_WORK_MEM }}"
{% include 'roles/docker-container/templates/base.yml.j2' %}
{% if POSTGRES_EXPOSE_LOCAL %}
ports:

View File

@@ -1,25 +1,37 @@
# General
application_id: svc-db-postgres
application_id: svc-db-postgres
# Docker
docker_compose_flush_handlers: true
docker_compose_flush_handlers: true
# Docker Compose
database_type: "{{ application_id | get_entity_name }}"
database_type: "{{ application_id | get_entity_name }}"
## Postgres
POSTGRES_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data', True) }}"
POSTGRES_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.postgres.name', True) }}"
POSTGRES_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.postgres.image', True) }}"
POSTGRES_SUBNET: "{{ networks.local['svc-db-postgres'].subnet }}"
POSTGRES_NETWORK_NAME: "{{ applications | get_app_conf(application_id, 'docker.network', True) }}"
POSTGRES_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.postgres.version', True) }}"
POSTGRES_PASSWORD: "{{ applications | get_app_conf(application_id, 'credentials.POSTGRES_PASSWORD', True) }}"
POSTGRES_PORT: "{{ database_port | default(ports.localhost.database[ application_id ]) }}"
POSTGRES_INIT: "{{ database_username is defined and database_password is defined and database_name is defined }}"
POSTGRES_EXPOSE_LOCAL: True # Exposes the db to localhost, almost always necessary
POSTGRES_CUSTOM_IMAGE_NAME: "postgres_custom"
POSTGRES_LOCAL_HOST: "127.0.0.1"
POSTGRES_VECTOR_ENABLED: True # Required by Discourse; it probably makes sense to define this as a configuration option in config/main.yml in a later step
POSTGRES_RETRIES: 5
POSTGRES_DELAY: 2
POSTGRES_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
POSTGRES_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.postgres.name') }}"
POSTGRES_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.postgres.image') }}"
POSTGRES_SUBNET: "{{ networks.local['svc-db-postgres'].subnet }}"
POSTGRES_NETWORK_NAME: "{{ applications | get_app_conf(application_id, 'docker.network') }}"
POSTGRES_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.postgres.version') }}"
POSTGRES_PASSWORD: "{{ applications | get_app_conf(application_id, 'credentials.POSTGRES_PASSWORD') }}"
POSTGRES_PORT: "{{ database_port | default(ports.localhost.database[ application_id ]) }}"
POSTGRES_INIT: "{{ database_username is defined and database_password is defined and database_name is defined }}"
POSTGRES_EXPOSE_LOCAL: True # Exposes the db to localhost, almost always necessary
POSTGRES_CUSTOM_IMAGE_NAME: "postgres_custom"
POSTGRES_LOCAL_HOST: "127.0.0.1"
POSTGRES_VECTOR_ENABLED: True # Required by Discourse; it probably makes sense to define this as a configuration option in config/main.yml in a later step
POSTGRES_RETRIES: 5
## Performance
POSTGRES_TOTAL_RAM_MB: "{{ ansible_memtotal_mb | int }}"
POSTGRES_VCPUS: "{{ ansible_processor_vcpus | int }}"
POSTGRES_MAX_CONNECTIONS: "{{ [ ((POSTGRES_VCPUS | int) * 30 + 50), 400 ] | min }}"
POSTGRES_SUPERUSER_RESERVED_CONNECTIONS: 3
POSTGRES_SHARED_BUFFERS_MB: "{{ ((POSTGRES_TOTAL_RAM_MB | int) * 25) // 100 }}"
POSTGRES_SHARED_BUFFERS: "{{ POSTGRES_SHARED_BUFFERS_MB ~ 'MB' }}"
POSTGRES_WORK_MEM_MB: "{{ [ ( (POSTGRES_TOTAL_RAM_MB | int) // ( [ (POSTGRES_MAX_CONNECTIONS | int), 1 ] | max ) // 2 ), 1 ] | max }}"
POSTGRES_WORK_MEM: "{{ POSTGRES_WORK_MEM_MB ~ 'MB' }}"
POSTGRES_MAINTENANCE_WORK_MEM_MB: "{{ [ (((POSTGRES_TOTAL_RAM_MB | int) * 5) // 100), 64 ] | max }}"
POSTGRES_MAINTENANCE_WORK_MEM: "{{ POSTGRES_MAINTENANCE_WORK_MEM_MB ~ 'MB' }}"
POSTGRES_DELAY: 2
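
The tuning expressions above are easier to sanity-check as plain arithmetic; a sketch for a hypothetical host with 8192 MB RAM and 4 vCPUs, mirroring the Jinja formulas:

# Hypothetical host: 8192 MB RAM, 4 vCPUs.
TOTAL_RAM_MB = 8192
VCPUS = 4

MAX_CONNECTIONS = min(VCPUS * 30 + 50, 400)                         # 170
SHARED_BUFFERS_MB = (TOTAL_RAM_MB * 25) // 100                      # 2048 -> shared_buffers=2048MB
WORK_MEM_MB = max(TOTAL_RAM_MB // max(MAX_CONNECTIONS, 1) // 2, 1)  # 24   -> work_mem=24MB
MAINTENANCE_WORK_MEM_MB = max((TOTAL_RAM_MB * 5) // 100, 64)        # 409  -> maintenance_work_mem=409MB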

View File

@@ -18,10 +18,10 @@
group: root
notify: reload sysctl configuration
- name: create /etc/wireguard/wg0.{{ SOFTWARE_NAME | lower }}.conf
- name: "deploy {{ WG0_CONF_DEST }}"
copy:
src: "{{ inventory_dir }}/files/{{ inventory_hostname }}/etc/wireguard/wg0.conf"
dest: /etc/wireguard/wg0.{{ SOFTWARE_NAME | lower }}.conf
owner: root
group: root
src: "{{ [inventory_dir, 'files', inventory_hostname, 'etc/wireguard/wg0.conf' ] | path_join }}"
dest: "{{ WG0_CONF_DEST }}"
owner: root
group: root
notify: restart wireguard

View File

@@ -1 +1,3 @@
application_id: svc-net-wireguard-core
WG0_CONF_DEST: "/etc/wireguard/wg0.conf"

View File

@@ -1,5 +1,5 @@
- include_role:
name: sys-service
vars:
system_service_tpl_exec_start_pre: '/usr/bin/python {{ PATH_SYSTEM_LOCK_SCRIPT }} {{ SYS_SERVICE_GROUP_MANIPULATION | join(" ") }} --ignore {{ SYS_SERVICE_OPTIMIZE_DRIVE }} {{ SYS_SERVICE_BACKUP_RMT_2_LOC }} --timeout "{{ SYS_TIMEOUT_STORAGE_OPTIMIZER }}"'
system_service_tpl_exec_start_pre: '/usr/bin/python {{ PATH_SYSTEM_LOCK_SCRIPT }} {{ SYS_SERVICE_GROUP_MANIPULATION | join(" ") }} --ignore {{ SYS_SERVICE_OPTIMIZE_DRIVE }} {{ SYS_SERVICE_BACKUP_RMT_2_LOC }} {{ SYS_SERVICE_GROUP_CLEANUP | join(" ") }} --timeout "{{ SYS_TIMEOUT_STORAGE_OPTIMIZER }}"'
system_service_tpl_exec_start: '{{ system_service_script_exec }} --mass-storage-path {{ OPT_DRIVE_MASS_STORAGE_PATH }} --rapid-storage-path {{ OPT_DRIVE_RAPID_STORAGE_PATH }}'

View File

@@ -8,4 +8,3 @@ database_type: ""
OPENRESTY_IMAGE: "openresty/openresty"
OPENRESTY_VERSION: "alpine"
OPENRESTY_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.openresty.name', True) }}"

View File

@@ -1,36 +0,0 @@
def dict_to_cli_args(data):
"""
Convert a dictionary into CLI argument string.
Example:
{
"backup-dir": "/mnt/backups",
"shutdown": True,
"ignore-volumes": ["redis", "memcached"]
}
becomes:
--backup-dir=/mnt/backups --shutdown --ignore-volumes="redis memcached"
"""
if not isinstance(data, dict):
raise TypeError("Expected a dictionary for CLI argument conversion")
args = []
for key, value in data.items():
cli_key = f"--{key}"
if isinstance(value, bool):
if value:
args.append(cli_key)
elif isinstance(value, list):
items = " ".join(map(str, value))
args.append(f'{cli_key}="{items}"')
elif value is not None:
args.append(f'{cli_key}={value}')
return " ".join(args)
class FilterModule(object):
def filters(self):
return {
'dict_to_cli_args': dict_to_cli_args
}

View File

@@ -1,2 +1,2 @@
system_service_id: sys-ctl-cln-faild-bkps
CLN_FAILED_DOCKER_BACKUPS_PKG: cleanup-failed-docker-backups
system_service_id: sys-ctl-cln-faild-bkps
CLN_FAILED_DOCKER_BACKUPS_PKG: cleanup-failed-docker-backups

View File

@@ -13,6 +13,8 @@
- include_role:
name: sys-service
vars:
system_service_on_calendar: "{{ SYS_SCHEDULE_HEALTH_CSP_CRAWLER }}"
system_service_timer_enabled: true
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_on_calendar: "{{ SYS_SCHEDULE_HEALTH_CSP_CRAWLER }}"
system_service_timer_enabled: true
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_tpl_timeout_start_sec: "{{ CURRENT_PLAY_DOMAINS_ALL | timeout_start_sec_for_domains }}"
system_service_tpl_exec_start: "{{ system_service_script_exec }} --nginx-config-dir={{ NGINX.DIRECTORIES.HTTP.SERVERS }}"

View File

@@ -1,7 +0,0 @@
[Unit]
Description=Check for CSP-blocked resources via Puppeteer
OnFailure={{ SYS_SERVICE_ON_FAILURE_COMPOSE }}
[Service]
Type=oneshot
ExecStart={{ system_service_script_exec }} --nginx-config-dir={{ NGINX.DIRECTORIES.HTTP.SERVERS }}

View File

@@ -3,9 +3,14 @@
name: sys-ctl-alm-compose
when: run_once_sys_ctl_alm_compose is not defined
- name: Include dependency 'sys-ctl-rpr-docker-soft'
include_role:
name: sys-ctl-rpr-docker-soft
when: run_once_sys_ctl_rpr_docker_soft is not defined
- include_role:
name: sys-service
vars:
system_service_timer_enabled: true
system_service_on_calendar: "{{ SYS_SCHEDULE_HEALTH_DOCKER_CONTAINER }}"
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_timer_enabled: true
system_service_on_calendar: "{{ SYS_SCHEDULE_HEALTH_DOCKER_CONTAINER }}"
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }} {{ SYS_SERVICE_REPAIR_DOCKER_SOFT }}"

View File

@@ -2,7 +2,7 @@
include_role:
name: sys-ctl-alm-compose
when: run_once_sys_ctl_alm_compose is not defined
- include_role:
name: sys-service
vars:

View File

@@ -9,4 +9,4 @@
system_service_on_calendar: "{{ SYS_SCHEDULE_HEALTH_JOURNALCTL }}"
system_service_timer_enabled: true
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
# system_service_suppress_flush: true # There are almost always errors in the journalctl logs, so suppression is necessary to let the playbook run
system_service_suppress_flush: true # There are almost always errors in the journalctl logs, so suppression is necessary to let the playbook run

View File

@@ -16,6 +16,7 @@
- include_role:
name: sys-service
vars:
system_service_on_calendar: "{{ SYS_SCHEDULE_HEALTH_NGINX }}"
system_service_timer_enabled: true
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_on_calendar: "{{ SYS_SCHEDULE_HEALTH_NGINX }}"
system_service_timer_enabled: true
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_tpl_timeout_start_sec: "{{ CURRENT_PLAY_DOMAINS_ALL | timeout_start_sec_for_domains }}"

View File

@@ -43,7 +43,7 @@ for filename in os.listdir(config_path):
url = f"{{ WEB_PROTOCOL }}://{domain}"
redirected_domains = [domain['source'] for domain in {{ redirect_domain_mappings }}]
redirected_domains.append("{{domains | get_domain('web-app-mailu')}}")
redirected_domains.append("{{domains | get_domain('web-app-mailu') }}")
expected_statuses = get_expected_statuses(domain, parts, redirected_domains)

View File

@@ -3,7 +3,7 @@
name: '{{ item }}'
loop:
- sys-svc-certbot
- srv-core
- sys-svc-webserver
- sys-ctl-alm-compose
- name: install certbot

View File

@@ -8,7 +8,7 @@
vars:
system_service_on_calendar: "{{ SYS_SCHEDULE_REPAIR_DOCKER_HARD }}"
system_service_timer_enabled: true
system_service_tpl_exec_start_pre: '/usr/bin/python {{ PATH_SYSTEM_LOCK_SCRIPT }} {{ SYS_SERVICE_GROUP_MANIPULATION | join(" ") }} --ignore {{ SYS_SERVICE_REPAIR_DOCKER_HARD }} --timeout "{{ SYS_TIMEOUT_RESTART_DOCKER }}"'
system_service_tpl_exec_start_pre: '/usr/bin/python {{ PATH_SYSTEM_LOCK_SCRIPT }} {{ SYS_SERVICE_GROUP_MANIPULATION | join(" ") }} --ignore {{ SYS_SERVICE_REPAIR_DOCKER_HARD }} {{ SYS_SERVICE_GROUP_CLEANUP | join(" ") }} --timeout "{{ SYS_TIMEOUT_DOCKER_RPR_HARD }}"'
system_service_tpl_exec_start: '{{ system_service_script_exec }} {{ PATH_DOCKER_COMPOSE_INSTANCES }}'
system_service_tpl_exec_start_post: "/usr/bin/systemctl start {{ SYS_SERVICE_CLEANUP_ANONYMOUS_VOLUMES }}"
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"

View File

@@ -1,15 +1,26 @@
#!/usr/bin/env python3
"""
Restart Docker-Compose configurations with exited or unhealthy containers.
This version receives the *manipulation services* via argparse (no Jinja).
STRICT mode: Resolve the Compose project exclusively via Docker labels
(com.docker.compose.project and com.docker.compose.project.working_dir).
No container-name fallback. If labels are missing or Docker is unavailable,
the script records an error for that container.
All shell interactions that matter for tests go through print_bash()
so they can be monkeypatched in unit tests.
"""
import subprocess
import time
import os
import argparse
from typing import List
from typing import List, Optional, Tuple
# ---------------------------
# Shell helpers
# ---------------------------
def bash(command: str) -> List[str]:
print(command)
process = subprocess.Popen(
@@ -30,31 +41,45 @@ def list_to_string(lst: List[str]) -> str:
def print_bash(command: str) -> List[str]:
"""
Wrapper around bash() that echoes combined output for easier debugging
and can be monkeypatched in tests.
"""
output = bash(command)
if output:
print(list_to_string(output))
return output
def find_docker_compose_file(directory: str) -> str | None:
# ---------------------------
# Filesystem / compose helpers
# ---------------------------
def find_docker_compose_file(directory: str) -> Optional[str]:
"""
Search for docker-compose.yml beneath a directory.
"""
for root, _, files in os.walk(directory):
if "docker-compose.yml" in files:
return os.path.join(root, "docker-compose.yml")
return None
def detect_env_file(project_path: str) -> str | None:
def detect_env_file(project_path: str) -> Optional[str]:
"""
Return the path to a Compose env file if present (.env preferred, fallback to env).
Return the path to a Compose env file if present (.env preferred, fallback to .env/env).
"""
candidates = [os.path.join(project_path, ".env"), os.path.join(project_path, ".env", "env")]
candidates = [
os.path.join(project_path, ".env"),
os.path.join(project_path, ".env", "env"),
]
for candidate in candidates:
if os.path.isfile(candidate):
return candidate
return None
def compose_cmd(subcmd: str, project_path: str, project_name: str | None = None) -> str:
def compose_cmd(subcmd: str, project_path: str, project_name: Optional[str] = None) -> str:
"""
Build a docker-compose command string with optional -p and --env-file if present.
Example: compose_cmd("restart", "/opt/docker/foo", "foo")
@@ -69,6 +94,10 @@ def compose_cmd(subcmd: str, project_path: str, project_name: str | None = None)
return " ".join(parts)
# ---------------------------
# Business logic
# ---------------------------
def normalize_services_arg(raw: List[str] | None, raw_str: str | None) -> List[str]:
"""
Accept either:
@@ -78,7 +107,6 @@ def normalize_services_arg(raw: List[str] | None, raw_str: str | None) -> List[s
if raw:
return [s for s in raw if s.strip()]
if raw_str:
# split on comma or whitespace
parts = [p.strip() for chunk in raw_str.split(",") for p in chunk.split()]
return [p for p in parts if p]
return []
@@ -87,7 +115,7 @@ def normalize_services_arg(raw: List[str] | None, raw_str: str | None) -> List[s
def wait_while_manipulation_running(
services: List[str],
waiting_time: int = 600,
timeout: int | None = None,
timeout: Optional[int] = None,
) -> None:
"""
Wait until none of the given services are active anymore.
@@ -107,7 +135,6 @@ def wait_while_manipulation_running(
break
if any_active:
# Check timeout
elapsed = time.time() - start
if timeout and elapsed >= timeout:
print(f"Timeout ({timeout}s) reached while waiting for services. Continuing anyway.")
@@ -119,7 +146,30 @@ def wait_while_manipulation_running(
break
def main(base_directory: str, manipulation_services: List[str], timeout: int | None) -> int:
def get_compose_project_info(container: str) -> Tuple[str, str]:
"""
Resolve project name and working dir from Docker labels.
STRICT: Raises RuntimeError if labels are missing/unreadable.
"""
out_project = print_bash(
f"docker inspect -f '{{{{ index .Config.Labels \"com.docker.compose.project\" }}}}' {container}"
)
out_workdir = print_bash(
f"docker inspect -f '{{{{ index .Config.Labels \"com.docker.compose.project.working_dir\" }}}}' {container}"
)
project = out_project[0].strip() if out_project else ""
workdir = out_workdir[0].strip() if out_workdir else ""
if not project:
raise RuntimeError(f"No compose project label found for container {container}")
if not workdir:
raise RuntimeError(f"No compose working_dir label found for container {container}")
return project, workdir
def main(base_directory: str, manipulation_services: List[str], timeout: Optional[int]) -> int:
errors = 0
wait_while_manipulation_running(manipulation_services, waiting_time=600, timeout=timeout)
@@ -131,43 +181,50 @@ def main(base_directory: str, manipulation_services: List[str], timeout: int | N
)
failed_containers = unhealthy_container_names + exited_container_names
unfiltered_failed_docker_compose_repositories = [
container.split("-")[0] for container in failed_containers
]
filtered_failed_docker_compose_repositories = list(
dict.fromkeys(unfiltered_failed_docker_compose_repositories)
)
for container in failed_containers:
try:
project, workdir = get_compose_project_info(container)
except Exception as e:
print(f"Error reading compose labels for {container}: {e}")
errors += 1
continue
for repo in filtered_failed_docker_compose_repositories:
compose_file_path = find_docker_compose_file(os.path.join(base_directory, repo))
compose_file_path = os.path.join(workdir, "docker-compose.yml")
if not os.path.isfile(compose_file_path):
# As STRICT: we only trust labels; if file not there, error out.
print(f"Error: docker-compose.yml not found at {compose_file_path} for container {container}")
errors += 1
continue
if compose_file_path:
project_path = os.path.dirname(compose_file_path)
try:
print("Restarting unhealthy container in:", compose_file_path)
project_path = os.path.dirname(compose_file_path)
try:
# restart with optional --env-file and -p
print_bash(compose_cmd("restart", project_path, repo))
except Exception as e:
if "port is already allocated" in str(e):
print("Detected port allocation problem. Executing recovery steps...")
# down (no -p needed), then engine restart, then up -d with -p
print_bash(compose_cmd("restart", project_path, project))
except Exception as e:
if "port is already allocated" in str(e):
print("Detected port allocation problem. Executing recovery steps...")
try:
print_bash(compose_cmd("down", project_path))
print_bash("systemctl restart docker")
print_bash(compose_cmd("up -d", project_path, repo))
else:
print("Unhandled exception during restart:", e)
print_bash(compose_cmd("up -d", project_path, project))
except Exception as e2:
print("Unhandled exception during recovery:", e2)
errors += 1
else:
print("Error: Docker Compose file not found for:", repo)
errors += 1
else:
print("Unhandled exception during restart:", e)
errors += 1
print("Finished restart procedure.")
return errors
# ---------------------------
# CLI
# ---------------------------
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Restart Docker-Compose configurations with exited or unhealthy containers."
description="Restart Docker-Compose configurations with exited or unhealthy containers (STRICT label mode)."
)
parser.add_argument(
"--manipulation",
@@ -184,12 +241,12 @@ if __name__ == "__main__":
"--timeout",
type=int,
default=60,
help="Maximum time in seconds to wait for manipulation services before continuing.(Default 1min)",
help="Maximum time in seconds to wait for manipulation services before continuing. (Default 1min)",
)
parser.add_argument(
"base_directory",
type=str,
help="Base directory where Docker Compose configurations are located.",
help="(Unused in STRICT mode) Base directory where Docker Compose configurations are located.",
)
args = parser.parse_args()
services = normalize_services_arg(args.manipulation, args.manipulation_string)
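
Because every shell interaction routes through print_bash(), the STRICT label lookup can be unit-tested without Docker; a pytest-style sketch, assuming the script is importable under the hypothetical module name restart_unhealthy:

import restart_unhealthy as ru  # hypothetical module name for this script

def test_get_compose_project_info(monkeypatch):
    # First call returns the project label, second the working_dir label.
    outputs = iter([["myproj"], ["/opt/docker/myproj"]])
    monkeypatch.setattr(ru, "print_bash", lambda cmd: next(outputs))
    assert ru.get_compose_project_info("myproj-app-1") == ("myproj", "/opt/docker/myproj")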

View File

@@ -6,10 +6,7 @@
- include_role:
name: sys-service
vars:
system_service_on_calendar: "{{ SYS_SCHEDULE_REPAIR_DOCKER_SOFT }}"
system_service_timer_enabled: true
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_tpl_exec_start_pre: "/usr/bin/python {{ PATH_SYSTEM_LOCK_SCRIPT }} {{ SYS_SERVICE_GROUP_MANIPULATION | join(' ') }} --ignore {{ SYS_SERVICE_GROUP_CLEANUP| join(' ') }} {{ SYS_SERVICE_REPAIR_DOCKER_SOFT }} --timeout '{{ SYS_TIMEOUT_HEAL_DOCKER }}'"
system_service_tpl_exec_start_pre: "/usr/bin/python {{ PATH_SYSTEM_LOCK_SCRIPT }} {{ SYS_SERVICE_GROUP_MANIPULATION | join(' ') }} --ignore {{ SYS_SERVICE_GROUP_CLEANUP| join(' ') }} {{ SYS_SERVICE_REPAIR_DOCKER_SOFT }} --timeout '{{ SYS_TIMEOUT_DOCKER_RPR_SOFT }}'"
system_service_tpl_exec_start: >
/bin/sh -c '{{ system_service_script_exec }} --manipulation-string "{{ SYS_SERVICE_GROUP_MANIPULATION | join(" ") }}" {{ PATH_DOCKER_COMPOSE_INSTANCES }}'

View File

@@ -1,4 +1,3 @@
# roles/sys-srv-web-inj-compose/filter_plugins/inj_enabled.py
#
# Usage in tasks:
# - set_fact:

Some files were not shown because too many files have changed in this diff.