Compare commits


575 Commits

Author SHA1 Message Date
445c94788e Refactor: consolidate pkgmgr updates and remove legacy roles
Details:
- Added pkgmgr update task directly in pkgmgr role (pkgmgr pull --all)
- Removed deprecated update-pkgmgr role and references
- Removed deprecated update-pip role and references
- Simplified update-compose by dropping update-pkgmgr include

https://chatgpt.com/share/68bbeff1-27a0-800f-bef3-03ab597595fd
2025-09-06 10:46:39 +02:00
aac9704e8b Refactor: remove legacy update-docker role and references
Details:
- Removed update-docker role (README, meta, vars, tasks, script)
- Cleaned references from group_vars, update-compose, and docs
- Adjusted web-app-matrix role (removed @todo pointing to update-docker)
- Updated administrator guide (update-docker no longer mentioned)

Ref: https://chatgpt.com/share/68bbeff1-27a0-800f-bef3-03ab597595fd
2025-09-06 10:32:33 +02:00
a57a5f8828 Refactor: remove Python-based Listmonk upgrade logic and implement upgrade as Ansible task
Details:
- Removed upgrade_listmonk() function and related calls from update-docker script
- Added dedicated Ansible task in web-app-listmonk role to run non-interactive DB/schema upgrade
- Conditional execution via MODE_UPDATE

Ref: https://chatgpt.com/share/68bbeff1-27a0-800f-bef3-03ab597595fd
2025-09-06 10:25:41 +02:00
90843726de keycloak: update realm mail settings to use smtp_server.json.j2 (SPOT); merge via kc_merge_path; fix display name and SSL handling
See: https://chatgpt.com/share/68bb0b25-96bc-800f-8ff7-9ca8d7c7af11
2025-09-05 18:09:33 +02:00
d25da76117 Solved wrong variable bug 2025-09-05 17:30:08 +02:00
d48a1b3c0a Solved missing variable bugs. Role is not fully implemented; pausing development on it for the moment 2025-09-05 17:07:15 +02:00
2839d2e1a4 Intermediate commit of the Magento implementation 2025-09-05 17:01:13 +02:00
00c99e58e9 Cleaned up bridgy fed 2025-09-04 17:09:35 +02:00
904040589e Added correct variables and health check 2025-09-04 15:13:10 +02:00
9f3d300bca Removed unnecessary handlers 2025-09-04 14:04:53 +02:00
9e253a2d09 Bluesky: Patch hardcoded IPCC_URL and proxy /ipcc
- Added Ansible replace task to override IPCC_URL in geolocation.tsx to same-origin '/ipcc'
- Extended Nginx extra_locations.conf to proxy /ipcc requests to https://bsky.app/ipcc
- Ensures frontend avoids CORS errors when fetching IP geolocation

See: https://chatgpt.com/share/68b97be3-0278-800f-9ee0-94389ca3ac0c
2025-09-04 13:45:57 +02:00
49120b0dcf Added more CSP headers 2025-09-04 13:36:35 +02:00
b6f91ab9d3 Changed database_user to database_username 2025-09-04 12:45:22 +02:00
77e8e7ed7e Magento 2.4.8 refactor:
- Switch to split containers (markoshust/magento-php:8.2-fpm + magento-nginx:latest)
- Disable central DB; use app-local MariaDB and pin to 11.4
- Composer bootstrap of Magento in php container (Adobe repo keys), idempotent via creates
- Make setup:install idempotent; run as container user 'app'
- Wire OpenSearch (security disabled) and depends_on ordering
- Add credentials schema (adobe_public_key/adobe_private_key)
- Update vars for php/nginx/search containers + MAGENTO_USER
- Remove legacy docs (Administration.md, Upgrade.md)
Context: changes derived from our ChatGPT session about getting Magento 2.4.8 running with MariaDB 11.4.
Conversation: https://chatgpt.com/share/68b8dc30-361c-800f-aa69-88df514cb160
2025-09-04 12:45:03 +02:00
32bc17e0c3 Optimized whitespace 2025-09-04 12:41:11 +02:00
e294637cb6 Changed db config path attribute 2025-09-04 12:34:13 +02:00
577767bed6 sys-svc-rdbms: Refactor database service templates and add version support for Magento
- Unified Jinja2 variable spacing in tasks and templates
- Introduced database_image and database_version variables in vars/database.yml
- Updated mariadb.yml.j2 and postgres.yml.j2 to use {{ database_image }}:{{ database_version }}
- Ensured env file paths and includes are consistent
- Prepared support for versioned database images (needed for Magento deployment)

Ref: https://chatgpt.com/share/68b96a9d-c100-800f-856f-cd23d1eda2ed
2025-09-04 12:32:34 +02:00
e77f8da510 Added debug options to mastodon 2025-09-04 11:50:14 +02:00
4738b263ec Added docker_volume_path filter_plugin 2025-09-04 11:49:40 +02:00
0a588023a7 feat(bluesky): fix CORS by serving /config same-origin and pinning BAPP_CONFIG_URL
- Add `server.config_upstream_url` default in `roles/web-app-bluesky/config/main.yml`
  to define upstream for /config (defaults to https://ip.bsky.app/config).
- Introduce front-proxy injection `extra_locations.conf.j2` that:
  - proxies `/config` to the upstream,
  - sets SNI and correct Host header,
  - normalizes CORS headers for same-origin consumption.
- Wire the proxy injection only for the Web domain in
  `roles/web-app-bluesky/tasks/main.yml` via `proxy_extra_configuration`.
- Force fresh social-app checkout and patch
  `src/state/geolocation.tsx` to `const BAPP_CONFIG_URL = '/config'`
  in `roles/web-app-bluesky/tasks/02_social_app.yml`; notify `docker compose build` and `up`.
- Tidy and re-group PDS env in `roles/web-app-bluesky/templates/env.j2` (no functional change).
- Add vars in `roles/web-app-bluesky/vars/main.yml`:
  - `BLUESKY_FRONT_PROXY_CONTENT` (renders the extra locations),
  - `BLUESKY_CONFIG_UPSTREAM_URL` (reads `server.config_upstream_url`).

Security/Scope:
- Only affects the Bluesky web frontend (same-origin `/config`); PDS/API and AppView remain unchanged.

Refs:
- Conversation: https://chatgpt.com/share/68b8dd3a-2100-800f-959e-1495f6320aab
2025-09-04 02:29:10 +02:00
d2fa90774b Added fediverse bridge draft 2025-09-04 02:26:27 +02:00
0e72dcbe36 feat(magento): switch to ghcr.io/alexcheng1982/docker-magento2:2.4.6-p3; update Compose/Env/Tasks/Docs
• Docs: updated to MAGENTO_VOLUME; removed Installation/User_Administration guides
• Compose: volume path → /var/www/html; switched variables to MAGENTO_*/MYSQL_*/OPENSEARCH_*
• Env: new variable set + APACHE_SERVERNAME
• Task: setup:install via docker compose exec (multiline form)
• Schema: removed obsolete credentials definition
Link: https://chatgpt.com/share/68b8dc30-361c-800f-aa69-88df514cb160
2025-09-04 02:25:49 +02:00
4f8ce598a9 Mastodon: allow internal chess host & refactor var names; OpenLDAP: safer get_app_conf
- Add ALLOWED_PRIVATE_ADDRESSES to .env (from svc-db-postgres) to handle 422 Mastodon::PrivateNetworkAddressError
- Switch docker-compose to MASTODON_* variables and align vars/main.yml
- Always run 01_setup.yml during deployment (removed conditional flag)
- OpenLDAP: remove implicit True default on network.local to avoid unintended truthy behavior

Context: chess.infinito.nexus resolved to 192.168.200.30 (private IP) from Mastodon; targeted allowlist unblocks federation lookups.

Ref: https://chat.openai.com/share/REPLACE_WITH_THIS_CONVERSATION_LINK
2025-09-03 21:44:47 +02:00
3769e66d8d Updated CSP for bluesky 2025-09-03 20:55:21 +02:00
33a5fadf67 web-app-chess: fix Corepack/Yarn EACCES and switch to ARG-driven Dockerfile
• Add roles/web-app-chess/files/Dockerfile using build ARGs (CHESS_VERSION, CHESS_REPO_URL, CHESS_REPO_REF, CHESS_ENTRYPOINT_REL, CHESS_ENTRYPOINT_INT, CHESS_APP_DATA_DIR, CONTAINER_PORT). Enable Corepack/Yarn as root in the runtime stage to avoid EACCES on /usr/local/bin symlinks, then drop privileges to 'node'.

• Delete Jinja-based templates/Dockerfile.j2; docker-compose now passes former Jinja vars via build.args. • Update templates/docker-compose.yml.j2 to forward all required build args. • Update config/main.yml: add CSP flag 'script-src-elem: unsafe-inline'.

Ref: https://chatgpt.com/share/68b88d3d-3bd8-800f-9723-e8df0cdc37e2
2025-09-03 20:47:50 +02:00
699a6b6f1e feat(web-app-magento): add Magento role + network/ports
- add role files (docs, vars, config, tasks, schema, templates)

- networks: add web-app-magento 192.168.103.208/28

- ports: add localhost http 8052

Conversation: https://chatgpt.com/share/68b8820f-f864-800f-8819-da509b99cee2
2025-09-03 20:00:01 +02:00
61c29eee60 web-app-chess: build/runtime hardening & feature enablement
Build: use Yarn 4 via Corepack; immutable install with inline builds.

Runtime: enable Corepack as user 'node', use project-local cache (/app/.yarn/cache), add curl; fix ownership.

Entrypoint: generate keys in correct dir; run 'yarn install --immutable --inline-builds' before migrations; wait for Postgres.

Config: enable matomo/css/desktop; notify 'docker compose build' on entrypoint changes.

Docs: rename README title to 'Chess'.

Ref: ChatGPT conversation (2025-09-03) — https://chatgpt.com/share/68b88126-7a6c-800f-acae-ae61ed577f46
2025-09-03 19:56:13 +02:00
d5204fb5c2 Removed unnecessary env loading 2025-09-03 17:41:53 +02:00
751615b1a4 Changed 09_ports.yml to 10_ports.yml 2025-09-03 17:41:14 +02:00
e2993d2912 Added more CSP urls for bluesky 2025-09-03 17:31:29 +02:00
24b6647bfb Corrected variable 2025-09-03 17:30:31 +02:00
d2dc2eab5f web-app-bluesky: refactor role, add Cloudflare DNS integration, split tasks
Changes: add AppView port; add CSP whitelist; new tasks (01_pds, 02_social_app, 03_dns); switch templates to BLUESKY_* vars; update docker-compose and env; TCP healthcheck; remove admin_password from schema.

Conversation context: https://chatgpt.com/share/68b85276-e0ec-800f-90ec-480a1d528593
2025-09-03 16:37:35 +02:00
a1130e33d7 web-app-chess: refactor runtime & entrypoint
- Move entrypoint to files/ and deploy via copy
- Parameterize APP_KEY_FILE, data dir, and entrypoint paths
- Require explicit PORT/PG envs (remove fallbacks)
- Drop stray header from config/main.yml
- Dockerfile: use templated data dir & entrypoint; keep node user
- Compose: set custom image, adjust volume mapping
- env: derive APP_SCHEME from WEB_PROTOCOL; NODE_ENV from ENVIRONMENT
- tasks: add 01_core and simplify main to include it

Ref: https://chatgpt.com/share/68b851c5-4dd8-800f-8e9e-22b985597b8f
2025-09-03 16:34:04 +02:00
df122905eb mailu: include base defaults for oletools (env_file/LD_PRELOAD)
Add base include to oletools service so it inherits env_file (LD_PRELOAD=/usr/lib/libhardened_malloc.so) and other defaults. Fixes crash: PermissionError: '/proc/cpuinfo' during hardened_malloc compatibility probe when LD_PRELOAD was absent. Aligns oletools with other Mailu services.

Refs: ChatGPT discussion – https://chatgpt.com/share/68b837ba-c9cc-800f-b5d9-62b60d6fafd9
2025-09-03 14:42:50 +02:00
d093a22d61 Added correct CSP for JIRA 2025-09-03 11:35:24 +02:00
5e550ce3a3 sys-ctl-rpr-docker-soft: switch to STRICT label mode and adapt tests
- script.py now resolves docker-compose project and working_dir strictly from container labels
- removed container-name fallback logic
- adjusted sys-ctl-hlth-docker-container to include sys-ctl-rpr-docker-soft
- cleaned up sys-svc-docker dependencies
- updated unit tests to mock docker inspect and os.path.isfile for STRICT mode

Conversation: https://chatgpt.com/share/68b80927-b800-800f-a909-0fe8d110fd0e
2025-09-03 11:24:14 +02:00
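The STRICT label mode described above can be sketched as follows. `com.docker.compose.project` and `com.docker.compose.project.working_dir` are the labels Docker Compose sets on containers it manages; the function name and error handling here are illustrative, not the role's actual `script.py`:

```python
import json
import subprocess

def resolve_compose_context(container, inspect_output=None):
    """Resolve the compose project and working dir strictly from container
    labels, with no container-name fallback (STRICT mode)."""
    if inspect_output is None:
        # In the real script this shells out to `docker inspect <container>`.
        inspect_output = subprocess.check_output(
            ["docker", "inspect", container], text=True
        )
    data = json.loads(inspect_output)[0]
    labels = data.get("Config", {}).get("Labels") or {}
    project = labels.get("com.docker.compose.project")
    workdir = labels.get("com.docker.compose.project.working_dir")
    if not project or not workdir:
        # Strict: a container without compose labels is an error, not a guess.
        raise RuntimeError(f"{container}: compose labels missing")
    return project, workdir
```

Unit tests can then mock the inspect output (as the commit notes) instead of requiring a Docker daemon.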
0ada12e3ca Enabled rpr service via failed health check instead of timer 2025-09-03 10:46:46 +02:00
1a5ce4a7fa web-app-bookwyrm, web-app-confluence:
- Fix BookWyrm email SSL/TLS handling (use ternary without 'not' for clarity)
- Add truststore_enabled flag in Confluence config and vars
- Wire JVM_SUPPORT_RECOMMENDED_ARGS to disable UPM signature check if truststore is disabled
- Add placeholder style.css.j2 for Confluence

See conversation: https://chatgpt.com/share/68b80024-7100-800f-a2fe-ba8b9f5cec05
2025-09-03 10:45:41 +02:00
a9abb3ce5d Added unsafe-eval csp to jira 2025-09-03 09:43:07 +02:00
71ceb339fc Fix Confluence & BookWyrm setup:
- Add docker compose build trigger in docker-compose tasks
- Cleanup svc-prx-openresty vars
- Enable unsafe-inline CSP flags for BookWyrm, Confluence, Jira to allow Atlassian inline scripts
- Generalize CONFLUENCE_HOME usage in vars, env and docker-compose
- Ensure confluence-init.properties written with correct home
- Add JVM_SUPPORT_RECOMMENDED_ARGS to pass atlassian.home
- Update README to reference {{ CONFLUENCE_HOME }}

See: https://chatgpt.com/share/68b7582a-aeb8-800f-a14f-e98c5b4e6c70
2025-09-02 22:49:02 +02:00
61bba3d2ef feat(bookwyrm): production-ready runtime + Redis wiring
- Dockerfile: build & install gunicorn wheels
- compose: run initdb before start; use `python -m gunicorn`
- env: add POSTGRES_* and BookWyrm Redis aliases (BROKER/ACTIVITY/CACHE) + CACHE_URL
- vars: add cache URL, DB indices, and URL aliases for Redis

Ref: https://chatgpt.com/share/68b7492b-3200-800f-80c4-295bc3233d68
2025-09-02 21:45:11 +02:00
0bde4295c7 Implemented correct Confluence version 2025-09-02 17:01:58 +02:00
8059f272d5 Refactor Confluence and Jira env templates to use official Atlassian ATL_* database variables instead of unused custom placeholders. Ensures containers connect directly to PostgreSQL without relying on CONFLUENCE_DATABASE_* or JIRA_DATABASE_* vars. See conversation: https://chatgpt.com/share/68b6ddfd-3c44-800f-a57e-244dbd7ceeb5 2025-09-02 14:07:38 +02:00
7c814e6e83 BookWyrm: update Dockerfile and env handling
- Remove ARG BOOKWYRM_VERSION default, use Jinja variable directly
- Add proper SMTP environment variables mapping (EMAIL_HOST, EMAIL_PORT, TLS/SSL flags, user, password, default_from)
- Ensure env.j2 uses BookWyrm-expected names only
Ref: ChatGPT conversation 2025-09-02 https://chatgpt.com/share/68b6dc73-3784-800f-9a7e-340be498a412
2025-09-02 14:01:04 +02:00
d760c042c2 Atlassian JVM sizing: cast memory vars to int before floor-division
Apply |int to TOTAL_MB and dependent values to prevent 'unsupported operand type(s) for //' during templating in Confluence and Jira roles.

Context: discussion on 2025-09-02 — https://chatgpt.com/share/68b6d386-4490-800f-9bad-aa7be1571ebe
2025-09-02 13:22:59 +02:00
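The failure mode behind this commit: Ansible facts can surface as strings after templating, and Python's floor division rejects `str // int`. The Jinja `|int` cast mirrors the first line of this sketch (the half-of-total heap ratio is an illustrative assumption, not necessarily the roles' actual sizing):

```python
def jvm_xmx_mb(total_mb):
    """Derive an Xmx value from total memory; cast first, as |int does in Jinja.
    Without the cast, "8192" // 2 raises:
    TypeError: unsupported operand type(s) for //: 'str' and 'int'."""
    total = int(total_mb)
    return total // 2
```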
6cac8085a8 feat(web-app-chess): add castling.club role with ports, networks, and build setup
- Added network subnet (192.168.103.192/28) and port 8050 for web-app-chess
- Replaced stub README with usability-focused description of castling.club
- Implemented config, vars, meta, and tasks for web-app-chess
- Added Dockerfile, docker-compose.yml, env, and docker-entrypoint.sh templates
- Integrated entrypoint asset placement
- Updated meta to reflect usability and software features

Ref: https://chatgpt.com/share/68b6c65a-3de8-800f-86b2-a110920cd50e
2025-09-02 13:21:15 +02:00
3a83f3d14e Refactor BookWyrm role: switch to source-built Dockerfile, update README/meta for usability, add env improvements (ALLOWED_HOSTS, Redis vars, Celery broker), and pin version v0.7.5. See https://chatgpt.com/share/68b6d273-abc4-800f-ad3f-e1a5b9f8dad0 2025-09-02 13:18:32 +02:00
61d852c508 Added ports and networks for bookwyrm, jira, confluence 2025-09-02 12:08:20 +02:00
188b098503 Confluence/Jira roles: add READMEs, switch to custom images, proxy/JVM envs, and integer-safe heap sizing
Confluence: README added; demo disables OIDC/LDAP; Dockerfile overlay; docker-compose now uses CONFLUENCE_CUSTOM_IMAGE and DB depends include; env.j2 adds ATL_* and JVM_*; vars use integer math (//) for Xmx/Xms and expose CUSTOM_IMAGE.

Jira: initial role skeleton with README, config/meta/tasks; Dockerfile overlay; docker-compose using JIRA_CUSTOM_IMAGE and DB depends include; env.j2 with proxy + JVM envs; vars with integer-safe memory sizing.

Context: https://chatgpt.com/share/68b6b592-2250-800f-b68e-b37ae98dbe70
2025-09-02 12:07:34 +02:00
bc56940e55 Implement initial BookWyrm role
- Removed obsolete TODO.md
- Added config/main.yml with service, feature, CSP, and registration settings
- Added schema/main.yml defining vaulted SECRET_KEY (alphanumeric)
- Added tasks/main.yml to load stateful stack
- Added Dockerfile.j2 ensuring data/media dirs
- Added docker-compose.yml.j2 with application, worker, redis, volumes
- Added env.j2 with registration, secrets, DB, Redis, OIDC support
- Extended vars/main.yml with BookWyrm variables and OIDC, Docker, Redis settings
- Updated meta/main.yml with logo and run_after dependencies

Ref: https://chatgpt.com/share/68b6c060-3a0c-800f-89f8-e114a16a4a80
2025-09-02 12:03:11 +02:00
5dfc2efb5a Used port variable 2025-09-02 11:59:50 +02:00
7f9dc65b37 Add README.md files for web-app-bookwyrm, web-app-postmarks, and web-app-socialhome roles
Introduce integration test to ensure all web-app-* roles contain a README.md (required for Web App Desktop visibility)

See: https://chatgpt.com/share/68b6be49-7b78-800f-a3ff-bf922b4b083f
2025-09-02 11:52:34 +02:00
163a925096 fix(docker-compose): proper lock path + robust pull for buildable services
- Store pull lock under ${PATH_DOCKER_COMPOSE_PULL_LOCK_DIR}/<hash>.lock so global cleanup removes it reliably
- If any service defines `build:`, run `docker compose build --pull` before pulling
- Use `docker compose pull --ignore-buildable` when supported; otherwise tolerate pull failures for locally built images

This prevents failures when images are meant to be built locally (e.g., custom images) and ensures lock handling is consistent.

Ref: https://chatgpt.com/share/68b6b592-2250-800f-b68e-b37ae98dbe70
2025-09-02 11:15:28 +02:00
a8c88634b5 cleanup: remove unused handlers and add integration test for unused handlers
Removed obsolete handlers from roles (VirtualBox, backup-to-USB, OpenLDAP)
and introduced an integration test under tests/integration/test_handlers_invoked.py
that ensures all handlers defined in roles/*/handlers are actually notified
somewhere in the code base. This keeps the repository clean by preventing
unused or forgotten handlers from accumulating.

Ref: https://chatgpt.com/share/68b6b28e-4388-800f-87d2-34dfb34b8d36
2025-09-02 11:02:30 +02:00
ce3fe1cd51 Nextcloud: integrate Talk & Whiteboard; adjust ports & healthchecks
- Enable Spreed (Talk); signaling via /standalone-signaling/
- STUN/TURN: move STUN to 3480 (3479 occupied by BBB), keep TURN 5350 reserved
- docker-compose: expose internal WS ports; explicit TURN port mapping
- Healthchecks: add nc-based TCP checks (roles/docker-container/templates/healthcheck/nc.yml.j2)
- Nginx: location proxy to talk:8081
- Schema: add talk_* secrets (turn/signaling/internal)
- Plugins: configure spreed/whiteboard via vars/*; remove old task files
- Ports matrix (group_vars/all/09_ports.yml) updated/commented

Conversation: https://chatgpt.com/share/68b61a6a-e1dc-800f-b793-4aa600bc0166
2025-09-02 00:13:23 +02:00
7ca8b7c71d feat(nextcloud): integrate Talk & Whiteboard; refactor to NEXTCLOUD_* vars; full-stack setup
config(ports): add Nextcloud websocket port (4003); canonical domains (nextcloud/talk/whiteboard)

refactor: unify get_app_conf usage & Jinja spacing; migrate paths/handlers to new NEXTCLOUD_* vars

feat(plugins): split plugin routines; configure Whiteboard via occ (URL + JWT)

fix(oidc): use NEXTCLOUD_URL for logout; correct LDAP attribute mappings; add OIDC flavor switch

feat: Whiteboard container & reverse-proxy location; Talk STUN/WS ports; Redis URL for Whiteboard

chore: drop obsolete TODO; minor cleanups in oauth2-proxy, matrix, peertube, pgadmin, phpldapadmin, pixelfed, phpmyadmin

security(schema): Bluesky jwt_secret now base64_prefixed_32; add Nextcloud whiteboard_jwt_secret

db: normalize postgres image tag templating; central DB host checks spacing fixes

ops: add full-stack bootstrap (certs, proxy, volumes); internal nginx config reload handler update

refs: https://chatgpt.com/share/68b5f5b7-8d64-800f-b001-1241f818dc0e
2025-09-01 21:37:02 +02:00
110381e80c Refactored peertube role and implemented config volume 2025-09-01 18:19:50 +02:00
b02d88adc0 Refactored server roles for better readability 2025-09-01 18:08:35 +02:00
b7065837df MediaWiki: switch feature.css to false and add custom Vector 2022 override stylesheet
See: https://chatgpt.com/share/68b5b925-f418-800f-8f84-de744dd2d093
2025-09-01 17:18:12 +02:00
c98a2378c4 Added is defined condition 2025-09-01 17:05:30 +02:00
4ae3cee36c web-svc-logout: merge logout domains into CSP connect-src and refactor task flow
• Add tasks/01_core.yml to set applications[application_id].server.csp.whitelist['connect-src'] = LOGOUT_CONNECT_SRC_NEW.

• Switch tasks/main.yml to include 01_core.yml (run-once guard preserved).

• Update templates/env.j2 to emit LOGOUT_DOMAINS as a comma-separated list.

• Rework vars/main.yml: compute LOGOUT_DOMAINS, derive LOGOUT_ORIGINS with WEB_PROTOCOL, read connect-src via the get_app_conf filter, and merge/dedupe (unique).

Rationale: ensure CSP allows cross-domain logout requests for all configured services.

Conversation: https://chatgpt.com/share/68b5b07d-b208-800f-b6b2-f26934607c8a
2025-09-01 16:41:33 +02:00
b834f0c95c Implemented config image for pretix 2025-09-01 16:20:04 +02:00
9f734dff17 web-app-pretix: fix healthcheck and allowed hosts
- Add Host header to curl healthcheck when container_hostname is defined
- Use PRETIX_PRETIX_ALLOWED_HOSTS to fix Django 400 Bad Request during healthcheck
- Centralize PRETIX_HOSTNAME from container_hostname var
- Add Redis broker/result backend config for Celery

See: https://chatgpt.com/share/68b59c42-c0fc-800f-9bfb-f1137c59b3de
2025-09-01 15:15:04 +02:00
6fa4d00547 Refactor CDN and run_once handling
- Move run_once include from main.yml to 01_core.yml in desk-gnome-caffeine and desk-ssh
- Introduce sys-svc-cdn/01_core.yml to handle shared/vendor dirs once and role dirs per run
- Replace cdn.* with cdn_paths_all.* across inj roles
- Split cdn_dirs into cdn_dirs_role and CDN_DIRS_GLOBAL
- Ensure cdn_urls uses cdn_paths_all

Details: https://chatgpt.com/share/68b58d64-1e28-800f-8907-36926a9e9a9b
2025-09-01 14:11:36 +02:00
7254667186 Nextcloud: make app:update more robust by retrying once with retries/until (fixes transient migration errors)
See: https://chatgpt.com/share/68b57e29-4420-800f-b326-b34d09fa64b5
2025-09-01 13:06:44 +02:00
aaedaab3da refactor(web-app-mediawiki): unify debug & oidc handling via _ensure_require, introduce host-side prep, switch to bind mounts
- Removed obsolete Installation.md, TODO.md, 02_debug.yml, 05_oidc.yml and legacy debug enable/disable tasks
- Added 01_prep.yml to render debug.php/oidc.php on host side before container start
- Introduced _ensure_require.yml for generic require_once management in LocalSettings.php
- Renamed 01_install.yml -> 02_install.yml to align with new numbering
- Updated docker-compose.yml.j2 to bind-mount mw-local into /opt/mw-local
- Adjusted vars/main.yml to define MEDIAWIKI_LOCAL_MOUNT_DIR and MEDIAWIKI_LOCAL_PATH
- Templates debug.php.j2 and oidc.php.j2 now gated by MODE_DEBUG and MEDIAWIKI_OIDC_ENABLED
- main.yml now orchestrates prep, install, debug, extensions, oidc require, admin consistently

Ref: https://chatgpt.com/share/68b57db2-efcc-800f-a733-aca952298437
2025-09-01 13:04:57 +02:00
7791bd8c04 Implement filter checks: ensure all defined filters are used and remove dead code
Integration tests added/updated:
- tests/integration/test_filters_usage.py: AST-based detection of filter definitions (FilterModule.filters), robust Jinja detection ({{ ... }}, {% ... %}, {% filter ... %}), plus Python call tracking; fails if a filter is used only under tests/.
- tests/integration/test_filters_are_defined.py: inverse check — every filter used in .yml/.yaml/.j2/.jinja2/.tmpl must be defined locally. Scans only inside Jinja blocks and ignores pipes inside strings (e.g., lookup('pipe', "... | grep ... | awk ...")) to avoid false positives like trusted_hosts, woff/woff2, etc.

Bug fixes & robustness:
- Build regexes without %-string formatting to avoid ValueError from literal '%' in Jinja tags.
- Strip quoted strings in usage analysis so sed/grep/awk pipes are not miscounted as filters.
- Prevent self-matches in the defining file.

Cleanup / removal of dead code:
- Removed unused filter plugins and related unit tests:
  * filter_plugins/alias_domains_map.py
  * filter_plugins/get_application_id.py
  * filter_plugins/load_configuration.py
  * filter_plugins/safe.py
  * filter_plugins/safe_join.py
  * roles/svc-db-openldap/filter_plugins/build_ldap_nested_group_entries.py
  * roles/sys-ctl-bkp-docker-2-loc/filter_plugins/dict_to_cli_args.py
  * corresponding tests under tests/unit/*
- roles/svc-db-postgres/filter_plugins/split_postgres_connections.py: dropped no-longer-needed list_postgres_roles API; adjusted tests.

Misc:
- sys-stk-front-proxy/defaults/main.yml: clarified valid vhost_flavour values (comma-separated).

Ref: https://chatgpt.com/share/68b56bac-c4f8-800f-aeef-6708dbb44199
2025-09-01 11:47:51 +02:00
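A minimal sketch of the two-sided check this commit describes: AST-based collection of `FilterModule.filters` definitions, and usage detection that only looks inside Jinja blocks and strips quoted strings so shell pipes (`grep`, `awk`) are not miscounted as filters. The real tests are more thorough; this only shows the core idea:

```python
import ast
import re

def defined_filters(source):
    """Collect filter names from a FilterModule.filters() returning a dict literal."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and node.name == "filters":
            for ret in ast.walk(node):
                if isinstance(ret, ast.Return) and isinstance(ret.value, ast.Dict):
                    for key in ret.value.keys:
                        if isinstance(key, ast.Constant) and isinstance(key.value, str):
                            names.add(key.value)
    return names

JINJA_BLOCK = re.compile(r"{{.*?}}|{%.*?%}", re.S)
PIPE_FILTER = re.compile(r"\|\s*([A-Za-z_]\w*)")

def used_filters(text):
    """Find '| name' occurrences, but only inside Jinja blocks and never
    inside string literals (avoids false positives from shell pipelines)."""
    used = set()
    for block in JINJA_BLOCK.findall(text):
        block = re.sub(r"'[^']*'|\"[^\"]*\"", "", block)  # drop quoted strings
        used.update(PIPE_FILTER.findall(block))
    return used
```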
34b3f3b0ad Optimized healthcheck link for web-app-yourls 2025-09-01 10:54:08 +02:00
94fe58b5da safe_join: raise ValueError on None parameters and update tests
Changed safe_join to raise ValueError if base or tail is None instead of returning 'None/path'.
Adjusted unit tests accordingly to expect exceptions for None inputs and kept empty-string handling valid.

Ref: https://chatgpt.com/share/68b55850-e854-800f-9702-09ea956b8dc4
2025-09-01 10:25:08 +02:00
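The changed contract can be sketched like this (the joining logic is a plausible reconstruction; only the None-raises-ValueError behavior and the valid empty-string handling are stated by the commit):

```python
def safe_join(base, tail):
    """Join a base and a tail path segment.
    None on either side is a hard error instead of silently yielding 'None/path'."""
    if base is None or tail is None:
        raise ValueError("safe_join: base and tail must not be None")
    return str(base).rstrip("/") + "/" + str(tail).lstrip("/")
```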
9feb766e6f replaced style-src-elem by style-src 2025-09-01 10:14:03 +02:00
231fd567b3 feat(frontend): rename inj roles to sys-front-*, add sys-svc-cdn, cache-busting lookup
Introduce sys-svc-cdn (cdn_paths/cdn_urls/cdn_dirs) and ensure CDN directories + latest symlink.

Rename sys-srv-web-inj-* → sys-front-inj-*; update includes/templates; serve shared/per-app CSS & JS via CDN.

Add lookup_plugins/local_mtime_qs.py for mtime-based cache busting; split CSS into default.css/bootstrap.css + optional per-app style.css.

CSP: use style-src-elem; drop unsafe-inline for styles. Services: fix SYS_SERVICE_ALL_ENABLED bool and controlled flush.

BREAKING CHANGE: role names changed; replace includes and references accordingly.

Conversation: https://chatgpt.com/share/68b55494-9ec4-800f-b559-44707029141d
2025-09-01 10:10:23 +02:00
3f8e7c1733 Refactor CSP filter:
- Move default 'unsafe-inline' for style-src and style-src-elem into get_csp_flags
- Ensure hashes are only added if 'unsafe-inline' not in final tokens
- Improve comments and structure
- Extend unit tests to cover default flags, overrides, and final-token logic
See: https://chatgpt.com/share/68b54520-5cfc-800f-9bac-45093740df78
2025-09-01 09:03:22 +02:00
3bfab9ef8e feat(filter_plugins/url_join): add query parameter support
- Support query elements starting with '?' or '&'
  * First query element normalized to '?', subsequent to '&'
  * Each query element must be exactly one 'key=value' pair
  * Query elements may only appear after path elements
  * Once query starts, no more path elements are allowed
- Extend test suite with success and failure cases for query handling

See: https://chatgpt.com/share/68b537ea-d198-800f-927a-940c4de832f2
2025-09-01 08:16:22 +02:00
f1870c07be refactor(filter_plugins/url_join): enforce mandatory scheme and raise specific AnsibleFilterError messages
Improved url_join filter:
- Requires first element to contain a valid '<scheme>://'
- Raises specific errors for None, empty list, wrong type, missing scheme,
  extra schemes in later parts, or string conversion failures
- Provides clearer error messages with index context in parts

See: https://chatgpt.com/share/68b537ea-d198-800f-927a-940c4de832f2
2025-09-01 08:06:48 +02:00
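Taken together, the two `url_join` commits above imply behavior like the following sketch. The real filter raises `AnsibleFilterError` with index context; this self-contained version uses plain `ValueError` and approximates the rules (mandatory scheme on the first part, one `key=value` per query element, `?` normalized first then `&`, no path elements after the query starts):

```python
import re

def url_join(parts):
    """Join URL parts; the first must carry '<scheme>://', and parts starting
    with '?' or '&' open the query section."""
    if not isinstance(parts, list) or not parts:
        raise ValueError("url_join: expected a non-empty list")
    first = str(parts[0])
    if not re.match(r"^[a-zA-Z][a-zA-Z0-9+.-]*://", first):
        raise ValueError(f"url_join: part 0 ({first!r}) must contain '<scheme>://'")
    path, query = [first.rstrip("/")], []
    for i, part in enumerate(parts[1:], start=1):
        s = str(part)
        if "://" in s:
            raise ValueError(f"url_join: part {i} contains an extra scheme")
        if s.startswith(("?", "&")):
            body = s[1:]
            if body.count("=") != 1:
                raise ValueError(f"url_join: part {i} must be one 'key=value' pair")
            # First query element is normalized to '?', subsequent ones to '&'.
            query.append(("?" if not query else "&") + body)
        else:
            if query:
                raise ValueError(f"url_join: part {i}: path element after query start")
            path.append(s.strip("/"))
    return "/".join(path) + "".join(query)
```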
d0cec9a7d4 CSP filters: add explicit style-src-elem handling and improve unit tests
See ChatGPT conversation: https://chatgpt.com/share/68b4a82c-e0c8-800f-9273-9165ce1aa8d6
2025-08-31 21:53:39 +02:00
1dbd714a56 yourls: move container_port/healthcheck to vars; listen on 8080
• Removed hardcoded container_port/container_healthcheck from docker-compose.yml.j2
• Added container_port=8080 and container_healthcheck to vars/main.yml
• Rationale: current image listens on 8080; centralizes settings in vars

Ref: https://chatgpt.com/share/68b4a69d-e4b0-800f-a4f8-6c8e4fc55ee4
2025-08-31 21:48:24 +02:00
3a17b2979e Refactor CSP filters to use get_url for domain resolution and update tests to check CSP directives order-independently. See: https://chatgpt.com/share/68b49e5c-6774-800f-9d8e-a3f980799c08 2025-08-31 21:11:57 +02:00
bb0530c2ac Optimized yourls variables and healthcheck 2025-08-31 20:38:02 +02:00
aa2eb53776 fix(csp): always include internal CDN in script-src/connect-src and update tests accordingly
See ChatGPT conversation: https://chatgpt.com/share/68b492b8-847c-800f-82a9-fb890d4add7f
2025-08-31 20:22:05 +02:00
5f66c1a622 feat(postgres): add split_postgres_connections filter and average pool fact
Compute POSTGRES_ALLOWED_AVG_CONNECTIONS once and propagate to app roles (gitlab, mastodon, listmonk, matrix, pretix, mobilizon, openproject, discourse). Fix docker-compose postgres command (-c flags split). Add unit tests. Minor env/locale tweaks and includes.

Conversation: https://chatgpt.com/share/68b48e72-cc28-800f-9c21-270cbc17d82a
2025-08-31 20:04:14 +02:00
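The idea behind `POSTGRES_ALLOWED_AVG_CONNECTIONS` is an even split of the server's connection budget across the app roles sharing the central database; the exact signature of `split_postgres_connections` is not shown in the commit, so this is only a plausible sketch:

```python
def split_postgres_connections(max_connections, apps):
    """Evenly divide the Postgres connection budget across the applications
    that use the central database, so their pools cannot jointly exceed it."""
    apps = list(apps)
    if not apps:
        raise ValueError("split_postgres_connections: no applications given")
    return int(max_connections) // len(apps)
```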
b3dfb8bf22 Fix: Resolved Discourse plugin bug and unified variable/path handling
- Discourse: fixed 'DISCOURSE_CONTAINERS_DIR' and 'DISCOURSE_APPLICATION_YML_DEST'
- Nextcloud: improved plugin enable/configure tasks formatting
- WordPress: unified OIDC, msmtp, and upload.ini variables and tasks
- General: aligned spacing and switched to path_join for consistency
2025-08-29 20:53:36 +02:00
db642c1c39 refactor(schedule): unify service timeouts, rename 08_timer.yml → 08_schedule.yml, fix docker repair/update timeouts, raise WP upload limit
See https://chatgpt.com/share/68b1deb9-2534-800f-b28f-7f19925b1fa7
2025-08-29 19:09:28 +02:00
2fccebbd1f Enforce uppercase README.md and TODO.md filenames
- Renamed all Readme.md → README.md
- Renamed all Todo.md → TODO.md
- Added integration test (tests/integration/test_filename_conventions.py) to automatically check naming convention.

Background:
Consistency in file naming (uppercase README.md and TODO.md) avoids issues with case-sensitive filesystems and ensures desktop cards (e.g. Pretix) are properly included.
Ref: https://chatgpt.com/share/68b1d135-c688-800f-9441-46a3cbfee175
2025-08-29 18:11:53 +02:00
c23fbd8ec4 Add new role web-app-confluence
Introduced a new Ansible role for deploying Atlassian Confluence within the Infinito.Nexus ecosystem.
The role follows the same structure as web-app-pretix and includes:

- `vars/main.yml`: Core variables, database config, OIDC integration.
- `config/main.yml`: Docker service definitions, features (Matomo, CSS, OIDC, logout, central DB).
- `tasks/main.yml`: Loads docker, db and proxy stack.
- `schema/main.yml`: Placeholder for schema definitions.
- `templates/`:
  - `Dockerfile.j2` (base for OIDC plugins/extensions),
  - `docker-compose.yml.j2` (service orchestration),
  - `env.j2` (environment configuration).
- `meta/main.yml`: Metadata, license, company, logo (Font Awesome book-open icon).

Canonical domain is set to `confluence.{{ PRIMARY_DOMAIN }}`.
This role ensures Confluence integrates seamlessly with Keycloak OIDC and the Infinito.Nexus service stack.

Conversation: https://chatgpt.com/share/68b1d006-bbd4-800f-9d2e-9c8a8af2c00f
2025-08-29 18:07:01 +02:00
2999d9af77 web-app-pretix: fully implemented role
Summary:
- Replace draft with complete README (features, resources, credits).
- Remove obsolete Todo.md.
- Switch to custom image tag (PRETIX_IMAGE_CUSTOM) and install 'pretix-oidc' in Dockerfile.
- Drop unused 'config' volume; keep persistent 'data' only.
- Rename docker-compose service from 'application' to 'pretix' and use container_port.
- Use standard depends_on include for DB/Redis (dmbs_excl).
- Align vars to docker.services.pretix.* (image/version/name); add PRETIX_IMAGE_CUSTOM.

Breaking:
- Service key changed to 'pretix' under docker.services.
- 'config' volume removed from compose.

Status:
- Pretix role is now fully implemented and production-ready.

Reference:
- Conversation: https://chatgpt.com/share/68b1cb34-b7dc-800f-8b39-c183124972f2
2025-08-29 17:46:31 +02:00
2809ffb9f0 Added correct fa class and description for pretix 2025-08-29 17:02:50 +02:00
cb12114ce8 Added correct run_after for pretix 2025-08-29 16:47:21 +02:00
ba99e558f7 Improve SAN certbundle task logic and messages
- Fixed typo: 'seperat' → 'separate'
- Added more robust changed_when conditions (stdout + stderr, handle already-issued, rate-limit, service-down cases)
- Added explicit warnings for Let's Encrypt rate limits (exact set and generic)
- Improved readability of SAN encapsulation task with descriptive name

See conversation: https://chatgpt.com/share/68b1bc75-c3a0-800f-8861-fcf4f5f4a48c
2025-08-29 16:45:03 +02:00
2aed0f97d2 Enhance timeout_start_sec_for_domains filter to accept dict, list, or str
- Updated filter to handle dict (domain map), list (flattened domains), or single str inputs.
- Prevents duplicate 'www.' prefixes by checking prefix before adding.
- Adjusted unit tests:
  * Replaced old non-dict test with invalid type tests (int, None).
  * Added explicit tests for list and string input types.

See conversation: https://chatgpt.com/share/68b1ae9a-1ac0-800f-b49d-2915386a1a23
2025-08-29 15:57:00 +02:00
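A minimal sketch of what such a filter plugin might look like; the function name matches the commit, but the base/per-domain constants, the `include_www` flag, and the exact flattening rules are assumptions for illustration:

```python
def timeout_start_sec_for_domains(domains, include_www=True, base=120, per_domain=30):
    """Derive a TimeoutStartSec value from the number of domains.

    Accepts a dict (domain map), a list of domains, or a single str;
    anything else raises, mirroring the invalid-type unit tests."""
    if isinstance(domains, dict):
        flat = []
        for value in domains.values():
            flat.extend(value if isinstance(value, list) else [value])
    elif isinstance(domains, list):
        flat = list(domains)
    elif isinstance(domains, str):
        flat = [domains]
    else:
        raise TypeError(f"unsupported type: {type(domains).__name__}")
    if include_www:
        # Check the prefix before adding to avoid duplicate 'www.' entries.
        flat += ["www." + d for d in flat if not d.startswith("www.")]
    return base + per_domain * len(set(flat))
```

With the assumed constants, a single domain yields `base + 2 * per_domain` once its `www.` variant is counted.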
f36c7831b1 Implement dynamic TimeoutStartSec filter for domains and update roles
- Added new filter plugin 'timeout_start_sec_for_domains' to calculate TimeoutStartSec based on number of domains.
- Updated sys-ctl-hlth-csp and sys-ctl-hlth-webserver tasks to use the filter.
- Removed obsolete systemctl.service.j2 in sys-ctl-hlth-csp.
- Adjusted variable naming (CURRENT_PLAY_DOMAINS_ALL etc.) in multiple roles.
- Updated srv-letsencrypt and sys-svc-certs to use uppercase vars.
- Switched pretix role to sys-stk-full-stateful and removed leftover javascript.js.
- Added unittests for the new filter under tests/unit/filter_plugins.

See conversation: https://chatgpt.com/share/68b1ae9a-1ac0-800f-b49d-2915386a1a23
2025-08-29 15:44:31 +02:00
009bee531b Refactor role naming for TLS and proxy stack
- Renamed role `srv-tls-core` → `sys-svc-certs`
- Renamed role `srv-https-stack` → `sys-stk-front-pure`
- Renamed role `sys-stk-front` → `sys-stk-front-proxy`
- Updated all includes, READMEs, meta, and dependent roles accordingly

This improves clarity and consistency of naming conventions for certificate management and proxy orchestration.

See: https://chatgpt.com/share/68b19f2c-22b0-800f-ba9b-3f2c8fd427b0
2025-08-29 14:38:20 +02:00
4c7bb6d9db Solved path bugs and optimized path handling 2025-08-29 14:13:59 +02:00
092869b29a pretix: enable OIDC support
- add pretix-oidc plugin installation (Dockerfile, version 2.3.1 default)
- configure OIDC env vars (issuer, endpoints, client ID/secret, scopes, unique attribute)
- enable redis + database, add config/data volumes
- switch canonical domain to ticket.<PRIMARY_DOMAIN> with pretix.<PRIMARY_DOMAIN> alias
- mirror GitLab-style OIDC var structure for consistency

Implements pretix authentication via Keycloak/SSO.
See: https://chatgpt.com/share/68b19721-341c-800f-b372-527164474018
2025-08-29 14:04:03 +02:00
f4ea6c6c0f refactor(web-app-gitlab): restructure configuration and add OIDC support
- Added oidc feature flag in config
- Removed obsolete credentials schema (initial_root_password)
- Updated docker-compose.yml.j2 to use explicit GITLAB_* vars (image, version, container, volumes)
- Moved initial_root_password into vars/main.yml
- Introduced GITLAB_OMNIBUS_BASE and GITLAB_OMNIBUS_OIDC config lists
- Switched env.j2 to use GITLAB_OMNIBUS_ALL join

See conversation: https://chatgpt.com/share/68b1962c-3ee0-800f-a858-d4590ff6132a
2025-08-29 14:02:46 +02:00
3ed84717a7 Solved wireguard name bugs 2025-08-29 13:03:06 +02:00
1cfc2b7e23 Optimized mastodon url 2025-08-29 12:27:59 +02:00
01b9648650 Made OIDC secret UPPER 2025-08-29 12:27:29 +02:00
65d3b3040d Activated system_service_suppress_flush for journalctl service 2025-08-29 12:26:53 +02:00
28f7ac5aba Removed attendize because it isn't maintained anymore. Pretix is the successor. 2025-08-29 11:17:10 +02:00
19926b0c57 Optimized web-app-desktop variables 2025-08-29 11:04:52 +02:00
3a79d9d630 Optimized pkgmgr variables and removed 'Ensure main.py is executable' because it should be preset by repositories itself 2025-08-29 10:53:36 +02:00
983287a84a Finished mediawiki oidc implementation 2025-08-29 04:24:50 +02:00
dd9a9b6d84 feat(mediawiki): Refactor OIDC + debug; install Composer deps in-container; modularize role
Discussion: https://chatgpt.com/share/68b10c0a-c308-800f-93ac-2ffb386cf58b

- Split tasks into 01_install, 02_debug, 03_admin, 04_extensions, 05_oidc.
- Ensure unzip+git+composer on demand in the container; run Composer as www-data with COMPOSER_HOME=/tmp/composer.
- Idempotently unpack/install PluggableAuth & OpenIDConnect; run composer install only if vendor/ is missing.
- Add sanity check for Jumbojett\OpenIDConnectClient.
- Copy oidc.php only when changed and append a single require_once to LocalSettings.php.
- Use REL1_44-compatible numeric array for $wgPluggableAuth_Config; set $wgPluggableAuth_ButtonLabelMessage.
- Debug: add debug.php that logs to STDERR (visible via docker logs); toggle cleanly with MODE_DEBUG.
- Enable OIDC feature in config; add paths/OIDC/extension vars in vars/main.yml.

fix(services): include SYS_SERVICE_GROUP_CLEANUP in StartPre lock (ssd-hdd, docker-hard).

fix(desktop/joomla): simplify MODE_DEBUG templating.

chore: minor cleanups and renames.
2025-08-29 04:10:46 +02:00
23a2e081bf Optimized services 2025-08-29 01:11:06 +02:00
4cbd848026 Set SYS_TIMER_ALL_ENABLED by default to DEBUG_MODE 2025-08-29 01:06:09 +02:00
d67f660152 Enabled CSS and Desktop for Mediawiki 2025-08-29 00:46:29 +02:00
5c6349321b Removed MyBB role, because it's deprecated and Discourse takes over 2025-08-29 00:12:35 +02:00
af1ee64246 web-app-mediawiki: installer-driven bootstrap, DB readiness, idempotent admin; drop LocalSettings bind-mount
Tasks:
- Enable docker_compose_flush_handlers=true so services come up immediately.
- Add DB readiness guard via maintenance/sql.php (SELECT 1).
- Run maintenance/install.php on empty schema with robust changed_when/failed_when (merge stdout+stderr); keep secrets hidden.
- Run maintenance/update.php for migrations with neutral changed_when unless work is done.
- Make admin creation idempotent: tolerate 'already exists' and 'Account exists', keep async+no_log.

Config changes:
- Remove LocalSettings.php template and its host bind-mount from compose.
- Drop MediaWiki settings path variables and META namespace variable (unused after switch).

Result: First boot is fully automated (schema + admin), subsequent runs are cleanly idempotent.

Ref: ChatGPT conversation (Aug 28, 2025, Europe/Berlin) — https://chatgpt.com/share/68b0d2e1-9bc0-800f-81a5-db03ce0b81e3.
2025-08-29 00:07:00 +02:00
d96bfc64a6 Added possibility to deactivate docker service loading for performance 2025-08-28 22:47:05 +02:00
6ea8301364 Refactor: migrate cmp/* and srv/* roles into sys-stk/* and sys-svc/* namespaces
- Removed obsolete 'cmp' category, introduced 'stk' category (fa-bars-staggered icon).
- Renamed roles:
  * cmp-db-docker → sys-stk-back-stateful
  * cmp-docker-oauth2 → sys-stk-back-stateless
  * srv-domain-provision → sys-stk-front
  * cmp-db-docker-proxy → sys-stk-full-stateful
  * cmp-docker-proxy → sys-stk-full-stateless
  * cmp-rdbms → sys-svc-rdbms
- Updated all include_role references, vars, templates and README.md files.
- Adjusted run_once comments and variable paths accordingly.
- Updated all web-app roles to use new sys-stk/* and sys-svc/* roles.

Conversation: https://chatgpt.com/share/68b0ba66-09f8-800f-86fc-76c47009d431
2025-08-28 22:23:09 +02:00
92f5bf6481 refactor(web-app-mybb): remove obsolete Installation.md, introduce schema for secret_pin, and rework task/vars handling
- Removed outdated Installation.md (manual plugin instructions no longer needed)
- Added schema/main.yml with validation for secret_pin
- Added config.php.j2 template to manage DB + admin config
- Refactored tasks/main.yml to deploy config.php instead of legacy docker-compose
- Removed setup-domain.yml (TLS/domain handling moved to core roles)
- Updated docker-compose.yml.j2 to mount config.php and use new vars
- Cleaned up vars/main.yml: standardized MYBB_* variable names, added MYBB_SECRET_PIN, config paths, and container port

See ChatGPT conversation: https://chatgpt.com/share/68b0ae26-93ec-800f-8785-0da7c9303090
2025-08-28 21:29:58 +02:00
58c17bf043 web-app-mediawiki: template-driven LocalSettings.php + admin automation; compose & config tweaks
Config & features:
- roles/web-app-mediawiki/config/main.yml:
  - Add sitename ('Wiki on {{ PRIMARY_DOMAIN | upper }}') and meta_namespace ('Meta')
  - Enable central_database feature and database service
  - Move volumes under docker.volumes (correct indentation)

Tasks & automation:
- roles/web-app-mediawiki/tasks/main.yml:
  - Avoid immediate compose handler flush (docker_compose_flush_handlers: false), then explicit meta: flush_handlers
  - Deploy templated LocalSettings.php to host path
  - Create admin via maintenance/createAndPromote.php (docker exec, idempotent changed_when/failed_when)

Templates:
- roles/web-app-mediawiki/templates/LocalSettings.php.j2:
  - Set $wgSitename, $wgMetaNamespace, $wgServer from MEDIAWIKI_*
  - DB settings (mysql, host:port, name, user, password)
  - Mail settings (EmergencyContact/PasswordSender)
  - Default skin: vector
  - Load basic extensions (ParserFunctions, Cite)
- roles/web-app-mediawiki/templates/docker-compose.yml.j2:
  - Switch to MEDIAWIKI_* vars, mount LocalSettings.php (ro)
  - Use container_port, include curl healthcheck
  - Fix volumes name to MEDIAWIKI_VOLUME

Vars:
- roles/web-app-mediawiki/vars/main.yml:
  - Restructure with MEDIAWIKI_* (sitename, meta_namespace, URL, image/version/container/volume)
  - Define SETTINGS host/dock paths, container_port, default user (www-data)
  - Admin bootstrap vars (name/password/email)

Misc:
- Add empty schema/main.yml placeholder for future validation

Refs: ChatGPT conversation (2025-08-28, Europe/Berlin). Link: https://chatgpt.com/share/68b0ace6-f8f4-800f-b7a7-a51a6c5260f1
2025-08-28 21:28:47 +02:00
6c2d5c52c8 Attached 'not (system_service_suppress_flush | bool)' directly to handler 2025-08-28 21:16:04 +02:00
b919f39e35 Made stop unrequired for joomla container 2025-08-28 21:15:07 +02:00
9f2cfe65af Remove non-functional Joomla LDAP integration
- Disabled LDAP feature flag (set to false by default, with comment)
- Removed ldapautocreate plugin (PHP + XML)
- Deleted LDAP helper tasks (01_ldap_files.yml, 05_ldap.yml, 07_diagnose.yml)
- Deleted LDAP CLI helper scripts (cli.php, diagnose.php, plugins.php, auth-trace.php)
- Removed LDAP configuration variables from vars/main.yml
- Removed LDAP environment variables from env.j2
- Removed LDAP-specific mounts from docker-compose.yml.j2
- Dropped php-ldap installation from Dockerfile
- Renamed task files for consistent numbering (02->01_install, 03->02_debug, 04->03_patch, 06->04_assert)

Reason: LDAP integration was removed because it was not functional.

Conversation: https://chatgpt.com/share/68b09373-7aa8-800f-8f2c-11e27123bad1
2025-08-28 19:36:12 +02:00
fe399c3967 Added all LDAP changes before removing, because it doesn't work. Will try to replace it with OIDC 2025-08-28 19:22:37 +02:00
ef801aa498 Joomla: Add LDAP autocreate plugin support
- Introduced autocreate_users feature flag in config/main.yml
- Added ldapautocreate.php and ldapautocreate.xml plugin files
- Implemented tasks/01_ldap_files.yml for plugin deployment
- Added tasks/05_ldap.yml to configure LDAP plugin and register ldapautocreate
- Renamed tasks for better structure (01→02, 02→03, etc.)
- Updated cli-ldap.php.j2 for clean parameter handling
- Mounted ldapautocreate plugin via docker-compose.yml.j2
- Extended vars/main.yml with LDAP autocreate configuration

Ref: https://chatgpt.com/share/68b0802f-bfd4-800f-b10a-57cf0c091f7e
2025-08-28 18:13:53 +02:00
18f3b1042f feat(web-app-joomla): reliable first-run install, safe debug toggler, DB patching, LDAP scaffolding
Why
- Fix flaky first-run installs and make config edits idempotent.
- Prepare LDAP support and allow optional inline CSP for UI.
- Improve observability and guard against broken configuration.php.

What
- config/main.yml: enable features.ldap; add CSP flags (allow inline style/script elem); minor spacing.
- tasks/: split into 01_install (wait for core, absolute CLI path), 02_debug (toggle $debug/$error_reporting safely), 03_patch (patch DB creds in configuration.php), 04_ldap (configure plugin via helper), 05_assert (optional php -l).
- templates/Dockerfile.j2: conditionally install/compile php-ldap (fallback to docker-php-ext-install with libsasl2-dev).
- templates/cli-ldap.php.j2: idempotently enable & configure Authentication - LDAP from env.
- templates/docker-compose.yml.j2: build custom image when LDAP is enabled; mount cli-ldap.php; pull_policy: never.
- templates/env.j2: add site/admin vars, MariaDB connector/env, full LDAP env.
- vars/main.yml: default to MariaDB (mysqli), add JOOMLA_* vars incl. JOOMLA_CONFIG_FILE.

Notes
- LDAP path implemented but NOT yet tested end-to-end.
- Ref: https://chatgpt.com/share/68b068a8-2aa4-800f-8cd1-56383561a9a8.
2025-08-28 16:33:45 +02:00
dece6228a4 Refactor docker-compose build logic and pull policy
- Added conditional '--pull' flag on retry in docker-compose build handler, tied to MODE_UPDATE
- Added 'pull_policy: never' to multiple docker-compose service templates to prevent unwanted image pulls
- Fixed minor formatting issues (e.g. Nextcloud volume spacing, WordPress desktop alignment)

Reference: https://chatgpt.com/share/68b0207a-4d9c-800f-b76f-9515885e5183
2025-08-28 11:25:35 +02:00
cb66fb2978 Refactor LDAP variable schema to use top-level constant LDAP and nested ALL-CAPS keys.
- Converted group_vars/all/13_ldap.yml from lower-case to ALL-CAPS nested keys.
- Updated all roles, tasks, templates, and filter_plugins to reference LDAP.* instead of ldap.*.
- Fixed Keycloak JSON templates to properly quote Jinja variables.
- Adjusted svc-db-openldap filter plugins and unit tests to handle new LDAP structure.
- Updated integration test to only check uniqueness of TOP-LEVEL ALL-CAPS constants, ignoring nested keys.

See: https://chatgpt.com/share/68b01017-efe0-800f-a508-7d7e2f1c8c8d
2025-08-28 10:15:48 +02:00
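The adjusted integration test only has to compare TOP-LEVEL ALL-CAPS keys across variable files. A hedged sketch of that check (helper names and data shapes are assumed, not taken from the repo):

```python
def top_level_constants(tree):
    """Collect only TOP-LEVEL ALL-CAPS keys; nested keys (e.g. LDAP.DN)
    are deliberately ignored, per the described test adjustment."""
    return [k for k in tree if isinstance(k, str) and k.isupper()]

def duplicates(trees):
    """Return ALL-CAPS constants defined in more than one variable tree."""
    seen, dups = set(), set()
    for tree in trees:
        for key in top_level_constants(tree):
            (dups if key in seen else seen).add(key)
    return dups
```

A nested key that repeats under two different constants is no longer flagged; only a repeated top-level constant is.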
b9da6908ec keycloak(role): add realm support to generic updater
- Allow kc_object_kind='realm'
- Map endpoint to 'realms' and default lookup_field to 'id'
- Use realm-specific kcadm GET/UPDATE (no -r flag)
- Preserve immutables: id, realm
- Guard query-based ID resolution to non-realm objects

Context: fixing failure in 'Update REALM mail settings' task.
See: https://chatgpt.com/share/68affdb8-3d28-800f-8480-aa6a74000bf8
2025-08-28 08:57:29 +02:00
8baec17562 web-app-taiga: extract admin bootstrap into dedicated task; add robust upsert path
Add roles/web-app-taiga/tasks/01_administrator.yml to handle admin creation via 'createsuperuser' and, on failure, an upsert fallback using 'manage.py shell'. Ensures email, is_staff, is_superuser, is_active are set and password is updated when needed; emits CHANGED marker for idempotence.

Update roles/web-app-taiga/tasks/main.yml to include the new 01_administrator.yml task file, removing the inline admin logic for better separation of concerns.

Uses taiga-manage helper service and composes docker-compose.yml with docker-compose-inits.yml to inherit env/networks/volumes consistently.

Chat reference: https://chatgpt.com/share/68af7637-225c-800f-b670-2b948f5dea54
2025-08-27 23:58:37 +02:00
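The upsert fallback via `manage.py shell` presumably reduces to a Django-style get-or-create plus attribute sync. This is an illustrative sketch only: the model import path, field names, and the CHANGED marker convention are assumptions based on the commit text, not the actual task code:

```python
def upsert_admin(User, username, email, password):
    """Idempotently ensure an active superuser exists with the given
    email and password; print a CHANGED marker only when work was done."""
    user, created = User.objects.get_or_create(
        username=username, defaults={"email": email}
    )
    changed = created
    for attr, value in (("email", email), ("is_staff", True),
                        ("is_superuser", True), ("is_active", True)):
        if getattr(user, attr) != value:
            setattr(user, attr, value)
            changed = True
    if not user.check_password(password):
        user.set_password(password)
        changed = True
    if changed:
        user.save()
        print("CHANGED")  # lets the Ansible task derive changed_when reliably
```

Emitting a marker string and matching it in `changed_when` is a common pattern for keeping shell-based tasks idempotent.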
1401779a9d web-app-taiga: add manage/init flow and idempotent admin bootstrap; fix OIDC config and env quoting
config/main.yml: convert oidc from empty mapping to block; indent flavor under oidc; enable javascript feature.

tasks/main.yml: use path_join for taiga settings; create docker-compose-inits via TAIGA_DOCKER_COMPOSE_INIT_PATH; flush handlers; add idempotent createsuperuser via taiga-manage with async/poll and masked logs.

templates/docker-compose-inits.yml.j2: include compose/container base to inherit env and project settings.

templates/env.j2: quote WEB_PROTOCOL and WEBSOCKET_PROTOCOL.

templates/javascript.js.j2: add SSO warning include.

users/main.yml: add administrator email stub.

vars/main.yml: add js_application_name; restructure OIDC flavor flags; add compose PATH vars; expose TAIGA_SUPERUSER_* vars.

Chat reference: https://chatgpt.com/share/68af7637-225c-800f-b670-2b948f5dea54
2025-08-27 23:19:42 +02:00
707a3fc1d0 Optimized defaults for modes 2025-08-27 22:58:05 +02:00
d595d46e2e Solved unquoted bug 2025-08-27 22:30:03 +02:00
73d5651eea web-app-taiga: refactor OIDC gating + defaults
- Introduced dedicated variables in vars/main.yml:
  * TAIGA_FLAVOR_TAIGAIO
  * TAIGA_TAIGAIO_ENABLED
- Replaced inline Jinja2 get_app_conf checks with TAIGA_TAIGAIO_ENABLED for
  consistency in tasks, docker-compose template and env file.
- Adjusted env.j2 to use TAIGA_TAIGAIO_ENABLED instead of direct flavor checks.
- Enabled css by default (true instead of false).
- Cleaned up spacing/indentation in config and env.

This improves readability, reduces duplicated logic, and makes it easier to
maintain both OIDC flavors (robrotheram, taigaio).

Conversation: https://chatgpt.com/share/68af65b3-27c0-800f-964f-ff4f2d96ff5d
2025-08-27 22:08:35 +02:00
12a267827d Refactor websocket and Taiga variables
- Introduce WEBSOCKET_PROTOCOL derived from WEB_PROTOCOL (wss if https, else ws).
- Replace hardcoded websocket URLs in EspoCRM, Nextcloud and Taiga with {{ WEBSOCKET_PROTOCOL }}.
- Fix mautrix-imessage to use ws:// for internal synapse:8008.
- Standardize Pixelfed OIDC env spacing.
- Refactor Taiga variables to TAIGA_* naming convention and clean up EMAIL_BACKEND definition.

See: https://chatgpt.com/share/68af62fa-4dcc-800f-9aaf-cff746daab1e
2025-08-27 21:57:04 +02:00
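The derivation described above is a one-line mapping; expressed outside Jinja it presumably amounts to:

```python
def websocket_protocol(web_protocol: str) -> str:
    """Mirror of the described rule: https -> wss, anything else -> ws."""
    return "wss" if web_protocol == "https" else "ws"
```

In the group vars this would correspond to something like `WEBSOCKET_PROTOCOL: "{{ 'wss' if WEB_PROTOCOL == 'https' else 'ws' }}"` (exact variable wiring assumed).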
c6cd6430bb Refactor Joomla role to new docker.* schema
- Move image definition from images.joomla to docker.services.joomla
- Add container name, container_port variable, and healthcheck
- Introduce JOOMLA_IMAGE, JOOMLA_VERSION, JOOMLA_CONTAINER, JOOMLA_VOLUME in vars
- Use volume mapping via docker.volumes.data

See: https://chatgpt.com/share/68af55a9-6514-800f-b6f7-1dc86356936e
2025-08-27 21:00:08 +02:00
67b2ebf001 Encapsulated code to pass performance tests 2025-08-27 20:58:00 +02:00
ebb6660473 Renamed Gitea variables 2025-08-27 20:49:35 +02:00
f62d09d8f1 Handle Let's Encrypt maintenance errors gracefully
- Extend certbundle task to ignore 'The service is down for maintenance or had an internal error'
  as a fatal failure.
- Add debug/warning output when this error occurs, so playbook does not stop but logs the issue.
- Ensure changed_when does not mark run as changed if only maintenance error was hit.

Ref: https://chatgpt.com/share/68af4e15-24cc-800f-b1dd-6a5f2380e35a
2025-08-27 20:28:25 +02:00
de159db918 web-app-wordpress: move msmtp configuration from Docker image to docker-compose mount
- Removed COPY of msmtp configuration from Dockerfile to avoid baking secrets/config into the image
- Added volume mount for host-side msmtp config ({{ WORDPRESS_HOST_MSMTP_CONF }}) in docker-compose.yml
- Keeps PHP upload.ini handling inside the image, but externalizes sensitive mail configuration
- Increases flexibility and avoids rebuilds when msmtp config changes

Ref: https://chatgpt.com/share/68af3c51-0544-800f-b76f-b2660c43addb
2025-08-27 19:12:03 +02:00
e2c2cf4bcf Updated sys-svc-msmtp execution condition 2025-08-27 18:12:49 +02:00
6e1e1ad5c5 Renamed pixelfed parameter 2025-08-27 18:11:31 +02:00
06baa4b03a Added correct validation handling 2025-08-27 18:10:49 +02:00
73e7fbdc8a refactor(web-app-wordpress): unify variable naming to uppercase WORDPRESS_* style
- Replaced all lowercase wordpress_* variables with uppercase WORDPRESS_* equivalents
- Ensured consistency across tasks, templates, and vars
- Improves readability and aligns with naming conventions

Conversation: https://chatgpt.com/share/68af29b5-8e7c-800f-bd12-48cc5956311c
2025-08-27 17:52:38 +02:00
bae2bc21ec Optimized system services, included suppress option, and solved bugs 2025-08-27 17:34:59 +02:00
a8f4dea9d2 Solved matrix name bug 2025-08-27 16:39:07 +02:00
5aaf2d28dc Refactor path handling, service conditions and dependencies
- Fixed incorrect filter usage in docker-compose handler (proper use of | path_join).
- Improved LetsEncrypt template by joining paths with filenames instead of appending manually.
- Enhanced sys-svc-msmtp task with an additional condition to only run if no-reply mailu_token exists.
- Updated Keycloak meta to depend on Mailu (ensuring token generation before setup).
- Refactored Keycloak import path variables to use path_join consistently.
- Adjusted Mailu meta dependency to run after Matomo instead of Keycloak.

See: https://chatgpt.com/share/68af13e6-edc0-800f-b76a-a5f427837173
2025-08-27 16:19:57 +02:00
5287bb4d74 Refactor Akaunting role and CSP handling
- Improved CSP filter to properly include web-svc-cdn and use protocol-aware domains
- Added Todo.md with redis and OIDC notes
- Enhanced Akaunting role config with CSP flags and redis option
- Updated schema to include app_key validation
- Reworked tasks to handle first-run marker logic cleanly
- Fixed docker-compose template (marker, healthcheck, setup flag)
- Expanded env.j2 with cache, email, proxy, and redis options
- Added javascript.js.j2 template for SSO warning
- Introduced structured vars for Akaunting role
- Removed deprecated update-repository-with-files.yml task

See conversation: https://chatgpt.com/share/68af00df-2c74-800f-90b6-6ac5b29acdcb
2025-08-27 14:58:44 +02:00
5446a1497e Optimized attendize role. The role can be removed as soon as Pretix is implemented as the alternative tool 2025-08-27 12:27:55 +02:00
19889a8cfc fix(credentials, akaunting):
- update cli/create/credentials.py to handle vault literals correctly:
  * strip 'vault |' headers and keep only ANSIBLE_VAULT body
  * skip reprocessing keys added in same run (no duplicate confirmation prompts)
  * detect both 'vault' and 'ANSIBLE_VAULT' as already encrypted

Refs: https://chatgpt.com/share/68aed780-ad4c-800f-877d-aa4c40a47755
2025-08-27 12:02:36 +02:00
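A hedged sketch of the vault-literal handling described above (function names and the exact header format are assumptions; the real code lives in cli/create/credentials.py):

```python
def normalize_vault_literal(value: str) -> str:
    """Strip a leading 'vault |' block-scalar header and keep only the
    $ANSIBLE_VAULT payload lines, de-indented."""
    lines = value.splitlines()
    # Drop header lines until the vault payload starts.
    while lines and "ANSIBLE_VAULT" not in lines[0]:
        lines.pop(0)
    return "\n".join(line.strip() for line in lines)

def is_already_encrypted(value: str) -> bool:
    """Treat both 'vault' headers and raw ANSIBLE_VAULT bodies as encrypted,
    so they are not re-encrypted (and not re-confirmed) in the same run."""
    return "vault" in value or "ANSIBLE_VAULT" in value
```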
d9980c0d8f feat(baserow): add one-time SSO warning JavaScript
- Introduced a generic sso_warning.js.j2 template under
  templates/roles/web-app/templates/javascripts/
- Included this template in web-app-baserow/templates/javascript.js.j2
- Added new variable js_application_name in
  roles/web-app-baserow/vars/main.yml to make the warning
  application-specific
- Implemented cookie-based logic so the warning is only shown once
  per user (default: 365 days)

Reference: https://chatgpt.com/share/68aecdae-82d0-800f-b05e-f2cb680664f1
2025-08-27 11:19:59 +02:00
35206aaafd Solved undeclared docker compose variable bug 2025-08-26 22:35:41 +02:00
942e8c9c12 Updated baserow CSP and variables for new Infinito.Nexus structure 2025-08-26 22:20:31 +02:00
97f4045c68 Keycloak: align client attributes with realm dictionary
- Extended kc_force_attrs in tasks/main.yml to source 'publicClient',
  'serviceAccountsEnabled' and 'frontchannelLogout' directly from
  KEYCLOAK_DICTIONARY_REALM for consistency with import definitions.
- Updated default.json.j2 import template to set 'publicClient' to true.
- Public client mode is required so the frontend API of role 'web-app-desktop'
  can handle login/logout flows without client secret.

Ref: https://chatgpt.com/share/68ae0060-4fac-800f-9f02-22592a4087d3
2025-08-26 21:22:27 +02:00
c182ecf516 Refactor and cleanup OIDC, desktop, and web-app roles
- Improved OIDC variable definitions (12_oidc.yml)
- Added account/security/profile URLs
- Restructured web-app-desktop tasks and JS handling
- Introduced oidc.js and iframe.js with runtime loader
- Fixed nginx.conf, LDAP, and healthcheck templates spacing
- Improved Lua injection for CSP and snippets
- Fixed typos (WordPress, receive, etc.)
- Added silent-check-sso nginx location

Conversation: https://chatgpt.com/share/68ae0060-4fac-800f-9f02-22592a4087d3
2025-08-26 20:44:05 +02:00
ce033c370a Removed waiting for other services; otherwise it ends up stuck waiting for the hard-restart service 2025-08-26 19:23:47 +02:00
a0477ad54c Switched OnFailure with StartPost 2025-08-26 19:10:41 +02:00
35c3681f55 sys-daemon & sys-service: align timeout handling
- Updated sys-daemon defaults:
  * Increased SYSTEMD_DEFAULT_TIMEOUT_START to 24h
  * Improved inline comments for clarity
- Changed sys-service vars:
  * Removed hardcoded 60s TimeoutStartSec
  * Now empty by default → inherits manager defaults from sys-daemon

See: https://chatgpt.com/share/68ade432-67f8-800f-b6c2-b8f87764479b
2025-08-26 18:48:45 +02:00
af97e71976 Fix: correct Docker Go template syntax in sys-ctl-rpr-docker-soft script
Replaced over-escaped '{{{{.Names}}}}' with proper '{{.Names}}'
in docker ps commands. This resolves 'failed to parse template:
unexpected "{" in command' errors during unhealthy/exited
container detection.

Reference: https://chatgpt.com/share/68addfd9-fa78-800f-abda-49161699e673
2025-08-26 18:25:25 +02:00
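The key point is that the Go template must reach `docker` as a literal `{{.Names}}`; any escaping (e.g. for Jinja) has to collapse back to exactly one pair of braces in the final command. A sketch of how the detection call might look in the repair script (the filter and function shape are assumptions):

```python
import subprocess

# Must arrive at docker as literal {{.Names}} — doubled braces like
# {{{{.Names}}}} make Go's template parser fail with 'unexpected "{"'.
GO_TEMPLATE = "{{.Names}}"

def unhealthy_containers():
    """List names of containers whose healthcheck reports unhealthy."""
    result = subprocess.run(
        ["docker", "ps", "--filter", "health=unhealthy",
         "--format", GO_TEMPLATE],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.split()
```

Passing the command as an argument list (rather than a shell string) also avoids a second round of shell-level brace mangling.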
19a51fd718 Solved linebreak bug 2025-08-26 17:13:29 +02:00
b916173422 Renamed web-app-port-ui to web-app-desktop 2025-08-26 11:35:22 +02:00
9756a0f75f Extend repair scripts with env-file support and unit tests
- Added detect_env_file() to both sys-ctl-rpr-docker-soft and sys-ctl-rpr-docker-hard
  * prefer .env, fallback to .env/env
  * append --env-file parameter automatically
- Refactored soft script to use compose_cmd() for consistent command building
- Adjusted error recovery path in soft script to also respect env-file
- Extended unit tests for soft script to cover env-file priority and restart commands
- Added new unit tests for hard script verifying env-file priority, cwd handling,
  and --only filter logic

Ref: https://chatgpt.com/share/68ad7b30-7510-800f-8172-56f03a2f40f5
2025-08-26 11:15:59 +02:00
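The described lookup order can be sketched as follows; `detect_env_file` matches the commit, while `compose_cmd`'s exact signature is an assumption:

```python
import os

def detect_env_file(project_dir: str):
    """Prefer '<dir>/.env', fall back to '<dir>/.env/env', else None."""
    for candidate in (".env", os.path.join(".env", "env")):
        path = os.path.join(project_dir, candidate)
        if os.path.isfile(path):
            return path
    return None

def compose_cmd(project_dir: str, *args: str):
    """Build a docker compose command, appending --env-file automatically
    when an env file was detected."""
    cmd = ["docker", "compose"]
    env_file = detect_env_file(project_dir)
    if env_file:
        cmd += ["--env-file", env_file]
    return cmd + list(args)
```

Note that when `.env` is a directory (the Compose-project layout some roles use), `os.path.isfile` rejects it and the lookup correctly falls through to `.env/env`.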
e417bc19bd Refactor sys-ctl-rpr-docker-soft role to use standalone Python script with argparse and unittests
- Replace Jinja2 template (script.py.j2) with raw Python script (files/script.py)
- Add argparse options: --manipulation, --manipulation-string, --timeout
- Implement timeout handling in wait_while_manipulation_running
- Update systemd ExecStart/ExecStartPre handling in tasks/01_core.yml
- Remove obsolete systemctl.service.j2 and script.py.j2 templates
- Add unittest suite under tests/unit/roles/sys-ctl-rpr-docker-soft/files/test_script.py
- Mock docker and systemctl calls in tests for safe execution

Reference: ChatGPT conversation (see https://chatgpt.com/share/68ad770b-ea84-800f-b378-559cb61fc43a)
2025-08-26 10:58:17 +02:00
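A plausible shape for the `--timeout` handling in `wait_while_manipulation_running` (polling interval, return semantics, and the activity predicate are illustrative assumptions):

```python
import time

def wait_while_manipulation_running(services, is_active, timeout=600, poll=5):
    """Wait until none of the manipulation services is active, or give up
    after 'timeout' seconds. Returns True when the wait completed, False
    on timeout (the caller decides whether to proceed anyway)."""
    deadline = time.monotonic() + timeout
    while any(is_active(s) for s in services):
        if time.monotonic() >= deadline:
            return False  # timed out while services were still busy
        time.sleep(poll)
    return True
```

In the real script `is_active` would wrap `systemctl is-active <unit>`; mocking it (as the commit's unittest suite does for docker and systemctl) keeps the tests safe to run anywhere.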
7ad14673e1 sys-service: add ExecStartPost support and adjust health/repair roles
- extended generic systemctl template to support ExecStartPost
- health-docker-volumes: run main script with whitelist, trigger both compose alarm and cleanup on failure
- repair-docker-hard: added ExecStartPre lock, ExecStart, and ExecStartPost to trigger compose alarm always, plus cleanup on failure
- removed obsolete role-specific systemctl.service.j2 templates
- improved consistency across vars and defaults

See: https://chatgpt.com/share/68ad6cb8-c164-800f-96b6-a45c6c7779b3
2025-08-26 10:15:35 +02:00
eb781dbf8b fix(keycloak/ldap): make userObjectClasses JSON-safe and exclude posixAccount
- Render userObjectClasses via `tojson` (and trim) to avoid invalid control
  characters and ensure valid realm import parsing.
- Introduce KEYCLOAK_LDAP_USER_OBJECT_CLASSES in vars; exclude `posixAccount`
  for Keycloak’s LDAP config while keeping it for Ansible-managed UNIX users.
- Update UserStorageProvider template to use the new variable.

Rationale:
Keycloak must not require `posixAccount` on every LDAP user. We keep
`posixAccount` structural for Ansible provisioning, but filter it out for
Keycloak to prevent sync/import errors on entries without POSIX attributes.

Touched:
- roles/web-app-keycloak/templates/import/components/org.keycloak.storage.UserStorageProvider.json.j2
- roles/web-app-keycloak/vars/main.yml

Refs: conversation https://chatgpt.com/share/68aa1ef0-3658-800f-bdf4-5b57131d03b4
2025-08-23 22:05:26 +02:00
6016da6f1f Optimized bbb variables 2025-08-23 19:21:07 +02:00
8b2f0ac47b refactor(web-app-espocrm): improve config patching and container vars
- Replace `ESPOCRM_NAME` with `ESPOCRM_CONTAINER` for clarity and consistency.
- Drop unused `ESPOCRM_CONFIG_FILE_PUBLIC`, rely only on `config-internal.php`.
- Make DB credential patching idempotent using `grep` + `sed` checks.
- Replace direct sed edits for maintenance/cron/cache with EspoCRM ConfigWriter.
- Add fallback execution as root if www-data user cannot write config.
- Clear EspoCRM cache only when config changes and in update mode.
- Remove obsolete OIDC scopes inline task (now handled via env/vars).
- Fix docker-compose template to use `ESPOCRM_CONTAINER`.

This refactor makes the EspoCRM role more robust, idempotent, and aligned
with EspoCRM’s official ConfigWriter mechanism.

See conversation: https://chatgpt.com/share/68a87820-12f8-800f-90d6-01ba97a1b279
2025-08-22 16:01:48 +02:00
9d6d64e11d Renamed espocrm data volume 2025-08-22 14:49:25 +02:00
f1a2967a37 Implemented sys-svc-cln-anon-volumes as a service so that it can be triggered after sys-ctl-rpr-docker-hard 2025-08-22 14:48:50 +02:00
95a2172fff Corrected link 2025-08-22 09:23:40 +02:00
dc3f4e05a8 sys-ctl-rpr-docker-hard: Refactor restart script with argparse & update systemd ExecStart
- Removed unused soft restart function and switched to argparse-based CLI.
- Added --only argument to selectively restart subdirectories.
- Updated systemctl service template to pass PATH_DOCKER_COMPOSE_INSTANCES as argument.
- Ensures service unit correctly invokes the Python script with target path.

See conversation: https://chatgpt.com/share/68a771d9-5fd8-800f-a410-08132699cc3a
2025-08-21 21:22:29 +02:00
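The argparse CLI described above might look roughly like this; the positional path maps onto PATH_DOCKER_COMPOSE_INSTANCES, while function names and the exact restart commands are assumptions:

```python
import argparse, os, subprocess

def restart_instances(base_path: str, only=None):
    """Hard-restart each compose project subdirectory under base_path,
    optionally limited to the names passed via --only."""
    for name in sorted(os.listdir(base_path)):
        project = os.path.join(base_path, name)
        if not os.path.isdir(project) or (only and name not in only):
            continue
        subprocess.run(["docker", "compose", "down"], cwd=project, check=True)
        subprocess.run(["docker", "compose", "up", "-d"], cwd=project, check=True)

def parse_args(argv=None):
    parser = argparse.ArgumentParser(
        description="Hard-restart docker compose instances")
    parser.add_argument("path",
                        help="instances directory (PATH_DOCKER_COMPOSE_INSTANCES)")
    parser.add_argument("--only", nargs="*", default=None,
                        help="restart only these subdirectories")
    return parser.parse_args(argv)
```

The systemd unit then invokes the script with the instances path as its argument, so the service file no longer hardcodes per-role behavior.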
e33944cda2 Solved service ignore parameter bugs 2025-08-21 21:04:21 +02:00
efa68cc1e0 sys-ctl: make service file generation deterministic and simplify ignore logic
- Added '| sort' to all service group lists and backup routine lists to ensure
  deterministic ordering and stable checksums across Ansible runs.
- Adjusted systemctl templates to use a single service variable
  ('SYS_SERVICE_BACKUP_RMT_2_LOC') instead of rejecting dynamic list entries,
  making the ignore logic simpler and more predictable.
- Fixed minor whitespace inconsistencies in Jinja templates to avoid
  unnecessary changes.

This change was made to prevent spurious 'changed' states in Ansible caused by
non-deterministic list order and to reduce complexity in service definitions.

See discussion: https://chatgpt.com/share/68a74c20-6300-800f-a44e-da43ae2f3dea
2025-08-21 18:43:17 +02:00
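Why `| sort` stabilizes checksums: rendering the same set of services in a different order produces different unit-file bytes, which Ansible reports as a change. A minimal illustration (the directive name is a placeholder, not the actual template content):

```python
import hashlib

def unit_checksum(services):
    """Render a toy unit fragment from a service list; sorting first makes
    two runs with differently ordered inputs byte-identical."""
    body = "\n".join(f"Wants={s}.service" for s in sorted(services))
    return hashlib.sha256(body.encode()).hexdigest()
```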
79e702a3ab web-svc-collabora: localize vars, adjust CSP, fix systemd perms; refactor role composition
- sys-service:
  - Set explicit ownership and permissions for generated unit files:
    owner=root, group=root, mode=0644. Prevents drift and makes idempotence
    predictable when handlers reload/refresh systemd.

- web-svc-collabora:
  - Move cmp-docker-proxy include into tasks/01_core.yml and run it
    before Nginx config generation. Use public: true only to initialize the
    proxy/compose context and docker_compose_flush_handlers: true to ensure
    timely handler execution.
  - Define role-local variables domain and http_port in vars/main.yml
    and use {{ domain }} for the Nginx server file path. These values MUST
    be defined locally because they cannot be reliably imported via
    public: true — other roles may override them later in the play, leading
    to leakage and nondeterministic behavior. Localizing avoids precedence
    conflicts without resorting to host-wide set_fact.
  - CSP adjusted: add server.security.flags.style-src.unsafe-inline: true
    to accommodate Collabora’s inline styles (requested as “csr” in notes).
  - Minor variable alignment/cleanup and TODO note for future refactor.

- Housekeeping:
  - Rename task title to reflect {{ domain }} usage.

Refs:
- Discussion and rationale in this chat https://chatgpt.com/share/68a731aa-d394-800f-9eb4-2499f45ed54b (2025-08-21, Europe/Berlin).
2025-08-21 16:48:37 +02:00
9180182d5b Optimized variables 2025-08-21 16:27:10 +02:00
535094d15d Added more update tasks for ESPOCRM config 2025-08-21 16:23:08 +02:00
658003f5b9 Added test user entry 2025-08-21 09:56:50 +02:00
3ff783df17 Updated mailu move docs 2025-08-21 09:49:36 +02:00
3df511aee9 Changed constructor order. Emails need to be defined before users 2025-08-20 18:54:44 +02:00
c27d16322b Optimized variables 2025-08-20 18:17:13 +02:00
7a6e273ea4 In between commit, updated matrix and optimized mailu 2025-08-20 17:51:17 +02:00
384beae7c1 Added task to update default email settings 2025-08-20 16:41:53 +02:00
ad7e61e8b1 Set default buffer level for proxy basic conf, which are necessary for OIDC login 2025-08-20 15:56:32 +02:00
fa46523433 Update trusted domains for matomo 2025-08-20 15:35:08 +02:00
f4a380d802 Optimized alarm and system handlers 2025-08-20 15:17:04 +02:00
42d6c1799b sys-service: add systemd_directive filter and refactor service template
Introduced custom filter plugin to render optional systemd directives, refactored template to loop over directives, and adjusted default vars (TimeoutStartSec, RuntimeMaxSec handling).

Details: see ChatGPT conversation
https://chatgpt.com/share/68a5a730-6344-800f-b9a3-dc62d5902e9b
2025-08-20 12:46:07 +02:00
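The commit above describes a filter plugin that renders optional systemd directives so the unit template can loop over them. A minimal sketch of such a plugin might look like this (illustrative; the function name matches the commit, everything else is assumed):

```python
# Hypothetical sketch of the systemd_directive filter: emit a
# "Name=value" unit-file line only when the value is actually set,
# so optional directives (TimeoutStartSec, RuntimeMaxSec, ...) can be
# looped over without leaving empty keys in the rendered unit.
def systemd_directive(value, name):
    """Return 'Name=value', or an empty string when value is unset."""
    if value is None or value is False or value == "":
        return ""
    return f"{name}={value}"

class FilterModule:
    """Standard Ansible filter-plugin entry point."""
    def filters(self):
        return {"systemd_directive": systemd_directive}
```

In a template the loop would then read something like `{{ item.value | systemd_directive(item.name) }}`, collapsing unset directives to nothing.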
8608d89653 Implemented correct template for collabora 2025-08-20 09:07:33 +02:00
a4f39ac732 Renamed webserver roles to more speakable names 2025-08-20 08:54:17 +02:00
9cfb8f3a60 Different optimizations for collabora 2025-08-20 08:34:12 +02:00
3e5344a46c Optimized Collabora CSP for Nextcloud 2025-08-20 07:03:02 +02:00
ec07d1a20b Added logic to start docker compose pull just once per directory 2025-08-20 07:02:27 +02:00
594d9417d1 handlers(docker): add once-per-directory docker compose pull with lockfile
- Introduced a new handler 'docker compose pull' that runs only once per
  {{ docker_compose.directories.instance }} directory by using a lock
  file under /run/ansible/compose-pull.
- Ensures idempotency by marking the task as changed only when a pull
  was actually executed.
- Restricted execution with 'when: MODE_UPDATE | bool'.
- Improves update workflow by avoiding redundant docker pulls during
  the same Ansible run.

Reference: ChatGPT discussion
https://chatgpt.com/share/68a55151-959c-800f-8b70-160ffe43e776
2025-08-20 06:42:49 +02:00
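The once-per-directory mechanic described above — a lock file under /run/ansible/compose-pull marking directories that were already pulled — can be sketched in Python (the actual handler is a shell task; names here are illustrative):

```python
import os

LOCK_DIR = "/run/ansible/compose-pull"  # lock location from the commit message

def pull_once(directory, run_pull, lock_dir=LOCK_DIR):
    """Run `run_pull(directory)` at most once per directory per run.

    A lock file named after the sanitized directory path is created
    atomically (O_CREAT | O_EXCL); if it already exists the pull is
    skipped. The boolean return value maps to the task's changed state.
    """
    os.makedirs(lock_dir, exist_ok=True)
    lock_file = os.path.join(lock_dir, directory.strip("/").replace("/", "-"))
    try:
        fd = os.open(lock_file, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # already pulled for this directory during this run
    os.close(fd)
    run_pull(directory)
    return True
```

Because /run is a tmpfs, the locks vanish on reboot, and clearing the directory resets the "once per run" state between Ansible runs.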
dc125e4843 Solved path bug 2025-08-20 06:18:52 +02:00
39a54294dd Moved update commands to nextcloud role 2025-08-20 06:07:33 +02:00
a57fe718de Optimized spacing 2025-08-20 05:49:35 +02:00
b6aec5fe33 Optimized features 2025-08-20 05:39:49 +02:00
de07d890dc Solved 'sys-ctl-bkp-docker-2-loc' bug 2025-08-20 05:25:24 +02:00
e27f355697 Solved tabulator bug 2025-08-20 05:02:16 +02:00
790762d397 Renamed some web apps to web services 2025-08-20 05:00:24 +02:00
4ce681e643 Add integration test: ensure roles including 'sys-service' define system_service_id
This test scans all roles for tasks including:
  - include_role:
      name: sys-service

If present, the role must define a non-empty 'system_service_id' in vars/main.yml.
Helps enforce consistency and prevent misconfiguration.

Ref: https://chatgpt.com/share/68a536e5-c384-800f-937a-f9d91249950c
2025-08-20 04:46:27 +02:00
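The check described above — every role that includes 'sys-service' must define a non-empty 'system_service_id' in vars/main.yml — could be sketched as a simple scanner (illustrative; the real test's structure and matching are assumptions):

```python
import re
from pathlib import Path

def roles_missing_service_id(roles_root):
    """Return names of roles under roles_root that include the
    'sys-service' role in their tasks but lack a non-empty
    system_service_id in vars/main.yml."""
    offenders = []
    for role in sorted(Path(roles_root).iterdir()):
        if not role.is_dir():
            continue
        tasks_dir = role / "tasks"
        tasks = " ".join(p.read_text() for p in tasks_dir.glob("*.yml")) \
            if tasks_dir.is_dir() else ""
        # crude textual match for: include_role: / name: sys-service
        if not re.search(r"include_role:\s*\n\s*name:\s*sys-service", tasks):
            continue
        vars_file = role / "vars" / "main.yml"
        text = vars_file.read_text() if vars_file.is_file() else ""
        if not re.search(r"^system_service_id:\s*\S+", text, re.M):
            offenders.append(role.name)
    return offenders
```

An empty result means every sys-service consumer is correctly configured; anything else fails the integration test.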
55cf3d0d8e Solved unit performance tests 2025-08-20 04:35:46 +02:00
2708b67751 Optimized webserver on failure 2025-08-20 04:12:42 +02:00
f477ee3731 Deactivated redis, moved version to correct place for web-svc-collabora 2025-08-20 03:40:37 +02:00
6d70f78989 fix(domain-filters): support dependency expansion via seed param
- Added missing 'Iterable' import in 'canonical_domains_map' to avoid NameError.
- Introduced 'seed' parameter so the filter can start traversal from current play apps
  while still emitting canonical domains for discovered dependencies (e.g. web-svc-collabora).
- Updated 01_constructor.yml to pass full 'applications' and a clean 'seed' list
  (using dict2items → key) instead of '.keys()' method calls, fixing integration
  test error: 'reference to application keys is invalid'.

This resolves issues where collabora domains were missing and integration tests failed.

Ref: https://chatgpt.com/share/68a51f9b-3924-800f-a41b-803d8dd10397
2025-08-20 03:07:14 +02:00
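The seeded traversal described above — start from the current play's apps but still emit canonical domains for discovered dependencies such as web-svc-collabora — amounts to a breadth-first walk. A rough sketch (the real filter's signature and data shapes differ; everything here is illustrative):

```python
from collections import deque

def canonical_domains_map(applications, seed, dependencies):
    """Walk the dependency graph starting from `seed` and collect each
    reachable application's canonical domains. `dependencies` maps an
    app name to the apps it pulls in."""
    seen, queue, result = set(), deque(seed), {}
    while queue:
        app = queue.popleft()
        if app in seen or app not in applications:
            continue
        seen.add(app)
        result[app] = applications[app].get("domains", [])
        queue.extend(dependencies.get(app, []))
    return result
```

This is why passing a clean seed list (via dict2items → key) matters: the traversal only ever starts from real application keys, while dependencies are discovered, not pre-listed.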
b867a52471 Refactor and extend role dependency resolution:
- Introduced module_utils/role_dependency_resolver.py with full support for include_role, import_role, meta dependencies, and run_after.
- Refactored cli/build/tree.py to use RoleDependencyResolver (added toggles for include/import/dependencies/run_after).
- Extended filter_plugins/canonical_domains_map.py with optional 'recursive' mode (ignores run_after by design).
- Updated roles/web-app-nextcloud to properly include Collabora dependency.
- Added comprehensive unittests under tests/unit/module_utils for RoleDependencyResolver.

Ref: https://chatgpt.com/share/68a519c8-8e54-800f-83c0-be38546620d9
2025-08-20 02:42:07 +02:00
78ee3e3c64 Deactivated on_failure for telegram and email 2025-08-20 01:20:06 +02:00
d7ece2a8c3 Optimized message 2025-08-20 01:03:07 +02:00
3794aa87b0 Optimized spacing 2025-08-20 01:02:29 +02:00
4cf996b1bb Removed old collabora 2025-08-20 01:02:11 +02:00
79517b2fe9 Optimized spacing 2025-08-20 01:01:32 +02:00
a84ee1240a Optimized collabora name 2025-08-20 01:00:51 +02:00
7019b307c5 Optimized collabora draft 2025-08-20 01:00:20 +02:00
838a8fc7a1 Solved svc-opt-ssd-hdd path bug 2025-08-19 21:50:55 +02:00
95aba805c0 Removed variable which leads to bugs in other contexts 2025-08-19 20:50:08 +02:00
0856c340c7 Removed unnecessary logic 2025-08-19 20:35:02 +02:00
b90a2f6c87 sys-ctl-alm-{email,telegram}: unescape instance names before alerts
Use `systemd-escape --unescape` to restore human-readable unit identifiers in
Telegram and Email alerts. Also ensure Telegram messages are URL-encoded and
Email status checks try both raw and escaped forms for robustness.

Fixes issue where slashes were shown as dashes in notifications.

Context: see ChatGPT conversation
https://chatgpt.com/share/68a4c171-db08-800f-8399-7e07f237a441
2025-08-19 20:25:15 +02:00
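The role shells out to the real `systemd-escape --unescape` binary; as a rough pure-Python equivalent of what that unescaping does (decode \xNN hex escapes, then map the remaining dashes back to slashes), consider:

```python
import re

_HEX_ESCAPE = re.compile(r"\\x[0-9a-fA-F]{2}")

def systemd_unescape(name):
    """Approximate `systemd-escape --unescape`: \\xNN sequences become
    their character, and plain dashes turn back into slashes (systemd
    escaping maps '/' to '-')."""
    out = []
    # split keeps the \xNN escapes as separate parts via the capture group
    for part in re.split(r"(\\x[0-9a-fA-F]{2})", name):
        if _HEX_ESCAPE.fullmatch(part):
            out.append(chr(int(part[2:], 16)))   # e.g. \x2d -> '-'
        else:
            out.append(part.replace("-", "/"))
    return "".join(out)
```

This is exactly the fix the commit targets: an instance name like `backup-docker-to-local` renders as `backup/docker/to/local` in the alert instead of showing slashes as dashes.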
98e045196b Removed cleanup service lock 2025-08-19 19:06:58 +02:00
a10dd402b8 refactor: improve service handling and introduce MODE_ASSERT
- Improved get_service_name filter plugin (clearer suffix handling, consistent var names).
- Added MODE_ASSERT flag to optionally execute validation/assertion tasks.
- Fixed systemd unit handling: consistent use of %I instead of %i, correct escaping of instance names.
- Unified on_failure behavior and alarm composer scripts.
- Cleaned up redundant logging, handlers, and debug config.
- Strengthened sys-service template resolution with assert (only active when MODE_ASSERT).
- Simplified timer and suffix handling with get_service_name filter.
- Hardened sensitive tasks with no_log.
- Added conditional asserts across roles (Keycloak, DNS, Mailu, Discourse, etc.).

These changes improve consistency, safety, and validation across the automation stack.

Conversation: https://chatgpt.com/share/68a4ae28-483c-800f-b2f7-f64c7124c274
2025-08-19 19:02:52 +02:00
6e538eabc8 Enhance tree builder: detect include_role dependencies from tasks/*.yml
- Added logic to scan each role’s tasks/*.yml files for include_role usage
- Supports:
  * loop/with_items with literal strings → adds each role
  * patterns with variables inside literals (e.g. svc-db-{{database_type}}) → expanded to glob and matched
  * pure variable-only names ({{var}}) → ignored
  * pure literal names → added directly
- Merges discovered dependencies under graphs["dependencies"]["include_role"]
- Added dedicated unit test covering looped includes, glob patterns, pure literals, and ignoring pure variables

See ChatGPT conversation (https://chatgpt.com/share/68a4ace0-7268-800f-bd32-b475c5c9ba1d) for context.
2025-08-19 19:00:03 +02:00
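The name-classification rules in the bullet list above can be sketched as a small helper (illustrative; the tree builder's real implementation and names are assumptions):

```python
import fnmatch
import re

# matches a Jinja variable reference like {{ database_type }}
VAR_RE = re.compile(r"\{\{\s*[\w.]+\s*\}\}")

def expand_include_name(name, all_roles):
    """Classify an include_role name:
    - pure literal       -> returned as-is
    - literal + variable -> variables become '*', matched as a glob
    - pure variable      -> ignored (cannot be resolved statically)
    """
    if not VAR_RE.search(name):
        return [name]
    pattern = VAR_RE.sub("*", name).strip()
    if pattern == "*":
        return []
    return sorted(r for r in all_roles if fnmatch.fnmatch(r, pattern))
```

So `svc-db-{{ database_type }}` expands to every `svc-db-*` role actually present, while a bare `{{ var }}` contributes nothing to the dependency graph.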
82cc24a7f5 Added reset condition for openresty 2025-08-19 17:48:02 +02:00
26b392ea76 refactor!: replace sys-systemctl with sys-service, add sys-daemon, and rename systemctl_* → system_service_* across repo
- Swap role includes: sys-systemctl → sys-service in all roles
- Rename variables everywhere: systemctl_* → system_service_* (incl. systemctl_id → system_service_id)
- Templates: ExecStart now uses {{ system_service_script_exec }}; add optional RuntimeMaxSec via SYS_SERVICE_DEFAULT_RUNTIME
- Move SYS_SERVICE defaults into roles/sys-service/defaults (remove SYS_SERVICE_ALL_ENABLED & SYS_SERVICE_DEFAULT_STATE from group_vars/07_services.yml)
- Tidy group_vars/all/08_timer.yml formatting
- Introduce roles/sys-daemon:
  - default manager timeouts (timeouts.conf)
  - optional purge of /etc/systemd/system.conf.d
  - validation via systemd-analyze verify
  - handlers for daemon-reload & daemon-reexec
- Refactor sys-timer to system_service_* variables (docs and templates updated)
- Move filter_plugins/filetype.py under sys-service
- Update meta/README to point to official systemd docs
- Touch many roles (backup/cleanup/health/repair/certs/nginx/csp/wireguard/ssd-hdd/keyboard/update-docker/alarm compose/email/telegram/etc.) to new naming

BREAKING CHANGE:
- Role path/name change: use `sys-service` instead of `sys-systemctl`
- All `systemctl_*` vars are now `system_service_*` (e.g., on_calendar, state, timer_enabled, script_exec, id)
- If you have custom templates, adopt RuntimeMaxSec and new variable names

Chat context: https://chatgpt.com/share/68a47568-312c-800f-af3f-e98575446327
2025-08-19 15:00:44 +02:00
b49fdc509e Refactor alarm compose service and systemctl templates
- Fixed bug where not both alarm services (email + telegram) were triggered.
- Removed direct OnFailure references for email and telegram,
  now handled by unified compose service.
- Introduced 01_core.yml in sys-ctl-alm-compose to structure
  role execution (subservices → core service → test run).
- Added configurable variables SYSTEMCTL_ALARM_COMPOSER_SUBSERVICES
  and SYSTEMCTL_ALARM_COMPOSER_DUMMY_MESSAGE.
- Replaced dedicated @.service template with generic systemctl template
  using systemctl_tpl_* variables for flexibility.
- Updated script.sh.j2 to collect exit codes and print clear errors.
- Fixed typos and streamlined vars in sys-systemctl.

See conversation: https://chatgpt.com/share/68a46172-7c3c-800f-a69c-0cb9edd6839f
2025-08-19 13:35:39 +02:00
b1e8339283 Added /bin/systemctl start {{ SYS_SERVICE_CLEANUP_BACKUPS_OLD }} 2025-08-19 12:56:25 +02:00
f5db786878 Restart and activate all services and timer when in debug mode 2025-08-19 12:20:19 +02:00
7ef20474a0 Renamed sys-ctl-cln-backups to sys-ctl-cln-bkps 2025-08-19 12:15:33 +02:00
83b9f697ab Encapsulated again to see output in journald 2025-08-19 11:25:07 +02:00
dd7b5e844c removed /bin/sh -c encapsulation and solved wrong --ignore names 2025-08-19 11:15:36 +02:00
da01305cac Replaced {{ systemctl_id | get_service_script_path( by systemctl_script_exec 2025-08-19 10:56:46 +02:00
1082caddae refactor(sys-ctl-alm-compose, sys-timer-cln-bkps):
- update alarm compose unit to run email/telegram notifiers independently via multiple ExecStart lines
- ensure cleanup backup dependencies are included before timer setup with handler flush
conversation: https://chatgpt.com/share/68a43429-c0cc-800f-9cc9-9a5ae258dc50
2025-08-19 10:22:38 +02:00
242347878d Moved email host and domain SPOT to SYSTEM_EMAIL constant 2025-08-19 10:00:35 +02:00
f46aabe884 Moved healthcheck to the end so that it is setup after email configuration 2025-08-19 09:46:12 +02:00
d3cc187c3b Made System Email Variables UPPER 2025-08-19 09:34:18 +02:00
0a4b9bc8e4 Generated service names with function 2025-08-19 02:01:15 +02:00
2887e54cca Solved path bug 2025-08-19 01:48:43 +02:00
630fd43382 refactor(services): unify service/timer runtime control and cleanup handling
- Introduce SYS_SERVICE_ALL_ENABLED and SYS_TIMER_ALL_ENABLED runtime flags
- Add SYS_SERVICE_DEFAULT_STATE for consistent default handling
- Ensure all on-failure service names use lowercase software_name
- Load sys-svc-cln-anon-volumes role during Docker cleanup
- Allow forced service refresh when SYS_SERVICE_ALL_ENABLED is true
- Replace ACTIVATE_ALL_TIMERS with SYS_TIMER_ALL_ENABLED
- Use SYS_SERVICE_DEFAULT_STATE in sys-systemctl vars
- Remove redundant MIG build job fail check

Related to service/timer process control refactoring.
2025-08-19 01:27:37 +02:00
3114a7b586 solved missing vars bug 2025-08-19 01:01:09 +02:00
34d771266a Solved path bug 2025-08-19 00:46:47 +02:00
73b7d2728e Solved timer bug 2025-08-19 00:33:00 +02:00
fc4df980c5 Solved empty entry bug 2025-08-18 23:54:23 +02:00
763b43b44c Implemented dynamic script path to sys-ctl-cln-disc-space 2025-08-18 23:50:28 +02:00
db860e6ae3 Adapted load order 2025-08-18 23:47:14 +02:00
2ba486902f Deactivated file copying for sys-ctl-cln-faild-bkps 2025-08-18 23:34:06 +02:00
7848226f83 Optimized service configuration for alerts 2025-08-18 23:28:41 +02:00
185f37af52 Refactor systemctl service handling with @ support
- Unified variable naming: system_service_id → systemctl_id
- Added automatic removal of trailing '@' for role directory resolution
- Improved first_found search: prefer target role, fallback to sys-systemctl defaults
- Split template resolution logic to avoid undefined variable errors
- Added assertion in sys-timer to forbid '@' in systemctl_id
- Corrected default systemctl.service.j2 template description
- Cleaned up path handling and script directory generation

Context: conversation about fixing template resolution and @ handling
https://chatgpt.com/share/68a39994-1bb0-800f-a219-109e643c3efb
2025-08-18 23:22:46 +02:00
b9461026a6 refactor: improve get_service_name suffix handling and handler usage
- Updated filter_plugins/get_service_name.py:
  * Default suffix handling: auto-select .service (no '@') or .timer (with '@')
  * Explicit False disables suffix entirely
  * Explicit string suffix still supported
- Updated sys-systemctl handler to use new filter instead of SYS_SERVICE_SUFFIX
- Extended unit tests to cover new suffix behavior

Ref: https://chat.openai.com/share/8c2de9e6-daa0-44dd-ae13-d7a7d8d8b6d9
2025-08-18 22:36:31 +02:00
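The suffix behavior listed above could be re-implemented as follows (an illustrative sketch of the rules the commit describes; the real filter lives in filter_plugins/get_service_name.py and may differ in detail):

```python
def get_service_name(name, suffix=None):
    """Suffix rules per the commit message:
    - default (None): '.timer' for '@'-suffixed template units,
      otherwise '.service'
    - suffix=False:   no suffix at all
    - suffix="...":   appended verbatim
    Names are normalized to lowercase.
    """
    name = name.lower()
    if suffix is False:
        return name
    if isinstance(suffix, str):
        return name + suffix
    return name + (".timer" if name.endswith("@") else ".service")
```

The `suffix is False` check (rather than falsiness) keeps the empty string and None distinguishable from an explicit opt-out.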
bf63e01b98 refactor(systemd-services): migrate SYS_SERVICE_SUFFIX usage to get_service_name filter
Replaced all hardcoded service name concatenations with the new get_service_name filter.
This ensures consistency, proper lowercase formatting, and correct handling of '@' suffixed units.

Added unittests for the filter (normal, custom suffix, '@'-units, and lowercase normalization).

Context: see ChatGPT discussion https://chatgpt.com/share/68a38beb-b9bc-800f-b7ed-cdd2b64b2604
2025-08-18 22:24:33 +02:00
4a600ac531 Added get_service_name 2025-08-18 22:10:52 +02:00
dc0bb555c1 Added another group_names validation 2025-08-18 21:37:07 +02:00
5adce08aea Optimized variable names 2025-08-18 21:26:46 +02:00
2569abc0be Refactor systemctl services and timers
- Unified service templates into generic systemctl templates
- Introduced reusable filter plugins for script path handling
- Updated path variables and service/timer definitions
- Migrated roles (backup, cleanup, repair, etc.) to use systemctl role
- Added sys-daemon role for core systemd cleanup
- Simplified timer handling via sys-timer role

Note: This is a large refactor and some errors may still exist. Further testing and adjustments will be needed.
2025-08-18 21:22:16 +02:00
3a839cfe37 Refactor systemctl services and categories due to alarm bugs
This commit restructures systemctl service definitions and category mappings.

Motivation: Alarm-related bugs revealed inconsistencies in service and role handling.

Preparation step: lays the groundwork for fixing the alarm issues by aligning categories, roles, and service templates.
2025-08-18 13:35:43 +02:00
29f50da226 Add custom Ansible filter plugin get_category_entries
This commit introduces a new Ansible filter plugin named
'get_category_entries', which returns all role names under the
roles/ directory that start with a given prefix.

Additionally, unit tests (unittest framework) have been added under
tests/unit/filterplugins/ to ensure correct behavior, including:

- Returns empty list when roles/ directory is missing
- Correctly filters and sorts by prefix
- Ignores non-directory entries
- Supports custom roles_path argument
- Returns all roles when prefix is empty

Reference: https://chatgpt.com/share/68a2f1ab-1fe8-800f-b22a-28c1c95802c2
2025-08-18 11:27:26 +02:00
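Matching the behaviors listed above, a minimal version of the filter might be (illustrative sketch; the plugin's actual code may differ):

```python
import os

def get_category_entries(prefix, roles_path="roles"):
    """Return the sorted names of directories under roles_path that
    start with prefix; an empty list when roles_path does not exist.
    Non-directory entries are ignored."""
    if not os.path.isdir(roles_path):
        return []
    return sorted(
        entry for entry in os.listdir(roles_path)
        if entry.startswith(prefix)
        and os.path.isdir(os.path.join(roles_path, entry))
    )
```

An empty prefix trivially matches every role, which covers the last test case in the list.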
a5941763ff refactor: normalize Jinja2 spacing in volume paths and add async support in backup task
- Standardized spacing in {{ docker_compose.directories.volumes }} across multiple roles
- Added async and poll support to sys-bkp-docker-2-loc database seeding and file permission tasks
- Moved Installation.md for web-app-matrix into docs/ for better structure
2025-08-18 01:05:01 +02:00
3d7bbabd7b mailu: enable central database, improve token creation task, and add migration guide
- Enabled central_database in Mailu config
- Improved API token creation task:
  * use curl -f to fail on HTTP errors
  * added explicit failed_when and changed_when conditions
- Adjusted docker-compose template spacing for readability
- Made logging level configurable (DEBUG when MODE_DEBUG is set)
- Added new documentation Move_Domain.md explaining safe procedure for migrating mailboxes to a new domain
2025-08-18 01:03:40 +02:00
e4b8c97e03 Solved port-ui keycloak url bug and optimized var names 2025-08-18 00:46:11 +02:00
29df95ed82 Optimized RBAC variables and async in keycloak 2025-08-18 00:15:41 +02:00
6443771d93 Optimized Mailu docs 2025-08-18 00:14:58 +02:00
d1cd87c843 Fix RBAC groups handling and refactor Keycloak role
- Fixed incorrect handling of RBAC group configuration (moved from OIDC claims into dedicated RBAC variable set).
- Unified RBAC group usage across applications (LAM, pgAdmin, phpLDAPadmin, phpMyAdmin, YOURLS).
- Replaced old 'KEYCLOAK_OIDC_RBAC_SCOPE_NAME' with dedicated 'KEYCLOAK_RBAC_GROUP_*' variables.
- Updated OAuth2 Proxy configuration to use 'RBAC.GROUP.CLAIM'.
- Refactored Keycloak role task structure:
  * Renamed and reorganized task files for clarity ('_update.yml', '02_cleanup.yml', etc.).
  * Introduced meta and dependency handling separation.
- Cleaned up Keycloak config defaults and recaptcha placeholders.
2025-08-17 23:27:01 +02:00
5f0762e4f6 Finished implementation of oauth2 import 2025-08-17 21:59:58 +02:00
5642793f4a Added parameter to skip dependency loading to speed up debugging 2025-08-17 21:44:15 +02:00
7d0502ebc5 feat(keycloak): implement SPOT with Realm
Replace 01_import.yml with 01_initialize.yml (KEYCLOAK_HOST_IMPORT_DIR)
Add generic 02_update.yml (kcadm updater for clients/components)
- Resolve ID → read current → merge (kc_merge_path optional)
- Preserve immutable fields; support kc_force_attrs
Update tasks/main.yml:
- Readiness via KEYCLOAK_MASTER_REALM_URL; kcadm login
- Merge LDAP component config from Realm when KEYCLOAK_LDAP_ENABLED
- Update client settings incl. frontchannel.logout.url
realm.json.j2: include ldap.json in UserStorageProvider
ldap.json.j2: use KEYCLOAK_LDAP_* vars for bindDn/credential/connectionUrl
vars/main.yml: add KEYCLOAK_* URLs/dirs and KEYCLOAK_DICTIONARY_REALM(_RAW)
docker-compose.yml.j2: mount KEYCLOAK_HOST_IMPORT_DIR
Cleanup: remove 02_update_client_redirects.yml, 03_update-ldap-bind.yml, 04_ssh_public_key.yml; drop obsolete config flag; formatting

Note: redirectUris/webOrigins ordering may still cause changed=true; consider sorting for stability in a follow-up.
2025-08-17 14:27:33 +02:00
20c8d46f54 Keycloak import templates cleanup
- Removed all static 'id' fields from realm.json.j2, ldap.json.j2, and client.json.j2
- Replaced 'desktop-secret' with correct 'client-secret' authenticator type
- Standardized Jinja filters to use 'to_json' consistently
- Corrected defaultClientScopes entry from 'web-app-origins' to built-in 'web-origins'
- Verified LDAP mapper definitions and optional realm role mapping
- Ensured realm.json.j2 contains only required scopes

References: Chat with ChatGPT (2025-08-17)
https://chatgpt.com/share/68a1aaae-1b04-800f-aa8d-8a0ef6d33cba
2025-08-17 12:11:14 +02:00
a524c52f89 Created own ldap.json.j2 for better readability in keycloak 2025-08-17 11:49:50 +02:00
5c9ca20e04 Optimized keycloak variables 2025-08-17 11:40:15 +02:00
bfe18dd83c Refactor Keycloak role:
- Replace KEYCLOAK_KCADM_PATH with KEYCLOAK_EXEC_KCADM consistently
- Externalize client.json to separate Jinja2 template and include it in realm.json
- Simplify LDAP bind update to use explicit KEYCLOAK_LDAP_* vars
- Add async/poll support for long-running kcadm updates
- Restructure vars/main.yml: clearer grouping (General, Docker, Server, Update, LDAP, API)
- Compute redirectUris/webOrigins centrally in vars
- Align post.logout.redirect.uris handling with playbook

Conversation: https://chatgpt.com/share/68a1a11f-f8ac-800f-bada-cdc99a4fa1bf
2025-08-17 11:30:33 +02:00
0a83f3159a Updated keycloak variables 2025-08-17 10:47:40 +02:00
fb7b3a3c8e Added setting of frontchannel.logout.url for keycloak 2025-08-17 10:38:25 +02:00
42f9ebad34 Solved escaping bug 2025-08-17 09:35:19 +02:00
33b2d3f582 Optimized docker2local variables and constants 2025-08-17 09:26:46 +02:00
14e868a644 Fix OIDC issuer URL concatenation for Mastodon bug
- Removed trailing slash in '_oidc_client_issuer_url' to avoid issuer mismatch
- Use '.rstrip('/')' to normalize '_oidc_url'
- Switched to '~' concatenation instead of inline slashes for all OIDC endpoints
- Ensures that Mastodon and other OIDC clients match the issuer from Keycloak discovery

Change motivated by Mastodon issuer mismatch bug (OpenIDConnect::Discovery::DiscoveryFailed).
See related discussion: https://chatgpt.com/share/68a17d3c-c980-800f-934c-d56955b45f81
2025-08-17 09:02:38 +02:00
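The normalization the commit describes — strip the trailing slash, then concatenate path segments explicitly so the issuer matches Keycloak's discovery document byte-for-byte — can be sketched like this (function and variable names are illustrative, not the play's real vars):

```python
def oidc_endpoints(oidc_url, realm):
    """Build issuer and endpoint URLs without double slashes, so OIDC
    clients (e.g. Mastodon) see exactly the issuer Keycloak advertises."""
    issuer = oidc_url.rstrip("/") + "/realms/" + realm
    return {
        "issuer": issuer,
        "authorization_endpoint": issuer + "/protocol/openid-connect/auth",
        "token_endpoint": issuer + "/protocol/openid-connect/token",
    }
```

Without the rstrip, a configured base of `https://auth.example.org/` yields `...org//realms/...`, and strict issuer comparison in the client fails with exactly the DiscoveryFailed error mentioned above.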
2a1a956739 feat(web-opt-rdr-www): split flavors into edge (Cloudflare redirect rule) and origin (Nginx redirect) with dynamic selection via prefered_flavor 2025-08-17 01:29:37 +02:00
bd2dde3af6 refactor: replace srv-web-7-7-dns-records with sys-dns-cloudflare-records
- removed obsolete role `srv-web-7-7-dns-records` (README, meta, tasks)
- updated Gitea role to use `sys-dns-cloudflare-records` with explicit record vars
- updated web-opt-rdr-www role to use new DNS role with zone detection (`to_zone`)
- added REDIRECT_WWW_FLAVOR var to support "edge" flavor selection
2025-08-16 23:52:46 +02:00
1126765da2 Fix variable definition test to detect set_fact and ansible.builtin.set_fact (both block and inline forms)
- Support fully qualified ansible.builtin.set_fact
- Parse inline set_fact mappings (e.g. set_fact: { a: 1, b: 2 })
- Continue scanning inside vars/set_fact blocks for Jinja {% set %}, {% for %}, and {% macro %}
- Ensures variables defined by set_fact are correctly recognized as defined
2025-08-16 23:51:27 +02:00
2620ee088e refactor(dns): unify Cloudflare + Hetzner handling across roles
- replaced CERTBOT_DNS_API_TOKEN with CLOUDFLARE_API_TOKEN everywhere
- introduced generic sys-dns-cloudflare-records role for managing DNS records
- added sys-dns-hetzner-rdns role with both Cloud (hcloud) and Robot API flavors
- updated Mailu role to:
  - generate DKIM before DNS setup
  - delegate DNS + rDNS records to the new generic roles
- removed legacy per-role Cloudflare vars (MAILU_CLOUDFLARE_API_TOKEN)
- extended group vars with HOSTING_PROVIDER for rDNS flavor decision
- added hetzner.hcloud collection to requirements

This consolidates DNS management into reusable roles,
supports both Cloudflare and Hetzner providers,
and standardizes variable naming across the project.
2025-08-16 21:43:01 +02:00
838a55ea94 Solved realm bug which appeared due to refactoring 2025-08-16 18:38:22 +02:00
1b26f1da8d Deactivated IP6 for Mailu 2025-08-16 18:17:09 +02:00
43362e1694 Optimized sys-hlth-csp performance 2025-08-16 18:03:44 +02:00
14d3f65a70 Included docker compose handler flush for mailu 2025-08-16 18:02:40 +02:00
b8ccd50ab2 Added async and logs 2025-08-16 17:29:16 +02:00
4a39cc90c0 Solved variable bugs in sys-svc-cert-sync-docker 2025-08-16 17:27:56 +02:00
0de26fa6c7 Solved bug caused by the difference between the Mailu domain and hostname; also refactored along the way while tracking it down 2025-08-16 14:29:07 +02:00
1bed83078e Added no_logs, asyncs, and optimized listmonk variable names 2025-08-16 02:00:13 +02:00
7ffd79ebd9 Added no_logs to mailu 2025-08-16 01:49:48 +02:00
2b7950920c Added no_logs 2025-08-16 01:41:37 +02:00
f0b323afee Added auto snippet for webserver injection 2025-08-16 01:31:49 +02:00
eadcb62f2a Added web-svc-logout as dependency for keycloak 2025-08-16 00:05:33 +02:00
cc2c1dc730 Renamed injection services 2025-08-16 00:01:46 +02:00
3b4821f7e7 Solved missing logout injection bug and refactored srv-web-7-7-inj-compose 2025-08-15 23:55:19 +02:00
5b64b47754 Added no_log 2025-08-15 23:18:44 +02:00
cb2b9462e1 Removed default 2025-08-15 21:56:20 +02:00
03564b34bb Optimized reset routine for docker images and specially discourse 2025-08-15 21:35:45 +02:00
e3b09e7f1a Refactoring of discourse role during debugging 2025-08-15 20:06:56 +02:00
3adb08fc68 Prevent exposition of applications credentials 2025-08-15 20:06:01 +02:00
e9a41bd40c Added deletion of containers to reset routine 2025-08-15 20:05:05 +02:00
cb539b038c Marked as not changed 2025-08-15 19:00:03 +02:00
3ac9bd9f90 Optimized variable typos 2025-08-15 18:43:42 +02:00
85a2f4b3d2 Solved matrix federation port bug 2025-08-15 18:37:18 +02:00
012426cf3b Added more matrix constants for easier debugging and readability 2025-08-15 18:15:58 +02:00
6c966bce2e Added health check and restart policy to openresty 2025-08-15 17:59:09 +02:00
3587531bda Removed unnecessary wait_for logic from mig 2025-08-15 15:45:20 +02:00
411a1f8931 Optimized LDAP_DN_BASE for hostname 2025-08-15 15:31:38 +02:00
cc51629337 Added spacing between {{}} 2025-08-15 15:21:48 +02:00
022800425d THE HUGE REFACTORING CALENDAR WEEK 33; Optimized Matrix and during this updated variables, implemented better reset and cleanup mode handling, and solved some initial setup bugs 2025-08-15 15:15:48 +02:00
0228014d34 Replaced .infinito.service and .infinito.timer by SOFTWARE_NAME suffix, optimized LICENSE link and update OIDC Realm and ID conf 2025-08-14 14:39:18 +02:00
1b638c366e Introduced variable SOFTWARE_NAME, to make better visible when software components are used. Will be relevant for OIDC 2025-08-14 12:49:06 +02:00
5c90c252d0 Optimized typos 2025-08-14 12:32:21 +02:00
4a65a254ae replaced port-ui-desktop with desktop to make it more speakable 2025-08-14 11:45:08 +02:00
5e00deea19 Implemented desktop csp policies 2025-08-14 11:40:09 +02:00
bf7b24c3ee Implemented get_app_conf 2025-08-14 11:14:15 +02:00
85924ab3c5 Optimized openproject csp 2025-08-14 10:59:19 +02:00
ac293c90f4 Optimized links, description and docs 2025-08-14 08:45:01 +02:00
e0f35c4bbd Added todos 2025-08-14 08:20:29 +02:00
989bee9522 Merged hp spectre and msi 2025-08-14 08:16:55 +02:00
2f12d8ea83 Added handler for discourse build 2025-08-14 00:27:18 +02:00
58620f6695 Added async for DNS Records creation 2025-08-14 00:23:42 +02:00
abc064fa56 Added async for openproject settings 2025-08-14 00:07:09 +02:00
7f42462514 Fixed reload button bug 2025-08-13 23:50:35 +02:00
41cd6b7702 Replaced get_domain with get_url 2025-08-13 23:33:49 +02:00
a40d48bb03 Refactor srv-web-7-7-inj-port-ui-desktop to use CDN-served JS file with inline initializer
- Added vars/main.yml to define iframe-handler.js file name and destination
- Implemented 01_deploy.yml to deploy iframe-handler.js to CDN and set mtime-based version fact
- Split original iframe logic into:
  • iframe-handler.js (full logic, served from CDN)
  • iframe-init_one_liner.js.j2 (small inline bootstrap, CSP-hashed)
- Updated head_sub.j2 to load script from CDN instead of embedding full code
- Added body_sub.j2 for inline init code
- Updated iframe-handler.js.j2 with initIframeHandler() function and global exposure
- Activated role earlier in inj-compose with public: true so vars are available for templates
- Included 'port-ui-desktop' in body_snippets loop in location.lua.j2
- Disabled 'port-ui-desktop' feature in web-svc-cdn config by default

https://chatgpt.com/share/689d03a8-4c28-800f-8b06-58ce2807b075
2025-08-13 23:29:32 +02:00
2fba32d384 Solved listmonk path bug 2025-08-13 22:39:43 +02:00
f2a765d69a Removed unused ansible matrix role 2025-08-13 22:01:09 +02:00
c729edb525 Refactor async task handling
- Standardize async/poll usage with 'ASYNC_ENABLED | bool'
- Add async/poll parameters to Cloudflare, Nginx, Mailu, MIG, Nextcloud, and OpenLDAP tasks
- Update async configuration in 'group_vars/all/00_general.yml' to ensure boolean evaluation
- Allow CAA, cache, and DNS tasks to run asynchronously when enabled

https://chatgpt.com/share/689cd8cc-7fbc-800f-bd06-a667561573bf
2025-08-13 21:56:26 +02:00
597e9d5222 Refactor async execution handling across LDAP and Nextcloud roles
- Introduce global async configuration in group_vars/all/00_general.yml:
  - ASYNC_ENABLED (disabled in debug mode)
  - ASYNC_TIME (default 300s, omitted if async disabled)
  - ASYNC_POLL (0 for async fire-and-forget, 10 for sync mode)
- Replace hardcoded async/poll values with global vars in:
  - svc-db-openldap (03_users.yml, 04_update.yml)
  - web-app-mig (02_build_data.yml)
  - web-app-nextcloud (03_admin.yml, 04_system_config.yml, 05_plugin.yml,
    06_plugin_routines.yml, 07_plugin_enable_and_configure.yml)
- Guard changed_when and failed_when conditions to only evaluate in synchronous
  mode to avoid accessing undefined rc/stdout/stderr in async runs

  https://chatgpt.com/share/689cd8cc-7fbc-800f-bd06-a667561573bf
2025-08-13 20:26:40 +02:00
db0e030900 Renamed general and mode constants and implemented a check to verify that constants are just defined ones over the whole repository 2025-08-13 19:11:14 +02:00
004507e233 Optimized handler flushing 2025-08-13 18:17:05 +02:00
e2014b9b59 nextcloud(role): remove async → use batched shell; more robust changed_when/failed_when; fix quoting; refactor plugin routines; clean up vars
• 02_add_missing_indices.yml: switched to shell (+ansible_command_timeout), removed async/poll.

• 04_system_config.yml: batch OCC calls (set -euo pipefail, /bin/bash), safer quoting, change detection via ' set to '.

• 05_plugin.yml: disable task with stricter failed_when/changed_when (combine stdout+stderr).

• 06_plugin_routines.yml: disable incompatible plugins in a single batch; no async_status; robust changed_when.

• 07_plugin_enable_and_configure.yml: batch config:app:set, safe quoting, clear changed_when/failed_when.

• config/main.yml & vars/main.yml: removed performance.async.wait_for and nextcloud_wait_for_async_enabled.
2025-08-13 18:15:50 +02:00
567b1365c0 Nextcloud: async overhaul & task refactor (conditional wait, faster polling)
• Add config.performance.async.wait_for and expose as nextcloud_wait_for_async_enabled to toggle waiting for async jobs.

• Split system/admin/index maintenance into separate tasks: 02_add_missing_indices.yml, 03_admin.yml, 04_system_config.yml.

• Refactor plugin flow: rename 02_plugin→05_plugin, 03_plugin_routines→06_plugin_routines, 04_plugin_enable_and_configure→07_plugin_enable_and_configure; remove old 03_plugin_routines and 05_system.

• Harden async handling: filter async_status loops by ansible_job_id; conditionally wait only when nextcloud_wait_for_async_enabled; reduce delay to 1s.

• Reorder main.yml to run system steps before plugin setup; keep handlers flush earlier.

• env.j2: simplify get_app_conf lookups (drop extra True flag).

• vars/main.yml: add nextcloud_host_nginx_path and nextcloud_wait_for_async_enabled.

https://chatgpt.com/share/689c9d4a-1748-800f-b490-06a5a48dd831
2025-08-13 16:13:00 +02:00
e99fa77b91 Optimized docker handlers for espocrm and wordpress 2025-08-13 13:34:12 +02:00
80dad1a5ed Removed proxy_extra_configuration fact 2025-08-13 06:32:52 +02:00
03290eafe1 feat(proxy,bigbluebutton): use parameterized HTML location template & add build retry
- proxy(html.conf.j2):
  * Make proxy_pass more robust (strip '=', '^~' prefixes; ignore @/~ match locations)
  * Switch WS header to $connection_upgrade
  * Unify timeouts (proxy_connect_timeout 5s)
  * Lua optional: include only when proxy_lua_enabled=true; unset Accept-Encoding only then
  * Buffering via flag: proxy_buffering/proxy_request_buffering 'on' with Lua, otherwise 'off'
- proxy(media.conf.j2): minor formatting/spacing fix
- inj-css(head_sub.j2): consistent spacing for global_css_version
- bigbluebutton(tasks/main.yml):
  * Render HTML location block once before include_role (location='^~ /html5client', OAuth2/Lua disabled)
  * Pass rendered snippet via proxy_extra_configuration to the vHost
  * Cleanup afterwards: proxy_extra_configuration = undef()
- docker-compose(handlers):
  * Build with retry: if 'docker compose build' fails -> retry with '--no-cache --pull'
  * Enable BuildKit (DOCKER_BUILDKIT=1, COMPOSE_DOCKER_CLI_BUILD=1)
- vars: trailing newline / minor formatting

Motivation:
- BBB HTML5 client (^~ /html5client) needs a separate location without Lua/buffering.
- More resilient CI/CD builds via automatic no-cache retry.
- Cleaner headers/proxy defaults and fewer side effects.

Files:
- roles/docker-compose/handlers/main.yml
- roles/srv-proxy-7-4-core/templates/location/html.conf.j2
- roles/srv-proxy-7-4-core/templates/location/media.conf.j2
- roles/srv-web-7-7-inj-css/templates/head_sub.j2
- roles/web-app-bigbluebutton/tasks/main.yml
- roles/web-app-bigbluebutton/vars/main.yml
2025-08-13 06:01:50 +02:00
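The build-with-retry handler could be sketched in shell roughly as follows (the wrapper function and its name are assumptions; the real logic lives in roles/docker-compose/handlers/main.yml):

```shell
#!/bin/sh
# Enable BuildKit for the engine and the compose CLI (as in the commit).
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1

# Run a build command; on failure, retry once with --no-cache --pull.
build_with_retry() {
  if ! "$@"; then
    echo "build failed, retrying with --no-cache --pull" >&2
    "$@" --no-cache --pull
  fi
}

# Example (assumed invocation):
# build_with_retry docker compose build
```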
58c64bd7c6 Placed docker compose flush more specific 2025-08-13 03:53:13 +02:00
e497c001d6 keycloak: robust LDAP bind and connectionUrl update via kcadm (argv + JSON); strict ldap.*; idempotent
Switch to command:argv to avoid shell quoting and argument splitting issues.

Pass -s config values as JSON arrays via to_json, fixing previous errors: Cannot parse the JSON / failed at splitting arguments.

Also reconcile config.connectionUrl from ldap.server.uri.

Source desired values strictly from ldap.* (no computed defaults) and assert their presence.

Keep operation idempotent by reading current values and updating only on change.

Minor refactor: build reusable kcadm_argv_base and expand client state extraction.

Touch: roles/web-app-keycloak/tasks/03_update-ldap-bind.yml

https://chatgpt.com/share/689bea84-7188-800f-ba51-830a0735f24c
2025-08-13 03:30:14 +02:00
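In the shape described above, such a task might look like this (the kcadm path, realm, component id, and variable names are assumptions, not the role's actual code):

```yaml
- name: Update LDAP bind settings via kcadm
  command:
    argv:
      - "{{ keycloak_kcadm_bin }}"   # e.g. /opt/keycloak/bin/kcadm.sh (assumed)
      - update
      - "components/{{ keycloak_ldap_component_id }}"
      - -r
      - "{{ keycloak_realm }}"
      # -s values are passed as JSON arrays via to_json, avoiding shell quoting:
      - -s
      - "config.bindDn={{ [keycloak_ldap_bind_dn] | to_json }}"
      - -s
      - "config.connectionUrl={{ [keycloak_ldap_server_uri] | to_json }}"
  when: keycloak_ldap_update_needed | bool   # update only on change, for idempotence
```

Using `command` with `argv` means every element is passed as a single argument, so the JSON brackets never hit a shell parser.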
4fa1c6cfbd ansible: quote file modes; keycloak: robust LDAP bind update + config cleanup
Highlights
- Quote all file modes as strings ("0755"/"0770") across multiple roles to avoid YAML octal quirks and improve portability.
- Keycloak: introduce actions.{import_realm,update_ldap_bind} feature flags and wire them via vars/config.
- Implement idempotent LDAP bind updater (tasks/03_update-ldap-bind.yml):
  * kcadm login with no_log protection,
  * fetch LDAP UserStorage component by name,
  * compare current bindDn/bindCredential and update only when changed.
- Keycloak realm import template: keep providerId="ldap" and set name from keycloak_ldap_component_name.
- Centralize Keycloak readiness check in tasks/main.yml; remove duplicate waits from 02_update_client_redirects.yml and 04_ssh_public_key.yml.
- 01_import.yml: fix typo (keycloak), quote modes, tidy spacing, and replace Jinja-in-Jinja fileglob with concatenation.
- 02_update_client_redirects.yml: correct assert fail_msg filename; keep login-first flow.
- Minor template/vars tidy-ups (spacing, comments, consistent variable usage).

Files touched (excerpt)
- roles/*/*: replace 0755/0770 → "0755"/"0770"
- roles/web-app-keycloak/config/main.yml: add actions map
- roles/web-app-keycloak/vars/main.yml: unify Keycloak vars and feature flags
- roles/web-app-keycloak/tasks/{01_import,02_update_client_redirects,03_update-ldap-bind,04_ssh_public_key,main}.yml
- roles/web-app-keycloak/templates/{docker-compose.yml.j2,import/realm.json.j2}

https://chatgpt.com/share/689bda16-b138-800f-8258-e13f6d7d8239
2025-08-13 02:20:38 +02:00
53770f5308 Optimized flush order to solve yourls oauth2 proxy bug 2025-08-13 01:03:31 +02:00
13d8663796 Added version and repository to bbb 2025-08-13 00:35:14 +02:00
f31565e4c5 Optimized URLS 2025-08-13 00:33:47 +02:00
a4d8de2152 feat(web-app-espocrm): ensure 'siteUrl' is updated to canonical domain on deploy
Use EspoCRM's ConfigWriter API to patch the 'siteUrl' setting during updates.
This makes the process idempotent, avoids brittle regex replacements, and
ensures the running configuration stays in sync with the deployment domain.

https://chatgpt.com/share/689bb860-ba90-800f-adb5-4fa5a992b267
2025-08-12 23:56:19 +02:00
c744ebe3f9 feat(web-app-wordpress): add idempotent single-site domain update via WP-CLI
- New task 04_update_domain.yml updates home/siteurl only when needed
- DB-wide search-replace (old → new), GUID-safe, precise, tables-with-prefix
- Normalizes http→https, strips trailing slashes, then flushes cache/rewrites
- Guarded by is_multisite()==0; multisite untouched
- Wired into main.yml with auto target URL via domains|get_url

Fixes post-domain-change mixed/CSP issues due to hard-coded old URLs.

https://chatgpt.com/share/689bac2d-3610-800f-b6f0-41dc79d13a14
2025-08-12 23:03:59 +02:00
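A minimal sketch of such a WP-CLI driven task (service name, variable names, and the multisite guard wiring are assumptions):

```yaml
- name: Update WordPress domain via WP-CLI (single-site only)
  command:
    argv:
      - docker
      - compose
      - exec
      - -T
      - wordpress
      - wp
      - search-replace
      - "{{ wordpress_old_url }}"
      - "{{ wordpress_target_url }}"
      - --skip-columns=guid            # GUID-safe
      - --all-tables-with-prefix       # precise: only tables with the WP prefix
  when: (wordpress_is_multisite | int) == 0
```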
ce029881d0 cmp-rdbms: make vars resilient when database_type is empty
Fix a templating crash during docker-compose.yml rendering when a role sets database_type to an empty string or does not expose it (e.g., svc-prx-openresty). Previously _database_id resolved to 'svc-db-' and get_app_conf attempted to read 'docker.services..name', raising AppConfigKeyError: Application ID 'svc-db-' not found.

Changes:
- Introduce _dbtype = (database_type | d('') | trim) and build _database_id only if _dbtype is non-empty.
- Guard central DB lookups: use get_app_conf(..., strict=False, default='') and only when _dbtype is set.
- Default _database_consumer_entity_name to get_entity_name of database_application_id or fallback to application_id.
- Only resolve database_port when _dbtype is set; otherwise empty.
- Minor formatting fixes for env and URL strings.

Impact:
- Prevents failures in roles without a DB or with database_type=''.
- Keeps previous behavior intact for apps with a valid database_type (mariadb/postgres).
- Eliminates 'config_path: docker.services..name' errors while keeping compose templates stable.

https://chatgpt.com/share/689b9d11-6308-800f-b20c-2d9f18d832f1
2025-08-12 21:59:37 +02:00
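The guarded lookups could be sketched like this (variable names from the commit text; `get_app_conf` is the project's own lookup, its exact signature is assumed here):

```yaml
# vars/main.yml (sketch)
_dbtype: "{{ database_type | d('') | trim }}"
_database_id: "{{ ('svc-db-' ~ _dbtype) if _dbtype else '' }}"
database_port: >-
  {{ get_app_conf(_database_id, 'docker.services.' ~ _dbtype ~ '.port',
                  strict=False, default='') if _dbtype else '' }}
```

With an empty `database_type`, `_database_id` stays empty and no `svc-db-` lookup is attempted, which is exactly the failure mode the commit fixes.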
94da112736 perf(friendica): single-pass patch for DB creds + system.url; align env URL; tidy vars
- Patch local.config.php in one sed exec:
  * hostname, database, username, password
  * system.url via '#' delimiter to avoid URL slash escaping
  * Single notify: docker compose up
- env.j2:
  * FRIENDICA_URL now uses domains|get_url(application_id, WEB_PROTOCOL)
  * Simplify FRIENDICA_DEBUGGING with |lower
  * Normalize spacing for readability
- vars/main.yml:
  * Minor cleanups (comment header, spacing)
  * Consistent friendica_docker_ldap_config path construction

Why: fewer container execs ⇒ faster runs; idempotent key updates; consistent URL configuration across env and PHP config.
Risk: requires WEB_PROTOCOL and domains|get_url to be defined in inventory/vars as elsewhere in the project.

https://chatgpt.com/share/689b92af-b184-800f-9664-2450e00b29d6
2025-08-12 21:15:33 +02:00
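The single-pass patch could be sketched roughly as follows (config keys from the commit; the function name and value sources are assumptions). Note the '#' delimiter for the URL substitution, which avoids escaping the slashes in the URL:

```shell
#!/bin/sh
# Patch hostname and system.url in a Friendica-style config in one sed pass.
patch_friendica_config() {
  cfg="$1"; db_host="$2"; site_url="$3"
  sed -i \
    -e "s/'hostname' => '[^']*'/'hostname' => '${db_host}'/" \
    -e "s#'url' => '[^']*'#'url' => '${site_url}'#" \
    "$cfg"
}

# Example (assumed paths):
# patch_friendica_config local.config.php db.internal https://social.example.org
```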
b62df5599d Optimized typos 2025-08-12 19:17:02 +02:00
c9a7830953 Renamed to CONSTANT DATABASE_VARS_FILE 2025-08-12 17:45:19 +02:00
53e5c563ae Refactor MIG build process to run asynchronously with optional wait control
- Moved MIG data build commands into a dedicated 02_build_data.yml task file.
- Added async execution (async: 3600, poll: 0) for non-blocking build.
- Introduced mig_wait_for_build variable to optionally wait for completion.
- Added a debug message explaining how to disable waiting via build_data.wait_for=false for performance.
- Updated config to use nested build_data.enabled and build_data.wait_for structure.
- Adjusted variable lookups accordingly.

https://chatgpt.com/share/689b54d2-e3b0-800f-91df-939ebc5e12ef
2025-08-12 16:51:24 +02:00
0b3b3a810a Solved bug which prevented backup2loc from being activated 2025-08-12 15:42:26 +02:00
6d14f16dfd Optimized sys-timer 2025-08-12 15:00:12 +02:00
632d922977 Solved discourse flush handlers bug 2025-08-12 14:59:00 +02:00
26b29debc0 Add integration test to ensure no Jinja variables are used in handler names
This test scans roles/*/handlers/main.yml and fails if a handler's 'name' contains a Jinja variable ({{ ... }}).
Reason:
- Handler names must be static to ensure reliable 'notify' resolution.
- Dynamic names can break handler matching, cause undefined-variable errors, and produce unstable logs.
Recommendation:
- Keep handler names static and, if dynamic behavior is needed, use a static 'listen:' key.

https://chatgpt.com/share/689b37dc-e1e4-800f-bd56-00b43c7701f6
2025-08-12 14:48:43 +02:00
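The check could be sketched roughly like this (a line-based approximation rather than a full YAML parse; the function name is hypothetical):

```python
import re

JINJA = re.compile(r"\{\{.*?\}\}")

def handler_name_violations(handlers_text: str) -> list:
    """Return handler names that contain a Jinja expression.

    Sketch of the integration test described above; the real test scans
    roles/*/handlers/main.yml.
    """
    violations = []
    for line in handlers_text.splitlines():
        m = re.match(r"\s*-?\s*name:\s*(.+)", line)
        if m and JINJA.search(m.group(1)):
            violations.append(m.group(1).strip())
    return violations
```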
0c4cd283c4 Optimized CDN variables during bug research 2025-08-12 14:31:24 +02:00
5d36a806ff svc-db-postgres: add retry mechanism to all PostgreSQL tasks and fix condition handling
- Added register, until, retries, and delay to all PostgreSQL-related tasks
  in 02_init.yml to handle transient 'tuple concurrently updated' and similar errors.
- Changed 'when: "{{ postgres_init }}"' to 'when: postgres_init | bool' in main.yml
  for correct boolean evaluation.
- Switched 'role' to 'roles' in postgresql_privs tasks for forward compatibility.
- Added postgres_retry_retries and postgres_retry_delay defaults in vars/main.yml
  to centralize retry configuration.

  https://chatgpt.com/share/689b2360-a8a4-800f-9acb-6d88d6aa5cb7
2025-08-12 13:20:30 +02:00
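The retry pattern applied to those tasks looks roughly like this (module arguments and variable wiring are assumptions; the retry variables are named in the commit):

```yaml
- name: Ensure PostgreSQL user exists
  community.postgresql.postgresql_user:
    name: "{{ database_username }}"
    password: "{{ database_password }}"
  register: pg_user_result
  # Retries transient errors such as 'tuple concurrently updated':
  until: pg_user_result is succeeded
  retries: "{{ postgres_retry_retries }}"
  delay: "{{ postgres_retry_delay }}"
```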
84de85d905 Solved matrix flush handler bug 2025-08-12 12:54:27 +02:00
457f3659fa Solved mobilizon flush docker handler bug 2025-08-12 12:03:53 +02:00
4c7ee0441e Solved baserow variable bugs 2025-08-12 11:23:56 +02:00
140572a0a4 Solved missing deployment of postgres bug 2025-08-12 10:58:09 +02:00
a30cd4e8b5 Solved listmonk handler bugs 2025-08-12 04:38:41 +02:00
2067804e9f Solved ansible multiplexing auth bug 2025-08-12 04:23:45 +02:00
1a42e8bd14 Replaced dependencies with includes for performance reasons 2025-08-12 03:08:33 +02:00
8634b5e1b3 Finished move_unnecessary_dependencies implementation 2025-08-12 02:39:22 +02:00
1595a7c4a6 Optimized tests for run_once 2025-08-12 02:38:37 +02:00
82aaf7ad74 fixed move_unnecessary_dependencies.py 2025-08-11 23:41:48 +02:00
7e4a1062af Added draft for fixing dependencies 2025-08-11 23:16:32 +02:00
d5e5f57f92 Optimized openproject for new repository structure 2025-08-11 23:03:24 +02:00
f671678720 Add integration test to detect unnecessary meta dependencies
This test scans all roles/*/meta/main.yml for meta dependencies that are
likely unnecessary and could be replaced with guarded include_role/import_role
calls to improve performance.

A dependency is flagged as unnecessary when:
- The consumer role does not use provider variables in defaults/vars/handlers
  (no early variable requirement), and
- Any usage of provider variables or handler notifications in tasks occurs
  only after an explicit include/import of the provider in the same file,
  or there is no usage at all.

Purpose:
Helps reduce redundant parsing/execution of roles and improves Ansible
performance by converting heavy global dependencies into conditional,
guarded includes where possible.

https://chatgpt.com/share/689a59ee-52f4-800f-8349-4f477dc97c7c
2025-08-11 23:00:49 +02:00
2219696c3f Removed redirects for performance 2025-08-11 22:21:17 +02:00
fbaee683fd Removed dependencies and used include_roles to raise performance and turn infinito into a racing car 2025-08-11 21:56:34 +02:00
b301e58ee6 Removed redirect to save performance 2025-08-11 21:48:33 +02:00
de15c42de8 Added database patch to wordpress 2025-08-11 21:46:29 +02:00
918355743f Updated ansible.cfg for better performance and tracking 2025-08-11 21:00:33 +02:00
f6e62525d1 Optimized wordpress variables 2025-08-11 20:00:48 +02:00
f72ac30884 Replaced redirects with origin to raise performance 2025-08-11 19:44:14 +02:00
1496f1de95 Replaced community.general.pacman: by pacman to raise performance 2025-08-11 19:33:28 +02:00
38de10ba65 Solved bigbluebutton admin creation bug 2025-08-11 19:24:08 +02:00
e8c19b4b84 Implemented correct path replace not just for context: but also for build: paths 2025-08-11 18:46:02 +02:00
b0737b1cdb Merge branch 'master' of github.com:kevinveenbirkenbach/cymais 2025-08-11 14:31:19 +02:00
e4cc928eea Encapsulated SAN in block with when 2025-08-11 14:31:10 +02:00
c9b2136578 Merge pull request #5 from ocrampete16/logs-dir
Create logs dir to prevent failing test
2025-08-11 14:15:39 +02:00
5709935c92 Improved performance by avoiding loading roles that are only guarded by a single condition 2025-08-11 13:52:24 +02:00
c7badc608a Solved typo 2025-08-11 13:25:32 +02:00
0e59d35129 Update RunOnceSchemaTest to skip files with deactivated run_once variables via comment https://chatgpt.com/share/6899d297-4bec-800f-a748-6816398d8c7e 2025-08-11 13:23:20 +02:00
1ba50397db Optimized performance by moving multiple similar when includes to own tasks file 2025-08-11 13:15:31 +02:00
6318611931 Add integration test to detect excessive duplicate 'when' conditions in tasks files
This test scans all .yml/.yaml files under any tasks/ directory and flags cases where the same
'when' condition appears on more than 3 tasks in the same file. Excessive duplication of identical
conditions can harm Ansible performance because the condition is re-evaluated for every task.

The test suggests replacing repeated conditions with an include_tasks call or a block guarded
by the condition to evaluate it only once.

https://chatgpt.com/share/6899c605-6f40-800f-a954-ccb62f8bbcf1
2025-08-11 12:29:57 +02:00
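A rough sketch of the duplicate-`when` check (single-line conditions only, function name hypothetical; multi-line `when:` lists are ignored):

```python
import re
from collections import Counter

def duplicated_whens(tasks_text: str, limit: int = 3) -> dict:
    """Report 'when:' conditions that appear on more than `limit` tasks
    in the same file, as the integration test described above does."""
    counts = Counter()
    for line in tasks_text.splitlines():
        m = re.match(r"\s*when:\s*(\S.*)$", line)
        if m:
            counts[m.group(1).strip()] += 1
    return {cond: n for cond, n in counts.items() if n > limit}
```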
6e04ac58d2 Moved blocks to include_tasks to raise performance. Deploy was really slow 2025-08-11 12:28:31 +02:00
b6e571a496 Solved config path entry bug 2025-08-11 12:19:24 +02:00
21b6362bc1 test(integration): fail if reset.yml exists but is never included
Updated test_mode_reset.py to also validate roles that contain a reset
task file (*_reset.yml or reset.yml) even when no mode_reset keyword is
found. The test now:

- Detects roles with reset files but no include, and fails accordingly.
- Ignores commented include_tasks and when lines.
- Ensures exactly one non-commented include of the reset file exists.
- Requires that the include is guarded in the same task block by a
  when containing mode_reset | bool (with optional extra conditions).

This prevents silent omissions of reset task integration.

https://chatgpt.com/share/6899b745-7150-800f-98f3-ca714486f5ba
2025-08-11 11:27:15 +02:00
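The include detection could be sketched like this (line-based, function name hypothetical; the real test additionally validates the guarding `when: mode_reset | bool` condition):

```python
import re

def reset_includes(tasks_text: str) -> list:
    """Return non-commented include_tasks lines that reference a reset file."""
    hits = []
    for line in tasks_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("#"):
            continue  # ignore commented-out includes
        if re.search(r"include_tasks:\s*\S*reset\.yml", stripped):
            hits.append(stripped)
    return hits
```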
1fcf072257 Added performance violation test for blocks 2025-08-11 10:50:42 +02:00
ea0149b5d4 Replaced nextcloud-application by nextcloud container name 2025-08-11 10:41:06 +02:00
fe76fe1e62 Added correct flush parameters for docker compose 2025-08-11 10:33:48 +02:00
3431796283 Wrapped docker compose file routines tasks in block for docker compose 2025-08-11 10:20:06 +02:00
b5d8ac5462 Reactivated keycloak docker and webserver tasks and implemented correct logic for element and synapse redirect handling 2025-08-11 02:21:02 +02:00
5426014096 Optimized handlers order for mailu 2025-08-11 01:56:22 +02:00
a9d77de2a4 Optimized docker compose ensure logic 2025-08-11 01:26:31 +02:00
766ef8619f Encapsulated cmp-docker-oauth2 into block 2025-08-11 01:25:31 +02:00
66013a4da3 Added line 2025-08-11 01:24:02 +02:00
1cb5a12d85 Encapsulated srv-web-7-7-letsencrypt into block 2025-08-11 01:23:00 +02:00
6e8ae793e3 Added auto setting for redirect urls for keycloak clients. Element and Synapse still need to be mapped 2025-08-11 00:17:18 +02:00
0746acedfd Introduced run_once_ block for srv-web-6-6-tls-renew 2025-08-10 21:50:39 +02:00
f5659a44f8 Optimized blocks in roles/srv-proxy-6-6-domain/tasks/main.yml 2025-08-10 18:31:12 +02:00
77816ac4e7 Solved bkp_docker_to_local_pkg bug 2025-08-10 18:17:52 +02:00
8779afd1f7 Removed beep backup sound 2025-08-10 17:54:14 +02:00
0074bcbd69 Implemented functioning warning sound 2025-08-10 17:39:33 +02:00
149c563831 Optimized logic for database backups and integrated a test to verify that the database feature is used correctly 2025-08-10 15:06:37 +02:00
e9ef62b95d Optimized cloudflare purge and cache dev mode 2025-08-10 14:18:29 +02:00
aeaf84de6f Deactivated central_database for lam 2025-08-10 13:42:52 +02:00
fdceb0f792 Implemented dev mode for cloudflare 2025-08-10 12:18:17 +02:00
2fd83eaf55 Keep logs during deploy cleanup 2025-08-10 12:01:34 +02:00
Marco Petersen
21eb614912 Create logs dir to prevent failing test 2025-08-10 12:50:13 +03:00
b880b98ac3 Added hints for infinito modes 2025-08-10 11:34:33 +02:00
acfb1a2ee7 Made logs folder permanent 2025-08-10 11:31:56 +02:00
4885ad7eb4 Overwrote handlers for CDN 2025-08-09 18:08:30 +02:00
d9669fc6dd Added test to verify that no handlers are skipped due to when condition 2025-08-09 15:24:47 +02:00
8e0341c120 Solved some handler reloading bugs 2025-08-08 19:33:16 +02:00
22c8c395f0 Refactored handlers loading 2025-08-08 19:01:12 +02:00
aae69ea15b Ensure that keycloak is up 2025-08-08 17:25:31 +02:00
c7b25ed093 Normalized run_once_, made openresty handlers available without when conditions, and forced flush in run_once when-blocks to avoid handlers with when conditions 2025-08-08 15:32:26 +02:00
e675aa5886 Wrapped in block to avoid multiple similar when conditions for 7-4 web core 2025-08-08 12:25:09 +02:00
14f07adc9d Wrapped in block to avoid multiple similar when conditions 2025-08-08 12:14:01 +02:00
dba12b89d8 Normalized cmp-docker-proxy include 2025-08-08 12:02:14 +02:00
0607974dac Patched url in moodle config 2025-08-08 11:46:23 +02:00
e8fa22cb43 Normalized variable 2025-08-08 11:27:34 +02:00
eedfe83ece Solved missing redirect bug 2025-08-08 11:03:43 +02:00
9f865dd215 Removed category domain prefix from redirect domains 2025-08-08 09:47:31 +02:00
220e3e1c60 Optimized namings in moodle 2025-08-08 09:12:50 +02:00
2996c7cbb6 Added default value for internet interfaces 2025-08-08 08:39:40 +02:00
59bd4ca8eb Added handling of multiple domains and used get_url function in mailu 2025-08-08 08:39:09 +02:00
da58691d25 Added comments explaining why autoflush isn't possible 2025-08-08 08:37:52 +02:00
c96f278ac3 Added autoflush to mastodon for docker 2025-08-08 08:37:12 +02:00
2715479c95 Assert only applications which are in group_names 2025-08-08 08:36:07 +02:00
926640371f Optimized description 2025-08-08 08:35:16 +02:00
cdc97c8ba5 Raised certbot_dns_propagation_wait_seconds to 5min 2025-08-08 08:34:49 +02:00
4124e97aeb Added domain validator for web apps and services for port-ui 2025-08-07 20:37:47 +02:00
7f0d40bdc3 Optimized code 2025-08-07 18:17:38 +02:00
8dc2238ba2 Optimized Funkwhale bug 2025-08-07 17:52:34 +02:00
b9b08feadd Added logout overwritte logic for espocrm 2025-08-07 17:35:13 +02:00
dc437c7621 Activated logout catcher 2025-08-07 16:11:20 +02:00
7d63d92166 Solved status codes bug 2025-08-07 15:46:56 +02:00
3eb51a32ce Adapted webserver test for web-app-yourls 2025-08-07 15:35:33 +02:00
6272303b55 Changed LAM container name 2025-08-07 15:34:40 +02:00
dfd7be9d72 Solved database naming bug 2025-08-07 15:11:45 +02:00
90ad688ca9 Solved bbb backup bug 2025-08-07 14:57:02 +02:00
2f02ad6c15 Moved docs and anylyses from code to cloud 2025-08-07 13:41:31 +02:00
1257bef61d Solved reset bug 2025-08-07 13:28:40 +02:00
3eca5dabdf Fixed certificate renewal bug 2025-08-07 13:21:38 +02:00
5a0684fa2d Adapted CSP to new server dict structure 2025-08-07 13:00:52 +02:00
051e4accd6 Activated redirect for yourls homepage to admin panel 2025-08-07 12:59:54 +02:00
7f53cc3a12 Replaced web_protocol by WEB_PROTOCOL 2025-08-07 12:31:20 +02:00
9228d51e86 Restructured server config 2025-08-07 11:31:06 +02:00
99c6c9ec92 Optimized CSP check 2025-08-07 09:33:19 +02:00
34f9d773bd Deactivated meta CSP via HTML due to management via infinito.nexus 2025-08-06 16:26:35 +02:00
5edb9d19cf Activated sphinx listener 2025-08-06 16:12:38 +02:00
7a09f223af Implemented the correct setup of the bbb administrator 2025-08-06 15:51:08 +02:00
f88e57ca52 Optimized debugging documentation and deprecated warning for openresty 2025-08-06 14:18:32 +02:00
7bc11f9b31 Renamed parameter 2025-08-06 12:56:31 +02:00
0b25161af6 Merge branch 'master' of github.com:kevinveenbirkenbach/cymais 2025-08-06 12:30:20 +02:00
14c3ff1253 Renamed domain mappings to redirect_domain_mappings 2025-08-06 12:30:12 +02:00
234cfea02f Merge pull request #4 from ocrampete16/normalize-line-endings
Normalize line endings
2025-08-06 00:47:54 +02:00
Marco Petersen
69e29029af Normalize line endings 2025-08-05 22:05:46 +03:00
bc5374cf52 Merge pull request #3 from ocrampete16/clone-on-windows
Enable cloning on Windows
2025-08-05 19:28:44 +02:00
Marco Petersen
1660bcd384 Remove duplicate README.md 2025-08-05 00:28:29 +03:00
Marco Petersen
41d924af1c Remove mistakenly cloned file 2025-08-05 00:24:31 +03:00
80278f2bb0 Solved CLI bug 2025-07-29 16:46:50 +02:00
44e0fea0b2 Renamed cymais to infinito and did some other optimations and logout implementations 2025-07-29 16:35:42 +02:00
a9e7ed3605 Implemented flexible upload limits for wordpress and matrix :) 2025-07-26 11:22:01 +02:00
f9f76892af Solved peertube bugs 2025-07-26 08:08:51 +02:00
996244b672 Solved application_id overwriting bug 2025-07-25 22:15:09 +02:00
9f61b4e50b Added database type to postgres 2025-07-25 21:32:40 +02:00
3549f4de32 Activated domain creation for svc-db- because the domains are used for certificate creation, which is needed for secure communication 2025-07-25 21:12:45 +02:00
552bb1bbae Moved python packages to requirements.txt for venv instead of local install 2025-07-25 20:44:45 +02:00
1b385c5215 Added request to requirements.txt 2025-07-25 20:41:08 +02:00
1240d3bfdf Added debugging option to keycloak 2025-07-25 20:14:04 +02:00
27973c2773 Optimized the Lua-based injection layer as a replacement for the nginx replace mechanism. Also optimized Cloudflare cache deletion (no longer on every cleanup). The CDN is still required for the JS logout mechanism, and the Nextcloud deploy is buggy after changing from nginx to openresty, probably a variable-overwrite issue. Should be solved tomorrow. 2025-07-24 19:13:13 +02:00
f62355e490 Replaced nginx native with openresty for logout injection. Right now still buggy on nextcloud and espocrm 2025-07-24 03:19:16 +02:00
f5213fd59c Optimized virtualbox draft 2025-07-23 21:18:09 +02:00
0472fecd64 Solved oauth2 bugs and restructured the postgres role to implement extensions used by discourse 2025-07-23 13:24:55 +02:00
d1fcbedef6 Set correct roles path for oidc keycloak groups\roles 2025-07-22 22:11:00 +02:00
c8be88e3b1 Activated redis for oauth2 for large cookies 2025-07-22 22:00:11 +02:00
5e315f9603 Activated correct oidc solution for nextcloud 2025-07-22 21:32:26 +02:00
bab1035a24 Activated oauth2 for lam 2025-07-22 21:31:11 +02:00
30930c4136 Fixed empty canonicals 2025-07-22 19:26:44 +02:00
bba663f95d Added missing canonicals 2025-07-22 19:20:29 +02:00
c2f83abb60 Added better debugging 2025-07-22 19:14:40 +02:00
3bc64023af Added logout pages to some applications 2025-07-22 18:49:23 +02:00
d94254effb Made canonical domains obligatory 2025-07-22 18:47:38 +02:00
ff18c7cd73 Expect this to solve openldap import bug 2025-07-22 14:18:33 +02:00
a84abbdade Deactivated ESPOCRM for port-ui temporarily until the bug is solved 2025-07-22 14:14:41 +02:00
5dc8ec2344 Deactivated caching 2025-07-22 13:53:20 +02:00
4b9e7dd3b7 Implemented universal logout 2025-07-22 13:14:06 +02:00
22ff2dc1f3 Solved cleanup file naming bug 2025-07-22 12:48:22 +02:00
16c1a5d834 Refactored get_app_conf 2025-07-22 10:08:47 +02:00
b25f7f52b3 Left hint 2025-07-22 08:34:26 +02:00
4826de621e Solved drive optimizer path bug 2025-07-22 08:33:58 +02:00
4501c31756 Solved docker compose handler bugs 2025-07-22 08:33:36 +02:00
c185c537cb Added auto seeding of credentials 2025-07-21 17:52:19 +02:00
809ac1adf4 Removed unnecessary schema 2025-07-21 17:43:45 +02:00
1a2451af4e Added workflow deploy routine draft 2025-07-21 17:41:57 +02:00
e78974b469 Solved openldap folder naming bug 2025-07-21 17:41:18 +02:00
b1bf7aaba5 Fixed BBB stuff 2025-07-21 15:10:05 +02:00
a1643870db Optimized auto playbook creation 2025-07-21 15:09:38 +02:00
aeeae776c7 Finished implementation of correct application id 2025-07-21 11:33:35 +02:00
356c214718 Renamed multiple roles incl. oauth2 to web-app-* 2025-07-21 11:28:54 +02:00
4717e33649 Renamed multiple roles incl. gitlab to web-app-* 2025-07-21 11:25:24 +02:00
ee4ee9a1b7 Changed funkwhale to web-app-funkwhale 2025-07-21 11:14:02 +02:00
57211c2076 Changed phpldapadmin to web-app-phpldapadmin 2025-07-21 11:12:05 +02:00
2ffaadfaca Changed bluesky to web-app-bluesky 2025-07-21 11:10:06 +02:00
bc5059fe62 Solved taiga path bugs 2025-07-21 10:56:55 +02:00
e6db73c02a Changed taiga to web-app-taiga 2025-07-21 10:47:45 +02:00
4ad6f1f8ea Changed roulette-wheel to web-app-roulette-wheel 2025-07-21 10:40:02 +02:00
7e58b825ea Changed pgadmin to web-app-pgadmin 2025-07-21 10:36:51 +02:00
f3aa7625fe Renamed presentation to navigator 2025-07-21 09:22:30 +02:00
d9c4493e0d Optimized mode_reset test 2025-07-21 09:08:03 +02:00
14dde77134 Implemented correct sphinx id 2025-07-21 08:56:23 +02:00
fd422a14ce Set correct id for simpleicons 2025-07-21 08:52:22 +02:00
5343536d27 Optimized snipe-it and bbb 2025-07-21 01:40:42 +02:00
6e2e3e45a7 Solved matrix bug 2025-07-21 01:36:10 +02:00
ed866bf177 Finished bbb implementation 2025-07-20 20:07:43 +02:00
a580f41edd Replaced aur_builder by builder in dockerfile 2025-07-20 18:39:54 +02:00
dcb57af6f7 Implemented gitea database patch 2025-07-20 18:14:20 +02:00
2699edd197 Optimized friendica database patch 2025-07-20 16:13:48 +02:00
257d0c4673 Set entity name as default domain instead of application_id 2025-07-20 15:44:28 +02:00
4cbd29735f Solved pixelfed bugs 2025-07-20 15:43:48 +02:00
8ea86d2bd7 Solved friendica path bug and closed all failed tests! PAAAAAAAAAAAAAAAAAARTY! 2025-07-20 14:34:57 +02:00
3951376a29 Added draft for neovim and micro 2025-07-20 14:27:09 +02:00
e1d36045da Solved open run_once issues 2025-07-20 14:23:08 +02:00
c572d535e2 Optimized test for tree creation 2025-07-20 11:41:16 +02:00
c79dbeec68 Optimized run_once variable 2025-07-20 11:31:15 +02:00
5501e40b7b Optimized run_once test 2025-07-20 11:21:14 +02:00
e84c7e5612 Optimized desk-copyq draft and implemented server to use in gnome 2025-07-20 11:20:49 +02:00
be675d5f9e Solved variable name bugs 2025-07-20 10:52:33 +02:00
bf16a44e87 Implemented allowed_groups 2025-07-20 10:46:35 +02:00
98cc3d5070 Changed yourls to web-app-yourls and additional optimizations 2025-07-20 10:41:06 +02:00
2db5f75888 Changed snipe-it to web-app-snipe-it and additional optimizations 2025-07-20 10:26:09 +02:00
867b377115 Changed mobilizon to web-app-mobilizon 2025-07-20 10:10:17 +02:00
1882fcfef5 Changed lam to web-app-lam 2025-07-20 09:59:31 +02:00
15dc99a221 Activated port ui desktop for mobilizon 2025-07-20 09:45:41 +02:00
6b35454f35 Solved openproject variable bug 2025-07-20 09:44:14 +02:00
d86ca6cc0e Adapted discourse version to new code after the big refactoring 2025-07-20 09:29:56 +02:00
1b9775ccb5 Added draft for desk-copyq (untested) 2025-07-19 17:02:25 +02:00
45d9da3125 Implemented friendica database credentials update (untested) 2025-07-19 16:45:04 +02:00
8ccfb1dfbe Added icon to mig 2025-07-19 16:26:10 +02:00
6a1a83432f Various optimizations and mig integration. Tests will fail due to stricter validation checks; needs to be cleaned up tomorrow 2025-07-18 20:08:20 +02:00
85195e01f9 Activated loading of env depending on if it exist 2025-07-18 19:40:34 +02:00
45624037b1 Added shadow option to tree for mig 2025-07-18 19:35:44 +02:00
d4fbdb409f Added missing sounds file from previous commit 2025-07-18 14:44:38 +02:00
a738199868 Added run_once_validator 2025-07-18 14:43:53 +02:00
c1da74de3f Optimized sound for cli 2025-07-18 14:43:09 +02:00
c23624e30c Added workflow todos 2025-07-18 11:45:46 +02:00
0f1f40f2e0 Optimized deployment script 2025-07-18 11:42:05 +02:00
d1982af63d Optimized mastodon and network integration; added options for mig build to make 2025-07-17 18:50:28 +02:00
409e659143 Overall optimizations for application id naming 2025-07-17 17:41:52 +02:00
562603a8cd Restructured libraries 2025-07-17 16:38:20 +02:00
6d4b7227ce Added variable desktop_username to identify the user on a single user pc workstation 2025-07-17 16:10:39 +02:00
9a8ef5e047 Implemented new appid for bbb 2025-07-17 16:04:05 +02:00
ad449c3b6a Adapted roles to new architecture 2025-07-17 15:39:31 +02:00
9469452275 Finalized matomo integration into new architecture 2025-07-17 12:09:20 +02:00
fd8ef26b53 Solved streaming bugs 2025-07-17 09:47:08 +02:00
8cda54c46e Finished moodle adaptation to new structure 2025-07-17 09:18:24 +02:00
90bc52632e Moved web-app-phpmyadmin to new structure 2025-07-17 08:24:07 +02:00
0b8d2e0b40 Solved variable bug 2025-07-17 08:22:59 +02:00
40491dbc2e Solved typos in port-ui 2025-07-17 07:54:19 +02:00
fac8971982 Solved typo 2025-07-17 07:49:05 +02:00
c791e86b8b Solved discourse variable bug 2025-07-17 07:46:39 +02:00
d222b55f30 Changed espocrm application id to the new format 2025-07-17 07:43:50 +02:00
a04a1710d3 Changed keycloak application id 2025-07-17 07:16:38 +02:00
4f06f94023 Added credentials replacement draft for matomo 2025-07-17 06:57:35 +02:00
2529c7cdb3 Optimized moodle variables 2025-07-17 06:56:54 +02:00
ab12a933f6 Optimized keycloak variables 2025-07-17 06:56:26 +02:00
529efc0bd7 Optimized moodle variable names 2025-07-17 06:38:51 +02:00
725fea1169 Solved database credentials bug 2025-07-17 06:32:53 +02:00
84322f81ef Implemented draft for auto database credentials change moodle 2025-07-17 06:31:55 +02:00
fd637c58e3 Solved oauth2 path bugs 2025-07-17 05:49:45 +02:00
bfc42ce2ac Various little optimizations 2025-07-17 04:23:05 +02:00
1bdfb71f2f Finished backup update 2025-07-17 00:34:54 +02:00
807fab42c3 Solved variable escaping bug 2025-07-16 23:09:42 +02:00
2f45038bef Solved variable bugs 2025-07-16 23:01:25 +02:00
f263992393 Added recursion test for group_vars 2025-07-16 22:31:48 +02:00
f4d1f2a303 Added partial test and skip-build flag 2025-07-16 22:16:22 +02:00
3b2190f7ab Replaced by loading of default values 2025-07-16 21:46:44 +02:00
7145213f45 Finished new backup implementation logic 2025-07-16 20:49:20 +02:00
70f7953027 Added backup value key mapper 2025-07-16 20:03:00 +02:00
c155e82f8c Solved variable bugs 2025-07-16 19:30:54 +02:00
169493179e Restructuring for new backup solution 2025-07-16 19:09:31 +02:00
dea2669de2 Solved unclosed file <_io.TextIOWrapper warnings 2025-07-16 14:33:10 +02:00
e4ce3848fc Optimized for github workflow 2025-07-16 14:22:47 +02:00
8113e412dd Deactivated tests during docker build, in the hope that it now passes the git workflow 2025-07-16 14:07:20 +02:00
94796efae8 Replaced cp by rsync to keep .git folder for workflow 2025-07-16 13:45:33 +02:00
1904 changed files with 30433 additions and 16160 deletions


@@ -1,3 +1,7 @@
# The .gitignore is the single point of truth for files which should be ignored.
# Add patterns, files and folders to the .gitignore and execute 'make build'
# NEVER TOUCH THE .dockerignore, BECAUSE IT ANYHOW WILL BE OVERWRITTEN
site.retry
*__pycache__
venv
@@ -5,4 +9,5 @@ venv
*.bak
*tree.json
roles/list.json
*.pyc
.git

.gitattributes (new file)

@@ -0,0 +1 @@
* text=auto eol=lf

.github/workflows/TODO.md (new file)

@@ -0,0 +1,4 @@
# Todo
- Create workflow test-server, which tests all server roles
- Create workflow test-desktop, which tests all desktop roles
- For the backup services keep in mind to setup a tandem, which pulls the backups from each other to verify that this also works


@@ -1,4 +1,4 @@
name: Build & Test Container
name: Build & Test Infinito.Nexus CLI in Docker Container
on:
push:
@@ -17,16 +17,16 @@ jobs:
- name: Build Docker image
run: |
docker build -t cymais:latest .
docker build -t infinito:latest .
- name: Clean build artifacts
run: |
docker run --rm cymais:latest make clean
docker run --rm infinito:latest make clean
- name: Generate project outputs
run: |
docker run --rm cymais:latest make build
docker run --rm infinito:latest make build
- name: Run tests
run: |
docker run --rm cymais:latest make test
docker run --rm infinito:latest make test

.gitignore vendored

@@ -1,3 +1,7 @@
# The .gitignore is the single point of truth for files which should be ignored.
# Add patterns, files and folders to the .gitignore and execute 'make build'
# NEVER TOUCH THE .dockerignore, BECAUSE IT ANYHOW WILL BE OVERWRITTEN
site.retry
*__pycache__
venv
@@ -5,3 +9,4 @@ venv
*.bak
*tree.json
roles/list.json
*.pyc


@@ -1,6 +1,6 @@
# Code of Conduct
In order to foster a welcoming, open, and respectful community for everyone, we expect all contributors and participants in the CyMaIS project to abide by the following Code of Conduct.
In order to foster a welcoming, open, and respectful community for everyone, we expect all contributors and participants in the Infinito.Nexus project to abide by the following Code of Conduct.
## Our Pledge
@@ -29,10 +29,10 @@ Our project maintainers and community leaders will review all reports and take a
## Scope
This Code of Conduct applies to all spaces managed by the CyMaIS project, including GitHub repositories, mailing lists, chat rooms, and other communication channels.
This Code of Conduct applies to all spaces managed by the Infinito.Nexus project, including GitHub repositories, mailing lists, chat rooms, and other communication channels.
## Acknowledgment
By participating in the CyMaIS project, you agree to adhere to this Code of Conduct. We appreciate your cooperation in helping us build a positive and productive community.
By participating in the Infinito.Nexus project, you agree to adhere to this Code of Conduct. We appreciate your cooperation in helping us build a positive and productive community.
Thank you for contributing to a safe and inclusive CyMaIS community!
Thank you for contributing to a safe and inclusive Infinito.Nexus community!


@@ -2,13 +2,13 @@
<img src="https://cybermaster.space/wp-content/uploads/sites/7/2023/11/FVG_8364BW-scaled.jpg" width="300" style="float: right; margin-left: 30px;">
My name is Kevin Veen-Birkenbach and I'm the author and founder of CyMaIS.
My name is Kevin Veen-Birkenbach and I'm the author and founder of Infinito.Nexus.
I'm glad to assist you in the implementation of your secure and scalable IT infrastrucutre solution with CyMaIS.
I'm glad to assist you in the implementation of your secure and scalable IT infrastrucutre solution with Infinito.Nexus.
My expertise in server administration, digital corporate infrastructure, custom software, and information security, all underpinned by a commitment to Open Source solutions, guarantees that your IT setup meets the highest industry standards.
Discover how CyMaIS can transform your IT landscape.
Discover how Infinito.Nexus can transform your IT landscape.
Contact me for more details:


@@ -1,14 +1,14 @@
# Contributing
Thank you for your interest in contributing to CyMaIS! We welcome contributions from the community to help improve and enhance this project. Your input makes the project stronger and more adaptable to a wide range of IT infrastructure needs.
Thank you for your interest in contributing to Infinito.Nexus! We welcome contributions from the community to help improve and enhance this project. Your input makes the project stronger and more adaptable to a wide range of IT infrastructure needs.
## How to Contribute
There are several ways you can help:
- **Reporting Issues:** Found a bug or have a feature request? Please open an issue on our [GitHub Issues page](https://github.com/kevinveenbirkenbach/cymais/issues) with a clear description and steps to reproduce the problem.
- **Reporting Issues:** Found a bug or have a feature request? Please open an issue on our [GitHub Issues page](https://s.infinito.nexus/issues) with a clear description and steps to reproduce the problem.
- **Code Contributions:** If you'd like to contribute code, fork the repository, create a new branch for your feature or bug fix, and submit a pull request. Ensure your code adheres to our coding style and includes tests where applicable.
- **Documentation:** Improving the documentation is a great way to contribute. Whether it's clarifying an existing section or adding new guides, your contributions help others understand and use CyMaIS effectively.
- **Financial Contributions:** If you appreciate CyMaIS and want to support its ongoing development, consider making a financial contribution. For more details, please see our [donate options](12_DONATE.md).
- **Documentation:** Improving the documentation is a great way to contribute. Whether it's clarifying an existing section or adding new guides, your contributions help others understand and use Infinito.Nexus effectively.
- **Financial Contributions:** If you appreciate Infinito.Nexus and want to support its ongoing development, consider making a financial contribution. For more details, please see our [donate options](12_DONATE.md).
## Code of Conduct
@@ -40,7 +40,7 @@ Please follow these guidelines when contributing code:
## License and Commercial Use
CyMaIS is primarily designed for private use. Commercial use of CyMaIS is not permitted without a proper licensing agreement. By contributing to this project, you agree that your contributions will be licensed under the same terms as the rest of the project.
Infinito.Nexus is primarily designed for private use. Commercial use of Infinito.Nexus is not permitted without a proper licensing agreement. By contributing to this project, you agree that your contributions will be licensed under the same terms as the rest of the project.
## Getting Started
@@ -54,4 +54,4 @@ CyMaIS is primarily designed for private use. Commercial use of CyMaIS is not pe
If you have any questions or need help, feel free to open an issue or join our community discussions. We appreciate your efforts and are here to support you.
Thank you for contributing to CyMaIS and helping us build a better, more efficient IT infrastructure solution!
Thank you for contributing to Infinito.Nexus and helping us build a better, more efficient IT infrastructure solution!


@@ -1,8 +1,8 @@
# Support Us
CyMaIS is an Open Source Based transformative tool designed to redefine IT infrastructure setup for organizations and individuals alike. Your contributions directly support the ongoing development and innovation behind CyMaIS, ensuring that it continues to grow and serve its community effectively.
Infinito.Nexus is an Open Source Based transformative tool designed to redefine IT infrastructure setup for organizations and individuals alike. Your contributions directly support the ongoing development and innovation behind Infinito.Nexus, ensuring that it continues to grow and serve its community effectively.
If you enjoy using CyMaIS and would like to contribute to its improvement, please consider donating. Every contribution, no matter the size, helps us maintain and expand this project.
If you enjoy using Infinito.Nexus and would like to contribute to its improvement, please consider donating. Every contribution, no matter the size, helps us maintain and expand this project.
[![GitHub Sponsors](https://img.shields.io/badge/Sponsor-GitHub%20Sponsors-blue?logo=github)](https://github.com/sponsors/kevinveenbirkenbach) [![Patreon](https://img.shields.io/badge/Support-Patreon-orange?logo=patreon)](https://www.patreon.com/c/kevinveenbirkenbach) [![Buy Me a Coffee](https://img.shields.io/badge/Buy%20me%20a%20Coffee-Funding-yellow?logo=buymeacoffee)](https://buymeacoffee.com/kevinveenbirkenbach) [![PayPal](https://img.shields.io/badge/Donate-PayPal-blue?logo=paypal)](https://s.veen.world/paypaldonate)


@@ -9,6 +9,7 @@ RUN pacman -Syu --noconfirm \
python-setuptools \
alsa-lib \
go \
rsync \
&& pacman -Scc --noconfirm
# 2) Stub out systemctl & yay so post-install hooks and AUR calls never fail
@@ -18,12 +19,12 @@ RUN printf '#!/bin/sh\nexit 0\n' > /usr/bin/systemctl \
&& chmod +x /usr/bin/yay
# 3) Build & install python-simpleaudio from AUR manually (as non-root)
RUN useradd -m builder \
&& su builder -c "git clone https://aur.archlinux.org/python-simpleaudio.git /home/builder/psa && \
cd /home/builder/psa && \
RUN useradd -m aur_builder \
&& su aur_builder -c "git clone https://aur.archlinux.org/python-simpleaudio.git /home/aur_builder/psa && \
cd /home/aur_builder/psa && \
makepkg --noconfirm --skippgpcheck" \
&& pacman -U --noconfirm /home/builder/psa/*.pkg.tar.zst \
&& rm -rf /home/builder/psa
&& pacman -U --noconfirm /home/aur_builder/psa/*.pkg.tar.zst \
&& rm -rf /home/aur_builder/psa
# 4) Clone Kevins Package Manager and create its venv
ENV PKGMGR_REPO=/opt/package-manager \
@@ -32,7 +33,7 @@ ENV PKGMGR_REPO=/opt/package-manager \
RUN git clone https://github.com/kevinveenbirkenbach/package-manager.git $PKGMGR_REPO \
&& python -m venv $PKGMGR_VENV \
&& $PKGMGR_VENV/bin/pip install --upgrade pip \
# install pkgmgrs own deps + the ansible Python library so cymais import yaml & ansible.plugins.lookup work
# install pkgmgrs own deps + the ansible Python library so infinito import yaml & ansible.plugins.lookup work
&& $PKGMGR_VENV/bin/pip install --no-cache-dir -r $PKGMGR_REPO/requirements.txt ansible \
# drop a thin wrapper so `pkgmgr` always runs inside that venv
&& printf '#!/bin/sh\n. %s/bin/activate\nexec python %s/main.py "$@"\n' \
@@ -42,28 +43,27 @@ RUN git clone https://github.com/kevinveenbirkenbach/package-manager.git $PKGMGR
# 5) Ensure pkgmgr venv bin and user-local bin are on PATH
ENV PATH="$PKGMGR_VENV/bin:/root/.local/bin:${PATH}"
# 6) Copy local CyMaIS source into the image for override
COPY . /opt/cymais-src
# 6) Copy local Infinito.Nexus source into the image for override
COPY . /opt/infinito-src
# 7) Install CyMaIS via pkgmgr (clone-mode https)
RUN pkgmgr install cymais --clone-mode https
# 7) Install Infinito.Nexus via pkgmgr (clone-mode https)
RUN pkgmgr install infinito --clone-mode https
# 8) Override installed CyMaIS with local source and clean ignored files
RUN CMAIS_PATH=$(pkgmgr path cymais) && \
rm -rf "$CMAIS_PATH"/* && \
cp -R /opt/cymais-src/* "$CMAIS_PATH"/ && \
cd "$CMAIS_PATH" && \
make clean
# 8) Override installed Infinito.Nexus with local source and clean ignored files
RUN INFINITO_PATH=$(pkgmgr path infinito) && \
rm -rf "$INFINITO_PATH"/* && \
rsync -a --delete --exclude='.git' /opt/infinito-src/ "$INFINITO_PATH"/
# 9) Symlink the cymais script into /usr/local/bin so ENTRYPOINT works
RUN CMAIS_PATH=$(pkgmgr path cymais) && \
ln -sf "$CMAIS_PATH"/main.py /usr/local/bin/cymais && \
chmod +x /usr/local/bin/cymais
# 9) Symlink the infinito script into /usr/local/bin so ENTRYPOINT works
RUN INFINITO_PATH=$(pkgmgr path infinito) && \
ln -sf "$INFINITO_PATH"/main.py /usr/local/bin/infinito && \
chmod +x /usr/local/bin/infinito
# 10) Run integration tests
RUN CMAIS_PATH=$(pkgmgr path cymais) && \
cd "$CMAIS_PATH" && \
make test
# This needed to be deactivated becaus it doesn't work with gitthub workflow
#RUN INFINITO_PATH=$(pkgmgr path infinito) && \
# cd "$INFINITO_PATH" && \
# make test
ENTRYPOINT ["cymais"]
ENTRYPOINT ["infinito"]
CMD ["--help"]


@@ -1,9 +1,9 @@
# License Agreement
## CyMaIS NonCommercial License (CNCL)
## Infinito.Nexus NonCommercial License
### Definitions
- **"Software":** Refers to *"[CyMaIS - Cyber Master Infrastructure Solution](https://cymais.cloud/)"* and its associated source code.
- **"Software":** Refers to *"[Infinito.Nexus](https://infinito.nexus/)"* and its associated source code.
- **"Commercial Use":** Any use of the Software intended for direct or indirect financial gain, including but not limited to sales, rentals, or provision of services.
### Provisions


@@ -21,20 +21,31 @@ EXTRA_USERS := $(shell \
.PHONY: build install test
clean-keep-logs:
@echo "🧹 Cleaning ignored files but keeping logs/…"
git clean -fdX -- ':!logs' ':!logs/**'
clean:
@echo "Removing ignored git files"
git clean -fdX
list:
@echo Generating the roles list
python3 main.py build roles_list
tree:
@echo Generating Tree
python3 main.py build tree -D 2 --no-signal
mig: list tree
@echo Creating meta data for meta infinity graph
dockerignore:
@echo Create dockerignore
cat .gitignore > .dockerignore
echo ".git" >> .dockerignore
build: clean dockerignore
messy-build: dockerignore
@echo "🔧 Generating users defaults → $(USERS_OUT)"
python3 $(USERS_SCRIPT) \
--roles-dir $(ROLES_DIR) \
@@ -58,11 +69,17 @@ build: clean dockerignore
echo "$$out"; \
)
messy-test:
@echo "🧪 Running Python tests…"
PYTHONPATH=. python -m unittest discover -s tests
@echo "📑 Checking Ansible syntax…"
ansible-playbook playbook.yml --syntax-check
install: build
@echo "⚙️ Install complete."
test: build
@echo "🧪 Running Python tests…"
python -m unittest discover -s tests
@echo "📑 Checking Ansible syntax…"
ansible-playbook playbook.yml --syntax-check
build: clean messy-build
@echo "Full build with cleanup before was executed."
test: build messy-test
@echo "Full test with build before was executed."

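The `dockerignore` target above just derives `.dockerignore` from `.gitignore` and appends `.git` (matching the "NEVER TOUCH THE .dockerignore" note in the .gitignore diff). A minimal Python sketch of the same idea; the helper function name and the sample patterns are illustrative, only the two file names come from the Makefile:

```python
import tempfile
from pathlib import Path

def generate_dockerignore(repo: Path) -> str:
    """Mirror the Makefile 'dockerignore' target:
    copy .gitignore to .dockerignore and append '.git'."""
    content = (repo / ".gitignore").read_text()
    if not content.endswith("\n"):
        content += "\n"
    content += ".git\n"
    (repo / ".dockerignore").write_text(content)
    return content

# Demo against a throwaway repo directory with example ignore patterns.
repo = Path(tempfile.mkdtemp())
(repo / ".gitignore").write_text("venv\n*.pyc\n")
result = generate_dockerignore(repo)
print(result)
```

This is why the comment warns against editing `.dockerignore` by hand: `make build` regenerates it wholesale from `.gitignore`.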

@@ -1,22 +1,20 @@
# IT-Infrastructure Automation Framework 🚀
# Infinito.Nexus 🚀
**🔐 One login. ♾️ Infinite application**
*Automate the Provisioning of All Your Servers and Workstations with a Single OpenSource Script!*
![CyMaIS Logo](assets/img/logo.png)
![Infinito.Nexus Logo](assets/img/logo.png)
---
## What is CyMaIS? 📌
## What is Infinito.Nexus? 📌
**CyMaIS** is an **automated, modular infrastructure framework** built on **Docker**, **Linux**, and **Ansible**, equally suited for cloud services, local server management, and desktop workstations. At its core lies a **web-based desktop with single sign-on**—backed by an **LDAP directory** and **OIDC**—granting **seamless access** to an almost limitless portfolio of self-hosted applications. It fully supports **ActivityPub applications** and is **Fediverse-compatible**, while integrated **monitoring**, **alerting**, **cleanup**, **self-healing**, **automated updates**, and **backup solutions** provide everything an organization needs to run at scale.
**Infinito.Nexus** is an **automated, modular infrastructure framework** built on **Docker**, **Linux**, and **Ansible**, equally suited for cloud services, local server management, and desktop workstations. At its core lies a **web-based desktop with single sign-on**—backed by an **LDAP directory** and **OIDC**—granting **seamless access** to an almost limitless portfolio of self-hosted applications. It fully supports **ActivityPub applications** and is **Fediverse-compatible**, while integrated **monitoring**, **alerting**, **cleanup**, **self-healing**, **automated updates**, and **backup solutions** provide everything an organization needs to run at scale.
| 📚 | 🔗 |
|---|---|
| 🌐 Try It Live | [![CyMaIS.Cloud](https://img.shields.io/badge/CyMaIS-%2ECloud-000000?labelColor=004B8D&style=flat&borderRadius=8)](https://cymais.cloud) |
| 🌐 Try It Live | [![Infinito.Nexus](https://img.shields.io/badge/Infinito.Nexus-%2ECloud-000000?labelColor=004B8D&style=flat&borderRadius=8)](https://infinito.nexus) |
| 🔧 Request Your Setup | [![CyberMaster.Space](https://img.shields.io/badge/CyberMaster-%2ESpace-000000?labelColor=004B8D&style=flat&borderRadius=8)](https://cybermaster.space) |
| 📖 About This Project | [![GitHub Sponsors](https://img.shields.io/badge/Sponsor-GitHub%20Sponsors-blue?logo=github)](https://github.com/sponsors/kevinveenbirkenbach) [![Build Status](https://github.com/kevinveenbirkenbach/cymais/actions/workflows/test-container.yml/badge.svg?branch=master)](https://github.com/kevinveenbirkenbach/cymais/actions/workflows/test-container.yml?query=branch%3Amaster) [![View Source](https://img.shields.io/badge/View_Source-Repository-000000?logo=github&labelColor=004B8D&style=flat&borderRadius=8)](https://github.com/kevinveenbirkenbach/cymais) |
| ☕️ Support Us | [![Patreon](https://img.shields.io/badge/Support-Patreon-orange?logo=patreon)](https://www.patreon.com/c/kevinveenbirkenbach) [![Buy Me a Coffee](https://img.shields.io/badge/Buy%20me%20a%20Coffee-Funding-yellow?logo=buymeacoffee)](https://buymeacoffee.com/kevinveenbirkenbach) [![PayPal](https://img.shields.io/badge/Donate-PayPal-blue?logo=paypal)](https://s.veen.world/paypaldonate) [![Sponsor CyMaIS](https://img.shields.io/badge/DonateCyMaIS-000000?style=flat&labelColor=004B8D&logo=github-sponsors&logoColor=white&borderRadius=8)](https://github.com/sponsors/kevinveenbirkenbach) |
| 📖 About This Project | [![GitHub Sponsors](https://img.shields.io/badge/Sponsor-GitHub%20Sponsors-blue?logo=github)](https://github.com/sponsors/kevinveenbirkenbach) [![Build & Test Infinito.Nexus CLI in Docker Container](https://github.com/kevinveenbirkenbach/infinito-nexus/actions/workflows/test-cli.yml/badge.svg)](https://github.com/kevinveenbirkenbach/infinito-nexus/actions/workflows/test-cli.yml) [![View Source](https://img.shields.io/badge/View_Source-Repository-000000?logo=github&labelColor=004B8D&style=flat&borderRadius=8)](https://s.infinito.nexus/code) |
| ☕️ Support Us | [![Patreon](https://img.shields.io/badge/Support-Patreon-orange?logo=patreon)](https://www.patreon.com/c/kevinveenbirkenbach) [![Buy Me a Coffee](https://img.shields.io/badge/Buy%20me%20a%20Coffee-Funding-yellow?logo=buymeacoffee)](https://buymeacoffee.com/kevinveenbirkenbach) [![PayPal](https://img.shields.io/badge/Donate-PayPal-blue?logo=paypal)](https://s.veen.world/paypaldonate) [![Sponsor Infinito.Nexus](https://img.shields.io/badge/DonateInfinito.Nexus-000000?style=flat&labelColor=004B8D&logo=github-sponsors&logoColor=white&borderRadius=8)](https://github.com/sponsors/kevinveenbirkenbach) |
---
@@ -57,37 +55,37 @@ More informations about the features you will find [here](docs/overview/Features
### Use it online 🌐
Try [CyMaIS.Cloud](https://cymais.cloud) sign up in seconds, explore the platform, and discover what our solution can do for you! 🚀🔧✨
Try [Infinito.Nexus](https://infinito.nexus) sign up in seconds, explore the platform, and discover what our solution can do for you! 🚀🔧✨
### Install locally 💻
1. **Install CyMaIS** via [Kevin's Package Manager](https://github.com/kevinveenbirkenbach/package-manager)
2. **Setup CyMaIS** using:
1. **Install Infinito.Nexus** via [Kevin's Package Manager](https://github.com/kevinveenbirkenbach/package-manager)
2. **Setup Infinito.Nexus** using:
```sh
pkgmgr install cymais
pkgmgr install infinito
```
3. **Explore Commands** with:
```sh
cymais --help
infinito --help
```
---
### Setup with Docker🚢
Get CyMaIS up and running inside Docker in just a few steps. For detailed build options and troubleshooting, see the [Docker Guide](docs/Docker.md).
Get Infinito.Nexus up and running inside Docker in just a few steps. For detailed build options and troubleshooting, see the [Docker Guide](docs/Docker.md).
```bash
# 1. Build the Docker image:
docker build -t cymais:latest .
docker build -t infinito:latest .
# 2. Run the CLI interactively:
docker run --rm -it cymais:latest cymais --help
docker run --rm -it infinito:latest infinito --help
```
---
## License ⚖️
CyMaIS is distributed under the **CyMaIS NonCommercial License**. Please see [LICENSE.md](LICENSE.md) for full terms.
Infinito.Nexus is distributed under the **Infinito.Nexus NonCommercial License**. Please see [LICENSE.md](LICENSE.md) for full terms.
---

TODO.md Normal file
@@ -0,0 +1,5 @@
# Todos
- Implement multi language
- Implement rbac administration interface
- Implement ``MASK_CREDENTIALS_IN_LOGS`` for all sensible tasks
- [Enable IP6 for docker](https://chatgpt.com/share/68a0acb8-db20-800f-9d2c-b34e38b5cdee).


@@ -1,3 +0,0 @@
# Todos
- Implement multi language
- Implement rbac administration interface


@@ -1,4 +1,33 @@
[defaults]
lookup_plugins = ./lookup_plugins
# --- Performance & Behavior ---
forks = 25
strategy = linear
gathering = smart
timeout = 120
retry_files_enabled = False
host_key_checking = True
deprecation_warnings = True
interpreter_python = auto_silent
# --- Output & Profiling ---
stdout_callback = yaml
callbacks_enabled = profile_tasks,timer
# --- Plugin paths ---
filter_plugins = ./filter_plugins
module_utils = ./module_utils
lookup_plugins = ./lookup_plugins
module_utils = ./module_utils
[ssh_connection]
# Multiplexing: safer socket path in HOME instead of /tmp
ssh_args = -o ControlMaster=auto -o ControlPersist=20s -o ControlPath=~/.ssh/ansible-%h-%p-%r \
-o ServerAliveInterval=15 -o ServerAliveCountMax=3 -o StrictHostKeyChecking=accept-new \
-o PreferredAuthentications=publickey,password,keyboard-interactive
# Pipelining boosts speed; works fine if sudoers does not enforce "requiretty"
pipelining = True
scp_if_ssh = smart
[persistent_connection]
connect_timeout = 30
command_timeout = 60

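Note that the new `[defaults]` block declares `lookup_plugins` and `module_utils` twice (once under "Performance & Behavior" context and again under "Plugin paths"). In a lenient ini parser the duplicates collapse to a single last-wins entry, so the file still parses, but the repeated keys are worth deduplicating. A quick illustration with the standard-library parser (this shows ini semantics only, not how Ansible itself loads its config):

```python
import configparser

# Trimmed copy of the duplicated keys from the ansible.cfg diff above.
cfg_text = """
[defaults]
filter_plugins = ./filter_plugins
module_utils = ./module_utils
lookup_plugins = ./lookup_plugins
module_utils = ./module_utils
"""

# strict=False tolerates duplicate options; the last assignment wins.
parser = configparser.ConfigParser(strict=False)
parser.read_string(cfg_text)
print(parser["defaults"]["module_utils"])
```

With `strict=True` (the default), the same input raises `DuplicateOptionError`, which is a cheap way to catch such redundancy in CI.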
Binary image changed (not shown): 162 KiB → 157 KiB
Binary image changed (not shown): 701 KiB → 1015 KiB


@@ -5,7 +5,7 @@ import sys
import time
from pathlib import Path
# Ensure project root on PYTHONPATH so utils is importable
# Ensure project root on PYTHONPATH so module_utils is importable
repo_root = Path(__file__).resolve().parent.parent.parent.parent
sys.path.insert(0, str(repo_root))
@@ -13,7 +13,7 @@ sys.path.insert(0, str(repo_root))
plugin_path = repo_root / "lookup_plugins"
sys.path.insert(0, str(plugin_path))
from utils.dict_renderer import DictRenderer
from module_utils.dict_renderer import DictRenderer
from application_gid import LookupModule
def load_yaml_file(path: Path) -> dict:


@@ -189,7 +189,7 @@ def parse_args():
def main():
args = parse_args()
primary_domain = '{{ primary_domain }}'
primary_domain = '{{ SYSTEM_EMAIL.DOMAIN }}'
become_pwd = '{{ lookup("password", "/dev/null length=42 chars=ascii_letters,digits") }}'
try:


@@ -71,8 +71,8 @@ def build_single_graph(
meta = load_meta(find_role_meta(roles_dir, role))
node = {'id': role}
node.update(meta['galaxy_info'])
node['doc_url'] = f"https://docs.cymais.cloud/roles/{role}/README.html"
node['source_url'] = f"https://github.com/kevinveenbirkenbach/cymais/tree/master/roles/{role}"
node['doc_url'] = f"https://docs.infinito.nexus/roles/{role}/README.html"
node['source_url'] = f"https://s.infinito.nexus/code/tree/master/roles/{role}"
nodes[role] = node
if max_depth > 0 and depth >= max_depth:

cli/build/inventory/full.py Normal file
@@ -0,0 +1,127 @@
#!/usr/bin/env python3
# cli/build/inventory/full.py
import argparse
import sys
import os
try:
from filter_plugins.get_all_invokable_apps import get_all_invokable_apps
except ImportError:
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', '..')))
from filter_plugins.get_all_invokable_apps import get_all_invokable_apps
import yaml
import json
def build_group_inventory(apps, host):
"""
Build an Ansible inventory in which each application is a group containing the given host.
"""
groups = {app: {"hosts": [host]} for app in apps}
inventory = {
"all": {
"hosts": [host],
"children": {app: {} for app in apps},
},
**groups
}
return inventory
def build_hostvar_inventory(apps, host):
"""
Alternative: Build an inventory where all invokable apps are set as a host variable (as a list).
"""
return {
"all": {
"hosts": [host],
},
"_meta": {
"hostvars": {
host: {
"invokable_applications": apps
}
}
}
}
def main():
parser = argparse.ArgumentParser(
description='Build a dynamic Ansible inventory for a given host with all invokable applications.'
)
parser.add_argument(
'--host',
required=True,
help='Hostname to assign to all invokable application groups'
)
parser.add_argument(
'-f', '--format',
choices=['json', 'yaml'],
default='yaml',
help='Output format (yaml [default], json)'
)
parser.add_argument(
'--inventory-style',
choices=['group', 'hostvars'],
default='group',
help='Inventory style: group (default, one group per app) or hostvars (list as hostvar)'
)
parser.add_argument(
'-c', '--categories-file',
default=os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', '..', 'roles', 'categories.yml')),
help='Path to roles/categories.yml (default: roles/categories.yml at project root)'
)
parser.add_argument(
'-r', '--roles-dir',
default=os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', '..', 'roles')),
help='Path to roles/ directory (default: roles/ at project root)'
)
parser.add_argument(
'-o', '--output',
help='Write output to file instead of stdout'
)
parser.add_argument(
'-i', '--ignore',
action='append',
default=[],
help='Application ID(s) to ignore (can be specified multiple times or comma-separated)'
)
args = parser.parse_args()
try:
apps = get_all_invokable_apps(
categories_file=args.categories_file,
roles_dir=args.roles_dir
)
except Exception as e:
sys.stderr.write(f"Error: {e}\n")
sys.exit(1)
# Combine all ignore arguments into a flat set
ignore_ids = set()
for entry in args.ignore:
ignore_ids.update(i.strip() for i in entry.split(',') if i.strip())
if ignore_ids:
apps = [app for app in apps if app not in ignore_ids]
# Build the requested inventory style
if args.inventory_style == 'group':
inventory = build_group_inventory(apps, args.host)
else:
inventory = build_hostvar_inventory(apps, args.host)
# Output in the chosen format
if args.format == 'json':
output = json.dumps(inventory, indent=2)
else:
output = yaml.safe_dump(inventory, default_flow_style=False)
if args.output:
with open(args.output, 'w') as f:
f.write(output)
else:
print(output)
if __name__ == '__main__':
main()

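For reference, the default group-style inventory produced above contains one group per application plus an `all` group whose `children` mirror the app list. The function below is reproduced from the script; the app names and host are example values:

```python
def build_group_inventory(apps, host):
    """One Ansible group per application, each containing the single host."""
    groups = {app: {"hosts": [host]} for app in apps}
    return {
        "all": {
            "hosts": [host],
            "children": {app: {} for app in apps},
        },
        **groups,
    }

inv = build_group_inventory(["web-app-nextcloud", "web-app-matrix"], "echoserver")
print(inv["web-app-matrix"])
```

Dumped as YAML, this lets a playbook target any single app via its group name while `--limit echoserver` still works on the host.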

@@ -102,8 +102,10 @@ def find_cycle(roles):
def topological_sort(graph, in_degree, roles=None):
"""
Perform topological sort on the dependency graph.
If `roles` is provided, on error it will include detailed debug info.
If a cycle is detected, raise an Exception with detailed debug info.
"""
from collections import deque
queue = deque([r for r, d in in_degree.items() if d == 0])
sorted_roles = []
local_in = dict(in_degree)
@@ -117,28 +119,26 @@ def topological_sort(graph, in_degree, roles=None):
queue.append(nbr)
if len(sorted_roles) != len(in_degree):
# Something went wrong: likely a cycle
cycle = find_cycle(roles or {})
if roles is not None:
if cycle:
header = f"Circular dependency detected: {' -> '.join(cycle)}"
else:
header = "Circular dependency detected among the roles!"
unsorted = [r for r in in_degree if r not in sorted_roles]
unsorted = [r for r in in_degree if r not in sorted_roles]
detail_lines = ["Unsorted roles and their dependencies:"]
header = "❌ Dependency resolution failed"
if cycle:
reason = f"Circular dependency detected: {' -> '.join(cycle)}"
else:
reason = "Unresolved dependencies among roles (possible cycle or missing role)."
details = []
if unsorted:
details.append("Unsorted roles and their declared run_after dependencies:")
for r in unsorted:
deps = roles.get(r, {}).get('run_after', [])
detail_lines.append(f" - {r} depends on {deps!r}")
details.append(f" - {r} depends on {deps!r}")
detail_lines.append("Full dependency graph:")
detail_lines.append(f" {dict(graph)!r}")
graph_repr = f"Full dependency graph: {dict(graph)!r}"
raise Exception("\n".join([header] + detail_lines))
else:
if cycle:
raise Exception(f"Circular dependency detected: {' -> '.join(cycle)}")
else:
raise Exception("Circular dependency detected among the roles!")
raise Exception("\n".join([header, reason] + details + [graph_repr]))
return sorted_roles

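The sort above is Kahn's algorithm: repeatedly pop roles with zero in-degree, decrement their neighbours, and treat any leftover roles as evidence of a cycle (or a missing role). A minimal standalone sketch of the same mechanics, with made-up role names:

```python
from collections import deque

def topo_sort(graph, in_degree):
    """Kahn's algorithm; raises when not all nodes can be ordered."""
    queue = deque(r for r, d in in_degree.items() if d == 0)
    local_in = dict(in_degree)
    ordered = []
    while queue:
        r = queue.popleft()
        ordered.append(r)
        for nbr in graph.get(r, []):
            local_in[nbr] -= 1
            if local_in[nbr] == 0:
                queue.append(nbr)
    if len(ordered) != len(in_degree):
        raise Exception("Circular dependency detected among the roles!")
    return ordered

graph = {"base": ["web"], "web": ["app"], "app": []}
in_degree = {"base": 0, "web": 1, "app": 1}
print(topo_sort(graph, in_degree))  # → ['base', 'web', 'app']
```

The refactor in the diff keeps exactly this core and only enriches the failure path: it reports the detected cycle, the unsorted roles with their `run_after` declarations, and the full graph in one exception message.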

@@ -5,10 +5,10 @@ import json
from typing import Dict, Any
from cli.build.graph import build_mappings, output_graph
from module_utils.role_dependency_resolver import RoleDependencyResolver
def find_roles(roles_dir: str):
"""Yield (role_name, role_path) for every subfolder in roles_dir."""
for entry in os.listdir(roles_dir):
path = os.path.join(roles_dir, entry)
if os.path.isdir(path):
@@ -16,40 +16,31 @@ def find_roles(roles_dir: str):
def main():
# default roles dir is ../../roles relative to this script
script_dir = os.path.dirname(os.path.abspath(__file__))
default_roles_dir = os.path.abspath(os.path.join(script_dir, '..', '..', 'roles'))
default_roles_dir = os.path.abspath(os.path.join(script_dir, "..", "..", "roles"))
parser = argparse.ArgumentParser(
description="Generate all graphs for each role and write meta/tree.json"
)
parser.add_argument(
'-d', '--role_dir',
default=default_roles_dir,
help=f"Path to roles directory (default: {default_roles_dir})"
)
parser.add_argument(
'-D', '--depth',
type=int,
default=0,
help="Max recursion depth (>0) or <=0 to stop on cycle"
)
parser.add_argument(
'-o', '--output',
choices=['yaml', 'json', 'console'],
default='json',
help="Output format"
)
parser.add_argument(
'-p', '--preview',
action='store_true',
help="Preview graphs to console instead of writing files"
)
parser.add_argument(
'-v', '--verbose',
action='store_true',
help="Enable verbose logging"
)
parser.add_argument("-d", "--role_dir", default=default_roles_dir,
help=f"Path to roles directory (default: {default_roles_dir})")
parser.add_argument("-D", "--depth", type=int, default=0,
help="Max recursion depth (>0) or <=0 to stop on cycle")
parser.add_argument("-o", "--output", choices=["yaml", "json", "console"],
default="json", help="Output format")
parser.add_argument("-p", "--preview", action="store_true",
help="Preview graphs to console instead of writing files")
parser.add_argument("-s", "--shadow-folder", type=str, default=None,
help="If set, writes tree.json to this shadow folder instead of the role's actual meta/ folder")
parser.add_argument("-v", "--verbose", action="store_true", help="Enable verbose logging")
# Toggles
parser.add_argument("--no-include-role", action="store_true", help="Do not scan include_role")
parser.add_argument("--no-import-role", action="store_true", help="Do not scan import_role")
parser.add_argument("--no-dependencies", action="store_true", help="Do not read meta/main.yml dependencies")
parser.add_argument("--no-run-after", action="store_true",
help="Do not read galaxy_info.run_after from meta/main.yml")
args = parser.parse_args()
if args.verbose:
@@ -57,6 +48,9 @@ def main():
print(f"Max depth: {args.depth}")
print(f"Output format: {args.output}")
print(f"Preview mode: {args.preview}")
print(f"Shadow folder: {args.shadow_folder}")
resolver = RoleDependencyResolver(args.role_dir)
for role_name, role_path in find_roles(args.role_dir):
if args.verbose:
@@ -68,18 +62,43 @@ def main():
max_depth=args.depth
)
# Direct deps (depth=1) captured separately for the buckets
inc_roles, imp_roles = resolver._scan_tasks(role_path)
meta_deps = resolver._extract_meta_dependencies(role_path)
run_after = set()
if not args.no_run_after:
run_after = resolver._extract_meta_run_after(role_path)
if any([not args.no_include_role and inc_roles,
not args.no_import_role and imp_roles,
not args.no_dependencies and meta_deps,
not args.no_run_after and run_after]):
deps_root = graphs.setdefault("dependencies", {})
if not args.no_include_role and inc_roles:
deps_root["include_role"] = sorted(inc_roles)
if not args.no_import_role and imp_roles:
deps_root["import_role"] = sorted(imp_roles)
if not args.no_dependencies and meta_deps:
deps_root["dependencies"] = sorted(meta_deps)
if not args.no_run_after and run_after:
deps_root["run_after"] = sorted(run_after)
graphs["dependencies"] = deps_root
if args.preview:
for key, data in graphs.items():
if args.verbose:
print(f"Previewing graph '{key}' for role '{role_name}'")
output_graph(data, 'console', role_name, key)
output_graph(data, "console", role_name, key)
else:
tree_file = os.path.join(role_path, 'meta', 'tree.json')
if args.shadow_folder:
tree_file = os.path.join(args.shadow_folder, role_name, "meta", "tree.json")
else:
tree_file = os.path.join(role_path, "meta", "tree.json")
os.makedirs(os.path.dirname(tree_file), exist_ok=True)
with open(tree_file, 'w') as f:
with open(tree_file, "w", encoding="utf-8") as f:
json.dump(graphs, f, indent=2)
print(f"Wrote {tree_file}")
if __name__ == '__main__':
if __name__ == "__main__":
main()


@@ -1,14 +1,29 @@
#!/usr/bin/env python3
"""
Selectively add & vault NEW credentials in your inventory, preserving comments
and formatting. Existing values are left untouched unless --force is used.
Usage example:
infinito create credentials \
--role-path roles/web-app-akaunting \
--inventory-file host_vars/echoserver.yml \
--vault-password-file .pass/echoserver.txt \
--set credentials.database_password=mysecret
"""
import argparse
import subprocess
import sys
from pathlib import Path
import yaml
from typing import Dict, Any
from utils.manager.inventory import InventoryManager
from utils.handler.vault import VaultHandler, VaultScalar
from utils.handler.yaml import YamlHandler
from yaml.dumper import SafeDumper
from typing import Dict, Any, Union
from ruamel.yaml import YAML
from ruamel.yaml.comments import CommentedMap
from module_utils.manager.inventory import InventoryManager
from module_utils.handler.vault import VaultHandler # uses your existing handler
# ---------- helpers ----------
def ask_for_confirmation(key: str) -> bool:
"""Prompt the user for confirmation to overwrite an existing value."""
@@ -18,35 +33,117 @@ def ask_for_confirmation(key: str) -> bool:
return confirmation == 'y'
def main():
def ensure_map(node: CommentedMap, key: str) -> CommentedMap:
"""
Ensure node[key] exists and is a mapping (CommentedMap) for round-trip safety.
"""
if key not in node or not isinstance(node.get(key), CommentedMap):
node[key] = CommentedMap()
return node[key]
def _is_ruamel_vault(val: Any) -> bool:
"""Detect if a ruamel scalar already carries the !vault tag."""
try:
return getattr(val, 'tag', None) == '!vault'
except Exception:
return False
def _is_vault_encrypted(val: Any) -> bool:
"""
Detect if value is already a vault string or a ruamel !vault scalar.
Accept both '$ANSIBLE_VAULT' and '!vault' markers.
"""
if _is_ruamel_vault(val):
return True
if isinstance(val, str) and ("$ANSIBLE_VAULT" in val or "!vault" in val):
return True
return False
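The detection accepts both markers, so a value pasted with or without the `!vault |` header is treated the same. A standalone sketch of the string half of this check (the ruamel `!vault` tag branch is omitted here):

```python
def is_vault_encrypted(val):
    # A string counts as already encrypted if it carries either marker;
    # non-strings fall through to False in this simplified sketch.
    return isinstance(val, str) and ("$ANSIBLE_VAULT" in val or "!vault" in val)

print(is_vault_encrypted("$ANSIBLE_VAULT;1.1;AES256\nabcd"))  # True
print(is_vault_encrypted("plaintext-secret"))                 # False
```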
def _vault_body(text: str) -> str:
"""
Return only the vault body starting from the first line that contains
'$ANSIBLE_VAULT'. If not found, return the original text.
Also strips any leading '!vault |' header if present.
"""
lines = text.splitlines()
for i, ln in enumerate(lines):
if "$ANSIBLE_VAULT" in ln:
return "\n".join(lines[i:])
return text
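Given a value copied from an inventory that still carries the `!vault |` header, the body extraction keeps only the encrypted payload. A minimal standalone sketch of the same logic:

```python
def vault_body(text: str) -> str:
    # Return everything from the first line containing $ANSIBLE_VAULT onward,
    # which drops any leading '!vault |' header; unchanged if no marker found.
    lines = text.splitlines()
    for i, ln in enumerate(lines):
        if "$ANSIBLE_VAULT" in ln:
            return "\n".join(lines[i:])
    return text

raw = "!vault |\n$ANSIBLE_VAULT;1.1;AES256\n6162636465"
print(vault_body(raw).splitlines()[0])  # → $ANSIBLE_VAULT;1.1;AES256
```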
def _make_vault_scalar_from_text(text: str) -> Any:
"""
Build a ruamel object representing a literal block scalar tagged with !vault
by parsing a tiny YAML snippet. This avoids depending on yaml_set_tag().
"""
body = _vault_body(text)
indented = " " + body.replace("\n", "\n ") # proper block scalar indentation
snippet = f"v: !vault |\n{indented}\n"
y = YAML(typ="rt")
return y.load(snippet)["v"]
def to_vault_block(vault_handler: VaultHandler, value: Union[str, Any], label: str) -> Any:
"""
Return a ruamel scalar tagged as !vault. If the input value is already
vault-encrypted (string contains $ANSIBLE_VAULT or is a !vault scalar), reuse/wrap.
Otherwise, encrypt plaintext via ansible-vault.
"""
# Already a ruamel !vault scalar → reuse
if _is_ruamel_vault(value):
return value
# Already an encrypted string (may include '!vault |' or just the header)
if isinstance(value, str) and ("$ANSIBLE_VAULT" in value or "!vault" in value):
return _make_vault_scalar_from_text(value)
# Plaintext → encrypt now
snippet = vault_handler.encrypt_string(str(value), label)
return _make_vault_scalar_from_text(snippet)
def parse_overrides(pairs: list[str]) -> Dict[str, str]:
"""
Parse --set key=value pairs into a dict.
Supports both 'credentials.key=val' and 'key=val' (short) forms.
"""
out: Dict[str, str] = {}
for pair in pairs:
k, v = pair.split("=", 1)
out[k.strip()] = v.strip()
return out
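Both forms accepted by `--set` resolve as plain key/value pairs; splitting only on the first `=` lets values themselves contain `=`. An illustrative sketch of the same parsing:

```python
def parse_overrides(pairs):
    # Split each 'key=value' once so values may contain '=' themselves.
    out = {}
    for pair in pairs:
        k, v = pair.split("=", 1)
        out[k.strip()] = v.strip()
    return out

ov = parse_overrides(["credentials.database_password=s3cret", "api_key=a=b"])
print(ov)  # {'credentials.database_password': 's3cret', 'api_key': 'a=b'}
```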
# ---------- main ----------
def main() -> int:
parser = argparse.ArgumentParser(
description="Selectively vault credentials + become-password in your inventory."
description="Selectively add & vault NEW credentials in your inventory, preserving comments/formatting."
)
parser.add_argument("--role-path", required=True, help="Path to your role")
parser.add_argument("--inventory-file", required=True, help="Host vars file to update")
parser.add_argument("--vault-password-file", required=True, help="Vault password file")
parser.add_argument(
"--role-path", required=True, help="Path to your role"
)
parser.add_argument(
"--inventory-file", required=True, help="Host vars file to update"
)
parser.add_argument(
"--vault-password-file", required=True, help="Vault password file"
)
parser.add_argument(
"--set", nargs="*", default=[], help="Override values key.subkey=VALUE"
"--set", nargs="*", default=[],
help="Override values key[.subkey]=VALUE (applied to NEW keys; with --force also to existing)"
)
parser.add_argument(
"-f", "--force", action="store_true",
help="Force overwrite without confirmation"
help="Allow overrides to replace existing values (will ask per key unless combined with --yes)"
)
parser.add_argument(
"-y", "--yes", action="store_true",
help="Non-interactive: assume 'yes' for all overwrite confirmations when --force is used"
)
args = parser.parse_args()
# Parse overrides
overrides = {
k.strip(): v.strip()
for pair in args.set for k, v in [pair.split("=", 1)]
}
overrides = parse_overrides(args.set)
# Initialize inventory manager
# Initialize inventory manager (provides schema + app_id + vault)
manager = InventoryManager(
role_path=Path(args.role_path),
inventory_path=Path(args.inventory_file),
@@ -54,62 +151,90 @@ def main():
overrides=overrides
)
# Load existing credentials to preserve
existing_apps = manager.inventory.get("applications", {})
existing_creds = {}
if manager.app_id in existing_apps:
existing_creds = existing_apps[manager.app_id].get("credentials", {}).copy()
# 1) Load existing inventory with ruamel (round-trip)
yaml_rt = YAML(typ="rt")
yaml_rt.preserve_quotes = True
# Apply schema (may generate defaults)
updated_inventory = manager.apply_schema()
with open(args.inventory_file, "r", encoding="utf-8") as f:
data = yaml_rt.load(f) # CommentedMap or None
if data is None:
data = CommentedMap()
# Restore existing database_password if present
apps = updated_inventory.setdefault("applications", {})
app_block = apps.setdefault(manager.app_id, {})
creds = app_block.setdefault("credentials", {})
if "database_password" in existing_creds:
creds["database_password"] = existing_creds["database_password"]
# 2) Get schema-applied structure (defaults etc.) for *non-destructive* merge
schema_inventory: Dict[str, Any] = manager.apply_schema()
# Store original plaintext values
original_plain = {key: str(val) for key, val in creds.items()}
# 3) Ensure structural path exists
apps = ensure_map(data, "applications")
app_block = ensure_map(apps, manager.app_id)
creds = ensure_map(app_block, "credentials")
for key, raw_val in list(creds.items()):
# Skip if already vaulted
if isinstance(raw_val, VaultScalar) or str(raw_val).lstrip().startswith("$ANSIBLE_VAULT"):
# 4) Determine defaults we could add
schema_apps = schema_inventory.get("applications", {})
schema_app_block = schema_apps.get(manager.app_id, {})
schema_creds = schema_app_block.get("credentials", {}) if isinstance(schema_app_block, dict) else {}
# 5) Add ONLY missing credential keys
newly_added_keys = set()
for key, default_val in schema_creds.items():
if key in creds:
# existing → do not touch (preserve plaintext/vault/formatting/comments)
continue
# Determine plaintext
plain = original_plain.get(key, "")
if key in overrides and (args.force or ask_for_confirmation(key)):
plain = overrides[key]
# Value to use for the new key
# Priority: --set exact key → default from schema → empty string
ov = overrides.get(f"credentials.{key}", None)
if ov is None:
ov = overrides.get(key, None)
# Encrypt the plaintext
encrypted = manager.vault_handler.encrypt_string(plain, key)
lines = encrypted.splitlines()
indent = len(lines[1]) - len(lines[1].lstrip())
body = "\n".join(line[indent:] for line in lines[1:])
creds[key] = VaultScalar(body)
# Vault top-level become password if present
if "ansible_become_password" in updated_inventory:
val = str(updated_inventory["ansible_become_password"])
if val.lstrip().startswith("$ANSIBLE_VAULT"):
updated_inventory["ansible_become_password"] = VaultScalar(val)
if ov is not None:
value_for_new_key: Union[str, Any] = ov
else:
snippet = manager.vault_handler.encrypt_string(
val, "ansible_become_password"
if _is_vault_encrypted(default_val):
# Schema already provides a vault value → take it as-is
creds[key] = to_vault_block(manager.vault_handler, default_val, key)
newly_added_keys.add(key)
continue
value_for_new_key = "" if default_val is None else str(default_val)
# Insert as !vault literal (encrypt if needed)
creds[key] = to_vault_block(manager.vault_handler, value_for_new_key, key)
newly_added_keys.add(key)
# 6) ansible_become_password: only add if missing;
# never rewrite an existing one unless --force (+ confirm/--yes) and override provided.
if "ansible_become_password" not in data:
val = overrides.get("ansible_become_password", None)
if val is not None:
data["ansible_become_password"] = to_vault_block(
manager.vault_handler, val, "ansible_become_password"
)
lines = snippet.splitlines()
indent = len(lines[1]) - len(lines[1].lstrip())
body = "\n".join(line[indent:] for line in lines[1:])
updated_inventory["ansible_become_password"] = VaultScalar(body)
else:
if args.force and "ansible_become_password" in overrides:
do_overwrite = args.yes or ask_for_confirmation("ansible_become_password")
if do_overwrite:
data["ansible_become_password"] = to_vault_block(
manager.vault_handler, overrides["ansible_become_password"], "ansible_become_password"
)
# Write back to file
# 7) Overrides for existing credential keys (only with --force)
if args.force:
for ov_key, ov_val in overrides.items():
# Accept both 'credentials.key' and bare 'key'
key = ov_key.split(".", 1)[1] if ov_key.startswith("credentials.") else ov_key
if key in creds:
# If we just added it in this run, don't ask again or rewrap
if key in newly_added_keys:
continue
if args.yes or ask_for_confirmation(key):
creds[key] = to_vault_block(manager.vault_handler, ov_val, key)
# 8) Write back with ruamel (preserve formatting & comments)
with open(args.inventory_file, "w", encoding="utf-8") as f:
yaml.dump(updated_inventory, f, sort_keys=False, Dumper=SafeDumper)
yaml_rt.dump(data, f)
print(f"Inventory selectively vaulted: {args.inventory_file}")
print(f"Added new credentials without touching existing formatting/comments: {args.inventory_file}")
return 0
if __name__ == "__main__":
main()
sys.exit(main())

View File

@@ -1,16 +1,18 @@
#!/usr/bin/env python3
import argparse
import os
import shutil
import sys
import ipaddress
import difflib
from jinja2 import Environment, FileSystemLoader
from ruamel.yaml import YAML
import sys, os
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from module_utils.entity_name_utils import get_entity_name
# Paths to the group-vars files
PORTS_FILE = './group_vars/all/09_ports.yml'
NETWORKS_FILE = './group_vars/all/10_networks.yml'
PORTS_FILE = './group_vars/all/10_ports.yml'
NETWORKS_FILE = './group_vars/all/09_networks.yml'
ROLE_TEMPLATE_DIR = './templates/roles/web-app'
ROLES_DIR = './roles'
@@ -65,6 +67,7 @@ def prompt_conflict(dst_file):
def render_templates(src_dir, dst_dir, context):
env = Environment(loader=FileSystemLoader(src_dir), keep_trailing_newline=True, autoescape=False)
env.filters['bool'] = lambda x: bool(x)
env.filters['get_entity_name'] = get_entity_name
for root, _, files in os.walk(src_dir):
rel = os.path.relpath(root, src_dir)

View File

@@ -14,14 +14,27 @@ def run_ansible_playbook(
password_file=None,
verbose=0,
skip_tests=False,
skip_validation=False
skip_validation=False,
skip_build=False,
cleanup=False,
logs=False
):
start_time = datetime.datetime.now()
print(f"\n▶️ Script started at: {start_time.isoformat()}\n")
print("\n🛠️ Building project (make build)...\n")
subprocess.run(["make", "build"], check=True)
if cleanup:
cleanup_command = ["make", "clean-keep-logs"] if logs else ["make", "clean"]
print("\n🧹 Cleaning up project (" + " ".join(cleanup_command) +")...\n")
subprocess.run(cleanup_command, check=True)
else:
print("\n⚠️ Skipping build as requested.\n")
if not skip_build:
print("\n🛠️ Building project (make messy-build)...\n")
subprocess.run(["make", "messy-build"], check=True)
else:
print("\n⚠️ Skipping build as requested.\n")
script_dir = os.path.dirname(os.path.realpath(__file__))
playbook = os.path.join(os.path.dirname(script_dir), "playbook.yml")
@@ -46,8 +59,8 @@ def run_ansible_playbook(
print("\n⚠️ Skipping inventory validation as requested.\n")
if not skip_tests:
print("\n🧪 Running tests (make test)...\n")
subprocess.run(["make", "test"], check=True)
print("\n🧪 Running tests (make messy-test)...\n")
subprocess.run(["make", "messy-test"], check=True)
# Build ansible-playbook command
cmd = ["ansible-playbook", "-i", inventory, playbook]
@@ -84,7 +97,7 @@ def validate_application_ids(inventory, app_ids):
"""
Abort the script if any application IDs are invalid, with detailed reasons.
"""
from utils.valid_deploy_id import ValidDeployId
from module_utils.valid_deploy_id import ValidDeployId
validator = ValidDeployId()
invalid = validator.validate(inventory, app_ids)
if invalid:
@@ -92,7 +105,7 @@ def validate_application_ids(inventory, app_ids):
for app_id, status in invalid.items():
reasons = []
if not status['in_roles']:
reasons.append("not defined in roles (cymais)")
reasons.append("not defined in roles (infinito)")
if not status['in_inventory']:
reasons.append("not found in inventory file")
print(f" - {app_id}: " + ", ".join(reasons))
@@ -120,7 +133,7 @@ def main():
)
parser.add_argument(
"-r", "--reset", action="store_true",
help="Reset all CyMaIS files and configurations, and run the entire playbook (not just individual roles)."
help="Reset all Infinito.Nexus files and configurations, and run the entire playbook (not just individual roles)."
)
parser.add_argument(
"-t", "--test", action="store_true",
@@ -136,7 +149,7 @@ def main():
)
parser.add_argument(
"-c", "--cleanup", action="store_true",
help="Clean up unused files and outdated configurations after all tasks are complete."
help="Clean up unused files and outdated configurations after all tasks are complete. Also cleans up the repository before the deployment procedure."
)
parser.add_argument(
"-d", "--debug", action="store_true",
@@ -154,6 +167,10 @@ def main():
"-V", "--skip-validation", action="store_true",
help="Skip inventory validation before deployment."
)
parser.add_argument(
"-B", "--skip-build", action="store_true",
help="Skip running 'make build' before deployment."
)
parser.add_argument(
"-i", "--id",
nargs="+",
@@ -165,17 +182,23 @@ def main():
"-v", "--verbose", action="count", default=0,
help="Increase verbosity level. Multiple -v flags increase detail (e.g., -vvv for maximum log output)."
)
parser.add_argument(
"--logs", action="store_true",
help="Keep the CLI logs during cleanup command"
)
args = parser.parse_args()
validate_application_ids(args.inventory, args.id)
modes = {
"mode_reset": args.reset,
"mode_test": args.test,
"mode_update": args.update,
"mode_backup": args.backup,
"mode_cleanup": args.cleanup,
"enable_debug": args.debug,
"MODE_RESET": args.reset,
"MODE_TEST": args.test,
"MODE_UPDATE": args.update,
"MODE_BACKUP": args.backup,
"MODE_CLEANUP": args.cleanup,
"MODE_LOGS": args.logs,
"MODE_DEBUG": args.debug,
"MODE_ASSERT": not args.skip_validation,
"host_type": args.host_type
}
@@ -187,7 +210,10 @@ def main():
password_file=args.password_file,
verbose=args.verbose,
skip_tests=args.skip_tests,
skip_validation=args.skip_validation
skip_validation=args.skip_validation,
skip_build=args.skip_build,
cleanup=args.cleanup,
logs=args.logs
)

View File

@@ -4,8 +4,8 @@ import sys
from pathlib import Path
import yaml
from typing import Dict, Any
from utils.handler.vault import VaultHandler, VaultScalar
from utils.handler.yaml import YamlHandler
from module_utils.handler.vault import VaultHandler, VaultScalar
from module_utils.handler.yaml import YamlHandler
from yaml.dumper import SafeDumper
def ask_for_confirmation(key: str) -> bool:

View File

@@ -0,0 +1,480 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Move unnecessary meta dependencies to guarded include_role/import_role
for better performance, while preserving YAML comments, quotes, and layout.
Heuristic (matches tests/integration/test_unnecessary_role_dependencies.py):
- A dependency is considered UNNECESSARY if:
* The consumer does NOT use provider variables in defaults/vars/handlers
(no early-var need), AND
* In tasks, any usage of provider vars or provider-handler notifications
occurs only AFTER an include/import of the provider in the same file,
OR there is no usage at all.
Action:
- Remove such dependencies from roles/<role>/meta/main.yml.
- Prepend a guarded include block to roles/<role>/tasks/01_core.yml (preferred)
or roles/<role>/tasks/main.yml if 01_core.yml is absent.
- If multiple dependencies are moved for a role, use a loop over include_role.
Notes:
- Creates .bak backups for modified YAML files.
- Requires ruamel.yaml to preserve comments/quotes everywhere.
"""
import argparse
import glob
import os
import re
import shutil
import sys
from typing import Dict, Set, List, Tuple, Optional
# --- Require ruamel.yaml for full round-trip preservation ---
try:
from ruamel.yaml import YAML
from ruamel.yaml.comments import CommentedMap, CommentedSeq
from ruamel.yaml.scalarstring import SingleQuotedScalarString
_HAVE_RUAMEL = True
except Exception:
_HAVE_RUAMEL = False
if not _HAVE_RUAMEL:
print("[ERR] ruamel.yaml is required to preserve comments/quotes. Install with: pip install ruamel.yaml", file=sys.stderr)
sys.exit(3)
yaml_rt = YAML()
yaml_rt.preserve_quotes = True
yaml_rt.width = 10**9 # prevent line wrapping
# ---------------- Utilities ----------------
def _backup(path: str):
if os.path.exists(path):
shutil.copy2(path, path + ".bak")
def read_text(path: str) -> str:
try:
with open(path, "r", encoding="utf-8") as f:
return f.read()
except Exception:
return ""
def load_yaml_rt(path: str):
try:
with open(path, "r", encoding="utf-8") as f:
data = yaml_rt.load(f)
return data if data is not None else CommentedMap()
except FileNotFoundError:
return CommentedMap()
except Exception as e:
print(f"[WARN] Failed to parse YAML: {path}: {e}", file=sys.stderr)
return CommentedMap()
def dump_yaml_rt(data, path: str):
_backup(path)
with open(path, "w", encoding="utf-8") as f:
yaml_rt.dump(data, f)
def roles_root(project_root: str) -> str:
return os.path.join(project_root, "roles")
def iter_role_dirs(project_root: str) -> List[str]:
root = roles_root(project_root)
return [d for d in glob.glob(os.path.join(root, "*")) if os.path.isdir(d)]
def role_name_from_dir(role_dir: str) -> str:
return os.path.basename(role_dir.rstrip(os.sep))
def path_if_exists(*parts) -> Optional[str]:
p = os.path.join(*parts)
return p if os.path.exists(p) else None
def gather_yaml_files(base: str, patterns: List[str]) -> List[str]:
files: List[str] = []
for pat in patterns:
files.extend(glob.glob(os.path.join(base, pat), recursive=True))
return [f for f in files if os.path.isfile(f)]
def sq(v: str):
"""Return a single-quoted scalar (ruamel) for consistent quoting."""
return SingleQuotedScalarString(v)
# ---------------- Providers: vars & handlers ----------------
def flatten_keys(data) -> Set[str]:
out: Set[str] = set()
if isinstance(data, dict):
for k, v in data.items():
if isinstance(k, str):
out.add(k)
out |= flatten_keys(v)
elif isinstance(data, list):
for item in data:
out |= flatten_keys(item)
return out
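For a typical defaults file, key flattening collects every mapping key at any depth, descending through lists as well. A quick standalone check of this behavior:

```python
def flatten_keys(data):
    # Collect string mapping keys recursively; list items are descended into too.
    out = set()
    if isinstance(data, dict):
        for k, v in data.items():
            if isinstance(k, str):
                out.add(k)
            out |= flatten_keys(v)
    elif isinstance(data, list):
        for item in data:
            out |= flatten_keys(item)
    return out

defaults = {"app": {"credentials": {"database_password": "x"}}, "ports": [{"http": 80}]}
print(sorted(flatten_keys(defaults)))
# ['app', 'credentials', 'database_password', 'http', 'ports']
```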
def collect_role_defined_vars(role_dir: str) -> Set[str]:
"""Vars a role 'provides': defaults/vars keys + set_fact keys in tasks."""
provided: Set[str] = set()
for rel in ("defaults/main.yml", "vars/main.yml"):
p = path_if_exists(role_dir, rel)
if p:
data = load_yaml_rt(p)
provided |= flatten_keys(data)
# set_fact keys
task_files = gather_yaml_files(os.path.join(role_dir, "tasks"), ["**/*.yml", "*.yml"])
for tf in task_files:
data = load_yaml_rt(tf)
if isinstance(data, list):
for task in data:
if isinstance(task, dict) and "set_fact" in task and isinstance(task["set_fact"], dict):
provided |= set(task["set_fact"].keys())
noisy = {"when", "name", "vars", "tags", "register"}
return {v for v in provided if isinstance(v, str) and v and v not in noisy}
def collect_role_handler_names(role_dir: str) -> Set[str]:
"""Handler names defined by a role (for notify detection)."""
handler_file = path_if_exists(role_dir, "handlers/main.yml")
if not handler_file:
return set()
data = load_yaml_rt(handler_file)
names: Set[str] = set()
if isinstance(data, list):
for task in data:
if isinstance(task, dict):
nm = task.get("name")
if isinstance(nm, str) and nm.strip():
names.add(nm.strip())
return names
# ---------------- Consumers: usage scanning ----------------
def find_var_positions(text: str, varname: str) -> List[int]:
"""Return byte offsets for occurrences of varname (word-ish boundary)."""
positions: List[int] = []
if not varname:
return positions
pattern = re.compile(rf"(?<!\w){re.escape(varname)}(?!\w)")
for m in pattern.finditer(text):
positions.append(m.start())
return positions
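The word-ish boundary matters so that e.g. `db_pass` does not also match inside `db_pass_file`. A quick check of the pattern in isolation:

```python
import re

def find_var_positions(text, varname):
    # (?<!\w) / (?!\w) keep the match from landing inside a longer identifier.
    if not varname:
        return []
    pattern = re.compile(rf"(?<!\w){re.escape(varname)}(?!\w)")
    return [m.start() for m in pattern.finditer(text)]

text = "x: '{{ db_pass }}'\ny: '{{ db_pass_file }}'"
print(find_var_positions(text, "db_pass"))  # → [7] (first line only)
```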
def first_var_use_offset_in_text(text: str, provided_vars: Set[str]) -> Optional[int]:
first: Optional[int] = None
for v in provided_vars:
for off in find_var_positions(text, v):
if first is None or off < first:
first = off
return first
def first_include_offset_for_role(text: str, producer_role: str) -> Optional[int]:
"""
Find earliest include/import of a given role in this YAML text.
Handles compact dict and block styles.
"""
pattern = re.compile(
r"(include_role|import_role)\s*:\s*\{[^}]*\bname\s*:\s*['\"]?"
+ re.escape(producer_role) + r"['\"]?[^}]*\}"
r"|"
r"(include_role|import_role)\s*:\s*\n(?:\s+[a-z_]+\s*:\s*.*\n)*\s*name\s*:\s*['\"]?"
+ re.escape(producer_role) + r"['\"]?",
re.IGNORECASE,
)
m = pattern.search(text)
return m.start() if m else None
def find_notify_offsets_for_handlers(text: str, handler_names: Set[str]) -> List[int]:
"""
Heuristic: for each handler name, find occurrences where 'notify' appears within
the preceding ~200 chars. Works for single string or list-style notify blocks.
"""
if not handler_names:
return []
offsets: List[int] = []
for h in handler_names:
for m in re.finditer(re.escape(h), text):
start = m.start()
back = max(0, start - 200)
context = text[back:start]
if re.search(r"notify\s*:", context):
offsets.append(start)
return sorted(offsets)
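As a sanity check of the 200-character look-behind heuristic, only occurrences of a handler name preceded by a nearby `notify:` count; the same name appearing in a task title does not:

```python
import re

def find_notify_offsets(text, handler_names):
    # Count a handler-name hit only if 'notify:' appears in the ~200 chars before it.
    offsets = []
    for h in handler_names:
        for m in re.finditer(re.escape(h), text):
            start = m.start()
            if re.search(r"notify\s*:", text[max(0, start - 200):start]):
                offsets.append(start)
    return sorted(offsets)

text = "- name: restart nginx\n  notify:\n    - restart nginx\n"
print(find_notify_offsets(text, {"restart nginx"}))  # → [38], the notified one
```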
def parse_meta_dependencies(role_dir: str) -> List[str]:
meta = path_if_exists(role_dir, "meta/main.yml")
if not meta:
return []
data = load_yaml_rt(meta)
dd = data.get("dependencies")
deps: List[str] = []
if isinstance(dd, list):
for item in dd:
if isinstance(item, str):
deps.append(item)
elif isinstance(item, dict) and "role" in item:
deps.append(str(item["role"]))
elif isinstance(item, dict) and "name" in item:
deps.append(str(item["name"]))
return deps
# ---------------- Fix application ----------------
def sanitize_run_once_var(role_name: str) -> str:
"""
Generate run_once variable name from role name.
Example: 'sys-front-inj-logout' -> 'run_once_sys_front_inj_logout'
"""
return "run_once_" + role_name.replace("-", "_")
def build_include_block_yaml(consumer_role: str, moved_deps: List[str]) -> List[dict]:
"""
Build a guarded block that includes one or many roles.
This block will be prepended to tasks/01_core.yml or tasks/main.yml.
"""
guard_var = sanitize_run_once_var(consumer_role)
if len(moved_deps) == 1:
inner_tasks = [
{
"name": f"Include dependency '{moved_deps[0]}'",
"include_role": {"name": moved_deps[0]},
}
]
else:
inner_tasks = [
{
"name": "Include dependencies",
"include_role": {"name": "{{ item }}"},
"loop": moved_deps,
}
]
# Always set the run_once fact at the end
inner_tasks.append({"set_fact": {guard_var: True}})
# Correct Ansible block structure
block_task = {
"name": "Load former meta dependencies once",
"block": inner_tasks,
"when": f"{guard_var} is not defined",
}
return [block_task]
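Under these rules, moving two dependencies for a consumer role yields a single guarded block that runs once and then sets the guard fact. A sketch reproducing the builder's output shape (the role names here are made up for illustration):

```python
def sanitize_run_once_var(role_name):
    # Dashes are invalid in Ansible variable names, so swap them for underscores.
    return "run_once_" + role_name.replace("-", "_")

def build_include_block(consumer_role, moved_deps):
    guard_var = sanitize_run_once_var(consumer_role)
    if len(moved_deps) == 1:
        inner = [{"name": f"Include dependency '{moved_deps[0]}'",
                  "include_role": {"name": moved_deps[0]}}]
    else:
        inner = [{"name": "Include dependencies",
                  "include_role": {"name": "{{ item }}"},
                  "loop": moved_deps}]
    # The guard fact is always set last so the block is skipped on re-entry.
    inner.append({"set_fact": {guard_var: True}})
    return [{"name": "Load former meta dependencies once",
             "block": inner,
             "when": f"{guard_var} is not defined"}]

blk = build_include_block("web-app-example", ["docker-core", "sys-logging"])
print(blk[0]["when"])  # → run_once_web_app_example is not defined
```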
def prepend_tasks(tasks_path: str, new_tasks, dry_run: bool):
"""
Prepend new_tasks (CommentedSeq) to an existing tasks YAML list while preserving comments.
If the file does not exist, create it with new_tasks.
"""
if os.path.exists(tasks_path):
existing = load_yaml_rt(tasks_path)
if isinstance(existing, list):
combined = CommentedSeq()
for item in new_tasks:
combined.append(item)
for item in existing:
combined.append(item)
elif isinstance(existing, dict):
# Rare case: tasks file with a single mapping; coerce to list
combined = CommentedSeq()
for item in new_tasks:
combined.append(item)
combined.append(existing)
else:
combined = new_tasks
else:
os.makedirs(os.path.dirname(tasks_path), exist_ok=True)
combined = new_tasks
if dry_run:
print(f"[DRY-RUN] Would write {tasks_path} with {len(new_tasks)} prepended task(s).")
return
dump_yaml_rt(combined, tasks_path)
print(f"[OK] Updated {tasks_path} (prepended {len(new_tasks)} task(s)).")
def update_meta_remove_deps(meta_path: str, remove: List[str], dry_run: bool):
"""
Remove entries from meta.dependencies while leaving the rest of the file intact.
Quotes, comments, key order, and line breaks are preserved.
Returns True if a change would be made (or was made when not in dry-run).
"""
if not os.path.exists(meta_path):
return False
doc = load_yaml_rt(meta_path)
deps = doc.get("dependencies")
if not isinstance(deps, list):
return False
def dep_name(item):
if isinstance(item, dict):
return item.get("role") or item.get("name")
return item
keep = CommentedSeq()
removed = []
for item in deps:
name = dep_name(item)
if name in remove:
removed.append(name)
else:
keep.append(item)
if not removed:
return False
if keep:
doc["dependencies"] = keep
else:
if "dependencies" in doc:
del doc["dependencies"]
if dry_run:
print(f"[DRY-RUN] Would rewrite {meta_path}; removed: {', '.join(removed)}")
return True
dump_yaml_rt(doc, meta_path)
print(f"[OK] Rewrote {meta_path}; removed: {', '.join(removed)}")
return True
def dependency_is_unnecessary(consumer_dir: str,
consumer_name: str,
producer_name: str,
provider_vars: Set[str],
provider_handlers: Set[str]) -> bool:
"""Apply heuristic to decide if we can move this dependency."""
# 1) Early usage in defaults/vars/handlers? If yes -> necessary
defaults_files = [p for p in [
path_if_exists(consumer_dir, "defaults/main.yml"),
path_if_exists(consumer_dir, "vars/main.yml"),
path_if_exists(consumer_dir, "handlers/main.yml"),
] if p]
for p in defaults_files:
text = read_text(p)
if first_var_use_offset_in_text(text, provider_vars) is not None:
return False # needs meta dep
# 2) Tasks: any usage before include/import? If yes -> keep meta dep
task_files = gather_yaml_files(os.path.join(consumer_dir, "tasks"), ["**/*.yml", "*.yml"])
for p in task_files:
text = read_text(p)
if not text:
continue
include_off = first_include_offset_for_role(text, producer_name)
var_use_off = first_var_use_offset_in_text(text, provider_vars)
notify_offs = find_notify_offsets_for_handlers(text, provider_handlers)
if var_use_off is not None:
if include_off is None or include_off > var_use_off:
return False # used before include
for noff in notify_offs:
if include_off is None or include_off > noff:
return False # notify before include
# If we get here: no early use, and either no usage at all or usage after include
return True
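The offset comparison in step 2 boils down to: a provider variable read at byte offset `u` keeps the meta dependency unless an include of the provider occurs at some offset `i < u`. A compressed sketch of just that decision:

```python
def usage_needs_meta_dep(include_off, var_use_off):
    # No usage at all → the dependency is movable.
    if var_use_off is None:
        return False
    # Usage with no include, or usage before the include → keep the meta dep.
    return include_off is None or include_off > var_use_off

print(usage_needs_meta_dep(None, 120))  # True: var used, provider never included
print(usage_needs_meta_dep(40, 120))    # False: include happens first
```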
def process_role(role_dir: str,
providers_index: Dict[str, Tuple[Set[str], Set[str]]],
only_role: Optional[str],
dry_run: bool) -> bool:
"""
Returns True if any change suggested/made for this role.
"""
consumer_name = role_name_from_dir(role_dir)
if only_role and only_role != consumer_name:
return False
meta_deps = parse_meta_dependencies(role_dir)
if not meta_deps:
return False
# Build provider vars/handlers accessors
moved: List[str] = []
for producer in meta_deps:
# Only consider local roles we can analyze
producer_dir = path_if_exists(os.path.dirname(role_dir), producer) or path_if_exists(os.path.dirname(roles_root(os.path.dirname(role_dir))), "roles", producer)
if producer not in providers_index:
# Unknown/external role → skip (we cannot verify safety)
continue
pvars, phandlers = providers_index[producer]
if dependency_is_unnecessary(role_dir, consumer_name, producer, pvars, phandlers):
moved.append(producer)
if not moved:
return False
# 1) Remove from meta
meta_path = os.path.join(role_dir, "meta", "main.yml")
update_meta_remove_deps(meta_path, moved, dry_run=dry_run)
# 2) Prepend include block to tasks/01_core.yml or tasks/main.yml
target_tasks = path_if_exists(role_dir, "tasks/01_core.yml")
if not target_tasks:
target_tasks = os.path.join(role_dir, "tasks", "main.yml")
include_block = build_include_block_yaml(consumer_name, moved)
prepend_tasks(target_tasks, include_block, dry_run=dry_run)
return True
def build_providers_index(all_roles: List[str]) -> Dict[str, Tuple[Set[str], Set[str]]]:
"""
Map role_name -> (provided_vars, handler_names)
"""
index: Dict[str, Tuple[Set[str], Set[str]]] = {}
for rd in all_roles:
rn = role_name_from_dir(rd)
index[rn] = (collect_role_defined_vars(rd), collect_role_handler_names(rd))
return index
def main():
parser = argparse.ArgumentParser(
description="Move unnecessary meta dependencies to guarded include_role for performance (preserve comments/quotes)."
)
parser.add_argument(
"--project-root",
default=os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")),
help="Path to project root (default: two levels up from this script).",
)
parser.add_argument(
"--role",
dest="only_role",
default=None,
help="Only process a specific role name (e.g., 'docker-core').",
)
parser.add_argument(
"--dry-run",
action="store_true",
help="Analyze and print planned changes without modifying files.",
)
args = parser.parse_args()
roles = iter_role_dirs(args.project_root)
if not roles:
print(f"[ERR] No roles found under {roles_root(args.project_root)}", file=sys.stderr)
sys.exit(2)
providers_index = build_providers_index(roles)
changed_any = False
for role_dir in roles:
changed = process_role(role_dir, providers_index, args.only_role, args.dry_run)
changed_any = changed_any or changed
if not changed_any:
print("[OK] No unnecessary meta dependencies to move (per heuristic).")
else:
if args.dry_run:
print("[DRY-RUN] Completed analysis. No files were changed.")
else:
print("[OK] Finished moving unnecessary dependencies.")
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,126 @@
#!/usr/bin/env python3
"""
Run the full localhost integration flow entirely inside the infinito Docker container,
without writing any artifacts to the host filesystem.
Catches missing schema/config errors during credential vaulting and skips those apps.
"""
import subprocess
import os
import sys
def main():
repo = os.path.abspath(os.getcwd())
bash_script = '''
set -e
ART=/integration-artifacts
mkdir -p "$ART"
echo testpassword > "$ART/vaultpw.txt"
# 1) Generate inventory
python3 -m cli.build.inventory.full \
--host localhost \
--inventory-style hostvars \
--format yaml \
--output "$ART/inventory.yml"
# 2) Credentials per-app
apps=$(python3 <<EOF
import yaml
inv = yaml.safe_load(open('/integration-artifacts/inventory.yml'))
print(' '.join(inv['_meta']['hostvars']['localhost']['invokable_applications']))
EOF
)
for app in $apps; do
echo "⏳ Vaulting credentials for $app..."
rc=0
output=$(python3 -m cli.create.credentials \
--role-path "/repo/roles/$app" \
--inventory-file "$ART/inventory.yml" \
--vault-password-file "$ART/vaultpw.txt" \
--force 2>&1) || rc=$?
if [ "$rc" -eq 0 ]; then
echo "✅ Credentials generated for $app"
elif echo "$output" | grep -q "No such file or directory"; then
echo "⚠️ Skipping $app (no schema/config)"
elif echo "$output" | grep -q "Plain algorithm for"; then
# Collect all plain-algo keys
keys=( $(echo "$output" | grep -oP "Plain algorithm for '\K[^']+") )
overrides=()
for key in "${keys[@]}"; do
if [[ "$key" == *api_key ]]; then
val=$(python3 - << 'PY'
import random, string
print(''.join(random.choices(string.ascii_letters+string.digits, k=32)))
PY
)
elif [[ "$key" == *password ]]; then
val=$(python3 - << 'PY'
import random, string
print(''.join(random.choices(string.ascii_letters+string.digits, k=12)))
PY
)
else
val=$(python3 - << 'PY'
import random, string
print(''.join(random.choices(string.ascii_letters+string.digits, k=16)))
PY
)
fi
echo " → Overriding $key=$val"
overrides+=("--set" "$key=$val")
done
# Retry with overrides
echo "🔄 Retrying with overrides..."
retry_rc=0
retry_out=$(python3 -m cli.create.credentials \
--role-path "/repo/roles/$app" \
--inventory-file "$ART/inventory.yml" \
--vault-password-file "$ART/vaultpw.txt" \
"${overrides[@]}" \
--force 2>&1) || retry_rc=$?
if [ "$retry_rc" -eq 0 ]; then
echo "✅ Credentials generated for $app (with overrides)"
else
echo "❌ Override failed for $app:"
echo "$retry_out"
fi
else
echo "❌ Credential error for $app:"
echo "$output"
fi
done
# 3) Show generated files
ls -R "$ART" 2>/dev/null
echo "
===== inventory.yml ====="
cat "$ART/inventory.yml"
echo "
===== vaultpw.txt ====="
cat "$ART/vaultpw.txt"
# 4) Deploy
python3 -m cli.deploy \
"$ART/inventory.yml" \
--limit localhost \
--vault-password-file "$ART/vaultpw.txt" \
--verbose
'''
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{repo}:/repo",
        "-w", "/repo",
        "--entrypoint", "bash",
        "infinito:latest",
        "-c", bash_script,
    ]
    print(f"\033[96m> {' '.join(cmd)}\033[0m")
    rc = subprocess.call(cmd)
    sys.exit(rc)

if __name__ == '__main__':
    main()
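The per-key override values in the loop above are produced by small inline Python heredocs with `random.choices`. A minimal sketch of the same suffix-based rule as a single testable helper, assuming the `secrets` module (better suited to credential material than `random`); `override_value` is an illustrative name, not part of the CLI:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def override_value(key: str) -> str:
    """Mirror the heredoc logic: the key suffix picks the value length."""
    if key.endswith("api_key"):
        length = 32      # *api_key  -> 32 chars
    elif key.endswith("password"):
        length = 12      # *password -> 12 chars
    else:
        length = 16      # everything else -> 16 chars
    # secrets.choice draws from a CSPRNG, unlike random.choices
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```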

View File

@@ -1,6 +1,6 @@
#!/usr/bin/env python3
"""
CLI wrapper for Makefile targets within CyMaIS.
CLI wrapper for Makefile targets within Infinito.Nexus.
Invokes `make` commands in the project root directory.
"""
import argparse
@@ -11,8 +11,8 @@ import sys
def main():
parser = argparse.ArgumentParser(
prog='cymais make',
description='Run Makefile targets for CyMaIS project'
prog='infinito make',
description='Run Makefile targets for Infinito.Nexus project'
)
parser.add_argument(
'targets',

View File

@@ -0,0 +1,49 @@
#!/usr/bin/env python3
# cli/meta/applications/invokable.py
import argparse
import sys
import os
# Import filter plugin for get_all_invokable_apps
try:
    from filter_plugins.get_all_invokable_apps import get_all_invokable_apps
except ImportError:
    # Try to adjust sys.path if running outside Ansible
    sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', '..')))
    try:
        from filter_plugins.get_all_invokable_apps import get_all_invokable_apps
    except ImportError:
        sys.stderr.write("Could not import filter_plugins.get_all_invokable_apps. Check your PYTHONPATH.\n")
        sys.exit(1)

def main():
    parser = argparse.ArgumentParser(
        description='List all invokable applications (application_ids) based on invokable paths from categories.yml and available roles.'
    )
    parser.add_argument(
        '-c', '--categories-file',
        default=os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', '..', 'roles', 'categories.yml')),
        help='Path to roles/categories.yml (default: roles/categories.yml at project root)'
    )
    parser.add_argument(
        '-r', '--roles-dir',
        default=os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', '..', 'roles')),
        help='Path to roles/ directory (default: roles/ at project root)'
    )
    args = parser.parse_args()

    try:
        result = get_all_invokable_apps(
            categories_file=args.categories_file,
            roles_dir=args.roles_dir
        )
    except Exception as e:
        sys.stderr.write(f"Error: {e}\n")
        sys.exit(1)

    for app_id in result:
        print(app_id)

if __name__ == '__main__':
    main()
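The nested try/except import at the top of this script is a common pattern for code that must run both as an Ansible filter plugin and as a standalone CLI. A generic sketch of the same fallback, with an illustrative helper name (`import_with_fallback` is not part of the repo):

```python
import importlib
import os
import sys

def import_with_fallback(module_name: str, extra_path: str):
    """Try a plain import first; on ImportError, prepend extra_path
    (e.g. the project root) to sys.path and retry once."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        sys.path.insert(0, os.path.abspath(extra_path))
        return importlib.import_module(module_name)
```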

View File

@@ -1,8 +1,8 @@
# CyMaIS Architecture Overview
# Infinito.Nexus Architecture
## Introduction
CyMaIS (Cyber Master Infrastructure Solution) is a modular, open-source IT infrastructure automation platform designed to simplify the deployment, management, and security of self-hosted environments.
[Infinito.Nexus](https://infinito.nexus) is a modular, open-source IT infrastructure automation platform designed to simplify the deployment, management, and security of self-hosted environments.
It provides a flexible, scalable, and secure architecture based on modern [DevOps](https://en.wikipedia.org/wiki/DevOps) principles, leveraging technologies like [Ansible](https://en.wikipedia.org/wiki/Ansible_(software)), [Docker](https://en.wikipedia.org/wiki/Docker_(software)), and [Infrastructure as Code (IaC)](https://en.wikipedia.org/wiki/Infrastructure_as_code).
@@ -55,4 +55,4 @@ https://github.com/kevinveenbirkenbach/hetzner-arch-luks
---
> *CyMaIS — Modular. Secure. Automated. Decentralized.*
> *Infinito.Nexus — Modular. Secure. Automated. Decentralized.*

View File

@@ -1,6 +1,6 @@
# Docker Build Guide 🚢
This guide explains how to build the **CyMaIS** Docker image with advanced options to avoid common issues (e.g. mirror timeouts) and control build caching.
This guide explains how to build the **Infinito.Nexus** Docker image with advanced options to avoid common issues (e.g. mirror timeouts) and control build caching.
---
@@ -47,7 +47,7 @@ export DOCKER_BUILDKIT=1
docker build \
--network=host \
--no-cache \
-t cymais:latest \
-t infinito:latest \
.
```
@@ -59,23 +59,23 @@ docker build \
* `--no-cache`
Guarantees that changes to package lists or dependencies are picked up immediately by rebuilding every layer.
* `-t cymais:latest`
Tags the resulting image as `cymais:latest`.
* `-t infinito:latest`
Tags the resulting image as `infinito:latest`.
---
## 4. Running the Container
Once built, you can run CyMaIS as usual:
Once built, you can run Infinito.Nexus as usual:
```bash
docker run --rm -it \
-v "$(pwd)":/opt/cymais \
-w /opt/cymais \
cymais:latest --help
-v "$(pwd)":/opt/infinito \
-w /opt/infinito \
infinito:latest --help
```
Mount any host directory into `/opt/cymais/logs` to persist logs across runs.
Mount any host directory into `/opt/infinito/logs` to persist logs across runs.
---
@@ -89,35 +89,35 @@ Mount any host directory into `/opt/cymais/logs` to persist logs across runs.
## 6. Live Development via Volume Mount
The CyMaIS installation inside the container always resides at:
The Infinito.Nexus installation inside the container always resides at:
```
/root/Repositories/github.com/kevinveenbirkenbach/cymais
/root/Repositories/github.com/kevinveenbirkenbach/infinito
```
To apply code changes without rebuilding the image, mount your local installation directory into that static path:
```bash
# 1. Determine the CyMaIS install path on your host
CMAIS_PATH=$(pkgmgr path cymais)
# 1. Determine the Infinito.Nexus install path on your host
INFINITO_PATH=$(pkgmgr path infinito)
# 2. Launch the container with a bind mount:
docker run --rm -it \
-v "${CMAIS_PATH}:/root/Repositories/github.com/kevinveenbirkenbach/cymais" \
-w "/root/Repositories/github.com/kevinveenbirkenbach/cymais" \
cymais:latest make build
-v "${INFINITO_PATH}:/root/Repositories/github.com/kevinveenbirkenbach/infinito" \
-w "/root/Repositories/github.com/kevinveenbirkenbach/infinito" \
infinito:latest make build
```
Or, to test the CLI help interactively:
```bash
docker run --rm -it \
-v "${CMAIS_PATH}:/root/Repositories/github.com/kevinveenbirkenbach/cymais" \
-w "/root/Repositories/github.com/kevinveenbirkenbach/cymais" \
cymais:latest --help
-v "${INFINITO_PATH}:/root/Repositories/github.com/kevinveenbirkenbach/infinito" \
-w "/root/Repositories/github.com/kevinveenbirkenbach/infinito" \
infinito:latest --help
```
Any edits you make in `${CMAIS_PATH}` on your host are immediately reflected inside the container, eliminating the need for repeated `docker build` cycles.
Any edits you make in `${INFINITO_PATH}` on your host are immediately reflected inside the container, eliminating the need for repeated `docker build` cycles.
---

2
docs/TODO.md Normal file
View File

@@ -0,0 +1,2 @@
# TODO
- Move these files to https://hub.cymais.cloud

View File

@@ -1,26 +0,0 @@
# Features
**CyMaIS - Cyber Master Infrastructure Solution** revolutionizes IT infrastructure management, making it simpler, safer, and more adaptable for businesses of all sizes. Here's how it can benefit your organization:
## Effortless Setup and Management 🚀
Setting up and managing IT systems has never been easier. CyMaIS automates complex tasks, whether on Linux servers or personal computers, reducing manual effort and saving valuable time.
## Comprehensive IT Solutions 🛠️
CyMaIS covers everything from essential system setups to advanced configurations, including VPN, Docker, Ansible-based deployments, security optimizations, and monitoring tools. This makes IT management seamless and efficient.
## Tailored for Your Needs 🎯
Every business is unique, and so is CyMaIS! With a modular architecture, it adapts to specific requirements, whether for startups, growing businesses, NGOs, or large enterprises.
## Proactive Monitoring & Maintenance 🔍
With automated updates, system health checks, and security audits, CyMaIS ensures your infrastructure is always up-to-date and running smoothly. Roles such as `sys-hlth-docker-container`, `sys-hlth-btrfs`, and `sys-hlth-webserver` help monitor system integrity.
## Uncompromised Security 🔒
Security is a top priority! CyMaIS includes robust security features like full-disk encryption recommendations, 2FA enforcement, encrypted server deployments (`web-app-keycloak`, `svc-db-openldap`), and secure backup solutions (`sys-bkp-rmt-2-loc`, `svc-bkp-loc-2-usb`).
## User-Friendly with Expert Support 👩‍💻
No need to be a Linux or Docker expert! CyMaIS simplifies deployment with intuitive role-based automation. Documentation and community support make IT administration accessible to all experience levels.
## Open Source Trust & Transparency 🔓
As an open-source project, CyMaIS guarantees transparency, security, and community-driven development, ensuring continuous improvements and adherence to industry best practices.
For further information, check out the [application glossary](roles/application_glosar), [applications ordered by category](roles/application_categories) and the [detailed Ansible role descriptions](roles/ansible_role_glosar).

View File

@@ -1,34 +0,0 @@
# Situation Analysis
This is the Situation Analysis for [CyMaIS](https://cymais.cloud), highlighting the challenges we aim to address.
## Short
The problem stems from businesses and individuals being dependent on monopolistic cloud providers, losing control over their data, facing security risks, and being vulnerable to geopolitical manipulation, while small businesses struggle to set up secure, enterprise-level IT infrastructures due to lack of resources and expertise.
## Explanation
In today's digital landscape, data is predominantly stored in the cloud, controlled by large corporations such as Microsoft, AWS, and other cloud providers. This creates a dependency on these providers, leading to increasingly expensive services and a lack of control over critical business data.
As organizations rely on these monopolistic players for their cloud services, they surrender ownership of their data, becoming vulnerable to the whims of these companies. This dependency puts them at the mercy of cloud and software giants, who not only dictate pricing and service levels but also influence the very governance of data.
Moreover, the ease with which governments, intelligence agencies, and private corporations can access sensitive data is a growing concern. With increasing surveillance capabilities, the privacy of users and businesses is constantly at risk, further amplifying the vulnerability of data stored in centralized cloud infrastructures.
Additionally, the dominance of these companies in sectors like social media further exacerbates the issue, making individuals and organizations susceptible to manipulation and control.
The problem intensifies in times of political unrest or global conflicts. As data is often centrally stored with monopolistic providers, businesses become highly dependent on these entities for accessing their data and services. This dependency increases the risk of coercion or pressure from governments or private corporations, leading to potential **extortion**. Governments may attempt to gain leverage over businesses by threatening access to critical data or services, while private companies may exploit this dependency for their own interests.
In essence, the lack of sovereignty over data and the increasing control of a few monopolistic entities undermine the fundamental values of privacy, security, and independence. Organizations, especially small businesses, are left vulnerable to external pressures, making them pawns in a larger game dominated by these cloud and software giants.
Furthermore, for small businesses, setting up enterprise-level open-source infrastructure with integrated solutions such as **Single Sign-On (SSO)**, **Identity and Access Management (IAM)**, **encryption**, **backup solutions**, and other essential IT services is nearly impossible. These businesses lack the resources, both financial and human, to deploy secure IT infrastructures at an enterprise level.
System administrators in small companies often don't have the specialized knowledge or the capacity to build and maintain such complex infrastructures, which further exacerbates the challenge of securing sensitive business data while ensuring compliance with industry standards.
## Key Points
- Dependency on monopolists
- Loss of data sovereignty
- Geopolitical vulnerabilities
- Lack of resources
- Limited secure infrastructure expertise
- Centralized data storage risks
- Manipulation through social media

View File

@@ -1,40 +0,0 @@
# Market Analysis for CyMaIS in Berlin
## 1. Introduction
Berlin is recognized as one of Europe's leading innovation and technology hubs. The capital is characterized by a dynamic start-up scene, numerous SMEs, and international corporations that drive digital transformation. This creates a promising market for modular IT infrastructure solutions like CyMaIS.
## 2. Market Overview and Business Landscape
- **Diverse Economic Hub:**
Berlin is home to an estimated several tens of thousands of companies—from innovative start-ups to established mid-sized businesses and large enterprises.
- **Digital Innovation:**
The city is known for its high concentration of technology companies, digital service providers, and creative industries constantly seeking efficient IT solutions.
- **Support and Infrastructure:**
Numerous initiatives, funding programs, and well-developed networks of technology parks and coworking spaces support the city's digital progress.
## 3. Level of Digitalization and IT Needs
- **Advanced Yet Heterogeneous Digitalization:**
Many Berlin companies already use modern IT solutions, but traditional businesses often require significant upgrades in integrating advanced infrastructure and cybersecurity measures.
- **Increasing Demands:**
Rising business process complexity and stricter requirements for data protection and security are driving the need for individualized, scalable IT solutions.
## 4. Overall Market Volume (Estimation)
- **Estimated Market Volume:**
Considering the diverse company sizes and varying investment levels—from start-ups to large enterprises—the annual overall market volume for IT infrastructure modernization solutions in Berlin is roughly estimated at **€1–2 billion**.
This figure reflects the aggregate potential of digital transformation initiatives across Berlin's vibrant business ecosystem.
## 5. Price Segments and Investment Readiness
- **Low-Priced Segment:**
Many start-ups and small companies are capable of investing approximately €10,000–30,000 to set up basic infrastructures.
- **Mid-Priced Segment:**
Established SMEs in Berlin are typically prepared to invest between €40,000 and €70,000 in tailored IT solutions to incorporate additional functionalities and security standards.
- **High-Priced Segment:**
Large enterprises and specialized industrial businesses invest in complex integration solutions starting at around €100,000 to implement comprehensive digital transformation projects.
## 6. Competitive Landscape and Positioning
- **High Innovation Pressure:**
Berlin's vibrant IT and digital services sector is highly competitive. To stand out, solutions must be flexible, scalable, and seamlessly integrable.
- **CyMaIS Advantages:**
The modular architecture of CyMaIS allows it to meet the individual requirements of Berlin's diverse businesses—from start-ups to large industrial projects—perfectly. Additionally, its focus on cybersecurity and continuous updates offers a decisive added value.
## 7. Conclusion
Berlin offers an attractive market potential for IT infrastructure solutions. With a vibrant innovation landscape, a considerable overall market volume estimated at €1–2 billion, and numerous companies needing to take the next step in digital transformation, CyMaIS is well positioned as a powerful, modular solution. The combination of a dynamic start-up ecosystem and established businesses promises attractive long-term growth opportunities.

View File

@@ -1,37 +0,0 @@
# Berlin Market Diagrams
## 1. Digitalization in Berlin (Pie Chart)
```mermaid
pie
title Berlin: IT Digitalization Status
"Fully Modernized (25%)": 25
"Partially Digitalized (45%)": 45
"Requires Significant Upgrades (30%)": 30
```
*This pie chart displays the estimated IT digitalization status for Berlin-based companies, with 25% fully modernized, 45% partially digitalized, and 30% requiring major upgrades.*
## 2. Investment Segments in Berlin (Flowchart)
```mermaid
flowchart LR
A[Investment Segments in Berlin]
B["Low-Priced (€10k-30k): 40%"]
C["Mid-Priced (€40k-70k): 40%"]
D["High-Priced (€100k+): 20%"]
A --> B
A --> C
A --> D
```
*This flowchart shows the distribution of investment segments for IT infrastructure projects in Berlin, categorized into low-, mid-, and high-priced solutions.*
## 3. Berlin Market Volume & Drivers (Flowchart)
```mermaid
flowchart TD
A[Berlin IT Infrastructure Market]
B[Market Volume: €1-2 Billion]
C[Drivers: Start-up Ecosystem, Established Firms, Local Initiatives]
A --> B
A --> C
```
*This diagram outlines Berlin's overall market volume (estimated at €1–2 billion) and identifies the main drivers such as the vibrant start-up ecosystem and support from local initiatives.*

View File

@@ -1,77 +0,0 @@
# Market Analysis for CyMaIS in Europe
This analysis provides a detailed overview of the potential for CyMaIS, a modular IT infrastructure solution, in the European market.
## 1. Introduction
CyMaIS addresses the growing need for flexible and scalable IT infrastructure solutions that support companies in their digital transformation. The European market, characterized by diverse economic systems, offers a variety of opportunities and challenges.
## 2. Market Overview and Digitalization in Europe
- **Business Landscape:**
- Europe is home to an estimated 20–25 million companies, most of which are small and medium-sized enterprises (SMEs).
- Business structures vary significantly between regions: while countries such as the Nordic nations, Estonia, or Germany are highly advanced, other markets lag behind in certain aspects.
- **Degree of Digitalization:**
- Basic digital technologies have been implemented in many European companies; however, recent studies indicate that only about 50–60% have reached a basic level of digitalization.
- A large share of companies, approximately 70–80%, faces the challenge of further modernizing their IT infrastructures, particularly in areas like cybersecurity and automation.
## 3. Analysis of the Demand for IT Infrastructure Solutions
- **Target Market:**
- There is significant demand across Europe for solutions that modernize outdated IT structures while meeting increased requirements for data protection, security, and efficiency.
- SMEs, as well as larger companies in sectors with high security and compliance needs, can particularly benefit from specialized, modular solutions like CyMaIS.
- **Core Requirements:**
- Integration of modern IT components
- Enhancement of cybersecurity
- Support for automation and data analytics
## 4. Pricing Segments and Cost Structure
CyMaIS offers solutions that can be tailored to different budgets and requirements:
- **Low-Priced Segment (Basic Setup):**
- **Costs:** Approximately €10,000–30,000
- **Target Group:** Small companies requiring standardized IT solutions
- **Mid-Priced Segment:**
- **Costs:** Approximately €40,000–70,000
- **Target Group:** Medium-sized companies with specific customization needs
- **High-Priced Segment (Complex, Customized Solutions):**
- **Costs:** From €100,000 and upwards
- **Target Group:** Large companies and projects with extensive integration requirements
## 5. Total Market Volume and Revenue Potential
- **Total Market Volume:**
- The revenue potential for IT infrastructure solutions in Europe is estimated at approximately **€300–500 billion**.
- This figure includes investments in hardware, software, consulting and integration services, as well as ongoing IT support services.
- **Growth Drivers:**
- The continuous need for digital transformation
- Increasing security requirements (cybersecurity)
- Government funding programs and initiatives that support digitalization across many European countries
## 6. Competitive Environment and Positioning of CyMaIS
- **Competition:**
- The European market is fragmented: in addition to major global IT service providers, there are numerous local providers.
- Cross-border differences create diverse market conditions where specialized, modular solutions can offer a strategic advantage.
- **Competitive Advantages of CyMaIS:**
- **Modularity and Flexibility:** Enables tailor-made adaptation to individual business requirements
- **Scalability:** Ranges from basic solutions for SMEs to complex system integrations for large enterprises
- **Seamless Integration:** Incorporates modern IT components, including advanced security solutions
## 7. Opportunities and Challenges
- **Opportunities:**
- Increasing investments in digital transformation and cybersecurity
- High demand in under-served markets and among SMEs needing to modernize their IT infrastructures
- Potential for international expansion through adaptable, modular solutions
- **Challenges:**
- Varied levels of digitalization and differing economic conditions across European countries
- Intense competition and pricing pressure, particularly in mature markets
- Requirements for country-specific regulations and compliance necessitating customized adaptations
## 8. Conclusion
The European market offers significant potential for CyMaIS. With an estimated total market volume of €300–500 billion and a large number of companies needing to modernize their IT infrastructures, CyMaIS is well positioned as a flexible and scalable solution—ideal for meeting the diverse requirements of the European market. In the long term, ongoing digitalization and increasing security needs present attractive growth opportunities.
## Sources
- Analysis based on an interactive discussion with [ChatGPT](https://chatgpt.com/c/67f95f70-865c-800f-bd97-864a36f9b498) on April 11, 2025.

View File

@@ -1,38 +0,0 @@
# Europe Market Diagrams
## 1. Digitalization Status (Pie Chart)
```mermaid
pie
title Europe: Digitalization Status
"Fully Modernized (20%)": 20
"Partially Digitalized (50%)": 50
"Needs Advanced Modernization (30%)": 30
```
*This pie chart illustrates the digitalization status across European companies, with 20% fully modernized, 50% partially digitalized, and 30% needing advanced modernization.*
## 2. Investment Segments (Flowchart)
```mermaid
flowchart LR
A[European Investment Segments]
B["Low-Priced (€10k-30k): 35%"]
C["Mid-Priced (€40k-70k): 45%"]
D["High-Priced (€100k+): 20%"]
A --> B
A --> C
A --> D
```
*This flowchart depicts the breakdown of IT investment segments in Europe, with approximate percentages for low-, mid-, and high-priced solutions.*
## 3. Overall Market Volume & Drivers (Flowchart)
```mermaid
flowchart TD
A[European IT Infrastructure Market]
B[Market Volume: €300-500 Billion]
C[Drivers: Digital Transformation, Cybersecurity, Govt. Initiatives]
A --> B
A --> C
```
*This diagram presents the European market's overall volume (estimated at €300–500 billion) and highlights the main growth drivers such as digital transformation initiatives and cybersecurity needs.*

View File

@@ -1,83 +0,0 @@
# Market Analysis for CyMaIS in Germany
This analysis provides a detailed overview of the market potential of CyMaIS, a modular solution for establishing and managing modern IT infrastructures, in the German market.
## 1. Introduction
CyMaIS addresses the increasing need for modern, flexible IT infrastructure solutions in Germany. In particular, small and medium-sized enterprises (SMEs) face the challenge of advancing their digitalization while meeting security requirements. CyMaIS offers modular, customizable solutions ranging from basic setups to complex integration projects.
## 2. Market Overview and Digitalization in Germany
- **Business Landscape:**
- There are approximately 3.5 million companies in Germany.
- Over 99% of these companies are SMEs.
- **Degree of Digitalization:**
- About 60–70% have already implemented basic digital technologies.
- An estimated 75–85% of companies require additional support to build modern IT infrastructures (including cybersecurity, automation, and data management).
## 3. Analysis of the Demand for IT Infrastructure Solutions
- **Target Market:**
- Approximately 2.6 to 3 million companies, predominantly SMEs, face the challenge of modernizing outdated or incomplete IT structures.
- Industries with high security requirements and a strong need for digital transformation particularly benefit from specialized solutions like CyMaIS.
- **Core Requirements:**
- Integration of modern IT components
- Enhancement of cybersecurity
- Support for process automation and data analytics
## 4. Pricing Segments and Cost Structure
CyMaIS caters to different pricing segments in order to meet the diverse needs of companies:
- **Low-Priced Segment (Basic Setup):**
- **Costs:** Approximately €10,000–30,000
- **Target Group:** Smaller companies and standardized IT requirements
- **Market Share:** Estimated 30–40% of potential customers
- **Mid-Priced Segment:**
- **Costs:** Approximately €40,000–70,000
- **Target Group:** Medium-sized companies with individual customization needs
- **Market Share:** Around 20–25% of companies
- **High-Priced Segment (Complex, Customized Solutions):**
- **Costs:** Starting from €100,000 and above
- **Target Group:** Large companies and highly specialized projects
- **Market Share:** About 5–10% of potential customers
## 5. Total Market Volume and Revenue Potential
- **Market Volume:**
- The total market volume for IT infrastructure solutions in Germany is estimated at approximately **€80–120 billion**.
- **Influencing Factors:**
- The scope of required solutions
- Consulting and integration services
- Ongoing investments in cybersecurity and digitalization
- **Growth Drivers:**
- Increasing digitalization across all industries
- Rising security requirements (cybersecurity)
- Government programs and initiatives supporting digital transformation
## 6. Competitive Environment and Positioning of CyMaIS
- **Competition:**
- The market for IT infrastructure solutions in Germany is fragmented, with numerous providers offering standardized as well as specialized solutions.
- **Competitive Advantages of CyMaIS:**
- **Modularity:** Flexible adaptation to individual business needs
- **Scalability:** From basic setups to complex systems
- **Integration:** Seamless incorporation of modern IT components, including security solutions
## 7. Opportunities and Challenges
- **Opportunities:**
- Growing demand for digital transformation and security solutions
- High market penetration among SMEs that are yet to modernize their IT infrastructures
- Government funding and initiatives for digitalization
- **Challenges:**
- Strong competition and pricing pressure
- Varied IT and digitalization levels across companies
- Technological complexity and the need for customized adaptations
## 8. Conclusion
The German IT market offers significant potential for CyMaIS. With an estimated market volume of €80–120 billion and approximately 2.6 to 3 million companies needing to modernize their IT infrastructures, CyMaIS is well positioned. The modular and scalable nature of its solutions enables it to serve both small and large companies with individual requirements. In the long term, ongoing digitalization and increasing security demands present attractive growth opportunities for CyMaIS.
## Sources
- Analysis based on a conversation conducted with [ChatGPT](https://chatgpt.com/share/67f9608d-3904-800f-a9ca-9b893e252c05) on April 11, 2025.

View File

@@ -1,37 +0,0 @@
# Germany Market Diagrams
## 1. Digitalization / IT Modernization Need (Pie Chart)
```mermaid
pie
title Germany: IT Modernization Status
"Fully Modernized (20%)": 20
"Partially Digitalized (30%)": 30
"Requires Major Modernization (50%)": 50
```
*This diagram shows the estimated distribution of digitalization among German companies: 20% are fully modernized, 30% are partially digitalized, and 50% need major IT upgrades.*
## 2. Investment/Price Segments (Flowchart)
```mermaid
flowchart LR
A[Investment Segments]
B["Low-Priced (€10k-30k): 40%"]
C["Mid-Priced (€40k-70k): 40%"]
D["High-Priced (€100k+): 20%"]
A --> B
A --> C
A --> D
```
*This flowchart represents the distribution of investment segments in Germany, indicating that approximately 40% of projects fall into the low- and mid-priced categories each, with 20% in the high-priced bracket.*
## 3. Overall Market Volume & Drivers (Flowchart)
```mermaid
flowchart TD
A[German IT Infrastructure Market]
B[Market Volume: €80-120 Billion]
C[Drivers: Digital Transformation, Cybersecurity, Integration]
A --> B
A --> C
```
*This diagram outlines the overall market volume (estimated at €80–120 billion) and the key drivers shaping the demand for IT infrastructure solutions in Germany.*

View File

@@ -1,77 +0,0 @@
# Global Market Analysis for CyMaIS
This analysis provides a detailed overview of the global potential for CyMaIS, a modular IT infrastructure solution, addressing the growing worldwide demand for digital transformation and advanced cybersecurity measures.
## 1. Introduction
CyMaIS is designed to support enterprises in modernizing their IT infrastructures. As digital transformation accelerates globally, organizations of all sizes require scalable and flexible solutions to manage cybersecurity, automation, and data management. This analysis evaluates the global market potential for CyMaIS across diverse economic regions.
## 2. Global Market Overview and Digitalization
- **Business Landscape:**
- There are estimated to be hundreds of millions of companies worldwide, with tens of millions being small and medium-sized enterprises (SMEs).
- Developed markets (North America, Europe, parts of Asia) typically exhibit higher digitalization rates, whereas emerging markets are rapidly catching up.
- **Degree of Digitalization:**
- Many large enterprises have implemented advanced digital technologies, while a significant proportion of SMEs—potentially over 70% globally—still need to progress beyond basic digitalization.
- This gap is particularly apparent in regions where legacy systems are prevalent or where investment in IT modernization has been historically low.
## 3. Analysis of the Demand for IT Infrastructure Solutions
- **Target Market:**
- Globally, the demand for modern IT infrastructure solutions is strong due to rising cybersecurity threats, the need for automation, and the increasing reliance on data analytics.
- Industries across sectors—from finance and manufacturing to healthcare and retail—are actively seeking solutions to overhaul outdated IT systems.
- **Core Requirements:**
- Seamless integration of modern IT components
- Robust cybersecurity measures
- Tools for process automation and data-driven decision-making
## 4. Pricing Segments and Cost Structure
CyMaIS offers a range of solutions tailored to different budget levels and technical needs, including:
- **Low-Priced Segment (Basic Setup):**
- **Costs:** Approximately €10,000–30,000
- **Target Group:** Small companies looking for standardized IT solutions
- **Mid-Priced Segment:**
- **Costs:** Approximately €40,000–70,000
- **Target Group:** Medium-sized companies with customization requirements
- **High-Priced Segment (Complex, Customized Solutions):**
- **Costs:** From €100,000 upwards
- **Target Group:** Large enterprises and projects with extensive integration and security needs
## 5. Total Market Volume and Revenue Potential
- **Global Market Volume:**
- The overall revenue potential for modern IT infrastructure solutions worldwide is substantial, with estimates ranging between **€1–1.5 trillion**.
- This figure comprises investments in hardware, software, consulting, integration services, and ongoing IT support.
- **Growth Drivers:**
- The accelerating pace of digital transformation worldwide
- Increasing incidence of cybersecurity threats
- Government initiatives and private-sector investments that promote digitalization
## 6. Competitive Environment and Positioning of CyMaIS
- **Competition:**
- The global market is highly competitive, featuring major multinational IT service providers as well as numerous regional and niche players.
- Diverse regulatory environments and economic conditions across regions create both challenges and opportunities for market entrants.
- **Competitive Advantages of CyMaIS:**
- **Modularity and Flexibility:** Allows tailored solutions to meet a wide range of business needs
- **Scalability:** Suitable for organizations from startups to multinational corporations
- **Integration Capabilities:** Supports seamless incorporation of modern IT components along with advanced cybersecurity features
## 7. Opportunities and Challenges
- **Opportunities:**
- Rapid digital transformation across all regions creates a sustained demand for IT modernization
- High potential in emerging markets where digital infrastructure is underdeveloped
- Opportunities for strategic partnerships and government-driven digital initiatives
- **Challenges:**
- Navigating diverse regulatory landscapes and varying levels of IT maturity
- Intense global competition and pricing pressures
- Continuously evolving cybersecurity threats and technological changes that necessitate ongoing innovation
## 8. Conclusion
The global market presents significant opportunities for CyMaIS. With an estimated market volume of €1–1.5 trillion and millions of companies worldwide in need of modernized IT infrastructures, CyMaIS is well positioned to capture a diverse range of customers. Its modular and scalable solutions can meet the unique challenges and requirements of different markets, making it a competitive choice in the rapidly evolving field of digital transformation and cybersecurity.
## Sources
- Analysis based on an interactive discussion with [ChatGPT](https://chat.openai.com) on April 11, 2025.

View File

@@ -1,37 +0,0 @@
# Global Market Diagrams
## 1. Global Digitalization Status (Pie Chart)
```mermaid
pie
title Global Digitalization Status
"Advanced Digitalization (30%)": 30
"Moderate Digitalization (40%)": 40
"Needs Significant Modernization (30%)": 30
```
*This pie chart shows an estimated global digitalization distribution: 30% of companies are advanced, 40% have moderate digitalization, and 30% require significant modernization.*
## 2. Global Investment Segments (Flowchart)
```mermaid
flowchart LR
A[Global Investment Segments]
    B["Low-Priced (€10k-30k): 40%"]
    C["Mid-Priced (€40k-70k): 40%"]
    D["High-Priced (€100k+): 20%"]
A --> B
A --> C
A --> D
```
*This flowchart illustrates the global distribution of investment segments: roughly 40% of IT projects fall into each of the low- and mid-price categories, with 20% in the high-price category.*
## 3. Overall Global Market Volume & Drivers (Flowchart)
```mermaid
flowchart TD
A[Global IT Infrastructure Market]
    B["Market Volume: €1-1.5 Trillion"]
    C["Drivers: Accelerated Digitalization, Cybersecurity, Global Investments"]
A --> B
A --> C
```
*This diagram outlines the global market volume (estimated between €1–1.5 trillion) and the key factors fueling growth, such as digital transformation and cybersecurity initiatives.*

View File

@@ -1,53 +0,0 @@
# Migration Feature
## Seamless Migration of Existing Software Solutions to CyMaIS
CyMaIS is designed to simplify the migration of existing software solutions and IT infrastructures. The focus is on protecting existing investments while enabling the benefits of a modern and unified platform.
---
## Integration of Existing Applications
Existing applications can be easily integrated into the [CyMaIS](https://example.com) dashboard. There is no need to migrate or modify existing software — CyMaIS provides a central interface to access and manage already deployed systems.
---
## Parallel Operation of Existing Infrastructure
CyMaIS supports a parallel operation model, allowing the existing IT infrastructure to run alongside CyMaIS without disruption. This enables a step-by-step migration strategy where applications and user groups can be transitioned gradually.
---
## Flexible User Management and Single Sign-On (SSO)
CyMaIS offers flexible user management by supporting multiple directory services:
- [Microsoft Active Directory (AD)](https://en.wikipedia.org/wiki/Active_Directory)
- [LDAP (Lightweight Directory Access Protocol)](https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol)
In both scenarios, centralized authentication is provided through [Keycloak](https://www.keycloak.org/), enabling modern [Single Sign-On (SSO)](https://en.wikipedia.org/wiki/Single_sign-on) capabilities — not only for applications managed by CyMaIS but also for existing external services.
---
## Key Points
- Simple migration of existing software solutions
- Integration of existing apps into the dashboard
- Parallel operation of CyMaIS and existing infrastructure is fully supported
- User management via [Active Directory](https://en.wikipedia.org/wiki/Active_Directory) or [LDAP](https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol)
- Central authentication with [SSO](https://en.wikipedia.org/wiki/Single_sign-on) using [Keycloak](https://www.keycloak.org/)
---
## Summary of Migration Benefits
| Feature | Description |
|--------------------------------|-------------------------------------------------------------------|
| Easy Application Integration | Integrate existing applications into the CyMaIS dashboard |
| Parallel Operation Supported | Continue using your current infrastructure without disruption |
| Flexible User Management | Support for AD and LDAP directory services |
| Single Sign-On (SSO) | Centralized authentication via Keycloak |
---
CyMaIS enables a smooth and controlled migration path — customized to the individual needs of your organization.

View File

@@ -2,7 +2,7 @@
## Ansible Vault Basics
CyMaIS uses Ansible Vault to protect sensitive data (e.g. passwords). Use these common commands:
Infinito.Nexus uses Ansible Vault to protect sensitive data (e.g. passwords). Use these common commands:
### Edit an Encrypted File
```bash
ansible-vault edit path/to/file.yml
```
View File

@@ -1,6 +1,6 @@
# 🚀 Deployment Guide
This section explains how to deploy and manage the **Cyber Master Infrastructure Solution (CyMaIS)** using Ansible. CyMaIS uses a collection of Ansible tasks, which are controlled via different **"modes"** — such as **updates**, **backups**, **resets**, and **cleanup** operations.
This section explains how to deploy and manage **[Infinito.Nexus](https://infinito.nexus)** using Ansible. Infinito.Nexus uses a collection of Ansible tasks, which are controlled via different **"modes"** — such as **updates**, **backups**, **resets**, and **cleanup** operations.
---
@@ -9,27 +9,27 @@ This section explains how to deploy and manage the **Cyber Master Infrastructure
Before deploying, ensure the following are in place:
- **🧭 Inventory File:** A valid Ansible inventory file that defines your target systems (servers, personal computers, etc.). Adjust example paths to your environment.
- **📦 CyMaIS Installed:** Install via [Kevin's Package-Manager](https://github.com/kevinveenbirkenbach/package-manager).
- **📦 Infinito.Nexus Installed:** Install via [Kevin's Package-Manager](https://github.com/kevinveenbirkenbach/package-manager).
- **🔐 (Optional) Vault Password File:** If you don't want to enter your vault password interactively, create a password file.
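A minimal sketch of creating such a password file; the path, filename, and password below are illustrative placeholders, not part of the official docs:

```shell
# Write the vault password to a file only your user can read.
# Path and password here are illustrative placeholders.
printf '%s\n' 'my-vault-password' > "$HOME/.vault_pass"
chmod 600 "$HOME/.vault_pass"
# Plain Ansible would consume it via:
#   ansible-playbook site.yml --vault-password-file ~/.vault_pass
# (The infinito wrapper may forward a similar flag; check `infinito playbook --help`.)
```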
---
## 📘 Show CyMaIS Help
## 📘 Show Infinito.Nexus Help
To get a full overview of available options and usage instructions, run:
```bash
cymais --help
infinito --help
```
---
## 💡 Example Deploy Command
To deploy CyMaIS on a personal computer (e.g., a laptop), you can run:
To deploy Infinito.Nexus on a personal computer (e.g., a laptop), you can run:
```bash
cymais playbook \
infinito playbook \
--limit hp-spectre-x360 \
--host-type personal-computer \
--update \
@@ -41,7 +41,7 @@ cymais playbook \
| Parameter | Description |
|----------|-------------|
| `playbook` | Executes the playbook subcommand of CyMaIS. |
| `playbook` | Executes the playbook subcommand of Infinito.Nexus. |
| `--limit hp-spectre-x360` | Limits execution to a specific host (`hp-spectre-x360`). |
| `--host-type personal-computer` | Defines the host type. Default is `server`; here it is set to `personal-computer`. |
| `--update` | Enables update mode to apply software or configuration updates. |
@@ -64,7 +64,7 @@ To avoid typing your vault password interactively, you can provide a file:
## 🔍 Full Command-Line Reference
Heres a breakdown of all available parameters from `cymais playbook --help`:
Here's a breakdown of all available parameters from `infinito playbook --help`:
| Argument | Description |
|----------|-------------|
@@ -87,7 +87,7 @@ Heres a breakdown of all available parameters from `cymais playbook --help`:
You can mix and match modes like this:
```bash
cymais playbook --update --backup --cleanup pcs.yml
infinito playbook --update --backup --cleanup pcs.yml
```
This will update the system, create a backup, and clean up unnecessary files in one run.

View File

@@ -1,9 +1,9 @@
# Administrator Guide
This guide is for **system administrators** who are deploying and managing CyMaIS infrastructure.
This guide is for **system administrators** who are deploying and managing Infinito.Nexus infrastructure.
## Setting Up CyMaIS 🏗️
Follow these guides to install and configure CyMaIS:
## Setting Up Infinito.Nexus 🏗️
Follow these guides to install and configure Infinito.Nexus:
- [Setup Guide](SETUP_GUIDE.md)
- [Configuration Guide](CONFIGURATION.md)
- [Deployment Guide](DEPLOY.md)
@@ -14,9 +14,9 @@ Follow these guides to install and configure CyMaIS:
- **Application Hosting** - Deploy services like `Nextcloud`, `Matrix`, `Gitea`, and more.
- **Networking & VPN** - Configure `WireGuard`, `OpenVPN`, and `Nginx Reverse Proxy`.
## Managing & Updating CyMaIS 🔄
- Regularly update services using `update-docker`, `update-pacman`, or `update-apt`.
- Monitor system health with `sys-hlth-btrfs`, `sys-hlth-webserver`, and `sys-hlth-docker-container`.
- Automate system maintenance with `sys-lock`, `sys-cln-bkps-service`, and `sys-rpr-docker-hard`.
## Managing & Updating Infinito.Nexus 🔄
- Regularly update services using `update-pacman` or `update-apt`.
- Monitor system health with `sys-ctl-hlth-btrfs`, `sys-ctl-hlth-webserver`, and `sys-ctl-hlth-docker-container`.
- Automate system maintenance with `sys-lock`, `sys-ctl-cln-bkps`, and `sys-ctl-rpr-docker-hard`.
For more details, refer to the specific guides above.

View File

@@ -1,27 +1,27 @@
# Security Guidelines
CyMaIS is designed with security in mind. However, while following our guidelines can greatly improve your systems security, no IT system can be 100% secure. Please report any vulnerabilities as soon as possible.
Infinito.Nexus is designed with security in mind. However, while following our guidelines can greatly improve your system's security, no IT system can be 100% secure. Please report any vulnerabilities as soon as possible.
In addition to the user security guidelines, administrators have additional responsibilities to secure the entire system:
- **Deploy on an Encrypted Server**
It is recommended to install CyMaIS on an encrypted server to prevent hosting providers from accessing end-user data. For a practical guide on setting up an encrypted server, refer to the [Hetzner Arch LUKS repository](https://github.com/kevinveenbirkenbach/hetzner-arch-luks) 🔐. (Learn more about [disk encryption](https://en.wikipedia.org/wiki/Disk_encryption) on Wikipedia.)
It is recommended to install Infinito.Nexus on an encrypted server to prevent hosting providers from accessing end-user data. For a practical guide on setting up an encrypted server, refer to the [Hetzner Arch LUKS repository](https://github.com/kevinveenbirkenbach/hetzner-arch-luks) 🔐. (Learn more about [disk encryption](https://en.wikipedia.org/wiki/Disk_encryption) on Wikipedia.)
- **Centralized User Management & SSO**
For robust authentication and central user management, set up CyMaIS using Keycloak and LDAP.
For robust authentication and central user management, set up Infinito.Nexus using Keycloak and LDAP.
This configuration enables centralized [Single Sign-On (SSO)](https://en.wikipedia.org/wiki/Single_sign-on), simplifying user management and boosting security.
- **Enforce 2FA and Use a Password Manager**
Administrators should also enforce [2FA](https://en.wikipedia.org/wiki/Multi-factor_authentication) and use a password manager with auto-generated passwords. We again recommend [KeePass](https://keepass.info/). The KeePass database can be stored securely in your Nextcloud instance and synchronized between devices.
- **Avoid Root Logins & Plaintext Passwords**
CyMaIS forbids logging in via the root user or using simple passwords. Instead, an SSH key must be generated and transferred during system initialization. When executing commands as root, always use `sudo` (or, if necessary, `sudo su`—but only if you understand the risks). (More information on [SSH](https://en.wikipedia.org/wiki/Secure_Shell) and [sudo](https://en.wikipedia.org/wiki/Sudo) is available on Wikipedia.)
Infinito.Nexus forbids logging in via the root user or using simple passwords. Instead, an SSH key must be generated and transferred during system initialization. When executing commands as root, always use `sudo` (or, if necessary, `sudo su`—but only if you understand the risks). (More information on [SSH](https://en.wikipedia.org/wiki/Secure_Shell) and [sudo](https://en.wikipedia.org/wiki/Sudo) is available on Wikipedia.)
- **Manage Inventories Securely**
Your inventories for running CyMaIS should be managed in a separate repository and secured with tools such as [Ansible Vault](https://en.wikipedia.org/wiki/Encryption) 🔒. Sensitive credentials must never be stored in plaintext; use a password file to secure these details.
Your inventories for running Infinito.Nexus should be managed in a separate repository and secured with tools such as [Ansible Vault](https://en.wikipedia.org/wiki/Encryption) 🔒. Sensitive credentials must never be stored in plaintext; use a password file to secure these details.
- **Reporting Vulnerabilities**
If you discover a security vulnerability in CyMaIS, please report it immediately. We encourage proactive vulnerability reporting so that issues can be addressed as quickly as possible. Contact our security team at [security@cymais.cloud](mailto:security@cymais.cloud)
If you discover a security vulnerability in Infinito.Nexus, please report it immediately. We encourage proactive vulnerability reporting so that issues can be addressed as quickly as possible. Contact our security team at [security@infinito.nexus](mailto:security@infinito.nexus)
**DO NOT OPEN AN ISSUE.**
---

View File

@@ -1,26 +1,26 @@
# Setup Guide
To setup CyMaIS follow this steps:
To set up Infinito.Nexus, follow these steps:
## Prerequisites
Before you setup CyMaIS you need to install [Kevin's Package Manager](https://github.com/kevinveenbirkenbach/package-manager).
Before you set up Infinito.Nexus, you need to install [Kevin's Package Manager](https://github.com/kevinveenbirkenbach/package-manager).
Follow the installation instructions described [here](https://github.com/kevinveenbirkenbach/package-manager).
## Setup CyMaIS
## Setup Infinito.Nexus
To setup CyMaIS execute:
To set up Infinito.Nexus, execute:
```bash
pkgmgr install cymais
pkgmgr install infinito
```
This command will setup CyMaIS on your system with the alias **cymais**.
This command will set up Infinito.Nexus on your system under the alias **infinito**.
## Get Help
After you setuped CyMaIS you can receive more help by executing:
After you have set up Infinito.Nexus, you can get more help by executing:
```bash
cymais --help
infinito --help
```

View File

@@ -1,6 +1,6 @@
## 📖 CyMaIS.Cloud Ansible & Python Directory Guide
## 📖 Infinito.Nexus Ansible & Python Directory Guide
This document provides a **decision matrix** for when to use each default Ansible plugin and module directory in the context of **CyMaIS.Cloud development** with Ansible and Python. It links to official docs, explains use-cases, and points back to our conversation.
This document provides a **decision matrix** for when to use each default Ansible plugin and module directory in the context of **Infinito.Nexus development** with Ansible and Python. It links to official docs, explains use-cases, and points back to our conversation.
---
@@ -31,12 +31,12 @@ ansible-repo/
### 🎯 Decision Matrix: Which Folder for What?
| Folder | Type | Use-Case | Example (CyMaIS.Cloud) | Emoji |
| Folder | Type | Use-Case | Example (Infinito.Nexus) | Emoji |
| -------------------- | -------------------- | ---------------------------------------- | ----------------------------------------------------- | ----- |
| `library/` | **Module** | Write idempotent actions | `cloud_network.py`: manage VPCs, subnets | 📦 |
| `filter_plugins/` | **Filter plugin** | Jinja2 data transforms in templates/vars | `to_camel_case.py`: convert keys for API calls | 🔍 |
| `lookup_plugins/` | **Lookup plugin** | Fetch external/secure data at runtime | `vault_lookup.py`: pull secrets from CyMaIS Vault | 👉 |
| `module_utils/` | **Utility library** | Shared Python code for modules | `cymais_client.py`: common API client base class | 🛠️ |
| `lookup_plugins/` | **Lookup plugin** | Fetch external/secure data at runtime | `vault_lookup.py`: pull secrets from Infinito.Nexus Vault | 👉 |
| `module_utils/` | **Utility library** | Shared Python code for modules | `infinito_client.py`: common API client base class | 🛠️ |
| `action_plugins/` | **Action plugin** | Complex task orchestration wrappers | `deploy_stack.py`: sequence Terraform + Ansible steps | ⚙️ |
| `callback_plugins/` | **Callback plugin** | Customize log/report behavior | `notify_slack.py`: send playbook status to Slack | 📣 |
| `inventory_plugins/` | **Inventory plugin** | Dynamic host/group sources | `azure_inventory.py`: list hosts from Azure tags | 🌐 |
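To make the `filter_plugins/` row concrete, here is a minimal sketch of a custom Jinja2 filter. The module layout follows Ansible's standard `FilterModule` convention; the `to_camel_case` implementation itself is an illustrative assumption, not code from the repository:

```python
# filter_plugins/to_camel_case.py: minimal custom filter sketch (illustrative).
def to_camel_case(value):
    """Convert a snake_case string to camelCase, e.g. for API payload keys."""
    head, *rest = value.split('_')
    return head + ''.join(part.capitalize() for part in rest)

class FilterModule(object):
    """Ansible discovers custom filters through this class's filters() mapping."""
    def filters(self):
        return {'to_camel_case': to_camel_case}
```

In a template this would be used as `{{ 'vpc_subnet_id' | to_camel_case }}`, yielding `vpcSubnetId`.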
@@ -96,16 +96,16 @@ ansible-repo/
---
### 🚀 CyMaIS.Cloud Best Practices
### 🚀 Infinito.Nexus Best Practices
* **Organize modules** by service under `library/cloud/` (e.g., `vm`, `network`, `storage`).
* **Shared client code** in `module_utils/cymais/` for authentication, request handling.
* **Secrets lookup** via `lookup_plugins/vault_lookup.py` pointing to CyMaIS Vault.
* **Shared client code** in `module_utils/infinito/` for authentication, request handling.
* **Secrets lookup** via `lookup_plugins/vault_lookup.py` pointing to Infinito.Nexus Vault.
* **Filters** to normalize data formats from cloud APIs (e.g., `snake_to_camel`).
* **Callbacks** to stream playbook results into CyMaIS Monitoring.
* **Callbacks** to stream playbook results into Infinito.Nexus Monitoring.
Use this matrix as your **single source of truth** when extending Ansible for CyMaIS.Cloud! 👍
Use this matrix as your **single source of truth** when extending Ansible for Infinito.Nexus! 👍
---
---
This matrix was created with the help of ChatGPT 🤖—see our conversation [here](https://chatgpt.com/canvas/shared/682b1a62d6dc819184ecdc696c51290a).

View File

@@ -1,11 +1,11 @@
Developer Guide
===============
Welcome to the **CyMaIS Developer Guide**! This guide provides essential information for developers who want to contribute to the CyMaIS open-source project.
Welcome to the **Infinito.Nexus Developer Guide**! This guide provides essential information for developers who want to contribute to the Infinito.Nexus open-source project.
Explore CyMaIS Solutions
Explore Infinito.Nexus Solutions
------------------------
CyMaIS offers various solutions for IT infrastructure automation. Learn more about the available applications:
Infinito.Nexus offers various solutions for IT infrastructure automation. Learn more about the available applications:
- :doc:`../../../roles/application_glosar`
- :doc:`../../../roles/application_categories`
@@ -16,21 +16,21 @@ For Developers
Understanding Ansible Roles
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
CyMaIS is powered by **Ansible** roles to automate deployments. Developers can explore the technical details of our roles here:
Infinito.Nexus is powered by **Ansible** roles to automate deployments. Developers can explore the technical details of our roles here:
- :doc:`../../../roles/ansible_role_glosar`
Contributing to CyMaIS
Contributing to Infinito.Nexus
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Want to contribute to the project or explore the source code? Check out our **GitHub repository**:
- `CyMaIS GitHub Repository <https://github.com/kevinveenbirkenbach/cymais/tree/master/roles>`_
- `Infinito.Nexus GitHub Repository <https://s.infinito.nexus/code/tree/master/roles>`_
Contribution Guidelines
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. **Fork the Repository** Start by forking the CyMaIS repository.
1. **Fork the Repository** - Start by forking the Infinito.Nexus repository.
2. **Create a New Branch** - Make changes in a dedicated branch.
3. **Follow Coding Standards** - Ensure your code is well-documented and follows best practices.
4. **Submit a Pull Request** - Once your changes are tested, submit a PR for review.
@@ -42,12 +42,12 @@ For detailed guidelines, refer to:
Community & Support
-------------------
If you have questions or need help, visit the **CyMaIS Information Hub**:
If you have questions or need help, visit the **Infinito.Nexus Information Hub**:
- `hub.cymais.cloud <https://hub.cymais.cloud>`_
- `hub.infinito.nexus <https://hub.infinito.nexus>`_
This is the best place to ask questions, get support, and collaborate with other contributors.
Stay connected, collaborate, and help improve CyMaIS together!
Stay connected, collaborate, and help improve Infinito.Nexus together!
Happy coding! 🚀

View File

@@ -1,6 +1,6 @@
# Enterprise Guide
Are you looking for a **reliable IT infrastructure** for your business or organization? **CyMaIS** is here to help!
Are you looking for a **reliable IT infrastructure** for your business or organization? **Infinito.Nexus** is here to help!
## Who Can Benefit? 🎯
**Small & Medium Businesses** - IT infrastructure with everything you need included, e.g. data clouds, mail servers, VPNs, homepages, documentation tools, etc.
@@ -8,7 +8,7 @@ Are you looking for a **reliable IT infrastructure** for your business or organi
**NGOs & Organizations** - Secure, cost-effective infrastructure solutions on Open Source Base
**Journalists & Content Creators** - Host your content on your own servers, share it via the Fediverse, and avoid censorship
## Why Choose CyMaIS? 🚀
## Why Choose Infinito.Nexus? 🚀
- **Fast Deployment** - Get your IT setup running in minutes
- **Security First** - Encrypted backups, 2FA, and secure logins
- **Scalable & Customizable** - Adapts to your specific needs

View File

@@ -1,15 +0,0 @@
# Investor Guide
🚀 **CyMaIS is seeking investors** to expand its reach and continue development. With an increasing demand for automated IT solutions, **CyMaIS has the potential to revolutionize IT infrastructure management.**
## Market Potential 📈
- **$500B+ Global IT Infrastructure Market**
- Growing **open-source adoption** across enterprises
- Increasing need for **automation & cybersecurity**
## Why Invest in CyMaIS? 🔥
- **Unique Automation Approach** - Pre-configured roles for quick IT setup
- **Security & Compliance Focus** - Built-in security best practices
- **Scalability** - Modular framework adaptable to various industries
Interested in investing? Contact **[Kevin Veen-Birkenbach](mailto:kevin@veen.world)** to discuss partnership opportunities.

View File

@@ -1,17 +0,0 @@
# Enterprise Solutions
**CyMaIS** provides powerful **enterprise-grade IT infrastructure solutions**, enabling businesses to scale securely and efficiently.
## How CyMaIS Helps Enterprises 🔧
- **Automated Deployment** - Set up secure servers & workstations effortlessly
- **Advanced Security** - Integrated 2FA, LDAP, encrypted storage
- **High Availability** - Scalable infrastructure for growing enterprises
- **Compliance & Audit Logs** - Maintain regulatory standards
## Use Cases 💼
- **Cloud-Based Infrastructure** (Docker, Kubernetes, CI/CD pipelines)
- **Enterprise Networking & VPN** (WireGuard, OpenVPN, Firewall rules)
- **Database & Business Apps** (PostgreSQL, Nextcloud, ERP systems)
- **Custom Security Solutions** (Keycloak, LDAP, 2FA enforcement)
Interested? Contact [Kevin Veen-Birkenbach](mailto:kevin@veen.world) to discuss tailored enterprise solutions.

View File

@@ -1,9 +1,9 @@
# User Guide
Welcome to **CyMaIS**! This guide is designed for **end-users** who want to use cloud services, email, and collaboration tools securely and efficiently. Whether you're an **enterprise user** or an **individual**, CyMaIS provides a wide range of services tailored to your needs.
Welcome to **Infinito.Nexus**! This guide is designed for **end-users** who want to use cloud services, email, and collaboration tools securely and efficiently. Whether you're an **enterprise user** or an **individual**, Infinito.Nexus provides a wide range of services tailored to your needs.
## What Can CyMaIS Do for You? 💡
CyMaIS enables you to securely and efficiently use a variety of **cloud-based applications**, including:
## What Can Infinito.Nexus Do for You? 💡
Infinito.Nexus enables you to securely and efficiently use a variety of **cloud-based applications**, including:
### 📂 Cloud Storage & File Sharing
- **Nextcloud** - Securely store, sync, and share files across devices.
@@ -44,23 +44,23 @@ CyMaIS enables you to securely and efficiently use a variety of **cloud-based ap
## 🏢 Enterprise Users
### How to Get Started 🏁
If your organization provides CyMaIS services, follow these steps:
If your organization provides Infinito.Nexus services, follow these steps:
- Your **administrator** will provide login credentials.
- Access **cloud services** via a web browser or mobile apps.
- For support, contact your **system administrator**.
## 🏠 Private Users
### How to Get Started 🏁
If you're an **individual user**, you can sign up for CyMaIS services:
- **Register an account** at [cymais.cloud](https://cymais.cloud).
If you're an **individual user**, you can sign up for Infinito.Nexus services:
- **Register an account** at [infinito.nexus](https://infinito.nexus).
- Choose the applications and services you need.
- Follow the setup guide and start using CyMaIS services immediately.
- Follow the setup guide and start using Infinito.Nexus services immediately.
## 📚 Learn More
Discover more about CyMaIS applications:
Discover more about Infinito.Nexus applications:
- :doc:`roles/application_glosar`
- :doc:`roles/application_categories`
For further information, visit our **[Information Hub](https://hub.cymais.cloud)** for tutorials, FAQs, and community support.
For further information, visit our **[Information Hub](https://hub.infinito.nexus)** for tutorials, FAQs, and community support.
You can also register for updates and support from our community.

View File

@@ -1,6 +1,6 @@
# Security Guidelines
CyMaIS is designed with security in mind. However, while following our guidelines can greatly improve your systems security, no IT system can be 100% secure. Please report any vulnerabilities as soon as possible.
Infinito.Nexus is designed with security in mind. However, while following our guidelines can greatly improve your system's security, no IT system can be 100% secure. Please report any vulnerabilities as soon as possible.
For optimal personal security, we **strongly recommend** the following:
@@ -12,12 +12,12 @@ For optimal personal security, we **strongly recommend** the following:
Synchronize your password database across devices using the [Nextcloud Client](https://nextcloud.com/) 📱💻.
- **Use Encrypted Systems**
We recommend running CyMaIS only on systems with full disk encryption. For example, Linux distributions such as [Manjaro](https://manjaro.org/) (based on ArchLinux) with desktop environments like [GNOME](https://en.wikipedia.org/wiki/GNOME) provide excellent security. (Learn more about [disk encryption](https://en.wikipedia.org/wiki/Disk_encryption) on Wikipedia.)
We recommend running Infinito.Nexus only on systems with full disk encryption. For example, Linux distributions such as [Manjaro](https://manjaro.org/) (based on ArchLinux) with desktop environments like [GNOME](https://en.wikipedia.org/wiki/GNOME) provide excellent security. (Learn more about [disk encryption](https://en.wikipedia.org/wiki/Disk_encryption) on Wikipedia.)
- **Beware of Phishing and Social Engineering**
Always verify email senders, avoid clicking on unknown links, and never share your passwords or 2FA codes with anyone. (Learn more about [Phishing](https://en.wikipedia.org/wiki/Phishing) and [Social Engineering](https://en.wikipedia.org/wiki/Social_engineering_(security)) on Wikipedia.)
Following these guidelines will significantly enhance your personal security—but remember, no system is completely immune to risk.
A tutorial how to setup secure password management you will find [here](https://blog.veen.world/blog/2025/04/04/%f0%9f%9b%a1%ef%b8%8f-keepassxc-cymais-cloud-the-ultimate-guide-to-cross-device-password-security/)
A tutorial on how to set up secure password management can be found [here](https://blog.veen.world/blog/2025/04/04/%f0%9f%9b%a1%ef%b8%8f-keepassxc-infinito-cloud-the-ultimate-guide-to-cross-device-password-security/).
---

View File

@@ -1,23 +0,0 @@
# Company Vision — CyMaIS
## Empowering Digital Sovereignty for Everyone.
CyMaIS is more than just software — it is a movement for digital independence, resilience, and transparency.
We believe that secure, self-hosted IT infrastructure must be accessible to everyone — regardless of company size, technical expertise, or budget.
### Our Mission
- Democratize access to secure IT infrastructure
- Enable data sovereignty and privacy for individuals and organizations
- Reduce global dependency on monopolistic cloud providers
- Promote Open Source, transparency, and community-driven innovation
- Build resilient digital ecosystems in uncertain times
### Long-Term Goal
We want to establish CyMaIS as the leading European and global alternative to centralized cloud platforms — open, modular, and self-sovereign.
Our vision is a future where every person and organization owns their infrastructure — free from control, censorship, and vendor lock-ins.
---
> *CyMaIS — Empowering a Sovereign Digital Future.*

View File

@@ -1,28 +0,0 @@
# Product Vision — CyMaIS Platform
## The Universal Automation Platform for Self-Hosted IT Infrastructure.
CyMaIS provides a modular, Open Source infrastructure automation platform that enables secure and scalable IT environments — for individuals, SMEs, NGOs, and enterprises.
### Key Product Goals
- Enterprise-grade infrastructure automation for everyone
- Rapid deployment of servers, clients, and cloud-native services
- Modular role-based architecture (VPN, Backup, Security, Monitoring, Web Services, IAM)
- Seamless integration of existing systems without forced migration
- Infrastructure-as-Code and reproducible deployments
- Reduced operational IT costs and vendor lock-ins
- Security by Design (encryption, 2FA, auditing, hardening)
- Support for decentralized protocols like ActivityPub, Matrix, Email
### Long-Term Product Vision
CyMaIS will become the central platform for:
- Automating any self-hosted infrastructure within minutes
- Maintaining full data control and regulatory compliance
- Empowering organizations to build their own sovereign cloud ecosystem
- Breaking the dependency on centralized and proprietary cloud services
---
> *CyMaIS — The Future of Self-Hosted Infrastructure.*
> *Secure. Automated. Sovereign.*

View File

@@ -1,33 +0,0 @@
# Vision Statement
This is the Vision Statement for [CyMaIS](https://cymais.cloud), outlining our future goals and direction.
## Short
CyMaIS aims to empower individuals, businesses, NGOs, and enterprises with a secure, scalable, and decentralized IT infrastructure solution that ensures data sovereignty, promotes Open Source innovation, and reduces reliance on monopolistic cloud providers.
## Explanation
At the core of our mission is the development of a groundbreaking tool designed to address the inherent problems in managing IT infrastructure today, for individuals, businesses, non-governmental organizations (NGOs), and large enterprises alike. From the rising costs of monopolistic cloud services to the loss of data sovereignty, security concerns, and dependency on centralized cloud providers, we aim to provide an alternative that empowers users, organizations, and businesses to regain control over their data and infrastructure.
Our vision is to create a fully automated solution that enables all users, regardless of size or industry, to establish a secure, scalable, and self-managed IT infrastructure. This tool will break down the complexities of IT infrastructure setup, making it faster, simpler, and more secure, while being accessible to everyone—from individuals and grassroots organizations to large-scale enterprises.
Grounded in Open Source principles, this solution will champion transparency, security, and innovation. It will be adaptable and flexible, offering a digital infrastructure that evolves alongside the diverse needs of businesses, organizations, and communities, all while maintaining a focus on usability and accessibility.
We envision a future where users and organizations are no longer at the mercy of monopolistic cloud providers, where they can securely manage their own data and infrastructure. This future will see individuals and NGOs empowered with the same capabilities as large enterprises—ensuring that people of all scales can maintain control and sovereignty over their digital lives, free from external manipulation.
CyMaIS will democratize access to advanced IT infrastructure solutions, providing security, flexibility, and scalability for all—from small NGOs to large multinational enterprises—without the cost and dependence on centralized, proprietary cloud services. By utilizing Open Source, our solution will meet the highest standards of security while fostering a collaborative, community-driven approach to innovation and continuous improvement.
Moreover, our vision goes beyond just IT infrastructure; it extends to the broader goal of democratizing the internet itself. By integrating decentralized protocols like **ActivityPub**, **email**, and **Matrix**, we aim to restore the foundational principles of a decentralized, resilient internet. In today's world, marked by political tensions, wars, and uncertainty, the importance of resilient, distributed infrastructures has never been greater. CyMaIS will enable all users—from individuals to NGOs and large enterprises—to remain independent and secure, ensuring that control over data and communications stays in their hands, not under the dominance of monopolistic entities.
Ultimately, our vision is to redefine the way IT infrastructure is deployed and managed, offering a solution that is swift, secure, and scalable, capable of meeting the needs of businesses, individuals, NGOs, and large enterprises. CyMaIS will empower all stakeholders by providing a foundation for a decentralized, transparent, and resilient digital future—setting a new benchmark for security, reliability, and sovereignty in the digital age.
## Key Points
- Empower people and institutions
- Data sovereignty
- Control over infrastructure
- Automated infrastructure setup
- Open Source
- Decentralized Services
- Scalable
- Global resilience and security

filter_plugins/README.md Normal file
View File

@@ -0,0 +1,27 @@
# Custom Filter Plugins for Infinito.Nexus
This directory contains custom **Ansible filter plugins** used within the Infinito.Nexus project.
## When to Use a Filter Plugin
- **Transform values:** Use filters to transform, extract, reformat, or compute values from existing variables or facts.
- **Inline data manipulation:** Filters are designed for inline use in Jinja2 expressions (in templates, tasks, vars, etc.).
- **No external lookups:** Filters only operate on data you explicitly pass to them and cannot access external files, the Ansible inventory, or runtime context.
### Examples
```jinja2
{{ role_name | get_entity_name }}
{{ my_list | unique }}
{{ user_email | regex_replace('^(.+)@.*$', '\\1') }}
```
## When *not* to Use a Filter Plugin
* If you need to **load data from an external source** (e.g., file, environment, API), use a lookup plugin instead.
* If your logic requires **access to inventory, facts, or host-level information** that is not passed as a parameter.
## Further Reading
* [Ansible Filter Plugins Documentation](https://docs.ansible.com/ansible/latest/plugins/filter.html)
* [Developing Ansible Filter Plugins](https://docs.ansible.com/ansible/latest/dev_guide/developing_plugins.html#developing-filter-plugins)
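For contributors adding a new plugin, the required file structure is small; a minimal sketch with a hypothetical `to_upper` filter (the name and behavior are illustrative, not part of the project):

```python
# filter_plugins/to_upper.py: hypothetical minimal skeleton (illustrative only)
try:
    from ansible.errors import AnsibleFilterError
except ImportError:  # fallback so the sketch runs without Ansible installed
    class AnsibleFilterError(Exception):
        pass

def to_upper(value):
    """Return the uppercase form of a string; raise a filter error otherwise."""
    if not isinstance(value, str):
        raise AnsibleFilterError(f"to_upper expects a string, got {type(value).__name__}")
    return value.upper()

class FilterModule(object):
    def filters(self):
        # Maps the Jinja2 filter name to the callable
        return {'to_upper': to_upper}
```

With this file in place, templates can use `{{ some_var | to_upper }}` like any built-in filter.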

View File

@@ -1,86 +0,0 @@
from ansible.errors import AnsibleFilterError
class FilterModule(object):
def filters(self):
return {'alias_domains_map': self.alias_domains_map}
def alias_domains_map(self, apps, primary_domain):
"""
Build a map of application IDs to their alias domains.
- If no `domains` key → []
- If `domains` exists but is an empty dict → return the original cfg
- Explicit `aliases` are used (default appended if missing)
- If only `canonical` defined and it doesn't include default, default is added
- Invalid types raise AnsibleFilterError
"""
def parse_entry(domains_cfg, key, app_id):
if key not in domains_cfg:
return None
entry = domains_cfg[key]
if isinstance(entry, dict):
values = list(entry.values())
elif isinstance(entry, list):
values = entry
else:
raise AnsibleFilterError(
f"Unexpected type for 'domains.{key}' in application '{app_id}': {type(entry).__name__}"
)
for d in values:
if not isinstance(d, str) or not d.strip():
raise AnsibleFilterError(
f"Invalid domain entry in '{key}' for application '{app_id}': {d!r}"
)
return values
def default_domain(app_id, primary):
return f"{app_id}.{primary}"
# 1) Precompute canonical domains per app (fallback to default)
canonical_map = {}
for app_id, cfg in apps.items():
domains_cfg = cfg.get('domains') or {}
entry = domains_cfg.get('canonical')
if entry is None:
canonical_map[app_id] = [default_domain(app_id, primary_domain)]
elif isinstance(entry, dict):
canonical_map[app_id] = list(entry.values())
elif isinstance(entry, list):
canonical_map[app_id] = list(entry)
else:
raise AnsibleFilterError(
f"Unexpected type for 'domains.canonical' in application '{app_id}': {type(entry).__name__}"
)
# 2) Build alias list per app
result = {}
for app_id, cfg in apps.items():
domains_cfg = cfg.get('domains')
# no domains key → no aliases
if domains_cfg is None:
result[app_id] = []
continue
# empty domains dict → return the original cfg
if isinstance(domains_cfg, dict) and not domains_cfg:
result[app_id] = cfg
continue
# otherwise, compute aliases
aliases = parse_entry(domains_cfg, 'aliases', app_id) or []
default = default_domain(app_id, primary_domain)
has_aliases = 'aliases' in domains_cfg
has_canon = 'canonical' in domains_cfg
if has_aliases:
if default not in aliases:
aliases.append(default)
elif has_canon:
canon = canonical_map.get(app_id, [])
if default not in canon and default not in aliases:
aliases.append(default)
result[app_id] = aliases
return result
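The mapping above can be illustrated with a condensed sketch (simplified: no dict-valued entries, no validation, and the empty-`domains`-dict special case is omitted; the app IDs are illustrative):

```python
# Condensed sketch of the alias logic above (not the actual plugin)
def alias_domains_sketch(apps, primary_domain):
    result = {}
    for app_id, cfg in apps.items():
        domains = cfg.get('domains')
        if domains is None:
            result[app_id] = []          # no 'domains' key -> no aliases
            continue
        aliases = list(domains.get('aliases', []))
        default = f"{app_id}.{primary_domain}"
        if 'aliases' in domains and default not in aliases:
            aliases.append(default)      # default is appended if missing
        elif 'canonical' in domains and default not in domains.get('canonical', []):
            aliases.append(default)
        result[app_id] = aliases
    return result

print(alias_domains_sketch(
    {"web-app-blog": {"domains": {"aliases": ["blog.example.org"]}},
     "web-app-wiki": {}},
    "example.com"))
# -> {'web-app-blog': ['blog.example.org', 'web-app-blog.example.com'], 'web-app-wiki': []}
```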

View File

@@ -1,21 +1,76 @@
from ansible.errors import AnsibleFilterError
import sys
import os
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from module_utils.entity_name_utils import get_entity_name
from module_utils.role_dependency_resolver import RoleDependencyResolver
from typing import Iterable
class FilterModule(object):
def filters(self):
return {'canonical_domains_map': self.canonical_domains_map}
def canonical_domains_map(self, apps, primary_domain):
def canonical_domains_map(
self,
apps,
PRIMARY_DOMAIN,
*,
recursive: bool = False,
roles_base_dir: str | None = None,
seed: Iterable[str] | None = None,
):
"""
Maps applications to their canonical domains, checking for conflicts
and ensuring all domains are valid and unique across applications.
Build { app_id: [canonical domains...] }.
Recursively, only include_role, import_role, and meta/main.yml dependencies are followed.
'run_after' is intentionally ignored here.
"""
if not isinstance(apps, dict):
raise AnsibleFilterError(f"'apps' must be a dict, got {type(apps).__name__}")
app_keys = set(apps.keys())
seed_keys = set(seed) if seed is not None else app_keys
if recursive:
roles_base_dir = roles_base_dir or os.path.join(os.getcwd(), "roles")
if not os.path.isdir(roles_base_dir):
raise AnsibleFilterError(
f"roles_base_dir '{roles_base_dir}' not found or not a directory."
)
resolver = RoleDependencyResolver(roles_base_dir)
discovered_roles = resolver.resolve_transitively(
start_roles=seed_keys,
resolve_include_role=True,
resolve_import_role=True,
resolve_dependencies=True,
resolve_run_after=False,
max_depth=None,
)
# all discovered roles that actually have config entries in `apps`
target_apps = discovered_roles & app_keys
else:
target_apps = seed_keys
result = {}
seen_domains = {}
for app_id, cfg in apps.items():
domains_cfg = cfg.get('domains')
for app_id in sorted(target_apps):
cfg = apps.get(app_id)
if cfg is None:
continue
if not str(app_id).startswith(("web-", "svc-db-")):
continue
if not isinstance(cfg, dict):
raise AnsibleFilterError(
f"Invalid configuration for application '{app_id}': expected dict, got {cfg!r}"
)
domains_cfg = cfg.get('server', {}).get('domains', {})
if not domains_cfg or 'canonical' not in domains_cfg:
self._add_default_domain(app_id, primary_domain, seen_domains, result)
self._add_default_domain(app_id, PRIMARY_DOMAIN, seen_domains, result)
continue
canonical_domains = domains_cfg['canonical']
@@ -23,12 +78,9 @@ class FilterModule(object):
return result
def _add_default_domain(self, app_id, primary_domain, seen_domains, result):
"""
Add the default domain for an application if no canonical domains are defined.
Ensures the domain is unique across applications.
"""
default_domain = f"{app_id}.{primary_domain}"
def _add_default_domain(self, app_id, PRIMARY_DOMAIN, seen_domains, result):
entity_name = get_entity_name(app_id)
default_domain = f"{entity_name}.{PRIMARY_DOMAIN}"
if default_domain in seen_domains:
raise AnsibleFilterError(
f"Domain '{default_domain}' is already configured for "
@@ -38,40 +90,21 @@ class FilterModule(object):
result[app_id] = [default_domain]
def _process_canonical_domains(self, app_id, canonical_domains, seen_domains, result):
"""
Process the canonical domains for an application, handling both lists and dicts,
and ensuring each domain is unique.
"""
if isinstance(canonical_domains, dict):
self._process_canonical_domains_dict(app_id, canonical_domains, seen_domains, result)
for _, domain in canonical_domains.items():
self._validate_and_check_domain(app_id, domain, seen_domains)
result[app_id] = canonical_domains.copy()
elif isinstance(canonical_domains, list):
self._process_canonical_domains_list(app_id, canonical_domains, seen_domains, result)
for domain in canonical_domains:
self._validate_and_check_domain(app_id, domain, seen_domains)
result[app_id] = list(canonical_domains)
else:
raise AnsibleFilterError(
f"Unexpected type for 'domains.canonical' in application '{app_id}': "
f"Unexpected type for 'server.domains.canonical' in application '{app_id}': "
f"{type(canonical_domains).__name__}"
)
def _process_canonical_domains_dict(self, app_id, domains_dict, seen_domains, result):
"""
Process a dictionary of canonical domains for an application.
"""
for name, domain in domains_dict.items():
self._validate_and_check_domain(app_id, domain, seen_domains)
result[app_id] = domains_dict.copy()
def _process_canonical_domains_list(self, app_id, domains_list, seen_domains, result):
"""
Process a list of canonical domains for an application.
"""
for domain in domains_list:
self._validate_and_check_domain(app_id, domain, seen_domains)
result[app_id] = list(domains_list)
def _validate_and_check_domain(self, app_id, domain, seen_domains):
"""
Validate the domain and check if it has already been assigned to another application.
"""
if not isinstance(domain, str) or not domain.strip():
raise AnsibleFilterError(
f"Invalid domain entry in 'canonical' for application '{app_id}': {domain!r}"

View File

@@ -1,6 +1,14 @@
from ansible.errors import AnsibleFilterError
import hashlib
import base64
import sys
import os
# Ensure module_utils is importable when this filter runs from Ansible
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from module_utils.config_utils import get_app_conf
from module_utils.get_url import get_url
class FilterModule(object):
"""
@@ -12,18 +20,36 @@ class FilterModule(object):
'build_csp_header': self.build_csp_header,
}
# -------------------------------
# Helpers
# -------------------------------
@staticmethod
def is_feature_enabled(applications: dict, feature: str, application_id: str) -> bool:
"""
Return True if applications[application_id].features[feature] is truthy.
Returns True if applications[application_id].features[feature] is truthy.
"""
app = applications.get(application_id, {})
return bool(app.get('features', {}).get(feature, False))
return get_app_conf(
applications,
application_id,
'features.' + feature,
False,
False
)
@staticmethod
def get_csp_whitelist(applications, application_id, directive):
app = applications.get(application_id, {})
wl = app.get('csp', {}).get('whitelist', {}).get(directive, [])
"""
Returns a list of additional whitelist entries for a given directive.
Accepts both scalar and list in config; always returns a list.
"""
wl = get_app_conf(
applications,
application_id,
'server.csp.whitelist.' + directive,
False,
[]
)
if isinstance(wl, list):
return wl
if wl:
@@ -33,26 +59,45 @@ class FilterModule(object):
@staticmethod
def get_csp_flags(applications, application_id, directive):
"""
Dynamically extract all CSP flags for a given directive and return them as tokens,
e.g., "'unsafe-eval'", "'unsafe-inline'", etc.
Returns CSP flag tokens (e.g., "'unsafe-eval'", "'unsafe-inline'") for a directive,
merging sane defaults with app config.
Default: 'unsafe-inline' is enabled for style-src and style-src-elem.
"""
app = applications.get(application_id, {})
flags = app.get('csp', {}).get('flags', {}).get(directive, {})
tokens = []
# Defaults that apply to all apps
default_flags = {}
if directive in ('style-src', 'style-src-elem'):
default_flags = {'unsafe-inline': True}
for flag_name, enabled in flags.items():
configured = get_app_conf(
applications,
application_id,
'server.csp.flags.' + directive,
False,
{}
)
# Merge defaults with configured flags (configured overrides defaults)
merged = {**default_flags, **configured}
tokens = []
for flag_name, enabled in merged.items():
if enabled:
tokens.append(f"'{flag_name}'")
return tokens
@staticmethod
def get_csp_inline_content(applications, application_id, directive):
"""
Return inline script/style snippets to hash for a given CSP directive.
Returns inline script/style snippets to hash for a given directive.
Accepts both scalar and list in config; always returns a list.
"""
app = applications.get(application_id, {})
snippets = app.get('csp', {}).get('hashes', {}).get(directive, [])
snippets = get_app_conf(
applications,
application_id,
'server.csp.hashes.' + directive,
False,
[]
)
if isinstance(snippets, list):
return snippets
if snippets:
@@ -62,7 +107,7 @@ class FilterModule(object):
@staticmethod
def get_csp_hash(content):
"""
Compute the SHA256 hash of the given inline content and return
Computes the SHA256 hash of the given inline content and returns
a CSP token like "'sha256-<base64>'".
"""
try:
@@ -72,6 +117,10 @@ class FilterModule(object):
except Exception as exc:
raise AnsibleFilterError(f"get_csp_hash failed: {exc}")
# -------------------------------
# Main builder
# -------------------------------
def build_csp_header(
self,
applications,
@@ -81,68 +130,80 @@ class FilterModule(object):
matomo_feature_name='matomo'
):
"""
Build the Content-Security-Policy header value dynamically based on application settings.
Inline hashes are read from applications[application_id].csp.hashes
Builds the Content-Security-Policy header value dynamically based on application settings.
- Flags (e.g., 'unsafe-eval', 'unsafe-inline') are read from server.csp.flags.<directive>,
with sane defaults applied in get_csp_flags (always 'unsafe-inline' for style-src and style-src-elem).
- Inline hashes are read from server.csp.hashes.<directive>.
- Whitelists are read from server.csp.whitelist.<directive>.
- Inline hashes are added only if the final tokens do NOT include 'unsafe-inline'.
"""
try:
directives = [
'default-src',
'connect-src',
'frame-ancestors',
'frame-src',
'script-src',
'script-src-elem',
'style-src',
'font-src',
'worker-src',
'manifest-src',
'media-src',
'default-src', # Fallback source list for content types not explicitly listed
'connect-src', # Allowed URLs for XHR, WebSockets, EventSource, fetch()
'frame-ancestors', # Who may embed this page
'frame-src', # Sources for nested browsing contexts (e.g., <iframe>)
'script-src', # Sources for script execution
'script-src-elem', # Sources for <script> elements
'style-src', # Sources for inline styles and <style>/<link> elements
'style-src-elem', # Sources for <style> and <link rel="stylesheet">
'font-src', # Sources for fonts
'worker-src', # Sources for workers
'manifest-src', # Sources for web app manifests
'media-src', # Sources for audio and video
]
parts = []
for directive in directives:
tokens = ["'self'"]
# unsafe-eval / unsafe-inline flags
# 1) Load flags (includes defaults from get_csp_flags)
flags = self.get_csp_flags(applications, application_id, directive)
tokens += flags
# Matomo integration
if (
self.is_feature_enabled(applications, matomo_feature_name, application_id)
and directive in ['script-src-elem', 'connect-src']
):
matomo_domain = domains.get('web-app-matomo')[0]
if matomo_domain:
tokens.append(f"{web_protocol}://{matomo_domain}")
# 2) Allow fetching from internal CDN by default for selected directives
if directive in ['script-src-elem', 'connect-src', 'style-src-elem']:
tokens.append(get_url(domains, 'web-svc-cdn', web_protocol))
# ReCaptcha integration: allow loading scripts from Google if feature enabled
# 3) Matomo integration if feature is enabled
if directive in ['script-src-elem', 'connect-src']:
if self.is_feature_enabled(applications, matomo_feature_name, application_id):
tokens.append(get_url(domains, 'web-app-matomo', web_protocol))
# 4) ReCaptcha integration (scripts + frames) if feature is enabled
if self.is_feature_enabled(applications, 'recaptcha', application_id):
if directive in ['script-src-elem',"frame-src"]:
if directive in ['script-src-elem', 'frame-src']:
tokens.append('https://www.gstatic.com')
tokens.append('https://www.google.com')
# Enable loading via ancestors
if (
self.is_feature_enabled(applications, 'port-ui-desktop', application_id)
and directive == 'frame-ancestors'
):
domain = domains.get('web-app-port-ui')[0]
sld_tld = ".".join(domain.split(".")[-2:]) # yields "example.com"
tokens.append(f"{sld_tld}") # yields "*.example.com"
# 5) Frame ancestors handling (desktop + logout support)
if directive == 'frame-ancestors':
if self.is_feature_enabled(applications, 'desktop', application_id):
# Allow being embedded by the desktop app domain (and potentially its parent)
domain = domains.get('web-app-desktop')[0]
sld_tld = ".".join(domain.split(".")[-2:]) # e.g., example.com
tokens.append(f"{sld_tld}")
if self.is_feature_enabled(applications, 'logout', application_id):
# Allow embedding via logout proxy and Keycloak app
tokens.append(get_url(domains, 'web-svc-logout', web_protocol))
tokens.append(get_url(domains, 'web-app-keycloak', web_protocol))
# whitelist
# 6) Custom whitelist entries
tokens += self.get_csp_whitelist(applications, application_id, directive)
# only add hashes if 'unsafe-inline' is NOT in flags
if "'unsafe-inline'" not in flags:
# 7) Add inline content hashes ONLY if final tokens do NOT include 'unsafe-inline'
# (Check tokens, not flags, to include defaults and later modifications.)
if "'unsafe-inline'" not in tokens:
for snippet in self.get_csp_inline_content(applications, application_id, directive):
tokens.append(self.get_csp_hash(snippet))
# Append directive
parts.append(f"{directive} {' '.join(tokens)};")
# static img-src
# 8) Static img-src directive (kept permissive for data/blob and any host)
parts.append("img-src * data: blob:;")
return ' '.join(parts)
except Exception as exc:
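The `get_csp_hash` helper used above can be exercised standalone; a minimal reimplementation consistent with its docstring (a sketch, not the plugin itself):

```python
import hashlib
import base64

def get_csp_hash(content: str) -> str:
    """SHA256 over the UTF-8 bytes of the inline content, base64-encoded,
    wrapped as a CSP source token like "'sha256-<base64>'"."""
    digest = hashlib.sha256(content.encode('utf-8')).digest()
    return f"'sha256-{base64.b64encode(digest).decode('ascii')}'"

print(get_csp_hash("console.log('hi');"))
```

The resulting token can be placed directly in a `script-src-elem` or `style-src` directive to allow exactly that inline snippet.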

View File

@@ -13,7 +13,8 @@ def append_csp_hash(applications, application_id, code_one_liner):
apps = copy.deepcopy(applications)
app = apps[application_id]
csp = app.setdefault('csp', {})
server = app.setdefault('server', {})
csp = server.setdefault('csp', {})
hashes = csp.setdefault('hashes', {})
existing = hashes.get('script-src-elem', [])

View File

@@ -1,10 +1,13 @@
from ansible.errors import AnsibleFilterError
import sys, os
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from module_utils.entity_name_utils import get_entity_name
class FilterModule(object):
def filters(self):
return {'domain_mappings': self.domain_mappings}
def domain_mappings(self, apps, primary_domain):
def domain_mappings(self, apps, PRIMARY_DOMAIN):
"""
Build a flat list of redirect mappings for all apps:
- source: each alias domain
@@ -30,38 +33,39 @@ class FilterModule(object):
)
return values
def default_domain(app_id, primary):
return f"{app_id}.{primary}"
def default_domain(app_id: str, primary: str):
subdomain = get_entity_name(app_id)
return f"{subdomain}.{primary}"
# 1) Compute canonical domains per app (always as a list)
canonical_map = {}
for app_id, cfg in apps.items():
domains_cfg = cfg.get('domains') or {}
domains_cfg = cfg.get('server',{}).get('domains',{})
entry = domains_cfg.get('canonical')
if entry is None:
canonical_map[app_id] = [default_domain(app_id, primary_domain)]
canonical_map[app_id] = [default_domain(app_id, PRIMARY_DOMAIN)]
elif isinstance(entry, dict):
canonical_map[app_id] = list(entry.values())
elif isinstance(entry, list):
canonical_map[app_id] = list(entry)
else:
raise AnsibleFilterError(
f"Unexpected type for 'domains.canonical' in application '{app_id}': {type(entry).__name__}"
f"Unexpected type for 'server.domains.canonical' in application '{app_id}': {type(entry).__name__}"
)
# 2) Compute alias domains per app
alias_map = {}
for app_id, cfg in apps.items():
domains_cfg = cfg.get('domains')
domains_cfg = cfg.get('server',{}).get('domains',{})
if domains_cfg is None:
alias_map[app_id] = []
continue
if isinstance(domains_cfg, dict) and not domains_cfg:
alias_map[app_id] = [default_domain(app_id, primary_domain)]
alias_map[app_id] = [default_domain(app_id, PRIMARY_DOMAIN)]
continue
aliases = parse_entry(domains_cfg, 'aliases', app_id) or []
default = default_domain(app_id, primary_domain)
default = default_domain(app_id, PRIMARY_DOMAIN)
has_aliases = 'aliases' in domains_cfg
has_canonical = 'canonical' in domains_cfg
@@ -80,7 +84,7 @@ class FilterModule(object):
mappings = []
for app_id, sources in alias_map.items():
canon_list = canonical_map.get(app_id, [])
target = canon_list[0] if canon_list else default_domain(app_id, primary_domain)
target = canon_list[0] if canon_list else default_domain(app_id, PRIMARY_DOMAIN)
for src in sources:
if src == target:
# skip self-redirects
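The final mapping step can be sketched in isolation (simplified: assumes alias and canonical domains are already resolved, as in the hunk above; the domains are illustrative):

```python
# Condensed sketch of the redirect-mapping result (not the actual plugin)
def redirect_mappings_sketch(alias_map, canonical_map):
    mappings = []
    for app_id, sources in alias_map.items():
        canon_list = canonical_map.get(app_id, [])
        target = canon_list[0] if canon_list else None
        for src in sources:
            if target and src != target:  # skip self-redirects
                mappings.append({"source": src, "target": target})
    return mappings

print(redirect_mappings_sketch(
    {"web-app-blog": ["blog.example.org", "blog.example.com"]},
    {"web-app-blog": ["blog.example.com"]}))
# -> [{'source': 'blog.example.org', 'target': 'blog.example.com'}]
```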

View File

@@ -0,0 +1,19 @@
# filter_plugins/domain_tools.py
# Returns the DNS zone (SLD.TLD) from a hostname.
# Pure-Python, no external deps; handles simple cases. For exotic TLDs use tldextract (see note).
from ansible.errors import AnsibleFilterError
def to_zone(hostname: str) -> str:
if not isinstance(hostname, str) or not hostname.strip():
raise AnsibleFilterError("to_zone: hostname must be a non-empty string")
parts = hostname.strip(".").split(".")
if len(parts) < 2:
raise AnsibleFilterError(f"to_zone: '{hostname}' has no TLD part")
# naive default: last two labels -> SLD.TLD
return ".".join(parts[-2:])
class FilterModule(object):
def filters(self):
return {
"to_zone": to_zone,
}
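Reproduced standalone for a quick check (with a fallback error class so the sketch runs without Ansible installed):

```python
try:
    from ansible.errors import AnsibleFilterError
except ImportError:  # fallback for standalone execution
    class AnsibleFilterError(Exception):
        pass

def to_zone(hostname: str) -> str:
    # Same logic as the filter above: strip dots, keep the last two labels
    if not isinstance(hostname, str) or not hostname.strip():
        raise AnsibleFilterError("to_zone: hostname must be a non-empty string")
    parts = hostname.strip(".").split(".")
    if len(parts) < 2:
        raise AnsibleFilterError(f"to_zone: '{hostname}' has no TLD part")
    return ".".join(parts[-2:])

print(to_zone("matrix.chat.example.com"))  # -> example.com
```

Note the limitation the header comment warns about: `to_zone("foo.co.uk")` yields `co.uk`, since the naive last-two-labels rule does not know public suffixes; tldextract handles those cases.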

View File

@@ -0,0 +1,54 @@
import os
import yaml
def get_all_invokable_apps(
categories_file=None,
roles_dir=None
):
"""
Return all application_ids (or role names) for roles whose directory names match invokable paths from categories.yml.
:param categories_file: Path to categories.yml (default: roles/categories.yml at project root)
:param roles_dir: Path to roles directory (default: roles/ at project root)
:return: List of application_ids (or role names)
"""
# Resolve defaults
here = os.path.dirname(os.path.abspath(__file__))
project_root = os.path.abspath(os.path.join(here, '..'))
if not categories_file:
categories_file = os.path.join(project_root, 'roles', 'categories.yml')
if not roles_dir:
roles_dir = os.path.join(project_root, 'roles')
# Get invokable paths
from filter_plugins.invokable_paths import get_invokable_paths
invokable_paths = get_invokable_paths(categories_file)
if not invokable_paths:
return []
result = []
if not os.path.isdir(roles_dir):
return []
for role in sorted(os.listdir(roles_dir)):
role_path = os.path.join(roles_dir, role)
if not os.path.isdir(role_path):
continue
if any(role == p or role.startswith(p + '-') for p in invokable_paths):
vars_file = os.path.join(role_path, 'vars', 'main.yml')
if os.path.isfile(vars_file):
try:
with open(vars_file, 'r', encoding='utf-8') as f:
data = yaml.safe_load(f) or {}
app_id = data.get('application_id', role)
except Exception:
app_id = role
else:
app_id = role
result.append(app_id)
return sorted(result)
class FilterModule(object):
def filters(self):
return {
'get_all_invokable_apps': get_all_invokable_apps
}

View File

@@ -1,134 +1,9 @@
import os
import re
import yaml
from ansible.errors import AnsibleFilterError
class AppConfigKeyError(AnsibleFilterError, ValueError):
"""
Raised when a required application config key is missing (strict mode).
Compatible with Ansible error handling and Python ValueError.
"""
pass
class ConfigEntryNotSetError(AppConfigKeyError):
"""
Raised when a config entry is defined in schema but not set in application.
"""
pass
def get_app_conf(applications, application_id, config_path, strict=True, default=None):
# Path to the schema file for this application
schema_path = os.path.join('roles', application_id, 'schema', 'main.yml')
def schema_defines(path):
if not os.path.isfile(schema_path):
return False
with open(schema_path) as f:
schema = yaml.safe_load(f) or {}
node = schema
for part in path.split('.'):
key_match = re.match(r"^([a-zA-Z0-9_-]+)", part)
if not key_match:
return False
k = key_match.group(1)
if isinstance(node, dict) and k in node:
node = node[k]
else:
return False
return True
def access(obj, key, path_trace):
# Match either 'key' or 'key[index]'
m = re.match(r"^([a-zA-Z0-9_-]+)(?:\[(\d+)\])?$", key)
if not m:
raise AppConfigKeyError(
f"Invalid key format in config_path: '{key}'\n"
f"Full path so far: {'.'.join(path_trace)}\n"
f"application_id: {application_id}\n"
f"config_path: {config_path}"
)
k, idx = m.group(1), m.group(2)
# Access dict key
if isinstance(obj, dict):
if k not in obj:
# Non-strict mode: always return default on missing key
if not strict:
return default if default is not None else False
# Schema-defined but unset: strict raises ConfigEntryNotSetError
trace_path = '.'.join(path_trace[1:])
if schema_defines(trace_path):
raise ConfigEntryNotSetError(
f"Config entry '{trace_path}' is defined in schema at '{schema_path}' but not set in application '{application_id}'."
)
# Generic missing-key error
raise AppConfigKeyError(
f"Key '{k}' not found in dict at '{key}'\n"
f"Full path so far: {'.'.join(path_trace)}\n"
f"Current object: {repr(obj)}\n"
f"application_id: {application_id}\n"
f"config_path: {config_path}"
)
obj = obj[k]
else:
if not strict:
return default if default is not None else False
raise AppConfigKeyError(
f"Expected dict for '{k}', got {type(obj).__name__} at '{key}'\n"
f"Full path so far: {'.'.join(path_trace)}\n"
f"Current object: {repr(obj)}\n"
f"application_id: {application_id}\n"
f"config_path: {config_path}"
)
# If index was provided, access list element
if idx is not None:
if not isinstance(obj, list):
if not strict:
return default if default is not None else False
raise AppConfigKeyError(
f"Expected list for '{k}[{idx}]', got {type(obj).__name__}\n"
f"Full path so far: {'.'.join(path_trace)}\n"
f"Current object: {repr(obj)}\n"
f"application_id: {application_id}\n"
f"config_path: {config_path}"
)
i = int(idx)
if i >= len(obj):
if not strict:
return default if default is not None else False
raise AppConfigKeyError(
f"Index {i} out of range for list at '{k}'\n"
f"Full path so far: {'.'.join(path_trace)}\n"
f"Current object: {repr(obj)}\n"
f"application_id: {application_id}\n"
f"config_path: {config_path}"
)
obj = obj[i]
return obj
# Begin traversal
path_trace = [f"applications[{repr(application_id)}]"]
try:
obj = applications[application_id]
except KeyError:
raise AppConfigKeyError(
f"Application ID '{application_id}' not found in applications dict.\n"
f"path_trace: {path_trace}\n"
f"applications keys: {list(applications.keys())}\n"
f"config_path: {config_path}"
)
for part in config_path.split('.'):
path_trace.append(part)
obj = access(obj, part, path_trace)
if obj is False and not strict:
return default if default is not None else False
return obj
import sys, os
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from module_utils.config_utils import get_app_conf, AppConfigKeyError, ConfigEntryNotSetError
class FilterModule(object):
''' CyMaIS application config extraction filters '''
''' Infinito.Nexus application config extraction filters '''
def filters(self):
return {
'get_app_conf': get_app_conf,
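A condensed sketch of the dotted-path traversal implemented above (the schema lookup is omitted since it requires `roles/` on disk; the `applications` data is illustrative):

```python
import re

def get_conf_sketch(applications, application_id, config_path, strict=True, default=None):
    """Simplified traversal: dotted keys with optional [index] list access."""
    obj = applications[application_id]
    for part in config_path.split('.'):
        m = re.match(r"^([a-zA-Z0-9_-]+)(?:\[(\d+)\])?$", part)
        if not m:
            raise ValueError(f"Invalid key format: {part!r}")
        k, idx = m.group(1), m.group(2)
        if not isinstance(obj, dict) or k not in obj:
            if not strict:
                return default if default is not None else False
            raise KeyError(f"{k!r} missing on path {config_path!r}")
        obj = obj[k]
        if idx is not None:
            obj = obj[int(idx)]  # list element access for 'key[index]'
    return obj

apps = {"web-app-blog": {"server": {"domains": {"canonical": ["blog.example.com"]}}}}
print(get_conf_sketch(apps, "web-app-blog", "server.domains.canonical[0]"))
# -> blog.example.com
print(get_conf_sketch(apps, "web-app-blog", "features.matomo", strict=False, default=False))
# -> False
```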

View File

@@ -1,51 +0,0 @@
# filter_plugins/get_application_id.py
import os
import re
import yaml
from ansible.errors import AnsibleFilterError
def get_application_id(role_name):
"""
Jinja2/Ansible filter: given a role name, load its vars/main.yml and return the application_id value.
"""
# Construct path: assumes current working directory is project root
vars_file = os.path.join(os.getcwd(), 'roles', role_name, 'vars', 'main.yml')
if not os.path.isfile(vars_file):
raise AnsibleFilterError(f"Vars file not found for role '{role_name}': {vars_file}")
try:
# Read entire file content to avoid lazy stream issues
with open(vars_file, 'r', encoding='utf-8') as f:
content = f.read()
data = yaml.safe_load(content)
except Exception as e:
raise AnsibleFilterError(f"Error reading YAML from {vars_file}: {e}")
# Ensure parsed data is a mapping
if not isinstance(data, dict):
raise AnsibleFilterError(
f"Error reading YAML from {vars_file}: expected mapping, got {type(data).__name__}"
)
# Detect malformed YAML: no valid identifier-like keys
valid_key_pattern = re.compile(r'^[A-Za-z_][A-Za-z0-9_]*$')
if data and not any(valid_key_pattern.match(k) for k in data.keys()):
raise AnsibleFilterError(f"Error reading YAML from {vars_file}: invalid top-level keys")
if 'application_id' not in data:
raise AnsibleFilterError(f"Key 'application_id' not found in {vars_file}")
return data['application_id']
class FilterModule(object):
"""
Ansible filter plugin entry point.
"""
def filters(self):
return {
'get_application_id': get_application_id,
}


@@ -0,0 +1,31 @@
# Custom Ansible filter to get all role names under "roles/" with a given prefix.
import os
def get_category_entries(prefix, roles_path="roles"):
"""
Returns a list of role names under the given roles_path
that start with the specified prefix.
:param prefix: String prefix to match role names.
:param roles_path: Path to the roles directory (default: 'roles').
:return: List of matching role names.
"""
if not os.path.isdir(roles_path):
return []
roles = []
for entry in os.listdir(roles_path):
full_path = os.path.join(roles_path, entry)
if os.path.isdir(full_path) and entry.startswith(prefix):
roles.append(entry)
return sorted(roles)
class FilterModule(object):
""" Custom filters for Ansible """
def filters(self):
return {
"get_category_entries": get_category_entries
}
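As a usage sketch, the filter can be exercised outside Ansible against a temporary `roles/` directory (the role names below are invented for illustration):

```python
# Minimal standalone sketch of get_category_entries, copied from the plugin
# above and run against a temporary roles/ tree with invented role names.
import os
import tempfile

def get_category_entries(prefix, roles_path="roles"):
    # Return sorted role directory names under roles_path starting with prefix.
    if not os.path.isdir(roles_path):
        return []
    roles = []
    for entry in os.listdir(roles_path):
        full_path = os.path.join(roles_path, entry)
        if os.path.isdir(full_path) and entry.startswith(prefix):
            roles.append(entry)
    return sorted(roles)

with tempfile.TemporaryDirectory() as tmp:
    for role in ("web-app-nextcloud", "web-app-matrix", "svc-prx-openresty"):
        os.makedirs(os.path.join(tmp, role))
    result = get_category_entries("web-app-", roles_path=tmp)

print(result)  # → ['web-app-matrix', 'web-app-nextcloud']
```

A missing roles directory simply yields an empty list rather than an error.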


@@ -1,50 +0,0 @@
# filter_plugins/get_cymais_path.py
"""
This plugin provides filters to extract the CyMaIS directory and file identifiers
from a given role name. It assumes the role name is structured as 'dir_file'.
If the structure is invalid (e.g., missing or too many underscores), it raises an error.
These filters are used to support internal processing within CyMaIS.
"""
from ansible.errors import AnsibleFilterError
class CymaisPathExtractor:
"""Extracts directory and file parts from role names in the format 'dir_file'."""
def __init__(self, value):
self.value = value
self._parts = self._split_value()
def _split_value(self):
parts = self.value.split("_")
if len(parts) != 2:
raise AnsibleFilterError(
f"Invalid format: '{self.value}' must contain exactly one underscore (_)"
)
return parts
def get_dir(self):
return self._parts[0]
def get_file(self):
return self._parts[1]
def get_cymais_dir(value):
return CymaisPathExtractor(value).get_dir()
def get_cymais_file(value):
return CymaisPathExtractor(value).get_file()
class FilterModule(object):
"""Ansible filter plugin for CyMaIS path parsing."""
def filters(self):
return {
"get_cymais_dir": get_cymais_dir,
"get_cymais_file": get_cymais_file,
}


@@ -1,9 +1,15 @@
def get_docker_compose(path_docker_compose_instances: str, application_id: str) -> dict:
import sys, os
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from module_utils.entity_name_utils import get_entity_name
def get_docker_paths(application_id: str, path_docker_compose_instances: str) -> dict:
"""
Build the docker_compose dict based on
path_docker_compose_instances and application_id.
Uses get_entity_name to extract the entity name from application_id.
"""
base = f"{path_docker_compose_instances}{application_id}/"
entity = get_entity_name(application_id)
base = f"{path_docker_compose_instances}{entity}/"
return {
'directories': {
@@ -23,5 +29,5 @@ def get_docker_compose(path_docker_compose_instances: str, application_id: str)
class FilterModule(object):
def filters(self):
return {
'get_docker_compose': get_docker_compose,
'get_docker_paths': get_docker_paths,
}


@@ -0,0 +1,9 @@
import sys, os
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from module_utils.entity_name_utils import get_entity_name
class FilterModule(object):
def filters(self):
return {
'get_entity_name': get_entity_name,
}


@@ -0,0 +1,37 @@
"""
Custom Ansible filter to build a systemctl unit name (always lowercase).
Rules:
- If `systemctl_id` ends with '@': drop the '@' and return
"{systemctl_id_without_at}.{software_name}@{suffix_handling}".
- Else: return "{systemctl_id}.{software_name}{suffix_handling}".
Suffix handling:
- Default "" → automatically pick:
- ".service" if no '@' in systemctl_id
- ".timer" if '@' in systemctl_id
- Explicit False → no suffix at all
- Any string → ".{suffix}" (lowercased)
"""
def get_service_name(systemctl_id, software_name, suffix=""):
sid = str(systemctl_id).strip().lower()
software_name = str(software_name).strip().lower()
# Determine suffix
if suffix is False:
sfx = "" # no suffix at all
    elif suffix == "" or suffix is None:
        # default per the documented rules: '.timer' for template units
        # (systemctl_id contains '@'), otherwise '.service'
        sfx = ".timer" if "@" in sid else ".service"
else:
sfx = str(suffix).strip().lower()
if sid.endswith("@"):
base = sid[:-1] # drop the trailing '@'
return f"{base}.{software_name}@{sfx}"
else:
return f"{sid}.{software_name}{sfx}"
class FilterModule(object):
def filters(self):
return {"get_service_name": get_service_name}
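A standalone sketch of the naming rules (the unit ids and software name below are invented; the default-suffix branch follows the documented rules, i.e. `.timer` when the id contains `@`):

```python
# Sketch of get_service_name with the documented default-suffix behaviour.
def get_service_name(systemctl_id, software_name, suffix=""):
    sid = str(systemctl_id).strip().lower()
    software_name = str(software_name).strip().lower()
    if suffix is False:
        sfx = ""  # no suffix at all
    elif suffix == "" or suffix is None:
        # '.timer' for template units (id contains '@'), else '.service'
        sfx = ".timer" if "@" in sid else ".service"
    else:
        sfx = str(suffix).strip().lower()
    if sid.endswith("@"):
        base = sid[:-1]  # drop the trailing '@'
        return f"{base}.{software_name}@{sfx}"
    return f"{sid}.{software_name}{sfx}"

print(get_service_name("sys-ctl-bkp@", "infinito"))  # → sys-ctl-bkp.infinito@.timer
print(get_service_name("svc-opt", "infinito"))       # → svc-opt.infinito.service
print(get_service_name("svc-opt", "infinito", False))  # → svc-opt.infinito
```

Passing an explicit string suffix is used verbatim (lowercased), e.g. `".timer"` yields `svc-opt.infinito.timer`.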


@@ -0,0 +1,24 @@
# filter_plugins/get_service_script_path.py
# Custom Ansible filter to generate service script paths.
def get_service_script_path(systemctl_id, script_type):
"""
Build the path to a service script based on systemctl_id and type.
:param systemctl_id: The identifier of the system service.
:param script_type: The script type/extension (e.g., sh, py, yml).
:return: The full path string.
"""
if not systemctl_id or not script_type:
raise ValueError("Both systemctl_id and script_type are required")
return f"/opt/scripts/systemctl/{systemctl_id}/script.{script_type}"
class FilterModule(object):
""" Custom filters for Ansible """
def filters(self):
return {
"get_service_script_path": get_service_script_path
}
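A quick sanity check of the path builder (the service id below is invented for illustration):

```python
# Standalone sketch of get_service_script_path: fixed layout under
# /opt/scripts/systemctl/, with a guard against empty arguments.
def get_service_script_path(systemctl_id, script_type):
    if not systemctl_id or not script_type:
        raise ValueError("Both systemctl_id and script_type are required")
    return f"/opt/scripts/systemctl/{systemctl_id}/script.{script_type}"

print(get_service_script_path("sys-cln-backups", "sh"))
# → /opt/scripts/systemctl/sys-cln-backups/script.sh
```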


@@ -1,27 +1,11 @@
#!/usr/bin/python
import os
import sys
from ansible.errors import AnsibleFilterError
import sys, os
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from module_utils.get_url import get_url
class FilterModule(object):
''' Infinito.Nexus application config extraction filters '''
def filters(self):
return {'get_url': self.get_url}
def get_url(self, domains, application_id, protocol):
# 1) add the module_utils directory to the path
plugin_dir = os.path.dirname(__file__)
project_root = os.path.dirname(plugin_dir)
module_utils = os.path.join(project_root, 'module_utils')
if module_utils not in sys.path:
sys.path.append(module_utils)
# 2) now import domain_utils
try:
from domain_utils import get_domain
except ImportError as e:
raise AnsibleFilterError(f"could not import domain_utils: {e}")
# 3) validate and call
if not isinstance(protocol, str):
raise AnsibleFilterError("Protocol must be a string")
return f"{protocol}://{ get_domain(domains, application_id) }"
return {
'get_url': get_url,
}

filter_plugins/has_env.py Normal file

@@ -0,0 +1,14 @@
import os
def has_env(application_id, base_dir='.'):
"""
Check if env.j2 exists under roles/{{ application_id }}/templates/env.j2
"""
path = os.path.join(base_dir, 'roles', application_id, 'templates', 'env.j2')
return os.path.isfile(path)
class FilterModule(object):
def filters(self):
return {
'has_env': has_env,
}
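The check can be exercised standalone against a temporary directory tree (the role names below are invented):

```python
# Standalone sketch of has_env against a throwaway roles/ tree.
import os
import tempfile

def has_env(application_id, base_dir='.'):
    # True if roles/<application_id>/templates/env.j2 exists below base_dir.
    path = os.path.join(base_dir, 'roles', application_id, 'templates', 'env.j2')
    return os.path.isfile(path)

with tempfile.TemporaryDirectory() as tmp:
    tpl_dir = os.path.join(tmp, 'roles', 'web-app-demo', 'templates')
    os.makedirs(tpl_dir)
    open(os.path.join(tpl_dir, 'env.j2'), 'w').close()
    with_env = has_env('web-app-demo', base_dir=tmp)
    without_env = has_env('web-app-other', base_dir=tmp)

print(with_env, without_env)  # → True False
```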


@@ -1,122 +0,0 @@
import os
import yaml
import re
from ansible.errors import AnsibleFilterError
# in-memory cache: application_id → (parsed_yaml, is_nested)
_cfg_cache = {}
def load_configuration(application_id, key):
if not isinstance(key, str):
raise AnsibleFilterError("Key must be a dotted-string, e.g. 'features.matomo'")
# locate roles/
here = os.path.dirname(__file__)
root = os.path.abspath(os.path.join(here, '..'))
roles_dir = os.path.join(root, 'roles')
if not os.path.isdir(roles_dir):
raise AnsibleFilterError(f"Roles directory not found at {roles_dir}")
# first time? load & cache
if application_id not in _cfg_cache:
config_path = None
# 1) primary: vars/main.yml declares it
for role in os.listdir(roles_dir):
mv = os.path.join(roles_dir, role, 'vars', 'main.yml')
if os.path.exists(mv):
try:
md = yaml.safe_load(open(mv)) or {}
except Exception:
md = {}
if md.get('application_id') == application_id:
cf = os.path.join(roles_dir, role, "config" , "main.yml")
if not os.path.exists(cf):
raise AnsibleFilterError(
f"Role '{role}' declares '{application_id}' but missing config/main.yml"
)
config_path = cf
break
# 2) fallback nested
if config_path is None:
for role in os.listdir(roles_dir):
cf = os.path.join(roles_dir, role, "config" , "main.yml")
if not os.path.exists(cf):
continue
try:
dd = yaml.safe_load(open(cf)) or {}
except Exception:
dd = {}
if isinstance(dd, dict) and application_id in dd:
config_path = cf
break
# 3) fallback flat
if config_path is None:
for role in os.listdir(roles_dir):
cf = os.path.join(roles_dir, role, "config" , "main.yml")
if not os.path.exists(cf):
continue
try:
dd = yaml.safe_load(open(cf)) or {}
except Exception:
dd = {}
# flat style: dict with all non-dict values
if isinstance(dd, dict) and not any(isinstance(v, dict) for v in dd.values()):
config_path = cf
break
if config_path is None:
return None
# parse once
try:
parsed = yaml.safe_load(open(config_path)) or {}
except Exception as e:
raise AnsibleFilterError(f"Error loading config/main.yml at {config_path}: {e}")
# detect nested vs flat
is_nested = isinstance(parsed, dict) and (application_id in parsed)
_cfg_cache[application_id] = (parsed, is_nested)
parsed, is_nested = _cfg_cache[application_id]
# pick base entry
entry = parsed[application_id] if is_nested else parsed
# resolve dotted key
key_parts = key.split('.')
for part in key_parts:
# Check if part has an index (e.g., domains.canonical[0])
match = re.match(r'([^\[]+)\[([0-9]+)\]', part)
if match:
part, index = match.groups()
index = int(index)
if isinstance(entry, dict) and part in entry:
entry = entry[part]
# Check if entry is a list and access the index
if isinstance(entry, list) and 0 <= index < len(entry):
entry = entry[index]
else:
raise AnsibleFilterError(
f"Index '{index}' out of range for key '{part}' in application '{application_id}'"
)
else:
raise AnsibleFilterError(
f"Key '{part}' not found under application '{application_id}'"
)
else:
if isinstance(entry, dict) and part in entry:
entry = entry[part]
else:
raise AnsibleFilterError(
f"Key '{part}' not found under application '{application_id}'"
)
return entry
class FilterModule(object):
def filters(self):
return {'load_configuration': load_configuration}


@@ -0,0 +1,39 @@
def merge_with_defaults(defaults, customs):
"""
Recursively merge two dicts (customs into defaults).
For each top-level key in customs, ensure all dict keys from defaults are present (at least empty dict).
Customs always take precedence.
"""
def merge_dict(d1, d2):
# Recursively merge d2 into d1, d2 wins
result = dict(d1) if d1 else {}
for k, v in (d2 or {}).items():
if k in result and isinstance(result[k], dict) and isinstance(v, dict):
result[k] = merge_dict(result[k], v)
else:
result[k] = v
return result
merged = {}
# Union of all app-keys
all_keys = set(defaults or {}).union(set(customs or {}))
for app_key in all_keys:
base = (defaults or {}).get(app_key, {})
override = (customs or {}).get(app_key, {})
# Step 1: merge override into base
result = merge_dict(base, override)
# Step 2: ensure all dict keys from base exist in result (at least {})
for k, v in (base or {}).items():
if isinstance(v, dict) and k not in result:
result[k] = {}
merged[app_key] = result
return merged
class FilterModule(object):
'''Custom merge filter for Infinito.Nexus: merge_with_defaults'''
def filters(self):
return {
'merge_with_defaults': merge_with_defaults,
}
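A small worked example (the application key and settings are invented) showing that customs win while defaults fill the gaps:

```python
# Standalone copy of merge_with_defaults with an illustrative input.
def merge_with_defaults(defaults, customs):
    def merge_dict(d1, d2):
        # Recursively merge d2 into d1, d2 wins.
        result = dict(d1) if d1 else {}
        for k, v in (d2 or {}).items():
            if k in result and isinstance(result[k], dict) and isinstance(v, dict):
                result[k] = merge_dict(result[k], v)
            else:
                result[k] = v
        return result
    merged = {}
    all_keys = set(defaults or {}).union(set(customs or {}))
    for app_key in all_keys:
        base = (defaults or {}).get(app_key, {})
        override = (customs or {}).get(app_key, {})
        result = merge_dict(base, override)
        for k, v in (base or {}).items():
            if isinstance(v, dict) and k not in result:
                result[k] = {}
        merged[app_key] = result
    return merged

defaults = {"web-app-demo": {"features": {"ldap": True, "matomo": True}, "docker": {}}}
customs = {"web-app-demo": {"features": {"ldap": False}}}
merged = merge_with_defaults(defaults, customs)
print(merged)
# → {'web-app-demo': {'features': {'ldap': False, 'matomo': True}, 'docker': {}}}
```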


@@ -19,8 +19,8 @@ class FilterModule(object):
Usage in Jinja:
{{ redirect_list
| add_redirect_if_group('lam',
'ldap.' ~ primary_domain,
domains | get_domain('lam'),
'ldap.' ~ PRIMARY_DOMAIN,
domains | get_domain('web-app-lam'),
group_names) }}
"""
try:


@@ -1,5 +1,3 @@
# filter_plugins/role_path_by_app_id.py
import os
import glob
import yaml


@@ -1,55 +0,0 @@
from jinja2 import Undefined
def safe_placeholders(template: str, mapping: dict = None) -> str:
"""
Format a template like "{url}/logo.png".
If mapping is provided (not None) and ANY placeholder is missing or maps to None/empty string, the function will raise KeyError.
If mapping is None, missing placeholders or invalid templates return empty string.
Numerical zero or False are considered valid values.
Any other formatting errors return an empty string.
"""
# Non-string templates yield empty
if not isinstance(template, str):
return ''
class SafeDict(dict):
def __getitem__(self, key):
val = super().get(key, None)
# Treat None or empty string as missing
if val is None or (isinstance(val, str) and val == ''):
raise KeyError(key)
return val
def __missing__(self, key):
raise KeyError(key)
silent = mapping is None
data = mapping or {}
try:
return template.format_map(SafeDict(data))
except KeyError:
if silent:
return ''
raise
except Exception:
return ''
def safe_var(value):
"""
Ansible filter: returns the value unchanged unless it's Undefined or None,
in which case returns an empty string.
Catches all exceptions and yields ''.
"""
try:
if isinstance(value, Undefined) or value is None:
return ''
return value
except Exception:
return ''
class FilterModule(object):
def filters(self):
return {
'safe_var': safe_var,
'safe_placeholders': safe_placeholders,
}
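A standalone sketch of the strict-vs-silent behaviour (the URL below is invented for illustration):

```python
# Sketch of safe_placeholders: strict when a mapping is given, silent when None.
def safe_placeholders(template, mapping=None):
    if not isinstance(template, str):
        return ''
    class SafeDict(dict):
        def __getitem__(self, key):
            val = super().get(key, None)
            # Treat None or empty string as missing
            if val is None or (isinstance(val, str) and val == ''):
                raise KeyError(key)
            return val
        def __missing__(self, key):
            raise KeyError(key)
    silent = mapping is None
    data = mapping or {}
    try:
        return template.format_map(SafeDict(data))
    except KeyError:
        if silent:
            return ''
        raise
    except Exception:
        return ''

ok = safe_placeholders("{url}/logo.png", {"url": "https://example.com"})
empty = safe_placeholders("{url}/logo.png")  # mapping is None → silent → ''
print(ok, repr(empty))
```

With an explicit (even empty) mapping, a missing placeholder raises `KeyError` instead of degrading silently.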


@@ -1,29 +0,0 @@
# file: filter_plugins/safe_join.py
"""
Ansible filter plugin that joins a base string and a tail path safely.
If the base is falsy (None, empty, etc.), returns an empty string.
"""
def safe_join(base, tail):
"""
Safely join base and tail into a path or URL.
- base: the base string. If falsy, returns ''.
- tail: the string to append. Leading/trailing slashes are handled.
- On any exception, returns ''.
"""
try:
if not base:
return ''
base_str = str(base).rstrip('/')
tail_str = str(tail).lstrip('/')
return f"{base_str}/{tail_str}"
except Exception:
return ''
class FilterModule(object):
def filters(self):
return {
'safe_join': safe_join,
}
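A quick sketch of the joining behaviour, including the falsy-base short-circuit:

```python
# Standalone copy of safe_join with illustrative calls.
def safe_join(base, tail):
    try:
        if not base:
            return ''
        base_str = str(base).rstrip('/')
        tail_str = str(tail).lstrip('/')
        return f"{base_str}/{tail_str}"
    except Exception:
        return ''

print(safe_join("https://example.com/", "/static/logo.png"))
# → https://example.com/static/logo.png
print(repr(safe_join(None, "static")))  # → ''
```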


@@ -1,5 +1,3 @@
# filter_plugins/text_filters.py
from ansible.errors import AnsibleFilterError
import re


@@ -0,0 +1,67 @@
# filter_plugins/timeout_start_sec_for_domains.py (only core logic changed)
from ansible.errors import AnsibleFilterError
class FilterModule(object):
def filters(self):
return {
"timeout_start_sec_for_domains": self.timeout_start_sec_for_domains,
}
def timeout_start_sec_for_domains(
self,
domains_dict,
include_www=True,
per_domain_seconds=25,
overhead_seconds=30,
min_seconds=120,
max_seconds=3600,
):
"""
Args:
domains_dict (dict | list[str] | str): Either the domain mapping dict
(values can be str | list[str] | dict[str,str]) or an already
flattened list of domains, or a single domain string.
include_www (bool): If true, add 'www.<domain>' for non-www entries.
...
"""
try:
# Local flattener for dict inputs (like your generate_all_domains source)
def _flatten_from_dict(domains_map):
flat = []
for v in (domains_map or {}).values():
if isinstance(v, str):
flat.append(v)
elif isinstance(v, list):
flat.extend(v)
elif isinstance(v, dict):
flat.extend(v.values())
return flat
# Accept dict | list | str
if isinstance(domains_dict, dict):
flat = _flatten_from_dict(domains_dict)
elif isinstance(domains_dict, list):
flat = list(domains_dict)
elif isinstance(domains_dict, str):
flat = [domains_dict]
else:
raise AnsibleFilterError(
"Expected 'domains_dict' to be dict | list | str."
)
if include_www:
base_unique = sorted(set(flat))
www_variants = [f"www.{d}" for d in base_unique if not str(d).lower().startswith("www.")]
flat.extend(www_variants)
unique_domains = sorted(set(flat))
count = len(unique_domains)
raw = overhead_seconds + per_domain_seconds * count
clamped = max(min_seconds, min(max_seconds, int(raw)))
return clamped
except AnsibleFilterError:
raise
except Exception as exc:
raise AnsibleFilterError(f"timeout_start_sec_for_domains failed: {exc}")
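The sizing arithmetic can be restated as a plain function, stripped of the Ansible wrapper and the dict-flattening (a sketch under the default parameters; the domains are invented):

```python
# Sketch of the timeout arithmetic: overhead + per-domain cost over unique
# domains (www. variants included), clamped to [min_seconds, max_seconds].
def timeout_start_sec(domains, per_domain_seconds=25, overhead_seconds=30,
                      min_seconds=120, max_seconds=3600, include_www=True):
    flat = list(domains)
    if include_www:
        base_unique = sorted(set(flat))
        flat += [f"www.{d}" for d in base_unique if not d.lower().startswith("www.")]
    count = len(set(flat))
    raw = overhead_seconds + per_domain_seconds * count
    return max(min_seconds, min(max_seconds, int(raw)))

# 2 domains + 2 www variants = 4 unique → 30 + 25*4 = 130 seconds
print(timeout_start_sec(["example.org", "cloud.example.org"]))  # → 130
```

An empty list clamps up to the 120-second floor; very large domain sets clamp down to the 3600-second ceiling.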


@@ -0,0 +1,30 @@
from ansible.errors import AnsibleFilterError
try:
import tld
from tld.exceptions import TldDomainNotFound, TldBadUrl
except ImportError:
raise AnsibleFilterError("The 'tld' Python package is required for the to_primary_domain filter. Install with 'pip install tld'.")
class FilterModule(object):
''' Custom filter to extract the primary/zone domain from a full domain name '''
def filters(self):
return {
'to_primary_domain': self.to_primary_domain,
}
def to_primary_domain(self, domain):
"""
Converts a full domain or subdomain into its primary/zone domain.
E.g. 'foo.bar.example.co.uk' -> 'example.co.uk'
"""
if not isinstance(domain, str):
raise AnsibleFilterError("Input to to_primary_domain must be a string")
try:
res = tld.get_fld(domain, fix_protocol=True)
if not res:
raise AnsibleFilterError(f"Could not extract primary domain from: {domain}")
return res
except (TldDomainNotFound, TldBadUrl) as exc:
raise AnsibleFilterError(str(exc))

filter_plugins/url_join.py Normal file

@@ -0,0 +1,146 @@
"""
Ansible filter plugin that safely joins URL components from a list.
- Requires a valid '<scheme>://' in the first element (any RFC-3986-ish scheme)
- Preserves the double slash after the scheme, collapses other duplicate slashes
- Supports query parts introduced by elements starting with '?' or '&'
* first query element uses '?', subsequent use '&' (regardless of given prefix)
* each query element must be exactly one 'key=value' pair
* query elements may only appear after path elements; once query starts, no more path parts
- Raises specific AnsibleFilterError messages for common misuse
"""
import re
from ansible.errors import AnsibleFilterError
_SCHEME_RE = re.compile(r'^([a-zA-Z][a-zA-Z0-9+.\-]*://)(.*)$')
_QUERY_PAIR_RE = re.compile(r'^[^&=?#]+=[^&?#]*$') # key=value (no '&', no extra '?' or '#')
def _to_str_or_error(obj, index):
"""Cast to str, raising a specific AnsibleFilterError with index context."""
try:
return str(obj)
except Exception as e:
raise AnsibleFilterError(
f"url_join: unable to convert part at index {index} to string: {e}"
)
def url_join(parts):
"""
Join a list of URL parts, URL-aware (scheme, path, query).
Args:
parts (list|tuple): URL segments. First element MUST include '<scheme>://'.
Path elements are plain strings.
Query elements must start with '?' or '&' and contain exactly one 'key=value'.
Returns:
str: Joined URL.
Raises:
AnsibleFilterError: with specific, descriptive messages.
"""
# --- basic input validation ---
if parts is None:
raise AnsibleFilterError("url_join: parts must be a non-empty list; got None")
if not isinstance(parts, (list, tuple)):
raise AnsibleFilterError(
f"url_join: parts must be a list/tuple; got {type(parts).__name__}"
)
if len(parts) == 0:
raise AnsibleFilterError("url_join: parts must be a non-empty list")
# --- first element must carry a scheme ---
first_raw = parts[0]
if first_raw is None:
raise AnsibleFilterError(
"url_join: first element must include a scheme like 'https://'; got None"
)
first_str = _to_str_or_error(first_raw, 0)
m = _SCHEME_RE.match(first_str)
if not m:
raise AnsibleFilterError(
"url_join: first element must start with '<scheme>://', e.g. 'https://example.com'; "
f"got '{first_str}'"
)
scheme = m.group(1) # e.g., 'https://', 'ftp://', 'myapp+v1://'
after_scheme = m.group(2).lstrip('/') # strip only leading slashes right after scheme
# --- iterate parts: collect path parts until first query part; then only query parts allowed ---
path_parts = []
query_pairs = []
in_query = False
for i, p in enumerate(parts):
if p is None:
# skip None silently (consistent with path_join-ish behavior)
continue
s = _to_str_or_error(p, i)
# disallow additional scheme in later parts
if i > 0 and "://" in s:
raise AnsibleFilterError(
f"url_join: only the first element may contain a scheme; part at index {i} "
f"looks like a URL with scheme ('{s}')."
)
# first element: replace with remainder after scheme and continue
if i == 0:
s = after_scheme
# check if this is a query element (starts with ? or &)
if s.startswith('?') or s.startswith('&'):
in_query = True
raw_pair = s[1:] # strip the leading ? or &
if raw_pair == '':
raise AnsibleFilterError(
f"url_join: query element at index {i} is empty; expected '?key=value' or '&key=value'"
)
# Disallow multiple pairs in a single element; enforce exactly one key=value
if '&' in raw_pair:
raise AnsibleFilterError(
f"url_join: query element at index {i} must contain exactly one 'key=value' pair "
f"without '&'; got '{s}'"
)
if not _QUERY_PAIR_RE.match(raw_pair):
raise AnsibleFilterError(
f"url_join: query element at index {i} must match 'key=value' (no extra '?', '&', '#'); got '{s}'"
)
query_pairs.append(raw_pair)
else:
# non-query element
if in_query:
# once query started, no more path parts allowed
raise AnsibleFilterError(
f"url_join: path element found at index {i} after query parameters started; "
f"query parts must come last"
)
# normal path part: strip slashes to avoid duplicate '/'
path_parts.append(s.strip('/'))
# normalize path: remove empty chunks
path_parts = [p for p in path_parts if p != '']
# --- build result ---
# path portion
if path_parts:
joined_path = "/".join(path_parts)
base = scheme + joined_path
else:
# no path beyond scheme
base = scheme
# query portion
if query_pairs:
base = base + "?" + "&".join(query_pairs)
return base
class FilterModule(object):
def filters(self):
return {
'url_join': url_join,
}
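The full filter is long; a condensed sketch of the core joining rules (scheme handling, path collapsing, query assembly), assuming valid inputs and omitting all error handling above, behaves like this:

```python
# Condensed illustration of url_join's happy path (no validation).
import re

_SCHEME_RE = re.compile(r'^([a-zA-Z][a-zA-Z0-9+.\-]*://)(.*)$')

def url_join_sketch(parts):
    m = _SCHEME_RE.match(str(parts[0]))
    scheme, rest = m.group(1), m.group(2).lstrip('/')
    path, query = [], []
    for i, p in enumerate(parts):
        s = rest if i == 0 else str(p)
        if s.startswith(('?', '&')):
            query.append(s[1:])      # first '?' and later '&' are normalized below
        else:
            chunk = s.strip('/')     # collapse duplicate slashes between parts
            if chunk:
                path.append(chunk)
    url = scheme + "/".join(path)
    if query:
        url += "?" + "&".join(query)
    return url

print(url_join_sketch(["https://example.com", "api/", "/v1", "?page=2", "&limit=10"]))
# → https://example.com/api/v1?page=2&limit=10
```

The real filter additionally rejects schemes in later parts, malformed `key=value` pairs, and path parts that appear after the query has started.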


@@ -0,0 +1,21 @@
from ansible.errors import AnsibleFilterError
def docker_volume_path(volume_name: str) -> str:
"""
Returns the absolute filesystem path of a Docker volume.
Example:
"akaunting_data" -> "/var/lib/docker/volumes/akaunting_data/_data/"
"""
if not volume_name or not isinstance(volume_name, str):
raise AnsibleFilterError(f"Invalid volume name: {volume_name}")
return f"/var/lib/docker/volumes/{volume_name}/_data/"
class FilterModule(object):
"""Docker volume path filters."""
def filters(self):
return {
"docker_volume_path": docker_volume_path,
}
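A minimal standalone sketch (`ValueError` stands in for `AnsibleFilterError` so it runs outside Ansible):

```python
# Sketch of docker_volume_path: maps a volume name to its default
# filesystem location under /var/lib/docker/volumes/.
def docker_volume_path(volume_name):
    if not volume_name or not isinstance(volume_name, str):
        raise ValueError(f"Invalid volume name: {volume_name}")
    return f"/var/lib/docker/volumes/{volume_name}/_data/"

print(docker_volume_path("akaunting_data"))
# → /var/lib/docker/volumes/akaunting_data/_data/
```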


@@ -1,4 +1,13 @@
CYMAIS_ENVIRONMENT: "production"
SOFTWARE_NAME: "Infinito.Nexus" # Name of the software
# Deployment
ENVIRONMENT: "production" # Possible values: production, development
DEPLOYMENT_MODE: "single" # Use single, if you deploy on one server. Use cluster if you setup in cluster mode.
# If true, sensitive credentials will be masked or hidden from all Ansible task logs
# Recommended to set to true
# @todo needs to be implemented everywhere
MASK_CREDENTIALS_IN_LOGS: true
HOST_CURRENCY: "EUR"
HOST_TIMEZONE: "UTC"
@@ -13,46 +22,44 @@ HOST_TIME_FORMAT: "HH:mm"
HOST_THOUSAND_SEPARATOR: "."
HOST_DECIMAL_MARK: ","
# Deployment mode
deployment_mode: "single" # Use single, if you deploy on one server. Use cluster if you setup in cluster mode.
# Web
WEB_PROTOCOL: "https" # Web protocol type. Use https or http. If you run local you need to change it to http
WEB_PORT: "{{ 443 if WEB_PROTOCOL == 'https' else 80 }}" # Default port web applications will listen to
web_protocol: "https" # Web protocol type. Use https or http. If you run local you need to change it to http
WEB_PORT: "{{ 443 if web_protocol == 'https' else 80 }}" # Default port web applications will listen to
# Websocket
WEBSOCKET_PROTOCOL: "{{ 'wss' if WEB_PROTOCOL == 'https' else 'ws' }}"
## Domain
primary_domain_tld: "localhost" # Top Level Domain of the server
primary_domain_sld: "cymais" # Second Level Domain of the server
primary_domain: "{{primary_domain_sld}}.{{primary_domain_tld}}" # Primary Domain of the server
# Domain
PRIMARY_DOMAIN: "localhost" # Primary Domain of the server
# Server Tact Variables
DNS_PROVIDER: cloudflare # The DNS provider/registrar for the domain
## Hours in which the server is "awake" (100% working). The rest of the time is reserved for maintenance
hours_server_awake: "0..23"
## Random delay for systemd timers to avoid peak loads.
randomized_delay_sec: "5min"
# Runtime Variables for Process Control
activate_all_timers: false # Activates all timers, independent of whether the handlers have been triggered
# This enables debugging in ansible and in the apps
# You SHOULD NOT enable this on production servers
enable_debug: false
dns_provider: cloudflare # The DNS Provider\Registrar for the domain
HOSTING_PROVIDER: hetzner # Provider which hosts the server
# Which ACME method to use: webroot, cloudflare, or hetzner
certbot_acme_challenge_method: "cloudflare"
certbot_credentials_dir: /etc/certbot
certbot_credentials_file: "{{ certbot_credentials_dir }}/{{ certbot_acme_challenge_method }}.ini"
certbot_dns_api_token: "" # Define in inventory file
certbot_dns_propagation_wait_seconds: 40 # How long should the script wait for DNS propagation before continuing
certbot_flavor: san # Possible options: san (recommended, with a dns flavor like cloudflare, or hetzner), wildcard (doesn't function with www redirect), dedicated
certbot_webroot_path: "/var/lib/letsencrypt/" # Path used by Certbot to serve HTTP-01 ACME challenges
certbot_cert_path: "/etc/letsencrypt/live" # Path containing active certificate symlinks for domains
CERTBOT_ACME_CHALLENGE_METHOD: "cloudflare"
CERTBOT_CREDENTIALS_DIR: /etc/certbot
CERTBOT_CREDENTIALS_FILE: "{{ CERTBOT_CREDENTIALS_DIR }}/{{ CERTBOT_ACME_CHALLENGE_METHOD }}.ini"
CERTBOT_DNS_PROPAGATION_WAIT_SECONDS: 300 # How long should the script wait for DNS propagation before continuing
CERTBOT_FLAVOR: san # Possible options: san (recommended, with a dns flavor like cloudflare, or hetzner), wildcard(doesn't function with www redirect), dedicated
## Docker Role Specific Parameters
docker_restart_policy: "unless-stopped"
# Letsencrypt
LETSENCRYPT_WEBROOT_PATH: "/var/lib/letsencrypt/" # Path where Certbot stores challenge webroot files
LETSENCRYPT_BASE_PATH: "/etc/letsencrypt/" # Base directory containing Certbot configuration, account data, and archives
LETSENCRYPT_LIVE_PATH: "{{ LETSENCRYPT_BASE_PATH }}live/" # Symlink directory for the current active certificate and private key
## Docker
DOCKER_RESTART_POLICY: "unless-stopped" # Default restart parameter for docker containers
DOCKER_VARS_FILE: "{{ playbook_dir }}/roles/docker-compose/vars/docker-compose.yml" # File containing docker compose variables used by other services
DOCKER_WHITELISTET_ANON_VOLUMES: [] # Volumes which should be ignored during docker anonymous health check
# Async Configuration
ASYNC_ENABLED: "{{ not MODE_DEBUG | bool }}" # Activate async, deactivated for debugging
ASYNC_TIME: "{{ 300 if ASYNC_ENABLED | bool else omit }}" # Run for max. 5 min
ASYNC_POLL: "{{ 0 if ASYNC_ENABLED | bool else 10 }}" # Don't wait for task
# default value if not set via CLI (-e) or in playbook vars
allowed_applications: []
# helper
_applications_nextcloud_oidc_flavor: >-
@@ -64,10 +71,14 @@ _applications_nextcloud_oidc_flavor: >-
False,
'oidc_login'
if applications
| get_app_conf('web-app-nextcloud','features.ldap',False)
| get_app_conf('web-app-nextcloud','features.ldap',False, True)
else 'sociallogin'
)
}}
# default value if not set via CLI (-e) or in playbook vars
allowed_applications: []
# Role-based access control
# @See https://en.wikipedia.org/wiki/Role-based_access_control
RBAC:
GROUP:
NAME: "/roles" # Name of the group which holds the RBAC roles
CLAIM: "groups" # Name of the claim containing the RBAC groups


@@ -1,8 +1,10 @@
# Mode
# The following modes can be combined with each other
mode_reset: false # Cleans up all CyMaIS files. It's necessary to run the whole playbook and not partial roles when using this function.
mode_test: false # Executes test routines instead of productive routines
mode_update: true # Executes updates
mode_backup: true # Activates the backup before the update procedure
mode_cleanup: true # Cleanup unused files and configurations
MODE_TEST: false # Executes test routines instead of productive routines
MODE_UPDATE: true # Executes updates
MODE_DEBUG: false # This enables debugging in ansible and in the apps, You SHOULD NOT enable this on production servers
MODE_RESET: false # Cleans up all Infinito.Nexus files. It's necessary to run the whole playbook and not partial roles when using this function.
MODE_BACKUP: "{{ MODE_UPDATE }}" # Activates the backup before the update procedure
MODE_CLEANUP: "{{ MODE_DEBUG }}" # Cleanup unused files and configurations
MODE_ASSERT: "{{ MODE_DEBUG }}" # Executes validation tasks during the run.


@@ -0,0 +1,8 @@
# Email Configuration
DEFAULT_SYSTEM_EMAIL:
DOMAIN: "{{ PRIMARY_DOMAIN }}"
HOST: "mail.{{ PRIMARY_DOMAIN }}"
PORT: 465
TLS: true # true for TLS and false for SSL
START_TLS: false
SMTP: true


@@ -1,9 +0,0 @@
# Email Configuration
default_system_email:
domain: "{{primary_domain}}"
host: "mail.{{primary_domain}}"
port: 465
tls: true # true for TLS and false for SSL
start_tls: false
smtp: true
# password: # Needs to be defined in inventory file


@@ -1,38 +0,0 @@
# System maintenance Services
## Timeouts to wait for other services to stop
system_maintenance_lock_timeout_cleanup_services: "15min"
system_maintenance_lock_timeout_storage_optimizer: "10min"
system_maintenance_lock_timeout_backup_services: "1h"
system_maintenance_lock_timeout_heal_docker: "30min"
system_maintenance_lock_timeout_update_docker: "2min"
system_maintenance_lock_timeout_restart_docker: "{{system_maintenance_lock_timeout_update_docker}}"
## Services
### Defined Services for Backup Tasks
system_maintenance_backup_services:
- "sys-bkp-docker-2-loc"
- "svc-bkp-rmt-2-loc"
- "svc-bkp-loc-2-usb"
- "sys-bkp-docker-2-loc-everything"
### Defined Services for System Cleanup
system_maintenance_cleanup_services:
- "sys-cln-backups"
- "sys-cln-disc-space"
- "sys-cln-faild-bkps"
### Services that Manipulate the System
system_maintenance_manipulation_services:
- "sys-rpr-docker-soft"
- "update-docker"
- "svc-opt-ssd-hdd"
- "sys-rpr-docker-hard"
## Total System Maintenance Services
system_maintenance_services: "{{ system_maintenance_backup_services + system_maintenance_cleanup_services + system_maintenance_manipulation_services }}"
### Define Variables for Docker Volume Health services
whitelisted_anonymous_docker_volumes: []


@@ -0,0 +1,32 @@
# Webserver Configuration
# Helper
_nginx_www_dir: "{{ applications | get_app_conf('svc-prx-openresty','docker.volumes.www') }}"
_nginx_dir: "{{ applications | get_app_conf('svc-prx-openresty','docker.volumes.nginx') }}"
_nginx_conf_dir: "{{ _nginx_dir }}conf.d/"
_nginx_http_dir: "{{ _nginx_conf_dir }}http/"
## Nginx-Specific Path Configurations
NGINX:
FILES:
CONFIGURATION: "{{ _nginx_dir }}nginx.conf"
DIRECTORIES:
CONFIGURATION: "{{ _nginx_conf_dir }}" # Configuration directory
HTTP:
GLOBAL: "{{ _nginx_http_dir }}global/" # Contains global configurations which will be loaded into the http block
SERVERS: "{{ _nginx_http_dir }}servers/" # Contains one configuration per domain
MAPS: "{{ _nginx_http_dir }}maps/" # Contains mappings
STREAMS: "{{ _nginx_conf_dir }}streams/" # Contains streams configuration e.g. for ldaps
DATA:
WWW: "{{ _nginx_www_dir }}"
WELL_KNOWN: "/usr/share/nginx/well-known/" # Path where well-known files are stored
HTML: "{{ _nginx_www_dir }}public_html/" # Path where the static homepage files are stored
    FILES: "{{ _nginx_www_dir }}public_files/" # Path where the web-accessible files are stored
    CDN: "{{ _nginx_www_dir }}public_cdn/" # Contains files which will be accessible via the content delivery network
    GLOBAL: "{{ _nginx_www_dir }}global/" # Directory containing files which will be globally accessible, @todo remove this when CSS is migrated to the CDN
CACHE:
GENERAL: "/tmp/cache_nginx_general/" # Directory which nginx uses to cache general data
IMAGE: "/tmp/cache_nginx_image/" # Directory which nginx uses to cache images
USER: "http" # Default nginx user in ArchLinux
# @todo It probably makes sense to distinguish between target and source mount path, so that the config files can be stored in the openresty volumes folder


@@ -1,21 +0,0 @@
# Webserver Configuration
## Nginx-Specific Path Configurations
nginx:
directories:
configuration: "/etc/nginx/conf.d/" # Configuration directory
http:
global: "/etc/nginx/conf.d/http/global/" # Contains global configurations which will be loaded into the http block
servers: "/etc/nginx/conf.d/http/servers/" # Contains one configuration per domain
maps: "/etc/nginx/conf.d/http/maps/" # Contains mappings
streams: "/etc/nginx/conf.d/streams/" # Contains streams configuration e.g. for ldaps
data:
well_known: "/usr/share/nginx/well-known/" # Path where well-known files are stored
html: "/var/www/public_html/" # Path where the static homepage files are stored
    files: "/var/www/public_files/" # Path where the web-accessible files are stored
    global: "/var/www/global/" # Directory containing files which will be globally accessible
cache:
general: "/tmp/cache_nginx_general/" # Directory which nginx uses to cache general data
image: "/tmp/cache_nginx_image/" # Directory which nginx uses to cache images
user: "http" # Default nginx user in ArchLinux
iframe: true # Allows applications to be loaded in iframe


@@ -0,0 +1,9 @@
# Path Variables for Key Directories and Scripts
PATH_ADMINISTRATOR_HOME: "/home/administrator/"
PATH_ADMINISTRATOR_SCRIPTS: "/opt/scripts/"
PATH_SYSTEMCTL_SCRIPTS: "{{ [ PATH_ADMINISTRATOR_SCRIPTS, 'systemctl' ] | path_join }}"
PATH_DOCKER_COMPOSE_INSTANCES: "/opt/docker/"
PATH_SYSTEM_LOCK_SCRIPT: "/opt/scripts/sys-lock.py"
PATH_SYSTEM_SERVICE_DIR: "/etc/systemd/system"
PATH_DOCKER_COMPOSE_PULL_LOCK_DIR: "/run/ansible/compose-pull/"


@@ -1,6 +0,0 @@
# Path Variables for Key Directories and Scripts
path_administrator_home: "/home/administrator/"
path_administrator_scripts: "/opt/scripts/"
path_docker_compose_instances: "/opt/docker/"
path_system_lock_script: "/opt/scripts/sys-lock.py"

Some files were not shown because too many files have changed in this diff.