Compare commits

...

54 Commits

Author SHA1 Message Date
445c94788e Refactor: consolidate pkgmgr updates and remove legacy roles
Details:
- Added pkgmgr update task directly in pkgmgr role (pkgmgr pull --all)
- Removed deprecated update-pkgmgr role and references
- Removed deprecated update-pip role and references
- Simplified update-compose by dropping update-pkgmgr include

https://chatgpt.com/share/68bbeff1-27a0-800f-bef3-03ab597595fd
2025-09-06 10:46:39 +02:00
aac9704e8b Refactor: remove legacy update-docker role and references
Details:
- Removed update-docker role (README, meta, vars, tasks, script)
- Cleaned references from group_vars, update-compose, and docs
- Adjusted web-app-matrix role (removed @todo pointing to update-docker)
- Updated administrator guide (update-docker no longer mentioned)

Ref: https://chatgpt.com/share/68bbeff1-27a0-800f-bef3-03ab597595fd
2025-09-06 10:32:33 +02:00
a57a5f8828 Refactor: remove Python-based Listmonk upgrade logic and implement upgrade as Ansible task
Details:
- Removed upgrade_listmonk() function and related calls from update-docker script
- Added dedicated Ansible task in web-app-listmonk role to run non-interactive DB/schema upgrade
- Conditional execution via MODE_UPDATE

Ref: https://chatgpt.com/share/68bbeff1-27a0-800f-bef3-03ab597595fd
2025-09-06 10:25:41 +02:00
90843726de keycloak: update realm mail settings to use smtp_server.json.j2 (SPOT); merge via kc_merge_path; fix display name and SSL handling
See: https://chatgpt.com/share/68bb0b25-96bc-800f-8ff7-9ca8d7c7af11
2025-09-05 18:09:33 +02:00
d25da76117 Solved wrong variable bug 2025-09-05 17:30:08 +02:00
d48a1b3c0a Solved missing variable bugs. Role is not fully implemented; pausing development on it for the moment 2025-09-05 17:07:15 +02:00
2839d2e1a4 Intermediate commit: Magento implementation 2025-09-05 17:01:13 +02:00
00c99e58e9 Cleaned up bridgy fed 2025-09-04 17:09:35 +02:00
904040589e Added correct variables and health check 2025-09-04 15:13:10 +02:00
9f3d300bca Removed unnecessary handlers 2025-09-04 14:04:53 +02:00
9e253a2d09 Bluesky: Patch hardcoded IPCC_URL and proxy /ipcc
- Added Ansible replace task to override IPCC_URL in geolocation.tsx to same-origin '/ipcc'
- Extended Nginx extra_locations.conf to proxy /ipcc requests to https://bsky.app/ipcc
- Ensures frontend avoids CORS errors when fetching IP geolocation

See: https://chatgpt.com/share/68b97be3-0278-800f-9ee0-94389ca3ac0c
2025-09-04 13:45:57 +02:00
49120b0dcf Added more CSP headers 2025-09-04 13:36:35 +02:00
b6f91ab9d3 changed database_user to database_username 2025-09-04 12:45:22 +02:00
77e8e7ed7e Magento 2.4.8 refactor:
- Switch to split containers (markoshust/magento-php:8.2-fpm + magento-nginx:latest)
- Disable central DB; use app-local MariaDB and pin to 11.4
- Composer bootstrap of Magento in php container (Adobe repo keys), idempotent via creates
- Make setup:install idempotent; run as container user 'app'
- Wire OpenSearch (security disabled) and depends_on ordering
- Add credentials schema (adobe_public_key/adobe_private_key)
- Update vars for php/nginx/search containers + MAGENTO_USER
- Remove legacy docs (Administration.md, Upgrade.md)
Context: changes derived from our ChatGPT session about getting Magento 2.4.8 running with MariaDB 11.4.
Conversation: https://chatgpt.com/share/68b8dc30-361c-800f-aa69-88df514cb160
2025-09-04 12:45:03 +02:00
32bc17e0c3 Optimized whitespace 2025-09-04 12:41:11 +02:00
e294637cb6 Changed db config path attribute 2025-09-04 12:34:13 +02:00
577767bed6 sys-svc-rdbms: Refactor database service templates and add version support for Magento
- Unified Jinja2 variable spacing in tasks and templates
- Introduced database_image and database_version variables in vars/database.yml
- Updated mariadb.yml.j2 and postgres.yml.j2 to use {{ database_image }}:{{ database_version }}
- Ensured env file paths and includes are consistent
- Prepared support for versioned database images (needed for Magento deployment)

Ref: https://chatgpt.com/share/68b96a9d-c100-800f-856f-cd23d1eda2ed
2025-09-04 12:32:34 +02:00
e77f8da510 Added debug options to mastodon 2025-09-04 11:50:14 +02:00
4738b263ec Added docker_volume_path filter_plugin 2025-09-04 11:49:40 +02:00
0a588023a7 feat(bluesky): fix CORS by serving /config same-origin and pinning BAPP_CONFIG_URL
- Add `server.config_upstream_url` default in `roles/web-app-bluesky/config/main.yml`
  to define upstream for /config (defaults to https://ip.bsky.app/config).
- Introduce front-proxy injection `extra_locations.conf.j2` that:
  - proxies `/config` to the upstream,
  - sets SNI and correct Host header,
  - normalizes CORS headers for same-origin consumption.
- Wire the proxy injection only for the Web domain in
  `roles/web-app-bluesky/tasks/main.yml` via `proxy_extra_configuration`.
- Force fresh social-app checkout and patch
  `src/state/geolocation.tsx` to `const BAPP_CONFIG_URL = '/config'`
  in `roles/web-app-bluesky/tasks/02_social_app.yml`; notify `docker compose build` and `up`.
- Tidy and re-group PDS env in `roles/web-app-bluesky/templates/env.j2` (no functional change).
- Add vars in `roles/web-app-bluesky/vars/main.yml`:
  - `BLUESKY_FRONT_PROXY_CONTENT` (renders the extra locations),
  - `BLUESKY_CONFIG_UPSTREAM_URL` (reads `server.config_upstream_url`).

Security/Scope:
- Only affects the Bluesky web frontend (same-origin `/config`); PDS/API and AppView remain unchanged.

Refs:
- Conversation: https://chatgpt.com/share/68b8dd3a-2100-800f-959e-1495f6320aab
2025-09-04 02:29:10 +02:00
d2fa90774b Added fediverse bridge draft 2025-09-04 02:26:27 +02:00
0e72dcbe36 feat(magento): switch to ghcr.io/alexcheng1982/docker-magento2:2.4.6-p3; update Compose/Env/Tasks/Docs
• Docs: updated to MAGENTO_VOLUME; removed Installation/User_Administration guides
• Compose: volume path → /var/www/html; switched variables to MAGENTO_*/MYSQL_*/OPENSEARCH_*
• Env: new variable set + APACHE_SERVERNAME
• Task: setup:install via docker compose exec (multiline form)
• Schema: removed obsolete credentials definition
Link: https://chatgpt.com/share/68b8dc30-361c-800f-aa69-88df514cb160
2025-09-04 02:25:49 +02:00
4f8ce598a9 Mastodon: allow internal chess host & refactor var names; OpenLDAP: safer get_app_conf
- Add ALLOWED_PRIVATE_ADDRESSES to .env (from svc-db-postgres) to handle 422 Mastodon::PrivateNetworkAddressError
- Switch docker-compose to MASTODON_* variables and align vars/main.yml
- Always run 01_setup.yml during deployment (removed conditional flag)
- OpenLDAP: remove implicit True default on network.local to avoid unintended truthy behavior

Context: chess.infinito.nexus resolved to 192.168.200.30 (private IP) from Mastodon; targeted allowlist unblocks federation lookups.

Ref: https://chat.openai.com/share/REPLACE_WITH_THIS_CONVERSATION_LINK
2025-09-03 21:44:47 +02:00
3769e66d8d Updated CSP for bluesky 2025-09-03 20:55:21 +02:00
33a5fadf67 web-app-chess: fix Corepack/Yarn EACCES and switch to ARG-driven Dockerfile
• Add roles/web-app-chess/files/Dockerfile using build ARGs (CHESS_VERSION, CHESS_REPO_URL, CHESS_REPO_REF, CHESS_ENTRYPOINT_REL, CHESS_ENTRYPOINT_INT, CHESS_APP_DATA_DIR, CONTAINER_PORT). Enable Corepack/Yarn as root in the runtime stage to avoid EACCES on /usr/local/bin symlinks, then drop privileges to 'node'.

• Delete Jinja-based templates/Dockerfile.j2; docker-compose now passes former Jinja vars via build.args. • Update templates/docker-compose.yml.j2 to forward all required build args. • Update config/main.yml: add CSP flag 'script-src-elem: unsafe-inline'.

Ref: https://chatgpt.com/share/68b88d3d-3bd8-800f-9723-e8df0cdc37e2
2025-09-03 20:47:50 +02:00
699a6b6f1e feat(web-app-magento): add Magento role + network/ports
- add role files (docs, vars, config, tasks, schema, templates)

- networks: add web-app-magento 192.168.103.208/28

- ports: add localhost http 8052

Conversation: https://chatgpt.com/share/68b8820f-f864-800f-8819-da509b99cee2
2025-09-03 20:00:01 +02:00
61c29eee60 web-app-chess: build/runtime hardening & feature enablement
Build: use Yarn 4 via Corepack; immutable install with inline builds.

Runtime: enable Corepack as user 'node', use project-local cache (/app/.yarn/cache), add curl; fix ownership.

Entrypoint: generate keys in correct dir; run 'yarn install --immutable --inline-builds' before migrations; wait for Postgres.

Config: enable matomo/css/desktop; notify 'docker compose build' on entrypoint changes.

Docs: rename README title to 'Chess'.

Ref: ChatGPT conversation (2025-09-03) — https://chatgpt.com/share/68b88126-7a6c-800f-acae-ae61ed577f46
2025-09-03 19:56:13 +02:00
d5204fb5c2 Removed unnecessary env loading 2025-09-03 17:41:53 +02:00
751615b1a4 Changed 09_ports.yml to 10_ports.yml 2025-09-03 17:41:14 +02:00
e2993d2912 Added more CSP urls for bluesky 2025-09-03 17:31:29 +02:00
24b6647bfb Corrected variable 2025-09-03 17:30:31 +02:00
d2dc2eab5f web-app-bluesky: refactor role, add Cloudflare DNS integration, split tasks
Changes: add AppView port; add CSP whitelist; new tasks (01_pds, 02_social_app, 03_dns); switch templates to BLUESKY_* vars; update docker-compose and env; TCP healthcheck; remove admin_password from schema.

Conversation context: https://chatgpt.com/share/68b85276-e0ec-800f-90ec-480a1d528593
2025-09-03 16:37:35 +02:00
a1130e33d7 web-app-chess: refactor runtime & entrypoint
- Move entrypoint to files/ and deploy via copy
- Parameterize APP_KEY_FILE, data dir, and entrypoint paths
- Require explicit PORT/PG envs (remove fallbacks)
- Drop stray header from config/main.yml
- Dockerfile: use templated data dir & entrypoint; keep node user
- Compose: set custom image, adjust volume mapping
- env: derive APP_SCHEME from WEB_PROTOCOL; NODE_ENV from ENVIRONMENT
- tasks: add 01_core and simplify main to include it

Ref: https://chatgpt.com/share/68b851c5-4dd8-800f-8e9e-22b985597b8f
2025-09-03 16:34:04 +02:00
df122905eb mailu: include base defaults for oletools (env_file/LD_PRELOAD)
Add base include to oletools service so it inherits env_file (LD_PRELOAD=/usr/lib/libhardened_malloc.so) and other defaults. Fixes crash: PermissionError: '/proc/cpuinfo' during hardened_malloc compatibility probe when LD_PRELOAD was absent. Aligns oletools with other Mailu services.

Refs: ChatGPT discussion – https://chatgpt.com/share/68b837ba-c9cc-800f-b5d9-62b60d6fafd9
2025-09-03 14:42:50 +02:00
d093a22d61 Added correct CSP for JIRA 2025-09-03 11:35:24 +02:00
5e550ce3a3 sys-ctl-rpr-docker-soft: switch to STRICT label mode and adapt tests
- script.py now resolves docker-compose project and working_dir strictly from container labels
- removed container-name fallback logic
- adjusted sys-ctl-hlth-docker-container to include sys-ctl-rpr-docker-soft
- cleaned up sys-svc-docker dependencies
- updated unit tests to mock docker inspect and os.path.isfile for STRICT mode

Conversation: https://chatgpt.com/share/68b80927-b800-800f-a909-0fe8d110fd0e
2025-09-03 11:24:14 +02:00
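The STRICT resolution described in this commit can be sketched in plain Python; here `labels` stands in for the dict that `docker inspect` returns under `.Config.Labels`, and the function name is illustrative, not the script's actual API:

```python
def resolve_compose_project(labels: dict) -> tuple:
    """Resolve (project, working_dir) strictly from compose labels; no name fallback."""
    project = (labels.get("com.docker.compose.project") or "").strip()
    workdir = (labels.get("com.docker.compose.project.working_dir") or "").strip()
    if not project or not workdir:
        # STRICT mode: missing labels are an error, never a cue to guess from the container name
        raise RuntimeError("container lacks compose labels")
    return project, workdir

print(resolve_compose_project({
    "com.docker.compose.project": "mailu",
    "com.docker.compose.project.working_dir": "/opt/docker/mailu",
}))
```

Containers started outside Compose carry neither label, which is exactly the case the removed name-fallback used to paper over.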
0ada12e3ca Enabled rpr service via failed health check instead of timer 2025-09-03 10:46:46 +02:00
1a5ce4a7fa web-app-bookwyrm, web-app-confluence:
- Fix BookWyrm email SSL/TLS handling (use ternary without 'not' for clarity)
- Add truststore_enabled flag in Confluence config and vars
- Wire JVM_SUPPORT_RECOMMENDED_ARGS to disable UPM signature check if truststore is disabled
- Add placeholder style.css.j2 for Confluence

See conversation: https://chatgpt.com/share/68b80024-7100-800f-a2fe-ba8b9f5cec05
2025-09-03 10:45:41 +02:00
a9abb3ce5d Added unsafe-eval csp to jira 2025-09-03 09:43:07 +02:00
71ceb339fc Fix Confluence & BookWyrm setup:
- Add docker compose build trigger in docker-compose tasks
- Cleanup svc-prx-openresty vars
- Enable unsafe-inline CSP flags for BookWyrm, Confluence, Jira to allow Atlassian inline scripts
- Generalize CONFLUENCE_HOME usage in vars, env and docker-compose
- Ensure confluence-init.properties written with correct home
- Add JVM_SUPPORT_RECOMMENDED_ARGS to pass atlassian.home
- Update README to reference {{ CONFLUENCE_HOME }}

See: https://chatgpt.com/share/68b7582a-aeb8-800f-a14f-e98c5b4e6c70
2025-09-02 22:49:02 +02:00
61bba3d2ef feat(bookwyrm): production-ready runtime + Redis wiring
- Dockerfile: build & install gunicorn wheels
- compose: run initdb before start; use `python -m gunicorn`
- env: add POSTGRES_* and BookWyrm Redis aliases (BROKER/ACTIVITY/CACHE) + CACHE_URL
- vars: add cache URL, DB indices, and URL aliases for Redis

Ref: https://chatgpt.com/share/68b7492b-3200-800f-80c4-295bc3233d68
2025-09-02 21:45:11 +02:00
0bde4295c7 Implemented correct confluence version 2025-09-02 17:01:58 +02:00
8059f272d5 Refactor Confluence and Jira env templates to use official Atlassian ATL_* database variables instead of unused custom placeholders. Ensures containers connect directly to PostgreSQL without relying on CONFLUENCE_DATABASE_* or JIRA_DATABASE_* vars. See conversation: https://chatgpt.com/share/68b6ddfd-3c44-800f-a57e-244dbd7ceeb5 2025-09-02 14:07:38 +02:00
7c814e6e83 BookWyrm: update Dockerfile and env handling
- Remove ARG BOOKWYRM_VERSION default, use Jinja variable directly
- Add proper SMTP environment variables mapping (EMAIL_HOST, EMAIL_PORT, TLS/SSL flags, user, password, default_from)
- Ensure env.j2 uses BookWyrm-expected names only
Ref: ChatGPT conversation 2025-09-02 https://chatgpt.com/share/68b6dc73-3784-800f-9a7e-340be498a412
2025-09-02 14:01:04 +02:00
d760c042c2 Atlassian JVM sizing: cast memory vars to int before floor-division
Apply |int to TOTAL_MB and dependent values to prevent 'unsupported operand type(s) for //' during templating in Confluence and Jira roles.

Context: discussion on 2025-09-02 — https://chatgpt.com/share/68b6d386-4490-800f-9bad-aa7be1571ebe
2025-09-02 13:22:59 +02:00
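The failure mode is easy to reproduce outside Ansible: templated facts arrive as strings, and `//` on a string raises exactly the error the commit fixes until the value is cast (the variable names below are illustrative):

```python
total_mb = "7821"  # memory fact as Jinja sees it: a string, not an int

try:
    total_mb // 2  # what the template effectively did before the fix
except TypeError as e:
    print(e)  # unsupported operand type(s) for //: 'str' and 'int'

heap_mb = int(total_mb) // 2  # the |int cast makes floor division valid
print(heap_mb)
```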
6cac8085a8 feat(web-app-chess): add castling.club role with ports, networks, and build setup
- Added network subnet (192.168.103.192/28) and port 8050 for web-app-chess
- Replaced stub README with usability-focused description of castling.club
- Implemented config, vars, meta, and tasks for web-app-chess
- Added Dockerfile, docker-compose.yml, env, and docker-entrypoint.sh templates
- Integrated entrypoint asset placement
- Updated meta to reflect usability and software features

Ref: https://chatgpt.com/share/68b6c65a-3de8-800f-86b2-a110920cd50e
2025-09-02 13:21:15 +02:00
3a83f3d14e Refactor BookWyrm role: switch to source-built Dockerfile, update README/meta for usability, add env improvements (ALLOWED_HOSTS, Redis vars, Celery broker), and pin version v0.7.5. See https://chatgpt.com/share/68b6d273-abc4-800f-ad3f-e1a5b9f8dad0 2025-09-02 13:18:32 +02:00
61d852c508 Added ports and networks for bookwyrm, jira, confluence 2025-09-02 12:08:20 +02:00
188b098503 Confluence/Jira roles: add READMEs, switch to custom images, proxy/JVM envs, and integer-safe heap sizing
Confluence: README added; demo disables OIDC/LDAP; Dockerfile overlay; docker-compose now uses CONFLUENCE_CUSTOM_IMAGE and DB depends include; env.j2 adds ATL_* and JVM_*; vars use integer math (//) for Xmx/Xms and expose CUSTOM_IMAGE.

Jira: initial role skeleton with README, config/meta/tasks; Dockerfile overlay; docker-compose using JIRA_CUSTOM_IMAGE and DB depends include; env.j2 with proxy + JVM envs; vars with integer-safe memory sizing.

Context: https://chatgpt.com/share/68b6b592-2250-800f-b68e-b37ae98dbe70
2025-09-02 12:07:34 +02:00
bc56940e55 Implement initial BookWyrm role
- Removed obsolete TODO.md
- Added config/main.yml with service, feature, CSP, and registration settings
- Added schema/main.yml defining vaulted SECRET_KEY (alphanumeric)
- Added tasks/main.yml to load stateful stack
- Added Dockerfile.j2 ensuring data/media dirs
- Added docker-compose.yml.j2 with application, worker, redis, volumes
- Added env.j2 with registration, secrets, DB, Redis, OIDC support
- Extended vars/main.yml with BookWyrm variables and OIDC, Docker, Redis settings
- Updated meta/main.yml with logo and run_after dependencies

Ref: https://chatgpt.com/share/68b6c060-3a0c-800f-89f8-e114a16a4a80
2025-09-02 12:03:11 +02:00
5dfc2efb5a Used port variable 2025-09-02 11:59:50 +02:00
7f9dc65b37 Add README.md files for web-app-bookwyrm, web-app-postmarks, and web-app-socialhome roles
Introduce integration test to ensure all web-app-* roles contain a README.md (required for Web App Desktop visibility)

See: https://chatgpt.com/share/68b6be49-7b78-800f-a3ff-bf922b4b083f
2025-09-02 11:52:34 +02:00
163a925096 fix(docker-compose): proper lock path + robust pull for buildable services
- Store pull lock under ${PATH_DOCKER_COMPOSE_PULL_LOCK_DIR}/<hash>.lock so global cleanup removes it reliably
- If any service defines `build:`, run `docker compose build --pull` before pulling
- Use `docker compose pull --ignore-buildable` when supported; otherwise tolerate pull failures for locally built images

This prevents failures when images are meant to be built locally (e.g., custom images) and ensures lock handling is consistent.

Ref: https://chatgpt.com/share/68b6b592-2250-800f-b68e-b37ae98dbe70
2025-09-02 11:15:28 +02:00
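The lock-path scheme is just a SHA-1 of the instance directory dropped into one shared lock directory, so a recursive cleanup of that directory removes every lock reliably. A sketch with illustrative paths (`lock_dir` stands in for `PATH_DOCKER_COMPOSE_PULL_LOCK_DIR`):

```python
import hashlib
import os

def pull_lock_path(lock_dir: str, instance_dir: str) -> str:
    """Hash the compose instance path into <lock_dir>/<sha1>.lock."""
    digest = hashlib.sha1(instance_dir.encode()).hexdigest()
    return os.path.join(lock_dir, digest + ".lock")

# hypothetical paths for demonstration only
print(pull_lock_path("/run/compose-pull-locks", "/opt/docker/mailu"))
```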
a8c88634b5 cleanup: remove unused handlers and add integration test for unused handlers
Removed obsolete handlers from roles (VirtualBox, backup-to-USB, OpenLDAP)
and introduced an integration test under tests/integration/test_handlers_invoked.py
that ensures all handlers defined in roles/*/handlers are actually notified
somewhere in the code base. This keeps the repository clean by preventing
unused or forgotten handlers from accumulating.

Ref: https://chatgpt.com/share/68b6b28e-4388-800f-87d2-34dfb34b8d36
2025-09-02 11:02:30 +02:00
137 changed files with 2396 additions and 804 deletions

View File

@@ -11,7 +11,7 @@ sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')
from module_utils.entity_name_utils import get_entity_name
# Paths to the group-vars files
-PORTS_FILE = './group_vars/all/09_ports.yml'
+PORTS_FILE = './group_vars/all/10_ports.yml'
NETWORKS_FILE = './group_vars/all/09_networks.yml'
ROLE_TEMPLATE_DIR = './templates/roles/web-app'
ROLES_DIR = './roles'

View File

@@ -15,7 +15,7 @@ Follow these guides to install and configure Infinito.Nexus:
- **Networking & VPN** - Configure `WireGuard`, `OpenVPN`, and `Nginx Reverse Proxy`.
## Managing & Updating Infinito.Nexus 🔄
-- Regularly update services using `update-docker`, `update-pacman`, or `update-apt`.
+- Regularly update services using `update-pacman` or `update-apt`.
- Monitor system health with `sys-ctl-hlth-btrfs`, `sys-ctl-hlth-webserver`, and `sys-ctl-hlth-docker-container`.
- Automate system maintenance with `sys-lock`, `sys-ctl-cln-bkps`, and `sys-ctl-rpr-docker-hard`.

View File

@@ -0,0 +1,21 @@
from ansible.errors import AnsibleFilterError
def docker_volume_path(volume_name: str) -> str:
"""
Returns the absolute filesystem path of a Docker volume.
Example:
"akaunting_data" -> "/var/lib/docker/volumes/akaunting_data/_data/"
"""
if not volume_name or not isinstance(volume_name, str):
raise AnsibleFilterError(f"Invalid volume name: {volume_name}")
return f"/var/lib/docker/volumes/{volume_name}/_data/"
class FilterModule(object):
"""Docker volume path filters."""
def filters(self):
return {
"docker_volume_path": docker_volume_path,
}
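Outside Ansible the filter is an ordinary function; a self-contained check of the mapping (with `ValueError` standing in for `AnsibleFilterError` so no Ansible install is needed):

```python
def docker_volume_path(volume_name: str) -> str:
    """Map a named Docker volume to its default mountpoint on the host."""
    if not volume_name or not isinstance(volume_name, str):
        raise ValueError(f"Invalid volume name: {volume_name}")
    return f"/var/lib/docker/volumes/{volume_name}/_data/"

print(docker_volume_path("akaunting_data"))
# → /var/lib/docker/volumes/akaunting_data/_data/
```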

View File

@@ -12,7 +12,6 @@ SYS_SERVICE_BACKUP_RMT_2_LOC: "{{ 'svc-bkp-rmt-2-loc' | get_se
SYS_SERVICE_BACKUP_DOCKER_2_LOC: "{{ 'sys-ctl-bkp-docker-2-loc' | get_service_name(SOFTWARE_NAME) }}"
SYS_SERVICE_REPAIR_DOCKER_SOFT: "{{ 'sys-ctl-rpr-docker-soft' | get_service_name(SOFTWARE_NAME) }}"
SYS_SERVICE_REPAIR_DOCKER_HARD: "{{ 'sys-ctl-rpr-docker-hard' | get_service_name(SOFTWARE_NAME) }}"
-SYS_SERVICE_UPDATE_DOCKER: "{{ 'update-docker' | get_service_name(SOFTWARE_NAME) }}"
## On Failure
SYS_SERVICE_ON_FAILURE_COMPOSE: "{{ ('sys-ctl-alm-compose@') | get_service_name(SOFTWARE_NAME, False) }}%n.service"
@@ -46,8 +45,7 @@ SYS_SERVICE_GROUP_MANIPULATION: >
SYS_SERVICE_GROUP_CLEANUP +
SYS_SERVICE_GROUP_REPAIR +
SYS_SERVICE_GROUP_OPTIMIZATION +
-SYS_SERVICE_GROUP_MAINTANANCE +
-[ SYS_SERVICE_UPDATE_DOCKER ]
+SYS_SERVICE_GROUP_MAINTANANCE
) | sort
}}

View File

@@ -37,7 +37,6 @@ SYS_SCHEDULE_CLEANUP_FAILED_BACKUPS: "*-*-* 12:00:00"
### Schedule for repair services
SYS_SCHEDULE_REPAIR_BTRFS_AUTO_BALANCER: "Sat *-*-01..07 00:00:00" # Execute btrfs auto balancer every first Saturday of a month
-SYS_SCHEDULE_REPAIR_DOCKER_SOFT: "*-*-* {{ HOURS_SERVER_AWAKE }}:30:00" # Heal unhealthy docker instances once per hour
SYS_SCHEDULE_REPAIR_DOCKER_HARD: "Sun *-*-* 08:00:00" # Restart docker instances every Sunday at 8:00 AM
### Schedule for backup tasks

View File

@@ -10,8 +10,8 @@ defaults_networks:
# /28 Networks, 14 Usable Ip Addresses
web-app-akaunting:
subnet: 192.168.101.0/28
-# Free:
-# subnet: 192.168.101.16/28
+web-app-confluence:
+subnet: 192.168.101.16/28
web-app-baserow:
subnet: 192.168.101.32/28
web-app-mobilizon:
@@ -34,8 +34,8 @@ defaults_networks:
subnet: 192.168.101.176/28
web-app-listmonk:
subnet: 192.168.101.192/28
-# Free:
-# subnet: 192.168.101.208/28
+web-app-jira:
+subnet: 192.168.101.208/28
web-app-matomo:
subnet: 192.168.101.224/28
web-app-mastodon:
@@ -48,8 +48,8 @@ defaults_networks:
subnet: 192.168.102.16/28
web-app-moodle:
subnet: 192.168.102.32/28
-# Free:
-# subnet: 192.168.102.48/28
+web-app-bookwyrm:
+subnet: 192.168.102.48/28
web-app-nextcloud:
subnet: 192.168.102.64/28
web-app-openproject:
@@ -96,6 +96,12 @@ defaults_networks:
subnet: 192.168.103.160/28
web-svc-logout:
subnet: 192.168.103.176/28
+web-app-chess:
+subnet: 192.168.103.192/28
+web-app-magento:
+subnet: 192.168.103.208/28
+web-app-bridgy-fed:
+subnet: 192.168.103.224/28
# /24 Networks / 254 Usable Clients
web-app-bigbluebutton:

View File

@@ -26,7 +26,7 @@ ports:
web-app-gitea: 8002
web-app-wordpress: 8003
web-app-mediawiki: 8004
-# Free : 8005
+web-app-confluence: 8005
web-app-yourls: 8006
web-app-mailu: 8007
web-app-elk: 8008
@@ -36,7 +36,7 @@ ports:
web-app-funkwhale: 8012
web-app-roulette-wheel: 8013
web-app-joomla: 8014
-# Free: 8015
+web-app-jira: 8015
web-app-pgadmin: 8016
web-app-baserow: 8017
web-app-matomo: 8018
@@ -70,6 +70,11 @@ ports:
web-app-pretix: 8046
web-app-mig: 8047
web-svc-logout: 8048
+web-app-bookwyrm: 8049
+web-app-chess: 8050
+web-app-bluesky_view: 8051
+web-app-magento: 8052
+web-app-bridgy-fed: 8053
web-app-bigbluebutton: 48087 # This port is predefined by bbb. @todo Try to change this to a 8XXX port
public:
# The following ports should be changed to 22 on the subdomain via stream mapping

View File

@@ -1,4 +0,0 @@
---
- name: reload virtualbox kernel modules
become: true
command: vboxreload

View File

@@ -15,10 +15,17 @@
- name: docker compose pull
shell: |
set -euo pipefail
-lock="{{ [ PATH_DOCKER_COMPOSE_PULL_LOCK_DIR, docker_compose.directories.instance ] | path_join | hash('sha1') }}"
+lock="{{ [ PATH_DOCKER_COMPOSE_PULL_LOCK_DIR, (docker_compose.directories.instance | hash('sha1')) ~ '.lock' ] | path_join }}"
if [ ! -e "$lock" ]; then
mkdir -p "$(dirname "$lock")"
-docker compose pull
+if docker compose config | grep -qE '^[[:space:]]+build:'; then
+docker compose build --pull
+fi
+if docker compose pull --help 2>/dev/null | grep -q -- '--ignore-buildable'; then
+docker compose pull --ignore-buildable
+else
+docker compose pull || true
+fi
: > "$lock"
echo "pulled"
fi

View File

@@ -5,7 +5,9 @@
loop:
- "{{ application_id | abs_role_path_by_application_id }}/templates/Dockerfile.j2"
- "{{ application_id | abs_role_path_by_application_id }}/files/Dockerfile"
-notify: docker compose up
+notify:
+- docker compose up
+- docker compose build
register: create_dockerfile_result
failed_when:
- create_dockerfile_result is failed

View File

@@ -43,3 +43,7 @@
chdir: "{{ PKGMGR_INSTALL_PATH }}"
executable: /bin/bash
become: true
+- name: "Update all repositories with pkgmgr"
+command: "pkgmgr pull --all"
+when: MODE_UPDATE | bool

View File

@@ -1,6 +0,0 @@
- name: "reload svc-bkp-loc-2-usb service"
systemd:
name: "{{ 'svc-bkp-loc-2-usb' | get_service_name(SOFTWARE_NAME) }}"
state: reloaded
daemon_reload: yes

View File

@@ -1,55 +0,0 @@
- name: Load memberof module from file in OpenLDAP container
shell: >
docker exec -i {{ openldap_name }} ldapmodify -Y EXTERNAL -H ldapi:/// -f {{ openldap_ldif_docker_path }}configuration/01_member_of_configuration.ldif
listen:
- "Import configuration LDIF files"
- "Import all LDIF files"
# @todo Remove the following ignore errors when setting up a new server
# Just here because debugging would take to much time
ignore_errors: true
- name: Refint Module Activation for OpenLDAP
shell: >
docker exec -i {{ openldap_name }} ldapadd -Y EXTERNAL -H ldapi:/// -f {{ openldap_ldif_docker_path }}configuration/02_member_of_configuration.ldif
listen:
- "Import configuration LDIF files"
- "Import all LDIF files"
register: ldapadd_result
failed_when: ldapadd_result.rc not in [0, 68]
# @todo Remove the following ignore errors when setting up a new server
# Just here because debugging would take to much time
ignore_errors: true
- name: "Import schemas"
shell: >
docker exec -i {{ openldap_name }} ldapadd -Y EXTERNAL -H ldapi:/// -f "{{ openldap_ldif_docker_path }}schema/{{ item | basename | regex_replace('\.j2$', '') }}"
register: ldapadd_result
changed_when: "'adding new entry' in ldapadd_result.stdout"
failed_when: ldapadd_result.rc not in [0, 80]
listen:
- "Import schema LDIF files"
- "Import all LDIF files"
loop: "{{ lookup('fileglob', role_path ~ '/templates/ldif/schema/*.j2', wantlist=True) }}"
- name: Refint Overlay Configuration for OpenLDAP
shell: >
docker exec -i {{ openldap_name }} ldapmodify -Y EXTERNAL -H ldapi:/// -f {{ openldap_ldif_docker_path }}configuration/03_member_of_configuration.ldif
listen:
- "Import configuration LDIF files"
- "Import all LDIF files"
register: ldapadd_result
failed_when: ldapadd_result.rc not in [0, 68]
# @todo Remove the following ignore errors when setting up a new server
# Just here because debugging would take to much time
ignore_errors: true
- name: "Import users, groups, etc. to LDAP"
shell: >
docker exec -i {{ openldap_name }} ldapadd -x -D "{{LDAP.DN.ADMINISTRATOR.DATA}}" -w "{{ LDAP.BIND_CREDENTIAL }}" -c -f "{{ openldap_ldif_docker_path }}groups/{{ item | basename | regex_replace('\.j2$', '') }}"
register: ldapadd_result
changed_when: "'adding new entry' in ldapadd_result.stdout"
failed_when: ldapadd_result.rc not in [0, 20, 68, 65]
listen:
- "Import groups LDIF files"
- "Import all LDIF files"
loop: "{{ query('fileglob', role_path ~ '/templates/ldif/groups/*.j2') | sort }}"

View File

@@ -37,7 +37,7 @@
- name: "Reset LDAP Credentials"
include_tasks: 01_credentials.yml
when:
-- applications | get_app_conf(application_id, 'network.local', True)
+- applications | get_app_conf(application_id, 'network.local')
- applications | get_app_conf(application_id, 'provisioning.credentials', True)
- name: "create directory {{openldap_ldif_host_path}}{{item}}"

View File

@@ -21,4 +21,4 @@ openldap_version: "{{ applications | get_app_conf(application_id,
openldap_volume: "{{ applications | get_app_conf(application_id, 'docker.volumes.data', True) }}"
openldap_network: "{{ applications | get_app_conf(application_id, 'docker.network', True) }}"
-openldap_network_expose_local: "{{ applications | get_app_conf(application_id, 'network.public', True) | bool or applications | get_app_conf(application_id, 'network.local', True) | bool }}"
+openldap_network_expose_local: "{{ applications | get_app_conf(application_id, 'network.public', True) | bool or applications | get_app_conf(application_id, 'network.local') | bool }}"

View File

@@ -8,4 +8,3 @@ database_type: ""
OPENRESTY_IMAGE: "openresty/openresty"
OPENRESTY_VERSION: "alpine"
OPENRESTY_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.openresty.name', True) }}"

View File

@@ -3,9 +3,14 @@
name: sys-ctl-alm-compose
when: run_once_sys_ctl_alm_compose is not defined
+- name: Include dependency 'sys-ctl-rpr-docker-soft'
+include_role:
+name: sys-ctl-rpr-docker-soft
+when: run_once_sys_ctl_rpr_docker_soft is not defined
- include_role:
name: sys-service
vars:
-system_service_timer_enabled: true
-system_service_on_calendar: "{{ SYS_SCHEDULE_HEALTH_DOCKER_CONTAINER }}"
-system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
+system_service_timer_enabled: true
+system_service_on_calendar: "{{ SYS_SCHEDULE_HEALTH_DOCKER_CONTAINER }}"
+system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }} {{ SYS_SERVICE_REPAIR_DOCKER_SOFT }}"

View File

@@ -2,7 +2,7 @@
include_role:
name: sys-ctl-alm-compose
when: run_once_sys_ctl_alm_compose is not defined
- include_role:
name: sys-service
vars:

View File

@@ -1,15 +1,26 @@
#!/usr/bin/env python3
"""
Restart Docker-Compose configurations with exited or unhealthy containers.
This version receives the *manipulation services* via argparse (no Jinja).
STRICT mode: Resolve the Compose project exclusively via Docker labels
(com.docker.compose.project and com.docker.compose.project.working_dir).
No container-name fallback. If labels are missing or Docker is unavailable,
the script records an error for that container.
All shell interactions that matter for tests go through print_bash()
so they can be monkeypatched in unit tests.
"""
import subprocess
import time
import os
import argparse
-from typing import List
+from typing import List, Optional, Tuple
# ---------------------------
# Shell helpers
# ---------------------------
def bash(command: str) -> List[str]:
print(command)
process = subprocess.Popen(
@@ -30,31 +41,45 @@ def list_to_string(lst: List[str]) -> str:
def print_bash(command: str) -> List[str]:
"""
Wrapper around bash() that echoes combined output for easier debugging
and can be monkeypatched in tests.
"""
output = bash(command)
if output:
print(list_to_string(output))
return output
-def find_docker_compose_file(directory: str) -> str | None:
+# ---------------------------
+# Filesystem / compose helpers
+# ---------------------------
+def find_docker_compose_file(directory: str) -> Optional[str]:
"""
Search for docker-compose.yml beneath a directory.
"""
for root, _, files in os.walk(directory):
if "docker-compose.yml" in files:
return os.path.join(root, "docker-compose.yml")
return None
-def detect_env_file(project_path: str) -> str | None:
+def detect_env_file(project_path: str) -> Optional[str]:
"""
-Return the path to a Compose env file if present (.env preferred, fallback to env).
+Return the path to a Compose env file if present (.env preferred, fallback to .env/env).
"""
-candidates = [os.path.join(project_path, ".env"), os.path.join(project_path, ".env", "env")]
+candidates = [
+os.path.join(project_path, ".env"),
+os.path.join(project_path, ".env", "env"),
+]
for candidate in candidates:
if os.path.isfile(candidate):
return candidate
return None
-def compose_cmd(subcmd: str, project_path: str, project_name: str | None = None) -> str:
+def compose_cmd(subcmd: str, project_path: str, project_name: Optional[str] = None) -> str:
"""
Build a docker-compose command string with optional -p and --env-file if present.
Example: compose_cmd("restart", "/opt/docker/foo", "foo")
@@ -69,6 +94,10 @@ def compose_cmd(subcmd: str, project_path: str, project_name: str | None = None)
return " ".join(parts)
# ---------------------------
# Business logic
# ---------------------------
def normalize_services_arg(raw: List[str] | None, raw_str: str | None) -> List[str]:
"""
Accept either:
@@ -78,7 +107,6 @@ def normalize_services_arg(raw: List[str] | None, raw_str: str | None) -> List[s
if raw:
return [s for s in raw if s.strip()]
if raw_str:
# split on comma or whitespace
parts = [p.strip() for chunk in raw_str.split(",") for p in chunk.split()]
return [p for p in parts if p]
return []
@@ -87,7 +115,7 @@ def normalize_services_arg(raw: List[str] | None, raw_str: str | None) -> List[s
def wait_while_manipulation_running(
services: List[str],
waiting_time: int = 600,
-timeout: int | None = None,
+timeout: Optional[int] = None,
) -> None:
"""
Wait until none of the given services are active anymore.
@@ -107,7 +135,6 @@ def wait_while_manipulation_running(
break
if any_active:
# Check timeout
elapsed = time.time() - start
if timeout and elapsed >= timeout:
print(f"Timeout ({timeout}s) reached while waiting for services. Continuing anyway.")
@@ -119,7 +146,30 @@ def wait_while_manipulation_running(
break
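The waiting logic boils down to a poll loop with an optional wall-clock cutoff; a compact sketch, where the `is_active` callable stands in for the service-activity check (names here are illustrative):

```python
import time
from typing import Callable, Optional

def wait_until_idle(is_active: Callable[[], bool],
                    waiting_time: float = 1.0,
                    timeout: Optional[float] = None) -> None:
    # Poll until is_active() reports False, or until the optional timeout elapses.
    start = time.time()
    while is_active():
        if timeout and (time.time() - start) >= timeout:
            print(f"Timeout ({timeout}s) reached while waiting for services. Continuing anyway.")
            break
        time.sleep(waiting_time)

state = {"remaining": 3}
def probe() -> bool:
    state["remaining"] -= 1
    return state["remaining"] > 0

wait_until_idle(probe, waiting_time=0)
print(state["remaining"])  # -> 0
```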
def get_compose_project_info(container: str) -> Tuple[str, str]:
"""
Resolve project name and working dir from Docker labels.
STRICT: Raises RuntimeError if labels are missing/unreadable.
"""
out_project = print_bash(
f"docker inspect -f '{{{{ index .Config.Labels \"com.docker.compose.project\" }}}}' {container}"
)
out_workdir = print_bash(
f"docker inspect -f '{{{{ index .Config.Labels \"com.docker.compose.project.working_dir\" }}}}' {container}"
)
project = out_project[0].strip() if out_project else ""
workdir = out_workdir[0].strip() if out_workdir else ""
if not project:
raise RuntimeError(f"No compose project label found for container {container}")
if not workdir:
raise RuntimeError(f"No compose working_dir label found for container {container}")
return project, workdir
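The strict label handling can be separated from the `docker inspect` call itself; a minimal sketch operating on a plain labels dict (a hypothetical helper, not part of the script):

```python
from typing import Dict, Tuple

def project_info_from_labels(container: str, labels: Dict[str, str]) -> Tuple[str, str]:
    # STRICT: both compose labels must be present and non-empty.
    project = (labels.get("com.docker.compose.project") or "").strip()
    workdir = (labels.get("com.docker.compose.project.working_dir") or "").strip()
    if not project:
        raise RuntimeError(f"No compose project label found for container {container}")
    if not workdir:
        raise RuntimeError(f"No compose working_dir label found for container {container}")
    return project, workdir

print(project_info_from_labels("web-1", {
    "com.docker.compose.project": "web",
    "com.docker.compose.project.working_dir": "/opt/docker/web",
}))
# -> ('web', '/opt/docker/web')
```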
def main(base_directory: str, manipulation_services: List[str], timeout: Optional[int]) -> int:
errors = 0
wait_while_manipulation_running(manipulation_services, waiting_time=600, timeout=timeout)
@@ -131,43 +181,50 @@ def main(base_directory: str, manipulation_services: List[str], timeout: int | N
)
failed_containers = unhealthy_container_names + exited_container_names
    for container in failed_containers:
        try:
            project, workdir = get_compose_project_info(container)
        except Exception as e:
            print(f"Error reading compose labels for {container}: {e}")
            errors += 1
            continue

        compose_file_path = os.path.join(workdir, "docker-compose.yml")
        if not os.path.isfile(compose_file_path):
            # STRICT: we only trust labels; if the file is not there, error out.
            print(f"Error: docker-compose.yml not found at {compose_file_path} for container {container}")
            errors += 1
            continue

        project_path = os.path.dirname(compose_file_path)
        try:
            print("Restarting unhealthy container in:", compose_file_path)
            # restart with optional --env-file and -p
            print_bash(compose_cmd("restart", project_path, project))
        except Exception as e:
            if "port is already allocated" in str(e):
                print("Detected port allocation problem. Executing recovery steps...")
                # down (no -p needed), then engine restart, then up -d with -p
                try:
                    print_bash(compose_cmd("down", project_path))
                    print_bash("systemctl restart docker")
                    print_bash(compose_cmd("up -d", project_path, project))
                except Exception as e2:
                    print("Unhandled exception during recovery:", e2)
                    errors += 1
            else:
                print("Unhandled exception during restart:", e)
                errors += 1
print("Finished restart procedure.")
return errors
# ---------------------------
# CLI
# ---------------------------
if __name__ == "__main__":
parser = argparse.ArgumentParser(
        description="Restart Docker-Compose configurations with exited or unhealthy containers (STRICT label mode)."
)
parser.add_argument(
"--manipulation",
@@ -184,12 +241,12 @@ if __name__ == "__main__":
"--timeout",
type=int,
default=60,
        help="Maximum time in seconds to wait for manipulation services before continuing. (Default 1min)",
)
parser.add_argument(
"base_directory",
type=str,
        help="(Unused in STRICT mode) Base directory where Docker Compose configurations are located.",
)
args = parser.parse_args()
services = normalize_services_arg(args.manipulation, args.manipulation_string)

View File

@@ -6,8 +6,6 @@
- include_role:
name: sys-service
vars:
system_service_on_calendar: "{{ SYS_SCHEDULE_REPAIR_DOCKER_SOFT }}"
system_service_timer_enabled: true
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_tpl_exec_start_pre: "/usr/bin/python {{ PATH_SYSTEM_LOCK_SCRIPT }} {{ SYS_SERVICE_GROUP_MANIPULATION | join(' ') }} --ignore {{ SYS_SERVICE_GROUP_CLEANUP| join(' ') }} {{ SYS_SERVICE_REPAIR_DOCKER_SOFT }} --timeout '{{ SYS_TIMEOUT_DOCKER_RPR_SOFT }}'"
system_service_tpl_exec_start: >

View File

@@ -17,14 +17,8 @@ When enabled via `MODE_CLEANUP` or `MODE_RESET`, it will automatically prune unu
Installs Docker and Docker Compose via the system package manager.
- **Integrated Dependencies**
  Includes backup, repair, and health check sub-roles
- **Cleanup & Reset Modes**
- `MODE_CLEANUP`: Removes unused Docker containers, networks, images, and volumes.
- `MODE_RESET`: Performs cleanup and restarts the Docker service.

View File

@@ -21,6 +21,5 @@
- sys-ctl-bkp-docker-2-loc
- sys-ctl-hlth-docker-container
- sys-ctl-hlth-docker-volumes
- sys-ctl-rpr-docker-soft
- sys-ctl-rpr-docker-hard
when: SYS_SVC_DOCKER_LOAD_SERVICES | bool

View File

@@ -8,10 +8,10 @@
path: "{{ docker_compose.directories.env }}"
state: directory
mode: "0755"
- name: "For '{{ application_id }}': Create {{ database_env }}"
  template:
    src: "env/{{ database_type }}.env.j2"
    dest: "{{ database_env }}"
notify: docker compose up
when: not applications | get_app_conf(application_id, 'features.central_database', False)
@@ -19,7 +19,7 @@
  # I don't know why this include leads to the application_id from vars/main.yml of the database role not being used.
  # This is the behaviour I want, but I'm still wondering why ;)
include_role:
    name: "svc-db-{{ database_type }}"
when: applications | get_app_conf(application_id, 'features.central_database', False)
- name: "For '{{ application_id }}': Add Entry for Backup Procedure"

View File

@@ -5,10 +5,10 @@
container_name: {{ application_id | get_entity_name }}-database
logging:
driver: journald
    image: {{ database_image }}:{{ database_version }}
restart: {{ DOCKER_RESTART_POLICY }}
env_file:
      - {{ database_env }}
command: "--transaction-isolation=READ-COMMITTED --binlog-format=ROW"
volumes:
- database:/var/lib/mysql

View File

@@ -2,13 +2,13 @@
{% if not applications | get_app_conf(application_id, 'features.central_database', False) %}
{{ database_host }}:
    image: {{ database_image }}:{{ database_version }}
container_name: {{ application_id | get_entity_name }}-database
env_file:
      - {{ database_env }}
restart: {{ DOCKER_RESTART_POLICY }}
healthcheck:
      test: ["CMD-SHELL", "pg_isready -U {{ database_username }}"]
interval: 10s
timeout: 5s
retries: 6

View File

@@ -1,20 +1,23 @@
# Helper variables
_dbtype: "{{ (database_type | d('') | trim) }}"
_database_id: "{{ ('svc-db-' ~ _dbtype) if _dbtype else '' }}"
_database_central_name: "{{ (applications | get_app_conf(_database_id, 'docker.services.' ~ _dbtype ~ '.name', False, '')) if _dbtype else '' }}"
_database_consumer_id: "{{ database_application_id | d(application_id) }}"
_database_consumer_entity_name: "{{ _database_consumer_id | get_entity_name }}"
_database_central_enabled: "{{ (applications | get_app_conf(_database_consumer_id, 'features.central_database', False)) if _dbtype else False }}"
_database_default_version: "{{ applications | get_app_conf(_database_id, 'docker.services.' ~ _dbtype ~ '.version') }}"
# Definition
database_name: "{{ _database_consumer_entity_name }}"
database_instance: "{{ _database_central_name if _database_central_enabled else database_name }}" # This could lead to bugs at dedicated database @todo cleanup
database_host: "{{ _database_central_name if _database_central_enabled else 'database' }}" # This could lead to bugs at dedicated database @todo cleanup
database_username: "{{ _database_consumer_entity_name }}"
database_password: "{{ applications | get_app_conf(_database_consumer_id, 'credentials.database_password', true) }}"
database_port: "{{ (ports.localhost.database[_database_id] | d('')) if _dbtype else '' }}"
database_env: "{{ docker_compose.directories.env }}{{ database_type }}.env"
database_url_jdbc: "jdbc:{{ database_type if database_type == 'mariadb' else 'postgresql' }}://{{ database_host }}:{{ database_port }}/{{ database_name }}"
database_url_full: "{{ database_type }}://{{ database_username }}:{{ database_password }}@{{ database_host }}:{{ database_port }}/{{ database_name }}"
database_volume: "{{ _database_consumer_entity_name ~ '_' if not _database_central_enabled }}{{ database_host }}"
database_image: "{{ _dbtype }}"
database_version: "{{ applications | get_app_conf( _database_consumer_id, 'docker.services.database.version', False, _database_default_version) }}"
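The connection-string variables above are plain string assembly; an illustrative Python sketch of the same templating (parameter names mirror the role's variables, values are hypothetical):

```python
from typing import Tuple

def build_database_urls(database_type: str, username: str, password: str,
                        host: str, port: int, name: str) -> Tuple[str, str]:
    # JDBC uses "postgresql" for anything that is not MariaDB, matching the Jinja expression.
    jdbc_scheme = database_type if database_type == "mariadb" else "postgresql"
    url_jdbc = f"jdbc:{jdbc_scheme}://{host}:{port}/{name}"
    url_full = f"{database_type}://{username}:{password}@{host}:{port}/{name}"
    return url_jdbc, url_full

print(build_database_urls("postgres", "app", "secret", "database", 5432, "app"))
# -> ('jdbc:postgresql://database:5432/app', 'postgres://app:secret@database:5432/app')
```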

View File

@@ -14,13 +14,6 @@
name: update-apt
when: ansible_distribution == "Debian"
- name: "Update Docker Images"
include_role:
name: update-docker
when:
- docker_compose_directory_stat.stat.exists
- run_once_update_docker is not defined
- name: "Check if yay is installed"
command: which yay
register: yay_installed
@@ -51,7 +44,3 @@
register: pkgmgr_available
failed_when: false
- name: "Update all repositories using pkgmgr"
include_role:
name: update-pkgmgr
when: pkgmgr_available.rc == 0

View File

@@ -1,27 +0,0 @@
# Update Docker
## Description
This role updates Docker Compose instances by checking for changes in Docker image digests and applying updates if necessary. It utilizes a Python script to handle git pulls and Docker image pulls, and rebuilds containers when changes are detected.
## Overview
The role performs the following:
- Deploys a Python script to check for Docker image updates.
- Configures a systemd service to run the update script.
- Restarts the Docker update service upon configuration changes.
- Supports additional procedures for specific Docker applications (e.g., Discourse, Mastodon, Nextcloud).
## Purpose
The role is designed to ensure that Docker images remain current by automatically detecting changes and rebuilding containers as needed. This helps maintain a secure and efficient container environment.
## Features
- **Docker Image Monitoring:** Checks for changes in image digests.
- **Automated Updates:** Pulls new images and rebuilds containers when necessary.
- **Service Management:** Configures and restarts a systemd service to handle updates.
- **Application-Specific Procedures:** Includes hooks for updating specific Docker applications.
## Credits 📝
It was created with the help of ChatGPT. The conversation is available [here](https://chat.openai.com/share/165418b8-25fa-433b-baca-caded941e22a)

View File

@@ -1,27 +0,0 @@
galaxy_info:
author: "Kevin Veen-Birkenbach"
description: "Updates Docker Compose instances by detecting changes in Docker image digests and rebuilding containers when necessary. This role automates Docker image pulls and container rebuilds."
license: "Infinito.Nexus NonCommercial License"
license_url: "https://s.infinito.nexus/license"
company: |
Kevin Veen-Birkenbach
Consulting & Coaching Solutions
https://www.veen.world
min_ansible_version: "2.9"
platforms:
- name: Archlinux
versions:
- rolling
- name: Ubuntu
versions:
- all
galaxy_tags:
- docker
- update
- compose
- images
- systemd
- maintenance
repository: "https://s.infinito.nexus/code"
issue_tracker_url: "https://s.infinito.nexus/issues"
documentation: "https://docs.infinito.nexus"

View File

@@ -1,20 +0,0 @@
- name: Include dependency 'sys-lock'
include_role:
name: sys-lock
when: run_once_sys_lock is not defined
- name: "start {{ 'sys-ctl-bkp-docker-2-loc-everything' | get_service_name(SOFTWARE_NAME) }}"
systemd:
name: "{{ 'sys-ctl-bkp-docker-2-loc-everything' | get_service_name(SOFTWARE_NAME) }}"
state: started
when:
- MODE_BACKUP | bool
- include_role:
name: sys-service
vars:
system_service_restarted: true
system_service_timer_enabled: false
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_tpl_exec_start: "{{ system_service_script_exec }} {{ PATH_DOCKER_COMPOSE_INSTANCES }}"
system_service_tpl_exec_start_pre: "/usr/bin/python {{ PATH_SYSTEM_LOCK_SCRIPT }} {{ SYS_SERVICE_GROUP_MANIPULATION | join(' ') }} --ignore {{ SYS_SERVICE_GROUP_CLEANUP | join(' ') }} {{ 'update-docker' | get_service_name(SOFTWARE_NAME) }} --timeout '{{ SYS_TIMEOUT_DOCKER_UPDATE }}'"

View File

@@ -1,4 +0,0 @@
- block:
- include_tasks: 01_core.yml
- include_tasks: utils/run_once.yml
when: run_once_update_docker is not defined

View File

@@ -1,217 +0,0 @@
import os
import subprocess
import sys
import time
def run_command(command):
"""
Executes the specified shell command, streaming and collecting its output in real-time.
If the command exits with a non-zero status, a subprocess.CalledProcessError is raised,
including the exit code, the executed command, and the full output (as bytes) for debugging purposes.
"""
process = None
try:
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output = []
for line in iter(process.stdout.readline, b''):
decoded_line = line.decode()
output.append(decoded_line)
sys.stdout.write(decoded_line)
return_code = process.wait()
if return_code:
full_output = ''.join(output)
raise subprocess.CalledProcessError(return_code, command, output=full_output.encode())
finally:
if process and process.stdout:
process.stdout.close()
def git_pull():
"""
Checks whether the Git repository in the specified directory is up to date and performs a git pull if necessary.
Raises:
Exception: If retrieving the local or remote git revision fails because the command returns a non-zero exit code.
"""
print("Checking if the git repository is up to date.")
# Run 'git rev-parse @' and check its exit code explicitly.
local_proc = subprocess.run("git rev-parse @", shell=True, capture_output=True)
if local_proc.returncode != 0:
error_msg = local_proc.stderr.decode().strip() or "Unknown error while retrieving local revision."
raise Exception(f"Failed to retrieve local git revision: {error_msg}")
local = local_proc.stdout.decode().strip()
# Run 'git rev-parse @{u}' and check its exit code explicitly.
remote_proc = subprocess.run("git rev-parse @{u}", shell=True, capture_output=True)
if remote_proc.returncode != 0:
error_msg = remote_proc.stderr.decode().strip() or "Unknown error while retrieving remote revision."
raise Exception(f"Failed to retrieve remote git revision: {error_msg}")
remote = remote_proc.stdout.decode().strip()
if local != remote:
print("Repository is not up to date. Performing git pull.")
run_command("git pull")
return True
print("Repository is already up to date.")
return False
{% raw %}
def get_image_digests(directory):
"""
Retrieves the image digests for all images in the specified Docker Compose project.
"""
compose_project = os.path.basename(directory)
try:
images_output = subprocess.check_output(
f'docker images --format "{{{{.Repository}}}}:{{{{.Tag}}}}@{{{{.Digest}}}}" | grep {compose_project}',
shell=True
).decode().strip()
return dict(line.split('@') for line in images_output.splitlines() if line)
except subprocess.CalledProcessError as e:
if e.returncode == 1: # grep no match found
return {}
else:
raise # Other errors are still raised
{% endraw %}
def is_any_service_up():
"""
Checks if any Docker services are currently running.
"""
process = subprocess.Popen("docker-compose ps -q", shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output, _ = process.communicate()
service_ids = output.decode().strip().splitlines()
return bool(service_ids)
def pull_docker_images():
"""
Pulls the latest Docker images for the project.
"""
print("Pulling docker images.")
try:
run_command("docker-compose pull")
except subprocess.CalledProcessError as e:
if "pull access denied" in e.output.decode() or "must be built from source" in e.output.decode():
print("Need to build the image from source.")
return True
else:
print("Failed to pull images with unexpected error.")
raise
return False
def update_docker(directory):
"""
Checks for updates to Docker images and rebuilds containers if necessary.
"""
print(f"Checking for updates to Docker images in {directory}.")
before_digests = get_image_digests(directory)
need_to_build = pull_docker_images()
after_digests = get_image_digests(directory)
if before_digests != after_digests:
print("Changes detected in image digests. Rebuilding containers.")
need_to_build = True
if need_to_build:
        # This probably just rebuilds the Dockerfile image if there is a change in the other docker compose containers
run_command("docker-compose build --pull")
start_docker(directory)
else:
print("Docker images are up to date. No rebuild necessary.")
def update_discourse(directory):
"""
Updates Discourse by running the rebuild command on the launcher script.
"""
docker_repository_directory = os.path.join(directory, "services", "{{ applications | get_app_conf('web-app-discourse','repository') }}")
    print(f"Using path {docker_repository_directory} to pull discourse repository.")
    os.chdir(docker_repository_directory)
if git_pull():
print("Start Discourse update procedure.")
update_procedure("docker stop {{ applications | get_app_conf('web-app-discourse','docker.services.discourse.name') }}")
update_procedure("docker rm {{ applications | get_app_conf('web-app-discourse','docker.services.discourse.name') }}")
try:
update_procedure("docker network connect {{ applications | get_app_conf('web-app-discourse','docker.network') }} {{ applications | get_app_conf('svc-db-postgres', 'docker.network') }}")
except subprocess.CalledProcessError as e:
error_message = e.output.decode()
if "already exists" in error_message or "is already connected" in error_message:
print("Network connection already exists. Skipping...")
else:
raise
update_procedure("./launcher rebuild {{ applications | get_app_conf('web-app-discourse','docker.services.discourse.name') }}")
else:
print("Discourse update skipped. No changes in git repository.")
def upgrade_listmonk():
"""
Runs the upgrade for Listmonk
"""
print("Starting Listmonk upgrade.")
run_command('echo "y" | docker compose run -T application ./listmonk --upgrade')
print("Upgrade complete.")
def update_procedure(command):
"""
Attempts to execute a command up to a maximum number of retries.
"""
max_attempts = 3
for attempt in range(max_attempts):
try:
run_command(command)
break # If the command succeeds, exit the loop
except subprocess.CalledProcessError as e:
if attempt < max_attempts - 1: # Check if it's not the last attempt
print(f"Attempt {attempt + 1} failed, retrying in 60 seconds...")
time.sleep(60) # Wait for 60 seconds before retrying
else:
print("All attempts to update have failed.")
raise # Re-raise the last exception after all attempts fail
def start_docker(directory):
"""
Starts or restarts Docker services in the specified directory.
"""
if is_any_service_up():
print(f"Restarting containers in {directory}.")
run_command("docker-compose up -d --force-recreate")
else:
print(f"Skipped starting. No service is up in {directory}.")
if __name__ == "__main__":
if len(sys.argv) < 2:
print("Please provide the path to the parent directory as a parameter.")
sys.exit(1)
parent_directory = sys.argv[1]
for dir_entry in os.scandir(parent_directory):
if dir_entry.is_dir():
dir_path = dir_entry.path
print(f"Checking for updates in: {dir_path}")
os.chdir(dir_path)
            # Pull git repository if it exists
# @deprecated: This function should be removed in the future, as soon as all docker applications use the correct folder path
if os.path.isdir(os.path.join(dir_path, ".git")):
print("DEPRECATED: Docker .git repositories should be saved under /opt/docker/{instance}/services/{repository_name} ")
git_pull()
if os.path.basename(dir_path) == "matrix":
                # No auto-update is possible for Matrix at the moment,
                # because the role has to be executed every time.
                # The update has to be executed within the role.
# @todo implement in future
pass
else:
# Pull and update docker images
update_docker(dir_path)
# The following instances need additional update and upgrade procedures
if os.path.basename(dir_path) == "discourse":
update_discourse(dir_path)
elif os.path.basename(dir_path) == "listmonk":
upgrade_listmonk()
# @todo implement dedicated procedure for bluesky
# @todo implement dedicated procedure for taiga

View File

@@ -1,2 +0,0 @@
application_id: update-docker
system_service_id: "{{ application_id }}"

View File

@@ -1,23 +0,0 @@
# Update Pip Packages
## Description
This Ansible role automatically updates all installed Python Pip packages to their latest versions.
## Overview
The role performs the following:
- Executes a command to retrieve all installed Python Pip packages.
- Updates each package individually to its latest available version.
- Ensures a smooth and automated Python environment maintenance process.
## Purpose
Ensures Python packages remain up-to-date, improving security and functionality.
## Features
- **Automatic Updates:** Automates the process of upgrading Python packages.
- **Platform Independent:** Works on Linux, macOS, and Windows environments.
- **Ansible Integration:** Easy to include in larger playbooks or maintenance routines.

View File

@@ -1,25 +0,0 @@
galaxy_info:
author: "Kevin Veen-Birkenbach"
license: "Infinito.Nexus NonCommercial License"
license_url: "https://s.infinito.nexus/license"
company: |
Kevin Veen-Birkenbach
Consulting & Coaching Solutions
https://www.veen.world
description: "Automatically updates all Python Pip packages to their latest available versions."
min_ansible_version: "2.9"
platforms:
- name: Ubuntu
versions:
- all
- name: Archlinux
versions:
- rolling
- name: Debian
versions:
- all
galaxy_tags:
- python
- pip
- update
- maintenance

View File

@@ -1,9 +0,0 @@
- block:
- name: Include dependency 'dev-python-pip'
include_role:
name: dev-python-pip
when: run_once_dev_python_pip is not defined
- include_tasks: utils/run_once.yml
vars:
flush_handlers: false
when: run_once_update_pip is not defined

View File

@@ -1 +0,0 @@
application_id: update-pip

View File

@@ -1,27 +0,0 @@
# Update pkgmgr
## Description
This role checks if the [package manager](https://github.com/kevinveenbirkenbach/package-manager) is available on the system. If so, it runs `pkgmgr update --all` to update all repositories managed by the `pkgmgr`.
## Overview
This role performs the following tasks:
- Checks if the `pkgmgr` command is available.
- If available, runs `pkgmgr update --all` to update all repositories.
## Purpose
The purpose of this role is to simplify system updates by using the `pkgmgr` package manager to handle all repository updates with a single command.
## Features
- **Conditional Execution**: Runs only if the `pkgmgr` command is found on the system.
- **Automated Updates**: Automatically runs `pkgmgr update --all` to update all repositories.
## License
Infinito.Nexus NonCommercial License
[Learn More](https://s.infinito.nexus/license)

View File

@@ -1,2 +0,0 @@
# Todos
- Reactivate the update. Currently not possible, because it pulls all repos.

View File

@@ -1,3 +0,0 @@
# run_once_update_pkgmgr: deactivated
#- name: "Update all repositories with pkgmgr"
# command: "pkgmgr update --all"

View File

@@ -1 +0,0 @@
application_id: update-pkgmgr

View File

@@ -23,6 +23,6 @@ AKAUNTING_COMPANY_NAME: "{{ applications | get_app_conf(application_
AKAUNTING_COMPANY_EMAIL: "{{ applications | get_app_conf(application_id, 'company.email') }}"
AKAUNTING_ADMIN_EMAIL: "{{ applications | get_app_conf(application_id, 'setup_admin_email') }}"
AKAUNTING_ADMIN_PASSWORD: "{{ applications | get_app_conf(application_id, 'credentials.setup_admin_password') }}"
AKAUNTING_SETUP_MARKER: "/var/lib/docker/volumes/{{ AKAUNTING_VOLUME }}/_data/.akaunting_installed"
AKAUNTING_SETUP_MARKER: "{{ [ (AKAUNTING_VOLUME | docker_volume_path), '.akaunting_installed' ] | path_join }}"
AKAUNTING_APP_KEY: "{{ applications | get_app_conf(application_id, 'credentials.app_key') }}"
AKAUNTING_CACHE_DRIVER: "{{ 'redis' if applications | get_app_conf(application_id, 'docker.services.redis.enabled') else 'file' }}"

View File

@@ -1,19 +1,45 @@
images:
pds:
version: "latest"
features:
matomo: true
css: true
desktop: true
  central_database: false
logout: true
server:
config_upstream_url: "https://ip.bsky.app/config"
domains:
canonical:
web: "bskyweb.{{ PRIMARY_DOMAIN }}"
api: "bluesky.{{ PRIMARY_DOMAIN }}"
view: "view.bluesky.{{ PRIMARY_DOMAIN }}"
csp:
whitelist:
connect-src:
- "{{ WEB_PROTOCOL }}://<< defaults_applications[web-app-bluesky].server.domains.canonical.api >>"
- https://plc.directory
- https://bsky.social
- https://api.bsky.app
- https://public.api.bsky.app
- https://events.bsky.app
- https://statsigapi.net
- https://ip.bsky.app
- https://video.bsky.app
- https://bsky.app
- wss://bsky.network
- wss://*.bsky.app
media-src:
- "blob:"
worker-src:
- "blob:"
docker:
services:
database:
      enabled: false
web:
enabled: true # @see https://github.com/bluesky-social/social-app
view:
enabled: false
pds:
image: "ghcr.io/bluesky-social/pds"
version: "latest"
volumes:
pds_data: "pds_data"

View File

@@ -7,7 +7,3 @@ credentials:
description: "PLC rotation key in hex format (32 bytes)"
algorithm: "sha256"
validation: "^[a-f0-9]{64}$"
admin_password:
description: "Initial admin password for Bluesky PDS"
algorithm: "plain"
validation: "^.{12,}$"

View File

@@ -0,0 +1,30 @@
# The following lines should be removed when the following issue is closed:
# https://github.com/bluesky-social/pds/issues/52
- name: Download pdsadmin tarball
get_url:
url: "https://github.com/lhaig/pdsadmin/releases/download/v1.0.0-dev/pdsadmin_Linux_x86_64.tar.gz"
dest: "{{ BLUESKY_PDSADMIN_TMP_TAR }}"
mode: '0644'
notify:
- docker compose up
- docker compose build
- name: Create {{ BLUESKY_PDSADMIN_DIR }}
file:
path: "{{ BLUESKY_PDSADMIN_DIR }}"
state: directory
mode: '0755'
- name: Extract pdsadmin tarball
unarchive:
src: "{{ BLUESKY_PDSADMIN_TMP_TAR }}"
dest: "{{ BLUESKY_PDSADMIN_DIR }}"
remote_src: yes
mode: '0755'
- name: Ensure pdsadmin is executable
file:
path: "{{ BLUESKY_PDSADMIN_FILE }}"
mode: '0755'
state: file

View File

@@ -0,0 +1,21 @@
- name: clone social app repository
git:
repo: "https://github.com/bluesky-social/social-app.git"
dest: "{{ BLUESKY_SOCIAL_APP_DIR }}"
version: "main"
force: true
notify:
- docker compose up
- docker compose build
- name: Force BAPP_CONFIG_URL to same-origin /config
ansible.builtin.replace:
path: "{{ BLUESKY_SOCIAL_APP_DIR }}/src/state/geolocation.tsx"
regexp: '^\s*const\s+BAPP_CONFIG_URL\s*=\s*.*$'
replace: "const BAPP_CONFIG_URL = '/config'"
- name: Force IPCC_URL to same-origin /ipcc
ansible.builtin.replace:
path: "{{ BLUESKY_SOCIAL_APP_DIR }}/src/state/geolocation.tsx"
regexp: '^\s*const\s+IPCC_URL\s*=\s*.*$'
replace: "const IPCC_URL = '/ipcc'"

View File

@@ -0,0 +1,73 @@
---
# Creates Cloudflare DNS records for Bluesky:
# - PDS/API host (A/AAAA)
# - Handle TXT verification (_atproto)
# - Optional Web UI host (A/AAAA)
# - Optional custom AppView host (A/AAAA)
#
# Requirements:
# DNS_PROVIDER == 'cloudflare'
# CLOUDFLARE_API_TOKEN set
#
# Inputs (inventory/vars):
# BLUESKY_API_DOMAIN, BLUESKY_WEB_DOMAIN, BLUESKY_VIEW_DOMAIN
# BLUESKY_WEB_ENABLED (bool), BLUESKY_VIEW_ENABLED (bool)
# PRIMARY_DOMAIN
# networks.internet.ip4 (and optionally networks.internet.ip6)
- name: "DNS (Cloudflare) for Bluesky base records"
include_role:
name: sys-dns-cloudflare-records
when: DNS_PROVIDER | lower == 'cloudflare'
vars:
cloudflare_records:
# 1) PDS / API host
- type: A
zone: "{{ BLUESKY_API_DOMAIN | to_zone }}"
name: "{{ BLUESKY_API_DOMAIN }}"
content: "{{ networks.internet.ip4 }}"
proxied: false
- type: AAAA
zone: "{{ BLUESKY_API_DOMAIN | to_zone }}"
name: "{{ BLUESKY_API_DOMAIN }}"
content: "{{ networks.internet.ip6 | default('') }}"
proxied: false
state: "{{ (networks.internet.ip6 is defined and (networks.internet.ip6 | string) | length > 0) | ternary('present','absent') }}"
# 2) Handle verification for primary handle (Apex)
- type: TXT
zone: "{{ PRIMARY_DOMAIN | to_zone }}"
name: "_atproto.{{ PRIMARY_DOMAIN }}"
value: "did=did:web:{{ BLUESKY_API_DOMAIN }}"
# 3) Web UI host (only if enabled)
- type: A
zone: "{{ BLUESKY_WEB_DOMAIN | to_zone }}"
name: "{{ BLUESKY_WEB_DOMAIN }}"
content: "{{ networks.internet.ip4 }}"
proxied: true
state: "{{ (BLUESKY_WEB_ENABLED | bool) | ternary('present','absent') }}"
- type: AAAA
zone: "{{ BLUESKY_WEB_DOMAIN | to_zone }}"
name: "{{ BLUESKY_WEB_DOMAIN }}"
content: "{{ networks.internet.ip6 | default('') }}"
proxied: true
        state: "{{ ((BLUESKY_WEB_ENABLED | bool) and (networks.internet.ip6 is defined) and ((networks.internet.ip6 | string) | length > 0)) | ternary('present','absent') }}"
# 4) Custom AppView host (only if you actually run one and it's not api.bsky.app)
- type: A
zone: "{{ BLUESKY_VIEW_DOMAIN | to_zone }}"
name: "{{ BLUESKY_VIEW_DOMAIN }}"
content: "{{ networks.internet.ip4 }}"
proxied: false
        state: "{{ ((BLUESKY_VIEW_ENABLED | bool) and (BLUESKY_VIEW_DOMAIN != 'api.bsky.app')) | ternary('present','absent') }}"
- type: AAAA
zone: "{{ BLUESKY_VIEW_DOMAIN | to_zone }}"
name: "{{ BLUESKY_VIEW_DOMAIN }}"
content: "{{ networks.internet.ip6 | default('') }}"
proxied: false
state: "{{ ((BLUESKY_VIEW_ENABLED | bool) and (BLUESKY_VIEW_DOMAIN != 'api.bsky.app') and (networks.internet.ip6 is defined) and ((networks.internet.ip6 | string) | length > 0)) | ternary('present','absent') }}"
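Note that in Jinja, `a and b | ternary(...)` applies the filter only to `b`, so the whole condition must be parenthesized before `ternary` for the `present`/`absent` logic to work. A minimal Python sketch of the intended AAAA-record logic (function name is illustrative):

```python
def record_state(enabled: bool, ip6: str) -> str:
    # 'present' only when the service is enabled AND an IPv6 address is
    # actually configured; mirrors the grouped Jinja ternary above.
    return "present" if enabled and ip6 else "absent"

# The record is removed when either condition fails:
record_state(True, "2001:db8::1")  # enabled with IPv6 -> "present"
record_state(True, "")             # enabled, no IPv6  -> "absent"
```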

View File

@@ -1,48 +1,40 @@
- name: "include docker-compose role"
include_role:
name: docker-compose
vars:
docker_compose_flush_handlers: false
- name: "include role sys-stk-front-proxy for {{ application_id }}"
- name: "Include front proxy for {{ BLUESKY_API_DOMAIN }}:{{ BLUESKY_API_PORT }}"
include_role:
name: sys-stk-front-proxy
vars:
domain: "{{ item.domain }}"
http_port: "{{ item.http_port }}"
loop:
- { domain: "{{domains[application_id].api}}", http_port: "{{ports.localhost.http['web-app-bluesky_api']}}" }
- { domain: "{{domains[application_id].web}}", http_port: "{{ports.localhost.http['web-app-bluesky_web']}}" }
domain: "{{ BLUESKY_API_DOMAIN }}"
http_port: "{{ BLUESKY_API_PORT }}"
# The following lines should be removed when the following issue is closed:
# https://github.com/bluesky-social/pds/issues/52
- name: "Include front proxy for {{ BLUESKY_WEB_DOMAIN }}:{{ BLUESKY_WEB_PORT }}"
include_role:
name: sys-stk-front-proxy
vars:
domain: "{{ BLUESKY_WEB_DOMAIN }}"
http_port: "{{ BLUESKY_WEB_PORT }}"
proxy_extra_configuration: "{{ BLUESKY_FRONT_PROXY_CONTENT }}"
when: BLUESKY_WEB_ENABLED | bool
- name: Download pdsadmin tarball
get_url:
url: "https://github.com/lhaig/pdsadmin/releases/download/v1.0.0-dev/pdsadmin_Linux_x86_64.tar.gz"
dest: "{{pdsadmin_temporary_tar_path}}"
mode: '0644'
- name: "Include front proxy for {{ BLUESKY_VIEW_DOMAIN }}:{{ BLUESKY_VIEW_PORT }}"
include_role:
name: sys-stk-front-proxy
vars:
domain: "{{ BLUESKY_VIEW_DOMAIN }}"
http_port: "{{ BLUESKY_VIEW_PORT }}"
when: BLUESKY_VIEW_ENABLED | bool
- name: Create {{pdsadmin_folder_path}}
file:
path: "{{pdsadmin_folder_path}}"
state: directory
mode: '0755'
- name: Extract pdsadmin tarball
unarchive:
src: "{{pdsadmin_temporary_tar_path}}"
dest: "{{pdsadmin_folder_path}}"
remote_src: yes
mode: '0755'
- name: "Execute PDS routines"
ansible.builtin.include_tasks: "01_pds.yml"
- name: Ensure pdsadmin is executable
file:
path: "{{pdsadmin_file_path}}"
mode: '0755'
state: file
- name: "Execute Social App routines"
ansible.builtin.include_tasks: "02_social_app.yml"
when: BLUESKY_WEB_ENABLED | bool
- name: clone social app repository
git:
repo: "https://github.com/bluesky-social/social-app.git"
dest: "{{social_app_path}}"
version: "main"
notify: docker compose up
- name: "DNS for Bluesky"
include_tasks: "03_dns.yml"
when: DNS_PROVIDER | lower == 'cloudflare'
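The pdsadmin provisioning above (download tarball, create directory, extract, mark executable) can be sketched as plain Python; the paths are hypothetical stand-ins for the role's `pdsadmin_*` variables, and the tarball is built locally here instead of downloaded from GitHub:

```python
import os
import stat
import tarfile
import tempfile

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "pdsadmin")
with open(src, "w") as f:
    f.write("#!/bin/sh\necho pdsadmin-ok\n")

# Stands in for the get_url download of pdsadmin_Linux_x86_64.tar.gz
tar_path = os.path.join(tmp, "pdsadmin.tar.gz")
with tarfile.open(tar_path, "w:gz") as tar:
    tar.add(src, arcname="pdsadmin")

dest = os.path.join(tmp, "extracted")
os.makedirs(dest, mode=0o755, exist_ok=True)   # "Create pdsadmin_folder_path"
with tarfile.open(tar_path) as tar:
    tar.extractall(dest)                       # "Extract pdsadmin tarball"

binary = os.path.join(dest, "pdsadmin")
os.chmod(binary, 0o755)                        # "Ensure pdsadmin is executable"
executable = bool(os.stat(binary).st_mode & stat.S_IXUSR)
```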

View File

@@ -3,40 +3,32 @@
pds:
{% set container_port = 3000 %}
{% set container_healthcheck = 'xrpc/_health' %}
image: "{{ applications | get_app_conf(application_id, 'images.pds', True) }}"
image: "{{ BLUESKY_PDS_IMAGE }}:{{ BLUESKY_PDS_VERSION }}"
{% include 'roles/docker-container/templates/base.yml.j2' %}
volumes:
- pds_data:/opt/pds
- {{pdsadmin_file_path}}:/usr/local/bin/pdsadmin:ro
- pds_data:{{ BLUESKY_PDS_DATA_DIR }}
- {{ BLUESKY_PDSADMIN_FILE }}:/usr/local/bin/pdsadmin:ro
ports:
- "127.0.0.1:{{ports.localhost.http['web-app-bluesky_api']}}:{{ container_port }}"
- "127.0.0.1:{{ BLUESKY_API_PORT }}:{{ container_port }}"
{% include 'roles/docker-container/templates/healthcheck/wget.yml.j2' %}
{% include 'roles/docker-container/templates/networks.yml.j2' %}
# Deactivated for the moment @see https://github.com/bluesky-social/social-app
{% if BLUESKY_WEB_ENABLED %}
{% set container_port = 8100 %}
web:
command: ["bskyweb","serve"]
build:
context: "{{ social_app_path }}"
dockerfile: Dockerfile
# It doesn't compile yet with these parameters. @todo Fix it
args:
REACT_APP_PDS_URL: "{{ WEB_PROTOCOL }}://{{domains[application_id].api}}" # URL des PDS
REACT_APP_API_URL: "{{ WEB_PROTOCOL }}://{{domains[application_id].api}}" # API-URL des PDS
REACT_APP_SITE_NAME: "{{ PRIMARY_DOMAIN | upper }} - Bluesky"
REACT_APP_SITE_DESCRIPTION: "Decentralized Social"
context: "{{ BLUESKY_SOCIAL_APP_DIR }}"
dockerfile: Dockerfile
pull_policy: never
ports:
- "127.0.0.1:{{ports.localhost.http['web-app-bluesky_web']}}:8100"
healthcheck:
test: ["CMD", "sh", "-c", "for pid in $(ls /proc | grep -E '^[0-9]+$'); do if cat /proc/$pid/cmdline 2>/dev/null | grep -q 'bskywebserve'; then exit 0; fi; done; exit 1"]
interval: 30s
timeout: 10s
retries: 3
- "127.0.0.1:{{ BLUESKY_WEB_PORT }}:{{ container_port }}"
{% include 'roles/docker-container/templates/healthcheck/tcp.yml.j2' %}
{% include 'roles/docker-container/templates/networks.yml.j2' %}
{% endif %}
{% include 'roles/docker-compose/templates/volumes.yml.j2' %}
pds_data:
name: {{ BLUESKY_PDS_DATA_VOLUME }}
{% include 'roles/docker-compose/templates/networks.yml.j2' %}

View File

@@ -1,21 +1,30 @@
PDS_HOSTNAME="{{domains[application_id].api}}"
PDS_ADMIN_EMAIL="{{ applications.bluesky.users.administrator.email}}"
PDS_SERVICE_DID="did:web:{{domains[application_id].api}}"
# General
PDS_HOSTNAME="{{ BLUESKY_API_DOMAIN }}"
PDS_CRAWLERS=https://bsky.network
LOG_ENABLED={{ MODE_DEBUG | string | lower }}
PDS_BLOBSTORE_DISK_LOCATION={{ BLUESKY_PDS_BLOBSTORE_LOCATION }}
PDS_DATA_DIRECTORY={{ BLUESKY_PDS_DATA_DIR }}
PDS_BLOB_UPLOAD_LIMIT=52428800
PDS_DID_PLC_URL=https://plc.directory
# See https://mattdyson.org/blog/2024/11/self-hosting-bluesky-pds/
PDS_SERVICE_HANDLE_DOMAINS=".{{ PRIMARY_DOMAIN }}"
PDS_JWT_SECRET="{{ bluesky_jwt_secret }}"
PDS_ADMIN_PASSWORD="{{bluesky_admin_password}}"
PDS_PLC_ROTATION_KEY_K256_PRIVATE_KEY_HEX="{{ bluesky_rotation_key }}"
PDS_CRAWLERS=https://bsky.network
PDS_SERVICE_DID="did:web:{{ BLUESKY_API_DOMAIN }}"
# Email
PDS_ADMIN_EMAIL="{{ BLUESKY_ADMIN_EMAIL }}"
PDS_EMAIL_SMTP_URL=smtps://{{ users['no-reply'].email }}:{{ users['no-reply'].mailu_token }}@{{ SYSTEM_EMAIL.HOST }}:{{ SYSTEM_EMAIL.PORT }}/
PDS_EMAIL_FROM_ADDRESS={{ users['no-reply'].email }}
LOG_ENABLED=true
PDS_BLOBSTORE_DISK_LOCATION=/opt/pds/blocks
PDS_DATA_DIRECTORY: /opt/pds
PDS_BLOB_UPLOAD_LIMIT: 52428800
PDS_DID_PLC_URL=https://plc.directory
PDS_BSKY_APP_VIEW_URL=https://{{domains[application_id].web}}
PDS_BSKY_APP_VIEW_DID=did:web:{{domains[application_id].web}}
# Credentials
PDS_JWT_SECRET="{{ BLUESKY_JWT_SECRET }}"
PDS_ADMIN_PASSWORD="{{ BLUESKY_ADMIN_PASSWORD }}"
PDS_PLC_ROTATION_KEY_K256_PRIVATE_KEY_HEX="{{ BLUESKY_ROTATION_KEY }}"
# View
PDS_BSKY_APP_VIEW_URL={{ BLUESKY_VIEW_URL }}
PDS_BSKY_APP_VIEW_DID={{ BLUESKY_VIEW_DID }}
# Report
PDS_REPORT_SERVICE_URL=https://mod.bsky.app
PDS_REPORT_SERVICE_DID=did:plc:ar7c4by46qjdydhdevvrndac

View File

@@ -0,0 +1,29 @@
# Injected by web-app-bluesky (same pattern as web-app-yourls)
# Exposes a same-origin /config to avoid CORS when the social-app fetches config.
location = /config {
proxy_pass {{ BLUESKY_CONFIG_UPSTREAM_URL }};
# Extract the hostname only:
set $up_host "{{ BLUESKY_CONFIG_UPSTREAM_URL | regex_replace('^https?://', '') | regex_replace('/.*$', '') }}";
proxy_set_header Host $up_host;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_ssl_server_name on;
# Make response clearly same-origin for browsers
proxy_hide_header Access-Control-Allow-Origin;
add_header Access-Control-Allow-Origin $scheme://$host always;
add_header Vary Origin always;
}
location = /ipcc {
proxy_pass https://bsky.app/ipcc;
set $up_host "bsky.app";
proxy_set_header Host $up_host;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_ssl_server_name on;
proxy_hide_header Access-Control-Allow-Origin;
add_header Access-Control-Allow-Origin $scheme://$host always;
add_header Vary Origin always;
}
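The `$up_host` value above is derived by chaining two `regex_replace` filters; a minimal Python sketch of the same transformation (the function name is illustrative):

```python
import re

def upstream_host(url: str) -> str:
    # Mirrors the two regex_replace filters: strip the scheme,
    # then drop everything from the first slash onward.
    host = re.sub(r"^https?://", "", url)
    return re.sub(r"/.*$", "", host)

upstream_host("https://api.example.org/config")  # -> "api.example.org"
```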

View File

@@ -1,11 +1,48 @@
application_id: "web-app-bluesky"
social_app_path: "{{ docker_compose.directories.services }}/social-app"
# General
application_id: "web-app-bluesky"
## Bluesky
## Social App
BLUESKY_SOCIAL_APP_DIR: "{{ docker_compose.directories.services }}/social-app"
# This should be removed when the following issue is closed:
# https://github.com/bluesky-social/pds/issues/52
pdsadmin_folder_path: "{{ docker_compose.directories.volumes }}/pdsadmin"
pdsadmin_file_path: "{{pdsadmin_folder_path}}/pdsadmin"
pdsadmin_temporary_tar_path: "/tmp/pdsadmin.tar.gz"
bluesky_jwt_secret: "{{ applications | get_app_conf(application_id, 'credentials.jwt_secret') }}"
bluesky_admin_password: "{{ applications | get_app_conf(application_id, 'credentials.admin_password') }}"
bluesky_rotation_key: "{{ applications | get_app_conf(application_id, 'credentials.plc_rotation_key_k256_private_key_hex') }}"
## PDS
BLUESKY_PDSADMIN_DIR: "{{ [ docker_compose.directories.volumes, 'pdsadmin' ] | path_join }}"
BLUESKY_PDSADMIN_FILE: "{{ [ BLUESKY_PDSADMIN_DIR, 'pdsadmin' ] | path_join }}"
BLUESKY_PDSADMIN_TMP_TAR: "/tmp/pdsadmin.tar.gz"
BLUESKY_PDS_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.pds.image') }}"
BLUESKY_PDS_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.pds.version') }}"
BLUESKY_PDS_DATA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.pds_data') }}"
BLUESKY_PDS_DATA_DIR: "/opt/pds"
BLUESKY_PDS_BLOBSTORE_LOCATION: "{{ [ BLUESKY_PDS_DATA_DIR, 'blocks' ] | path_join }}"
## Web
BLUESKY_WEB_ENABLED: "{{ applications | get_app_conf(application_id, 'docker.services.web.enabled') }}"
BLUESKY_WEB_DOMAIN: "{{ domains[application_id].web }}"
BLUESKY_WEB_PORT: "{{ ports.localhost.http['web-app-bluesky_web'] }}"
## View
BLUESKY_VIEW_ENABLED: "{{ applications | get_app_conf(application_id, 'docker.services.view.enabled') }}"
BLUESKY_VIEW_DOMAIN: "{{ domains[application_id].view if BLUESKY_VIEW_ENABLED else 'api.bsky.app' }}"
BLUESKY_VIEW_URL: "{{ WEB_PROTOCOL }}://{{ BLUESKY_VIEW_DOMAIN }}"
BLUESKY_VIEW_DID: "did:web:{{ BLUESKY_VIEW_DOMAIN }}"
BLUESKY_VIEW_PORT: "{{ ports.localhost.http['web-app-bluesky_view'] | default(8053) }}"
## Server
BLUESKY_API_DOMAIN: "{{ domains[application_id].api }}"
BLUESKY_API_PORT: "{{ ports.localhost.http['web-app-bluesky_api'] }}"
## Credentials
BLUESKY_JWT_SECRET: "{{ applications | get_app_conf(application_id, 'credentials.jwt_secret') }}"
BLUESKY_ROTATION_KEY: "{{ applications | get_app_conf(application_id, 'credentials.plc_rotation_key_k256_private_key_hex') }}"
## Admin
BLUESKY_ADMIN_EMAIL: "{{ users.administrator.email }}"
BLUESKY_ADMIN_PASSWORD: "{{ users.administrator.password }}"
# Front proxy
BLUESKY_FRONT_PROXY_CONTENT: "{{ lookup('template', 'extra_locations.conf.j2') }}"
BLUESKY_CONFIG_UPSTREAM_URL: "{{ applications | get_app_conf(application_id, 'server.config_upstream_url') }}"

View File

@@ -0,0 +1,26 @@
# BookWyrm
## Description
**BookWyrm** is a self-hosted social reading platform where users can share books, post reviews, follow each other, and join federated conversations across the Fediverse. It is a community-driven alternative to proprietary platforms like Goodreads. Readers can catalog their library, track reading progress, and discover new books through friends and federated timelines.
## Overview
BookWyrm provides a federated social network for books built on ActivityPub. Each instance can be private, invitation-only, or open for public registration. Users can import/export book lists, interact with others across the Fediverse, and maintain their own curated reading environment. As an admin, you can configure moderation tools, content rules, and federation policies to suit your community.
## Features
- **Federated Social Network:** Connects with other BookWyrm instances and ActivityPub platforms.
- **Book Cataloging:** Add, search, and organize books; import/export libraries.
- **Reading Status & Reviews:** Mark books as “to read,” “reading,” or “finished,” and publish reviews or quotes.
- **Timelines & Interaction:** Follow other readers, comment on reviews, and engage in federated discussions.
- **Privacy & Moderation:** Fine-grained controls for content visibility, moderation, and federation settings.
- **Community Building:** Host a private club, classroom library, or large public community for readers.
- **Optional SSO Integration:** Can work with OIDC for unified login across platforms.
## Further Resources
- [BookWyrm GitHub](https://github.com/bookwyrm-social/bookwyrm)
- [BookWyrm Documentation](https://docs.joinbookwyrm.com/)
- [ActivityPub (Wikipedia)](https://en.wikipedia.org/wiki/ActivityPub)
- [Fediverse (Wikipedia)](https://en.wikipedia.org/wiki/Fediverse)

View File

@@ -1,2 +0,0 @@
# Todo
- Implement https://joinbookwyrm.com/de/

View File

@@ -0,0 +1,40 @@
credentials: {}
docker:
services:
database:
enabled: true
redis:
enabled: true
application:
version: 'v0.7.5'
name: bookwyrm
worker:
enabled: true
volumes:
data: "bookwyrm_data"
media: "bookwyrm_media"
features:
matomo: true
css: true
desktop: true
central_database: true
logout: true
oidc: false
ldap: false
server:
csp:
whitelist: {}
flags:
script-src-elem:
unsafe-inline: true
script-src:
unsafe-inline: true
domains:
canonical:
- "book.{{ PRIMARY_DOMAIN }}"
aliases:
- "bookwyrm.{{ PRIMARY_DOMAIN }}"
rbac:
roles: {}
registration_open: false
allow_invite_request: false

View File

@@ -1,22 +1,26 @@
---
galaxy_info:
author: "Kevin Veen-Birkenbach"
description: "Deploys BookWyrm social reading server via Docker Compose, with basic domain and port wiring."
author: "Kevin Veen-Birkenbach"
description: "BookWyrm is a self-hosted federated social reading platform where users share reviews, track reading, and connect with others across the Fediverse."
license: "Infinito.Nexus NonCommercial License"
license_url: "https://s.infinito.nexus/license"
company: |
Kevin Veen-Birkenbach
Kevin Veen-Birkenbach
Consulting & Coaching Solutions
https://www.veen.world
galaxy_tags:
- bookwyrm
- social
- docker
- books
- social-network
- fediverse
- activitypub
- reading
repository: "https://s.infinito.nexus/code"
issue_tracker_url: "https://s.infinito.nexus/issues"
documentation: "https://s.infinito.nexus/code/tree/main/roles/web-app-bookwyrm"
min_ansible_version: "2.9"
platforms:
- name: Any
versions:
- all
logo:
class: "fas fa-book"
run_after:
- web-app-matomo
- web-app-keycloak
- web-app-mailu
dependencies: []

View File

@@ -0,0 +1,6 @@
credentials:
secret_key:
description: "Django SECRET_KEY for BookWyrm"
algorithm: "alphanumeric" # uses generate_value('alphanumeric') → 64 random a-zA-Z0-9
validation:
min_length: 50 # Django recommends ≥50 characters

View File

@@ -0,0 +1,7 @@
---
- block:
- name: "load docker, db/redis and proxy for {{ application_id }}"
include_role:
name: sys-stk-full-stateful
- include_tasks: utils/run_once.yml
when: run_once_web_app_bookwyrm is not defined

View File

@@ -0,0 +1,40 @@
# Build BookWyrm from source (no upstream image available)
FROM python:3.11-bookworm AS builder
RUN apt-get update && apt-get install -y --no-install-recommends \
git build-essential libpq-dev \
libjpeg-dev zlib1g-dev libxml2-dev libxslt1-dev libffi-dev libmagic-dev \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /src
# Shallow clone the chosen tag/branch
RUN git clone --depth=1 --branch "{{ BOOKWYRM_VERSION }}" https://github.com/bookwyrm-social/bookwyrm.git .
# Pre-install Python deps to a wheelhouse for faster final image
RUN pip install --upgrade pip \
&& pip wheel --wheel-dir /wheels -r requirements.txt \
&& pip wheel --wheel-dir /wheels gunicorn
FROM python:3.11-bookworm
ENV PYTHONUNBUFFERED=1
WORKDIR /app
# Copy app source and wheels
COPY --from=builder /src /app
COPY --from=builder /wheels /wheels
# System deps for runtime
RUN apt-get update && apt-get install -y --no-install-recommends \
libpq5 curl \
libjpeg62-turbo zlib1g libxml2 libxslt1.1 libffi8 libmagic1 \
&& rm -rf /var/lib/apt/lists/* \
&& pip install --no-cache-dir --no-index --find-links=/wheels -r /app/requirements.txt \
&& pip install --no-cache-dir --no-index --find-links=/wheels gunicorn \
&& adduser --disabled-password --gecos '' bookwyrm \
&& mkdir -p /app/data /app/media \
&& chown -R bookwyrm:bookwyrm /app
USER bookwyrm
# Gunicorn/Celery are configured by upstream files in repo
# Ports/healthcheck handled by compose template

View File

@@ -0,0 +1,44 @@
{% include 'roles/docker-compose/templates/base.yml.j2' %}
application:
{% include 'roles/docker-container/templates/base.yml.j2' %}
command: >-
bash -lc '
python manage.py migrate --noinput &&
python manage.py collectstatic --noinput &&
(python manage.py initdb || true) &&
python -m gunicorn bookwyrm.wsgi:application --bind 0.0.0.0:{{ container_port }}
'
build:
context: .
dockerfile: Dockerfile
image: "{{ BOOKWYRM_CUSTOM_IMAGE }}"
container_name: "{{ BOOKWYRM_CONTAINER }}"
hostname: "{{ BOOKWYRM_HOSTNAME }}"
ports:
- "127.0.0.1:{{ ports.localhost.http[application_id] }}:{{ container_port }}"
volumes:
- 'data:/app/data'
- 'media:/app/media'
{% include 'roles/docker-container/templates/healthcheck/curl.yml.j2' %}
{% include 'roles/docker-container/templates/depends_on/dmbs_excl.yml.j2' %}
{% include 'roles/docker-container/templates/networks.yml.j2' %}
worker:
{% include 'roles/docker-container/templates/base.yml.j2' %}
image: "{{ BOOKWYRM_CUSTOM_IMAGE }}"
container_name: "{{ BOOKWYRM_WORKER_CONTAINER }}"
command: "bash -lc 'celery -A celerywyrm worker -l INFO'"
volumes:
- 'data:/app/data'
- 'media:/app/media'
{% include 'roles/docker-container/templates/depends_on/dmbs_excl.yml.j2' %}
{% include 'roles/docker-container/templates/networks.yml.j2' %}
{% include 'roles/docker-compose/templates/volumes.yml.j2' %}
data:
name: {{ BOOKWYRM_DATA_VOLUME }}
media:
name: {{ BOOKWYRM_MEDIA_VOLUME }}
{% include 'roles/docker-compose/templates/networks.yml.j2' %}

View File

@@ -0,0 +1,72 @@
# Core
BOOKWYRM_URL="{{ BOOKWYRM_URL }}"
DOMAIN="{{ BOOKWYRM_HOSTNAME }}"
ALLOWED_HOSTS="{{ BOOKWYRM_HOSTNAME }},127.0.0.1,localhost"
PORT="{{ WEB_PORT }}"
WEB_PROTOCOL="{{ WEB_PROTOCOL }}"
MEDIA_ROOT="/app/media"
DATA_ROOT="/app/data"
REGISTRATION_OPEN={{ BOOKWYRM_REGISTRATION_OPEN }}
ALLOW_INVITE_REQUESTS={{ BOOKWYRM_ALLOW_INVITE_REQUESTS }}
# Django/Secrets (provide via vault/env in production)
SECRET_KEY="{{ BOOKWYRM_SECRET_KEY }}"
# Email / SMTP (BookWyrm expects these names)
EMAIL_HOST="{{ EMAIL_HOST }}"
EMAIL_PORT="{{ EMAIL_PORT }}"
EMAIL_USE_TLS={{ EMAIL_USE_TLS }}
EMAIL_USE_SSL={{ EMAIL_USE_SSL }}
EMAIL_HOST_USER="{{ EMAIL_HOST_USER }}"
EMAIL_HOST_PASSWORD="{{ EMAIL_HOST_PASSWORD }}"
DEFAULT_FROM_EMAIL="{{ EMAIL_DEFAULT_FROM }}"
# Database
POSTGRES_DB="{{ database_name }}"
POSTGRES_USER="{{ database_username }}"
POSTGRES_PASSWORD="{{ database_password }}"
POSTGRES_HOST="{{ database_host }}"
POSTGRES_PORT="{{ database_port }}"
DATABASE_URL="postgres://{{ database_username }}:{{ database_password }}@{{ database_host }}:{{ database_port }}/{{ database_name }}"
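The `DATABASE_URL` above interpolates the raw password; if credentials can contain reserved URL characters (`@`, `/`, `:`), they need percent-encoding. A sketch under that assumption (the helper name is illustrative):

```python
from urllib.parse import quote

def postgres_url(user: str, password: str, host: str, port: int, db: str) -> str:
    # Percent-encode the password so characters like '@' or '/'
    # don't break URL parsing.
    return f"postgres://{user}:{quote(password, safe='')}@{host}:{port}/{db}"
```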
# Redis / Celery
REDIS_HOST="{{ BOOKWYRM_REDIS_HOST }}"
REDIS_PORT="{{ BOOKWYRM_REDIS_PORT }}"
REDIS_URL="{{ BOOKWYRM_REDIS_CACHE_URL }}"
REDIS_CACHE_URL="{{ BOOKWYRM_REDIS_CACHE_URL }}"
CACHE_URL="{{ BOOKWYRM_REDIS_CACHE_URL }}"
DJANGO_REDIS_URL="{{ BOOKWYRM_REDIS_CACHE_URL }}"
## Broker
BROKER_URL="{{ BOOKWYRM_BROKER_URL }}"
REDIS_BROKER_URL="{{ BOOKWYRM_REDIS_BROKER_URL }}"
REDIS_BROKER_HOST="{{ BOOKWYRM_REDIS_HOST }}"
REDIS_BROKER_PORT="{{ BOOKWYRM_REDIS_PORT }}"
REDIS_BROKER_DB_INDEX="{{ BOOKWYRM_REDIS_BROKER_DB }}"
CELERY_BROKER_URL="{{ BOOKWYRM_REDIS_BROKER_URL }}"
## Activity
REDIS_ACTIVITY_HOST="{{ BOOKWYRM_REDIS_HOST }}"
REDIS_ACTIVITY_PORT="{{ BOOKWYRM_REDIS_PORT }}"
REDIS_ACTIVITY_DB_INDEX="{{ BOOKWYRM_REDIS_ACTIVITY_DB }}"
REDIS_ACTIVITY_URL="{{ BOOKWYRM_REDIS_ACTIVITY_URL }}"
# Proxy (if BookWyrm sits behind reverse proxy)
FORWARDED_ALLOW_IPS="*"
USE_X_FORWARDED_HOST="true"
SECURE_PROXY_SSL_HEADER="{{ (WEB_PORT == 443) | string | lower }}"
# OIDC (optional only if BOOKWYRM_OIDC_ENABLED)
{% if BOOKWYRM_OIDC_ENABLED %}
OIDC_TITLE="{{ BOOKWYRM_OIDC_LABEL | replace('\"','\\\"') }}"
OIDC_ISSUER="{{ BOOKWYRM_OIDC_ISSUER }}"
OIDC_AUTHORIZATION_ENDPOINT="{{ BOOKWYRM_OIDC_AUTH_URL }}"
OIDC_TOKEN_ENDPOINT="{{ BOOKWYRM_OIDC_TOKEN_URL }}"
OIDC_USERINFO_ENDPOINT="{{ BOOKWYRM_OIDC_USERINFO_URL }}"
OIDC_END_SESSION_ENDPOINT="{{ BOOKWYRM_OIDC_LOGOUT_URL }}"
OIDC_JWKS_URI="{{ BOOKWYRM_OIDC_JWKS_URL }}"
OIDC_CLIENT_ID="{{ BOOKWYRM_OIDC_CLIENT_ID }}"
OIDC_CLIENT_SECRET="{{ BOOKWYRM_OIDC_CLIENT_SECRET }}"
OIDC_SCOPES="{{ BOOKWYRM_OIDC_SCOPES }}"
OIDC_UNIQUE_ATTRIBUTE="{{ BOOKWYRM_OIDC_UNIQUE_ATTRIBUTE }}"
{% endif %}

View File

@@ -1 +1,63 @@
application_id: web-app-bookwyrm
# General
application_id: "web-app-bookwyrm"
database_type: "postgres"
# Container
container_port: 8000
container_hostname: "{{ domains | get_domain(application_id) }}"
# BookWyrm
BOOKWYRM_REGISTRATION_OPEN: "{{ applications | get_app_conf(application_id, 'registration_open') | string | lower }}"
BOOKWYRM_ALLOW_INVITE_REQUESTS: "{{ applications | get_app_conf(application_id, 'allow_invite_request') | string | lower }}"
## Credentials
BOOKWYRM_SECRET_KEY: "{{ applications | get_app_conf(application_id, 'credentials.secret_key') }}"
## URLs
BOOKWYRM_URL: "{{ domains | get_url(application_id, WEB_PROTOCOL) }}"
BOOKWYRM_HOSTNAME: "{{ container_hostname }}"
## OIDC (optional; can be fronted by oauth2-proxy or native if you wire it)
BOOKWYRM_OIDC_ENABLED: "{{ applications | get_app_conf(application_id, 'features.oidc') }}"
BOOKWYRM_OIDC_LABEL: "{{ OIDC.BUTTON_TEXT }}"
BOOKWYRM_OIDC_CLIENT_ID: "{{ OIDC.CLIENT.ID }}"
BOOKWYRM_OIDC_CLIENT_SECRET: "{{ OIDC.CLIENT.SECRET }}"
BOOKWYRM_OIDC_ISSUER: "{{ OIDC.CLIENT.ISSUER_URL }}"
BOOKWYRM_OIDC_AUTH_URL: "{{ OIDC.CLIENT.AUTHORIZE_URL }}"
BOOKWYRM_OIDC_TOKEN_URL: "{{ OIDC.CLIENT.TOKEN_URL }}"
BOOKWYRM_OIDC_USERINFO_URL: "{{ OIDC.CLIENT.USER_INFO_URL }}"
BOOKWYRM_OIDC_LOGOUT_URL: "{{ OIDC.CLIENT.LOGOUT_URL }}"
BOOKWYRM_OIDC_JWKS_URL: "{{ OIDC.CLIENT.CERTS }}"
BOOKWYRM_OIDC_SCOPES: "openid,email,profile"
BOOKWYRM_OIDC_UNIQUE_ATTRIBUTE: "{{ OIDC.ATTRIBUTES.USERNAME }}"
## Docker
BOOKWYRM_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.application.name') }}"
BOOKWYRM_DATA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
BOOKWYRM_MEDIA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.media') }}"
BOOKWYRM_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.application.version') }}"
BOOKWYRM_CUSTOM_IMAGE: "bookwyrm_custom"
BOOKWYRM_WORKER_CONTAINER: "{{ BOOKWYRM_CONTAINER }}-worker"
## Redis
BOOKWYRM_REDIS_HOST: "redis"
BOOKWYRM_REDIS_PORT: 6379
BOOKWYRM_REDIS_BASE_URL: "redis://{{ BOOKWYRM_REDIS_HOST }}:{{ BOOKWYRM_REDIS_PORT }}"
BOOKWYRM_REDIS_BROKER_URL: "{{ BOOKWYRM_REDIS_BASE_URL }}/0"
BOOKWYRM_REDIS_CACHE_URL: "{{ BOOKWYRM_REDIS_BASE_URL }}/1"
BOOKWYRM_REDIS_BROKER_DB: 0
BOOKWYRM_REDIS_ACTIVITY_DB: 1
BOOKWYRM_BROKER_URL: "{{ BOOKWYRM_REDIS_BROKER_URL }}"
BOOKWYRM_REDIS_ACTIVITY_URL: "{{ BOOKWYRM_REDIS_CACHE_URL }}"
#BOOKWYRM_CACHE_URL: "{{ BOOKWYRM_REDIS_CACHE_URL }}"
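The Redis wiring above points broker and cache/activity traffic at the same server on different DB indexes; a small sketch of how the URLs are composed (defaults mirror the vars above):

```python
def redis_urls(host: str = "redis", port: int = 6379,
               broker_db: int = 0, cache_db: int = 1) -> dict:
    base = f"redis://{host}:{port}"
    # Celery broker and cache/activity share one server, split by DB index.
    return {"broker": f"{base}/{broker_db}", "cache": f"{base}/{cache_db}"}
```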
# Email
EMAIL_HOST: "{{ SYSTEM_EMAIL.HOST }}"
EMAIL_PORT: "{{ SYSTEM_EMAIL.PORT }}"
EMAIL_HOST_USER: "{{ users['no-reply'].email }}"
EMAIL_HOST_PASSWORD: "{{ users['no-reply'].mailu_token }}"
# TLS/SSL: If TLS is true → TLS; else → SSL
EMAIL_USE_TLS: "{{ SYSTEM_EMAIL.TLS | ternary('true','false') }}"
EMAIL_USE_SSL: "{{ SYSTEM_EMAIL.TLS | ternary('false','true') }}"
EMAIL_DEFAULT_FROM: "BookWyrm <{{ users['no-reply'].email }}>"
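The TLS/SSL flags above are deliberately mutually exclusive: STARTTLS when `SYSTEM_EMAIL.TLS` is true, implicit SSL otherwise. A sketch of that inversion (helper name is illustrative):

```python
def smtp_flags(tls: bool) -> dict:
    # EMAIL_USE_TLS and EMAIL_USE_SSL must never both be true;
    # one is always the negation of the other.
    return {
        "EMAIL_USE_TLS": str(tls).lower(),
        "EMAIL_USE_SSL": str(not tls).lower(),
    }
```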

View File

@@ -0,0 +1,25 @@
# Bridgy Fed
## Description
Bridgy Fed bridges ActivityPub (Fediverse), ATProto/Bluesky, and IndieWeb (webmentions/mf2). It mirrors identities and interactions across networks.
## Overview
This role builds and runs Bridgy Fed as a Docker container and (optionally) starts a Datastore-mode Firestore emulator as a sidecar. It exposes HTTP locally for a front proxy.
Upstream docs & dev notes:
- User & developer docs: https://fed.brid.gy and https://bridgy-fed.readthedocs.io/
- Source: https://github.com/snarfed/bridgy-fed
- Local run (reference): `flask run -p 8080` with BRIDGY_APPVIEW_HOST/BRIDGY_PLC_HOST/BRIDGY_BGS_HOST/BRIDGY_PDS_HOST set, and Datastore emulator envs
## Features
- Dockerized Flask app (gunicorn)
- Optional Firestore emulator (Datastore mode) sidecar
- Front proxy integration via `sys-stk-front-proxy`
## Quick start
1) Set domains and ports in inventory.
2) Enable/disable the emulator in `config/main.yml`.
3) Run the role; your front proxy will publish the app.
## Notes
- Emulator is **not** for production; it's in-memory unless you mount a volume or configure import/export.

View File

@@ -0,0 +1,29 @@
features:
matomo: true
css: true
desktop: true
central_database: false
logout: false
oidc: false
server:
domains:
canonical:
- "bridgyfed.{{ PRIMARY_DOMAIN }}"
csp:
whitelist: {}
flags: {}
docker:
services:
database:
enabled: false
application:
image: "python"
version: "3.12-bookworm"
name: "web-app-bridgy-fed"
repository: "https://github.com/snarfed/bridgy-fed.git"
branch: "main"
rbac:
roles: {}

View File

@@ -0,0 +1,49 @@
# Runtime image for Bridgy Fed (Flask) with a build step that clones upstream
ARG PY_BASE="python:3.12-bookworm"
FROM ${PY_BASE} AS build
ARG BRIDGY_REPO_URL
ARG BRIDGY_REPO_BRANCH
# System deps: git, build tools, curl for healthchecks, and gunicorn
RUN apt-get update && apt-get install -y --no-install-recommends \
git build-essential curl ca-certificates && \
rm -rf /var/lib/apt/lists/*
WORKDIR /app
RUN git clone --depth=1 --branch "${BRIDGY_REPO_BRANCH}" "${BRIDGY_REPO_URL}" ./
# Python deps
RUN pip install --upgrade pip && \
pip install --no-cache-dir -r requirements.txt
# Create oauth_dropins static symlink (upstream expects this)
RUN python - <<'PY'
import oauth_dropins, pathlib, os
target = pathlib.Path(oauth_dropins.__file__).parent / 'static'
link = pathlib.Path('/app/oauth_dropins_static')
try:
    if link.exists() or link.is_symlink():
        link.unlink()
    os.symlink(str(target), str(link))
except FileExistsError:
    pass
print('Symlinked oauth_dropins_static ->', target)
PY
# Final stage
FROM ${PY_BASE}
ARG CONTAINER_PORT
ENV PORT=${CONTAINER_PORT}
WORKDIR /app
COPY --from=build /app /app
# Non-root good practice
RUN useradd -r -m -d /nonroot appuser && chown -R appuser:appuser /app
USER appuser
EXPOSE ${PORT}
# Upstream flask app entry: 'flask_app:app'
CMD ["sh", "-lc", "exec gunicorn -w 2 -k gthread -b 0.0.0.0:${PORT} flask_app:app"]

View File

@@ -1,24 +1,22 @@
---
galaxy_info:
author: "Kevin Veen-Birkenbach"
description: "Checks if the pkgmgr command is available and runs 'pkgmgr update --all' to update all repositories."
description: "Bridgy Fed: bridge between ActivityPub (Fediverse), ATProto/Bluesky and IndieWeb."
license: "Infinito.Nexus NonCommercial License"
license_url: "https://s.infinito.nexus/license"
company: |
Kevin Veen-Birkenbach
Consulting & Coaching Solutions
https://www.veen.world
min_ansible_version: "2.9"
platforms:
- name: Linux
versions:
- all
galaxy_tags:
- update
- pkgmgr
- pkgmgr
- system
- activitypub
- bluesky
- atproto
- fediverse
- bridge
repository: "https://s.infinito.nexus/code"
issue_tracker_url: "https://s.infinito.nexus/issues"
documentation: "https://docs.infinito.nexus"
dependencies: []
documentation: "https://fed.brid.gy/docs"
logo:
class: "fa-solid fa-bridge"
dependencies: []

View File

@@ -0,0 +1,9 @@
- name: "Load docker and front proxy for {{ application_id }}"
include_role:
name: sys-stk-full-stateless
- name: "Include front proxy for {{ container_hostname }}:{{ ports.localhost.http[application_id] }}"
include_role:
name: sys-stk-front-proxy
- include_tasks: utils/run_once.yml

View File

@@ -0,0 +1,3 @@
- name: "Include core routines for '{{ application_id }}'"
include_tasks: "01_core.yml"
when: run_once_web_app_bridgy_fed is not defined

View File

@@ -0,0 +1,20 @@
{% include 'roles/docker-compose/templates/base.yml.j2' %}
application:
build:
context: .
dockerfile: Dockerfile
args:
BRIDGY_REPO_URL: "{{ BRIDGY_REPO_URL }}"
BRIDGY_REPO_BRANCH: "{{ BRIDGY_REPO_BRANCH }}"
CONTAINER_PORT: "{{ container_port | string }}"
image: "{{ BRIDGY_IMAGE }}:{{ BRIDGY_VERSION }}"
container_name: "{{ BRIDGY_CONTAINER }}"
hostname: "{{ container_hostname }}"
ports:
- "127.0.0.1:{{ http_port }}:{{ container_port }}"
{% include 'roles/docker-container/templates/healthcheck/tcp.yml.j2' %}
{% include 'roles/docker-container/templates/base.yml.j2' %}
{% include 'roles/docker-container/templates/networks.yml.j2' %}
{% include 'roles/docker-compose/templates/networks.yml.j2' %}

View File

@@ -0,0 +1,13 @@
# Flask / Gunicorn basics
FLASK_ENV="{{ ENVIRONMENT }}"
PORT="{{ container_port }}"
BRIDGY_ADMIN_EMAIL="{{ BRIDGY_ADMIN_EMAIL }}"
# Bridgy Fed upstream knobs (see README @ GitHub)
BRIDGY_APPVIEW_HOST="{{ BRIDGY_APPVIEW_HOST }}"
BRIDGY_PLC_HOST="{{ BRIDGY_PLC_HOST }}"
BRIDGY_BGS_HOST="{{ BRIDGY_BGS_HOST }}"
BRIDGY_PDS_HOST="{{ BRIDGY_PDS_HOST }}"
# Optional:
# GUNICORN_CMD_ARGS="--log-level info"

View File

@@ -0,0 +1,25 @@
# General
application_id: "web-app-bridgy-fed"
# Container
container_port: 8080
domain: "{{ container_hostname }}"
http_port: "{{ ports.localhost.http[application_id] }}"
container_hostname: "{{ domains | get_domain(application_id) }}"
# App container
BRIDGY_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.application.name') }}"
BRIDGY_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.application.image') }}"
BRIDGY_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.application.version')}}"
BRIDGY_ADMIN_EMAIL: "{{ users.administrator.email }}"
# Source
BRIDGY_REPO_URL: "{{ applications | get_app_conf(application_id, 'docker.services.application.repository') }}"
BRIDGY_REPO_BRANCH: "{{ applications | get_app_conf(application_id, 'docker.services.application.branch') }}"
# Runtime env defaults for Bridgy Fed (see upstream README)
BRIDGY_APPVIEW_HOST: "api.bsky.app"
BRIDGY_PLC_HOST: "plc.directory"
BRIDGY_BGS_HOST: "bsky.network"
BRIDGY_PDS_HOST: "atproto.brid.gy"

View File

@@ -1,2 +1,25 @@
# Todo
- Implement https://joinbookwyrm.com/de/
# Chess
## Description
**castling.club** is a federated chess server built on the ActivityPub protocol.
It provides an open and decentralized way to play chess online, where games and moves are visible across the Fediverse.
## Overview
Instead of relying on closed platforms, castling.club uses an arbiter actor (“the King”) to validate moves and mediate matches.
This ensures fair play, federation with platforms like Mastodon or Friendica, and community visibility of ongoing games.
The service runs as a lightweight Node.js app backed by PostgreSQL.
## Features
- **Federated Chess Matches:** Challenge and play with others across the Fediverse.
- **Rule Enforcement:** The arbiter validates each move for correctness.
- **Open Identities:** Use your existing Fediverse account; no new silo account needed.
- **Game Visibility:** Matches and moves can appear in social timelines.
- **Lightweight Service:** Built with Node.js and PostgreSQL for efficiency.
## Further Resources
- [castling.club GitHub Repository](https://github.com/stephank/castling.club)
- [ActivityPub Specification (W3C)](https://www.w3.org/TR/activitypub/)

View File

@@ -0,0 +1,35 @@
credentials: {}
docker:
services:
database:
enabled: true # Use central DB role (recommended)
application:
image: "node" # Base image family; final image is custom
version: "20-bullseye" # >=16 as required upstream
name: "web-app-chess"
backup:
no_stop_required: true
volumes:
data: "chess_data"
features:
matomo: true
css: true
desktop: true
central_database: true
logout: false
oidc: false
server:
csp:
whitelist: {}
flags:
script-src-elem:
unsafe-inline: true
domains:
canonical:
- "chess.{{ PRIMARY_DOMAIN }}"
aliases: []
rbac:
roles: {}
source:
repo: "https://github.com/stephank/castling.club.git"
ref: "main"

View File

@@ -0,0 +1,69 @@
# Multi-stage build for castling.club
# Allow a dynamic base image version in both stages
ARG CHESS_VERSION
# -------- Stage 1: build --------
FROM node:${CHESS_VERSION} AS build
# Build-time inputs
ARG CHESS_REPO_URL
ARG CHESS_REPO_REF
RUN apt-get update && apt-get install -y --no-install-recommends \
git ca-certificates openssl dumb-init python3 build-essential \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /src
RUN git clone --depth 1 --branch "${CHESS_REPO_REF}" "${CHESS_REPO_URL}" ./
# Prepare Yarn 4 as root (safe during build stage)
RUN corepack enable && corepack prepare yarn@4.9.1 --activate && yarn -v
RUN yarn install --immutable --inline-builds
RUN yarn build
# -------- Stage 2: runtime --------
FROM node:${CHESS_VERSION}
# Runtime inputs (formerly Jinja variables)
ARG CHESS_ENTRYPOINT_REL
ARG CHESS_ENTRYPOINT_INT
ARG CHESS_APP_DATA_DIR
ARG CONTAINER_PORT
WORKDIR /app
# Minimal runtime deps + curl for healthcheck
RUN apt-get update && apt-get install -y --no-install-recommends \
bash openssl dumb-init postgresql-client ca-certificates curl \
&& rm -rf /var/lib/apt/lists/*
# Copy built app
COPY --from=build /src /app
# Install entrypoint
COPY ${CHESS_ENTRYPOINT_REL} ${CHESS_ENTRYPOINT_INT}
RUN chmod +x ${CHESS_ENTRYPOINT_INT}
# Fix: enable Corepack/Yarn as root so shims land in /usr/local/bin
RUN corepack enable && corepack prepare yarn@4.9.1 --activate && yarn -v
# Create writable dirs and set ownership
RUN mkdir -p ${CHESS_APP_DATA_DIR} /app/.yarn/cache /home/node \
&& chown -R node:node /app /home/node
# Use project-local Yarn cache
ENV YARN_ENABLE_GLOBAL_CACHE=false \
YARN_CACHE_FOLDER=/app/.yarn/cache \
HOME=/home/node
# Drop privileges
USER node
# Expose the runtime port (build-time constant)
EXPOSE ${CONTAINER_PORT}
ENTRYPOINT ["dumb-init", "--"]
# Use a shell so the value can be expanded reliably
ENV CHESS_ENTRYPOINT_INT=${CHESS_ENTRYPOINT_INT}
CMD ["sh","-lc","exec \"$CHESS_ENTRYPOINT_INT\""]

View File

@@ -0,0 +1,34 @@
#!/usr/bin/env bash
set -euo pipefail
# Fail fast: with `set -u` this aborts if APP_KEY_FILE is unset
APP_KEY_FILE="${APP_KEY_FILE}"
APP_KEY_PUB="${APP_KEY_FILE}.pub"
# 1) Generate signing key pair if missing
if [[ ! -f "${APP_KEY_FILE}" || ! -f "${APP_KEY_PUB}" ]]; then
echo "[chess] generating RSA signing key pair at ${APP_KEY_FILE}"
key_dir="$(dirname "${APP_KEY_FILE}")"
key_base="$(basename "${APP_KEY_FILE}")"
( cd "${key_dir}" && bash /app/tools/gen-signing-key.sh "${key_base}" )
fi
# 1.5) Ensure Yarn is ready and deps are installed (PnP, immutable)
echo "[chess] preparing yarn & installing deps (immutable)"
corepack enable || true
yarn install --immutable --inline-builds
# 2) Wait for PostgreSQL if env is provided
if [[ -n "${PGHOST:-}" ]]; then
# Defaults guard against `set -u`, since only PGHOST is checked above
echo "[chess] waiting for PostgreSQL at ${PGHOST}:${PGPORT:-5432}..."
until pg_isready -h "${PGHOST}" -p "${PGPORT:-5432}" -U "${PGUSER:-postgres}" >/dev/null 2>&1; do
sleep 1
done
fi
# 3) Run migrations (idempotent)
echo "[chess] running migrations"
yarn migrate up
# 4) Start app
echo "[chess] starting server on port ${PORT}"
exec yarn start

View File

@@ -1,7 +1,7 @@
---
galaxy_info:
author: "Kevin Veen-Birchenbach"
description: "Stub role for deploying a Chess web application via Docker Compose (implementation pending)."
description: "Federated chess server based on ActivityPub. Play and follow games across the Fediverse with verified rules and open identities."
license: "Infinito.Nexus NonCommercial License"
license_url: "https://s.infinito.nexus/license"
company: |
@@ -10,13 +10,16 @@ galaxy_info:
https://www.veen.world
galaxy_tags:
- chess
- docker
- federation
- activitypub
- social
repository: "https://s.infinito.nexus/code"
issue_tracker_url: "https://s.infinito.nexus/issues"
documentation: "https://s.infinito.nexus/code/tree/main/roles/web-app-chess"
documentation: "https://github.com/stephank/castling.club"
logo:
class: "fas fa-chess-king"
min_ansible_version: "2.9"
platforms:
- name: Any
versions: [ all ]
dependencies: []

View File

View File

@@ -0,0 +1,12 @@
- name: "load docker, db and proxy for {{ application_id }}"
  include_role:
    name: sys-stk-full-stateful
- name: "Deploy '{{ CHESS_ENTRYPOINT_ABS }}'"
  copy:
    src: "{{ CHESS_ENTRYPOINT_FILE }}"
    dest: "{{ CHESS_ENTRYPOINT_ABS }}"
  notify:
    - docker compose build
- include_tasks: utils/run_once.yml

View File

@@ -0,0 +1,3 @@
- name: "Include core routines for '{{ application_id }}'"
  include_tasks: "01_core.yml"
  when: run_once_web_app_chess is not defined

View File

@@ -0,0 +1,30 @@
{% include 'roles/docker-compose/templates/base.yml.j2' %}
application:
build:
context: .
dockerfile: Dockerfile
args:
CHESS_VERSION: "{{ CHESS_VERSION }}"
CHESS_REPO_URL: "{{ CHESS_REPO_URL }}"
CHESS_REPO_REF: "{{ CHESS_REPO_REF }}"
CHESS_ENTRYPOINT_REL: "{{ CHESS_ENTRYPOINT_REL }}"
CHESS_ENTRYPOINT_INT: "{{ CHESS_ENTRYPOINT_INT }}"
CHESS_APP_DATA_DIR: "{{ CHESS_APP_DATA_DIR }}"
CONTAINER_PORT: "{{ container_port | string }}"
image: "{{ CHESS_CUSTOM_IMAGE }}"
container_name: "{{ CHESS_CONTAINER }}"
hostname: "{{ CHESS_HOSTNAME }}"
ports:
- "127.0.0.1:{{ ports.localhost.http[application_id] }}:{{ container_port }}"
volumes:
- 'data:{{ CHESS_APP_DATA_DIR }}'
{% include 'roles/docker-container/templates/healthcheck/curl.yml.j2' %}
{% include 'roles/docker-container/templates/base.yml.j2' %}
{% include 'roles/docker-container/templates/depends_on/dmbs_excl.yml.j2' %}
{% include 'roles/docker-container/templates/networks.yml.j2' %}
{% include 'roles/docker-compose/templates/volumes.yml.j2' %}
data:
name: {{ CHESS_DATA_VOLUME }}
{% include 'roles/docker-compose/templates/networks.yml.j2' %}

View File

@@ -0,0 +1,16 @@
# App basics
APP_SCHEME="{{ WEB_PROTOCOL }}"
APP_DOMAIN="{{ CHESS_HOSTNAME }}"
APP_ADMIN_URL="{{ CHESS_ADMIN_URL }}"
APP_ADMIN_EMAIL="{{ CHESS_ADMIN_EMAIL }}"
APP_KEY_FILE="{{ CHESS_APP_KEY_FILE }}"
APP_HMAC_SECRET="{{ CHESS_HMAC_SECRET }}"
NODE_ENV="{{ ENVIRONMENT }}"
PORT="{{ container_port }}"
# PostgreSQL (libpq envs)
PGHOST="{{ database_host }}"
PGPORT="{{ database_port }}"
PGDATABASE="{{ database_name }}"
PGUSER="{{ database_username }}"
PGPASSWORD="{{ database_password }}"

View File

@@ -1 +1,35 @@
application_id: web-app-chess
# General
application_id: "web-app-chess"
database_type: "postgres"
# Container
container_port: 5080
container_hostname: "{{ domains | get_domain(application_id) }}"
# App URLs & meta
# CHESS_URL: "{{ domains | get_url(application_id, WEB_PROTOCOL) }}"
CHESS_HOSTNAME: "{{ container_hostname }}"
CHESS_ADMIN_URL: ""
CHESS_ADMIN_EMAIL: "{{ users.administrator.email }}"
# Docker image
#CHESS_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.application.image') }}"
CHESS_CUSTOM_IMAGE: "castling_custom"
CHESS_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.application.version') }}"
CHESS_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.application.name') }}"
CHESS_DATA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
# Build source
CHESS_REPO_URL: "{{ applications | get_app_conf(application_id, 'source.repo') }}"
CHESS_REPO_REF: "{{ applications | get_app_conf(application_id, 'source.ref') }}"
# Security
CHESS_HMAC_SECRET: "{{ lookup('password', '/dev/null length=63 chars=ascii_letters,digits') }}" # '/dev/null' target: a fresh secret is generated on every run
CHESS_KEY_FILENAME: "signing-key"
CHESS_APP_DATA_DIR: '/app/data'
CHESS_APP_KEY_FILE: "{{ [ CHESS_APP_DATA_DIR, CHESS_KEY_FILENAME ] | path_join }}"
CHESS_ENTRYPOINT_FILE: "docker-entrypoint.sh"
CHESS_ENTRYPOINT_REL: "{{ CHESS_ENTRYPOINT_FILE }}"
CHESS_ENTRYPOINT_ABS: "{{ [docker_compose.directories.instance, CHESS_ENTRYPOINT_REL] | path_join }}"
CHESS_ENTRYPOINT_INT: "{{ ['/usr/local/bin', CHESS_ENTRYPOINT_FILE] | path_join }}"
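The `path_join` filter used for `CHESS_APP_KEY_FILE` and `CHESS_ENTRYPOINT_INT` behaves like Python's `os.path.join`; a quick sketch with the values above:

```python
import os.path

# Values copied from the vars above; path_join behaves like os.path.join here.
CHESS_APP_DATA_DIR = "/app/data"
CHESS_KEY_FILENAME = "signing-key"
CHESS_ENTRYPOINT_FILE = "docker-entrypoint.sh"

CHESS_APP_KEY_FILE = os.path.join(CHESS_APP_DATA_DIR, CHESS_KEY_FILENAME)
CHESS_ENTRYPOINT_INT = os.path.join("/usr/local/bin", CHESS_ENTRYPOINT_FILE)

print(CHESS_APP_KEY_FILE)    # /app/data/signing-key
print(CHESS_ENTRYPOINT_INT)  # /usr/local/bin/docker-entrypoint.sh
```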

View File

@@ -0,0 +1,25 @@
# Confluence
## Description
Confluence is Atlassian's enterprise wiki and collaboration platform. This role deploys Confluence via Docker Compose, wires it to PostgreSQL, and integrates proxy awareness, optional OIDC SSO, health checks, and production-friendly defaults for Infinito.Nexus.
## Overview
The role builds a minimal custom image on top of the official Confluence image, prepares persistent volumes, and exposes the app behind your reverse proxy. Configuration is driven by variables (image, version, volumes, domains, OIDC). JVM heap sizing is auto-derived from host RAM with safe caps to avoid `Xms > Xmx`.
## Features
* **Fully Dockerized:** Compose stack with a dedicated data volume (`confluence_data`) and a slim overlay image for future add-ons.
* **Reverse-Proxy Ready:** Sets `ATL_PROXY_NAME/PORT/SCHEME/SECURE` so Confluence generates correct external URLs behind HTTPS.
* **OIDC SSO (Optional):** Pre-templated vars for issuer, client, scopes, JWKS; compatible with Atlassian DC SSO/OIDC marketplace apps.
* **Central Database:** PostgreSQL integration (local or central DB) with bootstrap credentials from role vars.
* **JVM Auto-Tuning:** `JVM_MINIMUM_MEMORY` / `JVM_MAXIMUM_MEMORY` computed from host memory with upper bounds.
* **Health Checks:** Curl-based container healthcheck for early failure detection.
* **CSP & Canonical Domains:** Hooks into platform CSP/SSL/domain management to keep policies strict and URLs stable.
* **Backup Friendly:** Data isolated under `{{ CONFLUENCE_HOME }}`.
## Further Resources
* Product page: [Atlassian Confluence](https://www.atlassian.com/software/confluence)
* Docker Hub (official image): [atlassian/confluence](https://hub.docker.com/r/atlassian/confluence)

View File

@@ -15,13 +15,19 @@ features:
desktop: true
central_database: true
logout: true
oidc: true
oidc: false # Not enabled for demo version
ldap: false # Not enabled for demo version
server:
csp:
whitelist: {}
flags: {}
flags:
script-src-elem:
unsafe-inline: true
script-src:
unsafe-inline: true
domains:
canonical:
- "confluence.{{ PRIMARY_DOMAIN }}"
rbac:
roles: {}
truststore_enabled: false

View File

@@ -0,0 +1,10 @@
FROM "{{ CONFLUENCE_IMAGE }}:{{ CONFLUENCE_VERSION }}"
# Optional: install OIDC SSO app (example path/name)
# COPY ./plugins/atlassian-sso-dc-latest.obr /opt/atlassian/confluence/confluence/WEB-INF/atlassian-bundled-plugins/
# Ensure proper permissions for app data
RUN mkdir -p {{ CONFLUENCE_HOME }} && \
chown -R 2001:2001 {{ CONFLUENCE_HOME }}
RUN printf "confluence.home={{ CONFLUENCE_HOME }}\n" \
> /opt/atlassian/confluence/confluence/WEB-INF/classes/confluence-init.properties

View File

@@ -3,19 +3,16 @@
build:
context: .
dockerfile: Dockerfile
args:
CONFLUENCE_BASE_IMAGE: "{{ CONFLUENCE_IMAGE }}:{{ CONFLUENCE_VERSION }}"
image: "{{ CONFLUENCE_IMAGE }}:{{ CONFLUENCE_VERSION }}-oidc"
image: "{{ CONFLUENCE_CUSTOM_IMAGE }}"
container_name: "{{ CONFLUENCE_CONTAINER }}"
hostname: '{{ CONFLUENCE_HOSTNAME }}'
ports:
- "127.0.0.1:{{ ports.localhost.http[application_id] }}:8090"
volumes:
- 'data:/var/atlassian/application-data/confluence'
- 'data:{{ CONFLUENCE_HOME }}'
{% include 'roles/docker-container/templates/healthcheck/curl.yml.j2' %}
{% include 'roles/docker-container/templates/base.yml.j2' %}
depends_on:
- database
{% include 'roles/docker-container/templates/depends_on/dmbs_excl.yml.j2' %}
{% include 'roles/docker-container/templates/networks.yml.j2' %}
{% include 'roles/docker-compose/templates/volumes.yml.j2' %}

View File

@@ -1,15 +1,25 @@
## Confluence core
CONFLUENCE_URL="{{ CONFLUENCE_URL }}"
CONFLUENCE_HOME="{{ CONFLUENCE_HOME }}"
ATL_PROXY_NAME={{ CONFLUENCE_HOSTNAME }}
ATL_PROXY_PORT={{ WEB_PORT }}
ATL_TOMCAT_SCHEME={{ WEB_PROTOCOL }}
ATL_TOMCAT_SECURE={{ (WEB_PORT == 443) | lower }}
JVM_MINIMUM_MEMORY={{ CONFLUENCE_JVM_MIN }}
JVM_MAXIMUM_MEMORY={{ CONFLUENCE_JVM_MAX }}
JVM_SUPPORT_RECOMMENDED_ARGS=-Datlassian.home={{ CONFLUENCE_HOME }} -Datlassian.upm.signature.check.disabled={{ CONFLUENCE_TRUST_STORE_ENABLED | ternary('false','true') }}
## Database
CONFLUENCE_DATABASE_NAME="{{ database_name }}"
CONFLUENCE_DATABASE_USER="{{ database_username }}"
CONFLUENCE_DATABASE_PASSWORD="{{ database_password }}"
CONFLUENCE_DATABASE_HOST="{{ database_host }}"
CONFLUENCE_DATABASE_PORT="{{ database_port }}"
ATL_DB_TYPE=postgresql
ATL_DB_DRIVER=org.postgresql.Driver
ATL_JDBC_URL=jdbc:postgresql://{{ database_host }}:{{ database_port }}/{{ database_name }}
ATL_JDBC_USER={{ database_username }}
ATL_JDBC_PASSWORD={{ database_password }}
## OIDC
{% if CONFLUENCE_OIDC_ENABLED %}
## OIDC
CONFLUENCE_OIDC_TITLE="{{ CONFLUENCE_OIDC_LABEL | replace('\"','\\\"') }}"
CONFLUENCE_OIDC_ISSUER="{{ CONFLUENCE_OIDC_ISSUER }}"
CONFLUENCE_OIDC_AUTHORIZATION_ENDPOINT="{{ CONFLUENCE_OIDC_AUTH_URL }}"

View File

@@ -1,27 +1,46 @@
application_id: "web-app-confluence"
database_type: "postgres"
container_port: 8090 # Default Confluence port
# General
application_id: "web-app-confluence"
database_type: "postgres"
# URLs
CONFLUENCE_URL: "{{ domains | get_url(application_id, WEB_PROTOCOL) }}"
CONFLUENCE_HOSTNAME: "{{ domains | get_domain(application_id) }}"
# Container
container_port: 8090
container_hostname: "{{ domains | get_domain(application_id) }}"
# OIDC
CONFLUENCE_OIDC_ENABLED: "{{ applications | get_app_conf(application_id, 'features.oidc') }}"
CONFLUENCE_OIDC_LABEL: "{{ OIDC.BUTTON_TEXT }}"
CONFLUENCE_OIDC_CLIENT_ID: "{{ OIDC.CLIENT.ID }}"
CONFLUENCE_OIDC_CLIENT_SECRET: "{{ OIDC.CLIENT.SECRET }}"
CONFLUENCE_OIDC_ISSUER: "{{ OIDC.CLIENT.ISSUER_URL }}"
CONFLUENCE_OIDC_AUTH_URL: "{{ OIDC.CLIENT.AUTHORIZE_URL }}"
CONFLUENCE_OIDC_TOKEN_URL: "{{ OIDC.CLIENT.TOKEN_URL }}"
CONFLUENCE_OIDC_USERINFO_URL: "{{ OIDC.CLIENT.USER_INFO_URL }}"
CONFLUENCE_OIDC_LOGOUT_URL: "{{ OIDC.CLIENT.LOGOUT_URL }}"
CONFLUENCE_OIDC_JWKS_URL: "{{ OIDC.CLIENT.CERTS }}"
CONFLUENCE_OIDC_SCOPES: "openid,email,profile"
# Confluence
## URLs
CONFLUENCE_URL: "{{ domains | get_url(application_id, WEB_PROTOCOL) }}"
CONFLUENCE_HOSTNAME: "{{ container_hostname }}"
CONFLUENCE_HOME: "/var/atlassian/application-data/confluence"
## OIDC
CONFLUENCE_OIDC_ENABLED: "{{ applications | get_app_conf(application_id, 'features.oidc') }}"
CONFLUENCE_OIDC_LABEL: "{{ OIDC.BUTTON_TEXT }}"
CONFLUENCE_OIDC_CLIENT_ID: "{{ OIDC.CLIENT.ID }}"
CONFLUENCE_OIDC_CLIENT_SECRET: "{{ OIDC.CLIENT.SECRET }}"
CONFLUENCE_OIDC_ISSUER: "{{ OIDC.CLIENT.ISSUER_URL }}"
CONFLUENCE_OIDC_AUTH_URL: "{{ OIDC.CLIENT.AUTHORIZE_URL }}"
CONFLUENCE_OIDC_TOKEN_URL: "{{ OIDC.CLIENT.TOKEN_URL }}"
CONFLUENCE_OIDC_USERINFO_URL: "{{ OIDC.CLIENT.USER_INFO_URL }}"
CONFLUENCE_OIDC_LOGOUT_URL: "{{ OIDC.CLIENT.LOGOUT_URL }}"
CONFLUENCE_OIDC_JWKS_URL: "{{ OIDC.CLIENT.CERTS }}"
CONFLUENCE_OIDC_SCOPES: "openid,email,profile"
CONFLUENCE_OIDC_UNIQUE_ATTRIBUTE: "{{ OIDC.ATTRIBUTES.USERNAME }}"
# Docker
CONFLUENCE_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.application.version') }}"
CONFLUENCE_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.application.image') }}"
CONFLUENCE_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.application.name') }}"
CONFLUENCE_DATA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
## Docker
CONFLUENCE_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.application.version') }}"
CONFLUENCE_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.application.image') }}"
CONFLUENCE_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.application.name') }}"
CONFLUENCE_DATA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
CONFLUENCE_CUSTOM_IMAGE: "{{ CONFLUENCE_IMAGE }}_custom"
## Performance
CONFLUENCE_TOTAL_MB: "{{ ansible_memtotal_mb | int }}"
CONFLUENCE_JVM_MAX_MB: "{{ [ (CONFLUENCE_TOTAL_MB | int // 2), 12288 ] | min }}"
CONFLUENCE_JVM_MIN_MB: "{{ [ (CONFLUENCE_TOTAL_MB | int // 4), (CONFLUENCE_JVM_MAX_MB | int) ] | min }}"
CONFLUENCE_JVM_MIN: "{{ CONFLUENCE_JVM_MIN_MB }}m"
CONFLUENCE_JVM_MAX: "{{ CONFLUENCE_JVM_MAX_MB }}m"
## Options
CONFLUENCE_TRUST_STORE_ENABLED: "{{ applications | get_app_conf(application_id, 'truststore_enabled') }}"
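The heap derivation above can be checked with a small sketch that mirrors the two Jinja `min` expressions (the function name is illustrative):

```python
def jvm_heap_mb(total_mb: int, cap_mb: int = 12288) -> tuple[int, int]:
    """Mirror of the Jinja heap expressions above (name is illustrative)."""
    jvm_max = min(total_mb // 2, cap_mb)   # half of host RAM, hard-capped
    jvm_min = min(total_mb // 4, jvm_max)  # quarter of RAM, never above Xmx
    return jvm_min, jvm_max

print(jvm_heap_mb(8192))   # (2048, 4096)
print(jvm_heap_mb(65536))  # (12288, 12288) -- the cap keeps Xms <= Xmx
```

Because `jvm_min` is clamped to `jvm_max`, `Xms > Xmx` cannot occur even on very large hosts.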

View File

@@ -0,0 +1,25 @@
# Jira
## Description
Jira Software is Atlassian's issue and project-tracking platform. This role deploys Jira via Docker Compose, connects it to PostgreSQL, and adds proxy awareness, optional OIDC SSO, health checks, and production-oriented defaults for Infinito.Nexus.
## Overview
The role builds a lean custom image on top of the official Jira Software image, provisions persistent volumes, and exposes the app behind your reverse proxy. Variables control image/version/volumes/domains/SSO. JVM heap sizing is auto-derived from host RAM with safe caps to prevent `Xms > Xmx`.
## Features
* **Fully Dockerized:** Compose stack with a dedicated data volume (`jira_data`) and a minimal overlay image to enable future plugins/config.
* **Reverse-Proxy/HTTPS Ready:** Preconfigured Atlassian Tomcat proxy envs so Jira respects external scheme/host/port.
* **OIDC SSO (Optional):** Pre-templated vars for issuer, client, endpoints, scopes; compatible with Atlassian DC SSO/OIDC marketplace apps.
* **Central Database:** PostgreSQL integration (local or central) with credentials sourced from role configuration.
* **JVM Auto-Tuning:** Safe calculation of `JVM_MINIMUM_MEMORY` / `JVM_MAXIMUM_MEMORY` with caps to avoid VM init errors.
* **Health Checks:** Container healthcheck for quicker failure detection and stable automation.
* **CSP & Canonical Domains:** Integrates with platform CSP and domain management.
* **Backup Ready:** Persistent data under `/var/atlassian/application-data/jira`.
## Further Resources
* Product page: [Atlassian Jira Software](https://www.atlassian.com/software/jira)
* Docker Hub (official image): [atlassian/jira-software](https://hub.docker.com/r/atlassian/jira-software)

View File

@@ -0,0 +1,35 @@
credentials: {}
docker:
services:
database:
enabled: true
application:
image: atlassian/jira-software
version: latest
name: jira
volumes:
data: "jira_data"
features:
matomo: true
css: true
desktop: true
central_database: true
logout: true
oidc: false # Not enabled for demo version
ldap: false # Not enabled for demo version
server:
csp:
whitelist: {}
flags:
script-src-elem:
unsafe-inline: true
unsafe-eval: true
script-src:
unsafe-inline: true
unsafe-eval: true
domains:
canonical:
- "jira.{{ PRIMARY_DOMAIN }}"
rbac:
roles: {}

View File

@@ -0,0 +1,20 @@
galaxy_info:
author: "Kevin Veen-Birkenbach"
description: "Jira Software is Atlassian's issue & project-tracking platform. This role deploys Jira in Docker, adds optional OIDC support, and integrates with the Infinito.Nexus ecosystem."
license: "Infinito.Nexus NonCommercial License"
license_url: "https://s.infinito.nexus/license"
company: |
Kevin Veen-Birkenbach
Consulting & Coaching Solutions
https://www.veen.world
galaxy_tags: []
repository: "https://s.infinito.nexus/code"
issue_tracker_url: "https://s.infinito.nexus/issues"
documentation: "https://s.infinito.nexus/code/"
logo:
class: "fas fa-diagram-project"
run_after:
- web-app-matomo
- web-app-keycloak
- web-app-mailu
dependencies: []

View File

View File

@@ -0,0 +1,7 @@
---
- block:
    - name: "load docker, db and proxy for {{ application_id }}"
      include_role:
        name: sys-stk-full-stateful
    - include_tasks: utils/run_once.yml
  when: run_once_web_app_jira is not defined

View File

@@ -0,0 +1,8 @@
FROM "{{ JIRA_IMAGE }}:{{ JIRA_VERSION }}"
# Optional: install OIDC SSO app (example path/name)
# COPY ./plugins/atlassian-sso-dc-latest.obr /opt/atlassian/jira/atlassian-bundled-plugins/
# Ensure proper permissions for app data
RUN mkdir -p /var/atlassian/application-data/jira && \
chown -R 2001:2001 /var/atlassian/application-data/jira

View File

@@ -0,0 +1,23 @@
{% include 'roles/docker-compose/templates/base.yml.j2' %}
application:
build:
context: .
dockerfile: Dockerfile
image: "{{ JIRA_CUSTOM_IMAGE }}"
container_name: "{{ JIRA_CONTAINER }}"
hostname: '{{ JIRA_HOSTNAME }}'
ports:
- "127.0.0.1:{{ ports.localhost.http[application_id] }}:8080"
volumes:
- 'data:/var/atlassian/application-data/jira'
{% include 'roles/docker-container/templates/healthcheck/curl.yml.j2' %}
{% include 'roles/docker-container/templates/base.yml.j2' %}
{% include 'roles/docker-container/templates/depends_on/dmbs_excl.yml.j2' %}
{% include 'roles/docker-container/templates/networks.yml.j2' %}
{% include 'roles/docker-compose/templates/volumes.yml.j2' %}
data:
name: {{ JIRA_DATA_VOLUME }}
{% include 'roles/docker-compose/templates/networks.yml.j2' %}

View File

@@ -0,0 +1,31 @@
## Jira core
JIRA_URL="{{ JIRA_URL }}"
## Database
ATL_DB_TYPE=postgres72
ATL_DB_DRIVER=org.postgresql.Driver
ATL_JDBC_URL=jdbc:postgresql://{{ database_host }}:{{ database_port }}/{{ database_name }}
ATL_JDBC_USER={{ database_username }}
ATL_JDBC_PASSWORD={{ database_password }}
ATL_PROXY_NAME={{ JIRA_HOSTNAME }}
ATL_PROXY_PORT={{ WEB_PORT }}
ATL_TOMCAT_SCHEME={{ WEB_PROTOCOL }}
ATL_TOMCAT_SECURE={{ (WEB_PORT == 443) | lower }}
JVM_MINIMUM_MEMORY={{ JIRA_JVM_MIN }}
JVM_MAXIMUM_MEMORY={{ JIRA_JVM_MAX }}
## OIDC
{% if JIRA_OIDC_ENABLED %}
JIRA_OIDC_TITLE="{{ JIRA_OIDC_LABEL | replace('\"','\\\"') }}"
JIRA_OIDC_ISSUER="{{ JIRA_OIDC_ISSUER }}"
JIRA_OIDC_AUTHORIZATION_ENDPOINT="{{ JIRA_OIDC_AUTH_URL }}"
JIRA_OIDC_TOKEN_ENDPOINT="{{ JIRA_OIDC_TOKEN_URL }}"
JIRA_OIDC_USERINFO_ENDPOINT="{{ JIRA_OIDC_USERINFO_URL }}"
JIRA_OIDC_END_SESSION_ENDPOINT="{{ JIRA_OIDC_LOGOUT_URL }}"
JIRA_OIDC_JWKS_URI="{{ JIRA_OIDC_JWKS_URL }}"
JIRA_OIDC_CLIENT_ID="{{ JIRA_OIDC_CLIENT_ID }}"
JIRA_OIDC_CLIENT_SECRET="{{ JIRA_OIDC_CLIENT_SECRET }}"
JIRA_OIDC_SCOPES="{{ JIRA_OIDC_SCOPES }}"
JIRA_OIDC_UNIQUE_ATTRIBUTE="{{ JIRA_OIDC_UNIQUE_ATTRIBUTE }}"
{% endif %}

View File

@@ -0,0 +1,41 @@
# General
application_id: "web-app-jira"
database_type: "postgres"
# Container
container_port: 8080 # Default Jira port
container_hostname: "{{ domains | get_domain(application_id) }}"
# Jira
## URLs
JIRA_URL: "{{ domains | get_url(application_id, WEB_PROTOCOL) }}"
JIRA_HOSTNAME: "{{ container_hostname }}"
## OIDC
JIRA_OIDC_ENABLED: "{{ applications | get_app_conf(application_id, 'features.oidc') }}"
JIRA_OIDC_LABEL: "{{ OIDC.BUTTON_TEXT }}"
JIRA_OIDC_CLIENT_ID: "{{ OIDC.CLIENT.ID }}"
JIRA_OIDC_CLIENT_SECRET: "{{ OIDC.CLIENT.SECRET }}"
JIRA_OIDC_ISSUER: "{{ OIDC.CLIENT.ISSUER_URL }}"
JIRA_OIDC_AUTH_URL: "{{ OIDC.CLIENT.AUTHORIZE_URL }}"
JIRA_OIDC_TOKEN_URL: "{{ OIDC.CLIENT.TOKEN_URL }}"
JIRA_OIDC_USERINFO_URL: "{{ OIDC.CLIENT.USER_INFO_URL }}"
JIRA_OIDC_LOGOUT_URL: "{{ OIDC.CLIENT.LOGOUT_URL }}"
JIRA_OIDC_JWKS_URL: "{{ OIDC.CLIENT.CERTS }}"
JIRA_OIDC_SCOPES: "openid,email,profile"
JIRA_OIDC_UNIQUE_ATTRIBUTE: "{{ OIDC.ATTRIBUTES.USERNAME }}"
## Docker
JIRA_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.application.version') }}"
JIRA_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.application.image') }}"
JIRA_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.application.name') }}"
JIRA_DATA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
JIRA_CUSTOM_IMAGE: "{{ JIRA_IMAGE }}_custom"
## Performance (auto-derive from host memory)
JIRA_TOTAL_MB: "{{ ansible_memtotal_mb | int }}"
JIRA_JVM_MAX_MB: "{{ [ (JIRA_TOTAL_MB | int // 2), 12288 ] | min }}"
JIRA_JVM_MIN_MB: "{{ [ (JIRA_TOTAL_MB | int // 4), (JIRA_JVM_MAX_MB | int) ] | min }}"
JIRA_JVM_MIN: "{{ JIRA_JVM_MIN_MB }}m"
JIRA_JVM_MAX: "{{ JIRA_JVM_MAX_MB }}m"

View File

@@ -77,23 +77,16 @@
}}
include_tasks: _update.yml
- name: "Update REALM mail settings"
- name: "Update REALM mail settings from realm dictionary (SPOT)"
include_tasks: _update.yml
vars:
kc_object_kind: "realm"
kc_object_kind: "realm"
kc_lookup_field: "id"
kc_lookup_value: "{{ KEYCLOAK_REALM }}"
kc_desired:
smtpServer:
from: "no-reply@{{ DEFAULT_SYSTEM_EMAIL.DOMAIN }}"
fromDisplayName: "{{ SOFTWARE_NAME | default('Infinito.Nexus') }}"
host: "{{ DEFAULT_SYSTEM_EMAIL.HOST }}"
port: "{{ DEFAULT_SYSTEM_EMAIL.PORT }}"
# Keycloak expects strings "true"/"false"
ssl: "{{ 'true' if not DEFAULT_SYSTEM_EMAIL.START_TLS and DEFAULT_SYSTEM_EMAIL.TLS else 'false' }}"
starttls: "{{ 'true' if DEFAULT_SYSTEM_EMAIL.START_TLS else 'false' }}"
user: "{{ DEFAULT_SYSTEM_EMAIL.USER | default('') }}"
password: "{{ DEFAULT_SYSTEM_EMAIL.PASSWORD | default('') }}"
smtpServer: "{{ KEYCLOAK_DICTIONARY_REALM.smtpServer | default({}, true) }}"
kc_merge_path: "smtpServer"
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
- include_tasks: 05_rbac_client_scope.yml
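`kc_merge_path` suggests the desired `smtpServer` dict is merged into the realm representation at that path rather than replacing the whole realm. A minimal sketch of one plausible semantics, assuming a shallow merge at a dot-separated path (the real `_update.yml` logic may differ):

```python
def merge_at_path(obj: dict, path: str, desired: dict) -> dict:
    """Shallow-merge `desired` into the sub-dict at dot-separated `path`.
    Hypothetical helper; illustrates the kc_merge_path idea only."""
    result = dict(obj)
    node = result
    keys = path.split(".")
    for k in keys[:-1]:
        node[k] = dict(node.get(k, {}))  # copy intermediate levels
        node = node[k]
    leaf = keys[-1]
    node[leaf] = {**node.get(leaf, {}), **desired}  # desired keys win
    return result

realm = {"id": "main", "smtpServer": {"host": "old", "port": "25"}}
merged = merge_at_path(realm, "smtpServer",
                       {"host": "mail.example.org", "starttls": "true"})
# keys not in the desired dict (here "port") are preserved
```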

Some files were not shown because too many files have changed in this diff.