Compare commits


42 Commits

Author SHA1 Message Date
445c94788e Refactor: consolidate pkgmgr updates and remove legacy roles
Details:
- Added pkgmgr update task directly in pkgmgr role (pkgmgr pull --all)
- Removed deprecated update-pkgmgr role and references
- Removed deprecated update-pip role and references
- Simplified update-compose by dropping update-pkgmgr include

https://chatgpt.com/share/68bbeff1-27a0-800f-bef3-03ab597595fd
2025-09-06 10:46:39 +02:00
aac9704e8b Refactor: remove legacy update-docker role and references
Details:
- Removed update-docker role (README, meta, vars, tasks, script)
- Cleaned references from group_vars, update-compose, and docs
- Adjusted web-app-matrix role (removed @todo pointing to update-docker)
- Updated administrator guide (update-docker no longer mentioned)

Ref: https://chatgpt.com/share/68bbeff1-27a0-800f-bef3-03ab597595fd
2025-09-06 10:32:33 +02:00
a57a5f8828 Refactor: remove Python-based Listmonk upgrade logic and implement upgrade as Ansible task
Details:
- Removed upgrade_listmonk() function and related calls from update-docker script
- Added dedicated Ansible task in web-app-listmonk role to run non-interactive DB/schema upgrade
- Conditional execution via MODE_UPDATE

Ref: https://chatgpt.com/share/68bbeff1-27a0-800f-bef3-03ab597595fd
2025-09-06 10:25:41 +02:00
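The non-interactive upgrade task described above can be sketched roughly as follows (role path, variable names, and the compose service name are assumptions; listmonk's own `--upgrade --yes` flags drive the schema migration):

```yaml
# Sketch with assumed names — not the exact task from the role
- name: Run non-interactive Listmonk DB/schema upgrade
  command: >
    docker compose exec -T listmonk
    ./listmonk --upgrade --yes
  args:
    chdir: "{{ docker_compose_instance_directory }}"  # assumed variable
  when: MODE_UPDATE | bool
```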
90843726de keycloak: update realm mail settings to use smtp_server.json.j2 (SPOT); merge via kc_merge_path; fix display name and SSL handling
See: https://chatgpt.com/share/68bb0b25-96bc-800f-8ff7-9ca8d7c7af11
2025-09-05 18:09:33 +02:00
d25da76117 Solved wrong variable bug 2025-09-05 17:30:08 +02:00
d48a1b3c0a Solved missing variable bugs. Role is not fully implemented; pausing development on it for the moment 2025-09-05 17:07:15 +02:00
2839d2e1a4 Intermediate commit of Magento implementation 2025-09-05 17:01:13 +02:00
00c99e58e9 Cleaned up bridgy fed 2025-09-04 17:09:35 +02:00
904040589e Added correct variables and health check 2025-09-04 15:13:10 +02:00
9f3d300bca Removed unnecessary handlers 2025-09-04 14:04:53 +02:00
9e253a2d09 Bluesky: Patch hardcoded IPCC_URL and proxy /ipcc
- Added Ansible replace task to override IPCC_URL in geolocation.tsx to same-origin '/ipcc'
- Extended Nginx extra_locations.conf to proxy /ipcc requests to https://bsky.app/ipcc
- Ensures frontend avoids CORS errors when fetching IP geolocation

See: https://chatgpt.com/share/68b97be3-0278-800f-9ee0-94389ca3ac0c
2025-09-04 13:45:57 +02:00
49120b0dcf Added more CSP headers 2025-09-04 13:36:35 +02:00
b6f91ab9d3 changed database_user to database_username 2025-09-04 12:45:22 +02:00
77e8e7ed7e Magento 2.4.8 refactor:
- Switch to split containers (markoshust/magento-php:8.2-fpm + magento-nginx:latest)
- Disable central DB; use app-local MariaDB and pin to 11.4
- Composer bootstrap of Magento in php container (Adobe repo keys), idempotent via creates
- Make setup:install idempotent; run as container user 'app'
- Wire OpenSearch (security disabled) and depends_on ordering
- Add credentials schema (adobe_public_key/adobe_private_key)
- Update vars for php/nginx/search containers + MAGENTO_USER
- Remove legacy docs (Administration.md, Upgrade.md)
Context: changes derived from our ChatGPT session about getting Magento 2.4.8 running with MariaDB 11.4.
Conversation: https://chatgpt.com/share/68b8dc30-361c-800f-aa69-88df514cb160
2025-09-04 12:45:03 +02:00
32bc17e0c3 Optimized whitespace 2025-09-04 12:41:11 +02:00
e294637cb6 Changed db config path attribute 2025-09-04 12:34:13 +02:00
577767bed6 sys-svc-rdbms: Refactor database service templates and add version support for Magento
- Unified Jinja2 variable spacing in tasks and templates
- Introduced database_image and database_version variables in vars/database.yml
- Updated mariadb.yml.j2 and postgres.yml.j2 to use {{ database_image }}:{{ database_version }}
- Ensured env file paths and includes are consistent
- Prepared support for versioned database images (needed for Magento deployment)

Ref: https://chatgpt.com/share/68b96a9d-c100-800f-856f-cd23d1eda2ed
2025-09-04 12:32:34 +02:00
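The versioned-image pattern above amounts to something like this (exact keys and file contents are assumptions based on the summary):

```yaml
# vars/database.yml (sketch)
database_image:   "mariadb"
database_version: "11.4"  # pinned for Magento

# templates/mariadb.yml.j2 (sketch)
services:
  database:
    image: "{{ database_image }}:{{ database_version }}"
    restart: always
```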
e77f8da510 Added debug options to mastodon 2025-09-04 11:50:14 +02:00
4738b263ec Added docker_volume_path filter_plugin 2025-09-04 11:49:40 +02:00
0a588023a7 feat(bluesky): fix CORS by serving /config same-origin and pinning BAPP_CONFIG_URL
- Add `server.config_upstream_url` default in `roles/web-app-bluesky/config/main.yml`
  to define upstream for /config (defaults to https://ip.bsky.app/config).
- Introduce front-proxy injection `extra_locations.conf.j2` that:
  - proxies `/config` to the upstream,
  - sets SNI and correct Host header,
  - normalizes CORS headers for same-origin consumption.
- Wire the proxy injection only for the Web domain in
  `roles/web-app-bluesky/tasks/main.yml` via `proxy_extra_configuration`.
- Force fresh social-app checkout and patch
  `src/state/geolocation.tsx` to `const BAPP_CONFIG_URL = '/config'`
  in `roles/web-app-bluesky/tasks/02_social_app.yml`; notify `docker compose build` and `up`.
- Tidy and re-group PDS env in `roles/web-app-bluesky/templates/env.j2` (no functional change).
- Add vars in `roles/web-app-bluesky/vars/main.yml`:
  - `BLUESKY_FRONT_PROXY_CONTENT` (renders the extra locations),
  - `BLUESKY_CONFIG_UPSTREAM_URL` (reads `server.config_upstream_url`).

Security/Scope:
- Only affects the Bluesky web frontend (same-origin `/config`); PDS/API and AppView remain unchanged.

Refs:
- Conversation: https://chatgpt.com/share/68b8dd3a-2100-800f-959e-1495f6320aab
2025-09-04 02:29:10 +02:00
d2fa90774b Added fediverse bridge draft 2025-09-04 02:26:27 +02:00
0e72dcbe36 feat(magento): switch to ghcr.io/alexcheng1982/docker-magento2:2.4.6-p3; update Compose/Env/Tasks/Docs
• Docs: updated to MAGENTO_VOLUME; removed Installation/User_Administration guides
• Compose: volume path → /var/www/html; switched variables to MAGENTO_*/MYSQL_*/OPENSEARCH_*
• Env: new variable set + APACHE_SERVERNAME
• Task: setup:install via docker compose exec (multiline form)
• Schema: removed obsolete credentials definition
Link: https://chatgpt.com/share/68b8dc30-361c-800f-aa69-88df514cb160
2025-09-04 02:25:49 +02:00
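The multiline `setup:install` invocation could look roughly like this (flags and variable names are illustrative only, not the exact ones from the role):

```yaml
# Sketch with assumed names — not the exact task from the role
- name: Run Magento setup:install via docker compose exec
  command: >
    docker compose exec -T magento
    bin/magento setup:install
    --base-url="https://{{ domain }}"
    --db-host=database
    --db-name="{{ database_name }}"
    --db-user="{{ database_username }}"
    --db-password="{{ database_password }}"
  args:
    chdir: "{{ docker_compose_directory }}"    # assumed variable
    creates: "{{ magento_installed_marker }}"  # assumed idempotency guard
```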
4f8ce598a9 Mastodon: allow internal chess host & refactor var names; OpenLDAP: safer get_app_conf
- Add ALLOWED_PRIVATE_ADDRESSES to .env (from svc-db-postgres) to handle 422 Mastodon::PrivateNetworkAddressError
- Switch docker-compose to MASTODON_* variables and align vars/main.yml
- Always run 01_setup.yml during deployment (removed conditional flag)
- OpenLDAP: remove implicit True default on network.local to avoid unintended truthy behavior

Context: chess.infinito.nexus resolved to 192.168.200.30 (private IP) from Mastodon; targeted allowlist unblocks federation lookups.

Ref: https://chat.openai.com/share/REPLACE_WITH_THIS_CONVERSATION_LINK
2025-09-03 21:44:47 +02:00
3769e66d8d Updated CSP for bluesky 2025-09-03 20:55:21 +02:00
33a5fadf67 web-app-chess: fix Corepack/Yarn EACCES and switch to ARG-driven Dockerfile
• Add roles/web-app-chess/files/Dockerfile using build ARGs (CHESS_VERSION, CHESS_REPO_URL, CHESS_REPO_REF, CHESS_ENTRYPOINT_REL, CHESS_ENTRYPOINT_INT, CHESS_APP_DATA_DIR, CONTAINER_PORT). Enable Corepack/Yarn as root in the runtime stage to avoid EACCES on /usr/local/bin symlinks, then drop privileges to 'node'.

• Delete Jinja-based templates/Dockerfile.j2; docker-compose now passes former Jinja vars via build.args.
• Update templates/docker-compose.yml.j2 to forward all required build args.
• Update config/main.yml: add CSP flag 'script-src-elem: unsafe-inline'.

Ref: https://chatgpt.com/share/68b88d3d-3bd8-800f-9723-e8df0cdc37e2
2025-09-03 20:47:50 +02:00
699a6b6f1e feat(web-app-magento): add Magento role + network/ports
- add role files (docs, vars, config, tasks, schema, templates)

- networks: add web-app-magento 192.168.103.208/28

- ports: add localhost http 8052

Conversation: https://chatgpt.com/share/68b8820f-f864-800f-8819-da509b99cee2
2025-09-03 20:00:01 +02:00
61c29eee60 web-app-chess: build/runtime hardening & feature enablement
Build: use Yarn 4 via Corepack; immutable install with inline builds.

Runtime: enable Corepack as user 'node', use project-local cache (/app/.yarn/cache), add curl; fix ownership.

Entrypoint: generate keys in correct dir; run 'yarn install --immutable --inline-builds' before migrations; wait for Postgres.

Config: enable matomo/css/desktop; notify 'docker compose build' on entrypoint changes.

Docs: rename README title to 'Chess'.

Ref: ChatGPT conversation (2025-09-03) — https://chatgpt.com/share/68b88126-7a6c-800f-acae-ae61ed577f46
2025-09-03 19:56:13 +02:00
d5204fb5c2 Removed unnecessary env loading 2025-09-03 17:41:53 +02:00
751615b1a4 Changed 09_ports.yml to 10_ports.yml 2025-09-03 17:41:14 +02:00
e2993d2912 Added more CSP urls for bluesky 2025-09-03 17:31:29 +02:00
24b6647bfb Corrected variable 2025-09-03 17:30:31 +02:00
d2dc2eab5f web-app-bluesky: refactor role, add Cloudflare DNS integration, split tasks
Changes: add AppView port; add CSP whitelist; new tasks (01_pds, 02_social_app, 03_dns); switch templates to BLUESKY_* vars; update docker-compose and env; TCP healthcheck; remove admin_password from schema.

Conversation context: https://chatgpt.com/share/68b85276-e0ec-800f-90ec-480a1d528593
2025-09-03 16:37:35 +02:00
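A TCP healthcheck of the kind mentioned could be expressed in the compose template like this (service and port names are assumptions, and `nc` must exist in the image):

```yaml
services:
  pds:
    healthcheck:
      test: ["CMD-SHELL", "nc -z 127.0.0.1 {{ container_port }}"]
      interval: 30s
      timeout: 5s
      retries: 3
```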
a1130e33d7 web-app-chess: refactor runtime & entrypoint
- Move entrypoint to files/ and deploy via copy
- Parameterize APP_KEY_FILE, data dir, and entrypoint paths
- Require explicit PORT/PG envs (remove fallbacks)
- Drop stray header from config/main.yml
- Dockerfile: use templated data dir & entrypoint; keep node user
- Compose: set custom image, adjust volume mapping
- env: derive APP_SCHEME from WEB_PROTOCOL; NODE_ENV from ENVIRONMENT
- tasks: add 01_core and simplify main to include it

Ref: https://chatgpt.com/share/68b851c5-4dd8-800f-8e9e-22b985597b8f
2025-09-03 16:34:04 +02:00
df122905eb mailu: include base defaults for oletools (env_file/LD_PRELOAD)
Add base include to oletools service so it inherits env_file (LD_PRELOAD=/usr/lib/libhardened_malloc.so) and other defaults. Fixes crash: PermissionError: '/proc/cpuinfo' during hardened_malloc compatibility probe when LD_PRELOAD was absent. Aligns oletools with other Mailu services.

Refs: ChatGPT discussion – https://chatgpt.com/share/68b837ba-c9cc-800f-b5d9-62b60d6fafd9
2025-09-03 14:42:50 +02:00
d093a22d61 Added correct CSP for JIRA 2025-09-03 11:35:24 +02:00
5e550ce3a3 sys-ctl-rpr-docker-soft: switch to STRICT label mode and adapt tests
- script.py now resolves docker-compose project and working_dir strictly from container labels
- removed container-name fallback logic
- adjusted sys-ctl-hlth-docker-container to include sys-ctl-rpr-docker-soft
- cleaned up sys-svc-docker dependencies
- updated unit tests to mock docker inspect and os.path.isfile for STRICT mode

Conversation: https://chatgpt.com/share/68b80927-b800-800f-a909-0fe8d110fd0e
2025-09-03 11:24:14 +02:00
0ada12e3ca Enabled rpr service on failed health check instead of timer 2025-09-03 10:46:46 +02:00
1a5ce4a7fa web-app-bookwyrm, web-app-confluence:
- Fix BookWyrm email SSL/TLS handling (use ternary without 'not' for clarity)
- Add truststore_enabled flag in Confluence config and vars
- Wire JVM_SUPPORT_RECOMMENDED_ARGS to disable UPM signature check if truststore is disabled
- Add placeholder style.css.j2 for Confluence

See conversation: https://chatgpt.com/share/68b80024-7100-800f-a2fe-ba8b9f5cec05
2025-09-03 10:45:41 +02:00
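A plausible wiring for the truststore flag described above (variable names are assumptions, and the exact UPM system property should be verified against Atlassian's documentation):

```yaml
# vars sketch — names assumed, UPM property unverified
CONFLUENCE_JVM_ARGS: >-
  -Datlassian.home={{ CONFLUENCE_HOME }}
  {{ '-Dupm.signature.check.disabled=true' if not confluence_truststore_enabled | bool else '' }}
```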
a9abb3ce5d Added unsafe-eval csp to jira 2025-09-03 09:43:07 +02:00
71ceb339fc Fix Confluence & BookWyrm setup:
- Add docker compose build trigger in docker-compose tasks
- Cleanup svc-prx-openresty vars
- Enable unsafe-inline CSP flags for BookWyrm, Confluence, Jira to allow Atlassian inline scripts
- Generalize CONFLUENCE_HOME usage in vars, env and docker-compose
- Ensure confluence-init.properties written with correct home
- Add JVM_SUPPORT_RECOMMENDED_ARGS to pass atlassian.home
- Update README to reference {{ CONFLUENCE_HOME }}

See: https://chatgpt.com/share/68b7582a-aeb8-800f-a14f-e98c5b4e6c70
2025-09-02 22:49:02 +02:00
61bba3d2ef feat(bookwyrm): production-ready runtime + Redis wiring
- Dockerfile: build & install gunicorn wheels
- compose: run initdb before start; use `python -m gunicorn`
- env: add POSTGRES_* and BookWyrm Redis aliases (BROKER/ACTIVITY/CACHE) + CACHE_URL
- vars: add cache URL, DB indices, and URL aliases for Redis

Ref: https://chatgpt.com/share/68b7492b-3200-800f-80c4-295bc3233d68
2025-09-02 21:45:11 +02:00
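The Redis alias scheme above can be sketched in vars as follows (names and DB indices are assumptions based on the bullet list — one Redis service, separated by DB index):

```yaml
# vars sketch (assumed names)
BOOKWYRM_REDIS_HOST:   "redis"
BOOKWYRM_BROKER_URL:   "redis://{{ BOOKWYRM_REDIS_HOST }}:6379/0"
BOOKWYRM_ACTIVITY_URL: "redis://{{ BOOKWYRM_REDIS_HOST }}:6379/1"
BOOKWYRM_CACHE_URL:    "redis://{{ BOOKWYRM_REDIS_HOST }}:6379/2"
```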
0bde4295c7 Implemented correct confluence version 2025-09-02 17:01:58 +02:00
117 changed files with 1390 additions and 794 deletions

View File

@@ -11,7 +11,7 @@ sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')
 from module_utils.entity_name_utils import get_entity_name

 # Paths to the group-vars files
-PORTS_FILE = './group_vars/all/09_ports.yml'
+PORTS_FILE = './group_vars/all/10_ports.yml'
 NETWORKS_FILE = './group_vars/all/09_networks.yml'
 ROLE_TEMPLATE_DIR = './templates/roles/web-app'
 ROLES_DIR = './roles'

View File

@@ -15,7 +15,7 @@ Follow these guides to install and configure Infinito.Nexus:
 - **Networking & VPN** - Configure `WireGuard`, `OpenVPN`, and `Nginx Reverse Proxy`.

 ## Managing & Updating Infinito.Nexus 🔄

-- Regularly update services using `update-docker`, `update-pacman`, or `update-apt`.
+- Regularly update services using `update-pacman`, or `update-apt`.
 - Monitor system health with `sys-ctl-hlth-btrfs`, `sys-ctl-hlth-webserver`, and `sys-ctl-hlth-docker-container`.
 - Automate system maintenance with `sys-lock`, `sys-ctl-cln-bkps`, and `sys-ctl-rpr-docker-hard`.

View File

@@ -0,0 +1,21 @@
+from ansible.errors import AnsibleFilterError
+
+
+def docker_volume_path(volume_name: str) -> str:
+    """
+    Returns the absolute filesystem path of a Docker volume.
+    Example:
+        "akaunting_data" -> "/var/lib/docker/volumes/akaunting_data/_data/"
+    """
+    if not volume_name or not isinstance(volume_name, str):
+        raise AnsibleFilterError(f"Invalid volume name: {volume_name}")
+    return f"/var/lib/docker/volumes/{volume_name}/_data/"
+
+
+class FilterModule(object):
+    """Docker volume path filters."""
+
+    def filters(self):
+        return {
+            "docker_volume_path": docker_volume_path,
+        }
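Usage of the new filter in a task would look like this (illustrative volume name):

```yaml
- name: Resolve host path of a Docker volume
  set_fact:
    akaunting_data_path: "{{ 'akaunting_data' | docker_volume_path }}"
    # -> /var/lib/docker/volumes/akaunting_data/_data/
```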

View File

@@ -12,7 +12,6 @@ SYS_SERVICE_BACKUP_RMT_2_LOC: "{{ 'svc-bkp-rmt-2-loc' | get_se
 SYS_SERVICE_BACKUP_DOCKER_2_LOC: "{{ 'sys-ctl-bkp-docker-2-loc' | get_service_name(SOFTWARE_NAME) }}"
 SYS_SERVICE_REPAIR_DOCKER_SOFT: "{{ 'sys-ctl-rpr-docker-soft' | get_service_name(SOFTWARE_NAME) }}"
 SYS_SERVICE_REPAIR_DOCKER_HARD: "{{ 'sys-ctl-rpr-docker-hard' | get_service_name(SOFTWARE_NAME) }}"
-SYS_SERVICE_UPDATE_DOCKER: "{{ 'update-docker' | get_service_name(SOFTWARE_NAME) }}"

 ## On Failure
 SYS_SERVICE_ON_FAILURE_COMPOSE: "{{ ('sys-ctl-alm-compose@') | get_service_name(SOFTWARE_NAME, False) }}%n.service"
@@ -46,8 +45,7 @@ SYS_SERVICE_GROUP_MANIPULATION: >
     SYS_SERVICE_GROUP_CLEANUP +
     SYS_SERVICE_GROUP_REPAIR +
     SYS_SERVICE_GROUP_OPTIMIZATION +
-    SYS_SERVICE_GROUP_MAINTANANCE +
-    [ SYS_SERVICE_UPDATE_DOCKER ]
+    SYS_SERVICE_GROUP_MAINTANANCE
   ) | sort
 }}

View File

@@ -37,7 +37,6 @@ SYS_SCHEDULE_CLEANUP_FAILED_BACKUPS: "*-*-* 12:00:00"

 ### Schedule for repair services
 SYS_SCHEDULE_REPAIR_BTRFS_AUTO_BALANCER: "Sat *-*-01..07 00:00:00" # Execute btrfs auto balancer every first Saturday of a month
-SYS_SCHEDULE_REPAIR_DOCKER_SOFT: "*-*-* {{ HOURS_SERVER_AWAKE }}:30:00" # Heal unhealthy docker instances once per hour
 SYS_SCHEDULE_REPAIR_DOCKER_HARD: "Sun *-*-* 08:00:00" # Restart docker instances every Sunday at 8:00 AM

 ### Schedule for backup tasks

View File

@@ -98,6 +98,10 @@ defaults_networks:
       subnet: 192.168.103.176/28
     web-app-chess:
       subnet: 192.168.103.192/28
+    web-app-magento:
+      subnet: 192.168.103.208/28
+    web-app-bridgy-fed:
+      subnet: 192.168.103.224/28
   # /24 Networks / 254 Usable Clients
     web-app-bigbluebutton:

View File

@@ -72,6 +72,9 @@ ports:
     web-svc-logout: 8048
     web-app-bookwyrm: 8049
     web-app-chess: 8050
+    web-app-bluesky_view: 8051
+    web-app-magento: 8052
+    web-app-bridgy-fed: 8053
     web-app-bigbluebutton: 48087 # This port is predefined by bbb. @todo Try to change this to a 8XXX port
   public:
     # The following ports should be changed to 22 on the subdomain via stream mapping

View File

@@ -5,7 +5,9 @@
   loop:
     - "{{ application_id | abs_role_path_by_application_id }}/templates/Dockerfile.j2"
    - "{{ application_id | abs_role_path_by_application_id }}/files/Dockerfile"
-  notify: docker compose up
+  notify:
+    - docker compose up
+    - docker compose build
   register: create_dockerfile_result
   failed_when:
    - create_dockerfile_result is failed

View File

@@ -43,3 +43,7 @@
     chdir: "{{ PKGMGR_INSTALL_PATH }}"
     executable: /bin/bash
   become: true
+
+- name: "Update all repositories with pkgmgr"
+  command: "pkgmgr pull --all"
+  when: MODE_UPDATE | bool

View File

@@ -37,7 +37,7 @@
 - name: "Reset LDAP Credentials"
   include_tasks: 01_credentials.yml
   when:
-    - applications | get_app_conf(application_id, 'network.local', True)
+    - applications | get_app_conf(application_id, 'network.local')
     - applications | get_app_conf(application_id, 'provisioning.credentials', True)

 - name: "create directory {{openldap_ldif_host_path}}{{item}}"

View File

@@ -21,4 +21,4 @@ openldap_version: "{{ applications | get_app_conf(application_id,
 openldap_volume: "{{ applications | get_app_conf(application_id, 'docker.volumes.data', True) }}"
 openldap_network: "{{ applications | get_app_conf(application_id, 'docker.network', True) }}"
-openldap_network_expose_local: "{{ applications | get_app_conf(application_id, 'network.public', True) | bool or applications | get_app_conf(application_id, 'network.local', True) | bool }}"
+openldap_network_expose_local: "{{ applications | get_app_conf(application_id, 'network.public', True) | bool or applications | get_app_conf(application_id, 'network.local') | bool }}"

View File

@@ -8,4 +8,3 @@ database_type: ""
 OPENRESTY_IMAGE: "openresty/openresty"
 OPENRESTY_VERSION: "alpine"
 OPENRESTY_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.openresty.name', True) }}"
-

View File

@@ -3,9 +3,14 @@
     name: sys-ctl-alm-compose
   when: run_once_sys_ctl_alm_compose is not defined

+- name: Include dependency 'sys-ctl-rpr-docker-soft'
+  include_role:
+    name: sys-ctl-rpr-docker-soft
+  when: run_once_sys_ctl_rpr_docker_soft is not defined
+
 - include_role:
     name: sys-service
   vars:
     system_service_timer_enabled: true
     system_service_on_calendar: "{{ SYS_SCHEDULE_HEALTH_DOCKER_CONTAINER }}"
-    system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
+    system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }} {{ SYS_SERVICE_REPAIR_DOCKER_SOFT }}"

View File

@@ -2,7 +2,7 @@
   include_role:
     name: sys-ctl-alm-compose
   when: run_once_sys_ctl_alm_compose is not defined

 - include_role:
     name: sys-service
   vars:

View File

@@ -1,15 +1,26 @@
 #!/usr/bin/env python3
 """
 Restart Docker-Compose configurations with exited or unhealthy containers.
-This version receives the *manipulation services* via argparse (no Jinja).
+
+STRICT mode: Resolve the Compose project exclusively via Docker labels
+(com.docker.compose.project and com.docker.compose.project.working_dir).
+No container-name fallback. If labels are missing or Docker is unavailable,
+the script records an error for that container.
+
+All shell interactions that matter for tests go through print_bash()
+so they can be monkeypatched in unit tests.
 """

 import subprocess
 import time
 import os
 import argparse
-from typing import List
+from typing import List, Optional, Tuple
+
+
+# ---------------------------
+# Shell helpers
+# ---------------------------

 def bash(command: str) -> List[str]:
     print(command)
     process = subprocess.Popen(
@@ -30,31 +41,45 @@ def list_to_string(lst: List[str]) -> str:
 def print_bash(command: str) -> List[str]:
+    """
+    Wrapper around bash() that echoes combined output for easier debugging
+    and can be monkeypatched in tests.
+    """
     output = bash(command)
     if output:
         print(list_to_string(output))
     return output

-def find_docker_compose_file(directory: str) -> str | None:
+
+# ---------------------------
+# Filesystem / compose helpers
+# ---------------------------
+
+def find_docker_compose_file(directory: str) -> Optional[str]:
+    """
+    Search for docker-compose.yml beneath a directory.
+    """
     for root, _, files in os.walk(directory):
         if "docker-compose.yml" in files:
             return os.path.join(root, "docker-compose.yml")
     return None

-def detect_env_file(project_path: str) -> str | None:
+def detect_env_file(project_path: str) -> Optional[str]:
     """
-    Return the path to a Compose env file if present (.env preferred, fallback to env).
+    Return the path to a Compose env file if present (.env preferred, fallback to .env/env).
     """
-    candidates = [os.path.join(project_path, ".env"), os.path.join(project_path, ".env", "env")]
+    candidates = [
+        os.path.join(project_path, ".env"),
+        os.path.join(project_path, ".env", "env"),
+    ]
     for candidate in candidates:
         if os.path.isfile(candidate):
             return candidate
     return None

-def compose_cmd(subcmd: str, project_path: str, project_name: str | None = None) -> str:
+def compose_cmd(subcmd: str, project_path: str, project_name: Optional[str] = None) -> str:
     """
     Build a docker-compose command string with optional -p and --env-file if present.
     Example: compose_cmd("restart", "/opt/docker/foo", "foo")
@@ -69,6 +94,10 @@ def compose_cmd(subcmd: str, project_path: str, project_name: str | None = None)
     return " ".join(parts)

+
+# ---------------------------
+# Business logic
+# ---------------------------

 def normalize_services_arg(raw: List[str] | None, raw_str: str | None) -> List[str]:
     """
     Accept either:
@@ -78,7 +107,6 @@ def normalize_services_arg(raw: List[str] | None, raw_str: str | None) -> List[s
     if raw:
         return [s for s in raw if s.strip()]
     if raw_str:
-        # split on comma or whitespace
         parts = [p.strip() for chunk in raw_str.split(",") for p in chunk.split()]
         return [p for p in parts if p]
     return []
@@ -87,7 +115,7 @@ def normalize_services_arg(raw: List[str] | None, raw_str: str | None) -> List[s
 def wait_while_manipulation_running(
     services: List[str],
     waiting_time: int = 600,
-    timeout: int | None = None,
+    timeout: Optional[int] = None,
 ) -> None:
     """
     Wait until none of the given services are active anymore.
@@ -107,7 +135,6 @@ def wait_while_manipulation_running(
                 break

         if any_active:
-            # Check timeout
             elapsed = time.time() - start
             if timeout and elapsed >= timeout:
                 print(f"Timeout ({timeout}s) reached while waiting for services. Continuing anyway.")
@@ -119,7 +146,30 @@ def wait_while_manipulation_running(
             break

-def main(base_directory: str, manipulation_services: List[str], timeout: int | None) -> int:
+
+def get_compose_project_info(container: str) -> Tuple[str, str]:
+    """
+    Resolve project name and working dir from Docker labels.
+    STRICT: Raises RuntimeError if labels are missing/unreadable.
+    """
+    out_project = print_bash(
+        f"docker inspect -f '{{{{ index .Config.Labels \"com.docker.compose.project\" }}}}' {container}"
+    )
+    out_workdir = print_bash(
+        f"docker inspect -f '{{{{ index .Config.Labels \"com.docker.compose.project.working_dir\" }}}}' {container}"
+    )
+    project = out_project[0].strip() if out_project else ""
+    workdir = out_workdir[0].strip() if out_workdir else ""
+    if not project:
+        raise RuntimeError(f"No compose project label found for container {container}")
+    if not workdir:
+        raise RuntimeError(f"No compose working_dir label found for container {container}")
+    return project, workdir
+
+
+def main(base_directory: str, manipulation_services: List[str], timeout: Optional[int]) -> int:
     errors = 0
     wait_while_manipulation_running(manipulation_services, waiting_time=600, timeout=timeout)
@@ -131,43 +181,50 @@ def main(base_directory: str, manipulation_services: List[str], timeout: int | N
     )

     failed_containers = unhealthy_container_names + exited_container_names

-    unfiltered_failed_docker_compose_repositories = [
-        container.split("-")[0] for container in failed_containers
-    ]
-    filtered_failed_docker_compose_repositories = list(
-        dict.fromkeys(unfiltered_failed_docker_compose_repositories)
-    )
-
-    for repo in filtered_failed_docker_compose_repositories:
-        compose_file_path = find_docker_compose_file(os.path.join(base_directory, repo))
-
-        if compose_file_path:
-            print("Restarting unhealthy container in:", compose_file_path)
-            project_path = os.path.dirname(compose_file_path)
-            try:
-                # restart with optional --env-file and -p
-                print_bash(compose_cmd("restart", project_path, repo))
-            except Exception as e:
-                if "port is already allocated" in str(e):
-                    print("Detected port allocation problem. Executing recovery steps...")
-                    # down (no -p needed), then engine restart, then up -d with -p
-                    print_bash(compose_cmd("down", project_path))
-                    print_bash("systemctl restart docker")
-                    print_bash(compose_cmd("up -d", project_path, repo))
-                else:
-                    print("Unhandled exception during restart:", e)
-                    errors += 1
-        else:
-            print("Error: Docker Compose file not found for:", repo)
-            errors += 1
+    for container in failed_containers:
+        try:
+            project, workdir = get_compose_project_info(container)
+        except Exception as e:
+            print(f"Error reading compose labels for {container}: {e}")
+            errors += 1
+            continue
+
+        compose_file_path = os.path.join(workdir, "docker-compose.yml")
+        if not os.path.isfile(compose_file_path):
+            # As STRICT: we only trust labels; if file not there, error out.
+            print(f"Error: docker-compose.yml not found at {compose_file_path} for container {container}")
+            errors += 1
+            continue
+
+        project_path = os.path.dirname(compose_file_path)
+        try:
+            print("Restarting unhealthy container in:", compose_file_path)
+            print_bash(compose_cmd("restart", project_path, project))
+        except Exception as e:
+            if "port is already allocated" in str(e):
+                print("Detected port allocation problem. Executing recovery steps...")
+                try:
+                    print_bash(compose_cmd("down", project_path))
+                    print_bash("systemctl restart docker")
+                    print_bash(compose_cmd("up -d", project_path, project))
+                except Exception as e2:
+                    print("Unhandled exception during recovery:", e2)
+                    errors += 1
+            else:
+                print("Unhandled exception during restart:", e)
+                errors += 1

     print("Finished restart procedure.")
     return errors

+
+# ---------------------------
+# CLI
+# ---------------------------

 if __name__ == "__main__":
     parser = argparse.ArgumentParser(
-        description="Restart Docker-Compose configurations with exited or unhealthy containers."
+        description="Restart Docker-Compose configurations with exited or unhealthy containers (STRICT label mode)."
     )
     parser.add_argument(
         "--manipulation",
@@ -184,12 +241,12 @@ if __name__ == "__main__":
         "--timeout",
         type=int,
         default=60,
-        help="Maximum time in seconds to wait for manipulation services before continuing.(Default 1min)",
+        help="Maximum time in seconds to wait for manipulation services before continuing. (Default 1min)",
     )
     parser.add_argument(
         "base_directory",
         type=str,
-        help="Base directory where Docker Compose configurations are located.",
+        help="(Unused in STRICT mode) Base directory where Docker Compose configurations are located.",
     )

     args = parser.parse_args()
     services = normalize_services_arg(args.manipulation, args.manipulation_string)

View File

@@ -6,8 +6,6 @@
 - include_role:
     name: sys-service
   vars:
-    system_service_on_calendar: "{{ SYS_SCHEDULE_REPAIR_DOCKER_SOFT }}"
-    system_service_timer_enabled: true
     system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
     system_service_tpl_exec_start_pre: "/usr/bin/python {{ PATH_SYSTEM_LOCK_SCRIPT }} {{ SYS_SERVICE_GROUP_MANIPULATION | join(' ') }} --ignore {{ SYS_SERVICE_GROUP_CLEANUP| join(' ') }} {{ SYS_SERVICE_REPAIR_DOCKER_SOFT }} --timeout '{{ SYS_TIMEOUT_DOCKER_RPR_SOFT }}'"
     system_service_tpl_exec_start: >


@@ -17,14 +17,8 @@ When enabled via `MODE_CLEANUP` or `MODE_RESET`, it will automatically prune unu
   Installs Docker and Docker Compose via the system package manager.
 - **Integrated Dependencies**
-  Includes backup, repair, and health check sub-roles:
-  - `sys-ctl-bkp-docker-2-loc`
-  - `user-administrator`
-  - `sys-ctl-hlth-docker-container`
-  - `sys-ctl-hlth-docker-volumes`
-  - `sys-ctl-rpr-docker-soft`
-  - `sys-ctl-rpr-docker-hard`
+  Includes backup, repair, and health check sub-roles.
 - **Cleanup & Reset Modes**
   - `MODE_CLEANUP`: Removes unused Docker containers, networks, images, and volumes.
   - `MODE_RESET`: Performs cleanup and restarts the Docker service.


@@ -21,6 +21,5 @@
     - sys-ctl-bkp-docker-2-loc
     - sys-ctl-hlth-docker-container
     - sys-ctl-hlth-docker-volumes
-    - sys-ctl-rpr-docker-soft
     - sys-ctl-rpr-docker-hard
   when: SYS_SVC_DOCKER_LOAD_SERVICES | bool


@@ -8,10 +8,10 @@
     path: "{{ docker_compose.directories.env }}"
     state: directory
     mode: "0755"
-- name: "For '{{ application_id }}': Create {{database_env}}"
+- name: "For '{{ application_id }}': Create {{ database_env }}"
   template:
-    src: "env/{{database_type}}.env.j2"
-    dest: "{{database_env}}"
+    src: "env/{{ database_type }}.env.j2"
+    dest: "{{ database_env }}"
   notify: docker compose up
   when: not applications | get_app_conf(application_id, 'features.central_database', False)
@@ -19,7 +19,7 @@
   # I don't know why this include leads to the application_id in vars/main.yml of the database role not being used.
   # This is the behaviour I want, but I'm still wondering why ;)
   include_role:
-    name: "svc-db-{{database_type}}"
+    name: "svc-db-{{ database_type }}"
   when: applications | get_app_conf(application_id, 'features.central_database', False)
 - name: "For '{{ application_id }}': Add Entry for Backup Procedure"


@@ -5,10 +5,10 @@
     container_name: {{ application_id | get_entity_name }}-database
     logging:
       driver: journald
-    image: mariadb
+    image: {{ database_image }}:{{ database_version }}
     restart: {{ DOCKER_RESTART_POLICY }}
     env_file:
-      - {{database_env}}
+      - {{ database_env }}
     command: "--transaction-isolation=READ-COMMITTED --binlog-format=ROW"
     volumes:
       - database:/var/lib/mysql


@@ -2,13 +2,13 @@
 {% if not applications | get_app_conf(application_id, 'features.central_database', False) %}
   {{ database_host }}:
-    image: postgres:{{ applications['svc-db-postgres'].version }}-alpine
+    image: {{ database_image }}:{{ database_version }}
     container_name: {{ application_id | get_entity_name }}-database
     env_file:
-      - {{database_env}}
+      - {{ database_env }}
     restart: {{ DOCKER_RESTART_POLICY }}
     healthcheck:
-      test: ["CMD-SHELL", "pg_isready -U {{ database_name }}"]
+      test: ["CMD-SHELL", "pg_isready -U {{ database_username }}"]
       interval: 10s
       timeout: 5s
       retries: 6


@@ -1,20 +1,23 @@
 # Helper variables
 _dbtype: "{{ (database_type | d('') | trim) }}"
 _database_id: "{{ ('svc-db-' ~ _dbtype) if _dbtype else '' }}"
 _database_central_name: "{{ (applications | get_app_conf(_database_id, 'docker.services.' ~ _dbtype ~ '.name', False, '')) if _dbtype else '' }}"
 _database_consumer_id: "{{ database_application_id | d(application_id) }}"
 _database_consumer_entity_name: "{{ _database_consumer_id | get_entity_name }}"
 _database_central_enabled: "{{ (applications | get_app_conf(_database_consumer_id, 'features.central_database', False)) if _dbtype else False }}"
+_database_default_version: "{{ applications | get_app_conf(_database_id, 'docker.services.' ~ _dbtype ~ '.version') }}"

 # Definition
 database_name: "{{ _database_consumer_entity_name }}"
 database_instance: "{{ _database_central_name if _database_central_enabled else database_name }}" # This could lead to bugs at dedicated database @todo cleanup
 database_host: "{{ _database_central_name if _database_central_enabled else 'database' }}" # This could lead to bugs at dedicated database @todo cleanup
 database_username: "{{ _database_consumer_entity_name }}"
 database_password: "{{ applications | get_app_conf(_database_consumer_id, 'credentials.database_password', true) }}"
 database_port: "{{ (ports.localhost.database[_database_id] | d('')) if _dbtype else '' }}"
 database_env: "{{ docker_compose.directories.env }}{{ database_type }}.env"
 database_url_jdbc: "jdbc:{{ database_type if database_type == 'mariadb' else 'postgresql' }}://{{ database_host }}:{{ database_port }}/{{ database_name }}"
 database_url_full: "{{ database_type }}://{{ database_username }}:{{ database_password }}@{{ database_host }}:{{ database_port }}/{{ database_name }}"
 database_volume: "{{ _database_consumer_entity_name ~ '_' if not _database_central_enabled }}{{ database_host }}"
+database_image: "{{ _dbtype }}"
+database_version: "{{ applications | get_app_conf(_database_consumer_id, 'docker.services.database.version', False, _database_default_version) }}"


@@ -14,13 +14,6 @@
     name: update-apt
   when: ansible_distribution == "Debian"

-- name: "Update Docker Images"
-  include_role:
-    name: update-docker
-  when:
-    - docker_compose_directory_stat.stat.exists
-    - run_once_update_docker is not defined

 - name: "Check if yay is installed"
   command: which yay
   register: yay_installed
@@ -51,7 +44,3 @@
   register: pkgmgr_available
   failed_when: false

-- name: "Update all repositories using pkgmgr"
-  include_role:
-    name: update-pkgmgr
-  when: pkgmgr_available.rc == 0


@@ -1,27 +0,0 @@
# Update Docker
## Description
This role updates Docker Compose instances by checking for changes in Docker image digests and applying updates if necessary. It utilizes a Python script to handle git pulls and Docker image pulls, and rebuilds containers when changes are detected.
## Overview
The role performs the following:
- Deploys a Python script to check for Docker image updates.
- Configures a systemd service to run the update script.
- Restarts the Docker update service upon configuration changes.
- Supports additional procedures for specific Docker applications (e.g., Discourse, Mastodon, Nextcloud).
## Purpose
The role is designed to ensure that Docker images remain current by automatically detecting changes and rebuilding containers as needed. This helps maintain a secure and efficient container environment.
## Features
- **Docker Image Monitoring:** Checks for changes in image digests.
- **Automated Updates:** Pulls new images and rebuilds containers when necessary.
- **Service Management:** Configures and restarts a systemd service to handle updates.
- **Application-Specific Procedures:** Includes hooks for updating specific Docker applications.
## Credits 📝
It was created with the help of ChatGPT. The conversation is available [here](https://chat.openai.com/share/165418b8-25fa-433b-baca-caded941e22a)
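The digest-monitoring flow described above boils down to comparing the digest maps taken before and after a pull. A minimal Python sketch (the function name and dict shape are illustrative, not the role's actual code):

```python
def needs_rebuild(before: dict, after: dict) -> bool:
    """Return True when any image digest changed after a pull.

    `before`/`after` map "repo:tag" -> digest, as produced by parsing
    `docker images --format "{{.Repository}}:{{.Tag}}@{{.Digest}}"`.
    """
    # A new, removed, or changed digest means the pulled images differ
    # from the ones the running containers were built from.
    return before != after
```

For example, `needs_rebuild({"app:latest": "sha256:aaa"}, {"app:latest": "sha256:bbb"})` signals that a `docker-compose build --pull` is warranted.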


@@ -1,27 +0,0 @@
galaxy_info:
author: "Kevin Veen-Birkenbach"
description: "Updates Docker Compose instances by detecting changes in Docker image digests and rebuilding containers when necessary. This role automates Docker image pulls and container rebuilds."
license: "Infinito.Nexus NonCommercial License"
license_url: "https://s.infinito.nexus/license"
company: |
Kevin Veen-Birkenbach
Consulting & Coaching Solutions
https://www.veen.world
min_ansible_version: "2.9"
platforms:
- name: Archlinux
versions:
- rolling
- name: Ubuntu
versions:
- all
galaxy_tags:
- docker
- update
- compose
- images
- systemd
- maintenance
repository: "https://s.infinito.nexus/code"
issue_tracker_url: "https://s.infinito.nexus/issues"
documentation: "https://docs.infinito.nexus"


@@ -1,20 +0,0 @@
- name: Include dependency 'sys-lock'
  include_role:
    name: sys-lock
  when: run_once_sys_lock is not defined

- name: "start {{ 'sys-ctl-bkp-docker-2-loc-everything' | get_service_name(SOFTWARE_NAME) }}"
  systemd:
    name: "{{ 'sys-ctl-bkp-docker-2-loc-everything' | get_service_name(SOFTWARE_NAME) }}"
    state: started
  when:
    - MODE_BACKUP | bool

- include_role:
    name: sys-service
  vars:
    system_service_restarted: true
    system_service_timer_enabled: false
    system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
    system_service_tpl_exec_start: "{{ system_service_script_exec }} {{ PATH_DOCKER_COMPOSE_INSTANCES }}"
    system_service_tpl_exec_start_pre: "/usr/bin/python {{ PATH_SYSTEM_LOCK_SCRIPT }} {{ SYS_SERVICE_GROUP_MANIPULATION | join(' ') }} --ignore {{ SYS_SERVICE_GROUP_CLEANUP | join(' ') }} {{ 'update-docker' | get_service_name(SOFTWARE_NAME) }} --timeout '{{ SYS_TIMEOUT_DOCKER_UPDATE }}'"


@@ -1,4 +0,0 @@
- block:
    - include_tasks: 01_core.yml
    - include_tasks: utils/run_once.yml
  when: run_once_update_docker is not defined


@@ -1,217 +0,0 @@
import os
import subprocess
import sys
import time


def run_command(command):
    """
    Executes the specified shell command, streaming and collecting its output in real time.

    If the command exits with a non-zero status, a subprocess.CalledProcessError is raised,
    including the exit code, the executed command, and the full output (as bytes) for debugging purposes.
    """
    process = None
    try:
        process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        output = []
        for line in iter(process.stdout.readline, b''):
            decoded_line = line.decode()
            output.append(decoded_line)
            sys.stdout.write(decoded_line)
        return_code = process.wait()
        if return_code:
            full_output = ''.join(output)
            raise subprocess.CalledProcessError(return_code, command, output=full_output.encode())
    finally:
        if process and process.stdout:
            process.stdout.close()


def git_pull():
    """
    Checks whether the Git repository in the current directory is up to date and performs a git pull if necessary.

    Raises:
        Exception: If retrieving the local or remote git revision fails because the command returns a non-zero exit code.
    """
    print("Checking if the git repository is up to date.")

    # Run 'git rev-parse @' and check its exit code explicitly.
    local_proc = subprocess.run("git rev-parse @", shell=True, capture_output=True)
    if local_proc.returncode != 0:
        error_msg = local_proc.stderr.decode().strip() or "Unknown error while retrieving local revision."
        raise Exception(f"Failed to retrieve local git revision: {error_msg}")
    local = local_proc.stdout.decode().strip()

    # Run 'git rev-parse @{u}' and check its exit code explicitly.
    remote_proc = subprocess.run("git rev-parse @{u}", shell=True, capture_output=True)
    if remote_proc.returncode != 0:
        error_msg = remote_proc.stderr.decode().strip() or "Unknown error while retrieving remote revision."
        raise Exception(f"Failed to retrieve remote git revision: {error_msg}")
    remote = remote_proc.stdout.decode().strip()

    if local != remote:
        print("Repository is not up to date. Performing git pull.")
        run_command("git pull")
        return True

    print("Repository is already up to date.")
    return False


{% raw %}
def get_image_digests(directory):
    """
    Retrieves the image digests for all images in the specified Docker Compose project.
    """
    compose_project = os.path.basename(directory)
    try:
        images_output = subprocess.check_output(
            f'docker images --format "{{{{.Repository}}}}:{{{{.Tag}}}}@{{{{.Digest}}}}" | grep {compose_project}',
            shell=True
        ).decode().strip()
        return dict(line.split('@') for line in images_output.splitlines() if line)
    except subprocess.CalledProcessError as e:
        if e.returncode == 1:  # grep found no match
            return {}
        else:
            raise  # Other errors are still raised
{% endraw %}


def is_any_service_up():
    """
    Checks if any Docker services are currently running.
    """
    process = subprocess.Popen("docker-compose ps -q", shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    output, _ = process.communicate()
    service_ids = output.decode().strip().splitlines()
    return bool(service_ids)


def pull_docker_images():
    """
    Pulls the latest Docker images for the project.
    """
    print("Pulling docker images.")
    try:
        run_command("docker-compose pull")
    except subprocess.CalledProcessError as e:
        if "pull access denied" in e.output.decode() or "must be built from source" in e.output.decode():
            print("Need to build the image from source.")
            return True
        else:
            print("Failed to pull images with unexpected error.")
            raise
    return False


def update_docker(directory):
    """
    Checks for updates to Docker images and rebuilds containers if necessary.
    """
    print(f"Checking for updates to Docker images in {directory}.")
    before_digests = get_image_digests(directory)
    need_to_build = pull_docker_images()
    after_digests = get_image_digests(directory)
    if before_digests != after_digests:
        print("Changes detected in image digests. Rebuilding containers.")
        need_to_build = True
    if need_to_build:
        # This probably just rebuilds the Dockerfile image if one of the other compose containers changed.
        run_command("docker-compose build --pull")
        start_docker(directory)
    else:
        print("Docker images are up to date. No rebuild necessary.")


def update_discourse(directory):
    """
    Updates Discourse by running the rebuild command on the launcher script.
    """
    docker_repository_directory = os.path.join(directory, "services", "{{ applications | get_app_conf('web-app-discourse','repository') }}")
    print(f"Using path {docker_repository_directory} to pull discourse repository.")
    os.chdir(docker_repository_directory)
    if git_pull():
        print("Start Discourse update procedure.")
        update_procedure("docker stop {{ applications | get_app_conf('web-app-discourse','docker.services.discourse.name') }}")
        update_procedure("docker rm {{ applications | get_app_conf('web-app-discourse','docker.services.discourse.name') }}")
        try:
            update_procedure("docker network connect {{ applications | get_app_conf('web-app-discourse','docker.network') }} {{ applications | get_app_conf('svc-db-postgres', 'docker.network') }}")
        except subprocess.CalledProcessError as e:
            error_message = e.output.decode()
            if "already exists" in error_message or "is already connected" in error_message:
                print("Network connection already exists. Skipping...")
            else:
                raise
        update_procedure("./launcher rebuild {{ applications | get_app_conf('web-app-discourse','docker.services.discourse.name') }}")
    else:
        print("Discourse update skipped. No changes in git repository.")


def upgrade_listmonk():
    """
    Runs the upgrade for Listmonk.
    """
    print("Starting Listmonk upgrade.")
    run_command('echo "y" | docker compose run -T application ./listmonk --upgrade')
    print("Upgrade complete.")


def update_procedure(command):
    """
    Attempts to execute a command up to a maximum number of retries.
    """
    max_attempts = 3
    for attempt in range(max_attempts):
        try:
            run_command(command)
            break  # If the command succeeds, exit the loop
        except subprocess.CalledProcessError:
            if attempt < max_attempts - 1:  # Check if it's not the last attempt
                print(f"Attempt {attempt + 1} failed, retrying in 60 seconds...")
                time.sleep(60)  # Wait for 60 seconds before retrying
            else:
                print("All attempts to update have failed.")
                raise  # Re-raise the last exception after all attempts fail


def start_docker(directory):
    """
    Starts or restarts Docker services in the specified directory.
    """
    if is_any_service_up():
        print(f"Restarting containers in {directory}.")
        run_command("docker-compose up -d --force-recreate")
    else:
        print(f"Skipped starting. No service is up in {directory}.")


if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Please provide the path to the parent directory as a parameter.")
        sys.exit(1)

    parent_directory = sys.argv[1]
    for dir_entry in os.scandir(parent_directory):
        if dir_entry.is_dir():
            dir_path = dir_entry.path
            print(f"Checking for updates in: {dir_path}")
            os.chdir(dir_path)

            # Pull the git repository if it exists.
            # @deprecated: This should be removed as soon as all docker applications use the correct folder path.
            if os.path.isdir(os.path.join(dir_path, ".git")):
                print("DEPRECATED: Docker .git repositories should be saved under /opt/docker/{instance}/services/{repository_name}")
                git_pull()

            if os.path.basename(dir_path) == "matrix":
                # No autoupdate for matrix is possible at the moment,
                # because the role has to be executed every time.
                # The update has to be executed in the role.
                # @todo implement in future
                pass
            else:
                # Pull and update docker images
                update_docker(dir_path)

            # The following instances need additional update and upgrade procedures
            if os.path.basename(dir_path) == "discourse":
                update_discourse(dir_path)
            elif os.path.basename(dir_path) == "listmonk":
                upgrade_listmonk()

            # @todo implement dedicated procedure for bluesky
            # @todo implement dedicated procedure for taiga

@@ -1,2 +0,0 @@
application_id: update-docker
system_service_id: "{{ application_id }}"


@@ -1,23 +0,0 @@
# Update Pip Packages
## Description
This Ansible role automatically updates all installed Python Pip packages to their latest versions.
## Overview
The role performs the following:
- Executes a command to retrieve all installed Python Pip packages.
- Updates each package individually to its latest available version.
- Ensures a smooth and automated Python environment maintenance process.
## Purpose
Ensures Python packages remain up-to-date, improving security and functionality.
## Features
- **Automatic Updates:** Automates the process of upgrading Python packages.
- **Platform Independent:** Works on Linux, macOS, and Windows environments.
- **Ansible Integration:** Easy to include in larger playbooks or maintenance routines.
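The per-package update flow described above typically starts from `pip list --outdated --format=json` (pip's documented JSON output); a minimal sketch of parsing that output, with an illustrative helper name:

```python
import json


def outdated_names(pip_json: str) -> list:
    """Extract package names from `pip list --outdated --format=json` output."""
    return [pkg["name"] for pkg in json.loads(pip_json)]


# Each name would then be upgraded individually, e.g.:
#   python -m pip install --upgrade <name>
sample = '[{"name": "requests", "version": "2.30.0", "latest_version": "2.32.0"}]'
```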


@@ -1,25 +0,0 @@
galaxy_info:
author: "Kevin Veen-Birkenbach"
license: "Infinito.Nexus NonCommercial License"
license_url: "https://s.infinito.nexus/license"
company: |
Kevin Veen-Birkenbach
Consulting & Coaching Solutions
https://www.veen.world
description: "Automatically updates all Python Pip packages to their latest available versions."
min_ansible_version: "2.9"
platforms:
- name: Ubuntu
versions:
- all
- name: Archlinux
versions:
- rolling
- name: Debian
versions:
- all
galaxy_tags:
- python
- pip
- update
- maintenance


@@ -1,9 +0,0 @@
- block:
    - name: Include dependency 'dev-python-pip'
      include_role:
        name: dev-python-pip
      when: run_once_dev_python_pip is not defined
    - include_tasks: utils/run_once.yml
      vars:
        flush_handlers: false
  when: run_once_update_pip is not defined


@@ -1 +0,0 @@
application_id: update-pip


@@ -1,27 +0,0 @@
# Update pkgmgr
## Description
This role checks if the [package manager](https://github.com/kevinveenbirkenbach/package-manager) is available on the system. If so, it runs `pkgmgr update --all` to update all repositories managed by the `pkgmgr`.
## Overview
This role performs the following tasks:
- Checks if the `pkgmgr` command is available.
- If available, runs `pkgmgr update --all` to update all repositories.
## Purpose
The purpose of this role is to simplify system updates by using the `pkgmgr` package manager to handle all repository updates with a single command.
## Features
- **Conditional Execution**: Runs only if the `pkgmgr` command is found on the system.
- **Automated Updates**: Automatically runs `pkgmgr update --all` to update all repositories.
## License
Infinito.Nexus NonCommercial License
[Learn More](https://s.infinito.nexus/license)
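The conditional execution can be mimicked in Python with `shutil.which` (a sketch only; the real role shells out via Ansible, and the helper name is illustrative):

```python
import shutil
import subprocess


def update_all_repos(tool: str = "pkgmgr", dry_run: bool = False) -> bool:
    """Run `<tool> update --all` only when the tool is on PATH.

    Returns True when the tool was found (and the update was attempted),
    False when the step was skipped because the tool is not installed.
    """
    if shutil.which(tool) is None:
        return False  # pkgmgr not installed: skip silently
    if not dry_run:
        subprocess.run([tool, "update", "--all"], check=True)
    return True
```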


@@ -1,2 +0,0 @@
# Todos
- Reactivate the update. Not possible at the moment, because it pulls all repos.


@@ -1,3 +0,0 @@
# run_once_update_pkgmgr: deactivated
#- name: "Update all repositories with pkgmgr"
# command: "pkgmgr update --all"


@@ -1 +0,0 @@
application_id: update-pkgmgr


@@ -23,6 +23,6 @@ AKAUNTING_COMPANY_NAME: "{{ applications | get_app_conf(application_
 AKAUNTING_COMPANY_EMAIL: "{{ applications | get_app_conf(application_id, 'company.email') }}"
 AKAUNTING_ADMIN_EMAIL: "{{ applications | get_app_conf(application_id, 'setup_admin_email') }}"
 AKAUNTING_ADMIN_PASSWORD: "{{ applications | get_app_conf(application_id, 'credentials.setup_admin_password') }}"
-AKAUNTING_SETUP_MARKER: "/var/lib/docker/volumes/{{ AKAUNTING_VOLUME }}/_data/.akaunting_installed"
+AKAUNTING_SETUP_MARKER: "{{ [ (AKAUNTING_VOLUME | docker_volume_path), '.akaunting_installed' ] | path_join }}"
 AKAUNTING_APP_KEY: "{{ applications | get_app_conf(application_id, 'credentials.app_key') }}"
 AKAUNTING_CACHE_DRIVER: "{{ 'redis' if applications | get_app_conf(application_id, 'docker.services.redis.enabled') else 'file' }}"
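Assuming the `docker_volume_path` filter resolves a named volume to `/var/lib/docker/volumes/<name>/_data` (the usual mountpoint on a default Docker install), the new `path_join` expression for the setup marker is equivalent to:

```python
import posixpath


def setup_marker(volume: str) -> str:
    # Mirror of: [ (AKAUNTING_VOLUME | docker_volume_path), '.akaunting_installed' ] | path_join
    volume_path = posixpath.join("/var/lib/docker/volumes", volume, "_data")
    return posixpath.join(volume_path, ".akaunting_installed")
```

The win over the old hard-coded string is that the volume-to-path mapping lives in one filter instead of being repeated per role.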


@@ -1,19 +1,45 @@
-images:
-  pds: "ghcr.io/bluesky-social/pds:latest"
+pds:
+  version: "latest"
 features:
   matomo: true
   css: true
   desktop: true
-  central_database: true
+  central_database: false
   logout: true
 server:
+  config_upstream_url: "https://ip.bsky.app/config"
   domains:
     canonical:
       web: "bskyweb.{{ PRIMARY_DOMAIN }}"
       api: "bluesky.{{ PRIMARY_DOMAIN }}"
+      view: "view.bluesky.{{ PRIMARY_DOMAIN }}"
+  csp:
+    whitelist:
+      connect-src:
+        - "{{ WEB_PROTOCOL }}://<< defaults_applications[web-app-bluesky].server.domains.canonical.api >>"
+        - https://plc.directory
+        - https://bsky.social
+        - https://api.bsky.app
+        - https://public.api.bsky.app
+        - https://events.bsky.app
+        - https://statsigapi.net
+        - https://ip.bsky.app
+        - https://video.bsky.app
+        - https://bsky.app
+        - wss://bsky.network
+        - wss://*.bsky.app
+      media-src:
+        - "blob:"
+      worker-src:
+        - "blob:"
 docker:
   services:
     database:
-      enabled: true
+      enabled: false
+    web:
+      enabled: true # @see https://github.com/bluesky-social/social-app
+    view:
+      enabled: false
+    pds:
+      image: "ghcr.io/bluesky-social/pds"
+      version: "latest"
+  volumes:
+    pds_data: "pds_data"


@@ -7,7 +7,3 @@ credentials:
     description: "PLC rotation key in hex format (32 bytes)"
     algorithm: "sha256"
     validation: "^[a-f0-9]{64}$"
-  admin_password:
-    description: "Initial admin password for Bluesky PDS"
-    algorithm: "plain"
-    validation: "^.{12,}$"
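The remaining `validation` pattern for the rotation key can be exercised directly (a quick standalone check, not part of the role):

```python
import re

# 32 bytes encoded as 64 lowercase hex characters, per the schema above.
HEX_KEY_RE = re.compile(r"^[a-f0-9]{64}$")


def is_valid_rotation_key(value: str) -> bool:
    return HEX_KEY_RE.fullmatch(value) is not None
```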


@@ -0,0 +1,30 @@
# The following lines should be removed when the following issue is closed:
# https://github.com/bluesky-social/pds/issues/52
- name: Download pdsadmin tarball
get_url:
url: "https://github.com/lhaig/pdsadmin/releases/download/v1.0.0-dev/pdsadmin_Linux_x86_64.tar.gz"
dest: "{{ BLUESKY_PDSADMIN_TMP_TAR }}"
mode: '0644'
notify:
- docker compose up
- docker compose build
- name: Create {{ BLUESKY_PDSADMIN_DIR }}
file:
path: "{{ BLUESKY_PDSADMIN_DIR }}"
state: directory
mode: '0755'
- name: Extract pdsadmin tarball
unarchive:
src: "{{ BLUESKY_PDSADMIN_TMP_TAR }}"
dest: "{{ BLUESKY_PDSADMIN_DIR }}"
remote_src: yes
mode: '0755'
- name: Ensure pdsadmin is executable
file:
path: "{{ BLUESKY_PDSADMIN_FILE }}"
mode: '0755'
state: file
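The download/extract/chmod sequence above can be approximated in Python for local experimentation (a sketch: the `install_from_tarball` helper and its layout are assumptions, the actual tasks use Ansible's `get_url`/`unarchive`/`file` modules, and the download step is omitted here):

```python
import os
import tarfile


def install_from_tarball(tar_path: str, dest_dir: str, tool: str = "pdsadmin") -> str:
    """Extract a downloaded tool tarball and mark the binary executable."""
    os.makedirs(dest_dir, mode=0o755, exist_ok=True)
    with tarfile.open(tar_path) as tar:   # compression auto-detected
        tar.extractall(dest_dir)
    tool_path = os.path.join(dest_dir, tool)
    os.chmod(tool_path, 0o755)            # mirror of the `mode: '0755'` tasks
    return tool_path
```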


@@ -0,0 +1,21 @@
- name: clone social app repository
  git:
    repo: "https://github.com/bluesky-social/social-app.git"
    dest: "{{ BLUESKY_SOCIAL_APP_DIR }}"
    version: "main"
    force: true
  notify:
    - docker compose up
    - docker compose build

- name: Force BAPP_CONFIG_URL to same-origin /config
  ansible.builtin.replace:
    path: "{{ BLUESKY_SOCIAL_APP_DIR }}/src/state/geolocation.tsx"
    regexp: '^\s*const\s+BAPP_CONFIG_URL\s*=\s*.*$'
    replace: "const BAPP_CONFIG_URL = '/config'"

- name: Force IPCC_URL to same-origin /ipcc
  ansible.builtin.replace:
    path: "{{ BLUESKY_SOCIAL_APP_DIR }}/src/state/geolocation.tsx"
    regexp: '^\s*const\s+IPCC_URL\s*=\s*.*$'
    replace: "const IPCC_URL = '/ipcc'"
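The two `ansible.builtin.replace` edits above are plain per-line regex substitutions; the same rewrite expressed in Python (reusing the exact regexps from the tasks, with multiline matching as `replace` does):

```python
import re


def pin_same_origin(source: str) -> str:
    """Rewrite BAPP_CONFIG_URL and IPCC_URL to same-origin paths."""
    source = re.sub(r"^\s*const\s+BAPP_CONFIG_URL\s*=\s*.*$",
                    "const BAPP_CONFIG_URL = '/config'", source, flags=re.M)
    source = re.sub(r"^\s*const\s+IPCC_URL\s*=\s*.*$",
                    "const IPCC_URL = '/ipcc'", source, flags=re.M)
    return source
```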


@@ -0,0 +1,73 @@
---
# Creates Cloudflare DNS records for Bluesky:
# - PDS/API host (A/AAAA)
# - Handle TXT verification (_atproto)
# - Optional Web UI host (A/AAAA)
# - Optional custom AppView host (A/AAAA)
#
# Requirements:
#   DNS_PROVIDER == 'cloudflare'
#   CLOUDFLARE_API_TOKEN set
#
# Inputs (inventory/vars):
#   BLUESKY_API_DOMAIN, BLUESKY_WEB_DOMAIN, BLUESKY_VIEW_DOMAIN
#   BLUESKY_WEB_ENABLED (bool), BLUESKY_VIEW_ENABLED (bool)
#   PRIMARY_DOMAIN
#   networks.internet.ip4 (and optionally networks.internet.ip6)

- name: "DNS (Cloudflare) for Bluesky base records"
  include_role:
    name: sys-dns-cloudflare-records
  when: DNS_PROVIDER | lower == 'cloudflare'
  vars:
    cloudflare_records:
      # 1) PDS / API host
      - type: A
        zone: "{{ BLUESKY_API_DOMAIN | to_zone }}"
        name: "{{ BLUESKY_API_DOMAIN }}"
        content: "{{ networks.internet.ip4 }}"
        proxied: false

      - type: AAAA
        zone: "{{ BLUESKY_API_DOMAIN | to_zone }}"
        name: "{{ BLUESKY_API_DOMAIN }}"
        content: "{{ networks.internet.ip6 | default('') }}"
        proxied: false
        state: "{{ (networks.internet.ip6 is defined and (networks.internet.ip6 | string) | length > 0) | ternary('present','absent') }}"

      # 2) Handle verification for the primary handle (apex)
      - type: TXT
        zone: "{{ PRIMARY_DOMAIN | to_zone }}"
        name: "_atproto.{{ PRIMARY_DOMAIN }}"
        value: "did=did:web:{{ BLUESKY_API_DOMAIN }}"

      # 3) Web UI host (only if enabled)
      - type: A
        zone: "{{ BLUESKY_WEB_DOMAIN | to_zone }}"
        name: "{{ BLUESKY_WEB_DOMAIN }}"
        content: "{{ networks.internet.ip4 }}"
        proxied: true
        state: "{{ (BLUESKY_WEB_ENABLED | bool) | ternary('present','absent') }}"

      - type: AAAA
        zone: "{{ BLUESKY_WEB_DOMAIN | to_zone }}"
        name: "{{ BLUESKY_WEB_DOMAIN }}"
        content: "{{ networks.internet.ip6 | default('') }}"
        proxied: true
        # Group the whole condition before | ternary; otherwise the filter only applies to the last term.
        state: "{{ ((BLUESKY_WEB_ENABLED | bool) and (networks.internet.ip6 is defined) and ((networks.internet.ip6 | string) | length > 0)) | ternary('present','absent') }}"

      # 4) Custom AppView host (only if you actually run one and it's not api.bsky.app)
      - type: A
        zone: "{{ BLUESKY_VIEW_DOMAIN | to_zone }}"
        name: "{{ BLUESKY_VIEW_DOMAIN }}"
        content: "{{ networks.internet.ip4 }}"
        proxied: false
        state: "{{ ((BLUESKY_VIEW_ENABLED | bool) and (BLUESKY_VIEW_DOMAIN != 'api.bsky.app')) | ternary('present','absent') }}"

      - type: AAAA
        zone: "{{ BLUESKY_VIEW_DOMAIN | to_zone }}"
        name: "{{ BLUESKY_VIEW_DOMAIN }}"
        content: "{{ networks.internet.ip6 | default('') }}"
        proxied: false
        state: "{{ ((BLUESKY_VIEW_ENABLED | bool) and (BLUESKY_VIEW_DOMAIN != 'api.bsky.app') and (networks.internet.ip6 is defined) and ((networks.internet.ip6 | string) | length > 0)) | ternary('present','absent') }}"
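The `state` expressions for the optional records reduce to a small predicate; a Python sketch (variable names mirror the task vars; note that the whole condition must be evaluated before choosing present/absent, since in Jinja the `| ternary` filter binds only to the expression immediately before it):

```python
def record_state(enabled: bool, domain: str, ip6: str = "",
                 exclude: str = "api.bsky.app") -> str:
    """Decide whether an optional AAAA record should exist."""
    # Record exists only when the feature is on, the domain is not the
    # upstream default, and an IPv6 address is actually configured.
    wanted = enabled and domain != exclude and bool(ip6.strip())
    return "present" if wanted else "absent"
```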


@@ -1,48 +1,40 @@
 - name: "include docker-compose role"
   include_role:
     name: docker-compose
+  vars:
+    docker_compose_flush_handlers: false

-- name: "include role sys-stk-front-proxy for {{ application_id }}"
+- name: "Include front proxy for {{ BLUESKY_API_DOMAIN }}:{{ BLUESKY_API_PORT }}"
   include_role:
     name: sys-stk-front-proxy
   vars:
-    domain: "{{ item.domain }}"
-    http_port: "{{ item.http_port }}"
-  loop:
-    - { domain: "{{ domains[application_id].api }}", http_port: "{{ ports.localhost.http['web-app-bluesky_api'] }}" }
-    - { domain: "{{ domains[application_id].web }}", http_port: "{{ ports.localhost.http['web-app-bluesky_web'] }}" }
+    domain: "{{ BLUESKY_API_DOMAIN }}"
+    http_port: "{{ BLUESKY_API_PORT }}"

-# The following lines should be removed when the following issue is closed:
-# https://github.com/bluesky-social/pds/issues/52
-- name: Download pdsadmin tarball
-  get_url:
-    url: "https://github.com/lhaig/pdsadmin/releases/download/v1.0.0-dev/pdsadmin_Linux_x86_64.tar.gz"
-    dest: "{{ pdsadmin_temporary_tar_path }}"
-    mode: '0644'
-
-- name: Create {{ pdsadmin_folder_path }}
-  file:
-    path: "{{ pdsadmin_folder_path }}"
-    state: directory
-    mode: '0755'
-
-- name: Extract pdsadmin tarball
-  unarchive:
-    src: "{{ pdsadmin_temporary_tar_path }}"
-    dest: "{{ pdsadmin_folder_path }}"
-    remote_src: yes
-    mode: '0755'
-
-- name: Ensure pdsadmin is executable
-  file:
-    path: "{{ pdsadmin_file_path }}"
-    mode: '0755'
-    state: file
-
-- name: clone social app repository
-  git:
-    repo: "https://github.com/bluesky-social/social-app.git"
-    dest: "{{ social_app_path }}"
-    version: "main"
-  notify: docker compose up
+- name: "Include front proxy for {{ BLUESKY_WEB_DOMAIN }}:{{ BLUESKY_WEB_PORT }}"
+  include_role:
+    name: sys-stk-front-proxy
+  vars:
+    domain: "{{ BLUESKY_WEB_DOMAIN }}"
+    http_port: "{{ BLUESKY_WEB_PORT }}"
+    proxy_extra_configuration: "{{ BLUESKY_FRONT_PROXY_CONTENT }}"
+  when: BLUESKY_WEB_ENABLED | bool
+
+- name: "Include front proxy for {{ BLUESKY_VIEW_DOMAIN }}:{{ BLUESKY_VIEW_PORT }}"
+  include_role:
+    name: sys-stk-front-proxy
+  vars:
+    domain: "{{ BLUESKY_VIEW_DOMAIN }}"
+    http_port: "{{ BLUESKY_VIEW_PORT }}"
+  when: BLUESKY_VIEW_ENABLED | bool
+
+- name: "Execute PDS routines"
+  ansible.builtin.include_tasks: "01_pds.yml"
+
+- name: "Execute Social App routines"
+  ansible.builtin.include_tasks: "02_social_app.yml"
+  when: BLUESKY_WEB_ENABLED | bool
+
+- name: "DNS for Bluesky"
+  include_tasks: "03_dns.yml"
+  when: DNS_PROVIDER | lower == 'cloudflare'

View File

@@ -3,40 +3,32 @@
   pds:
 {% set container_port = 3000 %}
 {% set container_healthcheck = 'xrpc/_health' %}
-    image: "{{ applications | get_app_conf(application_id, 'images.pds', True) }}"
+    image: "{{ BLUESKY_PDS_IMAGE }}:{{ BLUESKY_PDS_VERSION }}"
 {% include 'roles/docker-container/templates/base.yml.j2' %}
     volumes:
-      - pds_data:/opt/pds
-      - {{ pdsadmin_file_path }}:/usr/local/bin/pdsadmin:ro
+      - pds_data:{{ BLUESKY_PDS_DATA_DIR }}
+      - {{ BLUESKY_PDSADMIN_FILE }}:/usr/local/bin/pdsadmin:ro
     ports:
-      - "127.0.0.1:{{ ports.localhost.http['web-app-bluesky_api'] }}:{{ container_port }}"
+      - "127.0.0.1:{{ BLUESKY_API_PORT }}:{{ container_port }}"
 {% include 'roles/docker-container/templates/healthcheck/wget.yml.j2' %}
 {% include 'roles/docker-container/templates/networks.yml.j2' %}

-# Deactivated for the moment @see https://github.com/bluesky-social/social-app
+{% if BLUESKY_WEB_ENABLED %}
+{% set container_port = 8100 %}
   web:
     command: ["bskyweb","serve"]
     build:
-      context: "{{ social_app_path }}"
+      context: "{{ BLUESKY_SOCIAL_APP_DIR }}"
       dockerfile: Dockerfile
-      # It doesn't compile yet with these parameters. @todo Fix it
-      args:
-        REACT_APP_PDS_URL: "{{ WEB_PROTOCOL }}://{{ domains[application_id].api }}"
-        REACT_APP_API_URL: "{{ WEB_PROTOCOL }}://{{ domains[application_id].api }}"
-        REACT_APP_SITE_NAME: "{{ PRIMARY_DOMAIN | upper }} - Bluesky"
-        REACT_APP_SITE_DESCRIPTION: "Decentral Social"
     pull_policy: never
     ports:
-      - "127.0.0.1:{{ ports.localhost.http['web-app-bluesky_web'] }}:8100"
-    healthcheck:
-      test: ["CMD", "sh", "-c", "for pid in $(ls /proc | grep -E '^[0-9]+$'); do if cat /proc/$pid/cmdline 2>/dev/null | grep -q 'bskywebserve'; then exit 0; fi; done; exit 1"]
-      interval: 30s
-      timeout: 10s
-      retries: 3
+      - "127.0.0.1:{{ BLUESKY_WEB_PORT }}:{{ container_port }}"
+{% include 'roles/docker-container/templates/healthcheck/tcp.yml.j2' %}
 {% include 'roles/docker-container/templates/networks.yml.j2' %}
+{% endif %}

 {% include 'roles/docker-compose/templates/volumes.yml.j2' %}
   pds_data:
+    name: {{ BLUESKY_PDS_DATA_VOLUME }}
 {% include 'roles/docker-compose/templates/networks.yml.j2' %}

View File

@@ -1,21 +1,30 @@
-PDS_HOSTNAME="{{ domains[application_id].api }}"
-PDS_ADMIN_EMAIL="{{ applications.bluesky.users.administrator.email }}"
-PDS_SERVICE_DID="did:web:{{ domains[application_id].api }}"
+# General
+PDS_HOSTNAME="{{ BLUESKY_API_DOMAIN }}"
+PDS_CRAWLERS=https://bsky.network
+LOG_ENABLED={{ MODE_DEBUG | string | lower }}
+PDS_BLOBSTORE_DISK_LOCATION={{ BLUESKY_PDS_BLOBSTORE_LOCATION }}
+PDS_DATA_DIRECTORY={{ BLUESKY_PDS_DATA_DIR }}
+PDS_BLOB_UPLOAD_LIMIT=52428800
+PDS_DID_PLC_URL=https://plc.directory
+
 # See https://mattdyson.org/blog/2024/11/self-hosting-bluesky-pds/
 PDS_SERVICE_HANDLE_DOMAINS=".{{ PRIMARY_DOMAIN }}"
-PDS_JWT_SECRET="{{ bluesky_jwt_secret }}"
-PDS_ADMIN_PASSWORD="{{ bluesky_admin_password }}"
-PDS_PLC_ROTATION_KEY_K256_PRIVATE_KEY_HEX="{{ bluesky_rotation_key }}"
-PDS_CRAWLERS=https://bsky.network
+PDS_SERVICE_DID="did:web:{{ BLUESKY_API_DOMAIN }}"
+
+# Email
+PDS_ADMIN_EMAIL="{{ BLUESKY_ADMIN_EMAIL }}"
 PDS_EMAIL_SMTP_URL=smtps://{{ users['no-reply'].email }}:{{ users['no-reply'].mailu_token }}@{{ SYSTEM_EMAIL.HOST }}:{{ SYSTEM_EMAIL.PORT }}/
 PDS_EMAIL_FROM_ADDRESS={{ users['no-reply'].email }}
-LOG_ENABLED=true
-PDS_BLOBSTORE_DISK_LOCATION=/opt/pds/blocks
-PDS_DATA_DIRECTORY: /opt/pds
-PDS_BLOB_UPLOAD_LIMIT: 52428800
-PDS_DID_PLC_URL=https://plc.directory
-PDS_BSKY_APP_VIEW_URL=https://{{ domains[application_id].web }}
-PDS_BSKY_APP_VIEW_DID=did:web:{{ domains[application_id].web }}
+
+# Credentials
+PDS_JWT_SECRET="{{ BLUESKY_JWT_SECRET }}"
+PDS_ADMIN_PASSWORD="{{ BLUESKY_ADMIN_PASSWORD }}"
+PDS_PLC_ROTATION_KEY_K256_PRIVATE_KEY_HEX="{{ BLUESKY_ROTATION_KEY }}"
+
+# View
+PDS_BSKY_APP_VIEW_URL={{ BLUESKY_VIEW_URL }}
+PDS_BSKY_APP_VIEW_DID={{ BLUESKY_VIEW_DID }}
+
+# Report
 PDS_REPORT_SERVICE_URL=https://mod.bsky.app
 PDS_REPORT_SERVICE_DID=did:plc:ar7c4by46qjdydhdevvrndac

View File

@@ -0,0 +1,29 @@
# Injected by web-app-bluesky (same pattern as web-app-yourls)
# Exposes a same-origin /config to avoid CORS when the social-app fetches config.
location = /config {
proxy_pass {{ BLUESKY_CONFIG_UPSTREAM_URL }};
    # Extract only the hostname:
set $up_host "{{ BLUESKY_CONFIG_UPSTREAM_URL | regex_replace('^https?://', '') | regex_replace('/.*$', '') }}";
proxy_set_header Host $up_host;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_ssl_server_name on;
# Make response clearly same-origin for browsers
proxy_hide_header Access-Control-Allow-Origin;
add_header Access-Control-Allow-Origin $scheme://$host always;
add_header Vary Origin always;
}
location = /ipcc {
proxy_pass https://bsky.app/ipcc;
set $up_host "bsky.app";
proxy_set_header Host $up_host;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_ssl_server_name on;
proxy_hide_header Access-Control-Allow-Origin;
add_header Access-Control-Allow-Origin $scheme://$host always;
add_header Vary Origin always;
}
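The `regex_replace` chain above reduces the upstream URL to a bare host for the `Host` header: first the scheme is stripped, then everything from the first slash onward. The same two-step substitution in Python, purely for illustration:

```python
import re

def upstream_host(url: str) -> str:
    # Step 1: strip the http/https scheme prefix.
    no_scheme = re.sub(r"^https?://", "", url)
    # Step 2: drop the path, keeping host (and port, if present).
    return re.sub(r"/.*$", "", no_scheme)

print(upstream_host("https://api.bsky.app/xrpc/_health"))  # api.bsky.app
```

Note that a port survives the substitution (`https://example.org:8443/x` yields `example.org:8443`), which matches what the nginx snippet sends as `Host`.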

View File

@@ -1,11 +1,48 @@
-application_id: "web-app-bluesky"
-social_app_path: "{{ docker_compose.directories.services }}/social-app"
+# General
+application_id: "web-app-bluesky"
+
+## Bluesky
+
+## Social App
+BLUESKY_SOCIAL_APP_DIR: "{{ docker_compose.directories.services }}/social-app"

 # This should be removed when the following issue is closed:
 # https://github.com/bluesky-social/pds/issues/52
-pdsadmin_folder_path: "{{ docker_compose.directories.volumes }}/pdsadmin"
-pdsadmin_file_path: "{{ pdsadmin_folder_path }}/pdsadmin"
-pdsadmin_temporary_tar_path: "/tmp/pdsadmin.tar.gz"
-bluesky_jwt_secret: "{{ applications | get_app_conf(application_id, 'credentials.jwt_secret') }}"
-bluesky_admin_password: "{{ applications | get_app_conf(application_id, 'credentials.admin_password') }}"
-bluesky_rotation_key: "{{ applications | get_app_conf(application_id, 'credentials.plc_rotation_key_k256_private_key_hex') }}"
+
+## PDS
+BLUESKY_PDSADMIN_DIR: "{{ [ docker_compose.directories.volumes, 'pdsadmin' ] | path_join }}"
+BLUESKY_PDSADMIN_FILE: "{{ [ BLUESKY_PDSADMIN_DIR, 'pdsadmin' ] | path_join }}"
+BLUESKY_PDSADMIN_TMP_TAR: "/tmp/pdsadmin.tar.gz"
+BLUESKY_PDS_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.pds.image') }}"
+BLUESKY_PDS_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.pds.version') }}"
+BLUESKY_PDS_DATA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.pds_data') }}"
+BLUESKY_PDS_DATA_DIR: "/opt/pds"
+BLUESKY_PDS_BLOBSTORE_LOCATION: "{{ [ BLUESKY_PDS_DATA_DIR, 'blocks' ] | path_join }}"
+
+## Web
+BLUESKY_WEB_ENABLED: "{{ applications | get_app_conf(application_id, 'docker.services.web.enabled') }}"
+BLUESKY_WEB_DOMAIN: "{{ domains[application_id].web }}"
+BLUESKY_WEB_PORT: "{{ ports.localhost.http['web-app-bluesky_web'] }}"
+
+## View
+BLUESKY_VIEW_ENABLED: "{{ applications | get_app_conf(application_id, 'docker.services.view.enabled') }}"
+BLUESKY_VIEW_DOMAIN: "{{ domains[application_id].view if BLUESKY_VIEW_ENABLED else 'api.bsky.app' }}"
+BLUESKY_VIEW_URL: "{{ WEB_PROTOCOL }}://{{ BLUESKY_VIEW_DOMAIN }}"
+BLUESKY_VIEW_DID: "did:web:{{ BLUESKY_VIEW_DOMAIN }}"
+BLUESKY_VIEW_PORT: "{{ ports.localhost.http['web-app-bluesky_view'] | default(8053) }}"
+
+## Server
+BLUESKY_API_DOMAIN: "{{ domains[application_id].api }}"
+BLUESKY_API_PORT: "{{ ports.localhost.http['web-app-bluesky_api'] }}"
+
+## Credentials
+BLUESKY_JWT_SECRET: "{{ applications | get_app_conf(application_id, 'credentials.jwt_secret') }}"
+BLUESKY_ROTATION_KEY: "{{ applications | get_app_conf(application_id, 'credentials.plc_rotation_key_k256_private_key_hex') }}"
+
+## Admin
+BLUESKY_ADMIN_EMAIL: "{{ users.administrator.email }}"
+BLUESKY_ADMIN_PASSWORD: "{{ users.administrator.password }}"
+
+# Front proxy
+BLUESKY_FRONT_PROXY_CONTENT: "{{ lookup('template', 'extra_locations.conf.j2') }}"
+BLUESKY_CONFIG_UPSTREAM_URL: "{{ applications | get_app_conf(application_id, 'server.config_upstream_url') }}"

View File

@@ -24,7 +24,11 @@ features:
 server:
   csp:
     whitelist: {}
-    flags: {}
+    flags:
+      script-src-elem:
+        unsafe-inline: true
+      script-src:
+        unsafe-inline: true
 domains:
   canonical:
     - "book.{{ PRIMARY_DOMAIN }}"

View File

@@ -12,7 +12,8 @@ RUN git clone --depth=1 --branch "{{ BOOKWYRM_VERSION }}" https://github.com/boo
 # Pre-install Python deps to a wheelhouse for faster final image
 RUN pip install --upgrade pip \
-    && pip wheel --wheel-dir /wheels -r requirements.txt
+    && pip wheel --wheel-dir /wheels -r requirements.txt \
+    && pip wheel --wheel-dir /wheels gunicorn

 FROM python:3.11-bookworm
 ENV PYTHONUNBUFFERED=1
@@ -28,6 +29,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
     libjpeg62-turbo zlib1g libxml2 libxslt1.1 libffi8 libmagic1 \
     && rm -rf /var/lib/apt/lists/* \
     && pip install --no-cache-dir --no-index --find-links=/wheels -r /app/requirements.txt \
+    && pip install --no-cache-dir --no-index --find-links=/wheels gunicorn \
     && adduser --disabled-password --gecos '' bookwyrm \
     && mkdir -p /app/data /app/media \
     && chown -R bookwyrm:bookwyrm /app

View File

@@ -6,7 +6,8 @@
     bash -lc '
       python manage.py migrate --noinput &&
       python manage.py collectstatic --noinput &&
-      gunicorn bookwyrm.wsgi:application --bind 0.0.0.0:{{ container_port }}
+      (python manage.py initdb || true) &&
+      python -m gunicorn bookwyrm.wsgi:application --bind 0.0.0.0:{{ container_port }}
     '
     build:
       context: .

View File

@@ -22,17 +22,39 @@ EMAIL_HOST_PASSWORD="{{ EMAIL_HOST_PASSWORD }}"
 DEFAULT_FROM_EMAIL="{{ EMAIL_DEFAULT_FROM }}"

 # Database
+POSTGRES_DB="{{ database_name }}"
+POSTGRES_USER="{{ database_username }}"
+POSTGRES_PASSWORD="{{ database_password }}"
+POSTGRES_HOST="{{ database_host }}"
+POSTGRES_PORT="{{ database_port }}"
 DATABASE_URL="postgres://{{ database_username }}:{{ database_password }}@{{ database_host }}:{{ database_port }}/{{ database_name }}"

 # Redis / Celery
+REDIS_HOST="{{ BOOKWYRM_REDIS_HOST }}"
+REDIS_PORT="{{ BOOKWYRM_REDIS_PORT }}"
+REDIS_URL="{{ BOOKWYRM_REDIS_CACHE_URL }}"
+REDIS_CACHE_URL="{{ BOOKWYRM_REDIS_CACHE_URL }}"
+CACHE_URL="{{ BOOKWYRM_REDIS_CACHE_URL }}"
+DJANGO_REDIS_URL="{{ BOOKWYRM_REDIS_CACHE_URL }}"
+
+## Broker
+BROKER_URL="{{ BOOKWYRM_BROKER_URL }}"
 REDIS_BROKER_URL="{{ BOOKWYRM_REDIS_BROKER_URL }}"
-REDIS_CACHE_URL="{{ BOOKWYRM_REDIS_BASE_URL }}/1"
+REDIS_BROKER_HOST="{{ BOOKWYRM_REDIS_HOST }}"
+REDIS_BROKER_PORT="{{ BOOKWYRM_REDIS_PORT }}"
+REDIS_BROKER_DB_INDEX="{{ BOOKWYRM_REDIS_BROKER_DB }}"
 CELERY_BROKER_URL="{{ BOOKWYRM_REDIS_BROKER_URL }}"
+
+## Activity
+REDIS_ACTIVITY_HOST="{{ BOOKWYRM_REDIS_HOST }}"
+REDIS_ACTIVITY_PORT="{{ BOOKWYRM_REDIS_PORT }}"
+REDIS_ACTIVITY_DB_INDEX="{{ BOOKWYRM_REDIS_ACTIVITY_DB }}"
+REDIS_ACTIVITY_URL="{{ BOOKWYRM_REDIS_ACTIVITY_URL }}"

 # Proxy (if BookWyrm sits behind reverse proxy)
 FORWARDED_ALLOW_IPS="*"
 USE_X_FORWARDED_HOST="true"
-SECURE_PROXY_SSL_HEADER="HTTP_X_FORWARDED_PROTO,{{ WEB_PROTOCOL }}"
+SECURE_PROXY_SSL_HEADER="{{ (WEB_PORT == 443) | string | lower }}"

 # OIDC (optional only if BOOKWYRM_OIDC_ENABLED)
 {% if BOOKWYRM_OIDC_ENABLED %}

View File

@@ -45,6 +45,12 @@ BOOKWYRM_REDIS_HOST: "redis"
 BOOKWYRM_REDIS_PORT: 6379
 BOOKWYRM_REDIS_BASE_URL: "redis://{{ BOOKWYRM_REDIS_HOST }}:{{ BOOKWYRM_REDIS_PORT }}"
 BOOKWYRM_REDIS_BROKER_URL: "{{ BOOKWYRM_REDIS_BASE_URL }}/0"
+BOOKWYRM_REDIS_CACHE_URL: "{{ BOOKWYRM_REDIS_BASE_URL }}/1"
+BOOKWYRM_REDIS_BROKER_DB: 0
+BOOKWYRM_REDIS_ACTIVITY_DB: 1
+BOOKWYRM_BROKER_URL: "{{ BOOKWYRM_REDIS_BROKER_URL }}"
+BOOKWYRM_REDIS_ACTIVITY_URL: "{{ BOOKWYRM_REDIS_CACHE_URL }}"
+#BOOKWYRM_CACHE_URL: "{{ BOOKWYRM_REDIS_CACHE_URL }}"

 # Email
 EMAIL_HOST: "{{ SYSTEM_EMAIL.HOST }}"
@@ -53,5 +59,5 @@ EMAIL_HOST_USER: "{{ users['no-reply'].email }}"
 EMAIL_HOST_PASSWORD: "{{ users['no-reply'].mailu_token }}"
 # TLS/SSL: If TLS is true → TLS; else → SSL
 EMAIL_USE_TLS: "{{ SYSTEM_EMAIL.TLS | ternary('true','false') }}"
-EMAIL_USE_SSL: "{{ not SYSTEM_EMAIL.TLS | ternary('true','false') }}"
+EMAIL_USE_SSL: "{{ SYSTEM_EMAIL.TLS | ternary('false','true') }}"
 EMAIL_DEFAULT_FROM: "BookWyrm <{{ users['no-reply'].email }}>"
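The corrected pair keeps `EMAIL_USE_TLS` and `EMAIL_USE_SSL` mutually exclusive (Django refuses to start if both are set). The same derivation as a small Python sketch, with a hypothetical helper mirroring the ternary logic:

```python
def email_security_flags(use_tls: bool) -> dict:
    # Exactly one of the two flags may be "true" at a time.
    return {
        "EMAIL_USE_TLS": "true" if use_tls else "false",
        "EMAIL_USE_SSL": "false" if use_tls else "true",
    }

print(email_security_flags(True))  # {'EMAIL_USE_TLS': 'true', 'EMAIL_USE_SSL': 'false'}
```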

View File

@@ -0,0 +1,25 @@
# Bridgy Fed
## Description
Bridgy Fed bridges ActivityPub (Fediverse), ATProto/Bluesky, and IndieWeb (webmentions/mf2). It mirrors identities and interactions across networks.
## Overview
This role builds and runs Bridgy Fed as a Docker container and (optionally) starts a Datastore-mode Firestore emulator as a sidecar. It exposes HTTP locally for a front proxy.
Upstream docs & dev notes:
- User & developer docs: https://fed.brid.gy and https://bridgy-fed.readthedocs.io/
- Source: https://github.com/snarfed/bridgy-fed
- Local run (reference): `flask run -p 8080` with BRIDGY_APPVIEW_HOST/BRIDGY_PLC_HOST/BRIDGY_BGS_HOST/BRIDGY_PDS_HOST set, and Datastore emulator envs
## Features
- Dockerized Flask app (gunicorn)
- Optional Firestore emulator (Datastore mode) sidecar
- Front proxy integration via `sys-stk-front-proxy`
## Quick start
1) Set domains and ports in inventory.
2) Enable/disable the emulator in `config/main.yml`.
3) Run the role; your front proxy will publish the app.
## Notes
- The emulator is **not** for production; its data is in-memory unless you mount a volume or configure import/export.

View File

@@ -0,0 +1,29 @@
features:
matomo: true
css: true
desktop: true
central_database: false
logout: false
oidc: false
server:
domains:
canonical:
- "bridgyfed.{{ PRIMARY_DOMAIN }}"
csp:
whitelist: {}
flags: {}
docker:
services:
database:
enabled: false
application:
image: "python"
version: "3.12-bookworm"
name: "web-app-bridgy-fed"
repository: "https://github.com/snarfed/bridgy-fed.git"
branch: "main"
rbac:
roles: {}

View File

@@ -0,0 +1,49 @@
# Runtime image for Bridgy Fed (Flask) with a build step that clones upstream
ARG PY_BASE="python:3.12-bookworm"
FROM ${PY_BASE} AS build
ARG BRIDGY_REPO_URL
ARG BRIDGY_REPO_BRANCH
# System deps: git, build tools, curl for healthchecks, and gunicorn
RUN apt-get update && apt-get install -y --no-install-recommends \
git build-essential curl ca-certificates && \
rm -rf /var/lib/apt/lists/*
WORKDIR /app
RUN git clone --depth=1 --branch "${BRIDGY_REPO_BRANCH}" "${BRIDGY_REPO_URL}" ./
# Python deps
RUN pip install --upgrade pip && \
pip install --no-cache-dir -r requirements.txt
# Create oauth_dropins static symlink (upstream expects this)
RUN python - <<'PY'
import oauth_dropins, pathlib, os
target = pathlib.Path(oauth_dropins.__file__).parent / 'static'
link = pathlib.Path('/app/oauth_dropins_static')
try:
    if link.exists() or link.is_symlink():
        link.unlink()
    os.symlink(str(target), str(link))
except FileExistsError:
    pass
print('Symlinked oauth_dropins_static ->', target)
PY
# Final stage
FROM ${PY_BASE}
ARG CONTAINER_PORT
ENV PORT=${CONTAINER_PORT}
WORKDIR /app
COPY --from=build /app /app
# Non-root good practice
RUN useradd -r -m -d /nonroot appuser && chown -R appuser:appuser /app
USER appuser
EXPOSE ${PORT}
# Upstream flask app entry: 'flask_app:app'
CMD ["sh", "-lc", "exec gunicorn -w 2 -k gthread -b 0.0.0.0:${PORT} flask_app:app"]
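The symlink step above is deliberately idempotent: a stale file or dangling link is removed before the new link is created, so re-running the build layer cannot fail. The same pattern in plain Python (a standalone sketch using a temp directory, not the image's actual paths):

```python
import os
import pathlib
import tempfile

def ensure_symlink(target: pathlib.Path, link: pathlib.Path) -> None:
    # Remove a stale file or symlink first so repeated runs are safe.
    if link.exists() or link.is_symlink():
        link.unlink()
    os.symlink(str(target), str(link))

tmp = pathlib.Path(tempfile.mkdtemp())
target = tmp / "static"
target.mkdir()
link = tmp / "oauth_dropins_static"
ensure_symlink(target, link)
ensure_symlink(target, link)  # second call must not raise
print(os.readlink(link))
```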

View File

@@ -1,24 +1,22 @@
 ---
 galaxy_info:
   author: "Kevin Veen-Birkenbach"
-  description: "Checks if the pkgmgr command is available and runs 'pkgmgr update --all' to update all repositories."
+  description: "Bridgy Fed: bridge between ActivityPub (Fediverse), ATProto/Bluesky and IndieWeb."
   license: "Infinito.Nexus NonCommercial License"
   license_url: "https://s.infinito.nexus/license"
   company: |
     Kevin Veen-Birkenbach
     Consulting & Coaching Solutions
     https://www.veen.world
-  min_ansible_version: "2.9"
-  platforms:
-    - name: Linux
-      versions:
-        - all
   galaxy_tags:
-    - update
-    - pkgmgr
-    - pkgmgr
-    - system
+    - activitypub
+    - bluesky
+    - atproto
+    - fediverse
+    - bridge
   repository: "https://s.infinito.nexus/code"
   issue_tracker_url: "https://s.infinito.nexus/issues"
-  documentation: "https://docs.infinito.nexus"
+  documentation: "https://fed.brid.gy/docs"
+  logo:
+    class: "fa-solid fa-bridge"
 dependencies: []

View File

@@ -0,0 +1,9 @@
- name: "Load docker and front proxy for {{ application_id }}"
include_role:
name: sys-stk-full-stateless
- name: "Include front proxy for {{ container_hostname }}:{{ ports.localhost.http[application_id] }}"
include_role:
name: sys-stk-front-proxy
- include_tasks: utils/run_once.yml

View File

@@ -0,0 +1,3 @@
- name: "Include core routines for '{{ application_id }}'"
include_tasks: "01_core.yml"
when: run_once_web_app_bridgy_fed is not defined

View File

@@ -0,0 +1,20 @@
{% include 'roles/docker-compose/templates/base.yml.j2' %}
application:
build:
context: .
dockerfile: Dockerfile
args:
BRIDGY_REPO_URL: "{{ BRIDGY_REPO_URL }}"
BRIDGY_REPO_BRANCH: "{{ BRIDGY_REPO_BRANCH }}"
CONTAINER_PORT: "{{ container_port | string }}"
image: "{{ BRIDGY_IMAGE }}:{{ BRIDGY_VERSION }}"
container_name: "{{ BRIDGY_CONTAINER }}"
hostname: "{{ container_hostname }}"
ports:
- "127.0.0.1:{{ http_port }}:{{ container_port }}"
{% include 'roles/docker-container/templates/healthcheck/tcp.yml.j2' %}
{% include 'roles/docker-container/templates/base.yml.j2' %}
{% include 'roles/docker-container/templates/networks.yml.j2' %}
{% include 'roles/docker-compose/templates/networks.yml.j2' %}

View File

@@ -0,0 +1,13 @@
# Flask / Gunicorn basics
FLASK_ENV="{{ ENVIRONMENT }}"
PORT="{{ container_port }}"
BRIDGY_ADMIN_EMAIL="{{ BRIDGY_ADMIN_EMAIL }}"
# Bridgy Fed upstream knobs (see README @ GitHub)
BRIDGY_APPVIEW_HOST="{{ BRIDGY_APPVIEW_HOST }}"
BRIDGY_PLC_HOST="{{ BRIDGY_PLC_HOST }}"
BRIDGY_BGS_HOST="{{ BRIDGY_BGS_HOST }}"
BRIDGY_PDS_HOST="{{ BRIDGY_PDS_HOST }}"
# Optional:
# GUNICORN_CMD_ARGS="--log-level info"

View File

@@ -0,0 +1,25 @@
# General
application_id: "web-app-bridgy-fed"
# Container
container_port: 8080
domain: "{{ container_hostname }}"
http_port: "{{ ports.localhost.http[application_id] }}"
container_hostname: "{{ domains | get_domain(application_id) }}"
# App container
BRIDGY_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.application.name') }}"
BRIDGY_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.application.image') }}"
BRIDGY_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.application.version') }}"
BRIDGY_ADMIN_EMAIL: "{{ users.administrator.email }}"
# Source
BRIDGY_REPO_URL: "{{ applications | get_app_conf(application_id, 'docker.services.application.repository') }}"
BRIDGY_REPO_BRANCH: "{{ applications | get_app_conf(application_id, 'docker.services.application.branch') }}"
# Runtime env defaults for Bridgy Fed (see upstream README)
BRIDGY_APPVIEW_HOST: "api.bsky.app"
BRIDGY_PLC_HOST: "plc.directory"
BRIDGY_BGS_HOST: "bsky.network"
BRIDGY_PDS_HOST: "atproto.brid.gy"

View File

@@ -1,4 +1,4 @@
-# web-app-chess
+# Chess

 ## Description

View File

@@ -1,4 +1,3 @@
-# roles/web-app-chess/config/main.yml
 credentials: {}
 docker:
   services:
@@ -13,16 +12,18 @@ docker:
   volumes:
     data: "chess_data"
 features:
-  matomo: false
-  css: false
-  desktop: false
+  matomo: true
+  css: true
+  desktop: true
   central_database: true
   logout: false
   oidc: false
 server:
   csp:
     whitelist: {}
-    flags: {}
+    flags:
+      script-src-elem:
+        unsafe-inline: true
 domains:
   canonical:
     - "chess.{{ PRIMARY_DOMAIN }}"

View File

@@ -0,0 +1,69 @@
# Multi-stage build for castling.club
# Allow a dynamic base image version in both stages
ARG CHESS_VERSION
# -------- Stage 1: build --------
FROM node:${CHESS_VERSION} AS build
# Build-time inputs
ARG CHESS_REPO_URL
ARG CHESS_REPO_REF
RUN apt-get update && apt-get install -y --no-install-recommends \
git ca-certificates openssl dumb-init python3 build-essential \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /src
RUN git clone --depth 1 --branch "${CHESS_REPO_REF}" "${CHESS_REPO_URL}" ./
# Prepare Yarn 4 as root (safe during build stage)
RUN corepack enable && corepack prepare yarn@4.9.1 --activate && yarn -v
RUN yarn install --immutable --inline-builds
RUN yarn build
# -------- Stage 2: runtime --------
FROM node:${CHESS_VERSION}
# Runtime inputs (formerly Jinja variables)
ARG CHESS_ENTRYPOINT_REL
ARG CHESS_ENTRYPOINT_INT
ARG CHESS_APP_DATA_DIR
ARG CONTAINER_PORT
WORKDIR /app
# Minimal runtime deps + curl for healthcheck
RUN apt-get update && apt-get install -y --no-install-recommends \
bash openssl dumb-init postgresql-client ca-certificates curl \
&& rm -rf /var/lib/apt/lists/*
# Copy built app
COPY --from=build /src /app
# Install entrypoint
COPY ${CHESS_ENTRYPOINT_REL} ${CHESS_ENTRYPOINT_INT}
RUN chmod +x ${CHESS_ENTRYPOINT_INT}
# Fix: enable Corepack/Yarn as root so shims land in /usr/local/bin
RUN corepack enable && corepack prepare yarn@4.9.1 --activate && yarn -v
# Create writable dirs and set ownership
RUN mkdir -p ${CHESS_APP_DATA_DIR} /app/.yarn/cache /home/node \
&& chown -R node:node /app /home/node
# Use project-local Yarn cache
ENV YARN_ENABLE_GLOBAL_CACHE=false \
YARN_CACHE_FOLDER=/app/.yarn/cache \
HOME=/home/node
# Drop privileges
USER node
# Expose the runtime port (build-time constant)
EXPOSE ${CONTAINER_PORT}
ENTRYPOINT ["dumb-init", "--"]
# Use a shell so the value can be expanded reliably
ENV CHESS_ENTRYPOINT_INT=${CHESS_ENTRYPOINT_INT}
CMD ["sh","-lc","exec \"$CHESS_ENTRYPOINT_INT\""]

View File

@@ -0,0 +1,34 @@
#!/usr/bin/env bash
set -euo pipefail
APP_KEY_FILE="${APP_KEY_FILE}"
APP_KEY_PUB="${APP_KEY_FILE}.pub"
# 1) Generate signing key pair if missing
if [[ ! -f "${APP_KEY_FILE}" || ! -f "${APP_KEY_PUB}" ]]; then
echo "[chess] generating RSA signing key pair at ${APP_KEY_FILE}"
key_dir="$(dirname "${APP_KEY_FILE}")"
key_base="$(basename "${APP_KEY_FILE}")"
( cd "${key_dir}" && bash /app/tools/gen-signing-key.sh "${key_base}" )
fi
# 1.5) Ensure Yarn is ready and deps are installed (PnP, immutable)
echo "[chess] preparing yarn & installing deps (immutable)"
corepack enable || true
yarn install --immutable --inline-builds
# 2) Wait for PostgreSQL if env is provided
if [[ -n "${PGHOST:-}" ]]; then
  echo "[chess] waiting for PostgreSQL at ${PGHOST}:${PGPORT:-5432}..."
  until pg_isready -h "${PGHOST}" -p "${PGPORT:-5432}" -U "${PGUSER:-postgres}" >/dev/null 2>&1; do
sleep 1
done
fi
# 3) Run migrations (idempotent)
echo "[chess] running migrations"
yarn migrate up
# 4) Start app
echo "[chess] starting server on port ${PORT}"
exec yarn start
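The `pg_isready` loop above is a readiness gate: poll until the database accepts connections, then proceed. The same pattern as a generic TCP wait in Python — a sketch of the technique, not the script's actual mechanism:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until a TCP port accepts connections, or give up after `timeout`."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.2)
    return False

# Demo against a throwaway local listener on an ephemeral port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
print(wait_for_port("127.0.0.1", port, timeout=5.0))  # True
srv.close()
```

Unlike `pg_isready`, a raw TCP probe only proves the port is open, not that the server is ready to authenticate queries, which is why the entrypoint uses the PostgreSQL-specific tool.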

View File

@@ -0,0 +1,12 @@
- name: "load docker, db and proxy for {{ application_id }}"
include_role:
name: sys-stk-full-stateful
- name: "Deploy '{{ CHESS_ENTRYPOINT_ABS }}'"
copy:
src: "{{ CHESS_ENTRYPOINT_FILE }}"
dest: "{{ CHESS_ENTRYPOINT_ABS }}"
notify:
- docker compose build
- include_tasks: utils/run_once.yml

View File

@@ -1,10 +0,0 @@
- block:
- name: "load docker, db and proxy for {{ application_id }}"
include_role:
name: sys-stk-full-stateful
- name: "Place entrypoint and other assets"
include_tasks: 02_assets.yml
- include_tasks: utils/run_once.yml
when: run_once_web_app_chess is not defined

View File

@@ -1,8 +1,3 @@
----
-- block:
-    - name: "load docker, db and proxy for {{ application_id }}"
-      include_role:
-        name: sys-stk-full-stateful
-    - include_tasks: utils/run_once.yml
-  when: run_once_web_app_chess is not defined
+- name: "Include core routines for '{{ application_id }}'"
+  include_tasks: "01_core.yml"
+  when: run_once_web_app_chess is not defined

View File

@@ -1,47 +0,0 @@
# Multi-stage build for castling.club
# Stage 1: build
FROM node:{{ CHESS_VERSION }} AS build
ARG CHESS_REPO_URL={{ CHESS_REPO_URL }}
ARG CHESS_REPO_REF={{ CHESS_REPO_REF }}
RUN apt-get update && apt-get install -y --no-install-recommends \
git ca-certificates openssl dumb-init python3 build-essential \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /src
RUN git clone --depth 1 --branch "${CHESS_REPO_REF}" "${CHESS_REPO_URL}" ./
# Yarn is preinstalled in Node images via corepack; enable it.
RUN corepack enable
# Install deps and build TS
RUN yarn install --frozen-lockfile && yarn build
# Stage 2: runtime
FROM node:{{ CHESS_VERSION }}
ENV NODE_ENV=production
ENV PORT={{ container_port }}
WORKDIR /app
# Minimal runtime packages + dumb-init
RUN apt-get update && apt-get install -y --no-install-recommends \
openssl dumb-init postgresql-client \
&& rm -rf /var/lib/apt/lists/*
# Copy built app
COPY --from=build /src /app
# Create data dir for signing keys & cache
RUN mkdir -p /app/data && chown -R node:node /app
VOLUME ["/app/data"]
# Entrypoint script
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
USER node
EXPOSE {{ container_port }}
ENTRYPOINT ["dumb-init", "--"]
CMD ["docker-entrypoint.sh"]

View File

@@ -4,19 +4,20 @@
       context: .
       dockerfile: Dockerfile
       args:
+        CHESS_VERSION: "{{ CHESS_VERSION }}"
         CHESS_REPO_URL: "{{ CHESS_REPO_URL }}"
         CHESS_REPO_REF: "{{ CHESS_REPO_REF }}"
-    image: "castling_custom"
+        CHESS_ENTRYPOINT_REL: "{{ CHESS_ENTRYPOINT_REL }}"
+        CHESS_ENTRYPOINT_INT: "{{ CHESS_ENTRYPOINT_INT }}"
+        CHESS_APP_DATA_DIR: "{{ CHESS_APP_DATA_DIR }}"
+        CONTAINER_PORT: "{{ container_port | string }}"
+    image: "{{ CHESS_CUSTOM_IMAGE }}"
     container_name: "{{ CHESS_CONTAINER }}"
     hostname: "{{ CHESS_HOSTNAME }}"
-    environment:
-      - NODE_ENV=production
     ports:
       - "127.0.0.1:{{ ports.localhost.http[application_id] }}:{{ container_port }}"
     volumes:
-      - 'data:/app/data'
+      - 'data:{{ CHESS_APP_DATA_DIR }}'
+    env_file:
+      - .env
 {% include 'roles/docker-container/templates/healthcheck/curl.yml.j2' %}
 {% include 'roles/docker-container/templates/base.yml.j2' %}
 {% include 'roles/docker-container/templates/depends_on/dmbs_excl.yml.j2' %}

View File

@@ -1,27 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
APP_KEY_FILE="${APP_KEY_FILE:-/app/data/{{ CHESS_KEY_FILENAME }}}"
APP_KEY_PUB="${APP_KEY_FILE}.pub"
# 1) Generate signing key pair if missing
if [[ ! -f "${APP_KEY_FILE}" || ! -f "${APP_KEY_PUB}" ]]; then
echo "[chess] generating RSA signing key pair at ${APP_KEY_FILE}"
/app/tools/gen-signing-key.sh "${APP_KEY_FILE}"
fi
# 2) Wait for PostgreSQL if env is provided
if [[ -n "${PGHOST:-}" ]]; then
echo "[chess] waiting for PostgreSQL at ${PGHOST}:${PGPORT:-5432}..."
until pg_isready -h "${PGHOST}" -p "${PGPORT:-5432}" -U "${PGUSER:-postgres}" >/dev/null 2>&1; do
sleep 1
done
fi
# 3) Run migrations (idempotent)
echo "[chess] running migrations"
yarn migrate up
# 4) Start app
echo "[chess] starting server on port ${PORT:-5080}"
exec yarn start

View File

@@ -1,11 +1,11 @@
 # App basics
-APP_SCHEME="{{ 'https' if WEB_PROTOCOL == 'https' else 'http' }}"
+APP_SCHEME="{{ WEB_PROTOCOL }}"
 APP_DOMAIN="{{ CHESS_HOSTNAME }}"
 APP_ADMIN_URL="{{ CHESS_ADMIN_URL }}"
 APP_ADMIN_EMAIL="{{ CHESS_ADMIN_EMAIL }}"
-APP_KEY_FILE="/app/data/{{ CHESS_KEY_FILENAME }}"
+APP_KEY_FILE="{{ CHESS_APP_KEY_FILE }}"
 APP_HMAC_SECRET="{{ CHESS_HMAC_SECRET }}"
-NODE_ENV="production"
+NODE_ENV="{{ ENVIRONMENT }}"
 PORT="{{ container_port }}"

 # PostgreSQL (libpq envs)

View File

@@ -1,17 +1,20 @@
 # General
 application_id: "web-app-chess"
 database_type: "postgres"

+# Container
 container_port: 5080
 container_hostname: "{{ domains | get_domain(application_id) }}"

 # App URLs & meta
-#CHESS_URL: "{{ domains | get_url(application_id, WEB_PROTOCOL) }}"
+# CHESS_URL: "{{ domains | get_url(application_id, WEB_PROTOCOL) }}"
 CHESS_HOSTNAME: "{{ container_hostname }}"
 CHESS_ADMIN_URL: ""
-CHESS_ADMIN_EMAIL: ""
+CHESS_ADMIN_EMAIL: "{{ users.administrator.email }}"

 # Docker image
 #CHESS_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.application.image') }}"
+CHESS_CUSTOM_IMAGE: "castling_custom"
 CHESS_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.application.version') }}"
 CHESS_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.application.name') }}"
 CHESS_DATA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
@@ -23,3 +26,10 @@ CHESS_REPO_REF: "{{ applications | get_app_conf(application_id,
 # Security
 CHESS_HMAC_SECRET: "{{ lookup('password', '/dev/null length=63 chars=ascii_letters,digits') }}"
 CHESS_KEY_FILENAME: "signing-key"
+CHESS_APP_DATA_DIR: '/app/data'
+CHESS_APP_KEY_FILE: "{{ [ CHESS_APP_DATA_DIR, CHESS_KEY_FILENAME ] | path_join }}"
+CHESS_ENTRYPOINT_FILE: "docker-entrypoint.sh"
+CHESS_ENTRYPOINT_REL: "{{ CHESS_ENTRYPOINT_FILE }}"
+CHESS_ENTRYPOINT_ABS: "{{ [ docker_compose.directories.instance, CHESS_ENTRYPOINT_REL ] | path_join }}"
+CHESS_ENTRYPOINT_INT: "{{ ['/usr/local/bin', CHESS_ENTRYPOINT_FILE] | path_join }}"

View File

@@ -17,7 +17,7 @@ The role builds a minimal custom image on top of the official Confluence image,
* **JVM Auto-Tuning:** `JVM_MINIMUM_MEMORY` / `JVM_MAXIMUM_MEMORY` computed from host memory with upper bounds. * **JVM Auto-Tuning:** `JVM_MINIMUM_MEMORY` / `JVM_MAXIMUM_MEMORY` computed from host memory with upper bounds.
* **Health Checks:** Curl-based container healthcheck for early failure detection. * **Health Checks:** Curl-based container healthcheck for early failure detection.
* **CSP & Canonical Domains:** Hooks into platform CSP/SSL/domain management to keep policies strict and URLs stable. * **CSP & Canonical Domains:** Hooks into platform CSP/SSL/domain management to keep policies strict and URLs stable.
* **Backup Friendly:** Data isolated under `/var/atlassian/application-data/confluence`. * **Backup Friendly:** Data isolated under `{{ CONFLUENCE_HOME }}`.
## Further Resources ## Further Resources

View File

@@ -20,9 +20,14 @@ features:
server: server:
csp: csp:
whitelist: {} whitelist: {}
flags: {} flags:
script-src-elem:
unsafe-inline: true
script-src:
unsafe-inline: true
domains: domains:
canonical: canonical:
- "confluence.{{ PRIMARY_DOMAIN }}" - "confluence.{{ PRIMARY_DOMAIN }}"
rbac: rbac:
roles: {} roles: {}
truststore_enabled: false

View File

@@ -4,5 +4,7 @@ FROM "{{ CONFLUENCE_IMAGE }}:{{ CONFLUENCE_VERSION }}"
# COPY ./plugins/atlassian-sso-dc-latest.obr /opt/atlassian/confluence/confluence/WEB-INF/atlassian-bundled-plugins/ # COPY ./plugins/atlassian-sso-dc-latest.obr /opt/atlassian/confluence/confluence/WEB-INF/atlassian-bundled-plugins/
# Ensure proper permissions for app data # Ensure proper permissions for app data
RUN mkdir -p /var/atlassian/application-data/confluence && \ RUN mkdir -p {{ CONFLUENCE_HOME }} && \
chown -R 2001:2001 /var/atlassian/application-data/confluence chown -R 2001:2001 {{ CONFLUENCE_HOME }}
RUN printf "confluence.home={{ CONFLUENCE_HOME }}\n" \
> /opt/atlassian/confluence/confluence/WEB-INF/classes/confluence-init.properties

View File

@@ -9,7 +9,7 @@
ports: ports:
- "127.0.0.1:{{ ports.localhost.http[application_id] }}:8090" - "127.0.0.1:{{ ports.localhost.http[application_id] }}:8090"
volumes: volumes:
- 'data:/var/atlassian/application-data/confluence' - 'data:{{ CONFLUENCE_HOME }}'
{% include 'roles/docker-container/templates/healthcheck/curl.yml.j2' %} {% include 'roles/docker-container/templates/healthcheck/curl.yml.j2' %}
{% include 'roles/docker-container/templates/base.yml.j2' %} {% include 'roles/docker-container/templates/base.yml.j2' %}
{% include 'roles/docker-container/templates/depends_on/dmbs_excl.yml.j2' %} {% include 'roles/docker-container/templates/depends_on/dmbs_excl.yml.j2' %}

View File

@@ -1,6 +1,6 @@
## Confluence core ## Confluence core
CONFLUENCE_URL="{{ CONFLUENCE_URL }}" CONFLUENCE_URL="{{ CONFLUENCE_URL }}"
CONFLUENCE_HOME="{{ CONFLUENCE_HOME }}"
ATL_PROXY_NAME={{ CONFLUENCE_HOSTNAME }} ATL_PROXY_NAME={{ CONFLUENCE_HOSTNAME }}
ATL_PROXY_PORT={{ WEB_PORT }} ATL_PROXY_PORT={{ WEB_PORT }}
@@ -9,15 +9,17 @@ ATL_TOMCAT_SECURE={{ (WEB_PORT == 443) | lower }}
JVM_MINIMUM_MEMORY={{ CONFLUENCE_JVM_MIN }} JVM_MINIMUM_MEMORY={{ CONFLUENCE_JVM_MIN }}
JVM_MAXIMUM_MEMORY={{ CONFLUENCE_JVM_MAX }} JVM_MAXIMUM_MEMORY={{ CONFLUENCE_JVM_MAX }}
JVM_SUPPORT_RECOMMENDED_ARGS=-Datlassian.home={{ CONFLUENCE_HOME }} -Datlassian.upm.signature.check.disabled={{ CONFLUENCE_TRUST_STORE_ENABLED | ternary('false','true')}}
## Database ## Database
ATL_DB_TYPE=postgres72 ATL_DB_TYPE=postgresql
ATL_DB_DRIVER=org.postgresql.Driver ATL_DB_DRIVER=org.postgresql.Driver
ATL_JDBC_URL=jdbc:postgresql://{{ database_host }}:{{ database_port }}/{{ database_name }} ATL_JDBC_URL=jdbc:postgresql://{{ database_host }}:{{ database_port }}/{{ database_name }}
ATL_JDBC_USER={{ database_username }} ATL_JDBC_USER={{ database_username }}
ATL_JDBC_PASSWORD={{ database_password }} ATL_JDBC_PASSWORD={{ database_password }}
## OIDC
{% if CONFLUENCE_OIDC_ENABLED %} {% if CONFLUENCE_OIDC_ENABLED %}
## OIDC
CONFLUENCE_OIDC_TITLE="{{ CONFLUENCE_OIDC_LABEL | replace('\"','\\\"') }}" CONFLUENCE_OIDC_TITLE="{{ CONFLUENCE_OIDC_LABEL | replace('\"','\\\"') }}"
CONFLUENCE_OIDC_ISSUER="{{ CONFLUENCE_OIDC_ISSUER }}" CONFLUENCE_OIDC_ISSUER="{{ CONFLUENCE_OIDC_ISSUER }}"
CONFLUENCE_OIDC_AUTHORIZATION_ENDPOINT="{{ CONFLUENCE_OIDC_AUTH_URL }}" CONFLUENCE_OIDC_AUTHORIZATION_ENDPOINT="{{ CONFLUENCE_OIDC_AUTH_URL }}"

View File

@@ -11,6 +11,7 @@ container_hostname: "{{ domains | get_domain(application_id) }}"
## URLs ## URLs
CONFLUENCE_URL: "{{ domains | get_url(application_id, WEB_PROTOCOL) }}" CONFLUENCE_URL: "{{ domains | get_url(application_id, WEB_PROTOCOL) }}"
CONFLUENCE_HOSTNAME: "{{ container_hostname }}" CONFLUENCE_HOSTNAME: "{{ container_hostname }}"
CONFLUENCE_HOME: "/var/atlassian/application-data/confluence"
## OIDC ## OIDC
CONFLUENCE_OIDC_ENABLED: "{{ applications | get_app_conf(application_id, 'features.oidc') }}" CONFLUENCE_OIDC_ENABLED: "{{ applications | get_app_conf(application_id, 'features.oidc') }}"
@@ -38,4 +39,8 @@ CONFLUENCE_TOTAL_MB: "{{ ansible_memtotal_mb | int }}"
CONFLUENCE_JVM_MAX_MB: "{{ [ (CONFLUENCE_TOTAL_MB | int // 2), 12288 ] | min }}" CONFLUENCE_JVM_MAX_MB: "{{ [ (CONFLUENCE_TOTAL_MB | int // 2), 12288 ] | min }}"
CONFLUENCE_JVM_MIN_MB: "{{ [ (CONFLUENCE_TOTAL_MB | int // 4), (CONFLUENCE_JVM_MAX_MB | int) ] | min }}" CONFLUENCE_JVM_MIN_MB: "{{ [ (CONFLUENCE_TOTAL_MB | int // 4), (CONFLUENCE_JVM_MAX_MB | int) ] | min }}"
CONFLUENCE_JVM_MIN: "{{ CONFLUENCE_JVM_MIN_MB }}m" CONFLUENCE_JVM_MIN: "{{ CONFLUENCE_JVM_MIN_MB }}m"
CONFLUENCE_JVM_MAX: "{{ CONFLUENCE_JVM_MAX_MB }}m" CONFLUENCE_JVM_MAX: "{{ CONFLUENCE_JVM_MAX_MB }}m"
## Options
CONFLUENCE_TRUST_STORE_ENABLED: "{{ applications | get_app_conf(application_id, 'truststore_enabled') }}"

View File
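The JVM sizing in the Confluence vars above follows one rule: the max heap is half the host memory capped at 12288 MB, and the min heap is a quarter of host memory, never exceeding the max. A minimal Python sketch of that arithmetic (the function name is illustrative, not part of the role):

```python
def confluence_jvm_heaps(total_mb: int) -> tuple[int, int]:
    """Mirror the role's JVM sizing (values in MB):
    max = min(total // 2, 12288), min = min(total // 4, max)."""
    jvm_max = min(total_mb // 2, 12288)
    jvm_min = min(total_mb // 4, jvm_max)
    return jvm_min, jvm_max

# e.g. a 32 GiB host gets JVM_MINIMUM_MEMORY=8192m, JVM_MAXIMUM_MEMORY=12288m
print(confluence_jvm_heaps(32768))
```

The cap keeps oversized hosts from handing Confluence an excessively large heap, while the min-vs-max clamp guarantees `JVM_MINIMUM_MEMORY` can never exceed `JVM_MAXIMUM_MEMORY` on small hosts.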

@@ -21,7 +21,13 @@ features:
server: server:
csp: csp:
whitelist: {} whitelist: {}
flags: {} flags:
script-src-elem:
unsafe-inline: true
unsafe-eval: true
script-src:
unsafe-inline: true
unsafe-eval: true
domains: domains:
canonical: canonical:
- "jira.{{ PRIMARY_DOMAIN }}" - "jira.{{ PRIMARY_DOMAIN }}"

View File

@@ -77,23 +77,16 @@
}} }}
include_tasks: _update.yml include_tasks: _update.yml
- name: "Update REALM mail settings" - name: "Update REALM mail settings from realm dictionary (SPOT)"
include_tasks: _update.yml include_tasks: _update.yml
vars: vars:
kc_object_kind: "realm" kc_object_kind: "realm"
kc_lookup_field: "id" kc_lookup_field: "id"
kc_lookup_value: "{{ KEYCLOAK_REALM }}" kc_lookup_value: "{{ KEYCLOAK_REALM }}"
kc_desired: kc_desired:
smtpServer: smtpServer: "{{ KEYCLOAK_DICTIONARY_REALM.smtpServer | default({}, true) }}"
from: "no-reply@{{ DEFAULT_SYSTEM_EMAIL.DOMAIN }}" kc_merge_path: "smtpServer"
fromDisplayName: "{{ SOFTWARE_NAME | default('Infinito.Nexus') }}" no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
host: "{{ DEFAULT_SYSTEM_EMAIL.HOST }}"
port: "{{ DEFAULT_SYSTEM_EMAIL.PORT }}"
# Keycloak expects strings "true"/"false"
ssl: "{{ 'true' if not DEFAULT_SYSTEM_EMAIL.START_TLS and DEFAULT_SYSTEM_EMAIL.TLS else 'false' }}"
starttls: "{{ 'true' if DEFAULT_SYSTEM_EMAIL.START_TLS else 'false' }}"
user: "{{ DEFAULT_SYSTEM_EMAIL.USER | default('') }}"
password: "{{ DEFAULT_SYSTEM_EMAIL.PASSWORD | default('') }}"
- include_tasks: 05_rbac_client_scope.yml - include_tasks: 05_rbac_client_scope.yml

View File

@@ -1443,20 +1443,7 @@
"xXSSProtection": "1; mode=block", "xXSSProtection": "1; mode=block",
"strictTransportSecurity": "max-age=31536000; includeSubDomains" "strictTransportSecurity": "max-age=31536000; includeSubDomains"
}, },
"smtpServer": { {%- include "smtp_server.json.j2" -%},
"password": "{{ users['no-reply'].mailu_token }}",
"replyToDisplayName": "",
"starttls": "{{ SYSTEM_EMAIL.START_TLS | lower }}",
"auth": "true",
"port": "{{ SYSTEM_EMAIL.PORT }}",
"replyTo": "",
"host": "{{ SYSTEM_EMAIL.HOST }}",
"from": "{{ users['no-reply'].email }}",
"fromDisplayName": "Keycloak Authentification System - {{ KEYCLOAK_DOMAIN | upper }}",
"envelopeFrom": "",
"ssl": "true",
"user": "{{ users['no-reply'].email }}"
},
"eventsEnabled": false, "eventsEnabled": false,
"eventsListeners": [ "eventsListeners": [
"jboss-logging" "jboss-logging"

View File

@@ -0,0 +1,14 @@
"smtpServer": {
"password": "{{ users['no-reply'].mailu_token }}",
"replyToDisplayName": "",
"starttls": "{{ SYSTEM_EMAIL.START_TLS | lower }}",
"auth": "true",
"port": "{{ SYSTEM_EMAIL.PORT }}",
"replyTo": "",
"host": "{{ SYSTEM_EMAIL.HOST }}",
"from": "{{ users['no-reply'].email }}",
"fromDisplayName": "Keycloak Authentication System - {{ KEYCLOAK_DOMAIN | upper }}",
"envelopeFrom": "",
"ssl": "{{ (SYSTEM_EMAIL.TLS and not SYSTEM_EMAIL.START_TLS) | ternary('true','false') }}",
"user": "{{ users['no-reply'].email }}"
}
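The `ssl`/`starttls` pair in the template above encodes one rule: Keycloak wants string booleans, and `ssl` (implicit TLS) only applies when TLS is enabled and STARTTLS is not. A small Python sketch of that mapping (function name illustrative):

```python
def keycloak_smtp_flags(tls: bool, start_tls: bool) -> dict:
    """Keycloak expects 'true'/'false' strings; 'ssl' means implicit TLS,
    which holds only when TLS is on and STARTTLS is off."""
    return {
        "ssl": "true" if tls and not start_tls else "false",
        "starttls": "true" if start_tls else "false",
    }

# implicit TLS (e.g. SMTPS on port 465)
print(keycloak_smtp_flags(tls=True, start_tls=False))
```

With STARTTLS enabled the connection begins in plaintext and upgrades, so `ssl` must be `"false"` even though the session ends up encrypted; setting both to `"true"` would be contradictory.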

View File

@@ -30,6 +30,14 @@
chdir: "{{ docker_compose.directories.instance }}" chdir: "{{ docker_compose.directories.instance }}"
when: "'No relations found.' in db_tables.stdout" when: "'No relations found.' in db_tables.stdout"
- name: "Listmonk | run DB/schema upgrade (non-interactive)"
ansible.builtin.shell: |
set -o pipefail
echo "y" | docker compose run -T application ./listmonk --upgrade
args:
chdir: "{{ docker_compose.directories.instance }}"
when: MODE_UPDATE | bool
- name: Build OIDC settings JSON - name: Build OIDC settings JSON
set_fact: set_fact:
oidc_settings_json: >- oidc_settings_json: >-
@@ -73,3 +81,4 @@
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}" no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
async: "{{ ASYNC_TIME if ASYNC_ENABLED | bool else omit }}" async: "{{ ASYNC_TIME if ASYNC_ENABLED | bool else omit }}"
poll: "{{ ASYNC_POLL if ASYNC_ENABLED | bool else omit }}" poll: "{{ ASYNC_POLL if ASYNC_ENABLED | bool else omit }}"

View File

@@ -0,0 +1,43 @@
# Magento
## Description
**Magento (Adobe Commerce Open Source)** is a powerful, extensible e-commerce platform built with PHP. It supports multi-store setups, advanced catalog management, promotions, checkout flows, and a rich extension ecosystem.
## Overview
This role deploys **Magento 2** via Docker Compose. It is aligned with the Infinito.Nexus stack patterns:
- Reverse-proxy integration (front proxy handled by platform roles)
- Optional **central database** (MariaDB) or app-local DB
- **OpenSearch** for catalog search (required by Magento 2.4+)
- Optional **Redis** cache/session (can be toggled)
- Health checks, volumes, and environment templating
- SMTP wired via platform's `SYSTEM_EMAIL` settings
For setup & operations, see:
- [Installation.md](./Installation.md)
- [Administration.md](./Administration.md)
- [Upgrade.md](./Upgrade.md)
- [User_Administration.md](./User_Administration.md)
## Features
- **Modern search:** OpenSearch out of the box (single-node).
- **Flexible DB:** Use platform's central MariaDB or app-local DB.
- **Optional Redis:** Toggle cache/session backend.
- **Proxy-aware:** Exposes HTTP on localhost, picked up by front proxy role.
- **Automation-friendly:** Admin user seeded from inventory variables.
## Further Resources
- Magento Open Source: https://magento.com/
- DevDocs: https://developer.adobe.com/commerce/
- OpenSearch: https://opensearch.org/
## License / Credits
Developed and maintained by **Kevin Veen-Birkenbach**.
Learn more at [veen.world](https://www.veen.world).
Part of the [Infinito.Nexus Project](https://s.infinito.nexus/code)
Licensed under [Infinito.Nexus NonCommercial License](https://s.infinito.nexus/license).

View File

@@ -0,0 +1,2 @@
# To-dos
- Finish implementation

View File

@@ -0,0 +1,43 @@
features:
matomo: true
css: true
desktop: true
  central_database: false   # Central database cannot be used due to Magento's strict database checks
oidc: false # Magento SSO via OIDC requires extensions; not wired by default
logout: true
ldap: false
server:
csp:
whitelist: {}
domains:
canonical:
- "shop.{{ PRIMARY_DOMAIN }}"
aliases:
- "magento.{{ PRIMARY_DOMAIN }}"
docker:
services:
php:
image: "markoshust/magento-php"
version: "8.2-fpm"
name: "magento-php"
backup:
no_stop_required: true
nginx:
image: "markoshust/magento-nginx"
version: "latest"
name: "magento-nginx"
backup:
no_stop_required: true
database:
enabled: true
version: "11.4"
redis:
enabled: true
search:
enabled: true
image: "opensearchproject/opensearch"
version: "latest"
name: "magento-opensearch"
volumes:
data: "magento_data"

View File

@@ -0,0 +1,25 @@
---
galaxy_info:
  author: "Kevin Veen-Birkenbach"
description: "Deploy Magento (Adobe Commerce Open Source) via Docker Compose with OpenSearch, MariaDB, optional Redis, and proxy integration for Infinito.Nexus."
license: "Infinito.Nexus NonCommercial License"
license_url: "https://s.infinito.nexus/license"
company: |
    Kevin Veen-Birkenbach
Consulting & Coaching Solutions
https://www.veen.world
galaxy_tags:
- magento
- ecommerce
- php
- docker
- opensearch
- mariadb
repository: "https://s.infinito.nexus/code"
issue_tracker_url: "https://s.infinito.nexus/issues"
documentation: "https://docs.infinito.nexus"
logo:
class: "fa-solid fa-cart-shopping"
run_after:
- web-app-keycloak
dependencies: []

View File

@@ -0,0 +1,7 @@
credentials:
adobe_public_key:
description: "Adobe/Magento Marketplace Public Key"
algorithm: "plain"
adobe_private_key:
description: "Adobe/Magento Marketplace Private Key"
algorithm: "plain"

View File

@@ -0,0 +1,51 @@
- name: "load docker, db/redis/proxy for {{ application_id }}"
include_role:
name: sys-stk-full-stateful
vars:
docker_compose_flush_handlers: true
- name: "Bootstrap Magento 2.4.8 source (exact working variant)"
command: >
docker exec
-e COMPOSER_AUTH='{"http-basic":{"repo.magento.com":{"username":"{{ MAGENTO_REPO_PUBLIC_KEY }}","password":"{{ MAGENTO_REPO_PRIVATE_KEY }}"}}}'
-e COMPOSER_HOME=/tmp/composer
-e COMPOSER_CACHE_DIR=/tmp/composer/cache
--user {{ MAGENTO_USER }}
{{ MAGENTO_PHP_CONTAINER }} bash -lc 'set -e
mkdir -p /tmp/composer/cache
cd /var/www/html
composer create-project --no-interaction --no-progress --repository-url=https://repo.magento.com/ magento/project-community-edition=2.4.8 .
mkdir -p var pub/static pub/media app/etc
chmod -R 775 var pub/static pub/media app/etc
'
args:
creates: "{{ [ (MAGENTO_VOLUME | docker_volume_path), 'bin/magento' ] | path_join }}"
- name: "Run Magento setup:install (in container)"
command: >
docker exec --user {{ MAGENTO_USER }} {{ MAGENTO_PHP_CONTAINER }} bash -lc "
cd /var/www/html && bin/magento setup:install \
--base-url='{{ MAGENTO_URL }}/' \
--db-host=\"$MYSQL_HOST\" \
--db-name=\"$MYSQL_DATABASE\" \
--db-user=\"$MYSQL_USER\" \
--db-password=\"$MYSQL_PASSWORD\" \
--skip-db-validation \
--db-engine=mysql \
--search-engine='opensearch' \
--opensearch-host=\"$OPENSEARCH_HOST\" \
--opensearch-port=\"$OPENSEARCH_PORT_NUMBER\" \
--admin-firstname=\"$MAGENTO_ADMIN_FIRSTNAME\" \
--admin-lastname=\"$MAGENTO_ADMIN_LASTNAME\" \
--admin-email=\"$MAGENTO_ADMIN_EMAIL\" \
--admin-user=\"$MAGENTO_ADMIN_USERNAME\" \
--admin-password=\"$MAGENTO_ADMIN_PASSWORD\""
args:
creates: "{{ [ (MAGENTO_VOLUME | docker_volume_path), 'app/etc/env.php' ] | path_join }}"
register: magento_install
changed_when: >
(magento_install.stdout is defined and
('Magento installation complete' in magento_install.stdout
or 'successfully installed' in magento_install.stdout))
- include_tasks: utils/run_once.yml

View File

@@ -0,0 +1,35 @@
---
- name: Assert required vars (no defaults anywhere)
assert:
that:
- MAGENTO_DOMAIN is defined and MAGENTO_DOMAIN | length > 0
- MAGENTO_NGINX_PORT is defined
- MAGENTO_PHP_HOST is defined and MAGENTO_PHP_HOST | length > 0
- MAGENTO_PHP_PORT is defined
- docker_compose.directories.config is defined and docker_compose.directories.config | length > 0
fail_msg: "Missing one of: MAGENTO_DOMAIN, MAGENTO_NGINX_PORT, MAGENTO_PHP_HOST, MAGENTO_PHP_PORT, docker_compose.directories.config"
- name: Ensure subdirs exist (config root exists already)
file:
path: "{{ item }}"
state: directory
mode: '0755'
loop:
- "{{ MAGENTO_NGINX_DIR }}"
- "{{ MAGENTO_PHP_DIR }}"
- name: Render nginx main config (no TLS; single source of truth)
template:
src: "nginx.conf.j2"
dest: "{{ MAGENTO_NGINX_CONF_PATH }}"
mode: '0644'
force: true
notify: docker compose up
- name: Render php-fpm pool override (TCP listen; clear_env=no)
template:
src: "php-fpm-zz-docker.conf.j2"
dest: "{{ MAGENTO_PHP_ZZ_CONF_PATH }}"
mode: '0644'
force: true
notify: docker compose up

View File

@@ -0,0 +1,4 @@
---
- name: "construct {{ role_name }}"
include_tasks: 01_core.yml
when: run_once_web_app_magento is not defined

View File

@@ -0,0 +1,51 @@
{% include 'roles/docker-compose/templates/base.yml.j2' %}
nginx:
{% set container_port = 8000 %}
image: "{{ MAGENTO_NGINX_IMAGE }}:{{ MAGENTO_NGINX_VERSION }}"
container_name: "{{ MAGENTO_NGINX_CONTAINER }}"
environment:
PHP_HOST: "php"
PHP_PORT: "9000"
depends_on:
- php
- search
volumes:
- "data:/var/www/html"
ports:
- "127.0.0.1:{{ ports.localhost.http[application_id] }}:{{ container_port }}"
healthcheck:
test: ["CMD-SHELL", "nginx -t >/dev/null 2>&1 && { grep -q ':1F40' /proc/net/tcp || grep -q ':1F40' /proc/net/tcp6; }"]
interval: 10s
timeout: 5s
retries: 5
{% include 'roles/docker-container/templates/networks.yml.j2' %}
php:
{% include 'roles/docker-container/templates/base.yml.j2' %}
image: "{{ MAGENTO_PHP_IMAGE }}:{{ MAGENTO_PHP_VERSION }}"
container_name: "{{ MAGENTO_PHP_CONTAINER }}"
volumes:
- "data:/var/www/html"
{% include 'roles/docker-container/templates/depends_on/dmbs_incl.yml.j2' %}
search:
condition: service_started
{% include 'roles/docker-container/templates/networks.yml.j2' %}
search:
{% set container_port = 9200 %}
image: "{{ MAGENTO_SEARCH_IMAGE }}:{{ MAGENTO_SEARCH_VERSION }}"
container_name: "{{ MAGENTO_SEARCH_CONTAINER }}"
{% include 'roles/docker-container/templates/base.yml.j2' %}
environment:
- discovery.type=single-node
- plugins.security.disabled=true
- OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m
{% include 'roles/docker-container/templates/healthcheck/tcp.yml.j2' %}
{% include 'roles/docker-container/templates/networks.yml.j2' %}
{% include 'roles/docker-compose/templates/volumes.yml.j2' %}
data:
name: {{ MAGENTO_VOLUME }}
{% include 'roles/docker-compose/templates/networks.yml.j2' %}
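The nginx healthcheck in the compose template greps `/proc/net/tcp` for `:1F40` because the kernel socket table prints local ports as four-digit uppercase hex, and `1F40` is port 8000. A quick check of that conversion:

```python
# /proc/net/tcp lists local addresses as HEXIP:HEXPORT,
# so a listener on port 8000 shows up as ':1F40'.
port = 8000
hex_port = f":{port:04X}"
print(hex_port)  # → :1F40
```

This avoids needing `curl` or `ss` inside the slim nginx image; if the listen port ever changes, the hex literal in the healthcheck must change with it.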

View File

@@ -0,0 +1,41 @@
# Magento environment
# Base references:
# - https://github.com/bitnami/containers/tree/main/bitnami/magento
# Host & URLs
MAGENTO_URL="{{ MAGENTO_URL }}"
MAGENTO_BACKEND_FRONTNAME="admin"
MAGENTO_USE_SECURE={{ (WEB_PORT == 443) | ternary('1','0') }}
MAGENTO_BASE_URL_SECURE={{ (WEB_PORT == 443) | ternary('1','0') }}
MAGENTO_USE_SECURE_ADMIN={{ (WEB_PORT == 443) | ternary('1','0') }}
# Admin (seed from global administrator)
MAGENTO_ADMIN_USERNAME="{{ users.administrator.username }}"
MAGENTO_ADMIN_PASSWORD="{{ users.administrator.password }}"
MAGENTO_ADMIN_EMAIL="{{ users.administrator.email }}"
MAGENTO_ADMIN_FIRSTNAME="{{ users.administrator.firstname | default('Admin') }}"
MAGENTO_ADMIN_LASTNAME="{{ users.administrator.lastname | default('User') }}"
# Database (central DB preferred)
MYSQL_HOST="{{ database_host }}"
MYSQL_PORT="{{ database_port }}"
MYSQL_USER="{{ database_username }}"
MYSQL_PASSWORD="{{ database_password }}"
MYSQL_DATABASE="{{ database_name }}"
# Search (Magento 2.4+)
OPENSEARCH_HOST="search"
OPENSEARCH_PORT_NUMBER="9200"
OPENSEARCH_INITIAL_ADMIN_PASSWORD="{{ users.administrator.password }}"
# SMTP (post-install you'll wire these in Magento admin or env.php)
SMTP_HOST="{{ SYSTEM_EMAIL.HOST }}"
SMTP_PORT="{{ SYSTEM_EMAIL.PORT }}"
SMTP_USER="{{ users['no-reply'].email }}"
SMTP_PASSWORD="{{ users['no-reply'].mailu_token }}"
SMTP_PROTOCOL={{ SYSTEM_EMAIL.TLS | ternary('tls','ssl') }}
# Misc
PHP_MEMORY_LIMIT="768M"
APACHE_SERVERNAME={{ MAGENTO_DOMAIN }}

View File

@@ -0,0 +1,47 @@
worker_processes auto;
events { worker_connections 1024; }
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
access_log /dev/stdout;
error_log /dev/stderr;
upstream fastcgi_backend {
server {{ MAGENTO_PHP_HOST }}:{{ MAGENTO_PHP_PORT }};
}
server {
listen {{ MAGENTO_NGINX_PORT }};
server_name {{ MAGENTO_DOMAIN }};
set $MAGE_ROOT /var/www/html;
root $MAGE_ROOT/pub;
index index.php;
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_index index.php;
fastcgi_pass fastcgi_backend;
fastcgi_read_timeout 300;
fastcgi_connect_timeout 5s;
}
location ~* ^/(app|var|lib|dev|update|vendor|node_modules|\.git|\.svn)/ { deny all; }
location ~ /\. { deny all; }
error_page 404 403 = /errors/404.php;
}
}

View File

@@ -0,0 +1,15 @@
[global]
error_log = /proc/self/fd/2
[www]
listen = 0.0.0.0:{{ MAGENTO_PHP_PORT }}
clear_env = no
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 5
access.log = /proc/self/fd/2
catch_workers_output = yes

Some files were not shown because too many files have changed in this diff.