49 Commits

Author SHA1 Message Date
eca567fefd Made Gitea LDAP source independent of the primary domain 2025-10-18 10:54:39 +02:00
905f461ee8 Add basic healthcheck to oauth2-proxy container template using binary version check for distroless compatibility
Reference: https://chatgpt.com/share/68f35550-4248-800f-9c6a-dbd49a48592e
2025-10-18 10:52:58 +02:00
9f0b259ba9 Merge branch 'master' of github.com:kevinveenbirkenbach/infinito-nexus 2025-10-18 09:41:18 +02:00
06e4323faa Added ansible environment 2025-10-17 23:07:43 +02:00
3d99226f37 Refactor BigBlueButton and backup task structure:
- Moved database seed variables from vars/main.yml to task-level include in BigBlueButton
- Simplified core include logic in sys-ctl-bkp-docker-2-loc
- Ensured clean conditional for BKP_DOCKER_2_LOC_DB_ENABLED
See: https://chatgpt.com/share/68f216f7-62d8-800f-94e3-c82e4418e51b (in German)
2025-10-17 12:14:39 +02:00
73ba09fbe2 Optimize SSH connection performance by disabling GSSAPI authentication and reverse DNS lookups
- Added 'GSSAPIAuthentication no' to prevent unnecessary Kerberos negotiation delays.
- Added 'UseDNS no' to skip reverse DNS resolution during SSH login, improving connection speed.
- Both changes improve SSH responsiveness, especially in non-domain environments.

Reference: https://chatgpt.com/share/68efc179-1a10-800f-9656-1e8731b40546
2025-10-15 18:37:09 +02:00
01ea9b76ce Enable pipelining globally and modernize SSH settings
- Activated pipelining in [defaults] for better performance.
- Replaced deprecated 'scp_if_ssh' with 'transfer_method'.
- Flattened multi-line ssh_args for compatibility.
- Verified configuration parsing as discussed in https://chatgpt.com/share/68efc179-1a10-800f-9656-1e8731b40546
2025-10-15 17:45:16 +02:00
c22acf202f Fixed bugs 2025-10-15 17:03:57 +02:00
61e138c1a6 Optimize OpenLDAP container resources for up to 5k users (1.25 CPU / 1.5GB RAM / 1024 PIDs). See https://chatgpt.com/share/68ef7228-4028-800f-8986-54206a51b9c1 2025-10-15 12:06:51 +02:00
07c8e036ec Deactivated 'changed_when' because it is not trackable anyway 2025-10-15 10:27:12 +02:00
0b36059cd2 feat(web-app-gitea): add optional Redis integration for caching, sessions, and queues
This update introduces conditional Redis support for Gitea, allowing connection
to either a local or centralized Redis instance depending on configuration.
Includes resource limits for the Redis service and corresponding environment
variables for cache, session, and queue backends.

Reference: ChatGPT conversation on centralized vs per-app Redis architecture (2025-10-15).
https://chatgpt.com/share/68ef5930-49c8-800f-b6b8-069e6fefda01
2025-10-15 10:20:18 +02:00
d76e384ae3 Enhance CertUtils to return the newest matching certificate and add comprehensive unit tests
- Added run_openssl_dates() to extract notBefore/notAfter timestamps.
- Modified mapping logic to store multiple cert entries per SAN with metadata.
- find_cert_for_domain() now selects the newest certificate based on notBefore and mtime.
- Exact SAN matches take precedence over wildcard matches.
- Added new unit tests (test_cert_utils_newest.py) verifying freshness logic, fallback handling, and wildcard behavior.

Reference: https://chatgpt.com/share/68ef4b4c-41d4-800f-9e50-5da4b6be1105
2025-10-15 09:21:00 +02:00
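The selection rule from this commit — exact SAN matches beat wildcards, and within each group the newest certificate wins by (notBefore, mtime) — can be sketched as a small standalone function (illustrative simplification; the entry fields mirror the mapping described above, not the actual CertUtils class):

```python
# Simplified sketch of the freshness logic described above.
# 'entries' mimics the mapping values: one dict per certificate/SAN pair.

def score(entry):
    # Newest-first ordering key: prefer notBefore, fall back to mtime.
    nb = entry.get("not_before")
    return (nb if nb is not None else -1, entry.get("mtime", 0.0))

def pick_folder(domain, entries):
    exact = [e for e in entries if e["san"] == domain]
    wild = [
        e for e in entries
        if e["san"].startswith("*.")
        and domain.endswith("." + e["san"][2:])
        and domain.count(".") == e["san"][2:].count(".") + 1
    ]
    # Exact SAN matches take precedence over wildcard matches.
    group = exact or wild
    return max(group, key=score)["folder"] if group else None

entries = [
    {"san": "example.org", "folder": "old", "not_before": 100, "mtime": 1.0},
    {"san": "example.org", "folder": "new", "not_before": 200, "mtime": 2.0},
    {"san": "*.example.org", "folder": "wild", "not_before": 300, "mtime": 3.0},
]
print(pick_folder("example.org", entries))      # exact group wins, newest entry chosen
print(pick_folder("www.example.org", entries))  # falls back to the wildcard match
```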
e6f4f3a6a4 feat(cli/build/defaults): ensure deterministic alphabetical sorting for applications and users
- Added sorting by application key and user key before YAML output.
- Ensures stable and reproducible file generation across runs.
- Added comprehensive unit tests verifying key order and output stability.

See: https://chatgpt.com/share/68ef4778-a848-800f-a50b-a46a3b878797
2025-10-15 09:04:39 +02:00
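The sorting itself is a one-liner: rebuild the mapping with keys in alphabetical order before serialization (minimal sketch; the application names are examples):

```python
# Rebuild a mapping with alphabetically sorted keys so repeated runs
# serialize identically (Python dicts preserve insertion order).
def sort_by_key(mapping):
    return {k: mapping[k] for k in sorted(mapping)}

apps = {"web-app-nextcloud": {}, "svc-bkp-rmt-2-loc": {}, "web-app-gitea": {}}
print(list(sort_by_key(apps)))
# ['svc-bkp-rmt-2-loc', 'web-app-gitea', 'web-app-nextcloud']
```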
a80b26ed9e Moved bbb database seeding 2025-10-15 08:50:21 +02:00
45ec7b0ead Optimized include text 2025-10-15 08:39:37 +02:00
ec396d130c Optimized time schedule 2025-10-15 08:37:51 +02:00
93c2fbedd7 Added setting of timezone 2025-10-15 02:24:25 +02:00
d006f0ba5e Optimized schedule 2025-10-15 02:13:13 +02:00
dd43722e02 Raised memory for baserow 2025-10-14 21:59:10 +02:00
05d7ddc491 svc-bkp-rmt-2-loc: migrate pull script to Python + add unit tests; lock down backup-provider ACLs
- Replace Bash pull-specific-host.sh with Python pull-specific-host.py (argparse, identical logic)
- Update role vars and runner template to call python script
- Add __init__.py files for test discovery/imports
- Add unittest: tests/unit/roles/svc-bkp-rmt-2-loc/files/test_pull_specific_host.py (mocks subprocess/os/time; covers success, no types, find-fail, retry-exhaustion)
- Backup provider SSH wrapper: align allowed ls path (backup-docker-to-local)
- Split user role tasks: 01_core (sudoers), 02_permissions_ssh (SSH keys + wrapper), 03_permissions_folders (ownership + default ACLs + depth-limited chown/chmod)
- Ensure default ACLs grant rwx to 'backup' and none to group/other; keep sudo rsync working

Ref: ChatGPT discussion (2025-10-14) — https://chatgpt.com/share/68ee920a-9b98-800f-8806-ddcfe0255149
2025-10-14 20:10:49 +02:00
e54436821c Refactor sys-front-inj-all dependencies handling
Moved CDN and logout role inclusions into a dedicated '01_dependencies.yml' file for better modularity and reusability.
Added variable injection support via 'vars:' to allow flexible configuration like 'proxy_extra_configuration'.

See: https://chatgpt.com/share/68ee880d-cd80-800f-8dda-9e981631a5c7
2025-10-14 19:27:56 +02:00
ed73a37795 Improve get_app_conf robustness and add skip_missing_app parameter support
- Added new optional parameter 'skip_missing_app' to get_app_conf() in module_utils/config_utils.py to safely return defaults when applications are missing.
- Updated group_vars/all/00_general.yml and roles/web-app-nextcloud/config/main.yml to include skip_missing_app=True in all Nextcloud-related calls.
- Added comprehensive unit tests under tests/unit/module_utils/test_config_utils.py covering missing app handling, schema enforcement, nested lists, and index edge cases.

Ref: https://chatgpt.com/share/68ee6b5c-6db0-800f-bc20-d51470d7b39f
2025-10-14 17:25:37 +02:00
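The new parameter's behavior can be illustrated with a reduced stand-in for get_app_conf() (hypothetical simplification; the real function in module_utils/config_utils.py also walks config_path and enforces the role schema):

```python
class AppConfigKeyError(KeyError):
    pass

def get_app_conf(applications, application_id, config_path,
                 strict=True, default=None, skip_missing_app=False):
    # Reduced sketch: only the missing-application handling from the commit.
    try:
        obj = applications[application_id]
    except KeyError:
        if skip_missing_app:
            # Return the default instead of failing hard.
            return default if default is not None else False
        raise AppConfigKeyError(
            f"Application ID '{application_id}' not found in applications dict."
        )
    # ... normal config_path resolution would continue here ...
    return obj.get(config_path, default)

apps = {"web-app-gitea": {"features.ldap": True}}
print(get_app_conf(apps, "web-app-nextcloud", "features.ldap",
                   default=False, skip_missing_app=True))  # False
```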
adff9271fd Fixed remote backup bugs 2025-10-14 16:29:42 +02:00
2f0fb2cb69 Merged network definitions before application definitions 2025-10-14 15:52:28 +02:00
6abf2629e0 Removed false - 2025-10-13 19:03:23 +02:00
6a8e0f38d8 fix(web-svc-collabora): add required Docker capabilities and resource limits for Collabora Jails
- Added security_opt (seccomp=unconfined, apparmor=unconfined) and cap_add (MKNOD, SYS_CHROOT, SETUID, SETGID, FOWNER)
  to allow Collabora's sandbox (coolmount/systemplate) to mount and chroot properly
- Increased resource limits (2 CPUs, 2 GB RAM, 2048 PIDs) to prevent document timeout and OOM issues
- Resolves 'coolmount: Operation not permitted' and systemplate performance warnings

Refs: https://chatgpt.com/share/68ed03cd-1afc-800f-904e-d1c1cb133914
2025-10-13 15:52:50 +02:00
ae618cbf19 refactor(web-app-desktop, web-app-discourse): improve initialization handling and HTTP readiness check
- Added HTTP readiness check for Desktop application to ensure all logos can be downloaded during initialization
- Introduced 'http_port' variable for better readability
- Simplified role execution structure by moving run_once inclusion into core task file
- Adjusted docker compose handler flushing behavior
- Applied consistent structure to Discourse role

See: https://chatgpt.com/share/68ed02aa-b44c-800f-a125-de8600b102d4
2025-10-13 15:48:26 +02:00
c835ca8f2c refactor(front-injection): stabilize run_once flow and explicit web-service loading
- sys-front-inj-all: load web-svc-cdn and web-svc-logout once; reinitialize inj_enabled after services; move run_once block to top; reorder injections.
- sys-front-inj-css: move run_once call into 01_core; fix app_style_present default; simplify main.
- sys-front-inj-desktop/js/matomo: deactivate per-role run_once blocks; keep utils/run_once where appropriate.
- sys-front-inj-logout: switch to files/logout.js + copy; update head_sub mtime lookup; mark set_fact tasks unchanged.
- sys-svc-cdn: inline former 01_core tasks into main; ensure shared/vendor dirs and set run_once in guarded block; remove 01_core.yml.

Rationale: prevent cascading 'false_condition: run_once_sys_svc_cdn is not defined' skips by setting run-once facts only after the necessary tasks and avoiding parent-scope guards; improves determinism and handler flushing.

Conversation: https://chatgpt.com/share/68ecfaa5-94a0-800f-b1b6-2b969074651f
2025-10-13 15:12:23 +02:00
087175a3c7 Fixed Mailu token bug 2025-10-13 10:50:59 +02:00
3da645f3b8 Mailu/MSMTP: split token mgmt, idempotent reload, safer guards
• Rename: 02_create-user.yml → 02_manage_user.yml; 03_create-token.yml → 03a_manage_user_token.yml + 03b_create_user_token.yml
• Only (re)run sys-svc-msmtp when no-reply token exists; set run_once_sys_svc_msmtp=true in 01_core
• Reset by setting run_once_sys_svc_msmtp=false after creating no-reply token; then include sys-svc-msmtp
• Harden when-guards (no '{{ }}' in when, safe .get lookups)
• Minor formatting and failed_when readability

Conversation: https://chatgpt.com/share/68ebd196-a264-800f-a215-3a89d0f96c79
2025-10-12 18:05:00 +02:00
a996e2190f feat(logout): wire injector to web-svc-logout and add robust CORS/CSP for /logout
- sys-front-inj-logout: depend on web-svc-logout (run-once guarded) and simplify task flow.
- web-svc-logout: align feature flags/formatting and extend CSP:
  - add cdn.jsdelivr.net to connect/script/style and quote values.
- Nginx: move CORS config into logout-proxy.conf.j2 with dynamic vars:
  - Access-Control-Allow-Origin set to canonical logout origin,
  - Allow-Credentials=true,
  - Allow-Methods=GET, OPTIONS,
  - basic headers list (Accept, Authorization),
  - cache disabled for /logout responses.
- Drop obsolete CORS var passing from 01_core.yml; headers now templated at proxy layer.

Prepares clean cross-origin logout orchestration from https://logout.veen.world.

Refs: ChatGPT discussion – https://chatgpt.com/share/68ebb75f-0170-800f-93c5-e5cb438b8ed4
2025-10-12 16:16:47 +02:00
7dccffd52d Optimized variable whitespacing 2025-10-12 14:37:45 +02:00
853f2c3e2d Added mastodon to disc space cleanup script 2025-10-12 14:37:19 +02:00
b2978a3141 Changed mailu token name 2025-10-12 14:36:13 +02:00
0e0b703ccd Added cleanup to mastodon 2025-10-12 14:03:28 +02:00
0b86b2f057 Set MODE_CLEANUP default true and solved tld localhost user name bug 2025-10-12 01:15:52 +02:00
80e048a274 Fix: make EspoCRM entrypoint POSIX-compliant and remove illegal 'pipefail' usage
The previous entrypoint script used 'set -euo pipefail', which caused runtime errors
because /bin/sh (Dash/BusyBox) does not support 'pipefail'. This commit makes the
entrypoint fully POSIX-safe, adds robust fallbacks for missing scripts, and improves
logging. Also removes a trailing newline in the navigator Docker Compose template.

Related ChatGPT discussion: https://chatgpt.com/share/68eab0b7-7a64-800f-a8aa-e7d7262a262e
2025-10-11 21:33:07 +02:00
2610aec293 Deactivated cakeday plugin because it's a built-in (onboard) plugin 2025-10-11 18:23:38 +02:00
07db162368 Reformatted navigation role 2025-10-11 18:04:58 +02:00
a526d1adc4 Fixed email configuration update settings for the Keycloak master realm 2025-10-11 16:57:36 +02:00
ca95079111 Added Email Configuration for Keycloak Master Realm 2025-10-11 16:45:50 +02:00
e410d66cb4 Add health check for Keycloak container and grant global 'admin' realm role to permanent admin user
This update waits for the Keycloak container to become healthy before attempting login and replaces the old realm-management based role assignment with the global 'admin' realm role.
See: https://chatgpt.com/share/68e99953-e988-800f-8b82-9ffb14c11910
2025-10-11 01:40:48 +02:00
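Waiting for a container to report healthy before logging in typically reduces to a bounded polling loop, roughly like this (generic sketch; is_healthy stands in for a 'docker inspect' health-status query):

```python
import time

def wait_until_healthy(is_healthy, timeout=120, interval=5, sleep=time.sleep):
    # Poll 'is_healthy()' until it returns True or 'timeout' seconds elapse.
    waited = 0
    while waited < timeout:
        if is_healthy():
            return True
        sleep(interval)
        waited += interval
    return False

# Example with a fake check that becomes healthy on the third poll.
state = {"polls": 0}
def fake_check():
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_until_healthy(fake_check, timeout=60, interval=1, sleep=lambda s: None))
```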
ab48cf522f fix(keycloak): make permanent admin creation idempotent and fix password command
- prevent task failure when 'User exists with same username'
- remove invalid '--temporary false' flag from set-password command
- ensure realm-admin role grant remains idempotent

See: https://chatgpt.com/share/68e99271-fdb0-800f-a8ad-11c15d02a670
2025-10-11 01:10:57 +02:00
41c12bdc12 Refactor Keycloak permanent admin creation:
- Remove jq dependency to avoid container command errors
- Use username-based operations instead of user ID lookups
- Improve idempotency and portability across environments

See: https://chatgpt.com/share/68e98e77-9b3c-800f-8393-71b0be22cb46
2025-10-11 00:54:04 +02:00
aae463b602 Optimized keycloak setup procedures 2025-10-11 00:20:27 +02:00
bb50551533 Restructured keycloak accounts 2025-10-10 23:40:58 +02:00
098099b41e Refactored web-app-keycloak 2025-10-10 22:45:26 +02:00
0a7d767252 Added contains to filter 2025-10-10 22:16:23 +02:00
d88599f76c Added 'to_nice_json' exception 2025-10-10 22:16:22 +02:00
104 changed files with 1588 additions and 646 deletions

View File

@@ -1,5 +1,6 @@
[defaults]
# --- Performance & Behavior ---
pipelining = True
forks = 25
strategy = linear
gathering = smart
@@ -14,19 +15,14 @@ stdout_callback = yaml
callbacks_enabled = profile_tasks,timer
# --- Plugin paths ---
filter_plugins = ./filter_plugins
filter_plugins = ./filter_plugins
lookup_plugins = ./lookup_plugins
module_utils = ./module_utils
[ssh_connection]
# Multiplexing: safer socket path in HOME instead of /tmp
ssh_args = -o ControlMaster=auto -o ControlPersist=20s -o ControlPath=~/.ssh/ansible-%h-%p-%r \
-o ServerAliveInterval=15 -o ServerAliveCountMax=3 -o StrictHostKeyChecking=accept-new \
-o PreferredAuthentications=publickey,password,keyboard-interactive
# Pipelining boosts speed; works fine if sudoers does not enforce "requiretty"
ssh_args = -o ControlMaster=auto -o ControlPersist=20s -o ControlPath=~/.ssh/ansible-%h-%p-%r -o ServerAliveInterval=15 -o ServerAliveCountMax=3 -o StrictHostKeyChecking=accept-new -o PreferredAuthentications=publickey,password,keyboard-interactive
pipelining = True
scp_if_ssh = smart
transfer_method = smart
[persistent_connection]
connect_timeout = 30

View File

@@ -83,6 +83,13 @@ class DefaultsGenerator:
print(f"Error during rendering: {e}", file=sys.stderr)
sys.exit(1)
# Sort applications by application key for stable output
apps = result.get("defaults_applications", {})
if isinstance(apps, dict) and apps:
result["defaults_applications"] = {
k: apps[k] for k in sorted(apps.keys())
}
# Write output
self.output_file.parent.mkdir(parents=True, exist_ok=True)
with self.output_file.open("w", encoding="utf-8") as f:

View File

@@ -220,6 +220,10 @@ def main():
print(f"Error building user entries: {e}", file=sys.stderr)
sys.exit(1)
# Sort users by key for deterministic output
if isinstance(users, dict) and users:
users = OrderedDict(sorted(users.items()))
# Convert OrderedDict into plain dict for YAML
default_users = {'default_users': users}
plain_data = dictify(default_users)

View File

@@ -76,8 +76,9 @@ _applications_nextcloud_oidc_flavor: >-
False,
'oidc_login'
if applications
| get_app_conf('web-app-nextcloud','features.ldap',False, True)
else 'sociallogin'
| get_app_conf('web-app-nextcloud','features.ldap',False, True, True)
else 'sociallogin',
True
)
}}

View File

@@ -5,6 +5,6 @@ MODE_DUMMY: false # Executes dummy/test routines instead
MODE_UPDATE: true # Executes updates
MODE_DEBUG: false # This enables debugging in ansible and in the apps, You SHOULD NOT enable this on production servers
MODE_RESET: false # Cleans up all Infinito.Nexus files. It's necessary to run the whole playbook and not partial roles when using this function.
MODE_CLEANUP: "{{ MODE_DEBUG | bool }}" # Cleanup unused files and configurations
MODE_CLEANUP: true # Cleanup unused files and configurations
MODE_ASSERT: "{{ MODE_DEBUG | bool }}" # Executes validation tasks during the run.
MODE_BACKUP: true # Executes the Backup before the deployment

View File

@@ -1,4 +1,3 @@
# Service Timers
## Meta
@@ -24,29 +23,29 @@ SYS_SCHEDULE_HEALTH_BTRFS: "*-*-* 00:00:00"
SYS_SCHEDULE_HEALTH_JOURNALCTL: "*-*-* 00:00:00" # Check once per day the journalctl for errors
SYS_SCHEDULE_HEALTH_DISC_SPACE: "*-*-* 06,12,18,00:00:00" # Check four times per day if there is sufficient disc space
SYS_SCHEDULE_HEALTH_DOCKER_CONTAINER: "*-*-* {{ HOURS_SERVER_AWAKE }}:00:00" # Check once per hour if the docker containers are healthy
SYS_SCHEDULE_HEALTH_DOCKER_VOLUMES: "*-*-* {{ HOURS_SERVER_AWAKE }}:15:00" # Check once per hour if the docker volumes are healthy
SYS_SCHEDULE_HEALTH_CSP_CRAWLER: "*-*-* {{ HOURS_SERVER_AWAKE }}:30:00" # Check once per hour that all CSPs are fulfilled and available
SYS_SCHEDULE_HEALTH_NGINX: "*-*-* {{ HOURS_SERVER_AWAKE }}:45:00" # Check once per hour if all webservices are available
SYS_SCHEDULE_HEALTH_DOCKER_VOLUMES: "*-*-* {{ HOURS_SERVER_AWAKE }}:00:00" # Check once per hour if the docker volumes are healthy
SYS_SCHEDULE_HEALTH_CSP_CRAWLER: "*-*-* {{ HOURS_SERVER_AWAKE }}:00:00" # Check once per hour that all CSPs are fulfilled and available
SYS_SCHEDULE_HEALTH_NGINX: "*-*-* {{ HOURS_SERVER_AWAKE }}:00:00" # Check once per hour if all webservices are available
SYS_SCHEDULE_HEALTH_MSMTP: "*-*-* 00:00:00" # Check once per day SMTP Server
### Schedule for cleanup tasks
SYS_SCHEDULE_CLEANUP_BACKUPS: "*-*-* 00,06,12,18:30:00" # Cleanup backups every 6 hours, MUST be called before disc space cleanup
SYS_SCHEDULE_CLEANUP_DISC_SPACE: "*-*-* 07,13,19,01:30:00" # Cleanup disc space every 6 hours
SYS_SCHEDULE_CLEANUP_CERTS: "*-*-* 12,00:45:00" # Deletes and revokes unused certs
SYS_SCHEDULE_CLEANUP_FAILED_BACKUPS: "*-*-* 12:00:00" # Clean up failed docker backups every noon
SYS_SCHEDULE_CLEANUP_CERTS: "*-*-* 20:00" # Deletes and revokes unused certs once per day
SYS_SCHEDULE_CLEANUP_FAILED_BACKUPS: "*-*-* 21:00" # Clean up failed docker backups once per day
SYS_SCHEDULE_CLEANUP_BACKUPS: "*-*-* 22:00" # Cleanup backups once per day, MUST be called before disc space cleanup
SYS_SCHEDULE_CLEANUP_DISC_SPACE: "*-*-* 23:00" # Cleanup disc space once per day
### Schedule for repair services
SYS_SCHEDULE_REPAIR_BTRFS_AUTO_BALANCER: "Sat *-*-01..07 00:00:00" # Execute btrfs auto balancer every first Saturday of a month
SYS_SCHEDULE_REPAIR_DOCKER_HARD: "Sun *-*-* 08:00:00" # Restart docker instances every Sunday at 8:00 AM
SYS_SCHEDULE_REPAIR_DOCKER_HARD: "Sun *-*-* 00:00:00" # Restart docker instances every Sunday
### Schedule for backup tasks
SYS_SCHEDULE_BACKUP_DOCKER_TO_LOCAL: "*-*-* 03:30:00"
SYS_SCHEDULE_BACKUP_REMOTE_TO_LOCAL: "*-*-* 21:30:00"
SYS_SCHEDULE_BACKUP_REMOTE_TO_LOCAL: "*-*-* 00:30:00" # Pull Backup of the previous day
SYS_SCHEDULE_BACKUP_DOCKER_TO_LOCAL: "*-*-* 01:00:00" # Backup the current day
### Schedule for Maintenance Tasks
SYS_SCHEDULE_MAINTANANCE_LETSENCRYPT_RENEW: "*-*-* 12,00:30:00" # Renew Mailu certificates twice per day
SYS_SCHEDULE_MAINTANANCE_LETSENCRYPT_DEPLOY: "*-*-* 13,01:30:00" # Deploy letsencrypt certificates twice per day to docker containers
SYS_SCHEDULE_MAINTANANCE_NEXTCLOUD: "22" # Do nextcloud maintenance between 22:00 and 02:00
SYS_SCHEDULE_MAINTANANCE_LETSENCRYPT_RENEW: "*-*-* 10,22:00:00" # Renew Mailu certificates twice per day
SYS_SCHEDULE_MAINTANANCE_LETSENCRYPT_DEPLOY: "*-*-* 11,23:00:00" # Deploy letsencrypt certificates twice per day to docker containers
SYS_SCHEDULE_MAINTANANCE_NEXTCLOUD: "21" # Do nextcloud maintenance between 21:00 and 01:00
### Animation
SYS_SCHEDULE_ANIMATION_KEYBOARD_COLOR: "*-*-* *:*:00" # Change the keyboard color every minute

View File

@@ -6,6 +6,7 @@ __metaclass__ = type
import os
import subprocess
import time
from datetime import datetime
class CertUtils:
_domain_cert_mapping = None
@@ -22,6 +23,30 @@ class CertUtils:
except subprocess.CalledProcessError:
return ""
@staticmethod
def run_openssl_dates(cert_path):
"""
Returns (not_before_ts, not_after_ts) as POSIX timestamps or (None, None) on failure.
"""
try:
output = subprocess.check_output(
['openssl', 'x509', '-in', cert_path, '-noout', '-startdate', '-enddate'],
universal_newlines=True
)
nb, na = None, None
for line in output.splitlines():
line = line.strip()
if line.startswith('notBefore='):
nb = line.split('=', 1)[1].strip()
elif line.startswith('notAfter='):
na = line.split('=', 1)[1].strip()
def _parse(openssl_dt):
# OpenSSL format example: "Oct 10 12:34:56 2025 GMT"
return int(datetime.strptime(openssl_dt, "%b %d %H:%M:%S %Y %Z").timestamp())
return (_parse(nb) if nb else None, _parse(na) if na else None)
except Exception:
return (None, None)
@staticmethod
def extract_sans(cert_text):
dns_entries = []
@@ -59,7 +84,6 @@ class CertUtils:
else:
return domain == san
@classmethod
def build_snapshot(cls, cert_base_path):
snapshot = []
@@ -82,6 +106,17 @@ class CertUtils:
@classmethod
def refresh_cert_mapping(cls, cert_base_path, debug=False):
"""
Build mapping: SAN -> list of entries
entry = {
'folder': str,
'cert_path': str,
'mtime': float,
'not_before': int|None,
'not_after': int|None,
'is_wildcard': bool
}
"""
cert_files = cls.list_cert_files(cert_base_path)
mapping = {}
for cert_path in cert_files:
@@ -90,46 +125,82 @@ class CertUtils:
continue
sans = cls.extract_sans(cert_text)
folder = os.path.basename(os.path.dirname(cert_path))
try:
mtime = os.stat(cert_path).st_mtime
except FileNotFoundError:
mtime = 0.0
nb, na = cls.run_openssl_dates(cert_path)
for san in sans:
if san not in mapping:
mapping[san] = folder
entry = {
'folder': folder,
'cert_path': cert_path,
'mtime': mtime,
'not_before': nb,
'not_after': na,
'is_wildcard': san.startswith('*.'),
}
mapping.setdefault(san, []).append(entry)
cls._domain_cert_mapping = mapping
if debug:
print(f"[DEBUG] Refreshed domain-to-cert mapping: {mapping}")
print(f"[DEBUG] Refreshed domain-to-cert mapping (counts): "
f"{ {k: len(v) for k, v in mapping.items()} }")
@classmethod
def ensure_cert_mapping(cls, cert_base_path, debug=False):
if cls._domain_cert_mapping is None or cls.snapshot_changed(cert_base_path):
cls.refresh_cert_mapping(cert_base_path, debug)
@staticmethod
def _score_entry(entry):
"""
Return tuple used for sorting newest-first:
(not_before or -inf, mtime)
"""
nb = entry.get('not_before')
mtime = entry.get('mtime', 0.0)
return (nb if nb is not None else -1, mtime)
@classmethod
def find_cert_for_domain(cls, domain, cert_base_path, debug=False):
cls.ensure_cert_mapping(cert_base_path, debug)
exact_match = None
wildcard_match = None
candidates_exact = []
candidates_wild = []
for san, folder in cls._domain_cert_mapping.items():
for san, entries in cls._domain_cert_mapping.items():
if san == domain:
exact_match = folder
break
if san.startswith('*.'):
candidates_exact.extend(entries)
elif san.startswith('*.'):
base = san[2:]
if domain.count('.') == base.count('.') + 1 and domain.endswith('.' + base):
wildcard_match = folder
candidates_wild.extend(entries)
if exact_match:
if debug:
print(f"[DEBUG] Exact match for {domain} found in {exact_match}")
return exact_match
def _pick_newest(entries):
if not entries:
return None
# newest by (not_before, mtime)
best = max(entries, key=cls._score_entry)
return best
if wildcard_match:
if debug:
print(f"[DEBUG] Wildcard match for {domain} found in {wildcard_match}")
return wildcard_match
best_exact = _pick_newest(candidates_exact)
best_wild = _pick_newest(candidates_wild)
if best_exact and debug:
print(f"[DEBUG] Best exact match for {domain}: {best_exact['folder']} "
f"(not_before={best_exact['not_before']}, mtime={best_exact['mtime']})")
if best_wild and debug:
print(f"[DEBUG] Best wildcard match for {domain}: {best_wild['folder']} "
f"(not_before={best_wild['not_before']}, mtime={best_wild['mtime']})")
# Prefer exact if it exists; otherwise wildcard
chosen = best_exact or best_wild
if chosen:
return chosen['folder']
if debug:
print(f"[DEBUG] No certificate folder found for {domain}")
return None

View File

@@ -24,7 +24,7 @@ class ConfigEntryNotSetError(AppConfigKeyError):
pass
def get_app_conf(applications, application_id, config_path, strict=True, default=None):
def get_app_conf(applications, application_id, config_path, strict=True, default=None, skip_missing_app=False):
# Path to the schema file for this application
schema_path = os.path.join('roles', application_id, 'schema', 'main.yml')
@@ -133,6 +133,9 @@ def get_app_conf(applications, application_id, config_path, strict=True, default
try:
obj = applications[application_id]
except KeyError:
if skip_missing_app:
# Simply return default instead of failing
return default if default is not None else False
raise AppConfigKeyError(
f"Application ID '{application_id}' not found in applications dict.\n"
f"path_trace: {path_trace}\n"

View File

@@ -3,4 +3,6 @@ collections:
- name: community.general
- name: hetzner.hcloud
yay:
- python-simpleaudio
- python-simpleaudio
pacman:
- ansible

View File

@@ -153,6 +153,11 @@ roles:
description: "Core AI building blocks—model serving, OpenAI-compatible gateways, vector databases, orchestration, and chat UIs."
icon: "fas fa-brain"
invokable: true
bkp:
title: "Backup Services"
description: "Service-level backup and recovery components—handling automated data snapshots, remote backups, synchronization services, and backup orchestration across databases, files, and containers."
icon: "fas fa-database"
invokable: true
user:
title: "Users & Access"
description: "User accounts & access control"

View File

@@ -127,7 +127,7 @@
#de_BE@euro ISO-8859-15
#de_CH.UTF-8 UTF-8
#de_CH ISO-8859-1
de_DE.UTF-8 UTF-8
#de_DE.UTF-8 UTF-8
#de_DE ISO-8859-1
#de_DE@euro ISO-8859-15
#de_IT.UTF-8 UTF-8

View File

@@ -0,0 +1,132 @@
#!/usr/bin/env python3
import argparse
import os
import subprocess
import time
import sys
def run_command(command, capture_output=True, check=False, shell=True):
"""Run a shell command and return its output as string."""
try:
result = subprocess.run(
command,
capture_output=capture_output,
shell=shell,
text=True,
check=check
)
return result.stdout.strip()
except subprocess.CalledProcessError as e:
if capture_output:
print(e.stdout)
print(e.stderr)
raise
def pull_backups(hostname: str):
print(f"pulling backups from: {hostname}")
errors = 0
print("loading meta data...")
remote_host = f"backup@{hostname}"
print(f"host address: {remote_host}")
remote_machine_id = run_command(f'ssh "{remote_host}" sha256sum /etc/machine-id')[:64]
print(f"remote machine id: {remote_machine_id}")
general_backup_machine_dir = f"/Backups/{remote_machine_id}/"
print(f"backup dir: {general_backup_machine_dir}")
try:
remote_backup_types = run_command(
f'ssh "{remote_host}" "find {general_backup_machine_dir} -maxdepth 1 -type d -execdir basename {{}} ;"'
).splitlines()
print(f"backup types: {' '.join(remote_backup_types)}")
except subprocess.CalledProcessError:
sys.exit(1)
for backup_type in remote_backup_types:
if backup_type == remote_machine_id:
continue
print(f"backup type: {backup_type}")
general_backup_type_dir = f"{general_backup_machine_dir}{backup_type}/"
general_versions_dir = general_backup_type_dir
# local previous version
try:
local_previous_version_dir = run_command(f"ls -d {general_versions_dir}* | tail -1")
except subprocess.CalledProcessError:
local_previous_version_dir = ""
print(f"last local backup: {local_previous_version_dir}")
# remote versions
remote_backup_versions = run_command(
f'ssh "{remote_host}" "ls -d /Backups/{remote_machine_id}/backup-docker-to-local/*"'
).splitlines()
print(f"remote backup versions: {' '.join(remote_backup_versions)}")
remote_last_backup_dir = remote_backup_versions[-1] if remote_backup_versions else ""
print(f"last remote backup: {remote_last_backup_dir}")
remote_source_path = f"{remote_host}:{remote_last_backup_dir}/"
print(f"source path: {remote_source_path}")
local_backup_destination_path = remote_last_backup_dir
print(f"backup destination: {local_backup_destination_path}")
print("creating local backup destination folder...")
os.makedirs(local_backup_destination_path, exist_ok=True)
rsync_command = (
f'rsync -abP --delete --delete-excluded --rsync-path="sudo rsync" '
f'--link-dest="{local_previous_version_dir}" "{remote_source_path}" "{local_backup_destination_path}"'
)
print("starting backup...")
print(f"executing: {rsync_command}")
retry_count = 0
max_retries = 12
retry_delay = 300 # 5 minutes
last_retry_start = 0
max_retry_duration = 43200 # 12 hours
rsync_exit_code = 1
while retry_count < max_retries:
print(f"Retry attempt: {retry_count + 1}")
if retry_count > 0:
current_time = int(time.time())
last_retry_duration = current_time - last_retry_start
if last_retry_duration >= max_retry_duration:
print("Last retry took more than 12 hours, increasing max retries to 12.")
max_retries = 12
last_retry_start = int(time.time())
rsync_exit_code = os.system(rsync_command)
if rsync_exit_code == 0:
break
retry_count += 1
time.sleep(retry_delay)
if rsync_exit_code != 0:
print(f"Error: rsync failed after {max_retries} attempts")
errors += 1
sys.exit(errors)
def main():
parser = argparse.ArgumentParser(
description="Pull backups from a remote backup host via rsync."
)
parser.add_argument(
"hostname",
help="Hostname from which backup should be pulled"
)
args = parser.parse_args()
pull_backups(args.hostname)
if __name__ == "__main__":
main()

View File

@@ -1,85 +0,0 @@
#!/bin/bash
# @param $1 hostname from which backup should be pulled
echo "pulling backups from: $1" &&
# error counter
errors=0 &&
echo "loading meta data..." &&
remote_host="backup@$1" &&
echo "host address: $remote_host" &&
remote_machine_id="$( (ssh "$remote_host" sha256sum /etc/machine-id) | head -c 64 )" &&
echo "remote machine id: $remote_machine_id" &&
general_backup_machine_dir="/Backups/$remote_machine_id/" &&
echo "backup dir: $general_backup_machine_dir" &&
remote_backup_types="$(ssh "$remote_host" "find $general_backup_machine_dir -maxdepth 1 -type d -execdir basename {} ;")" &&
echo "backup types: $remote_backup_types" || exit 1
for backup_type in $remote_backup_types; do
if [ "$backup_type" != "$remote_machine_id" ]; then
echo "backup type: $backup_type" &&
general_backup_type_dir="$general_backup_machine_dir""$backup_type/" &&
general_versions_dir="$general_backup_type_dir" &&
local_previous_version_dir="$(ls -d $general_versions_dir* | tail -1)" &&
echo "last local backup: $local_previous_version_dir" &&
remote_backup_versions="$(ssh "$remote_host" ls -d "$general_backup_type_dir"\*)" &&
echo "remote backup versions: $remote_backup_versions" &&
remote_last_backup_dir=$(echo "$remote_backup_versions" | tail -1) &&
echo "last remote backup: $remote_last_backup_dir" &&
remote_source_path="$remote_host:$remote_last_backup_dir/" &&
echo "source path: $remote_source_path" &&
local_backup_destination_path=$remote_last_backup_dir &&
echo "backup destination: $local_backup_destination_path" &&
echo "creating local backup destination folder..." &&
mkdir -vp "$local_backup_destination_path" &&
echo "starting backup..."
rsync_command='rsync -abP --delete --delete-excluded --rsync-path="sudo rsync" --link-dest="'$local_previous_version_dir'" "'$remote_source_path'" "'$local_backup_destination_path'"'
echo "executing: $rsync_command"
retry_count=0
max_retries=12
retry_delay=300 # Retry delay in seconds (5 minutes)
last_retry_start=0
max_retry_duration=43200 # Maximum duration for a single retry attempt (12 hours)
while [[ $retry_count -lt $max_retries ]]; do
echo "Retry attempt: $((retry_count + 1))"
if [[ $retry_count -gt 0 ]]; then
current_time=$(date +%s)
last_retry_duration=$((current_time - last_retry_start))
if [[ $last_retry_duration -ge $max_retry_duration ]]; then
echo "Last retry took more than 12 hours, increasing max retries to 12."
max_retries=12
fi
fi
last_retry_start=$(date +%s)
eval "$rsync_command"
rsync_exit_code=$?
if [[ $rsync_exit_code -eq 0 ]]; then
break
fi
retry_count=$((retry_count + 1))
sleep $retry_delay
done
if [[ $rsync_exit_code -ne 0 ]]; then
echo "Error: rsync failed after $max_retries attempts"
((errors += 1))
fi
fi
done
exit $errors;

View File

@@ -10,15 +10,15 @@
- include_tasks: utils/run_once.yml
when: run_once_svc_bkp_rmt_2_loc is not defined
- name: "create {{ DOCKER_BACKUP_REMOTE_2_LOCAL_DIR }}"
- name: "Create Directory '{{ DOCKER_BACKUP_REMOTE_2_LOCAL_DIR }}'"
file:
path: "{{ DOCKER_BACKUP_REMOTE_2_LOCAL_DIR }}"
state: directory
mode: "0755"
- name: create svc-bkp-rmt-2-loc.sh
- name: "Deploy '{{ DOCKER_BACKUP_REMOTE_2_LOCAL_SCRIPT }}'"
copy:
src: svc-bkp-rmt-2-loc.sh
src: "{{ DOCKER_BACKUP_REMOTE_2_LOCAL_FILE }}"
dest: "{{ DOCKER_BACKUP_REMOTE_2_LOCAL_SCRIPT }}"
mode: "0755"

View File

@@ -3,6 +3,6 @@
hosts="{{ DOCKER_BACKUP_REMOTE_2_LOCAL_BACKUP_PROVIDERS | join(' ') }}";
errors=0
for host in $hosts; do
bash {{ DOCKER_BACKUP_REMOTE_2_LOCAL_SCRIPT }} $host || ((errors+=1));
python {{ DOCKER_BACKUP_REMOTE_2_LOCAL_SCRIPT }} $host || ((errors+=1));
done;
exit $errors;

View File

@@ -1,5 +1,9 @@
# General
application_id: svc-bkp-rmt-2-loc
system_service_id: "{{ application_id }}"
system_service_id: "{{ application_id }}"
# Role Specific
DOCKER_BACKUP_REMOTE_2_LOCAL_DIR: '{{ PATH_ADMINISTRATOR_SCRIPTS }}{{ application_id }}/'
DOCKER_BACKUP_REMOTE_2_LOCAL_SCRIPT: "{{ DOCKER_BACKUP_REMOTE_2_LOCAL_DIR }}svc-bkp-rmt-2-loc.sh"
DOCKER_BACKUP_REMOTE_2_LOCAL_FILE: 'pull-specific-host.py'
DOCKER_BACKUP_REMOTE_2_LOCAL_SCRIPT: "{{ [ DOCKER_BACKUP_REMOTE_2_LOCAL_DIR , DOCKER_BACKUP_REMOTE_2_LOCAL_FILE ] | path_join }}"
DOCKER_BACKUP_REMOTE_2_LOCAL_BACKUP_PROVIDERS: "{{ applications | get_app_conf(application_id, 'backup_providers') }}"

View File

@@ -8,6 +8,11 @@ docker:
image: "bitnamilegacy/openldap"
name: "openldap"
version: "latest"
cpus: 1.25
# Optimized for up to 5k users
mem_reservation: 1g
mem_limit: 1.5g
pids_limit: 1024
network: "openldap"
volumes:
data: "openldap_data"

View File

@@ -13,7 +13,7 @@ get_backup_types="find /Backups/$hashed_machine_id/ -maxdepth 1 -type d -execdir
# @todo This configuration is not scalable yet. If backup services other than sys-ctl-bkp-docker-2-loc are integrated, this logic needs to be optimized.
get_version_directories="ls -d /Backups/$hashed_machine_id/sys-ctl-bkp-docker-2-loc/*"
get_version_directories="ls -d /Backups/$hashed_machine_id/backup-docker-to-local/*"
last_version_directory="$($get_version_directories | tail -1)"
rsync_command="sudo rsync --server --sender -blogDtpre.iLsfxCIvu . $last_version_directory/"
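Both scripts pick the newest backup by taking the lexicographically last entry of the version listing (`ls ... | tail -1`), which works because the directory names sort chronologically. The same selection in Python, as a sketch:

```python
def last_version_directory(version_dirs):
    """Return the lexicographically last entry, matching 'ls | tail -1'.

    Relies on directory names that sort chronologically
    (e.g. ISO-8601 timestamps); returns None for an empty listing.
    """
    return sorted(version_dirs)[-1] if version_dirs else None
```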

View File

@@ -3,30 +3,6 @@
name: backup
create_home: yes
- name: create .ssh directory
file:
path: /home/backup/.ssh
state: directory
owner: backup
group: backup
mode: '0700'
- name: create /home/backup/.ssh/authorized_keys
template:
src: "authorized_keys.j2"
dest: /home/backup/.ssh/authorized_keys
owner: backup
group: backup
mode: '0644'
- name: create /home/backup/ssh-wrapper.sh
copy:
src: "ssh-wrapper.sh"
dest: /home/backup/ssh-wrapper.sh
owner: backup
group: backup
mode: '0700'
- name: grant backup sudo rights
copy:
src: "backup"
@@ -35,3 +11,9 @@
owner: root
group: root
notify: sshd restart
- include_tasks: 02_permissions_ssh.yml
- include_tasks: 03_permissions_folders.yml
- include_tasks: utils/run_once.yml

View File

@@ -0,0 +1,23 @@
- name: create .ssh directory
file:
path: /home/backup/.ssh
state: directory
owner: backup
group: backup
mode: '0700'
- name: create /home/backup/.ssh/authorized_keys
template:
src: "authorized_keys.j2"
dest: /home/backup/.ssh/authorized_keys
owner: backup
group: backup
mode: '0644'
- name: create /home/backup/ssh-wrapper.sh
copy:
src: "ssh-wrapper.sh"
dest: /home/backup/ssh-wrapper.sh
owner: backup
group: backup
mode: '0700'

View File

@@ -0,0 +1,66 @@
# Ensure the backups root exists and is owned by backup
- name: Ensure backups root exists and owned by backup
file:
path: "{{ BACKUPS_FOLDER_PATH }}"
state: directory
owner: backup
group: backup
mode: "0700"
# Explicit ACL so 'backup' has rwx, others none
- name: Grant ACL rwx on backups root to backup user
ansible.posix.acl:
path: "{{ BACKUPS_FOLDER_PATH }}"
entity: backup
etype: user
permissions: rwx
state: present
# Set default ACLs so new entries inherit rwx for backup and nothing for others
- name: Set default ACL (inherit) for backup user under backups root
ansible.posix.acl:
path: "{{ BACKUPS_FOLDER_PATH }}"
entity: backup
etype: user
permissions: rwx
default: true
state: present
# Remove default ACLs for group/others (defensive hardening)
# Explicitly set default group/other to no permissions (instead of absent)
- name: Default ACL for group -> none
ansible.posix.acl:
path: "{{ BACKUPS_FOLDER_PATH }}"
etype: group
permissions: '---'
default: true
state: present
- name: Default ACL for other -> none
ansible.posix.acl:
path: "{{ BACKUPS_FOLDER_PATH }}"
etype: other
permissions: '---'
default: true
state: present
- name: Fix ownership level 0..2 directories to backup:backup
ansible.builtin.shell: >
find "{{ BACKUPS_FOLDER_PATH }}" -mindepth 0 -maxdepth 2 -xdev -type d -exec chown backup:backup {} +
changed_when: false
- name: Fix perms level 0..2 directories to 0700
ansible.builtin.shell: >
find "{{ BACKUPS_FOLDER_PATH }}" -mindepth 0 -maxdepth 2 -xdev -type d -exec chmod 700 {} +
changed_when: false
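The two `find` tasks above restrict the ownership and mode fixes to directories at depth 0 to 2 under the backups root. A depth-limited walk with the same pruning behavior can be sketched in Python (`restrict_dir_modes` covers only the chmod half, since chown requires root; all names are illustrative):

```python
import os

def dirs_up_to_depth(root, max_depth=2):
    """Yield directories at depth 0..max_depth below root,
    mirroring 'find -mindepth 0 -maxdepth 2 -type d'."""
    root = root.rstrip(os.sep)
    base_depth = root.count(os.sep)
    for dirpath, dirnames, _ in os.walk(root):
        depth = dirpath.count(os.sep) - base_depth
        if depth >= max_depth:
            dirnames[:] = []  # prune: do not descend further
        yield dirpath

def restrict_dir_modes(root, mode=0o700, max_depth=2):
    """Apply chmod 0700 to every directory up to max_depth."""
    for d in dirs_up_to_depth(root, max_depth):
        os.chmod(d, mode)
```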

View File

@@ -1,4 +1,2 @@
- block:
- include_tasks: 01_core.yml
- include_tasks: utils/run_once.yml
- include_tasks: 01_core.yml
when: run_once_sys_bkp_provider_user is not defined

View File

@@ -1,8 +1,7 @@
- name: Include dependencies
include_role:
name: '{{ item }}'
loop:
- sys-svc-msmtp
name: "sys-svc-msmtp"
when: run_once_sys_svc_msmtp is not defined or run_once_sys_svc_msmtp is false
- include_role:
name: sys-service

View File

@@ -1,9 +1,6 @@
- block:
- include_tasks: 01_core.yml
when:
- run_once_sys_ctl_bkp_docker_2_loc is not defined
- include_tasks: 01_core.yml
when: run_once_sys_ctl_bkp_docker_2_loc is not defined
- name: "include 04_seed-database-to-backup.yml"
include_tasks: 04_seed-database-to-backup.yml
when:
- BKP_DOCKER_2_LOC_DB_ENABLED | bool
when: BKP_DOCKER_2_LOC_DB_ENABLED | bool

View File

@@ -17,7 +17,7 @@
name: sys-service
vars:
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_tpl_exec_start: "{{ system_service_script_exec }} --backups-folder-path {{ BACKUPS_FOLDER_PATH }} --maximum-backup-size-percent {{SIZE_PERCENT_MAXIMUM_BACKUP}}"
system_service_tpl_exec_start: "{{ system_service_script_exec }} --backups-folder-path {{ BACKUPS_FOLDER_PATH }} --maximum-backup-size-percent {{ SIZE_PERCENT_MAXIMUM_BACKUP }}"
system_service_tpl_exec_start_pre: '/usr/bin/python {{ PATH_SYSTEM_LOCK_SCRIPT }} {{ SYS_SERVICE_GROUP_MANIPULATION | join(" ") }} --ignore {{ SYS_SERVICE_GROUP_CLEANUP | join(" ") }} --timeout "{{ SYS_TIMEOUT_BACKUP_SERVICES }}"'
system_service_copy_files: true
system_service_force_linear_sync: false

View File

@@ -39,6 +39,18 @@ if [ "$force_freeing" = true ]; then
docker exec -u www-data $nextcloud_application_container /var/www/html/occ versions:cleanup || exit 6
fi
# Mastodon cleanup (remote media cache)
mastodon_application_container="{{ applications | get_app_conf('web-app-mastodon', 'docker.services.mastodon.name') }}"
mastodon_cleanup_days="1"
if [ -n "$mastodon_application_container" ] && docker ps -a --format '{% raw %}{{.Names}}{% endraw %}' | grep -qw "$mastodon_application_container"; then
echo "Cleaning up Mastodon media cache (older than ${mastodon_cleanup_days} days)" &&
docker exec -u root "$mastodon_application_container" bash -lc "bin/tootctl media remove --days=${mastodon_cleanup_days}" || exit 8
# Optional: additionally remove local thumbnail/cache files older than X days
# Warning: these will be regenerated when accessed, which may cause extra CPU/I/O load
# docker exec -u root "$mastodon_application_container" bash -lc "find /mastodon/public/system/cache -type f -mtime +${mastodon_cleanup_days} -delete" || exit 9
fi
fi
if command -v pacman >/dev/null 2>&1 ; then

View File

@@ -0,0 +1,16 @@
- name: "Load CDN for '{{ domain }}'"
include_role:
name: web-svc-cdn
public: false
when:
- application_id != 'web-svc-cdn'
- run_once_web_svc_cdn is not defined
- name: Load Logout for '{{ domain }}'
include_role:
name: web-svc-logout
public: false
when:
- run_once_web_svc_logout is not defined
- application_id != 'web-svc-logout'
- inj_enabled.logout

View File

@@ -1,22 +1,41 @@
- block:
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver-core
when: run_once_sys_svc_webserver_core is not defined
- include_tasks: utils/run_once.yml
when: run_once_sys_front_inj_all is not defined
- name: Build inj_enabled
set_fact:
inj_enabled: "{{ applications | inj_enabled(application_id, SRV_WEB_INJ_COMP_FEATURES_ALL) }}"
- name: "Load CDN Service for '{{ domain }}'"
include_role:
name: sys-svc-cdn
public: true # Expose variables so that they can be used in all injection roles
- name: "Included dependent services"
include_tasks: 01_dependencies.yml
vars:
proxy_extra_configuration: ""
- name: Reinitialize 'inj_enabled' for '{{ domain }}', after modification by CDN
- name: Reinitialize 'inj_enabled' for '{{ domain }}', after loading the required webservices
set_fact:
inj_enabled: "{{ applications | inj_enabled(application_id, SRV_WEB_INJ_COMP_FEATURES_ALL) }}"
inj_head_features: "{{ SRV_WEB_INJ_COMP_FEATURES_ALL | inj_features('head') }}"
inj_body_features: "{{ SRV_WEB_INJ_COMP_FEATURES_ALL | inj_features('body') }}"
- name: "Load CDN Service for '{{ domain }}'"
include_role:
name: sys-svc-cdn
public: true
- name: "Activate logout proxy for '{{ domain }}'"
include_role:
name: sys-front-inj-logout
public: true
when: inj_enabled.logout
- name: "Activate Desktop iFrame notifier for '{{ domain }}'"
include_role:
name: sys-front-inj-desktop
public: true # Vars used in templates
public: true
when: inj_enabled.desktop
- name: "Activate Corporate CSS for '{{ domain }}'"
@@ -33,17 +52,3 @@
include_role:
name: sys-front-inj-javascript
when: inj_enabled.javascript
- name: "Activate logout proxy for '{{ domain }}'"
include_role:
name: sys-front-inj-logout
public: true # Vars used in templates
when: inj_enabled.logout
- block:
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver-core
when: run_once_sys_svc_webserver_core is not defined
- include_tasks: utils/run_once.yml
when: run_once_sys_front_inj_all is not defined

View File

@@ -1,8 +1,3 @@
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver-core
when: run_once_sys_svc_webserver_core is not defined
- name: Generate color palette with colorscheme-generator
set_fact:
color_palette: "{{ lookup('colorscheme', CSS_BASE_COLOR, count=CSS_COUNT, shades=CSS_SHADES) }}"
@@ -19,3 +14,5 @@
group: "{{ NGINX.USER }}"
mode: '0644'
loop: "{{ CSS_FILES }}"
- include_tasks: utils/run_once.yml

View File

@@ -1,6 +1,4 @@
- block:
- include_tasks: 01_core.yml
- include_tasks: utils/run_once.yml
- include_tasks: 01_core.yml
when: run_once_sys_front_inj_css is not defined
- name: "Resolve optional app style.css source for '{{ application_id }}'"

View File

@@ -3,6 +3,6 @@
{% for css_file in ['default.css','bootstrap.css'] %}
<link rel="stylesheet" href="{{ [ cdn_urls.shared.css, css_file, lookup('local_mtime_qs', [__css_tpl_dir, css_file ~ '.j2'] | path_join)] | url_join }}">
{% endfor %}
{% if app_style_present | bool %}
{% if app_style_present | default(false) | bool %}
<link rel="stylesheet" href="{{ [ cdn_urls.role.release.css, 'style.css', lookup('local_mtime_qs', app_style_src)] | url_join }}">
{% endif %}

View File

@@ -1,8 +1,4 @@
- block:
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver-core
when: run_once_sys_svc_webserver_core is not defined
- include_tasks: 01_deploy.yml
- include_tasks: utils/run_once.yml
when: run_once_sys_front_inj_desktop is not defined

View File

@@ -1,11 +1,4 @@
- block:
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver-core
when: run_once_sys_svc_webserver_core is not defined
- include_tasks: utils/run_once.yml
when: run_once_sys_front_inj_javascript is not defined
# run_once_sys_front_inj_javascript: deactivated
- name: "Load JavaScript code for '{{ application_id }}'"
set_fact:

View File

@@ -1,8 +1,6 @@
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver-core
when:
- run_once_sys_svc_webserver_core is not defined
- name: "deploy the logout.js"
include_tasks: "02_deploy.yml"
include_tasks: "02_deploy.yml"
- set_fact:
run_once_sys_front_inj_logout: true
changed_when: false

View File

@@ -1,10 +1,10 @@
- name: Deploy logout.js
template:
src: logout.js.j2
dest: "{{ INJ_LOGOUT_JS_DESTINATION }}"
owner: "{{ NGINX.USER }}"
group: "{{ NGINX.USER }}"
mode: '0644'
copy:
src: logout.js
dest: "{{ INJ_LOGOUT_JS_DESTINATION }}"
owner: "{{ NGINX.USER }}"
group: "{{ NGINX.USER }}"
mode: '0644'
- name: Get stat for logout.js
stat:

View File

@@ -1,16 +1,16 @@
- block:
- include_tasks: 01_core.yml
- set_fact:
run_once_sys_front_inj_logout: true
- name: "Load base for '{{ application_id }}'"
include_tasks: 01_core.yml
when: run_once_sys_front_inj_logout is not defined
- name: "Load logout code for '{{ application_id }}'"
set_fact:
logout_code: "{{ lookup('template', 'logout_one_liner.js.j2') }}"
changed_when: false
- name: "Collapse logout code into one-liner for '{{ application_id }}'"
set_fact:
logout_code_one_liner: "{{ logout_code | to_one_liner }}"
changed_when: false
- name: "Append logout CSP hash for '{{ application_id }}'"
set_fact:

View File

@@ -1 +1 @@
<script src="{{ cdn_urls.shared.js }}/{{ INJ_LOGOUT_JS_FILE_NAME }}{{ lookup('local_mtime_qs', [playbook_dir, 'roles', 'sys-front-inj-logout', 'templates', INJ_LOGOUT_JS_FILE_NAME ~ '.j2'] | path_join) }}"></script>
<script src="{{ cdn_urls.shared.js }}/{{ INJ_LOGOUT_JS_FILE_NAME }}{{ lookup('local_mtime_qs', [playbook_dir, 'roles', 'sys-front-inj-logout', 'files', INJ_LOGOUT_JS_FILE_NAME] | path_join) }}"></script>
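The `local_mtime_qs` lookup used above appends a file-mtime query string so browsers refetch the asset whenever the source file changes. A minimal Python equivalent of that cache-busting suffix (the `?v=` parameter name is an assumption; the real lookup's format may differ):

```python
import os

def mtime_qs(path):
    """Return a '?v=<mtime>' cache-busting suffix for an asset file."""
    return f"?v={int(os.path.getmtime(path))}"
```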

View File

@@ -1,10 +1,4 @@
- block:
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver-core
when: run_once_sys_svc_webserver_core is not defined
- include_tasks: utils/run_once.yml
when: run_once_sys_front_inj_matomo is not defined
# run_once_sys_front_inj_matomo: deactivated
- name: "Relevant variables for role: {{ role_path | basename }}"
debug:

View File

@@ -1,21 +0,0 @@
- name: "Load CDN for '{{ domain }}'"
include_role:
name: web-svc-cdn
public: false
when:
- application_id != 'web-svc-cdn'
- run_once_web_svc_cdn is not defined
# ------------------------------------------------------------------
# Only-once creations (shared root and vendor)
# ------------------------------------------------------------------
- name: Ensure shared root and vendor exist (run once)
ansible.builtin.file:
path: "{{ item }}"
state: directory
owner: "{{ NGINX.USER }}"
group: "{{ NGINX.USER }}"
mode: "0755"
loop: "{{ CDN_DIRS_GLOBAL }}"
- include_tasks: utils/run_once.yml

View File

@@ -1,6 +1,14 @@
---
- block:
- include_tasks: 01_core.yml
- name: Ensure shared root and vendor exist (run once)
ansible.builtin.file:
path: "{{ item }}"
state: directory
owner: "{{ NGINX.USER }}"
group: "{{ NGINX.USER }}"
mode: "0755"
loop: "{{ CDN_DIRS_GLOBAL }}"
- include_tasks: utils/run_once.yml
when:
- run_once_sys_svc_cdn is not defined

View File

@@ -14,4 +14,7 @@
- include_role:
name: sys-ctl-hlth-msmtp
when: run_once_sys_ctl_hlth_msmtp is not defined
when: run_once_sys_ctl_hlth_msmtp is not defined
- set_fact:
run_once_sys_svc_msmtp: true

View File

@@ -1,5 +1,6 @@
- block:
- include_tasks: 01_core.yml
- set_fact:
run_once_sys_svc_msmtp: true
when: run_once_sys_svc_msmtp is not defined
- name: "Load MSMTP Core Once"
include_tasks: 01_core.yml
when:
- run_once_sys_svc_msmtp is not defined or run_once_sys_svc_msmtp is false
# Just execute when mailu_token is defined
- users['no-reply'].mailu_token is defined

View File

@@ -68,7 +68,12 @@ ChallengeResponseAuthentication no
#KerberosGetAFSToken no
# GSSAPI options
#GSSAPIAuthentication no
# Disable GSSAPI (Kerberos) authentication to avoid unnecessary negotiation delays.
# This setting is useful for non-domain environments where GSSAPI is not used,
# improving SSH connection startup time and reducing overhead.
# See: https://chatgpt.com/share/68efc179-1a10-800f-9656-1e8731b40546
GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
# Set this to 'yes' to enable PAM authentication, account processing,
@@ -97,7 +102,13 @@ PrintMotd no # pam does that
#Compression delayed
#ClientAliveInterval 0
#ClientAliveCountMax 3
#UseDNS no
# Disable reverse DNS lookups to speed up SSH logins.
# When UseDNS is enabled, sshd performs a reverse DNS lookup for each connecting client,
# which can significantly delay authentication if DNS resolution is slow or misconfigured.
# See: https://chatgpt.com/share/68efc179-1a10-800f-9656-1e8731b40546
UseDNS no
#PidFile /run/sshd.pid
#MaxStartups 10:30:100
#PermitTunnel no

View File

@@ -5,7 +5,7 @@ users:
username: "{{ PRIMARY_DOMAIN.split('.')[0] }}"
tld:
description: "Auto Generated Account to reserve the TLD"
username: "{{ PRIMARY_DOMAIN.split('.')[1] }}"
username: "{{ PRIMARY_DOMAIN.split('.')[1] if (PRIMARY_DOMAIN is defined and (PRIMARY_DOMAIN.split('.') | length) > 1) else (PRIMARY_DOMAIN ~ '_tld') }}"
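The Jinja expression above adds a fallback for single-label domains, where `PRIMARY_DOMAIN.split('.')[1]` would raise an index error. The same logic as a Python sketch:

```python
def tld_username(primary_domain):
    """Return the second label of the domain as the TLD account name,
    falling back to '<domain>_tld' for single-label domains."""
    parts = primary_domain.split(".")
    if len(parts) > 1:
        return parts[1]
    return primary_domain + "_tld"
```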
root:
username: root
uid: 0

View File

@@ -19,7 +19,7 @@ docker:
name: "baserow"
cpus: 1.0
mem_reservation: 0.5g
mem_limit: 1g
mem_limit: 2g
pids_limit: 512
volumes:
data: "baserow_data"

View File

@@ -14,13 +14,20 @@
name: sys-stk-full-stateless
vars:
docker_compose_flush_handlers: false
- name: "include 04_seed-database-to-backup.yml"
include_tasks: "{{ [ playbook_dir, 'roles/sys-ctl-bkp-docker-2-loc/tasks/04_seed-database-to-backup.yml' ] | path_join }}"
- name: "Unset 'proxy_extra_configuration'"
set_fact:
proxy_extra_configuration: null
- name: "Include Seed routines for '{{ application_id }}' database backup"
include_tasks: "{{ [ playbook_dir, 'roles/sys-ctl-bkp-docker-2-loc/tasks/04_seed-database-to-backup.yml' ] | path_join }}"
vars:
database_type: "postgres"
database_instance: "{{ entity_name }}"
database_password: "{{ applications | get_app_conf(application_id, 'credentials.postgresql_secret') }}"
database_username: "postgres"
database_name: "" # Multiple databases
- name: configure websocket_upgrade.conf
copy:
src: "websocket_upgrade.conf"

View File

@@ -2,13 +2,6 @@
application_id: "web-app-bigbluebutton"
entity_name: "{{ application_id | get_entity_name }}"
# Database configuration
database_type: "postgres"
database_instance: "{{ application_id | get_entity_name }}"
database_password: "{{ applications | get_app_conf(application_id, 'credentials.postgresql_secret') }}"
database_username: "postgres"
database_name: "" # Multiple databases
# Proxy
domain: "{{ domains | get_domain(application_id) }}"
http_port: "{{ ports.localhost.http[application_id] }}"

View File

@@ -17,6 +17,8 @@
- name: "load docker, proxy for '{{ application_id }}'"
include_role:
name: sys-stk-full-stateless
vars:
docker_compose_flush_handlers: false
- name: "Check if host-specific config.yaml exists in {{ DESKTOP_CONFIG_INV_PATH }}"
stat:
@@ -57,8 +59,16 @@
notify: docker compose up
when: not config_file.stat.exists
- name: add docker-compose.yml
template:
src: docker-compose.yml.j2
dest: "{{ docker_compose.directories.instance }}docker-compose.yml"
notify: docker compose up
- name: "Flush docker compose handlers"
meta: flush_handlers
- name: Wait for Desktop HTTP endpoint (required so all logos can be downloaded during initialization)
uri:
url: "http://127.0.0.1:{{ http_port }}/"
status_code: 200
register: desktop_http
retries: 60
delay: 5
until: desktop_http.status == 200
- include_tasks: utils/run_once.yml

View File

@@ -1,5 +1,3 @@
---
- block:
- include_tasks: 01_core.yml
- include_tasks: utils/run_once.yml
- include_tasks: 01_core.yml
when: run_once_web_app_desktop is not defined

View File

@@ -1,5 +1,6 @@
# General
application_id: "web-app-desktop"
http_port: "{{ ports.localhost.http[application_id] }}"
## Webserver
proxy_extra_configuration: "{{ lookup('template', 'nginx/sso.html.conf.j2') }}"

View File

@@ -43,9 +43,10 @@ plugins:
enabled: true
discourse-akismet:
enabled: true
discourse-cakeday:
enabled: true
# discourse-solved: Seems like this plugin is now also part of the default setup
# The following plugins moved to the default setup
# discourse-cakeday:
# enabled: true
# discourse-solved:
# enabled: true
# discourse-voting:
# enabled: true

View File

@@ -6,4 +6,6 @@
include_tasks: 03_docker.yml
- name: "Setup '{{ application_id }}' network"
include_tasks: 04_network.yml
include_tasks: 04_network.yml
- include_tasks: utils/run_once.yml

View File

@@ -1,6 +1,4 @@
---
- name: "Setup {{ application_id }}"
include_tasks: 01_core.yml
when: run_once_web_app_discourse is not defined
block:
- include_tasks: 01_core.yml
- include_tasks: utils/run_once.yml

View File

@@ -1,11 +1,13 @@
#!/bin/sh
set -euo pipefail
# POSIX-safe entrypoint for EspoCRM container
# Compatible with /bin/sh (dash/busybox). Avoids 'pipefail' and non-portable features.
set -eu
log() { printf '%s %s\n' "[entrypoint]" "$*" >&2; }
# --- Simple boolean normalization --------------------------------------------
bool_norm () {
v="$(printf '%s' "${1:-}" | tr '[:upper:]' '[:lower:]')"
v="$(printf '%s' "${1:-}" | tr '[:upper:]' '[:lower:]' 2>/dev/null || true)"
case "$v" in
1|true|yes|on) echo "true" ;;
0|false|no|off|"") echo "false" ;;
@@ -13,30 +15,45 @@ bool_norm () {
esac
}
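The POSIX-safe `bool_norm` helper above can be mirrored in Python. This is a sketch: treating unknown values as false is an assumption, since the shell function's default branch is not fully shown in the excerpt.

```python
def bool_norm(value):
    """Normalize common truthy/falsy strings, mirroring the shell helper.

    Accepts 1/true/yes/on as True; 0/false/no/off/empty as False.
    Unknown values fall back to False (assumption).
    """
    v = str(value or "").strip().lower()
    return v in ("1", "true", "yes", "on")
```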
# Expected ENV (from env.j2)
# --- Environment initialization ----------------------------------------------
MAINTENANCE="$(bool_norm "${ESPO_INIT_MAINTENANCE_MODE:-false}")"
CRON_DISABLED="$(bool_norm "${ESPO_INIT_CRON_DISABLED:-false}")"
USE_CACHE="$(bool_norm "${ESPO_INIT_USE_CACHE:-true}")"
APP_DIR="/var/www/html"
SET_FLAGS_SCRIPT="${ESPOCRM_SET_FLAGS_SCRIPT}"
# Provided by env.j2 (fallback ensures robustness)
SET_FLAGS_SCRIPT="${ESPOCRM_SET_FLAGS_SCRIPT:-/usr/local/bin/set_flags.php}"
if [ ! -f "$SET_FLAGS_SCRIPT" ]; then
log "WARN: SET_FLAGS_SCRIPT '$SET_FLAGS_SCRIPT' not found; falling back to /usr/local/bin/set_flags.php"
SET_FLAGS_SCRIPT="/usr/local/bin/set_flags.php"
fi
# --- Wait for bootstrap.php (max 60s, e.g. fresh volume) ----------------------
log "Waiting for ${APP_DIR}/bootstrap.php..."
for i in $(seq 1 60); do
[ -f "${APP_DIR}/bootstrap.php" ] && break
count=0
while [ $count -lt 60 ] && [ ! -f "${APP_DIR}/bootstrap.php" ]; do
sleep 1
count=$((count + 1))
done
if [ ! -f "${APP_DIR}/bootstrap.php" ]; then
log "ERROR: bootstrap.php missing after 60s"; exit 1
log "ERROR: bootstrap.php missing after 60s"
exit 1
fi
# --- Apply config flags via set_flags.php ------------------------------------
log "Applying runtime flags via set_flags.php..."
php "${SET_FLAGS_SCRIPT}"
if ! php "${SET_FLAGS_SCRIPT}"; then
log "ERROR: set_flags.php execution failed"
exit 1
fi
# --- Clear cache (safe) -------------------------------------------------------
php "${APP_DIR}/clear_cache.php" || true
if php "${APP_DIR}/clear_cache.php" 2>/dev/null; then
log "Cache cleared successfully."
else
log "WARN: Cache clearing skipped or failed (non-critical)."
fi
# --- Hand off to CMD ----------------------------------------------------------
if [ "$#" -gt 0 ]; then
@@ -56,5 +73,6 @@ for cmd in apache2-foreground httpd-foreground php-fpm php-fpm8.3 php-fpm8.2 sup
fi
done
# --- Fallback ---------------------------------------------------------------
log "No known server command found; tailing to keep container alive."
exec tail -f /dev/null

View File

@@ -47,7 +47,17 @@ docker:
version: "latest"
backup:
no_stop_required: true
port: 3000
name: "gitea"
port: 3000
name: "gitea"
cpus: 1.0
mem_reservation: 1g
mem_limit: 2g
pids_limit: 1024
redis:
enabled: false
cpus: 0.25
mem_reservation: 0.2g
mem_limit: 0.3g
pids_limit: 512
volumes:
data: "gitea_data"

View File

@@ -2,7 +2,7 @@
shell: |
docker exec -i --user {{ GITEA_USER }} {{ GITEA_CONTAINER }} \
gitea admin auth list \
| awk -v name="LDAP ({{ PRIMARY_DOMAIN }})" '$0 ~ name {print $1; exit}'
| awk -v name="LDAP ({{ SOFTWARE_NAME }})" '$0 ~ name {print $1; exit}'
args:
chdir: "{{ docker_compose.directories.instance }}"
register: ldap_source_id_raw

View File

@@ -11,7 +11,7 @@ USER_GID=1000
# Logging configuration
GITEA__log__MODE=console
GITEA__log__LEVEL={% if MODE_DEBUG | bool %}Debug{% else %}Info{% endif %}
GITEA__log__LEVEL={% if MODE_DEBUG | bool %}Debug{% else %}Info{% endif %}
# Database
DB_TYPE=mysql
@@ -20,6 +20,28 @@ DB_NAME={{ database_name }}
DB_USER={{ database_username }}
DB_PASSWD={{ database_password }}
{% if GITEA_REDIS_ENABLED | bool %}
# ------------------------------------------------
# Redis Configuration for Gitea
# ------------------------------------------------
# @see https://docs.gitea.com/administration/config-cheat-sheet#cache-cache
GITEA__cache__ENABLED=true
GITEA__cache__ADAPTER=redis
# use a different Redis DB index than oauth2-proxy
GITEA__cache__HOST=redis://{{ GITEA_REDIS_ADDRESS }}/1
# Store sessions in Redis (instead of the internal DB)
GITEA__session__PROVIDER=redis
GITEA__session__PROVIDER_CONFIG=network=tcp,addr={{ GITEA_REDIS_ADDRESS }},db=2,pool_size=100,idle_timeout=180
# Use Redis for background task queues
GITEA__queue__TYPE=redis
GITEA__queue__CONN_STR=redis://{{ GITEA_REDIS_ADDRESS }}/3
{% endif %}
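The env fragment above deliberately gives Gitea's cache, sessions, and queues distinct Redis DB indexes (1, 2, 3) so their key spaces never collide with each other or with oauth2-proxy on index 0. A small helper for composing such URLs (names are illustrative):

```python
def redis_url(address, db):
    """Build a redis:// connection URL for a given logical DB index."""
    return f"redis://{address}/{db}"

# One DB index per subsystem so key spaces stay isolated.
GITEA_REDIS_ADDRESS = "redis:6379"
urls = {
    "cache": redis_url(GITEA_REDIS_ADDRESS, 1),
    "session": redis_url(GITEA_REDIS_ADDRESS, 2),
    "queue": redis_url(GITEA_REDIS_ADDRESS, 3),
}
```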
# SSH
SSH_PORT={{ports.public.ssh[application_id]}}
SSH_LISTEN_PORT=22
@@ -48,7 +70,7 @@ GITEA__security__INSTALL_LOCK=true # Locks the installation page
GITEA__openid__ENABLE_OPENID_SIGNUP={{ applications | get_app_conf(application_id, 'features.oidc', False) | lower }}
GITEA__openid__ENABLE_OPENID_SIGNIN={{ applications | get_app_conf(application_id, 'features.oidc', False) | lower }}
{% if applications | get_app_conf(application_id, 'features.oidc', False) or applications | get_app_conf(application_id, 'features.ldap', False) %}
{% if GITEA_IAM_ENABLED | bool %}
EXTERNAL_USER_DISABLE_FEATURES=deletion,manage_credentials,change_username,change_full_name
@@ -58,9 +80,5 @@ GITEA__ldap__SYNC_USER_ON_LOGIN=true
{% endif %}
# ------------------------------------------------
# Disable user self-registration
# ------------------------------------------------
# After this only admins can create accounts
GITEA__service__DISABLE_REGISTRATION=false
GITEA__service__DISABLE_REGISTRATION={{ GITEA_IAM_ENABLED | lower }}

View File

@@ -22,9 +22,15 @@ GITEA_LDAP_AUTH_ARGS:
- '--email-attribute "{{ LDAP.USER.ATTRIBUTES.MAIL }}"'
- '--public-ssh-key-attribute "{{ LDAP.USER.ATTRIBUTES.SSH_PUBLIC_KEY }}"'
- '--synchronize-users'
GITEA_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.gitea.version') }}"
GITEA_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.gitea.image') }}"
GITEA_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.gitea.name') }}"
GITEA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
GITEA_USER: "git"
GITEA_CONFIG: "/data/gitea/conf/app.ini"
GITEA_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.gitea.version') }}"
GITEA_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.gitea.image') }}"
GITEA_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.gitea.name') }}"
GITEA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
GITEA_USER: "git"
GITEA_CONFIG: "/data/gitea/conf/app.ini"
## Redis
GITEA_REDIS_ENABLED: "{{ applications | get_app_conf(application_id, 'docker.services.redis.enabled') }}"
GITEA_REDIS_ADDRESS: "redis:6379"
GITEA_IAM_ENABLED: "{{ applications | get_app_conf(application_id, 'features.oidc', False) or applications | get_app_conf(application_id, 'features.ldap', False) }}"

View File

@@ -1,7 +1,6 @@
load_dependencies: True # When set to false the dependencies aren't loaded. Helpful for developing
load_dependencies: True # When set to false the dependencies aren't loaded. Helpful for developing
actions:
import_realm: True # Import REALM
create_automation_client: True
import_realm: True # Import REALM
features:
matomo: true
css: true
@@ -50,4 +49,10 @@ docker:
credentials:
recaptcha:
website_key: "" # Required if you enabled recaptcha:
secret_key: "" # Required if you enabled recaptcha:
secret_key: "" # Required if you enabled recaptcha:
accounts:
bootstrap:
username: "administrator"
system:
username: "{{ SOFTWARE_NAME | replace('.', '_') | lower }}"

View File

@@ -0,0 +1,89 @@
- name: "Wait until '{{ KEYCLOAK_CONTAINER }}' container is healthy"
community.docker.docker_container_info:
name: "{{ KEYCLOAK_CONTAINER }}"
register: kc_info
retries: 60
delay: 5
until: >
kc_info is succeeded and
(kc_info.container | default({})) != {} and
(kc_info.container.State | default({})) != {} and
(kc_info.container.State.Health | default({})) != {} and
(kc_info.container.State.Health.Status | default('')) == 'healthy'
- name: Ensure permanent Keycloak admin exists and can log in (container env only)
block:
- name: Try login with permanent admin (uses container ENV)
shell: |
{{ KEYCLOAK_EXEC_CONTAINER }} sh -lc '
{{ KEYCLOAK_KCADM }} config credentials \
--server {{ KEYCLOAK_SERVER_INTERNAL_URL }} \
--realm master \
--user "$KEYCLOAK_PERMANENT_ADMIN_USERNAME" \
--password "$KEYCLOAK_PERMANENT_ADMIN_PASSWORD"
'
register: kc_login_perm
changed_when: false
rescue:
- name: Login with bootstrap admin (uses container ENV)
shell: |
{{ KEYCLOAK_EXEC_CONTAINER }} sh -lc '
{{ KEYCLOAK_KCADM }} config credentials \
--server {{ KEYCLOAK_SERVER_INTERNAL_URL }} \
--realm master \
--user "$KC_BOOTSTRAP_ADMIN_USERNAME" \
--password "$KC_BOOTSTRAP_ADMIN_PASSWORD"
'
register: kc_login_bootstrap
changed_when: false
- name: Ensure permanent admin user exists (create if missing)
shell: |
{{ KEYCLOAK_EXEC_CONTAINER }} sh -lc '
{{ KEYCLOAK_KCADM }} create users -r master \
-s "username=$KEYCLOAK_PERMANENT_ADMIN_USERNAME" \
-s "enabled=true"
'
register: kc_create_perm_admin
failed_when: >
not (
kc_create_perm_admin.rc == 0 or
(kc_create_perm_admin.stderr is defined and
('User exists with same username' in kc_create_perm_admin.stderr))
)
changed_when: kc_create_perm_admin.rc == 0
- name: Set permanent admin password (by username, no ID needed)
shell: |
{{ KEYCLOAK_EXEC_CONTAINER }} sh -lc '
{{ KEYCLOAK_KCADM }} set-password -r master \
--username "$KEYCLOAK_PERMANENT_ADMIN_USERNAME" \
--new-password "$KEYCLOAK_PERMANENT_ADMIN_PASSWORD"
'
changed_when: true
- name: Grant global admin via master realm role 'admin'
shell: |
{{ KEYCLOAK_EXEC_CONTAINER }} sh -lc '
{{ KEYCLOAK_KCADM }} add-roles -r master \
--uusername "$KEYCLOAK_PERMANENT_ADMIN_USERNAME" \
--rolename admin
'
register: kc_grant_master_admin
changed_when: (kc_grant_master_admin.stderr is defined and kc_grant_master_admin.stderr | length > 0) or
(kc_grant_master_admin.stdout is defined and kc_grant_master_admin.stdout | length > 0)
failed_when: false
- name: Verify login with permanent admin (after creation)
shell: |
{{ KEYCLOAK_EXEC_CONTAINER }} sh -lc '
{{ KEYCLOAK_KCADM }} config credentials \
--server {{ KEYCLOAK_SERVER_INTERNAL_URL }} \
--realm master \
--user "$KEYCLOAK_PERMANENT_ADMIN_USERNAME" \
--password "$KEYCLOAK_PERMANENT_ADMIN_PASSWORD"
'
changed_when: false

View File

@@ -1,63 +0,0 @@
# Creates a confidential client with service account, fetches the secret,
# and grants realm-management/realm-admin to its service-account user.
- name: "Ensure automation client exists (confidential + service accounts)"
shell: |
{{ KEYCLOAK_EXEC_KCADM }} create clients -r {{ KEYCLOAK_REALM }} \
-s clientId={{ KEYCLOAK_AUTOMATION_CLIENT_ID }} \
-s protocol=openid-connect \
-s publicClient=false \
-s serviceAccountsEnabled=true \
-s directAccessGrantsEnabled=false
register: create_client
changed_when: create_client.rc == 0
failed_when: create_client.rc != 0 and ('already exists' not in (create_client.stderr | lower))
- name: "Resolve automation client id"
shell: >
{{ KEYCLOAK_EXEC_KCADM }} get clients -r {{ KEYCLOAK_REALM }}
--query 'clientId={{ KEYCLOAK_AUTOMATION_CLIENT_ID }}' --fields id --format json | jq -r '.[0].id'
register: auto_client_id_cmd
changed_when: false
- name: "Fail if client id could not be resolved"
assert:
that:
- "(auto_client_id_cmd.stdout | trim) is match('^[0-9a-f-]+$')"
fail_msg: "Automation client id could not be resolved."
- name: "Read client secret"
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
shell: >
{{ KEYCLOAK_EXEC_KCADM }} get clients/{{ auto_client_id_cmd.stdout | trim }}/client-secret
-r {{ KEYCLOAK_REALM }} --format json | jq -r .value
register: auto_client_secret_cmd
changed_when: false
- name: "Expose client secret as a fact"
set_fact:
KEYCLOAK_AUTOMATION_CLIENT_SECRET: "{{ auto_client_secret_cmd.stdout | trim }}"
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
- name: "Grant {{ KEYCLOAK_AUTOMATION_GRANT_ROLE }} to service account"
shell: >
{{ KEYCLOAK_EXEC_KCADM }} add-roles -r {{ KEYCLOAK_REALM }}
--uusername service-account-{{ KEYCLOAK_AUTOMATION_CLIENT_ID }}
--cclientid realm-management
--rolename {{ KEYCLOAK_AUTOMATION_GRANT_ROLE }}
register: grant_role
changed_when: grant_role.rc == 0
failed_when: grant_role.rc != 0 and ('already exists' not in (grant_role.stderr | lower))
- name: "Verify client-credentials login works"
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
shell: >
{{ KEYCLOAK_EXEC_KCADM }} config credentials
--server {{ KEYCLOAK_SERVER_INTERNAL_URL }}
--realm {{ KEYCLOAK_REALM }}
--client {{ KEYCLOAK_AUTOMATION_CLIENT_ID }}
--client-secret {{ KEYCLOAK_AUTOMATION_CLIENT_SECRET }} &&
{{ KEYCLOAK_EXEC_KCADM }} get realms/{{ KEYCLOAK_REALM }} --format json | jq -r '.realm'
register: verify_cc
changed_when: false
failed_when: (verify_cc.rc != 0) or ((verify_cc.stdout | trim) != (KEYCLOAK_REALM | trim))
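The `jq -r '.[0].id'` lookup in the tasks above can be sketched in plain Python to show the intent: parse the kcadm client list and take the first entry's id, failing loudly when nothing resolves. The function name and sample payload here are illustrative, not part of the role:

```python
import json

def resolve_client_id(kcadm_json: str) -> str:
    """Mimic `jq -r '.[0].id'`: return the id of the first client entry."""
    clients = json.loads(kcadm_json)
    if not clients or "id" not in clients[0]:
        raise ValueError("Automation client id could not be resolved.")
    return clients[0]["id"]

# Payload shaped like `kcadm.sh get clients --fields id --format json`
payload = '[{"id": "3f2a1b4c-0000-4000-8000-000000000000"}]'
print(resolve_client_id(payload))
```

The assert step in the playbook plays the same role as the `ValueError` here: it stops the run before an empty id is interpolated into later `clients/{id}` API paths.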

View File

@@ -13,118 +13,21 @@
include_tasks: 04_dependencies.yml
when: KEYCLOAK_LOAD_DEPENDENCIES | bool
- name: "Wait until '{{ KEYCLOAK_CONTAINER }}' container is healthy"
community.docker.docker_container_info:
name: "{{ KEYCLOAK_CONTAINER }}"
register: kc_info
retries: 60
delay: 5
until: >
kc_info is succeeded and
(kc_info.container | default({})) != {} and
(kc_info.container.State | default({})) != {} and
(kc_info.container.State.Health | default({})) != {} and
(kc_info.container.State.Health.Status | default('')) == 'healthy'
- name: "Load Login routines for '{{ application_id }}'"
include_tasks: 05_login.yml
- name: kcadm login (master)
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
shell: >
{{ KEYCLOAK_EXEC_KCADM }} config credentials
--server {{ KEYCLOAK_SERVER_INTERNAL_URL }}
--realm master
--user {{ KEYCLOAK_MASTER_API_USER_NAME }}
--password {{ KEYCLOAK_MASTER_API_USER_PASSWORD }}
changed_when: false
- name: "Load Client Update routines for '{{ application_id }}'"
include_tasks: update/01_client.yml
- name: Verify kcadm session works (quick read)
shell: >
{{ KEYCLOAK_EXEC_KCADM }} get realms --format json | jq -r '.[0].realm' | head -n1
register: kcadm_verify
changed_when: false
failed_when: >
(kcadm_verify.rc != 0)
or ('HTTP 401' in (kcadm_verify.stderr | default('')))
or ((kcadm_verify.stdout | trim) == '')
# --- Create & grant automation service account (Option A) ---
- name: "Ensure automation service account client (Option A)"
include_tasks: 05a_service_account.yml
when: applications | get_app_conf(application_id, 'actions.create_automation_client', True)
- name: "Load Mail Update routines for '{{ application_id }} - {{ KEYCLOAK_REALM }}'"
include_tasks: update/02_mail_realm.yml
# --- Switch session to the service account for all subsequent API work ---
- name: kcadm login (realm) using service account
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
shell: >
{{ KEYCLOAK_EXEC_KCADM }} config credentials
--server {{ KEYCLOAK_SERVER_INTERNAL_URL }}
--realm {{ KEYCLOAK_REALM }}
--client {{ KEYCLOAK_AUTOMATION_CLIENT_ID }}
--client-secret {{ KEYCLOAK_AUTOMATION_CLIENT_SECRET }}
changed_when: false
- name: "Load Mail Update routines for '{{ application_id }} - master'"
include_tasks: update/03_mail_master.yml
- name: Verify kcadm session works (exact realm via service account)
shell: >
{{ KEYCLOAK_EXEC_KCADM }} get realms/{{ KEYCLOAK_REALM }} --format json | jq -r '.realm'
register: kcadm_verify_sa
changed_when: false
failed_when: >
(kcadm_verify_sa.rc != 0)
or ('HTTP 401' in (kcadm_verify_sa.stderr | default('')))
or ((kcadm_verify_sa.stdout | trim) != (KEYCLOAK_REALM | trim))
- name: "Load RBAC Update routines for '{{ application_id }}'"
include_tasks: update/04_rbac_client_scope.yml
- name: "Update Client settings"
vars:
kc_object_kind: "client"
kc_lookup_value: "{{ KEYCLOAK_CLIENT_ID }}"
kc_desired: >-
{{
KEYCLOAK_DICTIONARY_REALM.clients
| selectattr('clientId','equalto', KEYCLOAK_CLIENT_ID)
| list | first
}}
kc_force_attrs:
publicClient: >-
{{
(KEYCLOAK_DICTIONARY_REALM.clients
| selectattr('clientId','equalto', KEYCLOAK_CLIENT_ID)
| map(attribute='publicClient')
| first)
}}
serviceAccountsEnabled: >-
{{
(KEYCLOAK_DICTIONARY_REALM.clients
| selectattr('clientId','equalto', KEYCLOAK_CLIENT_ID)
| map(attribute='serviceAccountsEnabled')
| first )
}}
frontchannelLogout: >-
{{
(KEYCLOAK_DICTIONARY_REALM.clients
| selectattr('clientId','equalto', KEYCLOAK_CLIENT_ID)
| map(attribute='frontchannelLogout')
| first)
}}
attributes: >-
{{
( (KEYCLOAK_DICTIONARY_REALM.clients
| selectattr('clientId','equalto', KEYCLOAK_CLIENT_ID)
| list | first | default({}) ).attributes | default({}) )
| combine({'frontchannel.logout.url': KEYCLOAK_FRONTCHANNEL_LOGOUT_URL}, recursive=True)
}}
include_tasks: _update.yml
- name: "Update REALM mail settings from realm dictionary (SPOT)"
include_tasks: _update.yml
vars:
kc_object_kind: "realm"
kc_lookup_field: "id"
kc_lookup_value: "{{ KEYCLOAK_REALM }}"
kc_desired:
smtpServer: "{{ KEYCLOAK_DICTIONARY_REALM.smtpServer | default({}, true) }}"
kc_merge_path: "smtpServer"
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
- include_tasks: 05_rbac_client_scope.yml
- include_tasks: 06_ldap.yml
- name: "Load LDAP Update routines for '{{ application_id }}'"
include_tasks: update/05_ldap.yml
when: KEYCLOAK_LDAP_ENABLED | bool

View File

@@ -0,0 +1,40 @@
- name: "Update Client settings"
vars:
kc_object_kind: "client"
kc_lookup_value: "{{ KEYCLOAK_CLIENT_ID }}"
kc_desired: >-
{{
KEYCLOAK_DICTIONARY_REALM.clients
| selectattr('clientId','equalto', KEYCLOAK_CLIENT_ID)
| list | first
}}
kc_force_attrs:
publicClient: >-
{{
(KEYCLOAK_DICTIONARY_REALM.clients
| selectattr('clientId','equalto', KEYCLOAK_CLIENT_ID)
| map(attribute='publicClient')
| first)
}}
serviceAccountsEnabled: >-
{{
(KEYCLOAK_DICTIONARY_REALM.clients
| selectattr('clientId','equalto', KEYCLOAK_CLIENT_ID)
| map(attribute='serviceAccountsEnabled')
| first )
}}
frontchannelLogout: >-
{{
(KEYCLOAK_DICTIONARY_REALM.clients
| selectattr('clientId','equalto', KEYCLOAK_CLIENT_ID)
| map(attribute='frontchannelLogout')
| first)
}}
attributes: >-
{{
( (KEYCLOAK_DICTIONARY_REALM.clients
| selectattr('clientId','equalto', KEYCLOAK_CLIENT_ID)
| list | first | default({}) ).attributes | default({}) )
| combine({'frontchannel.logout.url': KEYCLOAK_FRONTCHANNEL_LOGOUT_URL}, recursive=True)
}}
include_tasks: _update.yml

View File

@@ -0,0 +1,10 @@
- name: "Update {{ KEYCLOAK_REALM }} REALM mail settings from realm dictionary"
include_tasks: _update.yml
vars:
kc_object_kind: "realm"
kc_lookup_field: "id"
kc_lookup_value: "{{ KEYCLOAK_REALM }}"
kc_desired:
smtpServer: "{{ KEYCLOAK_DICTIONARY_REALM.smtpServer | default({}, true) }}"
kc_merge_path: "smtpServer"
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"

View File

@@ -0,0 +1,10 @@
- name: "Update Master REALM mail settings from realm dictionary"
include_tasks: _update.yml
vars:
kc_object_kind: "realm"
kc_lookup_field: "id"
kc_lookup_value: "master"
kc_desired:
smtpServer: "{{ KEYCLOAK_DICTIONARY_REALM.smtpServer | default({}, true) }}"
kc_merge_path: "smtpServer"
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"

View File

@@ -1,4 +1,3 @@
# --- Ensure RBAC client scope exists (idempotent) ---
- name: Ensure RBAC client scope exists
shell: |
cat <<'JSON' | {{ KEYCLOAK_EXEC_KCADM }} create client-scopes -r {{ KEYCLOAK_REALM }} -f -
@@ -16,12 +15,10 @@
('already exists' not in (create_rbac_scope.stderr | lower))
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
# --- Get the scope id we will attach to the client ---
- name: Get all client scopes
shell: "{{ KEYCLOAK_EXEC_KCADM }} get client-scopes -r {{ KEYCLOAK_REALM }} --format json"
register: all_scopes
changed_when: false
failed_when: "'HTTP 401' in (all_scopes.stderr | default(''))"
- name: Extract RBAC scope id
set_fact:

View File

@@ -10,19 +10,21 @@ KC_HTTP_ENABLED= true
KC_HEALTH_ENABLED= {{ KEYCLOAK_HEALTH_ENABLED | lower }}
KC_METRICS_ENABLED= true
# Administrator
KEYCLOAK_ADMIN= "{{ KEYCLOAK_ADMIN }}"
KEYCLOAK_ADMIN_PASSWORD= "{{ KEYCLOAK_ADMIN_PASSWORD }}"
# Database
KC_DB= {{ database_type }}
KC_DB_URL= {{ database_url_jdbc }}
KC_DB_USERNAME= {{ database_username }}
KC_DB_PASSWORD= {{ database_password }}
# If the initial administrator already exists while these environment variables are still set at startup, the logs show an error about the failed creation of the initial administrator; Keycloak ignores the values and starts up correctly.
KC_BOOTSTRAP_ADMIN_USERNAME= "{{ KEYCLOAK_ADMIN }}"
KC_BOOTSTRAP_ADMIN_PASSWORD= "{{ KEYCLOAK_ADMIN_PASSWORD }}"
# Credentials
## Bootstrap
KC_BOOTSTRAP_ADMIN_USERNAME="{{ KEYCLOAK_BOOTSTRAP_ADMIN_USERNAME }}"
KC_BOOTSTRAP_ADMIN_PASSWORD="{{ KEYCLOAK_BOOTSTRAP_ADMIN_PASSWORD }}"
## Permanent
KEYCLOAK_PERMANENT_ADMIN_USERNAME="{{ KEYCLOAK_PERMANENT_ADMIN_USERNAME }}"
KEYCLOAK_PERMANENT_ADMIN_PASSWORD="{{ KEYCLOAK_PERMANENT_ADMIN_PASSWORD }}"
# Enable detailed logs
{% if MODE_DEBUG | bool %}

View File

@@ -1,3 +0,0 @@
users:
administrator:
username: "administrator"

View File

@@ -1,6 +1,6 @@
# General
application_id: "web-app-keycloak" # Internal Infinito.Nexus application id
database_type: "postgres" # Database which will be used
application_id: "web-app-keycloak" # Internal Infinito.Nexus application id
database_type: "postgres" # Database which will be used
# Keycloak
@@ -29,21 +29,22 @@ KEYCLOAK_REALM_IMPORT_FILE_SRC: "import/realm.json.j2"
KEYCLOAK_REALM_IMPORT_FILE_DST: "{{ [KEYCLOAK_REALM_IMPORT_DIR_HOST,'realm.json'] | path_join }}"
## Credentials
KEYCLOAK_ADMIN: "{{ applications | get_app_conf(application_id, 'users.administrator.username') }}"
KEYCLOAK_ADMIN_PASSWORD: "{{ applications | get_app_conf(application_id, 'credentials.administrator_password') }}"
### Bootstrap
KEYCLOAK_BOOTSTRAP_ADMIN_USERNAME: "{{ applications | get_app_conf(application_id, 'accounts.bootstrap.username') }}"
KEYCLOAK_BOOTSTRAP_ADMIN_PASSWORD: "{{ applications | get_app_conf(application_id, 'credentials.administrator_password') }}"
### Permanent
KEYCLOAK_PERMANENT_ADMIN_USERNAME: "{{ applications | get_app_conf(application_id, 'accounts.system.username') }}"
KEYCLOAK_PERMANENT_ADMIN_PASSWORD: "{{ applications | get_app_conf(application_id, 'credentials.administrator_password') }}"
## Docker
KEYCLOAK_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.keycloak.name') }}" # Name of the keycloak docker container
KEYCLOAK_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.keycloak.image') }}" # Keycloak docker image
KEYCLOAK_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.keycloak.version') }}" # Keycloak docker version
KEYCLOAK_KCADM_CONFIG: "/opt/keycloak/data/kcadm.config"
KEYCLOAK_EXEC_KCADM: "docker exec -i {{ KEYCLOAK_CONTAINER }} /opt/keycloak/bin/kcadm.sh --config {{ KEYCLOAK_KCADM_CONFIG }}"
## Automation Service Account (Option A)
KEYCLOAK_AUTOMATION_CLIENT_ID: "infinito-automation"
KEYCLOAK_AUTOMATION_GRANT_ROLE: "realm-admin" # or granular roles if you prefer
# Will be discovered dynamically and set as a fact during the run:
# KEYCLOAK_AUTOMATION_CLIENT_SECRET
KEYCLOAK_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.keycloak.name') }}"
KEYCLOAK_EXEC_CONTAINER: "docker exec -i {{ KEYCLOAK_CONTAINER }}"
KEYCLOAK_KCADM: "/opt/keycloak/bin/kcadm.sh"
KEYCLOAK_EXEC_KCADM: "{{ KEYCLOAK_EXEC_CONTAINER }} {{ KEYCLOAK_KCADM }}"
KEYCLOAK_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.keycloak.image') }}"
KEYCLOAK_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.keycloak.version') }}"
## Server
KEYCLOAK_SERVER_HOST: "127.0.0.1:{{ ports.localhost.http[application_id] }}"
@@ -76,11 +77,6 @@ KEYCLOAK_LDAP_USER_OBJECT_CLASSES: >
) | join(', ')
}}
## API
KEYCLOAK_MASTER_API_USER: "{{ applications | get_app_conf(application_id, 'users.administrator') }}" # Master Administrator
KEYCLOAK_MASTER_API_USER_NAME: "{{ KEYCLOAK_MASTER_API_USER.username }}" # Master Administrator Username
KEYCLOAK_MASTER_API_USER_PASSWORD: "{{ KEYCLOAK_MASTER_API_USER.password }}" # Master Administrator Password
# Dictionaries
KEYCLOAK_DICTIONARY_REALM_RAW: "{{ lookup('template', 'import/realm.json.j2') }}"
KEYCLOAK_DICTIONARY_REALM: >-

View File

@@ -17,12 +17,12 @@ server:
csp:
flags:
style-src:
unsafe-inline: true
unsafe-inline: true
script-src-elem:
unsafe-inline: true
unsafe-inline: true
script-src:
unsafe-inline: true
unsafe-eval: true
unsafe-inline: true
unsafe-eval: true
rbac:
roles:
mail-bot:

View File

@@ -41,7 +41,7 @@
meta: flush_handlers
- name: "Create Mailu accounts"
include_tasks: 02_create-user.yml
include_tasks: 02_manage_user.yml
vars:
MAILU_DOCKER_DIR: "{{ docker_compose.directories.instance }}"
mailu_api_base_url: "http://127.0.0.1:8080/api/v1"
@@ -55,7 +55,8 @@
mailu_user_key: "{{ item.key }}"
mailu_user_name: "{{ item.value.username }}"
mailu_password: "{{ item.value.password }}"
mailu_token_ip: "{{ item.value.ip | default('') }}"
mailu_token_ip: "{{ item.value.ip | default(networks.internet.ip4) }}"
mailu_token_name: "{{ SOFTWARE_NAME ~ ' Token for ' ~ item.value.username }}"
loop: "{{ users | dict2items }}"
loop_control:
loop_var: item
@@ -66,3 +67,5 @@
- name: Set Mailu DNS records
include_tasks: 05_dns-records.yml
- include_tasks: utils/run_once.yml

View File

@@ -25,5 +25,5 @@
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
- name: "Create Mailu API Token for {{ mailu_user_name }}"
include_tasks: 03_create-token.yml
when: "{{ 'mail-bot' in item.value.roles }}"
include_tasks: 03a_manage_user_token.yml
when: "'mail-bot' in item.value.roles"

View File

@@ -0,0 +1,26 @@
- name: "Fetch existing API tokens via curl inside admin container"
command: >-
{{ docker_compose_command_exec }} -T admin \
curl -s -X GET {{ mailu_api_base_url }}/token \
-H "Authorization: Bearer {{ MAILU_API_TOKEN }}"
args:
chdir: "{{ MAILU_DOCKER_DIR }}"
register: mailu_tokens_cli
changed_when: false
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
- name: "Extract existing token info for '{{ mailu_user_key }};{{ mailu_user_name }}'"
set_fact:
mailu_user_existing_token: >-
{{ (
mailu_tokens_cli.stdout
| default('[]')
| from_json
| selectattr('comment','equalto', mailu_token_name)
| list
).0 | default(None) }}
- name: "Start Mailu token procedures for undefined tokens"
when: users[mailu_user_key].mailu_token is not defined
include_tasks: 03b_create_user_token.yml
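The `selectattr('comment','equalto', mailu_token_name) | list).0 | default(None)` chain above boils down to "find the first token whose comment matches, else None". A minimal Python sketch of that lookup (the function name and payload are illustrative):

```python
import json

def find_token(tokens_json: str, token_name: str):
    """First token whose 'comment' equals token_name, else None."""
    tokens = json.loads(tokens_json or "[]")
    matches = [t for t in tokens if t.get("comment") == token_name]
    return matches[0] if matches else None

tokens = '[{"id": 7, "comment": "Infinito Token for alice"}]'
print(find_token(tokens, "Infinito Token for alice"))
```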

View File

@@ -1,26 +1,3 @@
- name: "Fetch existing API tokens via curl inside admin container"
command: >-
{{ docker_compose_command_exec }} -T admin \
curl -s -X GET {{ mailu_api_base_url }}/token \
-H "Authorization: Bearer {{ MAILU_API_TOKEN }}"
args:
chdir: "{{ MAILU_DOCKER_DIR }}"
register: mailu_tokens_cli
changed_when: false
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
- name: "Extract existing token info for '{{ mailu_user_key }};{{ mailu_user_name }}'"
set_fact:
mailu_user_existing_token: >-
{{ (
mailu_tokens_cli.stdout
| default('[]')
| from_json
| selectattr('comment','equalto', mailu_user_key ~ " - ansible.infinito")
| list
).0 | default(None) }}
- name: "Delete existing API token for '{{ mailu_user_key }};{{ mailu_user_name }}' if local token missing but remote exists"
command: >-
{{ docker_compose_command_exec }} -T admin \
@@ -29,7 +6,6 @@
args:
chdir: "{{ MAILU_DOCKER_DIR }}"
when:
- users[mailu_user_key].mailu_token is not defined
- mailu_user_existing_token is not none
- mailu_user_existing_token.id is defined
register: mailu_token_delete
@@ -43,13 +19,12 @@
-H "Authorization: Bearer {{ MAILU_API_TOKEN }}"
-H "Content-Type: application/json"
-d '{{ {
"comment": mailu_user_key ~ " - ansible.infinito",
"comment": mailu_token_name,
"email": users[mailu_user_key].email,
"ip": mailu_token_ip
} | to_json }}'
args:
chdir: "{{ MAILU_DOCKER_DIR }}"
when: users[mailu_user_key].mailu_token is not defined
register: mailu_token_creation
# If curl sees 4xx/5xx it returns non-zero due to -f → fail the task.
failed_when:
@@ -57,7 +32,7 @@
# Fallback: if some gateway returns 200 but embeds an error JSON.
- mailu_token_creation.rc == 0 and
(mailu_token_creation.stdout is search('"code"\\s*:\\s*4\\d\\d') or
mailu_token_creation.stdout is search('cannot be found'))
mailu_token_creation.stdout is search('cannot be found'))
# Only mark changed when a token is actually present in the JSON.
changed_when: mailu_token_creation.stdout is search('"token"\\s*:')
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
@@ -66,14 +41,25 @@
set_fact:
users: >-
{{ users
| combine({
mailu_user_key: (
users[mailu_user_key]
| combine({
'mailu_token': (mailu_token_creation.stdout | from_json).token
})
)
}, recursive=True)
| combine({
mailu_user_key: (
users[mailu_user_key]
| combine({
'mailu_token': (mailu_token_creation.stdout | from_json).token
})
)
}, recursive=True)
}}
when: users[mailu_user_key].mailu_token is not defined
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
- name: "Reset MSMTP Configuration if No-Reply User Token changed"
when: users['no-reply'].username == mailu_user_name
block:
- name: "Set MSMTP run-once fact false"
set_fact:
run_once_sys_svc_msmtp: false
changed_when: false
- name: Reload MSMTP role
include_role:
name: "sys-svc-msmtp"

View File

@@ -1,5 +1,3 @@
---
- block:
- include_tasks: 01_core.yml
- include_tasks: utils/run_once.yml
- include_tasks: 01_core.yml
when: run_once_web_app_mailu is not defined

View File

@@ -0,0 +1,19 @@
- name: Check health status of '{{ item }}' container
shell: |
cid=$(docker compose ps -q {{ item }})
docker inspect \
--format '{{ "{{.State.Health.Status}}" }}' \
$cid
args:
chdir: "{{ docker_compose.directories.instance }}"
register: healthcheck
retries: 60
delay: 5
until: healthcheck.stdout == "healthy"
loop:
- mastodon
- streaming
- sidekiq
loop_control:
label: "{{ item }}"
changed_when: false
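The `retries`/`delay`/`until` pattern above polls `docker inspect` until the health status reads `healthy`. The same control flow, sketched with an injectable probe so it can run without Docker (all names here are hypothetical):

```python
import time

def wait_until_healthy(probe, retries=60, delay=0):
    """Poll probe() until it returns 'healthy', like Ansible retries/until."""
    for _ in range(retries):
        if probe() == "healthy":
            return True
        time.sleep(delay)
    return False

# Simulated container that becomes healthy on the third probe
states = iter(["starting", "starting", "healthy"])
print(wait_until_healthy(lambda: next(states), retries=5))
```

With `retries: 60` and `delay: 5` as in the task, the loop above allows up to roughly five minutes for each container to report healthy.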

View File

@@ -0,0 +1,9 @@
---
# Cleanup routine for Mastodon
# Removes cached remote media older than 14 days when MODE_CLEANUP is enabled.
- name: "Cleanup Mastodon media cache older than 14 days"
command:
cmd: "docker exec -u root {{ MASTODON_CONTAINER }} bin/tootctl media remove --days=14"
register: mastodon_cleanup
changed_when: mastodon_cleanup.rc == 0
failed_when: mastodon_cleanup.rc != 0

View File

@@ -1,6 +1,3 @@
- name: "Execute migration for '{{ application_id }}'"
command:
cmd: "docker exec {{ MASTODON_CONTAINER }} bundle exec rails db:migrate"
- name: "Include administrator routines for '{{ application_id }}'"
include_tasks: 02_administrator.yml

View File

@@ -1,26 +1,5 @@
# Routines to create the administrator account
# @see https://chatgpt.com/share/67b9b12c-064c-800f-9354-8e42e6459764
- name: Check health status of '{{ item }}' container
shell: |
cid=$(docker compose ps -q {{ item }})
docker inspect \
--format '{{ "{{.State.Health.Status}}" }}' \
$cid
args:
chdir: "{{ docker_compose.directories.instance }}"
register: healthcheck
retries: 60
delay: 5
until: healthcheck.stdout == "healthy"
loop:
- mastodon
- streaming
- sidekiq
loop_control:
label: "{{ item }}"
changed_when: false
- name: Remove line containing "- administrator" from config/settings.yml to allow creating administrator account
command:
cmd: "docker exec -u root {{ MASTODON_CONTAINER }} sed -i '/- administrator/d' config/settings.yml"

View File

@@ -18,5 +18,15 @@
vars:
docker_compose_flush_handlers: true
- name: "Wait for Mastodon"
include_tasks: 01_wait.yml
- name: "Cleanup Mastodon caches when MODE_CLEANUP is true"
include_tasks: 02_cleanup.yml
when: MODE_CLEANUP | bool
- name: "start setup procedures for mastodon"
include_tasks: 01_setup.yml
include_tasks: 03_setup.yml
- name: "Include administrator routines for '{{ application_id }}'"
include_tasks: 04_administrator.yml

View File

@@ -1,8 +1,8 @@
features:
matomo: true
css: true
matomo: true
css: true
desktop: true
logout: false
logout: false
server:
csp:
whitelist:
@@ -16,14 +16,15 @@ server:
font-src:
- https://cdnjs.cloudflare.com
frame-src:
- "{{ WEB_PROTOCOL }}://*.{{ PRIMARY_DOMAIN }}" # Makes sense that all of the website content is available in the navigator
# Makes sense that all of the website content is available in the navigator
- "{{ WEB_PROTOCOL }}://*.{{ PRIMARY_DOMAIN }}"
flags:
style-src:
unsafe-inline: true
unsafe-inline: true
script-src:
unsafe-eval: true
unsafe-eval: true
script-src-elem:
unsafe-inline: true
unsafe-inline: true
domains:
canonical:
- "slides.{{ PRIMARY_DOMAIN }}"

View File

@@ -1,8 +1,8 @@
galaxy_info:
author: "Kevin Veen-Birkenbach"
description: "An interactive presentation platform focused on guiding end-users through the practical use of the Infinito.Nexus software. Designed to demonstrate features, workflows, and real-world applications for Administrators, Developers, End-Users, Businesses, and Investors."
license: "Infinito.Nexus NonCommercial License"
license_url: "https://s.infinito.nexus/license"
author: "Kevin Veen-Birkenbach"
description: "An interactive presentation platform focused on guiding end-users through the practical use of the Infinito.Nexus software. Designed to demonstrate features, workflows, and real-world applications for Administrators, Developers, End-Users, Businesses, and Investors."
license: "Infinito.Nexus NonCommercial License"
license_url: "https://s.infinito.nexus/license"
company: |
Kevin Veen-Birkenbach
Consulting & Coaching Solutions

View File

@@ -13,4 +13,3 @@
{% include 'roles/docker-container/templates/networks.yml.j2' %}
{% include 'roles/docker-compose/templates/networks.yml.j2' %}

View File

@@ -91,7 +91,7 @@ docker:
mem_reservation: "128m"
mem_limit: "512m"
pids_limit: 256
enabled: "{{ applications | get_app_conf('web-app-nextcloud', 'features.oidc', False) }}" # Activate OIDC for Nextcloud
enabled: "{{ applications | get_app_conf('web-app-nextcloud', 'features.oidc', False, True, True) }}" # Activate OIDC for Nextcloud
# flavor decides which OIDC plugin should be used.
# Available options: oidc_login, sociallogin
# @see https://apps.nextcloud.com/apps/oidc_login
@@ -194,7 +194,7 @@ plugins:
enabled: false
fileslibreofficeedit:
# Nextcloud LibreOffice integration: allows online editing of documents with LibreOffice (https://apps.nextcloud.com/apps/fileslibreofficeedit)
enabled: "{{ not (applications | get_app_conf('web-app-nextcloud', 'plugins.richdocuments.enabled', False, True)) }}"
enabled: "{{ not (applications | get_app_conf('web-app-nextcloud', 'plugins.richdocuments.enabled', False, True, True)) }}"
forms:
# Nextcloud forms: facilitates creation of forms and surveys (https://apps.nextcloud.com/apps/forms)
enabled: true
@@ -292,13 +292,13 @@ plugins:
# enabled: false
twofactor_nextcloud_notification:
# Nextcloud two-factor notification: sends notifications for two-factor authentication events (https://apps.nextcloud.com/apps/twofactor_nextcloud_notification)
enabled: "{{ not applications | get_app_conf('web-app-nextcloud', 'features.oidc', False, True) }}" # Deactivate 2FA if oidc is active
enabled: "{{ not applications | get_app_conf('web-app-nextcloud', 'features.oidc', False, True, True) }}" # Deactivate 2FA if oidc is active
twofactor_totp:
# Nextcloud two-factor TOTP: provides time-based one-time password authentication (https://apps.nextcloud.com/apps/twofactor_totp)
enabled: "{{ not applications | get_app_conf('web-app-nextcloud', 'features.oidc', False, True) }}" # Deactivate 2FA if oidc is active
enabled: "{{ not applications | get_app_conf('web-app-nextcloud', 'features.oidc', False, True, True) }}" # Deactivate 2FA if oidc is active
user_ldap:
# Nextcloud user LDAP: integrates LDAP for user management and authentication (https://apps.nextcloud.com/apps/user_ldap)
enabled: "{{ applications | get_app_conf('web-app-nextcloud', 'features.ldap', False, True) }}"
enabled: "{{ applications | get_app_conf('web-app-nextcloud', 'features.ldap', False, True, True) }}"
user_directory:
enabled: true # Enables the LDAP User Directory Search
user_oidc:

View File

@@ -1,11 +1,18 @@
{% if applications | get_app_conf(application_id, 'features.oauth2', False) %}
oauth2-proxy:
image: quay.io/oauth2-proxy/oauth2-proxy:{{ applications['web-app-oauth2-proxy'].version}}
image: quay.io/oauth2-proxy/oauth2-proxy:{{ applications['web-app-oauth2-proxy'].version }}
restart: {{ DOCKER_RESTART_POLICY }}
command: --config /oauth2-proxy.cfg
container_name: {{ application_id | get_entity_name }}-oauth2-proxy
hostname: oauth2-proxy
ports:
- 127.0.0.1:{{ ports.localhost.oauth2_proxy[application_id] }}:4180/tcp
volumes:
- "{{ docker_compose.directories.volumes }}{{ applications | get_app_conf('web-app-oauth2-proxy','configuration_file')}}:/oauth2-proxy.cfg"
healthcheck:
test: ["CMD", "/bin/oauth2-proxy", "--version"]
interval: 30s
timeout: 5s
retries: 1
start_period: 5s
{% endif %}

View File

@@ -17,9 +17,13 @@ docker:
database:
enabled: false
collabora:
image: collabora/code
version: latest
name: collabora
image: collabora/code
version: latest
name: collabora
cpus: 2
mem_reservation: 1g
mem_limit: 2g
pids_limit: 2048
features:
logout: false
desktop: true # Just set to allow the iframe to load it

View File

@@ -4,6 +4,15 @@
{% include 'roles/docker-container/templates/base.yml.j2' %}
image: "{{ COLLABORA_IMAGE }}:{{ COLLABORA_VERSION }}"
container_name: {{ COLLABORA_CONTAINER }}
security_opt:
- seccomp=unconfined
- apparmor=unconfined
cap_add:
- MKNOD
- SYS_CHROOT
- SETUID
- SETGID
- FOWNER
ports:
- "127.0.0.1:{{ ports.localhost.http[application_id] }}:{{ container_port }}"
{% include 'roles/docker-container/templates/healthcheck/curl.yml.j2' %}

View File

@@ -1,9 +1,9 @@
features:
matomo: true
css: true
desktop: true
javascript: false
logout: false
matomo: true
css: true
desktop: true
javascript: false
logout: false
server:
domains:
canonical:
@@ -19,10 +19,11 @@ server:
connect-src:
- "{{ WEB_PROTOCOL }}://*.{{ PRIMARY_DOMAIN }}"
- "{{ WEB_PROTOCOL }}://{{ PRIMARY_DOMAIN }}"
- "https://cdn.jsdelivr.net"
script-src-elem:
- https://cdn.jsdelivr.net
- "https://cdn.jsdelivr.net"
style-src-elem:
- https://cdn.jsdelivr.net
- "https://cdn.jsdelivr.net"
frame-ancestors:
- "{{ WEB_PROTOCOL }}://<< defaults_applications[web-app-keycloak].server.domains.canonical[0] >>"

View File

@@ -21,11 +21,6 @@
- name: "load docker, proxy for '{{ application_id }}'"
include_role:
name: sys-stk-full-stateless
vars:
aca_origin: "'{{ domains | get_url('web-svc-logout', WEB_PROTOCOL) }}' always"
aca_credentials: "'true' always"
aca_methods: "'GET, OPTIONS' always"
aca_headers: "'Accept, Authorization' always"
- name: Create symbolic link from .env file to repository
file:

View File

@@ -8,7 +8,11 @@ location = /logout {
proxy_http_version 1.1;
{# CORS headers allow your central page to call this #}
{% include 'roles/sys-svc-proxy/templates/headers/access_control_allow.conf.j2' %}
{%- set aca_origin = "'{{ domains | get_url('web-svc-logout', WEB_PROTOCOL) }}' always" -%}
{%- set aca_credentials = "'true' always" -%}
{%- set aca_methods = "'GET, OPTIONS' always" -%}
{%- set aca_headers = "'Accept, Authorization' always" -%}
{%- include 'roles/sys-svc-proxy/templates/headers/access_control_allow.conf.j2' -%}
{# Disable caching absolutely #}
add_header Cache-Control "no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0" always;

View File

@@ -16,6 +16,10 @@
users: "{{ default_users | combine(users| default({}), recursive=True) }}"
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
- name: Merge networks definitions
set_fact:
networks: "{{ defaults_networks | combine(networks | default({}, true), recursive=True) }}"
- name: Merge application definitions
set_fact:
applications: "{{ defaults_applications | merge_with_defaults(applications | default({}, true)) }}"
@@ -92,10 +96,6 @@
)) |
generate_all_domains(WWW_REDIRECT_ENABLED | bool)
}}
- name: Merge networks definitions
set_fact:
networks: "{{ defaults_networks | combine(networks | default({}, true), recursive=True) }}"
- name: Merge OIDC configuration
set_fact:
@@ -120,6 +120,10 @@
name: update-compose
when: MODE_UPDATE | bool
- name: "Ensure correct timezone is '{{ HOST_TIMEZONE }}'"
community.general.timezone:
name: "{{ HOST_TIMEZONE }}"
- name: "Load base roles"
include_tasks: "./tasks/groups/{{ item }}-roles.yml"
loop:
@@ -128,6 +132,7 @@
- svc-net # 3. Load network roles
- svc-db # 4. Load database roles
- svc-prx # 5. Load proxy roles
- svc-ai # 6. Load ai roles
- svc-ai # 6. Load AI roles
- svc-bkp # 7. Load Backup Roles
loop_control:
label: "{{ item }}-roles.yml"

View File

@@ -28,14 +28,17 @@ BUILTIN_FILTERS: Set[str] = {
"int", "join", "last", "length", "list", "lower", "map", "min", "max", "random",
"reject", "rejectattr", "replace", "reverse", "round", "safe", "select",
"selectattr", "slice", "sort", "string", "striptags", "sum", "title", "trim",
"truncate", "unique", "upper", "urlencode", "urlize", "wordcount", "xmlattr",
"truncate", "unique", "upper", "urlencode", "urlize", "wordcount", "xmlattr","contains",
# Common Ansible filters (subset, extend as needed)
"b64decode", "b64encode", "basename", "dirname", "from_json", "to_json",
"from_yaml", "to_yaml", "combine", "difference", "intersect",
"flatten", "zip", "regex_search", "regex_replace", "bool",
"type_debug", "json_query", "mandatory", "hash", "checksum",
"lower", "upper", "capitalize", "unique", "dict2items", "items2dict", "password_hash", "path_join", "product", "quote", "split", "ternary", "to_nice_yaml", "tojson",
"lower", "upper", "capitalize", "unique", "dict2items", "items2dict",
"password_hash", "path_join", "product", "quote", "split", "ternary", "to_nice_yaml",
"tojson", "to_nice_json",
# Date/time-ish
"strftime",

View File

@@ -108,6 +108,89 @@ class TestGenerateDefaultApplications(unittest.TestCase):
self.assertIn("nocfgdirapp", apps)
self.assertEqual(apps["nocfgdirapp"], {})
def test_applications_sorted_by_key(self):
"""
Ensure that defaults_applications keys are written in alphabetical order.
"""
# Create several roles in non-sorted order
for name, cfg in [
("web-app-zeta", {"vars_id": "zeta", "cfg": "z: 1\n"}),
("web-app-alpha", {"vars_id": "alpha", "cfg": "a: 1\n"}),
("web-app-mu", {"vars_id": "mu", "cfg": "m: 1\n"}),
]:
role = self.roles_dir / name
(role / "vars").mkdir(parents=True, exist_ok=True)
(role / "config").mkdir(parents=True, exist_ok=True)
(role / "vars" / "main.yml").write_text(f"application_id: {cfg['vars_id']}\n")
(role / "config" / "main.yml").write_text(cfg["cfg"])
# Run generator
result = subprocess.run(
["python3", str(self.script_path),
"--roles-dir", str(self.roles_dir),
"--output-file", str(self.output_file)],
capture_output=True, text=True
)
self.assertEqual(result.returncode, 0, msg=result.stderr)
# Validate order of keys in YAML
data = yaml.safe_load(self.output_file.read_text())
apps = data.get("defaults_applications", {})
# dict preserves insertion order in Python 3.7+, PyYAML keeps document order
keys_in_file = list(apps.keys())
self.assertEqual(
keys_in_file,
sorted(keys_in_file),
msg=f"Applications are not sorted: {keys_in_file}"
)
# Sanity: all expected apps present
for app in ("alpha", "mu", "zeta", "testapp"):
self.assertIn(app, apps)
def test_sorting_is_stable_across_runs(self):
"""
Running the generator multiple times yields identical content (stable sort).
"""
# Create a couple more roles (unsorted)
for name, appid in [
("web-app-beta", "beta"),
("web-app-delta", "delta"),
]:
role = self.roles_dir / name
(role / "vars").mkdir(parents=True, exist_ok=True)
(role / "config").mkdir(parents=True, exist_ok=True)
(role / "vars" / "main.yml").write_text(f"application_id: {appid}\n")
(role / "config" / "main.yml").write_text("key: value\n")
# First run
result1 = subprocess.run(
["python3", str(self.script_path),
"--roles-dir", str(self.roles_dir),
"--output-file", str(self.output_file)],
capture_output=True, text=True
)
self.assertEqual(result1.returncode, 0, msg=result1.stderr)
content_run1 = self.output_file.read_text()
# Second run (simulate potential filesystem order differences by touching dirs)
for p in self.roles_dir.iterdir():
os.utime(p, None)
result2 = subprocess.run(
["python3", str(self.script_path),
"--roles-dir", str(self.roles_dir),
"--output-file", str(self.output_file)],
capture_output=True, text=True
)
self.assertEqual(result2.returncode, 0, msg=result2.stderr)
content_run2 = self.output_file.read_text()
self.assertEqual(
content_run1, content_run2,
msg="Output differs between runs; sorting should be stable."
)
if __name__ == "__main__":
unittest.main()
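The alphabetical ordering these tests assert comes down to sorting the mapping once before it is dumped; since the generator script itself is not shown in this diff, here is a minimal, illustrative sketch of the technique (names are hypothetical):

```python
# Minimal sketch: sort application configs by key before emitting them.
# dict preserves insertion order (Python 3.7+), so sorting the items once
# at build time yields a stable, alphabetical document regardless of
# filesystem iteration order.
apps = {"zeta": {"z": 1}, "alpha": {"a": 1}, "mu": {"m": 1}, "testapp": {}}

ordered = dict(sorted(apps.items()))
print(list(ordered.keys()))  # ['alpha', 'mu', 'testapp', 'zeta']
```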


@@ -132,5 +132,122 @@ class TestGenerateUsers(unittest.TestCase):
finally:
shutil.rmtree(tmp)
def test_cli_users_sorted_by_key(self):
"""
Ensure that default_users keys are written in alphabetical order.
"""
import tempfile
import subprocess
from pathlib import Path
tmpdir = Path(tempfile.mkdtemp())
try:
roles_dir = tmpdir / "roles"
roles_dir.mkdir()
# Create multiple roles with users in unsorted order
for role, users_map in [
("role-zeta", {"zeta": {"email": "z@ex"}}),
("role-alpha", {"alpha": {"email": "a@ex"}}),
("role-mu", {"mu": {"email": "m@ex"}}),
("role-beta", {"beta": {"email": "b@ex"}}),
]:
(roles_dir / role / "users").mkdir(parents=True, exist_ok=True)
with open(roles_dir / role / "users" / "main.yml", "w") as f:
yaml.safe_dump({"users": users_map}, f)
out_file = tmpdir / "users.yml"
# Resolve script path like in other tests (relative to repo root)
script_path = Path(__file__).resolve().parents[5] / "cli" / "build" / "defaults" / "users.py"
# Run generator
result = subprocess.run(
["python3", str(script_path),
"--roles-dir", str(roles_dir),
"--output", str(out_file)],
capture_output=True, text=True
)
self.assertEqual(result.returncode, 0, msg=result.stderr)
self.assertTrue(out_file.exists(), "Output file was not created.")
data = yaml.safe_load(out_file.read_text())
self.assertIn("default_users", data)
users_map = data["default_users"]
keys_in_file = list(users_map.keys())
# Expect alphabetical order
self.assertEqual(
keys_in_file, sorted(keys_in_file),
msg=f"Users are not sorted alphabetically: {keys_in_file}"
)
# Sanity: all expected keys present
for k in ["alpha", "beta", "mu", "zeta"]:
self.assertIn(k, users_map)
finally:
shutil.rmtree(tmpdir)
def test_cli_users_sorting_stable_across_runs(self):
"""
Running the generator multiple times yields identical content (stable sort).
"""
import tempfile
import subprocess
from pathlib import Path
tmpdir = Path(tempfile.mkdtemp())
try:
roles_dir = tmpdir / "roles"
roles_dir.mkdir()
# Unsorted creation order on purpose
cases = [
("role-d", {"duser": {"email": "d@ex"}}),
("role-a", {"auser": {"email": "a@ex"}}),
("role-c", {"cuser": {"email": "c@ex"}}),
("role-b", {"buser": {"email": "b@ex"}}),
]
for role, users_map in cases:
(roles_dir / role / "users").mkdir(parents=True, exist_ok=True)
with open(roles_dir / role / "users" / "main.yml", "w") as f:
yaml.safe_dump({"users": users_map}, f)
out_file = tmpdir / "users.yml"
script_path = Path(__file__).resolve().parents[5] / "cli" / "build" / "defaults" / "users.py"
# First run
r1 = subprocess.run(
["python3", str(script_path),
"--roles-dir", str(roles_dir),
"--output", str(out_file)],
capture_output=True, text=True
)
self.assertEqual(r1.returncode, 0, msg=r1.stderr)
content1 = out_file.read_text()
# Touch dirs to shuffle filesystem mtimes
for p in roles_dir.iterdir():
os.utime(p, None)
# Second run
r2 = subprocess.run(
["python3", str(script_path),
"--roles-dir", str(roles_dir),
"--output", str(out_file)],
capture_output=True, text=True
)
self.assertEqual(r2.returncode, 0, msg=r2.stderr)
content2 = out_file.read_text()
self.assertEqual(
content1, content2,
msg="Output differs between runs; user sorting should be stable."
)
finally:
shutil.rmtree(tmpdir)
if __name__ == '__main__':
unittest.main()
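Run-to-run stability, which the second test checks by touching directory mtimes between runs, follows from the same idea: `sorted()` is deterministic for a fixed key set, so filesystem enumeration order cannot leak into the output. A small standalone illustration (user names are hypothetical):

```python
import random

# Simulate nondeterministic filesystem order by shuffling the items
# before each "run"; the sorted key order is identical every time.
users = {"duser": {}, "auser": {}, "cuser": {}, "buser": {}}
items = list(users.items())

runs = []
for _ in range(3):
    random.shuffle(items)
    runs.append(list(dict(sorted(items)).keys()))

assert runs[0] == runs[1] == runs[2] == ["auser", "buser", "cuser", "duser"]
print(runs[0])  # ['auser', 'buser', 'cuser', 'duser']
```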


@@ -0,0 +1,172 @@
#!/usr/bin/env python3
import os
import sys
import unittest
from types import SimpleNamespace
from unittest.mock import patch
# Add the project root/module_utils to the import path
CURRENT_DIR = os.path.dirname(__file__)
PROJECT_ROOT = os.path.abspath(os.path.join(CURRENT_DIR, "../../.."))
sys.path.insert(0, PROJECT_ROOT)
from module_utils.cert_utils import CertUtils
def _san_block(*entries):
"""
Helper: builds a minimal OpenSSL text snippet that contains SAN entries.
Example: _san_block('example.com', '*.example.com')
"""
sans = ", ".join(f"DNS:{e}" for e in entries)
return f"""
Certificate:
Data:
Version: 3 (0x2)
...
X509v3 extensions:
X509v3 Subject Alternative Name:
{sans}
"""
class TestCertUtilsFindNewest(unittest.TestCase):
def setUp(self):
# Reset internal caches before each test
CertUtils._domain_cert_mapping = None
CertUtils._cert_snapshot = None
def _mock_stat_map(self, mtime_map, size_map=None):
size_map = size_map or {}
def _stat_side_effect(path):
return SimpleNamespace(
st_mtime=mtime_map.get(path, 0.0),
st_size=size_map.get(path, 1234),
)
return _stat_side_effect
def test_prefers_newest_by_not_before(self):
"""
Two certs with the same SAN 'www.example.com':
- a/cert.pem: older notBefore
- b/cert.pem: newer notBefore -> should be selected
"""
files = [
"/etc/letsencrypt/live/a/cert.pem",
"/etc/letsencrypt/live/b/cert.pem",
]
san_text = _san_block("www.example.com")
with patch.object(CertUtils, "list_cert_files", return_value=files), \
patch.object(CertUtils, "run_openssl", return_value=san_text), \
patch.object(CertUtils, "run_openssl_dates") as mock_dates, \
patch("os.stat", side_effect=self._mock_stat_map({
files[0]: 1000,
files[1]: 1001,
})):
mock_dates.side_effect = [(10, 100000), (20, 100000)] # older/newer
folder = CertUtils.find_cert_for_domain("www.example.com", "/etc/letsencrypt/live", debug=False)
self.assertEqual(folder, "b", "Should return the folder with the newest notBefore date.")
def test_fallback_to_mtime_when_not_before_missing(self):
"""
When not_before is missing, mtime should be used as a fallback.
"""
files = [
"/etc/letsencrypt/live/a/cert.pem",
"/etc/letsencrypt/live/b/cert.pem",
]
san_text = _san_block("www.example.com")
with patch.object(CertUtils, "list_cert_files", return_value=files), \
patch.object(CertUtils, "run_openssl", return_value=san_text), \
patch.object(CertUtils, "run_openssl_dates", return_value=(None, None)), \
patch("os.stat", side_effect=self._mock_stat_map({
files[0]: 1000,
files[1]: 2000,
})):
folder = CertUtils.find_cert_for_domain("www.example.com", "/etc/letsencrypt/live", debug=False)
self.assertEqual(folder, "b", "Should fall back to mtime and select the newest file.")
def test_exact_beats_wildcard_even_if_wildcard_newer(self):
"""
Exact matches must take precedence over wildcard matches,
even if the wildcard certificate is newer.
"""
files = [
"/etc/letsencrypt/live/exact/cert.pem",
"/etc/letsencrypt/live/wild/cert.pem",
]
text_exact = _san_block("api.example.com")
text_wild = _san_block("*.example.com")
with patch.object(CertUtils, "list_cert_files", return_value=files), \
patch.object(CertUtils, "run_openssl") as mock_text, \
patch.object(CertUtils, "run_openssl_dates") as mock_dates, \
patch("os.stat", side_effect=self._mock_stat_map({
files[0]: 1000, # exact is older
files[1]: 5000, # wildcard is much newer
})):
mock_text.side_effect = [text_exact, text_wild]
mock_dates.side_effect = [(10, 100000), (99, 100000)]
folder = CertUtils.find_cert_for_domain("api.example.com", "/etc/letsencrypt/live", debug=False)
self.assertEqual(
folder, "exact",
"Exact match must win even if the wildcard certificate is newer."
)
def test_wildcard_one_label_only(self):
"""
Wildcards (*.example.com) must only match one additional label.
"""
files = ["/etc/letsencrypt/live/wild/cert.pem"]
text_wild = _san_block("*.example.com")
with patch.object(CertUtils, "list_cert_files", return_value=files), \
patch.object(CertUtils, "run_openssl", return_value=text_wild), \
patch.object(CertUtils, "run_openssl_dates", return_value=(50, 100000)), \
patch("os.stat", side_effect=self._mock_stat_map({files[0]: 1000})):
# should match
self.assertEqual(
CertUtils.find_cert_for_domain("api.example.com", "/etc/letsencrypt/live"),
"wild"
)
# too deep -> should not match
self.assertIsNone(
CertUtils.find_cert_for_domain("deep.api.example.com", "/etc/letsencrypt/live"),
"Wildcard must not match multiple labels."
)
# base domain not covered
self.assertIsNone(
CertUtils.find_cert_for_domain("example.com", "/etc/letsencrypt/live"),
"Base domain is not covered by *.example.com."
)
def test_snapshot_refresh_rebuilds_mapping(self):
"""
ensure_cert_mapping() should rebuild mapping when snapshot changes.
"""
CertUtils._domain_cert_mapping = {"www.example.com": [{"folder": "old", "mtime": 1, "not_before": 1}]}
with patch.object(CertUtils, "snapshot_changed", return_value=True), \
patch.object(CertUtils, "refresh_cert_mapping") as mock_refresh:
def _set_new_mapping(cert_base_path, debug=False):
CertUtils._domain_cert_mapping = {
"www.example.com": [{"folder": "new", "mtime": 999, "not_before": 999}]
}
mock_refresh.side_effect = _set_new_mapping
folder = CertUtils.find_cert_for_domain("www.example.com", "/etc/letsencrypt/live", debug=False)
self.assertEqual(folder, "new", "Mapping must be refreshed when snapshot changes.")
if __name__ == "__main__":
unittest.main()
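The one-label wildcard rule exercised by `test_wildcard_one_label_only` can be sketched in a few lines; this is a standalone illustration of the matching semantics the tests assert, not the actual `CertUtils` implementation:

```python
def wildcard_matches(pattern: str, domain: str) -> bool:
    """Illustrative sketch: '*.example.com' matches exactly one extra label."""
    if not pattern.startswith("*."):
        return pattern == domain
    suffix = pattern[2:]
    # Split off exactly one leading label; the remainder must equal the suffix.
    head, sep, rest = domain.partition(".")
    return bool(sep) and rest == suffix

assert wildcard_matches("*.example.com", "api.example.com")        # one label: match
assert not wildcard_matches("*.example.com", "deep.api.example.com")  # two labels: no
assert not wildcard_matches("*.example.com", "example.com")        # base domain: no
```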
