41 Commits

SHA1 Message Date
9e874408a7 Fix: enable stable Drupal OIDC support and PHP 8.2 base image
- Switched Drupal base image to PHP 8.2 for compatibility with openid_connect 2.x
- Added mariadb-client to container to allow Drush to drop tables
- Upgraded OIDC module from ^1 to ^2@beta for entity-based client configuration
- Replaced legacy client creation task with generic plugin-based entity creation
- Ensured /usr/local/bin is in PATH for www-data user
- Updated oidc.yml to explicitly use the generic plugin

References: https://chatgpt.com/share/6905cecc-8e3c-800f-849b-4041b6925381
2025-11-01 10:12:07 +01:00
bebf76951c Fix: Drupal installation now completes successfully (permissions, PDO, and correct paths)
- Added database readiness wait and proper Drush installation command
- Ensured /sites/default/files is writable before installation
- Switched to /opt/drupal/web as canonical Drupal root
- Added missing PHP extension pdo_mysql
- Adjusted Dockerfile and Compose volume paths
- Drupal installation now runs successfully end-to-end

Details: https://chatgpt.com/share/6905bb12-6de8-800f-be8c-b565d5ec6cdb
2025-11-01 08:47:50 +01:00
aa1a901309 feat(web-app-drupal): add Drupal role, OIDC config, and wiring
- networks: add web-app-drupal subnet 192.168.104.80/28
- ports: map localhost http port 8060
- add role files: tasks, vars, schema, users, templates (Dockerfile, docker-compose, settings.local.php, upload.ini)
- add docs: README.md and Administration.md

Ref: https://chatgpt.com/share/690535c5-b55c-800f-8556-5335a6b8a33f
2025-10-31 23:19:07 +01:00
d61c81634c Add Joomla CLI paths and implement non-interactive admin password reset via CLI
Ref: https://chatgpt.com/share/69039c22-f530-800f-a641-fd2636d5b6af
2025-10-30 18:11:18 +01:00
265f815b48 Optimized Listmonk and Nextcloud CSP for hcaptcha 2025-10-30 16:02:09 +01:00
f8e5110730 Add Redis readiness check before Nextcloud upgrade and add retry logic for maintenance repair
This prevents OCC repair failures caused by Redis still loading its dataset after container restarts.
See context: https://chatgpt.com/share/690377ba-1520-800f-b8c1-bc93fbd9232f
2025-10-30 15:36:00 +01:00
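A minimal sketch of the readiness gate described in the commit above, assuming the Redis container is reachable as `redis` via `docker exec` and probed with a plain `redis-cli ping` (both assumptions; the role itself implements this as Ansible tasks): keep retrying until Redis answers `PONG`, because it typically answers with a `LOADING` error instead while it is still reading its dataset after a restart.

```python
import subprocess
import time


def wait_for_redis(container: str = "redis", timeout: int = 120) -> None:
    """Poll `redis-cli ping` inside the container until it answers PONG.

    While Redis is still loading its dataset after a restart, PING usually
    fails with a "LOADING ..." error, so we retry until the timeout expires.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = subprocess.run(
            ["docker", "exec", container, "redis-cli", "ping"],
            capture_output=True,
            text=True,
        )
        if result.returncode == 0 and result.stdout.strip() == "PONG":
            return
        time.sleep(2)
    raise TimeoutError(f"Redis in '{container}' not ready after {timeout}s")


# Only after this gate succeeds would the OCC maintenance repair step run.
```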
37b213f96a Refactor XWiki OIDC activation to use REST-based authenticationService update (reliable alternative to Groovy) — see ChatGPT discussion: https://chatgpt.com/share/69005d88-6bf8-800f-af41-73b0e5dc9c13 2025-10-29 11:12:19 +01:00
5ef525eac9 Optimized CSP for Gitlab 2025-10-28 08:26:53 +01:00
295ae7e477 Solved MediaWiki CSP bug which prevented OIDC login 2025-10-27 20:33:07 +01:00
c67ccc1df6 Used path_join @ web-app-friendica 2025-10-26 15:48:28 +01:00
cb483f60d1 optimized for easier debugging 2025-10-25 12:52:17 +02:00
2be73502ca Solved tests 2025-10-25 11:46:36 +02:00
57d5269b07 CSP (Safari-safe): merge -elem/-attr into base; respect explicit disables; no mirror-back; header only for documents/workers
- Add CSP3 support for style/script: include -elem and -attr directives
- Base (style-src, script-src) now unions elem/attr (CSP2/Safari fallback)
- Respect explicit base disables (e.g. style-src.unsafe-inline: false)
- Hashes only when 'unsafe-inline' absent in the final base tokens
- Nginx: set CSP only for HTML/worker via header_filter_by_lua_block; drop for subresources
- Remove per-location header_filter; keep body_filter only
- Update app role flags to *-attr where appropriate; extend desktop CSS sources
- Add comprehensive unit tests for union/explicit-disable/no-mirror-back

Ref: https://chatgpt.com/share/68f87a0a-cebc-800f-bb3e-8c8ab4dee8ee
2025-10-22 13:53:06 +02:00
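The merge behaviour this commit describes can be illustrated with a small standalone sketch (a simplification of the filter-plugin code shown in a diff further below; the example directive values are invented): the base directive becomes the de-duplicated union of the base, `-elem`, and `-attr` tokens, an explicit `False` flag on the base strips that token from the union, and the `-elem`/`-attr` directives are left untouched.

```python
def merge_family(tokens_by_dir, explicit_base_flags, base, elem, attr):
    """CSP2/Safari fallback: union base/-elem/-attr into the base directive.

    An explicit False flag on the base (e.g. {'unsafe-inline': False}) removes
    that token from the union; -elem/-attr are not mirrored back into.
    """
    seen, union = set(), []
    for token in tokens_by_dir.get(base, []) + tokens_by_dir.get(elem, []) + tokens_by_dir.get(attr, []):
        if token not in seen:
            seen.add(token)
            union.append(token)
    for flag, enabled in (explicit_base_flags or {}).items():
        if enabled is False:
            union = [t for t in union if t != f"'{flag}'"]
    tokens_by_dir[base] = union  # write back only to the base directive


dirs = {
    "style-src": ["'self'"],
    "style-src-elem": ["'self'", "https://cdn.example.org"],
    "style-src-attr": ["'self'", "'unsafe-inline'"],
}
merge_family(dirs, {"unsafe-inline": False}, "style-src", "style-src-elem", "style-src-attr")
print(dirs["style-src"])  # ["'self'", 'https://cdn.example.org'] (disabled flag stays out)
```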
1eefdea050 Solved CSP errors for MiniQR 2025-10-22 12:49:22 +02:00
561160504e Add new web-app-mini-qr role
- Introduced new role 'web-app-mini-qr' to deploy the lightweight, self-hosted Mini-QR application.
- Added dedicated subnet and localhost port mapping (8059) in group_vars.
- Ensured proper dependency structure and run_once handling in MIG role.
- Included upstream reference and a temporary CSP whitelist entry for clarity.ms (removal is tracked).
- Added README.md and meta information following the Infinito.Nexus web-app schema.

See: https://chatgpt.com/share/68f890ab-5960-800f-85f8-ba30bd4350fe
2025-10-22 10:07:35 +02:00
9a4bf91276 feat(nextcloud): enable custom Alpine-based Whiteboard image with Chromium & ffmpeg support
- Added role tasks to deploy templated Dockerfile for Whiteboard service
- Configured build context and custom image name (nextcloud_whiteboard_custom)
- Increased PID limits and shm_size for stable recording
- Adjusted user ID variable naming consistency
- Integrated path_join for service directory variables
- Fixed build permissions (install as root, revert to nobody)

Reference: ChatGPT conversation https://chatgpt.com/share/68f771c6-0e98-800f-99ca-9e367f4cd0c2
2025-10-21 13:44:11 +02:00
468b6e734c Deactivated whiteboard 2025-10-20 21:17:06 +02:00
83cb94b6ff Refactored Redis resource include macro and increased memory limits
- Replaced deprecated lookup(vars=...) in svc-db-redis with macro-based include (Ansible/Jinja safe)
- Redis now uses higher resource values (1 CPU, 1G reserved, 8G max, 512 pids)
- Enables stable Whiteboard operation with >3.5 GB Redis memory usage
- Related conversation: https://chatgpt.com/share/68f67a00-d598-800f-a6be-ee5987e66fba
2025-10-20 20:08:38 +02:00
6857295969 Fix variable definition test to recognize block-style Jinja 'set ... endset' statements
This update extends the regex to detect block-style variable definitions such as:
  {% set var %} ... {% endset %}
Previously, only inline 'set var =' syntax was recognized, causing false positives
like '_snippet' being flagged as undefined in Jinja templates.

Reference: https://chatgpt.com/share/68f6799a-eb80-800f-ab5c-7c196d4c4661
2025-10-20 20:04:40 +02:00
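A compact regex sketch of the two forms the extended check now has to accept (an illustration only; the project's actual test uses its own pattern and walks the template files): both the inline `set var = ...` form and the block `set var %} ... {% endset %}` form should yield the variable name.

```python
import re

# Inline form:  {% set foo = 42 %}
# Block form:   {% set foo %} ... {% endset %}
SET_VARIABLE_RE = re.compile(r"{%-?\s*set\s+(?P<name>\w+)\s*(?:=|-?%})")

template = """
{% set inline_var = 1 %}
{% set _snippet %}
  hello
{% endset %}
"""

defined = {m.group("name") for m in SET_VARIABLE_RE.finditer(template)}
print(sorted(defined))  # ['_snippet', 'inline_var']: the block-style '_snippet' is now recognized
```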
8ab398f679 nextcloud:whiteboard: wait for Redis before start (depends_on: service_healthy) to prevent early SocketClosedUnexpectedlyError
Context: added depends_on on redis for the Whiteboard service so websockets don’t crash when Redis isn’t ready yet. See discussion: https://chatgpt.com/share/68f65a3e-aa54-800f-a1a7-e6878775fd7e
2025-10-20 17:50:47 +02:00
31133ddd90 Enhancement: Fix for Nextcloud Whiteboard recording and collaboration server
- Added Chromium headless flags and writable font cache/tmp volumes
- Enabled WebSocket proxy forwarding for /whiteboard/
- Verified and adjusted CSP and frontend integration
- Added Whiteboard-related variables and volumes in main.yml

See ChatGPT conversation (20 Oct 2025):
https://chatgpt.com/share/68f655e1-fa3c-800f-b35f-4f875dfed4fd
2025-10-20 17:31:59 +02:00
783b1e152d Added numpy 2025-10-20 11:03:44 +02:00
eca567fefd Made Gitea LDAP source independent of the primary domain 2025-10-18 10:54:39 +02:00
905f461ee8 Add basic healthcheck to oauth2-proxy container template using binary version check for distroless compatibility
Reference: https://chatgpt.com/share/68f35550-4248-800f-9c6a-dbd49a48592e
2025-10-18 10:52:58 +02:00
9f0b259ba9 Merge branch 'master' of github.com:kevinveenbirkenbach/infinito-nexus 2025-10-18 09:41:18 +02:00
06e4323faa Added Ansible environment 2025-10-17 23:07:43 +02:00
3d99226f37 Refactor BigBlueButton and backup task structure:
- Moved database seed variables from vars/main.yml to task-level include in BigBlueButton
- Simplified core include logic in sys-ctl-bkp-docker-2-loc
- Ensured clean conditional for BKP_DOCKER_2_LOC_DB_ENABLED
See: https://chatgpt.com/share/68f216f7-62d8-800f-94e3-c82e4418e51b (in German)
2025-10-17 12:14:39 +02:00
73ba09fbe2 Optimize SSH connection performance by disabling GSSAPI authentication and reverse DNS lookups
- Added 'GSSAPIAuthentication no' to prevent unnecessary Kerberos negotiation delays.
- Added 'UseDNS no' to skip reverse DNS resolution during SSH login, improving connection speed.
- Both changes improve SSH responsiveness, especially in non-domain environments.

Reference: https://chatgpt.com/share/68efc179-1a10-800f-9656-1e8731b40546
2025-10-15 18:37:09 +02:00
01ea9b76ce Enable pipelining globally and modernize SSH settings
- Activated pipelining in [defaults] for better performance.
- Replaced deprecated 'scp_if_ssh' with 'transfer_method'.
- Flattened multi-line ssh_args for compatibility.
- Verified configuration parsing as discussed in https://chatgpt.com/share/68efc179-1a10-800f-9656-1e8731b40546
2025-10-15 17:45:16 +02:00
c22acf202f Solved bugs 2025-10-15 17:03:57 +02:00
61e138c1a6 Optimize OpenLDAP container resources for up to 5k users (1.25 CPU / 1.5GB RAM / 1024 PIDs). See https://chatgpt.com/share/68ef7228-4028-800f-8986-54206a51b9c1 2025-10-15 12:06:51 +02:00
07c8e036ec Deactivated changed_when because it is not trackable anyway 2025-10-15 10:27:12 +02:00
0b36059cd2 feat(web-app-gitea): add optional Redis integration for caching, sessions, and queues
This update introduces conditional Redis support for Gitea, allowing connection
to either a local or centralized Redis instance depending on configuration.
Includes resource limits for the Redis service and corresponding environment
variables for cache, session, and queue backends.

Reference: ChatGPT conversation on centralized vs per-app Redis architecture (2025-10-15).
https://chatgpt.com/share/68ef5930-49c8-800f-b6b8-069e6fefda01
2025-10-15 10:20:18 +02:00
d76e384ae3 Enhance CertUtils to return the newest matching certificate and add comprehensive unit tests
- Added run_openssl_dates() to extract notBefore/notAfter timestamps.
- Modified mapping logic to store multiple cert entries per SAN with metadata.
- find_cert_for_domain() now selects the newest certificate based on notBefore and mtime.
- Exact SAN matches take precedence over wildcard matches.
- Added new unit tests (test_cert_utils_newest.py) verifying freshness logic, fallback handling, and wildcard behavior.

Reference: https://chatgpt.com/share/68ef4b4c-41d4-800f-9e50-5da4b6be1105
2025-10-15 09:21:00 +02:00
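The selection logic summarised above reduces to a two-level ordering, sketched here in isolation (folder names and timestamps are invented; the full CertUtils implementation appears in a diff further down): exact SAN candidates are preferred over wildcard ones, and within a group the newest certificate wins, ordered by notBefore with mtime as the tie-breaker.

```python
def pick_folder(candidates_exact, candidates_wildcard):
    """Pick the certificate folder for a domain.

    Each candidate is a dict such as
    {'folder': 'example.com-0001', 'not_before': 1760000000, 'mtime': 1760000100.0}.
    Exact SAN matches beat wildcard matches; within a group the newest
    certificate (notBefore first, mtime as tie-breaker) wins.
    """
    def newest(entries):
        if not entries:
            return None
        return max(
            entries,
            key=lambda e: (
                e.get("not_before") if e.get("not_before") is not None else -1,
                e.get("mtime", 0.0),
            ),
        )

    chosen = newest(candidates_exact) or newest(candidates_wildcard)
    return chosen["folder"] if chosen else None


# A renewed cert (later notBefore) wins over the older one for the same SAN.
old = {"folder": "example.com", "not_before": 1_700_000_000, "mtime": 1_700_000_100.0}
new = {"folder": "example.com-0001", "not_before": 1_760_000_000, "mtime": 1_760_000_100.0}
print(pick_folder([old, new], []))  # example.com-0001
```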
e6f4f3a6a4 feat(cli/build/defaults): ensure deterministic alphabetical sorting for applications and users
- Added sorting by application key and user key before YAML output.
- Ensures stable and reproducible file generation across runs.
- Added comprehensive unit tests verifying key order and output stability.

See: https://chatgpt.com/share/68ef4778-a848-800f-a50b-a46a3b878797
2025-10-15 09:04:39 +02:00
a80b26ed9e Moved bbb database seeding 2025-10-15 08:50:21 +02:00
45ec7b0ead Optimized include text 2025-10-15 08:39:37 +02:00
ec396d130c Optimized time schedule 2025-10-15 08:37:51 +02:00
93c2fbedd7 Added setting of timezone 2025-10-15 02:24:25 +02:00
d006f0ba5e Optimized schedule 2025-10-15 02:13:13 +02:00
dd43722e02 Raised memory for baserow 2025-10-14 21:59:10 +02:00
119 changed files with 1896 additions and 241 deletions

View File

@@ -1,5 +1,6 @@
[defaults]
# --- Performance & Behavior ---
pipelining = True
forks = 25
strategy = linear
gathering = smart
@@ -14,19 +15,14 @@ stdout_callback = yaml
callbacks_enabled = profile_tasks,timer
# --- Plugin paths ---
filter_plugins = ./filter_plugins
filter_plugins = ./filter_plugins
lookup_plugins = ./lookup_plugins
module_utils = ./module_utils
[ssh_connection]
# Multiplexing: safer socket path in HOME instead of /tmp
ssh_args = -o ControlMaster=auto -o ControlPersist=20s -o ControlPath=~/.ssh/ansible-%h-%p-%r \
-o ServerAliveInterval=15 -o ServerAliveCountMax=3 -o StrictHostKeyChecking=accept-new \
-o PreferredAuthentications=publickey,password,keyboard-interactive
# Pipelining boosts speed; works fine if sudoers does not enforce "requiretty"
ssh_args = -o ControlMaster=auto -o ControlPersist=20s -o ControlPath=~/.ssh/ansible-%h-%p-%r -o ServerAliveInterval=15 -o ServerAliveCountMax=3 -o StrictHostKeyChecking=accept-new -o PreferredAuthentications=publickey,password,keyboard-interactive
pipelining = True
scp_if_ssh = smart
transfer_method = smart
[persistent_connection]
connect_timeout = 30

View File

@@ -83,6 +83,13 @@ class DefaultsGenerator:
print(f"Error during rendering: {e}", file=sys.stderr)
sys.exit(1)
# Sort applications by application key for stable output
apps = result.get("defaults_applications", {})
if isinstance(apps, dict) and apps:
result["defaults_applications"] = {
k: apps[k] for k in sorted(apps.keys())
}
# Write output
self.output_file.parent.mkdir(parents=True, exist_ok=True)
with self.output_file.open("w", encoding="utf-8") as f:

View File

@@ -220,6 +220,10 @@ def main():
print(f"Error building user entries: {e}", file=sys.stderr)
sys.exit(1)
# Sort users by key for deterministic output
if isinstance(users, dict) and users:
users = OrderedDict(sorted(users.items()))
# Convert OrderedDict into plain dict for YAML
default_users = {'default_users': users}
plain_data = dictify(default_users)

View File

@@ -10,9 +10,23 @@ from module_utils.config_utils import get_app_conf
from module_utils.get_url import get_url
def _dedup_preserve(seq):
"""Return a list with stable order and unique items."""
seen = set()
out = []
for x in seq:
if x not in seen:
seen.add(x)
out.append(x)
return out
class FilterModule(object):
"""
Custom filters for Content Security Policy generation and CSP-related utilities.
Jinja filters for building a robust, CSP3-aware Content-Security-Policy header.
Safari/CSP2 compatibility is ensured by merging the -elem/-attr variants into the base
directives (style-src, script-src). We intentionally do NOT mirror back into -elem/-attr
to allow true CSP3 granularity on modern browsers.
"""
def filters(self):
@@ -61,11 +75,14 @@ class FilterModule(object):
"""
Returns CSP flag tokens (e.g., "'unsafe-eval'", "'unsafe-inline'") for a directive,
merging sane defaults with app config.
Default: 'unsafe-inline' is enabled for style-src and style-src-elem.
Defaults:
- For styles we enable 'unsafe-inline' by default (style-src, style-src-elem, style-src-attr),
because many apps rely on inline styles / style attributes.
- For scripts we do NOT enable 'unsafe-inline' by default.
"""
# Defaults that apply to all apps
default_flags = {}
if directive in ('style-src', 'style-src-elem'):
if directive in ('style-src', 'style-src-elem', 'style-src-attr'):
default_flags = {'unsafe-inline': True}
configured = get_app_conf(
@@ -76,7 +93,6 @@ class FilterModule(object):
{}
)
# Merge defaults with configured flags (configured overrides defaults)
merged = {**default_flags, **configured}
tokens = []
@@ -131,82 +147,148 @@ class FilterModule(object):
):
"""
Builds the Content-Security-Policy header value dynamically based on application settings.
- Flags (e.g., 'unsafe-eval', 'unsafe-inline') are read from server.csp.flags.<directive>,
with sane defaults applied in get_csp_flags (always 'unsafe-inline' for style-src and style-src-elem).
- Inline hashes are read from server.csp.hashes.<directive>.
- Whitelists are read from server.csp.whitelist.<directive>.
- Inline hashes are added only if the final tokens do NOT include 'unsafe-inline'.
Key points:
- CSP3-aware: supports base/elem/attr for styles and scripts.
- Safari/CSP2 fallback: base directives (style-src, script-src) always include
the union of their -elem/-attr variants.
- We do NOT mirror back into -elem/-attr; finer CSP3 rules remain effective
on modern browsers if you choose to use them.
- If the app explicitly disables a token on the *base* (e.g. style-src.unsafe-inline: false),
that token is removed from the merged base even if present in elem/attr.
- Inline hashes are added ONLY if that directive does NOT include 'unsafe-inline'.
- Whitelists/flags/hashes read from:
server.csp.whitelist.<directive>
server.csp.flags.<directive>
server.csp.hashes.<directive>
- “Smart defaults”:
* internal CDN for style/script elem and connect
* Matomo endpoints (if feature enabled) for script-elem/connect
* Simpleicons (if feature enabled) for connect
* reCAPTCHA (if feature enabled) for script-elem/frame-src
* frame-ancestors extended for desktop/logout/keycloak if enabled
"""
try:
directives = [
'default-src', # Fallback source list for content types not explicitly listed
'connect-src', # Allowed URLs for XHR, WebSockets, EventSource, fetch()
'frame-ancestors', # Who may embed this page
'frame-src', # Sources for nested browsing contexts (e.g., <iframe>)
'script-src', # Sources for script execution
'script-src-elem', # Sources for <script> elements
'style-src', # Sources for inline styles and <style>/<link> elements
'style-src-elem', # Sources for <style> and <link rel="stylesheet">
'font-src', # Sources for fonts
'worker-src', # Sources for workers
'manifest-src', # Sources for web app manifests
'media-src', # Sources for audio and video
'default-src',
'connect-src',
'frame-ancestors',
'frame-src',
'script-src',
'script-src-elem',
'script-src-attr',
'style-src',
'style-src-elem',
'style-src-attr',
'font-src',
'worker-src',
'manifest-src',
'media-src',
]
parts = []
tokens_by_dir = {}
explicit_flags_by_dir = {}
for directive in directives:
# Collect explicit flags (to later respect explicit "False" on base during merge)
explicit_flags = get_app_conf(
applications,
application_id,
'server.csp.flags.' + directive,
False,
{}
)
explicit_flags_by_dir[directive] = explicit_flags
tokens = ["'self'"]
# Load flags (includes defaults from get_csp_flags)
# 1) Flags (with sane defaults)
flags = self.get_csp_flags(applications, application_id, directive)
tokens += flags
# Allow fetching from internal CDN by default for selected directives
if directive in ['script-src-elem', 'connect-src', 'style-src-elem']:
# 2) Internal CDN defaults for selected directives
if directive in ('script-src-elem', 'connect-src', 'style-src-elem', 'style-src'):
tokens.append(get_url(domains, 'web-svc-cdn', web_protocol))
# Matomo integration if feature is enabled
if directive in ['script-src-elem', 'connect-src']:
# 3) Matomo (if enabled)
if directive in ('script-src-elem', 'connect-src'):
if self.is_feature_enabled(applications, matomo_feature_name, application_id):
tokens.append(get_url(domains, 'web-app-matomo', web_protocol))
# Simpleicons integration if feature is enabled
if directive in ['connect-src']:
# 4) Simpleicons (if enabled) typically used via connect-src (fetch)
if directive == 'connect-src':
if self.is_feature_enabled(applications, 'simpleicons', application_id):
tokens.append(get_url(domains, 'web-svc-simpleicons', web_protocol))
# ReCaptcha integration (scripts + frames) if feature is enabled
# 5) reCAPTCHA (if enabled) scripts + frames
if self.is_feature_enabled(applications, 'recaptcha', application_id):
if directive in ['script-src-elem', 'frame-src']:
if directive in ('script-src-elem', 'frame-src'):
tokens.append('https://www.gstatic.com')
tokens.append('https://www.google.com')
# Frame ancestors handling (desktop + logout support)
# 6) Frame ancestors (desktop + logout)
if directive == 'frame-ancestors':
if self.is_feature_enabled(applications, 'desktop', application_id):
# Allow being embedded by the desktop app domain (and potentially its parent)
# Allow being embedded by the desktop app domain's site
domain = domains.get('web-app-desktop')[0]
sld_tld = ".".join(domain.split(".")[-2:]) # e.g., example.com
tokens.append(f"{sld_tld}")
if self.is_feature_enabled(applications, 'logout', application_id):
# Allow embedding via logout proxy and Keycloak app
tokens.append(get_url(domains, 'web-svc-logout', web_protocol))
tokens.append(get_url(domains, 'web-app-keycloak', web_protocol))
# Custom whitelist entries
# 7) Custom whitelist
tokens += self.get_csp_whitelist(applications, application_id, directive)
# Add inline content hashes ONLY if final tokens do NOT include 'unsafe-inline'
# (Check tokens, not flags, to include defaults and later modifications.)
# 8) Inline hashes (only if this directive does NOT include 'unsafe-inline')
if "'unsafe-inline'" not in tokens:
for snippet in self.get_csp_inline_content(applications, application_id, directive):
tokens.append(self.get_csp_hash(snippet))
# Append directive
parts.append(f"{directive} {' '.join(tokens)};")
tokens_by_dir[directive] = _dedup_preserve(tokens)
# Static img-src directive (kept permissive for data/blob and any host)
# ----------------------------------------------------------
# CSP3 families → ensure CSP2 fallback (Safari-safe)
# Merge style/script families so base contains union of elem/attr.
# Respect explicit disables on the base (e.g. unsafe-inline=False).
# Do NOT mirror back into elem/attr (keep granularity).
# ----------------------------------------------------------
def _strip_if_disabled(unioned_tokens, explicit_flags, name):
"""
Remove a token (e.g. 'unsafe-inline') from the unioned token list
if it is explicitly disabled in the base directive flags.
"""
if isinstance(explicit_flags, dict) and explicit_flags.get(name) is False:
tok = f"'{name}'"
return [t for t in unioned_tokens if t != tok]
return unioned_tokens
def merge_family(base_key, elem_key, attr_key):
base = tokens_by_dir.get(base_key, [])
elem = tokens_by_dir.get(elem_key, [])
attr = tokens_by_dir.get(attr_key, [])
union = _dedup_preserve(base + elem + attr)
# Respect explicit disables on the base
explicit_base = explicit_flags_by_dir.get(base_key, {})
# The most relevant flags for script/style:
for flag_name in ('unsafe-inline', 'unsafe-eval'):
union = _strip_if_disabled(union, explicit_base, flag_name)
tokens_by_dir[base_key] = union # write back only to base
merge_family('style-src', 'style-src-elem', 'style-src-attr')
merge_family('script-src', 'script-src-elem', 'script-src-attr')
# ----------------------------------------------------------
# Assemble header
# ----------------------------------------------------------
parts = []
for directive in directives:
if directive in tokens_by_dir:
parts.append(f"{directive} {' '.join(tokens_by_dir[directive])};")
# Keep permissive img-src for data/blob + any host (as before)
parts.append("img-src * data: blob:;")
return ' '.join(parts)

View File

@@ -1,4 +1,3 @@
# Service Timers
## Meta
@@ -24,29 +23,29 @@ SYS_SCHEDULE_HEALTH_BTRFS: "*-*-* 00:00:00"
SYS_SCHEDULE_HEALTH_JOURNALCTL: "*-*-* 00:00:00" # Check once per day the journalctl for errors
SYS_SCHEDULE_HEALTH_DISC_SPACE: "*-*-* 06,12,18,00:00:00" # Check four times per day if there is sufficient disc space
SYS_SCHEDULE_HEALTH_DOCKER_CONTAINER: "*-*-* {{ HOURS_SERVER_AWAKE }}:00:00" # Check once per hour if the docker containers are healthy
SYS_SCHEDULE_HEALTH_DOCKER_VOLUMES: "*-*-* {{ HOURS_SERVER_AWAKE }}:15:00" # Check once per hour if the docker volumes are healthy
SYS_SCHEDULE_HEALTH_CSP_CRAWLER: "*-*-* {{ HOURS_SERVER_AWAKE }}:30:00" # Check once per hour if all CSP are fullfilled available
SYS_SCHEDULE_HEALTH_NGINX: "*-*-* {{ HOURS_SERVER_AWAKE }}:45:00" # Check once per hour if all webservices are available
SYS_SCHEDULE_HEALTH_DOCKER_VOLUMES: "*-*-* {{ HOURS_SERVER_AWAKE }}:00:00" # Check once per hour if the docker volumes are healthy
SYS_SCHEDULE_HEALTH_CSP_CRAWLER: "*-*-* {{ HOURS_SERVER_AWAKE }}:00:00" # Check once per hour if all CSP are fullfilled available
SYS_SCHEDULE_HEALTH_NGINX: "*-*-* {{ HOURS_SERVER_AWAKE }}:00:00" # Check once per hour if all webservices are available
SYS_SCHEDULE_HEALTH_MSMTP: "*-*-* 00:00:00" # Check once per day SMTP Server
### Schedule for cleanup tasks
SYS_SCHEDULE_CLEANUP_BACKUPS: "*-*-* 00,06,12,18:30:00" # Cleanup backups every 6 hours, MUST be called before disc space cleanup
SYS_SCHEDULE_CLEANUP_DISC_SPACE: "*-*-* 07,13,19,01:30:00" # Cleanup disc space every 6 hours
SYS_SCHEDULE_CLEANUP_CERTS: "*-*-* 12,00:45:00" # Deletes and revokes unused certs
SYS_SCHEDULE_CLEANUP_FAILED_BACKUPS: "*-*-* 12:00:00" # Clean up failed docker backups every noon
SYS_SCHEDULE_CLEANUP_CERTS: "*-*-* 20:00" # Deletes and revokes unused certs once per day
SYS_SCHEDULE_CLEANUP_FAILED_BACKUPS: "*-*-* 21:00" # Clean up failed docker backups once per day
SYS_SCHEDULE_CLEANUP_BACKUPS: "*-*-* 22:00" # Cleanup backups once per day, MUST be called before disc space cleanup
SYS_SCHEDULE_CLEANUP_DISC_SPACE: "*-*-* 23:00" # Cleanup disc space once per day
### Schedule for repair services
SYS_SCHEDULE_REPAIR_BTRFS_AUTO_BALANCER: "Sat *-*-01..07 00:00:00" # Execute btrfs auto balancer every first Saturday of a month
SYS_SCHEDULE_REPAIR_DOCKER_HARD: "Sun *-*-* 08:00:00" # Restart docker instances every Sunday at 8:00 AM
SYS_SCHEDULE_REPAIR_DOCKER_HARD: "Sun *-*-* 00:00:00" # Restart docker instances every Sunday
### Schedule for backup tasks
SYS_SCHEDULE_BACKUP_DOCKER_TO_LOCAL: "*-*-* 03:30:00"
SYS_SCHEDULE_BACKUP_REMOTE_TO_LOCAL: "*-*-* 21:30:00"
SYS_SCHEDULE_BACKUP_REMOTE_TO_LOCAL: "*-*-* 00:30:00" # Pull Backup of the previous day
SYS_SCHEDULE_BACKUP_DOCKER_TO_LOCAL: "*-*-* 01:00:00" # Backup the current day
### Schedule for Maintenance Tasks
SYS_SCHEDULE_MAINTANANCE_LETSENCRYPT_RENEW: "*-*-* 12,00:30:00" # Renew Mailu certificates twice per day
SYS_SCHEDULE_MAINTANANCE_LETSENCRYPT_DEPLOY: "*-*-* 13,01:30:00" # Deploy letsencrypt certificates twice per day to docker containers
SYS_SCHEDULE_MAINTANANCE_NEXTCLOUD: "22" # Do nextcloud maintanace between 22:00 and 02:00
SYS_SCHEDULE_MAINTANANCE_LETSENCRYPT_RENEW: "*-*-* 10,22:00:00" # Renew Mailu certificates twice per day
SYS_SCHEDULE_MAINTANANCE_LETSENCRYPT_DEPLOY: "*-*-* 11,23:00:00" # Deploy letsencrypt certificates twice per day to docker containers
SYS_SCHEDULE_MAINTANANCE_NEXTCLOUD: "21" # Do nextcloud maintanace between 21:00 and 01:00
### Animation
SYS_SCHEDULE_ANIMATION_KEYBOARD_COLOR: "*-*-* *:*:00" # Change the keyboard color every minute

View File

@@ -112,6 +112,10 @@ defaults_networks:
subnet: 192.168.104.32/28
web-svc-coturn:
subnet: 192.168.104.48/28
web-app-mini-qr:
subnet: 192.168.104.64/28
web-app-drupal:
subnet: 192.168.104.80/28
# /24 Networks / 254 Usable Clients
web-app-bigbluebutton:

View File

@@ -80,6 +80,8 @@ ports:
web-app-flowise: 8056
web-app-minio_api: 8057
web-app-minio_console: 8058
web-app-mini-qr: 8059
web-app-drupal: 8060
web-app-bigbluebutton: 48087 # This port is predefined by bbb. @todo Try to change this to a 8XXX port
public:
# The following ports should be changed to 22 on the subdomain via stream mapping

View File

@@ -6,6 +6,7 @@ __metaclass__ = type
import os
import subprocess
import time
from datetime import datetime
class CertUtils:
_domain_cert_mapping = None
@@ -22,6 +23,30 @@ class CertUtils:
except subprocess.CalledProcessError:
return ""
@staticmethod
def run_openssl_dates(cert_path):
"""
Returns (not_before_ts, not_after_ts) as POSIX timestamps or (None, None) on failure.
"""
try:
output = subprocess.check_output(
['openssl', 'x509', '-in', cert_path, '-noout', '-startdate', '-enddate'],
universal_newlines=True
)
nb, na = None, None
for line in output.splitlines():
line = line.strip()
if line.startswith('notBefore='):
nb = line.split('=', 1)[1].strip()
elif line.startswith('notAfter='):
na = line.split('=', 1)[1].strip()
def _parse(openssl_dt):
# OpenSSL format example: "Oct 10 12:34:56 2025 GMT"
return int(datetime.strptime(openssl_dt, "%b %d %H:%M:%S %Y %Z").timestamp())
return (_parse(nb) if nb else None, _parse(na) if na else None)
except Exception:
return (None, None)
@staticmethod
def extract_sans(cert_text):
dns_entries = []
@@ -59,7 +84,6 @@ class CertUtils:
else:
return domain == san
@classmethod
def build_snapshot(cls, cert_base_path):
snapshot = []
@@ -82,6 +106,17 @@ class CertUtils:
@classmethod
def refresh_cert_mapping(cls, cert_base_path, debug=False):
"""
Build mapping: SAN -> list of entries
entry = {
'folder': str,
'cert_path': str,
'mtime': float,
'not_before': int|None,
'not_after': int|None,
'is_wildcard': bool
}
"""
cert_files = cls.list_cert_files(cert_base_path)
mapping = {}
for cert_path in cert_files:
@@ -90,46 +125,82 @@ class CertUtils:
continue
sans = cls.extract_sans(cert_text)
folder = os.path.basename(os.path.dirname(cert_path))
try:
mtime = os.stat(cert_path).st_mtime
except FileNotFoundError:
mtime = 0.0
nb, na = cls.run_openssl_dates(cert_path)
for san in sans:
if san not in mapping:
mapping[san] = folder
entry = {
'folder': folder,
'cert_path': cert_path,
'mtime': mtime,
'not_before': nb,
'not_after': na,
'is_wildcard': san.startswith('*.'),
}
mapping.setdefault(san, []).append(entry)
cls._domain_cert_mapping = mapping
if debug:
print(f"[DEBUG] Refreshed domain-to-cert mapping: {mapping}")
print(f"[DEBUG] Refreshed domain-to-cert mapping (counts): "
f"{ {k: len(v) for k, v in mapping.items()} }")
@classmethod
def ensure_cert_mapping(cls, cert_base_path, debug=False):
if cls._domain_cert_mapping is None or cls.snapshot_changed(cert_base_path):
cls.refresh_cert_mapping(cert_base_path, debug)
@staticmethod
def _score_entry(entry):
"""
Return tuple used for sorting newest-first:
(not_before or -inf, mtime)
"""
nb = entry.get('not_before')
mtime = entry.get('mtime', 0.0)
return (nb if nb is not None else -1, mtime)
@classmethod
def find_cert_for_domain(cls, domain, cert_base_path, debug=False):
cls.ensure_cert_mapping(cert_base_path, debug)
exact_match = None
wildcard_match = None
candidates_exact = []
candidates_wild = []
for san, folder in cls._domain_cert_mapping.items():
for san, entries in cls._domain_cert_mapping.items():
if san == domain:
exact_match = folder
break
if san.startswith('*.'):
candidates_exact.extend(entries)
elif san.startswith('*.'):
base = san[2:]
if domain.count('.') == base.count('.') + 1 and domain.endswith('.' + base):
wildcard_match = folder
candidates_wild.extend(entries)
if exact_match:
if debug:
print(f"[DEBUG] Exact match for {domain} found in {exact_match}")
return exact_match
def _pick_newest(entries):
if not entries:
return None
# newest by (not_before, mtime)
best = max(entries, key=cls._score_entry)
return best
if wildcard_match:
if debug:
print(f"[DEBUG] Wildcard match for {domain} found in {wildcard_match}")
return wildcard_match
best_exact = _pick_newest(candidates_exact)
best_wild = _pick_newest(candidates_wild)
if best_exact and debug:
print(f"[DEBUG] Best exact match for {domain}: {best_exact['folder']} "
f"(not_before={best_exact['not_before']}, mtime={best_exact['mtime']})")
if best_wild and debug:
print(f"[DEBUG] Best wildcard match for {domain}: {best_wild['folder']} "
f"(not_before={best_wild['not_before']}, mtime={best_wild['mtime']})")
# Prefer exact if it exists; otherwise wildcard
chosen = best_exact or best_wild
if chosen:
return chosen['folder']
if debug:
print(f"[DEBUG] No certificate folder found for {domain}")
return None

View File

@@ -3,4 +3,7 @@ collections:
- name: community.general
- name: hetzner.hcloud
yay:
- python-simpleaudio
- python-simpleaudio
- python-numpy
pacman:
- ansible

View File

@@ -127,7 +127,7 @@
#de_BE@euro ISO-8859-15
#de_CH.UTF-8 UTF-8
#de_CH ISO-8859-1
de_DE.UTF-8 UTF-8
#de_DE.UTF-8 UTF-8
#de_DE ISO-8859-1
#de_DE@euro ISO-8859-15
#de_IT.UTF-8 UTF-8

View File

@@ -8,6 +8,11 @@ docker:
image: "bitnamilegacy/openldap"
name: "openldap"
version: "latest"
cpus: 1.25
# Optimized up to 5k user
mem_reservation: 1g
mem_limit: 1.5g
pids_limit: 1024
network: "openldap"
volumes:
data: "openldap_data"

View File

@@ -16,5 +16,12 @@
retries: 30
networks:
- default
{{ lookup('template', 'roles/docker-container/templates/resource.yml.j2',vars={'service_name':'redis'}) | indent(4) }}
{% macro include_resource_for(svc, indent=4) -%}
{% set service_name = svc -%}
{%- set _snippet -%}
{% include 'roles/docker-container/templates/resource.yml.j2' %}
{%- endset -%}
{{ _snippet | indent(indent, true) }}
{%- endmacro %}
{{ include_resource_for('redis') }}
{{ "\n" }}

View File

@@ -57,8 +57,10 @@
- name: Fix ownership level 0..2 directories to backup:backup
ansible.builtin.shell: >
find "{{ BACKUPS_FOLDER_PATH }}" -mindepth 0 -maxdepth 2 -xdev -type d -exec chown backup:backup {} +
changed_when: false
- name: Fix perms level 0..2 directories to 0700
ansible.builtin.shell: >
find "{{ BACKUPS_FOLDER_PATH }}" -mindepth 0 -maxdepth 2 -xdev -type d -exec chmod 700 {} +
changed_when: false

View File

@@ -1,9 +1,6 @@
- block:
- include_tasks: 01_core.yml
when:
- run_once_sys_ctl_bkp_docker_2_loc is not defined
- include_tasks: 01_core.yml
when: run_once_sys_ctl_bkp_docker_2_loc is not defined
- name: "include 04_seed-database-to-backup.yml"
include_tasks: 04_seed-database-to-backup.yml
when:
- BKP_DOCKER_2_LOC_DB_ENABLED | bool
when: BKP_DOCKER_2_LOC_DB_ENABLED | bool

View File

@@ -10,17 +10,6 @@
lua_need_request_body on;
header_filter_by_lua_block {
local ct = ngx.header.content_type or ""
if ct:lower():find("^text/html") then
ngx.ctx.is_html = true
-- IMPORTANT: body will be modified → drop Content-Length to avoid mismatches
ngx.header.content_length = nil
else
ngx.ctx.is_html = false
end
}
body_filter_by_lua_block {
-- Only process HTML responses
if not ngx.ctx.is_html then

View File

@@ -1,3 +1,3 @@
ssl_certificate {{ [ LETSENCRYPT_LIVE_PATH, ssl_cert_folder, 'fullchain.pem'] | path_join }};
ssl_certificate_key {{ [ LETSENCRYPT_LIVE_PATH, ssl_cert_folder, 'privkey.pem' ] | path_join }};
ssl_trusted_certificate {{ [ LETSENCRYPT_LIVE_PATH, ssl_cert_folder, 'chain.pem' ] | path_join }};
ssl_certificate {{ [ LETSENCRYPT_LIVE_PATH | mandatory, ssl_cert_folder | mandatory, 'fullchain.pem'] | path_join }};
ssl_certificate_key {{ [ LETSENCRYPT_LIVE_PATH | mandatory, ssl_cert_folder | mandatory, 'privkey.pem' ] | path_join }};
ssl_trusted_certificate {{ [ LETSENCRYPT_LIVE_PATH | mandatory, ssl_cert_folder | mandatory, 'chain.pem' ] | path_join }};

View File

@@ -1,2 +1,33 @@
add_header Content-Security-Policy "{{ applications | build_csp_header(application_id, domains) }}" always;
proxy_hide_header Content-Security-Policy; # Todo: Make this optional
# ===== Content Security Policy: only for documents and workers (no locations needed) =====
# 1) Define your CSP once (Jinja: escape double quotes to be safe)
set $csp "{{ applications | build_csp_header(application_id, domains) | replace('\"','\\\"') }}";
# 2) Send CSP ONLY for document responses; also for workers via Sec-Fetch-Dest
header_filter_by_lua_block {
local ct = ngx.header.content_type or ngx.header["Content-Type"] or ""
local dest = ngx.var.http_sec_fetch_dest or ""
local lct = ct:lower()
local is_html = lct:find("^text/html") or lct:find("^application/xhtml+xml")
local is_worker = (dest == "worker") or (dest == "serviceworker")
if is_html or is_worker then
ngx.header["Content-Security-Policy"] = ngx.var.csp
else
ngx.header["Content-Security-Policy"] = nil
ngx.header["Content-Security-Policy-Report-Only"] = nil
end
-- If you'll modify the body later, drop Content-Length on HTML
if is_html then
ngx.ctx.is_html = true
ngx.header.content_length = nil
else
ngx.ctx.is_html = false
end
}
# 3) Prevent upstream/app CSP (duplicates)
proxy_hide_header Content-Security-Policy;
proxy_hide_header Content-Security-Policy-Report-Only;

View File

@@ -68,7 +68,12 @@ ChallengeResponseAuthentication no
#KerberosGetAFSToken no
# GSSAPI options
#GSSAPIAuthentication no
# Disable GSSAPI (Kerberos) authentication to avoid unnecessary negotiation delays.
# This setting is useful for non-domain environments where GSSAPI is not used,
# improving SSH connection startup time and reducing overhead.
# See: https://chatgpt.com/share/68efc179-1a10-800f-9656-1e8731b40546
GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
# Set this to 'yes' to enable PAM authentication, account processing,
@@ -97,7 +102,13 @@ PrintMotd no # pam does that
#Compression delayed
#ClientAliveInterval 0
#ClientAliveCountMax 3
#UseDNS no
# Disable reverse DNS lookups to speed up SSH logins.
# When UseDNS is enabled, sshd performs a reverse DNS lookup for each connecting client,
# which can significantly delay authentication if DNS resolution is slow or misconfigured.
# See: https://chatgpt.com/share/68efc179-1a10-800f-9656-1e8731b40546
UseDNS no
#PidFile /run/sshd.pid
#MaxStartups 10:30:100
#PermitTunnel no

View File

@@ -18,10 +18,10 @@ server:
flags:
script-src-elem:
unsafe-inline: true
script-src:
script-src-attr:
unsafe-inline: true
unsafe-eval: true
style-src:
style-src-attr:
unsafe-inline: true
whitelist:
font-src:

View File

@@ -19,7 +19,7 @@ docker:
name: "baserow"
cpus: 1.0
mem_reservation: 0.5g
mem_limit: 1g
mem_limit: 2g
pids_limit: 512
volumes:
data: "baserow_data"
@@ -37,5 +37,5 @@ server:
flags:
script-src-elem:
unsafe-inline: true
style-src:
style-src-attr:
unsafe-inline: true

View File

@@ -13,7 +13,7 @@ server:
flags:
script-src-elem:
unsafe-inline: true
style-src:
style-src-attr:
unsafe-inline: true
domains:
canonical:

View File

@@ -14,13 +14,20 @@
name: sys-stk-full-stateless
vars:
docker_compose_flush_handlers: false
- name: "include 04_seed-database-to-backup.yml"
include_tasks: "{{ [ playbook_dir, 'roles/sys-ctl-bkp-docker-2-loc/tasks/04_seed-database-to-backup.yml' ] | path_join }}"
- name: "Unset 'proxy_extra_configuration'"
set_fact:
proxy_extra_configuration: null
- name: "Include Seed routines for '{{ application_id }}' database backup"
include_tasks: "{{ [ playbook_dir, 'roles/sys-ctl-bkp-docker-2-loc/tasks/04_seed-database-to-backup.yml' ] | path_join }}"
vars:
database_type: "postgres"
database_instance: "{{ entity_name }}"
database_password: "{{ applications | get_app_conf(application_id, 'credentials.postgresql_secret') }}"
database_username: "postgres"
database_name: "" # Multiple databases
- name: configure websocket_upgrade.conf
copy:
src: "websocket_upgrade.conf"

View File

@@ -2,13 +2,6 @@
application_id: "web-app-bigbluebutton"
entity_name: "{{ application_id | get_entity_name }}"
# Database configuration
database_type: "postgres"
database_instance: "{{ application_id | get_entity_name }}"
database_password: "{{ applications | get_app_conf(application_id, 'credentials.postgresql_secret') }}"
database_username: "postgres"
database_name: "" # Multiple databases
# Proxy
domain: "{{ domains | get_domain(application_id) }}"
http_port: "{{ ports.localhost.http[application_id] }}"

View File

@@ -27,7 +27,7 @@ server:
flags:
script-src-elem:
unsafe-inline: true
script-src:
script-src-attr:
unsafe-inline: true
domains:
canonical:

View File

@@ -29,7 +29,7 @@ server:
flags:
script-src-elem:
unsafe-inline: true
script-src:
script-src-attr:
unsafe-inline: true
domains:
canonical:

View File

@@ -15,6 +15,8 @@ server:
- https://code.jquery.com/
style-src-elem:
- https://cdn.jsdelivr.net
- https://kit.fontawesome.com
- https://code.jquery.com/
font-src:
- https://ka-f.fontawesome.com
- https://cdn.jsdelivr.net
@@ -25,7 +27,7 @@ server:
frame-src:
- "{{ WEB_PROTOCOL }}://*.{{ PRIMARY_DOMAIN }}"
flags:
script-src:
script-src-attr:
unsafe-inline: true
domains:
canonical:

View File

@@ -10,7 +10,7 @@ features:
server:
csp:
flags:
style-src:
style-src-attr:
unsafe-inline: true
script-src-elem:
unsafe-inline: true

View File

@@ -0,0 +1,29 @@
# Administration
## Shell access
```bash
docker-compose exec -it application /bin/bash
```
## Drush (inside the container)
```bash
drush --version
drush cr # Cache rebuild
drush status # Site status
drush cim -y # Config import (if using config sync)
drush updb -y # Run DB updates
```
## Database access (local DB service)
```bash
docker-compose exec -it database /bin/mysql -u drupal -p
```
## Test Email
```bash
docker-compose exec -it application /bin/bash -lc 'echo "Test Email" | sendmail -v your-email@example.com'
```

View File

@@ -0,0 +1,32 @@
# Drupal
## Description
[Drupal](https://www.drupal.org/) is a powerful open-source CMS for building secure, extensible, and content-rich digital experiences.
This role deploys a containerized **Drupal 10/11** instance optimized for production, including **msmtp** for outbound email, **Drush** for CLI administration, and **OpenID Connect (OIDC)** for SSO (e.g., Keycloak, Auth0, Azure AD).
## Overview
* **Flexible Content Model:** Entities, fields, and views for complex data needs.
* **Security & Roles:** Fine-grained access control and active security team.
* **Robust Ecosystem:** Thousands of modules and themes.
* **CLI Automation:** Drush for installs, updates, and configuration import.
* **OIDC SSO:** First-class login via external Identity Providers.
This automated Docker Compose deployment builds a custom Drupal image with Drush and msmtp, wires database credentials and config overrides via environment, and applies OIDC configuration via Ansible/Drush.
## OIDC
This role enables **OpenID Connect** via the `openid_connect` module and configures a **client entity** (e.g., `keycloak`) including endpoints and scopes. Global OIDC behavior (auto-create, link existing users, privacy) is set via `openid_connect.settings`.
## Further Resources
* [Drupal.org](https://www.drupal.org/)
* [OpenID Connect module](https://www.drupal.org/project/openid_connect)
## Credits
Developed and maintained by **Kevin Veen-Birkenbach**
Learn more at [veen.world](https://veen.world)
Part of the [Infinito.Nexus Project](https://s.infinito.nexus/code)
License: [Infinito.Nexus NonCommercial License](https://s.infinito.nexus/license)

View File

@@ -0,0 +1,41 @@
title: "Site"
max_upload_size: "256M"
features:
matomo: true
css: false
desktop: true
oidc: true
central_database: true
logout: true
server:
csp:
flags: {}
whitelist: {}
domains:
canonical:
- "drupal.{{ PRIMARY_DOMAIN }}"
aliases: []
docker:
services:
database:
enabled: true
drupal:
# Use a PHP 8.2+ base image to ensure compatibility with OIDC 2.x syntax
version: "10-php8.2-apache"
image: drupal
name: drupal
backup:
no_stop_required: true
volumes:
data: drupal_data
rbac:
roles:
authenticated:
description: "Logged-in user"
content_editor:
description: "Can create and edit content"
site_admin:
description: "Full site administration"

View File

@@ -0,0 +1,23 @@
galaxy_info:
author: "Kevin Veen-Birkenbach"
description: >
Drupal CMS in Docker with Drush, msmtp, and OpenID Connect (OIDC) SSO.
license: "Infinito.Nexus NonCommercial License"
license_url: "https://s.infinito.nexus/license"
company: |
Kevin Veen-Birkenbach
Consulting & Coaching Solutions
https://www.veen.world
galaxy_tags:
- drupal
- docker
- cms
- oidc
- sso
repository: "https://s.infinito.nexus/code"
issue_tracker_url: "https://s.infinito.nexus/issues"
documentation: "https://docs.infinito.nexus"
logo:
class: "fa-solid fa-droplet"
run_after:
- web-app-keycloak

View File

@@ -0,0 +1,9 @@
credentials:
administrator_password:
description: "Initial password for the Drupal admin account"
algorithm: "sha256"
validation: "^[a-f0-9]{64}$"
hash_salt:
description: "Drupal hash_salt value used for one-time logins, CSRF tokens, etc."
algorithm: "sha256"
validation: "^[a-f0-9]{64}$"

View File

@@ -0,0 +1,11 @@
- name: "Ensure sites/default/files exists and is writable"
command: >
docker exec -u root {{ DRUPAL_CONTAINER }} bash -lc
"set -e;
d='{{ DRUPAL_DOCKER_HTML_PATH }}/sites/default';
f=\"$d/files\";
mkdir -p \"$f\";
chown -R {{ DRUPAL_USER }}:{{ DRUPAL_USER }} \"$d\";
find \"$d\" -type d -exec chmod 775 {} +;
find \"$d\" -type f -exec chmod 664 {} +;"
changed_when: true

View File

@@ -0,0 +1,25 @@
- name: "Ensure settings.php exists and includes settings.local.php"
command: >
docker exec -u root {{ DRUPAL_CONTAINER }} bash -lc
"set -e;
f='{{ DRUPAL_DOCKER_CONF_PATH }}/settings.php';
df='{{ DRUPAL_DOCKER_CONF_PATH }}/default.settings.php';
if [ ! -f \"$f\" ] && [ -f \"$df\" ]; then
cp \"$df\" \"$f\";
chown www-data:www-data \"$f\";
chmod 644 \"$f\";
fi;
php -r '
$f=\"{{ DRUPAL_DOCKER_CONF_PATH }}/settings.php\";
if (!file_exists($f)) { exit(0); }
$c=file_get_contents($f);
$inc=\"\\nif (file_exists(\\\"\$app_root/\$site_path/settings.local.php\\\")) { include \$app_root/\$site_path/settings.local.php; }\\n\";
if (strpos($c, \"settings.local.php\") === false) {
file_put_contents($f, $c.$inc);
echo \"patched\";
} else {
echo \"exists\";
}
'"
register: settings_local_include
changed_when: "'patched' in settings_local_include.stdout"

View File

@@ -0,0 +1,32 @@
- name: "Wait for database readiness"
command: >
docker exec {{ DRUPAL_CONTAINER }} bash -lc
"php -r '
$h=\"{{ database_host }}\"; $p={{ database_port }};
$d=\"{{ database_name }}\"; $u=\"{{ database_username }}\"; $pw=\"{{ database_password }}\";
$t=microtime(true)+60; $ok=false;
while (microtime(true)<$t) {
try { new PDO(\"mysql:host=$h;port=$p;dbname=$d;charset=utf8mb4\", $u, $pw,[PDO::ATTR_TIMEOUT=>2]); $ok=true; break; }
catch (Exception $e) { usleep(300000); }
}
if (!$ok) { fwrite(STDERR, \"DB not ready\\n\"); exit(1); }'"
register: db_wait
retries: 1
failed_when: db_wait.rc != 0
- name: "Run Drupal site:install via Drush"
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
command: >
docker exec {{ DRUPAL_CONTAINER }} bash -lc
"/opt/drupal/vendor/bin/drush -r {{ DRUPAL_DOCKER_HTML_PATH }} site:install standard -y
--db-url='mysql://{{ database_username }}:{{ database_password }}@{{ database_host }}:{{ database_port }}/{{ database_name }}'
--site-name='{{ applications | get_app_conf(application_id, 'title', True) }}'
--account-name='{{ applications | get_app_conf(application_id, 'users.administrator.username') }}'
--account-mail='{{ applications | get_app_conf(application_id, 'users.administrator.email', True) }}'
--account-pass='{{ applications | get_app_conf(application_id, 'credentials.administrator_password', True) }}'
--uri='{{ DRUPAL_URL }}'"
args:
chdir: "{{ docker_compose.directories.instance }}"
register: drupal_install
changed_when: "'Installation complete' in drupal_install.stdout"
failed_when: false

View File

@@ -0,0 +1,12 @@
- name: "Enable OpenID Connect core module"
command: >
docker exec {{ DRUPAL_CONTAINER }} bash -lc
"drush -r {{ DRUPAL_DOCKER_HTML_PATH }} en openid_connect -y"
changed_when: true
- name: "Enable OpenID Connect Keycloak preset (submodule of openid_connect)"
command: >
docker exec {{ DRUPAL_CONTAINER }} bash -lc
"drush -r {{ DRUPAL_DOCKER_HTML_PATH }} en openid_connect_client_keycloak -y"
changed_when: true
failed_when: false

View File

@@ -0,0 +1,79 @@
- name: "Load OIDC vars"
include_vars:
file: "{{ role_path }}/vars/oidc.yml"
name: oidc_vars
- name: "Apply openid_connect.settings (global)"
loop: "{{ oidc_vars.oidc_settings | dict2items }}"
loop_control:
label: "{{ item.key }}"
command: >
docker exec {{ DRUPAL_CONTAINER }} bash -lc
"drush -r {{ DRUPAL_DOCKER_HTML_PATH }} cset -y
openid_connect.settings {{ item.key }}
{{ (item.value | to_json) if item.value is mapping or item.value is sequence else item.value }}"
- name: "Ensure/Update OIDC client entity (generic)"
vars:
client_id: "{{ oidc_vars.oidc_client.id }}"
client_label: "{{ oidc_vars.oidc_client.label }}"
plugin_id: "{{ oidc_vars.oidc_client.plugin }}"
settings_b64: "{{ oidc_vars.oidc_client.settings | to_json | b64encode }}"
command: >
docker exec {{ DRUPAL_CONTAINER }} bash -lc
"drush -r {{ DRUPAL_DOCKER_HTML_PATH }} eval '
$id=\"{{ client_id }}\";
$label=\"{{ client_label }}\";
$plugin=\"{{ plugin_id }}\";
$settings=json_decode(base64_decode(\"{{ settings_b64 }}\"), TRUE);
$storage=\\Drupal::entityTypeManager()->getStorage(\"openid_connect_client\");
$e=$storage->load($id);
if (!$e) {
$e=$storage->create([
\"id\"=> $id,
\"label\"=> $label,
\"status\"=> TRUE,
\"plugin\"=> $plugin,
\"settings\"=> $settings,
]);
$e->save();
print \"created\";
} else {
$e->set(\"label\", $label);
$e->set(\"plugin\", $plugin);
$e->set(\"settings\", $settings);
$e->set(\"status\", TRUE);
$e->save();
print \"updated\";
}
'"
register: client_apply
changed_when: "'created' in client_apply.stdout or 'updated' in client_apply.stdout"
- name: "Apply OIDC client settings"
vars:
client_id: "{{ oidc_vars.oidc_client.id }}"
settings_map: "{{ oidc_vars.oidc_client.settings }}"
kv: "{{ settings_map | dict2items }}"
loop: "{{ kv }}"
loop_control:
label: "{{ item.key }}"
command: >
docker exec {{ DRUPAL_CONTAINER }} bash -lc
"drush -r {{ DRUPAL_DOCKER_HTML_PATH }} eval '
$id=\"{{ client_id }}\";
$key=\"{{ item.key }}\";
$val=json_decode(base64_decode(\"{{ (item.value | to_json | b64encode) }}\"), true);
$storage=\Drupal::entityTypeManager()->getStorage(\"openid_connect_client\");
$c=$storage->load($id);
$s=$c->get(\"settings\");
$s[$key]=$val;
$c->set(\"settings\", $s);
$c->save();'"
changed_when: true
- name: "Clear caches after OIDC config"
command: >
docker exec {{ DRUPAL_CONTAINER }} bash -lc
"drush -r {{ DRUPAL_DOCKER_HTML_PATH }} cr"
changed_when: false

View File

@@ -0,0 +1,19 @@
- name: "Set trusted_host_patterns for canonical domains"
vars:
patterns: "{{ DRUPAL_DOMAINS
| map('regex_replace','\\\\.','\\\\\\\\.')
| map('regex_replace','^','^')
| map('regex_replace','$','$')
| list }}"
php_array: "{{ patterns | to_json }}"
command: >
docker exec -u root {{ DRUPAL_CONTAINER }} bash -lc
"php -r '
$f="{{ DRUPAL_DOCKER_CONF_PATH }}/settings.local.php";
$c=file_exists($f)?file_get_contents($f):"<?php\n";
// Remove existing assignment of $settings[\"trusted_host_patterns\"] (if any)
$c=preg_replace(\"/(\\\\$settings\\['trusted_host_patterns'\\]\\s*=).*?;/s\", \"\", $c);
$c.="\n\$settings[\'trusted_host_patterns\'] = ".var_export(json_decode("{{ php_array|e }}", true), true).";\n";
file_put_contents($f,$c);
'"
changed_when: true

View File

@@ -0,0 +1,58 @@
- name: "Include role sys-stk-front-proxy for {{ application_id }}"
include_role:
name: sys-stk-front-proxy
loop: "{{ DRUPAL_DOMAINS }}"
loop_control:
loop_var: domain
vars:
proxy_extra_configuration: "client_max_body_size {{ DRUPAL_MAX_UPLOAD_SIZE }};"
http_port: "{{ ports.localhost.http[application_id] }}"
- name: "Load docker and DB for {{ application_id }}"
include_role:
name: sys-stk-back-stateful
vars:
docker_compose_flush_handlers: false
- name: "Transfer upload.ini to {{ DRUPAL_CONFIG_UPLOAD_ABS }}"
template:
src: upload.ini.j2
dest: "{{ DRUPAL_CONFIG_UPLOAD_ABS }}"
notify:
- docker compose up
- docker compose build
- name: "Transfer msmtprc to {{ DRUPAL_MSMTP_ABS }}"
template:
src: "{{ DRUPAL_MSMTP_SRC }}"
dest: "{{ DRUPAL_MSMTP_ABS }}"
notify: docker compose up
- name: "Transfer settings.local.php overrides"
template:
src: settings.local.php.j2
dest: "{{ DRUPAL_SETTINGS_LOCAL_ABS }}"
notify: docker compose up
- name: Flush handlers to make container ready
meta: flush_handlers
- name: "Fix permissions for sites/default/files"
include_tasks: 00_permissions.yml
- name: "Ensure settings.php includes settings.local.php"
include_tasks: 01_settings_local_include.yml
- name: "Install Drupal (site:install)"
include_tasks: 02_install.yml
- name: "Enable OIDC modules"
include_tasks: 03_enable_modules.yml
when: applications | get_app_conf(application_id, 'features.oidc')
- name: "Configure OIDC (global + client)"
include_tasks: 04_configure_oidc.yml
when: applications | get_app_conf(application_id, 'features.oidc')
- name: "Harden trusted host patterns"
include_tasks: 05_trusted_hosts.yml

View File

@@ -0,0 +1,83 @@
FROM {{ DRUPAL_IMAGE }}:{{ DRUPAL_VERSION }}
# -------------------------------------------------------------------
# System dependencies (mail support + MySQL client + basic tools)
# -------------------------------------------------------------------
RUN apt-get update && \
apt-get install -y msmtp msmtp-mta git unzip zip less nano curl vim mariadb-client && \
rm -rf /var/lib/apt/lists/*
# -------------------------------------------------------------------
# PHP extensions required by Drupal/Drush bootstrap
# -------------------------------------------------------------------
RUN docker-php-ext-install -j"$(nproc)" pdo_mysql
# -------------------------------------------------------------------
# Install Composer
# -------------------------------------------------------------------
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
&& php composer-setup.php --install-dir=/usr/local/bin --filename=composer \
&& rm composer-setup.php
ENV COMPOSER_ALLOW_SUPERUSER=1
# -------------------------------------------------------------------
# Build Drupal project with Drush + OpenID Connect
# IMPORTANT:
# - The Drupal base image uses /var/www/html as a symlink to {{ DRUPAL_DOCKER_HTML_PATH }}
# - Therefore, the actual project root must be placed in /opt/drupal
# -------------------------------------------------------------------
RUN set -eux; \
builddir="$(mktemp -d)"; \
composer create-project --no-interaction --no-ansi --no-progress drupal/recommended-project:^10 "$builddir"; \
composer --working-dir="$builddir" require -n drush/drush:^13 drupal/openid_connect:^2@beta; \
rm -rf /opt/drupal/* /opt/drupal/.[!.]* /opt/drupal/..?* 2>/dev/null || true; \
mkdir -p /opt/drupal; \
cp -a "$builddir"/. /opt/drupal/; \
rm -rf "$builddir"
# -------------------------------------------------------------------
# Make vendor binaries available in PATH
# -------------------------------------------------------------------
RUN ln -sf /opt/drupal/vendor/bin/drush /usr/local/bin/drush
# -------------------------------------------------------------------
# PHP upload configuration
# -------------------------------------------------------------------
COPY {{ DRUPAL_CONFIG_UPLOAD_REL }} $PHP_INI_DIR/conf.d/
# -------------------------------------------------------------------
# Permissions and ownership fixes
# -------------------------------------------------------------------
RUN set -eux; \
# Ensure all directories are traversable
chmod 755 /var /var/www /opt /opt/drupal; \
# Ensure correct ownership for Drupal files
chown -R www-data:www-data /opt/drupal; \
# Apply default permissions
find /opt/drupal -type d -exec chmod 755 {} +; \
find /opt/drupal -type f -exec chmod 644 {} +; \
# Ensure vendor binaries are executable
if [ -d /opt/drupal/vendor/bin ]; then chmod a+rx /opt/drupal/vendor/bin/*; fi; \
if [ -f /opt/drupal/vendor/drush/drush/drush ]; then chmod a+rx /opt/drupal/vendor/drush/drush/drush; fi; \
# Ensure the docroot ({{ DRUPAL_DOCKER_HTML_PATH }}) is accessible
if [ -d {{ DRUPAL_DOCKER_HTML_PATH }} ]; then \
chmod 755 {{ DRUPAL_DOCKER_HTML_PATH }}; \
find {{ DRUPAL_DOCKER_HTML_PATH }} -type d -exec chmod 755 {} +; \
fi; \
# Ensure settings.local.php exists and is owned by www-data
install -o www-data -g www-data -m 640 /dev/null {{ DRUPAL_DOCKER_HTML_PATH }}/sites/default/settings.local.php
# -------------------------------------------------------------------
# Runtime defaults
# -------------------------------------------------------------------
USER www-data
WORKDIR /var/www/html # symlink pointing to {{ DRUPAL_DOCKER_HTML_PATH }}
# Ensure PATH for non-login shells includes /usr/local/bin
ENV PATH="/usr/local/bin:/usr/local/sbin:/usr/sbin:/usr/bin:/sbin:/bin"
# -------------------------------------------------------------------
# Build-time check (optional)
# -------------------------------------------------------------------
RUN /usr/local/bin/drush --version

View File

@@ -0,0 +1,22 @@
{% include 'roles/docker-compose/templates/base.yml.j2' %}
application:
{% include 'roles/docker-container/templates/base.yml.j2' %}
image: {{ DRUPAL_CUSTOM_IMAGE }}
container_name: {{ DRUPAL_CONTAINER }}
{{ lookup('template', 'roles/docker-container/templates/build.yml.j2') | indent(4) }}
ports:
- "127.0.0.1:{{ ports.localhost.http[application_id] }}:80"
volumes:
- data:{{ DRUPAL_DOCKER_HTML_PATH }}/sites/default/files
- {{ DRUPAL_MSMTP_ABS }}:/etc/msmtprc
- {{ DRUPAL_SETTINGS_LOCAL_ABS }}:{{ DRUPAL_DOCKER_CONF_PATH }}/settings.local.php
{% include 'roles/docker-container/templates/healthcheck/msmtp_curl.yml.j2' %}
{% include 'roles/docker-container/templates/depends_on/dmbs_excl.yml.j2' %}
{% include 'roles/docker-container/templates/networks.yml.j2' %}
{% include 'roles/docker-compose/templates/networks.yml.j2' %}
{% include 'roles/docker-compose/templates/volumes.yml.j2' %}
data:
name: "{{ DRUPAL_VOLUME }}"

View File

@@ -0,0 +1,7 @@
DRUPAL_DB_HOST= "{{ database_host }}:{{ database_port }}"
DRUPAL_DB_USER= "{{ database_username }}"
DRUPAL_DB_PASSWORD= "{{ database_password }}"
DRUPAL_DB_NAME= "{{ database_name }}"
# Debug flags (optional)
DRUPAL_DEBUG={{ MODE_DEBUG | lower }}

View File

@@ -0,0 +1,49 @@
<?php
/**
* Local settings overrides generated by Ansible.
* - Reads DB + OIDC endpoints from environment variables.
* - Sets $databases and selected $config overrides.
*/
$env = getenv();
/** Database **/
$host = getenv('DRUPAL_DB_HOST') ?: '{{ database_host }}:{{ database_port }}';
$db = getenv('DRUPAL_DB_NAME') ?: '{{ database_name }}';
$user = getenv('DRUPAL_DB_USER') ?: '{{ database_username }}';
$pass = getenv('DRUPAL_DB_PASSWORD') ?: '{{ database_password }}';
$parts = explode(':', $host, 2);
$hostname = $parts[0];
$port = isset($parts[1]) ? (int)$parts[1] : 3306;
$databases['default']['default'] = [
'database' => $db,
'username' => $user,
'password' => $pass,
'prefix' => '',
'host' => $hostname,
'port' => $port,
'namespace'=> 'Drupal\\Core\\Database\\Driver\\mysql',
'driver' => 'mysql',
];
/** OIDC endpoint hints (optional) — the real config is applied via Drush. */
$config['openid_connect.settings']['automatic_account_creation'] = true;
$config['openid_connect.settings']['always_save_userinfo'] = true;
$config['openid_connect.settings']['link_existing_users'] = true;
/** Trusted host patterns can be extended by Ansible task 04_trusted_hosts.yml */
/** Enable local services YML if present */
$settings['container_yamls'][] = $app_root . '/' . $site_path . '/services.local.yml';
// Optionally set reverse proxy addresses via ENV (e.g. "10.0.0.0/8, 172.16.0.0/12")
$proxy = getenv('REVERSE_PROXY_ADDRESSES');
if ($proxy) {
$settings['reverse_proxy'] = TRUE;
$settings['reverse_proxy_addresses'] = array_map('trim', explode(',', $proxy));
}
/** Hash salt (from schema/credentials, hashed with SHA-256) */
$settings['hash_salt'] = '{{ applications | get_app_conf(application_id, "credentials.hash_salt", True) }}';
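The reverse-proxy block above only takes effect if the container actually receives REVERSE_PROXY_ADDRESSES. A minimal sketch of one way to pass it through Compose; the CIDR values are placeholders and this snippet is not part of the role:

services:
  application:
    environment:
      # placeholder networks; replace with the real proxy addresses
      REVERSE_PROXY_ADDRESSES: "10.0.0.0/8, 172.16.0.0/12"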

View File

@@ -0,0 +1,8 @@
file_uploads = On
memory_limit = {{ DRUPAL_MAX_UPLOAD_SIZE }}
upload_max_filesize = {{ DRUPAL_MAX_UPLOAD_SIZE }}
post_max_size = {{ DRUPAL_MAX_UPLOAD_SIZE }}
max_execution_time = 300
; Use msmtp as the Mail Transfer Agent
sendmail_path = "/usr/bin/msmtp -t"

View File

@@ -0,0 +1,4 @@
users:
administrator:
username: "administrator"
email: "administrator@{{ PRIMARY_DOMAIN }}"

View File

@@ -0,0 +1,28 @@
# General
application_id: "web-app-drupal"
database_type: "mariadb"
# Drupal
DRUPAL_URL: "{{ domains | get_url(application_id, WEB_PROTOCOL) }}"
DRUPAL_CUSTOM_IMAGE: "drupal_custom"
DRUPAL_DOCKER_HTML_PATH: "/opt/drupal/web"
DRUPAL_DOCKER_CONF_PATH: "{{ DRUPAL_DOCKER_HTML_PATH }}/sites/default"
DRUPAL_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.drupal.version') }}"
DRUPAL_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.drupal.image') }}"
DRUPAL_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.drupal.name') }}"
DRUPAL_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
DRUPAL_DOMAINS: "{{ applications | get_app_conf(application_id, 'server.domains.canonical') }}"
DRUPAL_USER: "www-data"
DRUPAL_CONFIG_UPLOAD_REL: "config/upload.ini"
DRUPAL_CONFIG_UPLOAD_ABS: "{{ [docker_compose.directories.instance, DRUPAL_CONFIG_UPLOAD_REL] | path_join }}"
DRUPAL_SETTINGS_LOCAL_REL: "config/settings.local.php"
DRUPAL_SETTINGS_LOCAL_ABS: "{{ [docker_compose.directories.instance, DRUPAL_SETTINGS_LOCAL_REL] | path_join }}"
DRUPAL_MSMTP_SRC: "{{ [ playbook_dir, 'roles/sys-svc-msmtp/templates/msmtprc.conf.j2' ] | path_join }}"
DRUPAL_MSMTP_ABS: "{{ [ docker_compose.directories.config, 'msmtprc.conf'] | path_join }}"
DRUPAL_MAX_UPLOAD_SIZE: "{{ applications | get_app_conf(application_id, 'max_upload_size') }}"
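For orientation, a sketch of the application configuration shape these get_app_conf lookups expect; every value below is a placeholder, not a shipped default:

docker:
  services:
    drupal:
      image: "drupal"        # placeholder
      version: "latest"      # placeholder
      name: "drupal"         # placeholder
  volumes:
    data: "drupal_data"      # placeholder
server:
  domains:
    canonical:
      - "cms.{{ PRIMARY_DOMAIN }}"   # placeholder
max_upload_size: "64M"               # placeholder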

View File

@@ -0,0 +1,34 @@
# OIDC configuration for Drupal's OpenID Connect module.
# Global settings for openid_connect.settings
oidc_settings:
automatic_account_creation: true # Auto-create users on first login
always_save_userinfo: true # Store latest userinfo on each login
link_existing_users: true # Match existing users by email
login_display: "button" # 'button' or 'form'
enforced: false # If true, require login for the whole site
# OIDC client entity (e.g., 'keycloak')
oidc_client:
id: "keycloak"
label: "Keycloak"
plugin: "generic" # use the built-in generic OIDC client plugin
settings:
client_id: "{{ OIDC.CLIENT.ID }}"
client_secret: "{{ OIDC.CLIENT.SECRET }}"
authorization_endpoint: "{{ OIDC.CLIENT.AUTHORIZE_URL }}"
token_endpoint: "{{ OIDC.CLIENT.TOKEN_URL }}"
userinfo_endpoint: "{{ OIDC.CLIENT.USER_INFO_URL }}"
end_session_endpoint: "{{ OIDC.CLIENT.LOGOUT_URL }}"
scopes:
- "openid"
- "email"
- "profile"
use_standard_claims: true
# Optional claim mapping examples:
# username_claim: "{{ OIDC.ATTRIBUTES.USERNAME }}"
# email_claim: "{{ OIDC.ATTRIBUTES.EMAIL }}"
# given_name_claim: "{{ OIDC.ATTRIBUTES.GIVEN_NAME }}"
# family_name_claim: "{{ OIDC.ATTRIBUTES.FAMILY_NAME }}"
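The real configuration is applied via Drush, as noted in settings.local.php above. As a rough sketch only (not the role's actual task; boolean values may need casting before being handed to Drush), applying the global block could look like:

- name: Apply global OpenID Connect settings   # illustrative
  command: >
    docker exec --user www-data {{ DRUPAL_CONTAINER }}
    drush -y config-set openid_connect.settings {{ item.key }} {{ item.value }}
  loop: "{{ oidc_settings | dict2items }}"
  loop_control:
    label: "{{ item.key }}"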

View File

@@ -12,9 +12,7 @@ server:
script-src-elem:
unsafe-inline: true
unsafe-eval: true
style-src:
unsafe-inline: true
script-src:
script-src-attr:
unsafe-eval: true
whitelist:
connect-src:

View File

@@ -18,10 +18,10 @@ server:
flags:
script-src-elem:
unsafe-inline: true
script-src:
script-src-attr:
unsafe-inline: true
unsafe-eval: true
style-src:
style-src-attr:
unsafe-inline: true
oauth2_proxy:
application: "application"

View File

@@ -7,10 +7,10 @@ docker_compose_flush_handlers: false
# Friendica
friendica_container: "friendica"
friendica_no_validation: "{{ applications | get_app_conf(application_id, 'features.oidc', True) }}" # Email validation is not necessary if OIDC is active
friendica_no_validation: "{{ applications | get_app_conf(application_id, 'features.oidc') }}" # Email validation is not necessary if OIDC is active
friendica_application_base: "/var/www/html"
friendica_docker_ldap_config: "{{ friendica_application_base }}/config/ldapauth.config.php"
friendica_host_ldap_config: "{{ docker_compose.directories.volumes }}ldapauth.config.php"
friendica_config_dir: "{{ friendica_application_base }}/config"
friendica_config_file: "{{ friendica_config_dir }}/local.config.php"
friendica_docker_ldap_config: "{{ [ friendica_application_base, 'config/ldapauth.config.php' ] | path_join }}"
friendica_host_ldap_config: "{{ [ docker_compose.directories.volumes, 'ldapauth.config.php' ] | path_join }}"
friendica_config_dir: "{{ [ friendica_application_base, 'config' ] | path_join }}"
friendica_config_file: "{{ [ friendica_config_dir, 'local.config.php' ] | path_join }}"
friendica_user: "www-data"

View File

@@ -27,7 +27,7 @@ server:
aliases: []
csp:
flags:
style-src:
style-src-attr:
unsafe-inline: true
whitelist:
font-src:

View File

@@ -24,7 +24,7 @@ server:
flags:
script-src-elem:
unsafe-inline: true
style-src:
style-src-attr:
unsafe-inline: true
whitelist:
font-src:
@@ -47,7 +47,17 @@ docker:
version: "latest"
backup:
no_stop_required: true
port: 3000
name: "gitea"
port: 3000
name: "gitea"
cpus: 1.0
mem_reservation: 1g
mem_limit: 2g
pids_limit: 1024
redis:
enabled: false
cpus: 0.25
mem_reservation: 0.2g
mem_limit: 0.3g
pids_limit: 512
volumes:
data: "gitea_data"

View File

@@ -2,7 +2,7 @@
shell: |
docker exec -i --user {{ GITEA_USER }} {{ GITEA_CONTAINER }} \
gitea admin auth list \
| awk -v name="LDAP ({{ PRIMARY_DOMAIN }})" '$0 ~ name {print $1; exit}'
| awk -v name="LDAP ({{ SOFTWARE_NAME }})" '$0 ~ name {print $1; exit}'
args:
chdir: "{{ docker_compose.directories.instance }}"
register: ldap_source_id_raw

View File

@@ -11,7 +11,7 @@ USER_GID=1000
# Logging configuration
GITEA__log__MODE=console
GITEA__log__LEVEL={% if MODE_DEBUG | bool %}Debug{% else %}Info{% endif %}
GITEA__log__LEVEL={% if MODE_DEBUG | bool %}Debug{% else %}Info{% endif %}
# Database
DB_TYPE=mysql
@@ -20,6 +20,28 @@ DB_NAME={{ database_name }}
DB_USER={{ database_username }}
DB_PASSWD={{ database_password }}
{% if GITEA_REDIS_ENABLED | bool %}
# ------------------------------------------------
# Redis Configuration for Gitea
# ------------------------------------------------
# @see https://docs.gitea.com/administration/config-cheat-sheet#cache-cache
GITEA__cache__ENABLED=true
GITEA__cache__ADAPTER=redis
# use a different Redis DB index than oauth2-proxy
GITEA__cache__HOST=redis://{{ GITEA_REDIS_ADDRESS }}/1
# Store sessions in Redis (instead of the internal DB)
GITEA__session__PROVIDER=redis
GITEA__session__PROVIDER_CONFIG=network=tcp,addr={{ GITEA_REDIS_ADDRESS }},db=2,pool_size=100,idle_timeout=180
# Use Redis for background task queues
GITEA__queue__TYPE=redis
GITEA__queue__CONN_STR=redis://{{ GITEA_REDIS_ADDRESS }}/3
{% endif %}
# SSH
SSH_PORT={{ports.public.ssh[application_id]}}
SSH_LISTEN_PORT=22
@@ -48,7 +70,7 @@ GITEA__security__INSTALL_LOCK=true # Locks the installation page
GITEA__openid__ENABLE_OPENID_SIGNUP={{ applications | get_app_conf(application_id, 'features.oidc', False) | lower }}
GITEA__openid__ENABLE_OPENID_SIGNIN={{ applications | get_app_conf(application_id, 'features.oidc', False) | lower }}
{% if applications | get_app_conf(application_id, 'features.oidc', False) or applications | get_app_conf(application_id, 'features.ldap', False) %}
{% if GITEA_IAM_ENABLED | bool %}
EXTERNAL_USER_DISABLE_FEATURES=deletion,manage_credentials,change_username,change_full_name
@@ -58,9 +80,5 @@ GITEA__ldap__SYNC_USER_ON_LOGIN=true
{% endif %}
# ------------------------------------------------
# Disable user self-registration
# ------------------------------------------------
# After this only admins can create accounts
GITEA__service__DISABLE_REGISTRATION=false
GITEA__service__DISABLE_REGISTRATION={{ GITEA_IAM_ENABLED | lower }}
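A rough way to confirm the cache wiring above actually lands in Redis DB 1; this assumes the Compose service is called "redis" and that its image ships redis-cli, and it is not part of the role:

- name: Spot-check Gitea cache keys in Redis DB 1   # illustrative
  command: docker compose exec -T redis redis-cli -n 1 DBSIZE
  args:
    chdir: "{{ docker_compose.directories.instance }}"
  register: gitea_redis_dbsize
  changed_when: false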

View File

@@ -22,9 +22,15 @@ GITEA_LDAP_AUTH_ARGS:
- '--email-attribute "{{ LDAP.USER.ATTRIBUTES.MAIL }}"'
- '--public-ssh-key-attribute "{{ LDAP.USER.ATTRIBUTES.SSH_PUBLIC_KEY }}"'
- '--synchronize-users'
GITEA_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.gitea.version') }}"
GITEA_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.gitea.image') }}"
GITEA_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.gitea.name') }}"
GITEA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
GITEA_USER: "git"
GITEA_CONFIG: "/data/gitea/conf/app.ini"
GITEA_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.gitea.version') }}"
GITEA_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.gitea.image') }}"
GITEA_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.gitea.name') }}"
GITEA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
GITEA_USER: "git"
GITEA_CONFIG: "/data/gitea/conf/app.ini"
## Redis
GITEA_REDIS_ENABLED: "{{ applications | get_app_conf(application_id, 'docker.services.redis.enabled') }}"
GITEA_REDIS_ADDRESS: "redis:6379"
GITEA_IAM_ENABLED: "{{ applications | get_app_conf(application_id, 'features.oidc', False) or applications | get_app_conf(application_id, 'features.ldap', False) }}"

View File

@@ -27,3 +27,7 @@ server:
domains:
canonical:
- lab.git.{{ PRIMARY_DOMAIN }}
csp:
flags:
script-src-elem:
unsafe-inline: true

View File

@@ -29,7 +29,7 @@ server:
script-src-elem:
unsafe-inline: true
unsafe-eval: true
script-src:
script-src-attr:
unsafe-inline: true
unsafe-eval: true
domains:

View File

@@ -14,7 +14,7 @@ server:
aliases: []
csp:
flags:
style-src:
style-src-attr:
unsafe-inline: true
script-src-elem:
unsafe-inline: true

View File

@@ -11,7 +11,7 @@
# (Optional) specifically wait for the CLI installer script
- name: "Check for CLI installer"
command:
argv: [ docker, exec, "{{ JOOMLA_CONTAINER }}", test, -f, /var/www/html/installation/joomla.php ]
argv: [ docker, exec, "{{ JOOMLA_CONTAINER }}", test, -f, "{{ JOOMLA_INSTALLER_CLI_FILE }}" ]
register: has_installer
changed_when: false
failed_when: false
@@ -32,7 +32,7 @@
- exec
- "{{ JOOMLA_CONTAINER }}"
- php
- /var/www/html/installation/joomla.php
- "{{ JOOMLA_INSTALLER_CLI_FILE }}"
- install
- "--db-type={{ JOOMLA_DB_CONNECTOR }}"
- "--db-host={{ database_host }}"

View File

@@ -0,0 +1,18 @@
---
# Reset Joomla admin password via CLI (inside the container)
- name: "Reset Joomla admin password (non-interactive CLI)"
command:
argv:
- docker
- exec
- "{{ JOOMLA_CONTAINER }}"
- php
- "{{ JOOMLA_CLI_FILE }}"
- user:reset-password
- "--username"
- "{{ JOOMLA_USER_NAME }}"
- "--password"
- "{{ JOOMLA_USER_PASSWORD }}"
register: j_password_reset
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
changed_when: j_password_reset.rc == 0
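An optional follow-up sanity check, assuming the Joomla 4+ console exposes a user:list command; this task is illustrative and not part of the role:

- name: List Joomla users after the password reset   # illustrative
  command:
    argv:
      - docker
      - exec
      - "{{ JOOMLA_CONTAINER }}"
      - php
      - "{{ JOOMLA_CLI_FILE }}"
      - user:list
  changed_when: false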

View File

@@ -24,3 +24,7 @@
- name: Include assert routines
include_tasks: "04_assert.yml"
when: MODE_ASSERT | bool
- name: Reset Admin Password
include_tasks: 05_reset_admin_password.yml

View File

@@ -13,6 +13,8 @@ JOOMLA_DOMAINS: "{{ applications | get_app_conf(application_id
JOOMLA_SITE_NAME: "{{ SOFTWARE_NAME }} Joomla - CMS"
JOOMLA_DB_CONNECTOR: "{{ 'pgsql' if database_type == 'postgres' else 'mysqli' }}"
JOOMLA_CONFIG_FILE: "/var/www/html/configuration.php"
JOOMLA_INSTALLER_CLI_FILE: "/var/www/html/installation/joomla.php"
JOOMLA_CLI_FILE: "/var/www/html/cli/joomla.php"
# User
JOOMLA_USER_NAME: "{{ users.administrator.username }}"

View File

@@ -19,9 +19,9 @@ server:
flags:
script-src-elem:
unsafe-inline: true
script-src:
script-src-attr:
unsafe-inline: true
style-src:
style-src-attr:
unsafe-inline: true
whitelist:
frame-src:

View File

@@ -18,12 +18,12 @@ features:
server:
csp:
flags:
style-src:
style-src-attr:
unsafe-inline: true
script-src-elem:
unsafe-inline: true
unsafe-eval: true
script-src:
script-src-attr:
unsafe-inline: true
domains:
aliases: []

View File

@@ -13,6 +13,16 @@ server:
aliases: []
status_codes:
default: 404
csp:
flags:
script-src-elem:
unsafe-inline: true
whitelist:
script-src-elem:
- "https://www.hcaptcha.com"
- "https://js.hcaptcha.com"
frame-src:
- "https://newassets.hcaptcha.com/"
docker:
services:
database:

View File

@@ -16,11 +16,11 @@ server:
aliases: []
csp:
flags:
style-src:
style-src-attr:
unsafe-inline: true
script-src-elem:
unsafe-inline: true
script-src:
script-src-attr:
unsafe-inline: true
unsafe-eval: true
rbac:

View File

@@ -17,12 +17,12 @@ server:
style-src-elem:
- https://fonts.googleapis.com
flags:
script-src:
script-src-attr:
unsafe-eval: true
script-src-elem:
unsafe-inline: true
unsafe-eval: true
style-src:
style-src-attr:
unsafe-inline: true
unsafe-eval: true
domains:

View File

@@ -27,12 +27,12 @@ features:
server:
csp:
flags:
script-src:
script-src-attr:
unsafe-eval: true
script-src-elem:
unsafe-inline: true
unsafe-eval: true
style-src:
style-src-attr:
unsafe-inline: true
whitelist:
connect-src:

View File

@@ -4,6 +4,11 @@ server:
canonical:
- "m.wiki.{{ PRIMARY_DOMAIN }}"
aliases: []
csp:
flags:
script-src-elem:
unsafe-inline: true
docker:
services:
database:

View File

@@ -11,7 +11,7 @@ MEDIAWIKI_URL: "{{ domains | get_url(application_id, WEB_PROT
MEDIAWIKI_HTML_DIR: "/var/www/html"
MEDIAWIKI_CONFIG_DIR: "{{ docker_compose.directories.config }}"
MEDIAWIKI_VOLUMES_DIR: "{{ docker_compose.directories.volumes }}"
MEDIAWIKI_LOCAL_MOUNT_DIR: "{{ MEDIAWIKI_VOLUMES_DIR }}/mw-local"
MEDIAWIKI_LOCAL_MOUNT_DIR: "{{ [ MEDIAWIKI_VOLUMES_DIR, 'mw-local' ] | path_join }}"
MEDIAWIKI_LOCAL_PATH: "/opt/mw-local"
## Docker

View File

@@ -29,7 +29,7 @@ server:
frame-ancestors:
- "*" # No damage if it's used somewhere on other websites, it anyhow looks like art
flags:
style-src:
style-src-attr:
unsafe-inline: true
domains:
canonical:

View File

@@ -23,3 +23,5 @@
- name: Build data (single async task)
include_tasks: 02_build_data.yml
when: MIG_BUILD_DATA | bool
- include_tasks: utils/run_once.yml

View File

@@ -1,7 +1,4 @@
---
- block:
- include_tasks: 01_core.yml
- include_tasks: utils/run_once.yml
name: "Setup Meta Infinite Graph"
- include_tasks: 01_core.yml
when: run_once_web_app_mig is not defined

View File

@@ -0,0 +1,26 @@
# Mini-QR
## Description
**Mini-QR** is a lightweight, self-hosted web application for generating QR codes instantly and privately.
It provides a minimal and elegant interface to convert any text, URL, or message into a QR code — directly in your browser, without external tracking or dependencies.
## Overview
Mini-QR is designed for simplicity, privacy, and speed.
It offers an ad-free interface that works entirely within your local environment, making it ideal for individuals, organizations, and educational institutions that value data sovereignty.
The app runs as a single Docker container and requires no database or backend setup, enabling secure and frictionless QR generation anywhere.
## Features
- **Instant QR code creation** — simply type or paste your content.
- **Privacy-friendly** — all generation happens client-side; no data leaves your server.
- **Open Source** — fully auditable and modifiable for custom integrations.
- **Responsive Design** — optimized for both desktop and mobile devices.
- **Docker-ready** — can be deployed in seconds using the official image.
## Further Resources
- 🧩 Upstream project: [lyqht/mini-qr](https://github.com/lyqht/mini-qr)
- 📦 Upstream Dockerfile: [View on GitHub](https://github.com/lyqht/mini-qr/blob/main/Dockerfile)
- 🌐 Docker Image: `ghcr.io/lyqht/mini-qr:latest`
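## Quick Start (illustrative)
A minimal stand-alone Compose sketch, independent of the Ansible role; the container listens on port 8080, while the host binding shown here is an assumption:

services:
  mini-qr:
    image: ghcr.io/lyqht/mini-qr:latest
    ports:
      - "127.0.0.1:8080:8080"   # host binding is a placeholder
    restart: unless-stopped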

View File

@@ -0,0 +1,2 @@
# To-dos
- Remove clarity.ms

View File

@@ -0,0 +1,38 @@
docker:
services:
redis:
enabled: false
database:
enabled: false
features:
matomo: true
css: true
desktop: true
logout: false
server:
csp:
whitelist:
script-src-elem:
# Probably some tracking code.
# Whitelisted anyway so the CSP checks pass.
# @todo Remove
- https://www.clarity.ms/
- https://scripts.clarity.ms/
connect-src:
- https://q.clarity.ms
- https://n.clarity.ms
- "data:"
style-src-elem: []
font-src: []
frame-ancestors: []
flags:
style-src-attr:
unsafe-inline: true
script-src-elem:
unsafe-inline: true
script-src-attr:
unsafe-eval: true
domains:
canonical:
- "qr.{{ PRIMARY_DOMAIN }}"
aliases: []

View File

@@ -0,0 +1,27 @@
galaxy_info:
author: "Kevin Veen-Birkenbach"
description: >
Mini-QR is a minimalist, self-hosted web application that allows users to
instantly generate QR codes in a privacy-friendly way.
license: "Infinito.Nexus NonCommercial License"
license_url: "https://s.infinito.nexus/license"
company: |
Kevin Veen-Birkenbach
Consulting & Coaching Solutions
https://www.veen.world
galaxy_tags:
- infinito
- qr
- webapp
- privacy
- utility
- education
- lightweight
repository: "https://github.com/lyqht/mini-qr"
issue_tracker_url: "https://github.com/lyqht/mini-qr/issues"
documentation: "https://github.com/lyqht/mini-qr"
logo:
class: "fa-solid fa-qrcode"
run_after: []
dependencies: []

View File

@@ -0,0 +1,7 @@
- name: "load docker, proxy for '{{ application_id }}'"
include_role:
name: sys-stk-full-stateless
vars:
docker_compose_flush_handlers: false
- include_tasks: utils/run_once.yml

View File

@@ -0,0 +1,4 @@
---
- include_tasks: 01_core.yml
when: run_once_web_app_mini_qr is not defined

View File

@@ -0,0 +1,12 @@
---
{% include 'roles/docker-compose/templates/base.yml.j2' %}
{% set container_port = 8080 %}
{{ application_id | get_entity_name }}:
{% include 'roles/docker-container/templates/base.yml.j2' %}
image: "{{ MINI_QR_IMAGE }}:{{ MINI_QR_VERSION }}"
container_name: "{{ MINI_QR_CONTAINER }}"
ports:
- 127.0.0.1:{{ ports.localhost.http[application_id] }}:{{ container_port }}
{% include 'roles/docker-container/templates/networks.yml.j2' %}
{% include 'roles/docker-compose/templates/networks.yml.j2' %}

View File

@@ -0,0 +1,12 @@
# General
application_id: web-app-mini-qr
entity_name: "{{ application_id | get_entity_name }}"
# Docker
docker_compose_flush_handlers: false
docker_pull_git_repository: false
# Helper variables
MINI_QR_IMAGE: "ghcr.io/lyqht/mini-qr"
MINI_QR_VERSION: "latest"
MINI_QR_CONTAINER: "{{ entity_name }}"

View File

@@ -10,7 +10,7 @@ server:
flags:
script-src-elem:
unsafe-inline: true
script-src:
script-src-attr:
unsafe-eval: true
domains:
canonical:

View File

@@ -12,9 +12,9 @@ server:
script-src-elem:
unsafe-inline: true
unsafe-eval: true
script-src:
script-src-attr:
unsafe-eval: true
style-src:
style-src-attr:
unsafe-inline: true
unsafe-eval: true
whitelist:

View File

@@ -19,9 +19,9 @@ server:
# It makes sense that all of the website's content is available in the navigator
- "{{ WEB_PROTOCOL }}://*.{{ PRIMARY_DOMAIN }}"
flags:
style-src:
style-src-attr:
unsafe-inline: true
script-src:
script-src-attr:
unsafe-eval: true
script-src-elem:
unsafe-inline: true

View File

@@ -2,13 +2,16 @@ version: "production" # @see https://nextcloud.com/blog/nex
server:
csp:
flags:
style-src:
style-src-attr:
unsafe-inline: true
script-src-elem:
unsafe-inline: true
script-src:
script-src-attr:
unsafe-eval: true
whitelist:
script-src-elem:
- "https://www.hcaptcha.com"
- "https://js.hcaptcha.com"
font-src:
- "data:"
connect-src:
@@ -19,6 +22,7 @@ server:
frame-src:
- "{{ WEBSOCKET_PROTOCOL }}://collabora.{{ PRIMARY_DOMAIN }}"
- "{{ WEB_PROTOCOL }}://collabora.{{ PRIMARY_DOMAIN }}"
- "https://newassets.hcaptcha.com/"
worker-src:
- "blob:"
domains:
@@ -28,13 +32,15 @@ server:
docker:
volumes:
data: nextcloud_data
whiteboard_tmp: nextcloud_whiteboard_tmp
whiteboard_fontcache: nextcloud_whiteboard_fontcache
services:
redis:
enabled: true
cpus: "0.25"
mem_reservation: "64m"
mem_limit: "256m"
pids_limit: 256
cpus: "1"
mem_reservation: "1g"
mem_limit: "8g"
pids_limit: 512
database:
enabled: true
cpus: "0.75"
@@ -80,7 +86,7 @@ docker:
cpus: "1.0"
mem_reservation: "256m"
mem_limit: "1g"
pids_limit: 512
pids_limit: 1024
whiteboard:
name: "nextcloud-whiteboard"
image: "ghcr.io/nextcloud-releases/whiteboard"
@@ -90,7 +96,7 @@ docker:
cpus: "0.25"
mem_reservation: "128m"
mem_limit: "512m"
pids_limit: 256
pids_limit: 1024
enabled: "{{ applications | get_app_conf('web-app-nextcloud', 'features.oidc', False, True, True) }}" # Activate OIDC for Nextcloud
# flavor decides which OIDC plugin should be used.
# Available options: oidc_login, sociallogin

View File

@@ -14,6 +14,21 @@
vars:
docker_compose_flush_handlers: false
- block:
- name: "Create '{{ NEXTCLOUD_WHITEBOARD_SERVICE_DIRECTORY }}' Directory"
file:
path: "{{ NEXTCLOUD_WHITEBOARD_SERVICE_DIRECTORY }}"
state: directory
mode: "0755"
- name: "Deploy Whiteboard Dockerfile to '{{ NEXTCLOUD_WHITEBOARD_SERVICE_DOCKERFILE }}'"
template:
src: "Dockerfiles/Whiteboard.j2"
dest: "{{ NEXTCLOUD_WHITEBOARD_SERVICE_DOCKERFILE }}"
notify: docker compose build
when: NEXTCLOUD_WHITEBOARD_ENABLED | bool
- name: "create {{ NEXTCLOUD_HOST_CONF_ADD_PATH }}"
file:
path: "{{ NEXTCLOUD_HOST_CONF_ADD_PATH }}"
@@ -24,8 +39,8 @@
template:
src: "{{ item }}"
dest: "{{ NEXTCLOUD_HOST_CONF_ADD_PATH }}/{{ item | basename | regex_replace('\\.j2$', '') }}"
owner: "{{ NEXTCLOUD_DOCKER_USER_id }}"
group: "{{ NEXTCLOUD_DOCKER_USER_id }}"
owner: "{{ NEXTCLOUD_DOCKER_USER_ID }}"
group: "{{ NEXTCLOUD_DOCKER_USER_ID }}"
loop: "{{ lookup('fileglob', role_path ~ '/templates/config/*.j2', wantlist=True) }}"
notify: docker compose up

View File

@@ -7,6 +7,9 @@
command: "{{ NEXTCLOUD_DOCKER_EXEC_OCC }} maintenance:repair --include-expensive"
register: occ_repair
changed_when: "'No repairs needed' not in occ_repair.stdout"
retries: 3
delay: 10
until: occ_repair.rc == 0
- name: Nextcloud | App update (retry once)
command: "{{ NEXTCLOUD_DOCKER_EXEC_OCC }} app:update --all"

View File

@@ -16,6 +16,13 @@
- name: Flush all handlers immediately so that occ can be used
meta: flush_handlers
- name: Wait until Redis is ready (PONG)
command: "docker exec {{ NEXTCLOUD_REDIS_CONTAINER }} redis-cli ping"
register: redis_ping
retries: 60
delay: 2
until: (redis_ping.stdout | default('')) is search('PONG')
- name: Update/Upgrade Nextcloud
include_tasks: 03_upgrade.yml
when: MODE_UPDATE | bool

View File

@@ -0,0 +1,27 @@
FROM {{ NEXTCLOUD_WHITEBOARD_IMAGE }}:{{ NEXTCLOUD_WHITEBOARD_VERSION }}
# Temporarily switch to root so we can install packages
USER 0
# Install Chromium, ffmpeg, fonts, and runtime libraries for headless operation on Alpine
RUN apk add --no-cache \
chromium \
ffmpeg \
nss \
freetype \
harfbuzz \
ttf-dejavu \
ttf-liberation \
udev \
ca-certificates \
&& update-ca-certificates
# Ensure a consistent Chromium binary path
RUN if [ -x /usr/bin/chromium-browser ]; then ln -sf /usr/bin/chromium-browser /usr/bin/chromium; fi
# Environment variables used by Puppeteer
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium \
PUPPETEER_SKIP_DOWNLOAD=true
# Switch back to the original non-root user (nobody)
USER 65534
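A quick smoke test for the resulting image, assuming the whiteboard container (see NEXTCLOUD_WHITEBOARD_CONTAINER in the vars further below) is already running; the task itself is illustrative:

- name: Confirm Chromium is available inside the custom whiteboard image
  command: "docker exec {{ NEXTCLOUD_WHITEBOARD_CONTAINER }} chromium --version"
  register: whiteboard_chromium_check
  changed_when: false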

View File

@@ -67,16 +67,29 @@
{{ service_name }}:
{% set container_port = NEXTCLOUD_WHITEBOARD_PORT_INTERNAL %}
{% include 'roles/docker-container/templates/base.yml.j2' %}
build:
context: .
dockerfile: {{ NEXTCLOUD_WHITEBOARD_SERVICE_DOCKERFILE }}
pull_policy: never
{% include 'roles/docker-container/templates/healthcheck/nc.yml.j2' %}
image: "{{ NEXTCLOUD_WHITEBOARD_IMAGE }}:{{ NEXTCLOUD_WHITEBOARD_VERSION }}"
image: "{{ NEXTCLOUD_WHITEBOARD_CUSTOM_IMAGE }}"
container_name: {{ NEXTCLOUD_WHITEBOARD_CONTAINER }}
volumes:
- whiteboard_tmp:/tmp
- whiteboard_fontcache:/var/cache/fontconfig
expose:
- "{{ container_port }}"
shm_size: 1g
networks:
default:
ipv4_address: 192.168.102.71
depends_on:
redis:
condition: service_healthy
{% endif %}
{% set service_name = NEXTCLOUD_CRON_SERVICE %}
{{ service_name }}:
container_name: "{{ NEXTCLOUD_CRON_CONTAINER }}"
@@ -99,5 +112,11 @@
{% include 'roles/docker-compose/templates/volumes.yml.j2' %}
data:
name: {{ NEXTCLOUD_VOLUME }}
{% if NEXTCLOUD_WHITEBOARD_ENABLED %}
whiteboard_tmp:
name: {{ NEXTCLOUD_WHITEBOARD_TMP_VOLUME }}
whiteboard_fontcache:
name: {{ NEXTCLOUD_WHITEBOARD_FRONTCACHE_VOLUME }}
{% endif %}
{% include 'roles/docker-compose/templates/networks.yml.j2' %}

View File

@@ -60,4 +60,9 @@ NEXTCLOUD_URL= "{{ NEXTCLOUD_URL }}"
JWT_SECRET_KEY= "{{ NEXTCLOUD_WHITEBOARD_JWT }}"
STORAGE_STRATEGY=redis
REDIS_URL=redis://redis:6379/0
# Chromium (headless) hardening for Whiteboard
CHROMIUM_FLAGS=--headless=new --no-sandbox --disable-gpu --disable-dev-shm-usage --use-gl=swiftshader --disable-software-rasterizer
# If the image ships Chromium, the binary path is usually /usr/bin/chromium or /usr/bin/chromium-browser:
PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium
PUPPETEER_SKIP_DOWNLOAD=true
{% endif %}

View File

@@ -23,6 +23,12 @@ server
{% include 'roles/sys-svc-proxy/templates/location/ws.conf.j2' %}
{% endif %}
{% if NEXTCLOUD_WHITEBOARD_ENABLED | bool %}
{% set location_ws = '^~ ' ~ NEXTCLOUD_WHITEBOARD_LOCATION %}
{% set ws_port = NEXTCLOUD_PORT %}
{% include 'roles/sys-svc-proxy/templates/location/ws.conf.j2' %}
{% endif %}
{% include 'roles/sys-svc-proxy/templates/location/html.conf.j2' %}
location ^~ /.well-known {

View File

@@ -116,24 +116,32 @@ NEXTCLOUD_HPB_TURN_STANDALONE_CONFIG: >-
}}
### Whiteboard
NEXTCLOUD_WHITEBOARD_SERVICE: "whiteboard"
NEXTCLOUD_WHITEBOARD_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.' ~ NEXTCLOUD_WHITEBOARD_SERVICE ~'.name') }}"
NEXTCLOUD_WHITEBOARD_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.' ~ NEXTCLOUD_WHITEBOARD_SERVICE ~'.image') }}"
NEXTCLOUD_WHITEBOARD_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.' ~ NEXTCLOUD_WHITEBOARD_SERVICE ~'.version') }}"
NEXTCLOUD_WHITEBOARD_ENABLED: "{{ applications | get_app_conf(application_id, 'plugins.' ~ NEXTCLOUD_WHITEBOARD_SERVICE ~'.enabled') }}"
NEXTCLOUD_WHITEBOARD_PORT_INTERNAL: "3002"
NEXTCLOUD_WHITEBOARD_JWT: "{{ applications | get_app_conf(application_id, 'credentials.' ~ NEXTCLOUD_WHITEBOARD_SERVICE ~'_jwt_secret') }}"
NEXTCLOUD_WHITEBOARD_LOCATION: "/whiteboard/"
NEXTCLOUD_WHITEBOARD_URL: "{{ [ NEXTCLOUD_URL, NEXTCLOUD_WHITEBOARD_LOCATION ] | url_join }}"
NEXTCLOUD_WHITEBOARD_SERVICE: "whiteboard"
NEXTCLOUD_WHITEBOARD_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.' ~ NEXTCLOUD_WHITEBOARD_SERVICE ~'.name') }}"
NEXTCLOUD_WHITEBOARD_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.' ~ NEXTCLOUD_WHITEBOARD_SERVICE ~'.image') }}"
NEXTCLOUD_WHITEBOARD_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.' ~ NEXTCLOUD_WHITEBOARD_SERVICE ~'.version') }}"
NEXTCLOUD_WHITEBOARD_CUSTOM_IMAGE: "nextcloud_whiteboard_custom"
NEXTCLOUD_WHITEBOARD_ENABLED: "{{ applications | get_app_conf(application_id, 'plugins.' ~ NEXTCLOUD_WHITEBOARD_SERVICE ~'.enabled') }}"
NEXTCLOUD_WHITEBOARD_PORT_INTERNAL: "3002"
NEXTCLOUD_WHITEBOARD_JWT: "{{ applications | get_app_conf(application_id, 'credentials.' ~ NEXTCLOUD_WHITEBOARD_SERVICE ~'_jwt_secret') }}"
NEXTCLOUD_WHITEBOARD_LOCATION: "/whiteboard/"
NEXTCLOUD_WHITEBOARD_URL: "{{ [ NEXTCLOUD_URL, NEXTCLOUD_WHITEBOARD_LOCATION ] | url_join }}"
NEXTCLOUD_WHITEBOARD_TMP_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.whiteboard_tmp') }}"
NEXTCLOUD_WHITEBOARD_FRONTCACHE_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.whiteboard_fontcache') }}"
NEXTCLOUD_WHITEBOARD_SERVICE_DIRECTORY: "{{ [ docker_compose.directories.services, 'whiteboard' ] | path_join }}"
NEXTCLOUD_WHITEBOARD_SERVICE_DOCKERFILE: "{{ [ NEXTCLOUD_WHITEBOARD_SERVICE_DIRECTORY, 'Dockerfile' ] | path_join }}"
### Collabora
NEXTCLOUD_COLLABORA_URL: "{{ domains | get_url('web-svc-collabora', WEB_PROTOCOL) }}"
NEXTCLOUD_COLLABORA_URL: "{{ domains | get_url('web-svc-collabora', WEB_PROTOCOL) }}"
## User Configuration
NEXTCLOUD_DOCKER_USER_id: 82 # UID of the www-data user
NEXTCLOUD_DOCKER_USER: "www-data" # Name of the www-data user (set here to make it easy to change in the future)
NEXTCLOUD_DOCKER_USER_ID: 82 # UID of the www-data user
NEXTCLOUD_DOCKER_USER: "www-data" # Name of the www-data user (set here to make it easy to change in the future)
## Execution
NEXTCLOUD_INTERNAL_OCC_COMMAND: "{{ [ NEXTCLOUD_DOCKER_WORK_DIRECTORY, 'occ'] | path_join }}"
NEXTCLOUD_DOCKER_EXEC: "docker exec -u {{ NEXTCLOUD_DOCKER_USER }} {{ NEXTCLOUD_CONTAINER }}" # General execute composition
NEXTCLOUD_DOCKER_EXEC_OCC: "{{ NEXTCLOUD_DOCKER_EXEC }} {{ NEXTCLOUD_INTERNAL_OCC_COMMAND }}" # Execute docker occ command
NEXTCLOUD_INTERNAL_OCC_COMMAND: "{{ [ NEXTCLOUD_DOCKER_WORK_DIRECTORY, 'occ'] | path_join }}"
NEXTCLOUD_DOCKER_EXEC: "docker exec -u {{ NEXTCLOUD_DOCKER_USER }} {{ NEXTCLOUD_CONTAINER }}" # General execute composition
NEXTCLOUD_DOCKER_EXEC_OCC: "{{ NEXTCLOUD_DOCKER_EXEC }} {{ NEXTCLOUD_INTERNAL_OCC_COMMAND }}" # Execute docker occ command
## Redis
NEXTCLOUD_REDIS_CONTAINER: "{{ entity_name }}-redis"

View File

@@ -1,11 +1,18 @@
{% if applications | get_app_conf(application_id, 'features.oauth2', False) %}
oauth2-proxy:
image: quay.io/oauth2-proxy/oauth2-proxy:{{ applications['web-app-oauth2-proxy'].version}}
image: quay.io/oauth2-proxy/oauth2-proxy:{{ applications['web-app-oauth2-proxy'].version }}
restart: {{ DOCKER_RESTART_POLICY }}
command: --config /oauth2-proxy.cfg
container_name: {{ application_id | get_entity_name }}-oauth2-proxy
hostname: oauth2-proxy
ports:
- 127.0.0.1:{{ ports.localhost.oauth2_proxy[application_id] }}:4180/tcp
volumes:
- "{{ docker_compose.directories.volumes }}{{ applications | get_app_conf('web-app-oauth2-proxy','configuration_file')}}:/oauth2-proxy.cfg"
healthcheck:
test: ["CMD", "/bin/oauth2-proxy", "--version"]
interval: 30s
timeout: 5s
retries: 1
start_period: 5s
{% endif %}

View File

@@ -23,7 +23,7 @@ server:
flags:
script-src-elem:
unsafe-inline: true
style-src:
style-src-attr:
unsafe-inline: true
whitelist:
font-src:

View File

@@ -17,11 +17,6 @@ server:
flags:
script-src-elem:
unsafe-inline: true
#script-src:
# unsafe-inline: true
# unsafe-eval: true
#style-src:
# unsafe-inline: true
whitelist:
font-src: []
connect-src: []

View File

@@ -10,9 +10,9 @@ server:
flags:
script-src-elem:
unsafe-inline: true
script-src:
script-src-attr:
unsafe-inline: true
style-src:
style-src-attr:
unsafe-inline: true
whitelist:
frame-ancestors:

View File

@@ -16,7 +16,7 @@ features:
server:
csp:
flags:
style-src:
style-src-attr:
unsafe-inline: true
script-src-elem:
unsafe-inline: true

View File

@@ -15,7 +15,7 @@ features:
server:
csp:
flags:
style-src:
style-src-attr:
unsafe-inline: true
script-src-elem:
unsafe-inline: true

View File

@@ -9,13 +9,13 @@ features:
server:
csp:
flags:
script-src:
script-src-attr:
unsafe-eval: true
unsafe-inline: true
script-src-elem:
unsafe-inline: true
unsafe-eval: true
style-src:
style-src-attr:
unsafe-inline: true
whitelist:
frame-ancestors:

Some files were not shown because too many files have changed in this diff.