50 Commits

Author SHA1 Message Date
2f46b99e4e XWiki: add diagnostic and modern AuthService handling
- Added 05_set_authservice.yml to set XWikiPreferences.authenticationService
  to modern component hints (standard, oidc, ldap).
- Added _auth_diag.yml to introspect registered AuthService components and
  verify the active preference.
- Updated docker-compose.yml.j2 to use -Dxwiki.authentication.authservice
  instead of deprecated authclass syntax.
- Temporarily included AuthDiag task in 01_core.yml for runtime verification.

Context: https://chatgpt.com/share/69005d88-6bf8-800f-af41-73b0e5dc9c13
2025-10-28 07:07:42 +01:00
295ae7e477 Solved MediaWiki CSP bug which prevented OIDC login 2025-10-27 20:33:07 +01:00
c67ccc1df6 Used path_join @ web-app-friendica 2025-10-26 15:48:28 +01:00
cb483f60d1 optimized for easier debugging 2025-10-25 12:52:17 +02:00
2be73502ca Solved tests 2025-10-25 11:46:36 +02:00
57d5269b07 CSP (Safari-safe): merge -elem/-attr into base; respect explicit disables; no mirror-back; header only for documents/workers
- Add CSP3 support for style/script: include -elem and -attr directives
- Base (style-src, script-src) now unions elem/attr (CSP2/Safari fallback)
- Respect explicit base disables (e.g. style-src.unsafe-inline: false)
- Hashes only when 'unsafe-inline' absent in the final base tokens
- Nginx: set CSP only for HTML/worker via header_filter_by_lua_block; drop for subresources
- Remove per-location header_filter; keep body_filter only
- Update app role flags to *-attr where appropriate; extend desktop CSS sources
- Add comprehensive unit tests for union/explicit-disable/no-mirror-back

Ref: https://chatgpt.com/share/68f87a0a-cebc-800f-bb3e-8c8ab4dee8ee
2025-10-22 13:53:06 +02:00
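For illustration, the union/explicit-disable semantics described above reduce to a few lines. A minimal, self-contained Python sketch (function names are illustrative, not the repository's filter plugin):

def _dedup_preserve(seq):
    seen, out = set(), []
    for t in seq:
        if t not in seen:
            seen.add(t)
            out.append(t)
    return out

def merge_family(base, elem, attr, explicit_base_flags):
    # Base becomes the union of base/-elem/-attr (CSP2/Safari fallback),
    # but an explicit disable on the base strips the token again.
    union = _dedup_preserve(base + elem + attr)
    for flag in ("unsafe-inline", "unsafe-eval"):
        if explicit_base_flags.get(flag) is False:
            union = [t for t in union if t != f"'{flag}'"]
    return union

print(merge_family(["'self'"], ["'self'", "'unsafe-inline'"], [], {"unsafe-inline": False}))
# -> ["'self'"]  (the explicit base disable wins over the -elem token)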
1eefdea050 Solved CSP errors for MiniQR 2025-10-22 12:49:22 +02:00
561160504e Add new web-app-mini-qr role
- Introduced new role 'web-app-mini-qr' to deploy the lightweight, self-hosted Mini-QR application.
- Added dedicated subnet and localhost port mapping (8059) in group_vars.
- Ensured proper dependency structure and run_once handling in MIG role.
- Included upstream reference and a temporary CSP whitelist entry for clarity.ms, with removal tracking.
- Added README.md and meta information following the Infinito.Nexus web-app schema.

See: https://chatgpt.com/share/68f890ab-5960-800f-85f8-ba30bd4350fe
2025-10-22 10:07:35 +02:00
9a4bf91276 feat(nextcloud): enable custom Alpine-based Whiteboard image with Chromium & ffmpeg support
- Added role tasks to deploy templated Dockerfile for Whiteboard service
- Configured build context and custom image name (nextcloud_whiteboard_custom)
- Increased PID limits and shm_size for stable recording
- Adjusted user ID variable naming consistency
- Integrated path_join for service directory variables
- Fixed build permissions (install as root, revert to nobody)

Reference: ChatGPT conversation https://chatgpt.com/share/68f771c6-0e98-800f-99ca-9e367f4cd0c2
2025-10-21 13:44:11 +02:00
468b6e734c Deactivated whiteboard 2025-10-20 21:17:06 +02:00
83cb94b6ff Refactored Redis resource include macro and increased memory limits
- Replaced deprecated lookup(vars=...) in svc-db-redis with macro-based include (Ansible/Jinja safe)
- Redis now uses higher resource values (1 CPU, 1G reserved, 8G max, 512 pids)
- Enables stable Whiteboard operation with >3.5 GB Redis memory usage
- Related conversation: https://chatgpt.com/share/68f67a00-d598-800f-a6be-ee5987e66fba
2025-10-20 20:08:38 +02:00
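The macro-based include boils down to capturing an include's output in a block set and re-indenting it. A self-contained Jinja2-in-Python sketch of that pattern (template names and values are illustrative):

from jinja2 import Environment, DictLoader

templates = {
    "resource.yml.j2": "cpus: {{ cpus }}\nmem_limit: {{ mem_limit }}",
    "compose.yml.j2": (
        "{% set _snippet %}{% include 'resource.yml.j2' %}{% endset %}"
        "services:\n  redis:\n{{ _snippet | indent(4, true) }}"
    ),
}
env = Environment(loader=DictLoader(templates))
print(env.get_template("compose.yml.j2").render(cpus=1, mem_limit="8g"))
# services:
#   redis:
#     cpus: 1
#     mem_limit: 8g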
6857295969 Fix variable definition test to recognize block-style Jinja 'set ... endset' statements
This update extends the regex to detect block-style variable definitions such as:
  {% set var %} ... {% endset %}
Previously, only inline 'set var =' syntax was recognized, causing false positives
like '_snippet' being flagged as undefined in Jinja templates.

Reference: https://chatgpt.com/share/68f6799a-eb80-800f-ab5c-7c196d4c4661
2025-10-20 20:04:40 +02:00
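Roughly what the extended check does, as a standalone sketch (the patterns are illustrative, not the repository's exact regex):

import re

INLINE_SET = re.compile(r"\{%-?\s*set\s+(\w+)\s*=")      # {% set var = ... %}
BLOCK_SET = re.compile(r"\{%-?\s*set\s+(\w+)\s*-?%\}")   # {% set var %}...{% endset %}

template = "{% set _snippet %}{% include 'x.j2' %}{% endset %}"
defined = set(INLINE_SET.findall(template)) | set(BLOCK_SET.findall(template))
print(defined)  # {'_snippet'} -- no longer flagged as undefined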
8ab398f679 nextcloud:whiteboard: wait for Redis before start (depends_on: service_healthy) to prevent early SocketClosedUnexpectedlyError
Context: added depends_on on redis for the Whiteboard service so websockets don’t crash when Redis isn’t ready yet. See discussion: https://chatgpt.com/share/68f65a3e-aa54-800f-a1a7-e6878775fd7e
2025-10-20 17:50:47 +02:00
31133ddd90 Enhancement: Fix for Nextcloud Whiteboard recording and collaboration server
- Added Chromium headless flags and writable font cache/tmp volumes
- Enabled WebSocket proxy forwarding for /whiteboard/
- Verified and adjusted CSP and frontend integration
- Added Whiteboard-related variables and volumes in main.yml

See ChatGPT conversation (20 Oct 2025):
https://chatgpt.com/share/68f655e1-fa3c-800f-b35f-4f875dfed4fd
2025-10-20 17:31:59 +02:00
783b1e152d Added numpy 2025-10-20 11:03:44 +02:00
eca567fefd Made gitea LDAP Source primary domain independent 2025-10-18 10:54:39 +02:00
905f461ee8 Add basic healthcheck to oauth2-proxy container template using binary version check for distroless compatibility
Reference: https://chatgpt.com/share/68f35550-4248-800f-9c6a-dbd49a48592e
2025-10-18 10:52:58 +02:00
9f0b259ba9 Merge branch 'master' of github.com:kevinveenbirkenbach/infinito-nexus 2025-10-18 09:41:18 +02:00
06e4323faa Added ansible environment 2025-10-17 23:07:43 +02:00
3d99226f37 Refactor BigBlueButton and backup task structure:
- Moved database seed variables from vars/main.yml to task-level include in BigBlueButton
- Simplified core include logic in sys-ctl-bkp-docker-2-loc
- Ensured clean conditional for BKP_DOCKER_2_LOC_DB_ENABLED
See: https://chatgpt.com/share/68f216f7-62d8-800f-94e3-c82e4418e51b (deutsch)
2025-10-17 12:14:39 +02:00
73ba09fbe2 Optimize SSH connection performance by disabling GSSAPI authentication and reverse DNS lookups
- Added 'GSSAPIAuthentication no' to prevent unnecessary Kerberos negotiation delays.
- Added 'UseDNS no' to skip reverse DNS resolution during SSH login, improving connection speed.
- Both changes improve SSH responsiveness, especially in non-domain environments.

Reference: https://chatgpt.com/share/68efc179-1a10-800f-9656-1e8731b40546
2025-10-15 18:37:09 +02:00
01ea9b76ce Enable pipelining globally and modernize SSH settings
- Activated pipelining in [defaults] for better performance.
- Replaced deprecated 'scp_if_ssh' with 'transfer_method'.
- Flattened multi-line ssh_args for compatibility.
- Verified configuration parsing as discussed in https://chatgpt.com/share/68efc179-1a10-800f-9656-1e8731b40546
2025-10-15 17:45:16 +02:00
c22acf202f Solved bugs 2025-10-15 17:03:57 +02:00
61e138c1a6 Optimize OpenLDAP container resources for up to 5k users (1.25 CPU / 1.5GB RAM / 1024 PIDs). See https://chatgpt.com/share/68ef7228-4028-800f-8986-54206a51b9c1 2025-10-15 12:06:51 +02:00
07c8e036ec Deactivated 'changed_when' because it is not trackable anyway 2025-10-15 10:27:12 +02:00
0b36059cd2 feat(web-app-gitea): add optional Redis integration for caching, sessions, and queues
This update introduces conditional Redis support for Gitea, allowing connection
to either a local or centralized Redis instance depending on configuration.
Includes resource limits for the Redis service and corresponding environment
variables for cache, session, and queue backends.

Reference: ChatGPT conversation on centralized vs per-app Redis architecture (2025-10-15).
https://chatgpt.com/share/68ef5930-49c8-800f-b6b8-069e6fefda01
2025-10-15 10:20:18 +02:00
d76e384ae3 Enhance CertUtils to return the newest matching certificate and add comprehensive unit tests
- Added run_openssl_dates() to extract notBefore/notAfter timestamps.
- Modified mapping logic to store multiple cert entries per SAN with metadata.
- find_cert_for_domain() now selects the newest certificate based on notBefore and mtime.
- Exact SAN matches take precedence over wildcard matches.
- Added new unit tests (test_cert_utils_newest.py) verifying freshness logic, fallback handling, and wildcard behavior.

Reference: https://chatgpt.com/share/68ef4b4c-41d4-800f-9e50-5da4b6be1105
2025-10-15 09:21:00 +02:00
e6f4f3a6a4 feat(cli/build/defaults): ensure deterministic alphabetical sorting for applications and users
- Added sorting by application key and user key before YAML output.
- Ensures stable and reproducible file generation across runs.
- Added comprehensive unit tests verifying key order and output stability.

See: https://chatgpt.com/share/68ef4778-a848-800f-a50b-a46a3b878797
2025-10-15 09:04:39 +02:00
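The core of the change fits in a few lines; a sketch (yaml.safe_dump stands in for the generator's writer, PyYAML assumed):

import yaml

apps = {"web-app-nextcloud": {}, "web-app-gitea": {}}
result = {"defaults_applications": {k: apps[k] for k in sorted(apps)}}
print(yaml.safe_dump(result, sort_keys=False))
# Byte-identical output on every run, regardless of dict insertion order.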
a80b26ed9e Moved bbb database seeding 2025-10-15 08:50:21 +02:00
45ec7b0ead Optimized include text 2025-10-15 08:39:37 +02:00
ec396d130c Optimized time schedule 2025-10-15 08:37:51 +02:00
93c2fbedd7 Added setting of timezone 2025-10-15 02:24:25 +02:00
d006f0ba5e Optimized schedule 2025-10-15 02:13:13 +02:00
dd43722e02 Raised memory for baserow 2025-10-14 21:59:10 +02:00
05d7ddc491 svc-bkp-rmt-2-loc: migrate pull script to Python + add unit tests; lock down backup-provider ACLs
- Replace Bash pull-specific-host.sh with Python pull-specific-host.py (argparse, identical logic)
- Update role vars and runner template to call python script
- Add __init__.py files for test discovery/imports
- Add unittest: tests/unit/roles/svc-bkp-rmt-2-loc/files/test_pull_specific_host.py (mocks subprocess/os/time; covers success, no types, find-fail, retry-exhaustion)
- Backup provider SSH wrapper: align allowed ls path (backup-docker-to-local)
- Split user role tasks: 01_core (sudoers), 02_permissions_ssh (SSH keys + wrapper), 03_permissions_folders (ownership + default ACLs + depth-limited chown/chmod)
- Ensure default ACLs grant rwx to 'backup' and none to group/other; keep sudo rsync working

Ref: ChatGPT discussion (2025-10-14) — https://chatgpt.com/share/68ee920a-9b98-800f-8806-ddcfe0255149
2025-10-14 20:10:49 +02:00
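The new unit test's mocking approach, roughly — a minimal sketch assuming the script is importable as pull_specific_host (patch targets and fixture values are illustrative):

import unittest
from unittest.mock import patch

import pull_specific_host  # assumption: the script is importable under this name

class RetryExhaustionTest(unittest.TestCase):
    @patch("pull_specific_host.time.sleep", return_value=None)  # skip real delays
    @patch("pull_specific_host.os.system", return_value=1)      # rsync always fails
    @patch("pull_specific_host.os.makedirs")
    @patch("pull_specific_host.run_command")
    def test_exit_nonzero_when_rsync_never_succeeds(self, run_cmd, *_):
        run_cmd.side_effect = [
            "a" * 64,                                        # hashed machine id
            "backup-docker-to-local",                        # backup types
            "",                                              # last local version
            "/Backups/a/backup-docker-to-local/2025-10-14",  # remote versions
        ]
        with self.assertRaises(SystemExit) as ctx:
            pull_specific_host.pull_backups("host.example")
        self.assertNotEqual(ctx.exception.code, 0)

if __name__ == "__main__":
    unittest.main()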
e54436821c Refactor sys-front-inj-all dependencies handling
Moved CDN and logout role inclusions into a dedicated '01_dependencies.yml' file for better modularity and reusability.
Added variable injection support via 'vars:' to allow flexible configuration like 'proxy_extra_configuration'.

See: https://chatgpt.com/share/68ee880d-cd80-800f-8dda-9e981631a5c7
2025-10-14 19:27:56 +02:00
ed73a37795 Improve get_app_conf robustness and add skip_missing_app parameter support
- Added new optional parameter 'skip_missing_app' to get_app_conf() in module_utils/config_utils.py to safely return defaults when applications are missing.
- Updated group_vars/all/00_general.yml and roles/web-app-nextcloud/config/main.yml to include skip_missing_app=True in all Nextcloud-related calls.
- Added comprehensive unit tests under tests/unit/module_utils/test_config_utils.py covering missing app handling, schema enforcement, nested lists, and index edge cases.

Ref: https://chatgpt.com/share/68ee6b5c-6db0-800f-bc20-d51470d7b39f
2025-10-14 17:25:37 +02:00
adff9271fd Solved rmt backup bugs 2025-10-14 16:29:42 +02:00
2f0fb2cb69 Merged network definitions before application definitions 2025-10-14 15:52:28 +02:00
6abf2629e0 Removed false - 2025-10-13 19:03:23 +02:00
6a8e0f38d8 fix(web-svc-collabora): add required Docker capabilities and resource limits for Collabora Jails
- Added security_opt (seccomp=unconfined, apparmor=unconfined) and cap_add (MKNOD, SYS_CHROOT, SETUID, SETGID, FOWNER)
  to allow Collabora's sandbox (coolmount/systemplate) to mount and chroot properly
- Increased resource limits (2 CPUs, 2 GB RAM, 2048 PIDs) to prevent document timeout and OOM issues
- Resolves 'coolmount: Operation not permitted' and systemplate performance warnings

Refs: https://chatgpt.com/share/68ed03cd-1afc-800f-904e-d1c1cb133914
2025-10-13 15:52:50 +02:00
ae618cbf19 refactor(web-app-desktop, web-app-discourse): improve initialization handling and HTTP readiness check
- Added HTTP readiness check for Desktop application to ensure all logos can be downloaded during initialization
- Introduced 'http_port' variable for better readability
- Simplified role execution structure by moving run_once inclusion into core task file
- Adjusted docker compose handler flushing behavior
- Applied consistent structure to Discourse role

See: https://chatgpt.com/share/68ed02aa-b44c-800f-a125-de8600b102d4
2025-10-13 15:48:26 +02:00
c835ca8f2c refactor(front-injection): stabilize run_once flow and explicit web-service loading
- sys-front-inj-all: load web-svc-cdn and web-svc-logout once; reinitialize inj_enabled after services; move run_once block to top; reorder injections.
- sys-front-inj-css: move run_once call into 01_core; fix app_style_present default; simplify main.
- sys-front-inj-desktop/js/matomo: deactivate per-role run_once blocks; keep utils/run_once where appropriate.
- sys-front-inj-logout: switch to files/logout.js + copy; update head_sub mtime lookup; mark set_fact tasks unchanged.
- sys-svc-cdn: inline former 01_core tasks into main; ensure shared/vendor dirs and set run_once in guarded block; remove 01_core.yml.

Rationale: prevent cascading 'false_condition: run_once_sys_svc_cdn is not defined' skips by setting run-once facts only after the necessary tasks and avoiding parent-scope guards; improves determinism and handler flushing.

Conversation: https://chatgpt.com/share/68ecfaa5-94a0-800f-b1b6-2b969074651f
2025-10-13 15:12:23 +02:00
087175a3c7 Solved mailu token bug 2025-10-13 10:50:59 +02:00
3da645f3b8 Mailu/MSMTP: split token mgmt, idempotent reload, safer guards
• Rename: 02_create-user.yml → 02_manage_user.yml; 03_create-token.yml → 03a_manage_user_token.yml + 03b_create_user_token.yml
• Only (re)run sys-svc-msmtp when no-reply token exists; set run_once_sys_svc_msmtp=true in 01_core
• Reset by setting run_once_sys_svc_msmtp=false after creating no-reply token; then include sys-svc-msmtp
• Harden when-guards (no '{{ }}' in when, safe .get lookups)
• Minor formatting and failed_when readability

Conversation: https://chatgpt.com/share/68ebd196-a264-800f-a215-3a89d0f96c79
2025-10-12 18:05:00 +02:00
a996e2190f feat(logout): wire injector to web-svc-logout and add robust CORS/CSP for /logout
- sys-front-inj-logout: depend on web-svc-logout (run-once guarded) and simplify task flow.
- web-svc-logout: align feature flags/formatting and extend CSP:
  - add cdn.jsdelivr.net to connect/script/style and quote values.
- Nginx: move CORS config into logout-proxy.conf.j2 with dynamic vars:
  - Access-Control-Allow-Origin set to canonical logout origin,
  - Allow-Credentials=true,
  - Allow-Methods=GET, OPTIONS,
  - basic headers list (Accept, Authorization),
  - cache disabled for /logout responses.
- Drop obsolete CORS var passing from 01_core.yml; headers now templated at proxy layer.

Prepares clean cross-origin logout orchestration from https://logout.veen.world.

Refs: ChatGPT discussion – https://chatgpt.com/share/68ebb75f-0170-800f-93c5-e5cb438b8ed4
2025-10-12 16:16:47 +02:00
7dccffd52d Optimized variable whitespacing 2025-10-12 14:37:45 +02:00
853f2c3e2d Added mastodon to disc space cleanup script 2025-10-12 14:37:19 +02:00
b2978a3141 Changed mailu token name 2025-10-12 14:36:13 +02:00
0e0b703ccd Added cleanup to mastodon 2025-10-12 14:03:28 +02:00
150 changed files with 2132 additions and 573 deletions

View File

@@ -1,5 +1,6 @@
[defaults]
# --- Performance & Behavior ---
pipelining = True
forks = 25
strategy = linear
gathering = smart
@@ -14,19 +15,14 @@ stdout_callback = yaml
callbacks_enabled = profile_tasks,timer
# --- Plugin paths ---
filter_plugins = ./filter_plugins
filter_plugins = ./filter_plugins
lookup_plugins = ./lookup_plugins
module_utils = ./module_utils
[ssh_connection]
# Multiplexing: safer socket path in HOME instead of /tmp
ssh_args = -o ControlMaster=auto -o ControlPersist=20s -o ControlPath=~/.ssh/ansible-%h-%p-%r \
-o ServerAliveInterval=15 -o ServerAliveCountMax=3 -o StrictHostKeyChecking=accept-new \
-o PreferredAuthentications=publickey,password,keyboard-interactive
# Pipelining boosts speed; works fine if sudoers does not enforce "requiretty"
ssh_args = -o ControlMaster=auto -o ControlPersist=20s -o ControlPath=~/.ssh/ansible-%h-%p-%r -o ServerAliveInterval=15 -o ServerAliveCountMax=3 -o StrictHostKeyChecking=accept-new -o PreferredAuthentications=publickey,password,keyboard-interactive
pipelining = True
scp_if_ssh = smart
transfer_method = smart
[persistent_connection]
connect_timeout = 30

View File

@@ -83,6 +83,13 @@ class DefaultsGenerator:
            print(f"Error during rendering: {e}", file=sys.stderr)
            sys.exit(1)

        # Sort applications by application key for stable output
        apps = result.get("defaults_applications", {})
        if isinstance(apps, dict) and apps:
            result["defaults_applications"] = {
                k: apps[k] for k in sorted(apps.keys())
            }

        # Write output
        self.output_file.parent.mkdir(parents=True, exist_ok=True)
        with self.output_file.open("w", encoding="utf-8") as f:

View File

@@ -220,6 +220,10 @@ def main():
        print(f"Error building user entries: {e}", file=sys.stderr)
        sys.exit(1)

    # Sort users by key for deterministic output
    if isinstance(users, dict) and users:
        users = OrderedDict(sorted(users.items()))

    # Convert OrderedDict into plain dict for YAML
    default_users = {'default_users': users}
    plain_data = dictify(default_users)

View File

@@ -10,9 +10,23 @@ from module_utils.config_utils import get_app_conf
from module_utils.get_url import get_url


def _dedup_preserve(seq):
    """Return a list with stable order and unique items."""
    seen = set()
    out = []
    for x in seq:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out


class FilterModule(object):
    """
    Custom filters for Content Security Policy generation and CSP-related utilities.
    Jinja filters for building a robust, CSP3-aware Content-Security-Policy header.
    Safari/CSP2 compatibility is ensured by merging the -elem/-attr variants into the base
    directives (style-src, script-src). We intentionally do NOT mirror back into -elem/-attr
    to allow true CSP3 granularity on modern browsers.
    """

    def filters(self):
@@ -61,11 +75,14 @@ class FilterModule(object):
        """
        Returns CSP flag tokens (e.g., "'unsafe-eval'", "'unsafe-inline'") for a directive,
        merging sane defaults with app config.
        Default: 'unsafe-inline' is enabled for style-src and style-src-elem.
        Defaults:
        - For styles we enable 'unsafe-inline' by default (style-src, style-src-elem, style-src-attr),
          because many apps rely on inline styles / style attributes.
        - For scripts we do NOT enable 'unsafe-inline' by default.
        """
        # Defaults that apply to all apps
        default_flags = {}
        if directive in ('style-src', 'style-src-elem'):
        if directive in ('style-src', 'style-src-elem', 'style-src-attr'):
            default_flags = {'unsafe-inline': True}

        configured = get_app_conf(
@@ -76,7 +93,6 @@ class FilterModule(object):
            {}
        )

        # Merge defaults with configured flags (configured overrides defaults)
        merged = {**default_flags, **configured}

        tokens = []
@@ -131,82 +147,148 @@ class FilterModule(object):
    ):
        """
        Builds the Content-Security-Policy header value dynamically based on application settings.
        - Flags (e.g., 'unsafe-eval', 'unsafe-inline') are read from server.csp.flags.<directive>,
          with sane defaults applied in get_csp_flags (always 'unsafe-inline' for style-src and style-src-elem).
        - Inline hashes are read from server.csp.hashes.<directive>.
        - Whitelists are read from server.csp.whitelist.<directive>.
        - Inline hashes are added only if the final tokens do NOT include 'unsafe-inline'.
        Key points:
        - CSP3-aware: supports base/elem/attr for styles and scripts.
        - Safari/CSP2 fallback: base directives (style-src, script-src) always include
          the union of their -elem/-attr variants.
        - We do NOT mirror back into -elem/-attr; finer CSP3 rules remain effective
          on modern browsers if you choose to use them.
        - If the app explicitly disables a token on the *base* (e.g. style-src.unsafe-inline: false),
          that token is removed from the merged base even if present in elem/attr.
        - Inline hashes are added ONLY if that directive does NOT include 'unsafe-inline'.
        - Whitelists/flags/hashes read from:
            server.csp.whitelist.<directive>
            server.csp.flags.<directive>
            server.csp.hashes.<directive>
        - “Smart defaults”:
            * internal CDN for style/script elem and connect
            * Matomo endpoints (if feature enabled) for script-elem/connect
            * Simpleicons (if feature enabled) for connect
            * reCAPTCHA (if feature enabled) for script-elem/frame-src
            * frame-ancestors extended for desktop/logout/keycloak if enabled
        """
        try:
            directives = [
                'default-src',      # Fallback source list for content types not explicitly listed
                'connect-src',      # Allowed URLs for XHR, WebSockets, EventSource, fetch()
                'frame-ancestors',  # Who may embed this page
                'frame-src',        # Sources for nested browsing contexts (e.g., <iframe>)
                'script-src',       # Sources for script execution
                'script-src-elem',  # Sources for <script> elements
                'style-src',        # Sources for inline styles and <style>/<link> elements
                'style-src-elem',   # Sources for <style> and <link rel="stylesheet">
                'font-src',         # Sources for fonts
                'worker-src',       # Sources for workers
                'manifest-src',     # Sources for web app manifests
                'media-src',        # Sources for audio and video
                'default-src',
                'connect-src',
                'frame-ancestors',
                'frame-src',
                'script-src',
                'script-src-elem',
                'script-src-attr',
                'style-src',
                'style-src-elem',
                'style-src-attr',
                'font-src',
                'worker-src',
                'manifest-src',
                'media-src',
            ]

            parts = []
            tokens_by_dir = {}
            explicit_flags_by_dir = {}

            for directive in directives:
                # Collect explicit flags (to later respect explicit "False" on base during merge)
                explicit_flags = get_app_conf(
                    applications,
                    application_id,
                    'server.csp.flags.' + directive,
                    False,
                    {}
                )
                explicit_flags_by_dir[directive] = explicit_flags

                tokens = ["'self'"]

                # Load flags (includes defaults from get_csp_flags)
                # 1) Flags (with sane defaults)
                flags = self.get_csp_flags(applications, application_id, directive)
                tokens += flags

                # Allow fetching from internal CDN by default for selected directives
                if directive in ['script-src-elem', 'connect-src', 'style-src-elem']:
                # 2) Internal CDN defaults for selected directives
                if directive in ('script-src-elem', 'connect-src', 'style-src-elem', 'style-src'):
                    tokens.append(get_url(domains, 'web-svc-cdn', web_protocol))

                # Matomo integration if feature is enabled
                if directive in ['script-src-elem', 'connect-src']:
                # 3) Matomo (if enabled)
                if directive in ('script-src-elem', 'connect-src'):
                    if self.is_feature_enabled(applications, matomo_feature_name, application_id):
                        tokens.append(get_url(domains, 'web-app-matomo', web_protocol))

                # Simpleicons integration if feature is enabled
                if directive in ['connect-src']:
                # 4) Simpleicons (if enabled) typically used via connect-src (fetch)
                if directive == 'connect-src':
                    if self.is_feature_enabled(applications, 'simpleicons', application_id):
                        tokens.append(get_url(domains, 'web-svc-simpleicons', web_protocol))

                # ReCaptcha integration (scripts + frames) if feature is enabled
                # 5) reCAPTCHA (if enabled) scripts + frames
                if self.is_feature_enabled(applications, 'recaptcha', application_id):
                    if directive in ['script-src-elem', 'frame-src']:
                    if directive in ('script-src-elem', 'frame-src'):
                        tokens.append('https://www.gstatic.com')
                        tokens.append('https://www.google.com')

                # Frame ancestors handling (desktop + logout support)
                # 6) Frame ancestors (desktop + logout)
                if directive == 'frame-ancestors':
                    if self.is_feature_enabled(applications, 'desktop', application_id):
                        # Allow being embedded by the desktop app domain (and potentially its parent)
                        # Allow being embedded by the desktop app domain's site
                        domain = domains.get('web-app-desktop')[0]
                        sld_tld = ".".join(domain.split(".")[-2:])  # e.g., example.com
                        tokens.append(f"{sld_tld}")
                    if self.is_feature_enabled(applications, 'logout', application_id):
                        # Allow embedding via logout proxy and Keycloak app
                        tokens.append(get_url(domains, 'web-svc-logout', web_protocol))
                        tokens.append(get_url(domains, 'web-app-keycloak', web_protocol))

                # Custom whitelist entries
                # 7) Custom whitelist
                tokens += self.get_csp_whitelist(applications, application_id, directive)

                # Add inline content hashes ONLY if final tokens do NOT include 'unsafe-inline'
                # (Check tokens, not flags, to include defaults and later modifications.)
                # 8) Inline hashes (only if this directive does NOT include 'unsafe-inline')
                if "'unsafe-inline'" not in tokens:
                    for snippet in self.get_csp_inline_content(applications, application_id, directive):
                        tokens.append(self.get_csp_hash(snippet))

                # Append directive
                parts.append(f"{directive} {' '.join(tokens)};")
                tokens_by_dir[directive] = _dedup_preserve(tokens)

            # Static img-src directive (kept permissive for data/blob and any host)
            # ----------------------------------------------------------
            # CSP3 families → ensure CSP2 fallback (Safari-safe)
            # Merge style/script families so base contains union of elem/attr.
            # Respect explicit disables on the base (e.g. unsafe-inline=False).
            # Do NOT mirror back into elem/attr (keep granularity).
            # ----------------------------------------------------------
            def _strip_if_disabled(unioned_tokens, explicit_flags, name):
                """
                Remove a token (e.g. 'unsafe-inline') from the unioned token list
                if it is explicitly disabled in the base directive flags.
                """
                if isinstance(explicit_flags, dict) and explicit_flags.get(name) is False:
                    tok = f"'{name}'"
                    return [t for t in unioned_tokens if t != tok]
                return unioned_tokens

            def merge_family(base_key, elem_key, attr_key):
                base = tokens_by_dir.get(base_key, [])
                elem = tokens_by_dir.get(elem_key, [])
                attr = tokens_by_dir.get(attr_key, [])
                union = _dedup_preserve(base + elem + attr)

                # Respect explicit disables on the base
                explicit_base = explicit_flags_by_dir.get(base_key, {})
                # The most relevant flags for script/style:
                for flag_name in ('unsafe-inline', 'unsafe-eval'):
                    union = _strip_if_disabled(union, explicit_base, flag_name)

                tokens_by_dir[base_key] = union  # write back only to base

            merge_family('style-src', 'style-src-elem', 'style-src-attr')
            merge_family('script-src', 'script-src-elem', 'script-src-attr')

            # ----------------------------------------------------------
            # Assemble header
            # ----------------------------------------------------------
            parts = []
            for directive in directives:
                if directive in tokens_by_dir:
                    parts.append(f"{directive} {' '.join(tokens_by_dir[directive])};")

            # Keep permissive img-src for data/blob + any host (as before)
            parts.append("img-src * data: blob:;")
            return ' '.join(parts)
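The hash gating at the end of the directive loop is plain SHA-256/Base64; a standalone sketch (csp_hash is a stand-in for the role's get_csp_hash helper):

import base64
import hashlib

def csp_hash(snippet: str) -> str:
    digest = hashlib.sha256(snippet.encode("utf-8")).digest()
    return f"'sha256-{base64.b64encode(digest).decode()}'"

tokens = ["'self'"]                  # final tokens for e.g. style-src
if "'unsafe-inline'" not in tokens:  # hashes only when inline is NOT allowed
    tokens.append(csp_hash("body { margin: 0 }"))
print("style-src " + " ".join(tokens) + ";")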

View File

@@ -76,8 +76,9 @@ _applications_nextcloud_oidc_flavor: >-
False,
'oidc_login'
if applications
| get_app_conf('web-app-nextcloud','features.ldap',False, True)
else 'sociallogin'
| get_app_conf('web-app-nextcloud','features.ldap',False, True, True)
else 'sociallogin',
True
)
}}

View File

@@ -1,4 +1,3 @@
# Service Timers
## Meta
@@ -24,29 +23,29 @@ SYS_SCHEDULE_HEALTH_BTRFS: "*-*-* 00:00:00"
SYS_SCHEDULE_HEALTH_JOURNALCTL: "*-*-* 00:00:00" # Check once per day the journalctl for errors
SYS_SCHEDULE_HEALTH_DISC_SPACE: "*-*-* 06,12,18,00:00:00" # Check four times per day if there is sufficient disc space
SYS_SCHEDULE_HEALTH_DOCKER_CONTAINER: "*-*-* {{ HOURS_SERVER_AWAKE }}:00:00" # Check once per hour if the docker containers are healthy
SYS_SCHEDULE_HEALTH_DOCKER_VOLUMES: "*-*-* {{ HOURS_SERVER_AWAKE }}:15:00" # Check once per hour if the docker volumes are healthy
SYS_SCHEDULE_HEALTH_CSP_CRAWLER: "*-*-* {{ HOURS_SERVER_AWAKE }}:30:00" # Check once per hour if all CSPs are fulfilled
SYS_SCHEDULE_HEALTH_NGINX: "*-*-* {{ HOURS_SERVER_AWAKE }}:45:00" # Check once per hour if all webservices are available
SYS_SCHEDULE_HEALTH_DOCKER_VOLUMES: "*-*-* {{ HOURS_SERVER_AWAKE }}:00:00" # Check once per hour if the docker volumes are healthy
SYS_SCHEDULE_HEALTH_CSP_CRAWLER: "*-*-* {{ HOURS_SERVER_AWAKE }}:00:00" # Check once per hour if all CSPs are fulfilled
SYS_SCHEDULE_HEALTH_NGINX: "*-*-* {{ HOURS_SERVER_AWAKE }}:00:00" # Check once per hour if all webservices are available
SYS_SCHEDULE_HEALTH_MSMTP: "*-*-* 00:00:00" # Check once per day SMTP Server
### Schedule for cleanup tasks
SYS_SCHEDULE_CLEANUP_BACKUPS: "*-*-* 00,06,12,18:30:00" # Cleanup backups every 6 hours, MUST be called before disc space cleanup
SYS_SCHEDULE_CLEANUP_DISC_SPACE: "*-*-* 07,13,19,01:30:00" # Cleanup disc space every 6 hours
SYS_SCHEDULE_CLEANUP_CERTS: "*-*-* 12,00:45:00" # Deletes and revokes unused certs
SYS_SCHEDULE_CLEANUP_FAILED_BACKUPS: "*-*-* 12:00:00" # Clean up failed docker backups every noon
SYS_SCHEDULE_CLEANUP_CERTS: "*-*-* 20:00" # Deletes and revokes unused certs once per day
SYS_SCHEDULE_CLEANUP_FAILED_BACKUPS: "*-*-* 21:00" # Clean up failed docker backups once per day
SYS_SCHEDULE_CLEANUP_BACKUPS: "*-*-* 22:00" # Cleanup backups once per day, MUST be called before disc space cleanup
SYS_SCHEDULE_CLEANUP_DISC_SPACE: "*-*-* 23:00" # Cleanup disc space once per day
### Schedule for repair services
SYS_SCHEDULE_REPAIR_BTRFS_AUTO_BALANCER: "Sat *-*-01..07 00:00:00" # Execute btrfs auto balancer every first Saturday of a month
SYS_SCHEDULE_REPAIR_DOCKER_HARD: "Sun *-*-* 08:00:00" # Restart docker instances every Sunday at 8:00 AM
SYS_SCHEDULE_REPAIR_DOCKER_HARD: "Sun *-*-* 00:00:00" # Restart docker instances every Sunday
### Schedule for backup tasks
SYS_SCHEDULE_BACKUP_DOCKER_TO_LOCAL: "*-*-* 03:30:00"
SYS_SCHEDULE_BACKUP_REMOTE_TO_LOCAL: "*-*-* 21:30:00"
SYS_SCHEDULE_BACKUP_REMOTE_TO_LOCAL: "*-*-* 00:30:00" # Pull Backup of the previous day
SYS_SCHEDULE_BACKUP_DOCKER_TO_LOCAL: "*-*-* 01:00:00" # Backup the current day
### Schedule for Maintenance Tasks
SYS_SCHEDULE_MAINTANANCE_LETSENCRYPT_RENEW: "*-*-* 12,00:30:00" # Renew Mailu certificates twice per day
SYS_SCHEDULE_MAINTANANCE_LETSENCRYPT_DEPLOY: "*-*-* 13,01:30:00" # Deploy letsencrypt certificates twice per day to docker containers
SYS_SCHEDULE_MAINTANANCE_NEXTCLOUD: "22" # Do nextcloud maintenance between 22:00 and 02:00
SYS_SCHEDULE_MAINTANANCE_LETSENCRYPT_RENEW: "*-*-* 10,22:00:00" # Renew Mailu certificates twice per day
SYS_SCHEDULE_MAINTANANCE_LETSENCRYPT_DEPLOY: "*-*-* 11,23:00:00" # Deploy letsencrypt certificates twice per day to docker containers
SYS_SCHEDULE_MAINTANANCE_NEXTCLOUD: "21" # Do nextcloud maintenance between 21:00 and 01:00
### Animation
SYS_SCHEDULE_ANIMATION_KEYBOARD_COLOR: "*-*-* *:*:00" # Change the keyboard color every minute

View File

@@ -112,6 +112,8 @@ defaults_networks:
subnet: 192.168.104.32/28
web-svc-coturn:
subnet: 192.168.104.48/28
web-app-mini-qr:
subnet: 192.168.104.64/28
# /24 Networks / 254 Usable Clients
web-app-bigbluebutton:

View File

@@ -80,6 +80,7 @@ ports:
web-app-flowise: 8056
web-app-minio_api: 8057
web-app-minio_console: 8058
web-app-mini-qr: 8059
web-app-bigbluebutton: 48087 # This port is predefined by bbb. @todo Try to change this to a 8XXX port
public:
# The following ports should be changed to 22 on the subdomain via stream mapping

View File

@@ -6,6 +6,7 @@ __metaclass__ = type
import os
import subprocess
import time
from datetime import datetime


class CertUtils:
    _domain_cert_mapping = None
@@ -22,6 +23,30 @@ class CertUtils:
        except subprocess.CalledProcessError:
            return ""

    @staticmethod
    def run_openssl_dates(cert_path):
        """
        Returns (not_before_ts, not_after_ts) as POSIX timestamps or (None, None) on failure.
        """
        try:
            output = subprocess.check_output(
                ['openssl', 'x509', '-in', cert_path, '-noout', '-startdate', '-enddate'],
                universal_newlines=True
            )
            nb, na = None, None
            for line in output.splitlines():
                line = line.strip()
                if line.startswith('notBefore='):
                    nb = line.split('=', 1)[1].strip()
                elif line.startswith('notAfter='):
                    na = line.split('=', 1)[1].strip()

            def _parse(openssl_dt):
                # OpenSSL format example: "Oct 10 12:34:56 2025 GMT"
                return int(datetime.strptime(openssl_dt, "%b %d %H:%M:%S %Y %Z").timestamp())

            return (_parse(nb) if nb else None, _parse(na) if na else None)
        except Exception:
            return (None, None)

    @staticmethod
    def extract_sans(cert_text):
        dns_entries = []
@@ -59,7 +84,6 @@ class CertUtils:
        else:
            return domain == san

    @classmethod
    def build_snapshot(cls, cert_base_path):
        snapshot = []
@@ -82,6 +106,17 @@ class CertUtils:
    @classmethod
    def refresh_cert_mapping(cls, cert_base_path, debug=False):
        """
        Build mapping: SAN -> list of entries
        entry = {
            'folder': str,
            'cert_path': str,
            'mtime': float,
            'not_before': int|None,
            'not_after': int|None,
            'is_wildcard': bool
        }
        """
        cert_files = cls.list_cert_files(cert_base_path)
        mapping = {}
        for cert_path in cert_files:
@@ -90,46 +125,82 @@ class CertUtils:
                continue
            sans = cls.extract_sans(cert_text)
            folder = os.path.basename(os.path.dirname(cert_path))

            try:
                mtime = os.stat(cert_path).st_mtime
            except FileNotFoundError:
                mtime = 0.0
            nb, na = cls.run_openssl_dates(cert_path)

            for san in sans:
                if san not in mapping:
                    mapping[san] = folder
                entry = {
                    'folder': folder,
                    'cert_path': cert_path,
                    'mtime': mtime,
                    'not_before': nb,
                    'not_after': na,
                    'is_wildcard': san.startswith('*.'),
                }
                mapping.setdefault(san, []).append(entry)

        cls._domain_cert_mapping = mapping
        if debug:
            print(f"[DEBUG] Refreshed domain-to-cert mapping: {mapping}")
            print(f"[DEBUG] Refreshed domain-to-cert mapping (counts): "
                  f"{ {k: len(v) for k, v in mapping.items()} }")

    @classmethod
    def ensure_cert_mapping(cls, cert_base_path, debug=False):
        if cls._domain_cert_mapping is None or cls.snapshot_changed(cert_base_path):
            cls.refresh_cert_mapping(cert_base_path, debug)

    @staticmethod
    def _score_entry(entry):
        """
        Return tuple used for sorting newest-first:
        (not_before or -inf, mtime)
        """
        nb = entry.get('not_before')
        mtime = entry.get('mtime', 0.0)
        return (nb if nb is not None else -1, mtime)

    @classmethod
    def find_cert_for_domain(cls, domain, cert_base_path, debug=False):
        cls.ensure_cert_mapping(cert_base_path, debug)

        exact_match = None
        wildcard_match = None
        candidates_exact = []
        candidates_wild = []

        for san, folder in cls._domain_cert_mapping.items():
        for san, entries in cls._domain_cert_mapping.items():
            if san == domain:
                exact_match = folder
                break
            if san.startswith('*.'):
                candidates_exact.extend(entries)
            elif san.startswith('*.'):
                base = san[2:]
                if domain.count('.') == base.count('.') + 1 and domain.endswith('.' + base):
                    wildcard_match = folder
                    candidates_wild.extend(entries)

        if exact_match:
            if debug:
                print(f"[DEBUG] Exact match for {domain} found in {exact_match}")
            return exact_match

        def _pick_newest(entries):
            if not entries:
                return None
            # newest by (not_before, mtime)
            best = max(entries, key=cls._score_entry)
            return best

        if wildcard_match:
            if debug:
                print(f"[DEBUG] Wildcard match for {domain} found in {wildcard_match}")
            return wildcard_match

        best_exact = _pick_newest(candidates_exact)
        best_wild = _pick_newest(candidates_wild)

        if best_exact and debug:
            print(f"[DEBUG] Best exact match for {domain}: {best_exact['folder']} "
                  f"(not_before={best_exact['not_before']}, mtime={best_exact['mtime']})")
        if best_wild and debug:
            print(f"[DEBUG] Best wildcard match for {domain}: {best_wild['folder']} "
                  f"(not_before={best_wild['not_before']}, mtime={best_wild['mtime']})")

        # Prefer exact if it exists; otherwise wildcard
        chosen = best_exact or best_wild
        if chosen:
            return chosen['folder']

        if debug:
            print(f"[DEBUG] No certificate folder found for {domain}")
        return None
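Intended call-site behavior, as a short sketch (the module path and cert directory are assumptions):

from module_utils.cert_utils import CertUtils  # assumed import path

folder = CertUtils.find_cert_for_domain("cloud.example.com", "/etc/letsencrypt/live", debug=True)
# Exact-SAN entries beat wildcard entries; among several candidates the one with
# the newest notBefore wins, with file mtime as tie-breaker.
print(folder)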

View File

@@ -24,7 +24,7 @@ class ConfigEntryNotSetError(AppConfigKeyError):
    pass


def get_app_conf(applications, application_id, config_path, strict=True, default=None):
def get_app_conf(applications, application_id, config_path, strict=True, default=None, skip_missing_app=False):
    # Path to the schema file for this application
    schema_path = os.path.join('roles', application_id, 'schema', 'main.yml')
@@ -133,6 +133,9 @@ def get_app_conf(applications, application_id, config_path, strict=True, default
    try:
        obj = applications[application_id]
    except KeyError:
        if skip_missing_app:
            # Simply return default instead of failing
            return default if default is not None else False
        raise AppConfigKeyError(
            f"Application ID '{application_id}' not found in applications dict.\n"
            f"path_trace: {path_trace}\n"

View File

@@ -3,4 +3,7 @@ collections:
- name: community.general
- name: hetzner.hcloud
yay:
- python-simpleaudio
- python-simpleaudio
- python-numpy
pacman:
- ansible

View File

@@ -153,6 +153,11 @@ roles:
description: "Core AI building blocks—model serving, OpenAI-compatible gateways, vector databases, orchestration, and chat UIs."
icon: "fas fa-brain"
invokable: true
bkp:
title: "Backup Services"
description: "Service-level backup and recovery components—handling automated data snapshots, remote backups, synchronization services, and backup orchestration across databases, files, and containers."
icon: "fas fa-database"
invokable: true
user:
title: "Users & Access"
description: "User accounts & access control"

View File

@@ -127,7 +127,7 @@
#de_BE@euro ISO-8859-15
#de_CH.UTF-8 UTF-8
#de_CH ISO-8859-1
de_DE.UTF-8 UTF-8
#de_DE.UTF-8 UTF-8
#de_DE ISO-8859-1
#de_DE@euro ISO-8859-15
#de_IT.UTF-8 UTF-8

View File

View File

@@ -0,0 +1,132 @@
#!/usr/bin/env python3
import argparse
import os
import subprocess
import time
import sys


def run_command(command, capture_output=True, check=False, shell=True):
    """Run a shell command and return its output as string."""
    try:
        result = subprocess.run(
            command,
            capture_output=capture_output,
            shell=shell,
            text=True,
            check=check
        )
        return result.stdout.strip()
    except subprocess.CalledProcessError as e:
        if capture_output:
            print(e.stdout)
            print(e.stderr)
        raise


def pull_backups(hostname: str):
    print(f"pulling backups from: {hostname}")
    errors = 0
    print("loading meta data...")
    remote_host = f"backup@{hostname}"
    print(f"host address: {remote_host}")
    remote_machine_id = run_command(f'ssh "{remote_host}" sha256sum /etc/machine-id')[:64]
    print(f"remote machine id: {remote_machine_id}")
    general_backup_machine_dir = f"/Backups/{remote_machine_id}/"
    print(f"backup dir: {general_backup_machine_dir}")

    try:
        remote_backup_types = run_command(
            f'ssh "{remote_host}" "find {general_backup_machine_dir} -maxdepth 1 -type d -execdir basename {{}} \\;"',
            check=True  # raise on failure so the except below exits (mirrors the old '|| exit 1')
        ).splitlines()
        print(f"backup types: {' '.join(remote_backup_types)}")
    except subprocess.CalledProcessError:
        sys.exit(1)

    for backup_type in remote_backup_types:
        if backup_type == remote_machine_id:
            continue
        print(f"backup type: {backup_type}")
        general_backup_type_dir = f"{general_backup_machine_dir}{backup_type}/"
        general_versions_dir = general_backup_type_dir

        # local previous version
        try:
            local_previous_version_dir = run_command(f"ls -d {general_versions_dir}* | tail -1")
        except subprocess.CalledProcessError:
            local_previous_version_dir = ""
        print(f"last local backup: {local_previous_version_dir}")

        # remote versions
        remote_backup_versions = run_command(
            f'ssh "{remote_host}" "ls -d /Backups/{remote_machine_id}/backup-docker-to-local/*"'
        ).splitlines()
        print(f"remote backup versions: {' '.join(remote_backup_versions)}")
        remote_last_backup_dir = remote_backup_versions[-1] if remote_backup_versions else ""
        print(f"last remote backup: {remote_last_backup_dir}")

        remote_source_path = f"{remote_host}:{remote_last_backup_dir}/"
        print(f"source path: {remote_source_path}")
        local_backup_destination_path = remote_last_backup_dir
        print(f"backup destination: {local_backup_destination_path}")

        print("creating local backup destination folder...")
        os.makedirs(local_backup_destination_path, exist_ok=True)

        rsync_command = (
            f'rsync -abP --delete --delete-excluded --rsync-path="sudo rsync" '
            f'--link-dest="{local_previous_version_dir}" "{remote_source_path}" "{local_backup_destination_path}"'
        )
        print("starting backup...")
        print(f"executing: {rsync_command}")

        retry_count = 0
        max_retries = 12
        retry_delay = 300  # 5 minutes
        last_retry_start = 0
        max_retry_duration = 43200  # 12 hours
        rsync_exit_code = 1

        while retry_count < max_retries:
            print(f"Retry attempt: {retry_count + 1}")
            if retry_count > 0:
                current_time = int(time.time())
                last_retry_duration = current_time - last_retry_start
                if last_retry_duration >= max_retry_duration:
                    print("Last retry took more than 12 hours, increasing max retries to 12.")
                    max_retries = 12
            last_retry_start = int(time.time())
            rsync_exit_code = os.system(rsync_command)
            if rsync_exit_code == 0:
                break
            retry_count += 1
            time.sleep(retry_delay)

        if rsync_exit_code != 0:
            print(f"Error: rsync failed after {max_retries} attempts")
            errors += 1

    sys.exit(errors)


def main():
    parser = argparse.ArgumentParser(
        description="Pull backups from a remote backup host via rsync."
    )
    parser.add_argument(
        "hostname",
        help="Hostname from which backup should be pulled"
    )
    args = parser.parse_args()
    pull_backups(args.hostname)


if __name__ == "__main__":
    main()

View File

@@ -1,85 +0,0 @@
#!/bin/bash
# @param $1 hostname from which backup should be pulled

echo "pulling backups from: $1" &&

# error counter
errors=0 &&

echo "loading meta data..." &&
remote_host="backup@$1" &&
echo "host address: $remote_host" &&
remote_machine_id="$( (ssh "$remote_host" sha256sum /etc/machine-id) | head -c 64 )" &&
echo "remote machine id: $remote_machine_id" &&
general_backup_machine_dir="/Backups/$remote_machine_id/" &&
echo "backup dir: $general_backup_machine_dir" &&
remote_backup_types="$(ssh "$remote_host" "find $general_backup_machine_dir -maxdepth 1 -type d -execdir basename {} ;")" &&
echo "backup types: $remote_backup_types" || exit 1

for backup_type in $remote_backup_types; do
  if [ "$backup_type" != "$remote_machine_id" ]; then
    echo "backup type: $backup_type" &&
    general_backup_type_dir="$general_backup_machine_dir""$backup_type/" &&
    general_versions_dir="$general_backup_type_dir" &&
    local_previous_version_dir="$(ls -d $general_versions_dir* | tail -1)" &&
    echo "last local backup: $local_previous_version_dir" &&
    remote_backup_versions="$(ssh "$remote_host" ls -d "$general_backup_type_dir"\*)" &&
    echo "remote backup versions: $remote_backup_versions" &&
    remote_last_backup_dir=$(echo "$remote_backup_versions" | tail -1) &&
    echo "last remote backup: $remote_last_backup_dir" &&
    remote_source_path="$remote_host:$remote_last_backup_dir/" &&
    echo "source path: $remote_source_path" &&
    local_backup_destination_path=$remote_last_backup_dir &&
    echo "backup destination: $local_backup_destination_path" &&
    echo "creating local backup destination folder..." &&
    mkdir -vp "$local_backup_destination_path" &&
    echo "starting backup..."
    rsync_command='rsync -abP --delete --delete-excluded --rsync-path="sudo rsync" --link-dest="'$local_previous_version_dir'" "'$remote_source_path'" "'$local_backup_destination_path'"'
    echo "executing: $rsync_command"
    retry_count=0
    max_retries=12
    retry_delay=300 # Retry delay in seconds (5 minutes)
    last_retry_start=0
    max_retry_duration=43200 # Maximum duration for a single retry attempt (12 hours)
    while [[ $retry_count -lt $max_retries ]]; do
      echo "Retry attempt: $((retry_count + 1))"
      if [[ $retry_count -gt 0 ]]; then
        current_time=$(date +%s)
        last_retry_duration=$((current_time - last_retry_start))
        if [[ $last_retry_duration -ge $max_retry_duration ]]; then
          echo "Last retry took more than 12 hours, increasing max retries to 12."
          max_retries=12
        fi
      fi
      last_retry_start=$(date +%s)
      eval "$rsync_command"
      rsync_exit_code=$?
      if [[ $rsync_exit_code -eq 0 ]]; then
        break
      fi
      retry_count=$((retry_count + 1))
      sleep $retry_delay
    done
    if [[ $rsync_exit_code -ne 0 ]]; then
      echo "Error: rsync failed after $max_retries attempts"
      ((errors += 1))
    fi
  fi
done

exit $errors;

View File

@@ -10,15 +10,15 @@
- include_tasks: utils/run_once.yml
when: run_once_svc_bkp_rmt_2_loc is not defined
- name: "create {{ DOCKER_BACKUP_REMOTE_2_LOCAL_DIR }}"
- name: "Create Directory '{{ DOCKER_BACKUP_REMOTE_2_LOCAL_DIR }}'"
file:
path: "{{ DOCKER_BACKUP_REMOTE_2_LOCAL_DIR }}"
state: directory
mode: "0755"
- name: create svc-bkp-rmt-2-loc.sh
- name: "Deploy '{{ DOCKER_BACKUP_REMOTE_2_LOCAL_SCRIPT }}'"
copy:
src: svc-bkp-rmt-2-loc.sh
src: "{{ DOCKER_BACKUP_REMOTE_2_LOCAL_FILE }}"
dest: "{{ DOCKER_BACKUP_REMOTE_2_LOCAL_SCRIPT }}"
mode: "0755"

View File

@@ -3,6 +3,6 @@
hosts="{{ DOCKER_BACKUP_REMOTE_2_LOCAL_BACKUP_PROVIDERS | join(' ') }}";
errors=0
for host in $hosts; do
bash {{ DOCKER_BACKUP_REMOTE_2_LOCAL_SCRIPT }} $host || ((errors+=1));
python {{ DOCKER_BACKUP_REMOTE_2_LOCAL_SCRIPT }} $host || ((errors+=1));
done;
exit $errors;

View File

@@ -1,5 +1,9 @@
# General
application_id: svc-bkp-rmt-2-loc
system_service_id: "{{ application_id }}"
system_service_id: "{{ application_id }}"
# Role Specific
DOCKER_BACKUP_REMOTE_2_LOCAL_DIR: '{{ PATH_ADMINISTRATOR_SCRIPTS }}{{ application_id }}/'
DOCKER_BACKUP_REMOTE_2_LOCAL_SCRIPT: "{{ DOCKER_BACKUP_REMOTE_2_LOCAL_DIR }}svc-bkp-rmt-2-loc.sh"
DOCKER_BACKUP_REMOTE_2_LOCAL_FILE: 'pull-specific-host.py'
DOCKER_BACKUP_REMOTE_2_LOCAL_SCRIPT: "{{ [ DOCKER_BACKUP_REMOTE_2_LOCAL_DIR , DOCKER_BACKUP_REMOTE_2_LOCAL_FILE ] | path_join }}"
DOCKER_BACKUP_REMOTE_2_LOCAL_BACKUP_PROVIDERS: "{{ applications | get_app_conf(application_id, 'backup_providers') }}"

View File

@@ -8,6 +8,11 @@ docker:
image: "bitnamilegacy/openldap"
name: "openldap"
version: "latest"
cpus: 1.25
# Optimized for up to 5k users
mem_reservation: 1g
mem_limit: 1.5g
pids_limit: 1024
network: "openldap"
volumes:
data: "openldap_data"

View File

@@ -16,5 +16,12 @@
retries: 30
networks:
- default
{{ lookup('template', 'roles/docker-container/templates/resource.yml.j2',vars={'service_name':'redis'}) | indent(4) }}
{% macro include_resource_for(svc, indent=4) -%}
{% set service_name = svc -%}
{%- set _snippet -%}
{% include 'roles/docker-container/templates/resource.yml.j2' %}
{%- endset -%}
{{ _snippet | indent(indent, true) }}
{%- endmacro %}
{{ include_resource_for('redis') }}
{{ "\n" }}

View File

@@ -13,7 +13,7 @@ get_backup_types="find /Backups/$hashed_machine_id/ -maxdepth 1 -type d -execdir
# @todo This configuration is not scalable yet. If other backup services then sys-ctl-bkp-docker-2-loc are integrated, this logic needs to be optimized
get_version_directories="ls -d /Backups/$hashed_machine_id/sys-ctl-bkp-docker-2-loc/*"
get_version_directories="ls -d /Backups/$hashed_machine_id/backup-docker-to-local/*"
last_version_directory="$($get_version_directories | tail -1)"
rsync_command="sudo rsync --server --sender -blogDtpre.iLsfxCIvu . $last_version_directory/"

View File

@@ -3,30 +3,6 @@
name: backup
create_home: yes
- name: create .ssh directory
file:
path: /home/backup/.ssh
state: directory
owner: backup
group: backup
mode: '0700'
- name: create /home/backup/.ssh/authorized_keys
template:
src: "authorized_keys.j2"
dest: /home/backup/.ssh/authorized_keys
owner: backup
group: backup
mode: '0644'
- name: create /home/backup/ssh-wrapper.sh
copy:
src: "ssh-wrapper.sh"
dest: /home/backup/ssh-wrapper.sh
owner: backup
group: backup
mode: '0700'
- name: grant backup sudo rights
copy:
src: "backup"
@@ -35,3 +11,9 @@
owner: root
group: root
notify: sshd restart
- include_tasks: 02_permissions_ssh.yml
- include_tasks: 03_permissions_folders.yml
- include_tasks: utils/run_once.yml

View File

@@ -0,0 +1,23 @@
- name: create .ssh directory
  file:
    path: /home/backup/.ssh
    state: directory
    owner: backup
    group: backup
    mode: '0700'

- name: create /home/backup/.ssh/authorized_keys
  template:
    src: "authorized_keys.j2"
    dest: /home/backup/.ssh/authorized_keys
    owner: backup
    group: backup
    mode: '0644'

- name: create /home/backup/ssh-wrapper.sh
  copy:
    src: "ssh-wrapper.sh"
    dest: /home/backup/ssh-wrapper.sh
    owner: backup
    group: backup
    mode: '0700'

View File

@@ -0,0 +1,66 @@
# Ensure the backups root exists and is owned by backup
- name: Ensure backups root exists and owned by backup
  file:
    path: "{{ BACKUPS_FOLDER_PATH }}"
    state: directory
    owner: backup
    group: backup
    mode: "0700"

# Explicit ACL so 'backup' has rwx, others none
- name: Grant ACL rwx on backups root to backup user
  ansible.posix.acl:
    path: "{{ BACKUPS_FOLDER_PATH }}"
    entity: backup
    etype: user
    permissions: rwx
    state: present

# Set default ACLs so new entries inherit rwx for backup and nothing for others
- name: Set default ACL (inherit) for backup user under backups root
  ansible.posix.acl:
    path: "{{ BACKUPS_FOLDER_PATH }}"
    entity: backup
    etype: user
    permissions: rwx
    default: true
    state: present

# Remove default ACLs for group/others (defensive hardening)
# Default ACLs so new entries inherit only backup's rwx
- name: Default ACL for backup user (inherit)
  ansible.posix.acl:
    path: "{{ BACKUPS_FOLDER_PATH }}"
    etype: user
    entity: backup
    permissions: rwx
    default: true
    state: present

# Explicitly set default group/other to no permissions (instead of absent)
- name: Default ACL for group -> none
  ansible.posix.acl:
    path: "{{ BACKUPS_FOLDER_PATH }}"
    etype: group
    permissions: '---'
    default: true
    state: present

- name: Default ACL for other -> none
  ansible.posix.acl:
    path: "{{ BACKUPS_FOLDER_PATH }}"
    etype: other
    permissions: '---'
    default: true
    state: present

- name: Fix ownership level 0..2 directories to backup:backup
  ansible.builtin.shell: >
    find "{{ BACKUPS_FOLDER_PATH }}" -mindepth 0 -maxdepth 2 -xdev -type d -exec chown backup:backup {} +
  changed_when: false

- name: Fix perms level 0..2 directories to 0700
  ansible.builtin.shell: >
    find "{{ BACKUPS_FOLDER_PATH }}" -mindepth 0 -maxdepth 2 -xdev -type d -exec chmod 700 {} +
  changed_when: false

View File

@@ -1,4 +1,2 @@
- block:
- include_tasks: 01_core.yml
- include_tasks: utils/run_once.yml
- include_tasks: 01_core.yml
when: run_once_sys_bkp_provider_user is not defined

View File

@@ -1,8 +1,7 @@
- name: Include dependencies
include_role:
name: '{{ item }}'
loop:
- sys-svc-msmtp
name: "sys-svc-msmtp"
when: run_once_sys_svc_msmtp is not defined or run_once_sys_svc_msmtp is false
- include_role:
name: sys-service

View File

@@ -1,9 +1,6 @@
- block:
- include_tasks: 01_core.yml
when:
- run_once_sys_ctl_bkp_docker_2_loc is not defined
- include_tasks: 01_core.yml
when: run_once_sys_ctl_bkp_docker_2_loc is not defined
- name: "include 04_seed-database-to-backup.yml"
include_tasks: 04_seed-database-to-backup.yml
when:
- BKP_DOCKER_2_LOC_DB_ENABLED | bool
when: BKP_DOCKER_2_LOC_DB_ENABLED | bool

View File

@@ -17,7 +17,7 @@
name: sys-service
vars:
system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
system_service_tpl_exec_start: "{{ system_service_script_exec }} --backups-folder-path {{ BACKUPS_FOLDER_PATH }} --maximum-backup-size-percent {{SIZE_PERCENT_MAXIMUM_BACKUP}}"
system_service_tpl_exec_start: "{{ system_service_script_exec }} --backups-folder-path {{ BACKUPS_FOLDER_PATH }} --maximum-backup-size-percent {{ SIZE_PERCENT_MAXIMUM_BACKUP }}"
system_service_tpl_exec_start_pre: '/usr/bin/python {{ PATH_SYSTEM_LOCK_SCRIPT }} {{ SYS_SERVICE_GROUP_MANIPULATION | join(" ") }} --ignore {{ SYS_SERVICE_GROUP_CLEANUP | join(" ") }} --timeout "{{ SYS_TIMEOUT_BACKUP_SERVICES }}"'
system_service_copy_files: true
system_service_force_linear_sync: false

View File

@@ -39,6 +39,18 @@ if [ "$force_freeing" = true ]; then
docker exec -u www-data $nextcloud_application_container /var/www/html/occ versions:cleanup || exit 6
fi
# Mastodon cleanup (remote media cache)
mastodon_application_container="{{ applications | get_app_conf('web-app-mastodon', 'docker.services.mastodon.name') }}"
mastodon_cleanup_days="1"
if [ -n "$mastodon_application_container" ] && docker ps -a --format '{% raw %}{{.Names}}{% endraw %}' | grep -qw "$mastodon_application_container"; then
echo "Cleaning up Mastodon media cache (older than ${mastodon_cleanup_days} days)" &&
docker exec -u root "$mastodon_application_container" bash -lc "bin/tootctl media remove --days=${mastodon_cleanup_days}" || exit 8
# Optional: additionally remove local thumbnail/cache files older than X days
# Warning: these will be regenerated when accessed, which may cause extra CPU/I/O load
# docker exec -u root "$mastodon_application_container" bash -lc "find /mastodon/public/system/cache -type f -mtime +${mastodon_cleanup_days} -delete" || exit 9
fi
fi
if command -v pacman >/dev/null 2>&1 ; then

View File

@@ -0,0 +1,16 @@
- name: "Load CDN for '{{ domain }}'"
include_role:
name: web-svc-cdn
public: false
when:
- application_id != 'web-svc-cdn'
- run_once_web_svc_cdn is not defined
- name: Load Logout for '{{ domain }}'
include_role:
name: web-svc-logout
public: false
when:
- run_once_web_svc_logout is not defined
- application_id != 'web-svc-logout'
- inj_enabled.logout

View File

@@ -1,22 +1,41 @@
- block:
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver-core
when: run_once_sys_svc_webserver_core is not defined
- include_tasks: utils/run_once.yml
when: run_once_sys_front_inj_all is not defined
- name: Build inj_enabled
set_fact:
inj_enabled: "{{ applications | inj_enabled(application_id, SRV_WEB_INJ_COMP_FEATURES_ALL) }}"
- name: "Load CDN Service for '{{ domain }}'"
include_role:
name: sys-svc-cdn
public: true # Expose variables so that they can be used in all injection roles
- name: "Included dependent services"
include_tasks: 01_dependencies.yml
vars:
proxy_extra_configuration: ""
- name: Reinitialize 'inj_enabled' for '{{ domain }}', after modification by CDN
- name: Reinitialize 'inj_enabled' for '{{ domain }}', after loading the required webservices
set_fact:
inj_enabled: "{{ applications | inj_enabled(application_id, SRV_WEB_INJ_COMP_FEATURES_ALL) }}"
inj_head_features: "{{ SRV_WEB_INJ_COMP_FEATURES_ALL | inj_features('head') }}"
inj_body_features: "{{ SRV_WEB_INJ_COMP_FEATURES_ALL | inj_features('body') }}"
- name: "Load CDN Service for '{{ domain }}'"
include_role:
name: sys-svc-cdn
public: true
- name: "Activate logout proxy for '{{ domain }}'"
include_role:
name: sys-front-inj-logout
public: true
when: inj_enabled.logout
- name: "Activate Desktop iFrame notifier for '{{ domain }}'"
include_role:
name: sys-front-inj-desktop
public: true # Vars used in templates
public: true
when: inj_enabled.desktop
- name: "Activate Corporate CSS for '{{ domain }}'"
@@ -33,17 +52,3 @@
include_role:
name: sys-front-inj-javascript
when: inj_enabled.javascript
- name: "Activate logout proxy for '{{ domain }}'"
include_role:
name: sys-front-inj-logout
public: true # Vars used in templates
when: inj_enabled.logout
- block:
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver-core
when: run_once_sys_svc_webserver_core is not defined
- include_tasks: utils/run_once.yml
when: run_once_sys_front_inj_all is not defined

View File

@@ -10,17 +10,6 @@
lua_need_request_body on;
header_filter_by_lua_block {
local ct = ngx.header.content_type or ""
if ct:lower():find("^text/html") then
ngx.ctx.is_html = true
-- IMPORTANT: body will be modified → drop Content-Length to avoid mismatches
ngx.header.content_length = nil
else
ngx.ctx.is_html = false
end
}
body_filter_by_lua_block {
-- Only process HTML responses
if not ngx.ctx.is_html then

View File

@@ -1,8 +1,3 @@
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver-core
when: run_once_sys_svc_webserver_core is not defined
- name: Generate color palette with colorscheme-generator
set_fact:
color_palette: "{{ lookup('colorscheme', CSS_BASE_COLOR, count=CSS_COUNT, shades=CSS_SHADES) }}"
@@ -19,3 +14,5 @@
group: "{{ NGINX.USER }}"
mode: '0644'
loop: "{{ CSS_FILES }}"
- include_tasks: utils/run_once.yml

View File

@@ -1,6 +1,4 @@
- block:
- include_tasks: 01_core.yml
- include_tasks: utils/run_once.yml
- include_tasks: 01_core.yml
when: run_once_sys_front_inj_css is not defined
- name: "Resolve optional app style.css source for '{{ application_id }}'"

View File

@@ -3,6 +3,6 @@
{% for css_file in ['default.css','bootstrap.css'] %}
<link rel="stylesheet" href="{{ [ cdn_urls.shared.css, css_file, lookup('local_mtime_qs', [__css_tpl_dir, css_file ~ '.j2'] | path_join)] | url_join }}">
{% endfor %}
{% if app_style_present | bool %}
{% if app_style_present | default(false) | bool %}
<link rel="stylesheet" href="{{ [ cdn_urls.role.release.css, 'style.css', lookup('local_mtime_qs', app_style_src)] | url_join }}">
{% endif %}

View File

@@ -1,8 +1,4 @@
- block:
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver-core
when: run_once_sys_svc_webserver_core is not defined
- include_tasks: 01_deploy.yml
- include_tasks: utils/run_once.yml
when: run_once_sys_front_inj_desktop is not defined

View File

@@ -1,11 +1,4 @@
- block:
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver-core
when: run_once_sys_svc_webserver_core is not defined
- include_tasks: utils/run_once.yml
when: run_once_sys_front_inj_javascript is not defined
# run_once_sys_front_inj_javascript: deactivated
- name: "Load JavaScript code for '{{ application_id }}'"
set_fact:

View File

@@ -1,8 +1,6 @@
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver-core
when:
- run_once_sys_svc_webserver_core is not defined
- name: "deploy the logout.js"
include_tasks: "02_deploy.yml"
include_tasks: "02_deploy.yml"
- set_fact:
run_once_sys_front_inj_logout: true
changed_when: false

View File

@@ -1,10 +1,10 @@
- name: Deploy logout.js
template:
src: logout.js.j2
dest: "{{ INJ_LOGOUT_JS_DESTINATION }}"
owner: "{{ NGINX.USER }}"
group: "{{ NGINX.USER }}"
mode: '0644'
copy:
src: logout.js
dest: "{{ INJ_LOGOUT_JS_DESTINATION }}"
owner: "{{ NGINX.USER }}"
group: "{{ NGINX.USER }}"
mode: '0644'
- name: Get stat for logout.js
stat:


@@ -1,16 +1,16 @@
- block:
- include_tasks: 01_core.yml
- set_fact:
run_once_sys_front_inj_logout: true
- name: "Load base for '{{ application_id }}'"
include_tasks: 01_core.yml
when: run_once_sys_front_inj_logout is not defined
- name: "Load logout code for '{{ application_id }}'"
set_fact:
logout_code: "{{ lookup('template', 'logout_one_liner.js.j2') }}"
changed_when: false
- name: "Collapse logout code into one-liner for '{{ application_id }}'"
set_fact:
logout_code_one_liner: "{{ logout_code | to_one_liner }}"
changed_when: false
- name: "Append logout CSP hash for '{{ application_id }}'"
set_fact:


@@ -1 +1 @@
<script src="{{ cdn_urls.shared.js }}/{{ INJ_LOGOUT_JS_FILE_NAME }}{{ lookup('local_mtime_qs', [playbook_dir, 'roles', 'sys-front-inj-logout', 'templates', INJ_LOGOUT_JS_FILE_NAME ~ '.j2'] | path_join) }}"></script>
<script src="{{ cdn_urls.shared.js }}/{{ INJ_LOGOUT_JS_FILE_NAME }}{{ lookup('local_mtime_qs', [playbook_dir, 'roles', 'sys-front-inj-logout', 'files', INJ_LOGOUT_JS_FILE_NAME] | path_join) }}"></script>


@@ -1,10 +1,4 @@
- block:
- name: Include dependency 'sys-svc-webserver-core'
include_role:
name: sys-svc-webserver-core
when: run_once_sys_svc_webserver_core is not defined
- include_tasks: utils/run_once.yml
when: run_once_sys_front_inj_matomo is not defined
# run_once_sys_front_inj_matomo: deactivated
- name: "Relevant variables for role: {{ role_path | basename }}"
debug:


@@ -1,21 +0,0 @@
- name: "Load CDN for '{{ domain }}'"
include_role:
name: web-svc-cdn
public: false
when:
- application_id != 'web-svc-cdn'
- run_once_web_svc_cdn is not defined
# ------------------------------------------------------------------
# Only-once creations (shared root and vendor)
# ------------------------------------------------------------------
- name: Ensure shared root and vendor exist (run once)
ansible.builtin.file:
path: "{{ item }}"
state: directory
owner: "{{ NGINX.USER }}"
group: "{{ NGINX.USER }}"
mode: "0755"
loop: "{{ CDN_DIRS_GLOBAL }}"
- include_tasks: utils/run_once.yml


@@ -1,6 +1,14 @@
---
- block:
- include_tasks: 01_core.yml
- name: Ensure shared root and vendor exist (run once)
ansible.builtin.file:
path: "{{ item }}"
state: directory
owner: "{{ NGINX.USER }}"
group: "{{ NGINX.USER }}"
mode: "0755"
loop: "{{ CDN_DIRS_GLOBAL }}"
- include_tasks: utils/run_once.yml
when:
- run_once_sys_svc_cdn is not defined


@@ -1,3 +1,3 @@
ssl_certificate {{ [ LETSENCRYPT_LIVE_PATH, ssl_cert_folder, 'fullchain.pem'] | path_join }};
ssl_certificate_key {{ [ LETSENCRYPT_LIVE_PATH, ssl_cert_folder, 'privkey.pem' ] | path_join }};
ssl_trusted_certificate {{ [ LETSENCRYPT_LIVE_PATH, ssl_cert_folder, 'chain.pem' ] | path_join }};
ssl_certificate {{ [ LETSENCRYPT_LIVE_PATH | mandatory, ssl_cert_folder | mandatory, 'fullchain.pem'] | path_join }};
ssl_certificate_key {{ [ LETSENCRYPT_LIVE_PATH | mandatory, ssl_cert_folder | mandatory, 'privkey.pem' ] | path_join }};
ssl_trusted_certificate {{ [ LETSENCRYPT_LIVE_PATH | mandatory, ssl_cert_folder | mandatory, 'chain.pem' ] | path_join }};

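Wrapping each path component in Jinja's built-in mandatory filter makes the render abort with an explicit error when a variable is undefined, instead of silently writing a path like /etc/letsencrypt/live//fullchain.pem into the vhost. An illustration:

    - name: "Fail fast on undefined certificate variables (illustration)"
      debug:
        msg: "{{ [ LETSENCRYPT_LIVE_PATH | mandatory, ssl_cert_folder | mandatory, 'fullchain.pem' ] | path_join }}"
      # If ssl_cert_folder is undefined, this task fails with
      # "Mandatory variable ... not defined." rather than rendering a broken path.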

@@ -14,4 +14,7 @@
- include_role:
name: sys-ctl-hlth-msmtp
when: run_once_sys_ctl_hlth_msmtp is not defined
- set_fact:
run_once_sys_svc_msmtp: true


@@ -1,5 +1,6 @@
- block:
- include_tasks: 01_core.yml
- set_fact:
run_once_sys_svc_msmtp: true
when: run_once_sys_svc_msmtp is not defined
- name: "Load MSMTP Core Once"
include_tasks: 01_core.yml
when:
- run_once_sys_svc_msmtp is not defined or run_once_sys_svc_msmtp is false
# Only execute when the no-reply user's mailu_token is defined
- users['no-reply'].mailu_token is defined


@@ -1,2 +1,33 @@
add_header Content-Security-Policy "{{ applications | build_csp_header(application_id, domains) }}" always;
proxy_hide_header Content-Security-Policy; # Todo: Make this optional
# ===== Content Security Policy: only for documents and workers (no locations needed) =====
# 1) Define your CSP once (Jinja: escape double quotes to be safe)
set $csp "{{ applications | build_csp_header(application_id, domains) | replace('\"','\\\"') }}";
# 2) Send CSP ONLY for document responses; also for workers via Sec-Fetch-Dest
header_filter_by_lua_block {
local ct = ngx.header.content_type or ngx.header["Content-Type"] or ""
local dest = ngx.var.http_sec_fetch_dest or ""
local lct = ct:lower()
local is_html = lct:find("^text/html") or lct:find("^application/xhtml%+xml") -- '%' escapes '+', a Lua pattern quantifier
local is_worker = (dest == "worker") or (dest == "serviceworker")
if is_html or is_worker then
ngx.header["Content-Security-Policy"] = ngx.var.csp
else
ngx.header["Content-Security-Policy"] = nil
ngx.header["Content-Security-Policy-Report-Only"] = nil
end
-- If you'll modify the body later, drop Content-Length on HTML
if is_html then
ngx.ctx.is_html = true
ngx.header.content_length = nil
else
ngx.ctx.is_html = false
end
}
# 3) Prevent upstream/app CSP (duplicates)
proxy_hide_header Content-Security-Policy;
proxy_hide_header Content-Security-Policy-Report-Only;

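Because `set $csp` and the Lua header filter live at vhost scope, a change like this is normally rolled out by re-templating the snippet and validating the merged nginx configuration before reload. A sketch; the template and handler names (csp.conf.j2, restart openresty) are assumptions, not part of this diff:

    - name: "Deploy CSP header-filter snippet (sketch)"
      template:
        src: csp.conf.j2
        dest: /etc/nginx/conf.d/csp.conf
        mode: '0644'
      notify: restart openresty

    - name: "Validate the merged nginx configuration"
      command: nginx -t
      changed_when: false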

@@ -68,7 +68,12 @@ ChallengeResponseAuthentication no
#KerberosGetAFSToken no
# GSSAPI options
#GSSAPIAuthentication no
# Disable GSSAPI (Kerberos) authentication to avoid unnecessary negotiation delays.
# This setting is useful for non-domain environments where GSSAPI is not used,
# improving SSH connection startup time and reducing overhead.
# See: https://chatgpt.com/share/68efc179-1a10-800f-9656-1e8731b40546
GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
# Set this to 'yes' to enable PAM authentication, account processing,
@@ -97,7 +102,13 @@ PrintMotd no # pam does that
#Compression delayed
#ClientAliveInterval 0
#ClientAliveCountMax 3
#UseDNS no
# Disable reverse DNS lookups to speed up SSH logins.
# When UseDNS is enabled, sshd performs a reverse DNS lookup for each connecting client,
# which can significantly delay authentication if DNS resolution is slow or misconfigured.
# See: https://chatgpt.com/share/68efc179-1a10-800f-9656-1e8731b40546
UseDNS no
#PidFile /run/sshd.pid
#MaxStartups 10:30:100
#PermitTunnel no

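The same two sshd options can be enforced idempotently with lineinfile plus sshd's built-in syntax check, so a typo can never lock out remote access. A sketch, assuming a 'restart sshd' handler exists:

    - name: "Disable GSSAPI auth and reverse DNS lookups (sketch)"
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?\s*{{ item.key }}\b'
        line: "{{ item.key }} {{ item.value }}"
        validate: '/usr/sbin/sshd -t -f %s'
      loop:
        - { key: 'GSSAPIAuthentication', value: 'no' }
        - { key: 'UseDNS', value: 'no' }
      notify: restart sshd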

@@ -18,10 +18,10 @@ server:
flags:
script-src-elem:
unsafe-inline: true
script-src:
script-src-attr:
unsafe-inline: true
unsafe-eval: true
style-src:
style-src-attr:
unsafe-inline: true
whitelist:
font-src:

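Background for the recurring flag renames in these config files: CSP3 splits script-src and style-src into -elem variants (elements such as <script> and <link>) and -attr variants (inline attributes such as onclick= and style=); user agents that do not implement the split directives fall back to the base directive. A hedged sketch of how the flag keys line up with directives, with no claim about build_csp_header's exact output:

    server:
      csp:
        flags:
          script-src-elem:   # governs <script src> and <script> blocks
            unsafe-inline: true
          script-src-attr:   # governs inline event handlers, e.g. onclick=
            unsafe-inline: true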

@@ -19,7 +19,7 @@ docker:
name: "baserow"
cpus: 1.0
mem_reservation: 0.5g
mem_limit: 1g
mem_limit: 2g
pids_limit: 512
volumes:
data: "baserow_data"
@@ -37,5 +37,5 @@ server:
flags:
script-src-elem:
unsafe-inline: true
style-src:
style-src-attr:
unsafe-inline: true


@@ -13,7 +13,7 @@ server:
flags:
script-src-elem:
unsafe-inline: true
style-src:
style-src-attr:
unsafe-inline: true
domains:
canonical:


@@ -14,13 +14,20 @@
name: sys-stk-full-stateless
vars:
docker_compose_flush_handlers: false
- name: "include 04_seed-database-to-backup.yml"
include_tasks: "{{ [ playbook_dir, 'roles/sys-ctl-bkp-docker-2-loc/tasks/04_seed-database-to-backup.yml' ] | path_join }}"
- name: "Unset 'proxy_extra_configuration'"
set_fact:
proxy_extra_configuration: null
- name: "Include Seed routines for '{{ application_id }}' database backup"
include_tasks: "{{ [ playbook_dir, 'roles/sys-ctl-bkp-docker-2-loc/tasks/04_seed-database-to-backup.yml' ] | path_join }}"
vars:
database_type: "postgres"
database_instance: "{{ entity_name }}"
database_password: "{{ applications | get_app_conf(application_id, 'credentials.postgresql_secret') }}"
database_username: "postgres"
database_name: "" # Multiple databases
- name: configure websocket_upgrade.conf
copy:
src: "websocket_upgrade.conf"


@@ -2,13 +2,6 @@
application_id: "web-app-bigbluebutton"
entity_name: "{{ application_id | get_entity_name }}"
# Database configuration
database_type: "postgres"
database_instance: "{{ application_id | get_entity_name }}"
database_password: "{{ applications | get_app_conf(application_id, 'credentials.postgresql_secret') }}"
database_username: "postgres"
database_name: "" # Multiple databases
# Proxy
domain: "{{ domains | get_domain(application_id) }}"
http_port: "{{ ports.localhost.http[application_id] }}"


@@ -27,7 +27,7 @@ server:
flags:
script-src-elem:
unsafe-inline: true
script-src:
script-src-attr:
unsafe-inline: true
domains:
canonical:


@@ -29,7 +29,7 @@ server:
flags:
script-src-elem:
unsafe-inline: true
script-src:
script-src-attr:
unsafe-inline: true
domains:
canonical:


@@ -15,6 +15,8 @@ server:
- https://code.jquery.com/
style-src-elem:
- https://cdn.jsdelivr.net
- https://kit.fontawesome.com
- https://code.jquery.com/
font-src:
- https://ka-f.fontawesome.com
- https://cdn.jsdelivr.net
@@ -25,7 +27,7 @@ server:
frame-src:
- "{{ WEB_PROTOCOL }}://*.{{ PRIMARY_DOMAIN }}"
flags:
script-src:
script-src-attr:
unsafe-inline: true
domains:
canonical:


@@ -17,6 +17,8 @@
- name: "load docker, proxy for '{{ application_id }}'"
include_role:
name: sys-stk-full-stateless
vars:
docker_compose_flush_handlers: false
- name: "Check if host-specific config.yaml exists in {{ DESKTOP_CONFIG_INV_PATH }}"
stat:
@@ -57,8 +59,16 @@
notify: docker compose up
when: not config_file.stat.exists
- name: add docker-compose.yml
template:
src: docker-compose.yml.j2
dest: "{{ docker_compose.directories.instance }}docker-compose.yml"
notify: docker compose up
- name: "Flush docker compose handlers"
meta: flush_handlers
- name: Wait for Desktop HTTP endpoint (required so all logos can be downloaded during initialization)
uri:
url: "http://127.0.0.1:{{ http_port }}/"
status_code: 200
register: desktop_http
retries: 60
delay: 5
until: desktop_http.status == 200
- include_tasks: utils/run_once.yml


@@ -1,5 +1,3 @@
---
- block:
- include_tasks: 01_core.yml
- include_tasks: utils/run_once.yml
- include_tasks: 01_core.yml
when: run_once_web_app_desktop is not defined


@@ -1,5 +1,6 @@
# General
application_id: "web-app-desktop"
http_port: "{{ ports.localhost.http[application_id] }}"
## Webserver
proxy_extra_configuration: "{{ lookup('template', 'nginx/sso.html.conf.j2') }}"


@@ -10,7 +10,7 @@ features:
server:
csp:
flags:
style-src:
style-src-attr:
unsafe-inline: true
script-src-elem:
unsafe-inline: true


@@ -6,4 +6,6 @@
include_tasks: 03_docker.yml
- name: "Setup '{{ application_id }}' network"
include_tasks: 04_network.yml
- include_tasks: utils/run_once.yml


@@ -1,6 +1,4 @@
---
- name: "Setup {{ application_id }}"
include_tasks: 01_core.yml
when: run_once_web_app_discourse is not defined
block:
- include_tasks: 01_core.yml
- include_tasks: utils/run_once.yml


@@ -12,9 +12,7 @@ server:
script-src-elem:
unsafe-inline: true
unsafe-eval: true
style-src:
unsafe-inline: true
script-src:
script-src-attr:
unsafe-eval: true
whitelist:
connect-src:


@@ -18,10 +18,10 @@ server:
flags:
script-src-elem:
unsafe-inline: true
script-src:
script-src-attr:
unsafe-inline: true
unsafe-eval: true
style-src:
style-src-attr:
unsafe-inline: true
oauth2_proxy:
application: "application"


@@ -7,10 +7,10 @@ docker_compose_flush_handlers: false
# Friendica
friendica_container: "friendica"
friendica_no_validation: "{{ applications | get_app_conf(application_id, 'features.oidc', True) }}" # Email validation is not necessary if OIDC is active
friendica_no_validation: "{{ applications | get_app_conf(application_id, 'features.oidc') }}" # Email validation is not necessary if OIDC is active
friendica_application_base: "/var/www/html"
friendica_docker_ldap_config: "{{ friendica_application_base }}/config/ldapauth.config.php"
friendica_host_ldap_config: "{{ docker_compose.directories.volumes }}ldapauth.config.php"
friendica_config_dir: "{{ friendica_application_base }}/config"
friendica_config_file: "{{ friendica_config_dir }}/local.config.php"
friendica_docker_ldap_config: "{{ [ friendica_application_base, 'config/ldapauth.config.php' ] | path_join }}"
friendica_host_ldap_config: "{{ [ docker_compose.directories.volumes, 'ldapauth.config.php' ] | path_join }}"
friendica_config_dir: "{{ [ friendica_application_base, 'config' ] | path_join }}"
friendica_config_file: "{{ [ friendica_config_dir, 'local.config.php' ] | path_join }}"
friendica_user: "www-data"

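path_join (the ansible.builtin.path_join filter) follows os.path.join semantics, which is why it is safer than string concatenation here: exactly one separator between segments, regardless of trailing slashes in the variables. One caveat worth knowing:

    - debug:
        msg: "{{ [ '/var/www/html', 'config', 'local.config.php' ] | path_join }}"
      # -> /var/www/html/config/local.config.php
    # Caveat (os.path.join semantics): an absolute later segment discards
    # everything before it: [ '/var/www/html', '/config' ] | path_join -> /config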

@@ -27,7 +27,7 @@ server:
aliases: []
csp:
flags:
style-src:
style-src-attr:
unsafe-inline: true
whitelist:
font-src:


@@ -24,7 +24,7 @@ server:
flags:
script-src-elem:
unsafe-inline: true
style-src:
style-src-attr:
unsafe-inline: true
whitelist:
font-src:
@@ -47,7 +47,17 @@ docker:
version: "latest"
backup:
no_stop_required: true
port: 3000
name: "gitea"
cpus: 1.0
mem_reservation: 1g
mem_limit: 2g
pids_limit: 1024
redis:
enabled: false
cpus: 0.25
mem_reservation: 0.2g
mem_limit: 0.3g
pids_limit: 512
volumes:
data: "gitea_data"


@@ -2,7 +2,7 @@
shell: |
docker exec -i --user {{ GITEA_USER }} {{ GITEA_CONTAINER }} \
gitea admin auth list \
| awk -v name="LDAP ({{ PRIMARY_DOMAIN }})" '$0 ~ name {print $1; exit}'
| awk -v name="LDAP ({{ SOFTWARE_NAME }})" '$0 ~ name {print $1; exit}'
args:
chdir: "{{ docker_compose.directories.instance }}"
register: ldap_source_id_raw


@@ -11,7 +11,7 @@ USER_GID=1000
# Logging configuration
GITEA__log__MODE=console
GITEA__log__LEVEL={% if MODE_DEBUG | bool %}Debug{% else %}Info{% endif %}
# Database
DB_TYPE=mysql
@@ -20,6 +20,28 @@ DB_NAME={{ database_name }}
DB_USER={{ database_username }}
DB_PASSWD={{ database_password }}
{% if GITEA_REDIS_ENABLED | bool %}
# ------------------------------------------------
# Redis Configuration for Gitea
# ------------------------------------------------
# @see https://docs.gitea.com/administration/config-cheat-sheet#cache-cache
GITEA__cache__ENABLED=true
GITEA__cache__ADAPTER=redis
# use a different Redis DB index than oauth2-proxy
GITEA__cache__HOST=redis://{{ GITEA_REDIS_ADDRESS }}/1
# Store sessions in Redis (instead of the internal DB)
GITEA__session__PROVIDER=redis
GITEA__session__PROVIDER_CONFIG=network=tcp,addr={{ GITEA_REDIS_ADDRESS }},db=2,pool_size=100,idle_timeout=180
# Use Redis for background task queues
GITEA__queue__TYPE=redis
GITEA__queue__CONN_STR=redis://{{ GITEA_REDIS_ADDRESS }}/3
{% endif %}
# SSH
SSH_PORT={{ports.public.ssh[application_id]}}
SSH_LISTEN_PORT=22
@@ -48,7 +70,7 @@ GITEA__security__INSTALL_LOCK=true # Locks the installation page
GITEA__openid__ENABLE_OPENID_SIGNUP={{ applications | get_app_conf(application_id, 'features.oidc', False) | lower }}
GITEA__openid__ENABLE_OPENID_SIGNIN={{ applications | get_app_conf(application_id, 'features.oidc', False) | lower }}
{% if applications | get_app_conf(application_id, 'features.oidc', False) or applications | get_app_conf(application_id, 'features.ldap', False) %}
{% if GITEA_IAM_ENABLED | bool %}
EXTERNAL_USER_DISABLE_FEATURES=deletion,manage_credentials,change_username,change_full_name
@@ -58,9 +80,5 @@ GITEA__ldap__SYNC_USER_ON_LOGIN=true
{% endif %}
# ------------------------------------------------
# Disable user self-registration
# ------------------------------------------------
# After this only admins can create accounts
GITEA__service__DISABLE_REGISTRATION=false
GITEA__service__DISABLE_REGISTRATION={{ GITEA_IAM_ENABLED | lower }}

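The .env entries above assume a `redis` service resolvable at redis:6379 on the compose network. A minimal sketch of the matching docker-compose.yml.j2 fragment, gated by the same flag; the image tag and healthcheck are assumptions, not taken from this diff:

    {% if GITEA_REDIS_ENABLED | bool %}
      redis:
        image: redis:alpine          # image tag is an assumption
        restart: unless-stopped
        healthcheck:
          test: ["CMD", "redis-cli", "ping"]
          interval: 10s
    {% endif %}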

@@ -22,9 +22,15 @@ GITEA_LDAP_AUTH_ARGS:
- '--email-attribute "{{ LDAP.USER.ATTRIBUTES.MAIL }}"'
- '--public-ssh-key-attribute "{{ LDAP.USER.ATTRIBUTES.SSH_PUBLIC_KEY }}"'
- '--synchronize-users'
GITEA_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.gitea.version') }}"
GITEA_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.gitea.image') }}"
GITEA_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.gitea.name') }}"
GITEA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
GITEA_USER: "git"
GITEA_CONFIG: "/data/gitea/conf/app.ini"
GITEA_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.gitea.version') }}"
GITEA_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.gitea.image') }}"
GITEA_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.gitea.name') }}"
GITEA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
GITEA_USER: "git"
GITEA_CONFIG: "/data/gitea/conf/app.ini"
## Redis
GITEA_REDIS_ENABLED: "{{ applications | get_app_conf(application_id, 'docker.services.redis.enabled') }}"
GITEA_REDIS_ADDRESS: "redis:6379"
GITEA_IAM_ENABLED: "{{ applications | get_app_conf(application_id, 'features.oidc', False) or applications | get_app_conf(application_id, 'features.ldap', False) }}"


@@ -29,7 +29,7 @@ server:
script-src-elem:
unsafe-inline: true
unsafe-eval: true
script-src:
script-src-attr:
unsafe-inline: true
unsafe-eval: true
domains:


@@ -14,7 +14,7 @@ server:
aliases: []
csp:
flags:
style-src:
style-src-attr:
unsafe-inline: true
script-src-elem:
unsafe-inline: true


@@ -19,9 +19,9 @@ server:
flags:
script-src-elem:
unsafe-inline: true
script-src:
script-src-attr:
unsafe-inline: true
style-src:
style-src-attr:
unsafe-inline: true
whitelist:
frame-src:


@@ -18,12 +18,12 @@ features:
server:
csp:
flags:
style-src:
style-src-attr:
unsafe-inline: true
script-src-elem:
unsafe-inline: true
unsafe-eval: true
script-src:
script-src-attr:
unsafe-inline: true
domains:
aliases: []


@@ -16,13 +16,13 @@ server:
aliases: []
csp:
flags:
style-src:
unsafe-inline: true
style-src-attr:
unsafe-inline: true
script-src-elem:
unsafe-inline: true
script-src:
unsafe-inline: true
unsafe-eval: true
unsafe-inline: true
script-src-attr:
unsafe-inline: true
unsafe-eval: true
rbac:
roles:
mail-bot:


@@ -41,7 +41,7 @@
meta: flush_handlers
- name: "Create Mailu accounts"
include_tasks: 02_create-user.yml
include_tasks: 02_manage_user.yml
vars:
MAILU_DOCKER_DIR: "{{ docker_compose.directories.instance }}"
mailu_api_base_url: "http://127.0.0.1:8080/api/v1"
@@ -55,7 +55,8 @@
mailu_user_key: "{{ item.key }}"
mailu_user_name: "{{ item.value.username }}"
mailu_password: "{{ item.value.password }}"
mailu_token_ip: "{{ item.value.ip | default('') }}"
mailu_token_ip: "{{ item.value.ip | default(networks.internet.ip4) }}"
mailu_token_name: "{{ SOFTWARE_NAME ~ ' Token for ' ~ item.value.username }}"
loop: "{{ users | dict2items }}"
loop_control:
loop_var: item
@@ -66,3 +67,5 @@
- name: Set Mailu DNS records
include_tasks: 05_dns-records.yml
- include_tasks: utils/run_once.yml


@@ -25,5 +25,5 @@
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
- name: "Create Mailu API Token for {{ mailu_user_name }}"
include_tasks: 03_create-token.yml
when: "{{ 'mail-bot' in item.value.roles }}"
include_tasks: 03a_manage_user_token.yml
when: "'mail-bot' in item.value.roles"

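The corrected `when:` above drops the `{{ ... }}` wrapper: `when:` is already evaluated as a raw Jinja expression, so wrapping it in templating delimiters triggers Ansible's "conditional statements should not include jinja2 templating delimiters" warning and, depending on the Ansible version, may not evaluate as intended. The safe form:

    - name: "Create token only for mail-bot users (sketch)"
      include_tasks: 03a_manage_user_token.yml
      # Bare expression; no {{ }} inside 'when':
      when: "'mail-bot' in item.value.roles"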

@@ -0,0 +1,26 @@
- name: "Fetch existing API tokens via curl inside admin container"
command: >-
{{ docker_compose_command_exec }} -T admin \
curl -s -X GET {{ mailu_api_base_url }}/token \
-H "Authorization: Bearer {{ MAILU_API_TOKEN }}"
args:
chdir: "{{ MAILU_DOCKER_DIR }}"
register: mailu_tokens_cli
changed_when: false
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
- name: "Extract existing token info for '{{ mailu_user_key }};{{ mailu_user_name }}'"
set_fact:
mailu_user_existing_token: >-
{{ (
mailu_tokens_cli.stdout
| default('[]')
| from_json
| selectattr('comment','equalto', mailu_token_name)
| list
).0 | default(None) }}
- name: "Start Mailu token procedures for undefined tokens"
when: users[mailu_user_key].mailu_token is not defined
include_tasks: 03b_create_user_token.yml


@@ -1,26 +1,3 @@
- name: "Fetch existing API tokens via curl inside admin container"
command: >-
{{ docker_compose_command_exec }} -T admin \
curl -s -X GET {{ mailu_api_base_url }}/token \
-H "Authorization: Bearer {{ MAILU_API_TOKEN }}"
args:
chdir: "{{ MAILU_DOCKER_DIR }}"
register: mailu_tokens_cli
changed_when: false
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
- name: "Extract existing token info for '{{ mailu_user_key }};{{ mailu_user_name }}'"
set_fact:
mailu_user_existing_token: >-
{{ (
mailu_tokens_cli.stdout
| default('[]')
| from_json
| selectattr('comment','equalto', mailu_user_key ~ " - ansible.infinito")
| list
).0 | default(None) }}
- name: "Delete existing API token for '{{ mailu_user_key }};{{ mailu_user_name }}' if local token missing but remote exists"
command: >-
{{ docker_compose_command_exec }} -T admin \
@@ -29,7 +6,6 @@
args:
chdir: "{{ MAILU_DOCKER_DIR }}"
when:
- users[mailu_user_key].mailu_token is not defined
- mailu_user_existing_token is not none
- mailu_user_existing_token.id is defined
register: mailu_token_delete
@@ -43,13 +19,12 @@
-H "Authorization: Bearer {{ MAILU_API_TOKEN }}"
-H "Content-Type: application/json"
-d '{{ {
"comment": mailu_user_key ~ " - ansible.infinito",
"comment": mailu_token_name,
"email": users[mailu_user_key].email,
"ip": mailu_token_ip
} | to_json }}'
args:
chdir: "{{ MAILU_DOCKER_DIR }}"
when: users[mailu_user_key].mailu_token is not defined
register: mailu_token_creation
# If curl sees 4xx/5xx it returns non-zero due to -f → fail the task.
failed_when:
@@ -57,7 +32,7 @@
# Fallback: if some gateway returns 200 but embeds an error JSON.
- mailu_token_creation.rc == 0 and
(mailu_token_creation.stdout is search('"code"\\s*:\\s*4\\d\\d') or
mailu_token_creation.stdout is search('cannot be found'))
# Only mark changed when a token is actually present in the JSON.
changed_when: mailu_token_creation.stdout is search('"token"\\s*:')
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
@@ -66,14 +41,25 @@
set_fact:
users: >-
{{ users
| combine({
mailu_user_key: (
users[mailu_user_key]
| combine({
'mailu_token': (mailu_token_creation.stdout | from_json).token
})
)
}, recursive=True)
}}
when: users[mailu_user_key].mailu_token is not defined
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
- name: "Reset MSMTP Configuration if No-Reply User Token changed"
when: users['no-reply'].username == mailu_user_name
block:
- name: "Set MSMTP run-once fact false"
set_fact:
run_once_sys_svc_msmtp: false
changed_when: false
- name: Reload MSMTP role
include_role:
name: "sys-svc-msmtp"


@@ -1,5 +1,3 @@
---
- block:
- include_tasks: 01_core.yml
- include_tasks: utils/run_once.yml
- include_tasks: 01_core.yml
when: run_once_web_app_mailu is not defined


@@ -0,0 +1,19 @@
- name: Check health status of '{{ item }}' container
shell: |
cid=$(docker compose ps -q {{ item }})
docker inspect \
--format '{{ "{{.State.Health.Status}}" }}' \
$cid
args:
chdir: "{{ docker_compose.directories.instance }}"
register: healthcheck
retries: 60
delay: 5
until: healthcheck.stdout == "healthy"
loop:
- mastodon
- streaming
- sidekiq
loop_control:
label: "{{ item }}"
changed_when: false

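The doubled braces in the --format argument above are deliberate: Jinja evaluates {{ "{{.State.Health.Status}}" }} to the literal string {{.State.Health.Status}}, which is exactly what Docker's Go templating expects. A tiny demonstration:

    - name: "Show the brace-escaping trick (illustration)"
      debug:
        msg: '{{ "{{.State.Health.Status}}" }}'
      # Prints {{.State.Health.Status}}, passed through to docker inspect untouched.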

@@ -0,0 +1,9 @@
---
# Cleanup routine for Mastodon
# Removes cached remote media older than 14 days when MODE_CLEANUP is enabled.
- name: "Cleanup Mastodon media cache older than 14 days"
command:
cmd: "docker exec -u root {{ MASTODON_CONTAINER }} bin/tootctl media remove --days=14"
register: mastodon_cleanup
changed_when: mastodon_cleanup.rc == 0
failed_when: mastodon_cleanup.rc != 0


@@ -1,6 +1,3 @@
- name: "Execute migration for '{{ application_id }}'"
command:
cmd: "docker exec {{ MASTODON_CONTAINER }} bundle exec rails db:migrate"
- name: "Include administrator routines for '{{ application_id }}'"
include_tasks: 02_administrator.yml


@@ -1,26 +1,5 @@
# Routines to create the administrator account
# @see https://chatgpt.com/share/67b9b12c-064c-800f-9354-8e42e6459764
- name: Check health status of '{{ item }}' container
shell: |
cid=$(docker compose ps -q {{ item }})
docker inspect \
--format '{{ "{{.State.Health.Status}}" }}' \
$cid
args:
chdir: "{{ docker_compose.directories.instance }}"
register: healthcheck
retries: 60
delay: 5
until: healthcheck.stdout == "healthy"
loop:
- mastodon
- streaming
- sidekiq
loop_control:
label: "{{ item }}"
changed_when: false
- name: Remove line containing "- administrator" from config/settings.yml to allow creating administrator account
command:
cmd: "docker exec -u root {{ MASTODON_CONTAINER }} sed -i '/- administrator/d' config/settings.yml"


@@ -18,5 +18,15 @@
vars:
docker_compose_flush_handlers: true
- name: "Wait for Mastodon"
include_tasks: 01_wait.yml
- name: "Cleanup Mastodon caches when MODE_CLEANUP is true"
include_tasks: 02_cleanup.yml
when: MODE_CLEANUP | bool
- name: "start setup procedures for mastodon"
include_tasks: 01_setup.yml
include_tasks: 03_setup.yml
- name: "Include administrator routines for '{{ application_id }}'"
include_tasks: 04_administrator.yml


@@ -17,12 +17,12 @@ server:
style-src-elem:
- https://fonts.googleapis.com
flags:
script-src:
script-src-attr:
unsafe-eval: true
script-src-elem:
unsafe-inline: true
unsafe-eval: true
style-src:
style-src-attr:
unsafe-inline: true
unsafe-eval: true
domains:


@@ -27,12 +27,12 @@ features:
server:
csp:
flags:
script-src:
script-src-attr:
unsafe-eval: true
script-src-elem:
unsafe-inline: true
unsafe-eval: true
style-src:
style-src-attr:
unsafe-inline: true
whitelist:
connect-src:


@@ -4,6 +4,11 @@ server:
canonical:
- "m.wiki.{{ PRIMARY_DOMAIN }}"
aliases: []
csp:
flags:
script-src-elem:
unsafe-inline: true
docker:
services:
database:


@@ -11,7 +11,7 @@ MEDIAWIKI_URL: "{{ domains | get_url(application_id, WEB_PROT
MEDIAWIKI_HTML_DIR: "/var/www/html"
MEDIAWIKI_CONFIG_DIR: "{{ docker_compose.directories.config }}"
MEDIAWIKI_VOLUMES_DIR: "{{ docker_compose.directories.volumes }}"
MEDIAWIKI_LOCAL_MOUNT_DIR: "{{ MEDIAWIKI_VOLUMES_DIR }}/mw-local"
MEDIAWIKI_LOCAL_MOUNT_DIR: "{{ [ MEDIAWIKI_VOLUMES_DIR, 'mw-local' ] | path_join }}"
MEDIAWIKI_LOCAL_PATH: "/opt/mw-local"
## Docker


@@ -29,7 +29,7 @@ server:
frame-ancestors:
- "*" # No damage if it's used somewhere on other websites, it anyhow looks like art
flags:
style-src:
style-src-attr:
unsafe-inline: true
domains:
canonical:


@@ -23,3 +23,5 @@
- name: Build data (single async task)
include_tasks: 02_build_data.yml
when: MIG_BUILD_DATA | bool
- include_tasks: utils/run_once.yml


@@ -1,7 +1,4 @@
---
- block:
- include_tasks: 01_core.yml
- include_tasks: utils/run_once.yml
name: "Setup Meta Infinite Graph"
- include_tasks: 01_core.yml
when: run_once_web_app_mig is not defined


@@ -0,0 +1,26 @@
# Mini-QR
## Description
**Mini-QR** is a lightweight, self-hosted web application for generating QR codes instantly and privately.
It provides a minimal and elegant interface to convert any text, URL, or message into a QR code — directly in your browser, without external tracking or dependencies.
## Overview
Mini-QR is designed for simplicity, privacy, and speed.
It offers an ad-free interface that works entirely within your local environment, making it ideal for individuals, organizations, and educational institutions that value data sovereignty.
The app runs as a single Docker container and requires no database or backend setup, enabling secure and frictionless QR generation anywhere.
## Features
- **Instant QR code creation** — simply type or paste your content.
- **Privacy-friendly** — all generation happens client-side; no data leaves your server.
- **Open Source** — fully auditable and modifiable for custom integrations.
- **Responsive Design** — optimized for both desktop and mobile devices.
- **Docker-ready** — can be deployed in seconds using the official image.
## Further Resources
- 🧩 Upstream project: [lyqht/mini-qr](https://github.com/lyqht/mini-qr)
- 📦 Upstream Dockerfile: [View on GitHub](https://github.com/lyqht/mini-qr/blob/main/Dockerfile)
- 🌐 Docker Image: `ghcr.io/lyqht/mini-qr:latest`


@@ -0,0 +1,2 @@
# To-dos
- Remove clarity.ms


@@ -0,0 +1,38 @@
docker:
services:
redis:
enabled: false
database:
enabled: false
features:
matomo: true
css: true
desktop: true
logout: false
server:
csp:
whitelist:
script-src-elem:
# Probably some tracking code.
# Allowed anyway so the CSP checks pass.
# @todo Remove
- https://www.clarity.ms/
- https://scripts.clarity.ms/
connect-src:
- https://q.clarity.ms
- https://n.clarity.ms
- "data:"
style-src-elem: []
font-src: []
frame-ancestors: []
flags:
style-src-attr:
unsafe-inline: true
script-src-elem:
unsafe-inline: true
script-src-attr:
unsafe-eval: true
domains:
canonical:
- "qr.{{ PRIMARY_DOMAIN }}"
aliases: []


@@ -0,0 +1,27 @@
galaxy_info:
author: "Kevin Veen-Birkenbach"
description: >
Mini-QR is a minimalist, self-hosted web application that allows users to
instantly generate QR codes in a privacy-friendly way.
license: "Infinito.Nexus NonCommercial License"
license_url: "https://s.infinito.nexus/license"
company: |
Kevin Veen-Birkenbach
Consulting & Coaching Solutions
https://www.veen.world
galaxy_tags:
- infinito
- qr
- webapp
- privacy
- utility
- education
- lightweight
repository: "https://github.com/lyqht/mini-qr"
issue_tracker_url: "https://github.com/lyqht/mini-qr/issues"
documentation: "https://github.com/lyqht/mini-qr"
logo:
class: "fa-solid fa-qrcode"
run_after: []
dependencies: []
