25 Commits

Author SHA1 Message Date
36f9573fdf feat(filters): enforce safe Node.js heap sizing via reusable filter
- Add node_autosize filter (node_max_old_space_size) using get_app_conf
- Raise error when mem_limit < min_mb to prevent OOM-kill misconfigurations
- Wire Whiteboard NODE_OPTIONS and increase mem_limit to 1g; set cpus=1
- Refactor PeerTube to use the same filter; simplify vars
- Add unit tests; keep integration tests for filter usage green

Context: https://chatgpt.com/share/690e0499-6a94-800f-b8ed-2c5124690103
2025-11-07 15:39:54 +01:00
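A minimal wiring sketch, pieced together from the Whiteboard changes in the diffs below (filenames inferred; variable names as in roles/web-app-nextcloud):

    # vars/main.yml: derive the heap size from the service's mem_limit
    NEXTCLOUD_WHITEBOARD_MAX_OLD_SPACE_SIZE: "{{ applications | node_max_old_space_size(application_id, NEXTCLOUD_WHITEBOARD_SERVICE) }}"

    # compose template (whiteboard service): pass it to Node
    environment:
      - NODE_OPTIONS=--max-old-space-size={{ NEXTCLOUD_WHITEBOARD_MAX_OLD_SPACE_SIZE }}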
493d5bbbda refactor(web-app-shopware): externalize trusted proxy and host configuration with mounted framework.yaml
- added new file roles/web-app-shopware/files/framework.yaml defining trusted_proxies and trusted_headers for Symfony
- mounted framework.yaml into /var/www/html/config/packages/ in docker-compose
- exposed new role vars SHOPWARE_FRAMEWORK_HOST/DOCKER for mounting path
- rendered framework.yaml via Ansible copy task with proper permissions
- adjusted env.j2 to set TRUSTED_PROXIES and TRUSTED_HOSTS dynamically from domains and networks
- added SHOPWARE_DOMAIN var to vars/main.yml
- removed inline framework.yaml creation from Dockerfile (now managed via mount)
- updated proxy template (html.conf.j2) to include X-Forwarded-Ssl header
- improved init.sh permission handling for shared volumes

See ChatGPT conversation for implementation details and rationale:
https://chatgpt.com/share/690d4fe7-2830-800f-8b6d-b868e7fe0e97
2025-11-07 02:48:49 +01:00
2fcbae8fc7 Added z.clarity.ms to mini-qr 2025-11-07 00:18:01 +01:00
02f38d60db Added z.clarity.ms to mini-qr 2025-11-07 00:02:36 +01:00
d66ad37c5d enh(shopware): improve healthchecks and proxy configuration
Removed obsolete EXPOSE/healthcheck from Dockerfile and added robust service-specific healthchecks:
- web: HTTP robots.txt check
- worker/scheduler: php -v runtime check
- opensearch: cluster health API check
Added TRUSTED_PROXIES=* for proxy-aware headers and centralized OPENSEARCH_PORT in vars.
Context: discussed implementation details in ChatGPT conversation on 2025-11-06 — https://chatgpt.com/share/690c9fb3-79f4-800f-bbdf-ea370c8f142c
2025-11-06 14:17:00 +01:00
0c16f9c43c Optimized code 2025-11-05 20:46:33 +01:00
7330aeb8ec feat(web-app-peertube): add dynamic performance tuning for heap and transcoding concurrency
- Dynamically calculate PEERTUBE_MAX_OLD_SPACE_SIZE (~35% of container RAM, clamped between 768–3072 MB)
- Dynamically calculate PEERTUBE_TRANSCODING_CONCURRENCY (~½ vCPUs, min 1, max 8)
- Added default resource limits for Redis and Peertube containers
- Updated test suite to include human_to_bytes filter in built-in filter list

https://chatgpt.com/share/690914d2-6100-800f-a850-94e6d226e7c9
2025-11-03 21:47:38 +01:00
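For illustration, with the resource limits added in this commit (peertube cpus: 4, mem_limit: "8g"), the rendered environment works out to roughly:

    environment:
      # heap: 35% of 8000 MB = 2800 MB, inside the 768–3072 MB clamp
      - NODE_OPTIONS=--max-old-space-size=2800
      # concurrency: floor(4 * 0.5) = 2, clamped to [1, 8]
      - PEERTUBE_TRANSCODING_CONCURRENCY=2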
d3aad632c0 Merge branch 'master' of github.com:kevinveenbirkenbach/infinito-nexus 2025-11-03 16:41:13 +01:00
d1bad3d7a6 Added joomla user for install 2025-11-03 11:24:56 +01:00
43056a8b92 Activated CSS and Desktop for shopware 2025-11-03 11:20:03 +01:00
0bf286f62a Enhance Shopware role: fix init script permissions, CSP for data: fonts, and unify shell usage
- Added 'font-src data:' to CSP whitelist to allow inline fonts in Admin UI
- Refactored init.sh to run as root only for volume permission setup, then drop privileges to www-data
- Unified all bash invocations to sh for POSIX compliance
- Added missing 'bundles' named volume and mount to Docker Compose
- Set init container to run as root (0:0) for permission setup
- Added admin user rename step via Ansible task

See discussion: https://chatgpt.com/share/69087361-859c-800f-862c-7413350cca3e
2025-11-03 10:18:45 +01:00
df8390f386 Refactor category sorting in docker_cards_grouped lookup plugin, restructure Shopware task sequence, and extend menu categories (Commerce, Storage). Added unit tests for lookup plugin.
Conversation reference: https://chatgpt.com/share/6908642f-29cc-800f-89ec-fd6de9892b44
2025-11-03 09:14:15 +01:00
48557b06e3 refactor(web-app-shopware): make init script idempotent and handle admin via Ansible
- moved init.sh from template to files/ for direct copying and bind mounting
- removed hardcoded user creation from init process
- added database emptiness check before running system:install
- added new task 03_admin.yml to ensure admin user exists and update password/email via Ansible
- switched docker exec shell from bash to sh for Alpine compatibility
- updated Dockerfile and docker-compose.yml accordingly for mount-based init script
2025-11-03 03:36:13 +01:00
1cff5778d3 Activated debugging 2025-11-03 02:42:52 +01:00
60e2c972d6 Fix Shopware Docker build: add Redis support and align network includes
- Added symfony/redis-messenger installation with ignored build-time PHP extension checks
- Installed php83-redis in runtime stage
- Ensured consistent network includes across all Shopware services in docker-compose template
- Improves compatibility with Redis-based Symfony Messenger transport during init phase

https://chatgpt.com/share/6908068e-0bb8-800f-8855-7b3913c57158
2025-11-03 02:34:51 +01:00
637de6a190 Added network to init 2025-11-03 02:00:36 +01:00
f5efbce205 feat(shopware): migrate to single Shopware base image and split services (web/worker/scheduler/init)
• Introduce init container and runtime-ready Dockerfile (Alpine) installing php83-gd/intl/pdo_mysql
• Disable composer scripts in builder and ignore build-time ext reqs
• New docker-compose template (web/worker/scheduler/opensearch) + persistent volumes
• Use TRUSTED_PROXIES env; fix APP_URL formatting; set OPENSEARCH_HOST=opensearch
• Replace SHOPWARE_PHP_CONTAINER refs with SHOPWARE_WEB_CONTAINER in tasks
• Render and copy init.sh via volumes path
• Remove old nginx/php split and legacy DB env task
• Fix svc-db-postgres var: database_type now uses entity_name
https://chatgpt.com/share/6907fc58-7c28-800f-a993-c207f28859c9
2025-11-03 01:51:38 +01:00
d6f3618d70 Add reusable HTTP healthcheck template and integrate into Shopware and Taiga roles 2025-11-02 22:26:42 +01:00
773655efb5 Used correct image and deactivated oidc and ldap 2025-11-02 21:40:03 +01:00
7bc9f7abd9 Refactor Shopware role to use dedicated OpenSearch service and improved environment handling.
Changes include:
- Added OpenSearch configuration and variable definitions (image, version, heap, memory limits)
- Replaced legacy search/elasticsearch logic with OpenSearch integration
- Updated docker-compose template for OpenSearch with proper JVM heap and ulimits
- Ensured both OPENSEARCH_URL and ELASTICSEARCH_URL are set for compatibility

Reference: https://chatgpt.com/share/6907b0d4-ab14-800f-b576-62c0d26c8ad1
2025-11-02 21:05:52 +01:00
ec7b8662dd Implemented service name 2025-11-02 20:36:20 +01:00
d1ccfd9cdd Add new Shopware 6 role with OIDC/LDAP plugin integration and Docker-based deployment configuration.
Includes:
- New role: web-app-shopware (Docker, MariaDB, Redis, OpenSearch)
- Updated networks and ports configuration
- Automated install, migration, and admin creation
- Optional IAM integration via OIDC/LDAP plugins

Reference: https://chatgpt.com/share/6907b0d4-ab14-800f-b576-62c0d26c8ad1
2025-11-02 20:29:13 +01:00
d61c81634c Add Joomla CLI paths and implement non-interactive admin password reset via CLI
Ref: https://chatgpt.com/share/69039c22-f530-800f-a641-fd2636d5b6af
2025-10-30 18:11:18 +01:00
265f815b48 Optimized Listmonk and Nextcloud CSP for hcaptcha 2025-10-30 16:02:09 +01:00
f8e5110730 Add Redis readiness check before Nextcloud upgrade and add retry logic for maintenance repair
This prevents OCC repair failures caused by Redis still loading its dataset after container restarts.
See context: https://chatgpt.com/share/690377ba-1520-800f-b8c1-bc93fbd9232f
2025-10-30 15:36:00 +01:00
48 changed files with 1293 additions and 44 deletions

View File

@@ -0,0 +1,141 @@
# filter_plugins/node_autosize.py
# Reuse app config to derive sensible Node.js heap sizes for containers.
#
# Usage example (Jinja):
# {{ applications | node_max_old_space_size('web-app-nextcloud', 'whiteboard') }}
#
# Heuristics (defaults):
# - candidate = 35% of mem_limit
# - min = 768 MB (required minimum)
# - cap = min(3072 MB, 60% of mem_limit)
#
# NEW: If mem_limit (container cgroup RAM) is smaller than min_mb, we raise an
# exception — to prevent a misconfiguration where Node's heap could exceed the cgroup
# and be OOM-killed.
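#
# Worked example (illustrative): mem_limit = "1g" → 1000 MB (decimal)
#   candidate  = 35% of 1000 MB = 350 MB
#   safety cap = 60% of 1000 MB = 600 MB → final cap = min(3072, 600) = 600 MB
#   minimum    = 768 MB: candidate is raised to 768; the 600 MB cap is below the
#                minimum, so it is not applied → result 768 MB
#   (a mem_limit below 768 MB raises an error instead of being silently clamped)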
from __future__ import annotations
import re
from ansible.errors import AnsibleFilterError

# Import the shared config resolver from module_utils
try:
    from module_utils.config_utils import get_app_conf, AppConfigKeyError
except Exception as e:
    raise AnsibleFilterError(
        f"Failed to import get_app_conf from module_utils.config_utils: {e}"
    )

_SIZE_RE = re.compile(r"^\s*(\d+(?:\.\d+)?)\s*([kmgtp]?i?b?)?\s*$", re.IGNORECASE)

_MULT = {
    "": 1,
    "b": 1,
    "k": 10**3, "kb": 10**3,
    "m": 10**6, "mb": 10**6,
    "g": 10**9, "gb": 10**9,
    "t": 10**12, "tb": 10**12,
    "p": 10**15, "pb": 10**15,
    "kib": 1024,
    "mib": 1024**2,
    "gib": 1024**3,
    "tib": 1024**4,
    "pib": 1024**5,
}

def _to_bytes(val):
    """Convert numeric or string memory limits (e.g. '512m', '2GiB') to bytes."""
    if val is None or val == "":
        return None
    if isinstance(val, (int, float)):
        return int(val)
    if not isinstance(val, str):
        raise AnsibleFilterError(f"Unsupported mem_limit type: {type(val).__name__}")
    m = _SIZE_RE.match(val)
    if not m:
        raise AnsibleFilterError(f"Unrecognized mem_limit string: {val!r}")
    num = float(m.group(1))
    unit = (m.group(2) or "").lower()
    if unit not in _MULT:
        raise AnsibleFilterError(f"Unknown unit in mem_limit: {unit!r}")
    return int(num * _MULT[unit])

def _mb(bytes_val: int) -> int:
    """Return decimal MB (10^6) as integer — Node expects MB units."""
    return int(round(bytes_val / 10**6))

def _compute_old_space_mb(
    total_mb: int, pct: float, min_mb: int, hardcap_mb: int, safety_cap_pct: float
) -> int:
    """
    Compute Node.js old-space heap (MB) with safe minimum and cap handling.
    NOTE: The calling function ensures total_mb >= min_mb; here we only
    apply the sizing heuristics and caps.
    """
    candidate = int(total_mb * float(pct))
    safety_cap = int(total_mb * float(safety_cap_pct))
    final_cap = min(int(hardcap_mb), safety_cap)
    # Enforce minimum first; only apply cap if it's above the minimum
    candidate = max(candidate, int(min_mb))
    if final_cap >= int(min_mb):
        candidate = min(candidate, final_cap)
    # Never below a tiny hard floor
    return max(candidate, 128)

def node_max_old_space_size(
    applications: dict,
    application_id: str,
    service_name: str,
    pct: float = 0.35,
    min_mb: int = 768,
    hardcap_mb: int = 3072,
    safety_cap_pct: float = 0.60,
) -> int:
    """
    Derive Node.js --max-old-space-size (MB) from the service's mem_limit in app config.
    Looks up: docker.services.<service_name>.mem_limit for the given application_id.
    Raises:
        AnsibleFilterError if mem_limit is missing/invalid OR if mem_limit (MB) < min_mb.
    """
    try:
        mem_limit = get_app_conf(
            applications=applications,
            application_id=application_id,
            config_path=f"docker.services.{service_name}.mem_limit",
            strict=True,
            default=None,
        )
    except AppConfigKeyError as e:
        raise AnsibleFilterError(str(e))
    if mem_limit in (None, False, ""):
        raise AnsibleFilterError(
            f"mem_limit not set for application '{application_id}', service '{service_name}'"
        )
    total_bytes = _to_bytes(mem_limit)
    total_mb = _mb(total_bytes)
    # NEW: guardrail — refuse to size a heap larger than the cgroup limit
    if total_mb < int(min_mb):
        raise AnsibleFilterError(
            f"mem_limit ({total_mb} MB) is below the required minimum heap ({int(min_mb)} MB) "
            f"for application '{application_id}', service '{service_name}'. "
            f"Increase mem_limit or lower min_mb."
        )
    return _compute_old_space_mb(total_mb, pct, min_mb, hardcap_mb, safety_cap_pct)

class FilterModule(object):
    def filters(self):
        return {
            "node_max_old_space_size": node_max_old_space_size,
        }

View File

@@ -114,6 +114,8 @@ defaults_networks:
     subnet: 192.168.104.48/28
   web-app-mini-qr:
     subnet: 192.168.104.64/28
+  web-app-shopware:
+    subnet: 192.168.104.80/28
   # /24 Networks / 254 Usable Clients
   web-app-bigbluebutton:

View File

@@ -81,6 +81,7 @@ ports:
     web-app-minio_api: 8057
     web-app-minio_console: 8058
     web-app-mini-qr: 8059
+    web-app-shopware: 8060
     web-app-bigbluebutton: 48087 # This port is predefined by bbb. @todo Try to change this to a 8XXX port
   public:
     # The following ports should be changed to 22 on the subdomain via stream mapping

View File

@@ -0,0 +1,31 @@
{# ------------------------------------------------------------------------------
Healthcheck: HTTP Local
------------------------------------------------------------------------------
This template defines a generic HTTP healthcheck for containers exposing
a web service on a local port (e.g., Nginx, Apache, PHP-FPM, Shopware, etc.).
It uses `wget` or `curl` (as fallback) to test if the container responds on
http://127.0.0.1:{{ container_port }}/. If the request succeeds, Docker marks
the container as "healthy"; otherwise, as "unhealthy".
Parameters:
- container_port: The internal port the service listens on.
Timing:
- interval: 30s → Check every 30 seconds
- timeout: 5s → Each check must complete within 5 seconds
- retries: 5 → Mark unhealthy after 5 consecutive failures
- start_period: 20s → Grace period before health checks begin
Usage:
{% filter indent(4) %}
{% include 'roles/docker-container/templates/healthcheck/http.yml.j2' %}
{% endfilter %}
------------------------------------------------------------------------------
#}
healthcheck:
  test: ["CMD-SHELL", "wget -qO- http://127.0.0.1:{{ container_port }}/ >/dev/null || curl -fsS http://127.0.0.1:{{ container_port }}/ >/dev/null"]
  interval: 30s
  timeout: 5s
  retries: 5
  start_period: 20s
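A usage sketch, mirroring how the Taiga template further below consumes it (container_port must be set in the enclosing template before the include):

    {% set container_port = 80 %}
    {% filter indent(4) %}
    {% include 'roles/docker-container/templates/healthcheck/http.yml.j2' %}
    {% endfilter %}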

View File

@@ -6,7 +6,7 @@ entity_name: "{{ application_id | get_entity_name }
 docker_compose_flush_handlers: true
 # Docker Compose
-database_type: "{{ application_id | get_entity_name }}"
+database_type: "{{ entity_name }}"
 ## Postgres
 POSTGRES_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"

View File

@@ -15,6 +15,7 @@ location {{location}}
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     proxy_set_header X-Forwarded-Proto $scheme;
     proxy_set_header X-Forwarded-Port {{ WEB_PORT }};
+    proxy_set_header X-Forwarded-Ssl on;
     proxy_pass_request_headers on;
 {% include 'roles/sys-svc-proxy/templates/headers/content_security_policy.conf.j2' %}

View File

@@ -4,11 +4,13 @@ __metaclass__ = type
 from ansible.plugins.lookup import LookupBase
 from ansible.errors import AnsibleError
 class LookupModule(LookupBase):
     def run(self, terms, variables=None, **kwargs):
         """
         Group the given cards into categorized and uncategorized lists
         based on the tags from menu_categories.
+        Categories are sorted alphabetically before returning.
         """
         if len(terms) < 2:
             raise AnsibleError("Missing required arguments")
@@ -19,6 +21,7 @@ class LookupModule(LookupBase):
         categorized = {}
         uncategorized = []
+        # Categorize cards
         for card in cards:
             found = False
             for category, data in menu_categories.items():
@@ -29,10 +32,14 @@
             if not found:
                 uncategorized.append(card)
+        # Sort categories alphabetically
+        sorted_categorized = {
+            k: categorized[k] for k in sorted(categorized.keys(), key=str.lower)
+        }
         return [
             {
-                'categorized': categorized,
+                'categorized': sorted_categorized,
                 'uncategorized': uncategorized,
             }
         ]

View File

@@ -11,8 +11,8 @@ contact:
     description: Send {{ 'us' if service_provider.type == 'legal' else 'me' }} an email
     icon:
       class: fa-solid fa-envelope
-    url: mailto:{{service_provider.contact.email}}
-    identifier: {{service_provider.contact.email}}
+    url: mailto:{{ service_provider.contact.email }}
+    identifier: {{ service_provider.contact.email }}
 {% endif %}
 {% if service_provider.contact.phone is defined %}
@@ -32,6 +32,6 @@ contact:
     description: Chat with {{ 'us' if service_provider.type == 'legal' else 'me' }} on Matrix
     icon:
       class: fa-solid fa-cubes
-    identifier: "{{service_provider.contact.matrix}}"
+    identifier: "{{ service_provider.contact.matrix }}"
 {% endif %}

View File

@@ -25,7 +25,6 @@ portfolio_menu_categories:
       - ollama
       - openwebui
       - flowise
-      - minio
       - qdrant
       - litellm
@@ -102,14 +101,12 @@ portfolio_menu_categories:
       - fusiondirectory
       - user-management
-  Customer Relationship Management:
-    description: "Tools for managing customer relationships, sales pipelines, marketing, and support activities."
+  Customer Relationship:
+    description: "Customer Relationship Management (CRM) software for managing customer relationships, sales pipelines, marketing, and support activities."
     icon: "fa-solid fa-address-book"
     tags:
       - crm
       - customer
-      - relationship
-      - sales
       - marketing
       - support
       - espocrm
@@ -222,7 +219,7 @@ portfolio_menu_categories:
       - snipe-it
   Content Management:
-    description: "CMS and web publishing platforms"
+    description: "Content Management Systems (CMS) and web publishing platforms"
     icon: "fa-solid fa-file-alt"
     tags:
       - cms
@@ -231,4 +228,27 @@ portfolio_menu_categories:
       - website
       - joomla
       - wordpress
       - blog
+  Commerce:
+    description: "Platforms for building and managing online shops, product catalogs, and digital sales channels — including payment, inventory, and customer features."
+    icon: "fa-solid fa-cart-shopping"
+    tags:
+      - commerce
+      - ecommerce
+      - shopware
+      - shop
+      - sales
+      - store
+      - magento
+      - pretix
+  Storage:
+    description: "High-performance, self-hosted storage solutions for managing, scaling, and accessing unstructured data — including object storage compatible with Amazon S3 APIs."
+    icon: "fa-solid fa-database"
+    tags:
+      - storage
+      - object-storage
+      - s3
+      - minio
+      - datasets

View File

@@ -11,7 +11,7 @@
 # (Optional) specifically wait for the CLI installer script
 - name: "Check for CLI installer"
   command:
-    argv: [ docker, exec, "{{ JOOMLA_CONTAINER }}", test, -f, /var/www/html/installation/joomla.php ]
+    argv: [ docker, exec, "{{ JOOMLA_CONTAINER }}", test, -f, "{{ JOOMLA_INSTALLER_CLI_FILE }}" ]
   register: has_installer
   changed_when: false
   failed_when: false
@@ -30,9 +30,11 @@
     argv:
       - docker
       - exec
+      - --user
+      - "{{ JOOMLA_WEB_USER }}"
       - "{{ JOOMLA_CONTAINER }}"
       - php
-      - /var/www/html/installation/joomla.php
+      - "{{ JOOMLA_INSTALLER_CLI_FILE }}"
       - install
       - "--db-type={{ JOOMLA_DB_CONNECTOR }}"
       - "--db-host={{ database_host }}"

View File

@@ -0,0 +1,18 @@
---
# Reset Joomla admin password via CLI (inside the container)
- name: "Reset Joomla admin password (non-interactive CLI)"
command:
argv:
- docker
- exec
- "{{ JOOMLA_CONTAINER }}"
- php
- "{{ JOOMLA_CLI_FILE }}"
- user:reset-password
- "--username"
- "{{ JOOMLA_USER_NAME }}"
- "--password"
- "{{ JOOMLA_USER_PASSWORD }}"
register: j_password_reset
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
changed_when: j_password_reset.rc == 0

View File

@@ -24,3 +24,7 @@
 - name: Include assert routines
   include_tasks: "04_assert.yml"
   when: MODE_ASSERT | bool
+- name: Reset Admin Password
+  include_tasks: 05_reset_admin_password.yml

View File

@@ -13,9 +13,12 @@ JOOMLA_DOMAINS: "{{ applications | get_app_conf(application_id
 JOOMLA_SITE_NAME: "{{ SOFTWARE_NAME }} Joomla - CMS"
 JOOMLA_DB_CONNECTOR: "{{ 'pgsql' if database_type == 'postgres' else 'mysqli' }}"
 JOOMLA_CONFIG_FILE: "/var/www/html/configuration.php"
+JOOMLA_INSTALLER_CLI_FILE: "/var/www/html/installation/joomla.php"
+JOOMLA_CLI_FILE: "/var/www/html/cli/joomla.php"
 # User
 JOOMLA_USER_NAME: "{{ users.administrator.username }}"
 JOOMLA_USER: "{{ JOOMLA_USER_NAME | capitalize }}"
 JOOMLA_USER_PASSWORD: "{{ users.administrator.password }}"
 JOOMLA_USER_EMAIL: "{{ users.administrator.email }}"
+JOOMLA_WEB_USER: "www-data"

View File

@@ -13,6 +13,16 @@ server:
   aliases: []
   status_codes:
     default: 404
+  csp:
+    flags:
+      script-src-elem:
+        unsafe-inline: true
+    whitelist:
+      script-src-elem:
+        - "https://www.hcaptcha.com"
+        - "https://js.hcaptcha.com"
+      frame-src:
+        - "https://newassets.hcaptcha.com/"
 docker:
   services:
     database:

View File

@@ -21,9 +21,10 @@ server:
       connect-src:
         - https://q.clarity.ms
         - https://n.clarity.ms
+        - https://z.clarity.ms
         - "data:"
       style-src-elem: []
       font-src: []
       frame-ancestors: []
     flags:
       style-src-attr:

View File

@@ -9,6 +9,9 @@ server:
       script-src-attr:
         unsafe-eval: true
     whitelist:
+      script-src-elem:
+        - "https://www.hcaptcha.com"
+        - "https://js.hcaptcha.com"
       font-src:
         - "data:"
       connect-src:
@@ -19,6 +22,7 @@ server:
       frame-src:
         - "{{ WEBSOCKET_PROTOCOL }}://collabora.{{ PRIMARY_DOMAIN }}"
         - "{{ WEB_PROTOCOL }}://collabora.{{ PRIMARY_DOMAIN }}"
+        - "https://newassets.hcaptcha.com/"
       worker-src:
         - "blob:"
   domains:
@@ -89,9 +93,9 @@ docker:
       version: "latest"
       backup:
         no_stop_required: true
-      cpus: "0.25"
+      cpus: "1"
       mem_reservation: "128m"
-      mem_limit: "512m"
+      mem_limit: "1g"
       pids_limit: 1024
       enabled: "{{ applications | get_app_conf('web-app-nextcloud', 'features.oidc', False, True, True) }}" # Activate OIDC for Nextcloud
       # flavor decides which OIDC plugin should be used.

View File

@@ -7,6 +7,9 @@
command: "{{ NEXTCLOUD_DOCKER_EXEC_OCC }} maintenance:repair --include-expensive" command: "{{ NEXTCLOUD_DOCKER_EXEC_OCC }} maintenance:repair --include-expensive"
register: occ_repair register: occ_repair
changed_when: "'No repairs needed' not in occ_repair.stdout" changed_when: "'No repairs needed' not in occ_repair.stdout"
retries: 3
delay: 10
until: occ_repair.rc == 0
- name: Nextcloud | App update (retry once) - name: Nextcloud | App update (retry once)
command: "{{ NEXTCLOUD_DOCKER_EXEC_OCC }} app:update --all" command: "{{ NEXTCLOUD_DOCKER_EXEC_OCC }} app:update --all"

View File

@@ -16,6 +16,13 @@
 - name: Flush all handlers immediately so that occ can be used
   meta: flush_handlers
+- name: Wait until Redis is ready (PONG)
+  command: "docker exec {{ NEXTCLOUD_REDIS_CONTAINER }} redis-cli ping"
+  register: redis_ping
+  retries: 60
+  delay: 2
+  until: (redis_ping.stdout | default('')) is search('PONG')
 - name: Update\Upgrade Nextcloud
   include_tasks: 03_upgrade.yml
   when: MODE_UPDATE | bool

View File

@@ -77,7 +77,8 @@
     volumes:
       - whiteboard_tmp:/tmp
       - whiteboard_fontcache:/var/cache/fontconfig
+    environment:
+      - NODE_OPTIONS=--max-old-space-size={{ NEXTCLOUD_WHITEBOARD_MAX_OLD_SPACE_SIZE }}
     expose:
       - "{{ container_port }}"
     shm_size: 1g

View File

@@ -130,6 +130,7 @@ NEXTCLOUD_WHITEBOARD_TMP_VOLUME: "{{ applications | get_app_conf(applic
 NEXTCLOUD_WHITEBOARD_FRONTCACHE_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.whiteboard_fontcache') }}"
 NEXTCLOUD_WHITEBOARD_SERVICE_DIRECTORY: "{{ [ docker_compose.directories.services, 'whiteboard' ] | path_join }}"
 NEXTCLOUD_WHITEBOARD_SERVICE_DOCKERFILE: "{{ [ NEXTCLOUD_WHITEBOARD_SERVICE_DIRECTORY, 'Dockerfile' ] | path_join }}"
+NEXTCLOUD_WHITEBOARD_MAX_OLD_SPACE_SIZE: "{{ applications | node_max_old_space_size(application_id, NEXTCLOUD_WHITEBOARD_SERVICE) }}"
 ### Collabora
 NEXTCLOUD_COLLABORA_URL: "{{ domains | get_url('web-svc-collabora', WEB_PROTOCOL) }}"
@@ -141,4 +142,7 @@ NEXTCLOUD_DOCKER_USER: "www-data" # Name of the www-data user
 ## Execution
 NEXTCLOUD_INTERNAL_OCC_COMMAND: "{{ [ NEXTCLOUD_DOCKER_WORK_DIRECTORY, 'occ'] | path_join }}"
 NEXTCLOUD_DOCKER_EXEC: "docker exec -u {{ NEXTCLOUD_DOCKER_USER }} {{ NEXTCLOUD_CONTAINER }}" # General execute composition
 NEXTCLOUD_DOCKER_EXEC_OCC: "{{ NEXTCLOUD_DOCKER_EXEC }} {{ NEXTCLOUD_INTERNAL_OCC_COMMAND }}" # Execute docker occ command
+## Redis
+NEXTCLOUD_REDIS_CONTAINER: "{{ entity_name }}-redis"

View File

@@ -30,6 +30,10 @@ docker:
   services:
     redis:
       enabled: true
+      cpus: "0.5"
+      mem_reservation: "256m"
+      mem_limit: "512m"
+      pids_limit: 512
     database:
       enabled: true
     peertube:
@@ -38,6 +42,10 @@ docker:
       image: "chocobozzz/peertube"
       backup:
         no_stop_required: true
+      cpus: 4
+      mem_reservation: "4g"
+      mem_limit: "8g"
+      pids_limit: 2048 # ffmpeg spawns threads/processes
   volumes:
     data: peertube_data
     config: peertube_config

View File

@@ -12,6 +12,17 @@
       - assets:/app/client/dist
       - data:/data
       - config:/config
+    environment:
+      - NODE_OPTIONS=--max-old-space-size={{ PEERTUBE_MAX_OLD_SPACE_SIZE }}
+      - PEERTUBE_TRANSCODING_CONCURRENCY={{ PEERTUBE_TRANSCODING_CONCURRENCY }}
+    shm_size: "512m"
+    tmpfs:
+      - /tmp:size=1g,exec
+    ulimits:
+      nofile:
+        soft: 131072
+        hard: 131072
+      nproc: 8192
 {% include 'roles/docker-container/templates/depends_on/dmbs_excl.yml.j2' %}
 {% include 'roles/docker-container/templates/networks.yml.j2' %}
 {% include 'roles/docker-container/templates/healthcheck/tcp.yml.j2' %}

View File

@@ -1,17 +1,24 @@
 # General
 application_id: "web-app-peertube"
 database_type: "postgres"
+entity_name: "{{ application_id | get_entity_name }}"
 # Docker
 docker_compose_flush_handlers: true
 # Role variables
 PEERTUBE_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.peertube.version') }}"
 PEERTUBE_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.peertube.image') }}"
 PEERTUBE_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.peertube.name') }}"
 PEERTUBE_DATA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
 PEERTUBE_CONFIG_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.config') }}"
 # OIDC
 PEERTUBE_OIDC_PLUGIN: "peertube-plugin-auth-openid-connect"
-PEERTUBE_OIDC_ENABLED: "{{ applications | get_app_conf(application_id, 'features.oidc', False) }}"
+PEERTUBE_OIDC_ENABLED: "{{ applications | get_app_conf(application_id, 'features.oidc') }}"
+# Performance
+PEERTUBE_CPUS: "{{ applications | get_app_conf(application_id, 'docker.services.peertube.cpus') | float }}"
+PEERTUBE_MAX_OLD_SPACE_SIZE: "{{ applications | node_max_old_space_size(application_id, entity_name) }}"
+_peertube_concurrency_candidate: "{{ ((PEERTUBE_CPUS | float) * 0.5) | round(0, 'floor') | int }}"
+PEERTUBE_TRANSCODING_CONCURRENCY: "{{ [ ( [ (_peertube_concurrency_candidate | int), 1 ] | max ), 8 ] | min }}"

View File

@@ -0,0 +1,34 @@
# Shopware
## Description
Empower your e-commerce vision with **Shopware 6**, a modern, flexible, and open-source commerce platform built on **Symfony and Vue.js**. Designed for growth and innovation, it enables seamless integration, outstanding customer experiences, and complete control over your digital business. Build, scale, and sell with confidence.
## Overview
This role deploys **Shopware 6** using **Docker**. It automates installation, migration, and configuration of your storefront, integrating with a central **MariaDB** database.
Optional components like **Redis** and **OpenSearch** enhance performance and search capabilities, while **OIDC** and **LDAP** support integration with centralized identity systems such as **Keycloak**.
With automated setup, update handling, variable management, and plugin-based authentication, this role simplifies the deployment and maintenance of your Shopware instance.
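A minimal invocation sketch (illustrative only; the host group and play layout are assumptions, not part of this role):

    - hosts: servers
      become: true
      roles:
        - web-app-shopware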
## Features
* **Modern and Scalable:** A robust Symfony-based framework optimized for commerce innovation.
* **Automated Setup & Maintenance:** Installs, migrates, and configures Shopware automatically.
* **Extensible Architecture:** Optional Redis, OpenSearch, and plugin-based IAM integrations.
* **Centralized Database Access:** Connects seamlessly to the shared MariaDB service.
* **Integrated Configuration:** Environment and Docker Compose variables managed automatically.
## Further Resources
* [Shopware Official Website](https://www.shopware.com/en/)
* [Shopware Developer Documentation](https://developer.shopware.com/)
* [Shopware Store (Plugins)](https://store.shopware.com/en/)
## Credits
Developed and maintained by **Kevin Veen-Birkenbach**.
Learn more at [veen.world](https://www.veen.world).
Part of the [Infinito.Nexus Project](https://s.infinito.nexus/code)
Licensed under [Infinito.Nexus NonCommercial License](https://s.infinito.nexus/license).

View File

@@ -0,0 +1,3 @@
# to-dos
- Implement OIDC
- Implement LDAP

View File

@@ -0,0 +1,80 @@
title: "{{ SOFTWARE_NAME }} Shop"
features:
central_database: true
redis: true
ldap: false # Not implemented yet
oidc: false # Not implemented yet
logout: true
desktop: true
css: true
server:
csp:
flags:
script-src-elem:
unsafe-inline: true
unsafe-eval: true
whitelist:
font-src:
- "data:"
domains:
aliases: []
canonical:
- shop.{{ PRIMARY_DOMAIN }}
docker:
services:
database:
enabled: true
init:
name: software-init
cpus: 1.0
mem_reservation: 1g
mem_limit: 2g
# Base PHP image used by all app services
shopware:
image: "ghcr.io/shopware/docker-base"
version: "8.3"
web:
name: "shopware-web"
port: 8000
cpus: 1.0
mem_reservation: 1g
mem_limit: 2g
worker:
name: "shopware-worker"
entrypoint: [ "php", "bin/console", "messenger:consume", "async", "low_priority", "--time-limit=300", "--memory-limit=512M" ]
replicas: 3
cpus: 1.0
mem_reservation: 1g
mem_limit: 2g
scheduler:
name: "shopware-scheduler"
entrypoint: [ "php", "bin/console", "scheduled-task:run" ]
cpus: 0.5
mem_reservation: 512m
mem_limit: 1g
redis:
enabled: true
image: "redis"
version: "7-alpine"
cpus: 0.25
mem_reservation: 256m
mem_limit: 512m
opensearch:
enabled: true
image: "opensearchproject/opensearch"
version: "2.12.0"
name: "shopware-opensearch"
cpus: 1.0
mem_reservation: 2g
mem_limit: 4g
volumes:
data: "shopware_data"

View File

@@ -0,0 +1,7 @@
framework:
  trusted_proxies: '%env(TRUSTED_PROXIES)%'
  trusted_headers:
    - x-forwarded-for
    - x-forwarded-proto
    - x-forwarded-host
    - x-forwarded-port

View File

@@ -0,0 +1,146 @@
#!/bin/sh
# Shopware initialization script (POSIX sh)
# - Root phase: fix volumes & permissions, then switch to www-data
# - First run: perform system:install
# - Every run: run DB migrations + rebuild cache + compile assets & themes
# - Verifies admin bundles exist, otherwise exits with error
set -eu
APP_ROOT="/var/www/html"
MARKER="$APP_ROOT/.infinito/installed"
LOG_PREFIX="[INIT]"
PHP_BIN="php"
log() { printf "%s %s\n" "$LOG_PREFIX" "$1"; }
fail() { printf "%s [ERROR] %s\n" "$LOG_PREFIX" "$1" >&2; exit 1; }
# ---------------------------
# 0) Root phase (if running as root)
# ---------------------------
if [ "$(id -u)" -eq 0 ]; then
# Prepare required folders and shared volumes
mkdir -p "$APP_ROOT/.infinito" \
"$APP_ROOT/public/bundles" \
"$APP_ROOT/public/media" \
"$APP_ROOT/public/theme" \
"$APP_ROOT/public/thumbnail" \
"$APP_ROOT/public/sitemap" \
"$APP_ROOT/var"
log "Fixing permissions on shared volumes..."
chown -R www-data:www-data \
"$APP_ROOT/public" \
"$APP_ROOT/var" \
"$APP_ROOT/.infinito" || true
chmod -R 775 \
"$APP_ROOT/public" \
"$APP_ROOT/var" \
"$APP_ROOT/.infinito" || true
# Switch to www-data for all subsequent operations
exec su -s /bin/sh www-data "$0" "$@"
fi
# From here on: running as www-data
cd "$APP_ROOT" || fail "Cannot cd to $APP_ROOT"
# Optional environment hints
APP_ENV_STR=$($PHP_BIN -r 'echo getenv("APP_ENV") ?: "";' 2>/dev/null || true)
APP_URL_STR=$($PHP_BIN -r 'echo getenv("APP_URL") ?: "";' 2>/dev/null || true)
[ -n "$APP_ENV_STR" ] || log "APP_ENV not set (using defaults)"
[ -n "$APP_URL_STR" ] || log "APP_URL not set (reverse proxy must set headers)"
# ---------------------------
# 1) Database reachability check (PDO)
# ---------------------------
log "Checking database via PDO..."
$PHP_BIN -r '
$url = getenv("DATABASE_URL");
if (!$url) { fwrite(STDERR, "DATABASE_URL not set\n"); exit(1); }
$p = parse_url($url);
if (!$p || !isset($p["scheme"])) { fwrite(STDERR, "Invalid DATABASE_URL\n"); exit(1); }
$host = $p["host"] ?? "localhost";
$port = $p["port"] ?? 3306;
$db = ltrim($p["path"] ?? "", "/");
$user = $p["user"] ?? "";
$pass = $p["pass"] ?? "";
$dsn = "mysql:host=".$host.";port=".$port.";dbname=".$db.";charset=utf8mb4";
$retries = 60;
while ($retries-- > 0) {
try { new PDO($dsn, $user, $pass, [PDO::ATTR_TIMEOUT => 3]); exit(0); }
catch (Exception $e) { sleep(2); }
}
fwrite(STDERR, "DB not reachable\n"); exit(1);
' || fail "Database not reachable"
# ---------------------------
# 2) First-time install detection
# ---------------------------
FIRST_INSTALL=0
if [ ! -f "$MARKER" ]; then
log "Checking if database is empty..."
if $PHP_BIN -r '
$url = getenv("DATABASE_URL");
$p = parse_url($url);
$db = ltrim($p["path"] ?? "", "/");
$dsn = "mysql:host=".($p["host"]??"localhost").";port=".($p["port"]??3306).";dbname=".$db.";charset=utf8mb4";
$pdo = new PDO($dsn, $p["user"] ?? "", $p["pass"] ?? "");
$q = $pdo->query("SELECT COUNT(*) FROM information_schema.tables WHERE table_schema=".$pdo->quote($db));
$cnt = (int)$q->fetchColumn();
exit($cnt === 0 ? 0 : 100);
'; then
FIRST_INSTALL=1
else
ST=$?
if [ "$ST" -eq 100 ]; then
log "Database not empty → skipping install"
else
fail "Database check failed (exit code $ST)"
fi
fi
fi
if [ "$FIRST_INSTALL" -eq 1 ]; then
log "Performing first-time Shopware installation..."
$PHP_BIN -d memory_limit=1024M bin/console system:install --basic-setup --create-database
mkdir -p "$(dirname "$MARKER")"
: > "$MARKER"
fi
# ---------------------------
# 3) Always run migrations
# ---------------------------
log "Running database migrations..."
$PHP_BIN -d memory_limit=1024M bin/console database:migrate --all
$PHP_BIN -d memory_limit=1024M bin/console database:migrate-destructive --all
# ---------------------------
# 4) Always rebuild caches, bundles, and themes
# ---------------------------
log "Rebuilding caches and assets..."
$PHP_BIN bin/console cache:clear
$PHP_BIN bin/console bundle:dump
# Use --copy if symlinks cause issues
$PHP_BIN bin/console assets:install --no-interaction --force
$PHP_BIN bin/console theme:refresh
$PHP_BIN bin/console theme:compile
# Best-effort: not critical if it fails
$PHP_BIN bin/console dal:refresh:index || log "dal:refresh:index failed (non-critical)"
# ---------------------------
# 5) Verify admin bundles
# ---------------------------
if [ ! -d "public/bundles/administration" ]; then
fail "Missing directory public/bundles/administration (asset build failed)"
fi
if ! ls public/bundles/administration/* >/dev/null 2>&1; then
fail "No files found in public/bundles/administration (asset build failed)"
fi
# ---------------------------
# 6) Show version info
# ---------------------------
$PHP_BIN bin/console system:version 2>/dev/null || log "system:version not available"
log "Initialization complete."

View File

@@ -0,0 +1,22 @@
---
galaxy_info:
author: "Kevin Veen-Birkenbach"
description: "Shopware is a modern open-source eCommerce platform built on PHP and Symfony. It enables businesses to create scalable online stores with flexible product management, intuitive administration, customizable storefronts, and powerful APIs for headless and omnichannel commerce."
license: "Infinito.Nexus NonCommercial License"
license_url: "https://s.infinito.nexus/license"
company: |
Kevin Veen-Birkenbach
Consulting & Coaching Solutions
https://www.veen.world
galaxy_tags:
- shopware
- ecommerce
repository: https://s.infinito.nexus/code
issue_tracker_url: https://s.infinito.nexus/issues
documentation: "https://docs.infinito.nexus/"
logo:
class: "fa-solid fa-cart-shopping"
run_after:
- web-app-keycloak
- web-app-mailu
dependencies: []

View File

@@ -0,0 +1,2 @@
# Minimal schema placeholder (extend with your own config contract if desired)
credentials: {}

View File

@@ -0,0 +1,38 @@
- name: "Rename default Shopware admin user to {{ users.administrator.username }}"
shell: |
docker exec -i --user {{ SHOPWARE_USER }} {{ SHOPWARE_WEB_CONTAINER }} sh -lc '
set -e
cd {{ SHOPWARE_ROOT }}
old_user="admin"
new_user="{{ users.administrator.username }}"
if php bin/console user:list | grep -q "^$old_user "; then
echo "[INFO] Renaming Shopware user: $old_user -> $new_user"
php bin/console user:update "$old_user" --username="$new_user" || true
else
echo "[INFO] No user named $old_user found (already renamed or custom setup)"
fi
'
args:
chdir: "{{ docker_compose.directories.instance }}"
changed_when: false
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"
- name: "Ensure Shopware admin exists and has the desired password"
shell: |
docker exec -i --user {{ SHOPWARE_USER }} {{ SHOPWARE_WEB_CONTAINER }} sh -lc '
set -e
cd {{ SHOPWARE_ROOT }}
php bin/console user:create "{{ users.administrator.username }}" \
--admin \
--password="{{ users.administrator.password }}" \
--firstName="{{ users.administrator.username }}" \
--lastName="{{ PRIMARY_DOMAIN | lower }}" \
--email="{{ users.administrator.email }}" || true
php bin/console user:change-password "{{ users.administrator.username }}" \
--password="{{ users.administrator.password }}" || true
php bin/console user:update "{{ users.administrator.username }}" \
--email="{{ users.administrator.email }}" 2>/dev/null || true
'
args:
chdir: "{{ docker_compose.directories.instance }}"
no_log: "{{ MASK_CREDENTIALS_IN_LOGS | bool }}"

View File

@@ -0,0 +1,7 @@
- name: Install & configure OIDC plugin (if enabled)
include_tasks: setup/oidc.yml
when: applications | get_app_conf(application_id, 'features.oidc')
- name: Install & configure LDAP plugin (if enabled)
include_tasks: setup/ldap.yml
when: applications | get_app_conf(application_id, 'features.ldap')

View File

@@ -0,0 +1,7 @@
- name: Remove OIDC plugin if disabled
include_tasks: cleanup/oidc.yml
when: not (applications | get_app_conf(application_id, 'features.oidc'))
- name: Remove LDAP plugin if disabled
include_tasks: cleanup/ldap.yml
when: not (applications | get_app_conf(application_id, 'features.ldap'))

View File

@@ -0,0 +1,10 @@
- name: "Deactivate/uninstall LDAP plugin if present"
shell: |
docker exec -i --user {{ SHOPWARE_USER }} {{ SHOPWARE_WEB_CONTAINER }} sh -lc '
cd {{ SHOPWARE_ROOT }}
php bin/console plugin:deactivate INFX_LDAP_PLUGIN || true
php bin/console plugin:uninstall INFX_LDAP_PLUGIN --keep-user-data || true
php bin/console cache:clear
'
args:
chdir: "{{ docker_compose.directories.instance }}"

View File

@@ -0,0 +1,10 @@
- name: "Deactivate/uninstall OIDC plugin if present"
shell: |
docker exec -i --user {{ SHOPWARE_USER }} {{ SHOPWARE_WEB_CONTAINER }} sh -lc '
cd {{ SHOPWARE_ROOT }}
php bin/console plugin:deactivate INFX_OIDC_PLUGIN || true
php bin/console plugin:uninstall INFX_OIDC_PLUGIN --keep-user-data || true
php bin/console cache:clear
'
args:
chdir: "{{ docker_compose.directories.instance }}"

View File

@@ -0,0 +1,43 @@
---
- name: "Load docker, DB and proxy for {{ application_id }}"
include_role:
name: sys-stk-full-stateful
vars:
docker_compose_flush_handlers: false
- name: "Deploy {{ SHOPWARE_INIT_HOST }}"
copy:
src: init.sh
dest: "{{ SHOPWARE_INIT_HOST }}"
mode: "0755"
notify:
- docker compose up
- docker compose build
- name: "Render framework.yaml (trusted proxies/headers/hosts)"
copy:
src: "framework.yaml"
dest: "{{ SHOPWARE_FRAMEWORK_HOST }}"
mode: "0644"
notify:
- docker compose up
- name: "Flush docker compose handlers"
meta: flush_handlers
- name: Wait for Shopware HTTP endpoint
wait_for:
host: "127.0.0.1"
port: "{{ ports.localhost.http[application_id] }}"
delay: 5
timeout: 300
- name: "Ensure admin user exists with correct password"
include_tasks: 01_admin.yml
#- name: Execute setup routines (OIDC/LDAP)
# include_tasks: 02_setup.yml
#
#- name: Execute cleanup routines
# include_tasks: 03_cleanup.yml
# when: MODE_CLEANUP

View File

@@ -0,0 +1,27 @@
# Replace INFX_LDAP_PLUGIN with the actual plugin name you use
- name: "Install LDAP admin plugin & activate"
shell: |
docker exec -i --user {{ SHOPWARE_USER }} {{ SHOPWARE_WEB_CONTAINER }} sh -lc '
set -e
cd {{ SHOPWARE_ROOT }}
php bin/console plugin:refresh
php bin/console plugin:install --activate INFX_LDAP_PLUGIN || true
php bin/console cache:clear
'
args:
chdir: "{{ docker_compose.directories.instance }}"
- name: "Configure LDAP connection"
shell: |
docker exec -i --user {{ SHOPWARE_USER }} {{ SHOPWARE_WEB_CONTAINER }} sh -lc '
set -e
cd {{ SHOPWARE_ROOT }}
php bin/console system:config:set "InfxLdap.config.host" "{{ LDAP.SERVER.DOMAIN }}"
php bin/console system:config:set "InfxLdap.config.port" "{{ LDAP.SERVER.PORT }}"
php bin/console system:config:set "InfxLdap.config.bindDn" "{{ LDAP.DN.ADMINISTRATOR.DATA }}"
php bin/console system:config:set "InfxLdap.config.password" "{{ LDAP.BIND_CREDENTIAL }}"
php bin/console system:config:set "InfxLdap.config.userBase" "{{ LDAP.DN.OU.USERS }}"
php bin/console cache:clear
'
args:
chdir: "{{ docker_compose.directories.instance }}"

View File

@@ -0,0 +1,26 @@
# Replace INFX_OIDC_PLUGIN with the actual plugin name (Composer or local)
- name: "Install OIDC plugin & activate"
shell: |
docker exec -i --user {{ SHOPWARE_USER }} {{ SHOPWARE_WEB_CONTAINER }} sh -lc '
set -e
cd {{ SHOPWARE_ROOT }}
php bin/console plugin:refresh
php bin/console plugin:install --activate INFX_OIDC_PLUGIN || true
php bin/console cache:clear
'
args:
chdir: "{{ docker_compose.directories.instance }}"
- name: "Configure OIDC via system:config"
shell: |
docker exec -i --user {{ SHOPWARE_USER }} {{ SHOPWARE_WEB_CONTAINER }} sh -lc '
set -e
cd {{ SHOPWARE_ROOT }}
php bin/console system:config:set "InfxOidc.config.clientId" "{{ OIDC.CLIENT.ID }}"
php bin/console system:config:set "InfxOidc.config.clientSecret" "{{ OIDC.CLIENT.SECRET }}"
php bin/console system:config:set "InfxOidc.config.discoveryUrl" "{{ OIDC.CLIENT.DISCOVERY_DOCUMENT }}"
php bin/console system:config:set "InfxOidc.config.scopes" "openid profile email"
php bin/console cache:clear
'
args:
chdir: "{{ docker_compose.directories.instance }}"

View File

@@ -0,0 +1,80 @@
# ------------------------------------------------------------------------------
# Shopware Application Image (Alpine-compatible)
# ------------------------------------------------------------------------------
# - Stage 1 (builder): use Composer to fetch Shopware while ignoring build-time
# PHP extensions (we'll install them in the runtime image).
# - Stage 2 (runtime): install required PHP extensions and copy the app + init.sh
# ------------------------------------------------------------------------------
############################
# Stage 1: Builder
############################
FROM composer:2.7 AS builder
ENV COMPOSER_ALLOW_SUPERUSER=1 \
COMPOSER_NO_INTERACTION=1 \
COMPOSER_PROCESS_TIMEOUT=900
WORKDIR /app
ARG SHOPWARE_PROD_VERSION=shopware/production:6.7.3.1
# 1) Scaffold project without installing dependencies
RUN set -eux; \
composer create-project "${SHOPWARE_PROD_VERSION}" /app --no-install
# 2) Install dependencies (ignoring build-time extension checks) + add Redis transport
RUN set -eux; \
composer install \
--no-dev \
--optimize-autoloader \
--no-progress \
--no-scripts \
--ignore-platform-req=ext-gd \
--ignore-platform-req=ext-intl \
--ignore-platform-req=ext-pdo_mysql; \
composer require symfony/redis-messenger:^6.4 \
-W \
--no-scripts \
--no-progress \
--update-no-dev \
--ignore-platform-req=ext-gd \
--ignore-platform-req=ext-intl \
--ignore-platform-req=ext-pdo_mysql \
--ignore-platform-req=ext-redis
############################
# Stage 2: Runtime
############################
FROM ghcr.io/shopware/docker-base:8.3
WORKDIR /var/www/html
# Install required PHP extensions in the Alpine-based runtime
# (try php83-*, fall back to php82-*, then to generic)
USER root
RUN set -eux; \
apk add --no-cache php83-gd || apk add --no-cache php82-gd || apk add --no-cache php-gd || true; \
apk add --no-cache php83-intl || apk add --no-cache php82-intl || apk add --no-cache php-intl || true; \
apk add --no-cache php83-pdo_mysql || apk add --no-cache php82-pdo_mysql || apk add --no-cache php-pdo_mysql || true; \
apk add --no-cache php83-redis || apk add --no-cache php82-redis || apk add --no-cache php-redis || true
# Copy built application from the builder
COPY --chown=www-data:www-data --from=builder /app /var/www/html
# Optional: snapshot of pristine app to seed an empty volume (used by init container)
RUN mkdir -p /usr/src/shopware \
&& cp -a /var/www/html/. /usr/src/shopware/. \
&& chown -R www-data:www-data /var/www/html /usr/src/shopware
# Ensure writable directories exist with correct ownership
RUN set -eux; \
mkdir -p \
/var/www/html/files \
/var/www/html/var \
/var/www/html/public/media \
/var/www/html/public/thumbnail \
/var/www/html/public/sitemap \
/var/www/html/public/theme; \
chown -R www-data:www-data /var/www/html
# Drop back to the app user
USER www-data

View File

@@ -0,0 +1,145 @@
x-environment: &shopware
image: "{{ SHOPWARE_CUSTOM_IMAGE }}"
volumes:
- files:/var/www/html/files
- theme:/var/www/html/public/theme
- media:/var/www/html/public/media
- thumbnail:/var/www/html/public/thumbnail
- sitemap:/var/www/html/public/sitemap
- "{{ SHOPWARE_INIT_HOST }}:{{ SHOPWARE_INIT_DOCKER }}:ro"
- bundles:/var/www/html/public/bundles
- "{{ SHOPWARE_FRAMEWORK_HOST }}:{{ SHOPWARE_FRAMEWORK_DOCKER }}:ro"
working_dir: {{ SHOPWARE_ROOT }}
{% include 'roles/docker-compose/templates/base.yml.j2' %}
# -------------------------
# INIT (runs once per deployment)
# -------------------------
{% set service_name = 'init' %}
{% set docker_restart_policy = 'no' %}
{{ service_name }}:
{% include 'roles/docker-container/templates/base.yml.j2' %}
{% set docker_restart_policy = DOCKER_RESTART_POLICY %}
<<: *shopware
container_name: "{{ SHOPWARE_INIT_CONTAINER }}"
entrypoint: [ "sh", "{{ SHOPWARE_INIT_DOCKER }}" ]
user: "0:0"
{% include 'roles/docker-container/templates/networks.yml.j2' %}
{# -------------------------
WEB (serves HTTP on 8000)
------------------------- #}
{% set service_name = 'web' %}
{% set container_port = applications | get_app_conf(application_id, 'docker.services.web.port') %}
{{ service_name }}:
{% include 'roles/docker-container/templates/base.yml.j2' %}
<<: *shopware
{{ lookup('template', 'roles/docker-container/templates/build.yml.j2') | indent(4) }}
container_name: "{{ SHOPWARE_WEB_CONTAINER }}"
ports:
- "127.0.0.1:{{ ports.localhost.http[application_id] }}:{{ container_port }}"
depends_on:
init:
condition: service_completed_successfully
healthcheck:
test: ["CMD-SHELL", "wget -q --spider http://127.0.0.1:{{ container_port }}/robots.txt || wget -q --spider http://127.0.0.1:{{ container_port }}/ || exit 1"]
interval: 30s
timeout: 5s
retries: 10
start_period: 120s
{% include 'roles/docker-container/templates/networks.yml.j2' %}
{# -------------------------
WORKER (async queues)
------------------------- #}
{% set service_name = 'worker' %}
{{ service_name }}:
{% include 'roles/docker-container/templates/base.yml.j2' %}
<<: *shopware
container_name: "{{ SHOPWARE_WORKER_CONTAINER }}"
pull_policy: never
entrypoint: {{ SHOPWARE_WORKER_ENTRYPOINT }}
depends_on:
init:
condition: service_completed_successfully
# @todo Activate for swarm deploy
# deploy:
# replicas: {{ SHOPWARE_WORKER_REPLICAS }}
healthcheck:
test: ["CMD", "php", "-v"]
interval: 30s
timeout: 5s
retries: 3
start_period: 10s
{% include 'roles/docker-container/templates/networks.yml.j2' %}
{# -------------------------
SCHEDULER (cron-like)
------------------------- #}
{% set service_name = 'scheduler' %}
{{ service_name }}:
{% include 'roles/docker-container/templates/base.yml.j2' %}
<<: *shopware
container_name: "{{ SHOPWARE_SCHED_CONTAINER }}"
pull_policy: never
entrypoint: {{ SHOPWARE_SCHED_ENTRYPOINT }}
depends_on:
init:
condition: service_completed_successfully
healthcheck:
test: ["CMD", "php", "-v"]
interval: 30s
timeout: 5s
retries: 3
start_period: 10s
{% include 'roles/docker-container/templates/networks.yml.j2' %}
{% if SHOPWARE_OPENSEARCH_ENABLED %}
{% set service_name = 'opensearch' %}
{{ service_name }}:
{% include 'roles/docker-container/templates/base.yml.j2' %}
image: "{{ SHOPWARE_OPENSEARCH_IMAGE }}:{{ SHOPWARE_OPENSEARCH_VERSION }}"
container_name: "{{ SHOPWARE_OPENSEARCH_CONTAINER }}"
environment:
- discovery.type=single-node
- plugins.security.disabled=true
- bootstrap.memory_lock=true
- OPENSEARCH_JAVA_OPTS=-Xms{{ SHOPWARE_OPENSEARCH_MEM_RESERVATION }} -Xmx{{ SHOPWARE_OPENSEARCH_MEM_RESERVATION }}
ulimits:
memlock: { soft: -1, hard: -1 }
depends_on:
init:
condition: service_completed_successfully
healthcheck:
test: ["CMD-SHELL", "curl -fsSL http://127.0.0.1:{{ SHOPWARE_OPENSEARCH_PORT }}/_cluster/health || exit 1"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
{% include 'roles/docker-container/templates/networks.yml.j2' %}
{% endif %}
{% include 'roles/docker-compose/templates/volumes.yml.j2' %}
data:
name: {{ SHOPWARE_VOLUME }}
files:
name: {{ entity_name }}_files
theme:
name: {{ entity_name }}_theme
media:
name: {{ entity_name }}_media
thumbnail:
name: {{ entity_name }}_thumbnail
sitemap:
name: {{ entity_name }}_sitemap
bundles:
name: {{ entity_name }}_bundles
{% include 'roles/docker-compose/templates/networks.yml.j2' %}

View File

@@ -0,0 +1,36 @@
# DOMAIN/URL
DOMAIN={{ SHOPWARE_DOMAIN }}
APP_URL="{{ domains | get_url(application_id, WEB_PROTOCOL) }}"
APP_DEBUG="{{ MODE_DEBUG | ternary(1, 0) }}"
# Shopware
APP_ENV={{ 'dev' if (ENVIRONMENT | lower) == 'development' else 'prod' }}
INSTANCE_ID={{ application_id }}
# Proxy
TRUSTED_PROXIES="{{ networks.internet.values() | select | join(',') }},127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
TRUSTED_HOSTS="{{ SHOPWARE_DOMAIN }}"
# Database
DATABASE_URL="mysql://{{ database_username }}:{{ database_password }}@{{ database_host }}:{{ database_port }}/{{ database_name }}"
# Redis (optional)
{% if SHOPWARE_REDIS_ENABLED | bool %}
REDIS_URL="redis://{{ SHOPWARE_REDIS_ADDRESS }}/0"
CACHE_URL="redis://{{ SHOPWARE_REDIS_ADDRESS }}/1"
MESSENGER_TRANSPORT_DSN="redis://{{ SHOPWARE_REDIS_ADDRESS }}/2"
{% else %}
CACHE_URL="file://cache"
{% endif %}
{% if SHOPWARE_OPENSEARCH_ENABLED %}
# Search
ELASTICSEARCH_URL="http://opensearch:{{ SHOPWARE_OPENSEARCH_PORT }}"
OPENSEARCH_URL="http://opensearch:{{ SHOPWARE_OPENSEARCH_PORT }}"
OPENSEARCH_HOST="opensearch"
OPENSEARCH_PORT_NUMBER="{{ SHOPWARE_OPENSEARCH_PORT }}"
OPENSEARCH_INITIAL_ADMIN_PASSWORD="{{ users.administrator.password }}"
{% endif %}
# Mail (Mailu)
MAILER_DSN="smtps://{{ users['no-reply'].email }}:{{ users['no-reply'].mailu_token }}@{{ SYSTEM_EMAIL.HOST }}:{{ SYSTEM_EMAIL.PORT }}"

View File

@@ -0,0 +1,45 @@
# General
application_id: "web-app-shopware"
database_type: "mariadb"
entity_name: "{{ application_id | get_entity_name }}"
# Docker
container_port: "{{ applications | get_app_conf(application_id, 'docker.services.web.port') }}"
docker_compose_flush_handlers: true
SHOPWARE_DOMAIN: "{{ domains | get_domain(application_id) }}"
# Shopware container/image vars
SHOPWARE_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.shopware.version') }}"
SHOPWARE_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.shopware.image') }}"
SHOPWARE_CUSTOM_IMAGE: "{{ SHOPWARE_IMAGE }}:{{ SHOPWARE_VERSION }}"
SHOPWARE_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
SHOPWARE_USER: "www-data"
SHOPWARE_ROOT: "/var/www/html"
# Split service container names
SHOPWARE_INIT_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.init.name') }}"
SHOPWARE_WEB_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.web.name') }}"
SHOPWARE_WORKER_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.worker.name') }}"
SHOPWARE_SCHED_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.scheduler.name') }}"
SHOPWARE_INIT_HOST: "{{ [ docker_compose.directories.volumes, 'init.sh' ] | path_join }}"
SHOPWARE_INIT_DOCKER: "/usr/local/bin/init.sh"
SHOPWARE_FRAMEWORK_HOST: "{{ [ docker_compose.directories.config, 'framework.yaml' ] | path_join }}"
SHOPWARE_FRAMEWORK_DOCKER: "/var/www/html/config/packages/framework.yaml"
# Entrypoints & replicas
SHOPWARE_WORKER_ENTRYPOINT: "{{ applications | get_app_conf(application_id, 'docker.services.worker.entrypoint') }}"
SHOPWARE_SCHED_ENTRYPOINT: "{{ applications | get_app_conf(application_id, 'docker.services.scheduler.entrypoint') }}"
SHOPWARE_WORKER_REPLICAS: "{{ applications | get_app_conf(application_id, 'docker.services.worker.replicas') }}"
# Redis Cache
SHOPWARE_REDIS_ENABLED: "{{ applications | get_app_conf(application_id, 'docker.services.redis.enabled') }}"
SHOPWARE_REDIS_ADDRESS: "redis:6379"
# Opensearch
SHOPWARE_OPENSEARCH_PORT: "9200"
SHOPWARE_OPENSEARCH_ENABLED: "{{ applications | get_app_conf(application_id, 'docker.services.opensearch.enabled') }}"
SHOPWARE_OPENSEARCH_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.opensearch.image') }}"
SHOPWARE_OPENSEARCH_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.opensearch.version') }}"
SHOPWARE_OPENSEARCH_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.opensearch.name') }}"
SHOPWARE_OPENSEARCH_MEM_RESERVATION: "{{ applications | get_app_conf(application_id, 'docker.services.opensearch.mem_reservation') }}"
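
The get_app_conf lookups above read from the application's merged configuration. The excerpt below only illustrates the expected shape of that data; every key value is a placeholder, not the role's real default:

docker:
  services:
    web:
      name: "shopware-web"
      port: 9000
    init:
      name: "shopware-init"
    worker:
      name: "shopware-worker"
      entrypoint: "php bin/console messenger:consume async"
      replicas: 2
    scheduler:
      name: "shopware-scheduler"
      entrypoint: "php bin/console scheduled-task:run"
    shopware:
      image: "example/shopware"
      version: "x.y"
    redis:
      enabled: true
    opensearch:
      enabled: false
      name: "shopware-opensearch"
      image: "example/opensearch"
      version: "x"
      mem_reservation: "1g"
  volumes:
    data: "shopware_data"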

View File

@@ -88,16 +88,15 @@
 taiga:
 {% set service_name = TAIGA_FRONT_SERVICE %}
+{% set container_port = 80 %}
 {{ service_name }}:
   container_name: {{ TAIGA_CONTAINER }}-{{ service_name }}
   image: "{{TAIGA_DOCKER_IMAGE_FRONTEND}}:{{ TAIGA_VERSION }}"
 {% include 'roles/docker-container/templates/base.yml.j2' %}
-  healthcheck:
-    test: ["CMD-SHELL", "wget -qO- http://127.0.0.1/ >/dev/null || curl -fsS http://127.0.0.1/ >/dev/null"]
-    interval: 30s
-    timeout: 5s
-    retries: 5
-    start_period: 20s
+{% filter indent(4) %}
+{% include 'roles/docker-container/templates/healthcheck/http.yml.j2' %}
+{% endfilter %}
 {% include 'roles/docker-container/templates/networks.yml.j2' %}
 taiga:
 # volumes:
@@ -152,22 +151,21 @@
 taiga:
 {% set service_name = 'gateway' %}
+{% set container_port = 80 %}
 {{ service_name }}:
   container_name: {{ TAIGA_CONTAINER }}-{{ service_name }}
   image: nginx:alpine
   ports:
-    - "127.0.0.1:{{ ports.localhost.http[application_id] }}:80"
+    - "127.0.0.1:{{ ports.localhost.http[application_id] }}:{{ container_port }}"
   volumes:
     - {{ docker_repository_path }}taiga-gateway/taiga.conf:/etc/nginx/conf.d/default.conf
     - static-data:/taiga/static
     - media-data:/taiga/media
 {% include 'roles/docker-container/templates/base.yml.j2' %}
-  healthcheck:
-    test: ["CMD-SHELL", "wget -qO- http://127.0.0.1/ >/dev/null || curl -fsS http://127.0.0.1/ >/dev/null"]
-    interval: 30s
-    timeout: 5s
-    retries: 5
-    start_period: 20s
+{% filter indent(4) %}
+{% include 'roles/docker-container/templates/healthcheck/http.yml.j2' %}
+{% endfilter %}
 {% include 'roles/docker-container/templates/networks.yml.j2' %}
 taiga:
   depends_on:

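Both hunks swap the hand-written healthcheck for the shared roles/docker-container/templates/healthcheck/http.yml.j2 include, indented into place via {% filter indent(4) %}. That shared template is not part of this diff; given the container_port set right before each include, it presumably looks roughly like the following sketch (assumed content, not taken from the repository):

healthcheck:
  test: ["CMD-SHELL", "wget -qO- http://127.0.0.1:{{ container_port }}/ >/dev/null || curl -fsS http://127.0.0.1:{{ container_port }}/ >/dev/null"]
  interval: 30s
  timeout: 5s
  retries: 5
  start_period: 20s
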
View File

@@ -37,7 +37,7 @@ BUILTIN_FILTERS: Set[str] = {
"type_debug", "json_query", "mandatory", "hash", "checksum", "type_debug", "json_query", "mandatory", "hash", "checksum",
"lower", "upper", "capitalize", "unique", "dict2items", "items2dict", "lower", "upper", "capitalize", "unique", "dict2items", "items2dict",
"password_hash", "path_join", "product", "quote", "split", "ternary", "to_nice_yaml", "password_hash", "path_join", "product", "quote", "split", "ternary", "to_nice_yaml",
"tojson", "to_nice_json", "tojson", "to_nice_json", "human_to_bytes",
# Date/time-ish # Date/time-ish

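For reference, the human_to_bytes filter added to the whitelist here is Ansible's built-in filter of that name; it converts human-readable size strings to bytes using binary (1024-based) multipliers, for example:

{{ "2g" | human_to_bytes }}       # -> 2147483648
{{ "768 MB" | human_to_bytes }}   # -> 805306368
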
View File

@@ -0,0 +1,80 @@
# tests/unit/filter_plugins/test_node_autosize.py
import unittest
from unittest.mock import patch

# Module under test
import filter_plugins.node_autosize as na

try:
    from ansible.errors import AnsibleFilterError  # type: ignore
except Exception:
    AnsibleFilterError = Exception


class TestNodeAutosizeFilter(unittest.TestCase):
    """Unit tests for the node_autosize filter plugin."""

    def setUp(self):
        # Default parameters used by all tests
        self.applications = {"web-app-nextcloud": {"docker": {"services": {"whiteboard": {}}}}}
        self.application_id = "web-app-nextcloud"
        self.service_name = "whiteboard"
        # Patch get_app_conf (imported from module_utils.config_utils) inside the filter plugin
        self.patcher = patch("filter_plugins.node_autosize.get_app_conf")
        self.mock_get_app_conf = self.patcher.start()

    def tearDown(self):
        self.patcher.stop()

    def _set_mem_limit(self, value):
        """Helper: mock get_app_conf to return a specific mem_limit value."""
        def _fake_get_app_conf(applications, application_id, config_path, strict=True, default=None, **_kwargs):
            assert application_id == self.application_id
            assert config_path == f"docker.services.{self.service_name}.mem_limit"
            return value
        self.mock_get_app_conf.side_effect = _fake_get_app_conf

    # --- Tests for node_max_old_space_size (MB) ---

    def test_512m_below_minimum_raises(self):
        # mem_limit=512 MB < min_mb=768 -> must raise
        self._set_mem_limit("512m")
        with self.assertRaises(AnsibleFilterError):
            na.node_max_old_space_size(self.applications, self.application_id, self.service_name)

    def test_2g_caps_to_minimum_768(self):
        self._set_mem_limit("2g")
        mb = na.node_max_old_space_size(self.applications, self.application_id, self.service_name)
        self.assertEqual(mb, 768)  # 35% of 2g = 700 < 768 -> min wins

    def test_8g_uses_35_percent_without_hitting_hardcap(self):
        self._set_mem_limit("8g")
        mb = na.node_max_old_space_size(self.applications, self.application_id, self.service_name)
        self.assertEqual(mb, 2800)  # 8g -> 8000 MB * 0.35 = 2800

    def test_16g_hits_hardcap_3072(self):
        self._set_mem_limit("16g")
        mb = na.node_max_old_space_size(self.applications, self.application_id, self.service_name)
        self.assertEqual(mb, 3072)  # 35% of 16g = 5600, hardcap=3072

    def test_numeric_bytes_input(self):
        # 2 GiB in bytes (IEC): 2 * 1024 ** 3 = 2147483648
        self._set_mem_limit(2147483648)
        mb = na.node_max_old_space_size(self.applications, self.application_id, self.service_name)
        # 2 GiB ≈ 2147 MB; 35% => ~751, min 768 => 768
        self.assertEqual(mb, 768)

    def test_invalid_unit_raises_error(self):
        self._set_mem_limit("12x")  # invalid unit
        with self.assertRaises(AnsibleFilterError):
            na.node_max_old_space_size(self.applications, self.application_id, self.service_name)

    def test_missing_mem_limit_raises_error(self):
        self._set_mem_limit(None)
        with self.assertRaises(AnsibleFilterError):
            na.node_max_old_space_size(self.applications, self.application_id, self.service_name)


if __name__ == "__main__":
    unittest.main()

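The expectations pinned down above (decimal-MB parsing per the test comments, a 35% ratio, a 768 MB floor, a 3072 MB cap, and a hard error when mem_limit falls below the floor) suggest an implementation along the following lines. This is a sketch reconstructed from the tests, not the committed filter_plugins/node_autosize.py:

# filter_plugins/node_autosize.py — minimal sketch, reconstructed from the unit tests above
import re

from ansible.errors import AnsibleFilterError
from module_utils.config_utils import get_app_conf

_UNITS = {"k": 1e3, "m": 1e6, "g": 1e9, "t": 1e12}


def _mem_limit_to_mb(value):
    """Convert a Docker mem_limit (e.g. '512m', '2g', or plain bytes) to megabytes."""
    if value is None:
        raise AnsibleFilterError("mem_limit is not configured")
    if isinstance(value, (int, float)):
        return float(value) / 1e6
    match = re.fullmatch(r"\s*(\d+(?:\.\d+)?)\s*([kmgt])?b?\s*", str(value), re.IGNORECASE)
    if not match:
        raise AnsibleFilterError(f"Unparseable mem_limit: {value!r}")
    number, unit = match.groups()
    factor = _UNITS[unit.lower()] if unit else 1
    return float(number) * factor / 1e6


def node_max_old_space_size(applications, application_id, service_name,
                            ratio=0.35, min_mb=768, max_mb=3072):
    """Derive a safe Node.js --max-old-space-size (MB) from the service's mem_limit."""
    mem_limit = get_app_conf(
        applications, application_id,
        f"docker.services.{service_name}.mem_limit",
    )
    mem_mb = _mem_limit_to_mb(mem_limit)
    if mem_mb < min_mb:
        # Refuse to size a heap inside a container that is too small to hold it
        raise AnsibleFilterError(
            f"mem_limit of {service_name} ({mem_mb:.0f} MB) is below the required minimum of {min_mb} MB"
        )
    return int(min(max(mem_mb * ratio, min_mb), max_mb))


class FilterModule(object):
    def filters(self):
        return {"node_max_old_space_size": node_max_old_space_size}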
View File

@@ -0,0 +1,117 @@
import os
import sys
import unittest
from importlib import import_module

# Compute repo root (…/tests/unit/roles/web-app-desktop/lookup_plugins/docker_cards_grouped.py -> repo root)
_THIS_DIR = os.path.dirname(__file__)
_REPO_ROOT = os.path.abspath(os.path.join(_THIS_DIR, "../../../../.."))

# Add the lookup_plugins directory to sys.path so we can import the plugin as a plain module
_LOOKUP_DIR = os.path.join(_REPO_ROOT, "roles", "web-app-desktop", "lookup_plugins")
if _LOOKUP_DIR not in sys.path:
    sys.path.insert(0, _LOOKUP_DIR)

# Import the plugin module
plugin = import_module("docker_cards_grouped")
LookupModule = plugin.LookupModule

try:
    from ansible.errors import AnsibleError
except Exception:  # Fallback for environments without full Ansible
    class AnsibleError(Exception):
        pass


class TestDockerCardsGroupedLookup(unittest.TestCase):
    def setUp(self):
        self.lookup = LookupModule()
        # Menu categories with mixed-case names to verify case-insensitive sort
        self.menu_categories = {
            "B-Group": {"tags": ["b", "beta"]},
            "a-Group": {"tags": ["a", "alpha"]},
            "Zeta": {"tags": ["z"]},
        }
        # Cards with tags; one should end up uncategorized
        self.cards = [
            {"title": "Alpha Tool", "tags": ["a"]},
            {"title": "Beta Widget", "tags": ["beta"]},
            {"title": "Zed App", "tags": ["z"]},
            {"title": "Unmatched Thing", "tags": ["x"]},
        ]

    def _run(self, cards=None, menu_categories=None):
        result = self.lookup.run(
            [cards or self.cards, menu_categories or self.menu_categories]
        )
        # Plugin returns a single-element list containing the result dict
        self.assertIsInstance(result, list)
        self.assertEqual(len(result), 1)
        self.assertIsInstance(result[0], dict)
        return result[0]

    def test_categorization_and_uncategorized(self):
        data = self._run()
        self.assertIn("categorized", data)
        self.assertIn("uncategorized", data)
        categorized = data["categorized"]
        uncategorized = data["uncategorized"]
        # Each matching card is placed into the proper category
        self.assertIn("a-Group", categorized)
        self.assertIn("B-Group", categorized)
        self.assertIn("Zeta", categorized)
        titles_in_a = [c["title"] for c in categorized["a-Group"]]
        titles_in_b = [c["title"] for c in categorized["B-Group"]]
        titles_in_z = [c["title"] for c in categorized["Zeta"]]
        self.assertEqual(titles_in_a, ["Alpha Tool"])
        self.assertEqual(titles_in_b, ["Beta Widget"])
        self.assertEqual(titles_in_z, ["Zed App"])
        # Unmatched card should be in 'uncategorized'
        self.assertEqual(len(uncategorized), 1)
        self.assertEqual(uncategorized[0]["title"], "Unmatched Thing")

    def test_categories_sorted_alphabetically_case_insensitive(self):
        data = self._run()
        categorized = data["categorized"]
        # Verify order is alphabetical by key, case-insensitive
        keys = list(categorized.keys())
        self.assertEqual(keys, ["a-Group", "B-Group", "Zeta"])

    def test_multiple_tags_match_first_category_encountered(self):
        # A card matching multiple categories must be assigned to the first match in
        # menu_categories insertion order, not in alphabetical order. "Dual Match" has
        # both 'a' and 'b' tags; "B-Group" is inserted first, so the card belongs there
        # even though "a-Group" sorts before it in the returned structure.
        menu_categories = {
            "B-Group": {"tags": ["b"]},
            "a-Group": {"tags": ["a"]},
        }
        cards = [{"title": "Dual Match", "tags": ["a", "b"]}]
        # The plugin iterates menu_categories in insertion order and breaks on first match,
        # so this card should end up in "B-Group".
        data = self._run(cards=cards, menu_categories=menu_categories)
        categorized = data["categorized"]
        self.assertIn("B-Group", categorized)
        self.assertEqual([c["title"] for c in categorized["B-Group"]], ["Dual Match"])
        self.assertNotIn("a-Group", categorized)  # no card added there

    def test_missing_arguments_raises(self):
        with self.assertRaises(AnsibleError):
            self.lookup.run([])  # no args
        with self.assertRaises(AnsibleError):
            self.lookup.run([[]])  # only one arg


if __name__ == "__main__":
    unittest.main()
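
A lookup plugin satisfying these tests could look roughly like the sketch below; it is reconstructed from the assertions above, so the plugin actually shipped in roles/web-app-desktop/lookup_plugins/docker_cards_grouped.py may differ in detail:

# docker_cards_grouped.py — illustrative sketch derived from the test expectations
from ansible.errors import AnsibleError
from ansible.plugins.lookup import LookupBase


class LookupModule(LookupBase):
    def run(self, terms, variables=None, **kwargs):
        if len(terms) < 2:
            raise AnsibleError("docker_cards_grouped expects two arguments: cards, menu_categories")
        cards, menu_categories = terms[0], terms[1]

        categorized = {}
        uncategorized = []
        for card in cards:
            card_tags = set(card.get("tags", []))
            # Iterate categories in insertion order; the first tag match wins
            for category, meta in menu_categories.items():
                if card_tags & set(meta.get("tags", [])):
                    categorized.setdefault(category, []).append(card)
                    break
            else:
                uncategorized.append(card)

        # Sort category keys alphabetically, case-insensitively, for stable menu rendering
        categorized = {key: categorized[key] for key in sorted(categorized, key=str.lower)}
        return [{"categorized": categorized, "uncategorized": uncategorized}]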