Compare commits

...

8 Commits

Author SHA1 Message Date
d2dc2eab5f web-app-bluesky: refactor role, add Cloudflare DNS integration, split tasks
Changes: add AppView port; add CSP whitelist; new tasks (01_pds, 02_social_app, 03_dns); switch templates to BLUESKY_* vars; update docker-compose and env; TCP healthcheck; remove admin_password from schema.

Conversation context: https://chatgpt.com/share/68b85276-e0ec-800f-90ec-480a1d528593
2025-09-03 16:37:35 +02:00
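
The switch to a TCP healthcheck replaces a process-scanning probe with the shared healthcheck/tcp.yml.j2 include (see the docker-compose template diff below). The template body itself is not part of this diff, so the following is only a minimal sketch of what such an include plausibly renders to; the nc probe and timings are assumptions:

    healthcheck:
      # Hypothetical rendering of healthcheck/tcp.yml.j2 — only the include
      # name appears in this diff; probe command and timings are assumptions.
      test: ["CMD-SHELL", "nc -z 127.0.0.1 {{ container_port }} || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3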
a1130e33d7 web-app-chess: refactor runtime & entrypoint
- Move entrypoint to files/ and deploy via copy
- Parameterize APP_KEY_FILE, data dir, and entrypoint paths
- Require explicit PORT/PG envs (remove fallbacks)
- Drop stray header from config/main.yml
- Dockerfile: use templated data dir & entrypoint; keep node user
- Compose: set custom image, adjust volume mapping
- env: derive APP_SCHEME from WEB_PROTOCOL; NODE_ENV from ENVIRONMENT
- tasks: add 01_core and simplify main to include it

Ref: https://chatgpt.com/share/68b851c5-4dd8-800f-8e9e-22b985597b8f
2025-09-03 16:34:04 +02:00
df122905eb mailu: include base defaults for oletools (env_file/LD_PRELOAD)
Add base include to oletools service so it inherits env_file (LD_PRELOAD=/usr/lib/libhardened_malloc.so) and other defaults. Fixes crash: PermissionError: '/proc/cpuinfo' during hardened_malloc compatibility probe when LD_PRELOAD was absent. Aligns oletools with other Mailu services.

Refs: ChatGPT discussion – https://chatgpt.com/share/68b837ba-c9cc-800f-b5d9-62b60d6fafd9
2025-09-03 14:42:50 +02:00
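
The fix works through Compose template inheritance: once oletools pulls in the shared base include, it receives the stack-wide env_file and restart policy instead of declaring only a restart policy of its own. A minimal sketch of the defaults such a base include would contribute, assuming key names along these lines (the base template's actual body is not shown in this diff; LD_PRELOAD is quoted from the commit message):

    # Sketch of defaults contributed by base.yml.j2 — filenames assumed.
    restart: {{ DOCKER_RESTART_POLICY }}
    env_file:
      - mailu.env   # assumed filename; provides LD_PRELOAD=/usr/lib/libhardened_malloc.so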
d093a22d61 Added correct CSP for JIRA 2025-09-03 11:35:24 +02:00
5e550ce3a3 sys-ctl-rpr-docker-soft: switch to STRICT label mode and adapt tests
- script.py now resolves docker-compose project and working_dir strictly from container labels
- removed container-name fallback logic
- adjusted sys-ctl-hlth-docker-container to include sys-ctl-rpr-docker-soft
- cleaned up sys-svc-docker dependencies
- updated unit tests to mock docker inspect and os.path.isfile for STRICT mode

Conversation: https://chatgpt.com/share/68b80927-b800-800f-a909-0fe8d110fd0e
2025-09-03 11:24:14 +02:00
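
STRICT mode leans on metadata that docker-compose writes itself: every container it creates carries project labels, so the script reads them back instead of guessing from container names. The two labels queried by script.py look like this on a compose-managed container (the label keys are the real ones from the diff; the values are illustrative):

    # Illustrative `docker inspect` label set on a compose-managed container.
    Labels:
      com.docker.compose.project: "app1"
      com.docker.compose.project.working_dir: "/opt/docker/app1"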
0ada12e3ca Enabled rpr service via failed health check instead of timer 2025-09-03 10:46:46 +02:00
1a5ce4a7fa web-app-bookwyrm, web-app-confluence:
- Fix BookWyrm email SSL/TLS handling (use ternary without 'not' for clarity)
- Add truststore_enabled flag in Confluence config and vars
- Wire JVM_SUPPORT_RECOMMENDED_ARGS to disable UPM signature check if truststore is disabled
- Add placeholder style.css.j2 for Confluence

See conversation: https://chatgpt.com/share/68b80024-7100-800f-a2fe-ba8b9f5cec05
2025-09-03 10:45:41 +02:00
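
The BookWyrm change fixes a Jinja precedence trap: filters bind tighter than `not`, so the old {{ not SYSTEM_EMAIL.TLS | ternary('true','false') }} negated the ternary's string result, and since both 'true' and 'false' are truthy strings it always evaluated to False. Swapping the ternary arms expresses the negation cleanly, as the env template diff further below shows:

    # `not X | ternary(a, b)` parses as `not (X | ternary(a, b))`
    EMAIL_USE_TLS: "{{ SYSTEM_EMAIL.TLS | ternary('true','false') }}"
    EMAIL_USE_SSL: "{{ SYSTEM_EMAIL.TLS | ternary('false','true') }}"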
a9abb3ce5d Added unsafe-eval CSP to Jira 2025-09-03 09:43:07 +02:00
35 changed files with 423 additions and 196 deletions

View File

@@ -37,7 +37,6 @@ SYS_SCHEDULE_CLEANUP_FAILED_BACKUPS: "*-*-* 12:00:00"
 ### Schedule for repair services
 SYS_SCHEDULE_REPAIR_BTRFS_AUTO_BALANCER: "Sat *-*-01..07 00:00:00"  # Execute btrfs auto balancer every first Saturday of a month
-SYS_SCHEDULE_REPAIR_DOCKER_SOFT: "*-*-* {{ HOURS_SERVER_AWAKE }}:30:00"  # Heal unhealthy docker instances once per hour
 SYS_SCHEDULE_REPAIR_DOCKER_HARD: "Sun *-*-* 08:00:00"  # Restart docker instances every Sunday at 8:00 AM
 ### Schedule for backup tasks

View File

@@ -72,6 +72,7 @@ ports:
     web-svc-logout: 8048
     web-app-bookwyrm: 8049
     web-app-chess: 8050
+    web-app-bluesky_view: 8051
     web-app-bigbluebutton: 48087  # This port is predefined by bbb. @todo Try to change this to a 8XXX port
   public:
     # The following ports should be changed to 22 on the subdomain via stream mapping

View File

@@ -3,9 +3,14 @@
     name: sys-ctl-alm-compose
   when: run_once_sys_ctl_alm_compose is not defined

+- name: Include dependency 'sys-ctl-rpr-docker-soft'
+  include_role:
+    name: sys-ctl-rpr-docker-soft
+  when: run_once_sys_ctl_rpr_docker_soft is not defined
+
 - include_role:
     name: sys-service
   vars:
     system_service_timer_enabled: true
     system_service_on_calendar: "{{ SYS_SCHEDULE_HEALTH_DOCKER_CONTAINER }}"
-    system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
+    system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }} {{ SYS_SERVICE_REPAIR_DOCKER_SOFT }}"

View File

@@ -2,7 +2,7 @@
   include_role:
     name: sys-ctl-alm-compose
   when: run_once_sys_ctl_alm_compose is not defined

 - include_role:
     name: sys-service
   vars:

View File

@@ -1,15 +1,26 @@
 #!/usr/bin/env python3
 """
 Restart Docker-Compose configurations with exited or unhealthy containers.
-This version receives the *manipulation services* via argparse (no Jinja).
+
+STRICT mode: Resolve the Compose project exclusively via Docker labels
+(com.docker.compose.project and com.docker.compose.project.working_dir).
+No container-name fallback. If labels are missing or Docker is unavailable,
+the script records an error for that container.
+
+All shell interactions that matter for tests go through print_bash()
+so they can be monkeypatched in unit tests.
 """

 import subprocess
 import time
 import os
 import argparse
-from typing import List
+from typing import List, Optional, Tuple
+
+
+# ---------------------------
+# Shell helpers
+# ---------------------------

 def bash(command: str) -> List[str]:
     print(command)
     process = subprocess.Popen(
@@ -30,31 +41,45 @@ def list_to_string(lst: List[str]) -> str:

 def print_bash(command: str) -> List[str]:
+    """
+    Wrapper around bash() that echoes combined output for easier debugging
+    and can be monkeypatched in tests.
+    """
     output = bash(command)
     if output:
         print(list_to_string(output))
     return output


-def find_docker_compose_file(directory: str) -> str | None:
+# ---------------------------
+# Filesystem / compose helpers
+# ---------------------------
+
+def find_docker_compose_file(directory: str) -> Optional[str]:
+    """
+    Search for docker-compose.yml beneath a directory.
+    """
     for root, _, files in os.walk(directory):
         if "docker-compose.yml" in files:
             return os.path.join(root, "docker-compose.yml")
     return None


-def detect_env_file(project_path: str) -> str | None:
+def detect_env_file(project_path: str) -> Optional[str]:
     """
-    Return the path to a Compose env file if present (.env preferred, fallback to env).
+    Return the path to a Compose env file if present (.env preferred, fallback to .env/env).
     """
-    candidates = [os.path.join(project_path, ".env"), os.path.join(project_path, ".env", "env")]
+    candidates = [
+        os.path.join(project_path, ".env"),
+        os.path.join(project_path, ".env", "env"),
+    ]
     for candidate in candidates:
         if os.path.isfile(candidate):
             return candidate
     return None


-def compose_cmd(subcmd: str, project_path: str, project_name: str | None = None) -> str:
+def compose_cmd(subcmd: str, project_path: str, project_name: Optional[str] = None) -> str:
     """
     Build a docker-compose command string with optional -p and --env-file if present.
     Example: compose_cmd("restart", "/opt/docker/foo", "foo")
@@ -69,6 +94,10 @@ def compose_cmd(subcmd: str, project_path: str, project_name: str | None = None)
     return " ".join(parts)


+# ---------------------------
+# Business logic
+# ---------------------------
+
 def normalize_services_arg(raw: List[str] | None, raw_str: str | None) -> List[str]:
     """
     Accept either:
@@ -78,7 +107,6 @@ def normalize_services_arg(raw: List[str] | None, raw_str: str | None) -> List[s
     if raw:
         return [s for s in raw if s.strip()]
     if raw_str:
-        # split on comma or whitespace
         parts = [p.strip() for chunk in raw_str.split(",") for p in chunk.split()]
         return [p for p in parts if p]
     return []
@@ -87,7 +115,7 @@ def normalize_services_arg(raw: List[str] | None, raw_str: str | None) -> List[s
 def wait_while_manipulation_running(
     services: List[str],
     waiting_time: int = 600,
-    timeout: int | None = None,
+    timeout: Optional[int] = None,
 ) -> None:
     """
     Wait until none of the given services are active anymore.
@@ -107,7 +135,6 @@ def wait_while_manipulation_running(
             break

         if any_active:
-            # Check timeout
             elapsed = time.time() - start
             if timeout and elapsed >= timeout:
                 print(f"Timeout ({timeout}s) reached while waiting for services. Continuing anyway.")
@@ -119,7 +146,30 @@ def wait_while_manipulation_running(
             break


-def main(base_directory: str, manipulation_services: List[str], timeout: int | None) -> int:
+def get_compose_project_info(container: str) -> Tuple[str, str]:
+    """
+    Resolve project name and working dir from Docker labels.
+    STRICT: Raises RuntimeError if labels are missing/unreadable.
+    """
+    out_project = print_bash(
+        f"docker inspect -f '{{{{ index .Config.Labels \"com.docker.compose.project\" }}}}' {container}"
+    )
+    out_workdir = print_bash(
+        f"docker inspect -f '{{{{ index .Config.Labels \"com.docker.compose.project.working_dir\" }}}}' {container}"
+    )
+
+    project = out_project[0].strip() if out_project else ""
+    workdir = out_workdir[0].strip() if out_workdir else ""
+
+    if not project:
+        raise RuntimeError(f"No compose project label found for container {container}")
+    if not workdir:
+        raise RuntimeError(f"No compose working_dir label found for container {container}")
+
+    return project, workdir
+
+
+def main(base_directory: str, manipulation_services: List[str], timeout: Optional[int]) -> int:
     errors = 0
     wait_while_manipulation_running(manipulation_services, waiting_time=600, timeout=timeout)
@@ -131,43 +181,50 @@ def main(base_directory: str, manipulation_services: List[str], timeout: int | N
     )
     failed_containers = unhealthy_container_names + exited_container_names

-    unfiltered_failed_docker_compose_repositories = [
-        container.split("-")[0] for container in failed_containers
-    ]
-    filtered_failed_docker_compose_repositories = list(
-        dict.fromkeys(unfiltered_failed_docker_compose_repositories)
-    )
-
-    for repo in filtered_failed_docker_compose_repositories:
-        compose_file_path = find_docker_compose_file(os.path.join(base_directory, repo))
-
-        if compose_file_path:
-            print("Restarting unhealthy container in:", compose_file_path)
-            project_path = os.path.dirname(compose_file_path)
-            try:
-                # restart with optional --env-file and -p
-                print_bash(compose_cmd("restart", project_path, repo))
-            except Exception as e:
-                if "port is already allocated" in str(e):
-                    print("Detected port allocation problem. Executing recovery steps...")
-                    # down (no -p needed), then engine restart, then up -d with -p
-                    print_bash(compose_cmd("down", project_path))
-                    print_bash("systemctl restart docker")
-                    print_bash(compose_cmd("up -d", project_path, repo))
-                else:
-                    print("Unhandled exception during restart:", e)
-                    errors += 1
-        else:
-            print("Error: Docker Compose file not found for:", repo)
-            errors += 1
+    for container in failed_containers:
+        try:
+            project, workdir = get_compose_project_info(container)
+        except Exception as e:
+            print(f"Error reading compose labels for {container}: {e}")
+            errors += 1
+            continue
+
+        compose_file_path = os.path.join(workdir, "docker-compose.yml")
+        if not os.path.isfile(compose_file_path):
+            # As STRICT: we only trust labels; if file not there, error out.
+            print(f"Error: docker-compose.yml not found at {compose_file_path} for container {container}")
+            errors += 1
+            continue
+
+        project_path = os.path.dirname(compose_file_path)
+        try:
+            print("Restarting unhealthy container in:", compose_file_path)
+            print_bash(compose_cmd("restart", project_path, project))
+        except Exception as e:
+            if "port is already allocated" in str(e):
+                print("Detected port allocation problem. Executing recovery steps...")
+                try:
+                    print_bash(compose_cmd("down", project_path))
+                    print_bash("systemctl restart docker")
+                    print_bash(compose_cmd("up -d", project_path, project))
+                except Exception as e2:
+                    print("Unhandled exception during recovery:", e2)
+                    errors += 1
+            else:
+                print("Unhandled exception during restart:", e)
+                errors += 1

     print("Finished restart procedure.")
     return errors


+# ---------------------------
+# CLI
+# ---------------------------
+
 if __name__ == "__main__":
     parser = argparse.ArgumentParser(
-        description="Restart Docker-Compose configurations with exited or unhealthy containers."
+        description="Restart Docker-Compose configurations with exited or unhealthy containers (STRICT label mode)."
     )
     parser.add_argument(
         "--manipulation",
@@ -184,12 +241,12 @@ if __name__ == "__main__":
         "--timeout",
         type=int,
         default=60,
-        help="Maximum time in seconds to wait for manipulation services before continuing.(Default 1min)",
+        help="Maximum time in seconds to wait for manipulation services before continuing. (Default 1min)",
     )
     parser.add_argument(
         "base_directory",
         type=str,
-        help="Base directory where Docker Compose configurations are located.",
+        help="(Unused in STRICT mode) Base directory where Docker Compose configurations are located.",
     )

     args = parser.parse_args()
     services = normalize_services_arg(args.manipulation, args.manipulation_string)

View File

@@ -6,8 +6,6 @@
 - include_role:
     name: sys-service
   vars:
-    system_service_on_calendar: "{{ SYS_SCHEDULE_REPAIR_DOCKER_SOFT }}"
-    system_service_timer_enabled: true
     system_service_tpl_on_failure: "{{ SYS_SERVICE_ON_FAILURE_COMPOSE }}"
     system_service_tpl_exec_start_pre: "/usr/bin/python {{ PATH_SYSTEM_LOCK_SCRIPT }} {{ SYS_SERVICE_GROUP_MANIPULATION | join(' ') }} --ignore {{ SYS_SERVICE_GROUP_CLEANUP| join(' ') }} {{ SYS_SERVICE_REPAIR_DOCKER_SOFT }} --timeout '{{ SYS_TIMEOUT_DOCKER_RPR_SOFT }}'"
     system_service_tpl_exec_start: >

View File

@@ -17,14 +17,8 @@ When enabled via `MODE_CLEANUP` or `MODE_RESET`, it will automatically prune unu
   Installs Docker and Docker Compose via the system package manager.

 - **Integrated Dependencies**
-  Includes backup, repair, and health check sub-roles:
-  - `sys-ctl-bkp-docker-2-loc`
-  - `user-administrator`
-  - `sys-ctl-hlth-docker-container`
-  - `sys-ctl-hlth-docker-volumes`
-  - `sys-ctl-rpr-docker-soft`
-  - `sys-ctl-rpr-docker-hard`
+  Includes backup, repair, and health check sub-roles

 - **Cleanup & Reset Modes**
   - `MODE_CLEANUP`: Removes unused Docker containers, networks, images, and volumes.
   - `MODE_RESET`: Performs cleanup and restarts the Docker service.

View File

@@ -21,6 +21,5 @@
     - sys-ctl-bkp-docker-2-loc
     - sys-ctl-hlth-docker-container
     - sys-ctl-hlth-docker-volumes
-    - sys-ctl-rpr-docker-soft
     - sys-ctl-rpr-docker-hard
   when: SYS_SVC_DOCKER_LOAD_SERVICES | bool

View File

@@ -1,19 +1,38 @@
-images:
-  pds: "ghcr.io/bluesky-social/pds:latest"
+pds:
+  version: "latest"
 features:
   matomo: true
   css: true
   desktop: true
-  central_database: true
+  central_database: false
   logout: true
 server:
   domains:
     canonical:
       web: "bskyweb.{{ PRIMARY_DOMAIN }}"
       api: "bluesky.{{ PRIMARY_DOMAIN }}"
+      view: "view.bluesky.{{ PRIMARY_DOMAIN }}"
+  csp:
+    whitelist:
+      connect-src:
+        - "{{ WEB_PROTOCOL }}://{{ BLUESKY_API_DOMAIN }}"
+        - https://plc.directory
+        - https://bsky.social
+        - https://api.bsky.app
+        - https://public.api.bsky.app
+        - https://events.bsky.app
+        - https://statsigapi.net
+        - https://ip.bsky.app
+        - wss://bsky.network
+        - wss://*.bsky.app
 docker:
   services:
     database:
-      enabled: true
+      enabled: false
+    web:
+      enabled: true  # @see https://github.com/bluesky-social/social-app
+    view:
+      enabled: false
+    pds:
+      image: "ghcr.io/bluesky-social/pds"
+      version: "latest"
+  volumes:
+    pds_data: "pds_data"

View File

@@ -7,7 +7,3 @@ credentials:
     description: "PLC rotation key in hex format (32 bytes)"
     algorithm: "sha256"
     validation: "^[a-f0-9]{64}$"
-  admin_password:
-    description: "Initial admin password for Bluesky PDS"
-    algorithm: "plain"
-    validation: "^.{12,}$"

View File

@@ -0,0 +1,30 @@
# The following lines should be removed when the following issue is closed:
# https://github.com/bluesky-social/pds/issues/52

- name: Download pdsadmin tarball
  get_url:
    url: "https://github.com/lhaig/pdsadmin/releases/download/v1.0.0-dev/pdsadmin_Linux_x86_64.tar.gz"
    dest: "{{ BLUESKY_PDSADMIN_TMP_TAR }}"
    mode: '0644'
  notify:
    - docker compose up
    - docker compose build

- name: Create {{ BLUESKY_PDSADMIN_DIR }}
  file:
    path: "{{ BLUESKY_PDSADMIN_DIR }}"
    state: directory
    mode: '0755'

- name: Extract pdsadmin tarball
  unarchive:
    src: "{{ BLUESKY_PDSADMIN_TMP_TAR }}"
    dest: "{{ BLUESKY_PDSADMIN_DIR }}"
    remote_src: yes
    mode: '0755'

- name: Ensure pdsadmin is executable
  file:
    path: "{{ BLUESKY_PDSADMIN_FILE }}"
    mode: '0755'
    state: file

View File

@@ -0,0 +1,8 @@
- name: clone social app repository
  git:
    repo: "https://github.com/bluesky-social/social-app.git"
    dest: "{{ BLUESKY_SOCIAL_APP_DIR }}"
    version: "main"
  notify:
    - docker compose up
    - docker compose build

View File

@@ -0,0 +1,73 @@
---
# Creates Cloudflare DNS records for Bluesky:
# - PDS/API host (A/AAAA)
# - Handle TXT verification (_atproto)
# - Optional Web UI host (A/AAAA)
# - Optional custom AppView host (A/AAAA)
#
# Requirements:
#   DNS_PROVIDER == 'cloudflare'
#   CLOUDFLARE_API_TOKEN set
#
# Inputs (inventory/vars):
#   BLUESKY_API_DOMAIN, BLUESKY_WEB_DOMAIN, BLUESKY_VIEW_DOMAIN
#   BLUESKY_WEB_ENABLED (bool), BLUESKY_VIEW_ENABLED (bool)
#   PRIMARY_DOMAIN
#   networks.internet.ip4 (and optionally networks.internet.ip6)

- name: "DNS (Cloudflare) for Bluesky base records"
  include_role:
    name: sys-dns-cloudflare-records
  when: DNS_PROVIDER | lower == 'cloudflare'
  vars:
    cloudflare_records:
      # 1) PDS / API host
      - type: A
        zone: "{{ BLUESKY_API_DOMAIN | to_zone }}"
        name: "{{ BLUESKY_API_DOMAIN }}"
        content: "{{ networks.internet.ip4 }}"
        proxied: false

      - type: AAAA
        zone: "{{ BLUESKY_API_DOMAIN | to_zone }}"
        name: "{{ BLUESKY_API_DOMAIN }}"
        content: "{{ networks.internet.ip6 | default('') }}"
        proxied: false
        state: "{{ (networks.internet.ip6 is defined and (networks.internet.ip6 | string) | length > 0) | ternary('present','absent') }}"

      # 2) Handle verification for primary handle (Apex)
      - type: TXT
        zone: "{{ PRIMARY_DOMAIN | to_zone }}"
        name: "_atproto.{{ PRIMARY_DOMAIN }}"
        value: "did=did:web:{{ BLUESKY_API_DOMAIN }}"

      # 3) Web UI host (only if enabled)
      - type: A
        zone: "{{ BLUESKY_WEB_DOMAIN | to_zone }}"
        name: "{{ BLUESKY_WEB_DOMAIN }}"
        content: "{{ networks.internet.ip4 }}"
        proxied: true
        state: "{{ (BLUESKY_WEB_ENABLED | bool) | ternary('present','absent') }}"

      - type: AAAA
        zone: "{{ BLUESKY_WEB_DOMAIN | to_zone }}"
        name: "{{ BLUESKY_WEB_DOMAIN }}"
        content: "{{ networks.internet.ip6 | default('') }}"
        proxied: true
        state: "{{ ((BLUESKY_WEB_ENABLED | bool) and (networks.internet.ip6 is defined) and ((networks.internet.ip6 | string) | length > 0)) | ternary('present','absent') }}"

      # 4) Custom AppView host (only if you actually run one and it's not api.bsky.app)
      - type: A
        zone: "{{ BLUESKY_VIEW_DOMAIN | to_zone }}"
        name: "{{ BLUESKY_VIEW_DOMAIN }}"
        content: "{{ networks.internet.ip4 }}"
        proxied: false
        state: "{{ ((BLUESKY_VIEW_ENABLED | bool) and (BLUESKY_VIEW_DOMAIN != 'api.bsky.app')) | ternary('present','absent') }}"

      - type: AAAA
        zone: "{{ BLUESKY_VIEW_DOMAIN | to_zone }}"
        name: "{{ BLUESKY_VIEW_DOMAIN }}"
        content: "{{ networks.internet.ip6 | default('') }}"
        proxied: false
        state: "{{ ((BLUESKY_VIEW_ENABLED | bool) and (BLUESKY_VIEW_DOMAIN != 'api.bsky.app') and (networks.internet.ip6 is defined) and ((networks.internet.ip6 | string) | length > 0)) | ternary('present','absent') }}"

View File

@@ -1,48 +1,39 @@
 - name: "include docker-compose role"
   include_role:
     name: docker-compose
+  vars:
+    docker_compose_flush_handlers: false

-- name: "include role sys-stk-front-proxy for {{ application_id }}"
+- name: "Include front proxy for {{ BLUESKY_API_DOMAIN }}:{{ BLUESKY_API_PORT }}"
   include_role:
     name: sys-stk-front-proxy
   vars:
-    domain: "{{ item.domain }}"
-    http_port: "{{ item.http_port }}"
-  loop:
-    - { domain: "{{domains[application_id].api", http_port: "{{ports.localhost.http['web-app-bluesky_api']}}" }
-    - { domain: "{{domains[application_id].web}}", http_port: "{{ports.localhost.http['web-app-bluesky_web']}}" }
+    domain: "{{ BLUESKY_API_DOMAIN }}"
+    http_port: "{{ BLUESKY_API_PORT }}"

-# The following lines should be removed when the following issue is closed:
-# https://github.com/bluesky-social/pds/issues/52
+- name: "Include front proxy for {{ BLUESKY_WEB_DOMAIN }}:{{ BLUESKY_WEB_PORT }}"
+  include_role:
+    name: sys-stk-front-proxy
+  vars:
+    domain: "{{ BLUESKY_WEB_DOMAIN }}"
+    http_port: "{{ BLUESKY_WEB_PORT }}"
+  when: BLUESKY_WEB_ENABLED | bool

-- name: Download pdsadmin tarball
-  get_url:
-    url: "https://github.com/lhaig/pdsadmin/releases/download/v1.0.0-dev/pdsadmin_Linux_x86_64.tar.gz"
-    dest: "{{pdsadmin_temporary_tar_path}}"
-    mode: '0644'
+- name: "Include front proxy for {{ BLUESKY_VIEW_DOMAIN }}:{{ BLUESKY_VIEW_PORT }}"
+  include_role:
+    name: sys-stk-front-proxy
+  vars:
+    domain: "{{ BLUESKY_VIEW_DOMAIN }}"
+    http_port: "{{ BLUESKY_VIEW_PORT }}"
+  when: BLUESKY_VIEW_ENABLED | bool

-- name: Create {{pdsadmin_folder_path}}
-  file:
-    path: "{{pdsadmin_folder_path}}"
-    state: directory
-    mode: '0755'
+- name: "Execute PDS routines"
+  ansible.builtin.include_tasks: "01_pds.yml"

-- name: Extract pdsadmin tarball
-  unarchive:
-    src: "{{pdsadmin_temporary_tar_path}}"
-    dest: "{{pdsadmin_folder_path}}"
-    remote_src: yes
-    mode: '0755'
+- name: "Execute Social App routines"
+  ansible.builtin.include_tasks: "02_social_app.yml"
+  when: BLUESKY_WEB_ENABLED | bool

-- name: Ensure pdsadmin is executable
-  file:
-    path: "{{pdsadmin_file_path}}"
-    mode: '0755'
-    state: file
+- name: "DNS for Bluesky"
+  include_tasks: "03_dns.yml"
+  when: DNS_PROVIDER | lower == 'cloudflare'

-- name: clone social app repository
-  git:
-    repo: "https://github.com/bluesky-social/social-app.git"
-    dest: "{{social_app_path}}"
-    version: "main"
-  notify: docker compose up

View File

@@ -3,40 +3,32 @@
   pds:
 {% set container_port = 3000 %}
 {% set container_healthcheck = 'xrpc/_health' %}
-    image: "{{ applications | get_app_conf(application_id, 'images.pds', True) }}"
+    image: "{{ BLUESKY_PDS_IMAGE }}:{{ BLUESKY_PDS_VERSION }}"
 {% include 'roles/docker-container/templates/base.yml.j2' %}
     volumes:
-      - pds_data:/opt/pds
-      - {{pdsadmin_file_path}}:/usr/local/bin/pdsadmin:ro
+      - pds_data:{{ BLUESKY_PDS_DATA_DIR }}
+      - {{ BLUESKY_PDSADMIN_FILE }}:/usr/local/bin/pdsadmin:ro
     ports:
-      - "127.0.0.1:{{ports.localhost.http['web-app-bluesky_api']}}:{{ container_port }}"
+      - "127.0.0.1:{{ BLUESKY_API_PORT }}:{{ container_port }}"
 {% include 'roles/docker-container/templates/healthcheck/wget.yml.j2' %}
 {% include 'roles/docker-container/templates/networks.yml.j2' %}

-# Deactivated for the moment @see https://github.com/bluesky-social/social-app
+{% if BLUESKY_WEB_ENABLED %}
+{% set container_port = 8100 %}
   web:
     command: ["bskyweb","serve"]
     build:
-      context: "{{ social_app_path }}"
+      context: "{{ BLUESKY_SOCIAL_APP_DIR }}"
       dockerfile: Dockerfile
-      # It doesn't compile yet with these parameters. @todo Fix it
-      args:
-        REACT_APP_PDS_URL: "{{ WEB_PROTOCOL }}://{{domains[application_id].api}}"  # PDS URL
-        REACT_APP_API_URL: "{{ WEB_PROTOCOL }}://{{domains[application_id].api}}"  # PDS API URL
-        REACT_APP_SITE_NAME: "{{ PRIMARY_DOMAIN | upper }} - Bluesky"
-        REACT_APP_SITE_DESCRIPTION: "Decentral Social "
     pull_policy: never
     ports:
-      - "127.0.0.1:{{ports.localhost.http['web-app-bluesky_web']}}:8100"
-    healthcheck:
-      test: ["CMD", "sh", "-c", "for pid in $(ls /proc | grep -E '^[0-9]+$'); do if cat /proc/$pid/cmdline 2>/dev/null | grep -q 'bskywebserve'; then exit 0; fi; done; exit 1"]
-      interval: 30s
-      timeout: 10s
-      retries: 3
+      - "127.0.0.1:{{ BLUESKY_WEB_PORT }}:{{ container_port }}"
+{% include 'roles/docker-container/templates/healthcheck/tcp.yml.j2' %}
 {% include 'roles/docker-container/templates/networks.yml.j2' %}
+{% endif %}

{% include 'roles/docker-compose/templates/volumes.yml.j2' %}
   pds_data:
+    name: {{ BLUESKY_PDS_DATA_VOLUME }}

{% include 'roles/docker-compose/templates/networks.yml.j2' %}

View File

@@ -1,21 +1,21 @@
-PDS_HOSTNAME="{{domains[application_id].api}}"
-PDS_ADMIN_EMAIL="{{ applications.bluesky.users.administrator.email}}"
-PDS_SERVICE_DID="did:web:{{domains[application_id].api}}"
+PDS_HOSTNAME="{{ BLUESKY_API_DOMAIN }}"
+PDS_ADMIN_EMAIL="{{ BLUESKY_ADMIN_EMAIL }}"
+PDS_SERVICE_DID="did:web:{{ BLUESKY_API_DOMAIN }}"
 # See https://mattdyson.org/blog/2024/11/self-hosting-bluesky-pds/
 PDS_SERVICE_HANDLE_DOMAINS=".{{ PRIMARY_DOMAIN }}"
-PDS_JWT_SECRET="{{ bluesky_jwt_secret }}"
-PDS_ADMIN_PASSWORD="{{bluesky_admin_password}}"
-PDS_PLC_ROTATION_KEY_K256_PRIVATE_KEY_HEX="{{ bluesky_rotation_key }}"
+PDS_JWT_SECRET="{{ BLUESKY_JWT_SECRET }}"
+PDS_ADMIN_PASSWORD="{{ BLUESKY_ADMIN_PASSWORD }}"
+PDS_PLC_ROTATION_KEY_K256_PRIVATE_KEY_HEX="{{ BLUESKY_ROTATION_KEY }}"
 PDS_CRAWLERS=https://bsky.network
 PDS_EMAIL_SMTP_URL=smtps://{{ users['no-reply'].email }}:{{ users['no-reply'].mailu_token }}@{{ SYSTEM_EMAIL.HOST }}:{{ SYSTEM_EMAIL.PORT }}/
 PDS_EMAIL_FROM_ADDRESS={{ users['no-reply'].email }}
-LOG_ENABLED=true
-PDS_BLOBSTORE_DISK_LOCATION=/opt/pds/blocks
-PDS_DATA_DIRECTORY: /opt/pds
-PDS_BLOB_UPLOAD_LIMIT: 52428800
+LOG_ENABLED={{ MODE_DEBUG | string | lower }}
+PDS_BLOBSTORE_DISK_LOCATION={{ BLUESKY_PDS_BLOBSTORE_LOCATION }}
+PDS_DATA_DIRECTORY={{ BLUESKY_PDS_DATA_DIR }}
+PDS_BLOB_UPLOAD_LIMIT=52428800
 PDS_DID_PLC_URL=https://plc.directory
-PDS_BSKY_APP_VIEW_URL=https://{{domains[application_id].web}}
-PDS_BSKY_APP_VIEW_DID=did:web:{{domains[application_id].web}}
+PDS_BSKY_APP_VIEW_URL={{ BLUESKY_VIEW_URL }}
+PDS_BSKY_APP_VIEW_DID={{ BLUESKY_VIEW_DID }}
 PDS_REPORT_SERVICE_URL=https://mod.bsky.app
 PDS_REPORT_SERVICE_DID=did:plc:ar7c4by46qjdydhdevvrndac

View File

@@ -1,11 +1,45 @@
-application_id: "web-app-bluesky"
-social_app_path: "{{ docker_compose.directories.services }}/social-app"
+# General
+application_id: "web-app-bluesky"
+
+## Bluesky
+
+## Social App
+BLUESKY_SOCIAL_APP_DIR: "{{ docker_compose.directories.services }}/social-app"

+## PDS
 # This should be removed when the following issue is closed:
 # https://github.com/bluesky-social/pds/issues/52
-pdsadmin_folder_path: "{{ docker_compose.directories.volumes }}/pdsadmin"
-pdsadmin_file_path: "{{pdsadmin_folder_path}}/pdsadmin"
-pdsadmin_temporary_tar_path: "/tmp/pdsadmin.tar.gz"
-bluesky_jwt_secret: "{{ applications | get_app_conf(application_id, 'credentials.jwt_secret') }}"
-bluesky_admin_password: "{{ applications | get_app_conf(application_id, 'credentials.admin_password') }}"
-bluesky_rotation_key: "{{ applications | get_app_conf(application_id, 'credentials.plc_rotation_key_k256_private_key_hex') }}"
+BLUESKY_PDSADMIN_DIR: "{{ [ docker_compose.directories.volumes, 'pdsadmin' ] | path_join }}"
+BLUESKY_PDSADMIN_FILE: "{{ [ BLUESKY_PDSADMIN_DIR, 'pdsadmin' ] | path_join }}"
+BLUESKY_PDSADMIN_TMP_TAR: "/tmp/pdsadmin.tar.gz"
+BLUESKY_PDS_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.pds.image') }}"
+BLUESKY_PDS_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.pds.version') }}"
+BLUESKY_PDS_DATA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.pds_data') }}"
+BLUESKY_PDS_DATA_DIR: "/opt/pds"
+BLUESKY_PDS_BLOBSTORE_LOCATION: "{{ [ BLUESKY_PDS_DATA_DIR, 'blocks' ] | path_join }}"
+
+## Web
+BLUESKY_WEB_ENABLED: "{{ applications | get_app_conf(application_id, 'docker.services.web.enabled') }}"
+BLUESKY_WEB_DOMAIN: "{{ domains[application_id].web }}"
+BLUESKY_WEB_PORT: "{{ ports.localhost.http['web-app-bluesky_web'] }}"
+
+## View
+BLUESKY_VIEW_ENABLED: "{{ applications | get_app_conf(application_id, 'docker.services.view.enabled') }}"
+BLUESKY_VIEW_DOMAIN: "{{ domains[application_id].view if BLUESKY_VIEW_ENABLED else 'api.bsky.app' }}"
+BLUESKY_VIEW_URL: "{{ WEB_PROTOCOL }}://{{ BLUESKY_VIEW_DOMAIN }}"
+BLUESKY_VIEW_DID: "did:web:{{ BLUESKY_VIEW_DOMAIN }}"
+BLUESKY_VIEW_PORT: "{{ ports.localhost.http['web-app-bluesky_view'] | default(8053) }}"
+
+## Server
+BLUESKY_API_DOMAIN: "{{ domains[application_id].api }}"
+BLUESKY_API_PORT: "{{ ports.localhost.http['web-app-bluesky_api'] }}"
+
+## Credentials
+BLUESKY_JWT_SECRET: "{{ applications | get_app_conf(application_id, 'credentials.jwt_secret') }}"
+BLUESKY_ROTATION_KEY: "{{ applications | get_app_conf(application_id, 'credentials.plc_rotation_key_k256_private_key_hex') }}"
+
+## Admin
+BLUESKY_ADMIN_EMAIL: "{{ users.administrator.email }}"
+BLUESKY_ADMIN_PASSWORD: "{{ users.administrator.password }}"

View File

@@ -59,5 +59,5 @@ EMAIL_HOST_USER: "{{ users['no-reply'].email }}"
 EMAIL_HOST_PASSWORD: "{{ users['no-reply'].mailu_token }}"
 # TLS/SSL: If TLS is true → TLS; else → SSL
 EMAIL_USE_TLS: "{{ SYSTEM_EMAIL.TLS | ternary('true','false') }}"
-EMAIL_USE_SSL: "{{ not SYSTEM_EMAIL.TLS | ternary('true','false') }}"
+EMAIL_USE_SSL: "{{ SYSTEM_EMAIL.TLS | ternary('false','true') }}"
 EMAIL_DEFAULT_FROM: "BookWyrm <{{ users['no-reply'].email }}>"

View File

@@ -1,4 +1,3 @@
-# roles/web-app-chess/config/main.yml
 credentials: {}
 docker:
   services:

View File

@@ -1,7 +1,7 @@
 #!/usr/bin/env bash
 set -euo pipefail

-APP_KEY_FILE="${APP_KEY_FILE:-/app/data/{{ CHESS_KEY_FILENAME }}}"
+APP_KEY_FILE="${APP_KEY_FILE}"
 APP_KEY_PUB="${APP_KEY_FILE}.pub"

 # 1) Generate signing key pair if missing
@@ -12,8 +12,8 @@ fi
 # 2) Wait for PostgreSQL if env is provided
 if [[ -n "${PGHOST:-}" ]]; then
-  echo "[chess] waiting for PostgreSQL at ${PGHOST}:${PGPORT:-5432}..."
-  until pg_isready -h "${PGHOST}" -p "${PGPORT:-5432}" -U "${PGUSER:-postgres}" >/dev/null 2>&1; do
+  echo "[chess] waiting for PostgreSQL at ${PGHOST}:${PGPORT}..."
+  until pg_isready -h "${PGHOST}" -p "${PGPORT}" -U "${PGUSER}" >/dev/null 2>&1; do
     sleep 1
   done
 fi
@@ -23,5 +23,5 @@ echo "[chess] running migrations"
 yarn migrate up

 # 4) Start app
-echo "[chess] starting server on port ${PORT:-5080}"
+echo "[chess] starting server on port ${PORT}"
 exec yarn start

View File

@@ -0,0 +1,10 @@
- name: "load docker, db and proxy for {{ application_id }}"
  include_role:
    name: sys-stk-full-stateful

- name: "Deploy '{{ CHESS_ENTRYPOINT_ABS }}'"
  copy:
    src: "{{ CHESS_ENTRYPOINT_FILE }}"
    dest: "{{ CHESS_ENTRYPOINT_ABS }}"

- include_tasks: utils/run_once.yml

View File

@@ -1,10 +0,0 @@
-- block:
-    - name: "load docker, db and proxy for {{ application_id }}"
-      include_role:
-        name: sys-stk-full-stateful
-    - name: "Place entrypoint and other assets"
-      include_tasks: 02_assets.yml
-    - include_tasks: utils/run_once.yml
-  when: run_once_web_app_chess is not defined

View File

@@ -1,8 +1,3 @@
----
-- block:
-    - name: "load docker, db and proxy for {{ application_id }}"
-      include_role:
-        name: sys-stk-full-stateful
-    - include_tasks: utils/run_once.yml
+- name: "Include core routines for '{{ application_id }}'"
+  include_tasks: "01_core.yml"
   when: run_once_web_app_chess is not defined

View File

@@ -21,8 +21,6 @@ RUN yarn install --frozen-lockfile && yarn build

 # Stage 2: runtime
 FROM node:{{ CHESS_VERSION }}

-ENV NODE_ENV=production
-ENV PORT={{ container_port }}
 WORKDIR /app

 # Minimal runtime packages + dumb-init
@@ -34,14 +32,14 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
 COPY --from=build /src /app

 # Create data dir for signing keys & cache
-RUN mkdir -p /app/data && chown -R node:node /app
-VOLUME ["/app/data"]
+RUN mkdir -p {{ CHESS_APP_DATA_DIR }} && chown -R node:node /app
+VOLUME ["{{ CHESS_APP_DATA_DIR }}"]

 # Entrypoint script
-COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
-RUN chmod +x /usr/local/bin/docker-entrypoint.sh
+COPY {{ CHESS_ENTRYPOINT_REL }} {{ CHESS_ENTRYPOINT_INT }}
+RUN chmod +x {{ CHESS_ENTRYPOINT_INT }}

 USER node
 EXPOSE {{ container_port }}
 ENTRYPOINT ["dumb-init", "--"]
-CMD ["docker-entrypoint.sh"]
+CMD ["{{ CHESS_ENTRYPOINT_INT }}"]

View File

@@ -6,15 +6,13 @@
       args:
         CHESS_REPO_URL: "{{ CHESS_REPO_URL }}"
         CHESS_REPO_REF: "{{ CHESS_REPO_REF }}"
-    image: "castling_custom"
+    image: "{{ CHESS_CUSTOM_IMAGE }}"
     container_name: "{{ CHESS_CONTAINER }}"
     hostname: "{{ CHESS_HOSTNAME }}"
-    environment:
-      - NODE_ENV=production
     ports:
       - "127.0.0.1:{{ ports.localhost.http[application_id] }}:{{ container_port }}"
     volumes:
-      - 'data:/app/data'
+      - 'data:{{ CHESS_APP_DATA_DIR }}'
     env_file:
       - .env
{% include 'roles/docker-container/templates/healthcheck/curl.yml.j2' %}

View File

@@ -1,11 +1,11 @@
 # App basics
-APP_SCHEME="{{ 'https' if WEB_PROTOCOL == 'https' else 'http' }}"
+APP_SCHEME="{{ WEB_PROTOCOL }}"
 APP_DOMAIN="{{ CHESS_HOSTNAME }}"
 APP_ADMIN_URL="{{ CHESS_ADMIN_URL }}"
 APP_ADMIN_EMAIL="{{ CHESS_ADMIN_EMAIL }}"
-APP_KEY_FILE="/app/data/{{ CHESS_KEY_FILENAME }}"
+APP_KEY_FILE="{{ CHESS_APP_KEY_FILE }}"
 APP_HMAC_SECRET="{{ CHESS_HMAC_SECRET }}"
-NODE_ENV="production"
+NODE_ENV="{{ ENVIRONMENT }}"
 PORT="{{ container_port }}"

 # PostgreSQL (libpq envs)

View File

@@ -1,17 +1,20 @@
 # General
 application_id: "web-app-chess"
 database_type: "postgres"

+# Container
 container_port: 5080
 container_hostname: "{{ domains | get_domain(application_id) }}"

 # App URLs & meta
-#CHESS_URL: "{{ domains | get_url(application_id, WEB_PROTOCOL) }}"
+# CHESS_URL: "{{ domains | get_url(application_id, WEB_PROTOCOL) }}"
 CHESS_HOSTNAME: "{{ container_hostname }}"
 CHESS_ADMIN_URL: ""
-CHESS_ADMIN_EMAIL: ""
+CHESS_ADMIN_EMAIL: "{{ users.users.administrator.email }}"

 # Docker image
 #CHESS_IMAGE: "{{ applications | get_app_conf(application_id, 'docker.services.application.image') }}"
+CHESS_CUSTOM_IMAGE: "castling_custom"
 CHESS_VERSION: "{{ applications | get_app_conf(application_id, 'docker.services.application.version') }}"
 CHESS_CONTAINER: "{{ applications | get_app_conf(application_id, 'docker.services.application.name') }}"
 CHESS_DATA_VOLUME: "{{ applications | get_app_conf(application_id, 'docker.volumes.data') }}"
@@ -23,3 +26,10 @@ CHESS_REPO_REF: "{{ applications | get_app_conf(application_id,
 # Security
 CHESS_HMAC_SECRET: "{{ lookup('password', '/dev/null length=63 chars=ascii_letters,digits') }}"
 CHESS_KEY_FILENAME: "signing-key"
+CHESS_APP_DATA_DIR: '/app/data'
+CHESS_APP_KEY_FILE: "{{ [ CHESS_APP_DATA_DIR, CHESS_KEY_FILENAME ] | path_join }}"
+CHESS_ENTRYPOINT_FILE: "docker-entrypoint.sh"
+CHESS_ENTRYPOINT_REL: "{{ CHESS_ENTRYPOINT_FILE }}"
+CHESS_ENTRYPOINT_ABS: "{{ [ docker_compose.directories.instance, CHESS_ENTRYPOINT_REL ] | path_join }}"
+CHESS_ENTRYPOINT_INT: "{{ ['/usr/local/bin', CHESS_ENTRYPOINT_FILE] | path_join }}"

View File

@@ -30,3 +30,4 @@ server:
     - "confluence.{{ PRIMARY_DOMAIN }}"
 rbac:
   roles: {}
+truststore_enabled: false

View File

@@ -7,4 +7,4 @@ FROM "{{ CONFLUENCE_IMAGE }}:{{ CONFLUENCE_VERSION }}"
 RUN mkdir -p {{ CONFLUENCE_HOME }} && \
     chown -R 2001:2001 {{ CONFLUENCE_HOME }}
 RUN printf "confluence.home={{ CONFLUENCE_HOME }}\n" \
     > /opt/atlassian/confluence/confluence/WEB-INF/classes/confluence-init.properties

View File

@@ -9,7 +9,7 @@ ATL_TOMCAT_SECURE={{ (WEB_PORT == 443) | lower }}
 JVM_MINIMUM_MEMORY={{ CONFLUENCE_JVM_MIN }}
 JVM_MAXIMUM_MEMORY={{ CONFLUENCE_JVM_MAX }}
-JVM_SUPPORT_RECOMMENDED_ARGS=-Datlassian.home={{ CONFLUENCE_HOME }}
+JVM_SUPPORT_RECOMMENDED_ARGS=-Datlassian.home={{ CONFLUENCE_HOME }} -Datlassian.upm.signature.check.disabled={{ CONFLUENCE_TRUST_STORE_ENABLED | ternary('false','true') }}

 ## Database
 ATL_DB_TYPE=postgresql

View File

@@ -39,4 +39,8 @@ CONFLUENCE_TOTAL_MB: "{{ ansible_memtotal_mb | int }}"
 CONFLUENCE_JVM_MAX_MB: "{{ [ (CONFLUENCE_TOTAL_MB | int // 2), 12288 ] | min }}"
 CONFLUENCE_JVM_MIN_MB: "{{ [ (CONFLUENCE_TOTAL_MB | int // 4), (CONFLUENCE_JVM_MAX_MB | int) ] | min }}"
 CONFLUENCE_JVM_MIN: "{{ CONFLUENCE_JVM_MIN_MB }}m"
 CONFLUENCE_JVM_MAX: "{{ CONFLUENCE_JVM_MAX_MB }}m"
+
+## Options
+CONFLUENCE_TRUST_STORE_ENABLED: "{{ applications | get_app_conf(application_id, 'truststore_enabled') }}"
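
The JVM bounds above derive from host memory: the maximum is half of RAM capped at 12 GiB, the minimum a quarter of RAM but never above the maximum. A worked example, assuming a 16 GiB host (ansible_memtotal_mb ≈ 16384):

    # Worked example (assumed 16 GiB host):
    #   CONFLUENCE_JVM_MAX_MB = min(16384 // 2, 12288) = 8192  ->  CONFLUENCE_JVM_MAX = "8192m"
    #   CONFLUENCE_JVM_MIN_MB = min(16384 // 4, 8192)  = 4096  ->  CONFLUENCE_JVM_MIN = "4096m"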

View File

@@ -24,8 +24,10 @@ server:
     flags:
       script-src-elem:
         unsafe-inline: true
+        unsafe-eval: true
       script-src:
         unsafe-inline: true
+        unsafe-eval: true
   domains:
     canonical:
       - "jira.{{ PRIMARY_DOMAIN }}"

View File

@@ -79,10 +79,10 @@
{% include 'roles/docker-container/templates/networks.yml.j2' %}

   oletools:
+{% include 'roles/docker-container/templates/base.yml.j2' %}
     container_name: {{ MAILU_CONTAINER }}_oletools
     image: {{ MAILU_DOCKER_FLAVOR }}/oletools:{{ MAILU_VERSION }}
     hostname: oletools
-    restart: {{ DOCKER_RESTART_POLICY }}
     depends_on:
       - resolver
     dns:

View File

@@ -115,29 +115,48 @@ class TestRepairDockerSoft(unittest.TestCase):
         def fake_print_bash(cmd):
             cmd_log.append(cmd)

+            # 1) docker ps mocks (the existing ones)
             if cmd.startswith("docker ps --filter health=unhealthy"):
                 return ["app1-web-1", "db-1"]
             if cmd.startswith("docker ps --filter status=exited"):
                 return ["app1-worker-1", "other-2"]

+            # 2) docker inspect labels (NEW)
+            # project label
+            if cmd.startswith("docker inspect -f '{{ index .Config.Labels \"com.docker.compose.project\" }}'"):
+                container = cmd.split()[-1]
+                if container in ("app1-web-1", "app1-worker-1"):
+                    return ["app1"]
+                if container == "db-1":
+                    return ["db"]
+                return [""]  # other-2 has no labels -> expected to fail
+
+            # working_dir label
+            if cmd.startswith("docker inspect -f '{{ index .Config.Labels \"com.docker.compose.project.working_dir\" }}'"):
+                container = cmd.split()[-1]
+                if container in ("app1-web-1", "app1-worker-1"):
+                    return ["/BASE/app1"]
+                if container == "db-1":
+                    return ["/BASE/db"]
+                return [""]  # other-2 -> no value
+
+            # 3) docker-compose calls (unchanged, fine as-is)
             if "docker-compose" in cmd:
                 return []
             return []

+        # find_docker_compose_file is not used in STRICT mode but may stay
         def fake_find_docker_compose(path):
-            # Compose projects: app1, db -> present; "other" -> absent
             if path.endswith("/app1") or path.endswith("/db"):
                 return str(Path(path) / "docker-compose.yml")
             return None

-        # Control the detect_env_file answer:
-        # - For app1 only .env/env exists
-        # - For db .env exists
-        def fake_detect_env_file(project_path: str):
-            if project_path.endswith("/app1"):
-                return f"{project_path}/.env/env"
-            if project_path.endswith("/db"):
-                return f"{project_path}/.env"
-            return None
+        # 4) os.path.isfile for STRICT mode (NEW)
+        old_isfile = s.os.path.isfile
+        def fake_isfile(path):
+            return path in ("/BASE/app1/docker-compose.yml", "/BASE/db/docker-compose.yml")

         old_print_bash = s.print_bash
         old_find = s.find_docker_compose_file
@@ -145,14 +164,18 @@ class TestRepairDockerSoft(unittest.TestCase):
         try:
             s.print_bash = fake_print_bash
             s.find_docker_compose_file = fake_find_docker_compose
-            s.detect_env_file = fake_detect_env_file
+            s.detect_env_file = lambda project_path: (
+                f"{project_path}/.env/env" if project_path.endswith("/app1")
+                else (f"{project_path}/.env" if project_path.endswith("/db") else None)
+            )
+            s.os.path.isfile = fake_isfile  # <- important for STRICT

             errors = s.main("/BASE", manipulation_services=[], timeout=None)

-            # one error expected for "other" (no compose file)
+            # Expectation: only "other-2" fails -> 1 error
             self.assertEqual(errors, 1)

             restart_cmds = [c for c in cmd_log if ' docker-compose' in c and " restart" in c]

+            # app1: --env-file "/BASE/app1/.env/env" + -p "app1"
             self.assertTrue(any(
                 'cd "/BASE/app1"' in c and
                 '--env-file "/BASE/app1/.env/env"' in c and
@@ -160,7 +183,6 @@ class TestRepairDockerSoft(unittest.TestCase):
                 ' restart' in c
                 for c in restart_cmds
             ))

+            # db: --env-file "/BASE/db/.env" + -p "db"
             self.assertTrue(any(
                 'cd "/BASE/db"' in c and
                 '--env-file "/BASE/db/.env"' in c and
@@ -172,6 +194,8 @@ class TestRepairDockerSoft(unittest.TestCase):
             s.print_bash = old_print_bash
             s.find_docker_compose_file = old_find
             s.detect_env_file = old_detect
+            s.os.path.isfile = old_isfile

 if __name__ == "__main__":