Compare commits

...

84 Commits

Author SHA1 Message Date
140572a0a4 Solved missing deployment of postgres bug 2025-08-12 10:58:09 +02:00
a30cd4e8b5 Solved listmonk handler bugs 2025-08-12 04:38:41 +02:00
2067804e9f Solved ansible multiplexing auth bug 2025-08-12 04:23:45 +02:00
1a42e8bd14 Replaced dependencies with includes for performance reasons 2025-08-12 03:08:33 +02:00
8634b5e1b3 Finished move_unnecessary_dependencies implementation 2025-08-12 02:39:22 +02:00
1595a7c4a6 Optimized tests for run once 2025-08-12 02:38:37 +02:00
82aaf7ad74 fixed move_unnecessary_dependencies.py 2025-08-11 23:41:48 +02:00
7e4a1062af Added draft for fixing dependencies 2025-08-11 23:16:32 +02:00
d5e5f57f92 Optimized openproject for new repository structure 2025-08-11 23:03:24 +02:00
f671678720 Add integration test to detect unnecessary meta dependencies
This test scans all roles/*/meta/main.yml for meta dependencies that are
likely unnecessary and could be replaced with guarded include_role/import_role
calls to improve performance.

A dependency is flagged as unnecessary when:
- The consumer role does not use provider variables in defaults/vars/handlers
  (no early variable requirement), and
- Any usage of provider variables or handler notifications in tasks occurs
  only after an explicit include/import of the provider in the same file,
  or there is no usage at all.

Purpose:
Helps reduce redundant parsing/execution of roles and improves Ansible
performance by converting heavy global dependencies into conditional,
guarded includes where possible.

https://chatgpt.com/share/689a59ee-52f4-800f-8349-4f477dc97c7c
2025-08-11 23:00:49 +02:00
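
For illustration, the guarded include pattern this test nudges roles toward looks roughly like the following sketch; the role names here are placeholders, not roles from this repository:

# Before: roles/web-app-example/meta/main.yml
# dependencies:
#   - svc-example-provider

# After: the consumer includes the provider itself, guarded so it runs only once
- name: Load former meta dependency once
  block:
    - name: Include dependency 'svc-example-provider'
      include_role:
        name: svc-example-provider
    - set_fact:
        run_once_web_app_example: true
  when: run_once_web_app_example is not defined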
2219696c3f Removed redirects for performance 2025-08-11 22:21:17 +02:00
fbaee683fd Removed dependencies and used include_roles to raise performance and make infinito a racing car 2025-08-11 21:56:34 +02:00
b301e58ee6 Removed redirect to save performance 2025-08-11 21:48:33 +02:00
de15c42de8 Added database patch to wordpress 2025-08-11 21:46:29 +02:00
918355743f Updated ansible.cfg for better performance and tracking 2025-08-11 21:00:33 +02:00
f6e62525d1 Optimized wordpress variables 2025-08-11 20:00:48 +02:00
f72ac30884 Replaced redirects by origin to raise performance 2025-08-11 19:44:14 +02:00
1496f1de95 Replaced community.general.pacman: by pacman to raise performance 2025-08-11 19:33:28 +02:00
38de10ba65 Solved bigbluebutton admin creation bug 2025-08-11 19:24:08 +02:00
e8c19b4b84 Implemented correct path replacement not just for context: but also for build: paths 2025-08-11 18:46:02 +02:00
b0737b1cdb Merge branch 'master' of github.com:kevinveenbirkenbach/cymais 2025-08-11 14:31:19 +02:00
e4cc928eea Encapsulated SAN in block with when 2025-08-11 14:31:10 +02:00
c9b2136578 Merge pull request #5 from ocrampete16/logs-dir
Create logs dir to prevent failing test
2025-08-11 14:15:39 +02:00
5709935c92 Improved performance by avoiding loading roles that are only protected by a single condition anyway 2025-08-11 13:52:24 +02:00
c7badc608a Solved typo 2025-08-11 13:25:32 +02:00
0e59d35129 Update RunOnceSchemaTest to skip files with deactivated run_once variables via comment https://chatgpt.com/share/6899d297-4bec-800f-a748-6816398d8c7e 2025-08-11 13:23:20 +02:00
1ba50397db Optimized performance by moving multiple similar when includes to own tasks file 2025-08-11 13:15:31 +02:00
6318611931 Add integration test to detect excessive duplicate 'when' conditions in tasks files
This test scans all .yml/.yaml files under any tasks/ directory and flags cases where the same
'when' condition appears on more than 3 tasks in the same file. Excessive duplication of identical
conditions can harm Ansible performance because the condition is re-evaluated for every task.

The test suggests replacing repeated conditions with an include_tasks call or a block guarded
by the condition to evaluate it only once.

https://chatgpt.com/share/6899c605-6f40-800f-a954-ccb62f8bbcf1
2025-08-11 12:29:57 +02:00
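
As a rough sketch of the refactor this test suggests (the task names, paths, and variables are made up for illustration; the condition uses the repository's get_app_conf filter):

# Before: the same condition is re-evaluated on every task
- name: Create env directory
  file: { path: "{{ env_dir }}", state: directory }
  when: applications | get_app_conf(application_id, 'features.central_database', False)
- name: Render env file
  template: { src: db.env.j2, dest: "{{ env_dir }}/db.env" }
  when: applications | get_app_conf(application_id, 'features.central_database', False)

# After: evaluate the condition once for a guarded block
- block:
    - name: Create env directory
      file: { path: "{{ env_dir }}", state: directory }
    - name: Render env file
      template: { src: db.env.j2, dest: "{{ env_dir }}/db.env" }
  when: applications | get_app_conf(application_id, 'features.central_database', False)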
6e04ac58d2 Moved blocks to include_tasks to raise performance. Deploy was really slow 2025-08-11 12:28:31 +02:00
b6e571a496 Solved config path entry bug 2025-08-11 12:19:24 +02:00
21b6362bc1 test(integration): fail if reset.yml exists but is never included
Updated test_mode_reset.py to also validate roles that contain a reset
task file (*_reset.yml or reset.yml) even when no mode_reset keyword is
found. The test now:

- Detects roles with reset files but no include, and fails accordingly.
- Ignores commented include_tasks and when lines.
- Ensures exactly one non-commented include of the reset file exists.
- Requires that the include is guarded in the same task block by a
  when containing mode_reset | bool (with optional extra conditions).

This prevents silent omissions of reset task integration.

https://chatgpt.com/share/6899b745-7150-800f-98f3-ca714486f5ba
2025-08-11 11:27:15 +02:00
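
A minimal sketch of the guarded include shape the test enforces (the surrounding role and file layout are illustrative):

- name: "Reset (cleanup) tasks"
  include_tasks: reset.yml
  when: mode_reset | bool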
1fcf072257 Added performance violation test for blocks 2025-08-11 10:50:42 +02:00
ea0149b5d4 Replaced nextcloud-application by nextcloud container name 2025-08-11 10:41:06 +02:00
fe76fe1e62 Added correct flush parameters for docker compose 2025-08-11 10:33:48 +02:00
3431796283 Wrapped docker compose file routine tasks in a block 2025-08-11 10:20:06 +02:00
b5d8ac5462 Reactivated keycloak docker and webserver tasks and implemented correct logic for element and synapse redirect handling 2025-08-11 02:21:02 +02:00
5426014096 Optimized handlers order for mailu 2025-08-11 01:56:22 +02:00
a9d77de2a4 Optimized docker compose ensure logic 2025-08-11 01:26:31 +02:00
766ef8619f Encapsulated cmp-docker-oauth2 into block 2025-08-11 01:25:31 +02:00
66013a4da3 Added line 2025-08-11 01:24:02 +02:00
1cb5a12d85 Encapsulated srv-web-7-7-letsencrypt into block 2025-08-11 01:23:00 +02:00
6e8ae793e3 Added auto setting for redirect urls for keycloak clients. Element and Synapse still need to be mapped 2025-08-11 00:17:18 +02:00
0746acedfd Introduced run_once_ block for srv-web-6-6-tls-renew 2025-08-10 21:50:39 +02:00
f5659a44f8 Optimized blocks in roles/srv-proxy-6-6-domain/tasks/main.yml 2025-08-10 18:31:12 +02:00
77816ac4e7 Solved bkp_docker_to_local_pkg bug 2025-08-10 18:17:52 +02:00
8779afd1f7 Removed beep backup sound 2025-08-10 17:54:14 +02:00
0074bcbd69 Implemented functioning warning sound 2025-08-10 17:39:33 +02:00
149c563831 Optimized logic for database backups and integrated test to verify that database feature is used correctly 2025-08-10 15:06:37 +02:00
e9ef62b95d Optimized cloudflare purge and cache dev mode 2025-08-10 14:18:29 +02:00
aeaf84de6f Deactivated central_database for lam 2025-08-10 13:42:52 +02:00
fdceb0f792 Implemented dev mode for cloudflare 2025-08-10 12:18:17 +02:00
2fd83eaf55 Keep logs during deploy cleanup 2025-08-10 12:01:34 +02:00
Marco Petersen
21eb614912 Create logs dir to prevent failing test 2025-08-10 12:50:13 +03:00
b880b98ac3 Added hints for infinito modes 2025-08-10 11:34:33 +02:00
acfb1a2ee7 Made logs folder permanent 2025-08-10 11:31:56 +02:00
4885ad7eb4 Overwrote handlers for CDN 2025-08-09 18:08:30 +02:00
d9669fc6dd Added test to verify that no handlers are skipped due to when condition 2025-08-09 15:24:47 +02:00
8e0341c120 Solved some handler reloading bugs 2025-08-08 19:33:16 +02:00
22c8c395f0 Refactored handlers loading 2025-08-08 19:01:12 +02:00
aae69ea15b Ensure that keycloak is up 2025-08-08 17:25:31 +02:00
c7b25ed093 Normalized run_once_, made openresty handlers without when available and forced flush in run_once when blocks to avoid handlers with when conditions 2025-08-08 15:32:26 +02:00
e675aa5886 Wrapped in block to avoid multiple similar when conditions for 7-4 web core 2025-08-08 12:25:09 +02:00
14f07adc9d Wrapped in block to avoid multiple similar when conditions 2025-08-08 12:14:01 +02:00
dba12b89d8 Normalized cmp-docker-proxy include 2025-08-08 12:02:14 +02:00
0607974dac Patched url in moodle config 2025-08-08 11:46:23 +02:00
e8fa22cb43 Normalized variable 2025-08-08 11:27:34 +02:00
eedfe83ece Solved missing redirect bug 2025-08-08 11:03:43 +02:00
9f865dd215 Removed category domain prefix from redirect domains 2025-08-08 09:47:31 +02:00
220e3e1c60 Optimized namings in moodle 2025-08-08 09:12:50 +02:00
2996c7cbb6 Added default value for internet interfaces 2025-08-08 08:39:40 +02:00
59bd4ca8eb Added handling of multiple domains and used get_url function in mailu 2025-08-08 08:39:09 +02:00
da58691d25 Added comments why autoflush isn't possible 2025-08-08 08:37:52 +02:00
c96f278ac3 Added autoflush to mastodon for docker 2025-08-08 08:37:12 +02:00
2715479c95 Assert just applications which are in group_names 2025-08-08 08:36:07 +02:00
926640371f Optimized description 2025-08-08 08:35:16 +02:00
cdc97c8ba5 Raised certbot_dns_propagation_wait_seconds to 5min 2025-08-08 08:34:49 +02:00
4124e97aeb Added domain validator for web- apps and services for port-ui 2025-08-07 20:37:47 +02:00
7f0d40bdc3 Optimized code 2025-08-07 18:17:38 +02:00
8dc2238ba2 Fixed Funkwhale bug 2025-08-07 17:52:34 +02:00
b9b08feadd Added logout overwrite logic for espocrm 2025-08-07 17:35:13 +02:00
dc437c7621 Activated logout catcher 2025-08-07 16:11:20 +02:00
7d63d92166 Solved status codes bug 2025-08-07 15:46:56 +02:00
3eb51a32ce Adapted webserver test for web-app-yourls 2025-08-07 15:35:33 +02:00
6272303b55 Changed LAM container name 2025-08-07 15:34:40 +02:00
388 changed files with 6528 additions and 3428 deletions

View File

@@ -21,6 +21,10 @@ EXTRA_USERS := $(shell \
 .PHONY: build install test
+clean-keep-logs:
+	@echo "🧹 Cleaning ignored files but keeping logs/…"
+	git clean -fdX -- ':!logs' ':!logs/**'
 clean:
 	@echo "Removing ignored git files"
 	git clean -fdX

View File

@@ -1,4 +1,4 @@
 # Todos
 - Implement multi language
 - Implement rbac administration interface
-- Implement [cloudflare dev cache via API](https://chatgpt.com/share/689385e2-7744-800f-aa93-a6e811a245df)
+- Implement ``MASK_CREDENTIALS_IN_LOGS`` for all sensible tasks

View File

@@ -1,4 +1,33 @@
 [defaults]
-lookup_plugins = ./lookup_plugins
+# --- Performance & Behavior ---
+forks = 25
+strategy = linear
+gathering = smart
+timeout = 120
+retry_files_enabled = False
+host_key_checking = True
+deprecation_warnings = True
+interpreter_python = auto_silent
+
+# --- Output & Profiling ---
+stdout_callback = yaml
+callbacks_enabled = profile_tasks,timer
+
+# --- Plugin paths ---
 filter_plugins = ./filter_plugins
-module_utils = ./module_utils
+lookup_plugins = ./lookup_plugins
+module_utils = ./module_utils
+
+[ssh_connection]
+# Multiplexing: safer socket path in HOME instead of /tmp
+ssh_args = -o ControlMaster=auto -o ControlPersist=20s -o ControlPath=~/.ssh/ansible-%h-%p-%r \
+           -o ServerAliveInterval=15 -o ServerAliveCountMax=3 -o StrictHostKeyChecking=accept-new \
+           -o PreferredAuthentications=publickey,password,keyboard-interactive
+# Pipelining boosts speed; works fine if sudoers does not enforce "requiretty"
+pipelining = True
+scp_if_ssh = smart
+
+[persistent_connection]
+connect_timeout = 30
+command_timeout = 60

View File

@@ -16,14 +16,16 @@ def run_ansible_playbook(
     skip_tests=False,
     skip_validation=False,
     skip_build=False,
-    cleanup=False
+    cleanup=False,
+    logs=False
 ):
     start_time = datetime.datetime.now()
     print(f"\n▶️ Script started at: {start_time.isoformat()}\n")

     if cleanup:
-        print("\n🧹 Cleaning up project (make clean)...\n")
-        subprocess.run(["make", "clean"], check=True)
+        cleanup_command = ["make", "clean-keep-logs"] if logs else ["make", "clean"]
+        print("\n🧹 Cleaning up project (" + " ".join(cleanup_command) +")...\n")
+        subprocess.run(cleanup_command, check=True)
     else:
         print("\n⚠️ Skipping build as requested.\n")
@@ -180,6 +182,10 @@ def main():
         "-v", "--verbose", action="count", default=0,
         help="Increase verbosity level. Multiple -v flags increase detail (e.g., -vvv for maximum log output)."
     )
+    parser.add_argument(
+        "--logs", action="store_true",
+        help="Keep the CLI logs during cleanup command"
+    )

     args = parser.parse_args()
     validate_application_ids(args.inventory, args.id)
@@ -190,6 +196,7 @@ def main():
         "mode_update": args.update,
         "mode_backup": args.backup,
         "mode_cleanup": args.cleanup,
+        "mode_logs": args.logs,
         "enable_debug": args.debug,
         "host_type": args.host_type
     }
@@ -204,7 +211,8 @@ def main():
         skip_tests=args.skip_tests,
         skip_validation=args.skip_validation,
         skip_build=args.skip_build,
-        cleanup=args.cleanup
+        cleanup=args.cleanup,
+        logs=args.logs
     )

View File

@@ -0,0 +1,480 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Move unnecessary meta dependencies to guarded include_role/import_role
for better performance, while preserving YAML comments, quotes, and layout.
Heuristic (matches tests/integration/test_unnecessary_role_dependencies.py):
- A dependency is considered UNNECESSARY if:
* The consumer does NOT use provider variables in defaults/vars/handlers
(no early-var need), AND
* In tasks, any usage of provider vars or provider-handler notifications
occurs only AFTER an include/import of the provider in the same file,
OR there is no usage at all.
Action:
- Remove such dependencies from roles/<role>/meta/main.yml.
- Prepend a guarded include block to roles/<role>/tasks/01_core.yml (preferred)
or roles/<role>/tasks/main.yml if 01_core.yml is absent.
- If multiple dependencies are moved for a role, use a loop over include_role.
Notes:
- Creates .bak backups for modified YAML files.
- Requires ruamel.yaml to preserve comments/quotes everywhere.
"""
import argparse
import glob
import os
import re
import shutil
import sys
from typing import Dict, Set, List, Tuple, Optional
# --- Require ruamel.yaml for full round-trip preservation ---
try:
from ruamel.yaml import YAML
from ruamel.yaml.comments import CommentedMap, CommentedSeq
from ruamel.yaml.scalarstring import SingleQuotedScalarString
_HAVE_RUAMEL = True
except Exception:
_HAVE_RUAMEL = False
if not _HAVE_RUAMEL:
print("[ERR] ruamel.yaml is required to preserve comments/quotes. Install with: pip install ruamel.yaml", file=sys.stderr)
sys.exit(3)
yaml_rt = YAML()
yaml_rt.preserve_quotes = True
yaml_rt.width = 10**9 # prevent line wrapping
# ---------------- Utilities ----------------
def _backup(path: str):
if os.path.exists(path):
shutil.copy2(path, path + ".bak")
def read_text(path: str) -> str:
try:
with open(path, "r", encoding="utf-8") as f:
return f.read()
except Exception:
return ""
def load_yaml_rt(path: str):
try:
with open(path, "r", encoding="utf-8") as f:
data = yaml_rt.load(f)
return data if data is not None else CommentedMap()
except FileNotFoundError:
return CommentedMap()
except Exception as e:
print(f"[WARN] Failed to parse YAML: {path}: {e}", file=sys.stderr)
return CommentedMap()
def dump_yaml_rt(data, path: str):
_backup(path)
with open(path, "w", encoding="utf-8") as f:
yaml_rt.dump(data, f)
def roles_root(project_root: str) -> str:
return os.path.join(project_root, "roles")
def iter_role_dirs(project_root: str) -> List[str]:
root = roles_root(project_root)
return [d for d in glob.glob(os.path.join(root, "*")) if os.path.isdir(d)]
def role_name_from_dir(role_dir: str) -> str:
return os.path.basename(role_dir.rstrip(os.sep))
def path_if_exists(*parts) -> Optional[str]:
p = os.path.join(*parts)
return p if os.path.exists(p) else None
def gather_yaml_files(base: str, patterns: List[str]) -> List[str]:
files: List[str] = []
for pat in patterns:
files.extend(glob.glob(os.path.join(base, pat), recursive=True))
return [f for f in files if os.path.isfile(f)]
def sq(v: str):
"""Return a single-quoted scalar (ruamel) for consistent quoting."""
return SingleQuotedScalarString(v)
# ---------------- Providers: vars & handlers ----------------
def flatten_keys(data) -> Set[str]:
out: Set[str] = set()
if isinstance(data, dict):
for k, v in data.items():
if isinstance(k, str):
out.add(k)
out |= flatten_keys(v)
elif isinstance(data, list):
for item in data:
out |= flatten_keys(item)
return out
def collect_role_defined_vars(role_dir: str) -> Set[str]:
"""Vars a role 'provides': defaults/vars keys + set_fact keys in tasks."""
provided: Set[str] = set()
for rel in ("defaults/main.yml", "vars/main.yml"):
p = path_if_exists(role_dir, rel)
if p:
data = load_yaml_rt(p)
provided |= flatten_keys(data)
# set_fact keys
task_files = gather_yaml_files(os.path.join(role_dir, "tasks"), ["**/*.yml", "*.yml"])
for tf in task_files:
data = load_yaml_rt(tf)
if isinstance(data, list):
for task in data:
if isinstance(task, dict) and "set_fact" in task and isinstance(task["set_fact"], dict):
provided |= set(task["set_fact"].keys())
noisy = {"when", "name", "vars", "tags", "register"}
return {v for v in provided if isinstance(v, str) and v and v not in noisy}
def collect_role_handler_names(role_dir: str) -> Set[str]:
"""Handler names defined by a role (for notify detection)."""
handler_file = path_if_exists(role_dir, "handlers/main.yml")
if not handler_file:
return set()
data = load_yaml_rt(handler_file)
names: Set[str] = set()
if isinstance(data, list):
for task in data:
if isinstance(task, dict):
nm = task.get("name")
if isinstance(nm, str) and nm.strip():
names.add(nm.strip())
return names
# ---------------- Consumers: usage scanning ----------------
def find_var_positions(text: str, varname: str) -> List[int]:
"""Return byte offsets for occurrences of varname (word-ish boundary)."""
positions: List[int] = []
if not varname:
return positions
pattern = re.compile(rf"(?<!\w){re.escape(varname)}(?!\w)")
for m in pattern.finditer(text):
positions.append(m.start())
return positions
def first_var_use_offset_in_text(text: str, provided_vars: Set[str]) -> Optional[int]:
first: Optional[int] = None
for v in provided_vars:
for off in find_var_positions(text, v):
if first is None or off < first:
first = off
return first
def first_include_offset_for_role(text: str, producer_role: str) -> Optional[int]:
"""
Find earliest include/import of a given role in this YAML text.
Handles compact dict and block styles.
"""
pattern = re.compile(
r"(include_role|import_role)\s*:\s*\{[^}]*\bname\s*:\s*['\"]?"
+ re.escape(producer_role) + r"['\"]?[^}]*\}"
r"|"
r"(include_role|import_role)\s*:\s*\n(?:\s+[a-z_]+\s*:\s*.*\n)*\s*name\s*:\s*['\"]?"
+ re.escape(producer_role) + r"['\"]?",
re.IGNORECASE,
)
m = pattern.search(text)
return m.start() if m else None
def find_notify_offsets_for_handlers(text: str, handler_names: Set[str]) -> List[int]:
"""
Heuristic: for each handler name, find occurrences where 'notify' appears within
the preceding ~200 chars. Works for single string or list-style notify blocks.
"""
if not handler_names:
return []
offsets: List[int] = []
for h in handler_names:
for m in re.finditer(re.escape(h), text):
start = m.start()
back = max(0, start - 200)
context = text[back:start]
if re.search(r"notify\s*:", context):
offsets.append(start)
return sorted(offsets)
def parse_meta_dependencies(role_dir: str) -> List[str]:
meta = path_if_exists(role_dir, "meta/main.yml")
if not meta:
return []
data = load_yaml_rt(meta)
dd = data.get("dependencies")
deps: List[str] = []
if isinstance(dd, list):
for item in dd:
if isinstance(item, str):
deps.append(item)
elif isinstance(item, dict) and "role" in item:
deps.append(str(item["role"]))
elif isinstance(item, dict) and "name" in item:
deps.append(str(item["name"]))
return deps
# ---------------- Fix application ----------------
def sanitize_run_once_var(role_name: str) -> str:
"""
Generate run_once variable name from role name.
Example: 'srv-web-7-7-inj-logout' -> 'run_once_srv_web_7_7_inj_logout'
"""
return "run_once_" + role_name.replace("-", "_")
def build_include_block_yaml(consumer_role: str, moved_deps: List[str]) -> List[dict]:
"""
Build a guarded block that includes one or many roles.
This block will be prepended to tasks/01_core.yml or tasks/main.yml.
"""
guard_var = sanitize_run_once_var(consumer_role)
if len(moved_deps) == 1:
inner_tasks = [
{
"name": f"Include dependency '{moved_deps[0]}'",
"include_role": {"name": moved_deps[0]},
}
]
else:
inner_tasks = [
{
"name": "Include dependencies",
"include_role": {"name": "{{ item }}"},
"loop": moved_deps,
}
]
# Always set the run_once fact at the end
inner_tasks.append({"set_fact": {guard_var: True}})
# Correct Ansible block structure
block_task = {
"name": "Load former meta dependencies once",
"block": inner_tasks,
"when": f"{guard_var} is not defined",
}
return [block_task]
def prepend_tasks(tasks_path: str, new_tasks, dry_run: bool):
"""
Prepend new_tasks (CommentedSeq) to an existing tasks YAML list while preserving comments.
If the file does not exist, create it with new_tasks.
"""
if os.path.exists(tasks_path):
existing = load_yaml_rt(tasks_path)
if isinstance(existing, list):
combined = CommentedSeq()
for item in new_tasks:
combined.append(item)
for item in existing:
combined.append(item)
elif isinstance(existing, dict):
# Rare case: tasks file with a single mapping; coerce to list
combined = CommentedSeq()
for item in new_tasks:
combined.append(item)
combined.append(existing)
else:
combined = new_tasks
else:
os.makedirs(os.path.dirname(tasks_path), exist_ok=True)
combined = new_tasks
if dry_run:
print(f"[DRY-RUN] Would write {tasks_path} with {len(new_tasks)} prepended task(s).")
return
dump_yaml_rt(combined, tasks_path)
print(f"[OK] Updated {tasks_path} (prepended {len(new_tasks)} task(s)).")
def update_meta_remove_deps(meta_path: str, remove: List[str], dry_run: bool):
"""
Remove entries from meta.dependencies while leaving the rest of the file intact.
Quotes, comments, key order, and line breaks are preserved.
Returns True if a change would be made (or was made when not in dry-run).
"""
if not os.path.exists(meta_path):
return False
doc = load_yaml_rt(meta_path)
deps = doc.get("dependencies")
if not isinstance(deps, list):
return False
def dep_name(item):
if isinstance(item, dict):
return item.get("role") or item.get("name")
return item
keep = CommentedSeq()
removed = []
for item in deps:
name = dep_name(item)
if name in remove:
removed.append(name)
else:
keep.append(item)
if not removed:
return False
if keep:
doc["dependencies"] = keep
else:
if "dependencies" in doc:
del doc["dependencies"]
if dry_run:
print(f"[DRY-RUN] Would rewrite {meta_path}; removed: {', '.join(removed)}")
return True
dump_yaml_rt(doc, meta_path)
print(f"[OK] Rewrote {meta_path}; removed: {', '.join(removed)}")
return True
def dependency_is_unnecessary(consumer_dir: str,
consumer_name: str,
producer_name: str,
provider_vars: Set[str],
provider_handlers: Set[str]) -> bool:
"""Apply heuristic to decide if we can move this dependency."""
# 1) Early usage in defaults/vars/handlers? If yes -> necessary
defaults_files = [p for p in [
path_if_exists(consumer_dir, "defaults/main.yml"),
path_if_exists(consumer_dir, "vars/main.yml"),
path_if_exists(consumer_dir, "handlers/main.yml"),
] if p]
for p in defaults_files:
text = read_text(p)
if first_var_use_offset_in_text(text, provider_vars) is not None:
return False # needs meta dep
# 2) Tasks: any usage before include/import? If yes -> keep meta dep
task_files = gather_yaml_files(os.path.join(consumer_dir, "tasks"), ["**/*.yml", "*.yml"])
for p in task_files:
text = read_text(p)
if not text:
continue
include_off = first_include_offset_for_role(text, producer_name)
var_use_off = first_var_use_offset_in_text(text, provider_vars)
notify_offs = find_notify_offsets_for_handlers(text, provider_handlers)
if var_use_off is not None:
if include_off is None or include_off > var_use_off:
return False # used before include
for noff in notify_offs:
if include_off is None or include_off > noff:
return False # notify before include
# If we get here: no early use, and either no usage at all or usage after include
return True
def process_role(role_dir: str,
providers_index: Dict[str, Tuple[Set[str], Set[str]]],
only_role: Optional[str],
dry_run: bool) -> bool:
"""
Returns True if any change suggested/made for this role.
"""
consumer_name = role_name_from_dir(role_dir)
if only_role and only_role != consumer_name:
return False
meta_deps = parse_meta_dependencies(role_dir)
if not meta_deps:
return False
# Build provider vars/handlers accessors
moved: List[str] = []
for producer in meta_deps:
# Only consider local roles we can analyze
producer_dir = path_if_exists(os.path.dirname(role_dir), producer) or path_if_exists(os.path.dirname(roles_root(os.path.dirname(role_dir))), "roles", producer)
if producer not in providers_index:
# Unknown/external role → skip (we cannot verify safety)
continue
pvars, phandlers = providers_index[producer]
if dependency_is_unnecessary(role_dir, consumer_name, producer, pvars, phandlers):
moved.append(producer)
if not moved:
return False
# 1) Remove from meta
meta_path = os.path.join(role_dir, "meta", "main.yml")
update_meta_remove_deps(meta_path, moved, dry_run=dry_run)
# 2) Prepend include block to tasks/01_core.yml or tasks/main.yml
target_tasks = path_if_exists(role_dir, "tasks/01_core.yml")
if not target_tasks:
target_tasks = os.path.join(role_dir, "tasks", "main.yml")
include_block = build_include_block_yaml(consumer_name, moved)
prepend_tasks(target_tasks, include_block, dry_run=dry_run)
return True
def build_providers_index(all_roles: List[str]) -> Dict[str, Tuple[Set[str], Set[str]]]:
"""
Map role_name -> (provided_vars, handler_names)
"""
index: Dict[str, Tuple[Set[str], Set[str]]] = {}
for rd in all_roles:
rn = role_name_from_dir(rd)
index[rn] = (collect_role_defined_vars(rd), collect_role_handler_names(rd))
return index
def main():
parser = argparse.ArgumentParser(
description="Move unnecessary meta dependencies to guarded include_role for performance (preserve comments/quotes)."
)
parser.add_argument(
"--project-root",
default=os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")),
help="Path to project root (default: two levels up from this script).",
)
parser.add_argument(
"--role",
dest="only_role",
default=None,
help="Only process a specific role name (e.g., 'docker-core').",
)
parser.add_argument(
"--dry-run",
action="store_true",
help="Analyze and print planned changes without modifying files.",
)
args = parser.parse_args()
roles = iter_role_dirs(args.project_root)
if not roles:
print(f"[ERR] No roles found under {roles_root(args.project_root)}", file=sys.stderr)
sys.exit(2)
providers_index = build_providers_index(roles)
changed_any = False
for role_dir in roles:
changed = process_role(role_dir, providers_index, args.only_role, args.dry_run)
changed_any = changed_any or changed
if not changed_any:
print("[OK] No unnecessary meta dependencies to move (per heuristic).")
else:
if args.dry_run:
print("[DRY-RUN] Completed analysis. No files were changed.")
else:
print("[OK] Finished moving unnecessary dependencies.")
if __name__ == "__main__":
main()

View File

@@ -1,4 +1,7 @@
 from ansible.errors import AnsibleFilterError
+import sys, os
+sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
+from module_utils.entity_name_utils import get_entity_name

 class FilterModule(object):
     def filters(self):
@@ -30,8 +33,9 @@ class FilterModule(object):
             )
             return values

-        def default_domain(app_id, primary):
-            return f"{app_id}.{primary}"
+        def default_domain(app_id:str, primary:str):
+            subdomain = get_entity_name(app_id)
+            return f"{subdomain}.{primary}"

         # 1) Compute canonical domains per app (always as a list)
         canonical_map = {}

View File

@@ -1,27 +1,11 @@
 #!/usr/bin/python
-import os
-import sys
-from ansible.errors import AnsibleFilterError
+import sys, os
+sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
+from module_utils.get_url import get_url

 class FilterModule(object):
-    ''' Infinito.Nexus application config extraction filters '''
     def filters(self):
-        return {'get_url': self.get_url}
-
-    def get_url(self, domains, application_id, protocol):
-        # 1) add the module_utils directory to the path
-        plugin_dir = os.path.dirname(__file__)
-        project_root = os.path.dirname(plugin_dir)
-        module_utils = os.path.join(project_root, 'module_utils')
-        if module_utils not in sys.path:
-            sys.path.append(module_utils)
-
-        # 2) now import domain_utils
-        try:
-            from domain_utils import get_domain
-        except ImportError as e:
-            raise AnsibleFilterError(f"could not import domain_utils: {e}")
-
-        # 3) validation and call
-        if not isinstance(protocol, str):
-            raise AnsibleFilterError("Protocol must be a string")
-        return f"{protocol}://{ get_domain(domains, application_id) }"
+        return {
+            'get_url': get_url,
+        }

View File

@@ -1,4 +1,9 @@
-INFINITO_ENVIRONMENT: "production"
+INFINITO_ENVIRONMENT: "production" # Possible values: production, development
+
+# If true, sensitive credentials will be masked or hidden from all Ansible task logs
+# Recommendet to set to true
+# @todo needs to be implemented everywhere
+MASK_CREDENTIALS_IN_LOGS: true

 HOST_CURRENCY: "EUR"
 HOST_TIMEZONE: "UTC"
@@ -46,20 +51,21 @@ certbot_acme_challenge_method: "cloudflare"
 certbot_credentials_dir: /etc/certbot
 certbot_credentials_file: "{{ certbot_credentials_dir }}/{{ certbot_acme_challenge_method }}.ini"
 certbot_dns_api_token: "" # Define in inventory file: More information here: group_vars/all/docs/CLOUDFLARE_API_TOKEN.md
-certbot_dns_propagation_wait_seconds: 40 # How long should the script wait for DNS propagation before continuing
+certbot_dns_propagation_wait_seconds: 300 # How long should the script wait for DNS propagation before continuing
 certbot_flavor: san # Possible options: san (recommended, with a dns flavor like cloudflare, or hetzner), wildcard(doesn't function with www redirect), dedicated

 # Path where Certbot stores challenge webroot files
 letsencrypt_webroot_path: "/var/lib/letsencrypt/"

 # Base directory containing Certbot configuration, account data, and archives
 letsencrypt_base_path: "/etc/letsencrypt/"

 # Symlink directory for the current active certificate and private key
 letsencrypt_live_path: "{{ letsencrypt_base_path }}live/"

 ## Docker Role Specific Parameters
-docker_restart_policy: "unless-stopped"
+DOCKER_RESTART_POLICY: "unless-stopped"
+DOCKER_VARS_FILE: "{{ playbook_dir }}/roles/docker-compose/vars/docker-compose.yml"

 # default value if not set via CLI (-e) or in playbook vars
 allowed_applications: []

logs/.gitkeep Normal file
View File

logs/README.md Normal file
View File

@@ -0,0 +1,2 @@
# Logs
This folder contains the log files.

main.py
View File

@@ -96,34 +96,71 @@ def play_start_intro():
     Sound.play_infinito_intro_sound()

+from multiprocessing import Process, get_start_method, set_start_method
 import time

+def _call_sound(method_name: str):
+    # Re-import inside child to (re)init audio backend cleanly under 'spawn'
+    from module_utils.sounds import Sound as _Sound
+    getattr(_Sound, method_name)()
+
+def _play_in_child(method_name: str) -> bool:
+    p = Process(target=_call_sound, args=(method_name,))
+    p.start(); p.join()
+    if p.exitcode != 0:
+        try:
+            # Visible diagnostics when the child process crashes or fails
+            print(color_text(f"[sound] child '{method_name}' exitcode={p.exitcode}", Fore.YELLOW))
+        except Exception:
+            pass
+    return p.exitcode == 0
+
 def failure_with_warning_loop(no_signal, sound_enabled, alarm_timeout=60):
     """
-    On failure: Plays warning sound in a loop.
-    Aborts after alarm_timeout seconds and exits with code 1.
+    Plays a warning sound in a loop until timeout; Ctrl+C stops earlier.
+    Sound playback is isolated in a child process to avoid segfaulting the main process.
     """
     if not no_signal:
-        Sound.play_finished_failed_sound()
+        # Try the failure jingle once; ignore failures
+        _play_in_child("play_finished_failed_sound")
     print(color_text("Warning: command failed. Press Ctrl+C to stop warnings.", Fore.RED))
     start = time.monotonic()
     try:
-        while True:
-            if not no_signal:
-                Sound.play_warning_sound()
-            if time.monotonic() - start > alarm_timeout:
-                print(color_text(f"Alarm aborted after {alarm_timeout} seconds.", Fore.RED))
-                sys.exit(1)
+        while time.monotonic() - start <= alarm_timeout:
+            if no_signal:
+                time.sleep(0.5)
+                continue
+            ok = _play_in_child("play_warning_sound")
+            # If audio stack is broken, stay silent but avoid busy loop
+            if not ok:
+                time.sleep(0.8)
+        print(color_text(f"Alarm aborted after {alarm_timeout} seconds.", Fore.RED))
+        sys.exit(1)
     except KeyboardInterrupt:
         print(color_text("Warnings stopped by user.", Fore.YELLOW))
+        sys.exit(1)

 if __name__ == "__main__":
+    # IMPORTANT: use 'spawn' so the child re-initializes audio cleanly
+    try:
+        if get_start_method(allow_none=True) != "spawn":
+            set_start_method("spawn", force=True)
+    except RuntimeError:
+        pass
+
+    # Prefer system audio backend by default (prevents simpleaudio segfaults in child processes)
+    os.environ.setdefault("INFINITO_AUDIO_BACKEND", "system")
+
     # Parse flags
     sound_enabled = '--sound' in sys.argv and (sys.argv.remove('--sound') or True)
     no_signal = '--no-signal' in sys.argv and (sys.argv.remove('--no-signal') or True)
-    log_enabled = '--log' in sys.argv and (sys.argv.remove('--log') or True)
+    # Guaranty that --log is passed to deploy command
+    log_enabled = '--log' in sys.argv
+    if log_enabled and (len(sys.argv) < 2 or sys.argv[1] != 'deploy'):
+        sys.argv.remove('--log')
     git_clean = '--git-clean' in sys.argv and (sys.argv.remove('--git-clean') or True)
     infinite = '--infinite' in sys.argv and (sys.argv.remove('--infinite') or True)
     alarm_timeout = 60
@@ -135,19 +172,6 @@ if __name__ == "__main__":
         except Exception:
             print(color_text("Invalid --alarm-timeout value!", Fore.RED))
             sys.exit(1)
-
-    # Segfault handler
-    def segv_handler(signum, frame):
-        if not no_signal:
-            Sound.play_finished_failed_sound()
-            try:
-                while True:
-                    Sound.play_warning_sound()
-            except KeyboardInterrupt:
-                pass
-        print(color_text("Segmentation fault detected. Exiting.", Fore.RED))
-        sys.exit(1)
-    signal.signal(signal.SIGSEGV, segv_handler)

     # Play intro melody if requested
     if sound_enabled:
@@ -182,6 +206,7 @@ if __name__ == "__main__":
         print(color_text("  --log            Log all proxied command output to logfile.log", Fore.YELLOW))
         print(color_text("  --git-clean      Remove all Git-ignored files before running", Fore.YELLOW))
         print(color_text("  --infinite       Run the proxied command in an infinite loop", Fore.YELLOW))
+        print(color_text("  --alarm-timeout  Stop warnings and exit after N seconds (default: 60)", Fore.YELLOW))
         print(color_text("  -h, --help       Show this help message and exit", Fore.YELLOW))
         print()
         print(color_text("Available commands:", Style.BRIGHT))

module_utils/get_url.py Normal file
View File

@@ -0,0 +1,18 @@
from ansible.errors import AnsibleFilterError
import sys, os

def get_url(domains, application_id, protocol):
    plugin_dir = os.path.dirname(__file__)
    project_root = os.path.dirname(plugin_dir)
    module_utils = os.path.join(project_root, 'module_utils')
    if module_utils not in sys.path:
        sys.path.append(module_utils)
    try:
        from domain_utils import get_domain
    except ImportError as e:
        raise AnsibleFilterError(f"could not import domain_utils: {e}")
    if not isinstance(protocol, str):
        raise AnsibleFilterError("Protocol must be a string")
    return f"{protocol}://{ get_domain(domains, application_id) }"

View File

@@ -22,6 +22,7 @@ else:
 try:
     import numpy as np
     import simpleaudio as sa
+    import shutil, subprocess, tempfile, wave as wavmod

     class Sound:
         """
         Sound effects for the application with enhanced complexity.
@@ -63,10 +64,49 @@
             middle = (w1_end + w2_start).astype(np.int16)
             return np.concatenate([w1[:-fade_len], middle, w2[fade_len:]])

+        @staticmethod
+        def _play_via_system(wave: np.ndarray):
+            # Write a temp WAV and play it via available system player
+            with tempfile.NamedTemporaryFile(delete=False, suffix=".wav") as f:
+                fname = f.name
+            try:
+                with wavmod.open(fname, "wb") as w:
+                    w.setnchannels(1)
+                    w.setsampwidth(2)
+                    w.setframerate(Sound.fs)
+                    w.writeframes(wave.tobytes())
+
+                def run(cmd):
+                    return subprocess.run(
+                        cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL
+                    ).returncode == 0
+
+                # Preferred order: PipeWire → PulseAudio → ALSA → ffplay
+                if shutil.which("pw-play") and run(["pw-play", fname]): return
+                if shutil.which("paplay") and run(["paplay", fname]): return
+                if shutil.which("aplay") and run(["aplay", "-q", fname]): return
+                if shutil.which("ffplay") and run(["ffplay", "-autoexit", "-nodisp", fname]): return
+
+                # Last resort if no system player exists: simpleaudio
+                play_obj = sa.play_buffer(wave, 1, 2, Sound.fs)
+                play_obj.wait_done()
+            finally:
+                try: os.unlink(fname)
+                except Exception: pass
+
         @staticmethod
         def _play(wave: np.ndarray):
-            play_obj = sa.play_buffer(wave, 1, 2, Sound.fs)
-            play_obj.wait_done()
+            # Switch via env: system | simpleaudio | auto (default)
+            backend = os.getenv("INFINITO_AUDIO_BACKEND", "auto").lower()
+            if backend == "system":
+                return Sound._play_via_system(wave)
+            if backend == "simpleaudio":
+                play_obj = sa.play_buffer(wave, 1, 2, Sound.fs)
+                play_obj.wait_done()
+                return
+            # auto: try simpleaudio first; if it fails, fall back to system
+            try:
+                play_obj = sa.play_buffer(wave, 1, 2, Sound.fs)
+                play_obj.wait_done()
+            except Exception:
+                Sound._play_via_system(wave)

         @classmethod
         def play_infinito_intro_sound(cls):

View File

@@ -1,5 +1,9 @@
 # run_once_cmp_db_docker_proxy: deactivated

+- include_tasks: "{{ playbook_dir }}/tasks/utils/load_handlers.yml"
+  vars:
+    handler_role_name: "svc-prx-openresty"
+
 - name: "For '{{ application_id }}': load docker and db"
   include_role:
     name: cmp-db-docker
@@ -8,5 +12,5 @@
   include_role:
     name: srv-proxy-6-6-domain
   vars:
     domain: "{{ domains | get_domain(application_id) }}"
     http_port: "{{ ports.localhost.http[application_id] }}"

View File

@@ -7,13 +7,13 @@
- name: "For '{{ application_id }}': Load database variables" - name: "For '{{ application_id }}': Load database variables"
include_vars: "{{ item }}" include_vars: "{{ item }}"
loop: loop:
- "{{ cmp_db_docker_vars_file_docker }}" # Important to load docker variables first so that database can use them - "{{ DOCKER_VARS_FILE }}" # Important to load docker variables first so that database can use them
- "{{ cmp_db_docker_vars_file_db }}" # Important to load them before docker role so that backup can use them - "{{ cmp_db_docker_vars_file_db }}" # Important to load them before docker role so that backup can use them
- name: "For '{{ application_id }}': Load cmp-docker-oauth2"
include_role:
name: cmp-docker-oauth2
- name: "For '{{ application_id }}': Load central RDBMS" - name: "For '{{ application_id }}': Load central RDBMS"
include_role: include_role:
name: cmp-rdbms name: cmp-rdbms
- name: "For '{{ application_id }}': Load cmp-docker-oauth2"
include_role:
name: cmp-docker-oauth2

View File

@@ -1,2 +1 @@
 cmp_db_docker_vars_file_db: "{{ playbook_dir }}/roles/cmp-rdbms/vars/database.yml"
-cmp_db_docker_vars_file_docker: "{{ playbook_dir }}/roles/docker-compose/vars/docker-compose.yml"

View File

@@ -4,11 +4,10 @@
   include_role:
     name: docker-compose

-- name: "set oauth2_proxy_application_id (Needed due to lazzy loading issue)"
-  set_fact:
-    oauth2_proxy_application_id: "{{ application_id }}"
-  when: applications | get_app_conf(application_id, 'features.oauth2', False)
-
-- name: "include the web-app-oauth2-proxy role {{domain}}"
-  include_tasks: "{{ playbook_dir }}/roles/web-app-oauth2-proxy/tasks/main.yml"
+- block:
+    - name: "set oauth2_proxy_application_id (Needed due to lazzy loading issue)"
+      set_fact:
+        oauth2_proxy_application_id: "{{ application_id }}"
+    - name: "include the web-app-oauth2-proxy role {{domain}}"
+      include_tasks: "{{ playbook_dir }}/roles/web-app-oauth2-proxy/tasks/main.yml"
   when: applications | get_app_conf(application_id, 'features.oauth2', False)

View File

@@ -1,11 +1,18 @@
 # run_once_cmp_rdbms: deactivated

 # The following env file will just be used from the dedicated mariadb container
 # and not the central one
-- name: "For '{{ application_id }}': Create {{database_env}}"
-  template:
-    src: "env/{{database_type}}.env.j2"
-    dest: "{{database_env}}"
-  notify: docker compose up
+- block:
+    - name: "Ensure env dir exists: {{ docker_compose.directories.env }}"
+      ansible.builtin.file:
+        path: "{{ docker_compose.directories.env }}"
+        state: directory
+        mode: "0755"
+    - name: "For '{{ application_id }}': Create {{database_env}}"
+      template:
+        src: "env/{{database_type}}.env.j2"
+        dest: "{{database_env}}"
+      notify: docker compose up
   when: not applications | get_app_conf(application_id, 'features.central_database', False)

 - name: "For '{{ application_id }}': Create central database"
@@ -16,4 +23,4 @@
   when: applications | get_app_conf(application_id, 'features.central_database', False)

 - name: "For '{{ application_id }}': Add Entry for Backup Procedure"
-  include_tasks: "{{ playbook_dir }}/roles/sys-bkp-docker-2-loc/tasks/seed-database-to-backup.yml"
+  include_tasks: "{{ playbook_dir }}/roles/sys-bkp-docker-2-loc/tasks/04_seed-database-to-backup.yml"

View File

@@ -6,7 +6,7 @@
   logging:
     driver: journald
   image: mariadb
-  restart: {{docker_restart_policy}}
+  restart: {{DOCKER_RESTART_POLICY}}
   env_file:
     - {{database_env}}
   command: "--transaction-isolation=READ-COMMITTED --binlog-format=ROW"

View File

@@ -6,7 +6,7 @@
   container_name: {{ application_id | get_entity_name }}-database
   env_file:
     - {{database_env}}
-  restart: {{docker_restart_policy}}
+  restart: {{DOCKER_RESTART_POLICY}}
   healthcheck:
     test: ["CMD-SHELL", "pg_isready -U {{database_name}}"]
     interval: 10s

View File

@@ -0,0 +1,2 @@
# Docker
docker_pull_git_repository: false # Deactivated here to don't inhire this

View File

@@ -12,10 +12,10 @@
 - name: setup git
   command: gitconfig --merge-option rebase --name "{{users.client.full_name}}" --email "{{users.client.email}}" --website "{{users.client.website}}" --signing gpg --gpg-key "{{users.client.gpg}}"
-  when: run_once_gitconfig is not defined
+  when: run_once_desk_git is not defined
   become: false

 - name: run the gitconfig tasks once
   set_fact:
-    run_once_gitconfig: true
-  when: run_once_gitconfig is not defined
+    run_once_desk_git: true
+  when: run_once_desk_git is not defined

View File

@@ -1,4 +1,3 @@
----
 galaxy_info:
   author: "Kevin Veen-Birkenbach"
   description: "Installs caffeine-ng and configures it to autostart for preventing screen sleep on GNOME."
@@ -6,12 +5,10 @@ galaxy_info:
   license_url: "https://s.veen.world/cncl"
   min_ansible_version: "2.4"
   platforms:
     - name: Archlinux
       versions:
         - all
   galaxy_tags:
     - caffeine
     - autostart
     - archlinux
-dependencies:
-  - dev-yay

View File

@@ -0,0 +1,21 @@
- name: Include dependency 'dev-yay'
include_role:
name: dev-yay
when: run_once_dev_yay is not defined
- name: Install caffeine
kewlfft.aur.aur:
use: yay
name:
- caffeine-ng
become: false
- name: Create autostart directory if it doesn't exist
file:
path: "{{auto_start_directory}}"
state: directory
- name: Copy caffeine.desktop file to autostart directory
template:
src: caffeine.desktop.j2
dest: "{{auto_start_directory}}caffeine.desktop"

View File

@@ -1,17 +1,4 @@
----
-- name: Install caffeine
-  kewlfft.aur.aur:
-    use: yay
-    name:
-      - caffeine-ng
-  become: false
-
-- name: Create autostart directory if it doesn't exist
-  file:
-    path: "{{auto_start_directory}}"
-    state: directory
-
-- name: Copy caffeine.desktop file to autostart directory
-  template:
-    src: caffeine.desktop.j2
-    dest: "{{auto_start_directory}}caffeine.desktop"
+- block:
+    - include_tasks: 01_core.yml
+    - include_tasks: utils/run_once.yml
+  when: run_once_desk_gnome_caffeine is not defined

View File

@@ -1,4 +1,3 @@
----
 galaxy_info:
   author: "Kevin Veen-Birchenbach"
   description: "Installs the qBittorrent torrent client via AUR on Arch Linux."
@@ -9,15 +8,12 @@ galaxy_info:
     Consulting & Coaching Solutions
     https://www.veen.world
   galaxy_tags:
     - qbittorrent
     - torrent
   repository: "https://github.com/kevinveenbirkenbach/infinito-nexus"
   issue_tracker_url: "https://github.com/kevinveenbirkenbach/infinito-nexus/issues"
   documentation: "https://github.com/kevinveenbirkenbach/infinito-nexus/tree/main/roles/desk-qbittorrent"
   min_ansible_version: "2.9"
   platforms:
     - name: Archlinux
-      versions: [ all ]
+      versions: [all]
-dependencies:
-  - dev-yay

View File

@@ -1,5 +1,14 @@
-- name: install torrent software
-  kewlfft.aur.aur:
-    use: yay
-    name:
-      - qbittorrent
+- block:
+    - name: Include dependency 'dev-yay'
+      include_role:
+        name: dev-yay
+      when: run_once_dev_yay is not defined
+
+    - name: install torrent software
+      kewlfft.aur.aur:
+        use: yay
+        name:
+          - qbittorrent
+
+    - include_tasks: utils/run_once.yml
+  when: run_once_desk_qbittorrent is not defined

View File

@@ -1,5 +1,5 @@
 - name: Install RetroArch and assets
-  pacman:
+  community.general.pacman:
     name: "{{ retroarch_packages }}"
     state: present
     update_cache: yes

View File

@@ -1,4 +1,3 @@
----
 galaxy_info:
   author: "Kevin Veen-Birkenbach"
   description: "Installs the Spotify client."
@@ -10,18 +9,16 @@ galaxy_info:
     https://www.veen.world
   min_ansible_version: "2.9"
   platforms:
     - name: Archlinux
       versions:
         - rolling
   galaxy_tags:
     - spotify
     - aur
     - music
     - streaming
     - archlinux
     - client
   repository: https://github.com/kevinveenbirkenbach/infinito-nexus
   issue_tracker_url: https://github.com/kevinveenbirkenbach/infinito-nexus/issues
   documentation: "https://docs.infinito.nexus/"
-dependencies:
-  - dev-yay

View File

@@ -1,5 +1,13 @@
-- name: install spotify
-  kewlfft.aur.aur:
-    use: yay
-    name:
-      - spotify
+- block:
+    - name: Include dependency 'dev-yay'
+      include_role:
+        name: dev-yay
+      when: run_once_dev_yay is not defined
+
+    - name: install spotify
+      kewlfft.aur.aur:
+        use: yay
+        name:
+          - spotify
+
+    - include_tasks: utils/run_once.yml
+  when: run_once_desk_spotify is not defined

View File

@@ -1,30 +1,28 @@
----
 galaxy_info:
   author: "Kevin Veen-Birkenbach"
   description: "Persistent SSH agent setup for GNOME Wayland sessions with SSH configuration pulled from Git."
   license: "Infinito.Nexus NonCommercial License (CNCL)"
   license_url: "https://s.veen.world/cncl"
   company: |
     Kevin Veen-Birkenbach
     Consulting & Coaching Solutions
     https://www.veen.world
   min_ansible_version: "2.9"
   platforms:
     - name: Archlinux
       versions:
         - rolling
   galaxy_tags:
     - ssh
     - agent
     - systemd
     - gnome
     - wayland
     - archlinux
     - keepassxc
   repository: https://github.com/kevinveenbirkenbach/infinito-nexus
   issue_tracker_url: https://github.com/kevinveenbirkenbach/infinito-nexus/issues
   documentation: "https://docs.infinito.nexus/"
 dependencies:
   - desk-git
-  - dev-shell

View File

@@ -0,0 +1,51 @@
- name: Include dependency 'dev-shell'
include_role:
name: dev-shell
when: run_once_dev_shell is not defined
- name: pull ssh repository from {{desk_ssh_repository}}
git:
repo: "{{desk_ssh_repository}}"
dest: "$HOME/.ssh"
update: yes
register: git_result
ignore_errors: true
become: false
- name: Warn if repo is not reachable
debug:
msg: "Warning: Repository is not reachable."
when: git_result.failed and enable_debug | bool
- name: Ensure systemd user directory exists
file:
path: "$HOME/.config/systemd/user"
state: directory
mode: "0700"
become: false
- name: Deploy ssh-agent systemd unit file
template:
src: ssh-agent.service.j2
dest: "$HOME/.config/systemd/user/ssh-agent.service"
mode: "0644"
become: false
- name: Enable and start ssh-agent service
systemd:
name: ssh-agent.service
scope: user
enabled: true
state: started
daemon_reload: true
become: false
- name: Ensure ~/.profile exists with common environment
lineinfile:
path: "$HOME/.profile"
line: 'export SSH_AUTH_SOCK="$XDG_RUNTIME_DIR/ssh-agent.socket"'
insertafter: EOF
state: present
create: yes
mode: "0644"
become: false

View File

@@ -1,46 +1,4 @@
-- name: pull ssh repository from {{desk_ssh_repository}}
-  git:
-    repo: "{{desk_ssh_repository}}"
-    dest: "$HOME/.ssh"
-    update: yes
-  register: git_result
-  ignore_errors: true
-  become: false
-
-- name: Warn if repo is not reachable
-  debug:
-    msg: "Warning: Repository is not reachable."
-  when: git_result.failed and enable_debug | bool
-
-- name: Ensure systemd user directory exists
-  file:
-    path: "$HOME/.config/systemd/user"
-    state: directory
-    mode: "0700"
-  become: false
-
-- name: Deploy ssh-agent systemd unit file
-  template:
-    src: ssh-agent.service.j2
-    dest: "$HOME/.config/systemd/user/ssh-agent.service"
-    mode: "0644"
-  become: false
-
-- name: Enable and start ssh-agent service
-  systemd:
-    name: ssh-agent.service
-    scope: user
-    enabled: true
-    state: started
-    daemon_reload: true
-  become: false
-
-- name: Ensure ~/.profile exists with common environment
-  lineinfile:
-    path: "$HOME/.profile"
-    line: 'export SSH_AUTH_SOCK="$XDG_RUNTIME_DIR/ssh-agent.socket"'
-    insertafter: EOF
-    state: present
-    create: yes
-    mode: "0644"
-  become: false
+- block:
+    - include_tasks: 01_core.yml
+    - include_tasks: utils/run_once.yml
+  when: run_once_desk_ssh is not defined

View File

@@ -1,7 +1,7 @@
 ---
 - name: Install VirtualBox and kernel modules
   become: true
-  pacman:
+  community.general.pacman:
     name: >-
       virtualbox
       {{ lookup('pipe', "pacman -Qsq '^linux' | grep '^linux[0-9]*[-rt]*$' | awk '{print $1 \"-virtualbox-host-modules\"}' ORS=' '") }}

View File

@@ -1,4 +1,3 @@
----
 galaxy_info:
   author: "Kevin Veen-Birchenbach"
   description: "Installs the Zoom video conferencing client via AUR on Arch Linux."
@@ -9,16 +8,13 @@ galaxy_info:
     Consulting & Coaching Solutions
     https://www.veen.world
   galaxy_tags:
     - zoom
     - video
     - conferencing
   repository: "https://github.com/kevinveenbirkenbach/infinito-nexus"
   issue_tracker_url: "https://github.com/kevinveenbirkenbach/infinito-nexus/issues"
   documentation: "https://github.com/kevinveenbirkenbach/infinito-nexus/tree/main/roles/desk-zoom"
   min_ansible_version: "2.9"
   platforms:
     - name: Archlinux
-      versions: [ all ]
+      versions: [all]
-dependencies:
-  - dev-yay

View File

@@ -1,6 +1,13 @@
-- name: install video conference software
-  kewlfft.aur.aur:
-    use: yay
-    name:
-      - zoom
-  become: false
+- block:
+    - name: Include dependency 'dev-yay'
+      include_role:
+        name: dev-yay
+
+    - name: install video conference software
+      kewlfft.aur.aur:
+        use: yay
+        name:
+          - zoom
+      become: false
+
+    - include_tasks: utils/run_once.yml
+  when: run_once_desk_zoom is not defined

View File

@@ -1,11 +1,11 @@
 ---
 - name: Install fakeroot
-  pacman:
+  community.general.pacman:
     name: fakeroot
     state: present
-  when: run_once_fakeroot is not defined
+  when: run_once_dev_fakeroot is not defined
 - name: run the fakeroot tasks once
   set_fact:
-    run_once_fakeroot: true
-  when: run_once_fakeroot is not defined
+    run_once_dev_fakeroot: true
+  when: run_once_dev_fakeroot is not defined

View File

@@ -1,6 +1,10 @@
 ---
-- name: Install GCC
-  pacman:
-    name: gcc
-    state: present
-    update_cache: yes
+- block:
+    - name: Install GCC
+      community.general.pacman:
+        name: gcc
+        state: present
+        update_cache: yes
+    - set_fact:
+        run_once_dev_gcc: true
+  when: run_once_dev_gcc is not defined

View File

@@ -6,7 +6,7 @@ This Ansible role installs Git on the target system using the Pacman package man
 ## Overview
-Designed for Arch Linux systems, this role leverages the `pacman` module to install Git. It uses a fact (`run_once_git`) to control task execution, ensuring that the Git installation is performed only once per run.
+Designed for Arch Linux systems, this role leverages the `pacman` module to install Git. It uses a fact (`run_once_dev_git`) to control task execution, ensuring that the Git installation is performed only once per run.
 ## Purpose

View File

@@ -1,10 +1,10 @@
 - name: install git
-  pacman:
+  community.general.pacman:
     name: git
     state: present
-  when: run_once_git is not defined
+  when: run_once_dev_git is not defined
 - name: run the git tasks once
   set_fact:
-    run_once_git: true
-  when: run_once_git is not defined
+    run_once_dev_git: true
+  when: run_once_dev_git is not defined

View File

@@ -5,4 +5,4 @@
   template: src=locale.conf dest=/etc/locale.conf
 - name: Generate locales
   shell: locale-gen
-  become: yes
+  become: true

View File

@@ -1,4 +1,4 @@
 - name: install make
-  pacman:
+  community.general.pacman:
     name: make
     state: present

View File

@@ -1,4 +1,3 @@
---
galaxy_info: galaxy_info:
author: "Kevin Veen-Birkenbach" author: "Kevin Veen-Birkenbach"
description: "Installs the python-pip package to provide the Python package manager, ensuring that Python packages can be installed reliably on the target system." description: "Installs the python-pip package to provide the Python package manager, ensuring that Python packages can be installed reliably on the target system."
@@ -10,17 +9,15 @@ galaxy_info:
https://www.veen.world https://www.veen.world
min_ansible_version: "2.9" min_ansible_version: "2.9"
platforms: platforms:
- name: Archlinux - name: Archlinux
versions: versions:
- rolling - rolling
galaxy_tags: galaxy_tags:
- python - python
- pip - pip
- package - package
- installation - installation
- automation - automation
repository: "https://github.com/kevinveenbirkenbach/infinito-nexus" repository: "https://github.com/kevinveenbirkenbach/infinito-nexus"
issue_tracker_url: "https://github.com/kevinveenbirkenbach/infinito-nexus/issues" issue_tracker_url: "https://github.com/kevinveenbirkenbach/infinito-nexus/issues"
documentation: "https://docs.infinito.nexus" documentation: "https://docs.infinito.nexus"
dependencies:
- dev-gcc

View File

@@ -1,11 +1,15 @@
----
-- name: python pip install
-  pacman:
-    name: python-pip
-    state: present
-  when: run_once_python_pip is not defined
-- name: run the python_pip tasks once
-  set_fact:
-    run_once_python_pip: true
-  when: run_once_python_pip is not defined
+- block:
+    - include_role:
+        name: dev-gcc
+      when: run_once_dev_gcc is not defined
+    - name: python pip install
+      community.general.pacman:
+        name: python-pip
+        state: present
+    - include_tasks: utils/run_once.yml
+      vars:
+        flush_handlers: false
+  when: run_once_dev_python_pip is not defined

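This hunk shows the recurring replacement for meta dependencies in this changeset: the dependency moves out of meta/main.yml and becomes a guarded include_role in the task file, so the provider role is parsed and run at most once. Isolated, the before/after taken from the dev-python-pip diffs above:

# before: roles/dev-python-pip/meta/main.yml
dependencies:
  - dev-gcc

# after: roles/dev-python-pip/tasks/main.yml
- include_role:
    name: dev-gcc
  when: run_once_dev_gcc is not defined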
View File

@@ -1,11 +1,11 @@
 ---
 - name: python-yaml install
-  pacman:
+  community.general.pacman:
     name: python-yaml
     state: present
-  when: run_once_python_yaml is not defined
+  when: run_once_dev_python_yaml is not defined
 - name: run the python_yaml tasks once
   set_fact:
-    run_once_python_yaml: true
-  when: run_once_python_yaml is not defined
+    run_once_dev_python_yaml: true
+  when: run_once_dev_python_yaml is not defined

View File

@@ -1,20 +1,25 @@
 ---
-- name: Ensure ~/.bash_profile sources ~/.profile
-  lineinfile:
-    path: "$HOME/.bash_profile"
-    line: '[ -f ~/.profile ] && . ~/.profile'
-    insertafter: EOF
-    state: present
-    create: yes
-    mode: "0644"
-  become: false
-- name: Ensure ~/.zprofile sources ~/.profile
-  lineinfile:
-    path: "$HOME/.zprofile"
-    line: '[ -f ~/.profile ] && . ~/.profile'
-    insertafter: EOF
-    state: present
-    create: yes
-    mode: "0644"
-  become: false
+- block:
+    - name: Ensure ~/.bash_profile sources ~/.profile
+      lineinfile:
+        path: "$HOME/.bash_profile"
+        line: '[ -f ~/.profile ] && . ~/.profile'
+        insertafter: EOF
+        state: present
+        create: yes
+        mode: "0644"
+      become: false
+    - name: Ensure ~/.zprofile sources ~/.profile
+      lineinfile:
+        path: "$HOME/.zprofile"
+        line: '[ -f ~/.profile ] && . ~/.profile'
+        insertafter: EOF
+        state: present
+        create: yes
+        mode: "0644"
+      become: false
+    - set_fact:
+        run_once_dev_shell: true
+  when: run_once_dev_shell is not defined

View File

@@ -1,4 +1,3 @@
---
galaxy_info: galaxy_info:
author: "Kevin Veen-Birkenbach" author: "Kevin Veen-Birkenbach"
description: "Installs the AUR helper yay and configures an aur_builder user with appropriate sudo privileges to facilitate AUR package management on Arch Linux systems." description: "Installs the AUR helper yay and configures an aur_builder user with appropriate sudo privileges to facilitate AUR package management on Arch Linux systems."
@@ -10,20 +9,16 @@ galaxy_info:
https://www.veen.world https://www.veen.world
min_ansible_version: "2.9" min_ansible_version: "2.9"
platforms: platforms:
- name: Archlinux - name: Archlinux
versions: versions:
- rolling - rolling
galaxy_tags: galaxy_tags:
- aur - aur
- yay - yay
- archlinux - archlinux
- package-management - package-management
repository: "https://github.com/kevinveenbirkenbach/infinito-nexus" repository: "https://github.com/kevinveenbirkenbach/infinito-nexus"
issue_tracker_url: "https://github.com/kevinveenbirkenbach/infinito-nexus/issues" issue_tracker_url: "https://github.com/kevinveenbirkenbach/infinito-nexus/issues"
documentation: "https://docs.infinito.nexus" documentation: "https://docs.infinito.nexus"
logo: logo:
class: "fas fa-archive" class: "fas fa-archive"
dependencies:
- dev-fakeroot
- dev-git
- dev-base-devel

View File

@@ -0,0 +1,47 @@
- name: Include dependencies
include_role:
name: '{{ item }}'
loop:
- dev-fakeroot
- dev-git
- dev-base-devel
- name: install yay
community.general.pacman:
name:
- base-devel
- patch
state: present
- name: Create the `aur_builder` user
become: true
ansible.builtin.user:
name: aur_builder
create_home: yes
group: wheel
- name: Allow the `aur_builder` user to run `sudo pacman` without a password
become: true
ansible.builtin.lineinfile:
path: /etc/sudoers.d/11-install-aur_builder
line: 'aur_builder ALL=(ALL) NOPASSWD: /usr/bin/pacman'
create: yes
validate: 'visudo -cf %s'
- name: Clone yay from AUR
become: true
become_user: aur_builder
git:
repo: https://aur.archlinux.org/yay.git
dest: /home/aur_builder/yay
clone: yes
update: yes
- name: Build and install yay
become: true
become_user: aur_builder
shell: |
cd /home/aur_builder/yay
makepkg -si --noconfirm
args:
creates: /usr/bin/yay

View File

@@ -1,39 +1,5 @@
-- name: install yay
-  community.general.pacman:
-    name:
-      - base-devel
-      - patch
-    state: present
-- name: Create the `aur_builder` user
-  become: yes
-  ansible.builtin.user:
-    name: aur_builder
-    create_home: yes
-    group: wheel
-- name: Allow the `aur_builder` user to run `sudo pacman` without a password
-  become: yes
-  ansible.builtin.lineinfile:
-    path: /etc/sudoers.d/11-install-aur_builder
-    line: 'aur_builder ALL=(ALL) NOPASSWD: /usr/bin/pacman'
-    create: yes
-    validate: 'visudo -cf %s'
-- name: Clone yay from AUR
-  become: yes
-  become_user: aur_builder
-  git:
-    repo: https://aur.archlinux.org/yay.git
-    dest: /home/aur_builder/yay
-    clone: yes
-    update: yes
-- name: Build and install yay
-  become: yes
-  become_user: aur_builder
-  shell: |
-    cd /home/aur_builder/yay
-    makepkg -si --noconfirm
-  args:
-    creates: /usr/bin/yay
+- block:
+    - include_tasks: 01_core.yml
+    - set_fact:
+        run_once_dev_yay: true
+  when: run_once_dev_yay is not defined

View File

@@ -1,4 +1,3 @@
---
galaxy_info: galaxy_info:
author: "Kevin Veen-Birkenbach" author: "Kevin Veen-Birkenbach"
description: "Manages Docker Compose project structure and execution logic on Arch Linux." description: "Manages Docker Compose project structure and execution logic on Arch Linux."
@@ -10,19 +9,17 @@ galaxy_info:
https://www.veen.world https://www.veen.world
min_ansible_version: "2.9" min_ansible_version: "2.9"
platforms: platforms:
- name: Archlinux - name: Archlinux
versions: versions:
- rolling - rolling
galaxy_tags: galaxy_tags:
- docker - docker
- compose - compose
- container - container
- infrastructure - infrastructure
- devops - devops
- automation - automation
- archlinux - archlinux
repository: https://github.com/kevinveenbirkenbach/infinito-nexus repository: https://github.com/kevinveenbirkenbach/infinito-nexus
issue_tracker_url: https://github.com/kevinveenbirkenbach/infinito-nexus/issues issue_tracker_url: https://github.com/kevinveenbirkenbach/infinito-nexus/issues
documentation: "https://docs.infinito.nexus/" documentation: "https://docs.infinito.nexus/"
dependencies:
- docker-container # Necessary for template use

View File

@@ -6,6 +6,7 @@
   git:
     repo: "{{ docker_repository_address }}"
     dest: "{{ docker_repository_path }}"
+    version: "{{ docker_repository_branch | default('main') }}"
     depth: 1
     update: yes
     recursive: yes

View File

@@ -3,12 +3,16 @@
   args:
     chdir: "{{ docker_compose.directories.instance }}"
   register: docker_ps
-  changed_when: (docker_ps.stdout | trim) == ""
+  changed_when: >
+    (docker_ps.stdout | trim) == ""
   # The failed when catches the condition when an docker compose file will be dynamicly build after the file routine
+  # Also if an .env file isn't present
   failed_when: >
     docker_ps.rc != 0
-    and
-    'no configuration file provided: not found' not in (docker_ps.stderr | default(''))
+    and (
+      (docker_ps.stderr | default(''))
+      | regex_search('(no configuration file provided|no such file or directory|env file .* not found)') is none
+    )
   when: >
     not (
      docker_compose_template.changed | default(false)

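The reworked failed_when above only fails the task when stderr does not contain one of the expected "compose file not there yet" messages. A tiny self-contained illustration of that regex_search test (the stderr value is invented for the example):

- name: Demonstrate the regex_search-based error filter (illustrative only)
  vars:
    example_stderr: "env file /opt/app/.env not found"
  debug:
    msg: "{{ example_stderr | regex_search('(no configuration file provided|no such file or directory|env file .* not found)') is none }}"
  # Prints false: the message matches one of the expected errors, so failed_when would not trigger on it.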
View File

@@ -1,10 +1,15 @@
-# run_once_docker_compose: deactivate
+- block:
+    - include_role:
+        name: docker-container
+      when: run_once_docker_container is not defined
+    - include_tasks: utils/run_once.yml
+  when: run_once_docker_compose is not defined
 - name: "Load variables from {{ docker_compose_variable_file }} for whole play"
   include_vars: "{{ docker_compose_variable_file }}"
 - name: "reset (if enabled)"
   include_tasks: 01_reset.yml
   when: mode_reset | bool
 # This could lead to problems in docker-compose directories which are based on a git repository
@@ -16,18 +21,17 @@
     mode: '0755'
   with_dict: "{{ docker_compose.directories }}"
-- name: "Include routines to set up a git repository based installaion for '{{application_id}}'."
+- name: "Include routines to set up a git repository based installation for '{{application_id}}'."
   include_tasks: "02_repository.yml"
   when: docker_pull_git_repository | bool
-- name: "Include routines file management routines for '{{application_id}}'."
-  include_tasks: "03_files.yml"
-  when: not docker_compose_skipp_file_creation | bool
-- name: "Ensure that {{ docker_compose.directories.instance }} is up"
-  include_tasks: "04_ensure_up.yml"
-  when: not docker_compose_skipp_file_creation | bool
+- block:
+    - name: "Include file management routines for '{{application_id}}'."
+      include_tasks: "03_files.yml"
+    - name: "Ensure that {{ docker_compose.directories.instance }} is up"
+      include_tasks: "04_ensure_up.yml"
+  when: not docker_compose_skipp_file_creation | bool
 - name: "flush database, docker and proxy for '{{ application_id }}'"
   meta: flush_handlers
   when: docker_compose_flush_handlers | bool

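A recurring simplification in these task files: tasks that shared an identical when: condition are grouped into a single block so the condition is written (and evaluated) once. Isolated, the before/after from the hunk above:

# before
- include_tasks: "03_files.yml"
  when: not docker_compose_skipp_file_creation | bool
- include_tasks: "04_ensure_up.yml"
  when: not docker_compose_skipp_file_creation | bool

# after
- block:
    - include_tasks: "03_files.yml"
    - include_tasks: "04_ensure_up.yml"
  when: not docker_compose_skipp_file_creation | bool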
View File

@@ -0,0 +1 @@
# Dummy file for handler import

View File

@@ -1 +1 @@
-docker_compose_variable_file: "{{ role_path }}/vars/docker-compose.yml"
+docker_compose_variable_file: "{{ role_path }}/vars/docker-compose.yml"

View File

@@ -1,4 +1,3 @@
---
galaxy_info: galaxy_info:
author: "Kevin Veen-Birchenbach" author: "Kevin Veen-Birchenbach"
description: "Provides shared Jinja2 snippets for Docker Compose service definitions (base, networks, healthchecks, depends_on)." description: "Provides shared Jinja2 snippets for Docker Compose service definitions (base, networks, healthchecks, depends_on)."
@@ -9,16 +8,13 @@ galaxy_info:
Consulting & Coaching Solutions Consulting & Coaching Solutions
https://www.veen.world https://www.veen.world
galaxy_tags: galaxy_tags:
- docker - docker
- compose - compose
- jinja2 - jinja2
repository: "https://github.com/kevinveenbirkenbach/infinito-nexus" repository: "https://github.com/kevinveenbirkenbach/infinito-nexus"
issue_tracker_url: "https://github.com/kevinveenbirkenbach/infinito-nexus/issues" issue_tracker_url: "https://github.com/kevinveenbirkenbach/infinito-nexus/issues"
documentation: "https://github.com/kevinveenbirkenbach/infinito-nexus/tree/main/roles/docker-container" documentation: "https://github.com/kevinveenbirkenbach/infinito-nexus/tree/main/roles/docker-container"
min_ansible_version: "2.9" min_ansible_version: "2.9"
platforms: platforms:
- name: Any - name: Any
versions: [ all ] versions: [all]
dependencies:
- docker-core

View File

@@ -0,0 +1,6 @@
- block:
    - include_role:
        name: docker-core
      when: run_once_docker_core is not defined
    - include_tasks: utils/run_once.yml
  when: run_once_docker_container is not defined

View File

@@ -1,6 +1,6 @@
 {# Base for docker services #}
-restart: {{docker_restart_policy}}
+restart: {{DOCKER_RESTART_POLICY}}
 {% if application_id | has_env %}
 env_file:
   - "{{docker_compose.files.env}}"

View File

@@ -26,10 +26,3 @@ galaxy_info:
   issue_tracker_url: "https://github.com/kevinveenbirkenbach/infinito-nexus/issues"
   documentation: "https://github.com/kevinveenbirkenbach/infinito-nexus/docker"
-dependencies:
-  - sys-bkp-docker-2-loc
-  - user-administrator
-  - sys-hlth-docker-container
-  - sys-hlth-docker-volumes
-  - sys-rpr-docker-soft
-  - sys-rpr-docker-hard

View File

@@ -0,0 +1,26 @@
- name: Include backup, repair, health and user dependencies
include_role:
name: "{{ item }}"
loop:
- sys-bkp-docker-2-loc
- user-administrator
- sys-hlth-docker-container
- sys-hlth-docker-volumes
- sys-rpr-docker-soft
- sys-rpr-docker-hard
- name: docker & docker compose install
community.general.pacman:
name:
- 'docker'
- 'docker-compose'
state: present
notify: docker restart
- name: "create {{path_docker_compose_instances}}"
file:
path: "{{path_docker_compose_instances}}"
state: directory
mode: 0700
owner: root
group: root

View File

@@ -1,31 +1,5 @@
 ---
-- name: docker & docker compose install
-  pacman:
-    name: ['docker','docker-compose']
-    state: present
-  notify: docker restart
-  when: run_once_docker is not defined
-- name: "create {{path_docker_compose_instances}}"
-  file:
-    path: "{{path_docker_compose_instances}}"
-    state: directory
-    mode: 0700
-    owner: administrator
-    group: administrator
-  when: run_once_docker is not defined
-- name: Set docker_enabled to true, to activate svc-opt-ssd-hdd
-  set_fact:
-    docker_enabled: true
-  when: run_once_docker is not defined
-- name: flush docker service
-  meta: flush_handlers
-  when: run_once_docker is not defined
-- name: run the docker tasks once
-  set_fact:
-    run_once_docker: true
-  when: run_once_docker is not defined
+- block:
+    - include_tasks: 01_core.yml
+    - include_tasks: utils/run_once.yml
+  when: run_once_docker_core is not defined

View File

@@ -1,4 +1,3 @@
---
galaxy_info: galaxy_info:
author: "Kevin Veen-Birchenbach" author: "Kevin Veen-Birchenbach"
description: "Installs Epson multifunction printer drivers and scanning utilities (escpr, imagescan) via Pacman and AUR on Arch Linux." description: "Installs Epson multifunction printer drivers and scanning utilities (escpr, imagescan) via Pacman and AUR on Arch Linux."
@@ -9,16 +8,13 @@ galaxy_info:
Consulting & Coaching Solutions Consulting & Coaching Solutions
https://www.veen.world https://www.veen.world
galaxy_tags: galaxy_tags:
- epson - epson
- printer - printer
- scanner - scanner
repository: "https://github.com/kevinveenbirkenbach/infinito-nexus" repository: "https://github.com/kevinveenbirkenbach/infinito-nexus"
issue_tracker_url: "https://github.com/kevinveenbirkenbach/infinito-nexus/issues" issue_tracker_url: "https://github.com/kevinveenbirkenbach/infinito-nexus/issues"
documentation: "https://github.com/kevinveenbirkenbach/infinito-nexus/tree/main/roles/drv-epson-multiprinter" documentation: "https://github.com/kevinveenbirkenbach/infinito-nexus/tree/main/roles/drv-epson-multiprinter"
min_ansible_version: "2.9" min_ansible_version: "2.9"
platforms: platforms:
- name: Archlinux - name: Archlinux
versions: [ all ] versions: [all]
dependencies:
- dev-yay

View File

@@ -0,0 +1,19 @@
- name: Include dependency 'dev-yay'
include_role:
name: dev-yay
when: run_once_dev_yay is not defined
- name: install AUR packages for epson
kewlfft.aur.aur:
use: yay
name:
- epson-printer-utility
- imagescan-plugin-networkscan
- epson-inkjet-printer-escpr
- epson-inkjet-printer-escpr2
become: false
- name: install imagescan
community.general.pacman:
name: imagescan
state: present

View File

@@ -1,13 +1,5 @@
-- name: install AUR packages for epson
-  kewlfft.aur.aur:
-    use: yay
-    name:
-      - epson-printer-utility
-      - imagescan-plugin-networkscan
-      - epson-inkjet-printer-escpr
-      - epson-inkjet-printer-escpr2
-  become: false
-- name: install imagescan
-  community.general.pacman:
-    name: imagescan
-    state: present
+- block:
+    - include_tasks: 01_core.yml
+    - set_fact:
+        run_once_drv_epson_multiprinter: true
+  when: run_once_drv_epson_multiprinter is not defined

View File

@@ -1,6 +1,6 @@
 ---
 - name: Restart systemd-logind
-  become: yes
+  become: true
   systemd:
     name: systemd-logind
     state: restarted

View File

@@ -10,7 +10,7 @@
   become: true
 - name: Configure systemd lid switch behavior to hibernate on lid close (battery), lock on AC and docked
-  become: yes
+  become: true
   lineinfile:
     path: /etc/systemd/logind.conf
     regexp: '^#?HandleLidSwitch='
@@ -20,7 +20,7 @@
   become: true
 - name: Configure systemd to lock session when lid is closed on external power
-  become: yes
+  become: true
   lineinfile:
     path: /etc/systemd/logind.conf
     regexp: '^#?HandleLidSwitchExternalPower='
@@ -30,7 +30,7 @@
   become: true
 - name: Configure systemd to lock session when lid is closed while docked
-  become: yes
+  become: true
   lineinfile:
     path: /etc/systemd/logind.conf
     regexp: '^#?HandleLidSwitchDocked='

View File

@@ -4,9 +4,6 @@ galaxy_info:
description: "Ansible role to set up dynamic keyboard color change on MSI laptops" description: "Ansible role to set up dynamic keyboard color change on MSI laptops"
min_ansible_version: 2.9 min_ansible_version: 2.9
platforms: platforms:
- name: Linux - name: Linux
versions: versions:
- all - all
dependencies:
- dev-yay
- sys-alm-compose

View File

@@ -0,0 +1,38 @@
- include_role:
name: '{{ item }}'
loop:
- dev-yay
- sys-alm-compose
- name: Install MSI packages
kewlfft.aur.aur:
use: yay
name:
- msi-perkeyrgb
- name: Copy keyboard_color.sh script
copy:
src: keyboard_color.py
dest: /opt/keyboard_color.py
mode: 0755
- name: Copy keyboard-color.infinito.service file
template:
src: keyboard-color.service.j2
dest: /etc/systemd/system/keyboard-color.infinito.service
mode: 0644
- name: Reload systemd daemon
systemd:
daemon_reload: yes
- name: "set 'service_name' to '{{ role_name }}'"
set_fact:
service_name: "{{ role_name }}"
- name: "include role for sys-timer for {{service_name}}"
include_role:
name: sys-timer
vars:
on_calendar: "{{on_calendar_msi_keyboard_color}}"
persistent: "true"

View File

@@ -1,33 +1,5 @@
----
-- name: Install MSI packages
-  kewlfft.aur.aur:
-    use: yay
-    name:
-      - msi-perkeyrgb
-- name: Copy keyboard_color.sh script
-  copy:
-    src: keyboard_color.py
-    dest: /opt/keyboard_color.py
-    mode: 0755
-- name: Copy keyboard-color.infinito.service file
-  template:
-    src: keyboard-color.service.j2
-    dest: /etc/systemd/system/keyboard-color.infinito.service
-    mode: 0644
-- name: Reload systemd daemon
-  systemd:
-    daemon_reload: yes
-- name: "set 'service_name' to '{{ role_name }}'"
-  set_fact:
-    service_name: "{{ role_name }}"
-- name: "include role for sys-timer for {{service_name}}"
-  include_role:
-    name: sys-timer
-  vars:
-    on_calendar: "{{on_calendar_msi_keyboard_color}}"
-    persistent: "true"
+- block:
+    - include_tasks: 01_core.yml
+    - set_fact:
+        run_once_drv_msi_keyboard_color: true
+  when: run_once_drv_msi_keyboard_color is not defined

View File

@@ -1,5 +1,5 @@
 - name: install wireguard for Arch
-  pacman:
+  community.general.pacman:
     name: wireguard-tools
     state: present
   when: ansible_os_family == "Archlinux"

View File

@@ -2,7 +2,7 @@
 ## Description
-This role manages WireGuard on a client system. It sets up essential services and scripts to configure and optimize WireGuard connectivity. Additionally, it provides a link to an Administration document for creating client keys.
+This role manages WireGuard on a client system. It sets up essential services and scripts to configure and optimize WireGuard connectivity.
 ## Overview

View File

@@ -0,0 +1 @@
internet_interfaces: []

View File

@@ -3,22 +3,20 @@ galaxy_info:
description: "Installs and updates packages using pkgmgr." description: "Installs and updates packages using pkgmgr."
license: "Infinito.Nexus NonCommercial License (CNCL)" license: "Infinito.Nexus NonCommercial License (CNCL)"
license_url: "https://s.veen.world/cncl" license_url: "https://s.veen.world/cncl"
company: | company: |
Kevin Veen-Birkenbach Kevin Veen-Birkenbach
Consulting & Coaching Solutions Consulting & Coaching Solutions
https://www.veen.world https://www.veen.world
min_ansible_version: "2.9" min_ansible_version: "2.9"
platforms: platforms:
- name: Archlinux - name: Archlinux
versions: versions:
- rolling - rolling
galaxy_tags: galaxy_tags:
- package - package
- update - update
- archlinux - archlinux
- infinito - infinito
repository: https://github.com/kevinveenbirkenbach/package-manager repository: https://github.com/kevinveenbirkenbach/package-manager
issue_tracker_url: https://github.com/kevinveenbirkenbach/package-manager/issues issue_tracker_url: https://github.com/kevinveenbirkenbach/package-manager/issues
documentation: https://github.com/kevinveenbirkenbach/package-manager documentation: https://github.com/kevinveenbirkenbach/package-manager
dependencies:
- pkgmgr

View File

@@ -0,0 +1,9 @@
- name: Include dependency 'pkgmgr'
  include_role:
    name: pkgmgr
  when: run_once_pkgmgr is not defined
- name: update pkgmgr
  shell: |
    source ~/.venvs/pkgmgr/bin/activate
    pkgmgr update pkgmgr

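A side note on the update task above: source is a bash builtin, and Ansible's shell module may hand the command to /bin/sh depending on the host. The pkgmgr setup task further below pins executable: /bin/bash for a similar command; if the venv activation here ever misbehaves, the same hedge would apply. A sketch, not part of the diff:

- name: update pkgmgr
  shell: |
    source ~/.venvs/pkgmgr/bin/activate
    pkgmgr update pkgmgr
  args:
    executable: /bin/bash   # assumption: ensures 'source' is available; not in the original task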
View File

@@ -1,7 +1,7 @@
-- name: update pkgmgr
-  shell: |
-    source ~/.venvs/pkgmgr/bin/activate
-    pkgmgr update pkgmgr
+- block:
+    - include_tasks: 01_core.yml
+    - set_fact:
+        run_once_pkgmgr_install: true
   when: run_once_pkgmgr_install is not defined
 - name: update {{ package_name }}
@@ -13,7 +13,3 @@
   changed_when: "'No command defined and neither main.sh nor main.py found' not in pkgmgr_update_result.stdout"
   failed_when: pkgmgr_update_result.rc != 0 and 'No command defined and neither main.sh nor main.py found' not in pkgmgr_update_result.stdout
-- name: mark pkgmgr update as done
-  set_fact:
-    run_once_pkgmgr_install: true
-  when: run_once_pkgmgr_install is not defined

View File

@@ -1,4 +1,3 @@
---
galaxy_info: galaxy_info:
author: "Kevin Veen-Birkenbach" author: "Kevin Veen-Birkenbach"
description: "Automates the installation of Kevin's Package Manager — a tool for managing multiple repositories and automating Git operations." description: "Automates the installation of Kevin's Package Manager — a tool for managing multiple repositories and automating Git operations."
@@ -10,29 +9,25 @@ galaxy_info:
https://www.veen.world https://www.veen.world
min_ansible_version: "2.9" min_ansible_version: "2.9"
platforms: platforms:
- name: Debian - name: Debian
versions: versions:
- stretch - stretch
- buster - buster
- bullseye - bullseye
- name: Ubuntu - name: Ubuntu
versions: versions:
- bionic - bionic
- focal - focal
- jammy - jammy
- name: Archlinux - name: Archlinux
versions: versions:
- rolling - rolling
galaxy_tags: galaxy_tags:
- pkgmgr - pkgmgr
- automation - automation
- git - git
- repositories - repositories
- development - development
repository: https://github.com/kevinveenbirkenbach/package-manager repository: https://github.com/kevinveenbirkenbach/package-manager
issue_tracker_url: https://github.com/kevinveenbirkenbach/package-manager/issues issue_tracker_url: https://github.com/kevinveenbirkenbach/package-manager/issues
documentation: https://github.com/kevinveenbirkenbach/package-manager documentation: https://github.com/kevinveenbirkenbach/package-manager
dependencies:
- dev-git
- dev-make
- dev-python-yaml

View File

@@ -0,0 +1,50 @@
- name: Include dependencies
include_role:
name: '{{ item }}'
loop:
- dev-git
- dev-make
- dev-python-yaml
- name: Ensure GitHub host key is in known_hosts
known_hosts:
path: "~/.ssh/known_hosts"
name: github.com
key: "{{ lookup('pipe', 'ssh-keyscan -t ed25519 github.com | grep -v \"^#\"') }}"
become: true
- name: Create installation directory for Kevin's Package Manager
file:
path: "{{ pkgmgr_install_path }}"
state: directory
mode: '0755'
become: true
- name: Clone Kevin's Package Manager repository
git:
repo: "{{ pkgmgr_repo_url }}"
dest: "{{ pkgmgr_install_path }}"
version: "HEAD"
force: yes
become: true
- name: Ensure main.py is executable
file:
path: "{{ pkgmgr_install_path }}/main.py"
mode: '0755'
become: true
- name: create config.yaml
template:
src: config.yaml.j2
dest: "{{pkgmgr_config_path}}"
become: true
- name: Run the Package Manager install command to create an alias for Kevins package manager
shell: |
source ~/.venvs/pkgmgr/bin/activate
make setup
args:
chdir: "{{ pkgmgr_install_path }}"
executable: /bin/bash
become: true

View File

@@ -1,53 +1,5 @@
 ---
-- name: Ensure GitHub host key is in known_hosts
-  known_hosts:
-    path: "~/.ssh/known_hosts"
-    name: github.com
-    key: "{{ lookup('pipe', 'ssh-keyscan -t ed25519 github.com | grep -v \"^#\"') }}"
-  become: yes
-- name: Create installation directory for Kevin's Package Manager
-  file:
-    path: "{{ pkgmgr_install_path }}"
-    state: directory
-    mode: '0755'
-  become: yes
-  when: run_once_package_manager is not defined
-- name: Clone Kevin's Package Manager repository
-  git:
-    repo: "{{ pkgmgr_repo_url }}"
-    dest: "{{ pkgmgr_install_path }}"
-    version: "HEAD"
-    force: yes
-  become: yes
-  when: run_once_package_manager is not defined
-- name: Ensure main.py is executable
-  file:
-    path: "{{ pkgmgr_install_path }}/main.py"
-    mode: '0755'
-  become: yes
-  when: run_once_package_manager is not defined
-- name: create config.yaml
-  template:
-    src: config.yaml.j2
-    dest: "{{pkgmgr_config_path}}"
-  become: yes
-  when: run_once_package_manager is not defined
-- name: Run the Package Manager install command to create an alias for Kevins package manager
-  shell: |
-    source ~/.venvs/pkgmgr/bin/activate
-    make setup
-  args:
-    chdir: "{{ pkgmgr_install_path }}"
-    executable: /bin/bash
-  become: yes
-  when: run_once_package_manager is not defined
-- name: run run_once_package_manager tasks once
-  set_fact:
-    run_once_package_managerr: true
-  when: run_once_package_manager is not defined
+- block:
+    - include_tasks: 01_core.yml
+    - include_tasks: utils/run_once.yml
+  when: run_once_pkgmgr is not defined

View File

@@ -1,4 +1,3 @@
---
galaxy_info: galaxy_info:
author: "Kevin Veen-Birkenbach" author: "Kevin Veen-Birkenbach"
description: "Automated domain provisioning (TLS, vHost, OAuth2) for Nginx." description: "Automated domain provisioning (TLS, vHost, OAuth2) for Nginx."
@@ -10,18 +9,16 @@ galaxy_info:
https://www.veen.world https://www.veen.world
min_ansible_version: "2.9" min_ansible_version: "2.9"
platforms: platforms:
- name: Archlinux - name: Archlinux
versions: versions:
- rolling - rolling
galaxy_tags: galaxy_tags:
- nginx - nginx
- tls - tls
- letsencrypt - letsencrypt
- oauth2 - oauth2
- automation - automation
- archlinux - archlinux
repository: https://github.com/kevinveenbirkenbach/infinito-nexus repository: https://github.com/kevinveenbirkenbach/infinito-nexus
issue_tracker_url: https://github.com/kevinveenbirkenbach/infinito-nexus/issues issue_tracker_url: https://github.com/kevinveenbirkenbach/infinito-nexus/issues
documentation: "https://docs.infinito.nexus/" documentation: "https://docs.infinito.nexus/"
dependencies:
- srv-proxy-7-4-core

View File

@@ -0,0 +1,50 @@
# Initialize cache dict (works within the play; persists if fact cache is enabled)
- name: "Ensure cf_zone_ids cache dict exists"
set_fact:
cf_zone_ids: "{{ cf_zone_ids | default({}) }}"
# Use cached zone_id if available for the apex (to_primary_domain)
- name: "Load cf_zone_id from cache if present"
set_fact:
cf_zone_id: "{{ (cf_zone_ids | default({})).get(domain | to_primary_domain, false) }}"
# Only look up from Cloudflare if we still don't have it
- name: "Ensure Cloudflare Zone ID is known for {{ domain }}"
vars:
cf_api_url: "https://api.cloudflare.com/client/v4/zones"
ansible.builtin.uri:
url: "{{ cf_api_url }}?name={{ domain | to_primary_domain }}"
method: GET
headers:
Authorization: "Bearer {{ certbot_dns_api_token }}"
Content-Type: "application/json"
return_content: yes
register: cf_zone_lookup_dev
changed_when: false
when:
- not cf_zone_id
- name: "Set fact cf_zone_id and update cache dict"
set_fact:
cf_zone_id: "{{ cf_zone_lookup_dev.json.result[0].id }}"
cf_zone_ids: >-
{{ (cf_zone_ids | default({}))
| combine({ (domain | to_primary_domain): cf_zone_lookup_dev.json.result[0].id }) }}
when:
- not cf_zone_id
- cf_zone_lookup_dev.json.result | length > 0
- name: "Fail if no Cloudflare zone found for {{ domain | to_primary_domain }}"
ansible.builtin.fail:
msg: "No Cloudflare zone found for {{ domain | to_primary_domain }} — aborting!"
when:
- not cf_zone_id
- cf_zone_lookup_dev.json.result | length == 0
- name: activate cloudflare cache development mode
include_tasks: "cloudflare/02_enable_cf_dev_mode.yml"
when: (INFINITO_ENVIRONMENT | lower) == 'development'
- name: purge cloudflare domain cache
include_tasks: "cloudflare/01_cleanup.yml"
when: mode_cleanup | bool

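The 01_cloudflare.yml tasks above cache zone ids in a dict so that repeated domains on the same apex hit the Cloudflare API only once per play. A minimal illustration of the two cache operations involved, with invented values:

- name: Cache a zone id for an apex domain (values are invented)
  set_fact:
    cf_zone_ids: "{{ cf_zone_ids | default({}) | combine({'example.org': 'abc123'}) }}"
- name: Read it back later instead of calling the API again
  debug:
    msg: "{{ (cf_zone_ids | default({})).get('example.org', false) }}"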
View File

@@ -1,33 +0,0 @@
- name: "Lookup Cloudflare Zone ID for {{ domain }}"
vars:
cf_api_url: "https://api.cloudflare.com/client/v4/zones"
ansible.builtin.uri:
url: "{{ cf_api_url }}?name={{ domain | to_primary_domain }}"
method: GET
headers:
Authorization: "Bearer {{ certbot_dns_api_token }}"
Content-Type: "application/json"
return_content: yes
register: cf_zone_lookup
when: dns_provider == "cloudflare"
- name: "Set fact cf_zone_id"
set_fact:
cf_zone_id: "{{ cf_zone_lookup.json.result[0].id }}"
when:
- dns_provider == "cloudflare"
- cf_zone_lookup.json.result | length > 0
- name: "Purge everything from Cloudflare cache for domain {{ domain }}"
ansible.builtin.uri:
url: "https://api.cloudflare.com/client/v4/zones/{{ cf_zone_id }}/purge_cache"
method: POST
headers:
Authorization: "Bearer {{ certbot_dns_api_token }}"
Content-Type: "application/json"
body:
purge_everything: true
body_format: json
return_content: yes
register: cf_purge
when: dns_provider == "cloudflare"

View File

@@ -0,0 +1,12 @@
- name: "Purge everything from Cloudflare cache for domain {{ domain }}"
ansible.builtin.uri:
url: "https://api.cloudflare.com/client/v4/zones/{{ cf_zone_id }}/purge_cache"
method: POST
headers:
Authorization: "Bearer {{ certbot_dns_api_token }}"
Content-Type: "application/json"
body:
purge_everything: true
body_format: json
return_content: yes
register: cf_purge

View File

@@ -0,0 +1,32 @@
# roles/srv-proxy-6-6-domain/tasks/02_enable_cf_dev_mode.yml
---
# Enables Cloudflare Development Mode (bypasses cache for ~3 hours).
# Uses the same auth token as in 01_cleanup.yml: certbot_dns_api_token
# Assumes `domain` and (optionally) `cf_zone_id` are available.
# Safe to run repeatedly; only changes when the mode is not already "on".
- name: "Read current Cloudflare development_mode setting"
ansible.builtin.uri:
url: "https://api.cloudflare.com/client/v4/zones/{{ cf_zone_id }}/settings/development_mode"
method: GET
headers:
Authorization: "Bearer {{ certbot_dns_api_token }}"
Content-Type: "application/json"
return_content: yes
register: cf_dev_mode_current
- name: "Enable Cloudflare Development Mode"
ansible.builtin.uri:
url: "https://api.cloudflare.com/client/v4/zones/{{ cf_zone_id }}/settings/development_mode"
method: PATCH
headers:
Authorization: "Bearer {{ certbot_dns_api_token }}"
Content-Type: "application/json"
body:
value: "on"
body_format: json
return_content: yes
register: cf_dev_mode_enable
changed_when: >
cf_dev_mode_current.json.result.value is defined and
cf_dev_mode_current.json.result.value != 'on'

View File

@@ -1,12 +1,22 @@
-# run_once_srv_proxy_6_6_domain: deactivated
-- name: Cleanup Domain
-  include_tasks: cleanup.yml
-  when: mode_cleanup | bool
+- block:
+    - name: Include dependency 'srv-proxy-7-4-core'
+      include_role:
+        name: srv-proxy-7-4-core
+      when: run_once_srv_proxy_7_4_core is not defined
+    - include_tasks: utils/run_once.yml
+  when: run_once_srv_proxy_6_6_domain is not defined
+- include_tasks: "01_cloudflare.yml"
+  when: dns_provider == "cloudflare"
+- include_tasks: "{{ playbook_dir }}/tasks/utils/load_handlers.yml"
+  vars:
+    handler_role_name: "svc-prx-openresty"
 - name: "include role for {{ domain }} to receive certificates and do the modification routines"
   include_role:
     name: srv-web-7-6-composer
 - name: "Copy nginx config to {{ configuration_destination }}"
   template:
     src: "{{ vhost_template_src }}"
@@ -14,19 +24,19 @@
   register: nginx_conf
   notify: restart openresty
-- name: "Check if {{ domains | get_domain(application_id) }} is reachable (only if config unchanged)"
-  uri:
-    url: "{{ domains | get_url(application_id, WEB_PROTOCOL) }}"
-  register: site_check
-  failed_when: false
-  changed_when: false
-  when: not nginx_conf.changed
-- name: Restart nginx if site is down
-  command:
-    cmd: "true"
-  notify: restart openresty
-  when:
-    - not nginx_conf.changed
-    - site_check.status is defined
-    - not site_check.status in [200,301,302]
+- block:
+    - name: "Check if {{ domains | get_domain(application_id) }} is reachable (only if config unchanged)"
+      uri:
+        url: "{{ domains | get_url(application_id, WEB_PROTOCOL) }}"
+      register: site_check
+      failed_when: false
+      changed_when: false
+    - name: Restart nginx if site is down
+      command:
+        cmd: "true"
+      notify: restart openresty
+      when:
+        - site_check.status is defined
+        - not site_check.status in [200,301,302]
+  when: not nginx_conf.changed

View File

@@ -1,4 +1,3 @@
---
galaxy_info: galaxy_info:
author: "Kevin Veen-Birchenbach" author: "Kevin Veen-Birchenbach"
description: "Deploys Lets Encrypt certificates into Docker Compose Nginx setups via systemd service and timer." description: "Deploys Lets Encrypt certificates into Docker Compose Nginx setups via systemd service and timer."
@@ -9,17 +8,14 @@ galaxy_info:
Consulting & Coaching Solutions Consulting & Coaching Solutions
https://www.veen.world https://www.veen.world
galaxy_tags: galaxy_tags:
- nginx - nginx
- letsencrypt - letsencrypt
- docker - docker
- systemd - systemd
repository: "https://github.com/kevinveenbirkenbach/infinito-nexus" repository: "https://github.com/kevinveenbirkenbach/infinito-nexus"
issue_tracker_url: "https://github.com/kevinveenbirkenbach/infinito-nexus/issues" issue_tracker_url: "https://github.com/kevinveenbirkenbach/infinito-nexus/issues"
documentation: "https://github.com/kevinveenbirkenbach/infinito-nexus/tree/main/roles/srv-proxy-6-6-tls-deploy" documentation: "https://github.com/kevinveenbirkenbach/infinito-nexus/tree/main/roles/srv-proxy-6-6-tls-deploy"
min_ansible_version: "2.9" min_ansible_version: "2.9"
platforms: platforms:
- name: Any - name: Any
versions: [ all ] versions: [all]
dependencies:
- sys-alm-compose

View File

@@ -0,0 +1,10 @@
- name: Include dependency 'sys-alm-compose'
  include_role:
    name: sys-alm-compose
  when: run_once_sys_alm_compose is not defined
- name: add srv-proxy-6-6-tls-deploy.sh
  template:
    src: "srv-proxy-6-6-tls-deploy.sh.j2"
    dest: "{{nginx_docker_cert_deploy_script}}"
  notify: restart srv-proxy-6-6-tls-deploy.infinito.service

View File

@@ -1,20 +1,19 @@
-- name: add srv-proxy-6-6-tls-deploy.sh
-  template:
-    src: "srv-proxy-6-6-tls-deploy.sh.j2"
-    dest: "{{nginx_docker_cert_deploy_script}}"
-  when: run_once_nginx_docker_cert_deploy is not defined
-  notify: restart srv-proxy-6-6-tls-deploy.infinito.service
+- block:
+    - include_tasks: 01_core.yml
+    - set_fact:
+        run_once_srv_proxy_6_6_tls_deploy: true
+  when: run_once_srv_proxy_6_6_tls_deploy is not defined
 - name: "create {{cert_mount_directory}}"
   file:
     path: "{{cert_mount_directory}}"
     state: directory
     mode: 0755
   notify: restart srv-proxy-6-6-tls-deploy.infinito.service
 - name: configure srv-proxy-6-6-tls-deploy.infinito.service
   template:
     src: "srv-proxy-6-6-tls-deploy.service.j2"
     dest: "/etc/systemd/system/srv-proxy-6-6-tls-deploy.{{application_id}}.infinito.service"
   notify: restart srv-proxy-6-6-tls-deploy.infinito.service
@@ -22,11 +21,7 @@
   include_role:
     name: sys-timer
   vars:
     on_calendar: "{{on_calendar_deploy_certificates}}"
     service_name: "srv-proxy-6-6-tls-deploy.{{application_id}}"
     persistent: "true"
-- name: run the run_once_srv_proxy_6_6_tls_deploy tasks once
-  set_fact:
-    run_once_backup_directory_validator: true
-  when: run_once_nginx_docker_cert_deploy is not defined

View File

@@ -1,4 +1,3 @@
---
galaxy_info: galaxy_info:
author: "Kevin Veen-Birkenbach" author: "Kevin Veen-Birkenbach"
description: "Nginx reverse proxy front-end for local Docker applications." description: "Nginx reverse proxy front-end for local Docker applications."
@@ -10,19 +9,16 @@ galaxy_info:
https://www.veen.world https://www.veen.world
min_ansible_version: "2.9" min_ansible_version: "2.9"
platforms: platforms:
- name: Archlinux - name: Archlinux
versions: versions:
- rolling - rolling
galaxy_tags: galaxy_tags:
- nginx - nginx
- docker - docker
- reverse_proxy - reverse_proxy
- web - web
- automation - automation
- archlinux - archlinux
repository: https://github.com/kevinveenbirkenbach/infinito-nexus repository: https://github.com/kevinveenbirkenbach/infinito-nexus
issue_tracker_url: https://github.com/kevinveenbirkenbach/infinito-nexus/issues issue_tracker_url: https://github.com/kevinveenbirkenbach/infinito-nexus/issues
documentation: "https://docs.infinito.nexus/" documentation: "https://docs.infinito.nexus/"
dependencies:
- srv-web-7-6-https
- srv-web-7-4-core

View File

@@ -0,0 +1,9 @@
- block:
    - name: Include dependencies
      include_role:
        name: '{{ item }}'
      loop:
        - srv-web-7-6-https
        - srv-web-7-4-core
    - include_tasks: utils/run_once.yml
  when: run_once_srv_proxy_7_4_core is not defined

View File

@@ -1,4 +1,3 @@
---
galaxy_info: galaxy_info:
author: "Kevin Veen-Birkenbach" author: "Kevin Veen-Birkenbach"
description: | description: |
@@ -11,21 +10,19 @@ galaxy_info:
https://www.veen.world https://www.veen.world
min_ansible_version: "2.9" min_ansible_version: "2.9"
platforms: platforms:
- name: Archlinux - name: Archlinux
versions: versions:
- rolling - rolling
galaxy_tags: galaxy_tags:
- nginx - nginx
- certbot - certbot
- letsencrypt - letsencrypt
- ssl - ssl
- tls - tls
- acme - acme
- https - https
- wildcard - wildcard
- automation - automation
repository: "https://github.com/kevinveenbirkenbach/infinito-nexus" repository: "https://github.com/kevinveenbirkenbach/infinito-nexus"
issue_tracker_url: "https://github.com/kevinveenbirkenbach/infinito-nexus/issues" issue_tracker_url: "https://github.com/kevinveenbirkenbach/infinito-nexus/issues"
documentation: "https://docs.infinito.nexus" documentation: "https://docs.infinito.nexus"
dependencies:
- srv-web-7-6-https

View File

@@ -1,32 +1,31 @@
-- name: Install certbundle
-  include_role:
-    name: pkgmgr-install
-  vars:
-    package_name: certbundle
-  when: run_once_san_certs is not defined
-- name: Generate SAN certificate with certbundle
-  command: >-
+- block:
+    - name: Install certbundle
+      include_role:
+        name: pkgmgr-install
+      vars:
+        package_name: certbundle
+    - name: Generate SAN certificate with certbundle
+      command: >-
         certbundle
         --domains "{{ current_play_domains_all | join(',') }}"
         --certbot-email "{{ users.administrator.email }}"
         --certbot-acme-challenge-method "{{ certbot_acme_challenge_method }}"
         --chunk-size 100
         {% if certbot_acme_challenge_method != 'webroot' %}
         --certbot-credentials-file "{{ certbot_credentials_file }}"
         --certbot-dns-propagation-seconds "{{ certbot_dns_propagation_wait_seconds }}"
         {% else %}
         --letsencrypt-webroot-path "{{ letsencrypt_webroot_path }}"
         {% endif %}
         {{ '--mode-test' if mode_test | bool else '' }}
       register: certbundle_result
       changed_when: "'Certificate not yet due for renewal' not in certbundle_result.stdout"
       failed_when: >
         certbundle_result.rc != 0
         and 'too many certificates' not in certbundle_result.stderr
-  when: run_once_san_certs is not defined
 - name: run the san tasks once
   set_fact:
     run_once_san_certs: true
   when: run_once_san_certs is not defined

View File

@@ -1,4 +1,10 @@
-# run_once_srv_web_6_6_tls_core: deactivated
+- block:
+    - name: Include dependency 'srv-web-7-6-https'
+      include_role:
+        name: srv-web-7-6-https
+      when: run_once_srv_web_7_6_https is not defined
+    - include_tasks: utils/run_once.yml
+  when: run_once_srv_web_6_6_tls_core is not defined
 - name: "Include flavor '{{ certbot_flavor }}' for '{{ domain }}'"
   include_tasks: "{{ role_path }}/tasks/flavors/{{ certbot_flavor }}.yml"
@@ -36,4 +42,4 @@
 - name: "Ensure ssl_cert_folder is set for domain {{ domain }}"
   fail:
     msg: "No certificate folder found for domain {{ domain }}"
   when: ssl_cert_folder is undefined or ssl_cert_folder is none

View File

@@ -1,4 +1,3 @@
---
galaxy_info: galaxy_info:
author: "Kevin Veen-Birkenbach" author: "Kevin Veen-Birkenbach"
description: | description: |
@@ -11,23 +10,20 @@ galaxy_info:
https://www.veen.world https://www.veen.world
min_ansible_version: "2.9" min_ansible_version: "2.9"
platforms: platforms:
- name: Archlinux - name: Archlinux
versions: versions:
- rolling - rolling
galaxy_tags: galaxy_tags:
- nginx - nginx
- certbot - certbot
- ssl - ssl
- tls - tls
- letsencrypt - letsencrypt
- https - https
- systemd - systemd
- automation - automation
repository: "https://github.com/kevinveenbirkenbach/infinito-nexus" repository: "https://github.com/kevinveenbirkenbach/infinito-nexus"
issue_tracker_url: "https://github.com/kevinveenbirkenbach/infinito-nexus/issues" issue_tracker_url: "https://github.com/kevinveenbirkenbach/infinito-nexus/issues"
documentation: "https://docs.infinito.nexus" documentation: "https://docs.infinito.nexus"
dependencies: dependencies:
- srv-web-7-7-certbot - sys-cln-certs
- srv-web-7-4-core
- sys-alm-compose
- sys-cln-certs

View File

@@ -0,0 +1,30 @@
- name: Include dependencies
include_role:
name: '{{ item }}'
loop:
- srv-web-7-7-certbot
- srv-web-7-4-core
- sys-alm-compose
- name: install certbot
community.general.pacman:
name:
- certbot-nginx
state: present
- name: configure srv-web-6-6-tls-renew.infinito.service
template:
src: srv-web-6-6-tls-renew.service.j2
dest: /etc/systemd/system/srv-web-6-6-tls-renew.infinito.service
notify: reload certbot service
- name: "set 'service_name' to '{{ role_name }}'"
set_fact:
service_name: "{{ role_name }}"
- name: "include role for sys-timer for {{service_name}}"
include_role:
name: sys-timer
vars:
on_calendar: "{{on_calendar_renew_lets_encrypt_certificates}}"
persistent: "true"

View File

@@ -1,31 +1,4 @@
-- name: install certbot
-  pacman:
-    name:
-      - certbot-nginx
-    state: present
-  when: run_once_nginx_certbot is not defined
-- name: configure srv-web-6-6-tls-renew.infinito.service
-  template:
-    src: srv-web-6-6-tls-renew.service.j2
-    dest: /etc/systemd/system/srv-web-6-6-tls-renew.infinito.service
-  notify: reload certbot service
-  when: run_once_nginx_certbot is not defined
-- name: "set 'service_name' to '{{ role_name }}'"
-  set_fact:
-    service_name: "{{ role_name }}"
-  when: run_once_nginx_certbot is not defined
-- name: "include role for sys-timer for {{service_name}}"
-  include_role:
-    name: sys-timer
-  vars:
-    on_calendar: "{{on_calendar_renew_lets_encrypt_certificates}}"
-    persistent: "true"
-  when: run_once_nginx_certbot is not defined
-- name: run the nginx_certbot tasks once
-  set_fact:
-    run_once_nginx_certbot: true
-  when: run_once_nginx_certbot is not defined
+- block:
+    - include_tasks: 01_core.yml
+    - include_tasks: utils/run_once.yml
+  when: run_once_srv_web_6_6_tls_renew is not defined

View File

@@ -18,7 +18,4 @@ galaxy_info:
     - performance
   repository: "https://github.com/kevinveenbirkenbach/infinito-nexus"
   issue_tracker_url: "https://github.com/kevinveenbirkenbach/infinito-nexus/issues"
   documentation: "https://github.com/kevinveenbirkenbach/infinito-nexus/roles/srv-web-7-4-core"
-dependencies:
-  - sys-hlth-webserver
-  - sys-hlth-csp

View File

@@ -0,0 +1,59 @@
- name: Include health dependencies
include_role:
name: "{{ item }}"
loop:
- sys-hlth-webserver
- sys-hlth-csp
vars:
flush_handlers: false
- name: Include openresty
# Outside of run_once block is necessary for handler loading
# Otherwise the when: condition from the block is added to the handlers
# Inside openresty their is a validation that it doesn't run multiple times
include_role:
name: svc-prx-openresty
public: false
# Explicit set to guaranty that application_id will not be overwritten.
# Should be anyhow the default case
when: run_once_svc_prx_openresty is not defined
- name: "reset (if enabled)"
include_tasks: 02_reset.yml
when: mode_reset | bool
- name: Ensure nginx configuration directories are present
file:
path: "{{ item }}"
state: directory
owner: "{{nginx.user}}"
group: "{{nginx.user}}"
mode: '0755'
recurse: yes
loop: >
{{
[ nginx.directories.configuration ] +
( nginx.directories.http.values() | list ) +
[ nginx.directories.streams ]
}}
- name: Ensure nginx data storage directories are present
file:
path: "{{ item }}"
state: directory
recurse: yes
owner: "{{nginx.user}}"
group: "{{nginx.user}}"
mode: '0755'
loop: >
{{ nginx.directories.data.values() | list }}
- name: "Include tasks to create cache directories"
include_tasks: 03_cache_directories.yml
- name: create nginx config file
template:
src: nginx.conf.j2
dest: "{{ nginx.files.configuration }}"
notify: restart openresty

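The nginx directory tasks in 01_core.yml above assemble their loop list by concatenating entries from the nginx.directories dict. A small standalone illustration of that Jinja pattern, with invented values:

- name: Show how the directory loop list is assembled (illustrative values)
  vars:
    nginx:
      directories:
        configuration: /etc/nginx/conf.d
        http: { servers: /etc/nginx/servers, maps: /etc/nginx/maps }
        streams: /etc/nginx/streams
  debug:
    msg: >-
      {{ [ nginx.directories.configuration ]
         + ( nginx.directories.http.values() | list )
         + [ nginx.directories.streams ] }}
  # Prints a flat list of all four directories, i.e. the items the file task loops over.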
View File

@@ -1,59 +1,5 @@
 ---
-- name: Include openresty
-  include_role:
-    name: svc-prx-openresty
-    public: false
-  # Explicit set to guaranty that application_id will not be overwritten.
-  # Should be anyhow the default case
-  when: run_once_srv_web_core is not defined
-- name: "reset (if enabled)"
-  include_tasks: reset.yml
-  when: mode_reset | bool and run_once_srv_web_core is not defined
-- name: Ensure nginx configuration directories are present
-  file:
-    path: "{{ item }}"
-    state: directory
-    owner: "{{nginx.user}}"
-    group: "{{nginx.user}}"
-    mode: '0755'
-    recurse: yes
-  loop: >
-    {{
-      [ nginx.directories.configuration ] +
-      (nginx.directories.http.values() | list) +
-      [ nginx.directories.streams ]
-    }}
-  when: run_once_srv_web_core is not defined
-- name: Ensure nginx data storage directories are present
-  file:
-    path: "{{ item }}"
-    state: directory
-    recurse: yes
-    owner: "{{nginx.user}}"
-    group: "{{nginx.user}}"
-    mode: '0755'
-  loop: >
-    {{ nginx.directories.data.values() | list }}
-  when: run_once_srv_web_core is not defined
-- name: "Include tasks to create cache directories"
-  include_tasks: cache_directories.yml
-- name: create nginx config file
-  template:
-    src: nginx.conf.j2
-    dest: "{{ nginx.files.configuration }}"
-  notify: restart openresty
-  when: run_once_srv_web_core is not defined
-- name: flush nginx service
-  meta: flush_handlers
-  when: run_once_srv_web_core is not defined
-- name: run {{ role_name }} once
-  set_fact:
-    run_once_srv_web_core: true
-  when: run_once_srv_web_core is not defined
+- block:
+    - include_tasks: 01_core.yml
+    - include_tasks: utils/run_once.yml
+  when: run_once_srv_web_7_4_core is not defined

Some files were not shown because too many files have changed in this diff.