Mirror of https://github.com/kevinveenbirkenbach/computer-playbook.git (synced 2025-09-08 11:17:17 +02:00)
Compare commits
13 Commits
af3ea9039c...7aed3dd8c2

7aed3dd8c2
1a649568ce
f9f7d9b299
9d8e48d303
f9426cfb74
e56c960900
41934ab285
932ce7c8ca
0730c1efd5
fd370624c7
4b8b04f29c
2d276cfa5e
241c5c6da8
.dockerignore (new file, 8 lines)
@@ -0,0 +1,8 @@
site.retry
*__pycache__
venv
*.log
*.bak
*tree.json
roles/list.json
.git
.github/workflows/test-container.yml (new file, 32 lines, vendored)
@@ -0,0 +1,32 @@
name: Build & Test Container

on:
  push:
    branches:
      - master
  pull_request:

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    timeout-minutes: 15

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Build Docker image
        run: |
          docker build -t cymais:latest .

      - name: Clean build artifacts
        run: |
          docker run --rm cymais:latest make clean

      - name: Generate project outputs
        run: |
          docker run --rm cymais:latest make build

      - name: Run tests
        run: |
          docker run --rm cymais:latest make test
.github/workflows/test-on-arch.yml (deleted, 22 lines, vendored)
@@ -1,22 +0,0 @@
name: Build & Test on Arch Linux

on:
  push:
    branches: [ master ]
  pull_request:

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build & Test in Arch Linux Container
        uses: addnab/docker-run-action@v3
        with:
          image: archlinux:latest
          options: -v ${{ github.workspace }}:/workspace -w /workspace
          run: |
            pacman -Sy --noconfirm base-devel git python python-pip docker make
            make build
            make test
Dockerfile
@@ -42,11 +42,28 @@ RUN git clone https://github.com/kevinveenbirkenbach/package-manager.git $PKGMGR
 # 5) Ensure pkgmgr venv bin and user-local bin are on PATH
 ENV PATH="$PKGMGR_VENV/bin:/root/.local/bin:${PATH}"

-# 6) Install CyMaIS (using HTTPS cloning mode)
+# 6) Copy local CyMaIS source into the image for override
+COPY . /opt/cymais-src
+
+# 7) Install CyMaIS via pkgmgr (clone-mode https)
 RUN pkgmgr install cymais --clone-mode https

-# 7) Symlink the cymais CLI into /usr/local/bin so ENTRYPOINT works
-RUN ln -s /root/.local/bin/cymais /usr/local/bin/cymais
+# 8) Override installed CyMaIS with local source and clean ignored files
+RUN CMAIS_PATH=$(pkgmgr path cymais) && \
+    rm -rf "$CMAIS_PATH"/* && \
+    cp -R /opt/cymais-src/* "$CMAIS_PATH"/ && \
+    cd "$CMAIS_PATH" && \
+    make clean
+
+# 9) Symlink the cymais script into /usr/local/bin so ENTRYPOINT works
+RUN CMAIS_PATH=$(pkgmgr path cymais) && \
+    ln -sf "$CMAIS_PATH"/main.py /usr/local/bin/cymais && \
+    chmod +x /usr/local/bin/cymais
+
+# 10) Run integration tests
+RUN CMAIS_PATH=$(pkgmgr path cymais) && \
+    cd "$CMAIS_PATH" && \
+    make test

 ENTRYPOINT ["cymais"]
 CMD ["--help"]
Makefile
@@ -22,14 +22,19 @@ EXTRA_USERS := $(shell \
 .PHONY: build install test

 clean:
 	@echo "Removing not tracked git files"
 	git clean -fdx
 	@echo "Removing ignored git files"
 	git clean -fdX

 tree:
 	@echo Generating Tree
 	python3 main.py build tree -D 2 --no-signal

-build:
+dockerignore:
+	@echo Create dockerignore
+	cat .gitignore > .dockerignore
+	echo ".git" >> .dockerignore
+
+build: clean dockerignore
 	@echo "🔧 Generating users defaults → $(USERS_OUT)…"
 	python3 $(USERS_SCRIPT) \
 	  --roles-dir $(ROLES_DIR) \

@@ -56,7 +61,7 @@ build:
 install: build
 	@echo "⚙️ Install complete."

-test:
+test: build
 	@echo "🧪 Running Python tests…"
 	python -m unittest discover -s tests
 	@echo "📑 Checking Ansible syntax…"
README.md
@@ -1,15 +1,23 @@
# IT-Infrastructure Automation Framework 🚀

[](https://github.com/sponsors/kevinveenbirkenbach) [](https://www.patreon.com/c/kevinveenbirkenbach) [](https://buymeacoffee.com/kevinveenbirkenbach) [](https://s.veen.world/paypaldonate)

**🔐 One login. ♾️ Infinite applications**

---

*Automate the Provisioning of All Your Servers and Workstations with a Single Open-Source Script!*

---

## What is CyMaIS? 📌

**CyMaIS** is an **automated, modular infrastructure framework** built on **Docker**, **Linux**, and **Ansible**, equally suited for cloud services, local server management, and desktop workstations. At its core lies a **web-based desktop with single sign-on**, backed by an **LDAP directory** and **OIDC**, granting **seamless access** to an almost limitless portfolio of self-hosted applications. It fully supports **ActivityPub applications** and is **Fediverse-compatible**, while integrated **monitoring**, **alerting**, **cleanup**, **self-healing**, **automated updates**, and **backup solutions** provide everything an organization needs to run at scale.

| 📚 | 🔗 |
|---|---|
| 🌐 Try It Live | [](https://cymais.cloud) |
| 🔧 Request Your Setup | [](https://cybermaster.space) |
| 📖 About This Project | [](https://github.com/sponsors/kevinveenbirkenbach) [](https://github.com/kevinveenbirkenbach/cymais/actions/workflows/test-container.yml?query=branch%3Amaster) [](https://github.com/kevinveenbirkenbach/cymais) |
| ☕️ Support Us | [](https://www.patreon.com/c/kevinveenbirkenbach) [](https://buymeacoffee.com/kevinveenbirkenbach) [](https://s.veen.world/paypaldonate) [](https://github.com/sponsors/kevinveenbirkenbach) |

---

## Key Features 🎯

@@ -49,7 +57,7 @@ More informations about the features you will find [here](docs/overview/Features

### Use it online 🌐

-Give CyMaIS a spin at [CyMaIS.cloud](https://cymais.cloud) – sign up in seconds, click around, and see how easy infra magic can be! 🚀🔧✨
+Try [CyMaIS.Cloud](https://cymais.cloud) – sign up in seconds, explore the platform, and discover what our solution can do for you! 🚀🔧✨

### Install locally 💻
1. **Install CyMaIS** via [Kevin's Package Manager](https://github.com/kevinveenbirkenbach/package-manager)

@@ -63,6 +71,20 @@
   ```
---

### Setup with Docker 🚢

Get CyMaIS up and running inside Docker in just a few steps. For detailed build options and troubleshooting, see the [Docker Guide](docs/Docker.md).

```bash
# 1. Build the Docker image:
docker build -t cymais:latest .

# 2. Run the CLI interactively:
docker run --rm -it cymais:latest cymais --help
```

---

## License ⚖️

CyMaIS is distributed under the **CyMaIS NonCommercial License**. Please see [LICENSE.md](LICENSE.md) for full terms.
cli/make.py (new file, 50 lines)
@@ -0,0 +1,50 @@
#!/usr/bin/env python3
"""
CLI wrapper for Makefile targets within CyMaIS.
Invokes `make` commands in the project root directory.
"""
import argparse
import os
import subprocess
import sys


def main():
    parser = argparse.ArgumentParser(
        prog='cymais make',
        description='Run Makefile targets for CyMaIS project'
    )
    parser.add_argument(
        'targets',
        nargs=argparse.REMAINDER,
        help='Make targets and options to pass to `make`'
    )
    args = parser.parse_args()

    # Default to 'build' if no target is specified
    make_args = args.targets or ['build']

    # Determine repository root (one level up from cli/)
    script_dir = os.path.dirname(os.path.realpath(__file__))
    repo_root = os.path.abspath(os.path.join(script_dir, os.pardir))

    # Check for Makefile
    makefile_path = os.path.join(repo_root, 'Makefile')
    if not os.path.isfile(makefile_path):
        print(f"Error: Makefile not found in {repo_root}", file=sys.stderr)
        sys.exit(1)

    # Invoke make in repo root
    cmd = ['make'] + make_args
    try:
        result = subprocess.run(cmd, cwd=repo_root)
        sys.exit(result.returncode)
    except FileNotFoundError:
        print("Error: 'make' command not found. Please install make.", file=sys.stderr)
        sys.exit(1)
    except KeyboardInterrupt:
        sys.exit(1)


if __name__ == '__main__':
    main()
docs/Docker.md (new file, 124 lines)
@@ -0,0 +1,124 @@
# Docker Build Guide 🚢

This guide explains how to build the **CyMaIS** Docker image with advanced options to avoid common issues (e.g. mirror timeouts) and to control build caching.

---

## 1. Enable BuildKit (Optional but Recommended)

Modern versions of Docker support **BuildKit**, which speeds up builds and offers better caching.

```bash
# On your host, enable BuildKit for the current shell session:
export DOCKER_BUILDKIT=1
```

> **Note:** You only need to set this once per terminal session.

---

## 2. Build Arguments Explained

When you encounter errors like:

```text
:: Synchronizing package databases...
error: failed retrieving file 'core.db' from geo.mirror.pkgbuild.com : Connection timed out after 10002 milliseconds
error: failed to synchronize all databases (failed to retrieve some files)
```

it usually means the default container network cannot reach certain Arch Linux mirrors. To work around this, use:

* `--network=host`
  Routes all build-time network traffic through your host's network stack.

* `--no-cache`
  Forces a fresh build of every layer by ignoring Docker's layer cache. Useful if you suspect stale cache entries.

---

## 3. Recommended Build Command

```bash
# 1. (Optional) Enable BuildKit
export DOCKER_BUILDKIT=1

# 2. Build with host networking and no cache
docker build \
  --network=host \
  --no-cache \
  -t cymais:latest \
  .
```

**Flags:**

* `--network=host`
  Ensures `pacman -Syu` and other network calls hit your host network directly, eliminating mirror connection timeouts.

* `--no-cache`
  Guarantees that changes to package lists or dependencies are picked up immediately by rebuilding every layer.

* `-t cymais:latest`
  Tags the resulting image as `cymais:latest`.

---

## 4. Running the Container

Once built, you can run CyMaIS as usual:

```bash
docker run --rm -it \
  -v "$(pwd)":/opt/cymais \
  -w /opt/cymais \
  cymais:latest --help
```

Mount a host directory onto `/opt/cymais/logs` to persist logs across runs.

---

## 5. Further Troubleshooting

* **Mirror selection:** If you still see slow or unreachable mirrors, consider customizing `/etc/pacman.d/mirrorlist` in a local Docker stage or on your host to prioritize faster mirrors.

* **Firewall or VPN:** Ensure your host's firewall or VPN allows outgoing connections on ports 443 and 80 to Arch mirror servers.

* **Docker daemon config:** On some networks, you may need to configure Docker's daemon proxy settings in `/etc/docker/daemon.json`.
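The daemon proxy settings mentioned above live in a small JSON file. As a sketch (the proxy endpoints below are placeholder assumptions, and the `proxies` key requires a reasonably recent Docker daemon), the file can be generated like this:

```python
import json

# Placeholder proxy endpoints -- substitute your own network's values.
daemon_cfg = {
    "proxies": {
        "http-proxy": "http://proxy.example.com:3128",
        "https-proxy": "http://proxy.example.com:3128",
        "no-proxy": "localhost,127.0.0.1",
    }
}

# On a real host this would be written to /etc/docker/daemon.json (as root),
# followed by a daemon restart; here we write a local file for illustration.
with open("daemon.json", "w") as f:
    json.dump(daemon_cfg, f, indent=2)

print(open("daemon.json").read())
```

After editing the real file, restart the Docker daemon so the settings take effect.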
## 6. Live Development via Volume Mount

The CyMaIS installation inside the container always resides at:

```
/root/Repositories/github.com/kevinveenbirkenbach/cymais
```

To apply code changes without rebuilding the image, mount your local installation directory onto that static path:

```bash
# 1. Determine the CyMaIS install path on your host
CMAIS_PATH=$(pkgmgr path cymais)

# 2. Launch the container with a bind mount
docker run --rm -it \
  -v "${CMAIS_PATH}:/root/Repositories/github.com/kevinveenbirkenbach/cymais" \
  -w "/root/Repositories/github.com/kevinveenbirkenbach/cymais" \
  cymais:latest make build
```

Or, to test the CLI help interactively:

```bash
docker run --rm -it \
  -v "${CMAIS_PATH}:/root/Repositories/github.com/kevinveenbirkenbach/cymais" \
  -w "/root/Repositories/github.com/kevinveenbirkenbach/cymais" \
  cymais:latest --help
```

Any edits you make in `${CMAIS_PATH}` on your host are immediately reflected inside the container, eliminating the need for repeated `docker build` cycles.

---

With these options, your Docker builds should complete reliably, even in restrictive network environments. Happy building! 🚀
main.py
@@ -18,7 +18,24 @@ except ImportError:
         def __getattr__(self, name): return ''
     Fore = Back = Style = Dummy()

-from cli.sounds import Sound # ensure Sound imported
+_IN_DOCKER = os.path.exists('/.dockerenv')
+
+if _IN_DOCKER:
+    class Quiet:
+        @staticmethod
+        def play_start_sound(): pass
+        @staticmethod
+        def play_cymais_intro_sound(): pass
+        @staticmethod
+        def play_finished_successfully_sound(): pass
+        @staticmethod
+        def play_finished_failed_sound(): pass
+        @staticmethod
+        def play_warning_sound(): pass
+
+    Sound = Quiet
+else:
+    from utils.sounds import Sound


 def color_text(text, color):
@@ -1,2 +1,4 @@
 colorscheme-generator @ https://github.com/kevinveenbirkenbach/colorscheme-generator/archive/refs/tags/v0.3.0.zip
 numpy
+bcrypt
+ruamel.yaml
roles/svc-db-memcached/README.md (new file, 11 lines)
@@ -0,0 +1,11 @@
# Memcached

## Description

This Ansible role provides a Jinja2 snippet to inject a Memcached service definition into your Docker Compose setup.

## Further Resources

- [Official Memcached Docker image on Docker Hub](https://hub.docker.com/_/memcached)
- [Memcached official documentation](https://memcached.org/)
- [Docker Compose reference](https://docs.docker.com/compose/compose-file/)
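The injected service definition can be pictured as the following structure. This is a hypothetical sketch built from the role's `config/main.yml` values; the role's actual Jinja2 template may render additional keys, and the restart policy shown is an assumption:

```python
# Values as defined in roles/svc-db-memcached/config/main.yml
memcached_conf = {"image": "memcached", "version": "latest"}

# Hypothetical rendered shape of the docker-compose service entry
service = {
    "memcached": {
        "image": f"{memcached_conf['image']}:{memcached_conf['version']}",
        "restart": "unless-stopped",          # assumption: a typical restart policy
        "logging": {"driver": "journald"},    # matches the logging used elsewhere in this diff
    }
}
print(service["memcached"]["image"])  # memcached:latest
```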
roles/svc-db-memcached/config/main.yml (new file, 7 lines)
@@ -0,0 +1,7 @@
docker:
  services:
    memcached:
      image: memcached
      version: latest
      backup:
        enabled: false
roles/svc-db-memcached/meta/main.yml (new file, 17 lines)
@@ -0,0 +1,17 @@
galaxy_info:
  author: "Kevin Veen-Birkenbach"
  description: "Provides a Docker Compose snippet for a Memcached service (`memcached`) with optional volume, healthcheck, and logging."
  license: "CyMaIS NonCommercial License (CNCL)"
  license_url: "https://s.veen.world/cncl"
  company: |
    Kevin Veen-Birkenbach
    Consulting & Coaching Solutions
    https://www.veen.world
  galaxy_tags:
    - memcached
    - docker
    - cache
  repository: "https://github.com/kevinveenbirkenbach/cymais"
  issue_tracker_url: "https://github.com/kevinveenbirkenbach/cymais/issues"
  documentation: "https://github.com/kevinveenbirkenbach/cymais/tree/main/roles/svc-db-memcached"
dependencies: []
roles/svc-db-memcached/vars/main.yml (new file, 1 line)
@@ -0,0 +1 @@
application_id: svc-db-memcached
@@ -1,4 +1,4 @@
-# Role: svc-db-redis
+# Redis

 ## Description
roles/svc-db-redis/config/main.yml (new file, 7 lines)
@@ -0,0 +1,7 @@
docker:
  services:
    redis:
      image: redis
      version: alpine
      backup:
        enabled: false
@@ -1,8 +1,10 @@
 # This template needs to be included in docker-compose.yml files that depend on redis
+{% set redis_image = applications | get_app_conf('svc-db-redis', 'docker.services.redis.image') %}
+{% set redis_version = applications | get_app_conf('svc-db-redis', 'docker.services.redis.version') %}
 redis:
-  image: redis:alpine
-  container_name: {{ application_id }}-redis
-  restart: {{ docker_restart_policy }}
+  image: "{{ redis_image }}:{{ redis_version }}"
+  container_name: {{ application_id }}-redis
+  restart: {{ docker_restart_policy }}
   logging:
     driver: journald
   volumes:
@@ -1 +1 @@
-application_id: redis
+application_id: svc-db-redis
@@ -15,10 +15,11 @@ docker:
     database:
       enabled: true
     akaunting:
-      no_stop_required: true
+      backup:
+        no_stop_required: true
       image: docker.io/akaunting/akaunting
       version: latest
       name: akaunting
       volumes:
         data: akaunting_data
   credentials: {}
@@ -10,9 +10,10 @@ docker:
     database:
       enabled: true
     baserow:
-      no_stop_required: true
+      backup:
+        no_stop_required: true
       image: "baserow/baserow"
       version: "latest"
       name: "baserow"
       volumes:
         data: "baserow_data"
@@ -29,7 +29,9 @@ docker:
     # @todo check this out and repair it if necessary
     discourse:
       name: "discourse"
-      no_stop_required: true
+      image: "local_discourse/discourse_application" # Necessary to define this for the docker 2 loc backup
+      backup:
+        no_stop_required: true
       volumes:
         data: discourse_data
       network: discourse
@@ -42,7 +42,8 @@ docker:
     gitea:
       image: "gitea/gitea"
       version: "latest"
-      no_stop_required: true
+      backup:
+        no_stop_required: true
       port: 3000
       name: "gitea"
       volumes:
@@ -15,6 +15,7 @@ docker:
     listmonk:
       image: listmonk/listmonk
       version: latest
-      no_stop_required: true
+      backup:
+        no_stop_required: true
       name: listmonk
       port: 9000
@@ -22,7 +22,8 @@ docker:
     mastodon:
       image: "ghcr.io/mastodon/mastodon"
       version: latest
-      no_stop_required: true
+      backup:
+        no_stop_required: true
       name: "mastodon"
     streaming:
       image: "ghcr.io/mastodon/mastodon-streaming"
@@ -36,7 +36,8 @@ docker:
       image: "matomo"
       version: "latest"
       name: "matomo"
-      no_stop_required: true
+      backup:
+        no_stop_required: true
     database:
       enabled: true
     redis:
@@ -6,7 +6,8 @@ docker:
       version: latest
       image: matrixdotorg/synapse
       name: matrix-synapse
-      no_stop_required: true
+      backup:
+        no_stop_required: true
     element:
       version: latest
       image: vectorim/element-web
@@ -6,7 +6,8 @@ docker:
     mediawiki:
       image: mediawiki
       version: latest
-      no_stop_required: true
+      backup:
+        no_stop_required: true
       name: mediawiki
       volumes:
         data: mediawiki_data
@@ -22,7 +22,8 @@ docker:
       name: "nextcloud"
       image: "nextcloud"
       version: "latest-fpm-alpine"
-      no_stop_required: true
+      backup:
+        no_stop_required: true
     proxy:
       name: "nextcloud-proxy"
       image: "nginx"
@@ -13,7 +13,7 @@ ldap:
 features:
   matomo: true
   css: false # Temporarily deactivated. Needs to be optimized for production use.
   port-ui-desktop: true
   ldap: true
   central_database: true
   oauth2: true
@@ -34,8 +34,9 @@ docker:
     web:
       name: openproject-web
       image: openproject/community
-      version: "13" # Update when available. Sadly no rolling release implemented
-      no_stop_required: true
+      version: "13" # Update when available. No rolling release implemented
+      backup:
+        no_stop_required: true
     seeder:
       name: openproject-seeder
     cron:

@@ -44,6 +45,10 @@ docker:
       name: openproject-worker
     proxy:
       name: openproject-proxy
+    cache:
+      name: openproject-cache
+      image: ""   # If you need a specific memcached image, define it here; otherwise the image from svc-db-memcached is used
+      version: "" # If you need a specific memcached version, define it here; otherwise the version from svc-db-memcached is used

   volumes:
     data: "openproject_data"
@@ -10,8 +10,8 @@ x-op-app: &app
 {% include 'roles/docker-compose/templates/base.yml.j2' %}

   cache:
-    image: memcached
-    container_name: openproject-memcached
+    image: "{{ openproject_cache_image }}:{{ openproject_cache_version }}"
+    container_name: {{ openproject_cache_name }}
 {% include 'roles/docker-container/templates/base.yml.j2' %}

   proxy:
@@ -10,6 +10,22 @@ openproject_seeder_name: "{{ applications | get_app_conf(application_id, 'd
 openproject_cron_name: "{{ applications | get_app_conf(application_id, 'docker.services.cron.name', True) }}"
 openproject_proxy_name: "{{ applications | get_app_conf(application_id, 'docker.services.proxy.name', True) }}"
 openproject_worker_name: "{{ applications | get_app_conf(application_id, 'docker.services.worker.name', True) }}"

+openproject_cache_name: "{{ applications | get_app_conf(application_id, 'docker.services.cache.name', True) }}"
+openproject_cache_image: >-
+  {{ applications
+     | get_app_conf(application_id, 'docker.services.cache.image')
+     or applications
+     | get_app_conf('svc-db-memcached', 'docker.services.memcached.image')
+  }}
+
+openproject_cache_version: >-
+  {{ applications
+     | get_app_conf(application_id, 'docker.services.cache.version')
+     or applications
+     | get_app_conf('svc-db-memcached', 'docker.services.memcached.version')
+  }}

 openproject_plugins_folder: "{{ docker_compose.directories.volumes }}plugins/"
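The image/version fallback above can be illustrated in plain Python. This is only a sketch of the lookup semantics; `get_app_conf` here is a hypothetical stand-in for the project's Ansible filter, not its actual implementation:

```python
def get_app_conf(applications, app_id, dotted_path, default=None):
    """Hypothetical stand-in for the Ansible filter: walk a dotted key path."""
    node = applications.get(app_id, {})
    for key in dotted_path.split('.'):
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    return node

applications = {
    # OpenProject leaves cache image/version empty to request the shared default
    'web-app-openproject': {
        'docker': {'services': {'cache': {'image': '', 'version': ''}}}},
    # svc-db-memcached provides the fallback values
    'svc-db-memcached': {
        'docker': {'services': {'memcached': {'image': 'memcached', 'version': 'latest'}}}},
}

# An empty string is falsy, so `or` falls through to the memcached role's value
image = (get_app_conf(applications, 'web-app-openproject', 'docker.services.cache.image')
         or get_app_conf(applications, 'svc-db-memcached', 'docker.services.memcached.image'))
version = (get_app_conf(applications, 'web-app-openproject', 'docker.services.cache.version')
           or get_app_conf(applications, 'svc-db-memcached', 'docker.services.memcached.version'))
print(f"{image}:{version}")
```

Setting a non-empty `docker.services.cache.image` in the OpenProject config would override the shared default instead.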
@@ -34,6 +34,7 @@ docker:
       name: "peertube"
       version: "production-bookworm"
       image: "chocobozzz/peertube"
-      no_stop_required: true
+      backup:
+        no_stop_required: true
     volumes:
       data: peertube_data
@@ -30,7 +30,8 @@ docker:
       image: "zknt/pixelfed"
       version: "latest"
       name: "pixelfed"
-      no_stop_required: true
+      backup:
+        no_stop_required: true
     worker:
       name: "pixelfed_worker"
     volumes:
@@ -46,7 +46,8 @@ docker:
       version: latest
       image: wordpress
       name: wordpress
-      no_stop_required: true
+      backup:
+        no_stop_required: true
     volumes:
       data: wordpress_data
   rbac:
@@ -6,7 +6,8 @@ docker:
     database:
       enabled: false # Enable the database
     {{ application_id }}:
-      no_stop_required: true
+      backup:
+        no_stop_required: true
       image: ""
       version: "latest"
       name: "web-app-{{ application_id }}"
tests/integration/backups/__init__.py (new empty file)
tests/integration/backups/test_enabled.py (new file, 51 lines)
@@ -0,0 +1,51 @@
import unittest
import os
import yaml

class TestBackupsEnabledIntegrity(unittest.TestCase):
    def setUp(self):
        # Path to the roles directory
        self.roles_dir = os.path.abspath(
            os.path.join(os.path.dirname(__file__), '../../../roles')
        )

    def test_backups_enabled_image_consistency(self):
        """
        Ensure that if `backups.enabled` is set for any docker.services[*]:
        - it's a boolean value
        - the containing service dict has an `image` entry at the same level
        """
        for role in os.listdir(self.roles_dir):
            docker_config_path = os.path.join(
                self.roles_dir, role, 'config', 'main.yml'
            )
            if not os.path.isfile(docker_config_path):
                continue

            with open(docker_config_path, 'r') as f:
                try:
                    config = yaml.safe_load(f) or {}
                except yaml.YAMLError as e:
                    self.fail(f"YAML parsing failed for {docker_config_path}: {e}")
                    continue

            services = (config.get('docker', {}) or {}).get('services', {}) or {}

            for service_key, service in services.items():
                if not isinstance(service, dict):
                    continue

                backups_cfg = service.get('backups', {}) or {}
                if 'enabled' in backups_cfg:
                    with self.subTest(role=role, service=service_key):
                        self.assertIsInstance(
                            backups_cfg['enabled'], bool,
                            f"`backups.enabled` in role '{role}', service '{service_key}' must be a boolean."
                        )
                        self.assertIn(
                            'image', service,
                            f"`image` is required in role '{role}', service '{service_key}' when `backups.enabled` is defined."
                        )

if __name__ == '__main__':
    unittest.main()
tests/integration/backups/test_no_stop_required.py (new file, 55 lines)
@@ -0,0 +1,55 @@
import unittest
import os
import yaml

class TestNoStopRequiredIntegrity(unittest.TestCase):
    def setUp(self):
        # Path to the roles directory
        self.roles_dir = os.path.abspath(
            os.path.join(os.path.dirname(__file__), '../../../roles')
        )

    def test_backup_no_stop_required_consistency(self):
        """
        Ensure that if `backup.no_stop_required: true` is set for any docker.services[*]:
        - it's a boolean value
        - the containing service dict has an `image` entry at the same level
        """
        for role in os.listdir(self.roles_dir):
            docker_config_path = os.path.join(
                self.roles_dir, role, 'config', 'main.yml'
            )
            if not os.path.isfile(docker_config_path):
                continue

            with open(docker_config_path, 'r') as f:
                try:
                    # Ensure config is at least an empty dict if YAML is empty or null
                    config = yaml.safe_load(f) or {}
                except yaml.YAMLError as e:
                    self.fail(f"YAML parsing failed for {docker_config_path}: {e}")
                    continue

            # Safely get services dict
            services = (config.get('docker', {}) or {}).get('services', {}) or {}

            for service_key, service in services.items():
                if not isinstance(service, dict):
                    continue
                backup_cfg = service.get('backup', {}) or {}
                # Check if no_stop_required is explicitly True
                if backup_cfg.get('no_stop_required') is True:
                    with self.subTest(role=role, service=service_key):
                        # Must be a boolean
                        self.assertIsInstance(
                            backup_cfg['no_stop_required'], bool,
                            f"`backup.no_stop_required` in role '{role}', service '{service_key}' must be a boolean."
                        )
                        # Must have `image` defined at the service level
                        self.assertIn(
                            'image', service,
                            f"`image` is required in role '{role}', service '{service_key}' when `backup.no_stop_required` is set to True."
                        )

if __name__ == '__main__':
    unittest.main()
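A role config shape that satisfies the check above can be sketched as follows. The service values are illustrative, mirroring the akaunting example earlier in this diff, and the dict stands in for what `yaml.safe_load` would produce from a role's `config/main.yml`:

```python
# Illustrative service entry following the new `backup.no_stop_required` layout
service = {
    "backup": {"no_stop_required": True},
    "image": "docker.io/akaunting/akaunting",
    "version": "latest",
    "name": "akaunting",
}

backup_cfg = service.get("backup", {}) or {}

# The same two conditions the integration test enforces:
assert isinstance(backup_cfg["no_stop_required"], bool)
assert "image" in service
print("config passes the integrity check")
```

A service that sets `backup.no_stop_required` but omits `image` would fail the `assertIn` check.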
@@ -1,52 +0,0 @@
import unittest
import os
import yaml

class TestNoStopRequiredIntegrity(unittest.TestCase):
    def setUp(self):
        self.roles_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '../../roles'))

    def test_no_stop_required_consistency(self):
        """
        This test ensures that if 'no_stop_required' is defined in any
        docker.services[*] entry, it must:
        - be a boolean value (True/False)
        - have a 'name' entry defined on the same level

        This is critical for the role 'sys-bkp-docker-2-loc', which uses the
        'no_stop_required' flag to determine which container names should be excluded
        from stopping during backup operations.

        The logic for processing this flag is implemented in:
        https://github.com/kevinveenbirkenbach/backup-docker-to-local
        """
        for role in os.listdir(self.roles_dir):
            docker_config_path = os.path.join(self.roles_dir, role, 'config', 'main.yml')
            if not os.path.isfile(docker_config_path):
                continue

            with open(docker_config_path, 'r') as f:
                try:
                    config = yaml.safe_load(f)
                except yaml.YAMLError as e:
                    self.fail(f"YAML parsing failed for {docker_config_path}: {e}")
                    continue

            docker_services = (
                config.get('docker', {}).get('services', {}) if config else {}
            )

            for service_key, service in docker_services.items():
                if isinstance(service, dict) and 'no_stop_required' in service:
                    with self.subTest(role=role, service=service_key):
                        self.assertIsInstance(
                            service['no_stop_required'], bool,
                            f"'no_stop_required' in role '{role}', service '{service_key}' must be a boolean."
                        )
                        self.assertIn(
                            'name', service,
                            f"'name' is required in role '{role}', service '{service_key}' when 'no_stop_required' is set."
                        )

if __name__ == '__main__':
    unittest.main()
72
tests/unit/test_main.py
Normal file
72
tests/unit/test_main.py
Normal file
@@ -0,0 +1,72 @@
import os
import sys
import tempfile
import unittest
from unittest import mock

# Insert project root into import path so we can import main.py
sys.path.insert(
    0,
    os.path.abspath(os.path.join(os.path.dirname(__file__), "../../"))
)

import main  # assumes main.py lives at the project root


class TestMainHelpers(unittest.TestCase):
    def test_format_command_help_basic(self):
        name = "cmd"
        description = "A basic description"
        output = main.format_command_help(
            name, description,
            indent=2, col_width=20, width=40
        )
        # Should start with two spaces and the command name
        self.assertTrue(output.startswith("  cmd"))
        # Description should appear somewhere in the wrapped text
        self.assertIn("A basic description", output)

    def test_list_cli_commands_filters_and_sorts(self):
        # Create a temporary directory with sample files containing argparse
        with tempfile.TemporaryDirectory() as tmpdir:
            # Create Python files that import argparse
            one_path = os.path.join(tmpdir, "one.py")
            with open(one_path, "w") as f:
                f.write("import argparse\n# dummy CLI command\n")

            two_path = os.path.join(tmpdir, "two.py")
            with open(two_path, "w") as f:
                f.write("import argparse\n# another CLI command\n")

            # Non-Python and dunder files should be ignored
            open(os.path.join(tmpdir, "__init__.py"), "w").close()
            open(os.path.join(tmpdir, "ignore.txt"), "w").close()

            # Only 'one' and 'two' should be returned, in sorted order
            commands = main.list_cli_commands(tmpdir)
            self.assertEqual([(None, 'one'), (None, 'two')], commands)

    def test_git_clean_repo_invokes_git_clean(self):
        with mock.patch('main.subprocess.run') as mock_run:
            main.git_clean_repo()
            mock_run.assert_called_once_with(['git', 'clean', '-Xfd'], check=True)

    @mock.patch('main.subprocess.run')
    def test_extract_description_via_help_with_description(self, mock_run):
        # Simulate subprocess returning help output with a description
        mock_stdout = "usage: dummy.py [options]\n\nThis is a help description.\n"
        mock_run.return_value = mock.Mock(stdout=mock_stdout)
        description = main.extract_description_via_help("/fake/path/dummy.py")
        self.assertEqual(description, "This is a help description.")

    @mock.patch('main.subprocess.run')
    def test_extract_description_via_help_without_description(self, mock_run):
        # Simulate subprocess returning help output without a description
        mock_stdout = "usage: empty.py [options]\n"
        mock_run.return_value = mock.Mock(stdout=mock_stdout)
        description = main.extract_description_via_help("/fake/path/empty.py")
        self.assertEqual(description, "-")


if __name__ == "__main__":
    unittest.main()
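The new tests above pin down the contract of `extract_description_via_help` without showing its body: it runs a script with `--help` and returns the first description line after the usage block, or `-` when there is none. A minimal implementation consistent with both test versions might look like the sketch below. This is an assumption for illustration only; the real function in `main.py` may differ, and `parse_help_description` is a hypothetical helper introduced here to keep the parsing logic testable without spawning a subprocess.

```python
import subprocess


def parse_help_description(help_text):
    # Return the first non-empty line that is not part of the usage block,
    # or '-' when the help output carries no description.
    for line in help_text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("usage:"):
            continue
        return stripped
    return "-"


def extract_description_via_help(script_path):
    # Invoke the script's --help and parse the captured stdout.
    result = subprocess.run(
        ["python3", script_path, "--help"],
        capture_output=True, text=True
    )
    return parse_help_description(result.stdout)
```

Splitting the parsing out of the subprocess call is also what makes the mock-based tests above natural: only the thin `subprocess.run` wrapper needs patching.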
@@ -1,81 +0,0 @@
import os
import sys
import stat
import tempfile
import unittest
from unittest import mock

# Insert project root into import path so we can import main.py
sys.path.insert(
    0,
    os.path.abspath(os.path.join(os.path.dirname(__file__), "../../"))
)

import main  # assumes main.py lives at the project root


class TestMainHelpers(unittest.TestCase):
    def test_format_command_help_basic(self):
        name = "cmd"
        description = "A basic description"
        output = main.format_command_help(
            name, description,
            indent=2, col_width=20, width=40
        )
        # Should start with two spaces and the command name
        self.assertTrue(output.startswith("  cmd"))
        # Description should appear somewhere in the wrapped text
        self.assertIn("A basic description", output)

    def test_list_cli_commands_filters_and_sorts(self):
        # Create a temporary directory with sample files
        with tempfile.TemporaryDirectory() as tmpdir:
            open(os.path.join(tmpdir, "one.py"), "w").close()
            open(os.path.join(tmpdir, "__init__.py"), "w").close()
            open(os.path.join(tmpdir, "ignore.txt"), "w").close()
            open(os.path.join(tmpdir, "two.py"), "w").close()

            # Only 'one' and 'two' should be returned, in sorted order
            commands = main.list_cli_commands(tmpdir)
            self.assertEqual(commands, ["one", "two"])

    def test_git_clean_repo_invokes_git_clean(self):
        with mock.patch('main.subprocess.run') as mock_run:
            main.git_clean_repo()
            mock_run.assert_called_once_with(['git', 'clean', '-Xfd'], check=True)

    def test_extract_description_via_help_with_description(self):
        # Create a dummy script that prints a help description
        with tempfile.TemporaryDirectory() as tmpdir:
            script_path = os.path.join(tmpdir, "dummy.py")
            with open(script_path, "w") as f:
                f.write(
                    "#!/usr/bin/env python3\n"
                    "import sys\n"
                    "if '--help' in sys.argv:\n"
                    "    print('usage: dummy.py [options]')\n"
                    "    print()\n"
                    "    print('This is a help description.')\n"
                )
            # Make it executable
            mode = os.stat(script_path).st_mode
            os.chmod(script_path, mode | stat.S_IXUSR)

            description = main.extract_description_via_help(script_path)
            self.assertEqual(description, "This is a help description.")

    def test_extract_description_via_help_without_description(self):
        # Script that has no help description
        with tempfile.TemporaryDirectory() as tmpdir:
            script_path = os.path.join(tmpdir, "empty.py")
            with open(script_path, "w") as f:
                f.write(
                    "#!/usr/bin/env python3\n"
                    "print('no help here')\n"
                )
            description = main.extract_description_via_help(script_path)
            self.assertEqual(description, "-")


if __name__ == "__main__":
    unittest.main()