Compare commits

..

9 Commits

70 changed files with 435 additions and 246 deletions


@@ -27,9 +27,6 @@ def run_ansible_playbook(inventory, playbook, modes, limit=None, allowed_applica
if allowed_applications:
joined = ",".join(allowed_applications)
cmd.extend(["-e", f"allowed_applications={joined}"])
else:
# No IDs provided: execute all applications defined in the inventory
cmd.extend(["-e", "allowed_applications=all"])
# Pass other mode flags
for key, value in modes.items():
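The hunk above is truncated; as a rough, self-contained sketch of the command-building logic it shows (the function name `build_ansible_cmd` and the handling of `modes` beyond this hunk are assumptions, not the actual source):

```python
def build_ansible_cmd(inventory, playbook, modes, allowed_applications=None):
    # Sketch of the argument handling shown in the hunk; surrounding code is assumed.
    cmd = ["ansible-playbook", "-i", inventory, playbook]
    if allowed_applications:
        # Restrict execution to the given application IDs
        joined = ",".join(allowed_applications)
        cmd.extend(["-e", f"allowed_applications={joined}"])
    else:
        # No IDs provided: execute all applications defined in the inventory
        cmd.extend(["-e", "allowed_applications=all"])
    # Pass other mode flags as extra variables (assumed continuation of the loop)
    for key, value in modes.items():
        cmd.extend(["-e", f"{key}={value}"])
    return cmd
```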


@@ -2,24 +2,36 @@
## Description
This Ansible role automates the process of detecting, revoking, and deleting unused Let's Encrypt certificates. It leverages the [`certreap`](https://github.com/kevinveenbirkenbach/certreap) tool to identify which certificates are no longer referenced by any active NGINX configuration and removes them accordingly.
This Ansible role automates the detection, revocation and deletion of unused Let's Encrypt certificates. It leverages the [`certreap`](https://github.com/kevinveenbirkenbach/certreap) tool to identify certificates no longer referenced by any active NGINX configuration and removes them automatically.
## Overview
Optimized for Arch Linux, this role installs the certificate cleanup tool, configures a systemd service, and sets up an optional recurring systemd timer for automatic cleanup. It integrates with dependent roles for timer scheduling and system notifications.
## Purpose
Certbot Reaper helps you maintain a clean and secure server environment by regularly removing obsolete SSL certificates. This prevents unnecessary renewal attempts, clutter, and potential security risks from stale certificates.
- Installs the `certreap` cleanup tool using the `pkgmgr-install` role
- Deploys and configures a `cleanup-certs.cymais.service` systemd unit
- (Optionally) Sets up a recurring cleanup via a systemd timer using the `systemd-timer` role
- Integrates with `systemd-notifier` to send failure notifications
- Ensures idempotent execution with a `run_once_cleanup_certs` flag
## Features
- **Certificate Cleanup Tool Installation:** Installs `certreap` using [pkgmgr](https://github.com/kevinveenbirkenbach/package-manager)
- **Systemd Service Configuration:** Deploys and manages `cleanup-certs.cymais.service`
- **Systemd Timer Scheduling:** Optional timer via the `systemd-timer` role
- **Smart Execution Logic:** Ensures idempotent configuration using a `run_once` flag
- **Certificate Cleanup Tool Installation**
Uses `pkgmgr-install` to install the `certreap` binary.
## License
- **Systemd Service Configuration**
Deploys `cleanup-certs.cymais.service` and reloads/restarts it on changes.
This role is licensed under the [CyMaIS NonCommercial License (CNCL)](https://s.veen.world/cncl).
Commercial use is not permitted without explicit permission.
- **Systemd Timer Scheduling**
Optionally wires in a timer via the `systemd-timer` role, controlled by the `on_calendar_cleanup_certs` variable.
- **Smart Execution Logic**
Prevents multiple runs in one play by setting a `run_once_cleanup_certs` fact.
- **Failure Notification**
Triggers `systemd-notifier.cymais@cleanup-certs.cymais.service` on failure.
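As an illustration, the optional timer could be enabled from an inventory or `group_vars` file roughly like this (only the `on_calendar_cleanup_certs` variable name comes from the text above; the schedule value is an invented example):

```yaml
# Hypothetical inventory snippet: the variable name is from this README,
# the OnCalendar value is an example only.
on_calendar_cleanup_certs: "*-*-* 03:00:00"  # run the certificate cleanup daily at 03:00
```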
## Further Resources
- [certreap on GitHub](https://github.com/kevinveenbirkenbach/certreap)
- [Ansible community.general.pacman module](https://docs.ansible.com/ansible/latest/collections/community/general/pacman_module.html)
- [CyMaIS NonCommercial License (CNCL)](https://s.veen.world/cncl)
- [systemd.unit(5) manual](https://www.freedesktop.org/software/systemd/man/systemd.unit.html)


@@ -1,39 +1,37 @@
# Bluray-Player
# client-bluray-player
## Description
This Ansible role installs and configures all the software required for Blu-ray playback on Arch Linux-based systems. It ensures that VLC and the necessary libraries for Blu-ray disc decryption and playback (`libaacs`, `libbluray`) are present, and provides hooks for optional AUR packages.
## Overview
Welcome to the `client-bluray-player` role, a part of the `cymais` repository. This role is dedicated to setting up software required for Blu-ray playback on personal computers. It focuses on installing necessary packages to enable the use of Blu-ray media with VLC player and other compatible software.
## Role Contents
The `main.yml` file in this role consists of tasks that automate the installation of the following packages:
1. **Install VLC and Blu-ray Software**:
- `vlc`: A versatile media player that supports Blu-ray playback.
- `libaacs`: A library for Blu-ray disc encryption handling.
- `libbluray`: A library for Blu-ray disc playback support.
- Uses the `community.general.pacman` module to install:
- `vlc` (media player with Blu-ray support)
- `libaacs` (AACS decryption library)
- `libbluray` (Blu-ray playback support library)
- Contains commented-out tasks for optional AUR packages (`aacskeys`, `libbdplus`) you can enable as needed.
- Designed for idempotent execution on Arch Linux and derivatives.
There are commented-out tasks for installing additional AUR packages, such as `aacskeys` and `libbdplus`, which can be enabled as per the user's requirements.
## Features
## Other Resources and Resources
For more in-depth information and guidance on Blu-ray playback and software configuration, the following resources can be consulted:
- [Arch Linux Wiki on Blu-ray](https://wiki.archlinux.org/title/Blu-ray#Using_aacskeys)
- [Guide to Play Blu-ray with VLC](https://videobyte.de/play-blu-ray-with-vlc)
- [Manjaro Forum Discussion on Blu-ray UHD Playback](https://archived.forum.manjaro.org/t/wie-kann-ich-bluray-uhd-abspielen/127396/12)
- [FV Online DB](http://fvonline-db.bplaced.net/)
- **VLC Installation**
Installs `vlc` for general media and Blu-ray playback.
## Dependencies
This role depends on the `java` role, which ensures the Java runtime is available, a requirement for certain Blu-ray playback tools and functionalities.
- **AACS & BD+ Support**
Installs `libaacs` and `libbluray` to handle Blu-ray disc encryption and playback.
## Prerequisites
- **Ansible**: Ansible must be installed on your system to use this role.
- **Arch Linux-based System**: Designed for Arch Linux distributions, using the `pacman` package manager.
- **Optional AUR Packages**
Drop-in tasks for `aacskeys` and `libbdplus` via AUR (commented out by default).
## Running the Role
To utilize this role:
1. Clone the `cymais` repository.
2. Navigate to the `roles/client-bluray-player` directory.
3. Execute the role using Ansible, with appropriate permissions for installing packages.
- **Idempotent Role**
Safe to run multiple times without unintended side effects.
## Customization
You can customize this role by enabling or adding additional tasks for other AUR packages related to Blu-ray playback as needed.
- **Arch Linux-Optimized**
Leverages Pacman for fast and reliable package management.
## Support and Contributions
For support, feedback, or contributions to enhance the role's capabilities, please open an issue or submit a pull request in the `cymais` repository. Contributions that improve Blu-ray playback support or compatibility are highly appreciated.
## Further Resources
- [Arch Linux Wiki: Blu-ray Playback](https://wiki.archlinux.org/title/Blu-ray#Using_aacskeys)
- [Play Blu-ray with VLC Guide](https://videobyte.de/play-blu-ray-with-vlc)
- [FV Online DB Blu-ray Tools](http://fvonline-db.bplaced.net/)
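A hedged sketch of how the packages listed above might be installed with the `community.general.pacman` module (the task name is an assumption; the package list comes from this README):

```yaml
- name: Install VLC and Blu-ray playback libraries  # assumed task name
  community.general.pacman:
    name:
      - vlc        # media player with Blu-ray support
      - libaacs    # AACS decryption library
      - libbluray  # Blu-ray playback support library
    state: present
```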


@@ -23,7 +23,7 @@ For detailed setup instructions, please refer to [Installation.md](./Installatio
For more information about Attendize and its capabilities, please visit the [Attendize Homepage](https://attendize.com).
## Additional Resources
## Further Resources
- [Attendize GitHub Repository](https://github.com/Attendize/Attendize.git)
- [Attendize Documentation](https://github.com/Attendize/Attendize)


@@ -1 +1,3 @@
- Implement this role
# Todo
- Implement this role
- refactor main.yml


@@ -16,7 +16,7 @@ This role deploys Baserow using Docker Compose, integrating key components such
- **Scalable Architecture:** Efficiently handle increasing workloads while maintaining high performance.
- **Robust API Integration:** Leverage a comprehensive API to extend functionalities and integrate with other systems.
## Additional Resources
## Further Resources
- [Baserow Homepage](https://baserow.io/)
- [Enable Single Sign-On (SSO)](https://baserow.io/user-docs/enable-single-sign-on-sso)


@@ -20,7 +20,7 @@ For DNS configuration and other setup details, please refer to [Installation.md]
- **Real-Time Content Delivery:** Enjoy dynamic and instantaneous updates for a modern social experience.
- **Developer-Friendly API:** Integrate with external systems and extend functionalities through a robust set of APIs.
## Additional Resources
## Further Resources
- [Self-hosting Bluesky with Docker and SWAG](https://therobbiedavis.com/selfhosting-bluesky-with-docker-and-swag/)
- [Notes on Self-hosting Bluesky PDS with Other Services](https://cprimozic.net/notes/posts/notes-on-self-hosting-bluesky-pds-alongside-other-services/)


@@ -1 +1,2 @@
docker_compose_skipp_file_creation: false # If set to true the file creation will be skipped
docker_compose_skipp_file_creation: false # If set to true the file creation will be skipped
docker_repository: false # Activates docker repository download and routine


@@ -8,6 +8,14 @@
# listen: docker compose up
# when: mode_reset | bool
- name: rebuild docker repository
command:
cmd: docker compose build
chdir: "{{docker_repository_path}}"
environment:
COMPOSE_HTTP_TIMEOUT: 600
DOCKER_CLIENT_TIMEOUT: 600
# default setup for docker compose files
- name: docker compose up
shell: docker-compose -p {{ application_id }} up -d --force-recreate --remove-orphans --build


@@ -1,10 +1,10 @@
- name: Create (optional) Dockerfile
- name: "Create (optional) Dockerfile for {{ application_id }}"
template:
src: "{{ item }}"
dest: "{{ docker_compose.files.dockerfile }}"
with_first_found:
- "{{ playbook_dir }}/roles/{{ role_name }}/templates/Dockerfile.j2"
- "{{ playbook_dir }}/roles/{{ role_name }}/files/Dockerfile"
loop:
- "{{ playbook_dir }}/roles/docker-{{ application_id }}/templates/Dockerfile.j2"
- "{{ playbook_dir }}/roles/docker-{{ application_id }}/files/Dockerfile"
notify: docker compose up
register: create_dockerfile_result
failed_when:
@@ -19,9 +19,9 @@
force: yes
notify: docker compose up
register: env_template
with_first_found:
- "{{ playbook_dir }}/roles/{{ role_name }}/templates/env.j2"
- "{{ playbook_dir }}/roles/{{ role_name }}/files/env"
loop:
- "{{ playbook_dir }}/roles/docker-{{ application_id }}/templates/env.j2"
- "{{ playbook_dir }}/roles/docker-{{ application_id }}/files/env"
failed_when:
- env_template is failed
- "'Could not find or access' not in env_template.msg"


@@ -16,5 +16,8 @@
mode: '0755'
with_dict: "{{ docker_compose.directories }}"
- include_tasks: "create-files.yml"
- include_tasks: "repository.yml"
when: docker_repository | bool
- include_tasks: "files.yml"
when: not docker_compose_skipp_file_creation | bool


@@ -19,7 +19,7 @@ For detailed usage and configuration, please refer to the following files in thi
- **Scalable Architecture:** Utilize a Docker-based deployment that adapts easily to increasing traffic and community size.
- **Extensive Plugin Support:** Enhance your forum with a wide range of plugins and integrations for additional functionality.
## Additional Resources
## Further Resources
- [Discourse Official Website](https://www.discourse.org/)
- [Discourse GitHub Repository](https://github.com/discourse/discourse_docker.git)


@@ -25,7 +25,7 @@ With this role, you'll have a production-ready CRM environment that's secure, sc
- **Health Checks & Logging:** Monitor service health and logs with built-in checks and journald 📈
- **Modular Role Composition:** Leverages central roles for database and Nginx, ensuring consistency across deployments 🔄
## Additional Resources
## Further Resources
- [EspoCRM Official Website](https://www.espocrm.com/) 🌍
- [EspoCRM Documentation](https://docs.espocrm.com/) 📖


@@ -18,7 +18,7 @@ For detailed administration procedures, please refer to the [Administration.md](
- **Configuration Debugging:** Quickly inspect environment variables, volume data, and configuration files to troubleshoot issues.
- **Autoinstall Capability:** Automate initial installation steps to rapidly deploy a working Friendica instance.
## Additional Resources
## Further Resources
- [Friendica Docker Hub](https://hub.docker.com/_/friendica)
- [Friendica Installation Documentation](https://wiki.friendi.ca/docs/install)


@@ -1,34 +1,29 @@
# FusionDirectory (DRAFT)
# Warning
This application isn't implemented yet
# FusionDirectory
## Description
This Ansible role deploys and configures [FusionDirectory](https://www.fusiondirectory.org/), a powerful web-based LDAP administration tool. Using Docker Compose, the role runs a pre-configured FusionDirectory container which allows you to manage your LDAP directory through a user-friendly web interface.
This Ansible role deploys and configures [FusionDirectory](https://www.fusiondirectory.org/)—a web-based LDAP administration tool—using Docker Compose. It runs a pre-configured FusionDirectory container, connects it to your existing LDAP service, and ensures a consistent, repeatable setup.
## Overview
Designed to simplify LDAP management, this role:
- Loads necessary FusionDirectory-specific variables.
- Generates an environment file based on a template.
- Deploys a FusionDirectory Docker container via Docker Compose.
- Integrates with your existing central LDAP service.
## Purpose
The purpose of this role is to automate the deployment of FusionDirectory in your Docker environment, ensuring a quick and consistent setup for managing your LDAP data. Ideal for production or homelab deployments, it reduces manual configuration steps and helps enforce best practices.
- Loads and templates FusionDirectory-specific variables
- Generates a `.env` file for the container environment
- Deploys the FusionDirectory container via Docker Compose
- Configures NGINX (via the `nginx-domain-setup` role) to expose the service
- Integrates with your central LDAP server for authentication
## Features
- **Easy Deployment:** Minimal manual setup via pre-configured templates and variables.
- **LDAP Integration:** Connects seamlessly with your existing central LDAP server.
- **Web Interface:** Provides an intuitive GUI for LDAP administration.
- **Docker Compose Integration:** Automates container creation and restart.
- **Easy Deployment:** Runs FusionDirectory in Docker Compose with minimal manual steps
- **LDAP Integration:** Connects to your existing LDAP backend for user management
- **Environment Management:** Builds an environment file from role variables and templates
- **NGINX Setup:** Automatically configures a virtual host for FusionDirectory
- **Docker-Native:** Leverages the `docker-compose` role for container orchestration
- **Idempotent:** Safe to run multiple times without side effects
## Credits 📝
## Further Resources
Developed and maintained by **Kevin Veen-Birkenbach**.
Learn more at [www.veen.world](https://www.veen.world)
Part of the [CyMaIS Project](https://github.com/kevinveenbirkenbach/cymais)
License: [CyMaIS NonCommercial License (CNCL)](https://s.veen.world/cncl)
- [FusionDirectory Official Website](https://www.fusiondirectory.org/)
- [FusionDirectory Docker Image (tiredofit/fusiondirectory)](https://hub.docker.com/r/tiredofit/fusiondirectory)
- [Role Source & Documentation (CyMaIS)](https://github.com/kevinveenbirkenbach/cymais/tree/main/roles/docker-fusiondirectory)
- [CyMaIS NonCommercial License (CNCL)](https://s.veen.world/cncl)


@@ -18,7 +18,7 @@ For detailed administration procedures, please refer to the [Administration.md](
- **Built-in Database Access:** Seamlessly interact with the underlying MariaDB for your Git service.
- **Integrated Configuration:** Easily manage settings via environment variables and Docker Compose templates.
## Additional Resources
## Further Resources
- [Gitea Official Website](https://gitea.io/)
- [Gitea LDAP integration](https://docs.gitea.com/usage/authentication)


@@ -17,7 +17,7 @@ For a detailed walkthrough of this role, please refer to the [ChatGPT Session Tr
- **Nginx Reverse Proxy Integration:** Simplifies secure access with an Nginx reverse proxy.
- **Customizable Configuration:** Easily tailor deployment settings using Ansible variables and templates.
## Additional Resources
## Further Resources
- [GitLab Official Website](https://about.gitlab.com/)
- [Running GitLab on Docker](https://ralph.blog.imixs.com/2019/06/09/running-gitlab-on-docker/)


@@ -18,7 +18,7 @@ For detailed administration procedures, please refer to the [Administration.md](
- **Integrated Database Support:** Utilize an external PostgreSQL database for robust data storage.
- **Nginx Reverse Proxy:** Ensure secure and efficient access to your Joomla instance.
## Additional Resources
## Further Resources
- [Joomla Official Website](https://www.joomla.org/)


@@ -15,7 +15,7 @@ This role deploys Keycloak in a Docker environment, integrating it with a Postgr
- **Standards Support:** Seamlessly integrate with SAML, OpenID Connect, and OAuth2 to support various authentication flows.
- **Scalable and Customizable:** Easily tailor settings and scale your Keycloak instance to meet growing demands.
## Additional Resources
## Further Resources
- [Keycloak Official Website](https://www.keycloak.org/)
- [Official Keycloak Documentation](https://www.keycloak.org/documentation.html)


@@ -890,8 +890,8 @@
"organization",
"offline_access",
"microprofile-jwt",
"{{ applications[application_id]scopes.rbac_roles }}",
"{{ applications[application_id]scopes.nextcloud }}"
"{{ applications[application_id].scopes.rbac_roles }}",
"{{ applications[application_id].scopes.nextcloud }}"
]
}
@@ -1197,7 +1197,7 @@
},
{
"id": "15dd4961-5b4f-4635-a3f1-a21e1fa7bf3a",
"name": "{{ applications[application_id]scopes.nextcloud }}",
"name": "{{ applications[application_id].scopes.nextcloud }}",
"description": "Optimized mappers for nextcloud oidc_login with ldap.",
"protocol": "openid-connect",
"attributes": {
@@ -1249,7 +1249,7 @@
},
{
"id": "59917c48-a7ef-464a-a8b0-ea24316db18e",
"name": "{{ applications[application_id]scopes.rbac_roles }}",
"name": "{{ applications[application_id].scopes.rbac_roles }}",
"description": "RBAC Groups",
"protocol": "openid-connect",
"attributes": {
@@ -1675,8 +1675,8 @@
"phone",
"microprofile-jwt",
"organization",
"{{ applications[application_id]scopes.rbac_roles }}",
"{{ applications[application_id]scopes.nextcloud }}"
"{{ applications[application_id].scopes.rbac_roles }}",
"{{ applications[application_id].scopes.nextcloud }}"
],
"browserSecurityHeaders": {
"contentSecurityPolicyReportOnly": "",


@@ -15,6 +15,6 @@ This role deploys LAM in a Docker environment and integrates it with an Nginx re
- **Secure Access:** Utilize Nginx reverse proxy integration to safeguard your management interface.
- **Efficient Administration:** Streamline the handling of LDAP objects such as users, groups, and organizational units.
## Additional Resources
## Further Resources
- [LDAP Account Manager Official Website](https://www.ldap-account-manager.org/)


@@ -22,7 +22,7 @@ For further setup instructions and advanced configuration details, please refer
- **Comprehensive Query Capabilities:** Utilize LDAP search tools to efficiently query and manage directory data.
- **High Performance and Scalability:** Designed to handle large-scale deployments with rapid lookup and authentication response times.
## Additional Resources
## Further Resources
- [Bitnami OpenLDAP](https://hub.docker.com/r/bitnami/openldap)
- [phpLDAPadmin Documentation](https://github.com/leenooks/phpLDAPadmin/wiki/Docker-Container)


@@ -68,5 +68,5 @@ docker exec -i ldap \
-D "$LDAP_ADMIN_DN" \
-w "$LDAP_ADMIN_PASSWORD" \
-c \
-f "/tmp/ldif/data/01_rbac_roles.ldif"
-f "/tmp/ldif/data/01_rbac.ldif"
```


@@ -17,7 +17,7 @@ def build_ldap_role_entries(applications, users, ldap):
group_id = application_config.get("group_id")
user_dn_base = ldap["dn"]["ou"]["users"]
ldap_user_attr = ldap["attributes"]["user_id"]
ldap_user_attr = ldap["user"]["attributes"]["id"]
role_dn_base = ldap["dn"]["ou"]["roles"]
flavors = ldap.get("rbac", {}).get("flavors", [])
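A hedged sketch of how this part of `build_ldap_role_entries` reads its configuration after the change (the helper function, its name, and the sample `ldap` dict are assumptions; only the shown lookups come from the hunk):

```python
def extract_ldap_settings(ldap):
    # Mirrors the lookups in the hunk above; the dict shape is assumed.
    user_dn_base = ldap["dn"]["ou"]["users"]
    ldap_user_attr = ldap["user"]["attributes"]["id"]  # new path after the change
    role_dn_base = ldap["dn"]["ou"]["roles"]
    flavors = ldap.get("rbac", {}).get("flavors", [])
    return user_dn_base, ldap_user_attr, role_dn_base, flavors

# Invented sample configuration for illustration only.
sample_ldap = {
    "dn": {"ou": {"users": "ou=users,dc=example,dc=org",
                  "roles": "ou=roles,dc=example,dc=org"}},
    "user": {"attributes": {"id": "uid"}},
}
```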


@@ -18,7 +18,7 @@ This role deploys Listmonk using Docker, ensuring a robust and scalable setup fo
- **Flexible Configuration:** Easily customize settings such as database connections, admin credentials, and server configurations via a TOML file.
- **Robust Infrastructure:** Seamlessly integrates with PostgreSQL for reliable data management and supports deployment behind a reverse proxy.
## Additional Resources
## Further Resources
- [Listmonk Official Website](https://listmonk.app/)
- [Listmonk Installation Documentation](https://listmonk.app/docs/installation/)


@@ -27,7 +27,7 @@ For more information about this role, visit the GitHub repositories:
- **Flexible Deployment:** Easily scale and customize using Docker Compose, with configurable settings for networking, storage, and external services.
- **OIDC Support:** Optionally integrate with OpenID Connect for centralized authentication across your services.
## Additional Resources
## Further Resources
- [Mailu Official Website](https://mailu.io/)
- [Mailu compose setup guide](https://mailu.io/1.7/compose/setup.html)


@@ -23,7 +23,7 @@ For detailed configuration and operational instructions, please refer to the fol
- **Flexible Authentication:** Integrated support for OpenID Connect (OIDC) simplifies user login and enhances security.
- **Customizable User Experience:** Configure themes, timeline settings, and notification options to tailor the social experience to your community.
## Additional Resources
## Further Resources
- [Mastodon Official Website](https://joinmastodon.org/)
- [Mastodon Documentation](https://docs.joinmastodon.org/)


@@ -16,7 +16,7 @@ This role deploys Matomo using Docker, automating the setup of your analytics pl
- **Customizable Setup:** Configure database connections, admin credentials, and server settings via environment variables and a TOML configuration file.
- **Scalable Deployment:** Use Docker to ensure your analytics platform can grow with your traffic demands.
## Additional Resources
## Further Resources
- [Matomo Official Website](https://matomo.org/)


@@ -22,7 +22,7 @@ For detailed configuration and operational instructions, please refer to the inc
- **Scalable Architecture:** Handle increasing user loads and message volumes with high performance.
- **Ansible Automation:** Enjoy a fully automated, reproducible deployment using Ansible.
## Additional Resources
## Further Resources
- [Matrix Official Website](https://matrix.org/)
- [Matrix Documentation](https://matrix.org/docs/)


@@ -20,7 +20,7 @@ For detailed configuration and operational instructions, please refer to the fol
- **Scalable Architecture:** Designed to handle increasing user loads and message volumes with high performance.
- **Flexible Client Support:** Access Matrix services via modern web clients like Element, which offer an intuitive and real-time user experience.
## Additional Resources
## Further Resources
- [Matrix Official Website](https://matrix.org/)
- [Matrix Documentation](https://matrix.org/docs/)


@@ -16,7 +16,7 @@ This role deploys MediaWiki using Docker, automating the setup of your wiki inst
- **Scalable Deployment:** Utilize Docker for a portable and scalable setup that adapts as your community grows.
- **Secure and Reliable:** Benefit from secure access via an Nginx reverse proxy combined with a MariaDB backend for reliable data storage.
## Additional Resources
## Further Resources
- [MediaWiki Official Website](https://www.mediawiki.org/)
- [MediaWiki Documentation](https://www.mediawiki.org/wiki/Manual:Configuration_settings)


@@ -16,7 +16,7 @@ This role deploys Mobilizon using Docker, automating the setup of your event man
- **Customizable Setup:** Configure database connections, instance settings, and admin credentials via environment variables and a TOML configuration file.
- **Scalable Deployment:** Use Docker to ensure your event platform grows seamlessly with your community's needs.
## Additional Resources
## Further Resources
- [Mobilizon Official Website](https://mobilizon.org)


@@ -17,7 +17,7 @@ This role deploys Moodle using Docker, automating the setup of both the Moodle a
- **Secure Web Access:** Configured to work seamlessly behind an Nginx reverse proxy for enhanced security and performance.
* **Single Sign-On (SSO) / OpenID Connect (OIDC):** Seamless integration with external identity providers for centralized authentication.
## Additional Resources
## Further Resources
- [Bitnami Moodle Container on GitHub](https://github.com/bitnami/containers/tree/main/bitnami/moodle)
- [Moodle Official Website](https://moodle.org/)


@@ -15,7 +15,7 @@ This role deploys MyBB using Docker, leveraging Docker Compose to manage both th
- **Robust Deployment:** Achieve reliable and scalable deployment of your forum via Docker Compose, ensuring seamless service continuity.
- **Secure and Flexible Access:** Integrate with an Nginx reverse proxy to securely manage traffic and domain access.
## Additional Resources
## Further Resources
- [MyBB Docker Repository](https://github.com/mybb/docker)
- [MyBB Official Website](https://mybb.com/)


@@ -21,7 +21,7 @@ This role provisions a complete Nextcloud deployment using Docker Compose. It au
Detailed documentation for the use and administration of Nextcloud on CyMaIS can be found [here](docs/README.md)
## Additional Resources
## Further Resources
- [Nextcloud Official Website](https://nextcloud.com/)
- [Nextcloud Docker Documentation](https://github.com/nextcloud/docker)


@@ -23,10 +23,6 @@
notify:
- docker compose up
- name: "include role docker-repository-setup for {{application_id}}"
include_role:
name: docker-repository-setup
- name: "create {{dummy_volume}}"
file:
path: "{{dummy_volume}}"


@@ -1,6 +1,7 @@
application_id: "openproject"
docker_repository_address: "https://github.com/opf/openproject-deploy"
database_type: "postgres"
docker_repository: true
openproject_plugins_folder: "{{docker_compose.directories.volumes}}plugins/"


@@ -12,11 +12,6 @@
http_port: "{{ ports.localhost.http[application_id] }}"
when: run_once_docker_portfolio is not defined
- name: "include role docker-repository-setup for {{application_id}}"
include_role:
name: docker-repository-setup
when: run_once_docker_portfolio is not defined
- name: "Check if host-specific config.yaml exists in {{ config_inventory_path }}"
stat:
path: "{{ config_inventory_path }}"


@@ -1,3 +1,4 @@
application_id: "portfolio"
docker_repository_address: "https://github.com/kevinveenbirkenbach/portfolio"
config_inventory_path: "{{ inventory_dir }}/files/{{ inventory_hostname }}/docker/portfolio/config.yaml.j2"
config_inventory_path: "{{ inventory_dir }}/files/{{ inventory_hostname }}/docker/portfolio/config.yaml.j2"
docker_repository: true


@@ -1,9 +1,4 @@
---
# Docker Routines
- name: "include docker-compose role"
include_role:
name: docker-compose
- name: "pkgmgr install"
include_role:
name: pkgmgr-install
@@ -15,10 +10,14 @@
command: pkgmgr path cymais-presentation
register: path_cymais_presentation_output
- name: Get path of cymais using pkgmgr
- name: Get path of cymais using pkgmgrpull docker repository
command: pkgmgr path cymais
register: path_cymais_output
- name: "include docker-compose role"
include_role:
name: docker-compose
- name: "include role nginx-domain-setup for {{application_id}}"
include_role:
name: nginx-domain-setup


@@ -1,44 +0,0 @@
# Docker Repository Setup
This Ansible role sets up and manages your Docker repository. It ensures that the repository is pulled from your remote Git source, and it automatically triggers a rebuild of your Docker images using Docker Compose.
## Features 🔧
- **Default Path Setup:**
Automatically sets a default `docker_repository_path`
- **Repository Management:**
Clones or updates your Docker repository from a specified Git repository.
- **Automated Build Trigger:**
Notifies handlers to rebuild the Docker repository using Docker Compose with extended timeouts.
## Role Structure 📂
- **Handlers:**
- `rebuild docker repository`: Runs `docker compose build` in the designated repository directory with custom timeout settings.
- **Tasks:**
- Sets the default repository path if undefined.
- Pulls the latest code from the Docker repository.
- Notifies the Docker Compose project setup and triggers a repository rebuild.
- **Meta:**
- Declares a dependency on the `docker-compose` role to ensure that handlers and related dependencies are loaded.
## Usage ⚙️
Ensure that you have set the following variables (either via your inventory, `group_vars`, or `host_vars`):
- `docker_repository_address`: The Git repository URL of your Docker repository.
- `docker_compose.directories.services`: The base directory where your Docker services are stored.
The role will append `repository/` to this path to form `docker_repository_path`.
## Author
Kevin Veen-Birkenbach
[https://www.veen.world](https://www.veen.world)
---
Happy deploying! 🚀🐳


@@ -1,7 +0,0 @@
- name: rebuild docker repository
command:
cmd: docker compose build
chdir: "{{docker_repository_path}}"
environment:
COMPOSE_HTTP_TIMEOUT: 600
DOCKER_CLIENT_TIMEOUT: 600


@@ -1,2 +0,0 @@
dependencies:
- docker-compose # To load handlers and make dependencies visible


@@ -1,9 +1,5 @@
---
# Docker Routines
- name: "include docker-compose role"
include_role:
name: docker-compose
- name: "pkgmgr install"
include_role:
name: pkgmgr-install
@@ -15,6 +11,10 @@
command: pkgmgr path cymais-sphinx
register: path_cymais_sphinx_output
- name: "include docker-compose role"
include_role:
name: docker-compose
- name: "include role nginx-domain-setup for {{application_id}}"
include_role:
name: nginx-domain-setup


@@ -10,10 +10,6 @@
domain: "{{ domains | get_domain(application_id) }}"
http_port: "{{ ports.localhost.http[application_id] }}"
- name: "include role docker-repository-setup for {{application_id}}"
include_role:
name: docker-repository-setup
- name: "copy templates {{ settings_files }} for taiga-contrib-oidc-auth"
template:
src: "taiga/{{item}}.py.j2"


@@ -10,7 +10,7 @@ taiga_image_frontend: >-
{{ 'robrotheram/taiga-front-openid' if applications[application_id].features.oidc and applications[application_id].oidc.flavor == 'robrotheram'
else 'taigaio/taiga-front' }}
taiga_frontend_conf_path: "{{docker_compose.directories.config}}conf.json"
docker_repository: true
settings_files:
- urls
- local


@@ -1,2 +0,0 @@
# Docker Role Template
This folder contains a template to setup docker roles


@@ -1 +0,0 @@
application_id: template


@@ -37,7 +37,7 @@ This deployment provides a containerized WordPress instance optimized for multis
The goal of this deployment is to provide a production-ready, scalable WordPress instance with multisite capabilities and enhanced performance. By automating the custom image build and configuration processes via Docker Compose and Ansible, it minimizes manual intervention, reduces errors, and allows you to concentrate on building great content.
## Additional Resources
## Further Resources
- [WordPress Official Website](https://wordpress.org/)
- [WordPress Multisite Documentation](https://wordpress.org/support/article/create-a-network/)


@@ -1 +1,23 @@
# driver-intel
# driver-intel
## Description
This Ansible role installs Intel media drivers on systems that use the Pacman package manager (e.g., Arch Linux and derivatives). It ensures the `intel-media-driver` package is present and up-to-date.
## Overview
The `driver-intel` role leverages the `community.general.pacman` module to:
1. Update the package cache.
2. Install (or upgrade) the `intel-media-driver` package.
3. Verify that the driver is correctly installed and ready for use in media pipelines.
## Features
* Idempotent installation of Intel media drivers
* Automatic package cache update before installation
* Supports installation on any Pacman-based distribution
## Further Resources
* [Intel Media Driver upstream documentation](https://01.org/intel-media-sdk)
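The core task described above can be sketched as a single `community.general.pacman` call (a minimal sketch; the actual role's task name and options may differ):

```yaml
- name: Install or upgrade the Intel media driver
  community.general.pacman:
    name: intel-media-driver
    state: latest      # upgrade when a newer version is available
    update_cache: true # refresh the package database first
```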

View File

@@ -0,0 +1,25 @@
# driver-non-free
## Description
This Ansible role installs non-free GPU drivers on Arch Linux systems by invoking the `mhwd` utility. It ensures that the appropriate proprietary drivers for your PCI graphics hardware are installed and ready for use.
## Overview
- Uses the `ansible.builtin.shell` module to run `mhwd -a pci nonfree 0300`
- Automatically detects your PCI graphics adapter and installs the recommended non-free driver
- Designed to be run once per host to provision proprietary GPU support
## Features
- **Automatic Hardware Detection**
Leverages `mhwd`'s built-in auto-detect feature for PCI class `0300` (display controllers) to select the correct driver.
- **Proprietary Driver Installation**
Installs the latest non-free GPU driver (e.g., NVIDIA, AMD) provided through Arch's `mhwd` system.
- **Simple Execution**
Single-task role with minimal overhead.
## Further Resources
- [Arch Linux mhwd Package](https://archlinux.org/packages/community/x86_64/manjaro-tools-mhwd/)
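The single task this role describes could look roughly like the following, assuming `mhwd` is already installed (the `changed_when` string is an assumption, not taken from the role):

```yaml
- name: Install the recommended non-free driver for PCI graphics (class 0300)
  ansible.builtin.shell: mhwd -a pci nonfree 0300
  register: mhwd_install
  changed_when: "'Installing' in mhwd_install.stdout"
```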

View File

@@ -0,0 +1,2 @@
# Pretix (Draft)
See https://github.com/pretix/pretix

View File

@@ -1,26 +1,35 @@
# SSHD
# sshd
## Description
This role configures the SSH daemon ([sshd](https://man7.org/linux/man-pages/man5/sshd_config.5.html)) on the target system by deploying a templated configuration file. It ensures that secure and proper SSH settings are applied, reducing the risk of misconfiguration and potential lockout.
This Ansible role configures the OpenSSH daemon (`sshd`) by deploying a templated `sshd_config` file. It applies secure, best-practice settings—such as disabling root login, enforcing public-key authentication, and setting appropriate logging levels—to harden remote access and reduce the risk of misconfiguration or lockout.
## Overview
Optimized for secure remote access, this role:
- Generates an SSH daemon configuration file from a Jinja2 template.
- Sets appropriate ownership and permissions on the configuration file.
- Notifies systemd to restart the SSH daemon when changes are made.
## Purpose
The primary purpose of this role is to establish a secure SSH environment by deploying a well-configured sshd_config file. This helps prevent unauthorized access and potential system lockouts, while ensuring that the SSH service runs smoothly.
- Renders `sshd_config.j2` into `/etc/ssh/sshd_config` with customizable options
- Sets file ownership (`root:root`) and permissions (`0644`)
- Automatically reloads and restarts the SSH service via a Systemd handler
- Uses a `run_once_sshd` fact to ensure idempotent execution
## Features
- **SSH Configuration Deployment:** Creates an sshd_config file with best-practice settings.
- **Systemd Integration:** Automatically restarts the SSH service upon configuration changes.
- **Security Enhancements:** Enforces secure defaults such as disabled root login and public key authentication.
- **Templated Configuration**
Delivers a Jinja2-based `sshd_config` with variables for debug logging and PAM support.
## Other Resources
- https://www.google.com/search?client=firefox-b-d&q=sshd+why+to+deactivate+pam
- https://man7.org/linux/man-pages/man5/sshd_config.5.html
- **Security Defaults**
- Disables password (`PasswordAuthentication no`) and root login (`PermitRootLogin no`)
- Enforces public-key authentication (`PubkeyAuthentication yes`)
- Conditionally sets `LogLevel` to `DEBUG3` when `enable_debug` is true
- **Systemd Integration**
Handles daemon reload and service restart seamlessly on configuration changes.
- **Idempotency**
Ensures tasks run only once per play by setting the `run_once_sshd` fact.
## Further Resources
- [sshd_config Manual (OpenSSH)](https://man7.org/linux/man-pages/man5/sshd_config.5.html)
- [Ansible Template Module](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/template_module.html)
- [Ansible Shell & Handler Best Practices](https://docs.ansible.com/ansible/latest/user_guide/playbooks_handlers.html)
- [OpenSSH Security Recommendations](https://www.openssh.com/security.html)
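The security defaults listed above might appear in `sshd_config.j2` roughly as follows (a sketch; only the `enable_debug` variable and the listed directives are taken from this README, the remaining layout is assumed):

```jinja
# Managed by Ansible - do not edit manually
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
UsePAM yes
{% if enable_debug | bool %}
LogLevel DEBUG3
{% else %}
LogLevel INFO
{% endif %}
```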

View File

@@ -1,16 +1,8 @@
---
- name: Show effective filter_plugins setting
shell: ansible-config dump --only-changed | grep -i filter_plugins || echo "using default"
register: filter_cfg
- name: Debug filter_plugins config
- name: "Debug: allowed_applications"
debug:
msg: "{{ filter_cfg.stdout_lines }}"
- name: "Debug: show which ansible.cfg was used"
debug:
msg: "{{ ansible_config_file }}"
msg: "{{ allowed_applications }}"
when: enable_debug | bool
- name: Merge variables
block:
@@ -110,51 +102,51 @@
when: mode_update | bool
- name: setup standard wireguard
when: ("wireguard_server" in group_names)
when: ('wireguard_server' | application_allowed(group_names, allowed_applications))
include_role:
name: wireguard
# vpn setup
- name: setup wireguard client behind firewall NAT
when: ("wireguard_behind_firewall" in group_names)
when: ('wireguard_behind_firewall' | application_allowed(group_names, allowed_applications))
include_role:
name: client-wireguard-behind-firewall
- name: setup wireguard client
when: ("wireguard_client" in group_names)
when: ('wireguard_client' | application_allowed(group_names, allowed_applications))
include_role:
name: client-wireguard
## backup setup
- name: setup replica backup hosts
when: ("backup_remote_to_local" in group_names)
when: ('backup_remote_to_local' | application_allowed(group_names, allowed_applications))
include_role:
name: backup-remote-to-local
- name: setup backup to swappable
when: ("backup_to_usb" in group_names)
when: ('backup_to_usb' | application_allowed(group_names, allowed_applications))
include_role:
name: backup-data-to-usb
## driver setup
- name: driver-intel
when: ("intel" in group_names)
when: ('intel' | application_allowed(group_names, allowed_applications))
include_role:
name: driver-intel
- name: setup multiprinter hosts
when: ("epson_multiprinter" in group_names)
when: ('epson_multiprinter' | application_allowed(group_names, allowed_applications))
include_role:
name: driver-epson-multiprinter
- name: setup hibernate lid switch
when: ("driver-lid-switch" in group_names)
when: ('driver-lid-switch' | application_allowed(group_names, allowed_applications))
include_role:
name: driver-lid-switch
## system setup
- name: setup swapfile hosts
when: ("swapfile" in group_names)
when: ('swapfile' | application_allowed(group_names, allowed_applications))
include_role:
name: system-swapfile
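The `application_allowed` filter used in the conditions above is not shown in this diff. A plausible sketch of its logic, matching the CLI behavior where an empty `allowed_applications` list means "run everything in the inventory", could look like this (hypothetical implementation; names and signature are assumptions):

```python
def application_allowed(application_id, group_names, allowed_applications=None):
    """Return True when the host belongs to the application's group and the
    application is either explicitly allowed or no restriction list was given."""
    if application_id not in group_names:
        return False
    if not allowed_applications:
        # No application IDs passed on the CLI: run everything in the inventory
        return True
    return application_id in allowed_applications


class FilterModule(object):
    """Expose the function to Ansible as the 'application_allowed' filter."""
    def filters(self):
        return {"application_allowed": application_allowed}
```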

View File

@@ -16,25 +16,25 @@
# Native Webserver Roles
- name: setup nginx-serve-htmls
when: ("nginx-serve-htmls" in group_names)
include_role:
name: nginx-serve-html
vars:
domain: "{{primary_domain}}"
when: ('nginx-serve-htmls' | application_allowed(group_names, allowed_applications))
- name: "setup corporate identity"
when: ("corporate_identity" in group_names)
include_role:
name: persona-corporate
when: ('corporate_identity' | application_allowed(group_names, allowed_applications))
- name: setup redirect hosts
when: ("redirect" in group_names)
when: ('redirect' | application_allowed(group_names, allowed_applications))
include_role:
name: nginx-redirect-domains
vars:
domain_mappings: "{{ current_play_domain_mappings_redirect}}"
- name: setup www redirect
when: ("www_redirect" in group_names)
when: ('www_redirect' | application_allowed(group_names, allowed_applications))
include_role:
name: nginx-redirect-www

View File

@@ -1,8 +1,8 @@
- name: optimize storage performance
include_role:
name: system-storage-optimizer
when: "(path_mass_storage is defined or path_rapid_storage is defined) and enable_system_storage_optimizer | bool and (docker_enabled is defined and docker_enabled | bool) "
when: ('storage-optimizer' | application_allowed(group_names, allowed_applications))
- name: Cleanup Docker Anonymous Volumes
import_role:
name: cleanup-docker-anonymous-volumes

View File

@@ -22,16 +22,6 @@
chdir: "{{docker_compose.directories.instance}}"
ignore_errors: true
# This could be replaced by include_role: docker-repository-setup
# Attendize and Akaunting still use this. When you refactor this code, replace this.
- name: pull docker repository
git:
repo: "{{ docker_repository_address }}"
dest: "{{ docker_repository_directory | default(docker_compose.directories.instance) }}"
update: yes
notify: docker compose up
become: true
- name: "restore detached files"
command: >
mv "/tmp/{{application_id}}-{{ item }}.backup" "{{docker_compose.directories.instance}}{{ item }}"

View File

@@ -0,0 +1,19 @@
# Docker Role Template
This folder contains a template to setup docker roles.
## Description
* Put a description here.
## Overview
Put an overview here.
## Features
Put a feature list here
## Further Resources
* Put more resources here

View File

@@ -0,0 +1,28 @@
---
galaxy_info:
author: "Kevin Veen-Birkenbach"
description: "{{ description }}"
license: "CyMaIS NonCommercial License (CNCL)"
license_url: "https://s.veen.world/cncl"
company: |
Kevin Veen-Birkenbach
Consulting & Coaching Solutions
https://www.veen.world
platforms:
- name: Docker
versions:
- latest
galaxy_tags:
{% for tag in tags %}
- {{ tag }}
{% endfor %}
repository: "https://github.com/kevinveenbirkenbach/cymais"
issue_tracker_url: "https://github.com/kevinveenbirkenbach/cymais/issues"
documentation: "https://github.com/kevinveenbirkenbach/cymais/roles/{{application_id}}"
logo:
class: "{{ logo_classes }}"
run_after:
- docker-matomo
- docker-keycloak
- docker-mailu
dependencies: []

View File


@@ -0,0 +1,38 @@
---
{% if database_type | bool %}
{% raw %}
- name: "include docker-central-database"
include_role:
name: docker-central-database
when: run_once_docker_{% endraw %}{{ application_id }}{% raw %} is not defined
{% endraw %}
{% else %}
{% raw %}
- name: "include docker-compose role"
include_role:
name: docker-compose
when: run_once_docker_{% endraw %}{{ application_id }}{% raw %} is not defined
{% endraw %}
{% endif %}
{% raw %}
- name: "include role nginx-domain-setup for {{application_id}}"
include_role:
name: nginx-domain-setup
vars:
domain: "{{ domains | get_domain(application_id) }}"
http_port: "{{ ports.localhost.http[application_id] }}"
when: run_once_docker_{% endraw %}{{ application_id }}{% raw %} is not defined
- name: run the {% endraw %}{{ application_id }}{% raw %} tasks once
set_fact:
run_once_docker_{% endraw %}{{ application_id }}{% raw %}: true
when: run_once_docker_{% endraw %}{{ application_id }}{% raw %} is not defined
{% endraw %}
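For orientation, with `application_id` set to `portfolio` and `database_type` false, this generator template is intended to produce a role `main.yml` along these lines (a sketch; the inner `{{ ... }}` expressions survive the first rendering on purpose and are evaluated later at play time):

```yaml
- name: "include docker-compose role"
  include_role:
    name: docker-compose
  when: run_once_docker_portfolio is not defined

- name: "include role nginx-domain-setup for {{application_id}}"
  include_role:
    name: nginx-domain-setup
  vars:
    domain: "{{ domains | get_domain(application_id) }}"
    http_port: "{{ ports.localhost.http[application_id] }}"
  when: run_once_docker_portfolio is not defined

- name: run the portfolio tasks once
  set_fact:
    run_once_docker_portfolio: true
  when: run_once_docker_portfolio is not defined
```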

View File

@@ -0,0 +1,19 @@
services:
portfolio:
build:
context: {{docker_repository_path}}
dockerfile: Dockerfile
image: application-portfolio
container_name: portfolio
ports:
- 127.0.0.1:{{ports.localhost.http[application_id]}}:5000
volumes:
- {{docker_repository_path}}app:/app
restart: unless-stopped
{% include 'templates/docker/container/networks.yml.j2' %}
healthcheck:
test: ["CMD", "bash", "-c", "exec 3<>/dev/tcp/localhost/5000 && echo -e 'GET / HTTP/1.1\\r\\nHost: localhost\\r\\nConnection: close\\r\\n\\r\\n' >&3 && cat <&3 | grep -q 'HTTP/1.1'"]
interval: 30s
timeout: 10s
retries: 3
{% include 'templates/docker/compose/networks.yml.j2' %}

View File

@@ -0,0 +1,19 @@
credentials:
docker:
images: {}
versions: {}
features:
matomo: true # Enable Matomo Tracking
css: true # Enable Global CSS Styling
portfolio_iframe: true # Enable loading of app in iframe
ldap: false # Enable LDAP Network
central_database: false # Enable Central Database Network
recaptcha: false # Enable ReCaptcha
oauth2: false # Enable the OAuth2-Proxy
csp:
whitelist: {} # URLs which should be whitelisted
flags: {} # Flags which should be set
domains:
canonical: [] # URLs under which the domain should be directly accessible
alias: [] # Alias redirections to the first element of the canonical domains

View File

@@ -0,0 +1,2 @@
application_id: {{ application_id }} # ID of the application
database_type: {{ database }} # Database type [postgres, mariadb]

View File

@@ -0,0 +1,2 @@
# Todo
- Activate Disabled Test

View File

@@ -0,0 +1,72 @@
import unittest
import re
import yaml
from pathlib import Path
# Define the required schema headings
REQUIRED_HEADINGS = [
"Description",
"Overview",
"Features",
"Further Resources",
]
class TestReadmeSchema(unittest.TestCase):
def test_readme_has_required_sections_and_title(self):
"""
Integration test that verifies each role's README.md contains all required sections
and, if vars/main.yml exists, that the title contains the application_id (case-insensitive).
"""
# Determine the roles directory (two levels up from current file)
roles_dir = Path(__file__).parents[2] / 'roles'
self.assertTrue(roles_dir.exists(), f"Roles directory not found at {roles_dir}")
for role_dir in roles_dir.iterdir():
if not role_dir.is_dir():
continue
with self.subTest(role=role_dir.name):
# Check README.md exists
readme_path = role_dir / 'README.md'
self.assertTrue(
readme_path.exists(),
f"Role '{role_dir.name}' is missing a README.md"
)
content = readme_path.read_text(encoding='utf-8')
# Verify required headings are present (multiline)
for heading in REQUIRED_HEADINGS:
pattern = rf"(?m)^##\s+{re.escape(heading)}"
self.assertRegex(
content,
pattern,
f"README.md for role '{role_dir.name}' is missing required section: '{heading}'"
)
# If vars/main.yml exists, check application_id and title
vars_path = role_dir / 'vars' / 'main.yml'
if vars_path.exists():
vars_content = vars_path.read_text(encoding='utf-8')
try:
data = yaml.safe_load(vars_content)
except Exception as e:
self.fail(f"Failed to parse YAML for role '{role_dir.name}': {e}")
app_id = data.get('application_id')
self.assertIsNotNone(
app_id,
f"application_id not found in {vars_path} for role '{role_dir.name}'"
)
# Verify README title contains application_id (case-insensitive, multiline)
title_regex = re.compile(
rf"(?mi)^#.*{re.escape(str(app_id))}.*"
)
self.assertRegex(
content,
title_regex,
f"README.md title does not contain application_id '{app_id}' for role '{role_dir.name}'"
)
if __name__ == '__main__':
unittest.main()

View File

@@ -48,8 +48,10 @@ class TestBuildLdapRoleEntries(unittest.TestCase):
"roles": "ou=roles,dc=example,dc=org"
}
},
"attributes": {
"user_id": "uid"
"user":{
"attributes": {
"id": "uid"
}
},
"rbac": {
"flavors": ["posixGroup", "groupOfNames"]