mirror of
https://github.com/kevinveenbirkenbach/computer-playbook.git
synced 2025-04-28 18:30:24 +02:00
Compare commits
No commits in common. "f6a42a4a5d7f24dbef2babfe82e3d027aff7a7b0" and "26abfd441ac333e35e01eda0254ad9bebc7147f2" have entirely different histories.
f6a42a4a5d ... 26abfd441a
.gitignore (vendored, 3 changes)

@@ -1,5 +1,2 @@
 site.retry
 *__pycache__
-docs/*
-!docs/.gitkeep
-venv
@@ -1,11 +1,10 @@
 # License Agreement
-## CyMaIS NonCommercial License (CNCL)

-### Definitions
+## Definitions
 - **"Software":** Refers to *"[CyMaIS - Cyber Master Infrastructure Solution](https://cymais.cloud/)"* and its associated source code.
 - **"Commercial Use":** Any use of the Software intended for direct or indirect financial gain, including but not limited to sales, rentals, or provision of services.

-### Provisions
+## Provisions

 1. **Attribution of the Original Licensor:** In any distribution or publication of the Software or derivative works, the original licensor, *Kevin Veen-Birkenbach, Email: [license@veen.world](mailto:license@veen.world), Website: [https://www.veen.world/](https://www.veen.world/)* must be explicitly named.

@@ -24,5 +23,5 @@

 7. **Ownership of Rights:** All rights, including copyright, trademark, and other forms of intellectual property related to the Software, belong exclusively to Kevin Veen-Birkenbach.

-### Consent
+## Consent
 By using, modifying, or distributing the Software, you agree to these terms.
Makefile (20 changes)

@@ -1,20 +0,0 @@
-# Minimal makefile for Sphinx documentation
-#
-
-# You can set these variables from the command line, and also
-# from the environment for the first two.
-SPHINXOPTS  ?=
-SPHINXBUILD ?= sphinx-build
-SOURCEDIR   = /home/kevinveenbirkenbach/Repositories/github.com/kevinveenbirkenbach/cymais/
-BUILDDIR    = docs
-
-# Put it first so that "make" without argument is like "make help".
-help:
-	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
-
-.PHONY: help Makefile
-
-# Catch-all target: route all unknown targets to Sphinx using the new
-# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
-%: Makefile
-	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
README.md (21 changes)

@@ -1,5 +1,4 @@
-# CyMaIS
-## Cyber Master Infrastructure Solution
+# CyMaIS - Cyber Master Infrastructure Solution
 [](https://github.com/sponsors/kevinveenbirkenbach) [](https://www.patreon.com/c/kevinveenbirkenbach) [](https://buymeacoffee.com/kevinveenbirkenbach) [](https://s.veen.world/paypaldonate)

@@ -15,7 +14,7 @@ Our intuitive interface, coupled with in-depth documentation, makes it accessibl
 With CyMaIS, setting up a secure, scalable, and robust IT infrastructure is not just faster and easier, but also aligned with the best industry practices, ensuring that your organization stays ahead in the ever-evolving digital landscape.

-### Vision
+## Vision
 Our project is anchored in the vision of transforming IT infrastructure deployment into a seamless, secure, and scalable experience.

 We are committed to developing a fully automated solution that enables businesses of any size and industry to set up a 100% secure and infinitely scalable IT infrastructure in just 24 hours.

@@ -24,17 +23,17 @@ Leveraging the power of Open Source, our tool not only promises to uphold the hi
 This is not just a step towards simplifying IT management – it's a leap towards democratizing access to advanced technology, ensuring every business can quickly adapt and thrive in the digital age.

-For a deeper understanding of our goals and the ethos driving our project, we invite you to explore our detailed **[Vision Statement](./VISION_STATEMENT.md)**. Here, you'll find the cornerstone principles that guide our development process and our commitment to making a lasting impact in the realm of IT infrastructure.
+For a deeper understanding of our goals and the ethos driving our project, we invite you to explore our detailed **[Vision Statement](./docs/VISION_STATEMENT.md)**. Here, you'll find the cornerstone principles that guide our development process and our commitment to making a lasting impact in the realm of IT infrastructure.

-### Solutions Overview
+## Solutions Overview

 To help you navigate through our repository, we have categorized our extensive range of tools and solutions into three key areas:

-1. **[Server Applications](./SERVER_APPLICATIONS.md)**: Detailed information on server-focused tools and configurations, ideal for managing and optimizing server environments.
+1. **[Server Applications](./docs/SERVER_APPLICATIONS.md)**: Detailed information on server-focused tools and configurations, ideal for managing and optimizing server environments.

-2. **[End User Applications](./END_USER_APPLICATIONS.md)**: A guide to applications and tools specifically designed for end-user PCs, enhancing personal computing experience.
+2. **[End User Applications](./docs/END_USER_APPLICATIONS.md)**: A guide to applications and tools specifically designed for end-user PCs, enhancing personal computing experience.

-3. **[Common Applications](./COMMON_APPLICATIONS.md)**: A comprehensive list of tools and applications that are versatile and useful across both server and end-user environments.
+3. **[Common Applications](./docs/COMMON_APPLICATIONS.md)**: A comprehensive list of tools and applications that are versatile and useful across both server and end-user environments.

 Each of these documents provides a tailored overview, ensuring you can find the right tools and information relevant to your specific needs, whether for server management, personal computing, or general IT infrastructure.

@@ -58,7 +57,7 @@ Each of these documents provides a tailored overview, ensuring you can find the
 CyMaIS is more than just an IT solution; it's a commitment to empowering your business with the technology it needs to thrive in today’s digital landscape, effortlessly and securely.

-### Professional CyMaIS Implementation
+## Professional CyMaIS Implementation
 <img src="https://cybermaster.space/wp-content/uploads/sites/7/2023/11/FVG_8364BW-scaled.jpg" width="300" style="float: right; margin-left: 30px;">

 My name is Kevin Veen-Birkenbach and I'm glad to assist you in the implementation of your secure and scalable IT infrastructure solution with CyMaIS.

@@ -73,11 +72,11 @@ Contact me for more details:
 📧 Email: [kevin@veen.world](mailto:kevin@veen.world)<br />
 ☎️ Phone: [+ 49 178 179 80 23](tel:00491781798023)

-### Showcases
+## Showcases
 The following list showcases the extensive range of solutions that CyMaIS incorporates, each playing a vital role in providing a comprehensive, efficient, and secure IT infrastructure setup:

 [ELK Stack](./roles/docker-elk), [Intel Driver](./roles/driver-intel), [Nginx Docker Reverse Proxy](./roles/nginx-docker-reverse-proxy), [Sudo](./roles/sudo), [Funkwhale](./roles/docker-funkwhale), [MSI Keyboard Color Driver](./roles/driver-msi-keyboard-color), [Nginx Domain Redirect](./roles/nginx-redirect-domain), [GnuCash](./roles/pc-gnucash), [Backup Data to USB](./roles/backup-data-to-usb), [Gitea](./roles/docker-gitea), [Non-Free Driver](./roles/driver-non-free), [Nginx Homepage](./roles/nginx-serve-html), [Jrnl](./roles/pc-jrnl), [Systemd Notifier](./roles/systemd-notifier), [Backup Docker to Local](./roles/backup-docker-to-local), [Jenkins](./roles/docker-jenkins), [Git](./roles/git), [Nginx HTTPS](./roles/nginx-https), [Latex](./roles/pc-latex), [Email Notifier](./roles/systemd-notifier-email), [Remote to Local Backup Solution](./roles/backup-remote-to-local), [Joomla](./roles/docker-joomla), [Heal Defect Docker Installations](./roles/heal-docker), [Nginx Matomo Tracking](./roles/nginx-modifier-matomo), [LibreOffice](./roles/pc-libreoffice), [Telegram Notifier](./roles/systemd-notifier-telegram), [Listmonk](./roles/docker-listmonk), [Btrfs Health Check](./roles/health-btrfs), [Nginx WWW Redirect](./roles/nginx-redirect-www), [Network Analyze Tools](./roles/pc-network-analyze-tools), [System Security](./roles/system-security), [Mailu](./roles/docker-mailu), [Disc Space Health Check](./roles/health-disc-space), [Administrator Tools](./roles/pc-administrator-tools), [Nextcloud Client](./roles/pc-nextcloud), [Swapfile Setup](./roles/system-swapfile), [Backups Cleanup](./roles/cleanup-backups-service), [Mastodon](./roles/docker-mastodon), [Docker Container Health Checker](./roles/health-docker-container), [Blu-ray Player Tools](./roles/pc-bluray-player-tools), [Office](./roles/pc-office), [Update Solutions](./roles/update), [Matomo](./roles/docker-matomo), [Docker Volumes Health Checker](./roles/health-docker-volumes), [Caffeine](./roles/pc-caffeine), [Qbittorrent](./roles/pc-qbittorrent), [Update Apt](./roles/update-apt), [Disc Space Cleanup](./roles/cleanup-disc-space), [Matrix](./roles/docker-matrix), [Health Journalctl](./roles/health-journalctl), [Designer Tools](./roles/pc-designer-tools), [Security Tools](./roles/pc-security-tools), [Update Docker](./roles/update-docker), [Failed Docker Backups Cleanup](./roles/cleanup-failed-docker-backups), [MediaWiki](./roles/docker-mediawiki), [Nginx Health Checker](./roles/health-nginx), [Developer Tools](./roles/pc-developer-tools), [Spotify](./roles/pc-spotify), [Update Pacman](./roles/update-pacman), [Client Wireguard](./roles/client-wireguard), [MyBB](./roles/docker-mybb), [Developer Tools for Arduino](./roles/pc-developer-tools-arduino), [SSH](./roles/pc-ssh), [Update Yay](./roles/update-yay), [Client Setup for Wireguard Behind Firewall](./roles/client-wireguard-behind-firewall), [Nextcloud Server](./roles/docker-nextcloud), [Hunspell](./roles/hunspell), [Developer Tools for Bash](./roles/pc-developer-tools-bash), [Streaming Tools](./roles/pc-streaming-tools), [Administrator](./roles/user-administrator), [Docker](./roles/docker), [Peertube](./roles/docker-peertube), [Java](./roles/java), [Developer Tools for Java](./roles/pc-developer-tools-java), [Tor Browser](./roles/pc-torbrowser), [Video Conference](./roles/pc-video-conference), [Wireguard](./roles/wireguard), [Akaunting](./roles/docker-akaunting), [Pixelfed](./roles/docker-pixelfed), [Journalctl](./roles/journalctl), [Developer Tools for PHP](./roles/pc-developer-tools-php), 
 [Virtual Box](./roles/pc-virtual-box), [Postfix](./roles/postfix), [Attendize](./roles/docker-attendize), [Wordpress](./roles/docker-wordpress), [Locales](./roles/locales), [Docker for End Users](./roles/pc-docker), [Games](./roles/pc-games), [Python Pip](./roles/python-pip), [Discourse](./roles/docker-discourse), [Epson Multiprinter Driver](./roles/driver-epson-multiprinter), [Nginx Certbot](./roles/nginx-certbot), [Git](./roles/pc-git), [SSHD](./roles/sshd), [YOURLS](./roles/docker-yourls), [BigBlueButton](./roles/docker-bigbluebutton),[System Maintenance Lock](./roles/system-maintenance-lock),[Open Project](./roles/docker-openproject)...

-### License
+## License

 This project is licensed from Kevin Veen-Birkenbach. The full license is available in the [LICENSE.md](./LICENSE.md) of this repository.
conf.py (62 changes)

@@ -1,62 +0,0 @@
-# Configuration file for the Sphinx documentation builder.
-#
-# For the full list of built-in configuration values, see the documentation:
-# https://www.sphinx-doc.org/en/master/usage/configuration.html
-
-# -- Project information -----------------------------------------------------
-# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
-
-project = 'CyMaIS - Cyber Master Infrastructure Solution'
-copyright = '2025, Kevin Veen-Birkenbach'
-author = 'Kevin Veen-Birkenbach'
-
-# -- General configuration ---------------------------------------------------
-# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
-
-extensions = []
-
-templates_path = ['_templates']
-exclude_patterns = ['docs', 'venv', 'venv/**']
-
-
-# -- Options for HTML output -------------------------------------------------
-# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
-
-html_theme = 'alabaster'
-html_static_path = ['_static']
-html_sidebars = {
-    '**': [
-        'globaltoc.html',   # global table of contents
-        'relations.html',   # previous/next navigation
-        'searchbox.html',   # search box
-    ]
-}
-html_theme_options = {
-    'fixed_sidebar': True,
-}
-
-# File extensions that Sphinx should process:
-source_suffix = {
-    '.rst': 'restructuredtext',
-    '.md': 'markdown',
-}
-
-extensions = [
-    "sphinx.ext.autosummary",
-    "sphinx.ext.autodoc",
-    "myst_parser",
-]
-autosummary_generate = True
-
-# Optional: additional MyST configuration
-myst_enable_extensions = [
-    "colon_fence",  # for extended syntax such as ::: admonition boxes etc.
-    # additional extensions as needed
-]
-#
-#myst_xref_ignore = [
-#    r"\./roles/.*",
-#    "../"
-#]
@@ -4,7 +4,7 @@ This section outlines the common applications tailored for both servers and end-
 ## Base Setup
 Key for initial system configuration, this section includes hostname setting, systemd journal management, locale configurations, and swapfile handling. Essential for both server and end-user setups, it ensures a solid foundation for system operations.

-- **[Hostname](roles/hostname/)**: Sets the system's hostname.
+- **[Hostname](./roles/hostname/)**: Sets the system's hostname.
 - **[Journalctl](./roles/journalctl/)**: Configures systemd journal settings.
 - **[Locales](./roles/locales/)**: Configures system locales.
 - **[System-Swapfile](./roles/system-swapfile/)**: Configures swapfile creation and management.
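For orientation, the commands below sketch what the Base Setup roles listed above automate when done by hand. The hostname, locale, journal size, and swapfile size are placeholder values for illustration, not defaults taken from the roles.

```bash
# Set the hostname (hostname role)
sudo hostnamectl set-hostname example-host      # "example-host" is a placeholder

# Configure the system locale (locales role)
sudo localectl set-locale LANG=en_US.UTF-8

# Cap the systemd journal size (journalctl role)
sudo mkdir -p /etc/systemd/journald.conf.d
printf '[Journal]\nSystemMaxUse=500M\n' | sudo tee /etc/systemd/journald.conf.d/size.conf
sudo systemctl restart systemd-journald

# Create and activate a swapfile (system-swapfile role)
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```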
index.rst (10 changes)

@@ -1,10 +0,0 @@
-.. CyMaIS - Cyber Master Infrastructure Solution documentation
-===========================================================
-
-.. toctree::
-   :maxdepth: 5
-   :caption: Index:
-   :glob:
-   :titlesonly:
-
-   **
@@ -1,4 +0,0 @@
-pip:
-myst-parser
-sphinx
-sphinx-rtd-theme
@ -4,7 +4,7 @@
|
|||||||
|
|
||||||
This Ansible role automates data backups to a swappable USB device. It triggers the backup process automatically when the USB is mounted, allowing for customizable source and destination paths and integrating with systemd for reliable execution.
|
This Ansible role automates data backups to a swappable USB device. It triggers the backup process automatically when the USB is mounted, allowing for customizable source and destination paths and integrating with systemd for reliable execution.
|
||||||
|
|
||||||
## 📌 Overview
|
## Overview
|
||||||
|
|
||||||
Optimized for Archlinux, this role ensures that backups are performed consistently with minimal manual intervention. It leverages efficient synchronization methods and provides a seamless integration with systemd to manage the backup service.
|
Optimized for Archlinux, this role ensures that backups are performed consistently with minimal manual intervention. It leverages efficient synchronization methods and provides a seamless integration with systemd to manage the backup service.
|
||||||
|
|
||||||
@ -20,6 +20,6 @@ The primary purpose of this role is to simplify the backup process for systems t
|
|||||||
- **Efficient Synchronization:** Utilizes rsync with incremental backup strategies for optimal performance.
|
- **Efficient Synchronization:** Utilizes rsync with incremental backup strategies for optimal performance.
|
||||||
- **Optimized for Archlinux:** Tailored for Archlinux systems using the rolling release model.
|
- **Optimized for Archlinux:** Tailored for Archlinux systems using the rolling release model.
|
||||||
|
|
||||||
## Credits 📝
|
## Credits
|
||||||
|
|
||||||
Developed and maintained by **Kevin Veen-Birkenbach**. Special thanks to [OpenAI ChatGPT](https://chat.openai.com/share/a75ca771-d8a4-4b75-9912-c515ba371ae4) for its assistance in developing this role.
|
Developed and maintained by **Kevin Veen-Birkenbach**. Special thanks to [OpenAI ChatGPT](https://chat.openai.com/share/a75ca771-d8a4-4b75-9912-c515ba371ae4) for its assistance in developing this role.
|
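The incremental rsync pattern mentioned above (hard-linking unchanged files against the previous run) looks roughly like this. The paths are placeholders and this is a sketch of the technique, not the role's actual script.

```bash
SRC="/home/user/data"          # placeholder source directory
MOUNT="/mnt/usb-backup"        # placeholder USB mount point
DEST="$MOUNT/$(date +%Y-%m-%d)"
LAST="$MOUNT/latest"           # symlink to the previous backup run

mkdir -p "$DEST"
# --link-dest hard-links files that are unchanged since the last run,
# so each dated directory looks complete but only changes consume space
rsync -a --delete --link-dest="$LAST" "$SRC/" "$DEST/"
ln -sfn "$DEST" "$LAST"
```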
@@ -4,7 +4,7 @@
 This Ansible role pulls the [directory-validator](https://github.com/kevinveenbirkenbach/directory-validator.git) repository to a predefined location. It is used by the backup-docker-to-local and cleanup-failed-docker-backups roles to verify whether backups have been successfully created.

-## 📌 Overview
+## Overview

 The role retrieves the latest version of the directory-validator from its Git repository and installs it into the designated folder (configured via the `backup_directory_validator_folder` variable). A fact is set to ensure that the repository is pulled only once per playbook run.
@@ -4,7 +4,7 @@
 This Ansible role automates the process of backing up Docker volumes to a local folder. It pulls the [backup-docker-to-local repository](https://github.com/kevinveenbirkenbach/backup-docker-to-local.git), installs required software, configures systemd services for both standard and "everything" backup modes, and seeds backup database entries as needed.

-## 📌 Overview
+## Overview

 Optimized for Archlinux, this role ensures that Docker volume backups are performed reliably with minimal manual intervention. It integrates with several dependent roles to verify backup success and manage related tasks, including:
 - [backup-directory-validator](../backup-directory-validator/) – Validates backup directories.
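As background, one common way to copy a named Docker volume into a local backup folder is shown below. It is a generic sketch with placeholder names, not necessarily how the backup-docker-to-local scripts do it.

```bash
VOLUME="my_app_data"                                        # placeholder volume name
TARGET="/Backups/docker-volumes/$VOLUME/$(date +%Y-%m-%d)"  # placeholder backup folder
mkdir -p "$TARGET"

# Mount the volume read-only into a throwaway container and copy its contents out
docker run --rm \
  -v "$VOLUME":/source:ro \
  -v "$TARGET":/backup \
  alpine sh -c "cp -a /source/. /backup/"
```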
@@ -4,7 +4,7 @@
 This role pulls backups from a remote server and stores them locally using rsync with retry logic. It is designed to retrieve remote backup data and integrate with your overall backup scheme.

-## 📌 Overview
+## Overview

 Optimized for Archlinux, this role is a key component of a comprehensive backup system. It works in conjunction with other roles to ensure that backup data is collected, verified, and maintained. The role uses a Bash script to pull backups, manage remote connections, and handle incremental backup creation.

@@ -20,7 +20,7 @@ Backup Remote to Local is a robust solution for retrieving backup data from remo
 - **Integration with Other Roles:** Works alongside roles like backup-directory-validator, cleanup-failed-docker-backups, systemd-timer, backups-provider, and system-maintenance-lock.
 - **Administrative Debugging:** Detailed debug instructions and administrative tasks are provided in a separate file.

-## 📚 Other Resources
+## Further Information

 - **Backup Scheme:**
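A minimal sketch of an rsync pull with simple retry logic, of the kind such a role wraps; the host, paths, and retry count are placeholders, not the role's actual configuration.

```bash
REMOTE="backup@backup-host.example"        # placeholder backup user and host
REMOTE_DIR="/Backups/"                     # placeholder remote directory
LOCAL_DIR="/Backups/remote/backup-host/"   # placeholder local target
mkdir -p "$LOCAL_DIR"

# Retry a few times in case the connection drops mid-transfer
for attempt in 1 2 3; do
  rsync -aAX --partial -e ssh "$REMOTE:$REMOTE_DIR" "$LOCAL_DIR" && break
  echo "rsync attempt $attempt failed, retrying in 60s" >&2
  sleep 60
done
```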
@@ -4,7 +4,7 @@
 This role sets up a dedicated backup user (`backup`) for performing secure backup operations. It creates the user, configures a restricted SSH environment with a custom `authorized_keys` template and an SSH wrapper script, and grants necessary sudo rights for executing rsync. This configuration helps ensure controlled and secure access specifically for backup processes.

-## 📌 Overview
+## Overview

 The role is a critical component in a secure backup scheme. By isolating backup operations to a dedicated user, it minimizes the risk of unauthorized actions. The role configures the SSH environment so that only specific, allowed commands can be executed, and it sets up passwordless sudo rights for rsync, ensuring smooth and secure backup operations.

@@ -20,7 +20,7 @@ The purpose of this role is to enhance the security of your backup system by pro
 - **Sudo Configuration:** Grants passwordless sudo rights for rsync, enabling secure and automated backup transfers.
 - **Integration:** Supports seamless integration with your backup infrastructure by limiting the backup user's permissions to only the required commands.

-## 📚 Other Resources
+## Further Information

 For more details on how the role works and advanced configuration options, please see the related references below:
 - [Ansible Playbooks Lookups](https://docs.ansible.com/ansible/latest/user_guide/playbooks_lookups.html#id3)
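For illustration, restricting a dedicated backup user usually combines a forced command in `authorized_keys` with a narrow sudoers entry. The wrapper path and key below are invented placeholders, not the role's actual templates.

```bash
# Force every SSH login of the "backup" user through a wrapper script
# (the key and the wrapper path are placeholders)
cat >> /home/backup/.ssh/authorized_keys <<'EOF'
command="/usr/local/bin/backup-ssh-wrapper.sh",restrict ssh-ed25519 AAAAC3Nza... pulling-host
EOF

# Allow the backup user to run rsync via sudo without a password
echo 'backup ALL=(ALL) NOPASSWD: /usr/bin/rsync' | sudo tee /etc/sudoers.d/backup
sudo chmod 440 /etc/sudoers.d/backup
```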
@@ -1,35 +1,12 @@
-# Backups Provider
-
-## Description
-
-This role sets up and manages the host as a backup provider. It establishes the framework for secure backup operations and integrates with other roles to facilitate reliable backup data management.
-
-## 📌 Overview
-
-Optimized for automated backup processes, this role:
-- Configures the host to provide backup services.
-- Integrates seamlessly with the [backups-provider-user](../backups-provider-user/README.md) and [cleanup-backups-timer](../cleanup-backups-timer/README.md) roles.
-- Lays the foundation for secure and extensible backup operations.
-
-## Purpose
-
-The primary purpose of this role is to enable the host to act as a backup provider, ensuring that backup data is securely stored and managed. Future enhancements will include full system backup capabilities.
-
-## Features
-
-- **Backup Framework:** Establishes the necessary configuration for hosting backups.
-- **Role Integration:** Works in conjunction with related roles to provide a comprehensive backup solution.
-- **Extensibility:** Designed to accommodate future features, such as full system backups.
-
-## Todo
-
-- Add full system backup functionality.
-
-## See Also
-
-- [Chroot SFTP Setup](https://www.thegeekstuff.com/2012/03/chroot-sftp-setup/)
-- [Rsync over SFTP without an SSH Shell](https://serverfault.com/questions/135618/is-it-possible-to-use-rsync-over-sftp-without-an-ssh-shell)
-- [SFTP SSH Backups with Added Security](https://forum.duplicati.com/t/sftp-ssh-backups-to-a-linux-server-with-added-security/7334)
-- [Chrootd Rsync Setup](https://serverfault.com/questions/287578/trying-to-setup-chrootd-rsync)
-- [Using Rsync with SSH](http://ramblings.narrabilis.com/using-rsync-with-ssh)
-- [Rsync on Arch Linux](https://wiki.archlinux.org/index.php/rsync)
+# role backups-provider-host
+
+## todo
+- add full system backup
+
+## see
+- https://www.thegeekstuff.com/2012/03/chroot-sftp-setup/
+- https://serverfault.com/questions/135618/is-it-possible-to-use-rsync-over-sftp-without-an-ssh-shell
+- https://forum.duplicati.com/t/sftp-ssh-backups-to-a-linux-server-with-added-security/7334
+- https://serverfault.com/questions/287578/trying-to-setup-chrootd-rsync
+- http://ramblings.narrabilis.com/using-rsync-with-ssh
+- https://wiki.archlinux.org/index.php/rsync
@@ -1,27 +1,3 @@
----
-galaxy_info:
-  author: "Kevin Veen-Birkenbach"
-  description: "Configures the host as a backup provider to facilitate secure backup operations."
-  license: "CyMaIS NonCommercial License (CNCL)"
-  license_url: "https://s.veen.world/cncl"
-  company: |
-    Kevin Veen-Birkenbach
-    Consulting & Coaching Solutions
-    https://www.veen.world
-  min_ansible_version: "2.9"
-  platforms:
-    - name: Linux
-      versions:
-        - all
-  galaxy_tags:
-    - backups
-    - provider
-    - backup
-    - automation
-    - security
-  repository: "https://s.veen.world/cymais"
-  issue_tracker_url: "https://s.veen.world/cymaisissues"
-  documentation: "https://s.veen.world/cymais"
 dependencies:
   - backups-provider-user
   - cleanup-backups-timer
@@ -1,7 +0,0 @@
-It may be necessary to install gcc separately to use psutil
-
-```bash
-sudo pacman -S gcc
-```
-
-If this is the case in the future, automate it in this role.
@@ -1,27 +1,14 @@
-# Cleanup Backups Service
-
-## Description
-
-This role automates the cleanup of old backups by executing a Python script that deletes outdated backup versions based on disk usage thresholds. It ensures that backup storage does not exceed a defined usage percentage.
-
-## 📌 Overview
-
-Optimized for effective disk space management, this role:
-- Installs required packages (e.g. [lsof](https://en.wikipedia.org/wiki/Lsof) and [psutil](https://pypi.org/project/psutil/)) using pacman.
-- Creates a directory for storing cleanup scripts.
-- Deploys a Python script that deletes old backup directories when disk usage is too high.
-- Configures a systemd service to run the cleanup script, with notifications via [systemd-notifier](../systemd-notifier/README.md).
-
-## Purpose
-
-The primary purpose of this role is to maintain optimal backup storage by automatically removing outdated backup versions when disk usage exceeds a specified threshold.
-
-## Features
-
-- **Automated Cleanup:** Executes a Python script to delete old backups.
-- **Threshold-Based Deletion:** Removes backups based on disk usage percentage.
-- **Systemd Integration:** Configures a systemd service to run cleanup tasks.
-- **Dependency Integration:** Works in conjunction with related roles for comprehensive backup management.
-
-## 📚 Other Resources
+# role cleanup-backups-timer
+Cleans up old backups
+
+## Additional software
+It may be necessary to install gcc separately to use psutil
+```bash
+sudo pacman -S gcc
+```
+
+## further information
 - https://stackoverflow.com/questions/48929553/get-hard-disk-size-in-python
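The threshold logic described above, sketched in shell rather than the role's actual Python/psutil script; it assumes date-named backup subdirectories and uses a placeholder path and a placeholder 90% threshold.

```bash
BACKUP_ROOT="/Backups"   # placeholder backup location
THRESHOLD=90             # start deleting once disk usage exceeds this percentage

usage() { df --output=pcent "$BACKUP_ROOT" | tail -n 1 | tr -dc '0-9'; }

# Remove the oldest date-named backup directories until usage drops below the threshold
while [ "$(usage)" -gt "$THRESHOLD" ]; do
  oldest="$(find "$BACKUP_ROOT" -mindepth 1 -maxdepth 1 -type d | sort | head -n 1)"
  [ -z "$oldest" ] && break
  echo "Removing $oldest"
  rm -rf "$oldest"
done
```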
@@ -1,26 +1,3 @@
----
-galaxy_info:
-  author: "Kevin Veen-Birkenbach"
-  description: "Automates the cleanup of old backups by executing a Python script that deletes outdated backup versions when disk usage exceeds a specified threshold."
-  license: "CyMaIS NonCommercial License (CNCL)"
-  license_url: "https://s.veen.world/cncl"
-  company: |
-    Kevin Veen-Birkenbach
-    Consulting & Coaching Solutions
-    https://www.veen.world
-  min_ansible_version: "2.9"
-  platforms:
-    - name: Linux
-      versions:
-        - all
-  galaxy_tags:
-    - backup
-    - cleanup
-    - disk
-    - automation
-  repository: "https://s.veen.world/cymais"
-  issue_tracker_url: "https://s.veen.world/cymaisissues"
-  documentation: "https://s.veen.world/cymais"
 dependencies:
   - python-pip
   - systemd-notifier
@@ -1,21 +1,3 @@
-# Cleanup Backups Timer
-
-## Description
-
-This role sets up a systemd timer to schedule the periodic cleanup of old backups. It leverages the cleanup-backups-service role to perform the actual cleanup operation.
-
-## 📌 Overview
-
-Optimized for automated maintenance, this role:
-- Sets a fact for the service name.
-- Integrates with the [systemd-timer](../systemd-timer/README.md) role to schedule cleanup-backups tasks at defined intervals.
-
-## Purpose
-
-The primary purpose of this role is to automate the scheduling of backup cleanup operations using a systemd timer, ensuring that backup storage remains within defined limits.
-
-## Features
-
-- **Timer Scheduling:** Configures a systemd timer to trigger the backup cleanup service.
-- **Role Integration:** Works in conjunction with the cleanup-backups-service role.
-- **Idempotency:** Ensures the timer tasks execute only once per playbook run.
+# role cleanup-backups-timer
+
+Timer for cleaning up old backups
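For readers unfamiliar with systemd timers, a generic timer unit of the kind such a role schedules is shown below; the unit name and the daily schedule are illustrative, not the role's actual template.

```bash
# /etc/systemd/system/cleanup-backups.timer (illustrative unit name)
sudo tee /etc/systemd/system/cleanup-backups.timer >/dev/null <<'EOF'
[Unit]
Description=Periodically trigger the backup cleanup service

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now cleanup-backups.timer   # fires cleanup-backups.service
```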
@@ -1,25 +1,2 @@
----
-galaxy_info:
-  author: "Kevin Veen-Birkenbach"
-  description: "Schedules periodic cleanup of old backups by configuring a systemd timer to trigger the cleanup-backups-service role."
-  license: "CyMaIS NonCommercial License (CNCL)"
-  license_url: "https://s.veen.world/cncl"
-  company: |
-    Kevin Veen-Birkenbach
-    Consulting & Coaching Solutions
-    https://www.veen.world
-  min_ansible_version: "2.9"
-  platforms:
-    - name: Linux
-      versions:
-        - all
-  galaxy_tags:
-    - timer
-    - backup
-    - cleanup
-    - automation
-  repository: "https://s.veen.world/cymais"
-  issue_tracker_url: "https://s.veen.world/cymaisissues"
-  documentation: "https://s.veen.world/cymais"
 dependencies:
   - cleanup-backups-service
@@ -1,24 +1,4 @@
-# Cleanup Disc Space
-
-## Description
-
-This role frees disk space by executing a script that cleans up temporary files, clears package caches, and optionally cleans up backup directories and Docker resources when disk usage exceeds a specified threshold.
-
-## 📌 Overview
-
-Optimized for efficient storage management, this role:
-- Creates a directory for disk cleanup scripts.
-- Deploys a Bash script that frees disk space by cleaning up /tmp, Docker resources, and pacman cache.
-- Configures a systemd service to run the disk cleanup script.
-- Optionally integrates with backup cleanup if backup variables are defined.
-
-## Purpose
-
-The primary purpose of this role is to ensure that disk space remains within safe limits by automating cleanup tasks, thereby improving system performance and stability.
-
-## Features
-
-- **Automated Cleanup:** Executes a script to remove temporary files and clear caches.
-- **Threshold-Based Execution:** Triggers cleanup when disk usage exceeds a defined percentage.
-- **Systemd Integration:** Configures a systemd service to manage the disk cleanup process.
-- **Docker and Backup Integration:** Optionally cleans Docker resources and backups if configured.
+# cleanup-disc-space
+Frees disc space
+## More information
+- https://askubuntu.com/questions/380238/how-to-clean-tmp
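Rough shell equivalents of the cleanup steps listed above, for orientation only; they are not the role's script, and the Docker prune in particular is destructive, so review before running.

```bash
sudo rm -rf /tmp/*                      # clear temporary files
sudo paccache -rk1                      # trim the pacman cache (Arch, needs pacman-contrib)
sudo docker system prune -f --volumes   # remove unused Docker data – destructive, review first
```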
@@ -1,26 +1,3 @@
----
-galaxy_info:
-  author: "Kevin Veen-Birkenbach"
-  description: "Frees disk space on the target system by executing a cleanup script that removes temporary files, clears package caches, and optionally handles Docker and backup cleanup."
-  license: "CyMaIS NonCommercial License (CNCL)"
-  license_url: "https://s.veen.world/cncl"
-  company: |
-    Kevin Veen-Birkenbach
-    Consulting & Coaching Solutions
-    https://www.veen.world
-  min_ansible_version: "2.9"
-  platforms:
-    - name: Linux
-      versions:
-        - all
-  galaxy_tags:
-    - disk
-    - cleanup
-    - storage
-    - automation
-  repository: "https://s.veen.world/cymais"
-  issue_tracker_url: "https://s.veen.world/cymaisissues"
-  documentation: "https://s.veen.world/cymais"
 dependencies:
   - systemd-notifier
   - system-maintenance-lock
@@ -1,24 +1,3 @@
-# Docker Volume Backup Cleanup Role
-
-## Description
-
-This role cleans up failed Docker backups by pulling a [Git repository](https://github.com/kevinveenbirkenbach/cleanup-failed-docker-backups) that contains cleanup scripts and configuring a systemd service to execute them. It ensures that failed or incomplete backups are removed to free up disk space and maintain a healthy backup environment.
-
-## 📌 Overview
-
-Optimized for backup maintenance, this role:
-- Clones the cleanup-failed-docker-backups repository.
-- Configures a systemd service to run the cleanup script.
-- Integrates with the [systemd-timer](../systemd-timer/README.md) role to schedule periodic cleanup.
-- Works in conjunction with the backup-directory-validator role for additional verification.
-
-## Purpose
-
-The primary purpose of this role is to remove failed Docker backups automatically, thereby freeing disk space and preventing backup storage from becoming cluttered with incomplete data.
-
-## Features
-
-- **Repository Cloning:** Retrieves the latest cleanup scripts from a Git repository.
-- **Service Configuration:** Sets up a systemd service to run the cleanup tasks.
-- **Timer Integration:** Schedules periodic cleanup through a systemd timer.
-- **Dependency Integration:** Works with backup-directory-validator to enhance backup integrity.
+# Docker Volume Backup Cleanup
+This script cleans up failed docker backups.
+It uses https://github.com/kevinveenbirkenbach/cleanup-failed-docker-backups as base.
@@ -1,26 +1,3 @@
----
-galaxy_info:
-  author: "Kevin Veen-Birkenbach"
-  description: "Cleans up failed Docker backups by configuring a systemd service and timer to execute the cleanup operations periodically."
-  license: "CyMaIS NonCommercial License (CNCL)"
-  license_url: "https://s.veen.world/cncl"
-  company: |
-    Kevin Veen-Birkenbach
-    Consulting & Coaching Solutions
-    https://www.veen.world
-  min_ansible_version: "2.9"
-  platforms:
-    - name: Linux
-      versions:
-        - all
-  galaxy_tags:
-    - docker
-    - backup
-    - cleanup
-    - automation
-  repository: "https://s.veen.world/cymais"
-  issue_tracker_url: "https://s.veen.world/cymaisissues"
-  documentation: "https://s.veen.world/cymais"
 dependencies:
   - git
   - systemd-notifier
@@ -1,26 +1,5 @@
-# Client WireGuard Behind NAT Role
+# client-wireguard-behind-nat

-## Description
-
-This role adapts iptables rules to enable proper connectivity for a WireGuard client running behind a NAT or firewall. It ensures that traffic is forwarded correctly by applying necessary masquerading rules.
-
-## 📌 Overview
-
-Optimized for environments with network address translation (NAT), this role:
-- Executes shell commands to modify iptables rules.
-- Allows traffic from the WireGuard client interface (e.g. `wg0-client`) and sets up NAT masquerading on the external interface (e.g. `eth0`).
-- Works as an extension to the native WireGuard client role.
-
-## Purpose
-
-The primary purpose of this role is to enable proper routing and connectivity for a WireGuard client situated behind a firewall or NAT device. By adapting iptables rules, it ensures that the client can communicate effectively with external networks.
-
-## Features
-
-- **iptables Rule Adaptation:** Modifies iptables to allow forwarding and NAT masquerading for the WireGuard client.
-- **NAT Support:** Configures the external interface for proper masquerading.
-- **Role Integration:** Depends on the [client-wireguard](../client-wireguard/README.md) role to ensure that WireGuard is properly configured before applying firewall rules.
-
-## 📚 Other Resources
+# see
 - https://gist.github.com/insdavm/b1034635ab23b8839bf957aa406b5e39
 - https://wiki.debian.org/iptables
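The forwarding and masquerading rules described above typically look like the following; `wg0-client` and `eth0` are the example interface names used in the text, not fixed values.

```bash
# Allow forwarding for the WireGuard client interface and masquerade outgoing traffic
iptables -A FORWARD -i wg0-client -j ACCEPT
iptables -A FORWARD -o wg0-client -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```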
@@ -1,26 +1,2 @@
----
-galaxy_info:
-  author: "Kevin Veen-Birkenbach"
-  description: "Adapts iptables rules to enable proper connectivity for a WireGuard client running behind a NAT or firewall, ensuring that traffic is correctly forwarded and masqueraded."
-  license: "CyMaIS NonCommercial License (CNCL)"
-  license_url: "https://s.veen.world/cncl"
-  company: |
-    Kevin Veen-Birkenbach
-    Consulting & Coaching Solutions
-    https://www.veen.world
-  min_ansible_version: "2.9"
-  platforms:
-    - name: Linux
-      versions:
-        - all
-  galaxy_tags:
-    - wireguard
-    - nat
-    - firewall
-    - iptables
-    - networking
-  repository: "https://s.veen.world/cymais"
-  issue_tracker_url: "https://s.veen.world/cymaisissues"
-  documentation: "https://s.veen.world/cymais"
 dependencies:
   - client-wireguard
@@ -1,11 +0,0 @@
-# Administration
-
-## Create Client Keys
-
-```bash
-wg_private_key="$(wg genkey)"
-wg_public_key="$(echo "$wg_private_key" | wg pubkey)"
-echo "PrivateKey: $wg_private_key"
-echo "PublicKey: $wg_public_key"
-echo "PresharedKey: $(wg genpsk)"
-```
@@ -1,37 +1,28 @@
-# Native Wireguard Client
-
-## Description
-
-This role manages WireGuard on a client system. It sets up essential services and scripts to configure and optimize WireGuard connectivity. Additionally, it provides a link to an Administration document for creating client keys.
-
-## 📌 Overview
-
-Optimized for client configurations, this role:
-- Deploys a systemd service (`set-mtu.cymais.service`) and its associated script to set the MTU on specified network interfaces.
-- Uses a Jinja2 template to generate the `set-mtu.sh` script.
-- Ensures that the MTU is configured correctly before starting WireGuard with [wg-quick](https://www.wireguard.com/quickstart/).
-
-## Purpose
-
-The primary purpose of this role is to configure WireGuard on a client by setting appropriate MTU values on network interfaces. This ensures a stable and optimized VPN connection.
-
-## Features
-
-- **MTU Configuration:** Deploys a template-based script to set the MTU on all defined internet interfaces.
-- **Systemd Service Integration:** Creates and manages a systemd service to execute the MTU configuration script.
-- **Administration Support:** For client key creation and further setup, please refer to the [Administration](./Administration.md) file.
-- **Modular Design:** Easily integrates with other WireGuard roles or network configuration roles.
-
-## 📚 Other Resources
-
-- [WireGuard Documentation](https://www.wireguard.com/)
-- [ArchWiki: WireGuard](https://wiki.archlinux.org/index.php/WireGuard)
-- [WireGuard on Raspbian](https://wireguard.how/server/raspbian/)
-- [Subnetting Basics](https://www.scaleuptech.com/de/blog/was-ist-und-wie-funktioniert-subnetting/)
-- [WireGuard Permissions Issue Discussion](https://bodhilinux.boards.net/thread/450/wireguard-rtnetlink-answers-permission-denied)
-- [SSH Issues with WireGuard](https://stackoverflow.com/questions/69140072/unable-to-ssh-into-wireguard-ip-until-i-ping-another-server-from-inside-the-serv)
-- [UFW and SSH via WireGuard](https://unix.stackexchange.com/questions/717172/why-is-ufw-blocking-acces-to-ssh-via-wireguard)
-- [OpenWrt Forum Discussion on WireGuard](https://forum.openwrt.org/t/cannot-ssh-to-clients-on-lan-when-accessing-router-via-wireguard-client/132709/3)
-- [WireGuard Connection Dies on Ubuntu](https://serverfault.com/questions/1086297/wireguard-connection-dies-on-ubuntu-peer)
-- [SSH Fails with WireGuard IP](https://unix.stackexchange.com/questions/624987/ssh-fails-to-start-when-listenaddress-is-set-to-wireguard-vpn-ip)
-- [WireGuard NAT and Firewall Issues](https://serverfault.com/questions/210408/cannot-ssh-debug1-expecting-ssh2-msg-kex-dh-gex-reply)
+# Role Native Wireguard
+Manages wireguard on a client.
+
+## Create Client Keys
+```bash
+wg_private_key="$(wg genkey)"
+wg_public_key="$(echo "$wg_private_key" | wg pubkey)"
+echo "PrivateKey: $wg_private_key"
+echo "PublicKey: $wg_public_key"
+echo "PresharedKey: $(wg genpsk)"
+```
+
+## Other
+- https://golb.hplar.ch/2019/01/expose-server-vpn.html
+- https://wiki.archlinux.org/index.php/WireGuard
+- https://wireguard.how/server/raspbian/
+- https://www.scaleuptech.com/de/blog/was-ist-und-wie-funktioniert-subnetting/
+- https://bodhilinux.boards.net/thread/450/wireguard-rtnetlink-answers-permission-denied
+- https://stackoverflow.com/questions/69140072/unable-to-ssh-into-wireguard-ip-until-i-ping-another-server-from-inside-the-serv
+- https://unix.stackexchange.com/questions/717172/why-is-ufw-blocking-acces-to-ssh-via-wireguard
+- https://forum.openwrt.org/t/cannot-ssh-to-clients-on-lan-when-accessing-router-via-wireguard-client/132709/3
+- https://serverfault.com/questions/1086297/wireguard-connection-dies-on-ubuntu-peer
+- https://unix.stackexchange.com/questions/624987/ssh-fails-to-start-when-listenaddress-is-set-to-wireguard-vpn-ip
+- https://serverfault.com/questions/210408/cannot-ssh-debug1-expecting-ssh2-msg-kex-dh-gex-reply
+- https://www.thomas-krenn.com/de/wiki/Linux_ip_Kommando
+- https://wiki.archlinux.org/title/dhcpcd
+- https://wiki.ubuntuusers.de/NetworkManager/Dispatcher/
+- https://askubuntu.com/questions/1024916/how-can-i-launch-a-systemd-service-at-startup-before-another-systemd-service-sta
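A minimal sketch of the MTU-then-tunnel ordering this role enforces; the interface name and MTU value are placeholders, not the role's template values.

```bash
# Set the MTU on the uplink before bringing the tunnel up
ip link set dev eth0 mtu 1420   # interface and value are placeholders
wg-quick up wg0-client          # then start the WireGuard client interface
```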
@@ -1,27 +1,2 @@
----
-galaxy_info:
-  author: "Kevin Veen-Birkenbach"
-  description: "Manages WireGuard on a client system by deploying services and scripts to set MTU on network interfaces and ensure optimal VPN connectivity."
-  license: "CyMaIS NonCommercial License (CNCL)"
-  license_url: "https://s.veen.world/cncl"
-  company: |
-    Kevin Veen-Birkenbach
-    Consulting & Coaching Solutions
-    https://www.veen.world
-  min_ansible_version: "2.9"
-  platforms:
-    - name: Linux
-      versions:
-        - all
-  galaxy_tags:
-    - wireguard
-    - vpn
-    - client
-    - mtu
-    - systemd
-    - configuration
-  repository: "https://s.veen.world/cymais"
-  issue_tracker_url: "https://s.veen.world/cymaisissues"
-  documentation: "https://s.veen.world/cymais"
 dependencies:
   - wireguard
@@ -1,31 +0,0 @@
-# Administration
-
-## View Logs
-To check the latest logs of Akaunting.
-```bash
-docker-compose exec -it akaunting tail -n 300 storage/logs/laravel.log
-```
-
-## Access Containers
-- Akaunting Container: `docker-compose exec -it akaunting bash`
-- Database Container: `docker-compose exec -it akaunting-db /bin/mariadb -u admin --password=$akaunting_db_password akaunting`
-
-## Manual Update
-Execute PHP artisan commands in the following order for updating Akaunting:
-
-```bash
-php artisan about
-php artisan cache:clear
-php artisan view:clear
-php artisan migrate:status
-php artisan update:all
-php artisan update:db
-```
-
-## Composer
-To install Composer, a PHP dependency management tool:
-
-```bash
-curl https://getcomposer.org/download/2.4.1/composer.phar --output composer.phar
-php composer.phar install
-```
@ -1,32 +0,0 @@

# Installation Steps

@ATTENTION Variable ```#AKAUNTING_SETUP: true``` needs to be set

## New Manual Setup
1. **Navigate to Docker Compose Directory**: Change to the directory containing your Docker Compose files for Akaunting.

```bash
cd {{path_docker_compose_instances}}akaunting/
```

2. **Set Environment Variables**: These are necessary to prevent timeouts during long operations.

```bash
export COMPOSE_HTTP_TIMEOUT=600
export DOCKER_CLIENT_TIMEOUT=600
```

3. **Start Akaunting Service**: This command will initialize the Akaunting setup.

```bash
AKAUNTING_SETUP=true docker-compose -p akaunting up -d
```

4. **Check Web Interface**: Ensure the web interface is operational.

5. **Restart Services**: To finalize the setup, restart the services.

```bash
docker-compose down
docker-compose -p akaunting up -d
```
@ -16,6 +16,69 @@ This guide details the process of setting up Akaunting, a free and online accoun

- Basic understanding of Docker concepts.
- Access to the command line or terminal.

## Installation Steps

@ATTENTION Variable ```#AKAUNTING_SETUP: true``` needs to be set

### New Manual Setup
1. **Navigate to Docker Compose Directory**: Change to the directory containing your Docker Compose files for Akaunting.

```bash
cd {{path_docker_compose_instances}}akaunting/
```

2. **Set Environment Variables**: These are necessary to prevent timeouts during long operations.

```bash
export COMPOSE_HTTP_TIMEOUT=600
export DOCKER_CLIENT_TIMEOUT=600
```

3. **Start Akaunting Service**: This command will initialize the Akaunting setup.

```bash
AKAUNTING_SETUP=true docker-compose -p akaunting up -d
```

4. **Check Web Interface**: Ensure the web interface is operational (a quick check is sketched below).

5. **Restart Services**: To finalize the setup, restart the services.

```bash
docker-compose down
docker-compose -p akaunting up -d
```
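A quick way to confirm the web interface responds (step 4) is a plain HTTP check; the URL below is a placeholder, not a value from this role:

```bash
# Expect an HTTP 200 or a redirect once Akaunting is reachable.
curl -I https://akaunting.example.com/
```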
### Administration
- **View Logs**: To check the latest logs of Akaunting.

```bash
docker-compose exec -it akaunting tail -n 300 storage/logs/laravel.log
```

- **Access Containers**: For troubleshooting or configuration.
  - Akaunting Container: `docker-compose exec -it akaunting bash`
  - Database Container: `docker-compose exec -it akaunting-db /bin/mariadb -u admin --password=$akaunting_db_password akaunting`

### Manual Update
Execute PHP artisan commands in the following order for updating Akaunting:

```bash
php artisan about
php artisan cache:clear
php artisan view:clear
php artisan migrate:status
php artisan update:all
php artisan update:db
```

### Composer
To install Composer, a PHP dependency management tool:

```bash
curl https://getcomposer.org/download/2.4.1/composer.phar --output composer.phar
php composer.phar install
```

### Full Backup Routine
Detailed steps for backing up your Akaunting instance, including setting manual and automatic variables, destroying containers, removing volumes, and rebuilding and recovering volumes. (Refer to the full backup routine script in the original README.)
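As a generic illustration only (not the role's actual backup script), a Docker volume backup of this kind usually boils down to stopping the stack, archiving the named volumes, and restarting; the volume name below is an assumption:

```bash
docker-compose -p akaunting down
docker run --rm -v akaunting-data:/data -v "$(pwd)":/backup alpine \
  tar czf /backup/akaunting-data.tar.gz -C /data .
docker-compose -p akaunting up -d
```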
@ -31,7 +94,7 @@ Variables are crucial in configuring your Akaunting setup. Ensure you set the fo

- **Nginx Configuration**: Necessary steps to configure Nginx as a reverse proxy for Akaunting.
- **Database and Runtime Environment**: Instructions on how to set up the `db.env` and `run.env` files for database and runtime configurations.

## 📚 Other Resources
For more details, visit the [Akaunting Docker Repository](https://github.com/akaunting/docker) and the [Akaunting Forums](https://akaunting.com/forum).

## Contribution and Feedback
@ -1,5 +0,0 @@

## Setup Instructions

```bash
bash ./Makefile setup
```
@ -3,3 +3,9 @@

# Role: docker-attendize (WIP)

This Ansible role sets up Attendize, an open-source ticket selling and event management platform.

## Setup Instructions

```bash
bash ./Makefile setup
```
@ -2,7 +2,7 @@

This role allows the setup of [Baserow](https://baserow.io/).

## 📚 Other Resources

It was created with the help of [Chat GPT-4](https://chat.openai.com/share/556c2d7f-6b6f-4256-a646-a50529554efc).
@ -1,16 +0,0 @@

## Administration

## cleanup
```bash
docker compose down -v
```

## check container status
```bash
watch -n 2 "docker compose ps -a"
```

## database access
```bash
sudo docker-compose exec -it postgres psql -U postgres
```
@ -3,10 +3,27 @@

Role to deploy [BigBlueButton](https://bigbluebutton.org/).

## Maintenance

### cleanup
```bash
docker compose down -v
```

### check container status
```bash
watch -n 2 "docker compose ps -a"
```

### database access
```bash
sudo docker-compose exec -it postgres psql -U postgres
```

## SSO
- https://docs.bigbluebutton.org/greenlight/v3/external-authentication/

## 📚 Other Resources
- https://github.com/bigbluebutton/docker
- https://docs.bigbluebutton.org/greenlight/gl-install.html#setting-bigbluebutton-credentials
- https://goneuland.de/big-blue-button-mit-docker-und-traefik-installieren/
@ -1,28 +0,0 @@

# Administration

## create user via POST
```bash
curl -X POST https://your-pds-domain/xrpc/com.atproto.server.createAccount \
  --user "admin:$admin-password" \
  -H "Content-Type: application/json" \
  -d '{
    "email": "user@example.com",
    "handle": "username",
    "password": "securepassword123",
    "inviteCode": "optional-invite-code"
  }'
```

## Use pdsadmin
docker compose exec -it pds pdsadmin

docker compose exec -it pds pdsadmin account create-invite-code

## Debugging

- Websocket: https://piehost.com/websocket-tester
- Instance: https://bsky-debug.app

https://bluesky.veen.world/.well-known/atproto-did

Initial setup: no top-level domain.
@ -1,4 +0,0 @@

# Installation

## Configure DNS
- https://bsky.social/about/blog/4-28-2023-domain-handle-tutorial
@ -1,7 +1,42 @@

# DRAFT role docker-bluesky

## Setup

### Configure DNS
- https://bsky.social/about/blog/4-28-2023-domain-handle-tutorial
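Once the DNS records from the linked tutorial are in place, handle resolution can be spot-checked from the command line; the domain below is a placeholder:

```bash
# The handle is verified via an _atproto TXT record or the well-known endpoint.
dig +short TXT _atproto.example.com
curl -s https://example.com/.well-known/atproto-did
```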
## Administration

### create user via POST
```bash
curl -X POST https://your-pds-domain/xrpc/com.atproto.server.createAccount \
  --user "admin:$admin-password" \
  -H "Content-Type: application/json" \
  -d '{
    "email": "user@example.com",
    "handle": "username",
    "password": "securepassword123",
    "inviteCode": "optional-invite-code"
  }'
```

### Use pdsadmin
docker compose exec -it pds pdsadmin

docker compose exec -it pds pdsadmin account create-invite-code

## Debugging

- Websocket: https://piehost.com/websocket-tester
- Instance: https://bsky-debug.app

https://bluesky.veen.world/.well-known/atproto-did

Initial setup: no top-level domain.

## more information
- https://therobbiedavis.com/selfhosting-bluesky-with-docker-and-swag/
- Relevant for proxy configuration: https://cprimozic.net/notes/posts/notes-on-self-hosting-bluesky-pds-alongside-other-services/
- https://github.com/bluesky-social/pds
@ -4,7 +4,7 @@ This Ansible role provides the necessary tasks, files, templates, and variables

---

## 📌 Overview

- **Database Variables**
  Defined in [./vars/database.yml](./vars/database.yml), these variables include:
@ -1,8 +0,0 @@

# Administration

## Check configuration
```bash
./launcher enter application
pry(main)> SiteSetting.all.each { |setting| puts "#{setting.name}: #{setting.value}" }
```
---
@ -2,6 +2,13 @@

This Ansible role sets up Discourse, a popular open-source discussion platform, using Docker containers. It is designed to automate the deployment and configuration process of Discourse, making it easier to maintain and update.

## Check configuration
```bash
./launcher enter application
pry(main)> SiteSetting.all.each { |setting| puts "#{setting.name}: #{setting.value}" }
```

---

## Credits 📝

This README was generated with information provided in the Ansible role. For more detailed instructions and information, refer to the inline comments within the role files. Additional support and context for this role can be found in an [online chat discussion](https://chat.openai.com/share/fdbf9870-1f7e-491f-b4d2-357e6e8ad59c).
@ -1,4 +0,0 @@

## restart all services
```bash
docker restart elk_logstash_1 && docker restart elk_elasticsearch_1 && docker restart elk_kibana_1
```
@ -2,6 +2,11 @@

I decided against using this role for security reasons. I recommend using another tool if you don't want to pay for keeping your logs safe and if you don't want to depend on external servers.

## restart all services
```bash
docker restart elk_logstash_1 && docker restart elk_elasticsearch_1 && docker restart elk_kibana_1
```

## see
- https://logz.io/blog/elk-stack-on-docker/
- https://github.com/kevinveenbirkenbach/docker-elk
@ -1,92 +0,0 @@

# Administration

## Full Reset 🚫➡️✅

The following environment variables need to be defined for successful operation:

- `DB_ROOT_PASSWORD`: The root password for the MariaDB instance

To completely reset Friendica, including its database and volumes, run:
```bash
docker exec -i central-mariadb mariadb -u root -p"${DB_ROOT_PASSWORD}" -e "DROP DATABASE IF EXISTS friendica; CREATE DATABASE friendica;"
docker compose down
rm -rv /mnt/hdd/data/docker/volumes/friendica_data
docker volume rm friendica_data
```

## Reset Database 🗄️

## Manual Method:
1. Connect to the MariaDB instance:
```bash
docker exec -it central-mariadb mariadb -u root -p
```
2. Run the following commands:
```sql
DROP DATABASE friendica;
CREATE DATABASE friendica;
exit;
```

## Automatic Method:
```bash
DB_ROOT_PASSWORD="your_root_password"
docker exec -i central-mariadb mariadb -u root -p"${DB_ROOT_PASSWORD}" -e "DROP DATABASE IF EXISTS friendica; CREATE DATABASE friendica;"
```

## Enter the Application Container 🔍

To access the application container:
```bash
docker compose exec -it application sh
```

## Debugging Tools 🛠️

## Check Environment Variables
```bash
docker compose exec -it application printenv
```

## Inspect Volume Data
```bash
ls -la /var/lib/docker/volumes/friendica_data/_data/
```

## Autoinstall 🌟

Run the following command to autoinstall Friendica:
```bash
docker compose exec --user www-data -it application bin/console autoinstall
```

## Reinitialization 🔄

## Docker Only:
```bash
docker-compose up -d --force-recreate
```

## Full Reinitialization:
```bash
docker-compose up -d --force-recreate && sleep 2; docker compose exec --user www-data -it application bin/console autoinstall;
```

## Configuration Information ℹ️

## General Configuration:
```bash
cat /var/lib/docker/volumes/friendica_data/_data/config/local.config.php
```

## Email Configuration:
```bash
docker compose exec -it application cat /etc/msmtprc
```

## Email Debugging ✉️

To send a test email:
```bash
docker compose exec -it application msmtp --account=system_email -t test@test.de
```
@ -2,7 +2,7 @@

This role manages the setup, reset, and maintenance of a Friendica instance running with Docker.

## 📌 Overview

Friendica is a decentralized social networking platform. This role helps manage Friendica in a containerized environment with Docker and provides tools for debugging, resetting, and maintaining the installation.
@ -13,7 +13,100 @@ Ensure you have the following:

- A central MariaDB instance running
- Necessary permissions to manage Docker and database configurations

## Usage 📚

### Full Reset 🚫➡️✅

The following environment variables need to be defined for successful operation:

- `DB_ROOT_PASSWORD`: The root password for the MariaDB instance

To completely reset Friendica, including its database and volumes, run:
```bash
docker exec -i central-mariadb mariadb -u root -p"${DB_ROOT_PASSWORD}" -e "DROP DATABASE IF EXISTS friendica; CREATE DATABASE friendica;"
docker compose down
rm -rv /mnt/hdd/data/docker/volumes/friendica_data
docker volume rm friendica_data
```

### Reset Database 🗄️

#### Manual Method:
1. Connect to the MariaDB instance:
```bash
docker exec -it central-mariadb mariadb -u root -p
```
2. Run the following commands:
```sql
DROP DATABASE friendica;
CREATE DATABASE friendica;
exit;
```

#### Automatic Method:
```bash
DB_ROOT_PASSWORD="your_root_password"
docker exec -i central-mariadb mariadb -u root -p"${DB_ROOT_PASSWORD}" -e "DROP DATABASE IF EXISTS friendica; CREATE DATABASE friendica;"
```

### Enter the Application Container 🔍

To access the application container:
```bash
docker compose exec -it application sh
```

### Debugging Tools 🛠️

#### Check Environment Variables
```bash
docker compose exec -it application printenv
```

#### Inspect Volume Data
```bash
ls -la /var/lib/docker/volumes/friendica_data/_data/
```

### Autoinstall 🌟

Run the following command to autoinstall Friendica:
```bash
docker compose exec --user www-data -it application bin/console autoinstall
```

### Reinitialization 🔄

#### Docker Only:
```bash
docker-compose up -d --force-recreate
```

#### Full Reinitialization:
```bash
docker-compose up -d --force-recreate && sleep 2; docker compose exec --user www-data -it application bin/console autoinstall;
```

### Configuration Information ℹ️

#### General Configuration:
```bash
cat /var/lib/docker/volumes/friendica_data/_data/config/local.config.php
```

#### Email Configuration:
```bash
docker compose exec -it application cat /etc/msmtprc
```

### Email Debugging ✉️

To send a test email:
```bash
docker compose exec -it application msmtp --account=system_email -t test@test.de
```

## Additional Resources 📖

- [Friendica Docker Hub](https://hub.docker.com/_/friendica)
- [Friendica Installation Docs](https://wiki.friendi.ca/docs/install)
@ -1,7 +0,0 @@

# Administration

## cleanup

```bash
docker-compose down && docker volume rm funkwhale_data
```
@ -2,5 +2,11 @@

This role doesn't work yet and still needs to be implemented.

## cleanup

```bash
docker-compose down && docker volume rm funkwhale_data
```

## further information
- https://docs.funkwhale.audio/installation/docker.html
@ -1,29 +0,0 @@

# Administration

## update
```bash
cd {{docker_compose.directories.instance}}
docker-compose down
docker-compose pull
docker-compose up -d
```
Keep in mind to monitor the update process and not to interrupt it until the migration is done.

## set variables
```bash
COMPOSE_HTTP_TIMEOUT=600
DOCKER_CLIENT_TIMEOUT=600
```

## recreate
```bash
cd {{docker_compose.directories.instance}} && docker-compose -p gitea up -d --force-recreate
```

## database access
To access the database execute
```bash
docker-compose exec -it database /bin/mysql -u gitea -p
```

## bash in application
docker-compose exec -it application /bin/sh
@ -1,5 +1,33 @@

# role docker-gitea

## update
```bash
cd {{docker_compose.directories.instance}}
docker-compose down
docker-compose pull
docker-compose up -d
```
Keep in mind to monitor the update process and not to interrupt it until the migration is done.

## set variables
```bash
COMPOSE_HTTP_TIMEOUT=600
DOCKER_CLIENT_TIMEOUT=600
```

## recreate
```bash
cd {{docker_compose.directories.instance}} && docker-compose -p gitea up -d --force-recreate
```

## database access
To access the database execute
```bash
docker-compose exec -it database /bin/mysql -u gitea -p
```

## bash in application
docker-compose exec -it application /bin/sh

## More Information
- [Gitea LDAP integration](https://docs.gitea.com/usage/authentication)
- [Gitea Alternatives](https://chatgpt.com/share/67a5f599-c9b0-800f-87fe-49a3b61263e6)
@ -1,6 +1,6 @@

# Docker-GitLab Ansible Role

## 📌 Overview
This Ansible role is designed for setting up and managing a GitLab server running in a Docker container. It automates the process of installing GitLab, configuring its environment, and managing dependencies such as a PostgreSQL database and an Nginx reverse proxy.

## Features

@ -33,7 +33,7 @@ Include this role in your Ansible playbooks and specify the necessary variables.
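As a rough illustration of how such a role could be included in a playbook; the role name and variables below are assumptions for illustration, not taken from this repository:

```yaml
# playbook.yml (sketch)
- hosts: gitlab_server
  become: true
  roles:
    - role: docker-gitlab               # hypothetical role name
      vars:
        gitlab_domain: git.example.com  # hypothetical variable
```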
For a detailed walkthrough and explanation of this role, refer to the conversation at [ChatGPT Session Transcript](https://chat.openai.com/share/1b0147bf-d4de-4790-b8ed-c332aa4e3ce3).

## 📚 Other Resources
- https://ralph.blog.imixs.com/2019/06/09/running-gitlab-on-docker/

## Performance Optimization
@ -1,5 +0,0 @@

## delete all data
```bash
docker stop joomla_application_1; docker rm -f joomla_application_1; docker volume rm -f joomla-data;
docker stop joomla_database_1; docker rm -f joomla_database_1; docker volume rm -f joomla-database;
```
@ -1 +1,6 @@

# role docker-joomla

## delete all data
```bash
docker stop joomla_application_1; docker rm -f joomla_application_1; docker volume rm -f joomla-data;
docker stop joomla_database_1; docker rm -f joomla_database_1; docker volume rm -f joomla-database;
```
@ -13,7 +13,7 @@ The role integrates Keycloak with PostgreSQL as a database and supports operatio

- Support for running behind a reverse proxy (e.g., NGINX), see the sketch below.
- Automatic creation and management of Docker Compose files.
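For orientation only: when the Quarkus-based Keycloak runs behind a reverse proxy, the container environment typically carries proxy- and hostname-related settings along these lines; the role's actual variable handling is not shown here, so treat this as a sketch to verify against the role's templates:

```yaml
# docker-compose.yml excerpt (sketch, values are placeholders)
services:
  keycloak:
    environment:
      KC_PROXY: edge                  # TLS is terminated at the reverse proxy
      KC_HOSTNAME: auth.example.com   # public hostname served by the proxy
```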
## More Information 📚

For more details about Keycloak, check out:
- [Official Keycloak Documentation](https://www.keycloak.org/)
|
|||||||
# Administration
|
|
||||||
|
|
||||||
## Show Configuration
|
|
||||||
```bash
|
|
||||||
docker exec -it openldap bash -c "ldapsearch -LLL -Y EXTERNAL -H ldapi:/// -b 'cn=config'"
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
|
||||||
docker exec -it openldap bash -c "ldapsearch -LLL -Y EXTERNAL -H ldapi:/// -b 'cn=config' -s base '(objectClass=*)'"
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
|
||||||
docker exec -it openldap bash -c "ldapsearch -LLL -Y EXTERNAL -H ldapi:/// -b 'cn=config' -s base '(objectClass=olcModuleList)'"
|
|
||||||
```
|
|
||||||
|
|
||||||
## Show all Entries
|
|
||||||
```bash
|
|
||||||
docker exec --env LDAP_ADMIN_PASSWORD="$LDAP_ADMIN_PASSWORD" LDAP_DN_BASE="$LDAP_DN_BASE" -it openldap bash -c "ldapsearch -LLL -o ldif-wrap=no -x -D \"cn=administrator,\$LDAP_DN_BASE\" -w \"\$LDAP_ADMIN_PASSWORD\" -b \"\$LDAP_DN_BASE\"";
|
|
||||||
```
|
|
||||||
|
|
||||||
### Delete Groups and Subgroup
|
|
||||||
To delete the group inclusive all subgroups use:
|
|
||||||
```bash
|
|
||||||
docker exec --env LDAP_ADMIN_PASSWORD="$LDAP_ADMIN_PASSWORD" -it openldap bash -c "ldapsearch -LLL -o ldif-wrap=no -x -D \"cn=administrator,\$LDAP_DN_BASE\" -w \"\$LDAP_ADMIN_PASSWORD\" -b \"ou=applications,ou=groups,\$LDAP_DN_BASE\" dn | sed -n 's/^dn: //p' | tac | while read -r dn; do echo \"Deleting \$dn\"; ldapdelete -x -D \"cn=administrator,\$LDAP_DN_BASE\" -w \"\$LDAP_ADMIN_PASSWORD\" \"\$dn\"; done"
|
|
||||||
|
|
||||||
```
|
|
@ -1,29 +0,0 @@

# Installation

## MemberOf
```bash
# Activate
ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: /opt/bitnami/openldap/lib/openldap/memberof.so
EOF

# Verify
ldapsearch -Q -Y EXTERNAL -H ldapi:/// -b "cn=module{0},cn=config" olcModuleLoad

ldapadd -Y EXTERNAL -H ldapi:/// <<EOF
dn: olcOverlay=memberof,olcDatabase={2}mdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcMemberOf
olcOverlay: memberof
olcMemberOfRefInt: TRUE
olcMemberOfDangling: ignore
olcMemberOfGroupOC: groupOfNames
olcMemberOfMemberAD: member
olcMemberOfMemberOfAD: memberOf
EOF
```
@ -16,6 +16,64 @@ This Ansible role provides a streamlined implementation of an LDAP server with T

- **Healthcheck Support**:
  - Ensures that the LDAP service is healthy and accessible using `ldapsearch` (see the sketch below).
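A minimal sketch of what an `ldapsearch`-based healthcheck can look like in a Docker Compose service definition; port, base DN, and timing values are assumptions rather than the role's actual settings:

```yaml
# docker-compose.yml excerpt (sketch)
healthcheck:
  test: ["CMD", "ldapsearch", "-x", "-H", "ldap://localhost:389", "-b", "", "-s", "base"]
  interval: 30s
  timeout: 10s
  retries: 5
```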
---

## Maintenance

### Show Config
```bash
docker exec -it openldap bash -c "ldapsearch -LLL -Y EXTERNAL -H ldapi:/// -b 'cn=config'"
```

```bash
docker exec -it openldap bash -c "ldapsearch -LLL -Y EXTERNAL -H ldapi:/// -b 'cn=config' -s base '(objectClass=*)'"
```

```bash
docker exec -it openldap bash -c "ldapsearch -LLL -Y EXTERNAL -H ldapi:/// -b 'cn=config' -s base '(objectClass=olcModuleList)'"
```

## Install

### MemberOf
```bash
# Activate
ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: /opt/bitnami/openldap/lib/openldap/memberof.so
EOF

# Verify
ldapsearch -Q -Y EXTERNAL -H ldapi:/// -b "cn=module{0},cn=config" olcModuleLoad

ldapadd -Y EXTERNAL -H ldapi:/// <<EOF
dn: olcOverlay=memberof,olcDatabase={2}mdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcMemberOf
olcOverlay: memberof
olcMemberOfRefInt: TRUE
olcMemberOfDangling: ignore
olcMemberOfGroupOC: groupOfNames
olcMemberOfMemberAD: member
olcMemberOfMemberOfAD: memberOf
EOF
```

### Show all Entries
```bash
docker exec --env LDAP_ADMIN_PASSWORD="$LDAP_ADMIN_PASSWORD" --env LDAP_DN_BASE="$LDAP_DN_BASE" -it openldap bash -c "ldapsearch -LLL -o ldif-wrap=no -x -D \"cn=administrator,\$LDAP_DN_BASE\" -w \"\$LDAP_ADMIN_PASSWORD\" -b \"\$LDAP_DN_BASE\"";
```

### Delete Groups and Subgroup
To delete a group including all of its subgroups, use:
```bash
docker exec --env LDAP_ADMIN_PASSWORD="$LDAP_ADMIN_PASSWORD" --env LDAP_DN_BASE="$LDAP_DN_BASE" -it openldap bash -c "ldapsearch -LLL -o ldif-wrap=no -x -D \"cn=administrator,\$LDAP_DN_BASE\" -w \"\$LDAP_ADMIN_PASSWORD\" -b \"ou=applications,ou=groups,\$LDAP_DN_BASE\" dn | sed -n 's/^dn: //p' | tac | while read -r dn; do echo \"Deleting \$dn\"; ldapdelete -x -D \"cn=administrator,\$LDAP_DN_BASE\" -w \"\$LDAP_ADMIN_PASSWORD\" \"\$dn\"; done"
```

---

## 🛠️ **Technical Details**
@ -1,14 +0,0 @@

# Installation and Configuration

## Initial Database Setup
After the first setup, run the following command to initialize the Listmonk database:
```bash
docker compose run --rm application ./listmonk --install
```

## Start Services

Use the following command to start Listmonk services:
```bash
docker-compose -p listmonk up -d --force-recreate
```
@ -6,12 +6,42 @@ This role deploys the Listmonk application using Docker. Listmonk is a high perf

- Docker and Docker Compose should be installed on your system.
- Make sure that the required ports are available and not used by other services.

## Installation and Configuration

1. **Clone the Repository**:
   - Ensure you have the latest version of this playbook from the repository.

2. **Configure Variables**:
   - Set your desired configurations in `vars/main.yml`. This includes the path to your Docker Compose files and any other relevant variables.

3. **Run the Playbook**:
   - Execute the Ansible playbook to set up Listmonk.

4. **Initial Database Setup**:
   - After the first setup, run the following command to initialize the Listmonk database:
   ```bash
   docker compose run --rm application ./listmonk --install
   ```

5. **Configure Reverse Proxy** (Optional):
   - If you are using a reverse proxy, configure it as per your domain settings in the `nginx-docker-reverse-proxy` role.

6. **Start Services**:
   - Use the following command to start Listmonk services:
   ```bash
   docker-compose -p listmonk up -d --force-recreate
   ```

## Upgrade
```bash
docker compose run application ./listmonk --upgrade
```

## Configuration Files

- **docker-compose.yml**: Defines the Docker setup for Listmonk and its database.
- **config.toml**: Contains the application settings including the database connection, admin credentials, and server settings.

## 📚 Other Resources
- For detailed installation instructions and configuration options, visit the [Listmonk Installation Documentation](https://listmonk.app/docs/installation/).
- You can also find more information on the [Listmonk GitHub Repository](https://github.com/knadh/listmonk/).
|
|||||||
# Upgrade
|
|
||||||
|
|
||||||
```bash
|
|
||||||
docker compose run application ./listmonk --upgrade
|
|
||||||
```
|
|
@ -1,60 +0,0 @@

# Administration 🕵️♂️

## Database Access 📂

To access the database, use the following command:

```bash
docker-compose exec -it database mysql -u root -D mailu -p
```

## Container Access 🖥️

To access the front container, use this command:

```bash
docker-compose exec -it front /bin/bash
```

## Restarting Services 🔄

To restart all services, use the following command:

```bash
docker-compose restart
```

## Resending Queued Mails ✉️

To resend queued mails, use this command:

```bash
docker-compose exec -it smtp postqueue -f
```

# Testing 🧪

Use the following tools for testing:

- [SSL-Tools Mailserver Test](https://de.ssl-tools.net/mailservers/)
- [TestEmail.de](http://testemail.de/)

# Updates 🔄

For instructions on updating your Mailu setup, follow the official [Mailu maintenance guide](https://mailu.io/master/maintain.html).

# Queue Management 📬

To manage the Postfix email queue in Mailu, you can use the following commands:

- **Display the email queue**:

  ```bash
  docker compose exec -it smtp postqueue -p
  ```

- **Delete all emails in the queue**:

  ```bash
  docker compose exec -it smtp postsuper -d ALL
  ```
@ -1,62 +0,0 @@

# Installation ⚙️

## Fetchmail Issues 📨

Fetchmail might not work properly with large amounts of data. For more information, refer to this [issue](https://github.com/Mailu/Mailu/issues/1719).

## Deactivating Fetchmail ❌

Before uninstalling Fetchmail, ensure you remove all fetched accounts from the administration panel.

## Fetchmail Security Concerns 🔐

There are known security concerns with Fetchmail as stated in the [German Wikipedia](https://de.wikipedia.org/wiki/Fetchmail). If you require Fetchmail functions in the future, consider creating a Docker container for [Getmail](https://en.wikipedia.org/wiki/Getmail) as it is considered more secure.

## Fetchmail Workaround 🔄

If you need to receive emails from another account, follow these steps:

1. Redirect your emails to your new email account.
2. Export all data from your original account.
3. Import all data to your new account.

## Port Management 🌐

Check for any port conflicts and manually change the conflicting ports if necessary. Use the following command to verify:

```bash
netstat -tulpn
```

## Admin Account Creation 👤

To use Mailu, create the primary administrator user account, `admin@{{hostname}}`, using the command below. Replace `PASSWORD` with your preferred password:

```bash
docker-compose -p mailu exec admin flask mailu admin {{admin}} {{hostname}} PASSWORD
```

## CLI User Management 🛠️

For managing users, follow the instructions in the official [Mailu CLI documentation](https://mailu.io/master/cli.html).

## Starting the Server ▶️

To start the server, use the following command:

```bash
docker-compose -p mailu up -d
```

## OIDC Support 🔐

This role now supports OpenID Connect (OIDC) authentication with [Mailu-OIDC](https://github.com/heviat/Mailu-OIDC)! 🎉

To enable OIDC authentication, simply set the following variable:

```yaml
oidc:
  enabled: true
```

For more details, check out the [Mailu-OIDC repository](https://github.com/heviat/Mailu-OIDC/tree/2024.06).
@ -2,6 +2,171 @@

This guide provides instructions for setting up, operating, and maintaining the [Mailu](https://mailu.io/) server Docker role.

## Table of Contents 📖

- [Setup](#setup)
  - [Fetchmail Issues](#fetchmail-issues)
  - [Data Deletion](#data-deletion)
  - [Port Management](#port-management)
  - [Admin Account Creation](#admin-account-creation)
  - [CLI User Management](#cli-user-management)
  - [Starting the Server](#starting-the-server)
- [Debugging](#debugging)
- [Testing](#testing)
- [Updates](#updates)
- [Queue Management](#queue-management)
- [Spam Issues](#spam-issues)
- [OIDC Support](#oidc-support)
- [To-Do](#to-do)
- [References](#references)

## Setup ⚙️

### Fetchmail Issues 📨

Fetchmail might not work properly with large amounts of data. For more information, refer to this [issue](https://github.com/Mailu/Mailu/issues/1719).

#### Deactivating Fetchmail ❌

Before uninstalling Fetchmail, ensure you remove all fetched accounts from the administration panel.

#### Fetchmail Security Concerns 🔐

There are known security concerns with Fetchmail as stated in the [German Wikipedia](https://de.wikipedia.org/wiki/Fetchmail). If you require Fetchmail functions in the future, consider creating a Docker container for [Getmail](https://en.wikipedia.org/wiki/Getmail) as it is considered more secure.

#### Fetchmail Workaround 🔄

If you need to receive emails from another account, follow these steps:

1. Redirect your emails to your new email account.
2. Export all data from your original account.
3. Import all data to your new account.

### Port Management 🌐

Check for any port conflicts and manually change the conflicting ports if necessary. Use the following command to verify:

```bash
netstat -tulpn
```

### Admin Account Creation 👤

To use Mailu, create the primary administrator user account, `admin@{{hostname}}`, using the command below. Replace `PASSWORD` with your preferred password:

```bash
docker-compose -p mailu exec admin flask mailu admin {{admin}} {{hostname}} PASSWORD
```

### CLI User Management 🛠️

For managing users, follow the instructions in the official [Mailu CLI documentation](https://mailu.io/master/cli.html).

### Starting the Server ▶️

To start the server, use the following command:

```bash
docker-compose -p mailu up -d
```

## Debugging 🕵️♂️

### Database Access 📂

To access the database, use the following command:

```bash
docker-compose exec -it database mysql -u root -D mailu -p
```

### Container Access 🖥️

To access the front container, use this command:

```bash
docker-compose exec -it front /bin/bash
```

### Restarting Services 🔄

To restart all services, use the following command:

```bash
docker-compose restart
```

### Resending Queued Mails ✉️

To resend queued mails, use this command:

```bash
docker-compose exec -it smtp postqueue -f
```

## Testing 🧪

Use the following tools for testing:

- [SSL-Tools Mailserver Test](https://de.ssl-tools.net/mailservers/)
- [TestEmail.de](http://testemail.de/)

## Updates 🔄

For instructions on updating your Mailu setup, follow the official [Mailu maintenance guide](https://mailu.io/master/maintain.html).
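The linked guide is authoritative; as a rough orientation, a compose-based update of this kind usually boils down to pulling newer images and recreating the containers:

```bash
docker-compose -p mailu pull
docker-compose -p mailu up -d
```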
## Queue Management 📬

To manage the Postfix email queue in Mailu, you can use the following commands:

- **Display the email queue**:

  ```bash
  docker compose exec -it smtp postqueue -p
  ```

- **Delete all emails in the queue**:

  ```bash
  docker compose exec -it smtp postsuper -d ALL
  ```

## Spam Issues 🚨

### Inspect 🔎

Use the following tools to monitor your domain and email deliverability:

- [Google Postmaster](https://postmaster.google.com/) - Analyzes deliverability and spam issues for Gmail.
- [Yahoo Postmaster](https://postmaster.yahooinc.com) - Provides insights and delivery reports for Yahoo.
- [mxtoolbox.com](https://mxtoolbox.com)

### Blacklist Check 🚫

If your domain is blacklisted, you can check the status with these services and take steps to remove your domain if necessary:

- [Spamhaus](https://check.spamhaus.org/)
- [Barracuda](https://www.barracudacentral.org/lookups)

### Cloudmark Reset Request 🔄

If your IP or domain is flagged by Cloudmark, you can submit a **reset request**:

- [Cloudmark Reset](https://csi.cloudmark.com/en/reset/)

## OIDC Support 🔐

This role now supports OpenID Connect (OIDC) authentication with [Mailu-OIDC](https://github.com/heviat/Mailu-OIDC)! 🎉

To enable OIDC authentication, simply set the following variable:

```yaml
oidc:
  enabled: true
```

For more details, check out the [Mailu-OIDC repository](https://github.com/heviat/Mailu-OIDC/tree/2024.06).

## References 🔗
- [Mailu compose setup guide](https://mailu.io/1.7/compose/setup.html)
- [SysPass issue #1299](https://github.com/nuxsmin/sysPass/issues/1299)
@ -1,22 +0,0 @@

# Spam Issues 🚨

## Inspect 🔎

Use the following tools to monitor your domain and email deliverability:

- [Google Postmaster](https://postmaster.google.com/) - Analyzes deliverability and spam issues for Gmail.
- [Yahoo Postmaster](https://postmaster.yahooinc.com) - Provides insights and delivery reports for Yahoo.
- [mxtoolbox.com](https://mxtoolbox.com)

## Blacklist Check 🚫

If your domain is blacklisted, you can check the status with these services and take steps to remove your domain if necessary:

- [Spamhaus](https://check.spamhaus.org/)
- [Barracuda](https://www.barracudacentral.org/lookups)

## Cloudmark Reset Request 🔄

If your IP or domain is flagged by Cloudmark, you can submit a **reset request**:

- [Cloudmark Reset](https://csi.cloudmark.com/en/reset/)
@ -1,6 +0,0 @@

# Administration

## Execute SQL commands
```bash
docker exec -it central-mariadb mariadb -u root -p
```
@ -1,6 +1,6 @@

# MariaDB Docker Ansible Role

## 📌 Overview
This Ansible role facilitates the deployment of a MariaDB server using Docker. It is designed to ensure ease of installation and configuration, with the flexibility to adapt to different environments.

## Features

@ -23,5 +23,10 @@ Configure the role by setting the required variables. These can be set in the pl

- `database_username`: The username for the database user.
- `database_password`: The password for the database user.
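For illustration, these variables would typically be supplied via the playbook or an inventory/vars file roughly as follows; only the two variable names come from the list above, everything else is an assumption:

```yaml
# group_vars/dbservers.yml (sketch)
database_username: app_user                           # example value
database_password: "{{ vault_database_password }}"    # ideally pulled from Ansible Vault
```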
## Execute SQL commands
```bash
docker exec -it central-mariadb mariadb -u root -p
```

## Contributing
Contributions to this project are welcome. Please submit issues and pull requests with your suggestions.
|
|||||||
# Administration
|
|
||||||
|
|
||||||
## 🗑️ Cleanup (Remove Instance & Volumes)
|
|
||||||
```bash
|
|
||||||
cd {{path_docker_compose_instances}}mastodon/
|
|
||||||
docker-compose down
|
|
||||||
docker volume rm mastodon_data mastodon_database mastodon_redis
|
|
||||||
cd {{path_docker_compose_instances}} &&
|
|
||||||
rm -vR {{path_docker_compose_instances}}mastodon
|
|
||||||
```
|
|
||||||
|
|
||||||
## 🔍 Access Mastodon Terminal
|
|
||||||
```bash
|
|
||||||
docker-compose exec -it web /bin/bash
|
|
||||||
```
|
|
||||||
|
|
||||||
## 🛠️ Set File Permissions
|
|
||||||
After setting up Mastodon, apply the correct file permissions:
|
|
||||||
```bash
|
|
||||||
docker-compose exec -it -u root web chown -R 991:991 public
|
|
||||||
```
|
|
||||||
|
|
||||||
# 📦 Database Management
|
|
||||||
|
|
||||||
## 🏗️ Running Database Migrations
|
|
||||||
Ensure all required database structures are up to date:
|
|
||||||
```bash
|
|
||||||
docker compose exec -it web bash -c "RAILS_ENV=production bin/rails db:migrate"
|
|
||||||
```
|
|
||||||
|
|
||||||
# 🚀 Performance Optimization
|
|
||||||
|
|
||||||
## 🗑️ Delete Cache & Recompile Assets
|
|
||||||
```bash
|
|
||||||
docker-compose exec web bundle exec rails assets:precompile
|
|
||||||
docker-compose restart
|
|
||||||
```
|
|
||||||
|
|
||||||
This ensures your Mastodon instance is loading the latest assets after updates.
|
|
@ -1,22 +0,0 @@

# ⚙️ Configuration & Setup

## 🔧 Create Credentials
Run the following command to generate a new configuration setup:
```bash
docker pull ghcr.io/mastodon/mastodon:latest
# Secret Generation
docker run --rm ghcr.io/mastodon/mastodon:latest bundle exec rails secret
docker run --rm ghcr.io/mastodon/mastodon:latest bundle exec rails secret
# Vapid Key Generation
docker run --rm ghcr.io/mastodon/mastodon:latest bundle exec rails mastodon:webpush:generate_vapid_key
# ACTIVE_RECORD_ENCRYPTION Generation
docker run --rm ghcr.io/mastodon/mastodon:latest bin/rails db:encryption:init
```

## 🔄 Setup with an Existing Configuration
```bash
docker-compose run --rm web bundle exec rails db:migrate
```

## 🔐 OIDC (OpenID Connect) Authentication Support
This Mastodon role now **fully supports OpenID Connect (OIDC)**, allowing seamless authentication via identity providers like **Keycloak, Auth0, Google, or other OIDC-compliant services**.
@ -3,11 +3,70 @@
|
|||||||
## 📌 Overview
|
## 📌 Overview
|
||||||
This project provides a **Docker-based setup for Mastodon**, including full **OIDC (OpenID Connect) authentication support**. It is maintained by **[Kevin Veen-Birkenbach](https://www.veen.world)**.
|
This project provides a **Docker-based setup for Mastodon**, including full **OIDC (OpenID Connect) authentication support**. It is maintained by **[Kevin Veen-Birkenbach](https://www.veen.world)**.
|
||||||
|
|
||||||
## Credits 📝
|
|
||||||
|
|
||||||
This README and some parts of the code were created with the assistance of ChatGPT. You can follow the discussion and evolution of this project in [this conversation](https://chatgpt.com/c/67a4e19b-3884-800f-9d45-621dda2a6572).
|
This README and some parts of the code were created with the assistance of ChatGPT. You can follow the discussion and evolution of this project in [this conversation](https://chatgpt.com/c/67a4e19b-3884-800f-9d45-621dda2a6572).
|
||||||
|
|
||||||
## 📚 Other Resources
|
## ⚙️ Configuration & Setup
|
||||||
|
|
||||||
|
### 🔧 Create Credentials
|
||||||
|
Run the following command to generate a new configuration setup:
|
||||||
|
```bash
|
||||||
|
docker pull ghcr.io/mastodon/mastodon:latest
|
||||||
|
# Secret Generation
|
||||||
|
docker run --rm ghcr.io/mastodon/mastodon:latest bundle exec rails secret
|
||||||
|
docker run --rm ghcr.io/mastodon/mastodon:latest bundle exec rails secret
|
||||||
|
# Vapid Key Generation
|
||||||
|
docker run --rm ghcr.io/mastodon/mastodon:latest bundle exec rails mastodon:webpush:generate_vapid_key
|
||||||
|
# ACTIVE_RECORD_ENCRYPTION Generation
|
||||||
|
docker run --rm ghcr.io/mastodon/mastodon:latest bin/rails db:encryption:init
|
||||||
|
```
|
||||||
|
|
||||||
|
### 🔄 Setup with an Existing Configuration
|
||||||
|
```bash
|
||||||
|
docker-compose run --rm web bundle exec rails db:migrate
|
||||||
|
```
|
||||||
|
|
||||||
|
### 🗑️ Cleanup (Remove Instance & Volumes)
|
||||||
|
```bash
|
||||||
|
cd {{path_docker_compose_instances}}mastodon/
|
||||||
|
docker-compose down
|
||||||
|
docker volume rm mastodon_data mastodon_database mastodon_redis
|
||||||
|
cd {{path_docker_compose_instances}} &&
|
||||||
|
rm -vR {{path_docker_compose_instances}}mastodon
|
||||||
|
```
|
||||||
|
|
||||||
|
### 🔍 Access Mastodon Terminal
|
||||||
|
```bash
|
||||||
|
docker-compose exec -it web /bin/bash
|
||||||
|
```
|
||||||
|
|
||||||
|
### 🛠️ Set File Permissions
|
||||||
|
After setting up Mastodon, apply the correct file permissions:
|
||||||
|
```bash
|
||||||
|
docker-compose exec -it -u root web chown -R 991:991 public
|
||||||
|
```
|
||||||
|
|
||||||
|
## 📦 Database Management
|
||||||
|
|
||||||
|
### 🏗️ Running Database Migrations
|
||||||
|
Ensure all required database structures are up to date:
|
||||||
|
```bash
|
||||||
|
docker compose exec -it web bash -c "RAILS_ENV=production bin/rails db:migrate"
|
||||||
|
```
|
||||||
|
|
||||||
|
## 🚀 Performance Optimization
|
||||||
|
|
||||||
|
### 🗑️ Delete Cache & Recompile Assets
|
||||||
|
```bash
|
||||||
|
docker-compose exec web bundle exec rails assets:precompile
|
||||||
|
docker-compose restart
|
||||||
|
```
|
||||||
|
|
||||||
|
This ensures your Mastodon instance loads the latest assets after updates.
|
||||||
|
|
||||||
|
## 🔐 OIDC (OpenID Connect) Authentication Support
|
||||||
|
This Mastodon role now **fully supports OpenID Connect (OIDC)**, allowing seamless authentication via identity providers like **Keycloak, Auth0, Google, or other OIDC-compliant services**.
|
||||||
|
|
||||||
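For reference, the OIDC integration is typically driven by Mastodon's upstream OIDC environment variables. The following is a hedged sketch only; the issuer, client ID, secret, and domains are placeholders for your identity provider:

```bash
# Placeholder values for a Keycloak-style OIDC provider
OIDC_ENABLED=true
OIDC_DISPLAY_NAME=Keycloak
OIDC_ISSUER=https://keycloak.example.org/realms/example
OIDC_DISCOVERY=true
OIDC_SCOPE=openid,profile,email
OIDC_UID_FIELD=preferred_username
OIDC_CLIENT_ID=mastodon
OIDC_CLIENT_SECRET=<client-secret>
OIDC_REDIRECT_URI=https://mastodon.example.org/auth/auth/openid_connect/callback
OIDC_SECURITY_ASSUME_EMAIL_IS_VERIFIED=true
```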
|
## 📚 Further Reading
|
||||||
- [Mastodon with Docker & Traefik](https://goneuland.de/mastodon-mit-docker-und-traefik-installieren/)
|
- [Mastodon with Docker & Traefik](https://goneuland.de/mastodon-mit-docker-und-traefik-installieren/)
|
||||||
- [Mastodon Configuration Guide](https://gist.github.com/TrillCyborg/84939cd4013ace9960031b803a0590c4)
|
- [Mastodon Configuration Guide](https://gist.github.com/TrillCyborg/84939cd4013ace9960031b803a0590c4)
|
||||||
- [Check Website Availability](https://www.2daygeek.com/linux-command-check-website-is-up-down-alive/)
|
- [Check Website Availability](https://www.2daygeek.com/linux-command-check-website-is-up-down-alive/)
|
||||||
|
@ -2,5 +2,8 @@
|
|||||||
|
|
||||||
This Ansible role deploys a [Matomo](https://matomo.org/) analytics platform instance using Docker.
|
This Ansible role deploys a [Matomo](https://matomo.org/) analytics platform instance using Docker.
|
||||||
|
|
||||||
## Credits 📝
|
## AI Generated
|
||||||
This script was created with the help of ChatGPT. The full conversation is available [here](https://chat.openai.com/share/49e0c7e4-a2af-4a04-adad-7a735bdd85c4).
|
This script was created with the help of ChatGPT. The full conversation is available [here](https://chat.openai.com/share/49e0c7e4-a2af-4a04-adad-7a735bdd85c4).
|
||||||
|
|
||||||
|
## Author
|
||||||
|
- [Kevin Veen-Birkenbach](https://www.veen.world/)
|
@ -1,9 +0,0 @@
|
|||||||
# Administration
|
|
||||||
|
|
||||||
## DANGER: Manual deactivation and deletion
|
|
||||||
Be careful what you do. You can execute the following code:
|
|
||||||
```
|
|
||||||
systemctl list-units --type=service | grep 'matrix' | awk '{print $1}' | xargs -I {} systemctl disable {} &&
|
|
||||||
systemctl list-units --type=service | grep 'matrix' | awk '{print $1}' | xargs -I {} systemctl stop {} &&
|
|
||||||
rm -rv /matrix/
|
|
||||||
```
|
|
@ -1,6 +1,6 @@
|
|||||||
# Docker Setup Matrix via Ansible
|
# Docker Setup Matrix via Ansible
|
||||||
|
|
||||||
## 📌 Overview
|
## Overview
|
||||||
|
|
||||||
This document serves as the README for the `docker-ansible-matrix` role, a part of the `CyMaIS` project. This role automates the deployment of a Matrix server using Ansible.
|
This document serves as the README for the `docker-ansible-matrix` role, a part of the `CyMaIS` project. This role automates the deployment of a Matrix server using Ansible.
|
||||||
|
|
||||||
@ -8,5 +8,13 @@ Matrix is an open-source project that provides a protocol for secure, decentrali
|
|||||||
|
|
||||||
This software uses https://github.com/spantaleev/matrix-docker-ansible-deploy as a base.
|
This software uses https://github.com/spantaleev/matrix-docker-ansible-deploy as a base.
|
||||||
|
|
||||||
|
## DANGER: Manual deactivation and deletion
|
||||||
|
Be careful what you do. You can execute the following code:
|
||||||
|
```
|
||||||
|
systemctl list-units --type=service | grep 'matrix' | awk '{print $1}' | xargs -I {} systemctl disable {} &&
|
||||||
|
systemctl list-units --type=service | grep 'matrix' | awk '{print $1}' | xargs -I {} systemctl stop {} &&
|
||||||
|
rm -rv /matrix/
|
||||||
|
```
|
||||||
|
|
||||||
## Alternative Matrix Setup Role
|
## Alternative Matrix Setup Role
|
||||||
An alternative role to deploy Matrix can be found [here](../docker-matrix-compose/)
|
An alternative role to deploy Matrix can be found [here](../docker-matrix-compose/)
|
@ -1,9 +0,0 @@
|
|||||||
# Administration
|
|
||||||
|
|
||||||
## Cleanup
|
|
||||||
```
|
|
||||||
# Cleanup Database
|
|
||||||
for db in matrix mautrix_whatsapp_bridge mautrix_telegram_bridge mautrix_signal_bridge mautrix_slack_bridge; do python reset-database-in-central-postgres.py $db; done
|
|
||||||
# Cleanup Docker and Volumes
|
|
||||||
docker compose down -v
|
|
||||||
```
|
|
@ -1,21 +0,0 @@
|
|||||||
# Installation
|
|
||||||
|
|
||||||
## Bridges
|
|
||||||
|
|
||||||
### Mautrix
|
|
||||||
Contact one of the following bots for more information:
|
|
||||||
|
|
||||||
- @signalbot:yourdomain.tld
|
|
||||||
- @telegrambot:yourdomain.tld
|
|
||||||
- @whatsappbot:yourdomain.tld
|
|
||||||
- @slackbot:yourdomain.tld
|
|
||||||
|
|
||||||
#### Slack
|
|
||||||
For login with a token, check out [this guide](https://docs.mau.fi/bridges/go/slack/authentication.html).
|
|
||||||
|
|
||||||
### ChatGPT
|
|
||||||
- Create API Token: https://platform.openai.com/api-keys
|
|
||||||
- Set ``matrix_chatgpt_bridge_access_token``
|
|
||||||
|
|
||||||
## Debug:
|
|
||||||
- https://federationtester.matrix.org/
|
|
@ -1,12 +1,40 @@
|
|||||||
# Docker-Matrix Role README
|
# Docker-Matrix Role README
|
||||||
|
|
||||||
## 📌 Overview
|
## Overview
|
||||||
|
|
||||||
This document serves as the README for the `docker-matrix` role, a part of the `CyMaIS` project. This role automates the deployment of a Matrix server using Docker. This role was developed by [Kevin Veen-Birkenbach](https://www.veen.world/)
|
This document serves as the README for the `docker-matrix` role, a part of the `CyMaIS` project. This role automates the deployment of a Matrix server using Docker. This role was developed by [Kevin Veen-Birkenbach](https://www.veen.world/)
|
||||||
|
|
||||||
Matrix is an open-source project that provides a protocol for secure, decentralized, real-time communication. It offers features like end-to-end encrypted chat, VoIP, and file sharing, catering to both individual and enterprise users. With a focus on interoperability, Matrix can bridge with other communication systems, offering a unified platform for messaging and collaboration.
|
Matrix is an open-source project that provides a protocol for secure, decentralized, real-time communication. It offers features like end-to-end encrypted chat, VoIP, and file sharing, catering to both individual and enterprise users. With a focus on interoperability, Matrix can bridge with other communication systems, offering a unified platform for messaging and collaboration.
|
||||||
|
|
||||||
## Credits 📝
|
## Cleanup
|
||||||
|
```
|
||||||
|
# Cleanup Database
|
||||||
|
for db in matrix mautrix_whatsapp_bridge mautrix_telegram_bridge mautrix_signal_bridge mautrix_slack_bridge; do python reset-database-in-central-postgres.py $db; done
|
||||||
|
# Cleanup Docker and Volumes
|
||||||
|
docker compose down -v
|
||||||
|
```
|
||||||
|
|
||||||
|
## Bridges
|
||||||
|
|
||||||
|
### Mautrix
|
||||||
|
Contact one of the following bots for more information:
|
||||||
|
|
||||||
|
- @signalbot:yourdomain.tld
|
||||||
|
- @telegrambot:yourdomain.tld
|
||||||
|
- @whatsappbot:yourdomain.tld
|
||||||
|
- @slackbot:yourdomain.tld
|
||||||
|
|
||||||
|
#### Slack
|
||||||
|
For login with a token, check out [this guide](https://docs.mau.fi/bridges/go/slack/authentication.html).
|
||||||
|
|
||||||
|
### ChatGPT
|
||||||
|
- Create API Token: https://platform.openai.com/api-keys
|
||||||
|
- Set ``matrix_chatgpt_bridge_access_token``
|
||||||
|
|
||||||
|
## Debug:
|
||||||
|
- https://federationtester.matrix.org/
|
||||||
|
|
||||||
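In addition to the federation tester, a quick manual check is to query the standard Matrix well-known delegation endpoints directly (a hedged example; replace the domain with your own):

```bash
curl https://yourdomain.tld/.well-known/matrix/server
curl https://yourdomain.tld/.well-known/matrix/client
```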
|
## Sources
|
||||||
|
|
||||||
### Guides
|
### Guides
|
||||||
- https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html
|
- https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html
|
||||||
|
@ -1,145 +0,0 @@
|
|||||||
# Installation
|
|
||||||
|
|
||||||
## Generate LocalSettings.php
|
|
||||||
Login to the container:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
docker-compose exec -it application /bin/sh
|
|
||||||
```
|
|
||||||
|
|
||||||
Seed the LocalSettings.php:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
cat > LocalSettings.php << EOF
|
|
||||||
<?php
|
|
||||||
# This file was automatically generated by the MediaWiki 1.35.0
|
|
||||||
# installer. If you make manual changes, please keep track in case you
|
|
||||||
# need to recreate them later.
|
|
||||||
#
|
|
||||||
# See includes/DefaultSettings.php for all configurable settings
|
|
||||||
# and their default values, but don't forget to make changes in _this_
|
|
||||||
# file, not there.
|
|
||||||
#
|
|
||||||
# Further documentation for configuration settings may be found at:
|
|
||||||
# https://www.mediawiki.org/wiki/Manual:Configuration_settings
|
|
||||||
|
|
||||||
# Protect against web entry
|
|
||||||
if ( !defined( 'MEDIAWIKI' ) ) {
|
|
||||||
exit;
|
|
||||||
}
|
|
||||||
|
|
||||||
|
|
||||||
## Uncomment this to disable output compression
|
|
||||||
# \$wgDisableOutputCompression = true;
|
|
||||||
|
|
||||||
\$wgSitename = "test";
|
|
||||||
\$wgMetaNamespace = "Test";
|
|
||||||
|
|
||||||
## The URL base path to the directory containing the wiki;
|
|
||||||
## defaults for all runtime URL paths are based off of this.
|
|
||||||
## For more information on customizing the URLs
|
|
||||||
## (like /w/index.php/Page_title to /wiki/Page_title) please see:
|
|
||||||
## https://www.mediawiki.org/wiki/Manual:Short_URL
|
|
||||||
\$wgScriptPath = "";
|
|
||||||
|
|
||||||
## The protocol and server name to use in fully-qualified URLs
|
|
||||||
\$wgServer = "http://wiki.veen.world";
|
|
||||||
|
|
||||||
## The URL path to static resources (images, scripts, etc.)
|
|
||||||
\$wgResourceBasePath = \$wgScriptPath;
|
|
||||||
|
|
||||||
## The URL paths to the logo. Make sure you change this from the default,
|
|
||||||
## or else you'll overwrite your logo when you upgrade!
|
|
||||||
\$wgLogos = [ '1x' => "\$wgResourceBasePath/resources/assets/wiki.png" ];
|
|
||||||
|
|
||||||
## UPO means: this is also a user preference option
|
|
||||||
|
|
||||||
\$wgEnableEmail = true;
|
|
||||||
\$wgEnableUserEmail = true; # UPO
|
|
||||||
|
|
||||||
\$wgEmergencyContact = "apache@🌻.invalid";
|
|
||||||
\$wgPasswordSender = "apache@🌻.invalid";
|
|
||||||
|
|
||||||
\$wgEnotifUserTalk = false; # UPO
|
|
||||||
\$wgEnotifWatchlist = false; # UPO
|
|
||||||
\$wgEmailAuthentication = true;
|
|
||||||
|
|
||||||
## Database settings
|
|
||||||
\$wgDBtype = "mysql";
|
|
||||||
\$wgDBserver = "database:3306";
|
|
||||||
\$wgDBname = "mediawiki";
|
|
||||||
\$wgDBuser = "mediawiki";
|
|
||||||
\$wgDBpassword = "test";
|
|
||||||
|
|
||||||
# MySQL specific settings
|
|
||||||
\$wgDBprefix = "";
|
|
||||||
|
|
||||||
# MySQL table options to use during installation or update
|
|
||||||
\$wgDBTableOptions = "ENGINE=InnoDB, DEFAULT CHARSET=binary";
|
|
||||||
|
|
||||||
## Shared memory settings
|
|
||||||
\$wgMainCacheType = CACHE_NONE;
|
|
||||||
\$wgMemCachedServers = [];
|
|
||||||
|
|
||||||
## To enable image uploads, make sure the 'images' directory
|
|
||||||
## is writable, then set this to true:
|
|
||||||
\$wgEnableUploads = false;
|
|
||||||
\$wgUseImageMagick = true;
|
|
||||||
\$wgImageMagickConvertCommand = "/usr/bin/convert";
|
|
||||||
|
|
||||||
# InstantCommons allows wiki to use images from https://commons.wikimedia.org
|
|
||||||
\$wgUseInstantCommons = false;
|
|
||||||
|
|
||||||
# Periodically send a pingback to https://www.mediawiki.org/ with basic data
|
|
||||||
# about this MediaWiki instance. The Wikimedia Foundation shares this data
|
|
||||||
# with MediaWiki developers to help guide future development efforts.
|
|
||||||
\$wgPingback = true;
|
|
||||||
|
|
||||||
## If you use ImageMagick (or any other shell command) on a
|
|
||||||
## Linux server, this will need to be set to the name of an
|
|
||||||
## available UTF-8 locale
|
|
||||||
\$wgShellLocale = "C.UTF-8";
|
|
||||||
|
|
||||||
## Set \$wgCacheDirectory to a writable directory on the web server
|
|
||||||
## to make your wiki go slightly faster. The directory should not
|
|
||||||
## be publicly accessible from the web.
|
|
||||||
#\$wgCacheDirectory = "\$IP/cache";
|
|
||||||
|
|
||||||
# Site language code, should be one of the list in ./languages/data/Names.php
|
|
||||||
\$wgLanguageCode = "en";
|
|
||||||
|
|
||||||
\$wgSecretKey = "603fe88c985b05706f19aaf77d2a61459555ff21a4a4d4ef0aa15c8f8ec50f00";
|
|
||||||
|
|
||||||
# Changing this will log out all existing sessions.
|
|
||||||
\$wgAuthenticationTokenVersion = "1";
|
|
||||||
|
|
||||||
# Site upgrade key. Must be set to a string (default provided) to turn on the
|
|
||||||
# web installer while LocalSettings.php is in place
|
|
||||||
\$wgUpgradeKey = "f99263b0f3a7c59a";
|
|
||||||
|
|
||||||
## For attaching licensing metadata to pages, and displaying an
|
|
||||||
## appropriate copyright notice / icon. GNU Free Documentation
|
|
||||||
## License and Creative Commons licenses are supported so far.
|
|
||||||
\$wgRightsPage = ""; # Set to the title of a wiki page that describes your license/copyright
|
|
||||||
\$wgRightsUrl = "";
|
|
||||||
\$wgRightsText = "";
|
|
||||||
\$wgRightsIcon = "";
|
|
||||||
|
|
||||||
# Path to the GNU diff3 utility. Used for conflict resolution.
|
|
||||||
\$wgDiff3 = "/usr/bin/diff3";
|
|
||||||
|
|
||||||
## Default skin: you can change the default skin. Use the internal symbolic
|
|
||||||
## names, ie 'vector', 'monobook':
|
|
||||||
\$wgDefaultSkin = "vector";
|
|
||||||
|
|
||||||
# Enabled skins.
|
|
||||||
# The following skins were automatically enabled:
|
|
||||||
wfLoadSkin( 'MonoBook' );
|
|
||||||
wfLoadSkin( 'Timeless' );
|
|
||||||
wfLoadSkin( 'Vector' );
|
|
||||||
|
|
||||||
|
|
||||||
# End of automatically generated settings.
|
|
||||||
# Add more configuration options below.
|
|
||||||
EOF
|
|
||||||
```
|
|
@ -1,4 +1,147 @@
|
|||||||
# role docker-mediawiki
|
# role docker-mediawiki
|
||||||
|
## Generate LocalSettings.php
|
||||||
|
Login to the container:
|
||||||
|
|
||||||
## 📚 Other Resources
|
```bash
|
||||||
|
docker-compose exec -it application /bin/sh
|
||||||
|
```
|
||||||
|
|
||||||
|
Seed the LocalSettings.php:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
cat > LocalSettings.php << EOF
|
||||||
|
<?php
|
||||||
|
# This file was automatically generated by the MediaWiki 1.35.0
|
||||||
|
# installer. If you make manual changes, please keep track in case you
|
||||||
|
# need to recreate them later.
|
||||||
|
#
|
||||||
|
# See includes/DefaultSettings.php for all configurable settings
|
||||||
|
# and their default values, but don't forget to make changes in _this_
|
||||||
|
# file, not there.
|
||||||
|
#
|
||||||
|
# Further documentation for configuration settings may be found at:
|
||||||
|
# https://www.mediawiki.org/wiki/Manual:Configuration_settings
|
||||||
|
|
||||||
|
# Protect against web entry
|
||||||
|
if ( !defined( 'MEDIAWIKI' ) ) {
|
||||||
|
exit;
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
## Uncomment this to disable output compression
|
||||||
|
# \$wgDisableOutputCompression = true;
|
||||||
|
|
||||||
|
\$wgSitename = "test";
|
||||||
|
\$wgMetaNamespace = "Test";
|
||||||
|
|
||||||
|
## The URL base path to the directory containing the wiki;
|
||||||
|
## defaults for all runtime URL paths are based off of this.
|
||||||
|
## For more information on customizing the URLs
|
||||||
|
## (like /w/index.php/Page_title to /wiki/Page_title) please see:
|
||||||
|
## https://www.mediawiki.org/wiki/Manual:Short_URL
|
||||||
|
\$wgScriptPath = "";
|
||||||
|
|
||||||
|
## The protocol and server name to use in fully-qualified URLs
|
||||||
|
\$wgServer = "http://wiki.veen.world";
|
||||||
|
|
||||||
|
## The URL path to static resources (images, scripts, etc.)
|
||||||
|
\$wgResourceBasePath = \$wgScriptPath;
|
||||||
|
|
||||||
|
## The URL paths to the logo. Make sure you change this from the default,
|
||||||
|
## or else you'll overwrite your logo when you upgrade!
|
||||||
|
\$wgLogos = [ '1x' => "\$wgResourceBasePath/resources/assets/wiki.png" ];
|
||||||
|
|
||||||
|
## UPO means: this is also a user preference option
|
||||||
|
|
||||||
|
\$wgEnableEmail = true;
|
||||||
|
\$wgEnableUserEmail = true; # UPO
|
||||||
|
|
||||||
|
\$wgEmergencyContact = "apache@🌻.invalid";
|
||||||
|
\$wgPasswordSender = "apache@🌻.invalid";
|
||||||
|
|
||||||
|
\$wgEnotifUserTalk = false; # UPO
|
||||||
|
\$wgEnotifWatchlist = false; # UPO
|
||||||
|
\$wgEmailAuthentication = true;
|
||||||
|
|
||||||
|
## Database settings
|
||||||
|
\$wgDBtype = "mysql";
|
||||||
|
\$wgDBserver = "database:3306";
|
||||||
|
\$wgDBname = "mediawiki";
|
||||||
|
\$wgDBuser = "mediawiki";
|
||||||
|
\$wgDBpassword = "test";
|
||||||
|
|
||||||
|
# MySQL specific settings
|
||||||
|
\$wgDBprefix = "";
|
||||||
|
|
||||||
|
# MySQL table options to use during installation or update
|
||||||
|
\$wgDBTableOptions = "ENGINE=InnoDB, DEFAULT CHARSET=binary";
|
||||||
|
|
||||||
|
## Shared memory settings
|
||||||
|
\$wgMainCacheType = CACHE_NONE;
|
||||||
|
\$wgMemCachedServers = [];
|
||||||
|
|
||||||
|
## To enable image uploads, make sure the 'images' directory
|
||||||
|
## is writable, then set this to true:
|
||||||
|
\$wgEnableUploads = false;
|
||||||
|
\$wgUseImageMagick = true;
|
||||||
|
\$wgImageMagickConvertCommand = "/usr/bin/convert";
|
||||||
|
|
||||||
|
# InstantCommons allows wiki to use images from https://commons.wikimedia.org
|
||||||
|
\$wgUseInstantCommons = false;
|
||||||
|
|
||||||
|
# Periodically send a pingback to https://www.mediawiki.org/ with basic data
|
||||||
|
# about this MediaWiki instance. The Wikimedia Foundation shares this data
|
||||||
|
# with MediaWiki developers to help guide future development efforts.
|
||||||
|
\$wgPingback = true;
|
||||||
|
|
||||||
|
## If you use ImageMagick (or any other shell command) on a
|
||||||
|
## Linux server, this will need to be set to the name of an
|
||||||
|
## available UTF-8 locale
|
||||||
|
\$wgShellLocale = "C.UTF-8";
|
||||||
|
|
||||||
|
## Set \$wgCacheDirectory to a writable directory on the web server
|
||||||
|
## to make your wiki go slightly faster. The directory should not
|
||||||
|
## be publicly accessible from the web.
|
||||||
|
#\$wgCacheDirectory = "\$IP/cache";
|
||||||
|
|
||||||
|
# Site language code, should be one of the list in ./languages/data/Names.php
|
||||||
|
\$wgLanguageCode = "en";
|
||||||
|
|
||||||
|
\$wgSecretKey = "603fe88c985b05706f19aaf77d2a61459555ff21a4a4d4ef0aa15c8f8ec50f00";
|
||||||
|
|
||||||
|
# Changing this will log out all existing sessions.
|
||||||
|
\$wgAuthenticationTokenVersion = "1";
|
||||||
|
|
||||||
|
# Site upgrade key. Must be set to a string (default provided) to turn on the
|
||||||
|
# web installer while LocalSettings.php is in place
|
||||||
|
\$wgUpgradeKey = "f99263b0f3a7c59a";
|
||||||
|
|
||||||
|
## For attaching licensing metadata to pages, and displaying an
|
||||||
|
## appropriate copyright notice / icon. GNU Free Documentation
|
||||||
|
## License and Creative Commons licenses are supported so far.
|
||||||
|
\$wgRightsPage = ""; # Set to the title of a wiki page that describes your license/copyright
|
||||||
|
\$wgRightsUrl = "";
|
||||||
|
\$wgRightsText = "";
|
||||||
|
\$wgRightsIcon = "";
|
||||||
|
|
||||||
|
# Path to the GNU diff3 utility. Used for conflict resolution.
|
||||||
|
\$wgDiff3 = "/usr/bin/diff3";
|
||||||
|
|
||||||
|
## Default skin: you can change the default skin. Use the internal symbolic
|
||||||
|
## names, ie 'vector', 'monobook':
|
||||||
|
\$wgDefaultSkin = "vector";
|
||||||
|
|
||||||
|
# Enabled skins.
|
||||||
|
# The following skins were automatically enabled:
|
||||||
|
wfLoadSkin( 'MonoBook' );
|
||||||
|
wfLoadSkin( 'Timeless' );
|
||||||
|
wfLoadSkin( 'Vector' );
|
||||||
|
|
||||||
|
|
||||||
|
# End of automatically generated settings.
|
||||||
|
# Add more configuration options below.
|
||||||
|
EOF
|
||||||
|
```
|
||||||
|
|
||||||
|
## Further Information
|
||||||
This role was adapted to resolve some deprecation messages. Please test it before using it in production. [See this conversation](https://chatgpt.com/share/6781487e-45fc-800f-a35e-e93f49448176).
|
This role was adapted to resolve some deprecation messages. Please test it before using it in production. [See this conversation](https://chatgpt.com/share/6781487e-45fc-800f-a35e-e93f49448176).
|
||||||
|
@ -1,4 +1,4 @@
|
|||||||
# role docker-moodle
|
# role docker-moodle
|
||||||
|
|
||||||
## 📚 Other Resources
|
## further information
|
||||||
- https://github.com/bitnami/containers/tree/main/bitnami/moodle
|
- https://github.com/bitnami/containers/tree/main/bitnami/moodle
|
@ -1,55 +0,0 @@
|
|||||||
# Installation
|
|
||||||
|
|
||||||
## Multi Domain Installation
|
|
||||||
If you want to access your MyBB instance over multiple domains, keep the following in mind:
|
|
||||||
- Set Cookie Domain to nothing
|
|
||||||
- Access mybb for installation via mybb.<primary_domain>
|
|
||||||
- Set the Board Url to mybb.<primary_domain>
|
|
||||||
|
|
||||||
## Manual Installation of MyBB Plugins
|
|
||||||
|
|
||||||
This guide describes the process of manually installing MyBB plugins in your Docker-MyBB environment. This can be useful if you want to quickly test plugins or do not wish to execute the Ansible role.
|
|
||||||
|
|
||||||
### Steps for Manual Installation
|
|
||||||
|
|
||||||
|
|
||||||
1. **Prepare Plugin Files:**
|
|
||||||
- Download the desired MyBB plugin zip files.
|
|
||||||
|
|
||||||
2. **Copy plugin to host:**
|
|
||||||
- `scp <plugin> administrator@<server>:/opt/docker/mybb/plugins`
|
|
||||||
|
|
||||||
3. **Unzip Plugin Files on the Host:**
|
|
||||||
- Unzip the plugin zip files in the host's plugin directory:
|
|
||||||
```bash
|
|
||||||
unzip /opt/docker/mybb/plugins/<plugin-file>.zip -d /opt/docker/mybb/plugins/
|
|
||||||
```
|
|
||||||
- Replace `<plugin-file>.zip` with the name of the plugin zip file.
|
|
||||||
- Repeat this step for each plugin.
|
|
||||||
|
|
||||||
4. **Access the Docker Container:**
|
|
||||||
- Open a terminal or SSH session on the server where the Docker container is running.
|
|
||||||
|
|
||||||
5. **Copy Unzipped Plugin Files to the Container:**
|
|
||||||
- Copy the unzipped plugin files from the host directory to the Docker container:
|
|
||||||
```bash
|
|
||||||
docker compose cp /opt/docker/mybb/plugins/<unzipped-plugin-folder> application:/var/www/html/inc/plugins/
|
|
||||||
```
|
|
||||||
- Replace `<unzipped-plugin-folder>` with the name of the unzipped plugin folder.
|
|
||||||
|
|
||||||
6. **Restart the Container:**
|
|
||||||
- Execute the following command to restart the MyBB container:
|
|
||||||
```bash
|
|
||||||
docker-compose -p mybb up -d --force-recreate
|
|
||||||
```
|
|
||||||
- This ensures all changes take effect.
|
|
||||||
|
|
||||||
7. **Activate Plugins in the MyBB Admin Panel:**
|
|
||||||
- Open the MyBB admin panel in your web browser.
|
|
||||||
- Navigate to the plugin settings and activate the newly installed plugins.
|
|
||||||
|
|
||||||
### Important Notes
|
|
||||||
|
|
||||||
- Ensure you use the correct paths and filenames.
|
|
||||||
- Do not forget to regularly back up your MyBB database and files before making changes.
|
|
||||||
- If encountering issues, refer to the MyBB documentation or specific instructions from the plugin author.
|
|
@ -1,7 +1,76 @@
|
|||||||
# Docker MyBB
|
# Role Name: Docker MyBB
|
||||||
|
|
||||||
## Credits 📝
|
## Dependencies
|
||||||
|
- nginx-docker-reverse-proxy
|
||||||
|
|
||||||
|
## Usage
|
||||||
|
|
||||||
|
### Multi Domain Installation
|
||||||
|
If you want to access your MyBB instance over multiple domains, keep the following in mind:
|
||||||
|
- Set Cookie Domain to nothing
|
||||||
|
- Access mybb for installation via mybb.<primary_domain>
|
||||||
|
- Set the Board Url to mybb.<primary_domain>
|
||||||
|
|
||||||
|
### Manual Installation of MyBB Plugins
|
||||||
|
|
||||||
|
This guide describes the process of manually installing MyBB plugins in your Docker-MyBB environment. This can be useful if you want to quickly test plugins or do not wish to execute the Ansible role.
|
||||||
|
|
||||||
|
#### Steps for Manual Installation
|
||||||
|
|
||||||
|
|
||||||
|
1. **Prepare Plugin Files:**
|
||||||
|
- Download the desired MyBB plugin zip files.
|
||||||
|
|
||||||
|
2. **Copy plugin to host:**
|
||||||
|
- `scp <plugin> administrator@<server>:/opt/docker/mybb/plugins`
|
||||||
|
|
||||||
|
3. **Unzip Plugin Files on the Host:**
|
||||||
|
- Unzip the plugin zip files in the host's plugin directory:
|
||||||
|
```bash
|
||||||
|
unzip /opt/docker/mybb/plugins/<plugin-file>.zip -d /opt/docker/mybb/plugins/
|
||||||
|
```
|
||||||
|
- Replace `<plugin-file>.zip` with the name of the plugin zip file.
|
||||||
|
- Repeat this step for each plugin.
|
||||||
|
|
||||||
|
4. **Access the Docker Container:**
|
||||||
|
- Open a terminal or SSH session on the server where the Docker container is running.
|
||||||
|
|
||||||
|
5. **Copy Unzipped Plugin Files to the Container:**
|
||||||
|
- Copy the unzipped plugin files from the host directory to the Docker container:
|
||||||
|
```bash
|
||||||
|
docker compose cp /opt/docker/mybb/plugins/<unzipped-plugin-folder> application:/var/www/html/inc/plugins/
|
||||||
|
```
|
||||||
|
- Replace `<unzipped-plugin-folder>` with the name of the unzipped plugin folder.
|
||||||
|
|
||||||
|
6. **Restart the Container:**
|
||||||
|
- Execute the following command to restart the MyBB container:
|
||||||
|
```bash
|
||||||
|
docker-compose -p mybb up -d --force-recreate
|
||||||
|
```
|
||||||
|
- This ensures all changes take effect.
|
||||||
|
|
||||||
|
7. **Activate Plugins in the MyBB Admin Panel:**
|
||||||
|
- Open the MyBB admin panel in your web browser.
|
||||||
|
- Navigate to the plugin settings and activate the newly installed plugins.
|
||||||
|
|
||||||
|
#### Important Notes
|
||||||
|
|
||||||
|
- Ensure you use the correct paths and filenames.
|
||||||
|
- Do not forget to regularly back up your MyBB database and files before making changes.
|
||||||
|
- If encountering issues, refer to the MyBB documentation or specific instructions from the plugin author.
|
||||||
|
|
||||||
|
### Running the Role
|
||||||
|
Execute the Ansible playbook containing this role to set up MyBB in a Docker environment.
|
||||||
|
|
||||||
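An illustrative invocation only; the playbook, inventory, and tag names below are hypothetical placeholders and must be adapted to this repository's actual entry point:

```bash
# Hypothetical file and tag names - adapt to your setup
ansible-playbook -i inventory.yml playbook.yml --tags "docker-mybb"
```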
|
## Docker Compose Configuration
|
||||||
|
The `docker-compose.yml.j2` template outlines the services required for MyBB, including the application server, Nginx web server, and database (MariaDB).
|
||||||
|
|
||||||
|
## Additional Information
|
||||||
|
- For detailed configuration and customization, refer to the contents of the `default.conf` template and the `docker-compose.yml.j2` template.
|
||||||
|
- Ensure that the environment variables and paths are correctly set as per your system's configuration.
|
||||||
|
|
||||||
|
## Created with ChatGPT
|
||||||
This README was created with the assistance of ChatGPT, based on a conversation held at this [link](https://chat.openai.com/share/83828f9a-b817-48d8-86ed-599f64850b4d). ChatGPT provided guidance on structuring this document and outlining the key components of the Docker MyBB role.
|
This README was created with the assistance of ChatGPT, based on a conversation held at this [link](https://chat.openai.com/share/83828f9a-b817-48d8-86ed-599f64850b4d). ChatGPT provided guidance on structuring this document and outlining the key components of the Docker MyBB role.
|
||||||
|
|
||||||
## 📚 Other Resources
|
## More Information
|
||||||
- https://github.com/mybb/docker
|
- https://github.com/mybb/docker
|
@ -1,190 +0,0 @@
|
|||||||
# Administration
|
|
||||||
|
|
||||||
## Modify Config 🔧
|
|
||||||
|
|
||||||
### Enter the Container
|
|
||||||
```bash
|
|
||||||
docker-compose exec -it application /bin/sh
|
|
||||||
```
|
|
||||||
|
|
||||||
### Modify the Configuration
|
|
||||||
Inside the container, install a text editor and edit the config:
|
|
||||||
```bash
|
|
||||||
apk add --no-cache nano && nano config/config.php
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Update 🔄
|
|
||||||
|
|
||||||
To update the Nextcloud container, execute the following commands on the server:
|
|
||||||
```bash
|
|
||||||
docker exec -it -u www-data nextcloud-application /var/www/html/occ maintenance:mode --on &&
|
|
||||||
export COMPOSE_HTTP_TIMEOUT=600 &&
|
|
||||||
export DOCKER_CLIENT_TIMEOUT=600 &&
|
|
||||||
docker-compose down
|
|
||||||
```
|
|
||||||
|
|
||||||
Afterwards, update the ***applications.nextcloud.version*** variable to the next version and re-run this repository's playbook so that this Ansible role is applied.
|
|
||||||
|
|
||||||
> **Note:**
|
|
||||||
> It is only possible to update from one to the next major version at a time.
|
|
||||||
> Wait for the update to finish.
|
|
||||||
|
|
||||||
Verify the update by checking the logs:
|
|
||||||
```bash
|
|
||||||
docker-compose logs application
|
|
||||||
```
|
|
||||||
and
|
|
||||||
```bash
|
|
||||||
docker-compose exec -it application top
|
|
||||||
```
|
|
||||||
|
|
||||||
If Nextcloud remains in maintenance mode after the update, try the following:
|
|
||||||
```bash
|
|
||||||
docker exec -it -u www-data nextcloud-application /var/www/html/occ maintenance:mode --on
|
|
||||||
docker exec -it -u www-data nextcloud-application /var/www/html/occ upgrade
|
|
||||||
docker exec -it -u www-data nextcloud-application /var/www/html/occ maintenance:mode --off
|
|
||||||
```
|
|
||||||
|
|
||||||
If the update process fails, execute:
|
|
||||||
```bash
|
|
||||||
docker exec -it -u www-data nextcloud-application /var/www/html/occ maintenance:repair --include-expensive
|
|
||||||
```
|
|
||||||
and disable any non-functioning apps.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Recover Latest Backup 💾
|
|
||||||
|
|
||||||
```bash
|
|
||||||
cd {{path_docker_compose_instances}}nextcloud &&
|
|
||||||
docker-compose down &&
|
|
||||||
docker-compose exec -i database mysql -u nextcloud -pPASSWORD nextcloud < "/Backups/$(sha256sum /etc/machine-id | head -c 64)/backup-docker-to-local/latest/nextcloud_database/sql/backup.sql" &&
|
|
||||||
cd {{path_administrator_scripts}}backup-docker-to-local &&
|
|
||||||
bash ./recover-docker-from-local.sh "nextcloud_data" "$(sha256sum /etc/machine-id | head -c 64)"
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Database Management 🗄️
|
|
||||||
|
|
||||||
### Database Access
|
|
||||||
To access the database, execute:
|
|
||||||
```bash
|
|
||||||
docker-compose exec -it database mysql -u nextcloud -D nextcloud -p
|
|
||||||
```
|
|
||||||
|
|
||||||
### Recreate Database with New Volume
|
|
||||||
```bash
|
|
||||||
docker-compose run --detach --name database --env MYSQL_USER="nextcloud" --env MYSQL_PASSWORD=PASSWORD --env MYSQL_ROOT_PASSWORD=PASSWORD --env MYSQL_DATABASE="nextcloud" -v nextcloud_database:/var/lib/mysql
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## OCC (Nextcloud Command Line) 🔧
|
|
||||||
|
|
||||||
To use OCC, run:
|
|
||||||
```bash
|
|
||||||
docker-compose exec -it -u www-data application /var/www/html/occ
|
|
||||||
```
|
|
||||||
### User Administration
|
|
||||||
|
|
||||||
#### List Users
|
|
||||||
```bash
|
|
||||||
docker compose exec -it -u www-data application php occ user:list
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Sync Users
|
|
||||||
```bash
|
|
||||||
docker compose exec -it -u www-data application php occ user:sync
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Create user via CLI
|
|
||||||
```bash
|
|
||||||
docker compose exec -it -u www-data application php occ user:add {{username}}
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Make user admin via cli
|
|
||||||
```bash
|
|
||||||
docker compose exec -it -u www-data application php occ group:adduser admin {{username}}
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Delete user via CLI
|
|
||||||
```bash
|
|
||||||
docker compose exec -it -u www-data application php occ user:delete {{username}}
|
|
||||||
```
|
|
||||||
---
|
|
||||||
|
|
||||||
### App Administration
|
|
||||||
```bash
|
|
||||||
docker compose exec -u www-data application php occ config:list {{app_name}}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Initialize Duplicates
|
|
||||||
```bash
|
|
||||||
docker-compose exec -it -u www-data application /var/www/html/occ duplicates:find-all --output
|
|
||||||
```
|
|
||||||
|
|
||||||
### Unlock Files
|
|
||||||
```bash
|
|
||||||
docker-compose exec -it -u www-data application /var/www/html/occ maintenance:mode --on
|
|
||||||
docker-compose exec -it nextcloud_database_1 mysql -u nextcloud -pPASSWORD1234132 -D nextcloud -e "delete from oc_file_locks where 1"
|
|
||||||
docker-compose exec -it -u www-data application /var/www/html/occ maintenance:mode --off
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Apps
|
|
||||||
|
|
||||||
### App Relevant Tables 🗃️
|
|
||||||
|
|
||||||
- `oc_appconfig`
|
|
||||||
- `oc_migrations`
|
|
||||||
|
|
||||||
### Cospend
|
|
||||||
|
|
||||||
#### Relevant SQL Commands for Cospend
|
|
||||||
Debugging Migrations:
|
|
||||||
|
|
||||||
https://github.com/julien-nc/cospend-nc/issues/325
|
|
||||||
```sql
|
|
||||||
-- Show all Cospend Tables
|
|
||||||
SHOW TABLES where Tables_in_nextcloud LIKE "%cospend%";
|
|
||||||
-- Show Cospend Configuration
|
|
||||||
SELECT * FROM `oc_appconfig` WHERE appid LIKE "%cospend%";
|
|
||||||
-- Show Cospend Database Migrations
|
|
||||||
SELECT * FROM `oc_migrations` WHERE app LIKE "%cospend%";
|
|
||||||
```
|
|
||||||
|
|
||||||
# Identity and Access Management (IAM)
|
|
||||||
|
|
||||||
## OpenID Connect (OIDC) Support 🔐
|
|
||||||
|
|
||||||
OIDC is supported in this role—for example, via **Keycloak**. OIDC-specific tasks are included when enabled, allowing integration of external authentication providers seamlessly.
|
|
||||||
|
|
||||||
### Verify OIDC Configuration
|
|
||||||
|
|
||||||
```bash
|
|
||||||
docker compose exec -u www-data application /var/www/html/occ config:app:get sociallogin custom_providers
|
|
||||||
```
|
|
||||||
|
|
||||||
## LDAP
|
|
||||||
|
|
||||||
More information: https://docs.nextcloud.com/server/latest/admin_manual/configuration_user/user_auth_ldap.html
|
|
||||||
|
|
||||||
## Get all relevant entries except password
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT * FROM `oc_appconfig` WHERE appid LIKE "%ldap%" and configkey != "s01ldap_agent_password";
|
|
||||||
```
|
|
||||||
|
|
||||||
## Update User with LDAP values
|
|
||||||
|
|
||||||
```bash
|
|
||||||
docker compose exec -it -u www-data application php occ ldap:check-user --update {{username}}
|
|
||||||
```
|
|
||||||
|
|
||||||
## Federation
|
|
||||||
|
|
||||||
If users are created only via Keycloak and not via LDAP, they end up with a different username. For this reason, consider using LDAP to guarantee that the username is valid.
|
|
@ -1,9 +1,201 @@
|
|||||||
# Docker Nextcloud Role 🚀
|
# Docker Nextcloud Role 🚀
|
||||||
|
|
||||||
This repository contains an Ansible role for deploying and managing [Nextcloud](https://nextcloud.com/) using [Docker](https://www.docker.com/). It covers configuration modifications, updates, backups, database management, and more. Additionally, OIDC (OpenID Connect) is supported (for example, via **Keycloak**).
|
This repository contains an Ansible role for deploying and managing [Nextcloud](https://nextcloud.com/) using [Docker](https://www.docker.com/). It covers configuration modifications, updates, backups, database management, and more. Additionally, OIDC (OpenID Connect) is supported (for example, via **Keycloak**).
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## 📚 Other Resources
|
## Modify Config 🔧
|
||||||
|
|
||||||
|
### Enter the Container
|
||||||
|
```bash
|
||||||
|
docker-compose exec -it application /bin/sh
|
||||||
|
```
|
||||||
|
|
||||||
|
### Modify the Configuration
|
||||||
|
Inside the container, install a text editor and edit the config:
|
||||||
|
```bash
|
||||||
|
apk add --no-cache nano && nano config/config.php
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Update 🔄
|
||||||
|
|
||||||
|
To update the Nextcloud container, execute the following commands on the server:
|
||||||
|
```bash
|
||||||
|
docker exec -it -u www-data nextcloud-application /var/www/html/occ maintenance:mode --on &&
|
||||||
|
export COMPOSE_HTTP_TIMEOUT=600 &&
|
||||||
|
export DOCKER_CLIENT_TIMEOUT=600 &&
|
||||||
|
docker-compose down
|
||||||
|
```
|
||||||
|
|
||||||
|
Afterwards, update the ***applications.nextcloud.version*** variable to the next version and re-run this repository's playbook so that this Ansible role is applied.
|
||||||
|
|
||||||
|
> **Note:**
|
||||||
|
> It is only possible to update from one to the next major version at a time.
|
||||||
|
> Wait for the update to finish.
|
||||||
|
|
||||||
|
Verify the update by checking the logs:
|
||||||
|
```bash
|
||||||
|
docker-compose logs application
|
||||||
|
```
|
||||||
|
and
|
||||||
|
```bash
|
||||||
|
docker-compose exec -it application top
|
||||||
|
```
|
||||||
|
|
||||||
|
If Nextcloud remains in maintenance mode after the update, try the following:
|
||||||
|
```bash
|
||||||
|
docker exec -it -u www-data nextcloud-application /var/www/html/occ maintenance:mode --on
|
||||||
|
docker exec -it -u www-data nextcloud-application /var/www/html/occ upgrade
|
||||||
|
docker exec -it -u www-data nextcloud-application /var/www/html/occ maintenance:mode --off
|
||||||
|
```
|
||||||
|
|
||||||
|
If the update process fails, execute:
|
||||||
|
```bash
|
||||||
|
docker exec -it -u www-data nextcloud-application /var/www/html/occ maintenance:repair --include-expensive
|
||||||
|
```
|
||||||
|
and disable any non-functioning apps.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Recover Latest Backup 💾
|
||||||
|
|
||||||
|
```bash
|
||||||
|
cd {{path_docker_compose_instances}}nextcloud &&
|
||||||
|
docker-compose down &&
|
||||||
|
docker-compose exec -i database mysql -u nextcloud -pPASSWORD nextcloud < "/Backups/$(sha256sum /etc/machine-id | head -c 64)/backup-docker-to-local/latest/nextcloud_database/sql/backup.sql" &&
|
||||||
|
cd {{path_administrator_scripts}}backup-docker-to-local &&
|
||||||
|
bash ./recover-docker-from-local.sh "nextcloud_data" "$(sha256sum /etc/machine-id | head -c 64)"
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Database Management 🗄️
|
||||||
|
|
||||||
|
### Database Access
|
||||||
|
To access the database, execute:
|
||||||
|
```bash
|
||||||
|
docker-compose exec -it database mysql -u nextcloud -D nextcloud -p
|
||||||
|
```
|
||||||
|
|
||||||
|
### Recreate Database with New Volume
|
||||||
|
```bash
|
||||||
|
docker-compose run --detach --name database --env MYSQL_USER="nextcloud" --env MYSQL_PASSWORD=PASSWORD --env MYSQL_ROOT_PASSWORD=PASSWORD --env MYSQL_DATABASE="nextcloud" -v nextcloud_database:/var/lib/mysql
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## OCC (Nextcloud Command Line) 🔧
|
||||||
|
|
||||||
|
To use OCC, run:
|
||||||
|
```bash
|
||||||
|
docker-compose exec -it -u www-data application /var/www/html/occ
|
||||||
|
```
|
||||||
|
### User Administration
|
||||||
|
|
||||||
|
#### List Users
|
||||||
|
```bash
|
||||||
|
docker compose exec -it -u www-data application php occ user:list
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Sync Users
|
||||||
|
```bash
|
||||||
|
docker compose exec -it -u www-data application php occ user:sync
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Create user via CLI
|
||||||
|
```bash
|
||||||
|
docker compose exec -it -u www-data application php occ user:add {{username}}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Make user admin via cli
|
||||||
|
```bash
|
||||||
|
docker compose exec -it -u www-data application php occ group:adduser admin {{username}}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Delete user via CLI
|
||||||
|
```bash
|
||||||
|
docker compose exec -it -u www-data application php occ user:delete {{username}}
|
||||||
|
```
|
||||||
|
---
|
||||||
|
|
||||||
|
### App Administration
|
||||||
|
```bash
|
||||||
|
docker compose exec -u www-data application php occ config:list {{app_name}}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Initialize Duplicates
|
||||||
|
```bash
|
||||||
|
docker-compose exec -it -u www-data application /var/www/html/occ duplicates:find-all --output
|
||||||
|
```
|
||||||
|
|
||||||
|
### Unlock Files
|
||||||
|
```bash
|
||||||
|
docker-compose exec -it -u www-data application /var/www/html/occ maintenance:mode --on
|
||||||
|
docker-compose exec -it nextcloud_database_1 mysql -u nextcloud -pPASSWORD1234132 -D nextcloud -e "delete from oc_file_locks where 1"
|
||||||
|
docker-compose exec -it -u www-data application /var/www/html/occ maintenance:mode --off
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Apps
|
||||||
|
|
||||||
|
### App Relevant Tables 🗃️
|
||||||
|
|
||||||
|
- `oc_appconfig`
|
||||||
|
- `oc_migrations`
|
||||||
|
|
||||||
|
### Cospend
|
||||||
|
|
||||||
|
#### Relevant SQL Commands for Cospend
|
||||||
|
Debugging Migrations:
|
||||||
|
|
||||||
|
https://github.com/julien-nc/cospend-nc/issues/325
|
||||||
|
```sql
|
||||||
|
-- Show all Cospend Tables
|
||||||
|
SHOW TABLES where Tables_in_nextcloud LIKE "%cospend%";
|
||||||
|
-- Show Cospend Configuration
|
||||||
|
SELECT * FROM `oc_appconfig` WHERE appid LIKE "%cospend%";
|
||||||
|
-- Show Cospend Database Migrations
|
||||||
|
SELECT * FROM `oc_migrations` WHERE app LIKE "%cospend%";
|
||||||
|
```
|
||||||
|
|
||||||
|
# Identity and Access Management (IAM)
|
||||||
|
|
||||||
|
## OpenID Connect (OIDC) Support 🔐
|
||||||
|
|
||||||
|
OIDC is supported in this role—for example, via **Keycloak**. OIDC-specific tasks are included when enabled, allowing integration of external authentication providers seamlessly.
|
||||||
|
|
||||||
|
### Verify OIDC Configuration
|
||||||
|
|
||||||
|
```bash
|
||||||
|
docker compose exec -u www-data application /var/www/html/occ config:app:get sociallogin custom_providers
|
||||||
|
```
|
||||||
|
|
||||||
|
## LDAP
|
||||||
|
|
||||||
|
More information: https://docs.nextcloud.com/server/latest/admin_manual/configuration_user/user_auth_ldap.html
|
||||||
|
|
||||||
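For a quick connectivity check from the command line, the `user_ldap` app ships a test command. A hedged example: the configuration ID `s01` is assumed from the key prefix used in the SQL query below; you can list your IDs with `occ ldap:show-config`:

```bash
docker compose exec -it -u www-data application php occ ldap:test-config s01
```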
|
## Get all relevant entries except password
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SELECT * FROM `oc_appconfig` WHERE appid LIKE "%ldap%" and configkey != "s01ldap_agent_password";
|
||||||
|
```
|
||||||
|
|
||||||
|
## Update User with LDAP values
|
||||||
|
|
||||||
|
```bash
|
||||||
|
docker compose exec -it -u www-data application php occ ldap:check-user --update {{username}}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Federation
|
||||||
|
|
||||||
|
If users are created only via Keycloak and not via LDAP, they end up with a different username. For this reason, consider using LDAP to guarantee that the username is valid.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Further Information ℹ️
|
||||||
|
|
||||||
- [Nextcloud Docker Example with Nginx Proxy, MariaDB, and FPM](https://github.com/nextcloud/docker/blob/master/.examples/docker-compose/with-nginx-proxy/mariadb/fpm/docker-compose.yml)
|
- [Nextcloud Docker Example with Nginx Proxy, MariaDB, and FPM](https://github.com/nextcloud/docker/blob/master/.examples/docker-compose/with-nginx-proxy/mariadb/fpm/docker-compose.yml)
|
||||||
- [Nextcloud Upgrade via Docker by Goneuland](https://goneuland.de/nextcloud-upgrade-auf-neue-versionen-mittels-docker/)
|
- [Nextcloud Upgrade via Docker by Goneuland](https://goneuland.de/nextcloud-upgrade-auf-neue-versionen-mittels-docker/)
|
||||||
@ -18,6 +210,14 @@ This repository contains an Ansible role for deploying and managing [Nextcloud](
|
|||||||
- [Nextcloud Talk Plugin and Turnserver in Docker](https://forum.openmediavault.org/index.php?thread/31782-docker-nextcloud-talk-plugin-and-turnserver/)
|
- [Nextcloud Talk Plugin and Turnserver in Docker](https://forum.openmediavault.org/index.php?thread/31782-docker-nextcloud-talk-plugin-and-turnserver/)
|
||||||
- [Nextcloud Talk on Docker: Turn Server Issues](https://help.nextcloud.com/t/nextcloud-talk-im-docker/container/turn-server-auf-docker-host-kein-video/84133/10)
|
- [Nextcloud Talk on Docker: Turn Server Issues](https://help.nextcloud.com/t/nextcloud-talk-im-docker/container/turn-server-auf-docker-host-kein-video/84133/10)
|
||||||
|
|
||||||
|
---
|
||||||
|
## Author
|
||||||
|
|
||||||
|
**Developed by:** Kevin Veen-Birkenbach
|
||||||
|
**Website:** [https://www.veen.world/](https://www.veen.world/)
|
||||||
|
|
||||||
|
*This README.md was created with the help of [ChatGPT](https://chatgpt.com/share/67a5312c-7248-800f-ae27-0288c1c82f1d).*
|
||||||
|
|
||||||
---
|
---
|
||||||
*Enjoy and happy containerizing! 😄*
|
*Enjoy and happy containerizing! 😄*
|
||||||
|
|
||||||
|
@ -2,7 +2,7 @@
|
|||||||
|
|
||||||
Welcome to the **Docker OAuth2 Proxy Role**! 🌟 This role contains helper functions to set up an OAuth2 proxy using [OAuth2 Proxy](https://github.com/oauth2-proxy/oauth2-proxy), a tool designed to secure applications by protecting them with OAuth2 authentication. 💡
|
Welcome to the **Docker OAuth2 Proxy Role**! 🌟 This role contains helper functions to set up an OAuth2 proxy using [OAuth2 Proxy](https://github.com/oauth2-proxy/oauth2-proxy), a tool designed to secure applications by protecting them with OAuth2 authentication. 💡
|
||||||
|
|
||||||
## 📌 Overview
|
## Overview
|
||||||
|
|
||||||
The OAuth2 Proxy is used to shield specific web applications from unauthorized access by requiring users to authenticate via an external identity provider, such as Keycloak. This role simplifies the setup process by providing templated configurations and tasks to integrate the OAuth2 Proxy with Docker Compose and Keycloak.
|
The OAuth2 Proxy is used to shield specific web applications from unauthorized access by requiring users to authenticate via an external identity provider, such as Keycloak. This role simplifies the setup process by providing templated configurations and tasks to integrate the OAuth2 Proxy with Docker Compose and Keycloak.
|
||||||
|
|
||||||
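For orientation, here is a minimal, hedged sketch of the kind of settings involved when pointing OAuth2 Proxy at a Keycloak realm. The flag names are OAuth2 Proxy's own; all hostnames, IDs, and secrets are placeholders:

```bash
oauth2-proxy \
  --provider=keycloak-oidc \
  --client-id=<client-id> \
  --client-secret=<client-secret> \
  --oidc-issuer-url=https://keycloak.example.org/realms/<realm> \
  --redirect-url=https://app.example.org/oauth2/callback \
  --email-domain="*" \
  --cookie-secret=<32-byte-random-secret> \
  --upstream=http://application:80 \
  --http-address=0.0.0.0:4180
```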
|
@ -1,6 +1,6 @@
|
|||||||
# OpenProject Role
|
# OpenProject Role
|
||||||
|
|
||||||
## 📌 Overview
|
## Overview
|
||||||
|
|
||||||
This role is designed to deploy the [OpenProject](https://www.openproject.org/) application using Docker. It includes tasks for setting up the environment, pulling the Docker repository, and configuring a reverse proxy with Nginx. It was developed by [Kevin Veen-Birkenbach](https://www.veen.world/)
|
This role is designed to deploy the [OpenProject](https://www.openproject.org/) application using Docker. It includes tasks for setting up the environment, pulling the Docker repository, and configuring a reverse proxy with Nginx. It was developed by [Kevin Veen-Birkenbach](https://www.veen.world/)
|
||||||
|
|
||||||
|
@ -1,29 +0,0 @@
|
|||||||
# Administration
|
|
||||||
|
|
||||||
## track docker container status
|
|
||||||
```bash
|
|
||||||
watch -n 2 "docker ps -a | grep peertube"
|
|
||||||
```
|
|
||||||
|
|
||||||
## clean rebuild
|
|
||||||
```bash
|
|
||||||
cd {{path_docker_compose_instances}}peertube/ &&
|
|
||||||
docker-compose down
|
|
||||||
docker volume rm peertube_assets peertube_config peertube_data peertube_database peertube_redis
|
|
||||||
docker-compose up -d
|
|
||||||
```
|
|
||||||
|
|
||||||
## access terminal
|
|
||||||
```bash
|
|
||||||
docker-compose exec -it application /bin/bash
|
|
||||||
```
|
|
||||||
|
|
||||||
## update config
|
|
||||||
```bash
|
|
||||||
apt update && apt install nano && nano ./config/default.yaml
|
|
||||||
```
|
|
||||||
|
|
||||||
## get root password
|
|
||||||
```bash
|
|
||||||
docker logs peertube-application-1 | grep -A1 root
|
|
||||||
```
|
|
@ -1,7 +1,37 @@
|
|||||||
# docker peertube
|
# docker peertube
|
||||||
|
|
||||||
|
## track docker container status
|
||||||
|
```bash
|
||||||
|
watch -n 2 "docker ps -a | grep peertube"
|
||||||
|
```
|
||||||
|
|
||||||
## 📚 Other Resources
|
## clean rebuild
|
||||||
|
```bash
|
||||||
|
cd {{path_docker_compose_instances}}peertube/ &&
|
||||||
|
docker-compose down
|
||||||
|
docker volume rm peertube_assets peertube_config peertube_data peertube_database peertube_redis
|
||||||
|
docker-compose up -d
|
||||||
|
```
|
||||||
|
|
||||||
|
## access terminal
|
||||||
|
```bash
|
||||||
|
docker-compose exec -it application /bin/bash
|
||||||
|
```
|
||||||
|
|
||||||
|
## update config
|
||||||
|
```bash
|
||||||
|
apt update && apt install nano && nano ./config/default.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
## get root password
|
||||||
|
```bash
|
||||||
|
docker logs peertube-application-1 | grep -A1 root
|
||||||
|
```
|
||||||
|
|
||||||
|
## upgrade version
|
||||||
|
- https://docs.joinpeertube.org/install/docker
|
||||||
|
|
||||||
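The usual Docker upgrade pattern looks roughly like this (a generic sketch; follow the linked guide for version-specific steps such as configuration migrations):

```bash
cd {{path_docker_compose_instances}}peertube/ &&
docker-compose pull &&
docker-compose up -d --force-recreate
```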
|
## further information
|
||||||
- https://docs.joinpeertube.org/install-docker
|
- https://docs.joinpeertube.org/install-docker
|
||||||
- https://github.com/Chocobozzz/PeerTube/issues/3091
|
- https://github.com/Chocobozzz/PeerTube/issues/3091
|
||||||
- [OIDC Plugin installation](https://chatgpt.com/c/67a4f448-4be8-800f-8639-4c15cb2fb44e)
|
- [OIDC Plugin installation](https://chatgpt.com/c/67a4f448-4be8-800f-8639-4c15cb2fb44e)
|
@ -1,2 +0,0 @@
|
|||||||
# upgrade version
|
|
||||||
- https://docs.joinpeertube.org/install/docker
|
|
@ -1,134 +0,0 @@
|
|||||||
## Accessing Services
|
|
||||||
|
|
||||||
### Application Access
|
|
||||||
To gain shell access to the application container, run the following command:
|
|
||||||
```bash
|
|
||||||
docker-compose exec -it application bash
|
|
||||||
```
|
|
||||||
|
|
||||||
### Clear Cache
|
|
||||||
```bash
|
|
||||||
docker compose exec -it application php artisan cache:clear
|
|
||||||
```
|
|
||||||
|
|
||||||
### Database Access
|
|
||||||
To access the MariaDB instance in the database container, run the following command:
|
|
||||||
```bash
|
|
||||||
docker-compose exec -it database mariadb -u pixelfed -p
|
|
||||||
```
|
|
||||||
|
|
||||||
### User Management via CLI in Pixelfed Docker Setup
|
|
||||||
To manage users in your Pixelfed instance running in a Docker container, as configured in Kevin Veen-Birkenbach's docker-pixelfed role, you can follow these steps via the Command Line Interface (CLI):
|
|
||||||
|
|
||||||
1. **Access the Application Container:** First, gain shell access to the Pixelfed application container. Use the command provided in the README:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
docker-compose exec -it application bash
|
|
||||||
```
|
|
||||||
|
|
||||||
This command lets you access the bash shell inside the `application` Docker container where Pixelfed is running.
|
|
||||||
|
|
||||||
2. **Navigate to Pixelfed Directory:** Once inside the container, navigate to the Pixelfed directory. This is typically the root directory where Pixelfed is installed.
|
|
||||||
|
|
||||||
3. **Use Artisan Commands:** Pixelfed is built on Laravel, so you'll use Laravel's Artisan CLI for user management. Here are some common tasks:
|
|
||||||
|
|
||||||
- **Create a New User:**
|
|
||||||
```bash
|
|
||||||
php artisan user:create
|
|
||||||
```
|
|
||||||
This command will prompt you to enter the user's details like username, email, and password.
|
|
||||||
|
|
||||||
- **List Users:**
|
|
||||||
```bash
|
|
||||||
php artisan user:list
|
|
||||||
```
|
|
||||||
This command displays a list of all users.
|
|
||||||
|
|
||||||
- **Delete a User:**
|
|
||||||
```bash
|
|
||||||
php artisan user:delete {username}
|
|
||||||
```
|
|
||||||
Replace `{username}` with the actual username of the user you wish to delete.
|
|
||||||
|
|
||||||
- **Reset Password:**
|
|
||||||
```bash
|
|
||||||
php artisan user:reset-password {username}
|
|
||||||
```
|
|
||||||
This will initiate a password reset process for the specified user.
|
|
||||||
|
|
||||||
4. **Verify and Validate:** Depending on your Pixelfed's configuration, especially if email verification is required, you might need to perform additional steps to verify new accounts or modify user details.
|
|
||||||
|
|
||||||
5. **Exit the Container:** After completing your user management tasks, exit the Docker container shell by typing `exit`.
|
|
||||||
|
|
||||||
### Note:
|
|
||||||
|
|
||||||
- **Commands Variability:** The available Artisan commands can vary based on your version of Pixelfed and Laravel. Always refer to the specific documentation for your version.
|
|
||||||
- **Permissions:** Ensure you have the necessary permissions and rights within the Docker container to perform these actions.
|
|
||||||
- **Environment Specifics:** The exact paths and commands may vary based on your Docker and Pixelfed setup, as defined in your `docker-compose.yml` and other configuration files.
|
|
||||||
|
|
||||||
This process provides a streamlined way to manage Pixelfed users directly from the CLI in a Dockerized environment, ensuring that you can efficiently administer your Pixelfed instance without needing to access the Pixelfed web interface.
|
|
||||||
|
|
||||||
## Instagram Import Cleanup
|
|
||||||
|
|
||||||
If you have imported posts from Instagram, you can clean up the imported data and files as follows:
|
|
||||||
|
|
||||||
### Database Cleanup
|
|
||||||
Run these commands inside your MariaDB shell to remove import-related data:
|
|
||||||
```bash
|
|
||||||
DELETE from import_posts WHERE 1;
|
|
||||||
DELETE from import_jobs WHERE 1;
|
|
||||||
DELETE from import_datas WHERE 1;
|
|
||||||
DELETE from statuses where created_at < "2022-12-01 22:15:39";
|
|
||||||
DELETE from media where deleted_at >= "2023-07-28 14:39:05";
|
|
||||||
```
|
|
||||||
|
|
||||||
### File System Cleanup
|
|
||||||
Run these commands to remove the imported files and trigger the cleanup job:
|
|
||||||
```bash
|
|
||||||
docker-compose exec -u "www-data" application rm -rv "/var/www/storage/app/imports/1"
|
|
||||||
docker-compose exec -u "www-data" application php artisan schedule:run
|
|
||||||
```
|
|
||||||
|
|
||||||
## Full Cleanup (Reset)
|
|
||||||
|
|
||||||
For a hard reset, which will delete all data and stop all services, use the following commands:
|
|
||||||
```bash
|
|
||||||
docker-compose down
|
|
||||||
docker volume rm pixelfed_application_data pixelfed_database pixelfed_redis
|
|
||||||
```
|
|
||||||
|
|
||||||
## Update Procedure
|
|
||||||
|
|
||||||
To update your Pixelfed instance, navigate to the directory where your `docker-compose.yml` file is located and run these commands:
|
|
||||||
```bash
|
|
||||||
cd {{path_docker_compose_instances}}pixelfed/ &&
|
|
||||||
docker-compose down &&
|
|
||||||
docker network prune -f &&
|
|
||||||
docker-compose pull &&
|
|
||||||
docker-compose build &&
|
|
||||||
docker-compose -p pixelfed up -d --force-recreate
|
|
||||||
```
|
|
||||||
|
|
||||||
## Inspecting the Services
|
|
||||||
|
|
||||||
To see the status of all services or follow the logs, use these commands:
|
|
||||||
```bash
|
|
||||||
docker-compose ps -a
|
|
||||||
docker-compose logs -f
|
|
||||||
```
|
|
||||||
|
|
||||||
## Debug
|
|
||||||
To debug the system, set APP_DEBUG to true, as described [here](https://docs.pixelfed.org/technical-documentation/config/).
|
|
||||||
|
|
||||||
```bash
|
|
||||||
nano config/app.php
|
|
||||||
php artisan cache:clear
|
|
||||||
php artisan route:cache
|
|
||||||
php artisan view:clear
|
|
||||||
php artisan config:cache
|
|
||||||
```
|
|
||||||
|
|
||||||
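The setting itself is a standard Laravel option. A minimal, hedged sketch (the exact location depends on your setup; Pixelfed reads `APP_DEBUG` from its `.env` file, which `config/app.php` falls back to; after changing it, re-run the cache commands shown above):

```bash
# In the Pixelfed .env file inside the application container (path may differ):
APP_DEBUG=true
```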
## Modifying files
|
|
||||||
```bash
|
|
||||||
apt update && apt upgrade && apt install nano
|
|
||||||
```
|
|