mirror of
https://github.com/kevinveenbirkenbach/docker-volume-backup-cleanup.git
synced 2025-09-16 07:06:06 +02:00
Add CI, dirval-based validator, tests, and docs updates
- Add GitHub Actions workflow (Ubuntu, Python 3.10–3.12)
- Add Makefile (test + pkgmgr install note)
- Add requirements.yml (pkgmgr: dirval)
- Replace shell scripts with parallel validator main.py using dirval
- Add unit tests with fake dirval and timeouts
- Update README to new repo name and pkgmgr alias 'cleanback'
- Add .gitignore for __pycache__

Conversation context: https://chatgpt.com/share/68c309bf-8818-800f-84d9-c4aa74a4544c
.github/workflows/tests.yml (41 lines, vendored, new file)
@@ -0,0 +1,41 @@
```yaml
name: CI

on:
  push:
    branches: [ "**" ]
  pull_request:
    branches: [ "**" ]

jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest]
        python-version: ["3.10", "3.11", "3.12"]

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}

      - name: Show Python version
        run: python -V

      - name: Make main.py executable (optional)
        run: chmod +x main.py || true

      - name: Install test dependencies (if any)
        run: |
          if [ -f requirements.txt ]; then
            python -m pip install --upgrade pip
            pip install -r requirements.txt
          fi

      - name: Run tests
        run: make test
```
.gitignore (1 line, vendored, new file)
@@ -0,0 +1 @@
```
**__pycache__
```
Makefile (18 lines, new file)
@@ -0,0 +1,18 @@
```makefile
# Makefile for Cleanup Failed Backups

.PHONY: test install help

help:
	@echo "Available targets:"
	@echo "  make test     - Run unit tests"
	@echo "  make install  - Show installation instructions"

test:
	@echo ">> Running tests"
	@python3 -m unittest -v test.py

install:
	@echo ">> Installation instructions:"
	@echo "  This software can be installed with pkgmgr:"
	@echo "      pkgmgr install cleanback"
	@echo "  See project: https://github.com/kevinveenbirkenbach/package-manager"
```
README.md (115 lines changed)
@@ -1,32 +1,113 @@
```diff
-# Cleanup Failed Docker Backups
-
-[](https://github.com/sponsors/kevinveenbirkenbach) [](https://www.patreon.com/c/kevinveenbirkenbach) [](https://buymeacoffee.com/kevinveenbirkenbach) [](https://s.veen.world/paypaldonate)
-
-This repository hosts a Bash script designed for cleaning up directories within the Docker Volume Backup system. It is intended to be used in conjunction with the [Docker Volume Backup](https://github.com/kevinveenbirkenbach/docker-volume-backup) project.
-
-## Description
-
-This script operates by traversing subdirectories within a specific main directory and, under certain conditions, proposes their deletion to the user. It is useful in managing backup directories, especially when certain directories can be cleaned up based on the absence of a particular subdirectory and the name of the directories themselves.
-
-The script takes two arguments: a backup hash and a trigger directory. It constructs the main directory path using the given backup hash, and then iterates over all items within the main directory. If a directory's name matches a specific date-time-stamp pattern and lacks the specified trigger directory, the script will list its contents and ask for user confirmation to delete the directory.
-
-For more detailed information about the script's workings, refer to the comments within the `cleanup.sh` script file.
-
-## Usage
-
-To use this script, clone this repository to your local system and run the script with the necessary arguments. The command should be structured as follows:
-
-```bash
-bash cleanup.sh BACKUP_HASH
-```
-
-Replace `BACKUP_HASH` and `TRIGGER_DIRECTORY` with your actual values.
-
-## License
-
-This project is licensed under the GNU Affero General Public License v3.0. See the LICENSE file for more information.
-
-## Author
-
-This script is developed by Kevin Veen-Birkenbach. You can reach out to him at kevin@veen.world or visit his website at https://www.veen.world.
-
-## Created with Chat GPT
-
-https://chat.openai.com/share/01222e15-8e1d-436d-b05b-29f406adb2ea
+# Cleanup Failed Backups (cleanback) 🚮⚡
+
+[](https://github.com/sponsors/kevinveenbirkenbach)
+[](https://www.patreon.com/c/kevinveenbirkenbach)
+[](https://buymeacoffee.com/kevinveenbirkenbach)
+[](https://s.veen.world/paypaldonate)
+
+**Repository:** https://github.com/kevinveenbirkenbach/cleanup-failed-backups
+
+## Description
+
+This tool validates and (optionally) cleans up **failed Docker backup directories**.
+It scans backup folders under `/Backups`, uses [`dirval`](https://github.com/kevinveenbirkenbach/directory-validator) to validate each subdirectory, and lets you delete the ones that fail validation. Validation runs **in parallel** for performance; deletions are controlled and can be interactive or automatic.
+
+---
+
+## ✨ Highlights
+
+- **Parallel validation** of backup subdirectories
+- Uses **`dirval`** (`directory-validator`) via CLI for robust validation
+- **Interactive** or **non-interactive** deletion flow (`--yes`)
+- Supports validating a single backup **ID** or **all** backups
+
+---
+
+## 📦 Installation
+
+This project is installable via **pkgmgr** (Kevin’s package manager).
+
+**New pkgmgr alias:** `cleanback`
+
+```bash
+# Install pkgmgr first (if you don't have it):
+# https://github.com/kevinveenbirkenbach/package-manager
+
+pkgmgr install cleanback
+```
+
+> `dirval` is declared as a dependency (see `requirements.yml`) and will be resolved by pkgmgr.
+
+---
+
+## 🔧 Requirements
+
+* Python 3.8+
+* `dirval` available on PATH (resolved automatically by `pkgmgr install cleanback`)
+* Access to `/Backups` directory tree
+
+---
+
+## 🚀 Usage
+
+The executable is `main.py`:
+
+```bash
+# Validate a single backup ID (under /Backups/<ID>/backup-docker-to-local)
+python3 main.py --id <ID>
+
+# Validate ALL backup IDs under /Backups/*/backup-docker-to-local
+python3 main.py --all
+```
+
+### Common options
+
+* `--dirval-cmd <path-or-name>` — command to run `dirval` (default: `dirval`)
+* `--workers <int>` — parallel workers (default: CPU count, min 2)
+* `--timeout <seconds>` — per-directory validation timeout (float supported; default: 300.0)
+* `--yes` — **non-interactive**: auto-delete directories that fail validation
+
+### Examples
+
+```bash
+# Validate a single backup and prompt for deletions on failures
+python3 main.py --id 2024-09-01T12-00-00
+
+# Validate everything with 8 workers and auto-delete failures
+python3 main.py --all --workers 8 --yes
+
+# Use a custom dirval binary and shorter timeout
+python3 main.py --all --dirval-cmd /usr/local/bin/dirval --timeout 5.0
+```
+
+---
+
+## 🧪 Tests
+
+```bash
+make test
+```
+
+This runs the unit tests in `test.py`. Tests create a temporary `/Backups`-like tree and a fake `dirval` to simulate success/failure/timeout behavior.
+
+---
+
+## 📁 Project Layout
+
+* `main.py` — CLI entry point (parallel validator + cleanup)
+* `test.py` — unit tests
+* `requirements.yml` — `pkgmgr` dependencies (includes `dirval`)
+* `Makefile` — `make test` and an informational `make install`
+
+---
+
+## 🪪 License
+
+This project is licensed under the **GNU Affero General Public License v3.0**.
+See the [LICENSE](LICENSE) file for details.
+
+---
+
+## 👤 Author
+
+**Kevin Veen-Birkenbach**
+🌐 [https://www.veen.world](https://www.veen.world)
+📧 [kevin@veen.world](mailto:kevin@veen.world)
```
@@ -1,33 +0,0 @@
```bash
#!/bin/bash

# Get the absolute path of the directory where the current script is located
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"

# Define the path to the original cleanup script using the script directory path
CLEANUP_SCRIPT="$SCRIPT_DIR/cleanup.sh"

# Path to the main directory
MAIN_DIRECTORY="/Backups"

# Check if the cleanup script exists
if [ ! -f "$CLEANUP_SCRIPT" ]; then
    echo "Error: The script $CLEANUP_SCRIPT does not exist."
    exit 1
fi

# Iterate through each subdirectory in the main directory
for BACKUP_FOLDER_PATH in "$MAIN_DIRECTORY"/*; do

    # Extract the base name (folder name) from the path
    BACKUP_FOLDER=$(basename "$BACKUP_FOLDER_PATH")

    # Check if the 'backup-docker-to-local' directory exists
    if [ -d "$BACKUP_FOLDER_PATH/backup-docker-to-local" ]; then
        echo "Running cleanup script for folder: $BACKUP_FOLDER"

        # Call the cleanup script
        "$CLEANUP_SCRIPT" "$BACKUP_FOLDER"
    else
        echo "Directory $BACKUP_FOLDER_PATH/backup-docker-to-local not found."
    fi
done
```
cleanup.sh (45 lines, deleted)
@@ -1,45 +0,0 @@
```bash
#!/bin/bash

# Define backup hash argument as BACKUP_HASH
BACKUP_HASH="$1"

# Define main directory containing subdirectories to potentially be deleted
MAIN_DIRECTORY="/Backups/$BACKUP_HASH/backup-docker-to-local"
if [ -d "$MAIN_DIRECTORY" ]; then
    echo "Cleaning up directory: $MAIN_DIRECTORY"
else
    echo "Error: $MAIN_DIRECTORY does not exist."
    exit 1
fi

# Loop through all subdirectories in the main directory
for SUBDIR in "$MAIN_DIRECTORY"/*; do

    # Only proceed if it is a directory
    if [ -d "$SUBDIR" ]; then
        echo "Validating directory: $SUBDIR"
        scripts_directory="$(dirname "$(dirname "$(realpath "$0")")")"
        # Call the Python script for validation
        python $scripts_directory/directory-validator/directory-validator.py "$SUBDIR" --validate
        VALIDATION_STATUS=$?

        if [ $VALIDATION_STATUS -eq 0 ]; then
            echo "Validation: ok"
        else
            echo "Validation: error"
            # Display the subdirectory contents
            echo "Contents of subdirectory: $SUBDIR"
            ls "$SUBDIR"

            # Ask for user confirmation before deletion
            read -p "Are you sure you want to delete this subdirectory? (y/n) " -n 1 -r
            echo # move to a new line
            if [[ $REPLY =~ ^[Yy]$ ]]; then
                # Notify the user of the deletion, then delete the subdirectory
                echo "Deleting subdirectory: $SUBDIR"
                rm -vrf "$SUBDIR"
            fi
        fi
    fi
done
```
main.py (250 lines, new executable file)
@@ -0,0 +1,250 @@
```python
#!/usr/bin/env python3
"""
Cleanup Failed Docker Backups — parallel validator (using dirval)

Validates backup subdirectories under:
- /Backups/<ID>/backup-docker-to-local   (when --id is used)
- /Backups/*/backup-docker-to-local      (when --all is used)

For each subdirectory:
- Runs `dirval <subdir> --validate`.
- If validation fails, it lists the contents and asks whether to delete.
- With --yes, deletions happen automatically (no prompt).

Parallelism:
- Validation runs in parallel (thread pool). Deletions are performed afterwards
  sequentially (to keep prompts sane).
"""

from __future__ import annotations

import argparse
import sys
import shutil
import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed
from dataclasses import dataclass
from pathlib import Path
from typing import List, Optional, Tuple
import multiprocessing
import time

BACKUPS_ROOT = Path("/Backups")


@dataclass(frozen=True)
class ValidationResult:
    subdir: Path
    ok: bool
    returncode: int
    stderr: str
    stdout: str


def discover_target_subdirs(backup_id: Optional[str], all_mode: bool) -> List[Path]:
    """
    Return a list of subdirectories to validate:
    - If backup_id is given: /Backups/<id>/backup-docker-to-local/* (dirs only)
    - If --all: for each /Backups/* that has backup-docker-to-local, include its subdirs
    """
    targets: List[Path] = []

    if all_mode:
        if not BACKUPS_ROOT.is_dir():
            raise FileNotFoundError(f"Backups root does not exist: {BACKUPS_ROOT}")
        for backup_folder in sorted(p for p in BACKUPS_ROOT.iterdir() if p.is_dir()):
            candidate = backup_folder / "backup-docker-to-local"
            if candidate.is_dir():
                targets.extend(sorted([p for p in candidate.iterdir() if p.is_dir()]))
    else:
        if not backup_id:
            raise ValueError("Either --id or --all must be provided.")
        base = BACKUPS_ROOT / backup_id / "backup-docker-to-local"
        if not base.is_dir():
            raise FileNotFoundError(f"Directory does not exist: {base}")
        targets = sorted([p for p in base.iterdir() if p.is_dir()])

    return targets


def run_dirval_validate(subdir: Path, dirval_cmd: str, timeout: float) -> ValidationResult:
    """
    Execute dirval:
        <dirval_cmd> "<SUBDIR>" --validate
    Return ValidationResult with ok = (returncode == 0).
    """
    cmd = [dirval_cmd, str(subdir), "--validate"]
    try:
        proc = subprocess.run(
            cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            check=False,
            text=True,
            timeout=timeout,
        )
        return ValidationResult(
            subdir=subdir,
            ok=(proc.returncode == 0),
            returncode=proc.returncode,
            stderr=(proc.stderr or "").strip(),
            stdout=(proc.stdout or "").strip(),
        )
    except subprocess.TimeoutExpired:
        return ValidationResult(
            subdir=subdir,
            ok=False,
            returncode=124,
            stderr=f"dirval timed out after {timeout}s",
            stdout="",
        )
    except FileNotFoundError:
        return ValidationResult(
            subdir=subdir,
            ok=False,
            returncode=127,
            stderr=f"dirval not found (dirval-cmd: {dirval_cmd})",
            stdout="",
        )


def parallel_validate(subdirs: List[Path], dirval_cmd: str, workers: int, timeout: float) -> List[ValidationResult]:
    results: List[ValidationResult] = []
    if not subdirs:
        return results

    print(f"Validating {len(subdirs)} directories with {workers} workers (dirval: {dirval_cmd})...")
    start = time.time()

    with ThreadPoolExecutor(max_workers=workers) as pool:
        future_map = {pool.submit(run_dirval_validate, sd, dirval_cmd, timeout): sd for sd in subdirs}
        for fut in as_completed(future_map):
            res = fut.result()
            status = "ok" if res.ok else "error"
            print(f"[{status}] {res.subdir}")
            results.append(res)

    elapsed = time.time() - start
    print(f"Validation finished in {elapsed:.2f}s")
    return results


def print_dir_listing(path: Path, max_items: int = 50) -> None:
    try:
        entries = sorted(path.iterdir(), key=lambda p: (not p.is_dir(), p.name.lower()))
    except Exception as e:
        print(f"  (unable to list: {e})")
        return

    for i, entry in enumerate(entries):
        typ = "<DIR>" if entry.is_dir() else "     "
        print(f"  {typ} {entry.name}")
        if i + 1 >= max_items and len(entries) > i + 1:
            print(f"  ... (+{len(entries) - (i + 1)} more)")
            break


def confirm(prompt: str) -> bool:
    try:
        return input(prompt).strip().lower() in {"y", "yes"}
    except EOFError:
        return False


def delete_path(path: Path) -> Tuple[Path, bool, Optional[str]]:
    try:
        shutil.rmtree(path)
        return path, True, None
    except Exception as e:
        return path, False, str(e)


def process_deletions(failures: List[ValidationResult], assume_yes: bool) -> int:
    deleted_count = 0
    for res in failures:
        print("\n" + "=" * 80)
        print(f"Validation failed for: {res.subdir}")
        if res.stderr:
            print(f"stderr: {res.stderr}")
        if res.stdout:
            print(f"stdout: {res.stdout}")
        print("Contents:")
        print_dir_listing(res.subdir)

        should_delete = assume_yes or confirm("Delete this subdirectory? [y/N]: ")
        if not should_delete:
            continue

        print(f"Deleting: {res.subdir}")
        path, ok, err = delete_path(res.subdir)
        if ok:
            print(f"Deleted: {path}")
            deleted_count += 1
        else:
            print(f"Failed to delete {path}: {err}")

    return deleted_count


def parse_args(argv: Optional[List[str]] = None) -> argparse.Namespace:
    parser = argparse.ArgumentParser(
        description="Validate (and optionally delete) failed backup subdirectories in parallel using dirval."
    )
    scope = parser.add_mutually_exclusive_group(required=True)
    scope.add_argument("--id", dest="backup_id", help="Backup folder name under /Backups.")
    scope.add_argument("--all", dest="all_mode", action="store_true", help="Scan all /Backups/* folders.")

    parser.add_argument(
        "--dirval-cmd",
        default="dirval",
        help="dirval executable/command to run (default: 'dirval').",
    )
    parser.add_argument(
        "--workers",
        type=int,
        default=max(2, multiprocessing.cpu_count()),
        help="Number of parallel validator workers (default: CPU count).",
    )
    parser.add_argument(
        "--timeout",
        type=float,
        default=300.0,
        help="Per-directory dirval timeout in seconds (supports floats; default: 300).",
    )
    parser.add_argument(
        "--yes",
        action="store_true",
        help="Do not prompt; delete failing directories automatically.",
    )
    return parser.parse_args(argv)


def main(argv: Optional[List[str]] = None) -> int:
    args = parse_args(argv)

    try:
        subdirs = discover_target_subdirs(args.backup_id, bool(args.all_mode))
    except Exception as e:
        print(f"ERROR: {e}", file=sys.stderr)
        return 2

    if not subdirs:
        print("No subdirectories to validate. Nothing to do.")
        return 0

    results = parallel_validate(subdirs, args.dirval_cmd, args.workers, args.timeout)
    failures = [r for r in results if not r.ok]

    if not failures:
        print("\nAll directories validated successfully. No action required.")
        return 0

    print(f"\n{len(failures)} directory(ies) failed validation.")
    deleted = process_deletions(failures, assume_yes=args.yes)
    kept = len(failures) - deleted
    print(f"\nSummary: deleted={deleted}, kept={kept}, ok={len(results) - len(failures)}")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```
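The timeout branch in `run_dirval_validate` maps a hung `dirval` run to exit code 124 (the convention used by GNU `timeout`). That behavior can be exercised in isolation; a minimal sketch, with `sleep` standing in for a hanging validator (`run_with_timeout` is an illustrative helper, not part of main.py):

```python
import subprocess

def run_with_timeout(cmd, timeout):
    # Same convention as main.py: return the command's real exit code,
    # or 124 if it exceeds the timeout.
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
        return proc.returncode
    except subprocess.TimeoutExpired:
        return 124

print(run_with_timeout(["sleep", "5"], timeout=0.1))  # 124 -> treated as a failed validation
print(run_with_timeout(["sleep", "0"], timeout=5.0))  # 0 -> validation succeeded
```

Because a timed-out run returns a non-zero code, it flows through the same failure path as a genuinely invalid directory.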
requirements.yml (2 lines, new file)
@@ -0,0 +1,2 @@
```yaml
pkgmgr:
  - dirval
```
test.py (187 lines, new file)
@@ -0,0 +1,187 @@
```python
#!/usr/bin/env python3
import io
import sys
import time
import tempfile
import unittest
import contextlib
from pathlib import Path
from unittest.mock import patch

# Import cleanup main.py
HERE = Path(__file__).resolve().parent
sys.path.insert(0, str(HERE))
import main  # noqa: E402

# Keep tests snappy but reliable:
# - "timeout" dirs sleep 0.3s in fake dirval
# - we pass --timeout 0.1s -> they will time out
FAKE_TIMEOUT_SLEEP = 0.3  # 300 ms
SHORT_TIMEOUT = "0.1"  # 100 ms

FAKE_DIRVAL = f"""#!/usr/bin/env python3
import sys, time, argparse, pathlib

def main():
    p = argparse.ArgumentParser()
    p.add_argument("path")
    p.add_argument("--validate", action="store_true")
    args = p.parse_args()

    d = pathlib.Path(args.path)
    name = d.name.lower()

    # Simulate a slow validation for timeout* dirs
    if "timeout" in name:
        time.sleep({FAKE_TIMEOUT_SLEEP})
        print("Simulated long run...")
        return 0

    # VALID file -> success
    if (d / "VALID").exists():
        print("ok")
        return 0

    # otherwise -> fail
    print("failed")
    return 1

if __name__ == "__main__":
    sys.exit(main())
"""


class CleanupBackupsUsingDirvalTests(unittest.TestCase):
    def setUp(self):
        # temp /Backups root
        self.tmpdir = tempfile.TemporaryDirectory()
        self.backups_root = Path(self.tmpdir.name)

        # fake dirval on disk
        self.dirval = self.backups_root / "dirval"
        self.dirval.write_text(FAKE_DIRVAL, encoding="utf-8")
        self.dirval.chmod(0o755)

        # structure:
        #   /Backups/ID1/backup-docker-to-local/{goodA, badB, timeoutC}
        #   /Backups/ID2/backup-docker-to-local/{goodX, badY}
        self.id1 = self.backups_root / "ID1" / "backup-docker-to-local"
        self.id2 = self.backups_root / "ID2" / "backup-docker-to-local"
        for p in [self.id1, self.id2]:
            p.mkdir(parents=True, exist_ok=True)

        self.goodA = self.id1 / "goodA"
        self.badB = self.id1 / "badB"
        self.timeoutC = self.id1 / "timeoutC"
        self.goodX = self.id2 / "goodX"
        self.badY = self.id2 / "badY"
        for p in [self.goodA, self.badB, self.timeoutC, self.goodX, self.badY]:
            p.mkdir(parents=True, exist_ok=True)

        # mark valids
        (self.goodA / "VALID").write_text("1", encoding="utf-8")
        (self.goodX / "VALID").write_text("1", encoding="utf-8")

        # Capture stdout/stderr
        self._stdout = io.StringIO()
        self._stderr = io.StringIO()
        self.stdout_cm = contextlib.redirect_stdout(self._stdout)
        self.stderr_cm = contextlib.redirect_stderr(self._stderr)
        self.stdout_cm.__enter__()
        self.stderr_cm.__enter__()

        # Patch BACKUPS_ROOT to temp root
        self.backups_patcher = patch.object(main, "BACKUPS_ROOT", self.backups_root)
        self.backups_patcher.start()

    def tearDown(self):
        self.backups_patcher.stop()
        self.stdout_cm.__exit__(None, None, None)
        self.stderr_cm.__exit__(None, None, None)
        self.tmpdir.cleanup()

    def run_main(self, argv):
        start = time.time()
        rc = main.main(argv)
        out = self._stdout.getvalue()
        err = self._stderr.getvalue()
        dur = time.time() - start
        self._stdout.seek(0); self._stdout.truncate(0)
        self._stderr.seek(0); self._stderr.truncate(0)
        return rc, out, err, dur

    def test_id_mode_yes_deletes_failures(self):
        rc, out, err, _ = self.run_main([
            "--id", "ID1",
            "--dirval-cmd", str(self.dirval),
            "--workers", "4",
            "--timeout", SHORT_TIMEOUT,
            "--yes",
        ])
        self.assertEqual(rc, 0, msg=err or out)
        self.assertTrue(self.goodA.exists(), "goodA should remain")
        self.assertFalse(self.badB.exists(), "badB should be deleted")
        self.assertFalse(self.timeoutC.exists(), "timeoutC should be deleted (timeout treated as failure)")
        self.assertIn("Summary:", out)

    def test_all_mode(self):
        rc, out, err, _ = self.run_main([
            "--all",
            "--dirval-cmd", str(self.dirval),
            "--workers", "4",
            "--timeout", SHORT_TIMEOUT,
            "--yes",
        ])
        self.assertEqual(rc, 0, msg=err or out)
        self.assertTrue(self.goodA.exists())
        self.assertFalse(self.badB.exists())
        self.assertFalse(self.timeoutC.exists())
        self.assertTrue(self.goodX.exists())
        self.assertFalse(self.badY.exists())

    def test_dirval_missing_errors(self):
        rc, out, err, _ = self.run_main([
            "--id", "ID1",
            "--dirval-cmd", str(self.backups_root / "nope-dirval"),
            "--timeout", SHORT_TIMEOUT,
            "--yes",
        ])
        self.assertEqual(rc, 0, msg=err or out)
        self.assertIn("dirval not found", out + err)

    def test_no_targets_message(self):
        empty = self.backups_root / "EMPTY" / "backup-docker-to-local"
        empty.mkdir(parents=True, exist_ok=True)
        rc, out, err, _ = self.run_main([
            "--id", "EMPTY",
            "--dirval-cmd", str(self.dirval),
            "--timeout", SHORT_TIMEOUT,
        ])
        self.assertEqual(rc, 0)
        self.assertIn("No subdirectories to validate. Nothing to do.", out)

    def test_interactive_keeps_when_no(self):
        with patch("builtins.input", return_value=""):
            rc, out, err, _ = self.run_main([
                "--id", "ID2",
                "--dirval-cmd", str(self.dirval),
                "--workers", "1",
                "--timeout", SHORT_TIMEOUT,
            ])
        self.assertEqual(rc, 0, msg=err or out)
        self.assertTrue(self.badY.exists(), "badY should be kept without confirmation")
        self.assertTrue(self.goodX.exists())

    def test_interactive_yes_deletes(self):
        with patch("builtins.input", return_value="y"):
            rc, out, err, _ = self.run_main([
                "--id", "ID2",
                "--dirval-cmd", str(self.dirval),
                "--workers", "1",
                "--timeout", SHORT_TIMEOUT,
            ])
        self.assertEqual(rc, 0, msg=err or out)
        self.assertFalse(self.badY.exists(), "badY should be deleted")
        self.assertTrue(self.goodX.exists())


if __name__ == "__main__":
    unittest.main(verbosity=2)
```
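The tests rely on writing a small fake `dirval` executable to disk and pointing `--dirval-cmd` at it, so no real validator is needed. The same pattern works for testing any CLI wrapper on a POSIX system; a minimal self-contained sketch (the fake script below is illustrative, not the one from test.py):

```python
import stat
import subprocess
import tempfile
from pathlib import Path

# Stand-in validator: exit 0 when the argument contains "good", else 1.
FAKE = "#!/usr/bin/env python3\nimport sys\nsys.exit(0 if 'good' in sys.argv[1] else 1)\n"

with tempfile.TemporaryDirectory() as tmp:
    fake = Path(tmp) / "fake-dirval"
    fake.write_text(FAKE, encoding="utf-8")
    fake.chmod(fake.stat().st_mode | stat.S_IXUSR)  # mark executable for the owner
    ok = subprocess.run([str(fake), "good-dir"]).returncode
    bad = subprocess.run([str(fake), "broken-dir"]).returncode

print(ok, bad)  # 0 1
```

Driving success and failure purely through the fake's exit code keeps the tests hermetic and fast.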