Compare commits

...

64 Commits

Author SHA1 Message Date
607102e7f8 chore(claude): add project settings with sandbox and ask rules
Some checks failed
CI / security-codeql (push) Has been cancelled
CI / test-unit (push) Has been cancelled
CI / test-integration (push) Has been cancelled
CI / test-env-virtual (push) Has been cancelled
CI / test-env-nix (push) Has been cancelled
CI / test-e2e (push) Has been cancelled
CI / test-virgin-user (push) Has been cancelled
CI / test-virgin-root (push) Has been cancelled
CI / lint-shell (push) Has been cancelled
CI / lint-python (push) Has been cancelled
CI / lint-docker (push) Has been cancelled
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-12 19:54:34 +02:00
133cf63b9f Release version 1.13.3 2026-03-26 17:10:21 +01:00
6334936e8a fix(ci): resolve workflow and docker scan findings 2026-03-26 16:44:02 +01:00
946965f016 fix(ci): grant reusable workflows security permissions 2026-03-26 16:33:40 +01:00
541a7f679f feat(ci): add docker lint and codeql workflows 2026-03-26 16:30:36 +01:00
128f71745a refactor(ci): organize workflow scripts and gate publish on main 2026-03-26 15:58:18 +01:00
df2ce636c8 fix(ci): make mark-stable main-only and cancel stale runs 2026-03-26 14:57:04 +01:00
3b0dabf2a7 Release version 1.13.2 2026-03-26 12:26:55 +01:00
697370c906 Merge branch 'fix/nix-centos' 2026-03-26 12:26:26 +01:00
bc57172d92 fix(nix): fail fast when bootstrap is unavailable 2026-03-26 07:56:55 +01:00
0e7e23dce5 Release version 1.13.1 2026-03-20 02:57:25 +01:00
9d53f4c6f5 Fix GPG verification runtime handling 2026-03-20 02:51:51 +01:00
a46d85b541 Release version 1.13.0 2026-03-20 01:29:38 +01:00
acaea11eb6 Set CentOS image to latest 2026-03-20 01:28:49 +01:00
056d21a859 Release version 1.12.5 2026-02-24 09:35:39 +01:00
612ba5069d Increase stable gate wait time to 2 hours 2026-02-24 09:34:45 +01:00
551e245218 Release version 1.12.4 2026-02-24 09:32:01 +01:00
814523eac2 Gate stable tag updates on successful main CI 2026-02-24 09:30:24 +01:00
4f2c5013a7 Release version 1.12.3 2026-02-24 08:29:34 +01:00
e01bb8c39a nix: pin flake input to nixos-25.11 and track flake.lock 2026-02-24 08:23:33 +01:00
461a3c334d Release version 1.12.2 2026-02-24 07:40:55 +01:00
e3de46c6a4 Removed infinito-sphinx from the package manager, because it is now managed via Docker in infinito.nexus 2026-02-24 07:40:01 +01:00
b20882f492 Release version 1.12.1
2026-02-14 23:26:17 +01:00
430f21735e fix(nix): prefer distro nix binaries over PATH lookup 2026-02-14 23:23:16 +01:00
acf1b69b70 Release version 1.12.0 2026-02-08 18:26:25 +01:00
7d574e67ec Add concurrency groups to CI and mark-stable workflows
Introduce explicit concurrency settings to the CI and mark-stable
workflows to serialize runs per repository and ref. This prevents
overlapping executions for the same branch or tag and makes pipeline
behavior more predictable during rapid pushes.

https://chatgpt.com/share/6988bef0-1a0c-800f-93df-7a6c1bdc0331
2026-02-08 18:25:31 +01:00
aad6814fc5 Release version 1.11.2 2026-02-08 18:21:50 +01:00
411cd2df66 Remove tag trigger from mark-stable workflow
Stop running the mark-stable workflow on v* tag pushes so it executes
only on branch updates. This prevents duplicate or unintended runs
after version tags are created as part of the release process.

https://chatgpt.com/share/6988bef0-1a0c-800f-93df-7a6c1bdc0331
2026-02-08 18:20:48 +01:00
849d29c044 Release version 1.11.1 2026-02-08 18:18:09 +01:00
0947dea01e Fix release push to send branch and version tag together
Push master and the newly created version tag in a single git push command
so the CI release workflow can detect the tag on HEAD. This aligns the
release script with the new master-based release pipeline and prevents
missed automated releases caused by separate branch and tag pushes.

https://chatgpt.com/share/6988bef0-1a0c-800f-93df-7a6c1bdc0331
2026-02-08 17:51:15 +01:00
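The combined push described above can be sketched as follows; the temp directory, identity, and version are placeholders for illustration, not the project's actual release script:

```shell
# Sketch: push the release branch and the new version tag in ONE command,
# so a single push event carries the tag on HEAD for the CI release workflow.
set -euo pipefail
tmp="$(mktemp -d)"
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work"
cd "$tmp/work"
git config user.email "release@example.com"   # hypothetical identity
git config user.name "release-bot"
git commit -q --allow-empty -m "Release version 1.11.1"
git branch -q -M master
git tag v1.11.1
# One push delivers the branch update and the version tag together:
git push -q origin master v1.11.1
git ls-remote --tags origin v1.11.1
```

With separate `git push origin master` and `git push origin v1.11.1` invocations, the workflow triggered by the branch push would not yet see the tag; the single multi-refspec push avoids that race.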
5d7e1fdbb3 Release version 1.11.0
2026-01-21 01:18:31 +01:00
ac6981ad4d feat(pkgmgr): add slim Docker image target and publish slim variants
- add dedicated `slim` Dockerfile stage based on `full`
- move image cleanup into slim stage via slim.sh
- extend build script to support `--target slim`
- publish pkgmgr-*-slim images for all distros

https://chatgpt.com/share/69701a4e-b000-800f-be7e-162dcb93b1d2
2026-01-21 01:13:59 +01:00
f3a7b69bac Added correct changelog entry 2026-01-20 10:49:39 +01:00
5bcad7f5f3 Release version 1.10.0 2026-01-20 10:44:58 +01:00
d39582d1da feat(docker): introduce slim.sh for safe image cleanup and run it during build
- add verbose distro-aware cleanup script (apk/apt/pacman/dnf/yum)
- remove package manager caches, logs, tmp and user caches
- keep runtime-critical files untouched
- execute cleanup during image build to reduce final size

https://chatgpt.com/share/696f4ab6-fae8-800f-9a46-e73eb8317791
2026-01-20 10:28:16 +01:00
043d389a76 Release version 1.9.5 2026-01-16 10:09:43 +01:00
cc1e543ebc git(core): include cwd and git output in pull_args error
Show the working directory and captured git output when `git pull`
fails via pull_args(). This makes debugging repository-specific
failures (missing upstream, auth issues, detached HEAD, etc.)
significantly easier, especially when pulling multiple repositories.

https://chatgpt.com/share/6969ff2c-ed2c-800f-b506-5834b6b81141
2026-01-16 10:04:40 +01:00
25a0579809 Release version 1.9.4
2026-01-13 14:48:50 +01:00
d4e461bb63 fix(nix): run installer via su instead of sudo to avoid PAM failures in minimal containers
https://chatgpt.com/share/69662b41-2768-800f-a721-292889889547
2026-01-13 14:43:12 +01:00
1864d0700e Release version 1.9.3
2026-01-07 13:44:40 +01:00
a9bd8d202f packaging(arch): make nix optional on non-x86_64 architectures
Arch Linux ARM currently ships a broken/out-of-sync nix package with
unresolvable dependencies. Declare nix as a hard dependency only on
x86_64 and as optional on other architectures, allowing installation
while relying on the official Nix installer bootstrap.

https://chatgpt.com/share/695e483c-1f68-800f-9f94-87d5295b871d
2026-01-07 13:43:32 +01:00
28df54503e Release version 1.9.2
2025-12-21 15:30:22 +01:00
aa489811e3 fix(config): package and load default configs correctly
- Ship default YAML configs inside the pkgmgr package
- Ensure defaults are loaded when no user config exists
- Keep user configs fully respected and non-overwritten
- Fix config update command to copy packaged defaults reliably

https://chatgpt.com/share/6947e74f-573c-800f-b93d-5ed341fcd1a3
2025-12-21 15:26:01 +01:00
f66af0157b Release version 1.9.1
2025-12-21 13:38:58 +01:00
b0b3ccf5aa fix(packaging): stop including legacy pkgmgr.installers package
- Restrict setuptools package discovery to src/ (pkgmgr* only)
- Drop config/ as a Python package mapping (keep config as plain data dir)
- Remove config_defaults fallback paths and use config/ exclusively
- Add unit + integration tests for defaults.yaml loading and CLI update copying

https://chatgpt.com/share/6947e74f-573c-800f-b93d-5ed341fcd1a3
2025-12-21 13:25:38 +01:00
e178afde31 Release version 1.9.0
2025-12-20 14:37:58 +01:00
9802293871 feat(mirror): add remote repository visibility support
* Add mirror visibility subcommand and provision --public flag
* Implement core visibility API with provider support (GitHub, Gitea)
* Extend provider interface and EnsureStatus
* Add unit, integration and e2e tests for visibility handling

https://chatgpt.com/share/6946a44e-4f48-800f-8124-9c0b9b2b6b04
2025-12-20 14:26:55 +01:00
a2138c9985 refactor(mirror): probe remotes with detailed reasons and provision all git mirrors
- Add probe_remote_reachable_detail and improved GitRunError metadata
- Print short failure reasons for unreachable remotes
- Provision each git mirror URL via ensure_remote_repository_for_url

https://chatgpt.com/share/6946956e-f738-800f-a446-e2c8bf5595f4
2025-12-20 13:23:24 +01:00
10998e50ad ci(test-virgin-user): preserve NIX_CONFIG across sudo to avoid GitHub API rate limits
https://chatgpt.com/share/6945565e-f1b0-800f-86d5-8d0083fe3390
2025-12-19 14:42:36 +01:00
a20814cb37 Release version 1.8.7
2025-12-19 14:15:47 +01:00
feb5ba267f refactor(release): move file helpers into files package
https://chatgpt.com/share/69454ef4-e038-800f-a14b-4e633e76f241
2025-12-19 14:11:04 +01:00
591be4ef35 test(release): update pyproject version tests for PEP 621 and RuntimeError handling
- Adjust tests to expect RuntimeError instead of SystemExit
- Add coverage for missing [project] section in pyproject.toml
- Keep spec macro %{?dist} intact in test fixtures
- Minor cleanup and reformatting of test cases

https://chatgpt.com/share/69454836-4698-800f-9d19-7e67e8e789d6
2025-12-19 14:06:33 +01:00
3e6ef0fd68 release: fix pyproject.toml version update for PEP 621 projects
Update version handling to correctly modify [project].version in pyproject.toml.
The previous implementation only matched top-level version assignments and
failed for PEP 621 layouts.

- Restrict update to the [project] section
- Allow leading whitespace in version lines
- Replace sys.exit() with proper exceptions
- Remove unused sys import

https://chatgpt.com/share/69454836-4698-800f-9d19-7e67e8e789d6
2025-12-19 13:42:26 +01:00
3d5c770def Solved ruff F401
2025-12-18 19:16:15 +01:00
f4339a746a Executed 'ruff format --check .'
2025-12-18 14:04:44 +01:00
763f02a9a4 Release version 1.8.6
2025-12-17 23:50:31 +01:00
2eec873a17 Solved Debian Bug
https://chatgpt.com/share/69432655-a948-800f-8c0d-353921cdf644
2025-12-17 23:29:04 +01:00
17ee947930 ci: pass NIX_CONFIG with GitHub token into all test containers
- Add NIX_CONFIG with GitHub access token to all CI test workflows
- Export NIX_CONFIG in Makefile for propagation to test scripts
- Forward NIX_CONFIG explicitly into all docker run invocations
- Prevent GitHub API rate limit errors during Nix-based tests

https://chatgpt.com/share/69432655-a948-800f-8c0d-353921cdf644
2025-12-17 23:29:04 +01:00
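The forwarding described above is a plain environment hand-off; a minimal sketch with a placeholder token (the real workflow injects a GitHub secret and passes `-e NIX_CONFIG` to `docker run`):

```shell
# Hypothetical token value; CI substitutes a real GitHub access token.
export NIX_CONFIG='access-tokens = github.com=ghp_placeholder'
# In CI this is forwarded into containers, e.g.:
#   docker run --rm -e NIX_CONFIG <image> pkgmgr ...
# Demonstrate the hand-off with an ordinary child process:
sh -c 'printenv NIX_CONFIG'
```

Nix reads `NIX_CONFIG` as extra configuration lines, so the authenticated `access-tokens` setting applies to GitHub fetches inside the container and avoids anonymous API rate limits.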
b989bdd4eb Release version 1.8.5 2025-12-17 23:29:04 +01:00
c4da8368d8 --- Release Error --- 2025-12-17 23:28:45 +01:00
997c265cfb refactor(git): introduce GitRunError hierarchy, surface non-repo errors, and improve verification queries
* Replace legacy GitError usage with a clearer exception hierarchy:

  * GitBaseError as the common root for all git-related failures
  * GitRunError for subprocess execution failures
  * GitQueryError for read-only query failures
  * GitCommandError for state-changing command failures
  * GitNotRepositoryError to explicitly signal “not a git repository” situations
* Update git runner to detect “not a git repository” stderr and raise GitNotRepositoryError with rich context (cwd, command, stderr)
* Refactor repository verification to use dedicated query helpers instead of ad-hoc subprocess calls:

  * get_remote_head_commit (ls-remote) for pull mode
  * get_head_commit for local mode
  * get_latest_signing_key (%GK) for signature verification
* Add strict vs best-effort behavior in verify_repository:

  * Best-effort collection for reporting (does not block when no verification config exists)
  * Strict retrieval and explicit error messages when verification is configured
  * Clear failure cases when commit/signing key cannot be determined
* Add new unit tests covering:

  * get_latest_signing_key output stripping and error wrapping
  * get_remote_head_commit parsing, empty output, and error wrapping
  * verify_repository success/failure scenarios and “do not swallow GitNotRepositoryError”
* Adjust imports and exception handling across actions/commands/queries to align with GitRunError-based handling while keeping GitNotRepositoryError uncaught for debugging clarity

https://chatgpt.com/share/6943173c-508c-800f-8879-af75d131c79b
2025-12-17 21:48:03 +01:00
955028288f Release version 1.8.4
2025-12-17 11:20:16 +01:00
866572e252 ci(docker): fix repo mount path for pkgmgr as base layer of Infinito.Nexus
Standardize Docker/CI/test environments to mount pkgmgr at /opt/src/pkgmgr.
This makes the layering explicit: pkgmgr is the lower-level foundation used by
Infinito.Nexus.

Infra-only change (Docker, CI, shell scripts). No runtime or Nix semantics changed.

https://chatgpt.com/share/69427fe7-e288-800f-90a4-c1c3c11a8484
2025-12-17 11:03:02 +01:00
b0a733369e Optimized output for debugging
2025-12-17 10:51:56 +01:00
281 changed files with 6095 additions and 1838 deletions

.claude/settings.json Normal file

@@ -0,0 +1,12 @@
{
"permissions": {
"ask": [
"Skill(update-config)",
"Skill(update-config:*)"
]
},
"sandbox": {
"enabled": true,
"autoAllowBashIfSandboxed": true
}
}


@@ -2,34 +2,72 @@ name: CI
on:
push:
branches-ignore:
- main
branches:
- '**'
pull_request:
permissions:
contents: read
concurrency:
group: global-ci-${{ github.repository }}-${{ github.ref_name }}
cancel-in-progress: false
jobs:
security-codeql:
permissions:
contents: read
packages: read
security-events: write
uses: ./.github/workflows/security-codeql.yml
test-unit:
permissions:
contents: read
uses: ./.github/workflows/test-unit.yml
test-integration:
permissions:
contents: read
uses: ./.github/workflows/test-integration.yml
test-env-virtual:
permissions:
contents: read
uses: ./.github/workflows/test-env-virtual.yml
test-env-nix:
permissions:
contents: read
uses: ./.github/workflows/test-env-nix.yml
test-e2e:
permissions:
contents: read
uses: ./.github/workflows/test-e2e.yml
test-virgin-user:
permissions:
contents: read
uses: ./.github/workflows/test-virgin-user.yml
test-virgin-root:
permissions:
contents: read
uses: ./.github/workflows/test-virgin-root.yml
lint-shell:
permissions:
contents: read
uses: ./.github/workflows/lint-shell.yml
lint-python:
permissions:
contents: read
uses: ./.github/workflows/lint-python.yml
lint-docker:
permissions:
contents: read
security-events: write
uses: ./.github/workflows/lint-docker.yml

.github/workflows/lint-docker.yml vendored Normal file

@@ -0,0 +1,40 @@
name: Docker Linter
on:
workflow_call:
permissions:
contents: read
jobs:
lint-docker:
name: Lint Dockerfile
runs-on: ubuntu-latest
permissions:
contents: read
security-events: write
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run hadolint (produce SARIF)
id: hadolint
continue-on-error: true
uses: hadolint/hadolint-action@2332a7b74a6de0dda2e2221d575162eba76ba5e5
with:
dockerfile: ./Dockerfile
format: sarif
output-file: hadolint-results.sarif
failure-threshold: warning
- name: Upload analysis results to GitHub
if: always()
uses: github/codeql-action/upload-sarif@v4
with:
sarif_file: hadolint-results.sarif
wait-for-processing: true
category: hadolint
- name: Fail if SARIF contains warnings or errors
if: always()
run: python3 src/pkgmgr/github/check_hadolint_sarif.py hadolint-results.sarif


@@ -3,6 +3,9 @@ name: Ruff (Python code sniffer)
on:
workflow_call:
permissions:
contents: read
jobs:
lint-python:
runs-on: ubuntu-latest


@@ -3,6 +3,9 @@ name: ShellCheck
on:
workflow_call:
permissions:
contents: read
jobs:
lint-shell:
runs-on: ubuntu-latest


@@ -1,110 +1,39 @@
name: Mark stable commit
concurrency:
group: mark-stable-${{ github.repository }}-main
cancel-in-progress: true
on:
push:
branches:
- main # still run tests for main
tags:
- 'v*' # run tests for version tags (e.g. v0.9.1)
- 'v*'
jobs:
test-unit:
uses: ./.github/workflows/test-unit.yml
test-integration:
uses: ./.github/workflows/test-integration.yml
test-env-virtual:
uses: ./.github/workflows/test-env-virtual.yml
test-env-nix:
uses: ./.github/workflows/test-env-nix.yml
test-e2e:
uses: ./.github/workflows/test-e2e.yml
test-virgin-user:
uses: ./.github/workflows/test-virgin-user.yml
test-virgin-root:
uses: ./.github/workflows/test-virgin-root.yml
lint-shell:
uses: ./.github/workflows/lint-shell.yml
lint-python:
uses: ./.github/workflows/lint-python.yml
mark-stable:
needs:
- lint-shell
- lint-python
- test-unit
- test-integration
- test-env-nix
- test-env-virtual
- test-e2e
- test-virgin-user
- test-virgin-root
runs-on: ubuntu-latest
# Only run this job if the push is for a version tag (v*)
if: startsWith(github.ref, 'refs/tags/v')
timeout-minutes: 330
permissions:
contents: write # Required to move/update the tag
actions: read
contents: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true # We need all tags for version comparison
fetch-tags: true # We need tags and main history for version comparison
- name: Check whether tagged commit is on main
id: branch-check
run: bash scripts/github/common/check-tagged-commit-on-main.sh
- name: Wait for CI success on main for this commit
if: steps.branch-check.outputs.is_on_main == 'true'
env:
GH_TOKEN: ${{ github.token }}
run: bash scripts/github/mark-stable/wait-for-main-ci-success.sh
- name: Move 'stable' tag only if this version is the highest
run: |
set -euo pipefail
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
echo "Ref: $GITHUB_REF"
echo "SHA: $GITHUB_SHA"
VERSION="${GITHUB_REF#refs/tags/}"
echo "Current version tag: ${VERSION}"
echo "Collecting all version tags..."
ALL_V_TAGS="$(git tag --list 'v*' || true)"
if [[ -z "${ALL_V_TAGS}" ]]; then
echo "No version tags found. Skipping stable update."
exit 0
fi
echo "All version tags:"
echo "${ALL_V_TAGS}"
# Determine highest version using natural version sorting
LATEST_TAG="$(printf '%s\n' ${ALL_V_TAGS} | sort -V | tail -n1)"
echo "Highest version tag: ${LATEST_TAG}"
if [[ "${VERSION}" != "${LATEST_TAG}" ]]; then
echo "Current version ${VERSION} is NOT the highest version."
echo "Stable tag will NOT be updated."
exit 0
fi
echo "Current version ${VERSION} IS the highest version."
echo "Updating 'stable' tag..."
# Delete existing stable tag (local + remote)
git tag -d stable 2>/dev/null || true
git push origin :refs/tags/stable || true
# Create new stable tag
git tag stable "$GITHUB_SHA"
git push origin stable
echo "✅ Stable tag updated to ${VERSION}."
if: steps.branch-check.outputs.is_on_main == 'true'
run: bash scripts/github/mark-stable/mark-stable-if-highest-version.sh
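The inlined tag-moving logic removed above now lives in `mark-stable-if-highest-version.sh`. Its core decision — only move `stable` when the pushed tag is the highest existing version — relies on natural version sorting (`sort -V`). That comparison can be sketched in Python (a hypothetical helper mirroring the shell shown in the diff, not the extracted script itself):

```python
def version_key(tag: str):
    """Natural sort key for tags like 'v1.13.3' (mirrors `sort -V`)."""
    return [int(part) for part in tag.lstrip("v").split(".")]


def should_move_stable(current: str, all_tags: list) -> bool:
    """Move 'stable' only if `current` is the highest version tag.

    Matches the removed shell: no tags -> skip; otherwise compare
    against the maximum under natural version ordering.
    """
    if not all_tags:
        return False
    highest = max(all_tags, key=version_key)
    return version_key(current) == version_key(highest)


# v1.13.3 outranks v1.9.0 even though "9" > "13" lexicographically
print(should_move_stable("v1.13.3", ["v1.9.0", "v1.12.5", "v1.13.3"]))
```

This is why plain string comparison is not enough here: a lexicographic sort would rank `v1.9.0` above `v1.13.3` and move `stable` to the wrong commit.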


@@ -21,44 +21,30 @@ jobs:
fetch-depth: 0
- name: Checkout workflow_run commit and refresh tags
run: |
set -euo pipefail
git checkout -f "${{ github.event.workflow_run.head_sha }}"
git fetch --tags --force
git tag --list 'stable' 'v*' --sort=version:refname | tail -n 20
env:
WORKFLOW_RUN_SHA: ${{ github.event.workflow_run.head_sha }}
run: bash scripts/github/publish-containers/checkout-workflow-run-commit.sh
- name: Check whether tagged commit is on main
id: branch-check
env:
TARGET_SHA: ${{ github.event.workflow_run.head_sha }}
run: bash scripts/github/common/check-tagged-commit-on-main.sh
- name: Compute version and stable flag
id: info
run: |
set -euo pipefail
SHA="$(git rev-parse HEAD)"
V_TAG="$(git tag --points-at "${SHA}" --list 'v*' | sort -V | tail -n1)"
if [[ -z "${V_TAG}" ]]; then
echo "No version tag found for ${SHA}. Skipping publish."
echo "should_publish=false" >> "$GITHUB_OUTPUT"
exit 0
fi
VERSION="${V_TAG#v}"
STABLE_SHA="$(git rev-parse -q --verify refs/tags/stable^{commit} 2>/dev/null || true)"
IS_STABLE=false
[[ -n "${STABLE_SHA}" && "${STABLE_SHA}" == "${SHA}" ]] && IS_STABLE=true
echo "should_publish=true" >> "$GITHUB_OUTPUT"
echo "version=${VERSION}" >> "$GITHUB_OUTPUT"
echo "is_stable=${IS_STABLE}" >> "$GITHUB_OUTPUT"
if: steps.branch-check.outputs.is_on_main == 'true'
run: bash scripts/github/publish-containers/compute-publish-container-info.sh
- name: Set up Docker Buildx
if: ${{ steps.info.outputs.should_publish == 'true' }}
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f
with:
use: true
- name: Login to GHCR
if: ${{ steps.info.outputs.should_publish == 'true' }}
uses: docker/login-action@v3
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -66,9 +52,8 @@ jobs:
- name: Publish all images
if: ${{ steps.info.outputs.should_publish == 'true' }}
run: |
set -euo pipefail
OWNER="${{ github.repository_owner }}" \
VERSION="${{ steps.info.outputs.version }}" \
IS_STABLE="${{ steps.info.outputs.is_stable }}" \
bash scripts/build/publish.sh
env:
OWNER: ${{ github.repository_owner }}
VERSION: ${{ steps.info.outputs.version }}
IS_STABLE: ${{ steps.info.outputs.is_stable }}
run: bash scripts/github/publish-containers/publish-container-images.sh
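The removed inline step above shows the logic that `compute-publish-container-info.sh` now encapsulates: publish only when a `v*` tag points at the commit, and mark the build stable when the `stable` tag resolves to the same SHA. A sketch of that decision (a hypothetical Python rendering of the shell logic visible in the diff):

```python
def publish_info(head_sha, v_tags_at_head, stable_sha):
    """Decide publish outputs, mirroring the inline shell removed above."""
    if not v_tags_at_head:
        # No v* tag points at this commit -> skip publishing entirely
        return {"should_publish": False}
    # Pick the highest v* tag at HEAD, as `sort -V | tail -n1` would
    v_tag = max(
        v_tags_at_head,
        key=lambda t: [int(p) for p in t.lstrip("v").split(".")],
    )
    return {
        "should_publish": True,
        "version": v_tag.lstrip("v"),         # VERSION="${V_TAG#v}"
        "is_stable": stable_sha == head_sha,  # stable tag on same commit
    }
```

The downstream steps then consume these values via `steps.info.outputs.*`, and with the env-based invocation the publish script no longer needs variables spliced into the `run:` line.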

.github/workflows/security-codeql.yml

@@ -0,0 +1,47 @@
name: CodeQL Advanced
on:
workflow_call:
jobs:
analyze:
name: Check security
runs-on: ubuntu-latest
permissions:
security-events: write
packages: read
contents: read
strategy:
fail-fast: false
matrix:
include:
- language: actions
build-mode: none
- language: python
build-mode: none
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Initialize CodeQL
uses: github/codeql-action/init@v4
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
queries: security-extended,security-and-quality
- name: Run manual build steps
if: matrix.build-mode == 'manual'
shell: bash
run: |
echo 'If you are using a "manual" build mode for one or more of the' \
'languages you are analyzing, replace this with the commands to build' \
'your code.'
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v4
with:
category: "/language:${{ matrix.language }}"


@@ -3,6 +3,9 @@ name: Test End-To-End
on:
workflow_call:
permissions:
contents: read
jobs:
test-e2e:
runs-on: ubuntu-latest
@@ -11,7 +14,9 @@ jobs:
fail-fast: false
matrix:
distro: [arch, debian, ubuntu, fedora, centos]
env:
NIX_CONFIG: |
access-tokens = github.com=${{ secrets.GITHUB_TOKEN }}
steps:
- name: Checkout repository
uses: actions/checkout@v4


@@ -3,6 +3,9 @@ name: Test Virgin Nix (flake only)
on:
workflow_call:
permissions:
contents: read
jobs:
test-env-nix:
runs-on: ubuntu-latest
@@ -12,7 +15,9 @@ jobs:
fail-fast: false
matrix:
distro: [arch, debian, ubuntu, fedora, centos]
env:
NIX_CONFIG: |
access-tokens = github.com=${{ secrets.GITHUB_TOKEN }}
steps:
- name: Checkout repository
uses: actions/checkout@v4


@@ -3,6 +3,9 @@ name: Test OS Containers
on:
workflow_call:
permissions:
contents: read
jobs:
test-env-virtual:
runs-on: ubuntu-latest
@@ -11,7 +14,9 @@ jobs:
fail-fast: false
matrix:
distro: [arch, debian, ubuntu, fedora, centos]
env:
NIX_CONFIG: |
access-tokens = github.com=${{ secrets.GITHUB_TOKEN }}
steps:
- name: Checkout repository
uses: actions/checkout@v4


@@ -3,11 +3,16 @@ name: Test Code Integration
on:
workflow_call:
permissions:
contents: read
jobs:
test-integration:
runs-on: ubuntu-latest
timeout-minutes: 30
env:
NIX_CONFIG: |
access-tokens = github.com=${{ secrets.GITHUB_TOKEN }}
steps:
- name: Checkout repository
uses: actions/checkout@v4


@@ -3,11 +3,16 @@ name: Test Units
on:
workflow_call:
permissions:
contents: read
jobs:
test-unit:
runs-on: ubuntu-latest
timeout-minutes: 30
env:
NIX_CONFIG: |
access-tokens = github.com=${{ secrets.GITHUB_TOKEN }}
steps:
- name: Checkout repository
uses: actions/checkout@v4


@@ -3,6 +3,9 @@ name: Test Virgin Root
on:
workflow_call:
permissions:
contents: read
jobs:
test-virgin-root:
runs-on: ubuntu-latest
@@ -11,7 +14,9 @@ jobs:
fail-fast: false
matrix:
distro: [arch, debian, ubuntu, fedora, centos]
env:
NIX_CONFIG: |
access-tokens = github.com=${{ secrets.GITHUB_TOKEN }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
@@ -19,27 +24,26 @@ jobs:
- name: Show Docker version
run: docker version
# 🔹 BUILD virgin image if missing
- name: Build virgin container (${{ matrix.distro }})
run: |
set -euo pipefail
PKGMGR_DISTRO="${{ matrix.distro }}" make build-missing-virgin
# 🔹 RUN test inside virgin image
- name: Virgin ${{ matrix.distro }} pkgmgr test (root)
run: |
set -euo pipefail
docker run --rm \
-v "$PWD":/src \
-v "$PWD":/opt/src/pkgmgr \
-v pkgmgr_repos:/root/Repositories \
-v pkgmgr_pip_cache:/root/.cache/pip \
-w /src \
-e NIX_CONFIG="${NIX_CONFIG}" \
-w /opt/src/pkgmgr \
"pkgmgr-${{ matrix.distro }}-virgin" \
bash -lc '
set -euo pipefail
git config --global --add safe.directory /src
git config --global --add safe.directory /opt/src/pkgmgr
make install
make setup
@@ -50,5 +54,5 @@ jobs:
pkgmgr version pkgmgr
echo ">>> Running Nix-based: nix run .#pkgmgr -- version pkgmgr"
nix run /src#pkgmgr -- version pkgmgr
nix run /opt/src/pkgmgr#pkgmgr -- version pkgmgr
'


@@ -3,6 +3,9 @@ name: Test Virgin User
on:
workflow_call:
permissions:
contents: read
jobs:
test-virgin-user:
runs-on: ubuntu-latest
@@ -11,7 +14,9 @@ jobs:
fail-fast: false
matrix:
distro: [arch, debian, ubuntu, fedora, centos]
env:
NIX_CONFIG: |
access-tokens = github.com=${{ secrets.GITHUB_TOKEN }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
@@ -19,20 +24,19 @@ jobs:
- name: Show Docker version
run: docker version
# 🔹 BUILD virgin image if missing
- name: Build virgin container (${{ matrix.distro }})
run: |
set -euo pipefail
PKGMGR_DISTRO="${{ matrix.distro }}" make build-missing-virgin
# 🔹 RUN test inside virgin image as non-root
- name: Virgin ${{ matrix.distro }} pkgmgr test (user)
run: |
set -euo pipefail
docker run --rm \
-v "$PWD":/src \
-w /src \
-v "$PWD":/opt/src/pkgmgr \
-e NIX_CONFIG="${NIX_CONFIG}" \
-w /opt/src/pkgmgr \
"pkgmgr-${{ matrix.distro }}-virgin" \
bash -lc '
set -euo pipefail
@@ -42,23 +46,25 @@ jobs:
useradd -m dev
echo "dev ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/dev
chmod 0440 /etc/sudoers.d/dev
chown -R dev:dev /src
chown -R dev:dev /opt/src/pkgmgr
mkdir -p /nix/store /nix/var/nix /nix/var/log/nix /nix/var/nix/profiles
chown -R dev:dev /nix
chmod 0755 /nix
chmod 1777 /nix/store
sudo -H -u dev env \
HOME=/home/dev \
NIX_CONFIG="$NIX_CONFIG" \
PKGMGR_DISABLE_NIX_FLAKE_INSTALLER=1 \
bash -lc "
set -euo pipefail
cd /opt/src/pkgmgr
make setup-venv
. \"\$HOME/.venvs/pkgmgr/bin/activate\"
sudo -H -u dev env HOME=/home/dev PKGMGR_DISABLE_NIX_FLAKE_INSTALLER=1 bash -lc "
set -euo pipefail
cd /src
pkgmgr version pkgmgr
make setup-venv
. \"\$HOME/.venvs/pkgmgr/bin/activate\"
pkgmgr version pkgmgr
export NIX_REMOTE=local
nix run /src#pkgmgr -- version pkgmgr
"
export NIX_REMOTE=local
nix run /opt/src/pkgmgr#pkgmgr -- version pkgmgr
"
'

.gitignore

@@ -24,10 +24,9 @@ package-manager-*
.DS_Store
Thumbs.db
# Nix Cache to speed up tests
# Nix cache to speed up tests
.nix/
.nix-dev-installed
flake.lock
# Ignore logs
*.log


@@ -1,3 +1,137 @@
## [1.13.3] - 2026-03-26
* CI pipelines now include automated security scanning (CodeQL, Docker lint), increasing detection of vulnerabilities and misconfigurations
* Workflow permissions were tightened and fixed, ensuring secure and reliable execution of reusable workflows
* Publishing and “stable” tagging are now restricted to the `main` branch, preventing accidental releases from other branches
* Stale CI runs are automatically cancelled, reducing wasted resources and speeding up feedback cycles
* Overall CI reliability and security posture improved, with fewer false positives and more consistent pipeline results
## [1.13.2] - 2026-03-26
* Fail fast with a clear error when the Nix bootstrap or nix binary is unavailable instead of continuing with a broken startup path.
## [1.13.1] - 2026-03-20
* Fixed misleading GPG verification failures by adding explicit git and gnupg runtime dependencies and surfacing signing-key lookup errors accurately.
## [1.13.0] - 2026-03-20
* Set CentOS docker image to latest
## [1.12.5] - 2026-02-24
* The stable-tag workflow now waits up to two hours for a successful main-branch CI run on the same commit before updating stable.
## [1.12.4] - 2026-02-24
* The release pipeline now updates the stable tag only for v* tags after a successful CI run on main for the same commit, while avoiding duplicate test executions.
## [1.12.3] - 2026-02-24
* Stabilized Nix-based builds by switching to nixos-25.11 and committing flake.lock, ensuring reproducible pkgmgr test/runtime environments (with pip) and avoiding transient sphinx/Python 3.11 breakage.
## [1.12.2] - 2026-02-24
* Removed infinito-sphinx package
## [1.12.1] - 2026-02-14
* pkgmgr now prefers distro-managed nix binaries on Arch before profile/PATH resolution, preventing libllhttp mismatch failures after pacman system upgrades.
## [1.12.0] - 2026-02-08
* Adds explicit concurrency groups to the CI and mark-stable workflows to prevent overlapping runs on the same branch and make pipeline execution more predictable.
## [1.11.2] - 2026-02-08
* Removes the v* tag trigger from the mark-stable workflow so it runs only on branch pushes and avoids duplicate executions during releases.
## [1.11.1] - 2026-02-08
* Implements pushing the branch and the version tag together in a single command so the CI release workflow can reliably detect the version tag on HEAD.
## [1.11.0] - 2026-01-21
* Adds a dedicated slim Docker image for pkgmgr and publishes slim variants for all supported distros.
## [1.10.0] - 2026-01-20
* Introduce safe verbose image cleanup to reduce Docker image size and build artifacts
## [1.9.5] - 2026-01-16
* Release patch: improve git pull error diagnostics
## [1.9.4] - 2026-01-13
* fix(ci): replace sudo with su for user switching to avoid PAM failures in minimal container images
## [1.9.3] - 2026-01-07
* Made the Nix dependency optional on non-x86_64 architectures to avoid broken Arch Linux ARM repository packages.
## [1.9.2] - 2025-12-21
* Default configuration files are now packaged and loaded correctly when no user config exists, while fully preserving custom user configurations.
## [1.9.1] - 2025-12-21
* Fixed installation issues and improved loading of default configuration files.
## [1.9.0] - 2025-12-20
* New ***mirror visibility*** command to set remote Git repositories to ***public*** or ***private***.
* New ***--public*** flag for ***mirror provision*** to create repositories and immediately make them public.
* All configured git mirrors are now provisioned.
## [1.8.7] - 2025-12-19
* **Release version updates now correctly modify ***pyproject.toml*** files that follow PEP 621**, ensuring the ***[project].version*** field is updated as expected.
* **Invalid or incomplete ***pyproject.toml*** files are now handled gracefully** with clear error messages instead of abrupt process termination.
* **RPM spec files remain compatible during releases**: existing macros such as ***%{?dist}*** are preserved and no longer accidentally modified.
## [1.8.6] - 2025-12-17
* Prevent Rate Limits during GitHub Nix Setups
## [1.8.5] - 2025-12-17
* Clearer Git error handling, especially when a directory is not a Git repository.
* More reliable repository verification with improved commit and GPG signature checks.
* Better error messages and overall robustness when working with Git-based workflows.
## [1.9.0] - 2025-12-17
* Automated release.
## [1.8.4] - 2025-12-17
* Made pkgmgr's base-layer role explicit by standardizing the Docker/CI mount path to *`/opt/src/pkgmgr`*.
## [1.8.3] - 2025-12-16
* MIRRORS now supports plain URL entries, ensuring metadata-only sources like PyPI are recorded without ever being added to the Git configuration.


@@ -33,6 +33,7 @@ CMD ["bash"]
# - inherits from virgin
# - builds + installs pkgmgr
# - sets entrypoint + default cmd
# - NOTE: does NOT run slim.sh (that is done in slim stage)
# ============================================================
FROM virgin AS full
@@ -42,14 +43,25 @@ WORKDIR /build
COPY . .
# Build and install distro-native package-manager package
RUN set -euo pipefail; \
RUN set -eu; \
echo "Building and installing package-manager via make install..."; \
make install; \
cd /; rm -rf /build
rm -rf /build
# Entry point
COPY scripts/docker/entry.sh /usr/local/bin/docker-entry.sh
WORKDIR /src
WORKDIR /opt/src/pkgmgr
ENTRYPOINT ["/usr/local/bin/docker-entry.sh"]
CMD ["pkgmgr", "--help"]
# ============================================================
# Target: slim
# - based on full
# - runs slim.sh
# ============================================================
FROM full AS slim
COPY scripts/docker/slim.sh /usr/local/bin/slim.sh
RUN chmod +x /usr/local/bin/slim.sh && /usr/local/bin/slim.sh


@@ -10,6 +10,10 @@ DISTROS ?= arch debian ubuntu fedora centos
PKGMGR_DISTRO ?= arch
export PKGMGR_DISTRO
# Nix Config Variable (To avoid rate limit)
NIX_CONFIG ?=
export NIX_CONFIG
# ------------------------------------------------------------
# Base images
# (kept for documentation/reference; actual build logic is in scripts/build)

flake.lock

@@ -0,0 +1,27 @@
{
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1771714954,
"narHash": "sha256-nhZJPnBavtu40/L2aqpljrfUNb2rxmWTmSjK2c9UKds=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "afbbf774e2087c3d734266c22f96fca2e78d3620",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-25.11",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"nixpkgs": "nixpkgs"
}
}
},
"root": "root",
"version": 7
}


@@ -6,7 +6,7 @@
};
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.11";
};
outputs = { self, nixpkgs }:
@@ -32,7 +32,7 @@
rec {
pkgmgr = pyPkgs.buildPythonApplication {
pname = "package-manager";
version = "1.8.3";
version = "1.13.3";
# Use the git repo as source
src = ./.;
@@ -51,6 +51,8 @@
pyPkgs.pyyaml
pyPkgs.jinja2
pyPkgs.pip
pkgs.git
pkgs.gnupg
];
doCheck = false;
@@ -87,6 +89,7 @@
buildInputs = [
pythonWithDeps
pkgs.git
pkgs.gnupg
ansiblePkg
];


@@ -1,15 +1,25 @@
# Maintainer: Kevin Veen-Birkenbach <info@veen.world>
pkgname=package-manager
pkgver=1.8.3
pkgver=1.13.3
pkgrel=1
pkgdesc="Local-flake wrapper for Kevin's package-manager (Nix-based)."
arch=('any')
url="https://github.com/kevinveenbirkenbach/package-manager"
license=('MIT')
# Nix is the only runtime dependency; Python is provided by the Nix closure.
depends=('nix')
# Nix is required at runtime to run pkgmgr via the flake.
# On Arch x86_64 we can depend on the distro package.
# On other arches (e.g. ARM) we only declare it as optional because the
# repo package may be broken/out-of-sync; installation can be done via the official installer.
depends=()
optdepends=('nix: required to run pkgmgr via flake')
if [[ "${CARCH}" == "x86_64" ]]; then
depends=('nix')
optdepends=()
fi
makedepends=('rsync')
install=${pkgname}.install


@@ -1,9 +1,9 @@
post_install() {
/usr/lib/package-manager/nix/init.sh || echo ">>> ERROR: /usr/lib/package-manager/nix/init.sh not found or not executable."
/usr/lib/package-manager/nix/init.sh
}
post_upgrade() {
/usr/lib/package-manager/nix/init.sh || echo ">>> ERROR: /usr/lib/package-manager/nix/init.sh not found or not executable."
/usr/lib/package-manager/nix/init.sh
}
post_remove() {


@@ -1,3 +1,163 @@
package-manager (1.13.3-1) unstable; urgency=medium
* CI pipelines now include automated security scanning (CodeQL, Docker lint), increasing detection of vulnerabilities and misconfigurations
* Workflow permissions were tightened and fixed, ensuring secure and reliable execution of reusable workflows
* Publishing and “stable” tagging are now restricted to the `main` branch, preventing accidental releases from other branches
* Stale CI runs are automatically cancelled, reducing wasted resources and speeding up feedback cycles
* Overall CI reliability and security posture improved, with fewer false positives and more consistent pipeline results
-- Kevin Veen-Birkenbach <kevin@veen.world> Thu, 26 Mar 2026 17:10:21 +0100
package-manager (1.13.2-1) unstable; urgency=medium
* Fail fast with a clear error when the Nix bootstrap or nix binary is unavailable instead of continuing with a broken startup path.
-- Kevin Veen-Birkenbach <kevin@veen.world> Thu, 26 Mar 2026 12:26:55 +0100
package-manager (1.13.1-1) unstable; urgency=medium
* Fixed misleading GPG verification failures by adding explicit git and gnupg runtime dependencies and surfacing signing-key lookup errors accurately.
-- Kevin Veen-Birkenbach <kevin@veen.world> Fri, 20 Mar 2026 02:57:25 +0100
package-manager (1.13.0-1) unstable; urgency=medium
* Set CentOS docker image to latest
-- Kevin Veen-Birkenbach <kevin@veen.world> Fri, 20 Mar 2026 01:29:38 +0100
package-manager (1.12.5-1) unstable; urgency=medium
* The stable-tag workflow now waits up to two hours for a successful main-branch CI run on the same commit before updating stable.
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 24 Feb 2026 09:35:39 +0100
package-manager (1.12.4-1) unstable; urgency=medium
* The release pipeline now updates the stable tag only for v* tags after a successful CI run on main for the same commit, while avoiding duplicate test executions.
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 24 Feb 2026 09:32:01 +0100
package-manager (1.12.3-1) unstable; urgency=medium
* Stabilized Nix-based builds by switching to nixos-25.11 and committing flake.lock, ensuring reproducible pkgmgr test/runtime environments (with pip) and avoiding transient sphinx/Python 3.11 breakage.
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 24 Feb 2026 08:29:34 +0100
package-manager (1.12.2-1) unstable; urgency=medium
* Removed infinito-sphinx package
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 24 Feb 2026 07:40:55 +0100
package-manager (1.12.1-1) unstable; urgency=medium
* pkgmgr now prefers distro-managed nix binaries on Arch before profile/PATH resolution, preventing libllhttp mismatch failures after pacman system upgrades.
-- Kevin Veen-Birkenbach <kevin@veen.world> Sat, 14 Feb 2026 23:26:17 +0100
package-manager (1.12.0-1) unstable; urgency=medium
* Adds explicit concurrency groups to the CI and mark-stable workflows to prevent overlapping runs on the same branch and make pipeline execution more predictable.
-- Kevin Veen-Birkenbach <kevin@veen.world> Sun, 08 Feb 2026 18:26:25 +0100
package-manager (1.11.2-1) unstable; urgency=medium
* Removes the v* tag trigger from the mark-stable workflow so it runs only on branch pushes and avoids duplicate executions during releases.
-- Kevin Veen-Birkenbach <kevin@veen.world> Sun, 08 Feb 2026 18:21:50 +0100
package-manager (1.11.1-1) unstable; urgency=medium
* Implements pushing the branch and the version tag together in a single command so the CI release workflow can reliably detect the version tag on HEAD.
-- Kevin Veen-Birkenbach <kevin@veen.world> Sun, 08 Feb 2026 18:18:09 +0100
package-manager (1.11.0-1) unstable; urgency=medium
* Adds a dedicated slim Docker image for pkgmgr and publishes slim variants for all supported distros.
-- Kevin Veen-Birkenbach <kevin@veen.world> Wed, 21 Jan 2026 01:18:31 +0100
package-manager (1.10.0-1) unstable; urgency=medium
* Automated release.
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 20 Jan 2026 10:44:58 +0100
package-manager (1.9.5-1) unstable; urgency=medium
* Release patch: improve git pull error diagnostics
-- Kevin Veen-Birkenbach <kevin@veen.world> Fri, 16 Jan 2026 10:09:43 +0100
package-manager (1.9.4-1) unstable; urgency=medium
* fix(ci): replace sudo with su for user switching to avoid PAM failures in minimal container images
-- Kevin Veen-Birkenbach <kevin@veen.world> Tue, 13 Jan 2026 14:48:50 +0100
package-manager (1.9.3-1) unstable; urgency=medium
* Made the Nix dependency optional on non-x86_64 architectures to avoid broken Arch Linux ARM repository packages.
-- Kevin Veen-Birkenbach <kevin@veen.world> Wed, 07 Jan 2026 13:44:40 +0100
package-manager (1.9.2-1) unstable; urgency=medium
* Default configuration files are now packaged and loaded correctly when no user config exists, while fully preserving custom user configurations.
-- Kevin Veen-Birkenbach <kevin@veen.world> Sun, 21 Dec 2025 15:30:22 +0100
package-manager (1.9.1-1) unstable; urgency=medium
* Fixed installation issues and improved loading of default configuration files.
-- Kevin Veen-Birkenbach <kevin@veen.world> Sun, 21 Dec 2025 13:38:58 +0100
package-manager (1.9.0-1) unstable; urgency=medium
* New ***mirror visibility*** command to set remote Git repositories to ***public*** or ***private***.
* New ***--public*** flag for ***mirror provision*** to create repositories and immediately make them public.
* All configured git mirrors are now provisioned.
-- Kevin Veen-Birkenbach <kevin@veen.world> Sat, 20 Dec 2025 14:37:58 +0100
package-manager (1.8.7-1) unstable; urgency=medium
* **Release version updates now correctly modify ***pyproject.toml*** files that follow PEP 621**, ensuring the ***[project].version*** field is updated as expected.
* **Invalid or incomplete ***pyproject.toml*** files are now handled gracefully** with clear error messages instead of abrupt process termination.
* **RPM spec files remain compatible during releases**: existing macros such as ***%{?dist}*** are preserved and no longer accidentally modified.
-- Kevin Veen-Birkenbach <kevin@veen.world> Fri, 19 Dec 2025 14:15:47 +0100
package-manager (1.8.6-1) unstable; urgency=medium
* Prevent Rate Limits during GitHub Nix Setups
-- Kevin Veen-Birkenbach <kevin@veen.world> Wed, 17 Dec 2025 23:50:31 +0100
package-manager (1.8.5-1) unstable; urgency=medium
* Clearer Git error handling, especially when a directory is not a Git repository.
* More reliable repository verification with improved commit and GPG signature checks.
* Better error messages and overall robustness when working with Git-based workflows.
-- Kevin Veen-Birkenbach <kevin@veen.world> Wed, 17 Dec 2025 22:15:48 +0100
package-manager (1.9.0-1) unstable; urgency=medium
* Automated release.
-- Kevin Veen-Birkenbach <kevin@veen.world> Wed, 17 Dec 2025 22:10:31 +0100
package-manager (1.8.4-1) unstable; urgency=medium
* Made pkgmgr's base-layer role explicit by standardizing the Docker/CI mount path to *`/opt/src/pkgmgr`*.
-- Kevin Veen-Birkenbach <kevin@veen.world> Wed, 17 Dec 2025 11:20:16 +0100
package-manager (1.8.3-1) unstable; urgency=medium
* MIRRORS now supports plain URL entries, ensuring metadata-only sources like PyPI are recorded without ever being added to the Git configuration.


@@ -3,7 +3,7 @@ set -e
case "$1" in
configure)
/usr/lib/package-manager/nix/init.sh || echo ">>> ERROR: /usr/lib/package-manager/nix/init.sh not found or not executable."
/usr/lib/package-manager/nix/init.sh
;;
esac


@@ -1,5 +1,5 @@
Name: package-manager
Version: 1.8.3
Version: 1.13.3
Release: 1%{?dist}
Summary: Wrapper that runs Kevin's package-manager via Nix flake
@@ -62,7 +62,7 @@ rm -rf \
%{buildroot}/usr/lib/package-manager/.gitkeep || true
%post
/usr/lib/package-manager/nix/init.sh || echo ">>> ERROR: /usr/lib/package-manager/nix/init.sh not found or not executable."
/usr/lib/package-manager/nix/init.sh
%postun
echo ">>> package-manager removed. Nix itself was not removed."
@@ -74,6 +74,91 @@ echo ">>> package-manager removed. Nix itself was not removed."
/usr/lib/package-manager/
%changelog
* Thu Mar 26 2026 Kevin Veen-Birkenbach <kevin@veen.world> - 1.13.3-1
- CI pipelines now include automated security scanning (CodeQL, Docker lint), increasing detection of vulnerabilities and misconfigurations
- Workflow permissions were tightened and fixed, ensuring secure and reliable execution of reusable workflows
- Publishing and “stable” tagging are now restricted to the `main` branch, preventing accidental releases from other branches
- Stale CI runs are automatically cancelled, reducing wasted resources and speeding up feedback cycles
- Overall CI reliability and security posture improved, with fewer false positives and more consistent pipeline results
* Thu Mar 26 2026 Kevin Veen-Birkenbach <kevin@veen.world> - 1.13.2-1
- Fail fast with a clear error when the Nix bootstrap or nix binary is unavailable instead of continuing with a broken startup path.
* Fri Mar 20 2026 Kevin Veen-Birkenbach <kevin@veen.world> - 1.13.1-1
- Fixed misleading GPG verification failures by adding explicit git and gnupg runtime dependencies and surfacing signing-key lookup errors accurately.
* Fri Mar 20 2026 Kevin Veen-Birkenbach <kevin@veen.world> - 1.13.0-1
- Set CentOS docker image to latest
* Tue Feb 24 2026 Kevin Veen-Birkenbach <kevin@veen.world> - 1.12.5-1
- The stable-tag workflow now waits up to two hours for a successful main-branch CI run on the same commit before updating stable.
* Tue Feb 24 2026 Kevin Veen-Birkenbach <kevin@veen.world> - 1.12.4-1
- The release pipeline now updates the stable tag only for v* tags after a successful CI run on main for the same commit, while avoiding duplicate test executions.
* Tue Feb 24 2026 Kevin Veen-Birkenbach <kevin@veen.world> - 1.12.3-1
- Stabilized Nix-based builds by switching to nixos-25.11 and committing flake.lock, ensuring reproducible pkgmgr test/runtime environments (with pip) and avoiding transient sphinx/Python 3.11 breakage.
* Tue Feb 24 2026 Kevin Veen-Birkenbach <kevin@veen.world> - 1.12.2-1
- Removed infinito-sphinx package
* Sat Feb 14 2026 Kevin Veen-Birkenbach <kevin@veen.world> - 1.12.1-1
- pkgmgr now prefers distro-managed nix binaries on Arch before profile/PATH resolution, preventing libllhttp mismatch failures after pacman system upgrades.
* Sun Feb 08 2026 Kevin Veen-Birkenbach <kevin@veen.world> - 1.12.0-1
- Adds explicit concurrency groups to the CI and mark-stable workflows to prevent overlapping runs on the same branch and make pipeline execution more predictable.
* Sun Feb 08 2026 Kevin Veen-Birkenbach <kevin@veen.world> - 1.11.2-1
- Removes the v* tag trigger from the mark-stable workflow so it runs only on branch pushes and avoids duplicate executions during releases.
* Sun Feb 08 2026 Kevin Veen-Birkenbach <kevin@veen.world> - 1.11.1-1
- Implements pushing the branch and the version tag together in a single command so the CI release workflow can reliably detect the version tag on HEAD.
* Wed Jan 21 2026 Kevin Veen-Birkenbach <kevin@veen.world> - 1.11.0-1
- Adds a dedicated slim Docker image for pkgmgr and publishes slim variants for all supported distros.
* Tue Jan 20 2026 Kevin Veen-Birkenbach <kevin@veen.world> - 1.10.0-1
- Automated release.
* Fri Jan 16 2026 Kevin Veen-Birkenbach <kevin@veen.world> - 1.9.5-1
- Release patch: improve git pull error diagnostics
* Tue Jan 13 2026 Kevin Veen-Birkenbach <kevin@veen.world> - 1.9.4-1
- fix(ci): replace sudo with su for user switching to avoid PAM failures in minimal container images
* Wed Jan 07 2026 Kevin Veen-Birkenbach <kevin@veen.world> - 1.9.3-1
- Made the Nix dependency optional on non-x86_64 architectures to avoid broken Arch Linux ARM repository packages.
* Sun Dec 21 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 1.9.2-1
- Default configuration files are now packaged and loaded correctly when no user config exists, while fully preserving custom user configurations.
* Sun Dec 21 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 1.9.1-1
- Fixed installation issues and improved loading of default configuration files.
* Sat Dec 20 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 1.9.0-1
- New ***mirror visibility*** command to set remote Git repositories to ***public*** or ***private***.
- New ***--public*** flag for ***mirror provision*** to create repositories and immediately make them public.
- All configured git mirrors are now provisioned.
* Fri Dec 19 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 1.8.7-1
- **Release version updates now correctly modify ***pyproject.toml*** files that follow PEP 621**, ensuring the ***[project].version*** field is updated as expected.
- **Invalid or incomplete ***pyproject.toml*** files are now handled gracefully** with clear error messages instead of abrupt process termination.
- **RPM spec files remain compatible during releases**: existing macros such as ***%{?dist}*** are preserved and no longer accidentally modified.
* Wed Dec 17 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 1.8.6-1
- Prevent Rate Limits during GitHub Nix Setups
* Wed Dec 17 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 1.8.5-1
- * Clearer Git error handling, especially when a directory is not a Git repository.
* More reliable repository verification with improved commit and GPG signature checks.
* Better error messages and overall robustness when working with Git-based workflows.
* Wed Dec 17 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 1.9.0-1
- Automated release.
* Wed Dec 17 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 1.8.4-1
- * Made pkgmgr's base-layer role explicit by standardizing the Docker/CI mount path to *`/opt/src/pkgmgr`*.
* Tue Dec 16 2025 Kevin Veen-Birkenbach <kevin@veen.world> - 1.8.3-1
- MIRRORS now supports plain URL entries, ensuring metadata-only sources like PyPI are recorded without ever being added to the Git configuration.


@@ -7,7 +7,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "kpmx"
version = "1.8.3"
version = "1.13.3"
description = "Kevin's package-manager tool (pkgmgr)"
readme = "README.md"
requires-python = ">=3.9"
@@ -43,11 +43,12 @@ pkgmgr = "pkgmgr.cli:main"
# -----------------------------
# Source layout: all packages live under "src/"
[tool.setuptools]
package-dir = { "" = "src", "config" = "config" }
package-dir = { "" = "src" }
include-package-data = true
[tool.setuptools.packages.find]
where = ["src", "."]
include = ["pkgmgr*", "config*"]
where = ["src"]
include = ["pkgmgr*"]
[tool.setuptools.package-data]
"config" = ["defaults.yaml"]
"pkgmgr.config" = ["*.yml", "*.yaml"]


@@ -5,7 +5,7 @@ set -euo pipefail
: "${BASE_IMAGE_DEBIAN:=debian:stable-slim}"
: "${BASE_IMAGE_UBUNTU:=ubuntu:latest}"
: "${BASE_IMAGE_FEDORA:=fedora:latest}"
: "${BASE_IMAGE_CENTOS:=quay.io/centos/centos:stream9}"
: "${BASE_IMAGE_CENTOS:=quay.io/centos/centos:latest}"
resolve_base_image() {
local PKGMGR_DISTRO="$1"


@@ -33,7 +33,7 @@ Usage: PKGMGR_DISTRO=<distro> $0 [options]
Build options:
--missing Build only if the image does not already exist (local build only)
--no-cache Build with --no-cache
--target <name> Build a specific Dockerfile target (e.g. virgin)
--target <name> Build a specific Dockerfile target (e.g. virgin, slim)
--tag <image> Override the output image tag (default: ${default_tag})
Publish options:
@@ -47,7 +47,7 @@ Publish options:
Notes:
- --publish implies --push and requires --registry, --owner, and --version.
- Local build (no --push) uses "docker build" and creates local images like "pkgmgr-arch" / "pkgmgr-arch-virgin".
- Local build (no --push) uses "docker build" and creates local images like "pkgmgr-arch" / "pkgmgr-arch-virgin" / "pkgmgr-arch-slim".
EOF
}
@@ -57,7 +57,7 @@ while [[ $# -gt 0 ]]; do
--missing) MISSING_ONLY=1; shift ;;
--target)
TARGET="${2:-}"
[[ -n "${TARGET}" ]] || { echo "ERROR: --target requires a value (e.g. virgin)"; exit 2; }
[[ -n "${TARGET}" ]] || { echo "ERROR: --target requires a value (e.g. virgin|slim)"; exit 2; }
shift 2
;;
--tag)


@@ -1,7 +1,7 @@
#!/usr/bin/env bash
set -euo pipefail
# Publish all distro images (full + virgin) to a registry via image.sh --publish
# Publish all distro images (full + virgin + slim) to a registry via image.sh --publish
#
# Required env:
# OWNER (e.g. GITHUB_REPOSITORY_OWNER)
@@ -11,6 +11,9 @@ set -euo pipefail
# REGISTRY (default: ghcr.io)
# IS_STABLE (default: false)
# DISTROS (default: "arch debian ubuntu fedora centos")
#
# Notes:
# - This expects Dockerfile targets: virgin, full (default), slim
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
@@ -33,7 +36,10 @@ for d in ${DISTROS}; do
echo "[publish] PKGMGR_DISTRO=${d}"
echo "============================================================"
# ----------------------------------------------------------
# virgin
# -> ghcr.io/<owner>/pkgmgr-<distro>-virgin:{latest,<version>,stable?}
# ----------------------------------------------------------
PKGMGR_DISTRO="${d}" bash "${SCRIPT_DIR}/image.sh" \
--publish \
--registry "${REGISTRY}" \
@@ -42,13 +48,29 @@ for d in ${DISTROS}; do
--stable "${IS_STABLE}" \
--target virgin
# ----------------------------------------------------------
# full (default target)
# -> ghcr.io/<owner>/pkgmgr-<distro>:{latest,<version>,stable?}
# ----------------------------------------------------------
PKGMGR_DISTRO="${d}" bash "${SCRIPT_DIR}/image.sh" \
--publish \
--registry "${REGISTRY}" \
--owner "${OWNER}" \
--version "${VERSION}" \
--stable "${IS_STABLE}"
# ----------------------------------------------------------
# slim
# -> ghcr.io/<owner>/pkgmgr-<distro>-slim:{latest,<version>,stable?}
# + alias for default distro: ghcr.io/<owner>/pkgmgr-slim:{...}
# ----------------------------------------------------------
PKGMGR_DISTRO="${d}" bash "${SCRIPT_DIR}/image.sh" \
--publish \
--registry "${REGISTRY}" \
--owner "${OWNER}" \
--version "${VERSION}" \
--stable "${IS_STABLE}" \
--target slim
done
echo


@@ -1,7 +1,7 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[docker] Starting package-manager container"
echo "[docker-pkgmgr] Starting package-manager container"
# ---------------------------------------------------------------------------
# Log distribution info
@@ -9,19 +9,19 @@ echo "[docker] Starting package-manager container"
if [[ -f /etc/os-release ]]; then
# shellcheck disable=SC1091
. /etc/os-release
echo "[docker] Detected distro: ${ID:-unknown} (like: ${ID_LIKE:-})"
echo "[docker-pkgmgr] Detected distro: ${ID:-unknown} (like: ${ID_LIKE:-})"
fi
# Always use /src (mounted from host) as working directory
echo "[docker] Using /src as working directory"
cd /src
# Always use /opt/src/pkgmgr (mounted from host) as working directory
echo "[docker-pkgmgr] Using /opt/src/pkgmgr as working directory"
cd /opt/src/pkgmgr
# ---------------------------------------------------------------------------
# DEV mode: rebuild package-manager from the mounted /src tree
# DEV mode: rebuild package-manager from the mounted /opt/src/pkgmgr tree
# ---------------------------------------------------------------------------
if [[ "${REINSTALL_PKGMGR:-0}" == "1" ]]; then
echo "[docker] DEV mode enabled (REINSTALL_PKGMGR=1)"
echo "[docker] Rebuilding package-manager from /src via scripts/installation/package.sh..."
echo "[docker-pkgmgr] DEV mode enabled (REINSTALL_PKGMGR=1)"
echo "[docker-pkgmgr] Rebuilding package-manager from /opt/src/pkgmgr via scripts/installation/package.sh..."
bash scripts/installation/package.sh || exit 1
fi
@@ -29,9 +29,9 @@ fi
# Hand off to pkgmgr or arbitrary command
# ---------------------------------------------------------------------------
if [[ $# -eq 0 ]]; then
echo "[docker] No arguments provided. Showing pkgmgr help..."
echo "[docker-pkgmgr] No arguments provided. Showing pkgmgr help..."
exec pkgmgr --help
else
echo "[docker] Executing command: $*"
echo "[docker-pkgmgr] Executing command: $*"
exec "$@"
fi

scripts/docker/slim.sh Normal file

@@ -0,0 +1,130 @@
#!/usr/bin/env bash
set -euo pipefail
log() { echo "[cleanup] $*"; }
warn() { echo "[cleanup][WARN] $*" >&2; }
MODE="${MODE:-safe}" # safe | aggressive
# safe: caches/logs/tmp only
# aggressive: safe + docs/man/info (optional)
ID="unknown"
if [ -f /etc/os-release ]; then
# shellcheck disable=SC1091
. /etc/os-release
ID="${ID:-unknown}"
fi
log "Starting image cleanup"
log "Mode: ${MODE}"
log "Detected OS: ${ID}"
# ------------------------------------------------------------
# Package manager caches (SAFE)
# ------------------------------------------------------------
case "${ID}" in
alpine)
log "Cleaning apk cache"
if [ -d /var/cache/apk ]; then
du -sh /var/cache/apk || true
rm -rvf /var/cache/apk/* || true
else
log "apk cache directory not present (already clean)"
fi
;;
arch)
log "Cleaning pacman cache"
du -sh /var/cache/pacman/pkg 2>/dev/null || true
pacman -Scc --noconfirm || true
rm -rvf /var/cache/pacman/pkg/* || true
;;
debian|ubuntu)
log "Cleaning apt cache"
du -sh /var/lib/apt/lists 2>/dev/null || true
apt-get clean || true
rm -rvf /var/lib/apt/lists/* || true
;;
fedora)
log "Cleaning dnf cache"
du -sh /var/cache/dnf 2>/dev/null || true
dnf clean all || true
rm -rvf /var/cache/dnf/* || true
;;
centos|rhel)
log "Cleaning yum/dnf cache"
du -sh /var/cache/yum /var/cache/dnf 2>/dev/null || true
(command -v dnf >/dev/null 2>&1 && dnf clean all) || true
(command -v yum >/dev/null 2>&1 && yum clean all) || true
rm -rvf /var/cache/yum/* /var/cache/dnf/* || true
;;
*)
warn "Unknown distro '${ID}' — skipping package manager cleanup"
;;
esac
# ------------------------------------------------------------
# Python caches (SAFE)
# ------------------------------------------------------------
log "Cleaning pip cache"
du -sh /root/.cache/pip 2>/dev/null || true
rm -rvf /root/.cache/pip 2>/dev/null || true
rm -rvf /home/*/.cache/pip 2>/dev/null || true
log "Cleaning __pycache__ directories"
find /opt /usr /root /home -type d -name "__pycache__" -print -prune 2>/dev/null || true
find /opt /usr /root /home -type d -name "__pycache__" -prune -exec rm -rvf {} + 2>/dev/null || true
# ------------------------------------------------------------
# Logs (SAFE)
# ------------------------------------------------------------
log "Truncating log files (keeping paths intact)"
if [ -d /var/log ]; then
find /var/log -type f -name "*.log" -print 2>/dev/null || true
find /var/log -type f -name "*.log" -exec sh -lc ': > "$1" 2>/dev/null || true' _ {} \; 2>/dev/null || true
find /var/log -type f -name "*.out" -print 2>/dev/null || true
find /var/log -type f -name "*.out" -exec sh -lc ': > "$1" 2>/dev/null || true' _ {} \; 2>/dev/null || true
fi
if command -v journalctl >/dev/null 2>&1; then
log "Vacuuming journald logs"
journalctl --disk-usage || true
journalctl --vacuum-size=10M || true
journalctl --vacuum-time=1s || true
journalctl --disk-usage || true
else
log "journald not present (skipping)"
fi
# ------------------------------------------------------------
# Temporary files (SAFE)
# ------------------------------------------------------------
log "Cleaning temporary directories"
if [ -d /tmp ]; then
du -sh /tmp 2>/dev/null || true
rm -rvf /tmp/* || true
fi
if [ -d /var/tmp ]; then
du -sh /var/tmp 2>/dev/null || true
rm -rvf /var/tmp/* || true
fi
# ------------------------------------------------------------
# Generic caches (SAFE)
# ------------------------------------------------------------
log "Cleaning generic caches"
du -sh /root/.cache 2>/dev/null || true
rm -rvf /root/.cache/* 2>/dev/null || true
rm -rvf /home/*/.cache/* 2>/dev/null || true
# ------------------------------------------------------------
# Optional aggressive extras (still safe for runtime)
# ------------------------------------------------------------
if [[ "${MODE}" == "aggressive" ]]; then
log "Aggressive mode enabled: removing docs/man/info"
du -sh /usr/share/doc /usr/share/man /usr/share/info 2>/dev/null || true
rm -rvf /usr/share/doc/* /usr/share/man/* /usr/share/info/* 2>/dev/null || true
fi
log "Cleanup finished successfully"


@@ -0,0 +1,14 @@
#!/usr/bin/env bash
set -euo pipefail
TARGET_SHA="${TARGET_SHA:-${GITHUB_SHA:?GITHUB_SHA must be set}}"
git fetch --no-tags origin main
if git merge-base --is-ancestor "${TARGET_SHA}" "origin/main"; then
echo "is_on_main=true" >> "$GITHUB_OUTPUT"
echo "Target commit ${TARGET_SHA} is contained in origin/main."
else
echo "is_on_main=false" >> "$GITHUB_OUTPUT"
echo "Target commit ${TARGET_SHA} is not contained in origin/main. Skipping main-only action."
fi


@@ -0,0 +1,43 @@
#!/usr/bin/env bash
set -euo pipefail
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
echo "Ref: $GITHUB_REF"
echo "SHA: $GITHUB_SHA"
VERSION="${GITHUB_REF#refs/tags/}"
echo "Current version tag: ${VERSION}"
echo "Collecting all version tags..."
ALL_V_TAGS="$(git tag --list 'v*' || true)"
if [[ -z "${ALL_V_TAGS}" ]]; then
echo "No version tags found. Skipping stable update."
exit 0
fi
echo "All version tags:"
echo "${ALL_V_TAGS}"
LATEST_TAG="$(printf '%s\n' "${ALL_V_TAGS}" | sort -V | tail -n1)"
echo "Highest version tag: ${LATEST_TAG}"
if [[ "${VERSION}" != "${LATEST_TAG}" ]]; then
echo "Current version ${VERSION} is NOT the highest version."
echo "Stable tag will NOT be updated."
exit 0
fi
echo "Current version ${VERSION} IS the highest version."
echo "Updating 'stable' tag..."
git tag -d stable 2>/dev/null || true
git push origin :refs/tags/stable || true
git tag stable "$GITHUB_SHA"
git push origin stable
echo "Stable tag updated to ${VERSION}."
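The tag selection above depends on version-aware ordering; a quick illustration of why `sort -V` (and not plain lexical `sort`) is required when picking the highest tag:

```shell
# Version sort orders v1.9.0 before v1.10.0; lexical sort does not.
printf '%s\n' v1.9.0 v1.10.0 v1.13.3 | sort -V | tail -n1   # → v1.13.3
printf '%s\n' v1.9.0 v1.10.0 v1.13.3 | sort | tail -n1      # → v1.9.0
```

With plain `sort`, the stable tag would silently stop advancing once a two-digit minor version exists.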


@@ -0,0 +1,43 @@
#!/usr/bin/env bash
set -euo pipefail
SHA="${GITHUB_SHA}"
API_URL="https://api.github.com/repos/${GITHUB_REPOSITORY}/actions/workflows/ci.yml/runs?head_sha=${SHA}&event=push&per_page=20"
WAIT_INTERVAL_SECONDS=20
MAX_ATTEMPTS=990 # 5 hours 30 minutes max wait
STATUS=""
CONCLUSION=""
echo "Waiting for CI on main for ${SHA} (up to 5 hours 30 minutes)..."
for attempt in $(seq 1 "${MAX_ATTEMPTS}"); do
RESPONSE="$(curl -fsSL \
-H "Authorization: Bearer ${GH_TOKEN}" \
-H "Accept: application/vnd.github+json" \
"${API_URL}")"
STATUS="$(printf '%s' "${RESPONSE}" | jq -r '.workflow_runs[] | select(.head_branch=="main") | .status' | head -n1)"
CONCLUSION="$(printf '%s' "${RESPONSE}" | jq -r '.workflow_runs[] | select(.head_branch=="main") | .conclusion' | head -n1)"
if [[ -n "${STATUS}" ]]; then
echo "CI status=${STATUS} conclusion=${CONCLUSION:-none} (attempt ${attempt}/${MAX_ATTEMPTS})"
else
echo "No CI run for main found yet (attempt ${attempt}/${MAX_ATTEMPTS})"
fi
if [[ "${STATUS}" == "completed" ]]; then
if [[ "${CONCLUSION}" == "success" ]]; then
echo "CI succeeded for ${SHA}."
break
fi
echo "CI failed for ${SHA} (conclusion=${CONCLUSION})."
exit 1
fi
sleep "${WAIT_INTERVAL_SECONDS}"
done
if [[ "${STATUS}" != "completed" || "${CONCLUSION}" != "success" ]]; then
echo "Timed out waiting for successful CI on main for ${SHA}."
exit 1
fi


@@ -0,0 +1,8 @@
#!/usr/bin/env bash
set -euo pipefail
WORKFLOW_RUN_SHA="${WORKFLOW_RUN_SHA:?WORKFLOW_RUN_SHA must be set}"
git checkout -f "${WORKFLOW_RUN_SHA}"
git fetch --tags --force
git tag --list 'stable' 'v*' --sort=version:refname | tail -n 20


@@ -0,0 +1,23 @@
#!/usr/bin/env bash
set -euo pipefail
SHA="$(git rev-parse HEAD)"
V_TAG="$(git tag --points-at "${SHA}" --list 'v*' | sort -V | tail -n1)"
if [[ -z "${V_TAG}" ]]; then
echo "No version tag found for ${SHA}. Skipping publish."
echo "should_publish=false" >> "$GITHUB_OUTPUT"
exit 0
fi
VERSION="${V_TAG#v}"
STABLE_SHA="$(git rev-parse -q --verify 'refs/tags/stable^{commit}' 2>/dev/null || true)"
IS_STABLE=false
[[ -n "${STABLE_SHA}" && "${STABLE_SHA}" == "${SHA}" ]] && IS_STABLE=true
{
echo "should_publish=true"
echo "version=${VERSION}"
echo "is_stable=${IS_STABLE}"
} >> "$GITHUB_OUTPUT"


@@ -0,0 +1,8 @@
#!/usr/bin/env bash
set -euo pipefail
: "${OWNER:?OWNER must be set}"
: "${VERSION:?VERSION must be set}"
: "${IS_STABLE:?IS_STABLE must be set}"
bash scripts/build/publish.sh


@@ -38,11 +38,7 @@ echo "[aur-builder-setup] Configuring sudoers for aur_builder..."
${ROOT_CMD} bash -c "echo '%aur_builder ALL=(ALL) NOPASSWD: /usr/bin/pacman' > /etc/sudoers.d/aur_builder"
${ROOT_CMD} chmod 0440 /etc/sudoers.d/aur_builder
if command -v sudo >/dev/null 2>&1; then
RUN_AS_AUR=(sudo -u aur_builder bash -lc)
else
RUN_AS_AUR=(su - aur_builder -c)
fi
RUN_AS_AUR=(su - aur_builder -s /bin/bash -c)
echo "[aur-builder-setup] Ensuring yay is installed for aur_builder..."


@@ -16,6 +16,7 @@ fi
pacman -S --noconfirm --needed \
base-devel \
git \
gnupg \
rsync \
curl \
ca-certificates \


@@ -6,7 +6,7 @@ echo "[arch/package] Building Arch package (makepkg --nodeps) in an isolated bui
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"
# We must not build inside /src (mounted repo). Build in /tmp to avoid permission issues.
# We must not build inside /opt/src/pkgmgr (mounted repo). Build in /tmp to avoid permission issues.
BUILD_ROOT="/tmp/package-manager-arch-build"
PKG_SRC_DIR="${PROJECT_ROOT}/packaging/arch"
PKG_BUILD_DIR="${BUILD_ROOT}/packaging/arch"


@@ -6,6 +6,7 @@ echo "[centos/dependencies] Installing CentOS build dependencies..."
dnf -y update
dnf -y install \
git \
gnupg2 \
rsync \
rpm-build \
make \


@@ -9,6 +9,7 @@ DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
debhelper \
dpkg-dev \
git \
gnupg \
rsync \
bash \
curl \


@@ -6,6 +6,7 @@ echo "[fedora/dependencies] Installing Fedora build dependencies..."
dnf -y update
dnf -y install \
git \
gnupg2 \
rsync \
rpm-build \
make \


@@ -9,6 +9,7 @@ DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
debhelper \
dpkg-dev \
git \
gnupg \
tzdata \
lsb-release \
rsync \


@@ -37,10 +37,16 @@ fi
# ---------------------------------------------------------------------------
if ! command -v nix >/dev/null 2>&1; then
if [[ -x "${FLAKE_DIR}/nix/init.sh" ]]; then
"${FLAKE_DIR}/nix/init.sh" || true
"${FLAKE_DIR}/nix/init.sh"
fi
fi
if ! command -v nix >/dev/null 2>&1; then
echo "[launcher] ERROR: 'nix' binary not found on PATH after init." >&2
echo "[launcher] Nix is required to run pkgmgr (no Python fallback)." >&2
exit 1
fi
# ---------------------------------------------------------------------------
# Primary path: use Nix flake if available (with GitHub 403 retry)
# ---------------------------------------------------------------------------
@@ -51,7 +57,3 @@ if declare -F run_with_github_403_retry >/dev/null; then
else
exec nix run "${FLAKE_DIR}#pkgmgr" -- "$@"
fi
echo "[launcher] ERROR: 'nix' binary not found on PATH after init."
echo "[launcher] Nix is required to run pkgmgr (no Python fallback)."
exit 1


@@ -49,11 +49,7 @@ install_nix_with_retry() {
if [[ -n "$run_as" ]]; then
chown "$run_as:$run_as" "$installer" 2>/dev/null || true
echo "[init-nix] Running installer as user '$run_as' ($mode_flag)..."
if command -v sudo >/dev/null 2>&1; then
sudo -u "$run_as" bash -lc "sh '$installer' $mode_flag"
else
su - "$run_as" -c "sh '$installer' $mode_flag"
fi
su - "$run_as" -s /bin/bash -c "bash -lc \"sh '$installer' $mode_flag\""
else
echo "[init-nix] Running installer as current user ($mode_flag)..."
sh "$installer" "$mode_flag"


@@ -36,16 +36,17 @@ real_exe() {
# Resolve nix binary path robustly (works across distros + Arch /usr/sbin)
resolve_nix_bin() {
local nix_cmd=""
nix_cmd="$(command -v nix 2>/dev/null || true)"
[[ -n "$nix_cmd" ]] && real_exe "$nix_cmd" && return 0
# IMPORTANT: prefer system locations before /usr/local to avoid self-symlink traps
# IMPORTANT: prefer distro-managed locations first.
# This avoids pinning /usr/local/bin/nix to a stale user-profile nix binary.
[[ -x /usr/sbin/nix ]] && { echo "/usr/sbin/nix"; return 0; } # Arch package can land here
[[ -x /usr/bin/nix ]] && { echo "/usr/bin/nix"; return 0; }
[[ -x /bin/nix ]] && { echo "/bin/nix"; return 0; }
# /usr/local last, and only if it resolves to a real executable
local nix_cmd=""
nix_cmd="$(command -v nix 2>/dev/null || true)"
[[ -n "$nix_cmd" ]] && real_exe "$nix_cmd" && return 0
# /usr/local after system locations, and only if it resolves to a real executable
[[ -e /usr/local/bin/nix ]] && real_exe "/usr/local/bin/nix" && return 0
[[ -x /nix/var/nix/profiles/default/bin/nix ]] && {


@@ -6,12 +6,13 @@ echo ">>> Running E2E tests: $PKGMGR_DISTRO"
echo "============================================================"
docker run --rm \
-v "$(pwd):/src" \
-v "$(pwd):/opt/src/pkgmgr" \
-v "pkgmgr_nix_store_${PKGMGR_DISTRO}:/nix" \
-v "pkgmgr_nix_cache_${PKGMGR_DISTRO}:/root/.cache/nix" \
-e REINSTALL_PKGMGR=1 \
-e TEST_PATTERN="${TEST_PATTERN}" \
--workdir /src \
-e NIX_CONFIG="${NIX_CONFIG}" \
--workdir /opt/src/pkgmgr \
"pkgmgr-${PKGMGR_DISTRO}" \
bash -lc '
set -euo pipefail
@@ -40,14 +41,14 @@ docker run --rm \
}
# Mark the mounted repository as safe to avoid Git ownership errors.
# Newer Git (e.g. on Ubuntu) complains about the gitdir (/src/.git),
# older versions about the worktree (/src). Nix turns "." into the
# flake input "git+file:///src", which then uses Git under the hood.
# Newer Git (e.g. on Ubuntu) complains about the gitdir (/opt/src/pkgmgr/.git),
# older versions about the worktree (/opt/src/pkgmgr). Nix turns "." into the
# flake input "git+file:///opt/src/pkgmgr", which then uses Git under the hood.
if command -v git >/dev/null 2>&1; then
# Worktree path
git config --global --add safe.directory /src || true
git config --global --add safe.directory /opt/src/pkgmgr || true
# Gitdir path shown in the "dubious ownership" error
git config --global --add safe.directory /src/.git || true
git config --global --add safe.directory /opt/src/pkgmgr/.git || true
# Ephemeral CI containers: allow all paths as a last resort
git config --global --add safe.directory "*" || true
fi
@@ -55,6 +56,6 @@ docker run --rm \
# Run the E2E tests inside the Nix development shell
nix develop .#default --no-write-lock-file -c \
python3 -m unittest discover \
-s /src/tests/e2e \
-s /opt/src/pkgmgr/tests/e2e \
-p "$TEST_PATTERN"
'


@@ -9,18 +9,19 @@ echo ">>> Image: ${IMAGE}"
echo "============================================================"
docker run --rm \
-v "$(pwd):/src" \
-v "$(pwd):/opt/src/pkgmgr" \
-v "pkgmgr_nix_store_${PKGMGR_DISTRO}:/nix" \
-v "pkgmgr_nix_cache_${PKGMGR_DISTRO}:/root/.cache/nix" \
--workdir /src \
--workdir /opt/src/pkgmgr \
-e REINSTALL_PKGMGR=1 \
-e NIX_CONFIG="${NIX_CONFIG}" \
"${IMAGE}" \
bash -lc '
set -euo pipefail
if command -v git >/dev/null 2>&1; then
git config --global --add safe.directory /src || true
git config --global --add safe.directory /src/.git || true
git config --global --add safe.directory /opt/src/pkgmgr || true
git config --global --add safe.directory /opt/src/pkgmgr/.git || true
git config --global --add safe.directory "*" || true
fi
@@ -38,9 +39,9 @@ docker run --rm \
# ------------------------------------------------------------
# Retry helper for GitHub API rate-limit (HTTP 403)
# ------------------------------------------------------------
if [[ -f /src/scripts/nix/lib/retry_403.sh ]]; then
if [[ -f /opt/src/pkgmgr/scripts/nix/lib/retry_403.sh ]]; then
# shellcheck source=./scripts/nix/lib/retry_403.sh
source /src/scripts/nix/lib/retry_403.sh
source /opt/src/pkgmgr/scripts/nix/lib/retry_403.sh
elif [[ -f ./scripts/nix/lib/retry_403.sh ]]; then
# shellcheck source=./scripts/nix/lib/retry_403.sh
source ./scripts/nix/lib/retry_403.sh


@@ -17,8 +17,9 @@ echo
# ------------------------------------------------------------
if OUTPUT=$(docker run --rm \
-e REINSTALL_PKGMGR=1 \
-v "$(pwd):/src" \
-w /src \
-v "$(pwd):/opt/src/pkgmgr" \
-w /opt/src/pkgmgr \
-e NIX_CONFIG="${NIX_CONFIG}" \
"${IMAGE}" \
bash -lc '
set -euo pipefail


@@ -6,19 +6,20 @@ echo ">>> Running INTEGRATION tests in ${PKGMGR_DISTRO} container"
echo "============================================================"
docker run --rm \
-v "$(pwd):/src" \
-v "$(pwd):/opt/src/pkgmgr" \
-v "pkgmgr_nix_store_${PKGMGR_DISTRO}:/nix" \
-v "pkgmgr_nix_cache_${PKGMGR_DISTRO}:/root/.cache/nix" \
--workdir /src \
--workdir /opt/src/pkgmgr \
-e REINSTALL_PKGMGR=1 \
-e TEST_PATTERN="${TEST_PATTERN}" \
-e NIX_CONFIG="${NIX_CONFIG}" \
"pkgmgr-${PKGMGR_DISTRO}" \
bash -lc '
set -e;
git config --global --add safe.directory /src || true;
git config --global --add safe.directory /opt/src/pkgmgr || true;
nix develop .#default --no-write-lock-file -c \
python3 -m unittest discover \
-s tests/integration \
-t /src \
-t /opt/src/pkgmgr \
-p "$TEST_PATTERN";
'


@@ -6,19 +6,20 @@ echo ">>> Running UNIT tests in ${PKGMGR_DISTRO} container"
echo "============================================================"
docker run --rm \
-v "$(pwd):/src" \
-v "$(pwd):/opt/src/pkgmgr" \
-v "pkgmgr_nix_cache_${PKGMGR_DISTRO}:/root/.cache/nix" \
-v "pkgmgr_nix_store_${PKGMGR_DISTRO}:/nix" \
--workdir /src \
--workdir /opt/src/pkgmgr \
-e REINSTALL_PKGMGR=1 \
-e TEST_PATTERN="${TEST_PATTERN}" \
-e NIX_CONFIG="${NIX_CONFIG}" \
"pkgmgr-${PKGMGR_DISTRO}" \
bash -lc '
set -e;
git config --global --add safe.directory /src || true;
git config --global --add safe.directory /opt/src/pkgmgr || true;
nix develop .#default --no-write-lock-file -c \
python3 -m unittest discover \
-s tests/unit \
-t /src \
-t /opt/src/pkgmgr \
-p "$TEST_PATTERN";
'


@@ -25,12 +25,12 @@ __all__ = ["cli"]
def __getattr__(name: str) -> Any:
"""
Lazily expose ``pkgmgr.cli`` as attribute on the top-level package.
"""
Lazily expose ``pkgmgr.cli`` as attribute on the top-level package.
This keeps ``import pkgmgr`` lightweight while still allowing
``from pkgmgr import cli`` in tests and entry points.
"""
if name == "cli":
return import_module("pkgmgr.cli")
raise AttributeError(f"module 'pkgmgr' has no attribute {name!r}")
This keeps ``import pkgmgr`` lightweight while still allowing
``from pkgmgr import cli`` in tests and entry points.
"""
if name == "cli":
return import_module("pkgmgr.cli")
raise AttributeError(f"module 'pkgmgr' has no attribute {name!r}")


@@ -3,4 +3,4 @@ from __future__ import annotations
# expose subpackages for patch() / resolve_name() friendliness
from . import release as release # noqa: F401
__all__ = ["release"]
__all__ = ["release"]


@@ -2,7 +2,7 @@ from __future__ import annotations
from typing import Optional
from pkgmgr.core.git.errors import GitError
from pkgmgr.core.git.errors import GitRunError
from pkgmgr.core.git.queries import get_current_branch
from pkgmgr.core.git.commands import (
GitDeleteRemoteBranchError,
@@ -32,7 +32,7 @@ def close_branch(
if not name:
try:
name = get_current_branch(cwd=cwd)
except GitError as exc:
except GitRunError as exc:
raise RuntimeError(f"Failed to detect current branch: {exc}") from exc
if not name:
@@ -48,14 +48,18 @@ def close_branch(
# Confirmation
if not force:
answer = input(
f"Merge branch '{name}' into '{target_base}' and delete it afterwards? (y/N): "
).strip().lower()
answer = (
input(
f"Merge branch '{name}' into '{target_base}' and delete it afterwards? (y/N): "
)
.strip()
.lower()
)
if answer != "y":
print("Aborted closing branch.")
return
# Execute workflow (commands raise specific GitError subclasses)
# Execute workflow (commands raise specific GitRunError subclasses)
fetch("origin", cwd=cwd)
checkout(target_base, cwd=cwd)
pull("origin", target_base, cwd=cwd)


@@ -2,7 +2,7 @@ from __future__ import annotations
from typing import Optional
from pkgmgr.core.git.errors import GitError
from pkgmgr.core.git.errors import GitRunError
from pkgmgr.core.git.queries import get_current_branch
from pkgmgr.core.git.commands import (
GitDeleteRemoteBranchError,
@@ -26,7 +26,7 @@ def drop_branch(
if not name:
try:
name = get_current_branch(cwd=cwd)
except GitError as exc:
except GitRunError as exc:
raise RuntimeError(f"Failed to detect current branch: {exc}") from exc
if not name:
@@ -41,15 +41,19 @@ def drop_branch(
# Confirmation
if not force:
answer = input(
f"Delete branch '{name}' locally and on origin? This is destructive! (y/N): "
).strip().lower()
answer = (
input(
f"Delete branch '{name}' locally and on origin? This is destructive! (y/N): "
)
.strip()
.lower()
)
if answer != "y":
print("Aborted dropping branch.")
return
delete_local_branch(name, cwd=cwd, force=False)
# Remote delete (special-case message)
try:
delete_remote_branch("origin", name, cwd=cwd)


@@ -30,7 +30,7 @@ def open_branch(
resolved_base = resolve_base_branch(base_branch, fallback_base, cwd=cwd)
# Workflow (commands raise specific GitError subclasses)
# Workflow (commands raise specific GitBaseError subclasses)
fetch("origin", cwd=cwd)
checkout(resolved_base, cwd=cwd)
pull("origin", resolved_base, cwd=cwd)


@@ -1,15 +1,18 @@
import yaml
import os
from pkgmgr.core.config.save import save_user_config
from pkgmgr.core.config.save import save_user_config
def interactive_add(config,USER_CONFIG_PATH:str):
def interactive_add(config, USER_CONFIG_PATH: str):
"""Interactively prompt the user to add a new repository entry to the user config."""
print("Adding a new repository configuration entry.")
new_entry = {}
new_entry["provider"] = input("Provider (e.g., github.com): ").strip()
new_entry["account"] = input("Account (e.g., yourusername): ").strip()
new_entry["repository"] = input("Repository name (e.g., mytool): ").strip()
new_entry["command"] = input("Command (optional, leave blank to auto-detect): ").strip()
new_entry["command"] = input(
"Command (optional, leave blank to auto-detect): "
).strip()
new_entry["description"] = input("Description (optional): ").strip()
new_entry["replacement"] = input("Replacement (optional): ").strip()
new_entry["alias"] = input("Alias (optional): ").strip()
@@ -25,12 +28,12 @@ def interactive_add(config,USER_CONFIG_PATH:str):
confirm = input("Add this entry to user config? (y/N): ").strip().lower()
if confirm == "y":
if os.path.exists(USER_CONFIG_PATH):
with open(USER_CONFIG_PATH, 'r') as f:
with open(USER_CONFIG_PATH, "r") as f:
user_config = yaml.safe_load(f) or {}
else:
user_config = {"repositories": []}
user_config.setdefault("repositories", [])
user_config["repositories"].append(new_entry)
save_user_config(user_config,USER_CONFIG_PATH)
save_user_config(user_config, USER_CONFIG_PATH)
else:
print("Entry not added.")
print("Entry not added.")


@@ -107,11 +107,15 @@ def config_init(
# Already known?
if key in default_keys:
skipped += 1
print(f"[SKIP] (defaults) {provider}/{account}/{repo_name}")
print(
f"[SKIP] (defaults) {provider}/{account}/{repo_name}"
)
continue
if key in existing_keys:
skipped += 1
print(f"[SKIP] (user-config) {provider}/{account}/{repo_name}")
print(
f"[SKIP] (user-config) {provider}/{account}/{repo_name}"
)
continue
print(f"[ADD] {provider}/{account}/{repo_name}")
@@ -121,7 +125,9 @@ def config_init(
if verified_commit:
print(f"[INFO] Latest commit: {verified_commit}")
else:
print("[WARN] Could not read commit (not a git repo or no commits).")
print(
"[WARN] Could not read commit (not a git repo or no commits)."
)
entry: Dict[str, Any] = {
"provider": provider,


@@ -1,6 +1,7 @@
import yaml
from pkgmgr.core.config.load import load_config
def show_config(selected_repos, user_config_path, full_config=False):
"""Display configuration for one or more repositories, or the entire merged config."""
if full_config:
@@ -8,8 +9,10 @@ def show_config(selected_repos, user_config_path, full_config=False):
print(yaml.dump(merged, default_flow_style=False))
else:
for repo in selected_repos:
identifier = f'{repo.get("provider")}/{repo.get("account")}/{repo.get("repository")}'
identifier = (
f"{repo.get('provider')}/{repo.get('account')}/{repo.get('repository')}"
)
print(f"Repository: {identifier}")
for key, value in repo.items():
print(f" {key}: {value}")
print("-" * 40)
print("-" * 40)


@@ -66,10 +66,7 @@ def _ensure_repo_dir(
repo_dir = get_repo_dir(repositories_base_dir, repo)
if not os.path.exists(repo_dir):
print(
f"Repository directory '{repo_dir}' does not exist. "
"Cloning it now..."
)
print(f"Repository directory '{repo_dir}' does not exist. Cloning it now...")
clone_repos(
[repo],
repositories_base_dir,
@@ -79,10 +76,7 @@ def _ensure_repo_dir(
clone_mode,
)
if not os.path.exists(repo_dir):
print(
f"Cloning failed for repository {identifier}. "
"Skipping installation."
)
print(f"Cloning failed for repository {identifier}. Skipping installation.")
return None
return repo_dir
@@ -115,7 +109,9 @@ def _verify_repo(
if silent:
# Non-interactive mode: continue with a warning.
print(f"[Warning] Continuing despite verification failure for {identifier} (--silent).")
print(
f"[Warning] Continuing despite verification failure for {identifier} (--silent)."
)
else:
choice = input("Continue anyway? [y/N]: ").strip().lower()
if choice != "y":
@@ -232,12 +228,16 @@ def install_repos(
code = exc.code if isinstance(exc.code, int) else str(exc.code)
failures.append((identifier, f"installer failed (exit={code})"))
if not quiet:
print(f"[Warning] install: repository {identifier} failed (exit={code}). Continuing...")
print(
f"[Warning] install: repository {identifier} failed (exit={code}). Continuing..."
)
continue
except Exception as exc:
failures.append((identifier, f"unexpected error: {exc}"))
if not quiet:
print(f"[Warning] install: repository {identifier} hit an unexpected error: {exc}. Continuing...")
print(
f"[Warning] install: repository {identifier} hit an unexpected error: {exc}. Continuing..."
)
continue
if failures and emit_summary and not quiet:

View File

@@ -14,6 +14,10 @@ from pkgmgr.actions.install.installers.python import PythonInstaller # noqa: F4
from pkgmgr.actions.install.installers.makefile import MakefileInstaller # noqa: F401
# OS-specific installers
from pkgmgr.actions.install.installers.os_packages.arch_pkgbuild import ArchPkgbuildInstaller # noqa: F401
from pkgmgr.actions.install.installers.os_packages.debian_control import DebianControlInstaller # noqa: F401
from pkgmgr.actions.install.installers.os_packages.arch_pkgbuild import (
ArchPkgbuildInstaller as ArchPkgbuildInstaller,
) # noqa: F401
from pkgmgr.actions.install.installers.os_packages.debian_control import (
DebianControlInstaller as DebianControlInstaller,
) # noqa: F401
from pkgmgr.actions.install.installers.os_packages.rpm_spec import RpmSpecInstaller # noqa: F401

View File

@@ -41,7 +41,9 @@ class BaseInstaller(ABC):
return caps
for matcher in CAPABILITY_MATCHERS:
if matcher.applies_to_layer(self.layer) and matcher.is_provided(ctx, self.layer):
if matcher.applies_to_layer(self.layer) and matcher.is_provided(
ctx, self.layer
):
caps.add(matcher.name)
return caps

View File

@@ -16,7 +16,9 @@ class MakefileInstaller(BaseInstaller):
def supports(self, ctx: RepoContext) -> bool:
if os.environ.get("PKGMGR_DISABLE_MAKEFILE_INSTALLER") == "1":
if not ctx.quiet:
print("[INFO] PKGMGR_DISABLE_MAKEFILE_INSTALLER=1 skipping MakefileInstaller.")
print(
"[INFO] PKGMGR_DISABLE_MAKEFILE_INSTALLER=1 skipping MakefileInstaller."
)
return False
makefile_path = os.path.join(ctx.repo_dir, self.MAKEFILE_NAME)
@@ -46,7 +48,9 @@ class MakefileInstaller(BaseInstaller):
return
if not ctx.quiet:
print(f"[pkgmgr] Running make install for {ctx.identifier} (MakefileInstaller)")
print(
f"[pkgmgr] Running make install for {ctx.identifier} (MakefileInstaller)"
)
run_command("make install", cwd=ctx.repo_dir, preview=ctx.preview)

View File

@@ -57,7 +57,9 @@ class NixConflictResolver:
# 3) Fallback: output-name based lookup (also covers nix suggesting: `nix profile remove pkgmgr`)
if not tokens:
tokens = self._profile.find_remove_tokens_for_output(ctx, self._runner, output)
tokens = self._profile.find_remove_tokens_for_output(
ctx, self._runner, output
)
if tokens:
if not quiet:
@@ -94,7 +96,9 @@ class NixConflictResolver:
continue
if not quiet:
print("[nix] conflict detected but could not resolve profile entries to remove.")
print(
"[nix] conflict detected but could not resolve profile entries to remove."
)
return False
return False

View File

@@ -75,7 +75,9 @@ class NixFlakeInstaller(BaseInstaller):
# Core install path
# ---------------------------------------------------------------------
def _install_only(self, ctx: "RepoContext", output: str, allow_failure: bool) -> None:
def _install_only(
self, ctx: "RepoContext", output: str, allow_failure: bool
) -> None:
install_cmd = f"nix profile install {self._installable(ctx, output)}"
if not ctx.quiet:
@@ -96,7 +98,9 @@ class NixFlakeInstaller(BaseInstaller):
output=output,
):
if not ctx.quiet:
print(f"[nix] output '{output}' successfully installed after conflict cleanup.")
print(
f"[nix] output '{output}' successfully installed after conflict cleanup."
)
return
if not ctx.quiet:
@@ -107,20 +111,26 @@ class NixFlakeInstaller(BaseInstaller):
# If indices are supported, try legacy index-upgrade path.
if self._indices_supported is not False:
indices = self._profile.find_installed_indices_for_output(ctx, self._runner, output)
indices = self._profile.find_installed_indices_for_output(
ctx, self._runner, output
)
upgraded = False
for idx in indices:
if self._upgrade_index(ctx, idx):
upgraded = True
if not ctx.quiet:
print(f"[nix] output '{output}' successfully upgraded (index {idx}).")
print(
f"[nix] output '{output}' successfully upgraded (index {idx})."
)
if upgraded:
return
if indices and not ctx.quiet:
print(f"[nix] upgrade failed; removing indices {indices} and reinstalling '{output}'.")
print(
f"[nix] upgrade failed; removing indices {indices} and reinstalling '{output}'."
)
for idx in indices:
self._remove_index(ctx, idx)
@@ -139,7 +149,9 @@ class NixFlakeInstaller(BaseInstaller):
print(f"[nix] output '{output}' successfully re-installed.")
return
print(f"[ERROR] Failed to install Nix flake output '{output}' (exit {final.returncode})")
print(
f"[ERROR] Failed to install Nix flake output '{output}' (exit {final.returncode})"
)
if not allow_failure:
raise SystemExit(final.returncode)
@@ -149,7 +161,9 @@ class NixFlakeInstaller(BaseInstaller):
# force_update path
# ---------------------------------------------------------------------
def _force_upgrade_output(self, ctx: "RepoContext", output: str, allow_failure: bool) -> None:
def _force_upgrade_output(
self, ctx: "RepoContext", output: str, allow_failure: bool
) -> None:
# Prefer token path if indices unsupported (new nix)
if self._indices_supported is False:
self._remove_tokens_for_output(ctx, output)
@@ -158,14 +172,18 @@ class NixFlakeInstaller(BaseInstaller):
print(f"[nix] output '{output}' successfully upgraded.")
return
indices = self._profile.find_installed_indices_for_output(ctx, self._runner, output)
indices = self._profile.find_installed_indices_for_output(
ctx, self._runner, output
)
upgraded_any = False
for idx in indices:
if self._upgrade_index(ctx, idx):
upgraded_any = True
if not ctx.quiet:
print(f"[nix] output '{output}' successfully upgraded (index {idx}).")
print(
f"[nix] output '{output}' successfully upgraded (index {idx})."
)
if upgraded_any:
if not ctx.quiet:
@@ -173,7 +191,9 @@ class NixFlakeInstaller(BaseInstaller):
return
if indices and not ctx.quiet:
print(f"[nix] upgrade failed; removing indices {indices} and reinstalling '{output}'.")
print(
f"[nix] upgrade failed; removing indices {indices} and reinstalling '{output}'."
)
for idx in indices:
self._remove_index(ctx, idx)
@@ -223,7 +243,9 @@ class NixFlakeInstaller(BaseInstaller):
return
if not ctx.quiet:
print(f"[nix] indices unsupported; removing by token(s): {', '.join(tokens)}")
print(
f"[nix] indices unsupported; removing by token(s): {', '.join(tokens)}"
)
for t in tokens:
self._runner.run(ctx, f"nix profile remove {t}", allow_failure=True)

View File

@@ -101,7 +101,9 @@ class NixProfileInspector:
data = self.list_json(ctx, runner)
entries = normalize_elements(data)
tokens: List[str] = [out] # critical: matches nix's own suggestion for conflicts
tokens: List[str] = [
out
] # critical: matches nix's own suggestion for conflicts
for e in entries:
if entry_matches_output(e, out):

View File

@@ -48,7 +48,9 @@ class NixProfileListReader:
return uniq
def indices_matching_store_prefixes(self, ctx: "RepoContext", prefixes: List[str]) -> List[int]:
def indices_matching_store_prefixes(
self, ctx: "RepoContext", prefixes: List[str]
) -> List[int]:
prefixes = [self._store_prefix(p) for p in prefixes if p]
prefixes = [p for p in prefixes if p]
if not prefixes:

View File

@@ -11,6 +11,7 @@ if TYPE_CHECKING:
from pkgmgr.actions.install.context import RepoContext
from .runner import CommandRunner
@dataclass(frozen=True)
class RetryPolicy:
max_attempts: int = 7
@@ -35,13 +36,19 @@ class GitHubRateLimitRetry:
install_cmd: str,
) -> RunResult:
quiet = bool(getattr(ctx, "quiet", False))
delays = list(self._fibonacci_backoff(self._policy.base_delay_seconds, self._policy.max_attempts))
delays = list(
self._fibonacci_backoff(
self._policy.base_delay_seconds, self._policy.max_attempts
)
)
last: RunResult | None = None
for attempt, base_delay in enumerate(delays, start=1):
if not quiet:
print(f"[nix] attempt {attempt}/{self._policy.max_attempts}: {install_cmd}")
print(
f"[nix] attempt {attempt}/{self._policy.max_attempts}: {install_cmd}"
)
res = runner.run(ctx, install_cmd, allow_failure=True)
last = res
@@ -56,7 +63,9 @@ class GitHubRateLimitRetry:
if attempt >= self._policy.max_attempts:
break
jitter = random.randint(self._policy.jitter_seconds_min, self._policy.jitter_seconds_max)
jitter = random.randint(
self._policy.jitter_seconds_min, self._policy.jitter_seconds_max
)
wait_time = base_delay + jitter
if not quiet:
@@ -67,7 +76,11 @@ class GitHubRateLimitRetry:
time.sleep(wait_time)
return last if last is not None else RunResult(returncode=1, stdout="", stderr="nix install retry failed")
return (
last
if last is not None
else RunResult(returncode=1, stdout="", stderr="nix install retry failed")
)
@staticmethod
def _is_github_rate_limit_error(text: str) -> bool:

View File
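The retry helper above walks a Fibonacci-scaled delay schedule and adds random jitter before sleeping. The `_fibonacci_backoff` generator itself is outside the hunk, so the following is a minimal sketch of the assumed shape (names and exact scaling are assumptions, not the project's verified implementation):

```python
import random


def fibonacci_backoff(base_delay: float, max_attempts: int):
    """Yield one base delay per attempt, scaled by the Fibonacci sequence.

    Assumed sketch: for base_delay=10 and max_attempts=7 this yields
    10, 10, 20, 30, 50, 80, 130 (seconds, before jitter).
    """
    a, b = 1, 1
    for _ in range(max_attempts):
        yield base_delay * a
        a, b = b, a + b


def wait_time(base_delay: float, jitter_min: int, jitter_max: int) -> float:
    # Mirrors the hunk above: total wait = base delay + random jitter.
    return base_delay + random.randint(jitter_min, jitter_max)
```

With `RetryPolicy(max_attempts=7)` this gives roughly ten minutes of cumulative back-off, which matches the intent of waiting out a GitHub rate-limit window.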

@@ -9,6 +9,7 @@ from .types import RunResult
if TYPE_CHECKING:
from pkgmgr.actions.install.context import RepoContext
class CommandRunner:
"""
Executes commands (shell=True) inside a repository directory (if provided).
@@ -40,7 +41,9 @@ class CommandRunner:
raise
return RunResult(returncode=1, stdout="", stderr=str(e))
res = RunResult(returncode=p.returncode, stdout=p.stdout or "", stderr=p.stderr or "")
res = RunResult(
returncode=p.returncode, stdout=p.stdout or "", stderr=p.stderr or ""
)
if res.returncode != 0 and not quiet:
self._print_compact_failure(res)

View File

@@ -20,7 +20,9 @@ class NixConflictTextParser:
tokens: List[str] = []
for m in pat.finditer(text or ""):
t = (m.group(1) or "").strip()
if (t.startswith("'") and t.endswith("'")) or (t.startswith('"') and t.endswith('"')):
if (t.startswith("'") and t.endswith("'")) or (
t.startswith('"') and t.endswith('"')
):
t = t[1:-1]
if t:
tokens.append(t)

View File

@@ -14,7 +14,9 @@ class PythonInstaller(BaseInstaller):
def supports(self, ctx: RepoContext) -> bool:
if os.environ.get("PKGMGR_DISABLE_PYTHON_INSTALLER") == "1":
print("[INFO] PythonInstaller disabled via PKGMGR_DISABLE_PYTHON_INSTALLER.")
print(
"[INFO] PythonInstaller disabled via PKGMGR_DISABLE_PYTHON_INSTALLER."
)
return False
return os.path.exists(os.path.join(ctx.repo_dir, "pyproject.toml"))

View File

@@ -132,7 +132,11 @@ class InstallationPipeline:
continue
if not quiet:
if ctx.force_update and state.layer is not None and installer_layer == state.layer:
if (
ctx.force_update
and state.layer is not None
and installer_layer == state.layer
):
print(
f"[pkgmgr] Running installer {installer.__class__.__name__} "
f"for {identifier} in '{repo_dir}' (upgrade requested)..."

View File

@@ -14,6 +14,7 @@ from .list_cmd import list_mirrors
from .diff_cmd import diff_mirrors
from .merge_cmd import merge_mirrors
from .setup_cmd import setup_mirrors
from .visibility_cmd import set_mirror_visibility
__all__ = [
"Repository",
@@ -22,4 +23,5 @@ __all__ = [
"diff_mirrors",
"merge_mirrors",
"setup_mirrors",
"set_mirror_visibility",
]

View File

@@ -3,7 +3,7 @@ from __future__ import annotations
import os
from typing import Optional, Set
from pkgmgr.core.git.errors import GitError
from pkgmgr.core.git.errors import GitRunError
from pkgmgr.core.git.commands import (
GitAddRemoteError,
GitAddRemotePushUrlError,
@@ -90,7 +90,7 @@ def determine_primary_remote_url(
def has_origin_remote(repo_dir: str) -> bool:
try:
return "origin" in list_remotes(cwd=repo_dir)
except GitError:
except GitRunError:
return False
@@ -122,7 +122,7 @@ def _ensure_additional_push_urls(
try:
existing = get_remote_push_urls("origin", cwd=repo_dir)
except GitError:
except GitRunError:
existing = set()
for url in sorted(desired - existing):

View File

@@ -16,6 +16,7 @@ from .types import MirrorMap, Repository
# Helpers
# -----------------------------------------------------------------------------
def _repo_key(repo: Repository) -> Tuple[str, str, str]:
"""
Normalised key for identifying a repository in config files.
@@ -47,6 +48,7 @@ def _load_user_config(path: str) -> Dict[str, object]:
# Main merge command
# -----------------------------------------------------------------------------
def merge_mirrors(
selected_repos: List[Repository],
repositories_base_dir: str,

View File

@@ -11,35 +11,37 @@ from .types import Repository
from .url_utils import normalize_provider_host, parse_repo_from_git_url
def ensure_remote_repository(
repo: Repository,
repositories_base_dir: str,
all_repos: List[Repository],
def _provider_hint_from_host(host: str) -> str | None:
h = (host or "").lower()
if h == "github.com":
return "github"
# Best-effort default for self-hosted git domains
return "gitea" if h else None
def ensure_remote_repository_for_url(
*,
url: str,
private_default: bool,
description: str,
preview: bool,
) -> None:
ctx = build_context(repo, repositories_base_dir, all_repos)
primary_url = determine_primary_remote_url(repo, ctx)
if not primary_url:
print("[INFO] No primary URL found; skipping remote provisioning.")
return
host_raw, owner, name = parse_repo_from_git_url(primary_url)
host_raw, owner, name = parse_repo_from_git_url(url)
host = normalize_provider_host(host_raw)
if not host or not owner or not name:
print("[WARN] Could not parse remote URL:", primary_url)
print(f"[WARN] Could not parse repo from URL: {url}")
return
spec = RepoSpec(
host=host,
owner=owner,
name=name,
private=bool(repo.get("private", True)),
description=str(repo.get("description", "")),
private=private_default,
description=description,
)
provider_kind = str(repo.get("provider", "")).lower() or None
provider_kind = _provider_hint_from_host(host)
try:
result = ensure_remote_repo(
@@ -56,4 +58,29 @@ def ensure_remote_repository(
if result.url:
print(f"[REMOTE ENSURE] URL: {result.url}")
except Exception as exc: # noqa: BLE001
print(f"[ERROR] Remote provisioning failed: {exc}")
print(f"[ERROR] Remote provisioning failed for {url!r}: {exc}")
def ensure_remote_repository(
repo: Repository,
repositories_base_dir: str,
all_repos: List[Repository],
preview: bool,
) -> None:
"""
Backwards-compatible wrapper: ensure the *primary* remote repository
derived from the primary URL.
"""
ctx = build_context(repo, repositories_base_dir, all_repos)
primary_url = determine_primary_remote_url(repo, ctx)
if not primary_url:
print("[INFO] No primary URL found; skipping remote provisioning.")
return
ensure_remote_repository_for_url(
url=primary_url,
private_default=bool(repo.get("private", True)),
description=str(repo.get("description", "")),
preview=preview,
)

View File

@@ -2,12 +2,15 @@ from __future__ import annotations
from typing import List
from pkgmgr.core.git.queries import probe_remote_reachable
from pkgmgr.core.git.queries import probe_remote_reachable_detail
from pkgmgr.core.remote_provisioning import ProviderHint, RepoSpec, set_repo_visibility
from pkgmgr.core.remote_provisioning.visibility import VisibilityOptions
from .context import build_context
from .git_remote import ensure_origin_remote, determine_primary_remote_url
from .remote_provision import ensure_remote_repository
from .git_remote import determine_primary_remote_url, ensure_origin_remote
from .remote_provision import ensure_remote_repository_for_url
from .types import Repository
from .url_utils import normalize_provider_host, parse_repo_from_git_url
def _is_git_remote_url(url: str) -> bool:
@@ -25,6 +28,64 @@ def _is_git_remote_url(url: str) -> bool:
return False
def _provider_hint_from_host(host: str) -> str | None:
h = (host or "").lower()
if h == "github.com":
return "github"
return "gitea" if h else None
def _apply_visibility_for_url(
*,
url: str,
private: bool,
description: str,
preview: bool,
) -> None:
host_raw, owner, name = parse_repo_from_git_url(url)
host = normalize_provider_host(host_raw)
if not host or not owner or not name:
print(f"[WARN] Could not parse repo from URL: {url}")
return
spec = RepoSpec(
host=host,
owner=owner,
name=name,
private=private,
description=description,
)
provider_kind = _provider_hint_from_host(host)
res = set_repo_visibility(
spec,
private=private,
provider_hint=ProviderHint(kind=provider_kind),
options=VisibilityOptions(preview=preview),
)
print(f"[REMOTE VISIBILITY] {res.status.upper()}: {res.message}")
def _print_probe_result(name: str | None, url: str, *, cwd: str) -> None:
"""
Print probe result for a git remote URL, including a short failure reason.
"""
ok, reason = probe_remote_reachable_detail(url, cwd=cwd)
prefix = f"{name}: " if name else ""
if ok:
print(f"[OK] {prefix}{url}")
return
print(f"[WARN] {prefix}{url}")
if reason:
reason = reason.strip()
if len(reason) > 240:
reason = reason[:240].rstrip() + "…"
print(f" reason: {reason}")
def _setup_local_mirrors_for_repo(
repo: Repository,
repositories_base_dir: str,
@@ -48,6 +109,7 @@ def _setup_remote_mirrors_for_repo(
all_repos: List[Repository],
preview: bool,
ensure_remote: bool,
ensure_visibility: str | None,
) -> None:
ctx = build_context(repo, repositories_base_dir, all_repos)
@@ -56,33 +118,78 @@ def _setup_remote_mirrors_for_repo(
print(f"[MIRROR SETUP:REMOTE] dir: {ctx.repo_dir}")
print("------------------------------------------------------------")
if ensure_remote:
ensure_remote_repository(
repo,
repositories_base_dir,
all_repos,
preview,
)
git_mirrors = {
k: v for k, v in ctx.resolved_mirrors.items() if _is_git_remote_url(v)
}
# Probe only git URLs (do not try ls-remote against PyPI etc.)
# If there are no mirrors at all, probe the primary git URL.
git_mirrors = {k: v for k, v in ctx.resolved_mirrors.items() if _is_git_remote_url(v)}
def _desired_private_default() -> bool:
# default behavior: repo['private'] (or True)
if ensure_visibility == "public":
return False
if ensure_visibility == "private":
return True
return bool(repo.get("private", True))
def _should_enforce_visibility() -> bool:
return ensure_visibility in ("public", "private")
def _visibility_private_value() -> bool:
return ensure_visibility == "private"
description = str(repo.get("description", ""))
# If there are no git mirrors, fall back to primary (git) URL.
if not git_mirrors:
primary = determine_primary_remote_url(repo, ctx)
if not primary or not _is_git_remote_url(primary):
print("[INFO] No git mirrors to probe.")
print("[INFO] No git mirrors to probe or provision.")
print()
return
ok = probe_remote_reachable(primary, cwd=ctx.repo_dir)
print("[OK]" if ok else "[WARN]", primary)
if ensure_remote:
print(f"[REMOTE ENSURE] ensuring primary: {primary}")
ensure_remote_repository_for_url(
url=primary,
private_default=_desired_private_default(),
description=description,
preview=preview,
)
# IMPORTANT: enforce visibility only if requested
if _should_enforce_visibility():
_apply_visibility_for_url(
url=primary,
private=_visibility_private_value(),
description=description,
preview=preview,
)
print()
_print_probe_result(None, primary, cwd=ctx.repo_dir)
print()
return
# Provision ALL git mirrors (if requested)
if ensure_remote:
for name, url in git_mirrors.items():
print(f"[REMOTE ENSURE] ensuring mirror {name!r}: {url}")
ensure_remote_repository_for_url(
url=url,
private_default=_desired_private_default(),
description=description,
preview=preview,
)
if _should_enforce_visibility():
_apply_visibility_for_url(
url=url,
private=_visibility_private_value(),
description=description,
preview=preview,
)
print()
# Probe ALL git mirrors
for name, url in git_mirrors.items():
ok = probe_remote_reachable(url, cwd=ctx.repo_dir)
print(f"[OK] {name}: {url}" if ok else f"[WARN] {name}: {url}")
_print_probe_result(name, url, cwd=ctx.repo_dir)
print()
@@ -95,6 +202,7 @@ def setup_mirrors(
local: bool = True,
remote: bool = True,
ensure_remote: bool = False,
ensure_visibility: str | None = None,
) -> None:
for repo in selected_repos:
if local:
@@ -112,4 +220,5 @@ def setup_mirrors(
all_repos,
preview,
ensure_remote,
ensure_visibility,
)

View File

@@ -17,7 +17,7 @@ def hostport_from_git_url(url: str) -> Tuple[str, Optional[str]]:
netloc = netloc.split("@", 1)[1]
if netloc.startswith("[") and "]" in netloc:
host = netloc[1:netloc.index("]")]
host = netloc[1 : netloc.index("]")]
rest = netloc[netloc.index("]") + 1 :]
port = rest[1:] if rest.startswith(":") else None
return host.strip(), (port.strip() if port else None)
@@ -43,7 +43,7 @@ def normalize_provider_host(host: str) -> str:
return ""
if host.startswith("[") and "]" in host:
host = host[1:host.index("]")]
host = host[1 : host.index("]")]
if ":" in host and host.count(":") == 1:
host = host.rsplit(":", 1)[0]

View File
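The slice changes above only add PEP 8 spacing around the bounds; behavior is unchanged. A self-contained sketch of the bracketed-IPv6 branch they touch, following the logic visible in the hunk (the unbracketed fallback branch is an assumption, since it lies outside the diff context):

```python
from typing import Optional, Tuple


def hostport_from_netloc(netloc: str) -> Tuple[str, Optional[str]]:
    """Extract host and optional port, handling [IPv6]:port netlocs."""
    # Drop any user@ prefix, as in the original helper.
    if "@" in netloc:
        netloc = netloc.split("@", 1)[1]
    if netloc.startswith("[") and "]" in netloc:
        # Bracketed IPv6 literal: host sits between the brackets,
        # an optional :port follows the closing bracket.
        host = netloc[1 : netloc.index("]")]
        rest = netloc[netloc.index("]") + 1 :]
        port = rest[1:] if rest.startswith(":") else None
        return host.strip(), (port.strip() if port else None)
    if ":" in netloc:
        host, port = netloc.rsplit(":", 1)
        return host.strip(), port.strip()
    return netloc.strip(), None
```

The bracket handling matters because a bare `split(":")` would cut an IPv6 address like `2001:db8::1` in half.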

@@ -0,0 +1,134 @@
from __future__ import annotations
from typing import List
from pkgmgr.core.remote_provisioning import ProviderHint, RepoSpec, set_repo_visibility
from pkgmgr.core.remote_provisioning.visibility import VisibilityOptions
from .context import build_context
from .git_remote import determine_primary_remote_url
from .types import Repository
from .url_utils import normalize_provider_host, parse_repo_from_git_url
def _is_git_remote_url(url: str) -> bool:
# Keep same semantics as setup_cmd.py / git_remote.py
u = (url or "").strip()
if not u:
return False
if u.startswith("git@"):
return True
if u.startswith("ssh://"):
return True
if (u.startswith("https://") or u.startswith("http://")) and u.endswith(".git"):
return True
return False
def _provider_hint_from_host(host: str) -> str | None:
h = (host or "").lower()
if h == "github.com":
return "github"
# Best-effort default for self-hosted git domains
return "gitea" if h else None
def _apply_visibility_for_url(
*,
url: str,
private: bool,
description: str,
preview: bool,
) -> None:
host_raw, owner, name = parse_repo_from_git_url(url)
host = normalize_provider_host(host_raw)
if not host or not owner or not name:
print(f"[WARN] Could not parse repo from URL: {url}")
return
spec = RepoSpec(
host=host,
owner=owner,
name=name,
private=private,
description=description,
)
provider_kind = _provider_hint_from_host(host)
res = set_repo_visibility(
spec,
private=private,
provider_hint=ProviderHint(kind=provider_kind),
options=VisibilityOptions(preview=preview),
)
print(f"[REMOTE VISIBILITY] {res.status.upper()}: {res.message}")
def set_mirror_visibility(
selected_repos: List[Repository],
repositories_base_dir: str,
all_repos: List[Repository],
*,
visibility: str,
preview: bool = False,
) -> None:
"""
Set remote repository visibility for all git mirrors of each selected repo.
visibility:
- "private"
- "public"
"""
v = (visibility or "").strip().lower()
if v not in ("private", "public"):
raise ValueError("visibility must be 'private' or 'public'")
desired_private = v == "private"
for repo in selected_repos:
ctx = build_context(repo, repositories_base_dir, all_repos)
print("------------------------------------------------------------")
print(f"[MIRROR VISIBILITY] {ctx.identifier}")
print(f"[MIRROR VISIBILITY] dir: {ctx.repo_dir}")
print(f"[MIRROR VISIBILITY] target: {v}")
print("------------------------------------------------------------")
git_mirrors = {
name: url
for name, url in ctx.resolved_mirrors.items()
if url and _is_git_remote_url(url)
}
# If there are no git mirrors, fall back to primary (git) URL.
if not git_mirrors:
primary = determine_primary_remote_url(repo, ctx)
if not primary or not _is_git_remote_url(primary):
print(
"[INFO] No git mirrors found (and no primary git URL). Nothing to do."
)
print()
continue
print(f"[MIRROR VISIBILITY] applying to primary: {primary}")
_apply_visibility_for_url(
url=primary,
private=desired_private,
description=str(repo.get("description", "")),
preview=preview,
)
print()
continue
# Apply to ALL git mirrors
for name, url in git_mirrors.items():
print(f"[MIRROR VISIBILITY] applying to mirror {name!r}: {url}")
_apply_visibility_for_url(
url=url,
private=desired_private,
description=str(repo.get("description", "")),
preview=preview,
)
print()

View File
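The new `visibility_cmd` module deliberately keeps the same URL classification as `setup_cmd.py` and `git_remote.py`, so only SSH-style and `.git`-suffixed HTTP(S) URLs are treated as git mirrors (package-index mirrors such as PyPI are skipped). The predicate is small enough to exercise in isolation; this copy follows the file above verbatim:

```python
def is_git_remote_url(url: str) -> bool:
    """True for SSH-style and .git-suffixed HTTP(S) URLs; False otherwise."""
    u = (url or "").strip()
    if not u:
        return False
    if u.startswith("git@"):
        return True
    if u.startswith("ssh://"):
        return True
    if (u.startswith("https://") or u.startswith("http://")) and u.endswith(".git"):
        return True
    return False
```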

@@ -4,7 +4,16 @@ from pkgmgr.core.repository.dir import get_repo_dir
from pkgmgr.core.command.run import run_command
import sys
def exec_proxy_command(proxy_prefix: str, selected_repos, repositories_base_dir, all_repos, proxy_command: str, extra_args, preview: bool):
def exec_proxy_command(
proxy_prefix: str,
selected_repos,
repositories_base_dir,
all_repos,
proxy_command: str,
extra_args,
preview: bool,
):
"""Execute a given proxy command with extra arguments for each repository."""
error_repos = []
max_exit_code = 0
@@ -22,7 +31,9 @@ def exec_proxy_command(proxy_prefix: str, selected_repos, repositories_base_dir,
try:
run_command(full_cmd, cwd=repo_dir, preview=preview)
except SystemExit as e:
print(f"[ERROR] Command failed in {repo_identifier} with exit code {e.code}.")
print(
f"[ERROR] Command failed in {repo_identifier} with exit code {e.code}."
)
error_repos.append((repo_identifier, e.code))
max_exit_code = max(max_exit_code, e.code)
@@ -30,4 +41,4 @@ def exec_proxy_command(proxy_prefix: str, selected_repos, repositories_base_dir,
print("\nSummary of failed commands:")
for repo_identifier, exit_code in error_repos:
print(f"- {repo_identifier} failed with exit code {exit_code}")
sys.exit(max_exit_code)
sys.exit(max_exit_code)

View File

@@ -1,519 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
File and metadata update helpers for the release workflow.
Responsibilities:
- Update pyproject.toml with the new version.
- Update flake.nix, PKGBUILD, RPM spec files where present.
- Prepend release entries to CHANGELOG.md.
- Maintain distribution-specific changelog files:
* debian/changelog
* RPM spec %changelog section
including maintainer metadata where applicable.
"""
from __future__ import annotations
import os
import re
import subprocess
import sys
import tempfile
from datetime import date, datetime
from typing import Optional, Tuple
from pkgmgr.core.git.queries import get_config_value
# ---------------------------------------------------------------------------
# Editor helper for interactive changelog messages
# ---------------------------------------------------------------------------
def _open_editor_for_changelog(initial_message: Optional[str] = None) -> str:
"""
Open $EDITOR (fallback 'nano') so the user can enter a changelog message.
The temporary file is pre-filled with commented instructions and an
optional initial_message. Lines starting with '#' are ignored when the
message is read back.
Returns the final message (may be empty string if user leaves it blank).
"""
editor = os.environ.get("EDITOR", "nano")
with tempfile.NamedTemporaryFile(
mode="w+",
delete=False,
encoding="utf-8",
) as tmp:
tmp_path = tmp.name
tmp.write(
"# Write the changelog entry for this release.\n"
"# Lines starting with '#' will be ignored.\n"
"# Empty result will fall back to a generic message.\n\n"
)
if initial_message:
tmp.write(initial_message.strip() + "\n")
tmp.flush()
try:
subprocess.call([editor, tmp_path])
except FileNotFoundError:
print(
f"[WARN] Editor {editor!r} not found; proceeding without "
"interactive changelog message."
)
try:
with open(tmp_path, "r", encoding="utf-8") as f:
content = f.read()
finally:
try:
os.remove(tmp_path)
except OSError:
pass
lines = [line for line in content.splitlines() if not line.strip().startswith("#")]
return "\n".join(lines).strip()
# ---------------------------------------------------------------------------
# File update helpers (pyproject + extra packaging + changelog)
# ---------------------------------------------------------------------------
def update_pyproject_version(
pyproject_path: str,
new_version: str,
preview: bool = False,
) -> None:
"""
Update the version in pyproject.toml with the new version.
The function looks for a line matching:
version = "X.Y.Z"
and replaces the version part with the given new_version string.
If the file does not exist, it is skipped without failing the release.
"""
if not os.path.exists(pyproject_path):
print(
f"[INFO] pyproject.toml not found at: {pyproject_path}, "
"skipping version update."
)
return
try:
with open(pyproject_path, "r", encoding="utf-8") as f:
content = f.read()
except OSError as exc:
print(
f"[WARN] Could not read pyproject.toml at {pyproject_path}: {exc}. "
"Skipping version update."
)
return
pattern = r'^(version\s*=\s*")([^"]+)(")'
new_content, count = re.subn(
pattern,
lambda m: f'{m.group(1)}{new_version}{m.group(3)}',
content,
flags=re.MULTILINE,
)
if count == 0:
print("[ERROR] Could not find version line in pyproject.toml")
sys.exit(1)
if preview:
print(f"[PREVIEW] Would update pyproject.toml version to {new_version}")
return
with open(pyproject_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(f"Updated pyproject.toml version to {new_version}")
def update_flake_version(
flake_path: str,
new_version: str,
preview: bool = False,
) -> None:
"""
Update the version in flake.nix, if present.
"""
if not os.path.exists(flake_path):
print("[INFO] flake.nix not found, skipping.")
return
try:
with open(flake_path, "r", encoding="utf-8") as f:
content = f.read()
except Exception as exc:
print(f"[WARN] Could not read flake.nix: {exc}")
return
pattern = r'(version\s*=\s*")([^"]+)(")'
new_content, count = re.subn(
pattern,
lambda m: f'{m.group(1)}{new_version}{m.group(3)}',
content,
)
if count == 0:
print("[WARN] No version assignment found in flake.nix, skipping.")
return
if preview:
print(f"[PREVIEW] Would update flake.nix version to {new_version}")
return
with open(flake_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(f"Updated flake.nix version to {new_version}")
def update_pkgbuild_version(
pkgbuild_path: str,
new_version: str,
preview: bool = False,
) -> None:
"""
Update the version in PKGBUILD, if present.
Expects:
pkgver=1.2.3
pkgrel=1
"""
if not os.path.exists(pkgbuild_path):
print("[INFO] PKGBUILD not found, skipping.")
return
try:
with open(pkgbuild_path, "r", encoding="utf-8") as f:
content = f.read()
except Exception as exc:
print(f"[WARN] Could not read PKGBUILD: {exc}")
return
ver_pattern = r"^(pkgver\s*=\s*)(.+)$"
new_content, ver_count = re.subn(
ver_pattern,
lambda m: f"{m.group(1)}{new_version}",
content,
flags=re.MULTILINE,
)
if ver_count == 0:
print("[WARN] No pkgver line found in PKGBUILD.")
new_content = content
rel_pattern = r"^(pkgrel\s*=\s*)(.+)$"
new_content, rel_count = re.subn(
rel_pattern,
lambda m: f"{m.group(1)}1",
new_content,
flags=re.MULTILINE,
)
if rel_count == 0:
print("[WARN] No pkgrel line found in PKGBUILD.")
if preview:
print(f"[PREVIEW] Would update PKGBUILD to pkgver={new_version}, pkgrel=1")
return
with open(pkgbuild_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(f"Updated PKGBUILD to pkgver={new_version}, pkgrel=1")


def update_spec_version(
    spec_path: str,
    new_version: str,
    preview: bool = False,
) -> None:
    """
    Update the version in an RPM spec file, if present.
    """
    if not os.path.exists(spec_path):
        print("[INFO] RPM spec file not found, skipping.")
        return
    try:
        with open(spec_path, "r", encoding="utf-8") as f:
            content = f.read()
    except Exception as exc:
        print(f"[WARN] Could not read spec file: {exc}")
        return
    ver_pattern = r"^(Version:\s*)(.+)$"
    new_content, ver_count = re.subn(
        ver_pattern,
        lambda m: f"{m.group(1)}{new_version}",
        content,
        flags=re.MULTILINE,
    )
    if ver_count == 0:
        print("[WARN] No 'Version:' line found in spec file.")
    rel_pattern = r"^(Release:\s*)(.+)$"

    def _release_repl(m: re.Match[str]) -> str:  # type: ignore[name-defined]
        rest = m.group(2).strip()
        match = re.match(r"^(\d+)(.*)$", rest)
        if match:
            suffix = match.group(2)
        else:
            suffix = ""
        return f"{m.group(1)}1{suffix}"

    new_content, rel_count = re.subn(
        rel_pattern,
        _release_repl,
        new_content,
        flags=re.MULTILINE,
    )
    if rel_count == 0:
        print("[WARN] No 'Release:' line found in spec file.")
    if preview:
        print(
            "[PREVIEW] Would update spec file "
            f"{os.path.basename(spec_path)} to Version: {new_version}, Release: 1..."
        )
        return
    with open(spec_path, "w", encoding="utf-8") as f:
        f.write(new_content)
    print(
        f"Updated spec file {os.path.basename(spec_path)} "
        f"to Version: {new_version}, Release: 1..."
    )


def update_changelog(
    changelog_path: str,
    new_version: str,
    message: Optional[str] = None,
    preview: bool = False,
) -> str:
    """
    Prepend a new release section to CHANGELOG.md with the new version,
    current date, and a message.
    """
    today = date.today().isoformat()
    if message is None:
        if preview:
            message = "Automated release."
        else:
            print(
                "\n[INFO] No release message provided, opening editor for "
                "changelog entry...\n"
            )
            editor_message = _open_editor_for_changelog()
            if not editor_message:
                message = "Automated release."
            else:
                message = editor_message
    header = f"## [{new_version}] - {today}\n"
    header += f"\n* {message}\n\n"
    if os.path.exists(changelog_path):
        try:
            with open(changelog_path, "r", encoding="utf-8") as f:
                changelog = f.read()
        except Exception as exc:
            print(f"[WARN] Could not read existing CHANGELOG.md: {exc}")
            changelog = ""
    else:
        changelog = ""
    new_changelog = header + "\n" + changelog if changelog else header
    print("\n================ CHANGELOG ENTRY ================")
    print(header.rstrip())
    print("=================================================\n")
    if preview:
        print(f"[PREVIEW] Would prepend new entry for {new_version} to CHANGELOG.md")
        return message
    with open(changelog_path, "w", encoding="utf-8") as f:
        f.write(new_changelog)
    print(f"Updated CHANGELOG.md with version {new_version}")
    return message


# ---------------------------------------------------------------------------
# Debian changelog helpers (with Git config fallback for maintainer)
# ---------------------------------------------------------------------------
def _get_debian_author() -> Tuple[str, str]:
    """
    Determine the maintainer name/email for debian/changelog entries.
    """
    name = os.environ.get("DEBFULLNAME")
    email = os.environ.get("DEBEMAIL")
    if not name:
        name = os.environ.get("GIT_AUTHOR_NAME")
    if not email:
        email = os.environ.get("GIT_AUTHOR_EMAIL")
    if not name:
        name = get_config_value("user.name")
    if not email:
        email = get_config_value("user.email")
    if not name:
        name = "Unknown Maintainer"
    if not email:
        email = "unknown@example.com"
    return name, email


def update_debian_changelog(
    debian_changelog_path: str,
    package_name: str,
    new_version: str,
    message: Optional[str] = None,
    preview: bool = False,
) -> None:
    """
    Prepend a new entry to debian/changelog, if it exists.
    """
    if not os.path.exists(debian_changelog_path):
        print("[INFO] debian/changelog not found, skipping.")
        return
    debian_version = f"{new_version}-1"
    now = datetime.now().astimezone()
    date_str = now.strftime("%a, %d %b %Y %H:%M:%S %z")
    author_name, author_email = _get_debian_author()
    first_line = f"{package_name} ({debian_version}) unstable; urgency=medium"
    body_line = message.strip() if message else f"Automated release {new_version}."
    stanza = (
        f"{first_line}\n\n"
        f"  * {body_line}\n\n"
        f" -- {author_name} <{author_email}>  {date_str}\n\n"
    )
    if preview:
        print(
            "[PREVIEW] Would prepend the following stanza to debian/changelog:\n"
            f"{stanza}"
        )
        return
    try:
        with open(debian_changelog_path, "r", encoding="utf-8") as f:
            existing = f.read()
    except Exception as exc:
        print(f"[WARN] Could not read debian/changelog: {exc}")
        existing = ""
    new_content = stanza + existing
    with open(debian_changelog_path, "w", encoding="utf-8") as f:
        f.write(new_content)
    print(f"Updated debian/changelog with version {debian_version}")


# ---------------------------------------------------------------------------
# Fedora / RPM spec %changelog helper
# ---------------------------------------------------------------------------
def update_spec_changelog(
    spec_path: str,
    package_name: str,
    new_version: str,
    message: Optional[str] = None,
    preview: bool = False,
) -> None:
    """
    Prepend a new entry to the %changelog section of an RPM spec file,
    if present.

    Typical RPM-style entry:

        * Tue Dec 09 2025 John Doe <john@example.com> - 0.5.1-1
        - Your changelog message
    """
    if not os.path.exists(spec_path):
        print("[INFO] RPM spec file not found, skipping spec changelog update.")
        return
    try:
        with open(spec_path, "r", encoding="utf-8") as f:
            content = f.read()
    except Exception as exc:
        print(f"[WARN] Could not read spec file for changelog update: {exc}")
        return
    debian_version = f"{new_version}-1"
    now = datetime.now().astimezone()
    date_str = now.strftime("%a %b %d %Y")
    # Reuse Debian maintainer discovery for author name/email.
    author_name, author_email = _get_debian_author()
    body_line = message.strip() if message else f"Automated release {new_version}."
    stanza = (
        f"* {date_str} {author_name} <{author_email}> - {debian_version}\n"
        f"- {body_line}\n\n"
    )
    marker = "%changelog"
    idx = content.find(marker)
    if idx == -1:
        # No %changelog section yet: append one at the end.
        new_content = content.rstrip() + "\n\n%changelog\n" + stanza
    else:
        # Insert stanza right after the %changelog line.
        before = content[: idx + len(marker)]
        after = content[idx + len(marker) :]
        new_content = before + "\n" + stanza + after.lstrip("\n")
    if preview:
        print(
            "[PREVIEW] Would update RPM %changelog section with the following "
            "stanza:\n"
            f"{stanza}"
        )
        return
    try:
        with open(spec_path, "w", encoding="utf-8") as f:
            f.write(new_content)
    except Exception as exc:
        print(f"[WARN] Failed to write updated spec changelog section: {exc}")
        return
    print(
        f"Updated RPM %changelog section in {os.path.basename(spec_path)} "
        f"for {package_name} {debian_version}"
    )


@@ -0,0 +1,35 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Backwards-compatible facade for the release file update helpers.

Implementations live in this package:

    pkgmgr.actions.release.files.*

Keep this package stable so existing imports continue to work, e.g.:

    from pkgmgr.actions.release.files import update_pyproject_version
"""
from __future__ import annotations

from .editor import _open_editor_for_changelog
from .pyproject import update_pyproject_version
from .flake import update_flake_version
from .pkgbuild import update_pkgbuild_version
from .rpm_spec import update_spec_version
from .changelog_md import update_changelog
from .debian import _get_debian_author, update_debian_changelog
from .rpm_changelog import update_spec_changelog

__all__ = [
    "_open_editor_for_changelog",
    "update_pyproject_version",
    "update_flake_version",
    "update_pkgbuild_version",
    "update_spec_version",
    "update_changelog",
    "_get_debian_author",
    "update_debian_changelog",
    "update_spec_changelog",
]


@@ -0,0 +1,62 @@
from __future__ import annotations

import os
from datetime import date
from typing import Optional

from .editor import _open_editor_for_changelog


def update_changelog(
    changelog_path: str,
    new_version: str,
    message: Optional[str] = None,
    preview: bool = False,
) -> str:
    """
    Prepend a new release section to CHANGELOG.md with the new version,
    current date, and a message.
    """
    today = date.today().isoformat()
    if message is None:
        if preview:
            message = "Automated release."
        else:
            print(
                "\n[INFO] No release message provided, opening editor for changelog entry...\n"
            )
            editor_message = _open_editor_for_changelog()
            if not editor_message:
                message = "Automated release."
            else:
                message = editor_message
    header = f"## [{new_version}] - {today}\n"
    header += f"\n* {message}\n\n"
    if os.path.exists(changelog_path):
        try:
            with open(changelog_path, "r", encoding="utf-8") as f:
                changelog = f.read()
        except Exception as exc:
            print(f"[WARN] Could not read existing CHANGELOG.md: {exc}")
            changelog = ""
    else:
        changelog = ""
    new_changelog = header + "\n" + changelog if changelog else header
    print("\n================ CHANGELOG ENTRY ================")
    print(header.rstrip())
    print("=================================================\n")
    if preview:
        print(f"[PREVIEW] Would prepend new entry for {new_version} to CHANGELOG.md")
        return message
    with open(changelog_path, "w", encoding="utf-8") as f:
        f.write(new_changelog)
    print(f"Updated CHANGELOG.md with version {new_version}")
    return message
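The header assembly above can be sketched in isolation. `changelog_entry` below is a hypothetical helper, not part of pkgmgr; it only mirrors how the markdown section is built before it is prepended to the existing file content:

```python
from datetime import date
from typing import Optional


def changelog_entry(new_version: str, message: str, today: Optional[str] = None) -> str:
    # Mirrors update_changelog's header: "## [<version>] - <date>", one bullet.
    day = today or date.today().isoformat()
    return f"## [{new_version}] - {day}\n\n* {message}\n\n"


old_changelog = "## [1.13.2] - 2026-03-26\n\n* Previous release.\n\n"
entry = changelog_entry("1.13.3", "Automated release.", today="2026-03-26")
# New file content: fresh entry first, a separator newline, then the old text.
print(entry + "\n" + old_changelog)
```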


@@ -0,0 +1,74 @@
from __future__ import annotations

import os
from datetime import datetime
from typing import Optional, Tuple

from pkgmgr.core.git.queries import get_config_value


def _get_debian_author() -> Tuple[str, str]:
    name = os.environ.get("DEBFULLNAME")
    email = os.environ.get("DEBEMAIL")
    if not name:
        name = os.environ.get("GIT_AUTHOR_NAME")
    if not email:
        email = os.environ.get("GIT_AUTHOR_EMAIL")
    if not name:
        name = get_config_value("user.name")
    if not email:
        email = get_config_value("user.email")
    if not name:
        name = "Unknown Maintainer"
    if not email:
        email = "unknown@example.com"
    return name, email


def update_debian_changelog(
    debian_changelog_path: str,
    package_name: str,
    new_version: str,
    message: Optional[str] = None,
    preview: bool = False,
) -> None:
    if not os.path.exists(debian_changelog_path):
        print("[INFO] debian/changelog not found, skipping.")
        return
    debian_version = f"{new_version}-1"
    now = datetime.now().astimezone()
    date_str = now.strftime("%a, %d %b %Y %H:%M:%S %z")
    author_name, author_email = _get_debian_author()
    first_line = f"{package_name} ({debian_version}) unstable; urgency=medium"
    body_line = message.strip() if message else f"Automated release {new_version}."
    stanza = (
        f"{first_line}\n\n"
        f"  * {body_line}\n\n"
        f" -- {author_name} <{author_email}>  {date_str}\n\n"
    )
    if preview:
        print(
            "[PREVIEW] Would prepend the following stanza to debian/changelog:\n"
            f"{stanza}"
        )
        return
    try:
        with open(debian_changelog_path, "r", encoding="utf-8") as f:
            existing = f.read()
    except Exception as exc:
        print(f"[WARN] Could not read debian/changelog: {exc}")
        existing = ""
    with open(debian_changelog_path, "w", encoding="utf-8") as f:
        f.write(stanza + existing)
    print(f"Updated debian/changelog with version {debian_version}")
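For reference, the dch-style stanza layout can be sketched standalone. `debian_stanza` and the sample values are illustrative, not from pkgmgr; the two-space indent before `*` and the double space before the timestamp are part of the standard Debian changelog format:

```python
def debian_stanza(package: str, version: str, body: str,
                  author: str, email: str, date_str: str) -> str:
    # Standard dch layout: "  * " for the body bullet, and a trailer line
    # with two spaces between the maintainer address and the timestamp.
    return (
        f"{package} ({version}-1) unstable; urgency=medium\n\n"
        f"  * {body}\n\n"
        f" -- {author} <{email}>  {date_str}\n\n"
    )


print(debian_stanza("pkgmgr", "1.13.3", "Automated release 1.13.3.",
                    "Jane Doe", "jane@example.com",
                    "Thu, 26 Mar 2026 12:26:55 +0100"))
```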


@@ -0,0 +1,45 @@
from __future__ import annotations

import os
import subprocess
import tempfile
from typing import Optional


def _open_editor_for_changelog(initial_message: Optional[str] = None) -> str:
    editor = os.environ.get("EDITOR", "nano")
    with tempfile.NamedTemporaryFile(
        mode="w+",
        delete=False,
        encoding="utf-8",
    ) as tmp:
        tmp_path = tmp.name
        tmp.write(
            "# Write the changelog entry for this release.\n"
            "# Lines starting with '#' will be ignored.\n"
            "# Empty result will fall back to a generic message.\n\n"
        )
        if initial_message:
            tmp.write(initial_message.strip() + "\n")
        tmp.flush()
    try:
        subprocess.call([editor, tmp_path])
    except FileNotFoundError:
        print(
            f"[WARN] Editor {editor!r} not found; proceeding without "
            "interactive changelog message."
        )
    try:
        with open(tmp_path, "r", encoding="utf-8") as f:
            content = f.read()
    finally:
        try:
            os.remove(tmp_path)
        except OSError:
            pass
    lines = [line for line in content.splitlines() if not line.strip().startswith("#")]
    return "\n".join(lines).strip()
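The comment-stripping at the end of `_open_editor_for_changelog` can be exercised on its own; `strip_comment_lines` is a hypothetical stand-in that applies the same filter:

```python
def strip_comment_lines(text: str) -> str:
    # Drop lines whose first non-blank character is '#', keep everything else,
    # then trim surrounding whitespace -- an empty result means "no message".
    lines = [ln for ln in text.splitlines() if not ln.strip().startswith("#")]
    return "\n".join(lines).strip()


raw = "# Write the changelog entry for this release.\n\nFix GPG verification.\n"
print(strip_comment_lines(raw))  # -> Fix GPG verification.
```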


@@ -0,0 +1,39 @@
from __future__ import annotations

import os
import re


def update_flake_version(
    flake_path: str, new_version: str, preview: bool = False
) -> None:
    if not os.path.exists(flake_path):
        print("[INFO] flake.nix not found, skipping.")
        return
    try:
        with open(flake_path, "r", encoding="utf-8") as f:
            content = f.read()
    except Exception as exc:
        print(f"[WARN] Could not read flake.nix: {exc}")
        return
    pattern = r'(version\s*=\s*")([^"]+)(")'
    new_content, count = re.subn(
        pattern,
        lambda m: f"{m.group(1)}{new_version}{m.group(3)}",
        content,
    )
    if count == 0:
        print("[WARN] No version found in flake.nix.")
        return
    if preview:
        print(f"[PREVIEW] Would update flake.nix version to {new_version}")
        return
    with open(flake_path, "w", encoding="utf-8") as f:
        f.write(new_content)
    print(f"Updated flake.nix version to {new_version}")
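Note that `re.subn` with no `count` argument rewrites every `version = "..."` string it finds, not just the first, which is why the helper only warns when there are zero matches. A minimal sketch (the flake snippet is invented):

```python
import re

flake = """
{
  outputs = { self }: {
    packages.default = derivation {
      version = "1.13.2";
    };
  };
}
"""

new, count = re.subn(
    r'(version\s*=\s*")([^"]+)(")',
    lambda m: f"{m.group(1)}1.13.3{m.group(3)}",
    flake,
)
print(count)  # number of version strings rewritten
```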


@@ -0,0 +1,41 @@
from __future__ import annotations
import os
import re
def update_pkgbuild_version(
pkgbuild_path: str, new_version: str, preview: bool = False
) -> None:
if not os.path.exists(pkgbuild_path):
print("[INFO] PKGBUILD not found, skipping.")
return
try:
with open(pkgbuild_path, "r", encoding="utf-8") as f:
content = f.read()
except Exception as exc:
print(f"[WARN] Could not read PKGBUILD: {exc}")
return
content, _ = re.subn(
r"^(pkgver\s*=\s*)(.+)$",
lambda m: f"{m.group(1)}{new_version}",
content,
flags=re.MULTILINE,
)
content, _ = re.subn(
r"^(pkgrel\s*=\s*)(.+)$",
lambda m: f"{m.group(1)}1",
content,
flags=re.MULTILINE,
)
if preview:
print(f"[PREVIEW] Would update PKGBUILD to pkgver={new_version}, pkgrel=1")
return
with open(pkgbuild_path, "w", encoding="utf-8") as f:
f.write(content)
print(f"Updated PKGBUILD to pkgver={new_version}, pkgrel=1")
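The two `re.MULTILINE`-anchored substitutions behave as below on a toy PKGBUILD (a sketch using `re.sub` directly; the file content is invented):

```python
import re

pkgbuild = "pkgname=pkgmgr\npkgver=1.13.2\npkgrel=3\narch=('any')\n"

# ^...$ with MULTILINE anchors each pattern to a whole line, so only the
# assignment lines are touched; the release counter is reset to 1.
pkgbuild = re.sub(r"^(pkgver\s*=\s*)(.+)$", r"\g<1>1.13.3", pkgbuild, flags=re.MULTILINE)
pkgbuild = re.sub(r"^(pkgrel\s*=\s*)(.+)$", r"\g<1>1", pkgbuild, flags=re.MULTILINE)
print(pkgbuild)
```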


@@ -0,0 +1,45 @@
from __future__ import annotations
import os
import re
def update_pyproject_version(
pyproject_path: str, new_version: str, preview: bool = False
) -> None:
if not os.path.exists(pyproject_path):
print(f"[INFO] pyproject.toml not found at: {pyproject_path}, skipping.")
return
try:
with open(pyproject_path, "r", encoding="utf-8") as f:
content = f.read()
except OSError as exc:
print(f"[WARN] Could not read pyproject.toml: {exc}")
return
m = re.search(r"(?ms)^\s*\[project\]\s*$.*?(?=^\s*\[|\Z)", content)
if not m:
raise RuntimeError("Missing [project] section in pyproject.toml")
project_block = m.group(0)
ver_pat = r'(?m)^(\s*version\s*=\s*")([^"]+)(")\s*$'
new_block, count = re.subn(
ver_pat,
lambda mm: f"{mm.group(1)}{new_version}{mm.group(3)}",
project_block,
)
if count == 0:
raise RuntimeError("Missing version key in [project] section")
new_content = content[: m.start()] + new_block + content[m.end() :]
if preview:
print(f"[PREVIEW] Would update pyproject.toml version to {new_version}")
return
with open(pyproject_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(f"Updated pyproject.toml version to {new_version}")
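Because the substitution is applied only inside the extracted `[project]` block, a `version` key in any other table is left alone. A self-contained sketch with an invented `[tool.something]` table:

```python
import re

toml = (
    "[project]\n"
    'name = "pkgmgr"\n'
    'version = "1.13.2"\n'
    "\n"
    "[tool.something]\n"
    'version = "0.1"\n'
)

# Same patterns as update_pyproject_version: isolate [project], then
# rewrite the version key only within that block.
m = re.search(r"(?ms)^\s*\[project\]\s*$.*?(?=^\s*\[|\Z)", toml)
new_block, count = re.subn(
    r'(?m)^(\s*version\s*=\s*")([^"]+)(")\s*$',
    lambda mm: f"{mm.group(1)}1.13.3{mm.group(3)}",
    m.group(0),
)
new_toml = toml[: m.start()] + new_block + toml[m.end():]
print(new_toml)
```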


@@ -0,0 +1,67 @@
from __future__ import annotations
import os
from datetime import datetime
from typing import Optional
from .debian import _get_debian_author
def update_spec_changelog(
spec_path: str,
package_name: str,
new_version: str,
message: Optional[str] = None,
preview: bool = False,
) -> None:
if not os.path.exists(spec_path):
print("[INFO] RPM spec file not found, skipping spec changelog update.")
return
try:
with open(spec_path, "r", encoding="utf-8") as f:
content = f.read()
except Exception as exc:
print(f"[WARN] Could not read spec file for changelog update: {exc}")
return
debian_version = f"{new_version}-1"
now = datetime.now().astimezone()
date_str = now.strftime("%a %b %d %Y")
author_name, author_email = _get_debian_author()
body_line = message.strip() if message else f"Automated release {new_version}."
stanza = (
f"* {date_str} {author_name} <{author_email}> - {debian_version}\n"
f"- {body_line}\n\n"
)
marker = "%changelog"
idx = content.find(marker)
if idx == -1:
new_content = content.rstrip() + "\n\n%changelog\n" + stanza
else:
before = content[: idx + len(marker)]
after = content[idx + len(marker) :]
new_content = before + "\n" + stanza + after.lstrip("\n")
if preview:
print(
"[PREVIEW] Would update RPM %changelog section with the following stanza:\n"
f"{stanza}"
)
return
try:
with open(spec_path, "w", encoding="utf-8") as f:
f.write(new_content)
except Exception as exc:
print(f"[WARN] Failed to write updated spec changelog section: {exc}")
return
print(
f"Updated RPM %changelog section in {os.path.basename(spec_path)} "
f"for {package_name} {debian_version}"
)
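The insertion logic keeps the newest entry directly under `%changelog`, ahead of older entries. A sketch with invented spec content:

```python
spec = (
    "Name: pkgmgr\n"
    "Version: 1.13.3\n"
    "\n"
    "%changelog\n"
    "* Fri Mar 20 2026 Jane Doe <jane@example.com> - 1.13.2-1\n"
    "- Previous release.\n"
)
stanza = (
    "* Thu Mar 26 2026 Jane Doe <jane@example.com> - 1.13.3-1\n"
    "- Automated release 1.13.3.\n\n"
)

marker = "%changelog"
idx = spec.find(marker)
if idx == -1:
    # No section yet: append one at the end.
    new_spec = spec.rstrip() + "\n\n%changelog\n" + stanza
else:
    # Insert right after the %changelog line, before the older entries.
    before = spec[: idx + len(marker)]
    after = spec[idx + len(marker):]
    new_spec = before + "\n" + stanza + after.lstrip("\n")
print(new_spec)
```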


@@ -0,0 +1,66 @@
from __future__ import annotations
import os
import re
def update_spec_version(
spec_path: str, new_version: str, preview: bool = False
) -> None:
"""
Update the version in an RPM spec file, if present.
"""
if not os.path.exists(spec_path):
print("[INFO] RPM spec file not found, skipping.")
return
try:
with open(spec_path, "r", encoding="utf-8") as f:
content = f.read()
except Exception as exc:
print(f"[WARN] Could not read spec file: {exc}")
return
ver_pattern = r"^(Version:\s*)(.+)$"
new_content, ver_count = re.subn(
ver_pattern,
lambda m: f"{m.group(1)}{new_version}",
content,
flags=re.MULTILINE,
)
if ver_count == 0:
print("[WARN] No 'Version:' line found in spec file.")
rel_pattern = r"^(Release:\s*)(.+)$"
def _release_repl(m: re.Match[str]) -> str:
rest = m.group(2).strip()
match = re.match(r"^(\d+)(.*)$", rest)
suffix = match.group(2) if match else ""
return f"{m.group(1)}1{suffix}"
new_content, rel_count = re.subn(
rel_pattern,
_release_repl,
new_content,
flags=re.MULTILINE,
)
if rel_count == 0:
print("[WARN] No 'Release:' line found in spec file.")
if preview:
print(
"[PREVIEW] Would update spec file "
f"{os.path.basename(spec_path)} to Version: {new_version}, Release: 1..."
)
return
with open(spec_path, "w", encoding="utf-8") as f:
f.write(new_content)
print(
f"Updated spec file {os.path.basename(spec_path)} "
f"to Version: {new_version}, Release: 1..."
)
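The `_release_repl` helper resets the leading release number but keeps any macro suffix such as `%{?dist}`; a standalone sketch of that behavior:

```python
import re


def bump_release(value: str) -> str:
    # Reset the leading number to 1, preserving any suffix (e.g. %{?dist}).
    # A value with no leading digits is replaced wholesale with "1".
    m = re.match(r"^(\d+)(.*)$", value.strip())
    suffix = m.group(2) if m else ""
    return f"1{suffix}"


print(bump_release("4%{?dist}"))  # -> 1%{?dist}
print(bump_release("7"))          # -> 1
```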


@@ -80,7 +80,9 @@ def is_highest_version_tag(tag: str) -> bool:
         return True
     latest = max(parsed_all)
-    print(f"[INFO] Latest tag (parsed): v{'.'.join(map(str, latest))}, Current tag: {tag}")
+    print(
+        f"[INFO] Latest tag (parsed): v{'.'.join(map(str, latest))}, Current tag: {tag}"
+    )
     return parsed_current >= latest
@@ -93,7 +95,9 @@ def update_latest_tag(new_tag: str, *, preview: bool = False) -> None:
     - 'latest' is forced (floating tag), therefore the push uses --force.
     """
     target_ref = f"{new_tag}^{{}}"
-    print(f"[INFO] Updating 'latest' tag to point at {new_tag} (commit {target_ref})...")
+    print(
+        f"[INFO] Updating 'latest' tag to point at {new_tag} (commit {target_ref})..."
+    )
     tag_force_annotated(
         name="latest",
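`is_highest_version_tag` compares the parsed current tag against the maximum of all parsed tags; a simplified sketch of that comparison (the real parsing in pkgmgr may handle more tag shapes):

```python
def parse_tag(tag: str):
    # Simplified: accept tags like "v1.13.3" and return the tuple (1, 13, 3),
    # so tags compare numerically component by component.
    return tuple(int(part) for part in tag.lstrip("v").split("."))


existing = ["v1.12.5", "v1.13.0", "v1.13.3"]
current = "v1.13.3"
latest = max(parse_tag(t) for t in existing)
print(parse_tag(current) >= latest)  # True: 'latest' may be moved
```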


@@ -5,8 +5,8 @@ import sys
 from typing import Optional

 from pkgmgr.actions.branch import close_branch
-from pkgmgr.core.git import GitError
-from pkgmgr.core.git.commands import add, commit, push, tag_annotated
+from pkgmgr.core.git import GitRunError, run
+from pkgmgr.core.git.commands import add, commit, tag_annotated
 from pkgmgr.core.git.queries import get_current_branch
 from pkgmgr.core.repository.paths import resolve_repo_paths
@@ -40,7 +40,7 @@ def _release_impl(
     # Determine current branch early
     try:
         branch = get_current_branch() or "main"
-    except GitError:
+    except GitRunError:
         branch = "main"
     print(f"Releasing on branch: {branch}")
@@ -76,7 +76,9 @@ def _release_impl(
     if paths.arch_pkgbuild:
         update_pkgbuild_version(paths.arch_pkgbuild, new_ver_str, preview=preview)
     else:
-        print("[INFO] No PKGBUILD found (packaging/arch/PKGBUILD or PKGBUILD). Skipping.")
+        print(
+            "[INFO] No PKGBUILD found (packaging/arch/PKGBUILD or PKGBUILD). Skipping."
+        )

     if paths.rpm_spec:
         update_spec_version(paths.rpm_spec, new_ver_str, preview=preview)
@@ -123,42 +125,48 @@ def _release_impl(
         paths.rpm_spec,
         paths.debian_changelog,
     ]
-    existing_files = [p for p in files_to_add if isinstance(p, str) and p and os.path.exists(p)]
+    existing_files = [
+        p for p in files_to_add if isinstance(p, str) and p and os.path.exists(p)
+    ]

     if preview:
         add(existing_files, preview=True)
         commit(commit_msg, all=True, preview=True)
         tag_annotated(new_tag, tag_msg, preview=True)
-        push("origin", branch, preview=True)
-        push("origin", new_tag, preview=True)
+        run(["push", "origin", branch, new_tag], preview=True)
         if is_highest_version_tag(new_tag):
             update_latest_tag(new_tag, preview=True)
         else:
-            print(f"[PREVIEW] Skipping 'latest' update (tag {new_tag} is not the highest).")
+            print(
+                f"[PREVIEW] Skipping 'latest' update (tag {new_tag} is not the highest)."
+            )
         if close and branch not in ("main", "master"):
             if force:
                 print(f"[PREVIEW] Would delete branch {branch} (forced).")
             else:
-                print(f"[PREVIEW] Would ask whether to delete branch {branch} after release.")
+                print(
+                    f"[PREVIEW] Would ask whether to delete branch {branch} after release."
+                )
         return

     add(existing_files, preview=False)
     commit(commit_msg, all=True, preview=False)
     tag_annotated(new_tag, tag_msg, preview=False)

-    # Push branch and ONLY the newly created version tag (no --tags)
-    push("origin", branch, preview=False)
-    push("origin", new_tag, preview=False)
+    # Push branch and ONLY the newly created version tag in one command (no --tags)
+    run(["push", "origin", branch, new_tag], preview=False)

     # Update 'latest' only if this is the highest version tag
     try:
         if is_highest_version_tag(new_tag):
             update_latest_tag(new_tag, preview=False)
         else:
-            print(f"[INFO] Skipping 'latest' update (tag {new_tag} is not the highest).")
-    except GitError as exc:
+            print(
+                f"[INFO] Skipping 'latest' update (tag {new_tag} is not the highest)."
+            )
+    except GitRunError as exc:
         print(f"[WARN] Failed to update floating 'latest' tag for {new_tag}: {exc}")
         print("'latest' tag was not updated.")
@@ -166,7 +174,9 @@ def _release_impl(
     if close:
         if branch in ("main", "master"):
-            print(f"[INFO] close=True but current branch is {branch}; skipping branch deletion.")
+            print(
+                f"[INFO] close=True but current branch is {branch}; skipping branch deletion."
+            )
             return
         if not should_delete_branch(force=force):

@@ -55,7 +55,9 @@ def clone_repos(
         clone_url = _build_clone_url(repo, clone_mode)

         if not clone_url:
-            print(f"[WARNING] Cannot build clone URL for '{repo_identifier}'. Skipping.")
+            print(
+                f"[WARNING] Cannot build clone URL for '{repo_identifier}'. Skipping."
+            )
             continue

         shallow = clone_mode == "shallow"
@@ -84,7 +86,11 @@ def clone_repos(
                 continue

             print(f"[WARNING] SSH clone failed for '{repo_identifier}': {exc}")
-            choice = input("Do you want to attempt HTTPS clone instead? (y/N): ").strip().lower()
+            choice = (
+                input("Do you want to attempt HTTPS clone instead? (y/N): ")
+                .strip()
+                .lower()
+            )
             if choice != "y":
                 print(f"[INFO] HTTPS clone not attempted for '{repo_identifier}'.")
                 continue


@@ -63,6 +63,4 @@ def _strip_git_suffix(name: str) -> str:
 def _ensure_valid_repo_name(name: str) -> None:
     if not _NAME_RE.fullmatch(name):
-        raise ValueError(
-            "Repository name must match: lowercase a-z, 0-9, '_' and '-'."
-        )
+        raise ValueError("Repository name must match: lowercase a-z, 0-9, '_' and '-'.")


@@ -66,9 +66,7 @@ class TemplateRenderer:
         for root, _, files in os.walk(self.templates_dir):
             for fn in files:
                 if fn.endswith(".j2"):
-                    rel = os.path.relpath(
-                        os.path.join(root, fn), self.templates_dir
-                    )
+                    rel = os.path.relpath(os.path.join(root, fn), self.templates_dir)
                     print(f"[Preview] Would render template: {rel} -> {rel[:-3]}")

     @staticmethod


@@ -24,9 +24,13 @@ def deinstall_repos(
         # Remove alias link/file (interactive)
         if os.path.exists(alias_path):
-            confirm = input(
-                f"Are you sure you want to delete link '{alias_path}' for {repo_identifier}? [y/N]: "
-            ).strip().lower()
+            confirm = (
+                input(
+                    f"Are you sure you want to delete link '{alias_path}' for {repo_identifier}? [y/N]: "
+                )
+                .strip()
+                .lower()
+            )
             if confirm == "y":
                 if preview:
                     print(f"[Preview] Would remove link '{alias_path}'.")


@@ -3,22 +3,33 @@ import os
 from pkgmgr.core.repository.identifier import get_repo_identifier
 from pkgmgr.core.repository.dir import get_repo_dir

 def delete_repos(selected_repos, repositories_base_dir, all_repos, preview=False):
     for repo in selected_repos:
         repo_identifier = get_repo_identifier(repo, all_repos)
         repo_dir = get_repo_dir(repositories_base_dir, repo)
         if os.path.exists(repo_dir):
-            confirm = input(f"Are you sure you want to delete directory '{repo_dir}' for {repo_identifier}? [y/N]: ").strip().lower()
+            confirm = (
+                input(
+                    f"Are you sure you want to delete directory '{repo_dir}' for {repo_identifier}? [y/N]: "
+                )
+                .strip()
+                .lower()
+            )
             if confirm == "y":
                 if preview:
-                    print(f"[Preview] Would delete directory '{repo_dir}' for {repo_identifier}.")
+                    print(
+                        f"[Preview] Would delete directory '{repo_dir}' for {repo_identifier}."
+                    )
                 else:
                     try:
                         shutil.rmtree(repo_dir)
-                        print(f"Deleted repository directory '{repo_dir}' for {repo_identifier}.")
+                        print(
+                            f"Deleted repository directory '{repo_dir}' for {repo_identifier}."
+                        )
                     except Exception as e:
                         print(f"Error deleting '{repo_dir}' for {repo_identifier}: {e}")
             else:
                 print(f"Skipped deletion of '{repo_dir}' for {repo_identifier}.")
         else:
-            print(f"Repository directory '{repo_dir}' not found for {repo_identifier}.")
+            print(f"Repository directory '{repo_dir}' not found for {repo_identifier}.")
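The interactive `.strip().lower()` prompts in these actions all reduce to the same predicate; a tiny standalone sketch (`confirmed` is a hypothetical helper, not a pkgmgr function):

```python
def confirmed(answer: str) -> bool:
    # Only an explicit "y" (any case, surrounding whitespace ignored) consents;
    # empty input -- and even "yes" -- defaults to No, matching the [y/N] prompts.
    return answer.strip().lower() == "y"


print(confirmed(" Y "))  # True
print(confirmed(""))     # False
```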


@@ -233,9 +233,7 @@ def list_repositories(
             categories.append(str(repo["category"]))

         yaml_tags: List[str] = list(map(str, repo.get("tags", [])))
-        display_tags: List[str] = sorted(
-            set(yaml_tags + list(map(str, extra_tags)))
-        )
+        display_tags: List[str] = sorted(set(yaml_tags + list(map(str, extra_tags))))

         rows.append(
             {
@@ -288,13 +286,7 @@ def list_repositories(
         status_padded = status.ljust(status_width)
         status_colored = _color_status(status_padded)
-        print(
-            f"{ident_col} "
-            f"{status_colored} "
-            f"{cat_col} "
-            f"{tag_col} "
-            f"{dir_col}"
-        )
+        print(f"{ident_col} {status_colored} {cat_col} {tag_col} {dir_col}")

     # ------------------------------------------------------------------
     # Detailed section (alias value red, same status coloring)

Some files were not shown because too many files have changed in this diff.