47 Commits

Author SHA1 Message Date
328f69ea5e Update GitHub Actions workflows for improved functionality
All checks were successful
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Successful in 2m44s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Migration Safety / Alembic upgrade/downgrade safety (pull_request) Successful in 21s
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 7s
Removed the read-only flag from Docker volume mounts in the container CVE scan workflow so the Scout container can write to the mounted files. Added `max-parallel` and `fetch-depth` settings to the PostgreSQL compatibility matrix workflow to cap concurrent jobs and use shallow checkouts.
2026-02-14 22:04:58 +01:00
c0077e3dd8 Add -u root flag to container CVE scan workflow
Some checks failed
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Successful in 2m41s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 9s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 9s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Failing after 11m28s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Failing after 11m55s
This runs the Scout container as root, avoiding permission issues when accessing the mounted Docker socket and auth config. The change affects the development workflow configuration for container CVE scanning.
2026-02-14 19:47:34 +01:00
af6ea11079 Refactor Docker Scout integration in CVE scan workflow
All checks were successful
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Successful in 2m14s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Simplified the Docker Scout configuration logic by removing unnecessary checks and utilizing Docker's standard auth configuration. Updated environment variable usage and volume mounts to streamline the setup process for scanning containers.
2026-02-14 19:32:50 +01:00
5a7f32541f Add Docker Scout login fallback and temporary caching.
All checks were successful
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Successful in 1m57s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
This update introduces a fallback for Docker Scout login when Docker Hub credentials are unavailable, so the workflow does not fail. It also replaces direct use of the runner's Docker config with a temporary copy, reducing dependence on the runner's environment setup.
2026-02-14 19:03:30 +01:00
dd3f18bb06 Make Docker Scout scans non-blocking and update config paths.
All checks were successful
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Successful in 2m10s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Set `continue-on-error: true` for Docker Scout steps to ensure workflows proceed even if scans fail. Updated volume paths and environment variables for Docker config and credentials to improve scanning compatibility.
2026-02-14 18:55:52 +01:00
f4b18b6cf1 Update Docker Hub Scout config to use local login credentials
Some checks failed
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Failing after 1m56s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Replaced the use of Docker Hub secrets with a mounted local docker configuration file for authentication. Added a check to ensure the login config exists before running scans, preventing unnecessary failures. This change enhances flexibility and aligns with local environment setups.
2026-02-14 18:50:46 +01:00
a220e5de99 Add Docker Hub authentication for Scout scans
Some checks failed
Migration Safety / Alembic upgrade/downgrade safety (push) Successful in 22s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Failing after 1m53s
This update ensures Docker Scout scans use Docker Hub authentication. If the required credentials are absent, the scans are skipped with a corresponding message. This improves security and prevents unnecessary scan failures.
2026-02-14 18:31:10 +01:00
a5ffafaf9e Update CVE scanning workflow to use JSON format and new tools
All checks were successful
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Successful in 2m9s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Switched Trivy output from table format to JSON for easier downstream processing. Added a summary step that parses the reports and counts severities with a Python script. Integrated Docker Scout scans for both backend and frontend, and updated the uploaded artifacts to include the new JSON and Scout scan outputs.
2026-02-14 18:24:08 +01:00
d17752b611 Add CVE scan workflow for development branch
Some checks failed
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Failing after 2m20s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
This commit introduces a GitHub Actions workflow to scan for CVEs in backend and frontend container images. It uses Trivy for scanning and uploads the reports as artifacts, providing better visibility into vulnerabilities in development builds.
2026-02-14 18:16:54 +01:00
fe05c40426 Merge branch 'main' of https://git.nesterovic.cc/nessi/NexaPG into development
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 10s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
2026-02-14 17:47:34 +01:00
5a0478f47d harden(frontend): switch to nginx:alpine-slim with non-root runtime and nginx dir permission fixes 2026-02-14 17:47:26 +01:00
1cea82f5d9 Merge pull request 'Update frontend to use unprivileged Nginx on port 8080' (#34) from development into main
All checks were successful
Migration Safety / Alembic upgrade/downgrade safety (push) Successful in 21s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m33s
Reviewed-on: #34
2026-02-14 16:18:34 +00:00
418034f639 Update NEXAPG_VERSION to 0.2.2
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Migration Safety / Alembic upgrade/downgrade safety (pull_request) Successful in 23s
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 8s
Bumped the version from 0.2.1 to 0.2.2 in the configuration file for the new patch release.
2026-02-14 17:17:57 +01:00
489dde812f Update frontend to use unprivileged Nginx on port 8080
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Switch from `nginx:1.29-alpine-slim` to `nginxinc/nginx-unprivileged:stable-alpine` for improved security by running as a non-root user. Changed the exposed port from 80 to 8080 in the configurations to reflect the unprivileged setup. Adjusted the `docker-compose.yml` and `nginx.conf` accordingly.
2026-02-14 17:13:18 +01:00
c2e4e614e0 Merge pull request 'CI cleanup: remove temporary Alpine smoke job, keep PG matrix on development, and keep Alpine backend default' (#33) from development into main
All checks were successful
Migration Safety / Alembic upgrade/downgrade safety (push) Successful in 28s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m51s
Reviewed-on: #33
2026-02-14 16:00:57 +00:00
344071193c Update NEXAPG_VERSION to 0.2.1
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 9s
Migration Safety / Alembic upgrade/downgrade safety (pull_request) Successful in 20s
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 13s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 12s
Bumped the version from 0.2.0 to 0.2.1 to reflect the recent CI and base-image changes.
2026-02-14 16:58:31 +01:00
03118e59d7 Remove backend Alpine smoke (PG16) job from CI workflow
Some checks failed
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Has been cancelled
PostgreSQL Compatibility Matrix / PG18 smoke (push) Has been cancelled
PostgreSQL Compatibility Matrix / PG16 smoke (push) Has been cancelled
The backend Alpine smoke test targeting PostgreSQL 16 was removed from the CI configuration. This simplifies the workflow: with Alpine now the default backend base image, the regular PostgreSQL matrix jobs already exercise it.
2026-02-14 16:58:10 +01:00
15fea78505 Update Python base image to Alpine version for backend
Some checks failed
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / Backend Alpine smoke (PG16) (push) Failing after 6s
This change switches the base image from "slim" to "alpine" to reduce the overall image size and improve security. The Alpine image is smaller and exposes a reduced attack surface.
2026-02-14 16:52:10 +01:00
89d3a39679 Add new features and enhancements to CI workflows and backend.
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / Backend Alpine smoke (PG16) (push) Successful in 44s
Enhanced CI workflows by adding an Alpine-based smoke test for the backend with PostgreSQL 16. Updated the Docker build process to support dynamic base images and added provenance, SBOM, and labels to Docker builds. Extended branch compatibility checks and refined backend configurations for broader usage scenarios.
2026-02-14 16:48:10 +01:00
f614eb1cf8 Merge pull request 'NX-10x: Reliability, error handling, runtime UX hardening, and migration safety gate (NX-101, NX-102, NX-103, NX-104)' (#32) from development into main
All checks were successful
Migration Safety / Alembic upgrade/downgrade safety (push) Successful in 19s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m14s
Reviewed-on: #32
2026-02-14 15:28:44 +00:00
6de3100615 [NX-104 Issue] Filter out restrict/unrestrict lines in schema comparison.
All checks were successful
Migration Safety / Alembic upgrade/downgrade safety (pull_request) Successful in 22s
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 7s
Updated the pg_dump commands in the migration-safety workflow to use `sed` for removing restrict/unrestrict lines. This ensures consistent schema comparison by ignoring irrelevant metadata.
2026-02-14 16:23:05 +01:00
cbe1cf26fa [NX-104 Issue] Add migration safety CI workflow
Some checks failed
Migration Safety / Alembic upgrade/downgrade safety (pull_request) Failing after 30s
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 9s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 7s
Introduces a GitHub Actions workflow to ensure Alembic migrations are safe and reversible. The workflow validates schema consistency by testing upgrade and downgrade operations and comparing schemas before and after the roundtrip.
2026-02-14 16:07:36 +01:00
5c566cd90d [NX-103 Issue] Add offline state handling for unreachable targets
Introduced a mechanism to detect and handle when a target is unreachable, including a detailed offline state message with host and port information. Updated the UI to display a card notifying users of the target's offline status and styled the card accordingly in CSS.
2026-02-14 15:58:22 +01:00
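The NX-103 change is not part of the diffs shown further down; as a rough illustration only, an offline-state payload for an unreachable target might be shaped like the sketch below. Field names are assumptions, not the actual NexaPG schema, though `target_unreachable` matches the documented 503 error code.

```python
# Hypothetical sketch only: the real NexaPG field names may differ.
def build_offline_state(host: str, port: int) -> dict:
    """Payload describing an unreachable monitoring target."""
    return {
        "status": "offline",
        "code": "target_unreachable",  # mirrors the documented 503 error code
        "message": f"Target {host}:{port} is unreachable",
        "host": host,
        "port": port,
    }
```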
1ad237d750 Optimize collector loop to account for actual execution time.
Previously, the loop did not account for the time spent in `collect_once`, so each iteration drifted later than intended. Subtracting the measured execution time from the sleep duration keeps the poll interval consistent, as sketched below.
2026-02-14 15:50:31 +01:00
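A minimal sketch of the drift-free loop described above, assuming `collect_once` is the collector's entry point (the name comes from the commit message); everything else here is illustrative rather than the project's actual code:

```python
import asyncio
import time

async def collect_once() -> None:
    ...  # stand-in for the real metric collection

async def collector_loop(poll_interval: float) -> None:
    """Keep a steady poll interval by subtracting collect_once's runtime."""
    while True:
        started = time.monotonic()
        await collect_once()
        elapsed = time.monotonic() - started
        # Sleep only for the remainder of the interval, never a negative value.
        await asyncio.sleep(max(0.0, poll_interval - elapsed))
```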
d9dfde1c87 [NX-102 Issue] Add exponential backoff with jitter for retry logic
Introduced an exponential backoff mechanism with a configurable base, max delay, and jitter factor to handle retries for target failures. This improves resilience by reducing the load during repeated failures and avoids synchronized retry storms. Additionally, stale target cleanup logic has been implemented to prevent unnecessary state retention.
2026-02-14 11:44:49 +01:00
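The NX-102 commit names a configurable base, max delay, and jitter factor; a standard formula along those lines looks like the sketch below. The defaults are assumptions, not NexaPG's actual values.

```python
import random

def backoff_delay(failures: int, base: float = 2.0,
                  max_delay: float = 300.0, jitter: float = 0.2) -> float:
    """Exponential backoff with jitter for consecutive target failures."""
    delay = min(base * (2 ** max(failures - 1, 0)), max_delay)
    # Randomize within +/- jitter so many failing targets do not retry in lockstep.
    return delay * random.uniform(1.0 - jitter, 1.0 + jitter)
```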
117710cc0a [NX-101 Issue] Refactor error handling to use consistent API error format
Replaced all inline error messages with the standardized `api_error` helper for consistent error response formatting. This improves clarity and maintainability and ensures uniform error structures across the application. Updated logging for collector failures to include the error class and switched to warning level for target-unreachable scenarios. A sketch of the helper follows below.
2026-02-14 11:30:56 +01:00
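Judging from the call sites in the diffs further down (e.g. `api_error("email_exists", "Email already exists")`, with an optional details argument), the helper plausibly reduces to the sketch below; the real implementation lives in `app.core.errors` and may differ.

```python
def api_error(code: str, message: str, details: object | None = None) -> dict:
    """Build the standardized payload passed as HTTPException detail."""
    return {
        "code": code,
        "message": message,
        "details": details if details is not None else [],
        # request_id is attached later by the request-ID middleware / handlers.
    }
```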
9aecbea68b Add consistent API error handling and documentation
Introduced standardized error response formats for API errors, including middleware for consistent request IDs and exception handlers. Updated the frontend to parse and process these error responses, and documented the error format in the README for reference.
2026-02-13 17:30:05 +01:00
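The request-ID middleware mentioned above is not shown in the diffs; a typical FastAPI version matching the documented `X-Request-ID` behavior would look roughly like this (a sketch, not the project's actual code):

```python
import uuid

from fastapi import FastAPI, Request

app = FastAPI()

@app.middleware("http")
async def request_id_middleware(request: Request, call_next):
    # Reuse an incoming X-Request-ID if present, otherwise generate one.
    request_id = request.headers.get("X-Request-ID") or str(uuid.uuid4())
    request.state.request_id = request_id
    response = await call_next(request)
    response.headers["X-Request-ID"] = request_id
    return response
```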
cd91b20278 Merge pull request 'Replace python-jose with PyJWT and update its usage' (#6) from development into main
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m27s
Reviewed-on: #6
2026-02-13 12:23:40 +00:00
fd9853957a Merge branch 'main' of https://git.nesterovic.cc/nessi/NexaPG into development
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 9s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 7s
2026-02-13 13:20:49 +01:00
9c68f11d74 Replace python-jose with PyJWT and update its usage.
Switched the dependency from `python-jose` to `PyJWT` to handle JWT encoding and decoding. Updated related code to use `PyJWT`'s `InvalidTokenError` instead of `JWTError`. Also bumped the application version from `0.1.7` to `0.1.8`.
2026-02-13 13:20:46 +01:00
6848a66d88 Merge pull request 'Update backend requirements - security hardening' (#5) from development into main
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m32s
Reviewed-on: #5
2026-02-13 12:07:48 +00:00
a9a49eba4e Merge branch 'main' of https://git.nesterovic.cc/nessi/NexaPG into development
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 11s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 8s
2026-02-13 13:01:26 +01:00
9ccde7ca37 Update backend requirements - security hardening 2026-02-13 13:01:22 +01:00
88c3345647 Merge pull request 'Use lighter base images for frontend containers' (#4) from development into main
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 9s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m24s
Reviewed-on: #4
2026-02-13 11:43:59 +00:00
d9f3de9468 Use lighter base images for frontend containers
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 9s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 8s
Switched Node.js and Nginx images from 'bookworm' to 'alpine' variants to reduce image size. Added `apk upgrade --no-cache` so the Nginx container includes current Alpine package updates. This reduces resource usage and speeds up image pulls.
2026-02-13 11:26:52 +01:00
e62aaaf5a0 Merge pull request 'Update base images in Dockerfile to use bookworm variants' (#3) from development into main
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 2m7s
Reviewed-on: #3
2026-02-13 10:20:20 +00:00
ef84273868 Update base images in Dockerfile to use bookworm variants
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 8s
Replaced the alpine variants with bookworm-slim for Node.js and bookworm for nginx. This ensures compatibility with the latest updates and improves consistency across images. Adjusted the health check command for nginx accordingly.
2026-02-13 11:15:17 +01:00
6c59b21088 Merge pull request 'Add first and last name fields for users' (#2) from development into main
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m13s
Reviewed-on: #2
2026-02-13 10:09:02 +00:00
cd1795b9ff Add first and last name fields for users
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 12s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 11s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 9s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 10s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 11s
This commit introduces optional `first_name` and `last_name` fields to the user model, including database migrations, backend, and frontend support. It enhances user profiles, updates user creation and editing flows, and refines the UI to display full names where available.
2026-02-13 10:57:10 +01:00
e0242bc823 Refactor deployment process to use prebuilt Docker images
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Replaced local builds with prebuilt backend and frontend Docker images for simplified deployment. Updated documentation and Makefile to reflect the changes and added a bootstrap script for quick setup of deployment files. Removed deprecated `VITE_API_URL` variable and references to streamline the setup.
2026-02-13 10:43:34 +01:00
75f8106ca5 Merge pull request 'Merge Fixes and Technical changes from development into main branch' (#1) from development into main
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 4m30s
Reviewed-on: #1
2026-02-13 09:13:04 +00:00
4e4f8ad5d4 Update NEXAPG version to 0.1.3
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 8s
This increments the application version from 0.1.2 to 0.1.3 for the accumulated bug fixes and minor improvements.
2026-02-13 10:11:00 +01:00
5c5d51350f Fix stale refresh usage in QueryInsightsPage effect hooks
Stored `refresh` in a `useRef` so async operations always read the latest value instead of a stale closure. Added cleanup logic to handle component unmounts during API calls, preventing state updates on unmounted components.
2026-02-13 10:06:56 +01:00
ba1559e790 Improve agentless mode messaging for host-level metrics
Updated the messaging and UI to clarify unavailability of host-level metrics, such as CPU, RAM, and disk space, in agentless mode. Added clear formatting and new functions to handle missing metrics gracefully in the frontend.
2026-02-13 10:01:24 +01:00
ab9d03be8a Add GitHub Actions workflow for Docker image release
This workflow automates building and publishing Docker images upon a release or manual trigger. It includes steps for version resolution, Docker Hub login, and caching to optimize builds for both backend and frontend images.
2026-02-13 09:55:08 +01:00
07a7236282 Add user password change functionality
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 9s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Introduced a backend API endpoint for changing user passwords with validation. Added a new "User Settings" page in the frontend to allow users to update their passwords, including a matching UI update for navigation and styles.
2026-02-13 09:32:54 +01:00
bd53bce231 Add service update notification and version check enhancements
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 9s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Introduced a front-end mechanism to notify users of available service updates and enhanced the service info page to reflect update status dynamically. Removed backend audit log writes for version checks to streamline operations and improve performance. Updated styling to visually highlight update notifications.
2026-02-13 09:24:53 +01:00
40 changed files with 1478 additions and 162 deletions


@@ -58,7 +58,3 @@ INIT_ADMIN_PASSWORD=ChangeMe123!
# ------------------------------
# Host port mapped to frontend container port 80.
FRONTEND_PORT=5173
-# Base API URL used at frontend build time.
-# For reverse proxy + SSL, keep this relative to avoid mixed-content issues.
-# Example direct mode: VITE_API_URL=http://localhost:8000/api/v1
-VITE_API_URL=/api/v1


@@ -0,0 +1,158 @@
name: Container CVE Scan (development)
on:
push:
branches: ["development"]
workflow_dispatch:
jobs:
cve-scan:
name: Scan backend/frontend images for CVEs
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Docker Hub login (for Scout)
if: ${{ secrets.DOCKERHUB_USERNAME != '' && secrets.DOCKERHUB_TOKEN != '' }}
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Prepare Docker auth config for Scout container
if: ${{ secrets.DOCKERHUB_USERNAME != '' && secrets.DOCKERHUB_TOKEN != '' }}
run: |
mkdir -p "$RUNNER_TEMP/scout-docker-config"
cp "$HOME/.docker/config.json" "$RUNNER_TEMP/scout-docker-config/config.json"
chmod 600 "$RUNNER_TEMP/scout-docker-config/config.json"
- name: Build backend image (local)
uses: docker/build-push-action@v6
with:
context: ./backend
file: ./backend/Dockerfile
push: false
load: true
tags: nexapg-backend:dev-scan
provenance: false
sbom: false
- name: Build frontend image (local)
uses: docker/build-push-action@v6
with:
context: ./frontend
file: ./frontend/Dockerfile
push: false
load: true
tags: nexapg-frontend:dev-scan
build-args: |
VITE_API_URL=/api/v1
provenance: false
sbom: false
- name: Trivy scan (backend)
uses: aquasecurity/trivy-action@0.24.0
with:
image-ref: nexapg-backend:dev-scan
format: json
output: trivy-backend.json
severity: UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL
ignore-unfixed: false
exit-code: 0
- name: Trivy scan (frontend)
uses: aquasecurity/trivy-action@0.24.0
with:
image-ref: nexapg-frontend:dev-scan
format: json
output: trivy-frontend.json
severity: UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL
ignore-unfixed: false
exit-code: 0
- name: Summarize Trivy severities
run: |
python - <<'PY'
import json
from collections import Counter
def summarize(path):
c = Counter()
with open(path, "r", encoding="utf-8") as f:
data = json.load(f)
for result in data.get("Results", []):
for v in result.get("Vulnerabilities", []) or []:
c[v.get("Severity", "UNKNOWN")] += 1
for sev in ["CRITICAL", "HIGH", "MEDIUM", "LOW", "UNKNOWN"]:
c.setdefault(sev, 0)
return c
for label, path in [("backend", "trivy-backend.json"), ("frontend", "trivy-frontend.json")]:
s = summarize(path)
print(f"===== Trivy {label} =====")
print(f"CRITICAL={s['CRITICAL']} HIGH={s['HIGH']} MEDIUM={s['MEDIUM']} LOW={s['LOW']} UNKNOWN={s['UNKNOWN']}")
print()
PY
- name: Docker Scout scan (backend)
continue-on-error: true
run: |
if [ -z "${{ secrets.DOCKERHUB_USERNAME }}" ] || [ -z "${{ secrets.DOCKERHUB_TOKEN }}" ]; then
echo "Docker Hub Scout scan skipped: DOCKERHUB_USERNAME/DOCKERHUB_TOKEN not set." > scout-backend.txt
exit 0
fi
docker run --rm \
-u root \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$RUNNER_TEMP/scout-docker-config:/root/.docker" \
-e DOCKER_CONFIG=/root/.docker \
-e DOCKER_SCOUT_HUB_USER="${{ secrets.DOCKERHUB_USERNAME }}" \
-e DOCKER_SCOUT_HUB_PASSWORD="${{ secrets.DOCKERHUB_TOKEN }}" \
docker/scout-cli:latest cves nexapg-backend:dev-scan \
--only-severity critical,high,medium,low > scout-backend.txt 2>&1 || {
echo "" >> scout-backend.txt
echo "Docker Scout backend scan failed (non-blocking)." >> scout-backend.txt
}
- name: Docker Scout scan (frontend)
continue-on-error: true
run: |
if [ -z "${{ secrets.DOCKERHUB_USERNAME }}" ] || [ -z "${{ secrets.DOCKERHUB_TOKEN }}" ]; then
echo "Docker Hub Scout scan skipped: DOCKERHUB_USERNAME/DOCKERHUB_TOKEN not set." > scout-frontend.txt
exit 0
fi
docker run --rm \
-u root \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$RUNNER_TEMP/scout-docker-config:/root/.docker" \
-e DOCKER_CONFIG=/root/.docker \
-e DOCKER_SCOUT_HUB_USER="${{ secrets.DOCKERHUB_USERNAME }}" \
-e DOCKER_SCOUT_HUB_PASSWORD="${{ secrets.DOCKERHUB_TOKEN }}" \
docker/scout-cli:latest cves nexapg-frontend:dev-scan \
--only-severity critical,high,medium,low > scout-frontend.txt 2>&1 || {
echo "" >> scout-frontend.txt
echo "Docker Scout frontend scan failed (non-blocking)." >> scout-frontend.txt
}
- name: Print scan summary
run: |
echo "===== Docker Scout backend ====="
test -f scout-backend.txt && cat scout-backend.txt || echo "scout-backend.txt not available"
echo
echo "===== Docker Scout frontend ====="
test -f scout-frontend.txt && cat scout-frontend.txt || echo "scout-frontend.txt not available"
- name: Upload scan reports
uses: actions/upload-artifact@v3
with:
name: container-cve-scan-reports
path: |
trivy-backend.json
trivy-frontend.json
scout-backend.txt
scout-frontend.txt

.github/workflows/docker-release.yml

@@ -0,0 +1,107 @@
name: Docker Publish (Release)
on:
release:
types: [published]
workflow_dispatch:
inputs:
version:
description: "Version tag to publish (e.g. 0.1.2 or v0.1.2)"
required: false
type: string
jobs:
publish:
name: Build and Push Docker Images
runs-on: ubuntu-latest
permissions:
contents: read
id-token: write
attestations: write
env:
# Optional repo variable. If unset, DOCKERHUB_USERNAME is used.
IMAGE_NAMESPACE: ${{ vars.DOCKERHUB_NAMESPACE }}
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Resolve version/tag
id: ver
shell: bash
run: |
RAW_TAG="${{ github.event.release.tag_name }}"
if [ -z "$RAW_TAG" ]; then
RAW_TAG="${{ inputs.version }}"
fi
if [ -z "$RAW_TAG" ]; then
RAW_TAG="${GITHUB_REF_NAME}"
fi
CLEAN_TAG="${RAW_TAG#v}"
echo "raw=$RAW_TAG" >> "$GITHUB_OUTPUT"
echo "clean=$CLEAN_TAG" >> "$GITHUB_OUTPUT"
- name: Set image namespace
id: ns
shell: bash
run: |
NS="${IMAGE_NAMESPACE}"
if [ -z "$NS" ]; then
NS="${{ secrets.DOCKERHUB_USERNAME }}"
fi
if [ -z "$NS" ]; then
echo "Missing Docker Hub namespace. Set repo var DOCKERHUB_NAMESPACE or secret DOCKERHUB_USERNAME."
exit 1
fi
echo "value=$NS" >> "$GITHUB_OUTPUT"
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push backend image
uses: docker/build-push-action@v6
with:
context: ./backend
file: ./backend/Dockerfile
push: true
provenance: mode=max
sbom: true
labels: |
org.opencontainers.image.title=NexaPG Backend
org.opencontainers.image.vendor=Nesterovic IT-Services e.U.
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
org.opencontainers.image.version=${{ steps.ver.outputs.clean }}
tags: |
${{ steps.ns.outputs.value }}/nexapg-backend:${{ steps.ver.outputs.clean }}
${{ steps.ns.outputs.value }}/nexapg-backend:latest
cache-from: type=registry,ref=${{ steps.ns.outputs.value }}/nexapg-backend:buildcache
cache-to: type=registry,ref=${{ steps.ns.outputs.value }}/nexapg-backend:buildcache,mode=max
- name: Build and push frontend image
uses: docker/build-push-action@v6
with:
context: ./frontend
file: ./frontend/Dockerfile
push: true
provenance: mode=max
sbom: true
build-args: |
VITE_API_URL=/api/v1
labels: |
org.opencontainers.image.title=NexaPG Frontend
org.opencontainers.image.vendor=Nesterovic IT-Services e.U.
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
org.opencontainers.image.version=${{ steps.ver.outputs.clean }}
tags: |
${{ steps.ns.outputs.value }}/nexapg-frontend:${{ steps.ver.outputs.clean }}
${{ steps.ns.outputs.value }}/nexapg-frontend:latest
cache-from: type=registry,ref=${{ steps.ns.outputs.value }}/nexapg-frontend:buildcache
cache-to: type=registry,ref=${{ steps.ns.outputs.value }}/nexapg-frontend:buildcache,mode=max

.github/workflows/migration-safety.yml

@@ -0,0 +1,86 @@
name: Migration Safety
on:
push:
branches: ["main", "master"]
pull_request:
jobs:
migration-safety:
name: Alembic upgrade/downgrade safety
runs-on: ubuntu-latest
services:
postgres:
image: postgres:16
env:
POSTGRES_DB: nexapg
POSTGRES_USER: nexapg
POSTGRES_PASSWORD: nexapg
ports:
- 5432:5432
options: >-
--health-cmd "pg_isready -U nexapg -d nexapg"
--health-interval 5s
--health-timeout 5s
--health-retries 30
env:
DB_HOST: postgres
DB_PORT: 5432
DB_NAME: nexapg
DB_USER: nexapg
DB_PASSWORD: nexapg
JWT_SECRET_KEY: ci-jwt-secret-key
ENCRYPTION_KEY: MDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDA=
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install backend dependencies
run: pip install -r backend/requirements.txt
- name: Install PostgreSQL client tools
run: sudo apt-get update && sudo apt-get install -y postgresql-client
- name: Wait for PostgreSQL
env:
PGPASSWORD: nexapg
run: |
for i in $(seq 1 60); do
if pg_isready -h postgres -p 5432 -U nexapg -d nexapg; then
exit 0
fi
sleep 2
done
echo "PostgreSQL did not become ready in time."
exit 1
- name: Alembic upgrade -> downgrade -1 -> upgrade
working-directory: backend
run: |
alembic upgrade head
alembic downgrade -1
alembic upgrade head
- name: Validate schema consistency after roundtrip
env:
PGPASSWORD: nexapg
run: |
cd backend
alembic upgrade head
pg_dump -h postgres -p 5432 -U nexapg -d nexapg --schema-only --no-owner --no-privileges \
| sed '/^\\restrict /d; /^\\unrestrict /d' > /tmp/schema_head_before.sql
alembic downgrade -1
alembic upgrade head
pg_dump -h postgres -p 5432 -U nexapg -d nexapg --schema-only --no-owner --no-privileges \
| sed '/^\\restrict /d; /^\\unrestrict /d' > /tmp/schema_head_after.sql
diff -u /tmp/schema_head_before.sql /tmp/schema_head_after.sql


@@ -2,7 +2,7 @@ name: PostgreSQL Compatibility Matrix
on:
push:
-branches: ["main", "master"]
+branches: ["main", "master", "development"]
pull_request:
jobs:
@@ -11,6 +11,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
fail-fast: false
+max-parallel: 3
matrix:
pg_version: ["14", "15", "16", "17", "18"]
@@ -32,6 +33,8 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v4
+with:
+fetch-depth: 1
- name: Set up Python
uses: actions/setup-python@v5


@@ -1,7 +1,8 @@
.PHONY: up down logs migrate
up:
-docker compose up -d --build
+docker compose pull
+docker compose up -d
down:
docker compose down


@@ -9,7 +9,7 @@ It combines FastAPI, React, and PostgreSQL in a Docker Compose stack with RBAC,
## Table of Contents
-- [Quick Start](#quick-start)
+- [Quick Deploy (Prebuilt Images)](#quick-deploy-prebuilt-images)
- [Prerequisites](#prerequisites)
- [Make Commands](#make-commands)
- [Configuration Reference (`.env`)](#configuration-reference-env)
@@ -17,6 +17,7 @@ It combines FastAPI, React, and PostgreSQL in a Docker Compose stack with RBAC,
- [Service Information](#service-information)
- [Target Owner Notifications](#target-owner-notifications)
- [API Overview](#api-overview)
+- [API Error Format](#api-error-format)
- [`pg_stat_statements` Requirement](#pg_stat_statements-requirement)
- [Reverse Proxy / SSL Guidance](#reverse-proxy--ssl-guidance)
- [PostgreSQL Compatibility Smoke Test](#postgresql-compatibility-smoke-test)
@@ -93,27 +94,50 @@ Optional:
- `psql` for manual DB checks
-## Quick Start
+## Quick Deploy (Prebuilt Images)
-1. Copy environment template:
+If you only want to run NexaPG from published Docker Hub images, use the bootstrap script:
```bash
-cp .env.example .env
+mkdir -p /opt/NexaPG
+cd /opt/NexaPG
+wget -O bootstrap-compose.sh https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main/ops/scripts/bootstrap-compose.sh
+chmod +x bootstrap-compose.sh
+./bootstrap-compose.sh
```
-2. Generate a Fernet key and set `ENCRYPTION_KEY` in `.env`:
+This downloads:
+- `docker-compose.yml`
+- `.env.example`
+- `Makefile`
+Then:
```bash
+# generate JWT secret
+python -c "import secrets; print(secrets.token_urlsafe(64))"
+# generate Fernet encryption key
python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
```
-3. Start the stack:
```bash
+# put both values into .env (JWT_SECRET_KEY / ENCRYPTION_KEY)
+# note: .env is auto-created by bootstrap if it does not exist
make up
```
-4. Open the application:
+Manual download alternative:
+```bash
+mkdir -p /opt/NexaPG
+cd /opt/NexaPG
+wget https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main/docker-compose.yml
+wget https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main/.env.example
+wget https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main/Makefile
+cp .env.example .env
+```
+`make up` pulls `nesterovicit/nexapg-backend:latest` and `nesterovicit/nexapg-frontend:latest`, then starts the stack.
+Open the application:
- Frontend: `http://<SERVER_IP>:<FRONTEND_PORT>`
- API base: `http://<SERVER_IP>:<BACKEND_PORT>/api/v1`
@@ -127,7 +151,7 @@ Initial admin bootstrap user (created from `.env` if missing):
## Make Commands
```bash
-make up # build and start all services
+make up # pull latest images and start all services
make down # stop all services
make logs # follow compose logs
make migrate # optional/manual: run alembic upgrade head in backend container
@@ -183,12 +207,6 @@ Note: Migrations run automatically when the backend container starts (`entrypoin
| Variable | Description |
|---|---|
| `FRONTEND_PORT` | Host port mapped to frontend container port `80` |
-| `VITE_API_URL` | Frontend API base URL (build-time) |
-Recommended values for `VITE_API_URL`:
-- Reverse proxy setup: `/api/v1`
-- Direct backend access: `http://<SERVER_IP>:<BACKEND_PORT>/api/v1`
## Core Functional Areas
@@ -302,6 +320,37 @@ Email alert routing is target-specific:
- `GET /api/v1/service/info`
- `POST /api/v1/service/info/check`
## API Error Format
All 4xx/5xx responses use a consistent JSON payload:
```json
{
"code": "validation_error",
"message": "Request validation failed",
"details": [],
"request_id": "c8f0f888-2365-4b86-a5de-b3f0e9df4a4b"
}
```
Common fields:
- `code`: stable machine-readable error code
- `message`: human-readable summary
- `details`: optional extra context (validation list, debug context, etc.)
- `request_id`: request correlation ID (also returned in `X-Request-ID` header)
Common error codes:
- `bad_request` (`400`)
- `unauthorized` (`401`)
- `forbidden` (`403`)
- `not_found` (`404`)
- `conflict` (`409`)
- `validation_error` (`422`)
- `target_unreachable` (`503`)
- `internal_error` (`500`)
## `pg_stat_statements` Requirement
Query Insights requires `pg_stat_statements` on the monitored target:
@@ -318,7 +367,7 @@ For production, serve frontend and API under the same public origin via reverse
- Frontend URL example: `https://monitor.example.com`
- Proxy API path `/api/` to backend service
-- Use `VITE_API_URL=/api/v1`
+- Route `/api/v1` to the backend service
This prevents mixed-content and CORS issues.
@@ -351,8 +400,7 @@ docker compose logs --tail=200 db
### CORS or mixed-content issues behind SSL proxy
-- Set `VITE_API_URL=/api/v1`
-- Ensure proxy forwards `/api/` to backend
+- Ensure proxy forwards `/api/` (or `/api/v1`) to backend
- Set correct frontend origin(s) in `CORS_ORIGINS`
### `rejected SSL upgrade` for a target


@@ -1,4 +1,5 @@
-FROM python:3.12-slim AS base
+ARG PYTHON_BASE_IMAGE=python:3.13-alpine
+FROM ${PYTHON_BASE_IMAGE} AS base
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
@@ -6,7 +7,17 @@ ENV PIP_NO_CACHE_DIR=1
WORKDIR /app
-RUN addgroup --system app && adduser --system --ingroup app app
+RUN if command -v apt-get >/dev/null 2>&1; then \
+apt-get update && apt-get upgrade -y && rm -rf /var/lib/apt/lists/*; \
+elif command -v apk >/dev/null 2>&1; then \
+apk upgrade --no-cache; \
+fi
+RUN if addgroup --help 2>&1 | grep -q -- '--system'; then \
+addgroup --system app && adduser --system --ingroup app app; \
+else \
+addgroup -S app && adduser -S -G app app; \
+fi
COPY requirements.txt /app/requirements.txt
RUN pip install --upgrade pip && pip install -r /app/requirements.txt


@@ -0,0 +1,26 @@
"""add user first and last name fields
Revision ID: 0009_user_profile_fields
Revises: 0008_service_settings
Create Date: 2026-02-13
"""
from alembic import op
import sqlalchemy as sa
revision = "0009_user_profile_fields"
down_revision = "0008_service_settings"
branch_labels = None
depends_on = None
def upgrade() -> None:
op.add_column("users", sa.Column("first_name", sa.String(length=120), nullable=True))
op.add_column("users", sa.Column("last_name", sa.String(length=120), nullable=True))
def downgrade() -> None:
op.drop_column("users", "last_name")
op.drop_column("users", "first_name")


@@ -9,6 +9,7 @@ from sqlalchemy.ext.asyncio import AsyncSession
from app.core.db import get_db
from app.core.deps import require_roles
+from app.core.errors import api_error
from app.models.models import EmailNotificationSettings, User
from app.schemas.admin_settings import EmailSettingsOut, EmailSettingsTestRequest, EmailSettingsUpdate
from app.services.audit import write_audit_log
@@ -96,9 +97,9 @@ async def test_email_settings(
) -> dict:
settings = await _get_or_create_settings(db)
if not settings.smtp_host:
raise HTTPException(status_code=400, detail="SMTP host is not configured")
raise HTTPException(status_code=400, detail=api_error("smtp_host_missing", "SMTP host is not configured"))
if not settings.from_email:
raise HTTPException(status_code=400, detail="From email is not configured")
raise HTTPException(status_code=400, detail=api_error("smtp_from_email_missing", "From email is not configured"))
password = decrypt_secret(settings.encrypted_smtp_password) if settings.encrypted_smtp_password else None
message = EmailMessage()
@@ -126,7 +127,10 @@ async def test_email_settings(
smtp.login(settings.smtp_username, password or "")
smtp.send_message(message)
except Exception as exc:
raise HTTPException(status_code=400, detail=f"SMTP test failed: {exc}")
raise HTTPException(
status_code=400,
detail=api_error("smtp_test_failed", "SMTP test failed", {"error": str(exc)}),
) from exc
await write_audit_log(db, "admin.email_settings.test", admin.id, {"recipient": str(payload.recipient)})
return {"status": "sent", "recipient": str(payload.recipient)}


@@ -3,6 +3,7 @@ from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession
from app.core.db import get_db
from app.core.deps import require_roles
+from app.core.errors import api_error
from app.core.security import hash_password
from app.models.models import User
from app.schemas.user import UserCreate, UserOut, UserUpdate
@@ -22,8 +23,14 @@ async def list_users(admin: User = Depends(require_roles("admin")), db: AsyncSes
async def create_user(payload: UserCreate, admin: User = Depends(require_roles("admin")), db: AsyncSession = Depends(get_db)) -> UserOut:
exists = await db.scalar(select(User).where(User.email == payload.email))
if exists:
-raise HTTPException(status_code=409, detail="Email already exists")
-user = User(email=payload.email, password_hash=hash_password(payload.password), role=payload.role)
+raise HTTPException(status_code=409, detail=api_error("email_exists", "Email already exists"))
+user = User(
+email=payload.email,
+first_name=payload.first_name,
+last_name=payload.last_name,
+password_hash=hash_password(payload.password),
+role=payload.role,
+)
db.add(user)
await db.commit()
await db.refresh(user)
@@ -40,10 +47,17 @@ async def update_user(
) -> UserOut:
user = await db.scalar(select(User).where(User.id == user_id))
if not user:
-raise HTTPException(status_code=404, detail="User not found")
+raise HTTPException(status_code=404, detail=api_error("user_not_found", "User not found"))
update_data = payload.model_dump(exclude_unset=True)
if "password" in update_data and update_data["password"]:
user.password_hash = hash_password(update_data.pop("password"))
next_email = update_data.get("email")
if next_email and next_email != user.email:
existing = await db.scalar(select(User).where(User.email == next_email))
if existing and existing.id != user.id:
raise HTTPException(status_code=409, detail=api_error("email_exists", "Email already exists"))
if "password" in update_data:
raw_password = update_data.pop("password")
if raw_password:
user.password_hash = hash_password(raw_password)
for key, value in update_data.items():
setattr(user, key, value)
await db.commit()
@@ -55,10 +69,10 @@ async def update_user(
@router.delete("/{user_id}")
async def delete_user(user_id: int, admin: User = Depends(require_roles("admin")), db: AsyncSession = Depends(get_db)) -> dict:
if user_id == admin.id:
-raise HTTPException(status_code=400, detail="Cannot delete yourself")
+raise HTTPException(status_code=400, detail=api_error("cannot_delete_self", "Cannot delete yourself"))
user = await db.scalar(select(User).where(User.id == user_id))
if not user:
-raise HTTPException(status_code=404, detail="User not found")
+raise HTTPException(status_code=404, detail=api_error("user_not_found", "User not found"))
await db.delete(user)
await db.commit()
await write_audit_log(db, "admin.user.delete", admin.id, {"deleted_user_id": user_id})


@@ -4,6 +4,7 @@ from sqlalchemy.ext.asyncio import AsyncSession
from app.core.db import get_db
from app.core.deps import get_current_user, require_roles
+from app.core.errors import api_error
from app.models.models import AlertDefinition, Target, User
from app.schemas.alert import (
AlertDefinitionCreate,
@@ -33,7 +34,7 @@ async def _validate_target_exists(db: AsyncSession, target_id: int | None) -> No
return
target_exists = await db.scalar(select(Target.id).where(Target.id == target_id))
if target_exists is None:
-raise HTTPException(status_code=404, detail="Target not found")
+raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
@router.get("/status", response_model=AlertStatusResponse)
@@ -101,7 +102,7 @@ async def update_alert_definition(
) -> AlertDefinitionOut:
definition = await db.scalar(select(AlertDefinition).where(AlertDefinition.id == definition_id))
if definition is None:
-raise HTTPException(status_code=404, detail="Alert definition not found")
+raise HTTPException(status_code=404, detail=api_error("alert_definition_not_found", "Alert definition not found"))
updates = payload.model_dump(exclude_unset=True)
if "target_id" in updates:
@@ -131,7 +132,7 @@ async def delete_alert_definition(
) -> dict:
definition = await db.scalar(select(AlertDefinition).where(AlertDefinition.id == definition_id))
if definition is None:
-raise HTTPException(status_code=404, detail="Alert definition not found")
+raise HTTPException(status_code=404, detail=api_error("alert_definition_not_found", "Alert definition not found"))
await db.delete(definition)
await db.commit()
invalidate_alert_cache()
@@ -148,7 +149,7 @@ async def test_alert_definition(
_ = user
target = await db.scalar(select(Target).where(Target.id == payload.target_id))
if target is None:
-raise HTTPException(status_code=404, detail="Target not found")
+raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
try:
value = await run_scalar_sql_for_target(target, payload.sql_text)
return AlertDefinitionTestResponse(ok=True, value=value)


@@ -1,10 +1,11 @@
from fastapi import APIRouter, Depends, HTTPException, status
-from jose import JWTError, jwt
+import jwt
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession
from app.core.config import get_settings
from app.core.db import get_db
from app.core.deps import get_current_user
+from app.core.errors import api_error
from app.core.security import create_access_token, create_refresh_token, verify_password
from app.models.models import User
from app.schemas.auth import LoginRequest, RefreshRequest, TokenResponse
@@ -19,7 +20,10 @@ settings = get_settings()
async def login(payload: LoginRequest, db: AsyncSession = Depends(get_db)) -> TokenResponse:
user = await db.scalar(select(User).where(User.email == payload.email))
if not user or not verify_password(payload.password, user.password_hash):
-raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid credentials")
+raise HTTPException(
+status_code=status.HTTP_401_UNAUTHORIZED,
+detail=api_error("invalid_credentials", "Invalid credentials"),
+)
await write_audit_log(db, action="auth.login", user_id=user.id, payload={"email": user.email})
return TokenResponse(access_token=create_access_token(str(user.id)), refresh_token=create_refresh_token(str(user.id)))
@@ -29,15 +33,24 @@ async def login(payload: LoginRequest, db: AsyncSession = Depends(get_db)) -> To
async def refresh(payload: RefreshRequest, db: AsyncSession = Depends(get_db)) -> TokenResponse:
try:
token_payload = jwt.decode(payload.refresh_token, settings.jwt_secret_key, algorithms=[settings.jwt_algorithm])
-except JWTError as exc:
-raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid refresh token") from exc
+except jwt.InvalidTokenError as exc:
+raise HTTPException(
+status_code=status.HTTP_401_UNAUTHORIZED,
+detail=api_error("invalid_refresh_token", "Invalid refresh token"),
+) from exc
if token_payload.get("type") != "refresh":
-raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid refresh token type")
+raise HTTPException(
+status_code=status.HTTP_401_UNAUTHORIZED,
+detail=api_error("invalid_refresh_token_type", "Invalid refresh token type"),
+)
user_id = token_payload.get("sub")
user = await db.scalar(select(User).where(User.id == int(user_id)))
if not user:
-raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="User not found")
+raise HTTPException(
+status_code=status.HTTP_401_UNAUTHORIZED,
+detail=api_error("user_not_found", "User not found"),
+)
await write_audit_log(db, action="auth.refresh", user_id=user.id, payload={})
return TokenResponse(access_token=create_access_token(str(user.id)), refresh_token=create_refresh_token(str(user.id)))


@@ -1,7 +1,12 @@
-from fastapi import APIRouter, Depends
+from fastapi import APIRouter, Depends, HTTPException, status
from sqlalchemy.ext.asyncio import AsyncSession
from app.core.db import get_db
from app.core.deps import get_current_user
+from app.core.errors import api_error
+from app.core.security import hash_password, verify_password
from app.models.models import User
from app.schemas.user import UserOut
from app.schemas.user import UserOut, UserPasswordChange
from app.services.audit import write_audit_log
router = APIRouter()
@@ -9,3 +14,27 @@ router = APIRouter()
@router.get("/me", response_model=UserOut)
async def me(user: User = Depends(get_current_user)) -> UserOut:
return UserOut.model_validate(user)
@router.post("/me/password")
async def change_password(
payload: UserPasswordChange,
user: User = Depends(get_current_user),
db: AsyncSession = Depends(get_db),
) -> dict:
if not verify_password(payload.current_password, user.password_hash):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=api_error("invalid_current_password", "Current password is incorrect"),
)
if verify_password(payload.new_password, user.password_hash):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=api_error("password_reuse_not_allowed", "New password must be different"),
)
user.password_hash = hash_password(payload.new_password)
await db.commit()
await write_audit_log(db, action="auth.password_change", user_id=user.id, payload={})
return {"status": "ok"}

View File

@@ -11,7 +11,6 @@ from app.core.db import get_db
from app.core.deps import get_current_user
from app.models.models import ServiceInfoSettings, User
from app.schemas.service_info import ServiceInfoCheckResult, ServiceInfoOut
from app.services.audit import write_audit_log
from app.services.service_info import (
UPSTREAM_REPO_WEB,
fetch_latest_from_upstream,
@@ -71,6 +70,7 @@ async def check_service_version(
user: User = Depends(get_current_user),
db: AsyncSession = Depends(get_db),
) -> ServiceInfoCheckResult:
_ = user
row = await _get_or_create_service_settings(db)
check_time = utcnow()
latest, latest_ref, error = await fetch_latest_from_upstream()
@@ -85,17 +85,6 @@ async def check_service_version(
row.update_available = False
await db.commit()
await db.refresh(row)
await write_audit_log(
db,
"service.info.check",
user.id,
{
"latest_version": row.latest_version,
"latest_ref": row.release_check_url,
"update_available": row.update_available,
"last_check_error": row.last_check_error,
},
)
return ServiceInfoCheckResult(
latest_version=row.latest_version,
latest_ref=(row.release_check_url or None),

View File

@@ -8,6 +8,7 @@ from sqlalchemy.ext.asyncio import AsyncSession
from app.core.db import get_db
from app.core.deps import get_current_user, require_roles
from app.core.errors import api_error
from app.models.models import Metric, QueryStat, Target, TargetOwner, User
from app.schemas.metric import MetricOut, QueryStatOut
from app.schemas.overview import DatabaseOverviewOut
@@ -85,7 +86,10 @@ async def _discover_databases(payload: TargetCreate) -> list[str]:
)
return [row["datname"] for row in rows if row["datname"]]
except Exception as exc:
raise HTTPException(status_code=400, detail=f"Database discovery failed: {exc}")
raise HTTPException(
status_code=400,
detail=api_error("database_discovery_failed", "Database discovery failed", {"error": str(exc)}),
)
finally:
if conn:
await conn.close()
@@ -131,7 +135,10 @@ async def test_target_connection(
version = await conn.fetchval("SHOW server_version")
return {"ok": True, "message": "Connection successful", "server_version": version}
except Exception as exc:
raise HTTPException(status_code=400, detail=f"Connection failed: {exc}")
raise HTTPException(
status_code=400,
detail=api_error("connection_test_failed", "Connection test failed", {"error": str(exc)}),
)
finally:
if conn:
await conn.close()
@@ -147,7 +154,10 @@ async def create_target(
if owner_ids:
owners_exist = (await db.scalars(select(User.id).where(User.id.in_(owner_ids)))).all()
if len(set(owners_exist)) != len(owner_ids):
raise HTTPException(status_code=400, detail="One or more owner users were not found")
raise HTTPException(
status_code=400,
detail=api_error("owner_users_not_found", "One or more owner users were not found"),
)
encrypted_password = encrypt_secret(payload.password)
created_targets: list[Target] = []
@@ -155,7 +165,10 @@ async def create_target(
if payload.discover_all_databases:
databases = await _discover_databases(payload)
if not databases:
raise HTTPException(status_code=400, detail="No databases discovered on target")
raise HTTPException(
status_code=400,
detail=api_error("no_databases_discovered", "No databases discovered on target"),
)
group_id = str(uuid4())
base_tags = payload.tags or {}
for dbname in databases:
@@ -194,7 +207,10 @@ async def create_target(
await _set_target_owners(db, target.id, owner_ids, user.id)
if not created_targets:
raise HTTPException(status_code=400, detail="All discovered databases already exist as targets")
raise HTTPException(
status_code=400,
detail=api_error("all_discovered_databases_exist", "All discovered databases already exist as targets"),
)
await db.commit()
for item in created_targets:
await db.refresh(item)
@@ -247,7 +263,7 @@ async def get_target(target_id: int, user: User = Depends(get_current_user), db:
_ = user
target = await db.scalar(select(Target).where(Target.id == target_id))
if not target:
raise HTTPException(status_code=404, detail="Target not found")
raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
owner_map = await _owners_by_target_ids(db, [target.id])
return _target_out_with_owners(target, owner_map.get(target.id, []))
@@ -261,7 +277,7 @@ async def update_target(
) -> TargetOut:
target = await db.scalar(select(Target).where(Target.id == target_id))
if not target:
raise HTTPException(status_code=404, detail="Target not found")
raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
updates = payload.model_dump(exclude_unset=True)
owner_user_ids = updates.pop("owner_user_ids", None)
@@ -273,7 +289,10 @@ async def update_target(
if owner_user_ids is not None:
owners_exist = (await db.scalars(select(User.id).where(User.id.in_(owner_user_ids)))).all()
if len(set(owners_exist)) != len(set(owner_user_ids)):
raise HTTPException(status_code=400, detail="One or more owner users were not found")
raise HTTPException(
status_code=400,
detail=api_error("owner_users_not_found", "One or more owner users were not found"),
)
await _set_target_owners(db, target.id, owner_user_ids, user.id)
await db.commit()
@@ -292,12 +311,15 @@ async def set_target_owners(
) -> list[TargetOwnerOut]:
target = await db.scalar(select(Target).where(Target.id == target_id))
if not target:
raise HTTPException(status_code=404, detail="Target not found")
raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
owner_user_ids = sorted(set(payload.user_ids))
if owner_user_ids:
owners_exist = (await db.scalars(select(User.id).where(User.id.in_(owner_user_ids)))).all()
if len(set(owners_exist)) != len(set(owner_user_ids)):
raise HTTPException(status_code=400, detail="One or more owner users were not found")
raise HTTPException(
status_code=400,
detail=api_error("owner_users_not_found", "One or more owner users were not found"),
)
await _set_target_owners(db, target_id, owner_user_ids, user.id)
await db.commit()
await write_audit_log(db, "target.owners.update", user.id, {"target_id": target_id, "owner_user_ids": owner_user_ids})
@@ -321,7 +343,7 @@ async def get_target_owners(
_ = user
target = await db.scalar(select(Target).where(Target.id == target_id))
if not target:
raise HTTPException(status_code=404, detail="Target not found")
raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
rows = (
await db.execute(
select(User.id, User.email, User.role)
@@ -341,7 +363,7 @@ async def delete_target(
) -> dict:
target = await db.scalar(select(Target).where(Target.id == target_id))
if not target:
raise HTTPException(status_code=404, detail="Target not found")
raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
await db.delete(target)
await db.commit()
await write_audit_log(db, "target.delete", user.id, {"target_id": target_id})
@@ -369,7 +391,22 @@ async def get_metrics(
async def _live_conn(target: Target) -> asyncpg.Connection:
try:
return await asyncpg.connect(dsn=build_target_dsn(target))
except (OSError, asyncpg.PostgresError) as exc:
raise HTTPException(
status_code=503,
detail=api_error(
"target_unreachable",
"Target database is not reachable",
{
"target_id": target.id,
"host": target.host,
"port": target.port,
"error": str(exc),
},
),
) from exc
@router.get("/{target_id}/locks")
@@ -377,7 +414,7 @@ async def get_locks(target_id: int, user: User = Depends(get_current_user), db:
_ = user
target = await db.scalar(select(Target).where(Target.id == target_id))
if not target:
raise HTTPException(status_code=404, detail="Target not found")
raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
conn = await _live_conn(target)
try:
rows = await conn.fetch(
@@ -398,7 +435,7 @@ async def get_activity(target_id: int, user: User = Depends(get_current_user), d
_ = user
target = await db.scalar(select(Target).where(Target.id == target_id))
if not target:
raise HTTPException(status_code=404, detail="Target not found")
raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
conn = await _live_conn(target)
try:
rows = await conn.fetch(
@@ -420,7 +457,7 @@ async def get_top_queries(target_id: int, user: User = Depends(get_current_user)
_ = user
target = await db.scalar(select(Target).where(Target.id == target_id))
if not target:
raise HTTPException(status_code=404, detail="Target not found")
raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
if not target.use_pg_stat_statements:
return []
rows = (
@@ -450,5 +487,20 @@ async def get_overview(target_id: int, user: User = Depends(get_current_user), d
_ = user
target = await db.scalar(select(Target).where(Target.id == target_id))
if not target:
raise HTTPException(status_code=404, detail="Target not found")
raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
try:
return await get_target_overview(target)
except (OSError, asyncpg.PostgresError) as exc:
raise HTTPException(
status_code=503,
detail=api_error(
"target_unreachable",
"Target database is not reachable",
{
"target_id": target.id,
"host": target.host,
"port": target.port,
"error": str(exc),
},
),
) from exc

View File

@@ -2,7 +2,7 @@ from functools import lru_cache
from pydantic import field_validator
from pydantic_settings import BaseSettings, SettingsConfigDict
NEXAPG_VERSION = "0.1.1"
NEXAPG_VERSION = "0.2.2"
class Settings(BaseSettings):

View File

@@ -1,10 +1,11 @@
from fastapi import Depends, HTTPException, status
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
from jose import JWTError, jwt
import jwt
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession
from app.core.config import get_settings
from app.core.db import get_db
from app.core.errors import api_error
from app.models.models import User
settings = get_settings()
@@ -16,27 +17,42 @@ async def get_current_user(
db: AsyncSession = Depends(get_db),
) -> User:
if not credentials:
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Missing token")
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=api_error("missing_token", "Missing token"),
)
token = credentials.credentials
try:
payload = jwt.decode(token, settings.jwt_secret_key, algorithms=[settings.jwt_algorithm])
except JWTError as exc:
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid token") from exc
except jwt.InvalidTokenError as exc:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=api_error("invalid_token", "Invalid token"),
) from exc
if payload.get("type") != "access":
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid token type")
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=api_error("invalid_token_type", "Invalid token type"),
)
user_id = payload.get("sub")
user = await db.scalar(select(User).where(User.id == int(user_id)))
if not user:
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="User not found")
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=api_error("user_not_found", "User not found"),
)
return user
def require_roles(*roles: str):
async def role_dependency(user: User = Depends(get_current_user)) -> User:
if user.role not in roles:
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Forbidden")
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail=api_error("forbidden", "Forbidden"),
)
return user
return role_dependency
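A standalone sketch of the PyJWT validation path used above; the secret and algorithm are illustrative stand-ins, not the project's settings:

import jwt  # PyJWT

SECRET = "example-secret"

token = jwt.encode({"sub": "1", "type": "access"}, SECRET, algorithm="HS256")
try:
    payload = jwt.decode(token, SECRET, algorithms=["HS256"])
except jwt.InvalidTokenError:
    payload = None  # get_current_user maps this to a 401 invalid_token

assert payload is not None and payload.get("type") == "access"

# tampered tokens raise an InvalidTokenError subclass, e.g. InvalidSignatureError
try:
    jwt.decode(token + "x", SECRET, algorithms=["HS256"])
except jwt.InvalidTokenError as exc:
    print(type(exc).__name__)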

View File

@@ -0,0 +1,38 @@
from __future__ import annotations
from typing import Any
def error_payload(code: str, message: str, details: Any, request_id: str) -> dict[str, Any]:
return {
"code": code,
"message": message,
"details": details,
"request_id": request_id,
}
def api_error(code: str, message: str, details: Any = None) -> dict[str, Any]:
return {
"code": code,
"message": message,
"details": details,
}
def http_status_to_code(status_code: int) -> str:
mapping = {
400: "bad_request",
401: "unauthorized",
403: "forbidden",
404: "not_found",
405: "method_not_allowed",
409: "conflict",
422: "validation_error",
429: "rate_limited",
500: "internal_error",
502: "bad_gateway",
503: "service_unavailable",
504: "gateway_timeout",
}
return mapping.get(status_code, f"http_{status_code}")
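A short sketch of how these helpers compose, assuming app.core.errors imports as defined above:

from app.core.errors import api_error, error_payload, http_status_to_code

detail = api_error("target_not_found", "Target not found", {"target_id": 42})
# -> {"code": "target_not_found", "message": "Target not found", "details": {"target_id": 42}}

# the global handlers add the request id to produce the wire format
envelope = error_payload(**detail, request_id="req-123")

assert http_status_to_code(404) == "not_found"
assert http_status_to_code(418) == "http_418"  # unmapped statuses fall back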

View File

@@ -1,5 +1,5 @@
from datetime import datetime, timedelta, timezone
from jose import jwt
import jwt
from passlib.context import CryptContext
from app.core.config import get_settings
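Token creation follows the same python-jose to PyJWT swap; a minimal sketch with illustrative claim names and lifetime (the real values come from Settings):

from datetime import datetime, timedelta, timezone
import jwt  # PyJWT

def make_token(sub: str, token_type: str, minutes: int, key: str) -> str:
    now = datetime.now(timezone.utc)
    claims = {"sub": sub, "type": token_type, "iat": now, "exp": now + timedelta(minutes=minutes)}
    # PyJWT serializes datetime claims to UNIX timestamps automatically
    return jwt.encode(claims, key, algorithm="HS256")

access = make_token("1", "access", 15, "example-secret")
# jwt.decode enforces exp and raises ExpiredSignatureError once it passes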

View File

@@ -1,12 +1,17 @@
import asyncio
import logging
from uuid import uuid4
from contextlib import asynccontextmanager
from fastapi import FastAPI
from fastapi import FastAPI, HTTPException, Request
from fastapi.exceptions import RequestValidationError
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
from starlette.exceptions import HTTPException as StarletteHTTPException
from sqlalchemy import select
from app.api.router import api_router
from app.core.config import get_settings
from app.core.db import SessionLocal
from app.core.errors import error_payload, http_status_to_code
from app.core.logging import configure_logging
from app.core.security import hash_password
from app.models.models import User
@@ -57,4 +62,67 @@ app.add_middleware(
allow_methods=["*"],
allow_headers=["*"],
)
@app.middleware("http")
async def request_id_middleware(request: Request, call_next):
request_id = request.headers.get("x-request-id") or str(uuid4())
request.state.request_id = request_id
response = await call_next(request)
response.headers["X-Request-ID"] = request_id
return response
@app.exception_handler(HTTPException)
@app.exception_handler(StarletteHTTPException)
async def http_exception_handler(request: Request, exc: HTTPException | StarletteHTTPException):
request_id = getattr(request.state, "request_id", str(uuid4()))
code = http_status_to_code(exc.status_code)
message = "Request failed"
details = None
if isinstance(exc.detail, str):
message = exc.detail
elif isinstance(exc.detail, dict):
code = str(exc.detail.get("code", code))
message = str(exc.detail.get("message", message))
details = exc.detail.get("details")
elif isinstance(exc.detail, list):
message = "Request validation failed"
details = exc.detail
return JSONResponse(
status_code=exc.status_code,
content=error_payload(code=code, message=message, details=details, request_id=request_id),
)
@app.exception_handler(RequestValidationError)
async def request_validation_exception_handler(request: Request, exc: RequestValidationError):
request_id = getattr(request.state, "request_id", str(uuid4()))
return JSONResponse(
status_code=422,
content=error_payload(
code="validation_error",
message="Request validation failed",
details=exc.errors(),
request_id=request_id,
),
)
@app.exception_handler(Exception)
async def unhandled_exception_handler(request: Request, exc: Exception):
request_id = getattr(request.state, "request_id", str(uuid4()))
logger.exception("unhandled_exception request_id=%s", request_id, exc_info=exc)
return JSONResponse(
status_code=500,
content=error_payload(
code="internal_error",
message="Internal server error",
details=None,
request_id=request_id,
),
)
app.include_router(api_router, prefix=settings.api_v1_prefix)
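A hedged end-to-end sketch of the middleware plus handlers, assuming the app object imports cleanly (its Settings need env vars such as JWT_SECRET_KEY) and httpx is installed for TestClient:

from fastapi.testclient import TestClient
from app.main import app  # assumed import path

client = TestClient(app)
resp = client.get("/definitely-missing", headers={"x-request-id": "demo-123"})
assert resp.status_code == 404
assert resp.headers["X-Request-ID"] == "demo-123"  # echoed by the middleware
body = resp.json()
assert body["code"] == "not_found"   # via http_status_to_code(404)
assert body["request_id"] == "demo-123"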

View File

@@ -9,6 +9,8 @@ class User(Base):
id: Mapped[int] = mapped_column(Integer, primary_key=True)
email: Mapped[str] = mapped_column(String(255), unique=True, index=True, nullable=False)
first_name: Mapped[str | None] = mapped_column(String(120), nullable=True)
last_name: Mapped[str | None] = mapped_column(String(120), nullable=True)
password_hash: Mapped[str] = mapped_column(String(255), nullable=False)
role: Mapped[str] = mapped_column(String(20), nullable=False, default="viewer")
created_at: Mapped[datetime] = mapped_column(DateTime(timezone=True), server_default=func.now(), nullable=False)
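A hypothetical Alembic migration for the two new columns; the "users" table name and revision identifiers are assumptions, not taken from the project's migration history:

import sqlalchemy as sa
from alembic import op

revision = "add_user_names"  # placeholder id
down_revision = None         # set to the real parent revision in practice

def upgrade() -> None:
    op.add_column("users", sa.Column("first_name", sa.String(120), nullable=True))
    op.add_column("users", sa.Column("last_name", sa.String(120), nullable=True))

def downgrade() -> None:
    op.drop_column("users", "last_name")
    op.drop_column("users", "first_name")

Both columns are nullable, so the upgrade is safe on an already-populated users table.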

View File

@@ -1,10 +1,12 @@
from datetime import datetime
from pydantic import BaseModel, EmailStr
from pydantic import BaseModel, EmailStr, field_validator
class UserOut(BaseModel):
id: int
email: EmailStr
first_name: str | None = None
last_name: str | None = None
role: str
created_at: datetime
@@ -13,11 +15,27 @@ class UserOut(BaseModel):
class UserCreate(BaseModel):
email: EmailStr
first_name: str | None = None
last_name: str | None = None
password: str
role: str = "viewer"
class UserUpdate(BaseModel):
email: EmailStr | None = None
first_name: str | None = None
last_name: str | None = None
password: str | None = None
role: str | None = None
class UserPasswordChange(BaseModel):
current_password: str
new_password: str
@field_validator("new_password")
@classmethod
def validate_new_password(cls, value: str) -> str:
if len(value) < 8:
raise ValueError("new_password must be at least 8 characters")
return value
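A minimal sketch of how the new_password validator behaves under Pydantic v2, assuming the schema module imports as defined above:

from pydantic import ValidationError
from app.schemas.user import UserPasswordChange

UserPasswordChange(current_password="old-secret", new_password="long-enough-1")  # passes

try:
    UserPasswordChange(current_password="old-secret", new_password="short")
except ValidationError as exc:
    # Pydantic v2 wraps the ValueError, roughly:
    # "Value error, new_password must be at least 8 characters"
    print(exc.errors()[0]["msg"])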

View File

@@ -11,6 +11,7 @@ from sqlalchemy import desc, func, select
from sqlalchemy.ext.asyncio import AsyncSession
from app.core.config import get_settings
from app.core.errors import api_error
from app.models.models import AlertDefinition, Metric, QueryStat, Target
from app.schemas.alert import AlertStatusItem, AlertStatusResponse
from app.services.collector import build_target_dsn
@@ -144,25 +145,40 @@ def get_standard_alert_reference() -> list[dict[str, str]]:
def validate_alert_thresholds(comparison: str, warning_threshold: float | None, alert_threshold: float) -> None:
if comparison not in _ALLOWED_COMPARISONS:
raise HTTPException(status_code=400, detail=f"Invalid comparison. Use one of {sorted(_ALLOWED_COMPARISONS)}")
raise HTTPException(
status_code=400,
detail=api_error(
"invalid_comparison",
f"Invalid comparison. Use one of {sorted(_ALLOWED_COMPARISONS)}",
),
)
if warning_threshold is None:
return
if comparison in {"gte", "gt"} and warning_threshold > alert_threshold:
raise HTTPException(status_code=400, detail="For gte/gt, warning_threshold must be <= alert_threshold")
raise HTTPException(
status_code=400,
detail=api_error("invalid_thresholds", "For gte/gt, warning_threshold must be <= alert_threshold"),
)
if comparison in {"lte", "lt"} and warning_threshold < alert_threshold:
raise HTTPException(status_code=400, detail="For lte/lt, warning_threshold must be >= alert_threshold")
raise HTTPException(
status_code=400,
detail=api_error("invalid_thresholds", "For lte/lt, warning_threshold must be >= alert_threshold"),
)
def validate_alert_sql(sql_text: str) -> str:
sql = sql_text.strip().rstrip(";")
lowered = sql.lower().strip()
if not lowered.startswith("select"):
raise HTTPException(status_code=400, detail="Alert SQL must start with SELECT")
raise HTTPException(status_code=400, detail=api_error("invalid_alert_sql", "Alert SQL must start with SELECT"))
if _FORBIDDEN_SQL_WORDS.search(lowered):
raise HTTPException(status_code=400, detail="Only read-only SELECT statements are allowed")
raise HTTPException(
status_code=400,
detail=api_error("invalid_alert_sql", "Only read-only SELECT statements are allowed"),
)
if ";" in sql:
raise HTTPException(status_code=400, detail="Only a single SQL statement is allowed")
raise HTTPException(status_code=400, detail=api_error("invalid_alert_sql", "Only a single SQL statement is allowed"))
return sql
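A standalone sketch of the read-only guard above; the forbidden-word pattern here is illustrative, the real one lives in _FORBIDDEN_SQL_WORDS:

import re

FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|create|grant|truncate)\b")

def check_alert_sql(sql_text: str) -> str:
    sql = sql_text.strip().rstrip(";")
    lowered = sql.lower()
    if not lowered.startswith("select"):
        raise ValueError("Alert SQL must start with SELECT")
    if FORBIDDEN.search(lowered):
        raise ValueError("Only read-only SELECT statements are allowed")
    if ";" in sql:  # trailing semicolons were stripped, so any left marks a second statement
        raise ValueError("Only a single SQL statement is allowed")
    return sql

assert check_alert_sql("SELECT count(*) FROM pg_stat_activity;") == "SELECT count(*) FROM pg_stat_activity"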

View File

@@ -1,6 +1,7 @@
import asyncio
import logging
from datetime import datetime, timezone
from random import uniform
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.exc import SQLAlchemyError
@@ -15,6 +16,9 @@ logger = logging.getLogger(__name__)
settings = get_settings()
_failure_state: dict[int, dict[str, object]] = {}
_failure_log_interval_seconds = 300
_backoff_base_seconds = max(3, int(settings.poll_interval_seconds))
_backoff_max_seconds = 300
_backoff_jitter_factor = 0.15
def build_target_dsn(target: Target) -> str:
@@ -181,31 +185,66 @@ async def collect_once() -> None:
async with SessionLocal() as db:
targets = (await db.scalars(select(Target))).all()
active_target_ids = {target.id for target in targets}
stale_target_ids = [target_id for target_id in _failure_state.keys() if target_id not in active_target_ids]
for stale_target_id in stale_target_ids:
_failure_state.pop(stale_target_id, None)
for target in targets:
now = datetime.now(timezone.utc)
state = _failure_state.get(target.id)
if state:
next_attempt_at = state.get("next_attempt_at")
if isinstance(next_attempt_at, datetime) and now < next_attempt_at:
continue
try:
await collect_target(target)
prev = _failure_state.pop(target.id, None)
if prev:
first_failure_at = prev.get("first_failure_at")
downtime_seconds = None
if isinstance(first_failure_at, datetime):
downtime_seconds = max(0, int((now - first_failure_at).total_seconds()))
logger.info(
"collector_target_recovered target=%s after_failures=%s last_error=%s",
"collector_target_recovered target=%s after_failures=%s downtime_seconds=%s last_error=%s",
target.id,
prev.get("count", 0),
downtime_seconds,
prev.get("error"),
)
except Exception as exc:  # subsumes OSError, SQLAlchemyError, asyncpg.PostgresError
now = datetime.now(timezone.utc)
current_error = str(exc)
error_class = exc.__class__.__name__
state = _failure_state.get(target.id)
if state is None:
next_delay = min(_backoff_max_seconds, _backoff_base_seconds)
jitter = next_delay * _backoff_jitter_factor
next_delay = max(1, int(next_delay + uniform(-jitter, jitter)))
next_attempt_at = now.timestamp() + next_delay
_failure_state[target.id] = {
"count": 1,
"first_failure_at": now,
"last_log_at": now,
"error": current_error,
"next_attempt_at": datetime.fromtimestamp(next_attempt_at, tz=timezone.utc),
}
logger.exception("collector_error target=%s err=%s", target.id, exc)
logger.warning(
"collector_target_unreachable target=%s error_class=%s err=%s consecutive_failures=%s retry_in_seconds=%s",
target.id,
error_class,
current_error,
1,
next_delay,
)
continue
count = int(state.get("count", 0)) + 1
raw_backoff = min(_backoff_max_seconds, _backoff_base_seconds * (2 ** min(count - 1, 10)))
jitter = raw_backoff * _backoff_jitter_factor
next_delay = max(1, int(raw_backoff + uniform(-jitter, jitter)))
state["next_attempt_at"] = datetime.fromtimestamp(now.timestamp() + next_delay, tz=timezone.utc)
last_log_at = state.get("last_log_at")
last_logged_error = str(state.get("error", ""))
should_log = False
@@ -220,18 +259,23 @@ async def collect_once() -> None:
if should_log:
state["last_log_at"] = now
state["error"] = current_error
logger.error(
"collector_error_throttled target=%s err=%s consecutive_failures=%s",
logger.warning(
"collector_target_unreachable target=%s error_class=%s err=%s consecutive_failures=%s retry_in_seconds=%s",
target.id,
error_class,
current_error,
count,
next_delay,
)
async def collector_loop(stop_event: asyncio.Event) -> None:
while not stop_event.is_set():
cycle_started = asyncio.get_running_loop().time()
await collect_once()
elapsed = asyncio.get_running_loop().time() - cycle_started
sleep_for = max(0.0, settings.poll_interval_seconds - elapsed)
try:
await asyncio.wait_for(stop_event.wait(), timeout=settings.poll_interval_seconds)
await asyncio.wait_for(stop_event.wait(), timeout=sleep_for)
except asyncio.TimeoutError:
pass
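A standalone sketch of the backoff schedule implemented above: exponential doubling from the poll interval, capped at 300 seconds, with roughly +/-15% jitter:

from random import uniform

BASE = 10      # stand-in for max(3, settings.poll_interval_seconds)
CAP = 300      # _backoff_max_seconds
JITTER = 0.15  # _backoff_jitter_factor

def next_delay(consecutive_failures: int) -> int:
    raw = min(CAP, BASE * (2 ** min(consecutive_failures - 1, 10)))
    jitter = raw * JITTER
    return max(1, int(raw + uniform(-jitter, jitter)))

for n in range(1, 8):
    print(n, next_delay(n))
# roughly 10, 20, 40, 80, 160, 300, 300 -- each +/-15%, so repeated failures
# against the same target spread out instead of hammering it every poll cycle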

View File

@@ -17,10 +17,10 @@ class DiskSpaceProvider:
class NullDiskSpaceProvider(DiskSpaceProvider):
async def get_free_bytes(self, target_host: str) -> DiskSpaceProbeResult:
return DiskSpaceProbeResult(
source="none",
source="agentless",
status="unavailable",
free_bytes=None,
message=f"No infra probe configured for host {target_host}. Add SSH/Agent provider later.",
message=f"Agentless mode: host-level free disk is not available for {target_host}.",
)

View File

@@ -1,4 +1,5 @@
fastapi==0.116.1
fastapi==0.129.0
starlette==0.52.1
uvicorn[standard]==0.35.0
gunicorn==23.0.0
sqlalchemy[asyncio]==2.0.44
@@ -7,7 +8,7 @@ alembic==1.16.5
pydantic==2.11.7
pydantic-settings==2.11.0
email-validator==2.2.0
python-jose[cryptography]==3.5.0
PyJWT==2.11.0
passlib[argon2]==1.7.4
cryptography==45.0.7
python-multipart==0.0.20
cryptography==46.0.5
python-multipart==0.0.22

View File

@@ -18,8 +18,8 @@ services:
retries: 10
backend:
build:
context: ./backend
image: nesterovicit/nexapg-backend:latest
pull_policy: always
container_name: nexapg-backend
restart: unless-stopped
environment:
@@ -47,16 +47,14 @@ services:
- "${BACKEND_PORT}:8000"
frontend:
build:
context: ./frontend
args:
VITE_API_URL: ${VITE_API_URL}
image: nesterovicit/nexapg-frontend:latest
pull_policy: always
container_name: nexapg-frontend
restart: unless-stopped
depends_on:
- backend
ports:
- "${FRONTEND_PORT}:80"
- "${FRONTEND_PORT}:8080"
volumes:
pg_data:

View File

@@ -7,8 +7,12 @@ ARG VITE_API_URL=/api/v1
ENV VITE_API_URL=${VITE_API_URL}
RUN npm run build
FROM nginx:1.29-alpine
FROM nginx:1-alpine-slim
RUN apk upgrade --no-cache \
&& mkdir -p /var/cache/nginx /var/run /var/log/nginx /tmp/nginx \
&& chown -R nginx:nginx /var/cache/nginx /var/run /var/log/nginx /tmp/nginx
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
HEALTHCHECK --interval=30s --timeout=3s --retries=5 CMD wget -qO- http://127.0.0.1/ || exit 1
USER 101
EXPOSE 8080
HEALTHCHECK --interval=30s --timeout=3s --retries=5 CMD nginx -t || exit 1

View File

@@ -1,5 +1,5 @@
server {
listen 80;
listen 8080;
server_name _;
root /usr/share/nginx/html;

View File

@@ -9,6 +9,7 @@ import { QueryInsightsPage } from "./pages/QueryInsightsPage";
import { AlertsPage } from "./pages/AlertsPage";
import { AdminUsersPage } from "./pages/AdminUsersPage";
import { ServiceInfoPage } from "./pages/ServiceInfoPage";
import { UserSettingsPage } from "./pages/UserSettingsPage";
function Protected({ children }) {
const { tokens } = useAuth();
@@ -18,9 +19,10 @@ function Protected({ children }) {
}
function Layout({ children }) {
const { me, logout, uiMode, setUiMode, alertToasts, dismissAlertToast } = useAuth();
const { me, logout, uiMode, setUiMode, alertToasts, dismissAlertToast, serviceUpdateAvailable } = useAuth();
const navigate = useNavigate();
const navClass = ({ isActive }) => `nav-btn${isActive ? " active" : ""}`;
const fullName = [me?.first_name, me?.last_name].filter(Boolean).join(" ").trim();
return (
<div className="shell">
@@ -62,7 +64,10 @@ function Layout({ children }) {
</span>
<span className="nav-label">Alerts</span>
</NavLink>
<NavLink to="/service-info" className={navClass}>
<NavLink
to="/service-info"
className={({ isActive }) => `nav-btn${isActive ? " active" : ""}${serviceUpdateAvailable ? " update-available" : ""}`}
>
<span className="nav-icon" aria-hidden="true">
<svg viewBox="0 0 24 24">
<path d="M12 22a10 10 0 1 0 0-20 10 10 0 0 0 0 20zm0-11v6m0-10h.01" />
@@ -97,8 +102,12 @@ function Layout({ children }) {
</button>
<small>{uiMode === "easy" ? "Simple health guidance" : "Advanced DBA metrics"}</small>
</div>
<div>{me?.email}</div>
<div className="role">{me?.role}</div>
<div className="profile-name">{fullName || me?.email}</div>
{fullName && <div className="profile-email">{me?.email}</div>}
<div className="role profile-role">{me?.role}</div>
<NavLink to="/user-settings" className={({ isActive }) => `profile-btn${isActive ? " active" : ""}`}>
User Settings
</NavLink>
<button className="logout-btn" onClick={logout}>Logout</button>
</div>
</aside>
@@ -160,6 +169,7 @@ export function App() {
<Route path="/query-insights" element={<QueryInsightsPage />} />
<Route path="/alerts" element={<AlertsPage />} />
<Route path="/service-info" element={<ServiceInfoPage />} />
<Route path="/user-settings" element={<UserSettingsPage />} />
<Route path="/admin/users" element={<AdminUsersPage />} />
</Routes>
</Layout>

View File

@@ -35,8 +35,21 @@ export async function apiFetch(path, options = {}, tokens, onUnauthorized) {
}
}
if (!res.ok) {
const txt = await res.text();
throw new Error(txt || `HTTP ${res.status}`);
const raw = await res.text();
let parsed = null;
try {
parsed = raw ? JSON.parse(raw) : null;
} catch {
parsed = null;
}
const message = parsed?.message || raw || `HTTP ${res.status}`;
const err = new Error(message);
err.status = res.status;
err.code = parsed?.code || null;
err.details = parsed?.details || null;
err.requestId = parsed?.request_id || res.headers.get("x-request-id") || null;
throw err;
}
if (res.status === 204) return null;
return res.json();
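Non-browser clients can consume the same envelope symmetrically; a Python sketch mirroring the apiFetch logic above, assuming the requests library and an illustrative base URL and token:

import requests

resp = requests.get(
    "http://localhost:8000/api/v1/targets/9999",  # illustrative URL
    headers={"Authorization": "Bearer <token>"},
)
if not resp.ok:
    try:
        body = resp.json()
    except ValueError:
        body = {}
    message = body.get("message") or resp.text or f"HTTP {resp.status_code}"
    # fall back to the response header when the body carries no request id
    request_id = body.get("request_id") or resp.headers.get("x-request-id")
    raise RuntimeError(f"{body.get('code', 'unknown')}: {message} (request_id={request_id})")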

View File

@@ -19,8 +19,11 @@ const TEMPLATE_VARIABLES = [
export function AdminUsersPage() {
const { tokens, refresh, me } = useAuth();
const emptyCreateForm = { email: "", first_name: "", last_name: "", password: "", role: "viewer" };
const [users, setUsers] = useState([]);
const [form, setForm] = useState({ email: "", password: "", role: "viewer" });
const [form, setForm] = useState(emptyCreateForm);
const [editingUserId, setEditingUserId] = useState(null);
const [editForm, setEditForm] = useState({ email: "", first_name: "", last_name: "", password: "", role: "viewer" });
const [emailSettings, setEmailSettings] = useState({
enabled: false,
smtp_host: "",
@@ -79,7 +82,7 @@ export function AdminUsersPage() {
e.preventDefault();
try {
await apiFetch("/admin/users", { method: "POST", body: JSON.stringify(form) }, tokens, refresh);
setForm({ email: "", password: "", role: "viewer" });
setForm(emptyCreateForm);
await load();
} catch (e) {
setError(String(e.message || e));
@@ -95,6 +98,39 @@ export function AdminUsersPage() {
}
};
const startEdit = (user) => {
setEditingUserId(user.id);
setEditForm({
email: user.email || "",
first_name: user.first_name || "",
last_name: user.last_name || "",
password: "",
role: user.role || "viewer",
});
};
const cancelEdit = () => {
setEditingUserId(null);
setEditForm({ email: "", first_name: "", last_name: "", password: "", role: "viewer" });
};
const saveEdit = async (userId) => {
try {
const payload = {
email: editForm.email,
first_name: editForm.first_name.trim() || null,
last_name: editForm.last_name.trim() || null,
role: editForm.role,
};
if (editForm.password.trim()) payload.password = editForm.password;
await apiFetch(`/admin/users/${userId}`, { method: "PUT", body: JSON.stringify(payload) }, tokens, refresh);
cancelEdit();
await load();
} catch (e) {
setError(String(e.message || e));
}
};
const saveSmtp = async (e) => {
e.preventDefault();
setError("");
@@ -165,6 +201,22 @@ export function AdminUsersPage() {
<p className="muted">Create accounts and manage access roles.</p>
</div>
<form className="grid three admin-user-form" onSubmit={create}>
<div className="admin-field">
<label>First Name</label>
<input
value={form.first_name}
placeholder="Jane"
onChange={(e) => setForm({ ...form, first_name: e.target.value })}
/>
</div>
<div className="admin-field">
<label>Last Name</label>
<input
value={form.last_name}
placeholder="Doe"
onChange={(e) => setForm({ ...form, last_name: e.target.value })}
/>
</div>
<div className="admin-field">
<label>Email</label>
<input value={form.email} placeholder="user@example.com" onChange={(e) => setForm({ ...form, email: e.target.value })} />
@@ -197,6 +249,7 @@ export function AdminUsersPage() {
<thead>
<tr>
<th>ID</th>
<th>Name</th>
<th>Email</th>
<th>Role</th>
<th>Action</th>
@@ -206,11 +259,70 @@ export function AdminUsersPage() {
{users.map((u) => (
<tr key={u.id} className="admin-user-row">
<td className="user-col-id">{u.id}</td>
<td className="user-col-email">{u.email}</td>
<td>
<span className={`pill role-pill role-${u.role}`}>{u.role}</span>
<td className="user-col-name">
{editingUserId === u.id ? (
<div className="admin-inline-grid two">
<input
value={editForm.first_name}
placeholder="First name"
onChange={(e) => setEditForm({ ...editForm, first_name: e.target.value })}
/>
<input
value={editForm.last_name}
placeholder="Last name"
onChange={(e) => setEditForm({ ...editForm, last_name: e.target.value })}
/>
</div>
) : (
<span className="user-col-name-value">{[u.first_name, u.last_name].filter(Boolean).join(" ") || "-"}</span>
)}
</td>
<td className="user-col-email">
{editingUserId === u.id ? (
<input
value={editForm.email}
placeholder="user@example.com"
onChange={(e) => setEditForm({ ...editForm, email: e.target.value })}
/>
) : (
u.email
)}
</td>
<td>
{editingUserId === u.id ? (
<select value={editForm.role} onChange={(e) => setEditForm({ ...editForm, role: e.target.value })}>
<option value="viewer">viewer</option>
<option value="operator">operator</option>
<option value="admin">admin</option>
</select>
) : (
<span className={`pill role-pill role-${u.role}`}>{u.role}</span>
)}
</td>
<td className="admin-user-actions">
{editingUserId === u.id && (
<input
type="password"
className="admin-inline-password"
value={editForm.password}
placeholder="New password (optional)"
onChange={(e) => setEditForm({ ...editForm, password: e.target.value })}
/>
)}
{editingUserId === u.id ? (
<>
<button className="table-action-btn primary small-btn" onClick={() => saveEdit(u.id)}>
Save
</button>
<button className="table-action-btn small-btn" onClick={cancelEdit}>
Cancel
</button>
</>
) : (
<button className="table-action-btn edit small-btn" onClick={() => startEdit(u)}>
Edit
</button>
)}
{u.id !== me.id && (
<button className="table-action-btn delete small-btn" onClick={() => remove(u.id)}>
<span aria-hidden="true">

View File

@@ -1,4 +1,4 @@
import React, { useEffect, useState } from "react";
import React, { useEffect, useRef, useState } from "react";
import { apiFetch } from "../api";
import { useAuth } from "../state";
@@ -62,6 +62,7 @@ function buildQueryTips(row) {
export function QueryInsightsPage() {
const { tokens, refresh } = useAuth();
const refreshRef = useRef(refresh);
const [targets, setTargets] = useState([]);
const [targetId, setTargetId] = useState("");
const [rows, setRows] = useState([]);
@@ -71,6 +72,10 @@ export function QueryInsightsPage() {
const [error, setError] = useState("");
const [loading, setLoading] = useState(true);
useEffect(() => {
refreshRef.current = refresh;
}, [refresh]);
useEffect(() => {
(async () => {
try {
@@ -89,17 +94,26 @@ export function QueryInsightsPage() {
useEffect(() => {
if (!targetId) return;
let active = true;
(async () => {
try {
const data = await apiFetch(`/targets/${targetId}/top-queries`, {}, tokens, refresh);
const data = await apiFetch(`/targets/${targetId}/top-queries`, {}, tokens, refreshRef.current);
if (!active) return;
setRows(data);
setSelectedQuery(data[0] || null);
setPage(1);
setSelectedQuery((prev) => {
if (!prev) return data[0] || null;
const keep = data.find((row) => row.queryid === prev.queryid);
return keep || data[0] || null;
});
setPage((prev) => (prev === 1 ? prev : 1));
} catch (e) {
setError(String(e.message || e));
if (active) setError(String(e.message || e));
}
})();
}, [targetId, tokens, refresh]);
return () => {
active = false;
};
}, [targetId, tokens?.accessToken, tokens?.refreshToken]);
const dedupedByQueryId = [...rows].reduce((acc, row) => {
if (!row?.queryid) return acc;

View File

@@ -14,7 +14,7 @@ function formatUptime(seconds) {
}
export function ServiceInfoPage() {
const { tokens, refresh } = useAuth();
const { tokens, refresh, serviceInfo } = useAuth();
const [info, setInfo] = useState(null);
const [message, setMessage] = useState("");
const [error, setError] = useState("");
@@ -30,6 +30,10 @@ export function ServiceInfoPage() {
load().catch((e) => setError(String(e.message || e)));
}, []);
useEffect(() => {
if (serviceInfo) setInfo(serviceInfo);
}, [serviceInfo]);
const checkNow = async () => {
try {
setBusy(true);
@@ -56,14 +60,28 @@ export function ServiceInfoPage() {
}
return (
<div>
<div className="service-page">
<h2>Service Information</h2>
<p className="muted">Runtime details, installed version, and update check status for this NexaPG instance.</p>
{error && <div className="card error">{error}</div>}
{message && <div className="test-connection-result ok">{message}</div>}
{message && <div className="test-connection-result ok service-msg">{message}</div>}
<div className={`card service-hero ${info.update_available ? "update" : "ok"}`}>
<div>
<strong className="service-hero-title">
{info.update_available ? `Update available: ${info.latest_version}` : "Service is up to date"}
</strong>
<p className="muted service-hero-sub">
Automatic release checks run every 30 seconds. Source: official NexaPG upstream releases.
</p>
</div>
<button type="button" className="secondary-btn" disabled={busy} onClick={checkNow}>
Check Now
</button>
</div>
<div className="grid three">
<div className="card">
<div className="card service-card">
<h3>Application</h3>
<div className="overview-kv">
<span>App Name</span>
@@ -74,7 +92,7 @@ export function ServiceInfoPage() {
<strong>{info.api_prefix}</strong>
</div>
</div>
<div className="card">
<div className="card service-card">
<h3>Runtime</h3>
<div className="overview-kv">
<span>Host</span>
@@ -85,7 +103,7 @@ export function ServiceInfoPage() {
<strong>{formatUptime(info.uptime_seconds)}</strong>
</div>
</div>
<div className="card">
<div className="card service-card">
<h3>Version Status</h3>
<div className="overview-kv">
<span>Current NexaPG Version</span>
@@ -93,21 +111,16 @@ export function ServiceInfoPage() {
<span>Latest Known Version</span>
<strong>{info.latest_version || "-"}</strong>
<span>Update Status</span>
<strong className={info.update_available ? "lag-bad" : "pill primary"}>
<strong className={info.update_available ? "service-status-update" : "service-status-ok"}>
{info.update_available ? "Update available" : "Up to date"}
</strong>
<span>Last Check</span>
<strong>{info.last_checked_at ? new Date(info.last_checked_at).toLocaleString() : "never"}</strong>
</div>
<div className="form-actions" style={{ marginTop: 12 }}>
<button type="button" className="secondary-btn" disabled={busy} onClick={checkNow}>
Check for Updates
</button>
</div>
</div>
</div>
<div className="card">
<div className="card service-card">
<h3>Release Source</h3>
<p className="muted">
Update checks run against the official NexaPG repository. This source is fixed in code and cannot be changed
@@ -121,7 +134,7 @@ export function ServiceInfoPage() {
</div>
</div>
<div className="card">
<div className="card service-card">
<h3>Version Control Policy</h3>
<p className="muted">
Version and update-source settings are not editable in the app. Only code maintainers of the official NexaPG

View File

@@ -41,6 +41,19 @@ function formatNumber(value, digits = 2) {
return Number(value).toFixed(digits);
}
function formatHostMetricUnavailable() {
return "N/A (agentless)";
}
function formatDiskSpaceAgentless(diskSpace) {
if (!diskSpace) return formatHostMetricUnavailable();
if (diskSpace.free_bytes !== null && diskSpace.free_bytes !== undefined) {
return formatBytes(diskSpace.free_bytes);
}
if (diskSpace.status === "unavailable") return formatHostMetricUnavailable();
return "-";
}
function MetricsTooltip({ active, payload, label }) {
if (!active || !payload || payload.length === 0) return null;
const row = payload[0]?.payload || {};
@@ -63,6 +76,10 @@ function didMetricSeriesChange(prev = [], next = []) {
return prevLast?.ts !== nextLast?.ts || Number(prevLast?.value) !== Number(nextLast?.value);
}
function isTargetUnreachableError(err) {
return err?.code === "target_unreachable" || err?.status === 503;
}
async function loadMetric(targetId, metric, range, tokens, refresh) {
const { from, to } = toQueryRange(range);
return apiFetch(
@@ -86,6 +103,7 @@ export function TargetDetailPage() {
const [targetMeta, setTargetMeta] = useState(null);
const [owners, setOwners] = useState([]);
const [groupTargets, setGroupTargets] = useState([]);
const [offlineState, setOfflineState] = useState(null);
const [error, setError] = useState("");
const [loading, setLoading] = useState(true);
const refreshRef = useRef(refresh);
@@ -101,22 +119,16 @@ export function TargetDetailPage() {
setLoading(true);
}
try {
const [connections, xacts, cache, locksTable, activityTable, overviewData, targetInfo, ownerRows, allTargets] = await Promise.all([
const [connections, xacts, cache, targetInfo, ownerRows, allTargets] = await Promise.all([
loadMetric(id, "connections_total", range, tokens, refreshRef.current),
loadMetric(id, "xacts_total", range, tokens, refreshRef.current),
loadMetric(id, "cache_hit_ratio", range, tokens, refreshRef.current),
apiFetch(`/targets/${id}/locks`, {}, tokens, refreshRef.current),
apiFetch(`/targets/${id}/activity`, {}, tokens, refreshRef.current),
apiFetch(`/targets/${id}/overview`, {}, tokens, refreshRef.current),
apiFetch(`/targets/${id}`, {}, tokens, refreshRef.current),
apiFetch(`/targets/${id}/owners`, {}, tokens, refreshRef.current),
apiFetch("/targets", {}, tokens, refreshRef.current),
]);
if (!active) return;
setSeries({ connections, xacts, cache });
setLocks(locksTable);
setActivity(activityTable);
setOverview(overviewData);
setTargetMeta(targetInfo);
setOwners(ownerRows);
const groupId = targetInfo?.tags?.monitor_group_id;
@@ -128,6 +140,34 @@ export function TargetDetailPage() {
} else {
setGroupTargets([]);
}
try {
const [locksTable, activityTable, overviewData] = await Promise.all([
apiFetch(`/targets/${id}/locks`, {}, tokens, refreshRef.current),
apiFetch(`/targets/${id}/activity`, {}, tokens, refreshRef.current),
apiFetch(`/targets/${id}/overview`, {}, tokens, refreshRef.current),
]);
if (!active) return;
setLocks(locksTable);
setActivity(activityTable);
setOverview(overviewData);
setOfflineState(null);
} catch (liveErr) {
if (!active) return;
if (isTargetUnreachableError(liveErr)) {
setLocks([]);
setActivity([]);
setOverview(null);
setOfflineState({
message:
"Target is currently unreachable. Check host/port, network route, SSL mode, and database availability.",
host: liveErr?.details?.host || targetInfo?.host || "-",
port: liveErr?.details?.port || targetInfo?.port || "-",
requestId: liveErr?.requestId || null,
});
} else {
throw liveErr;
}
}
setError("");
} catch (e) {
if (active) setError(String(e.message || e));
@@ -268,6 +308,17 @@ export function TargetDetailPage() {
<span className="muted">Responsible users:</span>
{owners.length > 0 ? owners.map((item) => <span key={item.user_id} className="owner-pill">{item.email}</span>) : <span className="muted">none assigned</span>}
</div>
{offlineState && (
<div className="card target-offline-card">
<h3>Target Offline</h3>
<p>{offlineState.message}</p>
<div className="target-offline-meta">
<span><strong>Host:</strong> {offlineState.host}</span>
<span><strong>Port:</strong> {offlineState.port}</span>
{offlineState.requestId ? <span><strong>Request ID:</strong> {offlineState.requestId}</span> : null}
</div>
</div>
)}
{uiMode === "easy" && overview && easySummary && (
<>
<div className={`card easy-status ${easySummary.health}`}>
@@ -346,6 +397,9 @@ export function TargetDetailPage() {
{uiMode === "dba" && overview && (
<div className="card">
<h3>Database Overview</h3>
<p className="muted" style={{ marginTop: 2 }}>
Agentless mode: host-level CPU, RAM, and free-disk metrics are not available.
</p>
<div className="grid three overview-kv">
<div><span>PostgreSQL Version</span><strong>{overview.instance.server_version || "-"}</strong></div>
<div>
@@ -366,8 +420,8 @@ export function TargetDetailPage() {
<div title="Total WAL directory size (when available)">
<span>WAL Size</span><strong>{formatBytes(overview.storage.wal_directory_size_bytes)}</strong>
</div>
<div title="Optional metric via future Agent/SSH provider">
<span>Free Disk</span><strong>{formatBytes(overview.storage.disk_space.free_bytes)}</strong>
<div title={overview.storage.disk_space?.message || "Agentless mode: host-level free disk is unavailable."}>
<span>Free Disk</span><strong>{formatDiskSpaceAgentless(overview.storage.disk_space)}</strong>
</div>
<div title="Replication replay delay on standby">
<span>Replay Lag</span>
@@ -378,6 +432,12 @@ export function TargetDetailPage() {
<div><span>Replication Slots</span><strong>{overview.replication.replication_slots_count ?? "-"}</strong></div>
<div><span>Repl Clients</span><strong>{overview.replication.active_replication_clients ?? "-"}</strong></div>
<div><span>Autovacuum Workers</span><strong>{overview.performance.autovacuum_workers ?? "-"}</strong></div>
<div title="Host CPU requires OS-level telemetry">
<span>Host CPU</span><strong>{formatHostMetricUnavailable()}</strong>
</div>
<div title="Host RAM requires OS-level telemetry">
<span>Host RAM</span><strong>{formatHostMetricUnavailable()}</strong>
</div>
</div>
<div className="grid two">

View File

@@ -0,0 +1,100 @@
import React, { useState } from "react";
import { apiFetch } from "../api";
import { useAuth } from "../state";
export function UserSettingsPage() {
const { tokens, refresh } = useAuth();
const [form, setForm] = useState({
current_password: "",
new_password: "",
confirm_password: "",
});
const [message, setMessage] = useState("");
const [error, setError] = useState("");
const [busy, setBusy] = useState(false);
const submit = async (e) => {
e.preventDefault();
setMessage("");
setError("");
if (form.new_password.length < 8) {
setError("New password must be at least 8 characters.");
return;
}
if (form.new_password !== form.confirm_password) {
setError("Password confirmation does not match.");
return;
}
try {
setBusy(true);
await apiFetch(
"/me/password",
{
method: "POST",
body: JSON.stringify({
current_password: form.current_password,
new_password: form.new_password,
}),
},
tokens,
refresh
);
setForm({ current_password: "", new_password: "", confirm_password: "" });
setMessage("Password changed successfully.");
} catch (e) {
setError(String(e.message || e));
} finally {
setBusy(false);
}
};
return (
<div className="user-settings-page">
<h2>User Settings</h2>
<p className="muted">Manage your personal account security settings.</p>
{error && <div className="card error">{error}</div>}
{message && <div className="test-connection-result ok">{message}</div>}
<div className="card user-settings-card">
<h3>Change Password</h3>
<form className="grid two" onSubmit={submit}>
<div className="admin-field field-full">
<label>Current password</label>
<input
type="password"
value={form.current_password}
onChange={(e) => setForm({ ...form, current_password: e.target.value })}
required
/>
</div>
<div className="admin-field">
<label>New password</label>
<input
type="password"
value={form.new_password}
onChange={(e) => setForm({ ...form, new_password: e.target.value })}
minLength={8}
required
/>
</div>
<div className="admin-field">
<label>Confirm new password</label>
<input
type="password"
value={form.confirm_password}
onChange={(e) => setForm({ ...form, confirm_password: e.target.value })}
minLength={8}
required
/>
</div>
<div className="form-actions field-full">
<button className="primary-btn" type="submit" disabled={busy}>
{busy ? "Saving..." : "Update Password"}
</button>
</div>
</form>
</div>
</div>
);
}

View File

@@ -29,6 +29,7 @@ export function AuthProvider({ children }) {
const [uiMode, setUiModeState] = useState(loadUiMode);
const [alertStatus, setAlertStatus] = useState({ warnings: [], alerts: [], warning_count: 0, alert_count: 0 });
const [alertToasts, setAlertToasts] = useState([]);
const [serviceInfo, setServiceInfo] = useState(null);
const knownAlertKeysRef = useRef(new Set());
const hasAlertSnapshotRef = useRef(false);
@@ -175,6 +176,49 @@ export function AuthProvider({ children }) {
};
}, [tokens?.accessToken, tokens?.refreshToken]);
useEffect(() => {
if (!tokens?.accessToken) {
setServiceInfo(null);
return;
}
let mounted = true;
const request = async (path, method = "GET") => {
const doFetch = async (accessToken) =>
fetch(`${API_URL}${path}`, {
method,
headers: { Authorization: `Bearer ${accessToken}` },
});
let res = await doFetch(tokens.accessToken);
if (res.status === 401 && tokens.refreshToken) {
const refreshed = await refresh();
if (refreshed?.accessToken) {
res = await doFetch(refreshed.accessToken);
}
}
if (!res.ok) return null;
return res.json();
};
const runServiceCheck = async () => {
await request("/service/info/check", "POST");
const info = await request("/service/info", "GET");
if (mounted && info) setServiceInfo(info);
};
runServiceCheck().catch(() => {});
const timer = setInterval(() => {
runServiceCheck().catch(() => {});
}, 30000);
return () => {
mounted = false;
clearInterval(timer);
};
}, [tokens?.accessToken, tokens?.refreshToken]);
const setUiMode = (nextMode) => {
const mode = nextMode === "easy" ? "easy" : "dba";
setUiModeState(mode);
@@ -193,8 +237,10 @@ export function AuthProvider({ children }) {
alertStatus,
alertToasts,
dismissAlertToast,
serviceInfo,
serviceUpdateAvailable: !!serviceInfo?.update_available,
}),
[tokens, me, uiMode, alertStatus, alertToasts]
[tokens, me, uiMode, alertStatus, alertToasts, serviceInfo]
);
return <AuthCtx.Provider value={value}>{children}</AuthCtx.Provider>;
}

View File

@@ -114,6 +114,27 @@ a {
background: linear-gradient(180deg, #74e8ff, #25bdf3);
}
.nav-btn.update-available {
border-color: #c7962f;
background: linear-gradient(180deg, #3e2f14, #2f240f);
color: #ffecc4;
box-shadow: inset 0 0 0 1px #f6c75a38, 0 8px 20px #2d1d0680;
}
.nav-btn.update-available .nav-icon {
border-color: #d3a240;
background: linear-gradient(180deg, #5a441a, #433312);
}
.nav-btn.update-available:hover {
border-color: #ffd46e;
background: linear-gradient(180deg, #523d18, #3b2d12);
}
.nav-btn.update-available::before {
background: linear-gradient(180deg, #ffe4a3, #e0ac3e);
}
.nav-btn.admin-nav {
border-color: #5b4da1;
background: linear-gradient(180deg, #1c2a58, #18224a);
@@ -174,6 +195,23 @@ a {
color: #d7e4fa;
}
.profile-name {
font-size: 15px;
font-weight: 700;
line-height: 1.25;
}
.profile-email {
margin-top: 2px;
font-size: 12px;
color: #a6bcda;
word-break: break-all;
}
.profile-role {
margin-top: 4px;
}
.mode-switch-block {
margin-bottom: 12px;
padding: 10px;
@@ -1094,6 +1132,39 @@ button {
border-color: #38bdf8;
}
.profile-btn {
width: 100%;
display: inline-flex;
align-items: center;
justify-content: center;
margin-top: 8px;
border: 1px solid #3a63a1;
border-radius: 10px;
background: linear-gradient(180deg, #15315d, #11274c);
color: #e7f2ff;
min-height: 40px;
font-weight: 650;
}
.profile-btn:hover {
border-color: #58b0e8;
background: linear-gradient(180deg, #1a427a, #15335f);
}
.profile-btn.active {
border-color: #66c7f4;
box-shadow: inset 0 0 0 1px #66c7f455;
}
.user-settings-page h2 {
margin-top: 4px;
margin-bottom: 4px;
}
.user-settings-card {
max-width: 760px;
}
table {
width: 100%;
border-collapse: collapse;
@@ -1201,6 +1272,31 @@ td {
font-weight: 600;
}
.user-col-name-value {
font-weight: 600;
}
.admin-inline-grid {
display: grid;
gap: 8px;
}
.admin-inline-grid.two {
grid-template-columns: repeat(2, minmax(0, 1fr));
}
.admin-inline-password {
min-width: 190px;
}
.admin-user-actions {
display: flex;
gap: 8px;
align-items: center;
justify-content: flex-start;
flex-wrap: wrap;
}
.role-pill {
display: inline-flex;
align-items: center;
@@ -1279,6 +1375,51 @@ td {
color: #9eb8d6;
}
.service-page .service-msg {
margin-bottom: 10px;
}
.service-hero {
margin-bottom: 12px;
display: flex;
align-items: center;
justify-content: space-between;
gap: 12px;
}
.service-hero.ok {
border-color: #2f8f63;
background: linear-gradient(90deg, #123827, #102e42);
}
.service-hero.update {
border-color: #dfab3e;
background: linear-gradient(90deg, #4a3511, #2f2452);
box-shadow: 0 12px 28px #2b1f066b;
}
.service-hero-title {
display: inline-block;
font-size: 18px;
margin-bottom: 3px;
}
.service-hero-sub {
margin: 0;
}
.service-card {
box-shadow: 0 10px 24px #0416343d;
}
.service-status-ok {
color: #6ef0ad;
}
.service-status-update {
color: #ffd77e;
}
.alerts-subtitle {
margin-top: 2px;
color: #a6c0df;
@@ -1881,6 +2022,29 @@ select:-webkit-autofill {
font-size: 12px;
}
.target-offline-card {
border-color: #a85757;
background: linear-gradient(130deg, #2c1724 0%, #1f1f38 100%);
}
.target-offline-card h3 {
margin: 0 0 8px;
color: #fecaca;
}
.target-offline-card p {
margin: 0 0 10px;
color: #fde2e2;
}
.target-offline-meta {
display: flex;
flex-wrap: wrap;
gap: 16px;
font-size: 12px;
color: #d4d4f5;
}
.chart-tooltip {
background: #0f1934ee;
border: 1px solid #2f4a8b;

View File

@@ -0,0 +1,41 @@
#!/usr/bin/env bash
set -euo pipefail
# Usage:
# bash bootstrap-compose.sh
# BASE_URL="https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main" bash bootstrap-compose.sh
BASE_URL="${BASE_URL:-https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main}"
echo "[bootstrap] Using base URL: ${BASE_URL}"
fetch_file() {
local path="$1"
local out="$2"
if command -v wget >/dev/null 2>&1; then
wget -q -O "${out}" "${BASE_URL}/${path}"
elif command -v curl >/dev/null 2>&1; then
curl -fsSL "${BASE_URL}/${path}" -o "${out}"
else
echo "[bootstrap] ERROR: wget or curl is required"
exit 1
fi
}
fetch_file "docker-compose.yml" "docker-compose.yml"
fetch_file ".env.example" ".env.example"
fetch_file "Makefile" "Makefile"
if [[ ! -f ".env" ]]; then
cp .env.example .env
echo "[bootstrap] Created .env from .env.example"
else
echo "[bootstrap] .env already exists, keeping it"
fi
echo
echo "[bootstrap] Next steps:"
echo " 1) Edit .env (set JWT_SECRET_KEY and ENCRYPTION_KEY at minimum)"
echo " 2) Run: make up"
echo