39 Commits
0.1.2 ... main

Author SHA1 Message Date
a220e5de99 Add Docker Hub authentication for Scout scans
Some checks failed
Migration Safety / Alembic upgrade/downgrade safety (push) Successful in 22s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Failing after 1m53s
This update ensures Docker Scout scans use Docker Hub authentication. If the required credentials are absent, the scans are skipped with a corresponding message. This improves security and prevents unnecessary scan failures.
2026-02-14 18:31:10 +01:00
a5ffafaf9e Update CVE scanning workflow to use JSON format and new tools
All checks were successful
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Successful in 2m9s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Replaced Trivy output format from table to JSON for better processing. Added a summary step to parse and count severities using a Python script. Integrated Docker Scout scans for both backend and frontend, and updated uploaded artifacts to include the new JSON and Scout scan outputs.
2026-02-14 18:24:08 +01:00
d17752b611 Add CVE scan workflow for development branch
Some checks failed
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Failing after 2m20s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
This commit introduces a GitHub Actions workflow to scan for CVEs in backend and frontend container images. It uses Trivy for scanning and uploads the reports as artifacts, providing better visibility into vulnerabilities in development builds.
2026-02-14 18:16:54 +01:00
fe05c40426 Merge branch 'main' of https://git.nesterovic.cc/nessi/NexaPG into development
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 10s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
2026-02-14 17:47:34 +01:00
5a0478f47d harden(frontend): switch to nginx:alpine-slim with non-root runtime and nginx dir permission fixes 2026-02-14 17:47:26 +01:00
1cea82f5d9 Merge pull request 'Update frontend to use unprivileged Nginx on port 8080' (#34) from development into main
All checks were successful
Migration Safety / Alembic upgrade/downgrade safety (push) Successful in 21s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m33s
Reviewed-on: #34
2026-02-14 16:18:34 +00:00
418034f639 Update NEXAPG_VERSION to 0.2.2
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Migration Safety / Alembic upgrade/downgrade safety (pull_request) Successful in 23s
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 8s
Bumped the version from 0.2.1 to 0.2.2 in the configuration file to mark the new release.
2026-02-14 17:17:57 +01:00
489dde812f Update frontend to use unprivileged Nginx on port 8080
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Switch from `nginx:1.29-alpine-slim` to `nginxinc/nginx-unprivileged:stable-alpine` for improved security by running as a non-root user. Changed the exposed port from 80 to 8080 in the configurations to reflect the unprivileged setup. Adjusted the `docker-compose.yml` and `nginx.conf` accordingly.
2026-02-14 17:13:18 +01:00
c2e4e614e0 Merge pull request 'CI cleanup: remove temporary Alpine smoke job, keep PG matrix on development, and keep Alpine backend default' (#33) from development into main
All checks were successful
Migration Safety / Alembic upgrade/downgrade safety (push) Successful in 28s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m51s
Reviewed-on: #33
2026-02-14 16:00:57 +00:00
344071193c Update NEXAPG_VERSION to 0.2.1
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 9s
Migration Safety / Alembic upgrade/downgrade safety (pull_request) Successful in 20s
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 13s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 12s
Bumped the version from 0.2.0 to 0.2.1 to mark the recent CI and container changes.
2026-02-14 16:58:31 +01:00
03118e59d7 Remove backend Alpine smoke (PG16) job from CI workflow
Some checks failed
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Has been cancelled
PostgreSQL Compatibility Matrix / PG18 smoke (push) Has been cancelled
PostgreSQL Compatibility Matrix / PG16 smoke (push) Has been cancelled
The backend Alpine smoke test targeting PostgreSQL 16 was removed from the CI configuration. With Alpine now the default backend base image, the regular PostgreSQL compatibility matrix already exercises it, so the dedicated job was redundant.
2026-02-14 16:58:10 +01:00
15fea78505 Update Python base image to Alpine version for backend
Some checks failed
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / Backend Alpine smoke (PG16) (push) Failing after 6s
This change switches the base image from "slim" to "alpine" to reduce the overall image size and improve security. The Alpine variant ships fewer packages, which also shrinks the attack surface.
2026-02-14 16:52:10 +01:00
89d3a39679 Add new features and enhancements to CI workflows and backend.
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / Backend Alpine smoke (PG16) (push) Successful in 44s
Enhanced CI workflows by adding an Alpine-based smoke test for the backend with PostgreSQL 16. Updated the Docker build process to support dynamic base images and added provenance, SBOM, and labels to Docker builds. Extended branch compatibility checks and refined backend configurations for broader usage scenarios.
2026-02-14 16:48:10 +01:00
f614eb1cf8 Merge pull request 'NX-10x: Reliability, error handling, runtime UX hardening, and migration safety gate (NX-101, NX-102, NX-103, NX-104)' (#32) from development into main
All checks were successful
Migration Safety / Alembic upgrade/downgrade safety (push) Successful in 19s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m14s
Reviewed-on: #32
2026-02-14 15:28:44 +00:00
6de3100615 [NX-104 Issue] Filter out restrict/unrestrict lines in schema comparison.
All checks were successful
Migration Safety / Alembic upgrade/downgrade safety (pull_request) Successful in 22s
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 7s
Updated the pg_dump commands in the migration-safety workflow to use `sed` for removing restrict/unrestrict lines. This ensures consistent schema comparison by ignoring irrelevant metadata.
2026-02-14 16:23:05 +01:00
cbe1cf26fa [NX-104 Issue] Add migration safety CI workflow
Some checks failed
Migration Safety / Alembic upgrade/downgrade safety (pull_request) Failing after 30s
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 9s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 7s
Introduces a GitHub Actions workflow to ensure Alembic migrations are safe and reversible. The workflow validates schema consistency by testing upgrade and downgrade operations and comparing schemas before and after the roundtrip.
2026-02-14 16:07:36 +01:00
5c566cd90d [NX-103 Issue] Add offline state handling for unreachable targets
Introduced a mechanism to detect and handle when a target is unreachable, including a detailed offline state message with host and port information. Updated the UI to display a card notifying users of the target's offline status and styled the card accordingly in CSS.
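A minimal sketch of how such a reachability check could look on the backend, assuming asyncpg is used to reach targets; `check_target_state` and the returned fields are illustrative names, not the project's actual API:

```python
import asyncio

import asyncpg


async def check_target_state(host: str, port: int, user: str, password: str, database: str) -> dict:
    """Classify a target as offline when the connection attempt fails."""
    try:
        conn = await asyncpg.connect(
            host=host, port=port, user=user, password=password,
            database=database, timeout=5,
        )
        await conn.close()
        return {"online": True}
    except (OSError, asyncio.TimeoutError, asyncpg.PostgresError) as exc:
        # Include host and port so the UI can render a detailed offline card.
        return {
            "online": False,
            "reason": f"Target unreachable at {host}:{port} ({exc.__class__.__name__})",
        }
```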
2026-02-14 15:58:22 +01:00
1ad237d750 Optimize collector loop to account for actual execution time.
Previously, the loop slept for the full poll interval regardless of how long `collect_once` took, so the effective interval drifted. The sleep duration is now reduced by the measured execution time, keeping the poll interval consistent as intended.
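A minimal sketch of the adjusted loop for an asyncio collector; `collect_once` is named in the commit message, while `POLL_INTERVAL` and the surrounding structure are illustrative:

```python
import asyncio
import time

POLL_INTERVAL = 15.0  # seconds; illustrative value


async def collector_loop(collect_once) -> None:
    while True:
        started = time.monotonic()
        await collect_once()
        elapsed = time.monotonic() - started
        # Subtract the collection time so the effective cadence stays at
        # POLL_INTERVAL; never sleep a negative amount after an overrun.
        await asyncio.sleep(max(0.0, POLL_INTERVAL - elapsed))
```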
2026-02-14 15:50:31 +01:00
d9dfde1c87 [NX-102 Issue] Add exponential backoff with jitter for retry logic
Introduced an exponential backoff mechanism with a configurable base, max delay, and jitter factor to handle retries for target failures. This improves resilience by reducing the load during repeated failures and avoids synchronized retry storms. Additionally, stale target cleanup logic has been implemented to prevent unnecessary state retention.
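A minimal sketch of such a delay calculation; the parameter names and defaults below are illustrative, not the project's actual configuration:

```python
import random


def backoff_delay(failures: int, base: float = 2.0, max_delay: float = 300.0, jitter: float = 0.2) -> float:
    """Exponential backoff capped at max_delay, with +/- jitter applied."""
    delay = min(base * (2 ** failures), max_delay)
    # Random jitter spreads retries out so many failing targets do not
    # hammer the collector (or the target) in lockstep.
    return delay * (1.0 + random.uniform(-jitter, jitter))
```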
2026-02-14 11:44:49 +01:00
117710cc0a [NX-101 Issue] Refactor error handling to use consistent API error format
Replaced all inline error messages with the standardized `api_error` helper for consistent error response formatting. This improves clarity and maintainability and ensures a uniform error structure across the application. Collector failure logging now includes the error class, and target-unreachable cases are logged at warning level.
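The diffs further down call the helper as `api_error(code, message, details)` and pass the result as the `HTTPException` detail. A sketch consistent with those call sites and the README's documented error format; the real helper in `app/core/errors.py` may differ:

```python
def api_error(code: str, message: str, details: dict | list | None = None) -> dict:
    """Build the standardized error payload used as HTTPException detail.

    The request_id field documented in the README is added separately by
    middleware, not by this helper (an assumption based on the diffs).
    """
    return {
        "code": code,
        "message": message,
        "details": details if details is not None else [],
    }
```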
2026-02-14 11:30:56 +01:00
9aecbea68b Add consistent API error handling and documentation
Introduced standardized error response formats for API errors, including middleware for consistent request IDs and exception handlers. Updated the frontend to parse and process these error responses, and documented the error format in the README for reference.
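A minimal sketch of a request-ID middleware of this kind, assuming FastAPI/Starlette as used by the backend; the actual implementation may differ:

```python
import uuid

from fastapi import FastAPI, Request

app = FastAPI()


@app.middleware("http")
async def request_id_middleware(request: Request, call_next):
    # Reuse an incoming correlation ID if present, otherwise mint one.
    request_id = request.headers.get("X-Request-ID") or str(uuid.uuid4())
    request.state.request_id = request_id
    response = await call_next(request)
    # Echo the ID back so clients can correlate errors (see the README's
    # API Error Format section in the diff below).
    response.headers["X-Request-ID"] = request_id
    return response
```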
2026-02-13 17:30:05 +01:00
cd91b20278 Merge pull request 'Replace python-jose with PyJWT and update its usage' (#6) from development into main
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m27s
Reviewed-on: #6
2026-02-13 12:23:40 +00:00
fd9853957a Merge branch 'main' of https://git.nesterovic.cc/nessi/NexaPG into development
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 9s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 7s
2026-02-13 13:20:49 +01:00
9c68f11d74 Replace python-jose with PyJWT and update its usage.
Switched the dependency from `python-jose` to `PyJWT` to handle JWT encoding and decoding. Updated related code to use `PyJWT`'s `InvalidTokenError` instead of `JWTError`. Also bumped the application version from `0.1.7` to `0.1.8`.
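A minimal PyJWT round trip illustrating the switch, with `jwt.InvalidTokenError` replacing python-jose's `JWTError`; the secret and claims here are illustrative:

```python
import jwt  # PyJWT

SECRET = "change-me"  # illustrative; the app reads this from settings
token = jwt.encode({"sub": "42", "type": "refresh"}, SECRET, algorithm="HS256")

try:
    payload = jwt.decode(token, SECRET, algorithms=["HS256"])
except jwt.InvalidTokenError:
    payload = None  # treat as an invalid/expired refresh token
```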
2026-02-13 13:20:46 +01:00
6848a66d88 Merge pull request 'Update backend requirements - security hardening' (#5) from development into main
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m32s
Reviewed-on: #5
2026-02-13 12:07:48 +00:00
a9a49eba4e Merge branch 'main' of https://git.nesterovic.cc/nessi/NexaPG into development
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 11s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 8s
2026-02-13 13:01:26 +01:00
9ccde7ca37 Update backend requirements - security hardening 2026-02-13 13:01:22 +01:00
88c3345647 Merge pull request 'Use lighter base images for frontend containers' (#4) from development into main
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 9s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m24s
Reviewed-on: #4
2026-02-13 11:43:59 +00:00
d9f3de9468 Use lighter base images for frontend containers
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 9s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 8s
Switched the Node.js and Nginx images from 'bookworm' to 'alpine' variants to reduce image size. Added `apk upgrade --no-cache` in the Nginx container to pull in patched Alpine packages. This trims resource usage and keeps the images current.
2026-02-13 11:26:52 +01:00
e62aaaf5a0 Merge pull request 'Update base images in Dockerfile to use bookworm variants' (#3) from development into main
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 2m7s
Reviewed-on: #3
2026-02-13 10:20:20 +00:00
ef84273868 Update base images in Dockerfile to use bookworm variants
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 8s
Replaced the Alpine base images with `bookworm-slim` for Node.js and `bookworm` for Nginx. This keeps the images on current Debian updates and improves consistency across them. Adjusted the Nginx health check command accordingly.
2026-02-13 11:15:17 +01:00
6c59b21088 Merge pull request 'Add first and last name fields for users' (#2) from development into main
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m13s
Reviewed-on: #2
2026-02-13 10:09:02 +00:00
cd1795b9ff Add first and last name fields for users
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 12s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 11s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 9s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 10s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 11s
This commit introduces optional `first_name` and `last_name` fields to the user model, including database migrations, backend, and frontend support. It enhances user profiles, updates user creation and editing flows, and refines the UI to display full names where available.
2026-02-13 10:57:10 +01:00
e0242bc823 Refactor deployment process to use prebuilt Docker images
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Replaced local builds with prebuilt backend and frontend Docker images for simplified deployment. Updated documentation and Makefile to reflect the changes and added a bootstrap script for quick setup of deployment files. Removed deprecated `VITE_API_URL` variable and references to streamline the setup.
2026-02-13 10:43:34 +01:00
75f8106ca5 Merge pull request 'Merge Fixes and Technical changes from development into main branch' (#1) from development into main
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 4m30s
Reviewed-on: #1
2026-02-13 09:13:04 +00:00
4e4f8ad5d4 Update NEXAPG version to 0.1.3
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 8s
This increments the application version from 0.1.2 to 0.1.3 ahead of merging the accumulated fixes and technical changes into main.
2026-02-13 10:11:00 +01:00
5c5d51350f Fix stale refresh usage in QueryInsightsPage effect hooks
Stored the `refresh` callback in a `useRef` so async operations in the effect hooks always invoke the latest value. Added cleanup logic for component unmounts during API calls, preventing state updates on unmounted components.
2026-02-13 10:06:56 +01:00
ba1559e790 Improve agentless mode messaging for host-level metrics
Updated the messaging and UI to clarify unavailability of host-level metrics, such as CPU, RAM, and disk space, in agentless mode. Added clear formatting and new functions to handle missing metrics gracefully in the frontend.
2026-02-13 10:01:24 +01:00
ab9d03be8a Add GitHub Actions workflow for Docker image release
This workflow automates building and publishing Docker images upon a release or manual trigger. It includes steps for version resolution, Docker Hub login, and caching to optimize builds for both backend and frontend images.
2026-02-13 09:55:08 +01:00
36 changed files with 1135 additions and 132 deletions


@@ -58,7 +58,3 @@ INIT_ADMIN_PASSWORD=ChangeMe123!
 # ------------------------------
 # Host port mapped to frontend container port 80.
 FRONTEND_PORT=5173
-# Base API URL used at frontend build time.
-# For reverse proxy + SSL, keep this relative to avoid mixed-content issues.
-# Example direct mode: VITE_API_URL=http://localhost:8000/api/v1
-VITE_API_URL=/api/v1


@@ -0,0 +1,137 @@
name: Container CVE Scan (development)

on:
  push:
    branches: ["development"]
  workflow_dispatch:

jobs:
  cve-scan:
    name: Scan backend/frontend images for CVEs
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Docker Hub login (for Scout)
        if: ${{ secrets.DOCKERHUB_USERNAME != '' && secrets.DOCKERHUB_TOKEN != '' }}
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build backend image (local)
        uses: docker/build-push-action@v6
        with:
          context: ./backend
          file: ./backend/Dockerfile
          push: false
          load: true
          tags: nexapg-backend:dev-scan
          provenance: false
          sbom: false

      - name: Build frontend image (local)
        uses: docker/build-push-action@v6
        with:
          context: ./frontend
          file: ./frontend/Dockerfile
          push: false
          load: true
          tags: nexapg-frontend:dev-scan
          build-args: |
            VITE_API_URL=/api/v1
          provenance: false
          sbom: false

      - name: Trivy scan (backend)
        uses: aquasecurity/trivy-action@0.24.0
        with:
          image-ref: nexapg-backend:dev-scan
          format: json
          output: trivy-backend.json
          severity: UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL
          ignore-unfixed: false
          exit-code: 0

      - name: Trivy scan (frontend)
        uses: aquasecurity/trivy-action@0.24.0
        with:
          image-ref: nexapg-frontend:dev-scan
          format: json
          output: trivy-frontend.json
          severity: UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL
          ignore-unfixed: false
          exit-code: 0

      - name: Summarize Trivy severities
        run: |
          python - <<'PY'
          import json
          from collections import Counter

          def summarize(path):
              c = Counter()
              with open(path, "r", encoding="utf-8") as f:
                  data = json.load(f)
              for result in data.get("Results", []):
                  for v in result.get("Vulnerabilities", []) or []:
                      c[v.get("Severity", "UNKNOWN")] += 1
              for sev in ["CRITICAL", "HIGH", "MEDIUM", "LOW", "UNKNOWN"]:
                  c.setdefault(sev, 0)
              return c

          for label, path in [("backend", "trivy-backend.json"), ("frontend", "trivy-frontend.json")]:
              s = summarize(path)
              print(f"===== Trivy {label} =====")
              print(f"CRITICAL={s['CRITICAL']} HIGH={s['HIGH']} MEDIUM={s['MEDIUM']} LOW={s['LOW']} UNKNOWN={s['UNKNOWN']}")
              print()
          PY

      - name: Docker Scout scan (backend)
        run: |
          if [ -z "${{ secrets.DOCKERHUB_USERNAME }}" ] || [ -z "${{ secrets.DOCKERHUB_TOKEN }}" ]; then
            echo "Docker Hub Scout scan skipped: DOCKERHUB_USERNAME/DOCKERHUB_TOKEN not set." > scout-backend.txt
            exit 0
          fi
          docker run --rm \
            -v /var/run/docker.sock:/var/run/docker.sock \
            -e DOCKER_SCOUT_HUB_USER="${{ secrets.DOCKERHUB_USERNAME }}" \
            -e DOCKER_SCOUT_HUB_PAT="${{ secrets.DOCKERHUB_TOKEN }}" \
            docker/scout-cli:latest cves nexapg-backend:dev-scan \
            --only-severity critical,high,medium,low > scout-backend.txt

      - name: Docker Scout scan (frontend)
        run: |
          if [ -z "${{ secrets.DOCKERHUB_USERNAME }}" ] || [ -z "${{ secrets.DOCKERHUB_TOKEN }}" ]; then
            echo "Docker Hub Scout scan skipped: DOCKERHUB_USERNAME/DOCKERHUB_TOKEN not set." > scout-frontend.txt
            exit 0
          fi
          docker run --rm \
            -v /var/run/docker.sock:/var/run/docker.sock \
            -e DOCKER_SCOUT_HUB_USER="${{ secrets.DOCKERHUB_USERNAME }}" \
            -e DOCKER_SCOUT_HUB_PAT="${{ secrets.DOCKERHUB_TOKEN }}" \
            docker/scout-cli:latest cves nexapg-frontend:dev-scan \
            --only-severity critical,high,medium,low > scout-frontend.txt

      - name: Print scan summary
        run: |
          echo "===== Docker Scout backend ====="
          test -f scout-backend.txt && cat scout-backend.txt || echo "scout-backend.txt not available"
          echo
          echo "===== Docker Scout frontend ====="
          test -f scout-frontend.txt && cat scout-frontend.txt || echo "scout-frontend.txt not available"

      - name: Upload scan reports
        uses: actions/upload-artifact@v3
        with:
          name: container-cve-scan-reports
          path: |
            trivy-backend.json
            trivy-frontend.json
            scout-backend.txt
            scout-frontend.txt

.github/workflows/docker-release.yml (new file, 107 lines)

@@ -0,0 +1,107 @@
name: Docker Publish (Release)

on:
  release:
    types: [published]
  workflow_dispatch:
    inputs:
      version:
        description: "Version tag to publish (e.g. 0.1.2 or v0.1.2)"
        required: false
        type: string

jobs:
  publish:
    name: Build and Push Docker Images
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
      attestations: write
    env:
      # Optional repo variable. If unset, DOCKERHUB_USERNAME is used.
      IMAGE_NAMESPACE: ${{ vars.DOCKERHUB_NAMESPACE }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Resolve version/tag
        id: ver
        shell: bash
        run: |
          RAW_TAG="${{ github.event.release.tag_name }}"
          if [ -z "$RAW_TAG" ]; then
            RAW_TAG="${{ inputs.version }}"
          fi
          if [ -z "$RAW_TAG" ]; then
            RAW_TAG="${GITHUB_REF_NAME}"
          fi
          CLEAN_TAG="${RAW_TAG#v}"
          echo "raw=$RAW_TAG" >> "$GITHUB_OUTPUT"
          echo "clean=$CLEAN_TAG" >> "$GITHUB_OUTPUT"

      - name: Set image namespace
        id: ns
        shell: bash
        run: |
          NS="${IMAGE_NAMESPACE}"
          if [ -z "$NS" ]; then
            NS="${{ secrets.DOCKERHUB_USERNAME }}"
          fi
          if [ -z "$NS" ]; then
            echo "Missing Docker Hub namespace. Set repo var DOCKERHUB_NAMESPACE or secret DOCKERHUB_USERNAME."
            exit 1
          fi
          echo "value=$NS" >> "$GITHUB_OUTPUT"

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push backend image
        uses: docker/build-push-action@v6
        with:
          context: ./backend
          file: ./backend/Dockerfile
          push: true
          provenance: mode=max
          sbom: true
          labels: |
            org.opencontainers.image.title=NexaPG Backend
            org.opencontainers.image.vendor=Nesterovic IT-Services e.U.
            org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
            org.opencontainers.image.version=${{ steps.ver.outputs.clean }}
          tags: |
            ${{ steps.ns.outputs.value }}/nexapg-backend:${{ steps.ver.outputs.clean }}
            ${{ steps.ns.outputs.value }}/nexapg-backend:latest
          cache-from: type=registry,ref=${{ steps.ns.outputs.value }}/nexapg-backend:buildcache
          cache-to: type=registry,ref=${{ steps.ns.outputs.value }}/nexapg-backend:buildcache,mode=max

      - name: Build and push frontend image
        uses: docker/build-push-action@v6
        with:
          context: ./frontend
          file: ./frontend/Dockerfile
          push: true
          provenance: mode=max
          sbom: true
          build-args: |
            VITE_API_URL=/api/v1
          labels: |
            org.opencontainers.image.title=NexaPG Frontend
            org.opencontainers.image.vendor=Nesterovic IT-Services e.U.
            org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
            org.opencontainers.image.version=${{ steps.ver.outputs.clean }}
          tags: |
            ${{ steps.ns.outputs.value }}/nexapg-frontend:${{ steps.ver.outputs.clean }}
            ${{ steps.ns.outputs.value }}/nexapg-frontend:latest
          cache-from: type=registry,ref=${{ steps.ns.outputs.value }}/nexapg-frontend:buildcache
          cache-to: type=registry,ref=${{ steps.ns.outputs.value }}/nexapg-frontend:buildcache,mode=max

.github/workflows/migration-safety.yml (new file, 86 lines)

@@ -0,0 +1,86 @@
name: Migration Safety

on:
  push:
    branches: ["main", "master"]
  pull_request:

jobs:
  migration-safety:
    name: Alembic upgrade/downgrade safety
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_DB: nexapg
          POSTGRES_USER: nexapg
          POSTGRES_PASSWORD: nexapg
        ports:
          - 5432:5432
        options: >-
          --health-cmd "pg_isready -U nexapg -d nexapg"
          --health-interval 5s
          --health-timeout 5s
          --health-retries 30
    env:
      DB_HOST: postgres
      DB_PORT: 5432
      DB_NAME: nexapg
      DB_USER: nexapg
      DB_PASSWORD: nexapg
      JWT_SECRET_KEY: ci-jwt-secret-key
      ENCRYPTION_KEY: MDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDA=
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install backend dependencies
        run: pip install -r backend/requirements.txt

      - name: Install PostgreSQL client tools
        run: sudo apt-get update && sudo apt-get install -y postgresql-client

      - name: Wait for PostgreSQL
        env:
          PGPASSWORD: nexapg
        run: |
          for i in $(seq 1 60); do
            if pg_isready -h postgres -p 5432 -U nexapg -d nexapg; then
              exit 0
            fi
            sleep 2
          done
          echo "PostgreSQL did not become ready in time."
          exit 1

      - name: Alembic upgrade -> downgrade -1 -> upgrade
        working-directory: backend
        run: |
          alembic upgrade head
          alembic downgrade -1
          alembic upgrade head

      - name: Validate schema consistency after roundtrip
        env:
          PGPASSWORD: nexapg
        run: |
          cd backend
          alembic upgrade head
          pg_dump -h postgres -p 5432 -U nexapg -d nexapg --schema-only --no-owner --no-privileges \
            | sed '/^\\restrict /d; /^\\unrestrict /d' > /tmp/schema_head_before.sql
          alembic downgrade -1
          alembic upgrade head
          pg_dump -h postgres -p 5432 -U nexapg -d nexapg --schema-only --no-owner --no-privileges \
            | sed '/^\\restrict /d; /^\\unrestrict /d' > /tmp/schema_head_after.sql
          diff -u /tmp/schema_head_before.sql /tmp/schema_head_after.sql


@@ -2,7 +2,7 @@ name: PostgreSQL Compatibility Matrix
 
 on:
   push:
-    branches: ["main", "master"]
+    branches: ["main", "master", "development"]
   pull_request:
 
 jobs:


@@ -1,7 +1,8 @@
 .PHONY: up down logs migrate
 
 up:
-	docker compose up -d --build
+	docker compose pull
+	docker compose up -d
 
 down:
 	docker compose down


@@ -9,7 +9,7 @@ It combines FastAPI, React, and PostgreSQL in a Docker Compose stack with RBAC,
 
 ## Table of Contents
 
-- [Quick Start](#quick-start)
+- [Quick Deploy (Prebuilt Images)](#quick-deploy-prebuilt-images)
 - [Prerequisites](#prerequisites)
 - [Make Commands](#make-commands)
 - [Configuration Reference (`.env`)](#configuration-reference-env)
@@ -17,6 +17,7 @@ It combines FastAPI, React, and PostgreSQL in a Docker Compose stack with RBAC,
 - [Service Information](#service-information)
 - [Target Owner Notifications](#target-owner-notifications)
 - [API Overview](#api-overview)
+- [API Error Format](#api-error-format)
 - [`pg_stat_statements` Requirement](#pg_stat_statements-requirement)
 - [Reverse Proxy / SSL Guidance](#reverse-proxy--ssl-guidance)
 - [PostgreSQL Compatibility Smoke Test](#postgresql-compatibility-smoke-test)
@@ -93,27 +94,50 @@ Optional:
 - `psql` for manual DB checks
 
-## Quick Start
+## Quick Deploy (Prebuilt Images)
 
-1. Copy environment template:
+If you only want to run NexaPG from published Docker Hub images, use the bootstrap script:
 
 ```bash
-cp .env.example .env
+mkdir -p /opt/NexaPG
+cd /opt/NexaPG
+wget -O bootstrap-compose.sh https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main/ops/scripts/bootstrap-compose.sh
+chmod +x bootstrap-compose.sh
+./bootstrap-compose.sh
 ```
 
-2. Generate a Fernet key and set `ENCRYPTION_KEY` in `.env`:
+This downloads:
+
+- `docker-compose.yml`
+- `.env.example`
+- `Makefile`
+
+Then:
 
 ```bash
+# generate JWT secret
+python -c "import secrets; print(secrets.token_urlsafe(64))"
+
+# generate Fernet encryption key
 python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
-```
-
-3. Start the stack:
-
-```bash
+# put both values into .env (JWT_SECRET_KEY / ENCRYPTION_KEY)
+# note: .env is auto-created by bootstrap if it does not exist
+
 make up
 ```
 
-4. Open the application:
+Manual download alternative:
+
+```bash
+mkdir -p /opt/NexaPG
+cd /opt/NexaPG
+wget https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main/docker-compose.yml
+wget https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main/.env.example
+wget https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main/Makefile
+cp .env.example .env
+```
+
+`make up` pulls `nesterovicit/nexapg-backend:latest` and `nesterovicit/nexapg-frontend:latest`, then starts the stack.
+
+Open the application:
 
 - Frontend: `http://<SERVER_IP>:<FRONTEND_PORT>`
 - API base: `http://<SERVER_IP>:<BACKEND_PORT>/api/v1`
@@ -127,7 +151,7 @@ Initial admin bootstrap user (created from `.env` if missing):
 ## Make Commands
 
 ```bash
-make up      # build and start all services
+make up      # pull latest images and start all services
 make down    # stop all services
 make logs    # follow compose logs
 make migrate # optional/manual: run alembic upgrade head in backend container
@@ -183,12 +207,6 @@ Note: Migrations run automatically when the backend container starts (`entrypoin
 
 | Variable | Description |
 |---|---|
 | `FRONTEND_PORT` | Host port mapped to frontend container port `80` |
-| `VITE_API_URL` | Frontend API base URL (build-time) |
-
-Recommended values for `VITE_API_URL`:
-
-- Reverse proxy setup: `/api/v1`
-- Direct backend access: `http://<SERVER_IP>:<BACKEND_PORT>/api/v1`
 
 ## Core Functional Areas
@@ -302,6 +320,37 @@ Email alert routing is target-specific:
 - `GET /api/v1/service/info`
 - `POST /api/v1/service/info/check`
 
+## API Error Format
+
+All 4xx/5xx responses use a consistent JSON payload:
+
+```json
+{
+  "code": "validation_error",
+  "message": "Request validation failed",
+  "details": [],
+  "request_id": "c8f0f888-2365-4b86-a5de-b3f0e9df4a4b"
+}
+```
+
+Common fields:
+
+- `code`: stable machine-readable error code
+- `message`: human-readable summary
+- `details`: optional extra context (validation list, debug context, etc.)
+- `request_id`: request correlation ID (also returned in `X-Request-ID` header)
+
+Common error codes:
+
+- `bad_request` (`400`)
+- `unauthorized` (`401`)
+- `forbidden` (`403`)
+- `not_found` (`404`)
+- `conflict` (`409`)
+- `validation_error` (`422`)
+- `target_unreachable` (`503`)
+- `internal_error` (`500`)
+
 ## `pg_stat_statements` Requirement
 
 Query Insights requires `pg_stat_statements` on the monitored target:
@@ -318,7 +367,7 @@ For production, serve frontend and API under the same public origin via reverse
 
 - Frontend URL example: `https://monitor.example.com`
 - Proxy API path `/api/` to backend service
-- Use `VITE_API_URL=/api/v1`
+- Route `/api/v1` to the backend service
 
 This prevents mixed-content and CORS issues.
@@ -351,8 +400,7 @@ docker compose logs --tail=200 db
 
 ### CORS or mixed-content issues behind SSL proxy
 
-- Set `VITE_API_URL=/api/v1`
-- Ensure proxy forwards `/api/` to backend
+- Ensure proxy forwards `/api/` (or `/api/v1`) to backend
 - Set correct frontend origin(s) in `CORS_ORIGINS`
 
 ### `rejected SSL upgrade` for a target
### `rejected SSL upgrade` for a target ### `rejected SSL upgrade` for a target


@@ -1,4 +1,5 @@
-FROM python:3.12-slim AS base
+ARG PYTHON_BASE_IMAGE=python:3.13-alpine
+FROM ${PYTHON_BASE_IMAGE} AS base
 
 ENV PYTHONDONTWRITEBYTECODE=1
 ENV PYTHONUNBUFFERED=1
@@ -6,7 +7,17 @@ ENV PIP_NO_CACHE_DIR=1
 
 WORKDIR /app
 
-RUN addgroup --system app && adduser --system --ingroup app app
+RUN if command -v apt-get >/dev/null 2>&1; then \
+        apt-get update && apt-get upgrade -y && rm -rf /var/lib/apt/lists/*; \
+    elif command -v apk >/dev/null 2>&1; then \
+        apk upgrade --no-cache; \
+    fi
+
+RUN if addgroup --help 2>&1 | grep -q -- '--system'; then \
+        addgroup --system app && adduser --system --ingroup app app; \
+    else \
+        addgroup -S app && adduser -S -G app app; \
+    fi
 
 COPY requirements.txt /app/requirements.txt
 RUN pip install --upgrade pip && pip install -r /app/requirements.txt


@@ -0,0 +1,26 @@
"""add user first and last name fields
Revision ID: 0009_user_profile_fields
Revises: 0008_service_settings
Create Date: 2026-02-13
"""
from alembic import op
import sqlalchemy as sa
revision = "0009_user_profile_fields"
down_revision = "0008_service_settings"
branch_labels = None
depends_on = None
def upgrade() -> None:
op.add_column("users", sa.Column("first_name", sa.String(length=120), nullable=True))
op.add_column("users", sa.Column("last_name", sa.String(length=120), nullable=True))
def downgrade() -> None:
op.drop_column("users", "last_name")
op.drop_column("users", "first_name")


@@ -9,6 +9,7 @@ from sqlalchemy.ext.asyncio import AsyncSession
 
 from app.core.db import get_db
 from app.core.deps import require_roles
+from app.core.errors import api_error
 from app.models.models import EmailNotificationSettings, User
 from app.schemas.admin_settings import EmailSettingsOut, EmailSettingsTestRequest, EmailSettingsUpdate
 from app.services.audit import write_audit_log
@@ -96,9 +97,9 @@
 ) -> dict:
     settings = await _get_or_create_settings(db)
     if not settings.smtp_host:
-        raise HTTPException(status_code=400, detail="SMTP host is not configured")
+        raise HTTPException(status_code=400, detail=api_error("smtp_host_missing", "SMTP host is not configured"))
     if not settings.from_email:
-        raise HTTPException(status_code=400, detail="From email is not configured")
+        raise HTTPException(status_code=400, detail=api_error("smtp_from_email_missing", "From email is not configured"))
 
     password = decrypt_secret(settings.encrypted_smtp_password) if settings.encrypted_smtp_password else None
     message = EmailMessage()
@@ -126,7 +127,10 @@
             smtp.login(settings.smtp_username, password or "")
             smtp.send_message(message)
     except Exception as exc:
-        raise HTTPException(status_code=400, detail=f"SMTP test failed: {exc}")
+        raise HTTPException(
+            status_code=400,
+            detail=api_error("smtp_test_failed", "SMTP test failed", {"error": str(exc)}),
+        ) from exc
 
     await write_audit_log(db, "admin.email_settings.test", admin.id, {"recipient": str(payload.recipient)})
     return {"status": "sent", "recipient": str(payload.recipient)}


@@ -3,6 +3,7 @@ from sqlalchemy import select
 from sqlalchemy.ext.asyncio import AsyncSession
 from app.core.db import get_db
 from app.core.deps import require_roles
+from app.core.errors import api_error
 from app.core.security import hash_password
 from app.models.models import User
 from app.schemas.user import UserCreate, UserOut, UserUpdate
@@ -22,8 +23,14 @@ async def list_users(admin: User = Depends(require_roles("admin")), db: AsyncSes
 async def create_user(payload: UserCreate, admin: User = Depends(require_roles("admin")), db: AsyncSession = Depends(get_db)) -> UserOut:
     exists = await db.scalar(select(User).where(User.email == payload.email))
     if exists:
-        raise HTTPException(status_code=409, detail="Email already exists")
-    user = User(email=payload.email, password_hash=hash_password(payload.password), role=payload.role)
+        raise HTTPException(status_code=409, detail=api_error("email_exists", "Email already exists"))
+    user = User(
+        email=payload.email,
+        first_name=payload.first_name,
+        last_name=payload.last_name,
+        password_hash=hash_password(payload.password),
+        role=payload.role,
+    )
     db.add(user)
     await db.commit()
     await db.refresh(user)
@@ -40,10 +47,17 @@
 ) -> UserOut:
     user = await db.scalar(select(User).where(User.id == user_id))
     if not user:
-        raise HTTPException(status_code=404, detail="User not found")
+        raise HTTPException(status_code=404, detail=api_error("user_not_found", "User not found"))
     update_data = payload.model_dump(exclude_unset=True)
-    if "password" in update_data and update_data["password"]:
-        user.password_hash = hash_password(update_data.pop("password"))
+    next_email = update_data.get("email")
+    if next_email and next_email != user.email:
+        existing = await db.scalar(select(User).where(User.email == next_email))
+        if existing and existing.id != user.id:
+            raise HTTPException(status_code=409, detail=api_error("email_exists", "Email already exists"))
+    if "password" in update_data:
+        raw_password = update_data.pop("password")
+        if raw_password:
+            user.password_hash = hash_password(raw_password)
     for key, value in update_data.items():
         setattr(user, key, value)
     await db.commit()
@@ -55,10 +69,10 @@
 @router.delete("/{user_id}")
 async def delete_user(user_id: int, admin: User = Depends(require_roles("admin")), db: AsyncSession = Depends(get_db)) -> dict:
     if user_id == admin.id:
-        raise HTTPException(status_code=400, detail="Cannot delete yourself")
+        raise HTTPException(status_code=400, detail=api_error("cannot_delete_self", "Cannot delete yourself"))
     user = await db.scalar(select(User).where(User.id == user_id))
     if not user:
-        raise HTTPException(status_code=404, detail="User not found")
+        raise HTTPException(status_code=404, detail=api_error("user_not_found", "User not found"))
     await db.delete(user)
     await db.commit()
     await write_audit_log(db, "admin.user.delete", admin.id, {"deleted_user_id": user_id})


@@ -4,6 +4,7 @@ from sqlalchemy.ext.asyncio import AsyncSession
 
 from app.core.db import get_db
 from app.core.deps import get_current_user, require_roles
+from app.core.errors import api_error
 from app.models.models import AlertDefinition, Target, User
 from app.schemas.alert import (
     AlertDefinitionCreate,
@@ -33,7 +34,7 @@ async def _validate_target_exists(db: AsyncSession, target_id: int | None) -> No
         return
     target_exists = await db.scalar(select(Target.id).where(Target.id == target_id))
     if target_exists is None:
-        raise HTTPException(status_code=404, detail="Target not found")
+        raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
 
 
 @router.get("/status", response_model=AlertStatusResponse)
@@ -101,7 +102,7 @@
 ) -> AlertDefinitionOut:
     definition = await db.scalar(select(AlertDefinition).where(AlertDefinition.id == definition_id))
     if definition is None:
-        raise HTTPException(status_code=404, detail="Alert definition not found")
+        raise HTTPException(status_code=404, detail=api_error("alert_definition_not_found", "Alert definition not found"))
 
     updates = payload.model_dump(exclude_unset=True)
     if "target_id" in updates:
@@ -131,7 +132,7 @@
 ) -> dict:
     definition = await db.scalar(select(AlertDefinition).where(AlertDefinition.id == definition_id))
     if definition is None:
-        raise HTTPException(status_code=404, detail="Alert definition not found")
+        raise HTTPException(status_code=404, detail=api_error("alert_definition_not_found", "Alert definition not found"))
     await db.delete(definition)
     await db.commit()
     invalidate_alert_cache()
@@ -148,7 +149,7 @@
     _ = user
     target = await db.scalar(select(Target).where(Target.id == payload.target_id))
     if target is None:
-        raise HTTPException(status_code=404, detail="Target not found")
+        raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
     try:
         value = await run_scalar_sql_for_target(target, payload.sql_text)
         return AlertDefinitionTestResponse(ok=True, value=value)


@@ -1,10 +1,11 @@
 from fastapi import APIRouter, Depends, HTTPException, status
-from jose import JWTError, jwt
+import jwt
 from sqlalchemy import select
 from sqlalchemy.ext.asyncio import AsyncSession
 from app.core.config import get_settings
 from app.core.db import get_db
 from app.core.deps import get_current_user
+from app.core.errors import api_error
 from app.core.security import create_access_token, create_refresh_token, verify_password
 from app.models.models import User
 from app.schemas.auth import LoginRequest, RefreshRequest, TokenResponse
@@ -19,7 +20,10 @@ settings = get_settings()
 async def login(payload: LoginRequest, db: AsyncSession = Depends(get_db)) -> TokenResponse:
     user = await db.scalar(select(User).where(User.email == payload.email))
     if not user or not verify_password(payload.password, user.password_hash):
-        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid credentials")
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=api_error("invalid_credentials", "Invalid credentials"),
+        )
     await write_audit_log(db, action="auth.login", user_id=user.id, payload={"email": user.email})
     return TokenResponse(access_token=create_access_token(str(user.id)), refresh_token=create_refresh_token(str(user.id)))
 
@@ -29,15 +33,24 @@ async def login(payload: LoginRequest, db: AsyncSession = Depends(get_db)) -> To
 async def refresh(payload: RefreshRequest, db: AsyncSession = Depends(get_db)) -> TokenResponse:
     try:
         token_payload = jwt.decode(payload.refresh_token, settings.jwt_secret_key, algorithms=[settings.jwt_algorithm])
-    except JWTError as exc:
-        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid refresh token") from exc
+    except jwt.InvalidTokenError as exc:
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=api_error("invalid_refresh_token", "Invalid refresh token"),
+        ) from exc
     if token_payload.get("type") != "refresh":
-        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid refresh token type")
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=api_error("invalid_refresh_token_type", "Invalid refresh token type"),
+        )
     user_id = token_payload.get("sub")
     user = await db.scalar(select(User).where(User.id == int(user_id)))
     if not user:
-        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="User not found")
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=api_error("user_not_found", "User not found"),
+        )
 
     await write_audit_log(db, action="auth.refresh", user_id=user.id, payload={})
     return TokenResponse(access_token=create_access_token(str(user.id)), refresh_token=create_refresh_token(str(user.id)))


@@ -2,6 +2,7 @@ from fastapi import APIRouter, Depends, HTTPException, status
 from sqlalchemy.ext.asyncio import AsyncSession
 from app.core.db import get_db
 from app.core.deps import get_current_user
+from app.core.errors import api_error
 from app.core.security import hash_password, verify_password
 from app.models.models import User
 from app.schemas.user import UserOut, UserPasswordChange
@@ -22,10 +23,16 @@
     db: AsyncSession = Depends(get_db),
 ) -> dict:
     if not verify_password(payload.current_password, user.password_hash):
-        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Current password is incorrect")
+        raise HTTPException(
+            status_code=status.HTTP_400_BAD_REQUEST,
+            detail=api_error("invalid_current_password", "Current password is incorrect"),
+        )
     if verify_password(payload.new_password, user.password_hash):
-        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="New password must be different")
+        raise HTTPException(
+            status_code=status.HTTP_400_BAD_REQUEST,
+            detail=api_error("password_reuse_not_allowed", "New password must be different"),
+        )
 
     user.password_hash = hash_password(payload.new_password)
     await db.commit()


@@ -8,6 +8,7 @@ from sqlalchemy.ext.asyncio import AsyncSession
 from app.core.db import get_db
 from app.core.deps import get_current_user, require_roles
+from app.core.errors import api_error
 from app.models.models import Metric, QueryStat, Target, TargetOwner, User
 from app.schemas.metric import MetricOut, QueryStatOut
 from app.schemas.overview import DatabaseOverviewOut
@@ -85,7 +86,10 @@ async def _discover_databases(payload: TargetCreate) -> list[str]:
     )
     return [row["datname"] for row in rows if row["datname"]]
 except Exception as exc:
-    raise HTTPException(status_code=400, detail=f"Database discovery failed: {exc}")
+    raise HTTPException(
+        status_code=400,
+        detail=api_error("database_discovery_failed", "Database discovery failed", {"error": str(exc)}),
+    )
 finally:
     if conn:
         await conn.close()
@@ -131,7 +135,10 @@ async def test_target_connection(
     version = await conn.fetchval("SHOW server_version")
     return {"ok": True, "message": "Connection successful", "server_version": version}
 except Exception as exc:
-    raise HTTPException(status_code=400, detail=f"Connection failed: {exc}")
+    raise HTTPException(
+        status_code=400,
+        detail=api_error("connection_test_failed", "Connection test failed", {"error": str(exc)}),
+    )
 finally:
     if conn:
         await conn.close()
@@ -147,7 +154,10 @@ async def create_target(
 if owner_ids:
     owners_exist = (await db.scalars(select(User.id).where(User.id.in_(owner_ids)))).all()
     if len(set(owners_exist)) != len(owner_ids):
-        raise HTTPException(status_code=400, detail="One or more owner users were not found")
+        raise HTTPException(
+            status_code=400,
+            detail=api_error("owner_users_not_found", "One or more owner users were not found"),
+        )
 encrypted_password = encrypt_secret(payload.password)
 created_targets: list[Target] = []
@@ -155,7 +165,10 @@ async def create_target(
 if payload.discover_all_databases:
     databases = await _discover_databases(payload)
     if not databases:
-        raise HTTPException(status_code=400, detail="No databases discovered on target")
+        raise HTTPException(
+            status_code=400,
+            detail=api_error("no_databases_discovered", "No databases discovered on target"),
+        )
     group_id = str(uuid4())
     base_tags = payload.tags or {}
     for dbname in databases:
@@ -194,7 +207,10 @@ async def create_target(
     await _set_target_owners(db, target.id, owner_ids, user.id)
 if not created_targets:
-    raise HTTPException(status_code=400, detail="All discovered databases already exist as targets")
+    raise HTTPException(
+        status_code=400,
+        detail=api_error("all_discovered_databases_exist", "All discovered databases already exist as targets"),
+    )
 await db.commit()
 for item in created_targets:
     await db.refresh(item)
@@ -247,7 +263,7 @@ async def get_target(target_id: int, user: User = Depends(get_current_user), db:
 _ = user
 target = await db.scalar(select(Target).where(Target.id == target_id))
 if not target:
-    raise HTTPException(status_code=404, detail="Target not found")
+    raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
 owner_map = await _owners_by_target_ids(db, [target.id])
 return _target_out_with_owners(target, owner_map.get(target.id, []))
@@ -261,7 +277,7 @@ async def update_target(
 ) -> TargetOut:
     target = await db.scalar(select(Target).where(Target.id == target_id))
     if not target:
-        raise HTTPException(status_code=404, detail="Target not found")
+        raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
     updates = payload.model_dump(exclude_unset=True)
     owner_user_ids = updates.pop("owner_user_ids", None)
@@ -273,7 +289,10 @@ async def update_target(
 if owner_user_ids is not None:
     owners_exist = (await db.scalars(select(User.id).where(User.id.in_(owner_user_ids)))).all()
     if len(set(owners_exist)) != len(set(owner_user_ids)):
-        raise HTTPException(status_code=400, detail="One or more owner users were not found")
+        raise HTTPException(
+            status_code=400,
+            detail=api_error("owner_users_not_found", "One or more owner users were not found"),
+        )
     await _set_target_owners(db, target.id, owner_user_ids, user.id)
 await db.commit()
@@ -292,12 +311,15 @@ async def set_target_owners(
 ) -> list[TargetOwnerOut]:
     target = await db.scalar(select(Target).where(Target.id == target_id))
     if not target:
-        raise HTTPException(status_code=404, detail="Target not found")
+        raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
     owner_user_ids = sorted(set(payload.user_ids))
     if owner_user_ids:
         owners_exist = (await db.scalars(select(User.id).where(User.id.in_(owner_user_ids)))).all()
         if len(set(owners_exist)) != len(set(owner_user_ids)):
-            raise HTTPException(status_code=400, detail="One or more owner users were not found")
+            raise HTTPException(
+                status_code=400,
+                detail=api_error("owner_users_not_found", "One or more owner users were not found"),
+            )
     await _set_target_owners(db, target_id, owner_user_ids, user.id)
     await db.commit()
     await write_audit_log(db, "target.owners.update", user.id, {"target_id": target_id, "owner_user_ids": owner_user_ids})
@@ -321,7 +343,7 @@ async def get_target_owners(
 _ = user
 target = await db.scalar(select(Target).where(Target.id == target_id))
 if not target:
-    raise HTTPException(status_code=404, detail="Target not found")
+    raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
 rows = (
     await db.execute(
         select(User.id, User.email, User.role)
@@ -341,7 +363,7 @@ async def delete_target(
 ) -> dict:
     target = await db.scalar(select(Target).where(Target.id == target_id))
     if not target:
-        raise HTTPException(status_code=404, detail="Target not found")
+        raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
     await db.delete(target)
     await db.commit()
     await write_audit_log(db, "target.delete", user.id, {"target_id": target_id})
@@ -369,7 +391,22 @@ async def get_metrics(
 async def _live_conn(target: Target) -> asyncpg.Connection:
-    return await asyncpg.connect(dsn=build_target_dsn(target))
+    try:
+        return await asyncpg.connect(dsn=build_target_dsn(target))
+    except (OSError, asyncpg.PostgresError) as exc:
+        raise HTTPException(
+            status_code=503,
+            detail=api_error(
+                "target_unreachable",
+                "Target database is not reachable",
+                {
+                    "target_id": target.id,
+                    "host": target.host,
+                    "port": target.port,
+                    "error": str(exc),
+                },
+            ),
+        ) from exc
 @router.get("/{target_id}/locks")
@@ -377,7 +414,7 @@ async def get_locks(target_id: int, user: User = Depends(get_current_user), db:
 _ = user
 target = await db.scalar(select(Target).where(Target.id == target_id))
 if not target:
-    raise HTTPException(status_code=404, detail="Target not found")
+    raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
 conn = await _live_conn(target)
 try:
     rows = await conn.fetch(
@@ -398,7 +435,7 @@ async def get_activity(target_id: int, user: User = Depends(get_current_user), d
 _ = user
 target = await db.scalar(select(Target).where(Target.id == target_id))
 if not target:
-    raise HTTPException(status_code=404, detail="Target not found")
+    raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
 conn = await _live_conn(target)
 try:
     rows = await conn.fetch(
@@ -420,7 +457,7 @@ async def get_top_queries(target_id: int, user: User = Depends(get_current_user)
 _ = user
 target = await db.scalar(select(Target).where(Target.id == target_id))
 if not target:
-    raise HTTPException(status_code=404, detail="Target not found")
+    raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
 if not target.use_pg_stat_statements:
     return []
 rows = (
@@ -450,5 +487,20 @@ async def get_overview(target_id: int, user: User = Depends(get_current_user), d
 _ = user
 target = await db.scalar(select(Target).where(Target.id == target_id))
 if not target:
-    raise HTTPException(status_code=404, detail="Target not found")
+    raise HTTPException(status_code=404, detail=api_error("target_not_found", "Target not found"))
-return await get_target_overview(target)
+try:
+    return await get_target_overview(target)
+except (OSError, asyncpg.PostgresError) as exc:
+    raise HTTPException(
+        status_code=503,
+        detail=api_error(
+            "target_unreachable",
+            "Target database is not reachable",
+            {
+                "target_id": target.id,
+                "host": target.host,
+                "port": target.port,
+                "error": str(exc),
+            },
+        ),
+    ) from exc

View File

@@ -2,7 +2,7 @@ from functools import lru_cache
 from pydantic import field_validator
 from pydantic_settings import BaseSettings, SettingsConfigDict
-NEXAPG_VERSION = "0.1.2"
+NEXAPG_VERSION = "0.2.2"
 class Settings(BaseSettings):

View File

@@ -1,10 +1,11 @@
 from fastapi import Depends, HTTPException, status
 from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
-from jose import JWTError, jwt
+import jwt
 from sqlalchemy import select
 from sqlalchemy.ext.asyncio import AsyncSession
 from app.core.config import get_settings
 from app.core.db import get_db
+from app.core.errors import api_error
 from app.models.models import User
 settings = get_settings()
@@ -16,27 +17,42 @@ async def get_current_user(
     db: AsyncSession = Depends(get_db),
 ) -> User:
     if not credentials:
-        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Missing token")
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=api_error("missing_token", "Missing token"),
+        )
     token = credentials.credentials
     try:
         payload = jwt.decode(token, settings.jwt_secret_key, algorithms=[settings.jwt_algorithm])
-    except JWTError as exc:
-        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid token") from exc
+    except jwt.InvalidTokenError as exc:
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=api_error("invalid_token", "Invalid token"),
+        ) from exc
     if payload.get("type") != "access":
-        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid token type")
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=api_error("invalid_token_type", "Invalid token type"),
+        )
     user_id = payload.get("sub")
     user = await db.scalar(select(User).where(User.id == int(user_id)))
     if not user:
-        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="User not found")
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=api_error("user_not_found", "User not found"),
+        )
     return user
 def require_roles(*roles: str):
     async def role_dependency(user: User = Depends(get_current_user)) -> User:
         if user.role not in roles:
-            raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Forbidden")
+            raise HTTPException(
+                status_code=status.HTTP_403_FORBIDDEN,
+                detail=api_error("forbidden", "Forbidden"),
+            )
         return user
     return role_dependency
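As a usage sketch, require_roles plugs into any route as a dependency; the router, path, and handler below are illustrative, not taken from the repo.

# Hedged usage sketch for require_roles; the endpoint is hypothetical.
from fastapi import APIRouter, Depends
from app.core.deps import require_roles
from app.models.models import User

router = APIRouter()

@router.delete("/admin/users/{user_id}")
async def delete_user(user_id: int, user: User = Depends(require_roles("admin"))) -> dict:
    # Only reached when get_current_user succeeded and user.role is "admin";
    # otherwise the dependency raises 401/403 with the api_error envelope.
    return {"deleted": user_id, "by": user.id}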

View File

@@ -0,0 +1,38 @@
from __future__ import annotations
from typing import Any
def error_payload(code: str, message: str, details: Any, request_id: str) -> dict[str, Any]:
return {
"code": code,
"message": message,
"details": details,
"request_id": request_id,
}
def api_error(code: str, message: str, details: Any = None) -> dict[str, Any]:
return {
"code": code,
"message": message,
"details": details,
}
def http_status_to_code(status_code: int) -> str:
mapping = {
400: "bad_request",
401: "unauthorized",
403: "forbidden",
404: "not_found",
405: "method_not_allowed",
409: "conflict",
422: "validation_error",
429: "rate_limited",
500: "internal_error",
502: "bad_gateway",
503: "service_unavailable",
504: "gateway_timeout",
}
return mapping.get(status_code, f"http_{status_code}")
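Taken together, a detail=api_error(...) raised in a route and the error_payload(...) envelope built by the exception handler produce one flat JSON shape on the wire; a small sketch, assuming the helpers above are in scope:

# Hedged sketch of the wire format the two helpers compose into.
detail = api_error("target_not_found", "Target not found")
body = error_payload(
    code=detail["code"],
    message=detail["message"],
    details=detail["details"],
    request_id="...",  # normally read from request.state.request_id
)
# body == {"code": "target_not_found", "message": "Target not found",
#          "details": None, "request_id": "..."}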

View File

@@ -1,5 +1,5 @@
 from datetime import datetime, timedelta, timezone
-from jose import jwt
+import jwt
 from passlib.context import CryptContext
 from app.core.config import get_settings

View File

@@ -1,12 +1,17 @@
 import asyncio
 import logging
+from uuid import uuid4
 from contextlib import asynccontextmanager
-from fastapi import FastAPI
+from fastapi import FastAPI, HTTPException, Request
+from fastapi.exceptions import RequestValidationError
 from fastapi.middleware.cors import CORSMiddleware
+from fastapi.responses import JSONResponse
+from starlette.exceptions import HTTPException as StarletteHTTPException
 from sqlalchemy import select
 from app.api.router import api_router
 from app.core.config import get_settings
 from app.core.db import SessionLocal
+from app.core.errors import error_payload, http_status_to_code
 from app.core.logging import configure_logging
 from app.core.security import hash_password
 from app.models.models import User
@@ -57,4 +62,67 @@ app.add_middleware(
     allow_methods=["*"],
     allow_headers=["*"],
 )
+@app.middleware("http")
+async def request_id_middleware(request: Request, call_next):
+    request_id = request.headers.get("x-request-id") or str(uuid4())
+    request.state.request_id = request_id
+    response = await call_next(request)
+    response.headers["X-Request-ID"] = request_id
+    return response
+@app.exception_handler(HTTPException)
+@app.exception_handler(StarletteHTTPException)
+async def http_exception_handler(request: Request, exc: HTTPException | StarletteHTTPException):
+    request_id = getattr(request.state, "request_id", str(uuid4()))
+    code = http_status_to_code(exc.status_code)
+    message = "Request failed"
+    details = None
+    if isinstance(exc.detail, str):
+        message = exc.detail
+    elif isinstance(exc.detail, dict):
+        code = str(exc.detail.get("code", code))
+        message = str(exc.detail.get("message", message))
+        details = exc.detail.get("details")
+    elif isinstance(exc.detail, list):
+        message = "Request validation failed"
+        details = exc.detail
+    return JSONResponse(
+        status_code=exc.status_code,
+        content=error_payload(code=code, message=message, details=details, request_id=request_id),
+    )
+@app.exception_handler(RequestValidationError)
+async def request_validation_exception_handler(request: Request, exc: RequestValidationError):
+    request_id = getattr(request.state, "request_id", str(uuid4()))
+    return JSONResponse(
+        status_code=422,
+        content=error_payload(
+            code="validation_error",
+            message="Request validation failed",
+            details=exc.errors(),
+            request_id=request_id,
+        ),
+    )
+@app.exception_handler(Exception)
+async def unhandled_exception_handler(request: Request, exc: Exception):
+    request_id = getattr(request.state, "request_id", str(uuid4()))
+    logger.exception("unhandled_exception request_id=%s", request_id, exc_info=exc)
+    return JSONResponse(
+        status_code=500,
+        content=error_payload(
+            code="internal_error",
+            message="Internal server error",
+            details=None,
+            request_id=request_id,
+        ),
+    )
 app.include_router(api_router, prefix=settings.api_v1_prefix)
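The request-id round trip can be exercised end to end with FastAPI's TestClient; a self-contained sketch that re-declares the same middleware on a throwaway app:

# Hedged sketch: a client-supplied X-Request-ID is stored on request.state
# and echoed back on the response, exactly as in the middleware above.
from uuid import uuid4
from fastapi import FastAPI, Request
from fastapi.testclient import TestClient

demo = FastAPI()

@demo.middleware("http")
async def request_id_middleware(request: Request, call_next):
    request_id = request.headers.get("x-request-id") or str(uuid4())
    request.state.request_id = request_id
    response = await call_next(request)
    response.headers["X-Request-ID"] = request_id
    return response

@demo.get("/ping")
async def ping():
    return {"ok": True}

client = TestClient(demo)
res = client.get("/ping", headers={"x-request-id": "abc-123"})
assert res.headers["X-Request-ID"] == "abc-123"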

View File

@@ -9,6 +9,8 @@ class User(Base):
     id: Mapped[int] = mapped_column(Integer, primary_key=True)
     email: Mapped[str] = mapped_column(String(255), unique=True, index=True, nullable=False)
+    first_name: Mapped[str | None] = mapped_column(String(120), nullable=True)
+    last_name: Mapped[str | None] = mapped_column(String(120), nullable=True)
     password_hash: Mapped[str] = mapped_column(String(255), nullable=False)
     role: Mapped[str] = mapped_column(String(20), nullable=False, default="viewer")
     created_at: Mapped[datetime] = mapped_column(DateTime(timezone=True), server_default=func.now(), nullable=False)

View File

@@ -5,6 +5,8 @@ from pydantic import BaseModel, EmailStr, field_validator
 class UserOut(BaseModel):
     id: int
     email: EmailStr
+    first_name: str | None = None
+    last_name: str | None = None
     role: str
     created_at: datetime
@@ -13,12 +15,16 @@ class UserOut(BaseModel):
 class UserCreate(BaseModel):
     email: EmailStr
+    first_name: str | None = None
+    last_name: str | None = None
     password: str
     role: str = "viewer"
 class UserUpdate(BaseModel):
     email: EmailStr | None = None
+    first_name: str | None = None
+    last_name: str | None = None
     password: str | None = None
     role: str | None = None

View File

@@ -11,6 +11,7 @@ from sqlalchemy import desc, func, select
 from sqlalchemy.ext.asyncio import AsyncSession
 from app.core.config import get_settings
+from app.core.errors import api_error
 from app.models.models import AlertDefinition, Metric, QueryStat, Target
 from app.schemas.alert import AlertStatusItem, AlertStatusResponse
 from app.services.collector import build_target_dsn
@@ -144,25 +145,40 @@ def get_standard_alert_reference() -> list[dict[str, str]]:
 def validate_alert_thresholds(comparison: str, warning_threshold: float | None, alert_threshold: float) -> None:
     if comparison not in _ALLOWED_COMPARISONS:
-        raise HTTPException(status_code=400, detail=f"Invalid comparison. Use one of {sorted(_ALLOWED_COMPARISONS)}")
+        raise HTTPException(
+            status_code=400,
+            detail=api_error(
+                "invalid_comparison",
+                f"Invalid comparison. Use one of {sorted(_ALLOWED_COMPARISONS)}",
+            ),
+        )
     if warning_threshold is None:
         return
     if comparison in {"gte", "gt"} and warning_threshold > alert_threshold:
-        raise HTTPException(status_code=400, detail="For gte/gt, warning_threshold must be <= alert_threshold")
+        raise HTTPException(
+            status_code=400,
+            detail=api_error("invalid_thresholds", "For gte/gt, warning_threshold must be <= alert_threshold"),
+        )
     if comparison in {"lte", "lt"} and warning_threshold < alert_threshold:
-        raise HTTPException(status_code=400, detail="For lte/lt, warning_threshold must be >= alert_threshold")
+        raise HTTPException(
+            status_code=400,
+            detail=api_error("invalid_thresholds", "For lte/lt, warning_threshold must be >= alert_threshold"),
+        )
 def validate_alert_sql(sql_text: str) -> str:
     sql = sql_text.strip().rstrip(";")
     lowered = sql.lower().strip()
     if not lowered.startswith("select"):
-        raise HTTPException(status_code=400, detail="Alert SQL must start with SELECT")
+        raise HTTPException(status_code=400, detail=api_error("invalid_alert_sql", "Alert SQL must start with SELECT"))
     if _FORBIDDEN_SQL_WORDS.search(lowered):
-        raise HTTPException(status_code=400, detail="Only read-only SELECT statements are allowed")
+        raise HTTPException(
+            status_code=400,
+            detail=api_error("invalid_alert_sql", "Only read-only SELECT statements are allowed"),
+        )
     if ";" in sql:
-        raise HTTPException(status_code=400, detail="Only a single SQL statement is allowed")
+        raise HTTPException(status_code=400, detail=api_error("invalid_alert_sql", "Only a single SQL statement is allowed"))
     return sql
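The _FORBIDDEN_SQL_WORDS pattern itself sits outside this hunk; a standalone sketch of the same read-only guard, with an assumed pattern:

# Hedged sketch of validate_alert_sql's guard logic; the regex below is an
# assumption, the repo's actual _FORBIDDEN_SQL_WORDS is not shown in the diff.
import re

_FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|truncate|grant|create)\b")

def check_alert_sql(sql_text: str) -> str:
    sql = sql_text.strip().rstrip(";")
    lowered = sql.lower()
    if not lowered.startswith("select"):
        raise ValueError("Alert SQL must start with SELECT")
    if _FORBIDDEN.search(lowered):
        raise ValueError("Only read-only SELECT statements are allowed")
    if ";" in sql:
        raise ValueError("Only a single SQL statement is allowed")
    return sql

assert check_alert_sql("SELECT count(*) FROM pg_stat_activity;") == "SELECT count(*) FROM pg_stat_activity"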

View File

@@ -1,6 +1,7 @@
 import asyncio
 import logging
 from datetime import datetime, timezone
+from random import uniform
 from sqlalchemy import select
 from sqlalchemy.ext.asyncio import AsyncSession
 from sqlalchemy.exc import SQLAlchemyError
@@ -15,6 +16,9 @@ logger = logging.getLogger(__name__)
 settings = get_settings()
 _failure_state: dict[int, dict[str, object]] = {}
 _failure_log_interval_seconds = 300
+_backoff_base_seconds = max(3, int(settings.poll_interval_seconds))
+_backoff_max_seconds = 300
+_backoff_jitter_factor = 0.15
 def build_target_dsn(target: Target) -> str:
@@ -181,31 +185,66 @@ async def collect_once() -> None:
     async with SessionLocal() as db:
         targets = (await db.scalars(select(Target))).all()
+    active_target_ids = {target.id for target in targets}
+    stale_target_ids = [target_id for target_id in _failure_state.keys() if target_id not in active_target_ids]
+    for stale_target_id in stale_target_ids:
+        _failure_state.pop(stale_target_id, None)
     for target in targets:
+        now = datetime.now(timezone.utc)
+        state = _failure_state.get(target.id)
+        if state:
+            next_attempt_at = state.get("next_attempt_at")
+            if isinstance(next_attempt_at, datetime) and now < next_attempt_at:
+                continue
         try:
             await collect_target(target)
             prev = _failure_state.pop(target.id, None)
             if prev:
+                first_failure_at = prev.get("first_failure_at")
+                downtime_seconds = None
+                if isinstance(first_failure_at, datetime):
+                    downtime_seconds = max(0, int((now - first_failure_at).total_seconds()))
                 logger.info(
-                    "collector_target_recovered target=%s after_failures=%s last_error=%s",
+                    "collector_target_recovered target=%s after_failures=%s downtime_seconds=%s last_error=%s",
                     target.id,
                     prev.get("count", 0),
+                    downtime_seconds,
                     prev.get("error"),
                 )
         except (OSError, SQLAlchemyError, asyncpg.PostgresError, Exception) as exc:
-            now = datetime.now(timezone.utc)
             current_error = str(exc)
+            error_class = exc.__class__.__name__
             state = _failure_state.get(target.id)
             if state is None:
+                next_delay = min(_backoff_max_seconds, _backoff_base_seconds)
+                jitter = next_delay * _backoff_jitter_factor
+                next_delay = max(1, int(next_delay + uniform(-jitter, jitter)))
+                next_attempt_at = now.timestamp() + next_delay
                 _failure_state[target.id] = {
                     "count": 1,
+                    "first_failure_at": now,
                     "last_log_at": now,
                     "error": current_error,
+                    "next_attempt_at": datetime.fromtimestamp(next_attempt_at, tz=timezone.utc),
                 }
-                logger.exception("collector_error target=%s err=%s", target.id, exc)
+                logger.warning(
+                    "collector_target_unreachable target=%s error_class=%s err=%s consecutive_failures=%s retry_in_seconds=%s",
+                    target.id,
+                    error_class,
+                    current_error,
+                    1,
+                    next_delay,
+                )
                 continue
             count = int(state.get("count", 0)) + 1
+            raw_backoff = min(_backoff_max_seconds, _backoff_base_seconds * (2 ** min(count - 1, 10)))
+            jitter = raw_backoff * _backoff_jitter_factor
+            next_delay = max(1, int(raw_backoff + uniform(-jitter, jitter)))
+            state["next_attempt_at"] = datetime.fromtimestamp(now.timestamp() + next_delay, tz=timezone.utc)
             last_log_at = state.get("last_log_at")
             last_logged_error = str(state.get("error", ""))
             should_log = False
@@ -220,18 +259,23 @@ async def collect_once() -> None:
             if should_log:
                 state["last_log_at"] = now
                 state["error"] = current_error
-                logger.error(
-                    "collector_error_throttled target=%s err=%s consecutive_failures=%s",
+                logger.warning(
+                    "collector_target_unreachable target=%s error_class=%s err=%s consecutive_failures=%s retry_in_seconds=%s",
                     target.id,
+                    error_class,
                     current_error,
                     count,
+                    next_delay,
                 )
 async def collector_loop(stop_event: asyncio.Event) -> None:
     while not stop_event.is_set():
+        cycle_started = asyncio.get_running_loop().time()
         await collect_once()
+        elapsed = asyncio.get_running_loop().time() - cycle_started
+        sleep_for = max(0.0, settings.poll_interval_seconds - elapsed)
         try:
-            await asyncio.wait_for(stop_event.wait(), timeout=settings.poll_interval_seconds)
+            await asyncio.wait_for(stop_event.wait(), timeout=sleep_for)
         except asyncio.TimeoutError:
             pass
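The retry schedule above doubles the base poll interval per consecutive failure, caps it at 300s, and spreads retries with ±15% jitter; a minimal sketch of just that arithmetic, assuming a 15-second poll interval:

# Hedged sketch of the backoff curve the collector uses above.
from random import uniform

base, cap, jitter_factor = 15, 300, 0.15  # base stands in for poll_interval_seconds

def next_delay(consecutive_failures: int) -> int:
    raw = min(cap, base * (2 ** min(consecutive_failures - 1, 10)))
    jitter = raw * jitter_factor
    return max(1, int(raw + uniform(-jitter, jitter)))

# Roughly: failure 1 -> ~15s, 2 -> ~30s, 3 -> ~60s, 4 -> ~120s, 5 -> ~240s, 6+ -> near the 300s cap.
print([next_delay(n) for n in range(1, 8)])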

View File

@@ -17,10 +17,10 @@ class DiskSpaceProvider:
 class NullDiskSpaceProvider(DiskSpaceProvider):
     async def get_free_bytes(self, target_host: str) -> DiskSpaceProbeResult:
         return DiskSpaceProbeResult(
-            source="none",
+            source="agentless",
             status="unavailable",
             free_bytes=None,
-            message=f"No infra probe configured for host {target_host}. Add SSH/Agent provider later.",
+            message=f"Agentless mode: host-level free disk is not available for {target_host}.",
         )
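The probe result shape is implied by the constructor call above; a dataclass sketch of what DiskSpaceProbeResult plausibly looks like (an assumption, not the repo's definition):

# Hedged sketch: field names mirror the call above, the dataclass is assumed.
from dataclasses import dataclass

@dataclass
class DiskSpaceProbeResult:
    source: str
    status: str
    free_bytes: int | None
    message: str

probe = DiskSpaceProbeResult(
    source="agentless",
    status="unavailable",
    free_bytes=None,
    message="Agentless mode: host-level free disk is not available for db-host.",
)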

View File

@@ -1,4 +1,5 @@
-fastapi==0.116.1
+fastapi==0.129.0
+starlette==0.52.1
 uvicorn[standard]==0.35.0
 gunicorn==23.0.0
 sqlalchemy[asyncio]==2.0.44
@@ -7,7 +8,7 @@ alembic==1.16.5
 pydantic==2.11.7
 pydantic-settings==2.11.0
 email-validator==2.2.0
-python-jose[cryptography]==3.5.0
+PyJWT==2.11.0
 passlib[argon2]==1.7.4
-cryptography==45.0.7
+cryptography==46.0.5
-python-multipart==0.0.20
+python-multipart==0.0.22

View File

@@ -18,8 +18,8 @@ services:
       retries: 10
   backend:
-    build:
-      context: ./backend
+    image: nesterovicit/nexapg-backend:latest
+    pull_policy: always
     container_name: nexapg-backend
     restart: unless-stopped
     environment:
@@ -47,16 +47,14 @@ services:
- "${BACKEND_PORT}:8000" - "${BACKEND_PORT}:8000"
frontend: frontend:
build: image: nesterovicit/nexapg-frontend:latest
context: ./frontend pull_policy: always
args:
VITE_API_URL: ${VITE_API_URL}
container_name: nexapg-frontend container_name: nexapg-frontend
restart: unless-stopped restart: unless-stopped
depends_on: depends_on:
- backend - backend
ports: ports:
- "${FRONTEND_PORT}:80" - "${FRONTEND_PORT}:8080"
volumes: volumes:
pg_data: pg_data:

View File

@@ -7,8 +7,12 @@ ARG VITE_API_URL=/api/v1
 ENV VITE_API_URL=${VITE_API_URL}
 RUN npm run build
-FROM nginx:1.29-alpine
+FROM nginx:1-alpine-slim
+RUN apk upgrade --no-cache \
+    && mkdir -p /var/cache/nginx /var/run /var/log/nginx /tmp/nginx \
+    && chown -R nginx:nginx /var/cache/nginx /var/run /var/log/nginx /tmp/nginx
 COPY nginx.conf /etc/nginx/conf.d/default.conf
 COPY --from=build /app/dist /usr/share/nginx/html
-EXPOSE 80
-HEALTHCHECK --interval=30s --timeout=3s --retries=5 CMD wget -qO- http://127.0.0.1/ || exit 1
+USER 101
+EXPOSE 8080
+HEALTHCHECK --interval=30s --timeout=3s --retries=5 CMD nginx -t || exit 1

View File

@@ -1,5 +1,5 @@
 server {
-    listen 80;
+    listen 8080;
     server_name _;
     root /usr/share/nginx/html;

View File

@@ -22,6 +22,7 @@ function Layout({ children }) {
   const { me, logout, uiMode, setUiMode, alertToasts, dismissAlertToast, serviceUpdateAvailable } = useAuth();
   const navigate = useNavigate();
   const navClass = ({ isActive }) => `nav-btn${isActive ? " active" : ""}`;
+  const fullName = [me?.first_name, me?.last_name].filter(Boolean).join(" ").trim();
   return (
     <div className="shell">
@@ -101,8 +102,9 @@ function Layout({ children }) {
           </button>
           <small>{uiMode === "easy" ? "Simple health guidance" : "Advanced DBA metrics"}</small>
         </div>
-        <div>{me?.email}</div>
-        <div className="role">{me?.role}</div>
+        <div className="profile-name">{fullName || me?.email}</div>
+        {fullName && <div className="profile-email">{me?.email}</div>}
+        <div className="role profile-role">{me?.role}</div>
         <NavLink to="/user-settings" className={({ isActive }) => `profile-btn${isActive ? " active" : ""}`}>
           User Settings
         </NavLink>

View File

@@ -35,8 +35,21 @@ export async function apiFetch(path, options = {}, tokens, onUnauthorized) {
     }
   }
   if (!res.ok) {
-    const txt = await res.text();
-    throw new Error(txt || `HTTP ${res.status}`);
+    const raw = await res.text();
+    let parsed = null;
+    try {
+      parsed = raw ? JSON.parse(raw) : null;
+    } catch {
+      parsed = null;
+    }
+    const message = parsed?.message || raw || `HTTP ${res.status}`;
+    const err = new Error(message);
+    err.status = res.status;
+    err.code = parsed?.code || null;
+    err.details = parsed?.details || null;
+    err.requestId = parsed?.request_id || res.headers.get("x-request-id") || null;
+    throw err;
   }
   if (res.status === 204) return null;
   return res.json();

View File

@@ -19,8 +19,11 @@ const TEMPLATE_VARIABLES = [
 export function AdminUsersPage() {
   const { tokens, refresh, me } = useAuth();
+  const emptyCreateForm = { email: "", first_name: "", last_name: "", password: "", role: "viewer" };
   const [users, setUsers] = useState([]);
-  const [form, setForm] = useState({ email: "", password: "", role: "viewer" });
+  const [form, setForm] = useState(emptyCreateForm);
+  const [editingUserId, setEditingUserId] = useState(null);
+  const [editForm, setEditForm] = useState({ email: "", first_name: "", last_name: "", password: "", role: "viewer" });
   const [emailSettings, setEmailSettings] = useState({
     enabled: false,
     smtp_host: "",
@@ -79,7 +82,7 @@ export function AdminUsersPage() {
     e.preventDefault();
     try {
       await apiFetch("/admin/users", { method: "POST", body: JSON.stringify(form) }, tokens, refresh);
-      setForm({ email: "", password: "", role: "viewer" });
+      setForm(emptyCreateForm);
       await load();
     } catch (e) {
       setError(String(e.message || e));
@@ -95,6 +98,39 @@ export function AdminUsersPage() {
     }
   };
+  const startEdit = (user) => {
+    setEditingUserId(user.id);
+    setEditForm({
+      email: user.email || "",
+      first_name: user.first_name || "",
+      last_name: user.last_name || "",
+      password: "",
+      role: user.role || "viewer",
+    });
+  };
+  const cancelEdit = () => {
+    setEditingUserId(null);
+    setEditForm({ email: "", first_name: "", last_name: "", password: "", role: "viewer" });
+  };
+  const saveEdit = async (userId) => {
+    try {
+      const payload = {
+        email: editForm.email,
+        first_name: editForm.first_name.trim() || null,
+        last_name: editForm.last_name.trim() || null,
+        role: editForm.role,
+      };
+      if (editForm.password.trim()) payload.password = editForm.password;
+      await apiFetch(`/admin/users/${userId}`, { method: "PUT", body: JSON.stringify(payload) }, tokens, refresh);
+      cancelEdit();
+      await load();
+    } catch (e) {
+      setError(String(e.message || e));
+    }
+  };
   const saveSmtp = async (e) => {
     e.preventDefault();
     setError("");
@@ -165,6 +201,22 @@ export function AdminUsersPage() {
<p className="muted">Create accounts and manage access roles.</p> <p className="muted">Create accounts and manage access roles.</p>
</div> </div>
<form className="grid three admin-user-form" onSubmit={create}> <form className="grid three admin-user-form" onSubmit={create}>
<div className="admin-field">
<label>First Name</label>
<input
value={form.first_name}
placeholder="Jane"
onChange={(e) => setForm({ ...form, first_name: e.target.value })}
/>
</div>
<div className="admin-field">
<label>Last Name</label>
<input
value={form.last_name}
placeholder="Doe"
onChange={(e) => setForm({ ...form, last_name: e.target.value })}
/>
</div>
<div className="admin-field"> <div className="admin-field">
<label>Email</label> <label>Email</label>
<input value={form.email} placeholder="user@example.com" onChange={(e) => setForm({ ...form, email: e.target.value })} /> <input value={form.email} placeholder="user@example.com" onChange={(e) => setForm({ ...form, email: e.target.value })} />
@@ -197,6 +249,7 @@ export function AdminUsersPage() {
             <thead>
               <tr>
                 <th>ID</th>
+                <th>Name</th>
                 <th>Email</th>
                 <th>Role</th>
                 <th>Action</th>
@@ -206,11 +259,70 @@ export function AdminUsersPage() {
             {users.map((u) => (
               <tr key={u.id} className="admin-user-row">
                 <td className="user-col-id">{u.id}</td>
-                <td className="user-col-email">{u.email}</td>
-                <td>
-                  <span className={`pill role-pill role-${u.role}`}>{u.role}</span>
+                <td className="user-col-name">
+                  {editingUserId === u.id ? (
+                    <div className="admin-inline-grid two">
+                      <input
+                        value={editForm.first_name}
+                        placeholder="First name"
+                        onChange={(e) => setEditForm({ ...editForm, first_name: e.target.value })}
+                      />
+                      <input
+                        value={editForm.last_name}
+                        placeholder="Last name"
+                        onChange={(e) => setEditForm({ ...editForm, last_name: e.target.value })}
+                      />
+                    </div>
+                  ) : (
+                    <span className="user-col-name-value">{[u.first_name, u.last_name].filter(Boolean).join(" ") || "-"}</span>
+                  )}
+                </td>
+                <td className="user-col-email">
+                  {editingUserId === u.id ? (
+                    <input
+                      value={editForm.email}
+                      placeholder="user@example.com"
+                      onChange={(e) => setEditForm({ ...editForm, email: e.target.value })}
+                    />
+                  ) : (
+                    u.email
+                  )}
                 </td>
                 <td>
+                  {editingUserId === u.id ? (
+                    <select value={editForm.role} onChange={(e) => setEditForm({ ...editForm, role: e.target.value })}>
+                      <option value="viewer">viewer</option>
+                      <option value="operator">operator</option>
+                      <option value="admin">admin</option>
+                    </select>
+                  ) : (
+                    <span className={`pill role-pill role-${u.role}`}>{u.role}</span>
+                  )}
+                </td>
+                <td className="admin-user-actions">
+                  {editingUserId === u.id && (
+                    <input
+                      type="password"
+                      className="admin-inline-password"
+                      value={editForm.password}
+                      placeholder="New password (optional)"
+                      onChange={(e) => setEditForm({ ...editForm, password: e.target.value })}
+                    />
+                  )}
+                  {editingUserId === u.id ? (
+                    <>
+                      <button className="table-action-btn primary small-btn" onClick={() => saveEdit(u.id)}>
+                        Save
+                      </button>
+                      <button className="table-action-btn small-btn" onClick={cancelEdit}>
+                        Cancel
+                      </button>
+                    </>
+                  ) : (
+                    <button className="table-action-btn edit small-btn" onClick={() => startEdit(u)}>
+                      Edit
+                    </button>
+                  )}
                   {u.id !== me.id && (
                     <button className="table-action-btn delete small-btn" onClick={() => remove(u.id)}>
                       <span aria-hidden="true">

View File

@@ -1,4 +1,4 @@
-import React, { useEffect, useState } from "react";
+import React, { useEffect, useRef, useState } from "react";
 import { apiFetch } from "../api";
 import { useAuth } from "../state";
@@ -62,6 +62,7 @@ function buildQueryTips(row) {
 export function QueryInsightsPage() {
   const { tokens, refresh } = useAuth();
+  const refreshRef = useRef(refresh);
   const [targets, setTargets] = useState([]);
   const [targetId, setTargetId] = useState("");
   const [rows, setRows] = useState([]);
@@ -71,6 +72,10 @@ export function QueryInsightsPage() {
   const [error, setError] = useState("");
   const [loading, setLoading] = useState(true);
+  useEffect(() => {
+    refreshRef.current = refresh;
+  }, [refresh]);
   useEffect(() => {
     (async () => {
       try {
@@ -89,17 +94,26 @@ export function QueryInsightsPage() {
   useEffect(() => {
     if (!targetId) return;
+    let active = true;
     (async () => {
       try {
-        const data = await apiFetch(`/targets/${targetId}/top-queries`, {}, tokens, refresh);
+        const data = await apiFetch(`/targets/${targetId}/top-queries`, {}, tokens, refreshRef.current);
+        if (!active) return;
         setRows(data);
-        setSelectedQuery(data[0] || null);
-        setPage(1);
+        setSelectedQuery((prev) => {
+          if (!prev) return data[0] || null;
+          const keep = data.find((row) => row.queryid === prev.queryid);
+          return keep || data[0] || null;
+        });
+        setPage((prev) => (prev === 1 ? prev : 1));
       } catch (e) {
-        setError(String(e.message || e));
+        if (active) setError(String(e.message || e));
       }
     })();
-  }, [targetId, tokens, refresh]);
+    return () => {
+      active = false;
+    };
+  }, [targetId, tokens?.accessToken, tokens?.refreshToken]);

   const dedupedByQueryId = [...rows].reduce((acc, row) => {
     if (!row?.queryid) return acc;
View File

@@ -41,6 +41,19 @@ function formatNumber(value, digits = 2) {
   return Number(value).toFixed(digits);
 }
+function formatHostMetricUnavailable() {
+  return "N/A (agentless)";
+}
+function formatDiskSpaceAgentless(diskSpace) {
+  if (!diskSpace) return formatHostMetricUnavailable();
+  if (diskSpace.free_bytes !== null && diskSpace.free_bytes !== undefined) {
+    return formatBytes(diskSpace.free_bytes);
+  }
+  if (diskSpace.status === "unavailable") return formatHostMetricUnavailable();
+  return "-";
+}
 function MetricsTooltip({ active, payload, label }) {
   if (!active || !payload || payload.length === 0) return null;
   const row = payload[0]?.payload || {};
@@ -63,6 +76,10 @@ function didMetricSeriesChange(prev = [], next = []) {
   return prevLast?.ts !== nextLast?.ts || Number(prevLast?.value) !== Number(nextLast?.value);
 }
+function isTargetUnreachableError(err) {
+  return err?.code === "target_unreachable" || err?.status === 503;
+}
 async function loadMetric(targetId, metric, range, tokens, refresh) {
   const { from, to } = toQueryRange(range);
   return apiFetch(
@@ -86,6 +103,7 @@ export function TargetDetailPage() {
   const [targetMeta, setTargetMeta] = useState(null);
   const [owners, setOwners] = useState([]);
   const [groupTargets, setGroupTargets] = useState([]);
+  const [offlineState, setOfflineState] = useState(null);
   const [error, setError] = useState("");
   const [loading, setLoading] = useState(true);
   const refreshRef = useRef(refresh);
@@ -101,22 +119,16 @@ export function TargetDetailPage() {
       setLoading(true);
     }
     try {
-      const [connections, xacts, cache, locksTable, activityTable, overviewData, targetInfo, ownerRows, allTargets] = await Promise.all([
+      const [connections, xacts, cache, targetInfo, ownerRows, allTargets] = await Promise.all([
         loadMetric(id, "connections_total", range, tokens, refreshRef.current),
         loadMetric(id, "xacts_total", range, tokens, refreshRef.current),
         loadMetric(id, "cache_hit_ratio", range, tokens, refreshRef.current),
-        apiFetch(`/targets/${id}/locks`, {}, tokens, refreshRef.current),
-        apiFetch(`/targets/${id}/activity`, {}, tokens, refreshRef.current),
-        apiFetch(`/targets/${id}/overview`, {}, tokens, refreshRef.current),
         apiFetch(`/targets/${id}`, {}, tokens, refreshRef.current),
         apiFetch(`/targets/${id}/owners`, {}, tokens, refreshRef.current),
         apiFetch("/targets", {}, tokens, refreshRef.current),
       ]);
       if (!active) return;
       setSeries({ connections, xacts, cache });
-      setLocks(locksTable);
-      setActivity(activityTable);
-      setOverview(overviewData);
       setTargetMeta(targetInfo);
       setOwners(ownerRows);
       const groupId = targetInfo?.tags?.monitor_group_id;
@@ -128,6 +140,34 @@ export function TargetDetailPage() {
       } else {
         setGroupTargets([]);
       }
+      try {
+        const [locksTable, activityTable, overviewData] = await Promise.all([
+          apiFetch(`/targets/${id}/locks`, {}, tokens, refreshRef.current),
+          apiFetch(`/targets/${id}/activity`, {}, tokens, refreshRef.current),
+          apiFetch(`/targets/${id}/overview`, {}, tokens, refreshRef.current),
+        ]);
+        if (!active) return;
+        setLocks(locksTable);
+        setActivity(activityTable);
+        setOverview(overviewData);
+        setOfflineState(null);
+      } catch (liveErr) {
+        if (!active) return;
+        if (isTargetUnreachableError(liveErr)) {
+          setLocks([]);
+          setActivity([]);
+          setOverview(null);
+          setOfflineState({
+            message:
+              "Target is currently unreachable. Check host/port, network route, SSL mode, and database availability.",
+            host: liveErr?.details?.host || targetInfo?.host || "-",
+            port: liveErr?.details?.port || targetInfo?.port || "-",
+            requestId: liveErr?.requestId || null,
+          });
+        } else {
+          throw liveErr;
+        }
+      }
       setError("");
     } catch (e) {
       if (active) setError(String(e.message || e));
@@ -268,6 +308,17 @@ export function TargetDetailPage() {
<span className="muted">Responsible users:</span> <span className="muted">Responsible users:</span>
{owners.length > 0 ? owners.map((item) => <span key={item.user_id} className="owner-pill">{item.email}</span>) : <span className="muted">none assigned</span>} {owners.length > 0 ? owners.map((item) => <span key={item.user_id} className="owner-pill">{item.email}</span>) : <span className="muted">none assigned</span>}
</div> </div>
{offlineState && (
<div className="card target-offline-card">
<h3>Target Offline</h3>
<p>{offlineState.message}</p>
<div className="target-offline-meta">
<span><strong>Host:</strong> {offlineState.host}</span>
<span><strong>Port:</strong> {offlineState.port}</span>
{offlineState.requestId ? <span><strong>Request ID:</strong> {offlineState.requestId}</span> : null}
</div>
</div>
)}
{uiMode === "easy" && overview && easySummary && ( {uiMode === "easy" && overview && easySummary && (
<> <>
<div className={`card easy-status ${easySummary.health}`}> <div className={`card easy-status ${easySummary.health}`}>
@@ -346,6 +397,9 @@ export function TargetDetailPage() {
{uiMode === "dba" && overview && ( {uiMode === "dba" && overview && (
<div className="card"> <div className="card">
<h3>Database Overview</h3> <h3>Database Overview</h3>
<p className="muted" style={{ marginTop: 2 }}>
Agentless mode: host-level CPU, RAM, and free-disk metrics are not available.
</p>
<div className="grid three overview-kv"> <div className="grid three overview-kv">
<div><span>PostgreSQL Version</span><strong>{overview.instance.server_version || "-"}</strong></div> <div><span>PostgreSQL Version</span><strong>{overview.instance.server_version || "-"}</strong></div>
<div> <div>
@@ -366,8 +420,8 @@ export function TargetDetailPage() {
<div title="Total WAL directory size (when available)"> <div title="Total WAL directory size (when available)">
<span>WAL Size</span><strong>{formatBytes(overview.storage.wal_directory_size_bytes)}</strong> <span>WAL Size</span><strong>{formatBytes(overview.storage.wal_directory_size_bytes)}</strong>
</div> </div>
<div title="Optional metric via future Agent/SSH provider"> <div title={overview.storage.disk_space?.message || "Agentless mode: host-level free disk is unavailable."}>
<span>Free Disk</span><strong>{formatBytes(overview.storage.disk_space.free_bytes)}</strong> <span>Free Disk</span><strong>{formatDiskSpaceAgentless(overview.storage.disk_space)}</strong>
</div> </div>
<div title="Replication replay delay on standby"> <div title="Replication replay delay on standby">
<span>Replay Lag</span> <span>Replay Lag</span>
@@ -378,6 +432,12 @@ export function TargetDetailPage() {
             <div><span>Replication Slots</span><strong>{overview.replication.replication_slots_count ?? "-"}</strong></div>
             <div><span>Repl Clients</span><strong>{overview.replication.active_replication_clients ?? "-"}</strong></div>
             <div><span>Autovacuum Workers</span><strong>{overview.performance.autovacuum_workers ?? "-"}</strong></div>
+            <div title="Host CPU requires OS-level telemetry">
+              <span>Host CPU</span><strong>{formatHostMetricUnavailable()}</strong>
+            </div>
+            <div title="Host RAM requires OS-level telemetry">
+              <span>Host RAM</span><strong>{formatHostMetricUnavailable()}</strong>
+            </div>
           </div>
           <div className="grid two">

View File

@@ -195,6 +195,23 @@ a {
   color: #d7e4fa;
 }
+.profile-name {
+  font-size: 15px;
+  font-weight: 700;
+  line-height: 1.25;
+}
+.profile-email {
+  margin-top: 2px;
+  font-size: 12px;
+  color: #a6bcda;
+  word-break: break-all;
+}
+.profile-role {
+  margin-top: 4px;
+}
 .mode-switch-block {
   margin-bottom: 12px;
   padding: 10px;
@@ -1255,6 +1272,31 @@ td {
   font-weight: 600;
 }
+.user-col-name-value {
+  font-weight: 600;
+}
+.admin-inline-grid {
+  display: grid;
+  gap: 8px;
+}
+.admin-inline-grid.two {
+  grid-template-columns: repeat(2, minmax(0, 1fr));
+}
+.admin-inline-password {
+  min-width: 190px;
+}
+.admin-user-actions {
+  display: flex;
+  gap: 8px;
+  align-items: center;
+  justify-content: flex-start;
+  flex-wrap: wrap;
+}
 .role-pill {
   display: inline-flex;
   align-items: center;
@@ -1980,6 +2022,29 @@ select:-webkit-autofill {
   font-size: 12px;
 }
+.target-offline-card {
+  border-color: #a85757;
+  background: linear-gradient(130deg, #2c1724 0%, #1f1f38 100%);
+}
+.target-offline-card h3 {
+  margin: 0 0 8px;
+  color: #fecaca;
+}
+.target-offline-card p {
+  margin: 0 0 10px;
+  color: #fde2e2;
+}
+.target-offline-meta {
+  display: flex;
+  flex-wrap: wrap;
+  gap: 16px;
+  font-size: 12px;
+  color: #d4d4f5;
+}
 .chart-tooltip {
   background: #0f1934ee;
   border: 1px solid #2f4a8b;

View File

@@ -0,0 +1,41 @@
#!/usr/bin/env bash
set -euo pipefail
# Usage:
# bash bootstrap-compose.sh
# BASE_URL="https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main" bash bootstrap-compose.sh
BASE_URL="${BASE_URL:-https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main}"
echo "[bootstrap] Using base URL: ${BASE_URL}"
fetch_file() {
local path="$1"
local out="$2"
if command -v wget >/dev/null 2>&1; then
wget -q -O "${out}" "${BASE_URL}/${path}"
elif command -v curl >/dev/null 2>&1; then
curl -fsSL "${BASE_URL}/${path}" -o "${out}"
else
echo "[bootstrap] ERROR: wget or curl is required"
exit 1
fi
}
fetch_file "docker-compose.yml" "docker-compose.yml"
fetch_file ".env.example" ".env.example"
fetch_file "Makefile" "Makefile"
if [[ ! -f ".env" ]]; then
cp .env.example .env
echo "[bootstrap] Created .env from .env.example"
else
echo "[bootstrap] .env already exists, keeping it"
fi
echo
echo "[bootstrap] Next steps:"
echo " 1) Edit .env (set JWT_SECRET_KEY and ENCRYPTION_KEY at minimum)"
echo " 2) Run: make up"
echo