22 Commits
0.2.1 ... 0.2.4

Author SHA1 Message Date
597579376f [NX-204 Issue] Add secret management guidelines and enhance security notes
Some checks are pending
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Waiting to run
Migration Safety / Alembic upgrade/downgrade safety (push) Successful in 2m43s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Proxy Profile Validation / validate (push) Successful in 3s
Python Dependency Security / pip-audit (block high/critical) (push) Successful in 26s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m41s
Introduced a comprehensive guide for secure production secret handling (`docs/security/secret-management.md`). Updated `.env.example` files with clearer comments on best practices, emphasizing not hardcoding secrets and implementing rotation strategies. Enhanced README with a new section linking to the secret management documentation.
2026-02-15 12:29:40 +01:00
f25792b8d8 Adjust Nginx PID file path in Dockerfile
All checks were successful
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Successful in 2m41s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Proxy Profile Validation / validate (push) Successful in 3s
Moved the PID file location in the Nginx configuration to `/tmp/nginx/nginx.pid` instead of the default path. The default `/var/run` location is not writable by the unprivileged runtime user, so pointing the PID file at `/tmp` avoids permission errors at container startup.
2026-02-15 12:20:04 +01:00
6093c5dea8 [NX-203 Issue] Add production proxy profile with validation and documentation
All checks were successful
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Successful in 2m40s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Proxy Profile Validation / validate (push) Successful in 3s
Introduced a secure, repeatable production proxy profile for reverse proxy and HTTPS deployment, including NGINX configuration, environment variables, and CORS guidance. Added CI workflow for static validation of proxy guardrails and detailed documentation to ensure best practices in deployment.
2026-02-15 12:10:41 +01:00
84bc7b0384 Update NEXAPG version to 0.2.4
All checks were successful
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Successful in 4m21s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Python Dependency Security / pip-audit (block high/critical) (push) Successful in 25s
Bumped the NEXAPG version from 0.2.2 to 0.2.4 in the configuration file so the reported application version matches this release.
2026-02-15 11:29:11 +01:00
3932aa56f7 [NX-202 Issue] Add pip-audit CI enforcement for Python dependency security
All checks were successful
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Successful in 2m41s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Python Dependency Security / pip-audit (block high/critical) (push) Successful in 50s
This commit integrates pip-audit to enforce vulnerability checks in CI. Dependencies with unresolved HIGH/CRITICAL vulnerabilities will block builds unless explicitly allowlisted. The process is documented, with a strict policy to ensure exceptions are trackable and time-limited.
2026-02-15 10:44:33 +01:00
9657bd7a36 Merge branch 'main' of https://git.nesterovic.cc/nessi/NexaPG into development
All checks were successful
Migration Safety / Alembic upgrade/downgrade safety (push) Successful in 20s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
2026-02-15 10:33:56 +01:00
574e2eb9a5 Ensure valid Docker Hub namespace in release workflow
All checks were successful
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Successful in 2m44s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Added validation that normalizes the configured Docker Hub namespace (trimming and lowercasing), rejects empty or placeholder values, and enforces Docker Hub's naming format. This catches configuration mistakes before any image push is attempted.
2026-02-15 10:32:44 +01:00
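The shell checks added in this commit can be illustrated with an equivalent Python sketch. The helper name `validate_namespace` is hypothetical; the real validation runs as a shell step in the release workflow, using the same normalization and the same `^[a-z0-9]+([._-][a-z0-9]+)*$` pattern.

```python
import re

# Docker Hub namespace pattern from the workflow: lowercase letters and
# digits, optionally separated by single '.', '_' or '-' characters.
NS_RE = re.compile(r"^[a-z0-9]+([._-][a-z0-9]+)*$")

def validate_namespace(raw: str) -> str:
    # Normalize accidental input like surrounding spaces or uppercase.
    ns = raw.strip().lower()
    if not ns or ns == "-":
        raise ValueError("Missing Docker Hub namespace")
    # A namespace is a single account/org name, never a path or URL.
    if "/" in ns or ":" in ns:
        raise ValueError(f"Invalid namespace {ns!r}: use only the account/org name")
    if not NS_RE.fullmatch(ns):
        raise ValueError(f"Invalid namespace {ns!r}: lowercase letters, digits, ., _, - only")
    return ns
```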
21a8023bf1 Merge pull request 'Fix CI stability: resolve Docker Scout write/auth issues and harden PG matrix checkout' (#35) from development into main
All checks were successful
Migration Safety / Alembic upgrade/downgrade safety (push) Successful in 6m20s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 10s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m18s
Reviewed-on: #35
2026-02-14 22:12:28 +00:00
328f69ea5e Update GitHub Actions workflows for improved functionality
All checks were successful
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Successful in 2m44s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Migration Safety / Alembic upgrade/downgrade safety (pull_request) Successful in 21s
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 7s
Removed the read-only flag from Docker volume mounts in the container CVE scan workflow to allow modifications. Added `max-parallel` and `fetch-depth` settings to the PostgreSQL compatibility matrix workflow to limit concurrent jobs and speed up checkouts.
2026-02-14 22:04:58 +01:00
c0077e3dd8 Add -u root flag to container CVE scan workflow
Some checks failed
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Successful in 2m41s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 9s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 9s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Failing after 11m28s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Failing after 11m55s
Running the Scout container as root lets it read the mounted Docker socket and auth config, avoiding permission failures during scans. The change affects the development workflow configuration for container CVE scanning.
2026-02-14 19:47:34 +01:00
af6ea11079 Refactor Docker Scout integration in CVE scan workflow
All checks were successful
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Successful in 2m14s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Simplified the Docker Scout configuration logic by removing unnecessary checks and utilizing Docker's standard auth configuration. Updated environment variable usage and volume mounts to streamline the setup process for scanning containers.
2026-02-14 19:32:50 +01:00
5a7f32541f Add Docker Scout login fallback and temporary caching.
All checks were successful
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Successful in 1m57s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
This update introduces a fallback for Docker Scout login when Docker Hub credentials are unavailable, so the workflow does not fail outright. It also replaces direct Docker config usage with a temporary cached copy of the auth config, reducing reliance on the runner's environment setup.
2026-02-14 19:03:30 +01:00
dd3f18bb06 Make Docker Scout scans non-blocking and update config paths.
All checks were successful
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Successful in 2m10s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Set `continue-on-error: true` for Docker Scout steps to ensure workflows proceed even if scans fail. Updated volume paths and environment variables for Docker config and credentials to improve scanning compatibility.
2026-02-14 18:55:52 +01:00
f4b18b6cf1 Update Docker Hub Scout config to use local login credentials
Some checks failed
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Failing after 1m56s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Replaced the use of Docker Hub secrets with a mounted local docker configuration file for authentication. Added a check to ensure the login config exists before running scans, preventing unnecessary failures. This change enhances flexibility and aligns with local environment setups.
2026-02-14 18:50:46 +01:00
a220e5de99 Add Docker Hub authentication for Scout scans
Some checks failed
Migration Safety / Alembic upgrade/downgrade safety (push) Successful in 22s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Failing after 1m53s
This update ensures Docker Scout scans use Docker Hub authentication. If the required credentials are absent, the scans are skipped with a corresponding message. This improves security and prevents unnecessary scan failures.
2026-02-14 18:31:10 +01:00
a5ffafaf9e Update CVE scanning workflow to use JSON format and new tools
All checks were successful
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Successful in 2m9s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Switched Trivy output from table format to JSON for easier processing. Added a summary step that parses the reports and counts findings per severity with a small Python script. Integrated Docker Scout scans for both backend and frontend, and extended the uploaded artifacts to include the new JSON and Scout scan outputs.
2026-02-14 18:24:08 +01:00
d17752b611 Add CVE scan workflow for development branch
Some checks failed
Container CVE Scan (development) / Scan backend/frontend images for CVEs (push) Failing after 2m20s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
This commit introduces a GitHub Actions workflow to scan for CVEs in backend and frontend container images. It uses Trivy for scanning and uploads the reports as artifacts, providing better visibility into vulnerabilities in development builds.
2026-02-14 18:16:54 +01:00
fe05c40426 Merge branch 'main' of https://git.nesterovic.cc/nessi/NexaPG into development
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 10s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
2026-02-14 17:47:34 +01:00
5a0478f47d harden(frontend): switch to nginx:alpine-slim with non-root runtime and nginx dir permission fixes 2026-02-14 17:47:26 +01:00
1cea82f5d9 Merge pull request 'Update frontend to use unprivileged Nginx on port 8080' (#34) from development into main
All checks were successful
Migration Safety / Alembic upgrade/downgrade safety (push) Successful in 21s
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m33s
Reviewed-on: #34
2026-02-14 16:18:34 +00:00
418034f639 Update NEXAPG_VERSION to 0.2.2
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Migration Safety / Alembic upgrade/downgrade safety (pull_request) Successful in 23s
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 8s
Bumped the version from 0.2.1 to 0.2.2 in the configuration file to mark the new release.
2026-02-14 17:17:57 +01:00
489dde812f Update frontend to use unprivileged Nginx on port 8080
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Switch from `nginx:1.29-alpine-slim` to `nginxinc/nginx-unprivileged:stable-alpine` for improved security by running as a non-root user. Changed the exposed port from 80 to 8080 in the configurations to reflect the unprivileged setup. Adjusted the `docker-compose.yml` and `nginx.conf` accordingly.
2026-02-14 17:13:18 +01:00
20 changed files with 880 additions and 12 deletions

View File

@@ -12,6 +12,7 @@ LOG_LEVEL=INFO
# Core Database (internal metadata DB)
# ------------------------------
# Database that stores users, targets, metrics, query stats, and audit logs.
# DEV default only. Use strong unique credentials in production.
DB_NAME=nexapg
DB_USER=nexapg
DB_PASSWORD=nexapg
@@ -23,7 +24,7 @@ DB_PORT=5433
# ------------------------------
# Host port mapped to backend container port 8000.
BACKEND_PORT=8000
# JWT signing secret. Change this in every non-local environment.
# JWT signing secret. Never hardcode in source. Rotate regularly.
JWT_SECRET_KEY=change_this_super_secret
JWT_ALGORITHM=HS256
# Access token lifetime in minutes.
@@ -31,6 +32,7 @@ JWT_ACCESS_TOKEN_MINUTES=15
# Refresh token lifetime in minutes (10080 = 7 days).
JWT_REFRESH_TOKEN_MINUTES=10080
# Key used to encrypt monitored target passwords at rest.
# Never hardcode in source. Rotate with re-encryption plan.
# Generate with:
# python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
ENCRYPTION_KEY=REPLACE_WITH_FERNET_KEY
@@ -56,5 +58,5 @@ INIT_ADMIN_PASSWORD=ChangeMe123!
# ------------------------------
# Frontend
# ------------------------------
# Host port mapped to frontend container port 80.
# Host port mapped to frontend container port 8080.
FRONTEND_PORT=5173

View File

@@ -0,0 +1,158 @@
name: Container CVE Scan (development)

on:
  push:
    branches: ["development"]
  workflow_dispatch:

jobs:
  cve-scan:
    name: Scan backend/frontend images for CVEs
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Docker Hub login (for Scout)
        if: ${{ secrets.DOCKERHUB_USERNAME != '' && secrets.DOCKERHUB_TOKEN != '' }}
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Prepare Docker auth config for Scout container
        if: ${{ secrets.DOCKERHUB_USERNAME != '' && secrets.DOCKERHUB_TOKEN != '' }}
        run: |
          mkdir -p "$RUNNER_TEMP/scout-docker-config"
          cp "$HOME/.docker/config.json" "$RUNNER_TEMP/scout-docker-config/config.json"
          chmod 600 "$RUNNER_TEMP/scout-docker-config/config.json"
      - name: Build backend image (local)
        uses: docker/build-push-action@v6
        with:
          context: ./backend
          file: ./backend/Dockerfile
          push: false
          load: true
          tags: nexapg-backend:dev-scan
          provenance: false
          sbom: false
      - name: Build frontend image (local)
        uses: docker/build-push-action@v6
        with:
          context: ./frontend
          file: ./frontend/Dockerfile
          push: false
          load: true
          tags: nexapg-frontend:dev-scan
          build-args: |
            VITE_API_URL=/api/v1
          provenance: false
          sbom: false
      - name: Trivy scan (backend)
        uses: aquasecurity/trivy-action@0.24.0
        with:
          image-ref: nexapg-backend:dev-scan
          format: json
          output: trivy-backend.json
          severity: UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL
          ignore-unfixed: false
          exit-code: 0
      - name: Trivy scan (frontend)
        uses: aquasecurity/trivy-action@0.24.0
        with:
          image-ref: nexapg-frontend:dev-scan
          format: json
          output: trivy-frontend.json
          severity: UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL
          ignore-unfixed: false
          exit-code: 0
      - name: Summarize Trivy severities
        run: |
          python - <<'PY'
          import json
          from collections import Counter

          def summarize(path):
              c = Counter()
              with open(path, "r", encoding="utf-8") as f:
                  data = json.load(f)
              for result in data.get("Results", []):
                  for v in result.get("Vulnerabilities", []) or []:
                      c[v.get("Severity", "UNKNOWN")] += 1
              for sev in ["CRITICAL", "HIGH", "MEDIUM", "LOW", "UNKNOWN"]:
                  c.setdefault(sev, 0)
              return c

          for label, path in [("backend", "trivy-backend.json"), ("frontend", "trivy-frontend.json")]:
              s = summarize(path)
              print(f"===== Trivy {label} =====")
              print(f"CRITICAL={s['CRITICAL']} HIGH={s['HIGH']} MEDIUM={s['MEDIUM']} LOW={s['LOW']} UNKNOWN={s['UNKNOWN']}")
              print()
          PY
      - name: Docker Scout scan (backend)
        continue-on-error: true
        run: |
          if [ -z "${{ secrets.DOCKERHUB_USERNAME }}" ] || [ -z "${{ secrets.DOCKERHUB_TOKEN }}" ]; then
            echo "Docker Hub Scout scan skipped: DOCKERHUB_USERNAME/DOCKERHUB_TOKEN not set." > scout-backend.txt
            exit 0
          fi
          docker run --rm \
            -u root \
            -v /var/run/docker.sock:/var/run/docker.sock \
            -v "$RUNNER_TEMP/scout-docker-config:/root/.docker" \
            -e DOCKER_CONFIG=/root/.docker \
            -e DOCKER_SCOUT_HUB_USER="${{ secrets.DOCKERHUB_USERNAME }}" \
            -e DOCKER_SCOUT_HUB_PASSWORD="${{ secrets.DOCKERHUB_TOKEN }}" \
            docker/scout-cli:latest cves nexapg-backend:dev-scan \
            --only-severity critical,high,medium,low > scout-backend.txt 2>&1 || {
              echo "" >> scout-backend.txt
              echo "Docker Scout backend scan failed (non-blocking)." >> scout-backend.txt
            }
      - name: Docker Scout scan (frontend)
        continue-on-error: true
        run: |
          if [ -z "${{ secrets.DOCKERHUB_USERNAME }}" ] || [ -z "${{ secrets.DOCKERHUB_TOKEN }}" ]; then
            echo "Docker Hub Scout scan skipped: DOCKERHUB_USERNAME/DOCKERHUB_TOKEN not set." > scout-frontend.txt
            exit 0
          fi
          docker run --rm \
            -u root \
            -v /var/run/docker.sock:/var/run/docker.sock \
            -v "$RUNNER_TEMP/scout-docker-config:/root/.docker" \
            -e DOCKER_CONFIG=/root/.docker \
            -e DOCKER_SCOUT_HUB_USER="${{ secrets.DOCKERHUB_USERNAME }}" \
            -e DOCKER_SCOUT_HUB_PASSWORD="${{ secrets.DOCKERHUB_TOKEN }}" \
            docker/scout-cli:latest cves nexapg-frontend:dev-scan \
            --only-severity critical,high,medium,low > scout-frontend.txt 2>&1 || {
              echo "" >> scout-frontend.txt
              echo "Docker Scout frontend scan failed (non-blocking)." >> scout-frontend.txt
            }
      - name: Print scan summary
        run: |
          echo "===== Docker Scout backend ====="
          test -f scout-backend.txt && cat scout-backend.txt || echo "scout-backend.txt not available"
          echo
          echo "===== Docker Scout frontend ====="
          test -f scout-frontend.txt && cat scout-frontend.txt || echo "scout-frontend.txt not available"
      - name: Upload scan reports
        uses: actions/upload-artifact@v3
        with:
          name: container-cve-scan-reports
          path: |
            trivy-backend.json
            trivy-frontend.json
            scout-backend.txt
            scout-frontend.txt

View File

@@ -27,6 +27,20 @@ jobs:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.13"
      - name: Dependency security gate (pip-audit)
        run: |
          python -m pip install --upgrade pip
          pip install pip-audit
          pip-audit -r backend/requirements.txt --format json --aliases --output pip-audit-backend.json || true
          python backend/scripts/pip_audit_gate.py \
            --report pip-audit-backend.json \
            --allowlist ops/security/pip-audit-allowlist.json
      - name: Resolve version/tag
        id: ver
        shell: bash
@@ -51,10 +65,28 @@
          if [ -z "$NS" ]; then
            NS="${{ secrets.DOCKERHUB_USERNAME }}"
          fi
          if [ -z "$NS" ]; then
          # Normalize accidental input like spaces or uppercase.
          NS="$(echo "$NS" | tr '[:upper:]' '[:lower:]' | xargs)"
          # Reject clearly invalid placeholders/config mistakes early.
          if [ -z "$NS" ] || [ "$NS" = "-" ]; then
            echo "Missing Docker Hub namespace. Set repo var DOCKERHUB_NAMESPACE or secret DOCKERHUB_USERNAME."
            exit 1
          fi
          # Namespace must be a single Docker Hub account/org name, not a path/url.
          if [[ "$NS" == *"/"* ]] || [[ "$NS" == *":"* ]]; then
            echo "Invalid Docker Hub namespace '$NS'. Use only the account/org name (e.g. 'nesterovicit')."
            exit 1
          fi
          if ! [[ "$NS" =~ ^[a-z0-9]+([._-][a-z0-9]+)*$ ]]; then
            echo "Invalid Docker Hub namespace '$NS'. Allowed: lowercase letters, digits, ., _, -"
            exit 1
          fi
          echo "Using Docker Hub namespace: $NS"
          echo "value=$NS" >> "$GITHUB_OUTPUT"
      - name: Set up Docker Buildx

View File

@@ -11,6 +11,7 @@ jobs:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      max-parallel: 3
      matrix:
        pg_version: ["14", "15", "16", "17", "18"]
@@ -32,6 +33,8 @@ jobs:
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 1
      - name: Set up Python
        uses: actions/setup-python@v5

View File

@@ -0,0 +1,35 @@
name: Proxy Profile Validation

on:
  push:
    branches: ["main", "master", "development"]
    paths:
      - "frontend/**"
      - "ops/profiles/prod/**"
      - "ops/scripts/validate_proxy_profile.sh"
      - ".github/workflows/proxy-profile-validation.yml"
      - "README.md"
      - ".env.example"
      - "ops/.env.example"
  pull_request:
    paths:
      - "frontend/**"
      - "ops/profiles/prod/**"
      - "ops/scripts/validate_proxy_profile.sh"
      - ".github/workflows/proxy-profile-validation.yml"
      - "README.md"
      - ".env.example"
      - "ops/.env.example"
  workflow_dispatch:

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 1
      - name: Validate proxy profile and mixed-content guardrails
        run: bash ops/scripts/validate_proxy_profile.sh

View File

@@ -0,0 +1,53 @@
name: Python Dependency Security

on:
  push:
    branches: ["main", "master", "development"]
    paths:
      - "backend/**"
      - ".github/workflows/python-dependency-security.yml"
      - "ops/security/pip-audit-allowlist.json"
      - "docs/security/dependency-exceptions.md"
  pull_request:
    paths:
      - "backend/**"
      - ".github/workflows/python-dependency-security.yml"
      - "ops/security/pip-audit-allowlist.json"
      - "docs/security/dependency-exceptions.md"
  workflow_dispatch:

jobs:
  pip-audit:
    name: pip-audit (block high/critical)
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 1
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.13"
      - name: Install pip-audit
        run: |
          python -m pip install --upgrade pip
          pip install pip-audit
      - name: Run pip-audit (JSON report)
        run: |
          pip-audit -r backend/requirements.txt --format json --aliases --output pip-audit-backend.json || true
      - name: Enforce vulnerability policy
        run: |
          python backend/scripts/pip_audit_gate.py \
            --report pip-audit-backend.json \
            --allowlist ops/security/pip-audit-allowlist.json
      - name: Upload pip-audit report
        uses: actions/upload-artifact@v3
        with:
          name: pip-audit-security-report
          path: pip-audit-backend.json

View File

@@ -20,7 +20,10 @@ It combines FastAPI, React, and PostgreSQL in a Docker Compose stack with RBAC,
- [API Error Format](#api-error-format)
- [`pg_stat_statements` Requirement](#pg_stat_statements-requirement)
- [Reverse Proxy / SSL Guidance](#reverse-proxy--ssl-guidance)
- [Production Proxy Profile](#production-proxy-profile)
- [PostgreSQL Compatibility Smoke Test](#postgresql-compatibility-smoke-test)
- [Dependency Exception Flow](#dependency-exception-flow)
- [Secret Management (Production)](#secret-management-production)
- [Troubleshooting](#troubleshooting)
- [Security Notes](#security-notes)
@@ -206,7 +209,7 @@ Note: Migrations run automatically when the backend container starts (`entrypoin
| Variable | Description |
|---|---|
| `FRONTEND_PORT` | Host port mapped to frontend container port `80` |
| `FRONTEND_PORT` | Host port mapped to frontend container port `8080` |
## Core Functional Areas
@@ -371,6 +374,21 @@ For production, serve frontend and API under the same public origin via reverse
This prevents mixed-content and CORS issues.
## Production Proxy Profile
A secure, repeatable production profile is included:
- `ops/profiles/prod/.env.production.example`
- `ops/profiles/prod/nginx/nexapg.conf`
- `docs/deployment/proxy-production-profile.md`
Highlights:
- explicit CORS recommendations per environment (`dev`, `staging`, `prod`)
- required reverse-proxy header forwarding for backend context
- API path forwarding (`/api/` -> backend)
- mixed-content prevention guidance for HTTPS deployments
## PostgreSQL Compatibility Smoke Test
Run manually against one DSN:
@@ -387,6 +405,29 @@ PG_DSN_CANDIDATES='postgresql://postgres:postgres@postgres:5432/compatdb?sslmode
python backend/scripts/pg_compat_smoke.py
```
## Dependency Exception Flow
Python dependency vulnerabilities are enforced by CI via `pip-audit`.
- CI blocks unresolved `HIGH` and `CRITICAL` findings.
- Missing severity metadata is treated conservatively as `HIGH`.
- Temporary exceptions must be declared in `ops/security/pip-audit-allowlist.json`.
- Full process and required metadata are documented in:
- `docs/security/dependency-exceptions.md`
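
For illustration, a hypothetical allowlist entry carrying the metadata the gate requires might look like this (the advisory id, package name, issue id, and expiry date below are placeholders, not real exceptions):

```python
import datetime as dt

# Illustrative allowlist entry; key names match what the pip-audit gate
# requires ("id", "reason", "approved_by", "issue", "expires_on").
entry = {
    "id": "GHSA-xxxx-xxxx-xxxx",   # placeholder advisory id
    "package": "example-lib",      # optional: scope the exception to one package
    "reason": "fix not yet released upstream",
    "approved_by": "security-team",
    "issue": "NX-000",             # placeholder tracking issue
    "expires_on": "2099-12-31",    # expired entries fail the gate
}

required = {"id", "reason", "approved_by", "issue", "expires_on"}
assert required <= set(entry)
# expires_on must be an ISO date that has not yet passed.
assert dt.date.fromisoformat(entry["expires_on"]) >= dt.date.today()
```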
## Secret Management (Production)
Secret handling guidance is documented in:
- `docs/security/secret-management.md`
It includes:
- secure handling for `JWT_SECRET_KEY`, `ENCRYPTION_KEY`, `DB_PASSWORD`, and SMTP credentials
- clear **Do / Don't** rules
- recommended secret provider patterns (Vault/cloud/orchestrator/CI injection)
- practical rotation basics and operational checklist
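
As a minimal sketch of the documented "don't hardcode" rule (an assumption for illustration: secrets arrive via environment variables, and `require_secret` is a hypothetical helper, not project code):

```python
import os

# Placeholder values from the example env files; they must never survive
# into a production environment.
PLACEHOLDERS = {"change_this_super_secret", "REPLACE_WITH_FERNET_KEY"}

def require_secret(name: str, env=None) -> str:
    """Fail fast if a required secret is missing or still a placeholder."""
    env = os.environ if env is None else env
    value = env.get(name, "")
    if not value or value in PLACEHOLDERS:
        raise RuntimeError(f"{name} is unset or still a placeholder")
    return value
```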
## Troubleshooting
### Backend container keeps restarting during `make migrate`
@@ -421,3 +462,5 @@ Set target `sslmode` to `disable` (or correct SSL config on target DB).
- RBAC enforced on protected endpoints
- Audit logs for critical actions
- Collector error logging includes throttling to reduce repeated noise
- Production secret handling and rotation guidance:
- `docs/security/secret-management.md`

View File

@@ -2,7 +2,7 @@ from functools import lru_cache
from pydantic import field_validator
from pydantic_settings import BaseSettings, SettingsConfigDict
NEXAPG_VERSION = "0.2.1"
NEXAPG_VERSION = "0.2.4"
class Settings(BaseSettings):

View File

@@ -0,0 +1,192 @@
#!/usr/bin/env python3
"""Gate pip-audit results with an auditable allowlist policy.

Policy:
- Block unresolved HIGH/CRITICAL vulnerabilities.
- If severity is missing, treat as HIGH by default.
- Allow temporary exceptions via allowlist with expiry metadata.
"""
from __future__ import annotations

import argparse
import datetime as dt
import json
import sys
from pathlib import Path

SEVERITY_ORDER = {"unknown": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}
BLOCKING_SEVERITIES = {"high", "critical"}


def _parse_date(s: str) -> dt.date:
    return dt.date.fromisoformat(s)


def _normalize_severity(value: object) -> str:
    """Normalize various pip-audit/osv-style severity payloads."""
    if isinstance(value, str):
        v = value.strip().lower()
        if v in SEVERITY_ORDER:
            return v
        try:
            # CVSS numeric string fallback
            score = float(v)
            if score >= 9.0:
                return "critical"
            if score >= 7.0:
                return "high"
            if score >= 4.0:
                return "medium"
            return "low"
        except ValueError:
            return "unknown"
    if isinstance(value, (int, float)):
        score = float(value)
        if score >= 9.0:
            return "critical"
        if score >= 7.0:
            return "high"
        if score >= 4.0:
            return "medium"
        return "low"
    if isinstance(value, list):
        # OSV sometimes returns a list of dicts. Pick the max-known severity.
        best = "unknown"
        for item in value:
            if isinstance(item, dict):
                sev = _normalize_severity(item.get("severity"))
                if SEVERITY_ORDER.get(sev, 0) > SEVERITY_ORDER.get(best, 0):
                    best = sev
        return best
    if isinstance(value, dict):
        return _normalize_severity(value.get("severity"))
    return "unknown"


def _load_allowlist(path: Path) -> tuple[list[dict], list[str]]:
    if not path.exists():
        return [], []
    data = json.loads(path.read_text(encoding="utf-8"))
    entries = data.get("entries", [])
    today = dt.date.today()
    active: list[dict] = []
    errors: list[str] = []
    required = {"id", "reason", "approved_by", "issue", "expires_on"}
    for idx, entry in enumerate(entries, start=1):
        missing = required - set(entry.keys())
        if missing:
            errors.append(f"allowlist entry #{idx} missing keys: {', '.join(sorted(missing))}")
            continue
        try:
            expires = _parse_date(str(entry["expires_on"]))
        except ValueError:
            errors.append(f"allowlist entry #{idx} has invalid expires_on: {entry['expires_on']}")
            continue
        if expires < today:
            errors.append(
                f"allowlist entry #{idx} ({entry['id']}) expired on {entry['expires_on']}"
            )
            continue
        active.append(entry)
    return active, errors


def _iter_findings(report: object):
    # pip-audit JSON can be list[dep] or dict with dependencies.
    deps = report if isinstance(report, list) else report.get("dependencies", [])
    for dep in deps:
        package = dep.get("name", "unknown")
        version = dep.get("version", "unknown")
        for vuln in dep.get("vulns", []):
            vuln_id = vuln.get("id", "unknown")
            aliases = vuln.get("aliases", []) or []
            severity = _normalize_severity(vuln.get("severity"))
            if severity == "unknown":
                severity = "high"  # conservative default for policy safety
            yield {
                "package": package,
                "version": version,
                "id": vuln_id,
                "aliases": aliases,
                "severity": severity,
                "fix_versions": vuln.get("fix_versions", []),
            }


def _is_allowlisted(finding: dict, allowlist: list[dict]) -> bool:
    ids = {finding["id"], *finding["aliases"]}
    pkg = finding["package"]
    for entry in allowlist:
        entry_pkg = entry.get("package")
        if entry["id"] in ids and (not entry_pkg or entry_pkg == pkg):
            return True
    return False


def main() -> int:
    parser = argparse.ArgumentParser()
    parser.add_argument("--report", required=True, help="Path to pip-audit JSON report")
    parser.add_argument("--allowlist", required=True, help="Path to allowlist JSON")
    args = parser.parse_args()
    report_path = Path(args.report)
    allowlist_path = Path(args.allowlist)
    if not report_path.exists():
        print(f"[pip-audit-gate] Missing report: {report_path}")
        return 1
    report = json.loads(report_path.read_text(encoding="utf-8"))
    allowlist, allowlist_errors = _load_allowlist(allowlist_path)
    if allowlist_errors:
        print("[pip-audit-gate] Allowlist validation failed:")
        for err in allowlist_errors:
            print(f"  - {err}")
        return 1
    unresolved_blocking: list[dict] = []
    summary = {"critical": 0, "high": 0, "medium": 0, "low": 0, "unknown": 0}
    ignored = 0
    for finding in _iter_findings(report):
        sev = finding["severity"]
        summary[sev] = summary.get(sev, 0) + 1
        if _is_allowlisted(finding, allowlist):
            ignored += 1
            continue
        if sev in BLOCKING_SEVERITIES:
            unresolved_blocking.append(finding)
    print("[pip-audit-gate] Summary:")
    print(
        f"  CRITICAL={summary['critical']} HIGH={summary['high']} "
        f"MEDIUM={summary['medium']} LOW={summary['low']} ALLOWLISTED={ignored}"
    )
    if unresolved_blocking:
        print("[pip-audit-gate] Blocking vulnerabilities found:")
        for f in unresolved_blocking:
            aliases = ", ".join(f["aliases"]) if f["aliases"] else "-"
            fixes = ", ".join(f["fix_versions"]) if f["fix_versions"] else "-"
            print(
                f"  - {f['severity'].upper()} {f['package']}=={f['version']} "
                f"id={f['id']} aliases=[{aliases}] fixes=[{fixes}]"
            )
        return 1
    print("[pip-audit-gate] No unresolved HIGH/CRITICAL vulnerabilities.")
    return 0


if __name__ == "__main__":
    sys.exit(main())


@@ -54,7 +54,7 @@ services:
depends_on:
- backend
ports:
- "${FRONTEND_PORT}:80"
- "${FRONTEND_PORT}:8080"
volumes:
pg_data:


@@ -0,0 +1,78 @@
# Production Proxy Profile (HTTPS)
This profile defines a secure and repeatable NexaPG deployment behind a reverse proxy.
## Included Profile Files
- `ops/profiles/prod/.env.production.example`
- `ops/profiles/prod/nginx/nexapg.conf`
## CORS Recommendations by Environment
| Environment | Recommended `CORS_ORIGINS` | Notes |
|---|---|---|
| `dev` | `*` or local explicit origins | `*` is acceptable only for local/dev usage. |
| `staging` | Exact staging UI origins | Example: `https://staging-monitor.example.com` |
| `prod` | Exact production UI origin(s) only | No wildcard; use comma-separated HTTPS origins if needed. |
Examples:
```env
# dev only
CORS_ORIGINS=*
# staging
CORS_ORIGINS=https://staging-monitor.example.com
# prod
CORS_ORIGINS=https://monitor.example.com
```
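Backends commonly read `CORS_ORIGINS` as a comma-separated string and split it into a list before handing it to the CORS middleware. A minimal sketch of that parsing (the function name and the wildcard rule here are illustrative; NexaPG's actual pydantic validator may differ in detail):

```python
def parse_cors_origins(raw: str) -> list[str]:
    """Split a CORS_ORIGINS value into a clean list of origins.

    A lone "*" is allowed (dev only); mixing "*" with explicit origins
    is rejected since the wildcard would silently win.
    """
    origins = [o.strip() for o in raw.split(",") if o.strip()]
    if "*" in origins and len(origins) > 1:
        raise ValueError("wildcard '*' cannot be combined with explicit origins")
    return origins


print(parse_cors_origins("https://monitor.example.com, https://ops.example.com"))
```

This is why the prod guidance says "comma-separated HTTPS origins if needed": each entry becomes one allowed origin, with no wildcard fallback.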
## Reverse Proxy Requirements
For stable auth, CORS, and request context handling, forward these headers to the backend:
- `Host`
- `X-Real-IP`
- `X-Forwarded-For`
- `X-Forwarded-Proto`
- `X-Forwarded-Host`
- `X-Forwarded-Port`
Also forward API paths:
- `/api/` -> backend service (`:8000`)
## Mixed-Content Prevention
NexaPG frontend is designed to avoid mixed-content in HTTPS mode:
- Build/runtime default API base is relative (`/api/v1`)
- `frontend/src/api.js` upgrades `http` API URL to `https` when page runs on HTTPS
Recommended production setting:
```env
VITE_API_URL=/api/v1
```
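The guard in `frontend/src/api.js` is JavaScript, but the decision it makes is simple enough to sketch in Python (helper name hypothetical): a relative base like `/api/v1` passes through untouched, and an absolute `http://` base is upgraded when the page itself is HTTPS.

```python
from urllib.parse import urlparse, urlunparse


def upgrade_api_url(page_protocol: str, api_url: str) -> str:
    """Upgrade an http:// API URL to https:// when the page is served over HTTPS.

    Relative URLs (e.g. "/api/v1") have no scheme and pass through unchanged,
    which is why a relative default avoids mixed content entirely.
    """
    parsed = urlparse(api_url)
    if page_protocol == "https:" and parsed.scheme == "http":
        return urlunparse(parsed._replace(scheme="https"))
    return api_url


print(upgrade_api_url("https:", "http://monitor.example.com/api/v1"))
```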
## Validation Checklist
1. Open app over HTTPS and verify:
- login request is `https://.../api/v1/auth/login`
- no browser mixed-content errors in console
2. Verify CORS behavior:
- allowed origin works
- unknown origin is blocked
3. Verify backend receives forwarded protocol:
- proxied responses succeed with no redirect/proto issues
## CI Validation
`Proxy Profile Validation` workflow runs static guardrail checks:
- relative `VITE_API_URL` default
- required API proxy path in frontend NGINX config
- required forwarded headers
- HTTPS mixed-content guard in frontend API resolver
- production profile forbids wildcard CORS


@@ -0,0 +1,53 @@
# Dependency Security Exception Flow (pip-audit)
This document defines the auditable exception process for Python dependency vulnerabilities.
## Policy
- CI blocks unresolved `HIGH` and `CRITICAL` dependency vulnerabilities.
- If a vulnerability does not provide severity metadata, it is treated as `HIGH` by policy.
- Temporary exceptions are allowed only through `ops/security/pip-audit-allowlist.json`.
## Allowlist Location
- File: `ops/security/pip-audit-allowlist.json`
- Format:
```json
{
  "entries": [
    {
      "id": "CVE-2026-12345",
      "package": "example-package",
      "reason": "Upstream fix not released yet",
      "approved_by": "security-owner",
      "issue": "NX-202",
      "expires_on": "2026-12-31"
    }
  ]
}
```
## Required Fields
- `id`: Vulnerability ID (`CVE-*`, `GHSA-*`, or advisory ID)
- `reason`: Why exception is necessary
- `approved_by`: Approver identity
- `issue`: Tracking issue/ticket
- `expires_on`: Expiry date in `YYYY-MM-DD`
Optional:
- `package`: Restrict exception to one dependency package
## Rules
- Expired allowlist entries fail CI.
- Missing required fields fail CI.
- Exceptions must be time-limited and linked to a tracking issue.
- Removing an exception is required once an upstream fix is available.
## Auditability
- Every exception change is tracked in Git history and code review.
- CI logs include blocked vulnerabilities and allowlisted findings counts.


@@ -0,0 +1,74 @@
# Secret Management (Production)
This guide defines secure handling for NexaPG secrets in production deployments.
## In Scope Secrets
- `JWT_SECRET_KEY`
- `ENCRYPTION_KEY`
- `DB_PASSWORD`
- SMTP credentials (configured in Admin Settings, encrypted at rest)
## Do / Don't
### Do
- Use an external secret source (Vault, cloud secret manager, orchestrator secrets, or CI/CD secret injection).
- Keep secrets out of Git history and out of image layers.
- Use strong random values:
- JWT secret: at least 32+ bytes random
  - Fernet key: generated via `Fernet.generate_key()`
- Restrict access to runtime secrets (least privilege).
- Rotate secrets on schedule and on incident.
- Store production `.env` with strict permissions if file-based injection is used:
- owner-only read/write (e.g., `chmod 600 .env`)
- Audit who can read/update secrets in your deployment platform.
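Both values can be generated with the standard library alone. A Fernet key is defined as the urlsafe-base64 encoding of 32 random bytes, so the sketch below produces the same format as `Fernet.generate_key()` without requiring the `cryptography` package to be installed on the machine generating secrets:

```python
import base64
import os
import secrets

# Strong JWT signing secret: 32 random bytes as a URL-safe string (~43 chars).
jwt_secret = secrets.token_urlsafe(32)

# Fernet key: urlsafe-base64 of 32 random bytes, same format as
# cryptography's Fernet.generate_key().
fernet_key = base64.urlsafe_b64encode(os.urandom(32)).decode()

print(jwt_secret)
print(fernet_key)
```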
### Don't
- Do **not** hardcode secrets in source code.
- Do **not** commit `.env` with real values.
- Do **not** bake production secrets into Dockerfiles or image build args.
- Do **not** share secrets in tickets, chat logs, or CI console output.
- Do **not** reuse the same secrets between environments.
## Recommended Secret Providers
Pick one of these models:
1. Platform/Cloud secrets
- AWS Secrets Manager
- Azure Key Vault
- Google Secret Manager
2. HashiCorp Vault
3. CI/CD secret injection
- Inject as runtime env vars during deployment
4. Docker/Kubernetes secrets
- Prefer secret mounts or orchestrator-native secret stores
If you use plain `.env` files, treat them as sensitive artifacts and protect at OS and backup level.
## Rotation Basics
Minimum baseline:
1. `JWT_SECRET_KEY`
- Rotate on schedule (e.g., quarterly) and immediately after compromise.
- Expect existing sessions/tokens to become invalid after rotation.
2. `ENCRYPTION_KEY`
- Rotate with planned maintenance.
- Re-encrypt stored encrypted values (target passwords, SMTP password) during key transition.
3. `DB_PASSWORD`
- Rotate service account credentials regularly.
- Apply password changes in DB and deployment config atomically.
4. SMTP credentials
- Use dedicated sender account/app password.
- Rotate regularly and after provider-side security alerts.
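As noted in item 1, rotating a single HS256 signing key invalidates all outstanding tokens at once. If your JWT library supports verifying against multiple keys, a common way to soften this is a short grace window in which tokens signed by the previous key are still accepted. A generic sketch of that pattern with stdlib `hmac` (not NexaPG's actual token handling):

```python
import hashlib
import hmac


def sign(payload: bytes, key: bytes) -> str:
    return hmac.new(key, payload, hashlib.sha256).hexdigest()


def verify_with_rotation(payload: bytes, signature: str, keys: list[bytes]) -> bool:
    """Accept signatures made with the current key or a recently retired one.

    keys[0] is the active signing key; the rest are retired keys kept only
    for a short grace window so rotation does not log everyone out at once.
    """
    return any(hmac.compare_digest(sign(payload, k), signature) for k in keys)


old_key, new_key = b"old-secret", b"new-secret"
token_sig = sign(b"session-123", old_key)  # issued before rotation
print(verify_with_rotation(b"session-123", token_sig, [new_key, old_key]))
```

After the grace window expires, drop the retired key from the list; any tokens still signed with it then fail verification as expected.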
## Operational Checklist
- [ ] No production secret in repository files.
- [ ] No production secret in container image metadata or build args.
- [ ] Runtime secret source documented for your environment.
- [ ] Secret rotation owner and schedule defined.
- [ ] Incident runbook includes emergency rotation steps.


@@ -7,9 +7,14 @@ ARG VITE_API_URL=/api/v1
ENV VITE_API_URL=${VITE_API_URL}
RUN npm run build
FROM nginx:1.29-alpine-slim
RUN apk upgrade --no-cache
FROM nginx:1-alpine-slim
RUN apk upgrade --no-cache \
&& mkdir -p /var/cache/nginx /var/run /var/log/nginx /tmp/nginx \
&& chown -R nginx:nginx /var/cache/nginx /var/run /var/log/nginx /tmp/nginx \
&& sed -i 's#pid[[:space:]]\+/run/nginx.pid;#pid /tmp/nginx/nginx.pid;#' /etc/nginx/nginx.conf \
&& sed -i 's#pid[[:space:]]\+/var/run/nginx.pid;#pid /tmp/nginx/nginx.pid;#' /etc/nginx/nginx.conf
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
USER 101
EXPOSE 8080
HEALTHCHECK --interval=30s --timeout=3s --retries=5 CMD nginx -t || exit 1


@@ -1,5 +1,5 @@
server {
listen 80;
listen 8080;
server_name _;
root /usr/share/nginx/html;


@@ -12,6 +12,7 @@ LOG_LEVEL=INFO
# Core Database (internal metadata DB)
# ------------------------------
# Database that stores users, targets, metrics, query stats, and audit logs.
# DEV default only. Use strong unique credentials in production.
DB_NAME=nexapg
DB_USER=nexapg
DB_PASSWORD=nexapg
@@ -23,7 +24,7 @@ DB_PORT=5433
# ------------------------------
# Host port mapped to backend container port 8000.
BACKEND_PORT=8000
# JWT signing secret. Change this in every non-local environment.
# JWT signing secret. Never hardcode in source. Rotate regularly.
JWT_SECRET_KEY=change_this_super_secret
JWT_ALGORITHM=HS256
# Access token lifetime in minutes.
@@ -31,6 +32,7 @@ JWT_ACCESS_TOKEN_MINUTES=15
# Refresh token lifetime in minutes (10080 = 7 days).
JWT_REFRESH_TOKEN_MINUTES=10080
# Key used to encrypt monitored target passwords at rest.
# Never hardcode in source. Rotate with re-encryption plan.
# Generate with:
# python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
ENCRYPTION_KEY=REPLACE_WITH_FERNET_KEY
@@ -49,7 +51,7 @@ INIT_ADMIN_PASSWORD=ChangeMe123!
# ------------------------------
# Frontend
# ------------------------------
# Host port mapped to frontend container port 80.
# Host port mapped to frontend container port 8080.
FRONTEND_PORT=5173
# Base API URL used at frontend build time.
# For reverse proxy + SSL, keep this relative to avoid mixed-content issues.


@@ -0,0 +1,48 @@
# NexaPG production profile (reverse proxy + HTTPS)
# Copy to .env and adjust values for your environment.
# ------------------------------
# Application
# ------------------------------
APP_NAME=NexaPG Monitor
ENVIRONMENT=prod
LOG_LEVEL=INFO
# ------------------------------
# Core Database
# ------------------------------
DB_NAME=nexapg
DB_USER=nexapg
DB_PASSWORD=change_me
DB_PORT=5433
# ------------------------------
# Backend
# ------------------------------
BACKEND_PORT=8000
JWT_SECRET_KEY=replace_with_long_random_secret
JWT_ALGORITHM=HS256
JWT_ACCESS_TOKEN_MINUTES=15
JWT_REFRESH_TOKEN_MINUTES=10080
ENCRYPTION_KEY=REPLACE_WITH_FERNET_KEY
# Production CORS:
# - no wildcard
# - set exact public UI origin(s)
CORS_ORIGINS=https://monitor.example.com
POLL_INTERVAL_SECONDS=30
ALERT_ACTIVE_CONNECTION_RATIO_MIN_TOTAL_CONNECTIONS=5
ALERT_ROLLBACK_RATIO_WINDOW_MINUTES=15
ALERT_ROLLBACK_RATIO_MIN_TOTAL_TRANSACTIONS=100
ALERT_ROLLBACK_RATIO_MIN_ROLLBACKS=10
INIT_ADMIN_EMAIL=admin@example.com
INIT_ADMIN_PASSWORD=ChangeMe123!
# ------------------------------
# Frontend
# ------------------------------
# Keep frontend API base relative to avoid HTTPS mixed-content.
FRONTEND_PORT=5173
VITE_API_URL=/api/v1


@@ -0,0 +1,49 @@
# NGINX reverse proxy profile for NexaPG (HTTPS).
# Replace monitor.example.com and certificate paths.
server {
    listen 80;
    server_name monitor.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name monitor.example.com;

    ssl_certificate /etc/letsencrypt/live/monitor.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/monitor.example.com/privkey.pem;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    # Baseline security headers
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # Frontend app
    location / {
        proxy_pass http://127.0.0.1:5173;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }

    # API forwarding to backend
    location /api/ {
        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }
}


@@ -0,0 +1,38 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[proxy-profile] validating reverse-proxy and mixed-content guardrails"
require_pattern() {
  local file="$1"
  local pattern="$2"
  local message="$3"
  if ! grep -Eq "$pattern" "$file"; then
    echo "[proxy-profile] FAIL: $message ($file)"
    exit 1
  fi
}
# Frontend should default to relative API base in container builds.
require_pattern "frontend/Dockerfile" "ARG VITE_API_URL=/api/v1" \
"VITE_API_URL default must be relative (/api/v1)"
# Frontend runtime proxy should forward /api with forward headers.
require_pattern "frontend/nginx.conf" "location /api/" \
"frontend nginx must proxy /api/"
require_pattern "frontend/nginx.conf" "proxy_set_header X-Forwarded-Proto" \
"frontend nginx must set X-Forwarded-Proto"
require_pattern "frontend/nginx.conf" "proxy_set_header X-Forwarded-For" \
"frontend nginx must set X-Forwarded-For"
require_pattern "frontend/nginx.conf" "proxy_set_header Host" \
"frontend nginx must forward Host"
# Mixed-content guard in frontend API client.
require_pattern "frontend/src/api.js" "window\\.location\\.protocol === \"https:\".*parsed\\.protocol === \"http:\"" \
"frontend api client must contain HTTPS mixed-content protection"
# Production profile must not use wildcard CORS.
require_pattern "ops/profiles/prod/.env.production.example" "^CORS_ORIGINS=https://[^*]+$" \
"production profile must use explicit HTTPS CORS origins"
echo "[proxy-profile] PASS"


@@ -0,0 +1,3 @@
{
  "entries": []
}