12 Commits
0.1.2 ... 0.1.6

Author SHA1 Message Date
88c3345647 Merge pull request 'Use lighter base images for frontend containers' (#4) from development into main
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 9s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m24s
Reviewed-on: #4
2026-02-13 11:43:59 +00:00
d9f3de9468 Use lighter base images for frontend containers
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 9s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 7s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 8s
Switched the Node.js and Nginx base images from 'bookworm' to 'alpine' variants to reduce image size. Added `apk upgrade --no-cache` in the Nginx stage so the Alpine packages stay up to date. This cuts image size and resource usage.
2026-02-13 11:26:52 +01:00
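For the base-image change above, a quick local size comparison can be done with the Docker CLI. This is only a sketch: the local tag name is hypothetical and it assumes a checkout of the repository with the frontend Dockerfile in `./frontend`.

```bash
# build the frontend image from the repo checkout and print its size
docker build -t nexapg-frontend:local ./frontend
docker images nexapg-frontend:local --format "{{.Repository}}:{{.Tag}} {{.Size}}"
```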
e62aaaf5a0 Merge pull request 'Update base images in Dockerfile to use bookworm variants' (#3) from development into main
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 7s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 2m7s
Reviewed-on: #3
2026-02-13 10:20:20 +00:00
ef84273868 Update base images in Dockerfile to use bookworm variants
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 8s
Replaced the alpine variants with bookworm-slim for Node.js and bookworm for nginx. This keeps both images on the same Debian base and in line with the latest updates, improving consistency across images. Adjusted the nginx health check command accordingly.
2026-02-13 11:15:17 +01:00
6c59b21088 Merge pull request 'Add first and last name fields for users' (#2) from development into main
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 1m13s
Reviewed-on: #2
2026-02-13 10:09:02 +00:00
cd1795b9ff Add first and last name fields for users
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 12s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 11s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 9s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 10s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 11s
This commit introduces optional `first_name` and `last_name` fields to the user model, including database migrations, backend, and frontend support. It enhances user profiles, updates user creation and editing flows, and refines the UI to display full names where available.
2026-02-13 10:57:10 +01:00
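As a sketch of the extended API surface, creating a user with the new fields could look like the call below. The host/port, `$TOKEN`, and example values are assumptions; the field names come from the `UserCreate` schema in this changeset.

```bash
# requires an admin bearer token in $TOKEN; backend assumed on localhost:8000
curl -sS -X POST "http://localhost:8000/api/v1/admin/users" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"email": "jane.doe@example.com", "first_name": "Jane", "last_name": "Doe", "password": "ChangeMe123!", "role": "viewer"}'
```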
e0242bc823 Refactor deployment process to use prebuilt Docker images
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Replaced local builds with prebuilt backend and frontend Docker images for simplified deployment. Updated documentation and Makefile to reflect the changes and added a bootstrap script for quick setup of deployment files. Removed deprecated `VITE_API_URL` variable and references to streamline the setup.
2026-02-13 10:43:34 +01:00
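With prebuilt images, a deployment host never builds locally. Pulling them by hand is equivalent to what the updated `make up` target does; the image names are the ones referenced in the compose file below.

```bash
# pull the published images directly
docker pull nesterovicit/nexapg-backend:latest
docker pull nesterovicit/nexapg-frontend:latest
# or, from the deployment directory, let compose do it
docker compose pull && docker compose up -d
```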
75f8106ca5 Merge pull request 'Merge Fixes and Technical changes from development into main branch' (#1) from development into main
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (push) Successful in 7s
PostgreSQL Compatibility Matrix / PG15 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (push) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (push) Successful in 8s
Docker Publish (Release) / Build and Push Docker Images (release) Successful in 4m30s
Reviewed-on: #1
2026-02-13 09:13:04 +00:00
4e4f8ad5d4 Update NEXAPG version to 0.1.3
All checks were successful
PostgreSQL Compatibility Matrix / PG14 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG15 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG16 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG17 smoke (pull_request) Successful in 8s
PostgreSQL Compatibility Matrix / PG18 smoke (pull_request) Successful in 8s
This increments the application version from 0.1.2 to 0.1.3 to pick up the accumulated bug fixes and minor improvements.
2026-02-13 10:11:00 +01:00
5c5d51350f Fix stale refresh usage in QueryInsightsPage effect hooks
Stored `refresh` in a `useRef` so async operations always read the latest value instead of a stale closure. Added cleanup logic so API calls that resolve after the component unmounts no longer trigger state updates.
2026-02-13 10:06:56 +01:00
ba1559e790 Improve agentless mode messaging for host-level metrics
Updated the messaging and UI to make clear that host-level metrics such as CPU, RAM, and free disk space are unavailable in agentless mode. Added consistent formatting and new frontend helpers that handle the missing metrics gracefully.
2026-02-13 10:01:24 +01:00
ab9d03be8a Add GitHub Actions workflow for Docker image release
This workflow automates building and publishing Docker images upon a release or manual trigger. It includes steps for version resolution, Docker Hub login, and caching to optimize builds for both backend and frontend images.
2026-02-13 09:55:08 +01:00
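The version-resolution step in that workflow amounts to stripping an optional leading `v` with shell parameter expansion; a standalone check (the tag value here is just an example):

```bash
RAW_TAG="v0.1.6"
echo "${RAW_TAG#v}"   # prints 0.1.6, which is used as the image tag
```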
18 changed files with 438 additions and 55 deletions

View File

@@ -58,7 +58,3 @@ INIT_ADMIN_PASSWORD=ChangeMe123!
# ------------------------------
# Host port mapped to frontend container port 80.
FRONTEND_PORT=5173
# Base API URL used at frontend build time.
# For reverse proxy + SSL, keep this relative to avoid mixed-content issues.
# Example direct mode: VITE_API_URL=http://localhost:8000/api/v1
VITE_API_URL=/api/v1

.github/workflows/docker-release.yml (new file, 91 lines)

@@ -0,0 +1,91 @@
name: Docker Publish (Release)

on:
  release:
    types: [published]
  workflow_dispatch:
    inputs:
      version:
        description: "Version tag to publish (e.g. 0.1.2 or v0.1.2)"
        required: false
        type: string

jobs:
  publish:
    name: Build and Push Docker Images
    runs-on: ubuntu-latest
    permissions:
      contents: read
    env:
      # Optional repo variable. If unset, DOCKERHUB_USERNAME is used.
      IMAGE_NAMESPACE: ${{ vars.DOCKERHUB_NAMESPACE }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Resolve version/tag
        id: ver
        shell: bash
        run: |
          RAW_TAG="${{ github.event.release.tag_name }}"
          if [ -z "$RAW_TAG" ]; then
            RAW_TAG="${{ inputs.version }}"
          fi
          if [ -z "$RAW_TAG" ]; then
            RAW_TAG="${GITHUB_REF_NAME}"
          fi
          CLEAN_TAG="${RAW_TAG#v}"
          echo "raw=$RAW_TAG" >> "$GITHUB_OUTPUT"
          echo "clean=$CLEAN_TAG" >> "$GITHUB_OUTPUT"

      - name: Set image namespace
        id: ns
        shell: bash
        run: |
          NS="${IMAGE_NAMESPACE}"
          if [ -z "$NS" ]; then
            NS="${{ secrets.DOCKERHUB_USERNAME }}"
          fi
          if [ -z "$NS" ]; then
            echo "Missing Docker Hub namespace. Set repo var DOCKERHUB_NAMESPACE or secret DOCKERHUB_USERNAME."
            exit 1
          fi
          echo "value=$NS" >> "$GITHUB_OUTPUT"

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push backend image
        uses: docker/build-push-action@v6
        with:
          context: ./backend
          file: ./backend/Dockerfile
          push: true
          tags: |
            ${{ steps.ns.outputs.value }}/nexapg-backend:${{ steps.ver.outputs.clean }}
            ${{ steps.ns.outputs.value }}/nexapg-backend:latest
          cache-from: type=registry,ref=${{ steps.ns.outputs.value }}/nexapg-backend:buildcache
          cache-to: type=registry,ref=${{ steps.ns.outputs.value }}/nexapg-backend:buildcache,mode=max

      - name: Build and push frontend image
        uses: docker/build-push-action@v6
        with:
          context: ./frontend
          file: ./frontend/Dockerfile
          push: true
          build-args: |
            VITE_API_URL=/api/v1
          tags: |
            ${{ steps.ns.outputs.value }}/nexapg-frontend:${{ steps.ver.outputs.clean }}
            ${{ steps.ns.outputs.value }}/nexapg-frontend:latest
          cache-from: type=registry,ref=${{ steps.ns.outputs.value }}/nexapg-frontend:buildcache
          cache-to: type=registry,ref=${{ steps.ns.outputs.value }}/nexapg-frontend:buildcache,mode=max

View File

@@ -1,7 +1,8 @@
.PHONY: up down logs migrate
up:
docker compose up -d --build
docker compose pull
docker compose up -d
down:
docker compose down

View File

@@ -9,7 +9,7 @@ It combines FastAPI, React, and PostgreSQL in a Docker Compose stack with RBAC,
## Table of Contents
- [Quick Start](#quick-start)
- [Quick Deploy (Prebuilt Images)](#quick-deploy-prebuilt-images)
- [Prerequisites](#prerequisites)
- [Make Commands](#make-commands)
- [Configuration Reference (`.env`)](#configuration-reference-env)
@@ -93,27 +93,50 @@ Optional:
- `psql` for manual DB checks
## Quick Start
## Quick Deploy (Prebuilt Images)
1. Copy environment template:
If you only want to run NexaPG from published Docker Hub images, use the bootstrap script:
```bash
cp .env.example .env
mkdir -p /opt/NexaPG
cd /opt/NexaPG
wget -O bootstrap-compose.sh https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main/ops/scripts/bootstrap-compose.sh
chmod +x bootstrap-compose.sh
./bootstrap-compose.sh
```
2. Generate a Fernet key and set `ENCRYPTION_KEY` in `.env`:
This downloads:
- `docker-compose.yml`
- `.env.example`
- `Makefile`
Then:
```bash
# generate JWT secret
python -c "import secrets; print(secrets.token_urlsafe(64))"
# generate Fernet encryption key
python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
```
3. Start the stack:
```bash
# put both values into .env (JWT_SECRET_KEY / ENCRYPTION_KEY)
# note: .env is auto-created by bootstrap if it does not exist
make up
```
4. Open the application:
Manual download alternative:
```bash
mkdir -p /opt/NexaPG
cd /opt/NexaPG
wget https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main/docker-compose.yml
wget https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main/.env.example
wget https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main/Makefile
cp .env.example .env
```
`make up` pulls `nesterovicit/nexapg-backend:latest` and `nesterovicit/nexapg-frontend:latest`, then starts the stack.
Open the application:
- Frontend: `http://<SERVER_IP>:<FRONTEND_PORT>`
- API base: `http://<SERVER_IP>:<BACKEND_PORT>/api/v1`
@@ -127,7 +150,7 @@ Initial admin bootstrap user (created from `.env` if missing):
## Make Commands
```bash
make up # build and start all services
make up # pull latest images and start all services
make down # stop all services
make logs # follow compose logs
make migrate # optional/manual: run alembic upgrade head in backend container
@@ -183,12 +206,6 @@ Note: Migrations run automatically when the backend container starts (`entrypoin
| Variable | Description |
|---|---|
| `FRONTEND_PORT` | Host port mapped to frontend container port `80` |
| `VITE_API_URL` | Frontend API base URL (build-time) |
Recommended values for `VITE_API_URL`:
- Reverse proxy setup: `/api/v1`
- Direct backend access: `http://<SERVER_IP>:<BACKEND_PORT>/api/v1`
## Core Functional Areas
@@ -318,7 +335,7 @@ For production, serve frontend and API under the same public origin via reverse
- Frontend URL example: `https://monitor.example.com`
- Proxy API path `/api/` to backend service
- Use `VITE_API_URL=/api/v1`
- Route `/api/v1` to the backend service
This prevents mixed-content and CORS issues.
@@ -351,8 +368,7 @@ docker compose logs --tail=200 db
### CORS or mixed-content issues behind SSL proxy
- Set `VITE_API_URL=/api/v1`
- Ensure proxy forwards `/api/` to backend
- Ensure proxy forwards `/api/` (or `/api/v1`) to backend
- Set correct frontend origin(s) in `CORS_ORIGINS`
### `rejected SSL upgrade` for a target

View File

@@ -0,0 +1,26 @@
"""add user first and last name fields
Revision ID: 0009_user_profile_fields
Revises: 0008_service_settings
Create Date: 2026-02-13
"""
from alembic import op
import sqlalchemy as sa
revision = "0009_user_profile_fields"
down_revision = "0008_service_settings"
branch_labels = None
depends_on = None
def upgrade() -> None:
op.add_column("users", sa.Column("first_name", sa.String(length=120), nullable=True))
op.add_column("users", sa.Column("last_name", sa.String(length=120), nullable=True))
def downgrade() -> None:
op.drop_column("users", "last_name")
op.drop_column("users", "first_name")

View File

@@ -23,7 +23,13 @@ async def create_user(payload: UserCreate, admin: User = Depends(require_roles("
exists = await db.scalar(select(User).where(User.email == payload.email))
if exists:
raise HTTPException(status_code=409, detail="Email already exists")
user = User(email=payload.email, password_hash=hash_password(payload.password), role=payload.role)
user = User(
email=payload.email,
first_name=payload.first_name,
last_name=payload.last_name,
password_hash=hash_password(payload.password),
role=payload.role,
)
db.add(user)
await db.commit()
await db.refresh(user)
@@ -42,8 +48,15 @@ async def update_user(
if not user:
raise HTTPException(status_code=404, detail="User not found")
update_data = payload.model_dump(exclude_unset=True)
if "password" in update_data and update_data["password"]:
user.password_hash = hash_password(update_data.pop("password"))
next_email = update_data.get("email")
if next_email and next_email != user.email:
existing = await db.scalar(select(User).where(User.email == next_email))
if existing and existing.id != user.id:
raise HTTPException(status_code=409, detail="Email already exists")
if "password" in update_data:
raw_password = update_data.pop("password")
if raw_password:
user.password_hash = hash_password(raw_password)
for key, value in update_data.items():
setattr(user, key, value)
await db.commit()

View File

@@ -2,7 +2,7 @@ from functools import lru_cache
from pydantic import field_validator
from pydantic_settings import BaseSettings, SettingsConfigDict
NEXAPG_VERSION = "0.1.2"
NEXAPG_VERSION = "0.1.4"
class Settings(BaseSettings):

View File

@@ -9,6 +9,8 @@ class User(Base):
id: Mapped[int] = mapped_column(Integer, primary_key=True)
email: Mapped[str] = mapped_column(String(255), unique=True, index=True, nullable=False)
first_name: Mapped[str | None] = mapped_column(String(120), nullable=True)
last_name: Mapped[str | None] = mapped_column(String(120), nullable=True)
password_hash: Mapped[str] = mapped_column(String(255), nullable=False)
role: Mapped[str] = mapped_column(String(20), nullable=False, default="viewer")
created_at: Mapped[datetime] = mapped_column(DateTime(timezone=True), server_default=func.now(), nullable=False)

View File

@@ -5,6 +5,8 @@ from pydantic import BaseModel, EmailStr, field_validator
class UserOut(BaseModel):
id: int
email: EmailStr
first_name: str | None = None
last_name: str | None = None
role: str
created_at: datetime
@@ -13,12 +15,16 @@ class UserOut(BaseModel):
class UserCreate(BaseModel):
email: EmailStr
first_name: str | None = None
last_name: str | None = None
password: str
role: str = "viewer"
class UserUpdate(BaseModel):
email: EmailStr | None = None
first_name: str | None = None
last_name: str | None = None
password: str | None = None
role: str | None = None

View File

@@ -17,10 +17,10 @@ class DiskSpaceProvider:
class NullDiskSpaceProvider(DiskSpaceProvider):
async def get_free_bytes(self, target_host: str) -> DiskSpaceProbeResult:
return DiskSpaceProbeResult(
source="none",
source="agentless",
status="unavailable",
free_bytes=None,
message=f"No infra probe configured for host {target_host}. Add SSH/Agent provider later.",
message=f"Agentless mode: host-level free disk is not available for {target_host}.",
)

View File

@@ -18,8 +18,8 @@ services:
retries: 10
backend:
build:
context: ./backend
image: nesterovicit/nexapg-backend:latest
pull_policy: always
container_name: nexapg-backend
restart: unless-stopped
environment:
@@ -47,10 +47,8 @@ services:
- "${BACKEND_PORT}:8000"
frontend:
build:
context: ./frontend
args:
VITE_API_URL: ${VITE_API_URL}
image: nesterovicit/nexapg-frontend:latest
pull_policy: always
container_name: nexapg-frontend
restart: unless-stopped
depends_on:

View File

@@ -7,8 +7,9 @@ ARG VITE_API_URL=/api/v1
ENV VITE_API_URL=${VITE_API_URL}
RUN npm run build
FROM nginx:1.29-alpine
FROM nginx:1.29-alpine-slim
RUN apk upgrade --no-cache
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
HEALTHCHECK --interval=30s --timeout=3s --retries=5 CMD wget -qO- http://127.0.0.1/ || exit 1
HEALTHCHECK --interval=30s --timeout=3s --retries=5 CMD nginx -t || exit 1
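To confirm the revised health check on a running container, one option is to inspect Docker's health state; this is a sketch that assumes the stack is already up and uses the container name from the compose file.

```bash
# report the current health status of the frontend container
docker inspect --format '{{json .State.Health.Status}}' nexapg-frontend
# or run the same command the HEALTHCHECK uses
docker exec nexapg-frontend nginx -t
```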

View File

@@ -22,6 +22,7 @@ function Layout({ children }) {
const { me, logout, uiMode, setUiMode, alertToasts, dismissAlertToast, serviceUpdateAvailable } = useAuth();
const navigate = useNavigate();
const navClass = ({ isActive }) => `nav-btn${isActive ? " active" : ""}`;
const fullName = [me?.first_name, me?.last_name].filter(Boolean).join(" ").trim();
return (
<div className="shell">
@@ -101,8 +102,9 @@ function Layout({ children }) {
</button>
<small>{uiMode === "easy" ? "Simple health guidance" : "Advanced DBA metrics"}</small>
</div>
<div>{me?.email}</div>
<div className="role">{me?.role}</div>
<div className="profile-name">{fullName || me?.email}</div>
{fullName && <div className="profile-email">{me?.email}</div>}
<div className="role profile-role">{me?.role}</div>
<NavLink to="/user-settings" className={({ isActive }) => `profile-btn${isActive ? " active" : ""}`}>
User Settings
</NavLink>

View File

@@ -19,8 +19,11 @@ const TEMPLATE_VARIABLES = [
export function AdminUsersPage() {
const { tokens, refresh, me } = useAuth();
const emptyCreateForm = { email: "", first_name: "", last_name: "", password: "", role: "viewer" };
const [users, setUsers] = useState([]);
const [form, setForm] = useState({ email: "", password: "", role: "viewer" });
const [form, setForm] = useState(emptyCreateForm);
const [editingUserId, setEditingUserId] = useState(null);
const [editForm, setEditForm] = useState({ email: "", first_name: "", last_name: "", password: "", role: "viewer" });
const [emailSettings, setEmailSettings] = useState({
enabled: false,
smtp_host: "",
@@ -79,7 +82,7 @@ export function AdminUsersPage() {
e.preventDefault();
try {
await apiFetch("/admin/users", { method: "POST", body: JSON.stringify(form) }, tokens, refresh);
setForm({ email: "", password: "", role: "viewer" });
setForm(emptyCreateForm);
await load();
} catch (e) {
setError(String(e.message || e));
@@ -95,6 +98,39 @@ export function AdminUsersPage() {
}
};
const startEdit = (user) => {
setEditingUserId(user.id);
setEditForm({
email: user.email || "",
first_name: user.first_name || "",
last_name: user.last_name || "",
password: "",
role: user.role || "viewer",
});
};
const cancelEdit = () => {
setEditingUserId(null);
setEditForm({ email: "", first_name: "", last_name: "", password: "", role: "viewer" });
};
const saveEdit = async (userId) => {
try {
const payload = {
email: editForm.email,
first_name: editForm.first_name.trim() || null,
last_name: editForm.last_name.trim() || null,
role: editForm.role,
};
if (editForm.password.trim()) payload.password = editForm.password;
await apiFetch(`/admin/users/${userId}`, { method: "PUT", body: JSON.stringify(payload) }, tokens, refresh);
cancelEdit();
await load();
} catch (e) {
setError(String(e.message || e));
}
};
const saveSmtp = async (e) => {
e.preventDefault();
setError("");
@@ -165,6 +201,22 @@ export function AdminUsersPage() {
<p className="muted">Create accounts and manage access roles.</p>
</div>
<form className="grid three admin-user-form" onSubmit={create}>
<div className="admin-field">
<label>First Name</label>
<input
value={form.first_name}
placeholder="Jane"
onChange={(e) => setForm({ ...form, first_name: e.target.value })}
/>
</div>
<div className="admin-field">
<label>Last Name</label>
<input
value={form.last_name}
placeholder="Doe"
onChange={(e) => setForm({ ...form, last_name: e.target.value })}
/>
</div>
<div className="admin-field">
<label>Email</label>
<input value={form.email} placeholder="user@example.com" onChange={(e) => setForm({ ...form, email: e.target.value })} />
@@ -197,6 +249,7 @@ export function AdminUsersPage() {
<thead>
<tr>
<th>ID</th>
<th>Name</th>
<th>Email</th>
<th>Role</th>
<th>Action</th>
@@ -206,11 +259,70 @@ export function AdminUsersPage() {
{users.map((u) => (
<tr key={u.id} className="admin-user-row">
<td className="user-col-id">{u.id}</td>
<td className="user-col-email">{u.email}</td>
<td>
<span className={`pill role-pill role-${u.role}`}>{u.role}</span>
<td className="user-col-name">
{editingUserId === u.id ? (
<div className="admin-inline-grid two">
<input
value={editForm.first_name}
placeholder="First name"
onChange={(e) => setEditForm({ ...editForm, first_name: e.target.value })}
/>
<input
value={editForm.last_name}
placeholder="Last name"
onChange={(e) => setEditForm({ ...editForm, last_name: e.target.value })}
/>
</div>
) : (
<span className="user-col-name-value">{[u.first_name, u.last_name].filter(Boolean).join(" ") || "-"}</span>
)}
</td>
<td className="user-col-email">
{editingUserId === u.id ? (
<input
value={editForm.email}
placeholder="user@example.com"
onChange={(e) => setEditForm({ ...editForm, email: e.target.value })}
/>
) : (
u.email
)}
</td>
<td>
{editingUserId === u.id ? (
<select value={editForm.role} onChange={(e) => setEditForm({ ...editForm, role: e.target.value })}>
<option value="viewer">viewer</option>
<option value="operator">operator</option>
<option value="admin">admin</option>
</select>
) : (
<span className={`pill role-pill role-${u.role}`}>{u.role}</span>
)}
</td>
<td className="admin-user-actions">
{editingUserId === u.id && (
<input
type="password"
className="admin-inline-password"
value={editForm.password}
placeholder="New password (optional)"
onChange={(e) => setEditForm({ ...editForm, password: e.target.value })}
/>
)}
{editingUserId === u.id ? (
<>
<button className="table-action-btn primary small-btn" onClick={() => saveEdit(u.id)}>
Save
</button>
<button className="table-action-btn small-btn" onClick={cancelEdit}>
Cancel
</button>
</>
) : (
<button className="table-action-btn edit small-btn" onClick={() => startEdit(u)}>
Edit
</button>
)}
{u.id !== me.id && (
<button className="table-action-btn delete small-btn" onClick={() => remove(u.id)}>
<span aria-hidden="true">

View File

@@ -1,4 +1,4 @@
import React, { useEffect, useState } from "react";
import React, { useEffect, useRef, useState } from "react";
import { apiFetch } from "../api";
import { useAuth } from "../state";
@@ -62,6 +62,7 @@ function buildQueryTips(row) {
export function QueryInsightsPage() {
const { tokens, refresh } = useAuth();
const refreshRef = useRef(refresh);
const [targets, setTargets] = useState([]);
const [targetId, setTargetId] = useState("");
const [rows, setRows] = useState([]);
@@ -71,6 +72,10 @@ export function QueryInsightsPage() {
const [error, setError] = useState("");
const [loading, setLoading] = useState(true);
useEffect(() => {
refreshRef.current = refresh;
}, [refresh]);
useEffect(() => {
(async () => {
try {
@@ -89,17 +94,26 @@ export function QueryInsightsPage() {
useEffect(() => {
if (!targetId) return;
let active = true;
(async () => {
try {
const data = await apiFetch(`/targets/${targetId}/top-queries`, {}, tokens, refresh);
const data = await apiFetch(`/targets/${targetId}/top-queries`, {}, tokens, refreshRef.current);
if (!active) return;
setRows(data);
setSelectedQuery(data[0] || null);
setPage(1);
setSelectedQuery((prev) => {
if (!prev) return data[0] || null;
const keep = data.find((row) => row.queryid === prev.queryid);
return keep || data[0] || null;
});
setPage((prev) => (prev === 1 ? prev : 1));
} catch (e) {
setError(String(e.message || e));
if (active) setError(String(e.message || e));
}
})();
}, [targetId, tokens, refresh]);
return () => {
active = false;
};
}, [targetId, tokens?.accessToken, tokens?.refreshToken]);
const dedupedByQueryId = [...rows].reduce((acc, row) => {
if (!row?.queryid) return acc;

View File

@@ -41,6 +41,19 @@ function formatNumber(value, digits = 2) {
return Number(value).toFixed(digits);
}
function formatHostMetricUnavailable() {
return "N/A (agentless)";
}
function formatDiskSpaceAgentless(diskSpace) {
if (!diskSpace) return formatHostMetricUnavailable();
if (diskSpace.free_bytes !== null && diskSpace.free_bytes !== undefined) {
return formatBytes(diskSpace.free_bytes);
}
if (diskSpace.status === "unavailable") return formatHostMetricUnavailable();
return "-";
}
function MetricsTooltip({ active, payload, label }) {
if (!active || !payload || payload.length === 0) return null;
const row = payload[0]?.payload || {};
@@ -346,6 +359,9 @@ export function TargetDetailPage() {
{uiMode === "dba" && overview && (
<div className="card">
<h3>Database Overview</h3>
<p className="muted" style={{ marginTop: 2 }}>
Agentless mode: host-level CPU, RAM, and free-disk metrics are not available.
</p>
<div className="grid three overview-kv">
<div><span>PostgreSQL Version</span><strong>{overview.instance.server_version || "-"}</strong></div>
<div>
@@ -366,8 +382,8 @@ export function TargetDetailPage() {
<div title="Total WAL directory size (when available)">
<span>WAL Size</span><strong>{formatBytes(overview.storage.wal_directory_size_bytes)}</strong>
</div>
<div title="Optional metric via future Agent/SSH provider">
<span>Free Disk</span><strong>{formatBytes(overview.storage.disk_space.free_bytes)}</strong>
<div title={overview.storage.disk_space?.message || "Agentless mode: host-level free disk is unavailable."}>
<span>Free Disk</span><strong>{formatDiskSpaceAgentless(overview.storage.disk_space)}</strong>
</div>
<div title="Replication replay delay on standby">
<span>Replay Lag</span>
@@ -378,6 +394,12 @@ export function TargetDetailPage() {
<div><span>Replication Slots</span><strong>{overview.replication.replication_slots_count ?? "-"}</strong></div>
<div><span>Repl Clients</span><strong>{overview.replication.active_replication_clients ?? "-"}</strong></div>
<div><span>Autovacuum Workers</span><strong>{overview.performance.autovacuum_workers ?? "-"}</strong></div>
<div title="Host CPU requires OS-level telemetry">
<span>Host CPU</span><strong>{formatHostMetricUnavailable()}</strong>
</div>
<div title="Host RAM requires OS-level telemetry">
<span>Host RAM</span><strong>{formatHostMetricUnavailable()}</strong>
</div>
</div>
<div className="grid two">

View File

@@ -195,6 +195,23 @@ a {
color: #d7e4fa;
}
.profile-name {
font-size: 15px;
font-weight: 700;
line-height: 1.25;
}
.profile-email {
margin-top: 2px;
font-size: 12px;
color: #a6bcda;
word-break: break-all;
}
.profile-role {
margin-top: 4px;
}
.mode-switch-block {
margin-bottom: 12px;
padding: 10px;
@@ -1255,6 +1272,31 @@ td {
font-weight: 600;
}
.user-col-name-value {
font-weight: 600;
}
.admin-inline-grid {
display: grid;
gap: 8px;
}
.admin-inline-grid.two {
grid-template-columns: repeat(2, minmax(0, 1fr));
}
.admin-inline-password {
min-width: 190px;
}
.admin-user-actions {
display: flex;
gap: 8px;
align-items: center;
justify-content: flex-start;
flex-wrap: wrap;
}
.role-pill {
display: inline-flex;
align-items: center;

View File

@@ -0,0 +1,41 @@
#!/usr/bin/env bash
set -euo pipefail
# Usage:
# bash bootstrap-compose.sh
# BASE_URL="https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main" bash bootstrap-compose.sh
BASE_URL="${BASE_URL:-https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main}"
echo "[bootstrap] Using base URL: ${BASE_URL}"
fetch_file() {
  local path="$1"
  local out="$2"

  if command -v wget >/dev/null 2>&1; then
    wget -q -O "${out}" "${BASE_URL}/${path}"
  elif command -v curl >/dev/null 2>&1; then
    curl -fsSL "${BASE_URL}/${path}" -o "${out}"
  else
    echo "[bootstrap] ERROR: wget or curl is required"
    exit 1
  fi
}

fetch_file "docker-compose.yml" "docker-compose.yml"
fetch_file ".env.example" ".env.example"
fetch_file "Makefile" "Makefile"

if [[ ! -f ".env" ]]; then
  cp .env.example .env
  echo "[bootstrap] Created .env from .env.example"
else
  echo "[bootstrap] .env already exists, keeping it"
fi
echo
echo "[bootstrap] Next steps:"
echo " 1) Edit .env (set JWT_SECRET_KEY and ENCRYPTION_KEY at minimum)"
echo " 2) Run: make up"
echo
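Putting the script and the README changes together, a typical first run on a fresh host might look like this; paths and URL are taken from the Quick Deploy section above, and the secrets still need to be filled in by hand.

```bash
mkdir -p /opt/NexaPG && cd /opt/NexaPG
wget -O bootstrap-compose.sh https://git.nesterovic.cc/nessi/NexaPG/raw/branch/main/ops/scripts/bootstrap-compose.sh
chmod +x bootstrap-compose.sh
./bootstrap-compose.sh
# edit .env: set JWT_SECRET_KEY and ENCRYPTION_KEY, then
make up
```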