
IB Scheduling: Go + Next.js Booking Platform

Next.js · Go · Supabase · PostgreSQL · Railway · Vercel · Prometheus · Grafana

Portfolio booking engine: Supabase Auth + Postgres, Go API (chi, pgx, OIDC JWT) on Railway, Next.js on Vercel, Prometheus/Grafana for metrics - race-safe reservations and full telemetry.


Architecture at a glance

The summary above names the pieces; here is how work is divided: Next.js (TypeScript) owns the product UI and browser session; Go owns booking rules, JWT verification against Supabase's OIDC issuer, and Postgres access via pgx—nothing authoritative runs in the client.

  • Identity: Supabase Auth issues JWTs; Next keeps the session; Go verifies tokens with OIDC (JWKS).
  • Data: Single Postgres (Supabase). Go connects with DATABASE_URL using a service / pooler role - never exposed in the browser.
  • Deploy pattern: Next on Vercel, Go API on Railway (long-lived), CORS restricted to the web origin.

The Go service never sees the Supabase anon key - only the user's access_token as a Bearer JWT, verified against the Auth issuer.

System diagram

How the browser, Next.js, Supabase (Auth + Postgres), and the Go API exchange requests. Observability shows metrics scrape/query (Prometheus/Grafana), not centralized logs.

Zoned like the live app: client → edge → Supabase → Go → observability, with labeled flows (JWT, pgx, scrape, PromQL).

Deployment diagram

How Supabase, Vercel, and Railway services connect in production.

Production shape: Users → Vercel (Next + rewrites) ↔ Supabase Auth/Postgres; Railway runs Go API, /metrics, Prometheus, and Grafana (same layout as go-booking-system deployment diagram).

Telemetry and monitoring

The stack exposes a full metrics path: the Go API serves Prometheus text at /metrics; Prometheus scrapes and stores time-series; Grafana dashboards query Prometheus (e.g. PromQL). This is the operational layer for runtime and app health.

Go booking API dashboard (Grafana)

[Grafana dashboard: Go booking API metrics, including goroutines and other runtime signals.]
Snapshot of the live Go booking API dashboard (last 15 minutes). Use the demo credentials above if Grafana prompts for login.

Go API surface

Public vs JWT-protected routes are mounted in apps/api/cmd/server/main.go. Protected handlers expect Authorization: Bearer <supabase access_token>.

  Method  Path                                  Auth
  GET     /health                               None
  GET     /api/v1/db-status                     Bearer
  GET     /api/v1/availability                  Bearer
  GET     /api/v1/reservations                  Bearer
  POST    /api/v1/reservations                  Bearer
  POST    /api/v1/slots                         Bearer
  POST    /api/v1/benchmark/booking-rush        Bearer
  POST    /api/v1/mimic/notification/email      Bearer
  POST    /api/v1/mimic/notification/whatsapp   Bearer

Database invariants

  • public.resources - bookable entity
  • public.slots - UNIQUE (resource_id, starts_at) - discrete windows
  • public.reservations - UNIQUE (slot_id) - at most one confirmed row per slot (the ON CONFLICT target)
  • reservations.user_id references auth.users(id) - aligns with JWT sub

Booking mechanics (race-safe)

POST /api/v1/reservations with a body { "slot_id": "uuid" } runs in a transaction: lock the slot row, insert reservation, commit. The unique constraint on slot_id plus FOR UPDATE serializes competing bookings; duplicate inserts return 409.
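The serialization guarantee can be demonstrated without a database. In this in-memory analogue (not the project's code), the mutex plays the role of the row lock (`FOR UPDATE`) and the map key plays the role of the `UNIQUE (slot_id)` constraint; the real service gets the same outcome from a single Postgres transaction.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// ErrSlotTaken is the in-memory analogue of a unique-constraint violation,
// which the API surfaces as HTTP 409.
var ErrSlotTaken = errors.New("slot already reserved")

// Store models the invariant: at most one confirmed reservation per slot.
type Store struct {
	mu           sync.Mutex
	reservations map[string]string // slot_id -> user_id
}

func NewStore() *Store {
	return &Store{reservations: make(map[string]string)}
}

// Reserve serializes competing bookings: the first caller for a slot wins,
// every later caller gets ErrSlotTaken. Lock -> check -> insert mirrors
// the lock-the-row, insert, commit sequence in the real transaction.
func (s *Store) Reserve(slotID, userID string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, taken := s.reservations[slotID]; taken {
		return ErrSlotTaken
	}
	s.reservations[slotID] = userID
	return nil
}

func main() {
	store := NewStore()
	var wg sync.WaitGroup
	var mu sync.Mutex
	var confirmed, conflicts int

	// 50 users race for the same slot: a booking-rush in miniature.
	for i := 0; i < 50; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			err := store.Reserve("slot-1", fmt.Sprintf("user-%d", i))
			mu.Lock()
			defer mu.Unlock()
			if err != nil {
				conflicts++
			} else {
				confirmed++
			}
		}(i)
	}
	wg.Wait()
	fmt.Printf("confirmed=%d conflicts=%d\n", confirmed, conflicts)
	// → confirmed=1 conflicts=49
}
```

The key property, here as in Postgres, is that the check and the insert happen under the same lock, so no interleaving can confirm two reservations for one slot.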

GET /api/v1/availability returns open slots (no confirmed reservation) via LEFT JOIN; the UI sends the Supabase access token on every call through a small fetch wrapper.

Source and live demo

Full source and deployment notes live in the repo sibtihaj/go-booking-system. Try the hosted app at go-booking-system.vercel.app and the Architecture pages for interactive diagrams.


Design & Code by Syed Ibtihaj

Actively maintaining this site and pushing new work to GitHub as it ships.

Open for exploration.

© 2026 — Built with Next.js 16