# Overview
Sixteen workspace crates communicate via NATS pub/sub with JSON messages containing hex-encoded binary packets.
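The wire format details live in the project's shared types, but conceptually each bus message is a small JSON envelope wrapping the raw packet. A minimal sketch of such an envelope, assuming hypothetical field names and `serde` / `hex` for encoding (not the project's actual types):

```rust
use serde::{Deserialize, Serialize};

/// Hypothetical NATS message envelope: an opcode plus the raw binary packet,
/// hex-encoded so it survives JSON transport.
#[derive(Serialize, Deserialize)]
struct PacketEnvelope {
    session_id: u64,     // assumed: identifies the originating gateway session
    opcode: u16,         // client/server opcode
    payload_hex: String, // hex-encoded binary packet body
}

fn encode_envelope(session_id: u64, opcode: u16, body: &[u8]) -> serde_json::Result<String> {
    serde_json::to_string(&PacketEnvelope {
        session_id,
        opcode,
        payload_hex: hex::encode(body),
    })
}
```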
## Request flow
```mermaid
sequenceDiagram
  autonumber
  participant C as Client
  participant A as splintertree-auth<br/>(TCP 3724)
  participant G as splintertree-gateway<br/>(TCP 8085)
  participant N as NATS bus
  participant W as Worker<br/>(world / guild / social / …)
  C->>A: SRP6 handshake
  A-->>C: Realm list + session key
  C->>G: Encrypted session (ARC4 / header)
  G->>N: Publish opcode message
  N->>W: Route to subscriber
  W->>N: Publish response
  N->>G: Forward to gateway
  G-->>C: Encrypted response
```
- `splintertree-auth` terminates the SRP6 handshake, hands back the realm list, and disappears from the request path once the session is established.
- `splintertree-gateway` owns the TCP socket and the per-session ARC4 / header encryption, and routes opcodes onto NATS subjects by handler (a publish sketch follows this list).
- Workers subscribe to their subjects, consume messages, and push response packets back to the gateway over NATS.
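Subject naming is the gateway's routing policy; the subjects below are invented for illustration. A sketch of the gateway-side publish with `async-nats`, assuming the envelope above is already serialized:

```rust
use async_nats::Client;

/// Hypothetical routing table: pick a NATS subject per handler family.
/// The real subject layout is whatever the gateway's handler map defines.
fn subject_for_opcode(opcode: u16) -> String {
    match opcode {
        0x0080..=0x00ff => "splintertree.guild".to_string(), // assumed opcode range
        _ => "splintertree.world".to_string(),               // default: world worker
    }
}

/// Gateway side: after decrypting the packet header, wrap the body in the
/// JSON envelope and publish it to whichever worker family owns the opcode.
async fn publish_opcode(
    nats: &Client,
    opcode: u16,
    envelope_json: String,
) -> Result<(), async_nats::Error> {
    nats.publish(subject_for_opcode(opcode), envelope_json.into())
        .await?;
    Ok(())
}
```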
## World workers per map (or per zone group)
`splintertree-orchestrator` spawns `splintertree-world` once per shard. A shard is either:

- a whole map — continent, dungeon copy, BG / arena match (Eastern Kingdoms, Deadmines #41, WSG #87), or
- a zone group inside a map, when the operator opted into zone splitting via `zone_groups.yaml`. Continents that would saturate one process get carved into N workers, each owning a disjoint set of zone IDs.
Worker key: `(map_id, instance_id)` for un-split maps, `(map_id, instance_id, zone_group_id)` for split ones. Zone hand-off triggers a transfer between workers without a realm restart.
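A minimal way to model that key, together with a subject derivation so the gateway can address exactly one worker per shard (the subject scheme here is an assumption, not the project's actual naming):

```rust
/// Shard identity as described above: a map/instance pair, optionally
/// narrowed to a zone group when the map is split.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct ShardKey {
    map_id: u32,
    instance_id: u32,
    zone_group_id: Option<u16>, // None for un-split maps
}

impl ShardKey {
    /// Hypothetical per-shard NATS subject.
    fn subject(&self) -> String {
        match self.zone_group_id {
            Some(zg) => format!("splintertree.world.{}.{}.{}", self.map_id, self.instance_id, zg),
            None => format!("splintertree.world.{}.{}", self.map_id, self.instance_id),
        }
    }
}
```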
```mermaid
flowchart LR
  Orchestrator[splintertree-orchestrator<br/><i>N replicas</i>]
  PG[(PostgreSQL<br/>advisory locks)]
  W1[world: 0/0 zg=1<br/>EK · north zones]
  W2[world: 0/0 zg=2<br/>EK · south zones]
  W3[world: 36/41<br/>Deadmines #41]
  W4[world: 489/87<br/>WSG #87]
  Orchestrator -- "leader election" --> PG
  Orchestrator -- "spawn / drain" --> W1
  Orchestrator -- "spawn / drain" --> W2
  Orchestrator -- "spawn / drain" --> W3
  Orchestrator -- "spawn / drain" --> W4
  classDef worker fill:#2b6cb0,color:#fff,stroke:#1c4f8c;
  class W1,W2,W3,W4 worker;
```
- The orchestrator runs N replicas behind a NATS queue group, with PostgreSQL advisory locks handling spawn coordination (a lock-acquisition sketch follows this list).
- The shard registry (`characters.instances`, `characters.player_assignments`) is the single source of truth, so any orchestrator replica can take over.
- Per-shard Prometheus metrics are auto-discovered through Docker labels.
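Spawn coordination reduces to "only one replica may spawn a given shard at a time". A sketch of that check with `sqlx` and `pg_try_advisory_lock`, assuming an illustrative key packing (the project's actual lock scheme is not shown here):

```rust
use sqlx::PgConnection;

/// Try to claim the right to spawn a shard. Returns true if this replica won
/// the advisory lock, false if another replica already holds it.
async fn try_claim_shard(
    conn: &mut PgConnection,
    map_id: u32,
    instance_id: u32,
) -> Result<bool, sqlx::Error> {
    // Session-level advisory locks live as long as the connection, so the
    // orchestrator would hold this on a dedicated connection, not a pooled one.
    let key: i64 = ((map_id as i64) << 32) | instance_id as i64; // illustrative packing
    sqlx::query_scalar::<_, bool>("SELECT pg_try_advisory_lock($1)")
        .bind(key)
        .fetch_one(conn)
        .await
}
```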
## Layers
| Layer | Responsibility |
|---|---|
| Edge | `splintertree-auth`, `splintertree-gateway` |
| Stateful workers | `splintertree-world`, plus per-feature crates (guild, social, auction, ticket, matchmaking, character) |
| Orchestration | `splintertree-orchestrator` |
| HTTP plane | `splintertree-web-api` (axum) — admin REST, public REST, launcher API |
| UI | `splintertree-web-frontend` (Quasar / Vue 3 SPA), `splintertree-launcher` (Tauri 2) |
| Tooling | `splintertree-ctl`, `splintertree-dbc`, `splintertree-bench`, `splintertree-test-harness` |
Three components sit outside the NATS bus and talk to the cluster over HTTP: `splintertree-web-api`, `splintertree-web-frontend`, and `splintertree-launcher`.
## Persistence
- PostgreSQL is the primary store. Three logical schemas:
    - `auth.*` — accounts, realmlist, SRP6.
    - `characters.*` — characters, instance registry, transfer state.
    - `world.*` — AzerothCore-parity content (creatures, quests, items, loot, conditions, smart scripts, …).
- Redis caches launcher manifests and other read-mostly data.
- NATS JetStream carries durable subjects where applicable (a stream-declaration sketch follows this list).
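"Where applicable" usually means the subjects that must survive a consumer restart are bound to a JetStream stream. A sketch of declaring such a stream with `async-nats`; the stream name and subject filter are placeholders, not the project's real ones:

```rust
use async_nats::jetstream::{self, stream};

/// Idempotently declare a stream capturing the subjects that need
/// at-least-once, durable delivery (placeholder names).
async fn ensure_durable_stream(client: async_nats::Client) -> Result<(), async_nats::Error> {
    let js = jetstream::new(client);
    js.get_or_create_stream(stream::Config {
        name: "SPLINTERTREE_DURABLE".to_string(),
        subjects: vec!["splintertree.durable.>".to_string()],
        ..Default::default()
    })
    .await?;
    Ok(())
}
```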
Continue with Crates for a per-crate breakdown.