Deployment¶
Splintertree officially supports one deployment target: Kubernetes via the bundled Helm chart. Docker Compose exists for local development and code study only.
Supported topology
The only officially supported way to run the full cluster is the Helm chart under `deploy/helm/splintertree/` against Kubernetes (k3s for self-hosting, managed Kubernetes for larger setups). Docker Compose is a development convenience: it is not exercised in CI as a production target, the operational primitives (queue groups, advisory locks, PVCs, rolling restarts) are designed around Kubernetes, and the project does not investigate Compose-specific issues. As stated on the home page, Splintertree does not recommend running public realms with this code at all.
Helm (officially supported)¶
The chart lives at `deploy/helm/splintertree/` and targets k3s as well as managed Kubernetes. It provisions:
- Deployments per crate, sized independently.
- A NATS cluster (StatefulSet, 3 replicas by default).
- PostgreSQL StatefulSet with PVCs.
- Redis Deployment.
- Ingress for `splintertree-web-api` + `splintertree-web-frontend`.
- A persistent volume (`fullClient`) for the canonical client tree that the launcher pulls per-locale files from.
Per-realm ConfigMaps drive `splintertree-orchestrator`, so a single chart install can host an arbitrary number of realms.
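Those per-realm ConfigMaps are generated from chart values. As a sketch only, a values fragment for a two-realm install might look like the following; the `realms` key and its fields are invented for illustration, not the chart's real schema (see `deploy/helm/splintertree/values.yaml` for that):

```yaml
# Hypothetical values fragment: one chart install, two realms.
# Key names are illustrative, not the actual chart schema.
realms:
  - name: azeroth
    motd: "Welcome to Azeroth"
  - name: outland
    motd: "Welcome to Outland"
```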
The Helm path is also what the integration test rig and the multi-replica orchestrator behaviour are designed around — queue groups, PostgreSQL advisory locks, and rolling restarts all assume a Kubernetes scheduler.
Topology¶
flowchart TB
subgraph External[" "]
direction LR
Client[WoW client<br/><i>auth + gateway TCP</i>]
Browser[Browser / launcher<br/><i>HTTPS</i>]
end
subgraph K8s["Kubernetes namespace · Helm release"]
direction TB
subgraph Edge["Edge — Service + Ingress"]
direction LR
AuthSvc[Service: auth<br/>:3724/tcp]
GatewaySvc[Service: gateway<br/>:8085/tcp]
Ingress[Ingress<br/>web-api + web-frontend]
end
subgraph Stateless["Stateless Deployments"]
direction LR
Auth[splintertree-auth]
Gateway[splintertree-gateway<br/><i>HPA-scalable</i>]
WebAPI[splintertree-web-api]
WebFE[splintertree-web-frontend]
Orchestrator[splintertree-orchestrator<br/><i>N replicas, queue group</i>]
Support[splintertree-guild · social ·<br/>auction · matchmaking · ticket ·<br/>character]
end
subgraph Workers["Hot-spawned per (map_id, instance_id)"]
direction LR
W1[world: 0/0]
W2[world: 36/41]
W3[world: 489/87]
WN[…]
end
subgraph Stateful["StatefulSets + PVCs"]
direction LR
Postgres[(PostgreSQL<br/>StatefulSet)]
NATS[(NATS cluster<br/>StatefulSet · 3 replicas)]
Redis[(Redis<br/>Deployment)]
FullClient[(PVC: fullClient<br/><i>canonical client tree</i>)]
end
subgraph Config["ConfigMaps · Secrets"]
direction LR
RealmCM[Per-realm ConfigMaps]
Secrets[(Secrets<br/>JWT · DB creds · cosign keys)]
end
Client --> AuthSvc --> Auth
Client --> GatewaySvc --> Gateway
Browser --> Ingress --> WebAPI
Browser --> Ingress --> WebFE
Gateway -- "publish opcodes" --> NATS
WebAPI -- "spawn / drain RPC" --> NATS
Auth --> Postgres
Gateway --> Postgres
WebAPI --> Postgres
WebAPI --> Redis
Support --> NATS
Orchestrator --> NATS
Orchestrator --> Postgres
Orchestrator -- "kubectl apply<br/>(world Pods)" --> W1
Orchestrator --> W2
Orchestrator --> W3
W1 --> NATS
W2 --> NATS
W3 --> NATS
W1 --> Postgres
W2 --> Postgres
W3 --> Postgres
WebAPI -. "client manifest" .-> FullClient
Orchestrator -. "reads" .-> RealmCM
WebAPI -. "reads" .-> Secrets
end
classDef stateful fill:#2b6cb0,color:#fff,stroke:#1c4f8c;
classDef stateless fill:#3a3a3a,color:#fff,stroke:#222;
classDef worker fill:#d2691e,color:#fff,stroke:#a04e15;
classDef external fill:#444,color:#ddd,stroke:#666,stroke-dasharray:3 3;
class Postgres,NATS,Redis,FullClient,Secrets stateful;
class Auth,Gateway,WebAPI,WebFE,Orchestrator,Support stateless;
class W1,W2,W3,WN worker;
class Client,Browser external;
Reading the chart top-down: external clients hit Services / Ingress, stateless Deployments handle protocol termination and the HTTP plane, the orchestrator schedules world Pods per map (or per zone group) on demand, and everything stateful (PostgreSQL, NATS, Redis, the client-tree PVC) sits in StatefulSets / PVCs underneath.
Docker Compose (development only)¶
`compose.yaml` at the repository root brings up the full stack — PostgreSQL, Redis, NATS, auth, gateway, world worker, web API, web frontend — for local code study and contribution work.
Not for hosting
Docker Compose is not a supported deployment target. Operational primitives like advisory-lock leadership, NATS queue-group rebalancing, and rolling restarts are not exercised under Compose, and bug reports for Compose-only issues are out of scope. Use it to read the cluster in motion; use Helm/Kubernetes for anything else.
For backing services only (useful with native cargo builds):
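A hypothetical invocation under that workflow: start only the stateful services in Compose and run the crates natively. The service names passed to `docker compose up` are assumptions about what `compose.yaml` defines, not verified against it:

```shell
# Assumed service names (postgres, redis, nats); check compose.yaml
# for the real ones before running.
docker compose up -d postgres redis nats
# Then run individual crates natively, e.g. the auth service:
cargo run -p splintertree-auth
```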
Local port and env overrides go in `compose.override.yaml` (gitignored).
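A minimal override sketch, using standard Compose merge syntax — the `postgres` service name and the remapped host port are assumptions, not taken from the repository's `compose.yaml`:

```yaml
# Hypothetical compose.override.yaml: remap the host port so a
# natively running PostgreSQL does not collide with the container.
services:
  postgres:
    ports:
      - "15432:5432"
```

Compose merges this file over `compose.yaml` automatically, so no extra flags are needed.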
OCI artefact delivery¶
Both game patches and mods ride the same content-addressable pipeline:
- Patches push to `SPLINTERTREE_PATCH_REGISTRY`; the launcher pulls per realm.
- Mods push to `SPLINTERTREE_MOD_REGISTRY`; the activator pulls on activation.
- Cosign signatures carry through end to end.
The launcher and the activator share the same pull path; signed artefacts mean the chain of custody is verifiable from author to client.
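Checking a pulled artefact against its signature by hand could look like the following; `cosign verify --key` is standard cosign usage, but the registry reference and public-key path here are placeholders:

```shell
# Placeholder image reference and key file; cosign checks the
# signature stored alongside the artefact in the registry.
cosign verify --key cosign.pub registry.example.com/splintertree/patches:2024.1
```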
Encrypted preload¶
Patches with `status=preload` ship AES-256-GCM-encrypted (12-byte nonce header). The key is only released when the patch flips to `active` via `/launcher/patches/<id>/unlock`, so realms can stage content publicly without leaking it early.
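The wire layout described above — a 12-byte GCM nonce followed by the ciphertext — can be sketched as a small parser. The function name and error handling are illustrative, not the project's actual API:

```rust
/// Split an encrypted preload blob into (nonce, ciphertext).
/// AES-GCM nonces are 96 bits (12 bytes), matching the header format.
fn split_preload(blob: &[u8]) -> Option<(&[u8], &[u8])> {
    if blob.len() < 12 {
        return None; // too short to contain even the nonce header
    }
    Some(blob.split_at(12))
}

fn main() {
    // 12-byte nonce header + 20 bytes of (dummy) ciphertext.
    let blob = vec![0u8; 32];
    let (nonce, ciphertext) = split_preload(&blob).expect("blob too short");
    println!("nonce={} ciphertext={}", nonce.len(), ciphertext.len());
}
```

Decryption itself would then feed the nonce, the released key, and the ciphertext to an AES-256-GCM implementation (e.g. the `aes-gcm` crate).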
Roll-forward / roll-back¶
`cluster.patches` stores the realm's full ancestor chain. Activation is a single atomic write; rollback is the same operation pointed at the previous chain head. Mods bundled into the patch follow the same lifecycle.
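Under these assumptions — a head pointer over an ancestor chain, with names invented here rather than taken from the real schema — roll-forward and roll-back both reduce to moving one pointer, which is what makes the write atomic:

```rust
// Illustrative in-memory model of the chain; in the real system the
// head lives in cluster.patches and the move is a single DB write.
struct Patch {
    id: u32,
    parent: Option<u32>, // previous chain head; None for the root
}

struct PatchChain {
    patches: Vec<Patch>,
    head: Option<u32>,
}

impl PatchChain {
    /// Roll forward: the new patch's parent is the current head,
    /// then the head moves to it. One write.
    fn activate(&mut self, id: u32) {
        self.patches.push(Patch { id, parent: self.head });
        self.head = Some(id);
    }

    /// Roll back: the same write, pointed at the previous chain head.
    fn rollback(&mut self) {
        if let Some(h) = self.head {
            self.head = self
                .patches
                .iter()
                .find(|p| p.id == h)
                .and_then(|p| p.parent);
        }
    }
}

fn main() {
    let mut chain = PatchChain { patches: Vec::new(), head: None };
    chain.activate(1);
    chain.activate(2);
    chain.rollback(); // head moves back from patch 2 to patch 1
    println!("head={:?}", chain.head); // prints "head=Some(1)"
}
```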