Key concepts¶
Splintertree borrows most of its vocabulary from the AzerothCore / TrinityCore lineage but stretches a few terms to fit a clustered, microservice topology. This page is the glossary you should keep open while you read the rest of the docs.
Cluster¶
A cluster is one Splintertree deployment — a single Helm release, a single Docker Compose stack, or one developer's laptop. A cluster owns one PostgreSQL topology, one NATS bus, one set of worker processes, and one HTTP plane (auth, gateway, web API).
A cluster can host any number of realms, of any combination of game versions, side by side.
Game¶
A game is a client target — Classic (1.12), TBC (2.4.3), or
Wrath (3.3.5a) — recorded in the cluster.games table. Each game
has its own world.* content schema (creatures, quests, items,
DBC tables). One cluster can run several games at once; each
realm maps to exactly one game.
Realm¶
A realm is a row in auth.realmlist — what a player sees in
the in-client realm picker. A realm has:
- A name and a public address.
- A reference to one game (cluster.realm_game).
- A population of characters in characters.*.
- A patch chain in cluster.patches.
- A set of activated mods.
A cluster hosts N realms; a single realm's behaviour is the sum of its game baseline plus its mod stack.
Realm pool¶
A realm pool (auth.realm_pools + auth.realm_pool_members)
groups realms for cross-realm features. Battleground queues, arena
teams, and LFR / LFG matches key on (pool_id, activity_id) so
players from any realm in the pool can fill the same match.
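The pool-scoped queue key is what makes the cross-realm fill work. As a minimal sketch (function names, the activity identifier, and the queue shape are all illustrative, not Splintertree's actual matchmaking API):

```python
from collections import defaultdict

# Entries from any realm in a pool land in the same bucket, because the
# key is (pool_id, activity_id) rather than realm_id.
queues = defaultdict(list)

def enqueue(pool_id, activity_id, realm_id, character_id):
    """Add a player to the cross-realm queue for one activity."""
    queues[(pool_id, activity_id)].append((realm_id, character_id))

def try_fill_match(pool_id, activity_id, team_size):
    """Pop a full match once enough players from the whole pool are waiting."""
    bucket = queues[(pool_id, activity_id)]
    if len(bucket) < team_size:
        return None
    match, queues[(pool_id, activity_id)] = bucket[:team_size], bucket[team_size:]
    return match

# Players from two different realms fill the same bracket:
enqueue(pool_id=1, activity_id="wsg", realm_id=10, character_id=111)
enqueue(pool_id=1, activity_id="wsg", realm_id=20, character_id=222)
print(try_fill_match(1, "wsg", team_size=2))  # → [(10, 111), (20, 222)]
```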
A pool also carries the default phase for its member realms —
operators bump the pool to roll new content out to every realm in
it at once. Individual realms can override the pool's default
through their own current_phase if a single realm needs to lead
or lag the pool.
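The two-layer lookup reduces to a single fallback: the realm's own current_phase wins when set, otherwise the pool default applies. A minimal sketch (phase identifiers are illustrative):

```python
from typing import Optional

def resolve_phase(pool_default_phase: str,
                  realm_current_phase: Optional[str]) -> str:
    """A realm's current_phase override wins; otherwise the realm
    pool's default phase applies."""
    return realm_current_phase or pool_default_phase

# The pool sits on BWL; one beta realm leads while the rest follow.
assert resolve_phase("vanilla-1.6-bwl", None) == "vanilla-1.6-bwl"
assert resolve_phase("vanilla-1.6-bwl", "vanilla-1.9-aq") == "vanilla-1.9-aq"
```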
Connected realms¶
Connected realms are multiple auth.realmlist rows that share
the same cluster.realm_game mapping — they read the same
world.* schema while keeping separate populations and
economies. Useful when you want one merged endgame across what
look like distinct servers to players.
Map¶
A map is a geography (world.map) — Eastern Kingdoms,
Kalimdor, Naxxramas, a battleground arena, an instanced dungeon,
the GM Island debug map. Every map has a map_id. Some maps are
continents (one shared world copy per realm), others are
instanceable (one fresh copy per group, raid, or BG match).
Instance (shard)¶
An instance is one splintertree-world process owning one
slice of the world. The orchestrator hot-spawns instances on
demand. Granularity depends on the map:
- A continent map normally runs as a single long-lived instance. Operators can opt into zone splitting via zone_groups.yaml, carving the same map into N processes that each own a disjoint set of zone IDs (a zone group). Zone hand-off then transfers the player between sibling workers without a realm restart.
- A 5-man dungeon runs as one instance per group.
- A 25-man raid runs as one instance per raid lockout.
- A BG arena runs as one instance per match.
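The zone-splitting opt-in above might be expressed in a file like the following. This is an illustrative sketch only — the key names are assumptions, not the actual zone_groups.yaml schema:

```yaml
# zone_groups.yaml — hypothetical shape; real key names may differ.
# One continent map carved into two worker processes, each owning
# a disjoint set of zone IDs.
map_id: 0
zone_groups:
  - id: north
    zones: [1, 36, 38, 44]   # illustrative zone IDs
  - id: south
    zones: [3, 8, 33, 40]    # illustrative zone IDs
```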
Identifier: (map_id, instance_id) for un-split maps,
(map_id, instance_id, zone_group_id) when the map is zone-sharded.
From the players' point of view it is just "the place they are in
right now"; horizontal scaling means starting more instances.
The shard registry lives in characters.instances and
characters.player_assignments; the orchestrator coordinates
spawn / drain decisions across replicas through PostgreSQL
advisory locks so no instance is started twice.
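The spawn race can be pictured with a local simulation of the advisory-lock pattern. Everything here is illustrative: the real orchestrator takes PostgreSQL advisory locks, and the key-derivation scheme below is an assumption, not Splintertree's actual one:

```python
import hashlib

# Stands in for the database's advisory-lock state; in production this
# would be pg_try_advisory_lock(key) against PostgreSQL.
held_locks = set()

def spawn_lock_key(map_id: int, instance_id: int) -> int:
    """Derive a stable signed 64-bit lock key from the instance identity
    (hypothetical derivation; advisory lock keys are 64-bit integers)."""
    digest = hashlib.sha256(f"spawn:{map_id}:{instance_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big", signed=True)

def try_spawn(map_id: int, instance_id: int) -> bool:
    """Return True only for the first replica to claim this instance."""
    key = spawn_lock_key(map_id, instance_id)
    if key in held_locks:
        return False  # another orchestrator replica already won the race
    held_locks.add(key)
    return True

# Two replicas race to spawn the same dungeon; only one succeeds.
assert try_spawn(36, 42) is True
assert try_spawn(36, 42) is False
```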
Worker¶
A worker is the generic term for any stateful process that
subscribes to a NATS subject and acts on game state. The most
prominent worker is splintertree-world (one per map, or one per
zone group within a split map), but
guild, social, auction, matchmaking, and ticket each run as their
own worker too. Although a worker holds game state in memory, it is
disposable from the cluster's point of view — losing one means
restarting it; the source of truth is PostgreSQL.
Gateway¶
The gateway (splintertree-gateway) is the TCP front-end. It
owns the per-session encryption (ARC4 / header), routes opcodes
onto NATS subjects, and pushes responses back to the client. A
cluster typically runs N gateways behind a load balancer; sessions
stick to whichever gateway accepted the connection.
Auth server¶
The auth server (splintertree-auth) terminates the SRP6
handshake, hands back the realm list, and disappears from the
request path once the session is established. It runs separately
from the gateway because the protocol is different (TCP 3724 with
its own framing) and because it wants its own scale profile.
Orchestrator¶
The orchestrator (splintertree-orchestrator) spawns and
drains splintertree-world containers based on player load,
runs the cross-realm transfer state machine, and applies mod /
patch activations. Multiple orchestrator replicas run behind a
NATS queue group with PostgreSQL advisory locks for spawn
coordination, so no single replica is the leader.
Mod (.rcmod bundle)¶
A mod is a .rcmod archive — Splintertree's unit of
content delivery. It bundles Python scripts, client UI addons,
client-file overrides, idempotent SQL migrations, DBC table
patches, and AC-shaped content rows (creatures, quests, items,
loot, conditions, smart scripts, achievement rewards). Mods are
hot-loadable, per-realm, per-patch, with rollback. See the
Modding section.
Patch¶
A patch (cluster.patches) is a versioned snapshot of a
game's state — a row owned by game_id, not by any single
realm. Each row points at:
- A set of mods installed at that level (cluster.mod_installs).
- A client .MPQ artefact built from the mod stack's DBC and asset overrides, served to the launcher.
- The previous patch in the DAG (parent_id), so authors can fast-forward, branch, or rewind.
- A status (draft / published / archived) — only published patches are eligible for realm activation.
Patches form a directed acyclic graph per game: every game has
its own DAG (cluster.games.current_patch_id is the head), and
every realm picks a position in its game's DAG.
A realm then references that DAG via
cluster.realm_game.pinned_patch_id:
- pinned_patch_id = NULL → the realm rides the latest published patch for its game; new patches roll out automatically.
- pinned_patch_id = &lt;some patch&gt; → the realm is pinned, so upstream publishes do not move it.
The launcher pulls each realm's ancestor chain on connect (walks parent links from the realm's pinned patch back to the seed); the activator pulls and applies mods; the client lands at the right patch level without operator intervention.
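The resolution plus the ancestor walk fit in a few lines. A sketch under stated assumptions — the in-memory dict stands in for cluster.patches, and the patch names are made up:

```python
# Hypothetical in-memory shape of cluster.patches: patch_id -> parent_id
# (None marks the seed). Names and values are illustrative.
patches = {"seed": None, "p1": "seed", "p2": "p1", "p3": "p2"}
latest_published = "p3"  # stands in for cluster.games.current_patch_id

def ancestor_chain(pinned_patch_id):
    """Walk parent links from the realm's effective patch back to the
    seed, mirroring what the launcher pulls on connect."""
    patch = pinned_patch_id or latest_published  # NULL -> ride the head
    chain = []
    while patch is not None:
        chain.append(patch)
        patch = patches[patch]
    return chain

assert ancestor_chain(None) == ["p3", "p2", "p1", "seed"]  # unpinned realm
assert ancestor_chain("p1") == ["p1", "seed"]              # pinned realm
```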
Phase¶
A phase is a content-rollout milestone — Vanilla 1.1 MC + Ony, Vanilla 1.6 BWL, TBC 2.4 Sunwell, WotLK 3.3 ICC, and so on.
Phases are per-game: the set of phases that exists is the set that makes sense for that game's content schedule. Classic has the original Vanilla 1.x rollout, Burning Crusade has the 2.x rollout, Wrath has the 3.x rollout. The shared seed in the repository mirrors the official Blizzard cadence end-to-end so a multi-game cluster can use the same data with each game picking its own slice.
The active phase on a given realm is resolved in two layers:
- Realm pool sets the default. A pool's default phase is what every member realm runs unless told otherwise. Bumping the pool advances the whole pool at once — useful when a connected-realm group should roll new content together.
- Realm can override. An individual realm can pin current_phase to a value different from its pool's default — for a beta / leading realm, a lagging legacy realm, or any one-off rollout shape.
Phases gate which raids, dungeons, quests, and balance rules are live: a realm sitting on the BWL phase sees Blackwing Lair open and Onyxia Attunement up but cannot enter AQ40, even though both exist in the same game's patch DAG.
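The gate boils down to an ordering check: content unlocks once the realm's phase has reached the content's phase. A minimal sketch — phase names and the raid-to-phase mapping are assumptions standing in for the per-game rollout seed data:

```python
# Illustrative rollout order and content mapping, not the real seed data.
PHASE_ORDER = ["1.1-mc", "1.6-bwl", "1.9-aq", "1.11-naxx"]
RAID_PHASE = {"molten-core": "1.1-mc", "blackwing-lair": "1.6-bwl",
              "aq40": "1.9-aq", "naxxramas": "1.11-naxx"}

def raid_is_open(realm_phase: str, raid: str) -> bool:
    """A raid is live once the realm's phase has reached the raid's phase,
    even though every raid exists in the same game's patch DAG."""
    return PHASE_ORDER.index(RAID_PHASE[raid]) <= PHASE_ORDER.index(realm_phase)

# A realm sitting on the BWL phase:
assert raid_is_open("1.6-bwl", "blackwing-lair") is True
assert raid_is_open("1.6-bwl", "aq40") is False
```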
Phases are orthogonal to patches:
- Patches carry the bits — DBC tables, mod content rows, client MPQs. Patches are per-game, owned by cluster.patches.
- Phases carry the rollout state — which of those bits are active right now for a given realm's playerbase. Phases are per-game, defaulted per-pool, overridable per-realm.
Mental model¶
```mermaid
flowchart TD
    Cluster[Cluster<br/><i>one Helm release</i>]
    Cluster --> Game1[Game: Classic]
    Cluster --> Game2[Game: TBC]
    Cluster --> Game3[Game: Wrath]
    Game3 --> Patches[Patch DAG<br/><i>cluster.patches</i>]
    Game3 --> Phases[Phases<br/><i>per-game rollout plan</i>]
    Patches --> Mods[Mods<br/><i>cluster.mod_installs</i>]
    Mods --> Content[Content + scripts + addons]
    Game3 --> Pool[Realm pool<br/><i>default phase</i>]
    Pool --> RealmA[Realm A]
    Pool --> RealmB[Realm B]
    Pool -- "pool default" --> Phases
    RealmA -- "pinned_patch_id" --> Patches
    RealmA -. "override" .-> Phases
    RealmA --> Inst1[Instance: Eastern Kingdoms]
    RealmA --> Inst2[Instance: Deadmines #42]
    RealmA --> Inst3[Instance: WSG match #87]
    classDef topLevel fill:#d2691e,color:#fff,stroke:#a04e15;
    class Cluster topLevel;
```
Players log into a realm; the realm belongs to a game and sits inside a realm pool that carries the default phase; the realm holds its own position in the game's patch DAG and may override the pool's phase if it needs to lead or lag. The cluster spawns instances of maps as players need them. Mods sit underneath patches — the patch is what stamps a specific mod set onto a realm so clients can sync.
Where to dig deeper¶
- Architecture › Overview — request flow and crate map.
- Architecture › Crates — what every worker actually does.
- Architecture › Clustering & multi-realm — pool / connected-realm / cross-realm-transfer details.
- Modding — mod bundle layout and lifecycle.