LLM, moderation & translation¶
Splintertree's LLM-driven features all read from a shared assignment table, so a single provider switch (e.g. swapping from Ollama Cloud to a local llama.cpp instance) updates every consumer.
Providers + models¶
- `/admin/llm/providers` (`world.llm_provider`) — base URL, API key, kind (`openai` / `ollama`). Supports any OpenAI-compatible endpoint.
- `/admin/llm/models` (`world.llm_model`) — per-provider model registry with name, context window, and enabled flag.
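A minimal sketch of how a provider row might resolve to a chat endpoint. The type and method names (`ProviderKind`, `LlmProvider`, `completions_url`) are illustrative, not the real schema; the `openai` kind targets the standard OpenAI-compatible `/v1/chat/completions` path, while Ollama also exposes its native `/api/chat`:

```rust
// Hypothetical sketch — field and method names are assumptions, not the real schema.

#[derive(Debug, Clone, Copy, PartialEq)]
enum ProviderKind {
    OpenAi, // any OpenAI-compatible endpoint
    Ollama,
}

struct LlmProvider {
    base_url: String,
    kind: ProviderKind,
}

impl LlmProvider {
    /// Resolve the chat endpoint for this provider kind.
    fn completions_url(&self) -> String {
        let base = self.base_url.trim_end_matches('/');
        match self.kind {
            ProviderKind::OpenAi => format!("{base}/v1/chat/completions"),
            ProviderKind::Ollama => format!("{base}/api/chat"), // Ollama's native chat API
        }
    }
}

fn main() {
    let local = LlmProvider {
        base_url: "http://localhost:11434/".into(),
        kind: ProviderKind::Ollama,
    };
    println!("{}", local.completions_url());
}
```

Because Ollama also serves an OpenAI-compatible `/v1` surface, a deployment could register it with kind `openai` instead and use the same request shape for both.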
Assignments¶
`world.llm_assignment` maps a use case to a model:
| Use case | Default | Surface |
|---|---|---|
| `moderation` | glm-5.1 | Forum + chat moderation pipeline |
| `chat_translate` | glm-5.1 | In-game chat translation |
| `npc_ai` | glm-5.1 | NPC dialogue agent |
| `playbot_ai` | glm-5.1 | Player-bot decision agent |
| `ticket_assist` | glm-5.1 | GM ticket reply drafter |
Each assignment carries a `system_prompt` editable inline. The `autonomous` flag controls whether the agent acts without operator confirmation (default: false except for `playbot_ai` and `npc_ai`).
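The defaults in the assignment table above can be sketched as seed data. This is a hedged illustration, assuming an in-memory map keyed by use-case name; the struct fields mirror the documented columns (`model`, `system_prompt`, `autonomous`) but the helper names are invented:

```rust
// Illustrative sketch of seeding world.llm_assignment defaults — names are assumptions.
use std::collections::HashMap;

#[derive(Clone)]
struct LlmAssignment {
    model: String,
    system_prompt: String, // edited inline in the admin UI
    autonomous: bool,
}

/// Per the docs: autonomous defaults to false except for npc_ai and playbot_ai.
fn default_autonomous(use_case: &str) -> bool {
    matches!(use_case, "npc_ai" | "playbot_ai")
}

fn seed_assignments() -> HashMap<String, LlmAssignment> {
    ["moderation", "chat_translate", "npc_ai", "playbot_ai", "ticket_assist"]
        .iter()
        .map(|&uc| {
            (
                uc.to_string(),
                LlmAssignment {
                    model: "glm-5.1".into(),
                    system_prompt: String::new(),
                    autonomous: default_autonomous(uc),
                },
            )
        })
        .collect()
}

fn main() {
    let assignments = seed_assignments();
    // Agents that act without operator confirmation by default:
    assert!(assignments["npc_ai"].autonomous);
    assert!(!assignments["ticket_assist"].autonomous);
}
```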
Moderation pipeline¶
- Backend: `splintertree-common::moderation::classifier` runs HMAC classifications via the configured LLM and caches verdicts per text+category in Redis.
- Categories (configurable in `world.world_config.moderation_categories`): spam, rmt, harassment, sexual, threats, personal_info.
- Threshold (`moderation_threshold`, default 70) flips Allow → Block.
- Forum traffic is re-evaluated against a stricter local threshold (40) inside the web-api, so a low-confidence harassment hit on a forum post still blocks even when the chat moderator has cached it as Allow.
- Strikes (`moderation_strike_threshold` within `moderation_strike_window_sec`) trigger an automatic mute of `moderation_mute_duration_sec`.
- Chat / mail / forum surfaces are independently togglable (`moderation_apply_to_chat`, `moderation_apply_to_mail`).
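The threshold and strike rules above can be sketched as pure functions. This is a simplified illustration, assuming scores are 0–100 confidences and that a score at or above the threshold blocks; the function names and the strike-timestamp representation are invented:

```rust
// Sketch of the moderation threshold + strike logic — names are illustrative.

#[derive(Debug, PartialEq)]
enum Verdict {
    Allow,
    Block,
}

/// A category score at or above the threshold flips Allow → Block
/// (assumption: inclusive comparison).
fn verdict(score: u8, threshold: u8) -> Verdict {
    if score >= threshold {
        Verdict::Block
    } else {
        Verdict::Allow
    }
}

/// Count strikes inside the rolling window; at or above the strike
/// threshold, an automatic mute is triggered.
fn should_mute(strike_times: &[u64], now: u64, window_sec: u64, strike_threshold: usize) -> bool {
    let recent = strike_times
        .iter()
        .filter(|&&t| now.saturating_sub(t) <= window_sec)
        .count();
    recent >= strike_threshold
}

fn main() {
    // A harassment hit scored 55: chat (threshold 70) allows it,
    // but the web-api's stricter forum threshold (40) blocks it.
    assert_eq!(verdict(55, 70), Verdict::Allow);
    assert_eq!(verdict(55, 40), Verdict::Block);

    // Three strikes within a 600-second window trigger a mute.
    assert!(should_mute(&[100, 300, 500], 600, 600, 3));
}
```

This also shows why the forum and chat caches can disagree: both read the same cached score, but each surface applies its own threshold.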
Chat translate¶
- Live translation between chat locales when both sender and recipient are configured. The world worker batches translations per locale per broadcast, so a 35-player guild in mixed locales costs at most one LLM call per locale, not per recipient.
- A minimum character count (`chat_translate_min_chars`) avoids translating "lol" / "k" style noise.
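The per-locale batching and the minimum-length gate can be sketched as follows. This is an illustrative sketch, assuming recipients arrive as `(player, locale)` pairs; the function names are invented:

```rust
// Sketch of per-locale translation batching — names are illustrative.
use std::collections::HashMap;

/// Group recipients by locale, so one broadcast costs at most one
/// translation call per distinct locale, not one per recipient.
fn batch_by_locale<'a>(recipients: &[(&'a str, &'a str)]) -> HashMap<&'a str, Vec<&'a str>> {
    let mut batches: HashMap<&str, Vec<&str>> = HashMap::new();
    for &(player, locale) in recipients {
        batches.entry(locale).or_default().push(player);
    }
    batches
}

/// Skip messages below chat_translate_min_chars ("lol", "k" style noise).
fn needs_translation(msg: &str, min_chars: usize) -> bool {
    msg.chars().count() >= min_chars
}

fn main() {
    let guild = [("ayla", "de"), ("bren", "de"), ("cato", "fr"), ("dara", "en")];
    let batches = batch_by_locale(&guild);
    // Four recipients but only three locales → at most three LLM calls.
    assert_eq!(batches.len(), 3);
    assert!(!needs_translation("lol", 4));
}
```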
Generative content (AdminAiGenPage)¶
Designer surface for generating NPC dialogue, quest text, or item descriptions from a brief — see Creatures & NPC AI for how that ties into the design pipeline.