# PromptBattle
AI prompt-engineering challenge where participants submit prompts, an LLM generates outputs, everyone votes, and winners are revealed.
## Example structures
### Basic
```json
{
  "type": "PromptBattle",
  "data": {
    "LLM": "llama",
    "prompt": "Write a product description for a smart water bottle...",
    "output": "An elegant, hydration-tracking bottle that..."
  },
  "hostdata": {
    "notes": "Judge on clarity, constraints handling, and measurability."
  }
}
```
### Minimal (defaults to `gemini`)
```json
{
  "type": "PromptBattle",
  "data": {
    "prompt": "Summarize this text in 5 bullet points...",
    "output": "Concise, factual summary..."
  }
}
```
## Properties
- `data.LLM` (string, optional): Model identifier for server-side generation. Defaults to `"gemini"`.
- `data.prompt` (string, required): Reference prompt shown as the “target task” (host side).
- `data.output` (string, required): Reference/expected output used for comparison (host side, Markdown supported).
- `hostdata.notes` (string, optional): Evaluation guidelines for the facilitator. Not consumed by the component logic.
Tip: Clients submit their own prompts during the battle; the `prompt`/`output` above are the “gold standard” shown on the host’s Original card.
## Stages (finite state machine)
- `PromptBattle` (0) – Participants submit prompts; the server calls the chosen LLM and collects outputs.
- `results` (1) – Host preview of AI-generated outputs.
- `voting` (2) – Participants vote on the best output. Revotes overwrite prior votes.
- `resolution` (3) – Winners revealed; the host can show the original prompt/output.
Navigation uses the standard Next/Back controls: `next()` increments the stage, and leaving `resolution` returns `true` to advance the flow; `back()` decrements the stage, and at `PromptBattle` it returns `true` (cannot go earlier).
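A minimal sketch of those semantics, assuming the `PromptBattleNode_Types` enum and state shape from “Data model (server)” below; the free-function form is illustrative, not the component’s actual class:

```ts
function next(state: PromptBattleNodeState): boolean {
  if (state.state === PromptBattleNode_Types.resolution) {
    return true; // leaving resolution tells the surrounding flow to advance
  }
  state.state = (state.state + 1) as PromptBattleNode_Types;
  return false;
}

function back(state: PromptBattleNodeState): boolean {
  if (state.state === PromptBattleNode_Types.PromptBattle) {
    return true; // first stage: cannot go earlier
  }
  state.state = (state.state - 1) as PromptBattleNode_Types;
  return false;
}
```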
## Client protocol
### OpCodes
- `submitPrompt` (0): player submits a prompt
- `votePrompt` (1): player votes for a user’s prompt/output
### Submit prompt (client → server)
```json
{
  "opCode": 0,
  "data": {
    "prompt": "<base64-encoded prompt>",
    "isBase64": true
  }
}
```
- Server decodes the Base64 payload (fallback: uses the raw text if decoding fails).
- Server calls `LLM_makeRequest(ctx, logger, nk, LLM, actualPrompt)` and tracks the pending request by `request_id`, as sketched below.
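A sketch of that handling, assuming a Nakama-style runtime (`nk.base64Decode`) and that `LLM_makeRequest` returns the request id used to key `waitingPrompts`:

```ts
// Assumed signature: returns the request id used to track the pending call.
declare function LLM_makeRequest(ctx: any, logger: any, nk: any, llm: string, prompt: string): string;

function onSubmitPrompt(
  ctx: any,
  logger: any,
  nk: any,
  state: PromptBattleNodeState,
  userId: string,
  data: { prompt: string; isBase64?: boolean },
): void {
  let actualPrompt = data.prompt;
  if (data.isBase64) {
    try {
      actualPrompt = nk.base64Decode(data.prompt);
    } catch {
      actualPrompt = data.prompt; // fallback: use the raw text
    }
  }
  const requestId = LLM_makeRequest(ctx, logger, nk, state.LLM, actualPrompt);
  state.waitingPrompts[requestId] = {
    prompt: data.prompt, // keep the encoded original for later display
    userId,
    isBase64: data.isBase64,
  };
}
```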
### LLM callback (server signal → node)
`matchSignal` expects the following payload (note that `data` is itself a JSON-encoded string):

```json
{ "opcode": 0, "data": "{\"requestid\":\"...\",\"value\":\"...\"}" }
```
When a response arrives, the node copies the generated value into that user’s entry and sends an update.
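A sketch of that matching logic, assuming the `waitingPrompts`/`userPrompts` shapes from “Data model (server)” below:

```ts
function onLlmSignal(state: PromptBattleNodeState, rawData: string): void {
  // The signal's data field is a JSON-encoded string, per the shape above.
  const { requestid, value } = JSON.parse(rawData) as { requestid: string; value: string };
  const pending = state.waitingPrompts[requestid];
  if (!pending) return; // unknown or already-resolved request
  const entry = state.userPrompts[pending.userId];
  if (entry) entry.output = value; // copy the generated text into the user's entry
  delete state.waitingPrompts[requestid];
  // ...then send a PromptBattleNode_ClientUpdate so clients see the new output
}
```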
### Vote (client → server)
```json
{
  "opCode": 1,
  "data": { "userId": "<target user id>" }
}
```
- Multiple votes are allowed; the latest vote replaces the previous one (sketched below).
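A sketch of the revote bookkeeping, assuming the state shapes from “Data model (server)” below:

```ts
function onVote(state: PromptBattleNodeState, voterId: string, targetId: string): void {
  const target = state.userPrompts[targetId];
  if (!target) return; // ignore votes for unknown users
  const previous = state.userVotes[voterId];
  if (previous === targetId) return; // same choice; nothing to change
  if (previous && state.userPrompts[previous]) {
    state.userPrompts[previous].voteCount -= 1; // undo the earlier vote
  }
  target.voteCount += 1;
  state.userVotes[voterId] = targetId; // latest vote wins
}
```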
## Server → client update schema
```ts
interface PromptBattleNode_ClientUpdate {
  state: 0 | 1 | 2 | 3;   // current stage
  originalPrompt: string; // from config
  originalOutput: string; // from config
  prompts: Array<{
    prompt: string;       // original (may be Base64)
    output: string;       // LLM result
    voteCount: number;
    userId: string;
  }>;
  userVotes: { [userId: string]: string }; // voter -> voted userId
  sendPrompt: boolean; // for this recipient only: has the user already submitted?
}
```
The node sets `sendPrompt` per presence before sending, so players who already submitted see a waiting screen while others can still submit.
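A sketch of that personalization, assuming a Nakama-style `dispatcher.broadcastMessage`; the update op code is a placeholder, and the `sendData()` body here is illustrative rather than the component’s actual code:

```ts
const UPDATE_OPCODE = 100; // placeholder; use your node's actual update op code

function sendData(
  dispatcher: { broadcastMessage: (op: number, data: string, to: unknown[]) => void },
  presences: Array<{ userId: string }>,
  state: PromptBattleNodeState,
  update: PromptBattleNode_ClientUpdate,
): void {
  for (const presence of presences) {
    // "Already submitted" is approximated here by userPrompts membership;
    // adjust if your node should also count requests still in waitingPrompts.
    update.sendPrompt = presence.userId in state.userPrompts;
    dispatcher.broadcastMessage(UPDATE_OPCODE, JSON.stringify(update), [presence]);
  }
}
```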
## Frontend behavior (key points)
- Submission view (players): text area with Base64 encoding on submit; after submit, show “waiting for others”.
- Host “Original” card: shows the reference `prompt`/`output` (Markdown rendered), plus a step timeline.
- Results (host): grid of AI outputs (Markdown rendered).
- Voting: players see anonymized tiles and select one; the host sees live vote counts once everyone has voted.
- Resolution: winners highlighted; host can reveal prompts and outputs side-by-side.
Color chips/icons are assigned deterministically per `userId` for consistent visuals across stages.
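One way to get that determinism is to hash the `userId` into a fixed palette; the palette and hash below are illustrative, not the component’s actual values:

```ts
const PALETTE = ["#e5484d", "#f76808", "#ffc53d", "#46a758", "#0091ff", "#8e4ec6"];

function colorFor(userId: string): string {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple stable string hash
  }
  return PALETTE[hash % PALETTE.length]; // same userId -> same color in every stage
}
```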
## Audio & timing
- The node initializes `endTime = now + 5min` in state (reserved for timers).
- A `timeLeft` field exists in the client type for UI but isn’t emitted by `sendData()` in the shown server code; include it from your tick loop if you want a visible countdown in voting (sketch below).
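If you do want that countdown, a minimal sketch, assuming `endTime` holds unix seconds as in the state below:

```ts
function timeLeftSeconds(state: PromptBattleNodeState, nowMs: number): number {
  return Math.max(0, state.endTime - Math.floor(nowMs / 1000));
}

// In your tick loop, before broadcasting:
//   update.timeLeft = timeLeftSeconds(state, Date.now());
```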
## Data model (server)
```ts
enum PromptBattleNode_Types { PromptBattle = 0, results = 1, voting = 2, resolution = 3 }
enum PromptBattleNode_Opcode { submitPrompt = 0, votePrompt = 1 }

interface PromptBattleNode_PromtStructure {
  prompt: string;     // stored as received; may be Base64
  output: string;     // LLM result text
  voteCount: number;
  isBase64?: boolean; // retained for client-side decoding
}

interface PromptBattleNodeState {
  userPrompts: { [userId: string]: PromptBattleNode_PromtStructure };
  userVotes: { [userId: string]: string };
  originalPrompt: string;
  originalOutput: string;
  LLM: string;     // default "gemini" if not provided
  endTime: number; // unix seconds
  state: PromptBattleNode_Types;
  waitingPrompts: {
    [requestId: string]: { prompt: string; userId: string; isBase64?: boolean };
  };
}

interface PromptBattleNodeState_Storage {
  prompt: string;
  output: string;
  LLM: string;
}
```
- If you accept short or sanitized prompts, keep `isBase64: true` to avoid character loss. The client includes a `desanitizeText()` utility for display; a decoding sketch follows this list.
- Videos/links in LLM outputs rely on Markdown rendering; sanitize as needed for your environment.
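A browser-side decoding sketch (UTF-8 safe); `desanitizeText()` itself is the project’s utility and is not reproduced here:

```ts
function displayPrompt(entry: { prompt: string; isBase64?: boolean }): string {
  if (!entry.isBase64) return entry.prompt;
  try {
    const binary = atob(entry.prompt);
    const bytes = Uint8Array.from(binary, (c) => c.charCodeAt(0));
    return new TextDecoder().decode(bytes); // recover UTF-8 text from the bytes
  } catch {
    return entry.prompt; // fall back to the raw value if decoding fails
  }
}
```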
## Example client flow
- Player writes a prompt → client Base64-encodes it and sends with `isBase64: true`.
- Server calls the configured LLM and parks the request in `waitingPrompts`.
- LLM replies via `matchSignal` → server fills `userPrompts[userId].output` and sends an update.
- Host advances to results, then to voting.
- Players vote; revotes are allowed; the server keeps `userVotes[voter] = target` and adjusts `voteCount`.
- Host advances to resolution to show winners and reveal prompts/outputs.
## Use cases
- “Best prompt wins” mini-tournaments.
- Teaching prompt patterns (role, constraints, format, evaluation signals).
- Rapid A/B testing of prompt variants before deploying to production agents.