Chat

One-to-one agent chat. Each participant has a private thread with the LLM; messages are Base64-encoded in transit and rendered as Markdown on the client.

Example structures

Minimal

{ "type": "Chat" }

With model and system prompt

{
  "type": "Chat",
  "data": {
    "LLM": "n8n",
    "systemPrompt": "You are a supportive tutor."
  }
}

Properties

Key                 Type     Default                          Description
data.LLM            string   "llama"                          Model identifier used by the server (LLM_makeRequest).
data.systemPrompt   string   "You are a helpful assistant."   Prefixed to the user message before sending to the LLM.

The UI state (messages, isWaitingForResponse) comes from the server and is not configured in data.

Behavior

  • On enter, each active user receives a welcome message from the assistant if they have no history yet.
  • Users type in a textarea; Enter sends, Shift+Enter inserts a newline (see the sketch after this list).
  • Outgoing messages are Base64-encoded client-side and flagged with isBase64: true. The server decodes before calling the LLM.
  • While a user has a pending LLM call, further sends are blocked for that user. A typing indicator is shown.
  • The server responds with assistant messages and clears the waiting flag for that user.
  • Threads are per-user; the server only sends each participant their own history.
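
A minimal sketch of this input handling, assuming a React client. sendMessage is a hypothetical stand-in for the node's actual transport, and btoa is used for brevity (a Unicode-safe encoder is sketched under "Client protocol" below).

import React, { useState } from 'react';

// Hypothetical transport matching the opCode 0 payload documented below.
declare function sendMessage(payload: { opCode: 0; data: { text: string; isBase64: true } }): void;

function ChatInput({ isWaitingForResponse }: { isWaitingForResponse: boolean }) {
  const [draft, setDraft] = useState('');

  const handleKeyDown = (e: React.KeyboardEvent<HTMLTextAreaElement>) => {
    if (e.key === 'Enter' && !e.shiftKey) {
      e.preventDefault(); // Enter sends; Shift+Enter falls through and inserts a newline
      if (!isWaitingForResponse && draft.trim()) {
        sendMessage({ opCode: 0, data: { text: btoa(draft), isBase64: true } });
        setDraft('');
      }
    }
  };

  return (
    <textarea
      value={draft}
      onChange={(e) => setDraft(e.target.value)}
      onKeyDown={handleKeyDown}
      disabled={isWaitingForResponse} // further sends are blocked while a call is pending
    />
  );
}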

Client protocol

OpCode

  • sendMessage (0)

Client → Server payload

{
  "opCode": 0,
  "data": { "text": "<base64>", "isBase64": true }
}
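
btoa and atob only handle Latin-1, so a Unicode-safe round trip goes through TextEncoder/TextDecoder. A minimal sketch of helpers that produce the payload above; the node's actual encoding code may differ.

// Encode arbitrary text as Base64 via UTF-8 bytes.
function encodeBase64(text: string): string {
  const bytes = new TextEncoder().encode(text);
  let binary = '';
  bytes.forEach((b) => { binary += String.fromCharCode(b); });
  return btoa(binary);
}

// Decode Base64 back to text via UTF-8 bytes.
function decodeBase64(encoded: string): string {
  const binary = atob(encoded);
  const bytes = Uint8Array.from(binary, (c) => c.charCodeAt(0));
  return new TextDecoder().decode(bytes);
}

// Building the payload shown above:
const payload = { opCode: 0, data: { text: encodeBase64('Hello, tutor!'), isBase64: true } };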

Server → Client update

interface ChatNodeClientUpdate {
  messages: Array<{
    text: string;
    role: 'user' | 'assistant';
    userId: string; // 'ai' for assistant messages
    timestamp: number;
    isBase64?: boolean;
  }>;
  isWaitingForResponse: boolean;
}
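
Only user messages carry the isBase64 flag, so a client can decode per message before rendering. A minimal sketch, reusing the decodeBase64 helper above:

// Map an incoming update to display-ready messages; a sketch, not the node's code.
function toDisplayMessages(update: ChatNodeClientUpdate) {
  return update.messages.map((m) => ({
    ...m,
    // Assistant messages arrive as plain text (no Base64 flag).
    text: m.isBase64 ? decodeBase64(m.text) : m.text,
  }));
}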

Server logic (overview)

  • State shape

    interface ChatNodeState {
      userMessages: { [userId: string]: ChatMessage[] };
      waitingForResponses: { [userId: string]: boolean };
      currentRequestIds: { [userId: string]: string | null };
      LLM: string;
      systemPrompt: string;
    }
  • Init: adds a per-user welcome message and immediately sends each user their thread.

  • On sendMessage: decodes the message if Base64, appends it to the user's thread, builds the prompt as systemPrompt + " User: <text>", calls LLM_makeRequest(...), stores the request_id, sets the waiting flag, and updates the user (see the sketch after this list).

  • On matchSignal (opcode 0): matches the request_id to its userId, appends the assistant message, clears the waiting flag, and updates only that user.

  • Persistence hook: after sending, the node mirrors the decoded history into state.leafState (host vs. learners) for later analysis.
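
A minimal sketch of the sendMessage flow, assuming the ChatNodeState shape above. LLM_makeRequest is named on this page but its signature is assumed; sendUpdate, decodeBase64, and newRequestId are hypothetical stand-ins.

// Assumed signatures; only LLM_makeRequest is named on this page.
declare function LLM_makeRequest(model: string, prompt: string, requestId: string): void;
declare function sendUpdate(userId: string, update: ChatNodeClientUpdate): void;
declare function decodeBase64(encoded: string): string;
declare function newRequestId(): string;

function onSendMessage(state: ChatNodeState, userId: string, data: { text: string; isBase64?: boolean }) {
  const text = data.isBase64 ? decodeBase64(data.text) : data.text;

  // Append to this user's private thread (ChatMessage matches the entries in ChatNodeClientUpdate).
  (state.userMessages[userId] ??= []).push({ text, role: 'user', userId, timestamp: Date.now() });

  // Build the prompt as systemPrompt + " User: <text>" and fire the LLM call.
  const requestId = newRequestId();
  state.currentRequestIds[userId] = requestId;
  state.waitingForResponses[userId] = true;
  LLM_makeRequest(state.LLM, `${state.systemPrompt} User: ${text}`, requestId);

  // Push only this user's updated thread and waiting flag.
  sendUpdate(userId, { messages: state.userMessages[userId], isWaitingForResponse: true });
}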

Visual design

  • Modern chat layout inside a <Card> with a scrollable area.

  • Markdown rendering via react-markdown + remark-gfm (see the sketch after this list).

  • Avatars:

    • Assistant: Lucide Bot in an avatar fallback.
    • User: ProfileImg using workshop.playerList for name, initials, and optional avatar.
  • Send button shows a spinner while waiting.
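
A minimal sketch of the Markdown rendering above, using react-markdown with remark-gfm as named here; MessageBody is a hypothetical wrapper component.

import React from 'react';
import ReactMarkdown from 'react-markdown';
import remarkGfm from 'remark-gfm';

// Render a (decoded) message body as GitHub-flavored Markdown.
function MessageBody({ text }: { text: string }) {
  return <ReactMarkdown remarkPlugins={[remarkGfm]}>{text}</ReactMarkdown>;
}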

Use cases

  • Private “ask the agent” during workshops.
  • Tutoring, coaching, or reflection prompts with per-student threads.
  • Lightweight UX for LEAF study logging (history is mirrored into leafState).
Tips
  • User strings, even short or already sanitized ones, should stay Base64-encoded on send; the client safely decodes them for display.
  • Assistant messages are plain text (no Base64 flag).
  • If you need shared or group chat, this node must be extended; it currently keeps a separate, isolated thread per user presence.

Integration example

[
  { "type": "Slide", "data": { "headline": "Ask your tutor" } },
  { "type": "Chat", "data": { "LLM": "llama", "systemPrompt": "You are a supportive tutor." } },
  { "type": "Scoreboard", "data": { "showPodium": false } }
]