LLM Service Documentation

Overview

The humuus platform integrates Large Language Model (LLM) capabilities to provide intelligent, context-aware assistance during workshops. The LLM service enables real-time AI-powered interactions, personalized feedback, and automated content generation.

Architecture

The LLM service follows a three-tier architecture:

Frontend (Next.js)  →  API Route (/api/llm/chat)  →  External LLM Provider (n8n)
        ↓                         ↓                              ↓
  User Interface           Request Handling                 AI Processing

Key Components

  1. Frontend Service (llmService.ts) - Client-side interface for LLM interactions
  2. API Route (/api/llm/chat) - Server-side Next.js route that validates and proxies requests/responses
  3. Nakama Backend - Server-side LLM request management
  4. External Provider - n8n workflow or other LLM endpoints (Gemini, Llama)

Configuration

Environment Variables

Next.js (apps/web/.env.local)

LLM_N8N_ENDPOINT=http://your-n8n-server:5678/webhook/llm-handler
LLM_N8N_AUTH=username:password

Nakama (apps/nakama/.env)

PORT=7350
HTTP_KEY=your-nakama-http-key

Default LLM Configurations

The Nakama backend supports multiple LLM providers configured in storage:
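
A minimal sketch of seeding one of these templates into storage from Nakama's TypeScript runtime follows; the collection and key names ('llm_configs', 'gemini') are assumptions, not the platform's actual layout:

// Sketch only: 'llm_configs' / 'gemini' are assumed names.
nk.storageWrite([
  {
    collection: 'llm_configs',
    key: 'gemini',
    userId: '00000000-0000-0000-0000-000000000000', // system-owned object
    value: geminiConfig, // one of the templates shown below
    permissionRead: 1,   // owner-read
    permissionWrite: 0   // no client writes
  }
]);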

Gemini Configuration

{
  url: 'https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$[key]',
  method: 'post',
  headers: { 'Content-Type': 'application/json' },
  body: {
    contents: [{ parts: [{ text: '$[prompt]' }] }],
    generationConfig: { maxOutputTokens: 100 }
  },
  responsePath: 'candidates[0].content.parts[0].text'
}

Llama Configuration

{
  url: 'https://api.groq.com/openai/v1/chat/completions',
  method: 'post',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer $[key]'
  },
  body: {
    messages: [{ role: 'user', content: '$[prompt]' }],
    model: 'llama-3.3-70b-versatile',
    max_completion_tokens: 260
  },
  responsePath: 'choices[0].message.content'
}

n8n Configuration

{
  url: 'http://your-n8n-server:5678/webhook/llm-handler',
  method: 'post',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Basic $[key]'
  },
  body: {
    sessionId: '$[session_id]',
    chatInput: '$[prompt]',
    type: 'chat'
  },
  responsePath: 'response'
}
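
At request time the $[key], $[prompt], and $[session_id] placeholders in these templates are replaced with real values. The platform's substitution code is not shown here; a minimal sketch of the idea (the helper name fillPlaceholders is hypothetical):

// Hypothetical helper: expand $[name] placeholders in a template string.
function fillPlaceholders(template: string, values: Record<string, string>): string {
  return template.replace(/\$\[(\w+)\]/g, (_match, name) => values[name] ?? '');
}

const url = fillPlaceholders(
  'https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$[key]',
  { key: process.env.GEMINI_API_KEY ?? '' }
);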

Usage

Chat Interactions

Basic Chat Request

import { sendChatMessageToLLM } from '@/services/llmService';

const response = await sendChatMessageToLLM({
  prompt: btoa('What is the capital of France?'), // Base64 encoded
  matchId: 'workshop-match-id',
  userId: 'user-123',
  isBase64: true,
  workshopKey: 'workshop-abc',
  settings: {
    saveLearningActivities: true
  }
});

if (response.message) {
  console.log('LLM Response:', response.message);
} else {
  console.error('Error:', response.error);
}

Chat with Context

const response = await sendChatMessageToLLM({
  prompt: btoa('Help me understand this concept'),
  matchId: 'workshop-match-id',
  userId: 'user-123',
  isBase64: true,
  workshopKey: 'workshop-abc',
  coCreateNodeIndex: 2, // If within a CoCreate node
  llmContext: {
    connectedNodes: [
      {
        id: 'node-1',
        type: 'slide',
        data: { headline: 'Introduction', body: '...' }
      },
      {
        id: 'node-2',
        type: 'quiz',
        data: { question: '...', options: [...] }
      }
    ],
    connectedPlayerInsights: [
      {
        userId: 'user-456',
        insights: ['Struggled with question 2', 'Strong visual learner']
      }
    ],
    learning: {
      globalLearningObjectives: ['Understand core concepts'],
      globalCompetences: ['Critical thinking'],
      currentPhaseTag: 'Explore'
    }
  },
  settings: {
    saveLearningActivities: true,
    userSystemPrompt: 'Be encouraging and supportive'
  }
});

Host Context for Group Insights

const response = await sendChatMessageToLLM({
  prompt: btoa('Provide insights on group progress'),
  matchId: 'workshop-match-id',
  userId: 'host-user-id',
  isBase64: true,
  workshopKey: 'workshop-abc',
  isHost: true,
  hostDetailedGroupContext: [
    {
      groupId: 'group-1',
      subtopic: 'Climate Change',
      learningObjectives: ['Understand greenhouse effect'],
      competences: ['Scientific reasoning'],
      currentGroupActivitySummary: 'Completed quiz with 80% accuracy',
      currentGroupInsight: 'Strong collaboration',
      currentGroupChallenge: 'Needs more time for reflection'
    },
    {
      groupId: 'group-2',
      subtopic: 'Renewable Energy',
      learningObjectives: ['Compare energy sources'],
      competences: ['Critical analysis']
    }
  ],
  currentPhaseDisplayTime: '5:30 remaining',
  settings: {
    saveLearningActivities: false
  }
});

Flow Summaries

Generate AI summaries of completed learning flows:

import { sendFlowSummaryToLLM } from '@/services/llmService';

const response = await sendFlowSummaryToLLM({
  type: 'flow_summary',
  userId: 'user-123',
  matchId: 'workshop-match-id',
  workshopKey: 'workshop-abc',
  coCreateNodeIndex: 1,
  flowId: 'climate-quiz-flow',
  flowAnswers: [
    {
      nodeId: 'quiz-1',
      questionText: 'What causes global warming?',
      selectedOptionText: 'Greenhouse gases',
      isCorrect: true,
      correctAnswersText: ['Greenhouse gases'],
      nodeIndex: 0
    },
    {
      nodeId: 'quiz-2',
      questionText: 'Name a renewable energy source',
      selectedOptionText: 'Coal',
      isCorrect: false,
      correctAnswersText: ['Solar', 'Wind', 'Hydro'],
      nodeIndex: 1
    }
  ],
  learningContext: {
    globalLearningObjectives: ['Understand climate science'],
    currentPhaseTag: 'Apply'
  },
  settings: {
    saveLearningActivities: true
  }
});

console.log('Flow Summary:', response.message);

Group Activity Summaries

const response = await sendChatMessageToLLM({
  prompt: btoa('Summarize group activities'),
  matchId: 'workshop-match-id',
  userId: 'user-123',
  isBase64: true,
  workshopKey: 'workshop-abc',
  activitiesForGroupSummary: [
    {
      userId: 'user-123',
      activity: 'Completed quiz',
      timestamp: 1699876543210,
      nodeIndex: 2
    },
    {
      userId: 'user-456',
      activity: 'Added sticky note',
      timestamp: 1699876600000,
      nodeIndex: 3
    }
  ],
  groupIdForSummary: 'group-1',
  settings: {
    saveLearningActivities: true
  }
});

Host Summary (All Groups)

const response = await sendChatMessageToLLM({
  prompt: btoa('Provide overall workshop summary'),
  matchId: 'workshop-match-id',
  userId: 'host-user-id',
  isBase64: true,
  workshopKey: 'workshop-abc',
  allGroupActivitiesForHostSummary: [
    {
      groupId: 'group-1',
      activities: [/* LearnerActivity objects */]
    },
    {
      groupId: 'group-2',
      activities: [/* LearnerActivity objects */]
    }
  ],
  settings: {
    saveLearningActivities: false
  }
});

Data Structures

LLMChatPayload

interface LLMChatPayload {
  prompt: string; // Base64-encoded user message
  matchId: string; // Workshop match identifier
  userId: string; // User identifier
  isBase64: boolean; // Always true for proper encoding
  workshopKey: string; // Workshop storage key
  coCreateNodeIndex?: number; // Active CoCreate node index
  llmContext?: EnrichedLLMContext; // Additional context
  settings: {
    saveLearningActivities: boolean; // Store interactions as learning data
    userSystemPrompt?: string; // Custom system prompt
  };
  activitiesForGroupSummary?: LearnerActivity[];
  groupIdForSummary?: string;
  allGroupActivitiesForHostSummary?: GroupActivity[];
  isHost?: boolean;
  hostDetailedGroupContext?: HostGroupDetail[];
  currentPhaseDisplayTime?: string;
}

EnrichedLLMContext

interface EnrichedLLMContext {
  connectedNodes?: NodeContextDataItem[];
  connectedPlayerInsights?: Array<{
    userId: string;
    insights: string[];
  }>;
  learning?: LearningContext;
}

LearningContext

interface LearningContext {
  globalLearningObjectives?: string[];
  globalCompetences?: string[];
  groupSubtopic?: string;
  groupLearningObjectives?: string[];
  groupCompetences?: string[];
  globalInterventionStyle?: string;
  globalInterventionFrequency?: number;
  groupInterventionStyle?: string;
  groupInterventionFrequency?: number;
  currentPhaseTag?: string; // LEAF framework phase
}

NodeContextDataItem

interface NodeContextDataItem {
  id: string;
  type: 'quiz' | 'slide' | 'stickyNote';
  data: any; // Node-specific data structure
}

HostGroupDetail

interface HostGroupDetail {
  groupId: string;
  subtopic?: string;
  learningObjectives?: string[];
  competences?: string[];
  currentGroupActivitySummary?: string;
  currentGroupInsight?: string;
  currentGroupChallenge?: string;
}

Backend Implementation

Nakama Functions

Making LLM Requests

function LLM_makeRequest(
  ctx: nkruntime.Context,
  logger: nkruntime.Logger,
  nk: nkruntime.Nakama,
  LLM: string,
  prompt: string,
  userId: string = ''
): LLM_requestResponse {
  const aiRequest = {
    prompt: prompt,
    LLM: LLM,
    match_id: ctx.matchId,
    user_id: userId
  };

  const port = ctx.env['PORT'];
  const httpkey = ctx.env['HTTP_KEY'];

  // Route the request through Nakama's own RPC gateway (Ai_Request) so that
  // provider credentials and response parsing stay server-side.
  const response = nk.httpRequest(
    `http://localhost:${port}/v2/rpc/Ai_Request?http_key=${httpkey}&unwrap`,
    'post',
    { 'Content-Type': 'application/json' },
    JSON.stringify(aiRequest),
    10000 // 10 s timeout
  );

  return JSON.parse(response.body);
}

Retrieving Values by Path

function getValueByPath(obj: any, path: string): any {
  if (!path) return undefined;

  // Normalize bracket notation ("candidates[0]") to dot notation,
  // then walk the object one segment at a time.
  const parts = path
    .replace(/\[(\w+)\]/g, '.$1')
    .split('.');

  let result = obj;
  for (const part of parts) {
    if (result === null || result === undefined) {
      return undefined;
    }
    result = result[part];
  }

  return result;
}
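
For example, applying the Gemini responsePath from the configuration above:

// Extracting the model text from a Gemini-style response via its responsePath.
const geminiResponse = {
  candidates: [{ content: { parts: [{ text: 'Paris' }] } }]
};

const text = getValueByPath(geminiResponse, 'candidates[0].content.parts[0].text');
// text === 'Paris'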

API Route Details

Request Flow

  1. Request Reception: Next.js API route receives POST request
  2. Parameter Validation: Checks for required fields based on request type
  3. Prompt Decoding: Decodes Base64 encoded prompt if flagged
  4. Payload Construction: Builds n8n-compatible payload
  5. Authentication: Adds Basic Auth header
  6. External Request: Forwards to n8n endpoint
  7. Response Processing: Extracts message from n8n response
  8. Response Return: Sends formatted response back to client
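
The route's actual source is not reproduced here; a simplified sketch of these steps, assuming the App Router (the documented parameters and error messages are real, everything else is an assumption):

// apps/web/app/api/llm/chat/route.ts: simplified sketch, not the actual source.
export async function POST(req: Request) {
  const endpoint = process.env.LLM_N8N_ENDPOINT;
  const auth = process.env.LLM_N8N_AUTH; // "username:password"
  if (!endpoint || !auth) {
    return Response.json({ error: 'LLM service not configured' }, { status: 500 });
  }

  // 1-2. Receive and validate.
  const payload = await req.json();
  if (!payload.prompt || !payload.matchId || !payload.userId || !payload.workshopKey) {
    return Response.json({ error: 'Missing required parameters' }, { status: 400 });
  }

  // 3. Decode the Base64-encoded prompt if flagged.
  let prompt = payload.prompt;
  if (payload.isBase64) {
    try {
      prompt = Buffer.from(payload.prompt, 'base64').toString('utf-8');
    } catch {
      return Response.json({ error: 'Invalid base64 encoded prompt' }, { status: 400 });
    }
  }

  // 4-6. Build the n8n payload, add Basic Auth, and forward.
  const n8nResponse = await fetch(endpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Basic ${Buffer.from(auth).toString('base64')}`
    },
    // Field mapping follows the n8n template; the sessionId source is an assumption.
    body: JSON.stringify({ sessionId: payload.matchId, chatInput: prompt, type: 'chat' })
  });
  if (!n8nResponse.ok) {
    return Response.json({ error: 'External LLM service error' }, { status: n8nResponse.status });
  }

  // 7-8. Extract the message and return it.
  const data = await n8nResponse.json();
  const message = data.message ?? data.reply ?? data.text ?? data.output;
  if (typeof message !== 'string') {
    return Response.json({ error: 'Invalid LLM response structure' }, { status: 500 });
  }
  return Response.json({ message });
}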

Request Types

chat (Default)

Standard conversational interaction with context support.

Required Parameters:

  • prompt (Base64 encoded)
  • matchId
  • userId
  • workshopKey

Optional Parameters:

  • coCreateNodeIndex
  • llmContext
  • activitiesForGroupSummary
  • groupIdForSummary
  • allGroupActivitiesForHostSummary
  • isHost
  • hostDetailedGroupContext
  • currentPhaseDisplayTime
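
For reference, the service call presumably reduces to a direct POST against the route (sketch; the { message, error } response shape matches the examples in Usage):

// Calling the chat route directly with the required parameters.
const res = await fetch('/api/llm/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    prompt: btoa('What is the capital of France?'),
    matchId: 'workshop-match-id',
    userId: 'user-123',
    isBase64: true,
    workshopKey: 'workshop-abc',
    settings: { saveLearningActivities: false }
  })
});

const { message, error } = await res.json();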

flow_summary

Generates a summary of a completed learning flow.

Required Parameters:

  • matchId
  • userId
  • workshopKey
  • coCreateNodeIndex
  • flowId
  • flowAnswers

Optional Parameters:

  • learningContext

Error Handling

Client-Side

try {
  const response = await sendChatMessageToLLM(payload);

  if (response.error) {
    console.error('LLM Error:', response.error);
    // Show user-friendly error message
    showNotification('Unable to get AI response. Please try again.');
  } else {
    // Process successful response
    displayMessage(response.message);
  }
} catch (error) {
  console.error('Network error:', error);
  showNotification('Connection error. Check your internet connection.');
}

API Route Error Handling

The API route handles several error scenarios:

  • Missing Configuration: Returns 500 with "LLM service not configured"
  • Missing Parameters: Returns 400 with specific parameter requirements
  • Base64 Decoding Errors: Returns 400 with "Invalid base64 encoded prompt"
  • External Service Errors: Returns original status code with error details
  • Invalid Response Structure: Returns 500 with "Invalid LLM response structure"
  • Unexpected Errors: Returns 500 with error message

Best Practices

1. Always Encode Prompts

// ✅ Correct
const encodedPrompt = btoa('Your message here');

// ❌ Wrong
const prompt = 'Your message here';
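
Note that btoa() throws on characters outside Latin-1 (emoji, accented text, and so on). If prompts can contain arbitrary Unicode, a UTF-8-safe encoder is the safer choice; a sketch, assuming the server decodes the prompt as UTF-8:

// UTF-8-safe Base64 encoding; plain btoa() would throw on non-Latin-1 input.
function encodePrompt(text: string): string {
  const bytes = new TextEncoder().encode(text);
  let binary = '';
  for (const b of bytes) {
    binary += String.fromCharCode(b);
  }
  return btoa(binary);
}

const encoded = encodePrompt("Qu'est-ce que c'est ? 🤔");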

2. Provide Relevant Context

Include only the necessary context to avoid hitting token limits:

const context: EnrichedLLMContext = {
  connectedNodes: relevantNodes.map(node => ({
    id: node.id,
    type: node.type,
    data: node.data
  })),
  learning: {
    globalLearningObjectives: workshop.learningObjectives,
    currentPhaseTag: getCurrentPhase()
  }
};

3. Handle Async Operations

Always use try-catch with async/await:

async function handleChat(message: string) {
  try {
    const response = await sendChatMessageToLLM({
      prompt: btoa(message),
      // ... other params
    });

    if (response.message) {
      return response.message;
    }
    // Fall back to the reported error so the function always returns a string.
    return response.error ?? 'Sorry, I encountered an error.';
  } catch (error) {
    console.error('Chat error:', error);
    return 'Sorry, I encountered an error.';
  }
}

4. Use Learning Activities

Enable saveLearningActivities for educational interactions:

settings: {
  saveLearningActivities: true, // Stores interaction data
  userSystemPrompt: 'Be a supportive learning coach'
}

5. Optimize for Performance

  • Batch related requests when possible
  • Cache common responses
  • Use appropriate token limits
  • Debounce user inputs to prevent excessive API calls (see the sketch below)
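
For the last point, a minimal debounce helper (sketch) that delays the request until the user pauses typing:

// Debounce: only fire the LLM request after the user stops typing for delayMs.
function debounce<T extends (...args: any[]) => void>(fn: T, delayMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

const debouncedAsk = debounce((message: string) => {
  void handleChat(message); // handleChat from the async example above
}, 500);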

Integration Examples

CoCreate Node Integration

const handleCoCreateChat = async (userMessage: string) => {
  const response = await sendChatMessageToLLM({
    prompt: btoa(userMessage),
    matchId: workshop.matchId,
    userId: currentUser.id,
    isBase64: true,
    workshopKey: workshop.key,
    coCreateNodeIndex: currentNodeIndex,
    llmContext: {
      connectedNodes: getConnectedNodes(),
      learning: workshop.learningContext
    },
    settings: {
      saveLearningActivities: true,
      userSystemPrompt: coCreateNode.systemPrompt
    }
  });

  return response;
};

Quiz Flow Summary

const generateQuizSummary = async (quizResults: QuizResult[]) => {
  const flowAnswers = quizResults.map((result, index) => ({
    questionText: result.question,
    selectedOptionText: result.selectedAnswer,
    isCorrect: result.correct,
    correctAnswersText: result.correctAnswers,
    nodeIndex: index
  }));

  return await sendFlowSummaryToLLM({
    type: 'flow_summary',
    userId: currentUser.id,
    matchId: workshop.matchId,
    workshopKey: workshop.key,
    coCreateNodeIndex: quizNodeIndex,
    flowId: 'quiz-flow-1',
    flowAnswers,
    learningContext: workshop.learningContext
  });
};

Troubleshooting

Common Issues

1. "LLM service not configured"

Cause: Missing environment variables
Solution: Check LLM_N8N_ENDPOINT and LLM_N8N_AUTH in .env.local

2. "Invalid base64 encoded prompt"

Cause: Prompt not properly encoded
Solution: Always use btoa() for encoding and set isBase64: true

3. "Invalid LLM response structure from n8n"

Cause: n8n response doesn't match expected format
Solution: Verify n8n workflow returns message, reply, text, or output field

4. Authentication Errors (401/403)

Cause: Incorrect credentials in LLM_N8N_AUTH
Solution: Verify username:password format and Base64 encoding

5. Timeout Errors

Cause: n8n workflow takes too long
Solution: Optimize n8n workflow or increase timeout in fetch call
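
The outbound fetch timeout can be extended with an AbortSignal; a sketch, where endpoint, headers, and n8nPayload stand for the route's existing values and 30 s is an arbitrary choice:

// Give slow n8n workflows more time before aborting the outbound request.
const n8nResponse = await fetch(endpoint, {
  method: 'POST',
  headers,
  body: JSON.stringify(n8nPayload),
  signal: AbortSignal.timeout(30_000) // 30 s instead of the runtime default
});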

Debug Logging

Enable detailed logging:

// In llmService.ts
console.log('Sending chat message to LLM service:', payload);

// In API route
console.log('n8n payload:', n8nPayload);
console.log('n8n response:', responseData);