ai-town Code Deep Dive: Conversation Manager

File Analysis: convex/agent/conversation.ts

This file is the AI Dialogue Engine. Its core responsibility is to generate context-aware, natural-sounding dialogue for AI agents by orchestrating data from various sources and feeding it into a Large Language Model (LLM).

1. Core Role and Positioning

Within the simulation, this file is an orchestrator: it gathers information about the agents, their memories, and the current conversational context, then feeds that context to a Large Language Model (LLM) to produce an appropriate line of speech.

2. Core Value and Process

The primary value of this file is to breathe life into the AI agents by enabling them to have dynamic, stateful conversations. It handles three distinct conversational moments: starting, continuing, and leaving. The process uses a sophisticated retrieval-augmented generation (RAG) system where agent memories and past interactions are fed into a prompt to ensure the dialogue is relevant and personalized, making the AI characters feel more intelligent and consistent.
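The retrieval-augmented flow described above can be sketched in miniature. The names here (`Memory`, `buildPrompt`, `importance`) are illustrative assumptions, not the file's actual exports; the real implementation pulls memories via a vector search and splices them into the system prompt:

```typescript
// Illustrative sketch of the retrieval-augmented prompt flow.
// `Memory`, `importance`, and `buildPrompt` are hypothetical names.
interface Memory {
  description: string;
  importance: number; // relevance score, e.g. from a vector search
}

// Combine the agent's persona with its top-ranked memories into one prompt.
function buildPrompt(agentName: string, otherName: string, memories: Memory[]): string {
  const lines = [`You are ${agentName}, talking to ${otherName}.`];
  const top = [...memories].sort((a, b) => b.importance - a.importance).slice(0, 3);
  if (top.length > 0) {
    lines.push('Relevant memories:');
    for (const m of top) lines.push(`- ${m.description}`);
  }
  return lines.join('\n');
}

const prompt = buildPrompt('Alice', 'Bob', [
  { description: 'Bob mentioned he likes gardening.', importance: 0.9 },
  { description: 'It rained yesterday.', importance: 0.2 },
]);
// The resulting prompt contains both the persona line and the retrieved memories.
```

The real code follows the same shape: a persona header, then retrieved context, then the final speaker cue, joined with newlines.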

3. Code Deep Dive

startConversationMessage(...)

This function generates the very first line of dialogue when one AI agent starts a conversation with another. It gathers extensive context, including the agents’ personalities, their plans, and their past conversations. It then performs a vector search for memories related to talking with this specific person. All this information is compiled into a detailed prompt for the LLM, with a special instruction to reference a previous conversation if one exists. This encourages more engaging and context-aware openers.

export async function startConversationMessage(
  ctx: ActionCtx,
  worldId: Id<'worlds'>,
  conversationId: GameId<'conversations'>,
  playerId: GameId<'players'>,
  otherPlayerId: GameId<'players'>,
): Promise<string> {
  const { player, otherPlayer, agent, otherAgent, lastConversation } = await ctx.runQuery(
    selfInternal.queryPromptData,
    //...
  );
  const embedding = await embeddingsCache.fetch(
    ctx,
    `${player.name} is talking to ${otherPlayer.name}`,
  );
 
  const memories = await memory.searchMemories(
    ctx,
    player.id as GameId<'players'>,
    embedding,
    Number(process.env.NUM_MEMORIES_TO_SEARCH) || NUM_MEMORIES_TO_SEARCH,
  );
 
  const memoryWithOtherPlayer = memories.find(
    (m) => m.data.type === 'conversation' && m.data.playerIds.includes(otherPlayerId),
  );
  const prompt = [
    `You are ${player.name}, and you just started a conversation with ${otherPlayer.name}.`,
  ];
  prompt.push(...agentPrompts(otherPlayer, agent, otherAgent ?? null));
  prompt.push(...previousConversationPrompt(otherPlayer, lastConversation));
  prompt.push(...relatedMemoriesPrompt(memories));
  if (memoryWithOtherPlayer) {
    prompt.push(
      `Be sure to include some detail or question about a previous conversation in your greeting.`,
    );
  }
  const lastPrompt = `${player.name} to ${otherPlayer.name}:`;
  prompt.push(lastPrompt);
 
  const { content } = await chatCompletion({
    //...
  });
  return trimContentPrefx(content, lastPrompt);
}
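The final `trimContentPrefx` call strips the speaker prefix (`"Alice to Bob:"`) that the LLM sometimes echoes back in its completion. The helper itself lives elsewhere in the file; this reimplementation is a sketch of its likely behavior, not the actual source:

```typescript
// Sketch of the prefix-trimming step (an assumption about the real helper's
// behavior): if the completion starts by repeating the speaker cue, drop it.
function trimContentPrefix(content: string, prefix: string): string {
  const trimmed = content.trim();
  return trimmed.startsWith(prefix) ? trimmed.slice(prefix.length).trim() : trimmed;
}

const withPrefix = trimContentPrefix('Alice to Bob: Hi there!', 'Alice to Bob:');
// → 'Hi there!'
const withoutPrefix = trimContentPrefix('Hi there!', 'Alice to Bob:');
// → 'Hi there!' (unchanged when the model did not echo the cue)
```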

continueConversationMessage(...)

This function generates a response in the middle of an ongoing conversation. It’s similar to starting one but adapted for continuing dialogue. It fetches memories related to the agent’s opinion of their partner and includes the recent chat history in the LLM prompt. It also gives the LLM explicit instructions like “DO NOT greet them again” and “Your response should be brief” to keep the conversation flowing naturally.

export async function continueConversationMessage(
  ctx: ActionCtx,
  worldId: Id<'worlds'>,
  conversationId: GameId<'conversations'>,
  playerId: GameId<'players'>,
  otherPlayerId: GameId<'players'>,
): Promise<string> {
  const { player, otherPlayer, conversation, agent, otherAgent } = await ctx.runQuery(
    //...
  );
  //... (memory search)
  const prompt = [
    `You are ${player.name}, and you're currently in a conversation with ${otherPlayer.name}.`,
    `The conversation started at ${started.toLocaleString()}. It's now ${now.toLocaleString()}.`,
  ];
  //... (prompt construction)
  prompt.push(
    `Below is the current chat history between you and ${otherPlayer.name}.`,
    `DO NOT greet them again. Do NOT use the word "Hey" too often. Your response should be brief and within 200 characters.`,
  );
 
  const llmMessages: LLMMessage[] = [
    {
      role: 'system',
      content: prompt.join('\n'),
    },
    ...(await previousMessages(
      //...
    )),
  ];
  //... (call chatCompletion)
}
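The `previousMessages` helper (elided above) presumably maps the stored chat history onto chat-completion roles: lines spoken by the current agent become `assistant` messages, the partner's become `user` messages. A hedged sketch of that mapping, with hypothetical `ChatMessage` and `toLLMMessages` names:

```typescript
// Hypothetical sketch of mapping chat history onto LLM roles. The real
// previousMessages helper reads from the database and may format differently.
type Role = 'system' | 'user' | 'assistant';
interface LLMMessage { role: Role; content: string; }
interface ChatMessage { author: string; text: string; }

function toLLMMessages(history: ChatMessage[], selfName: string): LLMMessage[] {
  return history.map((m) => ({
    // The agent's own past lines are what the model "said" (assistant);
    // everything else is input from the conversation partner (user).
    role: m.author === selfName ? 'assistant' : 'user',
    content: `${m.author}: ${m.text}`,
  }));
}

const msgs = toLLMMessages(
  [
    { author: 'Bob', text: 'How was your day?' },
    { author: 'Alice', text: 'Busy, but good!' },
  ],
  'Alice',
);
// msgs[0].role === 'user', msgs[1].role === 'assistant'
```

Prefixing each line with the speaker's name keeps attribution unambiguous even after the roles are flattened into the model's context window.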

leaveConversationMessage(...)

This function generates a polite closing statement when an agent decides to end a conversation. The flow is nearly identical to continuing a conversation, gathering the same context and chat history. The key difference is the prompt’s intent, which explicitly tells the LLM: “You’ve decided to leave… How would you like to tell them that you’re leaving?” This ensures conversations have a natural and socially appropriate conclusion.

export async function leaveConversationMessage(
  ctx: ActionCtx,
  worldId: Id<'worlds'>,
  conversationId: GameId<'conversations'>,
  playerId: GameId<'players'>,
  otherPlayerId: GameId<'players'>,
): Promise<string> {
  //... (data fetching is similar to continueConversationMessage)
  
  const prompt = [
    `You are ${player.name}, and you're currently in a conversation with ${otherPlayer.name}.`,
    `You've decided to leave the question and would like to politely tell them you're leaving the conversation.`,
    //...
    `How would you like to tell them that you're leaving? Your response should be brief and within 200 characters.`,
  ];
  //... (call chatCompletion)
}

queryPromptData(...)

This is a powerful, internal-only database query that serves as the data backbone for all dialogue generation. It takes a set of IDs and returns a single, rich object containing all the information needed to build a prompt, including player details, agent personalities and plans, the current conversation state, and the history of the last interaction between the two agents. It performs rigorous checks to ensure data integrity, throwing an error if any critical piece of information is missing.

export const queryPromptData = internalQuery({
  args: {
    worldId: v.id('worlds'),
    playerId,
    otherPlayerId: playerId,
    conversationId,
  },
  handler: async (ctx, args) => {
    const world = await ctx.db.get(args.worldId);
    if (!world) { /* ... error handling ... */ }
    const player = world.players.find((p) => p.id === args.playerId);
    if (!player) { /* ... error handling ... */ }
    const playerDescription = await ctx.db
      .query('playerDescriptions')
      //...
      .first();
    // ... (fetches otherPlayer, otherPlayerDescription, conversation, agent, otherAgent, etc.)
 
    const lastTogether = await ctx.db
      .query('participatedTogether')
      //...
      .first();
 
    let lastConversation = null;
    if (lastTogether) {
      lastConversation = await ctx.db
        .query('archivedConversations')
        //...
        .first();
    }
    return {
      player: { name: playerDescription.name, ...player },
      otherPlayer: { name: otherPlayerDescription.name, ...otherPlayer },
      conversation,
      agent: { identity: agentDescription.identity, plan: agentDescription.plan, ...agent },
      otherAgent: otherAgent && { /* ... */ },
      lastConversation,
    };
  },
});
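The integrity checks elided in the handler above follow a simple fail-fast pattern: look up each record and throw immediately if it is missing, so a prompt is never built from partial data. A minimal standalone illustration of the pattern (the `findPlayerOrThrow` helper and error message are illustrative, not from the file):

```typescript
// Fail-fast lookup, as used throughout queryPromptData: any missing record
// aborts the whole query rather than producing an incomplete prompt.
interface Player { id: string; name: string; }

function findPlayerOrThrow(players: Player[], playerId: string): Player {
  const player = players.find((p) => p.id === playerId);
  if (!player) {
    // Hypothetical error message; the real handler throws similarly on
    // missing worlds, players, descriptions, and conversations.
    throw new Error(`Invalid player ID: ${playerId}`);
  }
  return player;
}

const players: Player[] = [{ id: 'p:1', name: 'Alice' }];
const alice = findPlayerOrThrow(players, 'p:1'); // returns the Alice record

let failed = false;
try {
  findPlayerOrThrow(players, 'p:404');
} catch {
  failed = true; // the lookup for an unknown ID throws as expected
}
```

Because every downstream prompt builder destructures this query's result directly, failing loudly here is what lets `startConversationMessage` and its siblings assume all fields are present.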