Tutorials

Building chatbots

These two reference implementations show how to wire an LLM into the FrontRow API end-to-end: receive a fan message via webhook, run it through your model, and reply through the API. Use them as a starting point for a custom chatbot, or to embed FrontRow chat into an existing application.

Example repos are available in both Node.js / TypeScript and Python. The inline minimal example below is TypeScript — the same calls work from any language.

How the bot works

Both starters follow the same three-stage loop:

  1. Receive. FrontRow delivers a signed message.received webhook to your endpoint whenever a fan sends a DM.
  2. Reason. Your handler verifies the HMAC signature, loads recent conversation history, and calls your LLM with a persona prompt.
  3. Reply. The handler posts the model’s response back through POST /conversations/:id/messages. FrontRow tags the message as AI-generated automatically.

Minimal example

If you’d rather not clone a starter, here’s the smallest useful version of the loop in TypeScript. It assumes you already verified the webhook signature (see the Security tutorial).

```ts
// bot.ts
import OpenAI from "openai";

const openai = new OpenAI();
const API_BASE = "https://frontrow.center/api/v1";

const PERSONA = `You are a friendly creator chatting with a fan.
Keep replies under 2 sentences. Never break character.`;

export async function handleMessage(event: {
  conversationId: string;
  message: { text: string; from: { handle: string } };
}) {
  // Load the last few messages so the model has conversational context.
  const history = await fetch(
    `${API_BASE}/conversations/${event.conversationId}/messages?limit=10`,
    {
      headers: { Authorization: `Bearer ${process.env.FRONTROW_API_KEY}` },
    }
  ).then((r) => r.json());

  // Generate a reply in persona.
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: PERSONA },
      ...history.messages.map((m: { fromMe: boolean; text: string }) => ({
        role: m.fromMe ? ("assistant" as const) : ("user" as const),
        content: m.text,
      })),
      { role: "user", content: event.message.text },
    ],
  });

  // Post the reply back to the conversation.
  await fetch(`${API_BASE}/conversations/${event.conversationId}/messages`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.FRONTROW_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      text: completion.choices[0].message.content,
    }),
  });
}
```
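To wire this handler into your own webhook endpoint, you first need to unpack the delivery payload into the event shape handleMessage expects. A minimal sketch; the payload field names here (type, data) are assumptions for illustration — check the webhook reference for the exact schema:

```typescript
// Hypothetical shape of a message.received delivery; the real schema may differ.
interface WebhookPayload {
  type: string;
  data: {
    conversationId: string;
    message: { text: string; from: { handle: string } };
  };
}

// Narrow the payload and return the event handleMessage expects,
// or null for event types the bot should ignore.
export function toBotEvent(payload: WebhookPayload) {
  if (payload.type !== "message.received") return null;
  return {
    conversationId: payload.data.conversationId,
    message: payload.data.message,
  };
}
```

Returning null for other event types keeps the bot from replying to its own outbound messages or to unrelated events.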

Production checklist

  • Idempotency. Webhooks may be retried. Track the X-FrontRow-Delivery header and skip duplicate IDs.
  • Latency budget. FrontRow expects a 2xx within 5 seconds. Acknowledge first, then process the message in a background queue.
  • Persona drift. Anchor the system prompt with concrete examples and refuse rules. Long conversations drift without explicit constraints.
  • Cost control. Cap context window length and rate-limit per-fan replies. A single high-spend fan can otherwise drive runaway model costs.
  • Compliance. AI-generated messages are labeled automatically — never strip the flag. Doing so violates FrontRow’s creator agreement.

Next steps

Once your bot is running for one-on-one chats, you can layer on mass outreach for re-engagement campaigns, or harden your webhook endpoint against abuse.