Building chatbots
Two reference implementations that show how to wire an LLM into the FrontRow API end-to-end — receive a fan message via webhook, run it through your model, and reply through the API. Use them as a starting point for a custom chatbot or to embed FrontRow chat into an existing application.
Example repos are available in both Node.js / TypeScript and Python. The inline minimal example below is TypeScript — the same calls work from any language.
Example repos

- Node.js / TypeScript: an always-on agent that auto-replies to fan DMs through your LLM, with conversation memory and an admin view to spot-check what it's saying. View on GitHub →
- Python: the same loop in Python; receive a DM, ask Claude what to reply, send it, and persist every turn to Postgres for a queryable record. View on GitHub →
How the bot works
Both starters follow the same three-stage loop:
1. Receive. FrontRow delivers a signed `message.received` webhook to your endpoint whenever a fan sends a DM.
2. Reason. Your handler verifies the HMAC signature (a verification sketch follows this list), loads recent conversation history, and calls your LLM with a persona prompt.
3. Reply. The handler posts the model's response back through `POST /conversations/:id/messages`. FrontRow tags the message as AI-generated automatically.
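
For the verify step, here is a minimal sketch. The `X-FrontRow-Signature` header name and hex-encoded HMAC-SHA256 scheme are assumptions for illustration; the Security tutorial has the authoritative scheme.

```typescript
import crypto from "node:crypto";

// Sketch only: header name and encoding are assumptions --
// check the Security tutorial for the exact scheme.
export function verifySignature(rawBody: Buffer, signatureHeader: string): boolean {
  const expected = crypto
    .createHmac("sha256", process.env.FRONTROW_WEBHOOK_SECRET!)
    .update(rawBody)
    .digest("hex");
  // Constant-time comparison; check lengths first so timingSafeEqual can't throw.
  return (
    signatureHeader.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(signatureHeader), Buffer.from(expected))
  );
}
```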
Minimal example
If you’d rather not clone a starter, here’s the smallest useful version of the loop in TypeScript. It assumes you already verified the webhook signature (see the Security tutorial).
```typescript
import OpenAI from "openai";

const openai = new OpenAI();
const API_BASE = "https://frontrow.center/api/v1";

const PERSONA = `You are a friendly creator chatting with a fan.
Keep replies under 2 sentences. Never break character.`;

export async function handleMessage(event: {
  conversationId: string;
  message: { text: string; from: { handle: string } };
}) {
  // Load the last 10 turns so the model has conversation context.
  const history = await fetch(
    `${API_BASE}/conversations/${event.conversationId}/messages?limit=10`,
    {
      headers: { Authorization: `Bearer ${process.env.FRONTROW_API_KEY}` },
    }
  ).then((r) => r.json());

  // Map prior turns onto chat roles: the creator's messages become "assistant".
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: PERSONA },
      ...history.messages.map((m: { fromMe: boolean; text: string }) => ({
        role: m.fromMe ? "assistant" : "user",
        content: m.text,
      })),
      { role: "user", content: event.message.text },
    ],
  });

  // Post the reply back; FrontRow labels it as AI-generated.
  await fetch(`${API_BASE}/conversations/${event.conversationId}/messages`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.FRONTROW_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      text: completion.choices[0].message.content,
    }),
  });
}
```
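
Wiring this into a server is just a matter of calling it with the parsed webhook payload. A synthetic invocation (field values made up) looks like:

```typescript
await handleMessage({
  conversationId: "conv_123", // illustrative ID
  message: { text: "loved the last drop!", from: { handle: "superfan42" } },
});
```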
Production checklist
- Idempotency. Webhooks may be retried. Track the `X-FrontRow-Delivery` header and skip duplicate IDs (see the sketch after this list).
- Latency budget. FrontRow expects a 2xx within 5 seconds. Acknowledge first, then process the message in a background queue.
- Persona drift. Anchor the system prompt with concrete examples and refuse rules. Long conversations drift without explicit constraints.
- Cost control. Cap context window length and rate-limit per-fan replies. A single high-spend fan can otherwise drive runaway model costs.
- Compliance. AI-generated messages are labeled automatically — never strip the flag. Doing so violates FrontRow’s creator agreement.
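
The first two items combine naturally in the webhook endpoint. A minimal sketch, assuming Express, an arbitrary route path, and an in-memory dedupe set (swap in Redis or your job queue's deduplication for production); `handleMessage` is the function from the minimal example above:

```typescript
import express from "express";
import { handleMessage } from "./handler"; // illustrative path

const app = express();
app.use(express.json());

const seenDeliveries = new Set<string>(); // in-memory for the sketch only

app.post("/webhooks/frontrow", (req, res) => {
  const deliveryId = req.header("X-FrontRow-Delivery") ?? "";
  if (seenDeliveries.has(deliveryId)) {
    res.status(200).end(); // retried delivery: acknowledge again, do nothing
    return;
  }
  seenDeliveries.add(deliveryId);
  res.status(200).end(); // respond well inside the 5-second budget
  // Process after acknowledging; a real deployment would enqueue a job here.
  setImmediate(() => handleMessage(req.body).catch(console.error));
});

app.listen(3000);
```

For cost control, the `limit=10` history fetch in the minimal example is the corresponding lever: lowering it trades context for tokens.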
Next steps
Once your bot is running for one-on-one chats, you can layer on mass outreach for re-engagement campaigns, or harden your webhook endpoint against abuse.