Chat Stream API

Stream AI-powered chat responses using the Recoup chat system. This endpoint accepts the same request payload as the Chat Generate API but returns a streaming response produced by the Vercel AI SDK's createUIMessageStreamResponse helper.

Endpoint

POST https://chat.recoupable.com/api/chat

Authentication

All requests to this endpoint must be authenticated using an API key header:

| Header | Type | Required | Description |
| --- | --- | --- | --- |
| x-api-key | string | Yes | Your Recoup API key. See Getting Started for how to create and manage keys. |

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| prompt | string | No | Single text prompt for the assistant. Required if messages is not provided. |
| messages | array | No | Array of UIMessage objects for context. Required if prompt is not provided. |
| artistId | string | No | The unique identifier of the artist. |
| model | string | No | The AI model to use for text generation. |
| excludeTools | array | No | Array of tool names to exclude from execution (e.g., ["create_scheduled_actions"]). |
| roomId | string | No | UUID of the chat room. If not provided, one is generated automatically. Use Create Chat to create a chat beforehand. |

Exactly one of prompt or messages must be provided in each request.

Request Examples

cURL
# Stream all responses to stdout
curl -N -X POST "https://chat.recoupable.com/api/chat" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "prompt": "Draft a tweet announcing our new single."
  }'
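
For multi-turn context, you can send messages instead of prompt. The request body below is a sketch: the message part shape mirrors the content parts shown in the sample stream output further down, and the artistId and roomId values are placeholders; consult the UIMessage type in the Vercel AI SDK for the authoritative message shape.

```json
{
  "messages": [
    {
      "id": "msg-user-1",
      "role": "user",
      "content": [
        { "type": "text", "text": "Draft a tweet announcing our new single." }
      ]
    }
  ],
  "artistId": "your-artist-id",
  "roomId": "00000000-0000-0000-0000-000000000000"
}
```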

Response Format

The endpoint returns a streaming HTTP response produced by createUIMessageStreamResponse. The stream emits UI message parts encoded as data chunks that can be parsed with createUIMessageStreamParser.
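
As a minimal sketch of the client-side chunk handling, the helper below splits incoming text chunks on SSE-style "data: ...\n\n" boundaries, assuming the framing shown in the sample output in this document. It is not the AI SDK's own parser; in production, prefer the SDK helpers.

```typescript
// Split raw stream chunks into complete event payloads.
// An event may be split across chunks, so the unconsumed tail
// is returned as `rest` and should be passed back in on the next call.
function splitEvents(
  chunks: string[],
  buffer = ""
): { events: string[]; rest: string } {
  const events: string[] = [];
  for (const chunk of chunks) {
    buffer += chunk;
    let idx: number;
    // Each complete SSE event ends with a blank line ("\n\n").
    while ((idx = buffer.indexOf("\n\n")) !== -1) {
      const block = buffer.slice(0, idx).trim();
      buffer = buffer.slice(idx + 2);
      if (block.startsWith("data: ")) {
        events.push(block.slice("data: ".length));
      }
    }
  }
  return { events, rest: buffer };
}
```

In practice the chunks would come from `response.body.getReader()` decoded with a `TextDecoder`; the returned `rest` buffer handles events whose bytes arrive across two reads.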

Stream Events

| Event Type | Description |
| --- | --- |
| message-start | Fired when the assistant begins generating a message. Contains the message metadata. |
| message-delta | Fired for incremental updates (token chunks, tool results, etc.); includes only the delta payload. |
| message-end | Fired when the assistant finishes a message. Contains the complete message payload. |
| error | Fired if an error occurs. The payload includes a serialized error string generated by serializeError. |
| metadata | Optional event that can include usage data, finish reasons, or other metadata emitted by downstream execution. |

Sample Stream Output

0:"data: {\"type\":\"message-start\",\"message\":{\"id\":\"msg-assistant-1\",\"role\":\"assistant\"}}\n\n"
1:"data: {\"type\":\"message-delta\",\"delta\":{\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello\"}]}}\n\n"
2:"data: {\"type\":\"message-delta\",\"delta\":{\"content\":[{\"type\":\"text\",\"text\":\"! Here's your draft.\"}]}}\n\n"
3:"data: {\"type\":\"message-end\",\"message\":{\"id\":\"msg-assistant-1\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello! Here's your draft.\"}]}}\n\n"

Notes

  • The response is streamed and should be consumed with a readable stream reader, an SSE parser, or the helpers provided by the Vercel AI SDK.
  • Errors emitted during streaming are serialized and returned as error events before the stream closes.
  • The same post-processing hooks as the generate endpoint run after the stream completes, ensuring completions are persisted.
  • CORS headers are automatically applied; OPTIONS requests respond with 200 for preflight checks.