Chat Stream API
Stream AI-powered chat responses using the Recoup chat system. This endpoint accepts the same request payload as the Chat Generate API but returns a streaming response compatible with the Vercel AI SDK's createUIMessageStreamResponse helper.
Endpoint
POST https://chat.recoupable.com/api/chat

Authentication
All requests to this endpoint must be authenticated using an API key header:
| Header | Type | Required | Description |
|---|---|---|---|
| x-api-key | string | Yes | Your Recoup API key. See Getting Started for how to create and manage keys. |
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| prompt | string | No | Single text prompt for the assistant. Required if messages is not provided. |
| messages | array | No | Array of UIMessage objects for context. Required if prompt is not provided. |
| artistId | string | No | The unique identifier of the artist. |
| model | string | No | The AI model to use for text generation. |
| excludeTools | array | No | Array of tool names to exclude from execution (e.g., ["create_scheduled_actions"]) |
| roomId | string | No | UUID of the chat room. If not provided, one will be generated automatically. Use Create Chat to create a chat beforehand. |
Exactly one of messages or prompt should be provided in each request.
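The mutual-exclusion rule above can be enforced client-side before sending a request. The sketch below is illustrative only: the field names come from the parameter table, but validatePayload is a hypothetical helper, not part of the API.

```typescript
// Hypothetical client-side guard for the "exactly one of
// prompt or messages" rule from the parameter table above.
interface ChatStreamPayload {
  prompt?: string;
  messages?: unknown[];
  artistId?: string;
  model?: string;
  excludeTools?: string[];
  roomId?: string;
}

function validatePayload(payload: ChatStreamPayload): void {
  const hasPrompt = typeof payload.prompt === "string";
  const hasMessages = Array.isArray(payload.messages);
  // Both present or both absent: the server would reject the request.
  if (hasPrompt === hasMessages) {
    throw new Error("Provide exactly one of prompt or messages.");
  }
}
```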
Request Examples
cURL
# Stream all responses to stdout
curl -N -X POST "https://chat.recoupable.com/api/chat" \
-H "Content-Type: application/json" \
-H "x-api-key: YOUR_API_KEY" \
-d '{
"prompt": "Draft a tweet announcing our new single."
}'

Response Format
The endpoint returns a streaming HTTP response produced by createUIMessageStreamResponse. The stream emits UI message parts encoded as data chunks that can be parsed with createUIMessageStreamParser.
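A client can consume this response with the standard fetch API and a stream reader. The sketch below assumes a Node 18+ runtime with a global fetch; the endpoint URL and header come from the sections above, while the chunk handling is deliberately minimal — a real client would typically use the Vercel AI SDK helpers instead.

```typescript
// Minimal stream-consumer sketch using the standard fetch API.
// A production client would normally use the Vercel AI SDK helpers.
async function streamChat(apiKey: string, prompt: string): Promise<void> {
  const res = await fetch("https://chat.recoupable.com/api/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": apiKey,
    },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok || !res.body) {
    throw new Error(`Request failed: ${res.status}`);
  }
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each decoded chunk contains one or more UI message parts.
    console.log(decoder.decode(value, { stream: true }));
  }
}
```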
Stream Events
| Event Type | Description |
|---|---|
| message-start | Fired when the assistant begins generating a message. Contains the message metadata. |
| message-delta | Fired for incremental updates (token chunks, tool results, etc.) and includes only the delta payload. |
| message-end | Fired when the assistant finishes a message. Contains the complete message payload. |
| error | Fired if an error occurs. The payload includes a serialized error string generated by serializeError. |
| metadata | Optional event that can include usage data, finish reasons, or other metadata emitted by downstream execution. |
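One way to act on these event types is a small reducer that folds parsed events into the final assistant text. This is a sketch only: reduceEvents is a hypothetical helper, and the event shapes are assumptions based on the table above and the sample output below.

```typescript
// Hypothetical reducer folding stream events into the final assistant
// text. Event shapes are assumptions, not a guaranteed wire format.
interface TextPart { type: string; text?: string }
interface ChatEvent {
  type: string;
  delta?: { content?: TextPart[] };
  error?: string;
}

function reduceEvents(events: ChatEvent[]): string {
  let text = "";
  for (const evt of events) {
    switch (evt.type) {
      case "message-delta":
        // Append each incremental text part to the running output.
        for (const part of evt.delta?.content ?? []) {
          if (part.type === "text" && part.text) text += part.text;
        }
        break;
      case "error":
        throw new Error(evt.error ?? "stream error");
      default:
        // message-start, message-end, and metadata carry no text deltas.
        break;
    }
  }
  return text;
}
```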
Sample Stream Output
0:"data: {\"type\":\"message-start\",\"message\":{\"id\":\"msg-assistant-1\",\"role\":\"assistant\"}}\n\n"
1:"data: {\"type\":\"message-delta\",\"delta\":{\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello\"}]}}\n\n"
2:"data: {\"type\":\"message-delta\",\"delta\":{\"content\":[{\"type\":\"text\",\"text\":\"! Here's your draft.\"}]}}\n\n"
3:"data: {\"type\":\"message-end\",\"message\":{\"id\":\"msg-assistant-1\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello! Here's your draft.\"}]}}\n\n"

Notes
- The response is streamed and should be consumed with a readable stream reader, an SSE parser, or the helpers provided by the Vercel AI SDK.
- Errors emitted during streaming are serialized and returned as error events before the stream closes.
- The same post-processing hooks as the generate endpoint run after the stream completes, ensuring completions are persisted.
- CORS headers are automatically applied; OPTIONS requests respond with 200 for preflight checks.
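The events in the sample output above can be recovered from the raw text of the stream with a small SSE-style parser. The sketch below assumes the "data: {...}\n\n" framing shown in the sample; parseStreamEvents is a hypothetical helper, not an SDK export, and real code would also need to buffer partial chunks that arrive split across reads.

```typescript
// Hypothetical parser extracting JSON event objects from
// "data: {...}\n\n" blocks like those in the sample output above.
interface StreamEvent {
  type: string;
  [key: string]: unknown;
}

function parseStreamEvents(raw: string): StreamEvent[] {
  const events: StreamEvent[] = [];
  // Events are separated by blank lines, per the SSE framing convention.
  for (const block of raw.split("\n\n")) {
    const line = block.trim();
    if (!line.startsWith("data:")) continue;
    events.push(JSON.parse(line.slice("data:".length).trim()));
  }
  return events;
}
```

In practice the input would be the concatenation of decoded chunks from the stream reader; an event split across two chunks must be held back until its terminating blank line arrives.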