Answers a natural language question using content from your project as context. The endpoint first performs semantic retrieval to find relevant content, then sends that context to an LLM to generate a grounded answer. Responses stream via Server-Sent Events (SSE).

Requires a paid plan with semantic search enabled.
Each source object follows the same shape as the Search Content response — sourceType, similarity, and a fully populated record. An empty array is sent when no relevant context was found.

done — signals the stream is finished:
event: done
data: {}
If no relevant content is found above the similarity threshold, the LLM response is skipped and the answer event immediately says “I couldn’t find any relevant content to answer your question.”
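The event flow above can be sketched with a small client-side parser. This is a minimal illustration, not part of the API: the helper name `parse_sse_events` is hypothetical, and it assumes the raw SSE response body is already available as text (events separated by blank lines, with `event:` and `data:` fields as shown above).

```python
import json

def parse_sse_events(raw: str):
    """Split a raw SSE body into (event, parsed_data) pairs.

    Illustrative helper, not part of the API. Assumes events are
    blank-line separated and data payloads are JSON, per the stream
    format described above ("sources", "answer", "done").
    """
    events = []
    for block in raw.strip().split("\n\n"):
        event, data_lines = "message", []
        for line in block.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        payload = json.loads("\n".join(data_lines)) if data_lines else None
        events.append((event, payload))
    return events

# The terminating event from the example above:
print(parse_sse_events("event: done\ndata: {}\n\n"))  # → [('done', {})]
```

A real client would read the response incrementally rather than buffering the whole body, dispatching on the event name as each block arrives.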