- Semantic search — find content, users, and spaces by meaning, not just keywords
- AI ask — a retrieval-augmented endpoint that synthesizes a streamed answer from your content in response to a natural-language question
Semantic search and AI features require a paid plan. Projects on the free tier cannot use embedding-based endpoints.
## What Gets Embedded
Replyke embeds the following content types automatically, with no configuration required beyond enabling the feature:

| Content Type | What is embedded |
|---|---|
| Entities | Title, body, and other text fields |
| Comments | Comment text content |
| Chat messages | Message text content |
| Users | Profile text assembled from name, username, and bio |
| Spaces | Space name, description, and related metadata |
## Semantic Search

There are three search endpoints, each scoped to a different content domain.

### Search Content

Searches across entities, comments, and chat messages. You can filter by source type and optionally scope results to a specific space or conversation.

- Parameters: `query`, `sourceTypes` (array of `"entity" | "comment" | "message"`), `spaceId` (optional), `conversationId` (optional), `limit`
- Returns: Array of results, each with `sourceType`, a `similarity` score (0–1), and the full hydrated `record` (entity, comment, or message object)
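As a sketch, a typed request helper for the content search endpoint might look like the following. The `/search/content` path, the `SourceType` alias, and the `buildContentSearchBody` helper are illustrative assumptions for this example, not the actual SDK surface:

```typescript
type SourceType = "entity" | "comment" | "message";

interface ContentSearchRequest {
  query: string;
  sourceTypes: SourceType[];
  spaceId?: string;
  conversationId?: string;
  limit?: number;
}

// Build the JSON request body, dropping optional fields that were not provided.
function buildContentSearchBody(req: ContentSearchRequest): Record<string, unknown> {
  const body: Record<string, unknown> = {
    query: req.query,
    sourceTypes: req.sourceTypes,
  };
  if (req.spaceId) body.spaceId = req.spaceId;
  if (req.conversationId) body.conversationId = req.conversationId;
  if (req.limit !== undefined) body.limit = req.limit;
  return body;
}

// Hypothetical endpoint path; check your Replyke project configuration
// for the actual base URL and auth headers.
async function searchContent(req: ContentSearchRequest): Promise<unknown> {
  const res = await fetch("/search/content", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildContentSearchBody(req)),
  });
  return res.json(); // array of { sourceType, similarity, record }
}
```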
### Search Users

Searches user profiles by semantic similarity to the query. Useful for finding users by description, interests, or bio content.

- Parameters: `query`, `limit`
- Returns: Array of results, each with a `similarity` score and the full `record` (user object)
### Search Spaces

Searches spaces by semantic similarity.

- Parameters: `query`, `limit`
- Returns: Array of results, each with a `similarity` score and the full `record` (space object)
Every result includes a `similarity` field (cosine similarity, 0–1) alongside the hydrated record, so you can display or filter by relevance score.
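Because each result carries a `similarity` score, results can be ranked and thresholded client-side. A minimal sketch; `SearchResult` and `filterBySimilarity` are hypothetical names for illustration:

```typescript
interface SearchResult<T> {
  similarity: number; // cosine similarity, 0–1
  record: T;          // the hydrated entity, comment, message, user, or space
}

// Keep only results at or above a relevance threshold, highest first.
function filterBySimilarity<T>(
  results: SearchResult<T>[],
  minSimilarity: number
): SearchResult<T>[] {
  return results
    .filter((r) => r.similarity >= minSimilarity)
    .sort((a, b) => b.similarity - a.similarity);
}
```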
## AI Ask
The ask endpoint takes a natural-language question, retrieves the most relevant content chunks from your project, and streams a synthesized answer from an LLM, grounded entirely in your content.

### How It Works
1. **Retrieval.** The most semantically similar content is retrieved from your project’s embeddings. Results can be scoped to a specific space or conversation.
2. **Generation.** The retrieved content is used as context for the LLM, which synthesizes an answer grounded in your data. If the context does not contain enough information to answer, the model says so.
### Response Format (SSE)
The ask endpoint responds with `Content-Type: text/event-stream`. Events are delivered in this order:
| Event | Data | Description |
|---|---|---|
| `token` | `{ "content": "..." }` | One token of the generated answer |
| `sources` | Array of content results | The hydrated source records used as context |
| `done` | `{}` | Stream complete |
| `error` | `{ "error": "..." }` | Only sent if an error occurs after streaming starts |
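A client consuming the stream needs to split SSE frames and dispatch on the event name. A minimal parser sketch, assuming well-formed frames separated by blank lines (`AskEvent` and `parseAskStream` are illustrative names, not part of the SDK):

```typescript
interface AskEvent {
  event: string;
  data: unknown;
}

// Parse a raw text/event-stream payload into typed events.
// Frames are separated by blank lines; each frame carries
// "event:" and "data:" fields per the SSE format.
function parseAskStream(raw: string): AskEvent[] {
  const events: AskEvent[] = [];
  for (const frame of raw.split("\n\n")) {
    let event = "message"; // SSE default when no event: field is present
    const dataLines: string[] = [];
    for (const line of frame.split("\n")) {
      if (line.startsWith("event:")) event = line.slice(6).trim();
      else if (line.startsWith("data:")) dataLines.push(line.slice(5).trim());
    }
    if (dataLines.length === 0) continue; // skip empty frames
    events.push({ event, data: JSON.parse(dataLines.join("\n")) });
  }
  return events;
}
```

In a real client you would feed decoded chunks from the response body into a buffer and parse complete frames as they arrive, appending `token` payloads to the displayed answer.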
### Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `query` | string | Yes | The natural-language question |
| `sourceTypes` | string[] | Yes | Content types to search: `"entity"`, `"comment"`, `"message"` |
| `spaceId` | string | No | Scope retrieval to a specific space |
| `conversationId` | string | No | Scope retrieval to a specific conversation |
| `limit` | number | No | Max number of source records to return |
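The required/optional split in the table above can be checked before a request is sent. A hedged sketch under that assumption; `validateAskParams` is a hypothetical helper, not part of the SDK:

```typescript
interface AskParams {
  query: string;
  sourceTypes: string[];
  spaceId?: string;
  conversationId?: string;
  limit?: number;
}

// Return a list of validation errors: query and sourceTypes are required,
// and each source type must be one of the three supported values.
function validateAskParams(p: Partial<AskParams>): string[] {
  const errors: string[] = [];
  if (!p.query || p.query.trim() === "") errors.push("query is required");
  if (!p.sourceTypes || p.sourceTypes.length === 0) {
    errors.push("sourceTypes is required");
  }
  const allowed = ["entity", "comment", "message"];
  for (const t of p.sourceTypes ?? []) {
    if (!allowed.includes(t)) errors.push(`unknown source type: ${t}`);
  }
  return errors;
}
```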
## SDK Integration

The SDK provides hooks for all three search endpoints and the ask endpoint:

- `useSearchContent` — search entities, comments, and messages
- `useSearchUsers` — search user profiles
- `useSearchSpaces` — search spaces
- `useAskContent` — ask a question and receive a streamed answer

