Building a RAG Workflow with Memories
This tutorial walks through building a workflow that searches a document knowledge base using vector search and passes the results to a Prompt node to answer a question.
Step 1: Set up the memory folder
- Go to Memories in the sidebar and create a new folder.
- Open the folder Settings.
- Enable Knowledge Base Search.
Enabling this setting activates vector embedding for all documents in the folder. Documents added before enabling the setting will be indexed retroactively.
Step 2: Add your documents
There are two ways to populate the folder:
| Method | Details |
|---|---|
| Upload files | Supported formats: PDF, Word, CSV. Content is extracted automatically. |
| Create manually | Write markdown content directly in the editor. |
Each document is queued for indexing immediately after saving.
Step 3: Wait for indexing
Embedding runs in the background after a document is created or updated. Each document displays a status indicator:
| Status | Meaning |
|---|---|
| Pending | Document is queued for embedding. |
| Processing | Embedding is in progress. |
| Completed | Document is indexed and available for vector search. |
| Failed | Embedding encountered an error. Re-save the document to retry. |
Vector search only returns results from documents with a Completed status.
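The Completed-only rule above amounts to filtering on indexing status before search. A minimal sketch, assuming a plain list of documents with status fields (the names here are illustrative, not the product's actual API):

```python
# Hypothetical document records; only "Completed" ones are searchable.
documents = [
    {"title": "Refund policy", "status": "Completed"},
    {"title": "Billing FAQ", "status": "Processing"},
    {"title": "Onboarding guide", "status": "Failed"},
]

def searchable(docs):
    """Return only documents that have finished embedding."""
    return [d for d in docs if d["status"] == "Completed"]

print([d["title"] for d in searchable(documents)])  # → ['Refund policy']
```

In practice this means a freshly uploaded document will not appear in results until its status indicator reaches Completed.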
Step 4: Build the workflow
Start node
Add one input variable:
| Variable | Type |
|---|---|
| question | String |
Memory node
Add a Memory node and configure it as follows:
| Field | Value |
|---|---|
| Operation | vector_search |
| Query | {{question}} |
| Folder | The folder created in Step 1 |
| Limit | 5 (number of chunks to return) |
| Output variable | context |
The query is semantic: searching for "billing problems" will match documents containing "customer charged twice" without requiring keyword overlap.
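To see why keyword overlap is not required, here is a toy sketch of the underlying idea: the query and each chunk are embedded as vectors, and closeness is measured with cosine similarity. The 3-dimensional vectors below are made up for illustration; real embeddings have hundreds or thousands of dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = [0.9, 0.1, 0.0]  # pretend embedding of "billing problems"
chunks = {
    "customer charged twice": [0.85, 0.2, 0.05],   # close in meaning
    "office holiday schedule": [0.05, 0.1, 0.95],  # unrelated
}
for text, vec in chunks.items():
    print(text, round(cosine(query, vec), 2))
```

The semantically related chunk scores far higher than the unrelated one even though it shares no words with the query.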
Prompt node
Add a Prompt node after the Memory node. In the instructions field, reference both input variables:
Answer the following question using only the context provided.
Question: {{question}}
Context:
{{context}}
The {{context}} variable contains the retrieved chunks and their similarity scores from the Memory node.
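Conceptually, the Prompt node fills the template by substituting both variables before the model is called. A rough sketch of that substitution, assuming a simple (chunk text, score) format for the retrieved context — the chunk format is an assumption, not taken from the product:

```python
def render_prompt(question, chunks):
    """Fill the prompt template with the question and retrieved chunks."""
    context = "\n".join(f"[{score:.2f}] {text}" for text, score in chunks)
    return (
        "Answer the following question using only the context provided.\n"
        f"Question: {question}\n"
        "Context:\n"
        f"{context}"
    )

chunks = [("Refunds are issued within 5 business days.", 0.88)]
print(render_prompt("How long do refunds take?", chunks))
```

The "using only the context provided" instruction is what keeps the model grounded in the retrieved chunks rather than its general knowledge.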
Step 5: Test it
- Run the workflow with a sample question.
- Inspect the Memory node output to review which chunks were retrieved and their similarity scores.
- If results are too broad, increase the similarity threshold. If too few results are returned, lower it. The default threshold is 0.7.
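The threshold trade-off can be sketched with made-up similarity scores: raising the threshold keeps only closer matches, lowering it admits broader ones.

```python
# Hypothetical retrieval results as (chunk, similarity score) pairs.
results = [("chunk A", 0.91), ("chunk B", 0.74), ("chunk C", 0.62)]

def above(results, threshold):
    """Keep only chunks at or above the similarity threshold."""
    return [text for text, score in results if score >= threshold]

print(above(results, 0.7))   # default-like threshold → ['chunk A', 'chunk B']
print(above(results, 0.85))  # stricter → ['chunk A']
print(above(results, 0.5))   # looser → ['chunk A', 'chunk B', 'chunk C']
```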
Going further
- Project reference: Assign the folder to a Project as a Reference. It will be searched automatically in every Project chat without requiring a workflow.
- Empty results handling: Add a Condition node after the Memory node to branch on whether {{context}} is empty, and return a fallback response when no chunks are found.
- Multiple folders: Add additional Memory nodes, each pointing to a different folder, to search across separate knowledge bases in a single workflow.
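The empty-results branch above works like a guard clause: if retrieval returns nothing, skip the Prompt node and return a fallback instead. A minimal sketch with illustrative names (the real Condition node is configured in the builder, not in code):

```python
def answer(question, retrieved_chunks):
    """Branch on whether retrieval found anything, like a Condition node."""
    if not retrieved_chunks:  # context is empty → fallback path
        return "I couldn't find anything relevant in the knowledge base."
    return f"Answering {question!r} from {len(retrieved_chunks)} chunk(s)."

print(answer("How do refunds work?", []))
print(answer("How do refunds work?", ["Refunds are issued within 5 days."]))
```

Without this branch, an empty context would still be passed to the Prompt node, and the model may invent an answer instead of admitting it found nothing.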