Building a RAG Workflow with Memories

This tutorial walks through building a workflow that searches a document knowledge base using vector search and passes the results to a Prompt node to answer a question.

Step 1: Set up the memory folder

  1. Go to Memories in the sidebar and create a new folder.
  2. Open the folder Settings.
  3. Enable Knowledge Base Search.

Enabling this setting activates vector embedding for all documents in the folder. Documents added before enabling the setting will be indexed retroactively.
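Conceptually, indexing splits each document into chunks and stores one embedding vector per chunk. A minimal sketch of that idea, where `embed` is a toy hash-based stand-in for the real embedding model (both function names and the chunking strategy here are illustrative, not the product's internals):

```python
import hashlib

def embed(text, dims=8):
    """Toy stand-in for an embedding model: derives a fixed-length
    vector from a hash of the text. Real indexing calls a model API."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:dims]]

def index_document(text, chunk_size=200):
    """Split a document into fixed-size chunks and embed each one."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    return [{"text": c, "vector": embed(c)} for c in chunks]

entries = index_document(
    "Refund policy: customers may request a refund within 30 days. " * 10
)
```

Once every chunk has a stored vector, a search query is embedded the same way and compared against them.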

Step 2: Add your documents

There are two ways to populate the folder:

| Method | Details |
| --- | --- |
| Upload files | Supported formats: PDF, Word, CSV. Content is extracted automatically. |
| Create manually | Write markdown content directly in the editor. |

Each document is queued for indexing immediately after saving.

Step 3: Wait for indexing

Embedding runs in the background after a document is created or updated. Each document displays a status indicator:

| Status | Meaning |
| --- | --- |
| Pending | Document is queued for embedding. |
| Processing | Embedding is in progress. |
| Completed | Document is indexed and available for vector search. |
| Failed | Embedding encountered an error. Re-save the document to retry. |

Vector search only returns results from documents with a Completed status.
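That rule amounts to a pre-filter on the search side: only chunks whose parent document has reached Completed are candidates. A sketch of the idea (the document records and field names below are illustrative, not the product's actual data model):

```python
def searchable_chunks(documents):
    """Return only chunks from documents that finished embedding.
    Pending, Processing, and Failed documents are skipped entirely."""
    return [
        chunk
        for doc in documents
        if doc["status"] == "Completed"
        for chunk in doc["chunks"]
    ]

docs = [
    {"status": "Completed", "chunks": ["refund policy text"]},
    {"status": "Pending", "chunks": ["new, not yet indexed"]},
    {"status": "Failed", "chunks": ["embedding errored out"]},
]
```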

Step 4: Build the workflow

Start node

Add one input variable:

| Variable | Type |
| --- | --- |
| question | String |

Memory node

Add a Memory node and configure it as follows:

| Field | Value |
| --- | --- |
| Operation | vector_search |
| Query | {{question}} |
| Folder | The folder created in Step 1 |
| Limit | 5 (number of chunks to return) |
| Output variable | context |
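Expressed as plain data, the configuration above might look like the following. The field names mirror the table; the exact schema and the folder name are assumptions for illustration:

```python
memory_node = {
    "operation": "vector_search",
    "query": "{{question}}",   # resolved from the Start node at runtime
    "folder": "support-kb",    # hypothetical name for the Step 1 folder
    "limit": 5,                # number of chunks to return
    "output_variable": "context",
}
```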

The query is semantic: searching for "billing problems" will match documents containing "customer charged twice" without requiring keyword overlap.
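Under the hood, semantic matching compares embedding vectors rather than keywords, typically with cosine similarity. A sketch using hand-picked toy vectors in place of real embeddings (the numbers are invented purely to illustrate the ranking):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: a real model places paraphrases close together.
query          = [0.9, 0.1, 0.2]  # "billing problems"
double_charge  = [0.8, 0.2, 0.1]  # "customer charged twice"
password_reset = [0.1, 0.9, 0.7]  # "how to reset a password"
```

With these vectors, the paraphrase about a double charge scores far higher against the query than the unrelated password document, despite sharing no keywords.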

Prompt node

Add a Prompt node after the Memory node. In the instructions field, reference both input variables:

Answer the following question using only the context provided.

Question: {{question}}

Context:
{{context}}

The {{context}} variable contains the retrieved chunks and their similarity scores from the Memory node.
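Variable substitution in the instructions field works like simple template expansion. A sketch of the idea (the `render` helper is hypothetical, not the platform's actual implementation):

```python
def render(template, variables):
    """Replace each {{name}} placeholder with its value."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", str(value))
    return template

prompt = render(
    "Answer the following question using only the context provided.\n\n"
    "Question: {{question}}\n\nContext:\n{{context}}",
    {
        "question": "What is the refund window?",
        "context": "Refunds are accepted within 30 days. (score: 0.82)",
    },
)
```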

Step 5: Test it

  1. Run the workflow with a sample question.
  2. Inspect the Memory node output to review which chunks were retrieved and their similarity scores.
  3. If results are too broad, increase the similarity threshold. If too few results are returned, lower it. The default threshold is 0.7.
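The threshold in step 3 acts as a cut-off on similarity scores: only chunks scoring at or above it are returned, best-first, up to the configured limit. A minimal sketch of that behavior (the scores and chunk texts are invented):

```python
def retrieve(scored_chunks, threshold=0.7, limit=5):
    """Keep chunks at or above the threshold, best-first, up to the limit."""
    passing = [c for c in scored_chunks if c[1] >= threshold]
    passing.sort(key=lambda c: c[1], reverse=True)
    return passing[:limit]

chunks = [
    ("customer charged twice", 0.86),
    ("refund policy", 0.74),
    ("office hours", 0.41),
]
# Raising the threshold to 0.8 narrows results; lowering it broadens them.
```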

Going further

  • Project reference: Assign the folder to a Project as a Reference. It will be searched automatically in every Project chat without requiring a workflow.
  • Empty results handling: Add a Condition node after the Memory node to branch on whether context is empty, and return a fallback response when no chunks are found.
  • Multiple folders: Add additional Memory nodes, each pointing to a different folder, to search across separate knowledge bases in a single workflow.
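The empty-results branch described above reduces to a simple check on the retrieved context. A sketch of the Condition node's logic as plain code, outside the actual workflow engine (the function and messages are hypothetical):

```python
def answer_or_fallback(context_chunks):
    """Branch on whether retrieval returned any chunks."""
    if not context_chunks:
        return "I couldn't find anything relevant in the knowledge base."
    return "Answering from {} retrieved chunk(s).".format(len(context_chunks))
```

This keeps the Prompt node from answering with an empty context, which often produces hallucinated responses.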