# Window Buffer Memory
Use the Window Buffer Memory node to persist chat history in your workflow.
On this page, you'll find a list of operations the Window Buffer Memory node supports, and links to more resources.
**Don't use this node if running n8n in queue mode**

If your n8n instance uses queue mode, this node doesn't work in a production (active) workflow, because n8n can't guarantee that every call to Window Buffer Memory reaches the same worker.
**Parameter resolution in sub-nodes**
Sub-nodes behave differently to other nodes when processing multiple items using an expression.
Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn.
In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name.
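The difference can be sketched in plain JavaScript. This is an illustrative toy model of the resolution behavior, not n8n's actual implementation; the item shape mirrors how `$json` exposes each item's data:

```javascript
// Five input items, each with a `name` value, as n8n items expose them via $json.
const items = [
  { json: { name: "Ana" } },
  { json: { name: "Ben" } },
  { json: { name: "Cy" } },
  { json: { name: "Dee" } },
  { json: { name: "Eli" } },
];

// Root nodes: {{ $json.name }} is resolved once per input item.
const resolvedPerItem = items.map((item) => item.json.name);
// → ["Ana", "Ben", "Cy", "Dee", "Eli"]

// Sub-nodes: {{ $json.name }} is resolved against the first item only.
const resolvedFirstOnly = items[0].json.name;
// → "Ana"
```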
## Node parameters
- Session Key: the key to use to store the memory in the workflow data.
- Context Window Length: the number of previous messages to consider for context.
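To make the two parameters concrete, here is a minimal sketch of a window-buffer memory: histories are keyed by session, and only the last `contextWindowLength` messages are returned as context. The class and method names are illustrative assumptions, not n8n's internal API:

```javascript
// Minimal sketch of a windowed chat memory (illustrative, not n8n internals).
class WindowBufferMemory {
  constructor(contextWindowLength) {
    this.contextWindowLength = contextWindowLength; // Context Window Length
    this.sessions = new Map(); // Session Key → message history
  }

  addMessage(sessionKey, message) {
    if (!this.sessions.has(sessionKey)) this.sessions.set(sessionKey, []);
    this.sessions.get(sessionKey).push(message);
  }

  // Only the most recent `contextWindowLength` messages count as context.
  getContext(sessionKey) {
    const history = this.sessions.get(sessionKey) ?? [];
    return history.slice(-this.contextWindowLength);
  }
}

const memory = new WindowBufferMemory(2);
memory.addMessage("chat-1", "Hi");
memory.addMessage("chat-1", "How are you?");
memory.addMessage("chat-1", "Tell me a joke");
memory.getContext("chat-1"); // → ["How are you?", "Tell me a joke"]
```

A larger Context Window Length gives the model more conversational context per call, at the cost of a longer prompt.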
## Templates and examples
## Related resources
Refer to LangChain's Buffer Window Memory documentation for more information about the underlying memory type.
View n8n's Advanced AI documentation.
## Single memory instance
If you add more than one Window Buffer Memory (easiest) node to your workflow, all of them access the same memory instance by default. Be careful with destructive actions that overwrite existing memory contents, such as the override all messages operation in the Chat Memory Manager node. If you want more than one memory instance in your workflow, set a different session ID in each memory node.
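Why distinct session IDs isolate memory can be sketched as follows: each session ID keys its own history, so a destructive write to one session leaves the others untouched. This is an illustrative model only; n8n's actual storage is internal:

```javascript
// Illustrative sketch: session IDs key separate histories.
const store = new Map();

function remember(sessionId, message) {
  if (!store.has(sessionId)) store.set(sessionId, []);
  store.get(sessionId).push(message);
}

remember("agent-a", "hello from A");
remember("agent-b", "hello from B");

// A destructive "override all messages"-style write on one session
// replaces only that session's history.
store.set("agent-a", ["replaced"]);

store.get("agent-a"); // → ["replaced"]
store.get("agent-b"); // → ["hello from B"] (unaffected)
```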