
Embeddings HuggingFace Inference#

Use the Embeddings HuggingFace Inference node to generate embeddings for a given text.

On this page, you'll find the node parameters for the Embeddings HuggingFace Inference node, and links to more resources.

Credentials

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.
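For illustration, here is a minimal TypeScript sketch of that resolution behaviour. The item data and names are invented and this is not n8n source code; it only mirrors how {{ $json.name }} resolves in each case.

```typescript
// Conceptual sketch only (hypothetical data, not n8n source code):
// how {{ $json.name }} resolves in a root node vs. a sub-node.
const items = [
  { json: { name: "Ada" } },
  { json: { name: "Grace" } },
  { json: { name: "Linus" } },
  { json: { name: "Margaret" } },
  { json: { name: "Alan" } },
];

// Root node: the expression is evaluated once per input item.
const rootNodeResults = items.map((item) => item.json.name);
// ["Ada", "Grace", "Linus", "Margaret", "Alan"]

// Sub-node: the expression always resolves against the first item.
const subNodeResult = items[0].json.name;
// "Ada"

console.log(rootNodeResults, subNodeResult);
```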

Node parameters#

Model: the model to use to generate the embedding. Refer to the Hugging Face models documentation for available models.

Node options#

Custom Inference Endpoint: the URL of your deployed model, hosted by HuggingFace. If you set this, n8n ignores the Model parameter.

Refer to HuggingFace's guide to inference for more information.
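For context, the node's embedding calls boil down to Hugging Face's feature-extraction inference endpoint. The sketch below is a rough, standalone illustration of that kind of request, not the node's implementation: the model name, token variable, and URL are placeholders, and the exact route and response shape depend on the model and on Hugging Face's current Inference API routing.

```typescript
// Hedged illustration of a feature-extraction (embedding) request.
// The node makes calls like this for you; you don't need to write this yourself.
const HF_TOKEN = process.env.HUGGINGFACE_API_TOKEN ?? ""; // placeholder variable name
const endpoint =
  "https://api-inference.huggingface.co/pipeline/feature-extraction/sentence-transformers/all-MiniLM-L6-v2";

async function embed(texts: string[]): Promise<number[][]> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${HF_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ inputs: texts }),
  });
  if (!res.ok) {
    throw new Error(`Inference request failed: ${res.status} ${await res.text()}`);
  }
  // For sentence-transformers models, this is one vector per input text.
  return (await res.json()) as number[][];
}

embed(["n8n makes workflow automation easy."]).then((vectors) =>
  console.log(`Got ${vectors.length} vector(s) of length ${vectors[0].length}`),
);
```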

Templates and examples#

Ask questions about a PDF using AI, by David Roberts
Chat with PDF docs using AI (quoting sources), by David Roberts
AI Crew to Automate Fundamental Stock Analysis - Q&A Workflow, by Derek Cheung
Browse Embeddings HuggingFace Inference integration templates, or search all templates.

Refer to LangChain's HuggingFace Inference embeddings documentation for more information about the service.
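As a rough standalone sketch of what this node wraps, the example below uses LangChain.js's HuggingFaceInferenceEmbeddings class, assuming the @langchain/community package is installed. The model name and environment variable are placeholders, and the commented-out endpointUrl option (for a custom Inference Endpoint) may vary between versions.

```typescript
import { HuggingFaceInferenceEmbeddings } from "@langchain/community/embeddings/hf";

// Placeholder model name and API key; swap in your own values.
const embeddings = new HuggingFaceInferenceEmbeddings({
  apiKey: process.env.HUGGINGFACEHUB_API_KEY,
  model: "sentence-transformers/all-MiniLM-L6-v2",
  // endpointUrl: "https://<your-endpoint>.endpoints.huggingface.cloud", // custom Inference Endpoint, if your version supports it
});

async function main(): Promise<void> {
  // One vector per document, plus a single vector for the query.
  const docVectors = await embeddings.embedDocuments([
    "n8n is a workflow automation tool.",
    "Embeddings map text to numeric vectors.",
  ]);
  const queryVector = await embeddings.embedQuery("What is n8n?");
  console.log(docVectors.length, queryVector.length);
}

main().catch(console.error);
```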

View n8n's Advanced AI documentation.