Hugging Face Inference Model node

Integrate Hugging Face Inference Model into your LLM apps and connect it with 422+ other apps and services

Use Hugging Face Inference Model to easily build AI-powered applications and integrate them with 422+ apps and services. n8n lets you seamlessly import data from files, websites, or databases into your LLM-powered application and create automated workflows.

Popular ways to use Hugging Face Inference Model integration


Use an open-source LLM (via HuggingFace)

This workflow demonstrates how to connect an open-source model to a Basic LLM Chain node. The workflow is triggered when a new manual chat message arrives. The message is then run through a Language Model Chain configured with a specific prompt that guides the model's responses. Note that open-source LLMs with a small number of parameters need more explicit prompting, with additional guidance for the model. You can swap the default Mistral-7B-Instruct-v0.1 model for any other LLM hosted on Hugging Face, or connect other model nodes, such as Ollama. To use this template, you need to be on n8n version 1.19.4 or later.
n8n Team
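Under the hood, the Hugging Face Inference Model node sends the assembled prompt to Hugging Face's hosted Inference API. The Python sketch below shows a roughly equivalent call made outside n8n; it assumes the serverless Inference API endpoint, an API token stored in a hypothetical HF_API_TOKEN environment variable, and the template's default Mistral-7B-Instruct-v0.1 model. The exact request the node builds may differ.

```python
import os
import requests

# Sketch of the kind of request the Hugging Face Inference Model node issues
# (assumes the serverless Inference API and a token in HF_API_TOKEN).
MODEL = "mistralai/Mistral-7B-Instruct-v0.1"  # swap for any other hosted model
API_URL = f"https://api-inference.huggingface.co/models/{MODEL}"
HEADERS = {"Authorization": f"Bearer {os.environ['HF_API_TOKEN']}"}


def generate(user_message: str) -> str:
    # Smaller open-source models respond better to an explicit,
    # instruction-style prompt (here, Mistral's [INST] ... [/INST] format).
    prompt = (
        "[INST] You are a helpful assistant. Answer concisely and "
        f"do not invent facts.\n\n{user_message} [/INST]"
    )
    payload = {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": 200,
            "temperature": 0.7,
            "return_full_text": False,  # return only the newly generated tokens
        },
    }
    response = requests.post(API_URL, headers=HEADERS, json=payload, timeout=60)
    response.raise_for_status()
    return response.json()[0]["generated_text"]


if __name__ == "__main__":
    print(generate("Summarize what a Basic LLM Chain does in n8n."))
```

In the n8n workflow itself, the corresponding settings (model name, temperature, maximum tokens) live on the Hugging Face Inference Model node, while the prompt text is defined in the chain node that the model is connected to.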

About Hugging Face Inference Model

Related categories

Similar integrations

  • Wikipedia node
  • OpenAI Chat Model node
  • Zep Vector Store node
  • Postgres Chat Memory node
  • Pinecone Vector Store node
  • Embeddings OpenAI node
  • Supabase: Insert node
  • OpenAI node

Over 3000 companies switch to n8n every single week

Connect Hugging Face Inference Model with your company’s tech stack and create automation workflows