HTTP Request node
Google Drive node
Code node
n8n Form Trigger node

Replace Data in Google Docs from n8n Form

Published 9 days ago

Template description

Who is this for?

This workflow is perfect for anyone looking to automate the process of replacing variables in Google Docs with data from a form.

(Screenshot: workflow overview)

What problem does this workflow solve?

This workflow automates the process of filling Google Docs templates with data from n8n forms or other workflow variables. It's especially useful for generating documents like contracts, invoices, or reports quickly and efficiently without manual intervention.

What does this workflow do?

The workflow receives data from a form in n8n.
It uses the form data to replace the corresponding variables (e.g., {{example_variable}}) in a Google Docs template.
The document is then generated with the new values, ready for further use, such as sending or archiving.
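Judging from the node list (HTTP Request, Google Drive, Code, Form Trigger), the replacement is most likely done by calling the Google Docs API directly: a single documents.batchUpdate call containing one replaceAllText request per placeholder. Below is a minimal sketch of a Code node that builds such a request from the form data; the example_variable field and documentId value are assumptions for illustration, not taken from the template.

```javascript
// Code node (JavaScript): build a Google Docs batchUpdate body from the form submission.
// Assumes the Form Trigger produced a field called "example_variable" and that documentId
// was set earlier in the workflow (e.g. after copying the template with the Google Drive node).
const form = $json;
const documentId = form.documentId; // hypothetical: ID of the copied template document

const requests = [
  {
    replaceAllText: {
      containsText: { text: '{{example_variable}}', matchCase: true },
      replaceText: String(form.example_variable ?? ''),
    },
  },
];

// A downstream HTTP Request node would POST { requests } to:
// https://docs.googleapis.com/v1/documents/{documentId}:batchUpdate
return [{ json: { documentId, requests } }];
```

Each replaceAllText request swaps every occurrence of the placeholder text in the document, so repeated placeholders are filled in a single pass.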

How to set up this workflow?

  1. Prepare the template: Create a Google Docs template with variables in the {{variable}} format that you want to replace with form data.
  2. Modify the variables in the n8n form: Make sure the form fields correspond to the variables you want to replace in the Google Docs template.
  3. Connect to Google Docs: Set up the connection to Google Docs in n8n using the appropriate authentication credentials.
  4. Test the workflow: Run the workflow to ensure that the form data correctly replaces the variables in the Google Docs template.
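If your template grows beyond a handful of placeholders, wiring one replacement per field by hand quickly becomes tedious. Here is a minimal sketch of a Code node that turns every submitted form field into a replaceAllText request, assuming the field names match the {{variable}} names in the document:

```javascript
// Code node (JavaScript): generate one replaceAllText request per submitted form field.
// Assumption: each form field name matches a {{placeholder}} in the Google Docs template.
const fields = $json; // e.g. { client_name: 'Acme', amount: '1200', date: '2025-03-14' }

const requests = Object.entries(fields)
  .filter(([key]) => key !== 'documentId') // skip fields that are not placeholders
  .map(([key, value]) => ({
    replaceAllText: {
      containsText: { text: `{{${key}}}`, matchCase: true },
      replaceText: value == null ? '' : String(value),
    },
  }));

return [{ json: { requests } }];
```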

How to customize this workflow to your needs?

Change the data source: You can modify the form or use other data sources (e.g., an API) from which the replacement values will be fetched.
Customize the Google Docs template: Adapt the template to include additional fields for replacement as needed.
Integrate with other applications: You can expand the workflow to include actions like sending the generated document via email, saving it to Google Drive, or passing it to other systems.
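As one example of the "integrate with other applications" idea, the filled document can be exported as a PDF before it is emailed or archived. The sketch below only builds the Drive API export URL (documentId is again a placeholder); an HTTP Request node, or the Google Drive node's download operation, would perform the actual request and return the binary PDF.

```javascript
// Code node (JavaScript): build the Google Drive API URL that exports the filled
// document as a PDF. A downstream HTTP Request node performs the GET with the same
// Google credentials and receives the binary PDF for emailing or archiving.
const documentId = $json.documentId; // hypothetical: set earlier in the workflow

return [{
  json: {
    exportUrl: `https://www.googleapis.com/drive/v3/files/${documentId}/export?mimeType=application/pdf`,
  },
}];
```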


More Finance workflow templates

HTTP Request node
Code node
+7

Build Your First AI Data Analyst Chatbot

Enhance your data analysis by connecting an AI Agent to your dataset, using n8n tools. This template teaches you how to build an AI Data Analyst Chatbot that is capable of pulling data from your sources, using tools like Google Sheets or databases. It's designed to be easy and efficient, making it a good starting point for AI-driven data analysis. You can easily replace the current Google Sheets tools with databases like Postgres or MySQL. How It Works The core of the workflow is the AI Agent. It is connected to different data retrieval tools that get data from Google Sheets (or your preferred database) in many different ways. Once the data is retrieved, the Calculator tool allows the AI to perform mathematical operations, making your data analysis precise. Who is this template for Data Analysts & Researchers: Pull data from different sources and perform quick calculations. Developers & AI Enthusiasts: Learn to build your first AI Agent with easy dataset access. Business Owners: Streamline your data analysis with AI insights and automate repetitive tasks. Automation Experts: Enhance your automation skills by integrating AI with your existing databases. How to Set Up You can find detailed instructions in the workflow itself. Check out my other templates 👉 https://n8n.io/creators/solomon/
solomon
Solomon
Google Sheets node
HTTP Request node
Merge node
+10

Invoice data extraction with LlamaParse and OpenAI

This n8n workflow automates the process of parsing and extracting data from PDF invoices. With this workflow, accounts and finance people can realise huge time and cost savings in their busy schedules. Read the Blog: https://blog.n8n.io/how-to-extract-data-from-pdf-to-excel-spreadsheet-advance-parsing-with-n8n-io-and-llamaparse/ How it works This workflow watches an email inbox for incoming invoices from suppliers. It downloads the attached PDFs and processes them through a third-party service called LlamaParse. LlamaParse is specifically designed to handle and convert complex PDF data structures such as tables to markdown. Markdown is easy for LLM models to process, so the data extraction by our AI agent is more accurate and reliable. The workflow exports the extracted data from the AI agent to Google Sheets once the job completes. Requirements The criteria of the email trigger must be configured to capture emails with attachments. The Gmail label "invoice synced" must be created before using this workflow. A LlamaIndex.ai account to use the LlamaParse service. An OpenAI account to use GPT for AI work. Google Sheets to save the output of the data extraction process, although this can be replaced with whatever suits your needs. Customizing this workflow This workflow uses Gmail and Google Sheets but these can easily be swapped out for equivalent services such as Outlook and Excel. Not using Excel? Simply redirect the output of the AI agent to your accounting software of choice.
jimleuk
Jimleuk
HTTP Request node
Google Drive node
Google Calendar node
+9

Actioning Your Meeting Next Steps using Transcripts and AI

This n8n workflow demonstrates how you can summarise and automate post-meeting actions from video transcripts fed into an AI Agent. Save time between meetings by allowing AI to handle the chores of organising follow-up meetings and invites. How it works This workflow scans the calendar for client or team meetings which were held online. Attempts are made to fetch any recorded transcripts, which are then sent to the AI agent. The AI agent summarises and identifies whether any follow-on meetings are required. If found, the Agent will use its Calendar Tool to create the event for the time, date and place of the next meeting, as well as add known attendees. Requirements Google Calendar and the ability to fetch Meeting Transcripts (there is a special OAuth permission for this action!) OpenAI account for access to the LLM. Customising the workflow This example only books follow-on meetings but could be extended to generate reports or send emails.
jimleuk
Jimleuk
Webhook node
Respond to Webhook node
OpenAI node

Analyze tradingview.com charts with Chrome extension, N8N and OpenAI

This flow is supported by a Chrome plugin created with Cursor AI. The idea was to create a Chrome plugin and a backend service in N8N to do chart analytics with OpenAI. It's a good sample of how to submit a screenshot from the browser to N8N. Who is it for? N8N developers who want to learn about using a Chrome plugin, an N8N webhook and OpenAI. What opportunity does it present? This sample opens up a whole range of N8N connected Chrome extensions that can analyze screenshots by using OpenAI. What does this workflow do? The workflow contains: a webhook trigger, an OpenAI node with GPT-4O-MINI and Analyze Image selected, and a response node to send back the text that was created after analysing the screenshot. All this is needed to talk to the Chrome extension which is created with Cursor AI. The idea is to visit the tradingview.com crypto charts, click the Chrome plugin and get back analytics about the shown chart in understandable language. This is driven by the N8N flow. With the new image analytics capabilities of OpenAI this opens up a world of opportunities. Requirements/setup OpenAI API key Cursor AI installed The Chrome extension. Download The N8N JSON code. Download How to customize it to your needs? Both the Chrome extension and N8N flow can be adapted to use on other websites. You can consider: analyzing a financial screen and asking questions about the data shown, analyzing other charts, or extending the N8N workflow with other AI nodes. With AI and image analytics the sky is the limit and in some cases it saves you from creating complex API integrations. Download Chrome extension
thingsio
Hans Blaauw
HTTP Request node
+11

Build a Financial Documents Assistant using Qdrant and Mistral.ai

This n8n workflow demonstrates how to manage your Qdrant vector store when there is a need to keep it in sync with local files. It covers creating, updating and deleting vector store records, ensuring our chatbot assistant is never outdated or misleading. Disclaimer This workflow depends on local files accessed through the local filesystem and so will only work on a self-hosted version of n8n at this time. It is possible to amend this workflow to work on n8n cloud by replacing the local file trigger and read file nodes. How it works A local directory where bank statements are downloaded to is monitored via a local file trigger. The trigger watches for the file create, file changed and file deleted events. When a file is created, its contents are uploaded to the vector store. When a file is updated, its previous records are replaced. When the file is deleted, the corresponding records are also removed from the vector store. A simple Question and Answer Chatbot is set up to answer any questions about the bank statements in the system. Requirements A self-hosted version of n8n. Some of the nodes used in this workflow only work with the local filesystem. Qdrant instance to store the records. Customising the workflow This workflow can also work with remote data. Try integrating accounting or CRM software to build a managed system for payroll, invoices and more. Want to go fully local? A version of this workflow is available which uses Ollama instead. You can download this template here: https://drive.google.com/file/d/189F1fNOiw6naNSlSwnyLVEm_Ho_IFfdM/view?usp=sharing
jimleuk
Jimleuk
Google Sheets node
HTTP Request node
Google Drive node
+7

Invoices from Gmail to Drive and Google Sheets

Attachments Gmail to Drive and Google Sheets Description Automatically process invoice emails by saving attachments to Google Drive and extracting key invoice data to Google Sheets using AI. This workflow monitors your Gmail for unread emails with attachments, saves PDFs to a specified Google Drive folder, and uses OpenAI's GPT-4o to extract invoice details (date, description, amount) into a structured spreadsheet. Use cases Invoice Management**: Automatically organize and track invoices received via email Financial Record Keeping**: Maintain a structured database of all invoice information Document Organization**: Keep digital copies of invoices organized in Google Drive Automated Data Entry**: Eliminate manual data entry for invoice processing Resources Gmail account Google Drive account Google Sheets account OpenAI API key Setup instructions Prerequisites Active Gmail, Google Drive, and Google Sheets accounts OpenAI API key (GPT-4o model access) n8n instance with credentials manager Steps Gmail and Google Drive Setup: Connect your Gmail account in n8n credentials Connect your Google Drive account with appropriate permissions Create a destination folder in Google Drive for invoice storage Google Sheets Setup: Connect your Google Sheets account Create a spreadsheet with columns: Invoice date, Invoice Description, Total price, and Fichero Copy your spreadsheet ID for configuration OpenAI Setup: Add your OpenAI API key to n8n credentials Configure Email Filter: Update the email filter node to match your specific sender requirements Benefits Time Saving**: Eliminates manual downloading, filing, and data entry Accuracy**: AI-powered data extraction reduces human error Organization**: Consistent file naming and storage structure Searchability**: Creates a searchable database of all invoice information Automation**: Runs every minute to process new emails as they arrive Related templates Email Parser to CRM Document Processing Workflow Financial Data Automation
carlosgracia
Juan Carlos Cavero Gracia

More DevOps workflow templates

HTTP Request node
Merge node
+13

AI Agent To Chat With Files In Supabase Storage

Video Guide I prepared a detailed guide explaining how to set up and implement this scenario, enabling you to chat with your documents stored in Supabase using n8n. Youtube Link Who is this for? This workflow is ideal for researchers, analysts, business owners, or anyone managing a large collection of documents. It's particularly beneficial for those who need quick contextual information retrieval from text-heavy files stored in Supabase, without needing additional services like Google Drive. What problem does this workflow solve? Manually retrieving and analyzing specific information from large document repositories is time-consuming and inefficient. This workflow automates the process by vectorizing documents and enabling AI-powered interactions, making it easy to query and retrieve context-based information from uploaded files. What this workflow does The workflow integrates Supabase with an AI-powered chatbot to process, store, and query text and PDF files. The steps include: Fetching and comparing files to avoid duplicate processing. Handling file downloads and extracting content based on the file type. Converting documents into vectorized data for contextual information retrieval. Storing and querying vectorized data from a Supabase vector store. File Extraction and Processing: Automates handling of multiple file formats (e.g., PDFs, text files), and extracts document content. Vectorized Embeddings Creation: Generates embeddings for processed data to enable AI-driven interactions. Dynamic Data Querying: Allows users to query their document repository conversationally using a chatbot. Setup N8N Workflow Fetch File List from Supabase: Use Supabase to retrieve the stored file list from a specified bucket. Add logic to manage empty folder placeholders returned by Supabase, avoiding incorrect processing. Compare and Filter Files: Aggregate the files retrieved from storage and compare them to the existing list in the Supabase files table. Exclude duplicates and skip placeholder files to ensure only unprocessed files are handled. Handle File Downloads: Download new files using detailed storage configurations for public/private access. Adjust the storage settings and GET requests to match your Supabase setup. File Type Processing: Use a Switch node to target specific file types (e.g., PDFs or text files). Employ relevant tools to process the content: For PDFs, extract embedded content. For text files, directly process the text data. Content Chunking: Break large text data into smaller chunks using the Text Splitter node. Define chunk size (default: 500 tokens) and overlap to retain necessary context across chunks. Vector Embedding Creation: Generate vectorized embeddings for the processed content using OpenAI's embedding tools. Ensure metadata, such as file ID, is included for easy data retrieval. Store Vectorized Data: Save the vectorized information into a dedicated Supabase vector store. Use the default schema and table provided by Supabase for seamless setup. AI Chatbot Integration: Add a chatbot node to handle user input and retrieve relevant document chunks. Use metadata like file ID for targeted queries, especially when multiple documents are involved. Testing Upload sample files to your Supabase bucket. Verify if files are processed and stored successfully in the vector store. Ask simple conversational questions about your documents using the chatbot (e.g., "What does Chapter 1 say about the Roman Empire?"). Test for accuracy and contextual relevance of retrieved results.
lowcodingdev
Mark Shcherbakov
Merge node
MySQL node
+9

Generate SQL queries from schema only - AI-powered

This workflow is a modification of the previous template on how to create an SQL agent with LangChain and SQLite. The key difference – the agent has access only to the database schema, not to the actual data. To achieve this, SQL queries are made outside the AI Agent node, and the results are never passed back to the agent. This approach allows the agent to generate SQL queries based on the structure of tables and their relationships, without having to access the actual data. This makes the process more secure and efficient, especially in cases where data confidentiality is crucial. 🚀 Setup To get started with this workflow, you’ll need to set up a free MySQL server and import your database (check Step 1 and 2 in this tutorial). Of course, you can switch MySQL to another SQL database such as PostgreSQL, the principle remains the same. The key is to download the schema once and save it locally to avoid repeated remote connections. Run the top part of the workflow once to download and store the MySQL chinook database schema file on the server. With this approach, we avoid the need to repeatedly connect to a remote db4free database and fetch the schema every time. As a result, we reach greater processing speed and efficiency. 🗣️ Chat with your data Start a chat: send a message in the chat window. The workflow loads the locally saved MySQL database schema, without having the ability to touch the actual data. The file contains the full structure of your MySQL database for analysis. The Langchain AI Agent receives the schema, your input and begins to work. The AI Agent generates SQL queries and brief comments based solely on the schema and the user’s message. An IF node checks whether the AI Agent has generated a query. When: Yes: the AI Agent passes the SQL query to the next MySQL node for execution. No: You get a direct answer from the Agent without further action. The workflow formats the results of the SQL query, ensuring they are convenient to read and easy to understand. Once formatted, you get both the Agent answer and the query result in the chat window. 🌟 Example queries Try these sample queries to see the schema-driven AI Agent in action: Would you please list me all customers from Germany? What are the music genres in the database? What tables are available in the database? Please describe the relationships between tables. - In this example, the AI Agent does not need to create the SQL query. And if you prefer to keep the data private, you can manually execute the generated SQL query in your own environment using any database client or tool you trust 🗄️ 💭 The AI Agent memory node does not store the actual data as we run SQL-queries outside the agent. It contains the database schema, user questions and the initial Agent reply. Actual SQL query results are passed to the chat window, but the values are not stored in the Agent memory.
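As an illustration of the result-formatting step mentioned above (not taken from the template itself), a Code node could render the rows returned by the MySQL node as plain text for the chat window roughly like this:

```javascript
// Code node (JavaScript): render SQL result rows as plain text for the chat window.
const rows = $input.all().map(item => item.json); // rows returned by the MySQL node

if (rows.length === 0) {
  return [{ json: { text: 'The query returned no rows.' } }];
}

const columns = Object.keys(rows[0]);
const lines = [
  columns.join(' | '),
  ...rows.map(row => columns.map(col => String(row[col] ?? '')).join(' | ')),
];

return [{ json: { text: lines.join('\n') } }];
```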
yulia
Yulia
OpenAI Chat Model node

AI Agent to chat with Supabase/PostgreSQL DB

Video Guide I prepared a detailed guide that showed the whole process of building a resume analyzer. Who is this for? This workflow is ideal for developers, data analysts, and business owners who want to enable conversational interactions with their database. It’s particularly useful for cases where users need to extract, analyze, or aggregate data without writing SQL queries manually. What problem does this workflow solve? Accessing and analyzing database data often requires SQL expertise or dedicated reports, which can be time-consuming. This workflow empowers users to interact with a database conversationally through an AI-powered agent. It dynamically generates SQL queries based on user requests, streamlining data retrieval and analysis. What this workflow does This workflow integrates OpenAI with a Supabase database, enabling users to interact with their data via an AI agent. The agent can: Retrieve records from the database. Extract and analyze JSON data stored in tables. Provide summaries, aggregations, or specific data points based on user queries. Dynamic SQL Querying: The agent uses user prompts to create and execute SQL queries on the database. Understand JSON Structure: The workflow identifies JSON schema from sample records, enabling the agent to parse and analyze JSON fields effectively. Database Schema Exploration: It provides the agent with tools to retrieve table structures, column details, and relationships for precise query generation. Setup Preparation Create Accounts: N8N: For workflow automation. Supabase: For database hosting and management. OpenAI: For building the conversational AI agent. Configure Database Connection: Set up a PostgreSQL database in Supabase. Use appropriate credentials (username, password, host, and database name) in your workflow. N8N Workflow AI agent with tools: Code Tool: Execute SQL queries based on user input. Database Schema Tool: Retrieve a list of all tables in the database. Use a predefined SQL query to fetch table definitions, including column names, types, and references. Table Definition: Retrieve a list of columns with types for one table.
lowcodingdev
Mark Shcherbakov

Git backup of workflows and credentials

This creates a git backup of the workflows and credentials. It uses the n8n export command with git diff, so you can run it as many times as you want; a commit is created only when there are changes. Setup You need some access to the server. Create a repository in some remote place to host your project, like GitHub, GitLab, or your favorite private repo. Clone the repository on the server in a place that n8n can access. In the example, the path is . and the repository name is repo. Change these in the commands and in the workflow commands (you can set the path as a variable in the workflow). Check out another branch if you won't use master. cd . git clone repository Or you could git init and then add the remote (git remote add origin YOUR_REPO_URL), whichever you prefer. On the server, check that everything is ready for committing. Very likely you'll need to set up the user email and name. Try to create a commit and push it to upstream; anything you still need to configure (like the commit user) will show up along the way. I strongly suggest also testing the export commands to guarantee they work. cd ./repo git commit -m "Initial commit" --allow-empty (-u is the same as --set-upstream) git push -u origin master Then test pushing the first exported data to upstream: npx n8n export:workflow --backup --output ./repo/workflows/ npx n8n export:credentials --backup --output repo/credentials/ cd ./repo git add . git commit -m "manual backup: first export" git push After that, if everything is ok, the workflow should work just fine. Adjustments Adjust the path used in the workflow. Note that the git -C PATH command is the same as cd PATH; git .... Also, adjust the cron schedule to run as you need. As said at the beginning, you can run it even every minute, but it will create commits only when there are changes. Credentials encryption By default, credentials are exported encrypted. You can add the --decrypted flag to the n8n export:credentials command if you need to save them in plain text. But as a general rule, it's better to save the encryption key (you only need to do that once) and export the credentials safely encrypted.
allandaemon
Allan Daemon
HTTP Request node
Merge node
Webhook node
Telegram Trigger node
+10

Proxmox AI Agent with n8n and Generative AI Integration

Proxmox AI Agent with n8n and Generative AI Integration This template automates IT operations on a Proxmox Virtual Environment (VE) using an AI-powered conversational agent built with n8n. By integrating Proxmox APIs and generative AI models (e.g., Google Gemini), the workflow converts natural language commands into API calls, enabling seamless management of your Proxmox nodes, VMs, and clusters. Buy My Book: Mastering n8n on Amazon Full Courses & Tutorials: http://lms.syncbricks.com Watch Video on Youtube How It Works Trigger Mechanism The workflow can be triggered through multiple channels like chat (Telegram, email, or n8n's built-in chat). Interact with the AI agent conversationally. AI-Powered Parsing A connected AI model (Google Gemini or other compatible models like OpenAI or Claude) processes your natural language input to determine the required Proxmox API operation. API Call Generation The AI parses the input and generates structured JSON output, which includes: response_type: The HTTP method (GET, POST, PUT, DELETE). url: The Proxmox API endpoint to execute. details: Any required payload parameters for the API call. Proxmox API Execution The structured output is used to make HTTP requests to the Proxmox VE API. The workflow supports various operations, such as: Retrieving cluster or node information. Creating, deleting, starting, or stopping VMs. Migrating VMs between nodes. Updating or resizing VM configurations. Response Formatting The workflow formats API responses into a user-friendly summary. For example: Success messages for operations (e.g., "VM started successfully"). Error messages with missing parameter details. Extensibility You can enhance the workflow by connecting additional triggers, external services, or AI models. It supports: Telegram/Slack integration for real-time notifications. Backup and restore workflows. Cloud monitoring extensions. Key Features Multi-Channel Input**: Use chat, email, or custom triggers to communicate with the AI agent. Low-Code Automation**: Easily customize the workflow to suit your Proxmox environment. Generative AI Integration**: Supports advanced AI models for precise command interpretation. Proxmox API Compatibility**: Fully adheres to Proxmox API specifications for secure and reliable operations. Error Handling**: Detects and informs you of missing or invalid parameters in your requests. Example Use Cases Create a Virtual Machine Input: "Create a VM with 4 cores, 8GB RAM, and 50GB disk on psb1." Action: Sends a POST request to Proxmox to create the VM with specified configurations. Start a VM Input: "Start VM 105 on node psb2." Action: Executes a POST request to start the specified VM. Retrieve Node Details Input: "Show the memory usage of psb3." Action: Sends a GET request and returns the node's resource utilization. Migrate a VM Input: "Migrate VM 202 from psb1 to psb3." Action: Executes a POST request to move the VM with optional online migration. Pre-Requisites Proxmox API Configuration Enable the Proxmox API and generate API keys in the Proxmox Data Center. Use the Authorization header with the format: PVEAPIToken=<user>@<realm>!<token-id>=<token-value> n8n Setup Add Proxmox API credentials in n8n using Header Auth. Connect a generative AI model (e.g., Google Gemini) via the relevant credential type. Access the Workflow Import this template into your n8n instance. Replace placeholder credentials with your Proxmox and AI service details. Additional Notes This template is designed for Proxmox 7.x and above. 
For advanced features like backup, VM snapshots, and detailed node monitoring, you can extend this workflow. Always test with a non-production Proxmox environment before deploying in live systems.
amjid
Amjid Ali
Google Sheets node
HTTP Request node
Slack node
+4

Host your own Uptime Monitoring with Scheduled Triggers

This n8n workflow demonstrates how to build a simple uptime monitoring service using scheduled triggers. Useful for webmasters with a handful of sites who want a cost-effective solution without the need for all the bells and whistles. How it works Scheduled trigger reads a list of website urls in a Google Sheet every 5 minutes Each website url is checked using the HTTP node which determines if the website is either in the UP or DOWN state. An email and Slack message are sent for websites which are in the DOWN state. The Google Sheet is updated with the website's state and a log created. Logs can be used to determine total % of UP and DOWN time over a period. Requirements Google Sheet for storing websites to monitor and their states Gmail for email alerts Slack for channel alerts Customising the workflow Don't use Google Sheets? This can easily be exchanged with Excel or Airtable.
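For the UP/DOWN decision described above, a small Code node (illustrative only, not taken from the template) could classify each response; the statusCode field name is an assumption that depends on the HTTP Request node being set to return the full response and to continue on error:

```javascript
// Code node (JavaScript): classify a monitored site as UP or DOWN from the HTTP response.
// Assumes the HTTP Request node returns the full response and continues on error,
// so timeouts and non-2xx responses still reach this node.
const status = $json.statusCode ?? 0; // hypothetical field name, depends on node settings
const state = status >= 200 && status < 400 ? 'UP' : 'DOWN';

return [{
  json: {
    url: $json.url,
    state,
    checkedAt: new Date().toISOString(),
  },
}];
```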
jimleuk
Jimleuk

More HR workflow templates

Notion node
Code node
+6

Notion AI Assistant Generator

This n8n workflow template lets teams easily generate a custom AI chat assistant based on the schema of any Notion database. Simply provide the Notion database URL, and the workflow downloads the schema and creates a tailored AI assistant designed to interact with that specific database structure. Set Up Watch this quick set up video 👇 Key Features Instant Assistant Generation**: Enter a Notion database URL, and the workflow produces an AI assistant configured to the database schema. Advanced Querying**: The assistant performs flexible queries, filtering records by multiple fields (e.g., tags, names). It can also search inside Notion pages to pull relevant content from specific blocks. Schema Awareness**: Understands and interacts with various Notion column types like text, dates, and tags for accurate responses. Reference Links**: Each query returns direct links to the exact Notion pages that inform the assistant’s response, promoting transparency and easy access. Self-Validation**: The workflow has logic to check the generated assistant, and if any errors are detected, it reruns the agent to fix them. Ideal for Product Managers**: Easily access and query product data across Notion databases. Support Teams**: Quickly search through knowledge bases for precise information to enhance support accuracy. Operations Teams**: Streamline access to HR, finance, or logistics data for fast, efficient retrieval. Data Teams**: Automate large dataset queries across multiple properties and records. How It Works This AI assistant leverages two HTTP request tools—one for querying the Notion database and another for retrieving data within individual pages. It’s powered by the Anthropic LLM (or can be swapped for GPT-4) and always provides reference links for added transparency.
max-n8n
Max Tkacz
Google Sheets node
Merge node
Google Drive node
+7

AI Automated HR Workflow for CV Analysis and Candidate Evaluation

How it Works This workflow automates the process of handling job applications by extracting relevant information from submitted CVs, analyzing the candidate's qualifications against a predefined profile, and storing the results in a Google Sheet. Here’s how it operates: Data Collection and Extraction: The workflow begins with a form submission (On form submission node), which triggers the extraction of data from the uploaded CV file using the Extract from File node. Two informationExtractor nodes (Qualifications and Personal Data) are used to parse specific details such as educational background, work history, skills, city, birthdate, and telephone number from the text content of the CV. Processing and Evaluation: A Merge node combines the extracted personal and qualification data into a single output. This merged data is then passed through a Summarization Chain that generates a concise summary of the candidate’s profile. An HR Expert chain evaluates the candidate against a desired profile (Profile Wanted), assigning a score and providing considerations for hiring. Finally, all collected and processed data including the evaluation results are appended to a Google Sheets document via the Google Sheets node for further review or reporting purposes. Set Up Steps To replicate this workflow within your own n8n environment, follow these steps: Configuration: Begin by setting up an n8n instance if you haven't already; you can sign up directly on their website or self-host the application. Import the provided JSON configuration into your n8n workspace. Ensure that all necessary credentials (e.g., Google Drive, Google Sheets, OpenAI API keys) are correctly configured under the Credentials section since some nodes require external service integrations like Google APIs and OpenAI for language processing tasks. Customization: Adjust the parameters of each node according to your specific requirements. For example, modify the fields in the formTrigger node to match what kind of information you wish to collect from applicants. Customize the prompts given to AI models in nodes like Qualifications, Summarization Chain, and HR Expert so they align with the type of analyses you want performed on the candidates' profiles. Update the destination settings in the Google Sheets node to point towards your own spreadsheet where you would like the final outputs recorded.
n3witalia
Davide
Notion node
OpenAI Chat Model node
+3

Notion knowledge base AI assistant

Who is this for This workflow is perfect for teams and individuals who manage extensive data in Notion and need a quick, AI-powered way to interact with their databases. If you're looking to streamline your knowledge management, automate searches, and get faster insights from your Notion databases, this workflow is for you. It’s ideal for support teams, project managers, or anyone who needs to query specific data across multiple records or within individual pages of their Notion setup. Check out the Notion template this Assistant is set up to use: https://www.notion.so/templates/knowledge-base-ai-assistant-with-n8n How it works The Notion Database Assistant uses an AI Agent built with Retrieval-Augmented Generation (RAG) to query this Knowledge Base style Notion database. The assistant can search across multiple properties like tags or question and retrieves content from inside individual Notion pages for additional context. Key features include: Querying the database with flexible filters. Searching within individual Notion pages and extracting relevant blocks. Providing a reference link to the exact Notion pages used to inform its responses, ensuring transparency and easy verification. This assistant uses two HTTP request tools—one for querying the Notion database and another for pulling data from within specific pages. It streamlines knowledge retrieval, offering a conversational, AI-driven way to interact with large datasets. Set up Find basic set up instructions inside the workflow itself or watch a quickstart video 👇
max-n8n
Max Tkacz
HTTP Request node
Telegram node
Telegram Trigger node
+12

HR & IT Helpdesk Chatbot with Audio Transcription

An intelligent chatbot that assists employees by answering common HR or IT questions, supporting both text and audio messages. This unique feature ensures employees can conveniently ask questions via voice messages, which are transcribed and processed just like text queries. How It Works Message Capture: When an employee sends a message to the chatbot in WhatsApp or Telegram (text or audio), the chatbot captures the input. Audio Transcription: For audio messages, the chatbot transcribes the content into text using an AI-powered transcription service (e.g., Whisper, Google Cloud Speech-to-Text). Query Processing: The transcribed text (or directly entered text) is sent to an AI service (e.g., OpenAI) to generate embeddings. These embeddings are used to search a vector database (e.g., Supabase or Qdrant) containing the company’s internal HR and IT documentation. The most relevant data is retrieved and sent back to the AI service to compose a concise and helpful response. Response Delivery: The chatbot sends the final response back to the employee, whether the input was text or audio. Set Up Steps Estimated Time**: 20–25 minutes Prerequisites**: Create an account with an AI provider (e.g., OpenAI). Connect WhatsApp or Telegram credentials in n8n. Set up a transcription service (e.g., Whisper or Google Cloud Speech-to-Text). Configure a vector database (e.g., Supabase or Qdrant) and add your internal HR and IT documentation. Import the workflow template into n8n and update environment variables for your credentials.
occult
Felipe Braga
HTTP Request node
Google Drive node
+4

CV Resume PDF Parsing with Multimodal Vision AI

This n8n workflow demonstrates how we can use Multimodal LLMs to parse and extract from PDF documents in n8n. In this particular scenario, we're passing a candidate's CV/resume to an AI which filters out unqualified applications. However, this sneaky candidate has added a hidden prompt to bypass our bot! Whatever will we do? Not to fret, using AI Vision is one approach to solve this problem... read on! How it works Our candidate's CV/Resume is a PDF downloaded via Google Drive for this demonstration. The PDF is then converted into a PNG image using a tool called Stirling PDF. Since the hidden prompt has a white font color, it is invisible in the converted image. The image is then forwarded to a Basic LLM node to process using our multimodal model - in this example, we'll use Google's Gemini 1.5 Pro. In the Basic LLM node, we'll need to set a User Message with the type of Binary. This allows us to directly send the image file in our request. The LLM is now immune to the hidden prompt and its response is as expected. The example CV/Resume with hidden prompt can be found here: https://drive.google.com/file/d/1MORAdeev6cMcTJBV2EYALAwll8gCDRav/view?usp=sharing Requirements Google Gemini API Key. Alternatively, GPT4 will also work for this use-case. Stirling PDF or another service which can convert PDFs into images. Note: for data privacy, this example uses a public API and it is recommended that you self-host and use a private instance of Stirling PDF instead. Customising the workflow Swap out the manual trigger for another trigger such as a webhook to integrate into your existing services. This example demonstrates a validation use-case, i.e. "does the candidate look qualified?". You can try additionally extracting data points instead, such as years of experience, previous companies etc.
jimleuk
Jimleuk

More IT Ops workflow templates

OpenAI Chat Model node

Chat with Postgresql Database

Who is this template for? This workflow template is designed for any professional seeking relevant data from a database using natural language. How it works Each time a user asks a question using the n8n chat interface, the workflow runs. The message is processed by an AI Agent using the relevant tools - Execute SQL Query, Get DB Schema and Tables List, and Get Table Definition - as required. The Agent uses these tools to form and run the SQL queries necessary to answer the question. Once the AI Agent has the data, it uses it to form an answer and returns it to the user. Set up instructions Complete the Set up credentials step when you first open the workflow. You'll need PostgreSQL credentials and an OpenAI API key. Template was created in n8n v1.77.0
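The schema tools named above typically boil down to standard information_schema queries in PostgreSQL. Here is a minimal, illustrative sketch (not taken from the template) of the two queries, wrapped in a Code node so they could be handed to the Execute SQL Query tool; the example table name is hypothetical:

```javascript
// Code node (JavaScript): illustrative SQL the schema tools could run against PostgreSQL.
// In production, parameterize the table name instead of interpolating it into the query.
const listTablesSql = `
  SELECT table_schema, table_name
  FROM information_schema.tables
  WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
  ORDER BY table_schema, table_name;`;

const tableDefinitionSql = (tableName) => `
  SELECT column_name, data_type, is_nullable
  FROM information_schema.columns
  WHERE table_name = '${tableName}'
  ORDER BY ordinal_position;`;

return [{ json: { listTablesSql, exampleDefinition: tableDefinitionSql('invoices') } }];
```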
kumohq
KumoHQ
HTTP Request node
Merge node
+3

Backup n8n workflows to Google Drive

Temporary solution using the undocumented REST API for backups to Google Drive. Please note that there are issues with this workflow. It does not support versioning, so it will create multiple copies of the workflows; if you run this daily, the folder will grow quickly. Once I figure out how to version in Google Drive, I'll update it here.
djangelic
Angel Menendez
HTTP Request node
Redis node
+8

Advanced Telegram Bot, Ticketing System, LiveChat, User Management, Broadcasting

A robust n8n workflow designed to enhance Telegram bot functionality for user management and broadcasting. It facilitates automatic support ticket creation, efficient user data storage in Redis, and a sophisticated system for message forwarding and broadcasting. How It Works Telegram Bot Setup: Initiate the workflow with a Telegram bot configured for handling different chat types (private, supergroup, channel). User Data Management: Formats and updates user data, storing it in a Redis database for efficient retrieval and management. Support Ticket Creation: Automatically generates chat tickets for user messages and saves the corresponding topic IDs in Redis. Message Forwarding: Forwards new messages to the appropriate chat thread, or creates a new thread if none exists. Support Forum Management: Handles messages within a support forum, differentiating between various chat types and user statuses. Broadcasting System: Implements a broadcasting mechanism that sends channel posts to all previous bot users, with a system to filter out blocked users. Blocked User Management: Identifies and manages blocked users, preventing them from receiving broadcasted messages. Versatile Channel Handling: Ensures that messages from verified channels are properly managed and broadcasted to relevant users. Set Up Steps Estimated Time**: Around 30 minutes. Requirements**: A Telegram bot, a Redis database, and Telegram group/channel IDs are necessary. Configuration**: Input the Telegram bot token and relevant group/channel IDs. Configure message handling and user data processing according to your needs. Detailed Instructions**: Sticky notes within the workflow provide extensive setup information and guidance. Live Demo Workflow Bot: Telegram Bot Link (Click here) Support Group: Telegram Group Link (Click here) Broadcasting Channel: Telegram Channel Link (Click here) Keywords: n8n workflow, Telegram bot, chat ticket system, Redis database, message broadcasting, user data management, support forum automation
nskha
Nskha
