Enhance your data analysis by connecting an AI Agent to your dataset, using n8n tools.
This template teaches you how to build an AI Data Analyst Chatbot capable of pulling data from your sources using tools like Google Sheets or databases. It's designed to be easy and efficient, making it a good starting point for AI-driven data analysis.
You can easily replace the current Google Sheets tools with databases like Postgres or MySQL.
How It Works
The core of the workflow is the AI Agent. It's connected to several data retrieval tools that fetch data from Google Sheets (or your preferred database) in different ways.
Once the data is retrieved, the Calculator tool allows the AI to perform mathematical operations, making your data analysis precise.
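To make the tool idea concrete, here is a minimal sketch of what one data retrieval tool could look like if you swap Google Sheets for Postgres, using the `pg` driver. The table and column names are hypothetical, not part of the template:

```typescript
import { Client } from "pg";

// Hypothetical retrieval tool: fetch sales rows for a region.
// The agent would call this with a region name extracted from the chat.
async function getSalesByRegion(region: string) {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    // Parameterised query, so the agent's input can't inject SQL.
    const res = await client.query(
      "SELECT product, revenue FROM sales WHERE region = $1",
      [region]
    );
    return res.rows; // e.g. [{ product: "Widget", revenue: 1200 }, ...]
  } finally {
    await client.end();
  }
}
```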
Who is this template for
**Data Analysts & Researchers:** Pull data from different sources and perform quick calculations.
**Developers & AI Enthusiasts:** Learn to build your first AI Agent with easy dataset access.
**Business Owners:** Streamline your data analysis with AI insights and automate repetitive tasks.
**Automation Experts:** Enhance your automation skills by integrating AI with your existing databases.
How to Set Up
You can find detailed instructions in the workflow itself.
Check out my other templates
👉 https://n8n.io/creators/solomon/
This n8n workflow automates the process of parsing and extracting data from PDF invoices. With this workflow, accounting and finance teams can realise significant time and cost savings in their busy schedules.
Read the Blog: https://blog.n8n.io/how-to-extract-data-from-pdf-to-excel-spreadsheet-advance-parsing-with-n8n-io-and-llamaparse/
How it works
This workflow watches an email inbox for incoming invoices from suppliers.
It downloads the attached PDFs and processes them through a third-party service called LlamaParse (sketched after these steps).
LlamaParse is specifically designed to handle and convert complex PDF data structures, such as tables, to markdown.
Markdown is easy for LLMs to process, so the data extraction by our AI agent is more accurate and reliable.
The workflow exports the extracted data from the AI agent to Google Sheets once the job completes.
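For context, here is a rough sketch of the LlamaParse round trip that the workflow performs through its HTTP steps. The endpoint paths reflect the LlamaCloud parsing API at the time of writing, so verify them against the current docs:

```typescript
// Upload a PDF, poll the asynchronous job, then fetch the markdown result.
const BASE = "https://api.cloud.llamaindex.ai/api/parsing";
const headers = { Authorization: `Bearer ${process.env.LLAMA_CLOUD_API_KEY}` };

async function parsePdfToMarkdown(pdf: Blob): Promise<string> {
  // 1. Upload the PDF to start a parsing job.
  const form = new FormData();
  form.append("file", pdf, "invoice.pdf");
  const upload = await fetch(`${BASE}/upload`, { method: "POST", headers, body: form });
  const job = await upload.json();

  // 2. Poll until the job completes (LlamaParse runs asynchronously).
  while (true) {
    const status = await (await fetch(`${BASE}/job/${job.id}`, { headers })).json();
    if (status.status === "SUCCESS") break;
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }

  // 3. Fetch the result as markdown, ready for the AI extraction step.
  const result = await (await fetch(`${BASE}/job/${job.id}/result/markdown`, { headers })).json();
  return result.markdown;
}
```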
Requirements
The email trigger's criteria must be configured to capture emails with attachments.
The Gmail label "invoice synced" must be created before using this workflow.
A LlamaIndex.ai account to use the LlamaParse service.
An OpenAI account to use GPT for AI work.
A Google Sheet to save the output of the data extraction process, although this can be replaced with whatever suits your needs.
Customizing this workflow
This workflow uses Gmail and Google Sheets but these can easily be swapped out for equivalent services such as Outlook and Excel.
Not using Excel? Simply redirect the output of the AI agent to your accounting software of choice.
This n8n workflow demonstrates how you can summarise and automate post-meeting actions from video transcripts fed into an AI Agent.
Save time between meetings by letting AI handle the chores of organising follow-up meetings and invites.
How it works
This workflow scans the calendar for client or team meetings that were held online. It attempts to fetch any recorded transcripts, which are then sent to the AI agent.
The AI agent summarises and identifies if any follow-on meetings are required.
If so, the Agent will use its Calendar Tool to create the event at the time, date, and place of the next meeting, as well as add the known attendees.
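Under the hood, the Calendar Tool amounts to a call like Google Calendar's events.insert endpoint. A minimal sketch, with example times and attendees:

```typescript
// Create a follow-up meeting on the user's primary calendar.
// The event details here are placeholders the AI agent would supply.
async function createFollowUpMeeting(accessToken: string) {
  const event = {
    summary: "Follow-up: project review",
    start: { dateTime: "2025-06-12T10:00:00Z" },
    end: { dateTime: "2025-06-12T10:30:00Z" },
    attendees: [{ email: "client@example.com" }],
  };
  const res = await fetch(
    "https://www.googleapis.com/calendar/v3/calendars/primary/events",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(event),
    }
  );
  return res.json();
}
```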
Requirements
Google Calendar and the ability to fetch Meeting Transcripts (There is a special OAuth permission for this action!)
OpenAI account for access to the LLM.
Customising the workflow
This example only books follow-on meetings but could be extended to generate reports or send emails.
This flow is supported by a Chrome plugin created with Cursor AI.
The idea was to create a Chrome plugin and a backend service in n8n to do chart analytics with OpenAI. It's a good example of how to submit a screenshot from the browser to n8n.
Who is it for?
n8n developers who want to learn about using a Chrome plugin, an n8n webhook, and OpenAI.
What opportunity does it present?
This sample opens up a whole range of n8n-connected Chrome extensions that can analyze screenshots using OpenAI.
What does this workflow do?
The workflow contains:
a webhook trigger
an OpenAI node with GPT-4o-mini and Analyze Image selected
a response node to send back the text generated from analysing the screenshot.
All this is needed to talk to the Chrome extension, which was created with Cursor AI.
The idea is to visit the tradingview.com crypto charts, click the Chrome plugin, and get back analytics about the displayed chart in plain language. This is driven by the n8n flow.
With the new image analytics capabilities of OpenAI, this opens up a world of opportunities.
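For a sense of the extension side, here is a minimal sketch of how a Chrome extension could capture the visible tab and POST it to the n8n webhook. The webhook URL is a placeholder; use whatever your Webhook trigger node exposes:

```typescript
// Background service worker (Manifest V3); requires the "activeTab" permission.
chrome.action.onClicked.addListener(async () => {
  // Capture the currently visible tab as a PNG data URL.
  const dataUrl = await chrome.tabs.captureVisibleTab({ format: "png" });

  // Send the screenshot to the n8n webhook for OpenAI analysis.
  const res = await fetch("https://your-n8n-host/webhook/analyze-chart", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ image: dataUrl }),
  });

  // The workflow's response node returns the plain-language chart analysis.
  const { analysis } = await res.json();
  console.log(analysis);
});
```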
Requirements/setup
OpenAI API key
Cursor AI installed
The Chrome extension. Download
The n8n JSON code. Download
How to customize it to your needs?
Both the Chrome extension and the n8n flow can be adapted for use on other websites. You could consider:
analyzing a financial screen and asking questions about the data shown
analyzing other charts
extending the n8n workflow with other AI nodes
With AI and image analytics, the sky is the limit, and in some cases it saves you from building complex API integrations.
This n8n workflow demonstrates how to manage your Qdrant vector store when you need to keep it in sync with local files. It covers creating, updating, and deleting vector store records, ensuring our chatbot assistant is never outdated or misleading.
Disclaimer
This workflow depends on local files accessed through the local filesystem and so will only work on a self-hosted version of n8n at this time. It is possible to amend this workflow to work on n8n cloud by replacing the local file trigger and read file nodes.
How it works
A local directory, where bank statements are downloaded to, is monitored via a local file trigger. The trigger watches for the file created, file changed, and file deleted events.
When a file is created, its contents are uploaded to the vector store.
When a file is updated, its previous records are replaced.
When a file is deleted, the corresponding records are also removed from the vector store (see the sketch after this list).
A simple Question and Answer chatbot is set up to answer any questions about the bank statements in the system.
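To illustrate the update and delete steps: Qdrant can delete points by payload filter, so each file's records can be keyed by its path. A minimal sketch, assuming the ingested records carry a `metadata.file_path` payload field (adjust the key to whatever your ingestion step actually writes):

```typescript
// Remove all vector store records belonging to one local file.
// On a file-changed event the workflow would call this first,
// then re-insert the freshly embedded chunks.
async function deleteRecordsForFile(filePath: string) {
  await fetch("http://localhost:6333/collections/bank_statements/points/delete", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      filter: {
        must: [{ key: "metadata.file_path", match: { value: filePath } }],
      },
    }),
  });
}
```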
Requirements
A self-hosted version of n8n. Some of the nodes used in this workflow only work with the local filesystem.
Qdrant instance to store the records.
Customising the workflow
This workflow can also work with remote data. Try integrating accounting or CRM software to build a managed system for payroll, invoices and more.
Want to go fully local?
A version of this workflow is available which uses Ollama instead. You can download this template here: https://drive.google.com/file/d/189F1fNOiw6naNSlSwnyLVEm_Ho_IFfdM/view?usp=sharing
Attachments Gmail to Drive and Google Sheets
Description
Automatically process invoice emails by saving attachments to Google Drive and extracting key invoice data to Google Sheets using AI. This workflow monitors your Gmail for unread emails with attachments, saves PDFs to a specified Google Drive folder, and uses OpenAI's GPT-4o to extract invoice details (date, description, amount) into a structured spreadsheet.
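For reference, the extraction step amounts to a structured-output call to GPT-4o. A hedged sketch, with field names mirroring the sheet columns (not necessarily the template's exact prompt or schema):

```typescript
// Ask GPT-4o for the three invoice fields as strict JSON.
async function extractInvoiceFields(invoiceText: string) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [
        { role: "system", content: "Extract invoice data from the user's text." },
        { role: "user", content: invoiceText },
      ],
      response_format: {
        type: "json_schema",
        json_schema: {
          name: "invoice",
          strict: true,
          schema: {
            type: "object",
            properties: {
              invoice_date: { type: "string" },
              description: { type: "string" },
              total_price: { type: "number" },
            },
            required: ["invoice_date", "description", "total_price"],
            additionalProperties: false,
          },
        },
      },
    }),
  });
  const data = await res.json();
  return JSON.parse(data.choices[0].message.content);
}
```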
Use cases
**Invoice Management**: Automatically organize and track invoices received via email
**Financial Record Keeping**: Maintain a structured database of all invoice information
**Document Organization**: Keep digital copies of invoices organized in Google Drive
**Automated Data Entry**: Eliminate manual data entry for invoice processing
Resources
Gmail account
Google Drive account
Google Sheets account
OpenAI API key
Setup instructions
Prerequisites
Active Gmail, Google Drive, and Google Sheets accounts
OpenAI API key (GPT-4o model access)
n8n instance with credentials manager
Steps
Gmail and Google Drive Setup:
Connect your Gmail account in n8n credentials
Connect your Google Drive account with appropriate permissions
Create a destination folder in Google Drive for invoice storage
Google Sheets Setup:
Connect your Google Sheets account
Create a spreadsheet with the columns: Invoice date, Invoice Description, Total price, and Fichero (the file name)
Copy your spreadsheet ID for configuration
OpenAI Setup:
Add your OpenAI API key to n8n credentials
Configure Email Filter:
Update the email filter node to match your specific sender requirements
Benefits
**Time Saving**: Eliminates manual downloading, filing, and data entry
**Accuracy**: AI-powered data extraction reduces human error
**Organization**: Consistent file naming and storage structure
**Searchability**: Creates a searchable database of all invoice information
**Automation**: Runs every minute to process new emails as they arrive
Related templates
Email Parser to CRM
Document Processing Workflow
Financial Data Automation
Who is this template for?
This workflow template is designed for professionals seeking relevant data from a database using natural language.
How it works
Each time a user asks a question using the n8n chat interface, the workflow runs.
The message is then processed by the AI Agent using the relevant tools - Execute SQL Query, Get DB Schema and Tables List, and Get Table Definition - as required. The agent uses these tools to form and run the SQL queries necessary to answer the question (a sketch of one such tool follows).
Once the AI Agent has the data, it uses it to form an answer and returns it to the user.
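To make those tools concrete, here is a sketch of what a tool like Get DB Schema and Tables List boils down to: a query against Postgres's information_schema (shown with the `pg` driver, although the template runs it through a Postgres node):

```typescript
import { Client } from "pg";

// List user tables, similar to what the schema tool returns to the agent.
async function listTables() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    const res = await client.query(
      `SELECT table_schema, table_name
         FROM information_schema.tables
        WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
        ORDER BY table_schema, table_name`
    );
    return res.rows; // e.g. [{ table_schema: "public", table_name: "orders" }]
  } finally {
    await client.end();
  }
}
```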
Set up instructions
Complete the Set up credentials step when you first open the workflow. You'll need PostgreSQL credentials and an OpenAI API key.
This template was created in n8n v1.77.0.
Video Guide
I prepared a detailed guide explaining how to set up and implement this scenario, enabling you to chat with your documents stored in Supabase using n8n.
YouTube Link
Who is this for?
This workflow is ideal for researchers, analysts, business owners, or anyone managing a large collection of documents. It's particularly beneficial for those who need quick contextual information retrieval from text-heavy files stored in Supabase, without needing additional services like Google Drive.
What problem does this workflow solve?
Manually retrieving and analyzing specific information from large document repositories is time-consuming and inefficient. This workflow automates the process by vectorizing documents and enabling AI-powered interactions, making it easy to query and retrieve context-based information from uploaded files.
What this workflow does
The workflow integrates Supabase with an AI-powered chatbot to process, store, and query text and PDF files. The steps include:
Fetching and comparing files to avoid duplicate processing.
Handling file downloads and extracting content based on the file type.
Converting documents into vectorized data for contextual information retrieval.
Storing and querying vectorized data from a Supabase vector store.
File Extraction and Processing: Automates handling of multiple file formats (e.g., PDFs, text files), and extracts document content.
Vectorized Embeddings Creation: Generates embeddings for processed data to enable AI-driven interactions (a minimal sketch follows this list).
Dynamic Data Querying: Allows users to query their document repository conversationally using a chatbot.
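As a concrete illustration of the embeddings step above, here is a minimal sketch using OpenAI's embeddings endpoint (the model name is just an example):

```typescript
// Turn document chunks into embedding vectors for the Supabase vector store.
async function embedChunks(chunks: string[]): Promise<number[][]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "text-embedding-3-small", input: chunks }),
  });
  const data = await res.json();
  return data.data.map((item: { embedding: number[] }) => item.embedding);
}
```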
Setup
N8N Workflow
Fetch File List from Supabase:
Use Supabase to retrieve the stored file list from a specified bucket.
Add logic to manage empty folder placeholders returned by Supabase, avoiding incorrect processing.
Compare and Filter Files:
Aggregate the files retrieved from storage and compare them to the existing list in the Supabase files table.
Exclude duplicates and skip placeholder files to ensure only unprocessed files are handled.
Handle File Downloads:
Download new files using detailed storage configurations for public/private access.
Adjust the storage settings and GET requests to match your Supabase setup.
File Type Processing:
Use a Switch node to target specific file types (e.g., PDFs or text files).
Employ relevant tools to process the content:
For PDFs, extract embedded content.
For text files, directly process the text data.
Content Chunking:
Break large text data into smaller chunks using the Text Splitter node.
Define chunk size (default: 500 tokens) and overlap to retain necessary context across chunks (see the chunking sketch after these setup steps).
Vector Embedding Creation:
Generate vectorized embeddings for the processed content using OpenAI's embedding tools.
Ensure metadata, such as file ID, is included for easy data retrieval.
Store Vectorized Data:
Save the vectorized information into a dedicated Supabase vector store.
Use the default schema and table provided by Supabase for seamless setup.
AI Chatbot Integration:
Add a chatbot node to handle user input and retrieve relevant document chunks.
Use metadata like file ID for targeted queries, especially when multiple documents are involved.
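As referenced in the Content Chunking step, here is a simplified stand-in for what the Text Splitter node does: fixed-size chunks with overlap so context carries across boundaries. Sizes are in characters here for simplicity, while the node itself works in tokens:

```typescript
// Split text into overlapping chunks; the last chunk may be shorter.
function splitText(text: string, chunkSize = 500, overlap = 50): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // reached the end
  }
  return chunks;
}

// Example: a 1,200-character document yields chunks covering
// [0,500), [450,950), [900,1200).
```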
Testing
Upload sample files to your Supabase bucket.
Verify that files are processed and stored successfully in the vector store.
Ask simple conversational questions about your documents using the chatbot (e.g., "What does Chapter 1 say about the Roman Empire?").
Test for accuracy and contextual relevance of retrieved results.
This n8n workflow template lets teams easily generate a custom AI chat assistant based on the schema of any Notion database. Simply provide the Notion database URL, and the workflow downloads the schema and creates a tailored AI assistant designed to interact with that specific database structure.
Set Up
Watch this quick set up video 👇
Key Features
**Instant Assistant Generation**: Enter a Notion database URL, and the workflow produces an AI assistant configured to the database schema.
**Advanced Querying**: The assistant performs flexible queries, filtering records by multiple fields (e.g., tags, names). It can also search inside Notion pages to pull relevant content from specific blocks.
**Schema Awareness**: Understands and interacts with various Notion column types like text, dates, and tags for accurate responses.
**Reference Links**: Each query returns direct links to the exact Notion pages that inform the assistant’s response, promoting transparency and easy access.
**Self-Validation**: The workflow has logic to check the generated assistant, and if any errors are detected, it reruns the agent to fix them.
Ideal for
**Product Managers**: Easily access and query product data across Notion databases.
**Support Teams**: Quickly search through knowledge bases for precise information to enhance support accuracy.
**Operations Teams**: Streamline access to HR, finance, or logistics data for fast, efficient retrieval.
**Data Teams**: Automate large dataset queries across multiple properties and records.
How It Works
This AI assistant leverages two HTTP request tools—one for querying the Notion database and another for retrieving data within individual pages. It’s powered by the Anthropic LLM (or can be swapped for GPT-4) and always provides reference links for added transparency.
Who is this for
This workflow is perfect for teams and individuals who manage extensive data in Notion and need a quick, AI-powered way to interact with their databases. If you're looking to streamline your knowledge management, automate searches, and get faster insights from your Notion databases, this workflow is for you. It’s ideal for support teams, project managers, or anyone who needs to query specific data across multiple records or within individual pages of their Notion setup.
Check out the Notion template this Assistant is set up to use: https://www.notion.so/templates/knowledge-base-ai-assistant-with-n8n
How it works
The Notion Database Assistant uses an AI Agent built with Retrieval-Augmented Generation (RAG) to query this knowledge-base-style Notion database. The assistant can search across multiple properties, like tags or questions, and retrieves content from inside individual Notion pages for additional context.
Key features include:
Querying the database with flexible filters.
Searching within individual Notion pages and extracting relevant blocks.
Providing a reference link to the exact Notion pages used to inform its responses, ensuring transparency and easy verification.
This assistant uses two HTTP request tools—one for querying the Notion database and another for pulling data from within specific pages. It streamlines knowledge retrieval, offering a conversational, AI-driven way to interact with large datasets.
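For reference, the first of those HTTP request tools corresponds to Notion's database query endpoint. A minimal sketch with a simple property filter (the "Tags" property name is just an example from a knowledge-base-style database):

```typescript
// Query a Notion database for pages tagged with a given value.
async function queryDatabase(databaseId: string, tag: string) {
  const res = await fetch(
    `https://api.notion.com/v1/databases/${databaseId}/query`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.NOTION_TOKEN}`,
        "Notion-Version": "2022-06-28",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        filter: { property: "Tags", multi_select: { contains: tag } },
      }),
    }
  );
  // Each result includes a `url`, which the assistant surfaces as a reference link.
  return res.json();
}
```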
Set up
Find basic set up instructions inside the workflow itself or watch a quickstart video 👇
A temporary solution that uses the undocumented n8n REST API to back up workflows to Google Drive.
Please note that this workflow has limitations. It does not support versioning, so it creates a new copy of every workflow on each run; if you run it daily, the folder will grow quickly. Once I figure out how to handle versioning in Google Drive, I'll update the template here.
A robust n8n workflow designed to enhance Telegram bot functionality for user management and broadcasting. It facilitates automatic support ticket creation, efficient user data storage in Redis, and a sophisticated system for message forwarding and broadcasting.
How It Works
Telegram Bot Setup: Initiate the workflow with a Telegram bot configured for handling different chat types (private, supergroup, channel).
User Data Management: Formats and updates user data, storing it in a Redis database for efficient retrieval and management (see the sketch after this list).
Support Ticket Creation: Automatically generates chat tickets for user messages and saves the corresponding topic IDs in Redis.
Message Forwarding: Forwards new messages to the appropriate chat thread, or creates a new thread if none exists.
Support Forum Management: Handles messages within a support forum, differentiating between various chat types and user statuses.
Broadcasting System: Implements a broadcasting mechanism that sends channel posts to all previous bot users, with a system to filter out blocked users.
Blocked User Management: Identifies and manages blocked users, preventing them from receiving broadcasted messages.
Versatile Channel Handling: Ensures that messages from verified channels are properly managed and broadcasted to relevant users.
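As noted in the User Data Management step, here is a minimal sketch of how user data could be stored, assuming the node-redis client. The key names are illustrative, not the template's actual schema:

```typescript
import { createClient } from "redis";

// Store a Telegram user's profile under a per-user hash key.
async function saveUser(userId: number, profile: { name: string; username?: string }) {
  const redis = createClient({ url: process.env.REDIS_URL });
  await redis.connect();
  await redis.hSet(`tg:user:${userId}`, {
    name: profile.name,
    username: profile.username ?? "",
  });
  // A similar key could map the user to their support-forum topic id,
  // so replies always land in the same thread.
  await redis.quit();
}
```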
Set Up Steps
**Estimated Time**: Around 30 minutes.
**Requirements**: A Telegram bot, a Redis database, and Telegram group/channel IDs are necessary.
**Configuration**: Input the Telegram bot token and relevant group/channel IDs. Configure message handling and user data processing according to your needs.
**Detailed Instructions**: Sticky notes within the workflow provide extensive setup information and guidance.
Live Demo Workflow
Bot: Telegram Bot Link (Click here)
Support Group: Telegram Group Link (Click here)
Broadcasting Channel: Telegram Channel Link (Click here)
Keywords: n8n workflow, Telegram bot, chat ticket system, Redis database, message broadcasting, user data management, support forum automation