Store Notion's Pages as Vector Documents into Supabase with OpenAI

Created by Dataki

Workflow updated on 17/06/2024:
Added a 'Summarize' node to avoid creating a separate row in the Supabase table for each Notion content block.

Store Notion's Pages as Vector Documents into Supabase

This workflow assumes you have a Supabase project with a table that has a vector column. If you don't have one yet, follow the instructions in the Supabase Langchain Guide.
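
For reference, here is a minimal TypeScript sketch (LangChain JS plus supabase-js) of what that prerequisite looks like from code: it simply points a vector store at the table the workflow will write to. The `documents` table name, `match_documents` function name, and environment variable names are the defaults suggested by the Supabase Langchain guide and are assumptions here, not values taken from this template.

```typescript
import { createClient } from "@supabase/supabase-js";
import { OpenAIEmbeddings } from "@langchain/openai";
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";

async function checkVectorTable() {
  // Assumed env vars; use your own project URL and service role key
  const client = createClient(
    process.env.SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_ROLE_KEY!
  );

  // Table/function names follow the Supabase Langchain guide defaults
  const store = new SupabaseVectorStore(new OpenAIEmbeddings(), {
    client,
    tableName: "documents",
    queryName: "match_documents",
  });

  // A similarity search on an empty but correctly configured table returns []
  console.log(await store.similaritySearch("smoke test", 1));
}

checkVectorTable().catch(console.error);
```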

Workflow Description

This workflow automates the process of storing Notion pages as vector documents in a Supabase database with a vector column. The steps are as follows:

  1. Notion Page Added Trigger:

    • Monitors a specified Notion database for newly added pages. You can create a dedicated Notion database into which you copy the pages you want to store in Supabase.
    • Node: Page Added in Notion Database
  2. Retrieve Page Content:

    • Fetches all block content from the newly added Notion page.
    • Node: Get Blocks Content
  3. Filter Non-Text Content:

    • Excludes blocks of type "image" and "video" to focus on textual content.
    • Node: Filter - Exclude Media Content
  4. Summarize Content:

    • Concatenates the content of the Notion blocks into a single text for embedding (steps 4-8 are sketched in code after this list).
    • Node: Summarize - Concatenate Notion's blocks content
  5. Store in Supabase:

    • Stores the processed documents and their embeddings into a Supabase table with a vector column.
    • Node: Store Documents in Supabase
  6. Generate Embeddings:

    • Utilizes OpenAI's API to generate embeddings for the textual content.
    • Node: Generate Text Embeddings
  7. Create Metadata and Load Content:

    • Loads the block content and creates associated metadata, such as page ID and block ID.
    • Node: Load Block Content & Create Metadata
  8. Split Content into Chunks:

    • Divides the text into smaller chunks for easier processing and embedding generation.
    • Node: Token Splitter
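
Outside n8n, steps 4-8 boil down to roughly the following TypeScript sketch (LangChain JS). It is a simplified illustration, not the template itself: the helper name, chunk sizes, metadata fields, table name, and environment variables are assumptions.

```typescript
import { createClient } from "@supabase/supabase-js";
import { Document } from "@langchain/core/documents";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TokenTextSplitter } from "@langchain/textsplitters";
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";

// Hypothetical helper: blockTexts would come from the filtered Notion blocks
async function storeNotionPage(pageId: string, blockTexts: string[]) {
  // Step 4: concatenate the blocks' content into a single text
  const pageText = blockTexts.join("\n\n");

  // Step 8: split the text into token-based chunks
  const splitter = new TokenTextSplitter({ chunkSize: 256, chunkOverlap: 30 });
  const chunks = await splitter.splitText(pageText);

  // Step 7: wrap each chunk as a document with metadata (here, just the page ID)
  const docs = chunks.map(
    (chunk) => new Document({ pageContent: chunk, metadata: { pageId } })
  );

  // Steps 5-6: generate OpenAI embeddings and store everything in Supabase
  const client = createClient(
    process.env.SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_ROLE_KEY!
  );
  await SupabaseVectorStore.fromDocuments(docs, new OpenAIEmbeddings(), {
    client,
    tableName: "documents",
    queryName: "match_documents",
  });
}
```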

More AI workflow templates

AI agent chat

This workflow employs OpenAI's language models and SerpAPI to create a responsive, intelligent conversational agent. It comes equipped with manual chat triggers and memory buffer capabilities to ensure seamless interactions. To use this template, you need to be on n8n version 1.50.0 or later.
n8n Team

Scrape and summarize webpages with AI

This workflow integrates both web scraping and NLP functionalities. It uses HTML parsing to extract links, HTTP requests to fetch essay content, and AI-based summarization using GPT-4o. It's an excellent example of an end-to-end automated task that is not only efficient but also provides real value by summarizing valuable content. Note that to use this template, you need to be on n8n version 1.50.0 or later.
n8n Team

AI agent that can scrape webpages

⚙️🛠️🚀🤖🦾 This template is a PoC of a ReAct AI Agent capable of fetching arbitrary pages (not only Wikipedia or Google search results). At the top there is a manual chat node connected to a LangChain ReAct Agent. The agent has access to a workflow tool for getting page content. Page content extraction starts with converting query parameters into a JSON object. There are three pre-defined parameters:
  • url – the address of the page to fetch
  • method – full / simplified
  • maxlimit – maximum length for the final page; for longer pages an error message is returned to the agent
Page content fetching is a multi-step process: an HTTP Request node tries to get the page content. If the page content was successfully retrieved, a series of post-processing steps begins: extract the HTML BODY content, remove all unnecessary tags to reduce the page size, then further eliminate external URLs and IMG src values (depending on the method query parameter). The remaining HTML is converted to Markdown, reducing the page length even more while preserving the basic page structure. The remaining content is sent back to the agent if it is not too long (maxlimit = 70000 by default, see the CONFIG node). NB: You can isolate the HTTP Request part into a separate workflow. Check the Workflow Tool description; it guides the agent to provide a query string with several parameters instead of a JSON object. Please reach out to Eduard if you need further assistance with your n8n workflows and automations! Note that to use this template, you need to be on n8n version 1.19.4 or later.
Eduard
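
As a rough illustration of the page-simplification tool described in that template, here is a minimal TypeScript sketch. It assumes the `cheerio` and `turndown` packages and Node 18+ for the global `fetch`; the helper name is hypothetical, and only the 70000-character default limit is taken from the description above.

```typescript
import { load } from "cheerio";
import TurndownService from "turndown";

// Hypothetical helper mirroring the "simplified" page-fetching tool described above
async function fetchPageAsMarkdown(url: string, maxLimit = 70000): Promise<string> {
  const res = await fetch(url);
  if (!res.ok) return `Error: could not fetch ${url} (HTTP ${res.status})`;

  // Keep only the BODY and strip tags that add size without useful content
  const $ = load(await res.text());
  $("script, style, noscript, iframe, img").remove();
  const bodyHtml = $("body").html() ?? "";

  // Convert the remaining HTML to Markdown to shrink it while keeping structure
  const markdown = new TurndownService().turndown(bodyHtml);

  // Return an error message to the agent if the page is still too long
  return markdown.length > maxLimit
    ? `Error: page exceeds the ${maxLimit}-character limit`
    : markdown;
}
```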
