Markdown node

Integrate Markdown with 500+ apps and services

Unlock Markdown’s full potential with n8n, connecting it to similar Core Nodes apps and over 1000 other services. Create adaptable and scalable workflows between Markdown and your stack. All within a building experience you will love.

Popular ways to use Markdown integration

HTTP Request node
TheHive node
Item Lists node
+4

Weekly Shodan Query - Report Accidents

This n8n workflow, which runs every Monday at 5:00 AM, initiates a comprehensive process to monitor and analyze network security by scrutinizing IP addresses and their associated ports. It begins by fetching a list of watched IP addresses and expected ports through an HTTP request. Each IP address is then processed in a sequential loop. For every IP, the workflow sends a GET request to Shodan, a renowned search engine for internet-connected devices, to gather detailed information about the IP. It then extracts the data field from Shodan's response, converting it into an array. This array contains information on all ports Shodan has data for regarding the IP.

A filter node compares the ports returned from Shodan with the expected list obtained initially. If a port doesn't match the expected list, it is retained for further processing; otherwise, it's filtered out. For each such unexpected port, the workflow assembles data including the IP, hostnames from Shodan, the unexpected port number, service description, and detailed data from Shodan like HTTP status code, date, time, and headers. This collected data is then formatted into an HTML table, which is subsequently converted into Markdown format.

Finally, the workflow generates an alert in TheHive, a popular security incident response platform. This alert contains details like a title indicating unexpected ports for the specific IP, a description comprising the Markdown table with Shodan data, medium severity, the current date and time, tags, Traffic Light Protocol (TLP) set to Amber, a new status, the type 'Unexpected open port', the source as n8n, a unique source reference combining the IP with the current Unix time, and the follow and JSON parameters options enabled. This comprehensive workflow thus aids in the proactive monitoring and management of network security.
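The heart of this template is the comparison between the ports Shodan reports and the expected allow-list. Below is a minimal TypeScript sketch of that filtering step; the field names (ip_str, port, hostnames, product) and the expectedPorts map are illustrative assumptions, not necessarily the exact structure the workflow's nodes pass around.

```typescript
// Hypothetical shape of a per-port entry extracted from Shodan's data field.
interface ShodanEntry {
  ip_str: string;
  port: number;
  hostnames: string[];
  product?: string;
}

// expectedPorts maps a watched IP to the ports we expect to see open.
function findUnexpectedPorts(
  entries: ShodanEntry[],
  expectedPorts: Record<string, number[]>,
): ShodanEntry[] {
  return entries.filter((entry) => {
    const allowed = expectedPorts[entry.ip_str] ?? [];
    // Keep only ports that are NOT on the expected list; these feed the alert.
    return !allowed.includes(entry.port);
  });
}

// Example: only port 443 is expected, so the 8080 entry is flagged.
const flagged = findUnexpectedPorts(
  [
    { ip_str: "203.0.113.10", port: 443, hostnames: ["example.com"] },
    { ip_str: "203.0.113.10", port: 8080, hostnames: ["example.com"], product: "http" },
  ],
  { "203.0.113.10": [443] },
);
console.log(flagged); // -> the 8080 entry, which would feed the Markdown table and TheHive alert
```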
n8n-team
n8n Team
Google Sheets node
HTTP Request node
Merge node
Slack node
Item Lists node
+5

Monitor G2 competitors reviews [Google Sheets, ScrapingBee, Slack]

This workflow monitors G2 review URLs. When a new review is published, it triggers a Slack notification and records the review in Google Sheets. To install it, you'll need access to Slack, Google Sheets, and ScrapingBee. Full guide here: https://lempire.notion.site/Scrape-G2-reviews-with-n8n-3f46e280e8f24a68b3797f98d2fba433?pvs=4
lucasperret
Lucas Perret
HTTP Request node
Merge node
Postgres node
+18

WordPress - AI Chatbot to enhance user experience - with Supabase and OpenAI

This is the first version of a template for a RAG/GenAI app using WordPress content. As creating, sharing, and improving templates brings me joy 😄, feel free to reach out on LinkedIn if you have any ideas to enhance this template!

How it works

This template includes three workflows:
Workflow 1: Generate embeddings for your WordPress posts and pages, then store them in the Supabase vector store.
Workflow 2: Handle upserts for WordPress content when edits are made.
Workflow 3: Enable chat functionality by performing Retrieval-Augmented Generation (RAG) on the embedded documents.

Why use this template?

This template can be applied to various use cases:
Build a GenAI application that requires embedded documents from your website's content.
Embed or create a chatbot page on your website to enhance the user experience as visitors search for information.
Gain insights into the types of questions visitors are asking on your website.
Simplify content management by asking the AI for related content ideas or checking if similar content already exists. Useful for internal linking.

Prerequisites

Access to Supabase for storing embeddings.
Basic knowledge of Postgres and pgvector.
A WordPress website with content to be embedded.
An OpenAI API key.
Ensure that your n8n workflow, Supabase instance, and WordPress website are set to the same timezone (or use GMT) for consistency.

Workflow 1: Initial Embedding

This workflow retrieves your WordPress pages and posts, generates embeddings from the content, and stores them in Supabase using pgvector.

Step 0: Create Supabase tables
Nodes:
Postgres - Create Documents Table: this table is structured to support OpenAI embedding models with 1536 dimensions.
Postgres - Create Workflow Execution History Table
These two nodes create tables in Supabase: the documents table, which stores embeddings of your website content, and the n8n_website_embedding_histories table, which logs workflow executions for efficient management of upserts. The history table tracks the workflow execution ID and execution timestamp.

Step 1: Retrieve and Merge WordPress Pages and Posts
Nodes:
WordPress - Get All Posts
WordPress - Get All Pages
Merge WordPress Posts and Pages
These three nodes retrieve all content and metadata from your posts and pages and merge them. Important: apply filters to avoid generating embeddings for all site content.

Step 2: Set Fields, Apply Filter, and Transform HTML to Markdown
Nodes:
Set Fields
Filter - Only Published & Unprotected Content
HTML to Markdown
These three nodes prepare the content for embedding by: setting up the necessary fields for content embeddings and document metadata; filtering to include only published and unprotected content (protected=false), ensuring private or unpublished content is excluded from your GenAI application; and converting HTML to Markdown, which enhances performance and relevance in Retrieval-Augmented Generation (RAG) by optimizing document embeddings.

Step 3: Generate Embeddings, Store Documents in Supabase, and Log Workflow Execution
Nodes:
Supabase Vector Store (sub-nodes: Embeddings OpenAI, Default Data Loader, Token Splitter)
Aggregate
Supabase - Store Workflow Execution
This step involves generating embeddings for the content and storing them in Supabase, followed by logging the workflow execution details.
Generate embeddings: the Embeddings OpenAI node generates vector embeddings for the content.
Load data: the Default Data Loader prepares the content for embedding storage. The metadata stored includes the content title, publication date, modification date, URL, and ID, which is essential for managing upserts. ⚠️ Important note: be cautious not to store any sensitive information in metadata fields, as this information will be accessible to the AI and may appear in user-facing answers.
Token management: the Token Splitter ensures that content is segmented into manageable sizes to comply with token limits.
Aggregate: ensures the last node runs only for one item.
Store execution details: the Supabase - Store Workflow Execution node saves the workflow execution ID and timestamp, enabling tracking of when each content update was processed.
This setup ensures that content embeddings are stored in Supabase for use in downstream applications, while workflow execution details are logged for consistency and version tracking. This workflow should be executed only once for the initial embedding. Workflow 2, described below, will handle all future upserts, ensuring that new or updated content is embedded as needed.

Workflow 2: Handle Document Upserts

Content on a website follows a lifecycle: it may be updated, new content might be added, or, at times, content may be deleted. In this first version of the template, the upsert workflow manages newly added content and updated content.

Step 1: Retrieve WordPress Content with a Regular CRON
Nodes:
CRON - Every 30 Seconds
Postgres - Get Last Workflow Execution
WordPress - Get Posts Modified After Last Workflow Execution
WordPress - Get Pages Modified After Last Workflow Execution
Merge Retrieved WordPress Posts and Pages
A CRON job (set to run every 30 seconds in this template, but you can adjust it as needed) initiates the workflow. A Postgres SQL query on the n8n_website_embedding_histories table retrieves the timestamp of the latest workflow execution. Next, the HTTP nodes use the WordPress API (update the example URL in the template with your own website's URL and add your WordPress credentials) to request all posts and pages modified after the last workflow execution date. This process captures both newly added and recently updated content. The retrieved content is then merged for further processing.

Step 2: Set Fields, Apply Filter
Nodes:
Set Fields2
Filter - Only Published and Unprotected Content
This is the same as Step 2 in Workflow 1, except that HTML to Markdown is applied in a later step.

Step 3: Loop Over Items to Identify and Route Updated vs. Newly Added Content
Here, I initially aimed to use 'update documents' instead of the delete + insert approach, but encountered challenges, especially with updating both content and metadata columns together. Any help or suggestions are welcome! :)
Nodes:
Loop Over Items
Postgres - Filter on Existing Documents
Switch
Route existing_documents (if documents with matching IDs are found in metadata):
Supabase - Delete Row if Document Exists: removes any existing entry for the document, preparing for an update.
Aggregate2: aggregates documents from Supabase by ID to ensure that Set Fields3 is executed only once per piece of WordPress content, avoiding duplicate execution.
Set Fields3: sets the fields required for embedding updates.
Route new_documents (if no matching documents are found with IDs in metadata):
Set Fields4: configures fields for embedding newly added content.
In this step, a loop processes each item, directing it based on whether the document already exists. The Aggregate2 node acts as a control to ensure Set Fields3 runs only once per piece of WordPress content, effectively avoiding duplicate execution and optimizing the update process.

Step 4: HTML to Markdown, Supabase Vector Store, Update Workflow Execution Table
The HTML to Markdown node mirrors Workflow 1, Step 2. Refer to that section for a detailed explanation of how HTML content is converted to Markdown for improved embedding performance and relevance. Following this, the content is stored in the Supabase vector store to manage embeddings efficiently. Lastly, the workflow execution table is updated. These nodes mirror the Workflow 1, Step 3 nodes.

Workflow 3: An Example GenAI App with WordPress Content: a Chatbot to Embed on Your Website

Step 1: Retrieve Supabase Documents, Aggregate, and Set Fields After a Chat Input
Nodes:
When Chat Message Received
Supabase - Retrieve Documents from Chat Input
Embeddings OpenAI1
Aggregate Documents
Set Fields
When a user sends a message to the chat, the prompt (user question) is sent to the Supabase vector store retriever. The RPC function match_documents (created in Workflow 1, Step 0) retrieves documents relevant to the user's question, enabling a more accurate and relevant response. In this step, the Supabase vector store retriever fetches documents that match the user's question, including metadata; the Aggregate Documents node consolidates the retrieved data; and finally, Set Fields organizes the data to create a more readable input for the AI agent. Directly using the AI agent without these nodes would prevent metadata from being sent to the language model (LLM), but metadata is essential for enhancing the context and accuracy of the AI's response. By including metadata, the AI's answers can reference relevant document details, making the interaction more informative.

Step 2: Call the AI Agent, Respond to the User, and Store Chat Conversation History
Nodes:
AI Agent (sub-nodes: OpenAI Chat Model, Postgres Chat Memories)
Respond to Webhook
This step involves calling the AI agent to generate an answer, responding to the user, and storing the conversation history. The model used is gpt-4o-mini, chosen for its cost-efficiency.
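To make the retrieval step in Workflow 3 more concrete, here is a minimal TypeScript sketch of embedding the user's question with OpenAI and calling a match_documents function on Supabase over the pgvector documents table. The environment variables, the 1536-dimension embedding model, and the RPC parameter names (query_embedding, match_count) are assumptions based on a common pgvector setup; adjust them to match the tables and function created in Workflow 1, Step 0.

```typescript
// Sketch only: embeds a question with OpenAI, then asks Supabase (PostgREST)
// to run the match_documents RPC against the pgvector-backed documents table.
const OPENAI_API_KEY = process.env.OPENAI_API_KEY!;
const SUPABASE_URL = process.env.SUPABASE_URL!; // e.g. https://your-project.supabase.co
const SUPABASE_KEY = process.env.SUPABASE_SERVICE_KEY!;

async function embed(question: string): Promise<number[]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    // A 1536-dimension model, matching the documents table from Workflow 1, Step 0.
    body: JSON.stringify({ model: "text-embedding-3-small", input: question }),
  });
  const json = await res.json();
  return json.data[0].embedding;
}

async function matchDocuments(question: string, matchCount = 4) {
  const queryEmbedding = await embed(question);
  // PostgREST exposes Postgres functions under /rest/v1/rpc/<function_name>.
  const res = await fetch(`${SUPABASE_URL}/rest/v1/rpc/match_documents`, {
    method: "POST",
    headers: {
      apikey: SUPABASE_KEY,
      Authorization: `Bearer ${SUPABASE_KEY}`,
      "Content-Type": "application/json",
    },
    // Parameter names assumed from the usual pgvector match_documents signature.
    body: JSON.stringify({ query_embedding: queryEmbedding, match_count: matchCount }),
  });
  return res.json(); // rows with content + metadata (title, URL, dates, ID)
}

matchDocuments("How do I reset my password?").then((docs) => console.log(docs));
```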
dataki
Dataki
HTTP Request node
Merge node
Supabase node
Markdown node
+11

Autonomous AI crawler

This workflow with an AI agent is designed to navigate through a page to retrieve a specific type of information (in this example: social media profile links). The agent is equipped with two tools: a text tool, to retrieve all the text from the page, and a URLs tool, to extract all possible links from the page. 💡 You can edit the prompt and the JSON schema connected to the agent in order to return other data than social media profile links. 👉 This workflow uses Supabase as storage (input/output). Feel free to change it to any other database of your choice. 🎬 See this workflow in action in my YouTube video. How it works: the workflow uses the input URL (website) as a starting point to retrieve the data (e.g. example.com). Using the URLs tool, the agent is able to retrieve all links from the page and navigate to them. For example, if you want to retrieve contact information, the agent will try to find a subpage that might contain this information (e.g. example.com/contact) and extract it using the text tool. Set-up steps: connect a database with the input data (website addresses) or pin sample data to the trigger node; configure the crawling agent to retrieve the desired data (e.g. modify the prompt and/or parsing schema); set credentials for OpenAI; optionally, split the agent tools into separate workflows. If you like this workflow, please subscribe to my YouTube channel and/or my newsletter.
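As a rough illustration of what a "URLs tool" does, the TypeScript sketch below fetches a page and collects every href it can find with a simple regular expression. This is an assumption about one possible implementation, not the template's actual tool, which may use a proper HTML parser.

```typescript
// Minimal link-extraction sketch: fetch a page and collect absolute URLs from href attributes.
async function extractLinks(pageUrl: string): Promise<string[]> {
  const res = await fetch(pageUrl);
  const html = await res.text();

  const links = new Set<string>();
  // Naive regex over href="..." attributes; a real crawler would use an HTML parser.
  for (const match of html.matchAll(/href="([^"#]+)"/g)) {
    try {
      // Resolve relative links (e.g. "/contact") against the page URL.
      links.add(new URL(match[1], pageUrl).toString());
    } catch {
      // Skip malformed URLs.
    }
  }
  return [...links];
}

// Example: the agent could call this on "https://example.com" and then decide
// to follow "https://example.com/contact" before running the text tool on it.
extractLinks("https://example.com").then((urls) => console.log(urls));
```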
workfloows
Oskar
Google Sheets node
Gmail node
Markdown node
+2

Summarize Google Sheets form feedback via OpenAI's GPT-4

This n8n workflow was developed to collect and summarize feedback from an event, gathered via a Google Form and saved in a Google Sheets document. The workflow is triggered manually by clicking the "Test workflow" button. The Google Sheets node retrieves the responses from the feedback form. The Aggregate node then combines all responses for each question into arrays and prepares the data for analysis. The OpenAI node processes the aggregated feedback data: the system prompt instructs the model to analyze the responses and generate a summary report that includes the overall sentiment regarding the event and constructive suggestions for improvement. The Markdown node converts the summary report, which is in Markdown format, into HTML. Finally, the Gmail node sends an HTML-formatted email to the specified email address.
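The aggregation step is the only part that needs a little data wrangling: turning row-per-response sheet data into one array of answers per question before prompting the model. A minimal TypeScript sketch, with invented column names, might look like this:

```typescript
// One sheet row = one submitted form response; keys are the question headers.
type FormResponse = Record<string, string>;

// Group all answers under their question, mimicking what the Aggregate node produces.
function aggregateByQuestion(rows: FormResponse[]): Record<string, string[]> {
  const aggregated: Record<string, string[]> = {};
  for (const row of rows) {
    for (const [question, answer] of Object.entries(row)) {
      if (!answer) continue; // skip empty cells
      (aggregated[question] ??= []).push(answer);
    }
  }
  return aggregated;
}

const rows: FormResponse[] = [
  { "How would you rate the event?": "Great talks", "What could we improve?": "More breaks" },
  { "How would you rate the event?": "Loved it", "What could we improve?": "Better coffee" },
];

// The aggregated object would then be serialized into the prompt for the OpenAI node.
console.log(JSON.stringify(aggregateByQuestion(rows), null, 2));
```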
yulia
Yulia
HTTP Request node
Merge node
+3

Markdown report generation

This workflow illustrates how HTML reports can be created using the Markdown node. The example data consists of a time sheet table for two persons. Based on this table, a Markdown document is generated using a Function node. After that, a final HTML report is created and saved as a binary file. This file can either be downloaded directly from the workflow canvas or sent as an email attachment.
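A hedged sketch of the kind of code the Function node could run: building a Markdown table from time-sheet entries, which the Markdown node would then convert to HTML. The field names and data below are invented for illustration, not the template's actual example data.

```typescript
// Illustrative time-sheet rows; the real workflow defines its own example data.
interface TimeSheetEntry {
  person: string;
  date: string;
  hours: number;
}

// Build a Markdown table that the Markdown node can later convert to HTML.
function toMarkdownTable(entries: TimeSheetEntry[]): string {
  const header = "| Person | Date | Hours |\n| --- | --- | --- |";
  const rows = entries.map((e) => `| ${e.person} | ${e.date} | ${e.hours} |`);
  return [header, ...rows].join("\n");
}

const report = toMarkdownTable([
  { person: "Alice", date: "2024-03-04", hours: 7.5 },
  { person: "Bob", date: "2024-03-04", hours: 8 },
]);

console.log(report); // Markdown to be converted to HTML and saved as a binary file
```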
eduard
Eduard

Supported modes

Markdown to HTML
Convert data from Markdown to HTML
HTML to Markdown
Convert data from HTML to Markdown
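For orientation, the two modes correspond to the common JavaScript conversion libraries. The sketch below uses marked (Markdown to HTML) and turndown (HTML to Markdown) as stand-ins; these are assumptions for illustration, not necessarily the libraries the Markdown node uses internally.

```typescript
// npm install marked turndown
import { marked } from "marked";
import TurndownService from "turndown";

// Markdown to HTML
const html = marked.parse("# Weekly report\n\n- unexpected port **8080**") as string;
console.log(html); // "<h1>Weekly report</h1>..."

// HTML to Markdown
const turndown = new TurndownService();
const markdown = turndown.turndown(
  "<h1>Weekly report</h1><ul><li>unexpected port <b>8080</b></li></ul>",
);
console.log(markdown); // "# Weekly report\n\n- unexpected port **8080**"
```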

Over 3000 companies switch to n8n every single week

Connect Markdown with your company’s tech stack and create automation workflows

in other news I installed @n8n_io tonight and holy moly it’s good

it’s compatible with EVERYTHING

We're using the @n8n_io cloud for our internal automation tasks since the beta started. It's awesome! Also, support is super fast and always helpful. 🤗

Last week I automated much of the back office work for a small design studio in less than 8hrs and I am still mind-blown about it.

n8n is a game-changer and should be known by all SMBs and even enterprise companies.

Implement complex processes faster with n8n
