
Enrich Company Data from Google Sheet with OpenAI Agent and ScrapingBee

Published 29 days ago

Created by Dataki


Template description

This workflow demonstrates how to enrich a list of companies stored in a spreadsheet. It is production-ready once all the steps below are followed, although adding error handling would make it more robust.

Important notes

  • Check legal regulations: This workflow involves scraping, so make sure to check the legal regulations around scraping in your country before getting started. Better safe than sorry!
  • Mind those tokens: OpenAI tokens can add up fast, so keep an eye on usage unless you want a surprising bill that could knock your socks off! 💸

Main Workflow

Node 1 - Webhook

This node triggers the workflow via a webhook call. You can replace it with any other trigger of your choice, such as form submission, a new row added in Google Sheets, or a manual trigger.
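For reference, a manual call to the webhook trigger can be sketched with Python's standard library. The URL below is a placeholder, not this template's actual endpoint, and the payload is an invented example:

```python
import json
import urllib.request

# Placeholder: use the production URL shown on your n8n Webhook node.
WEBHOOK_URL = "https://your-n8n-instance.example/webhook/enrich-companies"

req = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps({"source": "manual-run"}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib infers POST when a request body is attached; actually send it with
# urllib.request.urlopen(req) once the workflow is active.
```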

Node 2 - Get Rows from Google Sheet

This node retrieves the list of companies from your spreadsheet. The columns in this Google Sheet are:

  • Company: The name of the company

  • Website: The website URL of the company

    These two fields are required at this step; the remaining columns are filled in by the workflow.

  • Business Area: The business area deduced by OpenAI from the scraped data

  • Offer: The offer deduced by OpenAI from the scraped data

  • Value Proposition: The value proposition deduced by OpenAI from the scraped data

  • Business Model: The business model deduced by OpenAI from the scraped data

  • ICP: The Ideal Customer Profile deduced by OpenAI from the scraped data

  • Additional Information: Information related to the scraped data, including:

    • Information Sufficiency:
      • Description: Indicates if the information was sufficient to provide a full analysis.
      • Options: "Sufficient" or "Insufficient"
    • Insufficient Details:
      • Description: If labeled "Insufficient," specifies what information was missing or needed to complete the analysis.
    • Mismatched Content:
      • Description: Indicates whether the page content aligns with that of a typical company page.
    • Suggested Actions:
      • Description: Provides recommendations if the page content is insufficient or mismatched, such as verifying the URL or searching for alternative sources.
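As an illustration, one fully enriched row could look like the Python dictionary below. All company values are invented; only the keys mirror the sheet columns described above:

```python
# Hypothetical enriched row — every value is a made-up example.
enriched_row = {
    "Company": "Acme Corp",
    "Website": "https://acme.example",
    "Business Area": "Industrial automation",
    "Offer": "Robotic assembly lines",
    "Value Proposition": "Cut assembly costs by automating repetitive tasks",
    "Business Model": "B2B hardware sales with service contracts",
    "ICP": "Mid-size manufacturers modernising production",
    "Additional Information": {
        "Information Sufficiency": "Sufficient",
        "Insufficient Details": "",
        "Mismatched Content": "No",
        "Suggested Actions": "",
    },
}
```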

Node 3 - Loop Over Items

This node ensures that, in subsequent steps, the website in "extra workflow input" corresponds to the row being processed. You can delete this node, but you'll need to ensure that the "query" sent to the scraping workflow corresponds to the website of the specific company being scraped (rather than just the first row).
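The per-row mapping the loop guarantees can be sketched in Python (company data here is invented). The point is that each query references the current row's website, never `rows[0]`:

```python
rows = [
    {"Company": "Acme", "Website": "https://acme.example"},
    {"Company": "Globex", "Website": "https://globex.example"},
]

def build_scraper_query(row):
    # Must use the *current* row's website, not always the first row's.
    return {"company": row["Company"], "website": row["Website"]}

queries = [build_scraper_query(r) for r in rows]
```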

Node 4 - AI Agent

This AI agent is configured with a prompt to extract data from the content it receives. The node has three sub-nodes:

  • OpenAI Chat Model: The model currently used is gpt-4o-mini.
  • Call n8n Workflow: This sub-node calls the workflow to use ScrapingBee and retrieves the scraped data.
  • Structured Output Parser: This parser structures the agent's output for clarity and ease of use before the enriched rows are written back to the Google Sheet.
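A minimal sketch of the kind of guard a structured output parser provides, assuming the agent replies with JSON containing the five enrichment fields. The function name and error handling are illustrative, not part of the template:

```python
import json

EXPECTED_FIELDS = {
    "Business Area", "Offer", "Value Proposition", "Business Model", "ICP",
}

def parse_agent_output(raw):
    """Hypothetical guard: reject agent replies that aren't valid JSON
    or that are missing one of the expected enrichment fields."""
    data = json.loads(raw)
    missing = EXPECTED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"Agent reply missing fields: {sorted(missing)}")
    return data
```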

Node 5 - Update Company Row in Google Sheet

This node updates the specific company's row in Google Sheets with the enriched data.

Scraper Agent Workflow

Node 1 - Tool Called from Agent

This is the trigger for when the AI Agent calls the Scraper. A query is sent with:

  • Company name
  • Website (the URL of the website)

Node 2 - Set Company URL

This node renames a field, which may seem trivial but is useful for performing transformations on data received from the AI Agent.

Node 3 - ScrapingBee: Scrape Company's Website

This node scrapes data from the URL provided using ScrapingBee. You can use any scraper of your choice, but ScrapingBee is recommended, as it allows you to configure scraper behavior directly. Once configured, copy the provided "curl" command and import it into n8n.
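For illustration, a ScrapingBee call is a plain HTTP GET to its API endpoint with the API key and target URL passed as query parameters. A sketch of building that request URL in Python follows; the parameter set shown is a minimal subset of what the curl command exports:

```python
from urllib.parse import urlencode, urlparse, parse_qs

SCRAPINGBEE_ENDPOINT = "https://app.scrapingbee.com/api/v1/"

def build_scrapingbee_url(api_key, target_url, render_js=True):
    # api_key, url and render_js are standard ScrapingBee query parameters;
    # the exported curl command may include more options than shown here.
    params = {
        "api_key": api_key,
        "url": target_url,
        "render_js": "true" if render_js else "false",
    }
    return SCRAPINGBEE_ENDPOINT + "?" + urlencode(params)
```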

Node 4 - HTML to Markdown

This node converts the scraped HTML data to Markdown, which is then sent to OpenAI. The Markdown format generally uses fewer tokens than HTML.
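To see why the conversion saves tokens, here is a crude stdlib-only HTML-to-Markdown sketch. n8n's Markdown node does this properly; this only illustrates how much markup (tags, scripts, styles) gets stripped before the text reaches OpenAI:

```python
from html.parser import HTMLParser

class HtmlToMarkdown(HTMLParser):
    """Crude HTML -> Markdown converter: keeps headings and body text,
    drops <script>/<style> blocks entirely."""
    def __init__(self):
        super().__init__()
        self.lines = []
        self.skip_depth = 0   # > 0 while inside <script>/<style>
        self.heading = ""     # pending "# "-style prefix for heading text

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip_depth += 1
        elif tag in ("h1", "h2", "h3", "h4"):
            self.heading = "#" * int(tag[1]) + " "

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip_depth -= 1

    def handle_data(self, data):
        text = data.strip()
        if text and self.skip_depth == 0:
            self.lines.append(self.heading + text)
            self.heading = ""

def html_to_markdown(html):
    parser = HtmlToMarkdown()
    parser.feed(html)
    return "\n\n".join(parser.lines)
```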

Improving the Workflow

It's always a pleasure to share workflows, but creators sometimes want to keep some magic to themselves ✨. Here are some ways you can enhance this workflow:

  • Handle potential errors
  • Configure the scraper tool to scrape other pages on the website. Although this will cost more tokens, it can be useful (e.g., scraping "Pricing" or "About Us" pages in addition to the homepage).
  • Instead of Google Sheets, connect directly to your CRM to enrich company data.
  • Trigger the workflow from form submissions on your website and send the scraped data about the lead to a Slack or Teams channel.
