This workflow shows how to enrich a list of companies in a spreadsheet with data scraped from each company's website. It is production-ready as long as every step is followed, although adding error handling would make it more robust.
Webhook
This node triggers the workflow via a webhook call. You can replace it with any other trigger of your choice, such as form submission, a new row added in Google Sheets, or a manual trigger.
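Once the workflow is active, triggering it boils down to a single HTTP request to the webhook's URL. The URL and path below are placeholders; use the production (or test) URL shown on your Webhook node.

```
// Minimal sketch: fire the workflow by calling its webhook.
// "enrich-companies" is a hypothetical path; copy the real one from the Webhook node.
const webhookUrl = "https://your-n8n-instance/webhook/enrich-companies";

const response = await fetch(webhookUrl, { method: "POST" });
console.log(response.status); // 200 means the workflow was triggered
```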
Get Rows from Google Sheet
This node retrieves the list of companies from your spreadsheet. The columns in this Google Sheet are:
Company: The name of the company
Website: The website URL of the company
These are the only two fields you need to fill in before running the workflow; the remaining columns are populated automatically (an example row is shown after this list).
Business Area: The business area deduced by OpenAI from the scraped data
Offer: The offer deduced by OpenAI from the scraped data
Value Proposition: The value proposition deduced by OpenAI from the scraped data
Business Model: The business model deduced by OpenAI from the scraped data
ICP: The Ideal Customer Profile deduced by OpenAI from the scraped data
Additional Information: any other relevant details deduced from the scraped data
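As an illustration, a freshly added row might look like the object below: only Company and Website are filled in by hand, and the workflow writes the remaining columns back. The values are made up for the example.

```
// Hypothetical starting row: only the two required fields are set manually.
const newRow = {
  Company: "Acme Corp",
  Website: "https://acme.example.com",
  "Business Area": "",          // filled in by the workflow
  "Offer": "",                  // filled in by the workflow
  "Value Proposition": "",      // filled in by the workflow
  "Business Model": "",         // filled in by the workflow
  "ICP": "",                    // filled in by the workflow
  "Additional Information": "", // filled in by the workflow
};
```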
Loop Over Items
This node ensures that, in the steps that follow, the website passed as "extra workflow input" matches the row currently being processed. You can delete this node, but you will then need to make sure the "query" sent to the scraping workflow is the website of the company being processed, rather than simply the first row's.
AI Agent
This AI agent is configured with a prompt to extract data from the content it receives. The node has three sub-nodes:
gpt-4o-mini
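To make the extraction step concrete, here is a rough sketch of an equivalent direct OpenAI call. The exact prompt wording, the response_format setting, and the markdownFromScraper variable are assumptions for illustration; in the workflow itself, the AI Agent node and its gpt-4o-mini sub-node handle this.

```
import OpenAI from "openai";

// Markdown produced by the scraping branch (placeholder content).
const markdownFromScraper = "# Acme Corp\n\nWe build modular widgets for manufacturers...";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const completion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  response_format: { type: "json_object" },
  messages: [
    {
      role: "system",
      content:
        "From the website content provided, extract the company's Business Area, Offer, " +
        "Value Proposition, Business Model and ICP. Answer with a JSON object using those keys.",
    },
    { role: "user", content: markdownFromScraper },
  ],
});

const enriched = JSON.parse(completion.choices[0].message.content ?? "{}");
console.log(enriched); // e.g. { "Business Area": "...", "Offer": "...", ... }
```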
Update Company Row in Google Sheet
This node updates the specific company's row in Google Sheets with the enriched data.
Tool Called from Agent
This node is triggered when the AI Agent calls the Scraper tool. The query sent to it contains the website of the company being processed.
Set Company URL
This node simply renames a field. That may seem trivial, but it is a convenient place to apply transformations to the data received from the AI Agent, as sketched below.
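If you prefer a Code node over a Set node, the rename (plus any extra cleanup) might look like this. The field names query and url are assumptions based on this template's data flow.

```
// n8n Code node (run once for all items): rename the incoming "query" field to "url".
// Any cleanup of the value (trimming whitespace, adding https://, etc.) could happen here too.
return items.map((item) => ({
  json: { url: item.json.query },
}));
```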
ScrapingBee: Scrape Company's Website
This node scrapes the provided URL using ScrapingBee. You can use any scraper you like, but ScrapingBee is recommended because it lets you configure the scraper's behavior from its interface; once configured, copy the generated cURL command and import it into n8n.
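For reference, the request that ScrapingBee's cURL snippet encodes amounts to something like the sketch below; the environment variable name and the render_js setting are assumptions.

```
// Minimal sketch of a ScrapingBee request (the workflow imports the generated cURL command instead).
const params = new URLSearchParams({
  api_key: process.env.SCRAPINGBEE_API_KEY ?? "", // assumption: key kept in an env variable
  url: "https://acme.example.com",                // the company's website from the current row
  render_js: "false",                             // switch to "true" for JavaScript-heavy sites
});

const response = await fetch(`https://app.scrapingbee.com/api/v1/?${params}`);
const html = await response.text(); // raw HTML, converted to Markdown in the next node
```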
HTML to Markdown
This node converts the scraped HTML data to Markdown, which is then sent to OpenAI. The Markdown format generally uses fewer tokens than HTML.
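n8n's Markdown node handles this conversion for you; purely as an illustration of the idea, the turndown library does the same thing in a couple of lines (the sample HTML is made up).

```
import TurndownService from "turndown";

const turndown = new TurndownService({ headingStyle: "atx" });
const markdown = turndown.turndown("<h1>Acme Corp</h1><p>We build modular widgets.</p>");

console.log(markdown);
// # Acme Corp
//
// We build modular widgets.
```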
It's always a pleasure to share workflows, but creators sometimes want to keep some magic to themselves ✨. Here are some ways you can enhance this workflow: