HTTP Request and Information Extractor integration

Save yourself the work of writing custom integrations for HTTP Request and Information Extractor and use n8n instead. Build adaptable and scalable Development, Core Nodes, AI, and Langchain workflows that work with your technology stack. All within a building experience you will love.

How to connect HTTP Request and Information Extractor

  • Step 1: Create a new workflow
  • Step 2: Add and configure nodes
  • Step 3: Connect
  • Step 4: Customize and extend your integration
  • Step 5: Test and activate your workflow

Step 1: Create a new workflow and add the first step

In n8n, click the "Add workflow" button in the Workflows tab to create a new workflow. Add the starting point, a trigger that determines when your workflow should run: an app event, a schedule, a webhook call, another workflow, an AI chat, or a manual trigger. Sometimes, the HTTP Request node might already serve as your starting point.

Step 2: Add and configure HTTP Request and Information Extractor nodes

You can find HTTP Request and Information Extractor in the nodes panel. Drag them onto your workflow canvas, selecting their actions. Click each node, choose a credential, and authenticate to grant n8n access. Configure HTTP Request and Information Extractor nodes one by one: input data on the left, parameters in the middle, and output data on the right.

Step 3: Connect HTTP Request and Information Extractor

A connection establishes a link between HTTP Request and Information Extractor (or vice versa) to route data through the workflow. Data flows from the output of one node to the input of another. You can have single or multiple connections for each node.
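
Under the hood, n8n records these links in the workflow's JSON. As a rough sketch (node names and type identifiers here are illustrative and abridged), a connection routing HTTP Request output into Information Extractor could look like this:

// Simplified excerpt of an exported n8n workflow (illustrative, not complete).
// The "connections" object routes HTTP Request output to Information Extractor input.
const workflow = {
  nodes: [
    { name: "HTTP Request", type: "n8n-nodes-base.httpRequest" },
    { name: "Information Extractor", type: "@n8n/n8n-nodes-langchain.informationExtractor" },
  ],
  connections: {
    "HTTP Request": {
      main: [[{ node: "Information Extractor", type: "main", index: 0 }]],
    },
  },
};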

Step 4: Customize and extend your HTTP Request and Information Extractor integration

Use n8n's core nodes such as If, Split Out, Merge, and others to transform and manipulate data. Write custom JavaScript or Python in the Code node and run it as a step in your workflow. Connect HTTP Request and Information Extractor with any of n8n’s 1000+ integrations, and incorporate advanced AI logic into your workflows.
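
For instance, a Code node set to "Run Once for All Items" can post-process whatever the Information Extractor returns. A minimal JavaScript sketch, with illustrative field names:

// Minimal Code node sketch ("Run Once for All Items" mode).
// Skips empty extractions and stamps each remaining item with a timestamp.
const results = [];
for (const item of $input.all()) {
  if (!item.json || Object.keys(item.json).length === 0) continue; // nothing extracted
  results.push({ json: { ...item.json, processedAt: new Date().toISOString() } });
}
return results;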

Step 5: Test and activate your HTTP Request and Information Extractor workflow

Save and run the workflow to see if everything works as expected. Based on your configuration, data should flow from HTTP Request to Information Extractor or vice versa. Debugging is straightforward: check past executions to isolate and fix any mistakes. Once you've tested everything, make sure to save your workflow and activate it.

Ultimate Scraper Workflow for n8n

What this template does
The Ultimate Scraper for n8n uses Selenium and AI to retrieve any information displayed on a webpage. You can also use session cookies to log in to the targeted webpage for more advanced scraping needs.

⚠️ Important: This project requires specific setup instructions. Please follow the guidelines provided in the GitHub repository: n8n Ultimate Scraper Setup: https://github.com/Touxan/n8n-ultimate-scraper/tree/main.

The workflow version on n8n and the GitHub project may differ; however, the most up-to-date version will always be the one available in the GitHub repository: https://github.com/Touxan/n8n-ultimate-scraper/tree/main.

How to use
Deploy the project with all of its requirements, then send a request to your webhook.

Example request:
curl -X POST http://localhost:5678/webhook-test/yourwebhookid \
  -H "Content-Type: application/json" \
  -d '{
    "subject": "Hugging Face",
    "Url": "github.com",
    "Target data": [
      {
        "DataName": "Followers",
        "description": "The number of followers of the GitHub page"
      },
      {
        "DataName": "Total Stars",
        "description": "The total number of stars across the different repos"
      }
    ],
    "cookie": []
  }'

Or, to simply scrape a URL:
curl -X POST http://localhost:5678/webhook-test/67d77918-2d5b-48c1-ae73-2004b32125f0 \
  -H "Content-Type: application/json" \
  -d '{
    "Target Url": "https://github.com",
    "Target data": [
      {
        "DataName": "Followers",
        "description": "The number of followers of the GitHub page"
      },
      {
        "DataName": "Total Stars",
        "description": "The total number of stars across the different repos"
      }
    ],
    "cookies": []
  }'

Popular HTTP Request and Information Extractor workflows

Startup Funding Research Automation with Claude, Perplexity AI, and Airtable

How it works
This intelligent workflow automatically discovers and analyzes recently funded startups by:
  • Monitoring multiple news sources (TechCrunch and VentureBeat) for funding announcements
  • Using AI to extract key funding details (company name, amount raised, investors)
  • Conducting automated deep research on each company through Perplexity deep research or Jina DeepSearch
  • Organizing all findings into a structured Airtable database for easy access and analysis

Set up steps (10-15 minutes)
  • Connect your news feed sources (TechCrunch and VentureBeat). The list could be extended; these were easy to scrape, and this data can be expensive.
  • Set up your AI service credentials (Claude, plus Perplexity or Jina, which has a generous free tier).
  • Connect your Airtable account and create a base with the appropriate fields (it can be imported from my base, or see the structure below).

Airtable Base Structure

Funding Round Base
| Field Name | Data Type | Description |
|------------|-----------|-------------|
| website_url | String | URL of the company website |
| company_name | String | Name of the company |
| funding_round | String | The funding stage or round (e.g., Series A, Seed, etc.) |
| funding_amount | Number | The amount of funding received |
| lead_investor | String | The primary investor leading the funding round |
| market | String | The market or industry sector the company operates in |
| participating_investors | String | List of other investors participating in the funding round |
| press_release_url | String | URL to the press release about the funding |
| evaluation | Number | The company's valuation |

Company Deep Research Base
| Field Name | Data Type | Description |
|------------|-----------|-------------|
| website_url | String | URL of the company website |
| company_name | String | Name of the company |
| funding_round | String | The funding stage or round (e.g., Series A, Seed, etc.) |
| funding_amount | Number | The amount of funding received |
| currency | String | Currency of the funding amount |
| announcement_date | String | Date when the funding was announced |
| lead_investor | String | The primary investor leading the funding round |
| participating_investors | String | List of other investors participating in the funding round |
| industry | String | The industry sectors the company operates in |
| company_description | String | Description of the company's business |
| hq_location | String | Company headquarters location |
| founding_year | Number | Year the company was founded |
| founder_names | String | Names of the company founders |
| ceo_name | String | Name of the company CEO |
| employee_count | Number | Number of employees at the company |
| total_funding | Number | Total funding amount received to date |
| total_funding_currency | String | Currency of total funding |
| funding_purpose | String | Purpose or use of the funding |
| business_model | String | Company's business model |
| valuation | Object | Company valuation information |
| previous_rounds | Object | Information about previous funding rounds |
| source_urls | String | Source URLs for the funding information |
| original_report | String | Original report text about the funding |
| market | String | The market the company operates in |
| press_release_url | String | URL to the press release about the funding |
| evaluation | Number | The company's valuation |

Notes
I found that when using Perplexity via OpenRouter we lose access to the sources, as they are not stored in the same location as the report itself, so I opted to call the Perplexity API via the HTTP Request node. For Perplexity and/or Jina you have to configure header auth as described in Header Auth - n8n Docs.

What you can learn
  • How to scrape data using sitemaps
  • How to extract structured data from unstructured text
  • How to execute parts of the workflow as a subworkflow
  • How to use deep research in a practical scenario
  • How to define more complex JSON schemas (illustrated below)
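
The last point is easiest to see with an example. Below is a hedged sketch of what a JSON schema for the funding-extraction step could look like; the field names follow the Airtable base above, but the template's actual schema may differ:

// Illustrative JSON schema for extracting funding details
// (assumed shape; the template's real schema may differ).
const fundingSchema = {
  type: "object",
  properties: {
    company_name: { type: "string", description: "Name of the company" },
    funding_round: { type: "string", description: "Funding stage, e.g. Series A" },
    funding_amount: { type: "number", description: "Amount of funding received" },
    lead_investor: { type: "string", description: "Investor leading the round" },
    participating_investors: { type: "string", description: "Other participating investors" },
    press_release_url: { type: "string", description: "URL of the press release" },
  },
  required: ["company_name", "funding_round", "funding_amount"],
};
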
Travel AI Agent - AI-Powered Travel Planner

Overview
An n8n workflow automating business travel planning via Telegram. It uses AI and external APIs to find and book flights and hotels efficiently.

Prerequisites
  • Telegram Bot (created via BotFather)
  • API keys: OpenAI (transcription), SerpAPI (flights/hotels), DeepSeek (AI processing)
  • n8n instance with API access

Setup Instructions
  • Import Workflow: Upload the JSON to n8n.
  • Configure API Credentials: Set up Telegram, OpenAI, SerpAPI, and DeepSeek keys.
  • Webhook Activation: Ensure the Telegram webhook is active with HTTPS.
  • Test: Send a Telegram message and verify the execution.

Workflow Operation
  • User Input Processing: The Telegram bot triggers the workflow and extracts text or audio. OpenAI transcribes voice messages, and DeepSeek extracts key travel details (locations, dates, accommodation needs).
  • Travel Search: Flights are found via SerpAPI (airlines, times, prices; a request sketch follows this description), and hotel accommodations are fetched with a dynamic check-out date.
  • AI Recommendations & Customization: DeepSeek generates structured travel plans with professional, well-structured responses and links. Users can modify prompts to adjust the AI responses for personalized results.
  • Response Delivery: Travel recommendations are sent via Telegram with clear details.

Use Cases
Ideal for business professionals, executive assistants, frequent travelers, and small businesses.

Customization & Troubleshooting
Adjust memory handling and API calls, and modify prompts to refine the AI output. Ensure API keys are active and the network is accessible.
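
As a rough illustration of the flight-search step referenced above, here is a hedged sketch of a SerpAPI Google Flights request as it might be issued from an HTTP Request or Code node. The parameter names are assumptions based on SerpAPI's public conventions and may not match the template exactly:

// Hedged sketch of a SerpAPI Google Flights lookup (parameter names assumed).
const params = new URLSearchParams({
  engine: "google_flights",
  departure_id: "JFK",         // origin airport code (illustrative)
  arrival_id: "LHR",           // destination airport code (illustrative)
  outbound_date: "2025-06-01", // YYYY-MM-DD
  api_key: "<YOUR_SERPAPI_KEY>",
});
const res = await fetch(`https://serpapi.com/search.json?${params}`);
const flights = await res.json();
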
Android to n8n Automation | Save Links with Readeck, OpenRouter, SerpAPI

This workflow automates and centralizes your bookmarking process using AI-powered tagging and seamless integration between your Android device and a self-hosted Readeck platform (https://readeck.org/en/). It eliminates manual entry, organizes links with smart AI-generated tags, and ensures your bookmarks are always accessible, searchable, and secure.

How It Works
  • 📱 Android Shortcut Integration: Use the HTTP Shortcuts app to create a one-tap trigger that sends URLs and titles from your Android phone directly to n8n (see the sketch after this description).
  • 🤖 AI-Powered Tagging & Processing: Leverage ChatGPT-4 to analyze content context and auto-generate relevant tags (e.g., "Tech Tutorials", "Productivity Tools"), and to extract clean titles and URLs from messy shared data (even from apps like Twitter or Reddit).
  • 🔗 Readeck Integration: Automatically save processed bookmarks to your self-hosted Readeck-like platform with structured metadata (title, URL, tags).
  • ⚡ Silent Automation: Runs in the background with no pop-ups or interruptions.
  • 🔒 Pro Security: Optional authentication (API tokens, headers) to protect your data.

Use Case
Perfect for researchers, content creators, or anyone drowning in tabs who wants to save articles, videos, or social posts in one click, organize bookmarks with AI-generated tags, and build a personal knowledge base that's always accessible.

Tutorial
  • 1. Set Up Android Shortcut: Install "HTTP Shortcuts" and configure it to send data to your n8n webhook. Enable "Share Menu" to trigger bookmarks from any app.
  • 2. Configure n8n Workflow: Import the template and add your Readeck API token (or that of a similar service).
  • 3. Test & Scale: Share a link from your phone and watch it appear in Readeck instantly! Add error handling or notifications for advanced use.

Note: For self-hosted platforms, ensure your instance is publicly accessible (or use a VPN).

Why Choose This Workflow?
  • Zero Manual Entry: Save hours of copying and pasting.
  • AI Organization: Say goodbye to chaotic bookmark folders.
  • Privacy First: Host your data on your terms.

Transform your bookmarking chaos into a streamlined system: try "Save: Bookmark" today! 🚀
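
A minimal sketch of the request the HTTP Shortcuts app would send to the n8n webhook, as mentioned in step 1 above. The webhook path and field names here are hypothetical; they must match whatever your workflow's Webhook node actually expects:

// Hypothetical payload from the Android HTTP Shortcuts app to the n8n webhook.
// The "save-bookmark" path and both field names are illustrative.
await fetch("https://your-n8n-host/webhook/save-bookmark", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    url: "https://example.com/article", // shared link
    title: "Example article",           // shared title
  }),
});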

Scrape Trustpilot Reviews with DeepSeek, Analyze Sentiment with OpenAI

Workflow Overview
This workflow automates the process of scraping Trustpilot reviews, extracting key details, analyzing sentiment, and saving the results to Google Sheets. It uses OpenAI for sentiment analysis and HTML parsing for review extraction.

How It Works
  • Scrape Trustpilot Reviews: An HTTP Request node fetches review pages from Trustpilot (https://it.trustpilot.com/review/{{company_id}}) and paginates through them (up to the max_page limit). HTML parsing extracts the review URLs using CSS selectors and splits them into individual review links.
  • Extract Review Details: An Information Extractor node uses DeepSeek to extract structured data from each review: Author (name of the reviewer), Rating (numeric, 1-5), Date (review date in YYYY-MM-DD format), Title (review title), Text (full review text), Total Reviews (number of reviews by the user), and Country (reviewer's 2-letter country code). A sample of the extracted shape follows this description.
  • Sentiment Analysis: A Sentiment Analysis node uses OpenAI to classify the review text as Positive, Neutral, or Negative. Example output: { "category": "Positive", "confidence": 0.95 }
  • Save to Google Sheets: A Google Sheets node appends or updates the extracted data in a Google Sheet.

Set Up Steps
  • Configure Trustpilot scraping: In the Edit Fields1 node, set company_id to the Trustpilot company name and max_page to limit the number of pages scraped.
  • Configure Google Sheets: Update the documentId with your Google Sheet ID and ensure the sheet has the required columns (Id, Data, Nome, etc.).
  • Configure OpenAI: Add your OpenAI API key to the OpenAI Chat Model node, and make sure the Sentiment Analysis node's categories match your desired sentiment labels (Positive, Neutral, Negative).

Key Components
  • Nodes: HTTP Request/HTML (scrape and parse Trustpilot reviews), Information Extractor (extract structured review data using DeepSeek), Sentiment Analysis (classify review sentiment), Google Sheets (save and update review data).
  • Credentials: OpenAI API key, DeepSeek API key, Google Sheets OAuth2.
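
Put together, a single extracted review might look like the following. This is an illustrative shape based on the fields listed above, with sample data, not the template's literal output:

// Illustrative Information Extractor output for one review (assumed shape, sample data).
const review = {
  Author: "Mario Rossi",
  Rating: 4,            // numeric rating, 1-5
  Date: "2024-11-02",   // YYYY-MM-DD
  Title: "Fast delivery",
  Text: "Ordered on Monday, arrived on Wednesday. Great service.",
  "Total Reviews": 12,  // reviews written by this user
  Country: "IT",        // 2-letter country code
};
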
Personal Shopper Chatbot for WooCommerce with RAG using Google Drive and OpenAI

This workflow combines OpenAI, Retrieval-Augmented Generation (RAG), and WooCommerce to create an intelligent personal shopping assistant. It handles two scenarios:
  • Product Search: Extracts user intent (keywords, price ranges, SKUs) and fetches matching products from WooCommerce.
  • General Inquiries: Answers store-related questions (e.g., opening hours, policies) using RAG and documents stored in Google Drive.

How It Works
  • Chat Interaction & Intent Detection: A Chat Trigger starts the workflow when a user sends a message ("When chat message received"). An Information Extractor node uses OpenAI to analyze the message and determine whether the user is searching for a product or asking a general question, extracting search (true/false) plus keyword, priceRange, SKU, and category if product-related. Example: { "search": true, "keyword": "red handbags", "priceRange": { "min": 50, "max": 100 }, "SKU": "BAG123", "category": "women's accessories" }
  • Product Search (WooCommerce Integration): If search: true, the AI Agent routes the request to the personal_shopper tool. A WooCommerce node queries the store using the extracted parameters (keyword, priceRange, SKU), filters products in stock (stockStatus: "instock"), and returns matching products (e.g., "red handbags under €100"); see the sketch after this description.
  • General Inquiries (RAG System): If search: false, the RAG tool uses the Qdrant Vector Store to retrieve store information from documents. Documents (e.g., store policies, FAQs) are stored in Google Drive, then downloaded, split into chunks, and embedded into Qdrant for semantic search. The OpenAI Chat Model generates answers based on the retrieved documents (e.g., "Our store opens at 9 AM").

Set Up Steps
  • Configure the RAG system: Upload your store documents to Google Drive and update the Google Drive2 node with your folder ID. For the Qdrant vector database, clean the collection (update the Qdrant Vector Store node with your URL) and use Embeddings OpenAI to convert documents into vectors.
  • Configure OpenAI & WooCommerce: Add your API key to all OpenAI nodes (OpenAI Chat Model, Embeddings OpenAI, etc.), connect your WooCommerce store (credentials in the personal_shopper node), and ensure product data is synced and accessible.
  • Customize the AI Agent: Modify the Information Extractor's system prompt to align with your store's terminology, and update the tool description to reflect your store's documents.

Notes
This template is ideal for e-commerce businesses needing a hybrid assistant for product discovery and customer support.
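
For orientation, the product search that the personal_shopper tool performs maps onto WooCommerce's v3 REST API roughly as follows. The endpoint and query parameters come from WooCommerce's documented API, but the template drives them through the WooCommerce node rather than a raw HTTP call:

// Rough sketch of the product search against the WooCommerce v3 REST API.
// Host and credentials are placeholders; the template uses the WooCommerce node instead.
const params = new URLSearchParams({
  search: "red handbags",  // extracted keyword
  min_price: "50",
  max_price: "100",
  stock_status: "instock", // only in-stock products
});
const res = await fetch(
  `https://your-store.example/wp-json/wc/v3/products?${params}`,
  { headers: { Authorization: "Basic <base64(consumer_key:consumer_secret)>" } }
);
const products = await res.json();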

AI-Powered Web Scraping with Jina, Google Sheets and OpenAI: the EASY way

Purpose of workflow
Automate the scraping of a website, transform the result into a structured format, and load it directly into a Google Sheets spreadsheet.

How it works
  • Web Scraping: Uses the Jina AI service to scrape website data and convert it into LLM-friendly text.
  • Information Extraction: Employs an AI node to extract specific book details (title, price, availability, image URL, product URL) from the scraped data.
  • Data Splitting: Splits the extracted information into individual book entries.
  • Google Sheets Integration: Automatically populates a Google Sheets spreadsheet with the structured book data.

Step by step setup
  • Set up the Jina AI service: Sign up for a Jina AI account and obtain an API key.
  • Configure the HTTP Request node: Enter the Jina AI URL with the target website and add the API key to the request headers for authentication (see the sketch below).
  • Set up the Information Extractor node: Use Claude AI to generate a JSON schema for data extraction: upload a screenshot of the target website to Claude AI, ask it to suggest a JSON schema for extracting the required information, and copy the generated schema into the Information Extractor node.
  • Configure the Split node: Set it up to separate the extracted data into individual book entries.
  • Set up the Google Sheets node: Create a spreadsheet with columns for title, price, availability, image URL, and product URL, and configure the node to map the extracted data to the appropriate columns.
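
A minimal sketch of the request the HTTP Request node makes, assuming the Jina AI Reader endpoint (https://r.jina.ai/) is simply prefixed to the target URL; the target site below is a placeholder:

// Minimal sketch: fetching LLM-friendly text from Jina AI Reader.
// Replace the placeholder target URL with the site you want to scrape.
const res = await fetch("https://r.jina.ai/https://example-bookstore.com", {
  headers: { Authorization: "Bearer <YOUR_JINA_API_KEY>" },
});
const llmFriendlyText = await res.text();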

Build your own HTTP Request and Information Extractor integration

Create custom HTTP Request and Information Extractor workflows by choosing triggers and actions. Nodes come with global operations and settings, as well as app-specific parameters that can be configured. You can also use the HTTP Request node to query data from any app or service with a REST API.

HTTP Request and Information Extractor integration details

Use case

Save engineering resources

Reduce time spent on customer integrations, engineer faster POCs, and keep customer-specific functionality separate from your product, all without having to code.

FAQs

  • Can HTTP Request connect with Information Extractor?

  • Can I use HTTP Request’s API with n8n?

  • Can I use Information Extractor’s API with n8n?

  • Is n8n secure for integrating HTTP Request and Information Extractor?

  • How do I get started with the HTTP Request and Information Extractor integration on n8n.io?

Need help setting up your HTTP Request and Information Extractor integration?

Discover our community's latest recommendations and join the discussions about the HTTP Request and Information Extractor integration.

Looking to integrate HTTP Request and Information Extractor in your company?

Over 3000 companies switch to n8n every single week

Why use n8n to integrate HTTP Request with Information Extractor

Build complex workflows, really fast

Handle branching, merging and iteration easily.
Pause your workflow to wait for external events.

Code when you need it, UI when you don't

Simple debugging

Your data is displayed alongside your settings, making edge cases easy to track down.

Use templates to get started fast

Use 1000+ workflow templates available from our core team and our community.

Reuse your work

Copy and paste, easily import and export workflows.

Implement complex processes faster with n8n
