This workflow extracts data from a multi-page website.
The workflow uses the getWorkflowStaticData('global') method to recover the next page to visit (saved while processing the previous one) and continues until all pages have been processed.
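A minimal sketch of what such a Function node could look like (the key name `nextPage`, the URLs, and the item shape are illustrative assumptions, not the exact code of this workflow):

```js
// Function node: keep pagination state between executions.
// The key name `nextPage` and the URLs are illustrative assumptions.
const staticData = getWorkflowStaticData('global');

// First run: no saved state yet, so start from the first page.
const url = staticData.nextPage || 'https://example.com/banks?page=1';

// After parsing the fetched page, store the "next" link so the
// following iteration (or the next execution) can pick up from there.
staticData.nextPage = 'https://example.com/banks?page=2';

return [{ json: { url } }];
```

Note that n8n only persists static data for production (trigger-based) executions; during manual test runs the state is not saved between executions.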
In a first section, the list of countries is retrieved and extracted.
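The description doesn't show the extraction step itself. As an illustration, a Function node using cheerio might look like the sketch below; the selector, the input property, and the use of cheerio are all assumptions, and external modules must be whitelisted via the NODE_FUNCTION_ALLOW_EXTERNAL environment variable:

```js
// Function node: parse the countries list out of the fetched HTML.
// Assumes cheerio is whitelisted (NODE_FUNCTION_ALLOW_EXTERNAL=cheerio)
// and that countries live in a <select id="country"> element; both
// are assumptions made for illustration only.
const cheerio = require('cheerio');

const html = items[0].json.data; // HTML from the previous HTTP Request node
const $ = cheerio.load(html);

const countries = [];
$('#country option').each((_, el) => {
  const code = $(el).attr('value');
  if (code) countries.push({ code, name: $(el).text().trim() });
});

// Return one item per country so the rest of the workflow can iterate.
return countries.map((c) => ({ json: c }));
```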
Next, the workflow checks whether a locally cached copy of the page is available and, if so, recovers the cached page from disk.
Finally, I save the data to MongoDB, paginating through all the pages of each country in turn.
I have implemented a cache system that saves each visited page to the n8n local disk. If the workflow is relaunched, it checks whether a cache file already exists, discarding requests to the website that are no longer required.
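A sketch of how such a cache check could be implemented in a Function node; the cache directory and the URL-to-filename scheme are assumptions, and built-in modules must be allowed via NODE_FUNCTION_ALLOW_BUILTIN:

```js
// Function node: disk cache to skip pages already fetched on a previous run.
// The cache directory and the hashing scheme are assumptions.
const fs = require('fs');
const path = require('path');
const crypto = require('crypto');

const CACHE_DIR = '/tmp/scrape-cache';
fs.mkdirSync(CACHE_DIR, { recursive: true });

const url = items[0].json.url;
// Hash the URL so it becomes a safe, fixed-length filename.
const file = path.join(
  CACHE_DIR,
  crypto.createHash('md5').update(url).digest('hex') + '.html'
);

if (fs.existsSync(file)) {
  // Cache hit: reuse the stored page, no HTTP request needed.
  return [{ json: { url, html: fs.readFileSync(file, 'utf8'), cached: true } }];
}

// Cache miss: signal the next node to fetch the page; a later node
// would then write the response body to `file` with fs.writeFileSync.
return [{ json: { url, cached: false, cacheFile: file } }];
```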
If the data on the website changes over time, you can add a Cron node to check the website once per week.
Finally, before inserting data into MongoDB, the best way to avoid duplicates is to check that the swift_code (the primary key of the collection) doesn't already exist.
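Sketched with the official mongodb Node.js driver (the connection string, database, and collection names are placeholders), an upsert keyed on swift_code gives the same duplicate protection in a single atomic call:

```js
// Sketch with the official `mongodb` driver; connection string,
// database and collection names are placeholders.
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017');

async function saveBank(bank) {
  const col = client.db('scraping').collection('banks');

  // Upsert keyed on swift_code: inserts the document if the code is
  // new and updates the existing one otherwise, so re-running the
  // workflow never creates duplicates.
  await col.updateOne(
    { swift_code: bank.swift_code },
    { $set: bank },
    { upsert: true }
  );
}

// Example usage:
// await client.connect();
// await saveBank({ swift_code: 'BBVAESMM', name: 'BBVA', country: 'ES' });
// await client.close();
```

Alternatively, a unique index on swift_code makes MongoDB itself reject duplicate inserts.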
I recommend using a proxy for all requests to avoid IP blocks. A good solution for a proxy with IP rotation is scrapoxy.io.
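Outside of n8n's HTTP Request node settings, routing a request through a Scrapoxy endpoint looks roughly like this with axios; the host and port are assumptions, so check your own Scrapoxy configuration:

```js
// Sketch: routing a request through a local Scrapoxy instance with
// axios. Host and port are assumptions made for illustration.
const axios = require('axios');

async function fetchViaProxy(url) {
  const res = await axios.get(url, {
    proxy: {
      protocol: 'http',
      host: 'localhost',
      port: 8888, // assumed Scrapoxy endpoint
    },
    timeout: 30000,
  });
  return res.data;
}
```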
This workflow is a good fit for small data requirements. If you need to scrape dynamic (JavaScript-rendered) data, you can use a headless browser or a similar service.
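For example, a headless-browser fetch with Puppeteer (an illustration, not part of this workflow) boils down to:

```js
// Sketch: fetching a JavaScript-rendered page with Puppeteer.
const puppeteer = require('puppeteer');

async function renderPage(url) {
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    // Wait until network activity settles so client-side content renders.
    await page.goto(url, { waitUntil: 'networkidle2' });
    return await page.content(); // fully rendered HTML
  } finally {
    await browser.close();
  }
}
```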
If you want to scrape huge lists of URIs, I recommend using Scrapy + Scrapoxy.