Split Out node

Integrate Split Out with 500+ apps and services

Unlock Split Out’s full potential with n8n by connecting it to similar Core Nodes apps and over 1,000 other services. Create adaptable and scalable workflows between Split Out and your stack, all within a building experience you will love.

Popular ways to use Split Out integration

Code node

Reconcile Rent Payments with Local Excel Spreadsheet and OpenAI

This n8n workflow is designed to work on the local network and assists with reconciling downloaded bank statements against internal tenant records, quickly highlighting payment issues such as missed or late payments or incorrect amounts. The assistant can then generate a report to flag issues for attention and ensure remedial action is taken.

How it works
The workflow monitors a local network drive for new bank statements. Each new statement is imported into the n8n workflow, its contents extracted and sent to the AI Agent. The AI Agent analyses the line items to identify the dates and any incoming payments from tenants. The agent then uses a locally-hosted Excel ("XLSX") spreadsheet to get both tenant records and property records. From this data, it can determine, for each active tenant, when payment is due, the amount, and the tenancy duration. Comparing against the bank statement, the AI Agent can then report where tenants have missed payments, made late payments, or are paying incorrect amounts. The final report is generated and logged in the same XLSX for a human to check and action. A rough sketch of the reconciliation logic follows below.

Requirements
A self-hosted version of n8n is required. An OpenAI account is needed for the AI model.

Customising this workflow
If your organisation has a Slack or Teams account, consider sending reports to a channel for increased productivity. Email may be a good choice too. Want to go fully local? A version of this workflow is available which uses Ollama instead. You can download this template here: https://drive.google.com/file/d/1YRKjfakpInm23F_g8AHupKPBN-fphWgK/view?usp=sharing
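As a rough illustration of the reconciliation step the AI Agent performs, here is a minimal standalone Python sketch. The file names and column headings ("tenant", "rent_due", "reference", "amount") are hypothetical placeholders for your own records:

import pandas as pd

# Hypothetical inputs: internal tenant records and a downloaded statement.
tenants = pd.read_excel("tenants.xlsx")
statement = pd.read_csv("bank_statement.csv")

issues = []
for _, t in tenants.iterrows():
    # Match incoming payments whose reference mentions the tenant's name.
    paid = statement[statement["reference"].str.contains(t["tenant"], case=False, na=False)]
    if paid.empty:
        issues.append(f"{t['tenant']}: missed payment")
    elif paid["amount"].sum() != t["rent_due"]:
        issues.append(f"{t['tenant']}: incorrect amount ({paid['amount'].sum()} vs {t['rent_due']})")

print("\n".join(issues) if issues else "All payments reconciled")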
jimleuk
Jimleuk
HTTP Request node
Slack node
Webhook node
Telegram node

Monitor Multiple Github Repos via Webhook

What this workflow does
This workflow allows you to monitor multiple Github repos simultaneously without polling, thanks to webhooks. It programmatically supports adding and deleting repos from your watchlist to make management convenient.

Description
Monitors multiple repos simultaneously. Programmatically register or unregister repos from a list, with no manual work needed. Webhook notifications mean no constant polling is necessary.

Setup
1. Creating Credentials on Github
Generate a personal access token on Github by following these steps: right-hand side of page -> Settings -> scroll to bottom -> Developer Settings -> Personal Access Token -> Tokens (classic) -> Generate New Token. Give it the scopes admin:repo_hook and repo (the latter if you want to use it for your own private repo). If you need more help, see here: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens
2. Setting Credentials in n8n
In Register Github Webhook, set Authentication > Generic Credential Type, Generic Auth Type > Header Auth, and under Header Auth create a new credential with Name set to 'Authorization' and Value set to 'Bearer '. (You can reuse this for Delete Github Webhook and Get Existing Webhooks.) Now in Register Github Webhook, scroll down to Send Body > JSON and, inside the JSON, change the value of "url" to the webhook address given as Production URL in the Webhook Trigger node. A sketch of the underlying API call follows below.
3. Notification settings
In the third row, link up the Webhook Trigger to any API of your choice. Slack and Telegram are given as examples. You can also format the notification message as you wish.
Setup time: roughly 10 minutes.
Instructions video: https://vimeo.com/1013473758

Test
1. Register Webhooks
In Repos to Monitor, add any repo you want to monitor changes for. Disable Webhook Trigger, click Test Workflow and, if your Github credentials were set correctly, it will automatically register the webhooks. You can verify this by running the single node Get Existing Webhooks and confirming it outputs the repo addresses.
2. Handle Github Events
Now that you have registered the webhooks, re-enable Webhook Trigger and activate the workflow. Make a commit to any of the registered repos and confirm that the notification went through. That's it!
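For orientation, the registration the workflow performs boils down to a single GitHub REST API request per repo. A minimal Python sketch, assuming a hypothetical n8n Production URL and a classic token with the admin:repo_hook scope:

import requests

GITHUB_TOKEN = "ghp_..."  # classic token with admin:repo_hook scope
N8N_WEBHOOK_URL = "https://your-n8n.example.com/webhook/github"  # hypothetical Production URL

def register_webhook(owner: str, repo: str) -> None:
    # Equivalent of the Register Github Webhook node.
    resp = requests.post(
        f"https://api.github.com/repos/{owner}/{repo}/hooks",
        headers={"Authorization": f"Bearer {GITHUB_TOKEN}"},
        json={
            "name": "web",
            "config": {"url": N8N_WEBHOOK_URL, "content_type": "json"},
            "events": ["push"],  # fire on commits
            "active": True,
        },
    )
    resp.raise_for_status()

register_webhook("your-org", "your-repo")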
jay
Jay Hartley
Google Sheets node
HTTP Request node

🚀 Local Multi-LLM Testing & Performance Tracker

This workflow is perfect for developers, researchers, and data scientists benchmarking multiple LLMs with LM Studio. It dynamically fetches active models, tests prompts, and tracks metrics like word count, readability, and response time, logging the results into Google Sheets. Easily adjust temperature 🔥 and top P 🎯 for flexible model testing.
Level of Effort: 🟢 Easy – minimal setup with customizable options.
Setup Steps: Install LM Studio and configure your models. Update the IP to connect to LM Studio. Create a Google Sheet for result tracking.
Key Outcomes: Benchmark LLM performance. Automate results in Google Sheets for easy comparison.
Version 1.0
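Under the hood, LM Studio exposes an OpenAI-compatible local server, so the benchmarking loop can be sketched in a few lines of Python. The address is LM Studio's default, and the prompt and sampling values are placeholders:

import time
import requests

BASE = "http://localhost:1234/v1"  # LM Studio's default local server; update the IP for your setup
prompt = "Summarise what DNS does in two sentences."  # placeholder test prompt

# Dynamically fetch the currently loaded models.
models = [m["id"] for m in requests.get(f"{BASE}/models").json()["data"]]

for model in models:
    start = time.time()
    r = requests.post(f"{BASE}/chat/completions", json={
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,  # 🔥 adjust per run
        "top_p": 0.9,        # 🎯 adjust per run
    })
    text = r.json()["choices"][0]["message"]["content"]
    elapsed = round(time.time() - start, 2)
    # In the workflow, these rows are appended to Google Sheets instead.
    print(f"{model}: {elapsed}s, {len(text.split())} words")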
davidmoneil
Wildkick
HTTP Request node
Merge node
Webhook node

AI-powered WooCommerce Support-Agent

With this workflow you get a fully automated, AI-powered support agent for your WooCommerce webshop. It allows customers to request information about things like: the status of their order, the ordered products, the shipping and billing address, and the current DHL shipping status.

How it works
The workflow receives chat messages from a chat integrated into a website. For security and data-privacy reasons, the website transmits the user's email address encrypted with each request. This ensures that users can only request information about their own orders. An AI agent with a custom tool supplies the needed information. The tool calls a sub-workflow (in this case kept in the same workflow for convenience) to retrieve the required information. This includes the full information on past orders plus the shipping information from DHL. If other shipping providers are used, it should be simple to adjust the workflow to query information from other APIs such as UPS or FedEx.
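To give a flavour of what the custom tool's sub-workflow does, here is an illustrative Python sketch against the WooCommerce REST API. The shop URL, consumer key/secret, and the decrypt_email helper are all hypothetical placeholders for your own setup:

import requests

SHOP = "https://shop.example.com"  # hypothetical shop URL
AUTH = ("ck_...", "cs_...")        # WooCommerce REST API consumer key/secret

def decrypt_email(token: str) -> str:
    # Hypothetical stub: replace with the decryption matching your website's scheme.
    return token

def orders_for_customer(encrypted_email: str) -> list:
    email = decrypt_email(encrypted_email)
    resp = requests.get(
        f"{SHOP}/wp-json/wc/v3/orders",
        params={"search": email},
        auth=AUTH,
    )
    resp.raise_for_status()
    # Filter on the billing email so the agent can never expose someone else's orders.
    return [o for o in resp.json() if o["billing"]["email"] == email]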
jan
Jan Oberhauser

Parse DMARC reports, save them in database and notify on DKIM or SPF error

Who is it for
If you are a postmaster or manage an email server, you can set up DKIM and SPF records to make spoofing your email address hard. On your domain you can also set up a DMARC record to receive XML reports from email providers (the rua tag). Those reports contain data on whether the email they received passed DKIM and SPF verification. Since the DMARC email address is public, you will receive a lot of emails from email providers, not only when DKIM/SPF fail. There is no need for that: you probably only need to know when SPF/DKIM failed, meaning that either someone is trying to spoof your email or your DKIM/SPF is improperly set up. This script automatically parses all DMARC reports that come from email providers but ONLY sends you a notification if SPF or DKIM failed.

How it works
The script monitors the postmaster mailbox for DMARC reports (rua), unpacks each report and parses the XML into JSON, maps the JSON and formats fields for MySQL/MariaDB input, inserts the records into the database, and sends a notification on DKIM or SPF failure (see the sketch below). Remember to set up the email input mailbox and the notification channels for Slack and for email.
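The core of the parse-and-filter step, sketched in standalone Python: read an aggregate report (already unpacked to XML) and keep only the records where DKIM or SPF failed. The file name is a placeholder:

import xml.etree.ElementTree as ET

tree = ET.parse("dmarc_report.xml")  # placeholder: the unpacked rua attachment
failures = []
for record in tree.getroot().iter("record"):
    row = record.find("row")
    dkim = row.findtext("policy_evaluated/dkim")
    spf = row.findtext("policy_evaluated/spf")
    if dkim == "fail" or spf == "fail":
        failures.append({
            "source_ip": row.findtext("source_ip"),
            "dkim": dkim,
            "spf": spf,
        })

# Only notify when something actually failed; otherwise stay silent.
if failures:
    print("DKIM/SPF failures:", failures)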
lukaszpp
Łukasz
HTTP Request node
Google Drive node

Narrating over a Video using Multimodal AI

This n8n template takes a video, extracts frames from it, and uses them with a multimodal LLM to generate a script. The script is then passed to the same multimodal LLM to generate a voiceover clip. This template was inspired by "Processing and narrating a video with GPT's visual capabilities and the TTS API".

How it works
The video is downloaded using the HTTP node. A Python Code node extracts the frames using OpenCV (sketched below). A Loop node batches the frames for the LLM to generate partial scripts. All partial scripts are combined to form the full script, which is then sent to OpenAI to generate audio. The finished voiceover clip is uploaded to Google Drive. Sample the finished product here: https://drive.google.com/file/d/1-XCoii0leGB2MffBMPpCZoxboVyeyeIX/view?usp=sharing

Requirements
OpenAI for the LLM. Ideally, a mid-range (16GB RAM) machine for acceptable performance.

Customising this workflow
For larger videos, consider splitting them into smaller clips for better performance. Use a multimodal LLM which fully supports video, such as Google's Gemini.
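For reference, the frame-extraction step in the Python Code node might look roughly like this with OpenCV; the sampling interval and file name are illustrative:

import base64
import cv2

cap = cv2.VideoCapture("video.mp4")  # illustrative: the downloaded video
frames = []
i = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if i % 30 == 0:  # keep roughly one frame per second for 30 fps footage
        ok, buf = cv2.imencode(".jpg", frame)
        if ok:
            # Base64-encode so the frame can be sent to the multimodal LLM.
            frames.append(base64.b64encode(buf.tobytes()).decode())
    i += 1
cap.release()
print(f"Extracted {len(frames)} frames")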
jimleuk
Jimleuk

Over 3000 companies switch to n8n every single week

Connect Split Out with your company’s tech stack and create automation workflows

Last week I automated much of the back office work for a small design studio in less than 8hrs and I am still mind-blown about it.

n8n is a game-changer and should be known by all SMBs and even enterprise companies.

We're using the @n8n_io cloud for our internal automation tasks since the beta started. It's awesome! Also, support is super fast and always helpful. 🤗

in other news I installed @n8n_io tonight and holy moly it’s good

it’s compatible with EVERYTHING

Implement complex processes faster with n8n
