⚡ TLDR (For People Who Also Have 20 IPs In Their Queue)
You know that moment when it's 9 AM, you have coffee, and 20 flagged IPs are staring at you like unpaid invoices?
Yeah. I fixed that.
🐢 Before: 45 minutes per indicator. 6 browser tabs open. Copy. Paste. Wait. Repeat. Cry a little. Document.
⚡ After: Submit a URL or IP. Get a Slack alert in 5 minutes. Go drink that coffee while it is still hot for once.
💰 The math: $141,600 saved per analyst per year. Build time: 4 to 6 hours. Number of times I questioned my life choices during those 6 hours: several.
🛠️ The stack: n8n + VirusTotal + URLScan.io + AbuseIPDB + a custom hybrid scoring algorithm that is smarter than averaging two numbers and calling it threat intelligence.
⚠️ The honest part: The IP path still only uses AbuseIPDB. One source. Known gap. Version 2 is coming with GreyNoise, AlienVault OTX, IPQualityScore and more. Because single-source IP intel is just AbuseIPDB with extra steps.
📖 Full breakdown, architecture, ROI math, and the painful lessons learned are below. 👇
(Yes, the workflow JSON is coming too. I won't ask you to comment in order to get it 😆.)
There's a moment every SOC analyst knows well.
You're staring at 20 flagged IPs in your queue. It's 9 AM. You have your coffee. You open the first tab. By 11:30 AM, you've cleared four of them.
This is not a skills problem. This is a process problem. And I built an automation that cuts right through it.
🔥 The Before: What Manual Threat Intel Actually Costs
Let me walk you through what a "thorough" IP or URL reputation check looks like when done manually — the way most Tier 1 and Tier 2 analysts actually do it.
🔍 Step 1: Open AbuseIPDB in a new tab. Paste the IP. Read the abuse confidence score, check the ISP, note the usage type, scan the report history. Copy findings to a notepad.
🦠 Step 2: Open VirusTotal. Paste the same IP. Wait for the scan. Cross-reference the vendor detections. Note which high-reputation engines flagged it — Kaspersky, Bitdefender, Google Safe Browsing. Copy findings.
🌐 Step 3: If it's a URL in the alert, open another VirusTotal tab. Paste the URL this time. Different analysis pipeline, different detection logic. More waiting.
📸 Step 4: Open URLScan.io. Submit the URL. Wait 30 to 60 seconds for the screenshot and verdict. Check if the page looks like a phishing clone. Copy the report link.
📊 Step 5: Cross-reference everything against your SIEM. Does this IP appear in recent logs? What internal assets communicated with it? What's the volume?
🎫 Step 6: Write it all up in your ticketing system. Jira, ServiceNow, whatever your org uses. Structured format. Timestamps. Evidence links.
That's six steps. For one indicator.
A conservative time estimate is 30 to 45 minutes per indicator when done correctly. Junior analysts often take longer because they second-guess themselves without a standardized scoring process.
Now multiply that by reality.
A mid-sized SOC handling moderate alert volume can easily generate 15 to 25 indicators per analyst per day that require reputation checks. At 20 indicators per day, that's 900 minutes — ⏰ 15 hours of analyst time — consumed by a process that is almost entirely copy, paste, read, repeat.
At a mid-level analyst salary of roughly $42.50 per hour (using the $35 to $50 range common for SOC Analyst II roles in North America), that is 💸 $637.50 per analyst per day spent on mechanical lookups.
Not analysis. Not correlation. Not threat hunting. Just lookups.
⚡ The After: What the Automated Workflow Does
I built this in n8n, an open-source workflow automation platform. The entire pipeline runs in under 5 minutes per indicator, including API call time, scoring, logging, and alerting.
Here's the architecture:
```
🔔 Webhook Trigger (POST /url-scan)
        |
        v
🧠 [Is it URL or IP?]  ← Code node classifies input via regex
        |
   +----+----+
   |         |
   v         v
🌐 URL Path   📡 IP Path

🌐 URL Path:
  Parallel:
    🔎 URLScan.io:  Submit → Wait 30s → Get Results
    🦠 VirusTotal:  Submit URL → Wait 30s → Get Report
  🔀 Merge Results
  🧮 Calculate Hybrid Risk Score (Code Node)
  ⚖️ IF Risk Score >= 50:
     🚨 YES → Log to High Risk Sheet + Slack Alert
     ✅ NO  → Log to Safe Sheet
  📤 Respond to Webhook

📡 IP Path:
  🛡️ AbuseIPDB: GET /check?ipAddress=...
  ⚖️ IF Abuse Score > 50:
     🚨 YES → Log to High Risk Sheet + Slack Alert
     ✅ NO  → Log to Safe Sheet
  📤 Respond to Webhook
```
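The "classifies input via regex" step can be sketched as a small Code-node function. This is a minimal illustration, not the exact node code — the function name is mine, and IPv6 handling is omitted:

```javascript
// Sketch of the classifier Code node: route the submitted indicator
// to the IP path or the URL path. Name and shape are illustrative.
function classifyIndicator(input) {
  const trimmed = input.trim();
  // Match four dot-separated groups of 1-3 digits...
  const ipv4 = /^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/;
  const m = trimmed.match(ipv4);
  // ...then validate each octet is 0-255, so "999.1.1.1" is not an IP.
  if (m && m.slice(1).every((octet) => Number(octet) <= 255)) {
    return "ip";
  }
  return "url"; // everything else goes down the URL path
}
```

Anything that fails the strict IPv4 check falls through to the URL path, which keeps the router simple: one regex, one range check, two branches.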

The analyst submits a URL or IP through a simple POST request. The workflow handles everything else and returns a structured result with risk score, vendor detections, URLScan verdict, screenshot link, and full report links.
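From the analyst's side, the submission is a single POST. Here's a hedged sketch of a client helper — the `/webhook/url-scan` path comes from the workflow trigger, but the `indicator` payload field name is my assumption, so align it with your own Webhook node:

```javascript
// Hypothetical client helper: build the POST request for the n8n webhook.
// The `/webhook/url-scan` path mirrors the trigger node; the `indicator`
// field name is an assumption -- match it to your Webhook node config.
function buildScanRequest(baseUrl, indicator) {
  return {
    url: `${baseUrl}/webhook/url-scan`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ indicator }),
    },
  };
}

// Usage sketch:
//   const { url, options } = buildScanRequest("https://n8n.example.internal", "8.8.8.8");
//   const result = await fetch(url, options).then((r) => r.json());
```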
Total analyst effort: reviewing a pre-built summary. Not gathering data from scratch.
🧮 The Technical Core: Hybrid Risk Scoring
The most interesting engineering challenge was the scoring algorithm. I want to explain this because it reflects a real problem in threat intelligence: not all sources agree, and not all disagreements mean the same thing.
The algorithm handles four scenarios:
⬛ Both sources unavailable: Returns a score of 0 with a flag requiring manual review. No data is not the same as clean.
🟡 VirusTotal only (URLScan unavailable): Applies a single-source penalty. VT score is multiplied by 0.85 and a flat 10 points added to account for reduced confidence. The logic: one source claiming something is suspicious is weaker evidence than two sources agreeing.
🟡 URLScan only (VT unavailable): Same single-source penalty applied symmetrically.
🟢 Both available:
- If either source scores 70 or above, the maximum score is used. High-confidence detections from either source warrant immediate escalation regardless of the other source.
- If both score below 70, the average is used. This prevents a borderline URLScan result from inflating a genuinely clean VT verdict and vice versa.
🏆 On top of this, there's a high-reputation vendor override. I track a curated list of high-signal vendors: Kaspersky, Bitdefender, ESET, Sophos, Emsisoft, Netcraft, Google Safe Browsing, and others. If 2 or more of these flag a URL as malicious, the score is forced to a minimum of 50. If 4 or more flag it, the minimum floor rises to 60.
The reason for the override: commodity vendors produce high false positive rates. A detection from 15 random engines means less than a detection from Netcraft and Google Safe Browsing together. This is a judgment call baked into automation — exactly the kind of institutional knowledge that usually lives in a senior analyst's head.
⚖️ The binary verdict: Anything scoring 50 or above is 🔴 HIGH risk. Below 50 is 🟢 LOW risk. This threshold is adjustable but reflects a deliberate design choice: in a SOC environment, a borderline indicator should escalate, not silently pass.
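Putting the four scenarios, the vendor override, and the binary verdict together, the scoring logic looks roughly like this. It's a sketch of the Code node, not the published JSON — function and field names are mine, and the vendor list must match the exact engine names your VirusTotal response uses:

```javascript
// Sketch of the hybrid risk scoring Code node.
// Vendor names must match the engine keys in your VT response exactly.
const HIGH_SIGNAL_VENDORS = [
  "Kaspersky", "Bitdefender", "ESET", "Sophos",
  "Emsisoft", "Netcraft", "Google Safe Browsing",
];

function hybridScore({ vtScore, usScore, vendorHits = [] }) {
  let score;
  let needsManualReview = false;

  if (vtScore == null && usScore == null) {
    // Both sources unavailable: no data is not the same as clean.
    score = 0;
    needsManualReview = true;
  } else if (usScore == null) {
    // Single-source penalty: dampen, then add a flat uncertainty bump.
    score = vtScore * 0.85 + 10;
  } else if (vtScore == null) {
    // Same penalty, applied symmetrically.
    score = usScore * 0.85 + 10;
  } else if (vtScore >= 70 || usScore >= 70) {
    // High-confidence detection from either source: take the maximum.
    score = Math.max(vtScore, usScore);
  } else {
    // Both borderline: average, so neither source inflates the other.
    score = (vtScore + usScore) / 2;
  }

  // High-reputation vendor override: force a minimum floor.
  const hits = vendorHits.filter((v) => HIGH_SIGNAL_VENDORS.includes(v)).length;
  if (hits >= 4) score = Math.max(score, 60);
  else if (hits >= 2) score = Math.max(score, 50);

  score = Math.min(100, Math.round(score));
  return { score, verdict: score >= 50 ? "HIGH" : "LOW", needsManualReview };
}
```

Notice how the override interacts with the threshold: two high-signal vendors alone push a URL to exactly 50, which lands it in HIGH — borderline escalates, it never silently passes.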
📣 The Slack Alert: What Analysts Actually See
When a HIGH risk URL is detected, the Slack alert contains:
- 🔗 The URL itself
- 📊 Risk score out of 100
- 🦠 VirusTotal malicious and suspicious counts
- 🌐 URLScan verdict and raw score
- 📄 Direct links to the URLScan report
- 📸 Page screenshot link
- 🕐 Scan ID and timestamp
The analyst does not need to open a single external tab to form an initial triage decision. If the screenshot shows a Microsoft login clone and VT shows 12 malicious detections, the decision is already obvious. The automation surfaces that in 5 minutes. The analyst then spends their time on the actual decision — not the data collection.
Example of a HIGH-risk alert Slack channel message:
🚨 HIGH RISK URL DETECTED 🚨
🔗 URL: http://110.37.0.37:35668/bin.sh
📊 Risk Score: 60/100
🛡️ Threat Intelligence
| Source | Result |
|---|---|
| 🦠 VirusTotal | 21 malicious, 2 suspicious |
| 🌐 URLScan Verdict | Malicious |
| 📈 URLScan Score | 7/100 |
📄 Reports
🔎 Scan Details
- 🆔 Scan ID: 1771614286778
- 🕐 Timestamp: 2026-02-20T19:04:46.778Z
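Assembling that message in a Code node is straightforward string formatting. A sketch — every field name here is an assumption, so map them to whatever your Merge node actually outputs:

```javascript
// Sketch: format the HIGH risk Slack alert from merged scan results.
// All field names on `r` are assumptions -- map to your Merge node output.
function formatHighRiskAlert(r) {
  return [
    "🚨 HIGH RISK URL DETECTED 🚨",
    `🔗 URL: ${r.url}`,
    `📊 Risk Score: ${r.score}/100`,
    `🦠 VirusTotal: ${r.vtMalicious} malicious, ${r.vtSuspicious} suspicious`,
    `🌐 URLScan Verdict: ${r.urlscanVerdict} (score ${r.urlscanScore}/100)`,
    `📄 Report: ${r.reportUrl}`,
    `📸 Screenshot: ${r.screenshotUrl}`,
    `🆔 Scan ID: ${r.scanId} | 🕐 ${r.timestamp}`,
  ].join("\n");
}
```

The joined string can go straight into a Slack node's message field; if you want the table layout from the example above, Slack's Block Kit is the richer option, but plain text keeps the node simple.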
💰 The ROI Math
Let me be direct about the numbers.
| 📋 Metric | 🐢 Manual | ⚡ Automated |
|---|---|---|
| ⏱️ Time per indicator | 45 minutes | 5 minutes (analyst review) |
| 📦 Indicators per day | 20 | 20 |
| 🕐 Total daily time | 15 hours | 1.67 hours |
| ✂️ Time saved per day | 13.33 hours | |
| 💵 Analyst hourly cost | $42.50 | |
| 📅 Daily savings | $566.53 | |
| 📆 Annual savings (250 working days) | $141,600 | |
| 🔨 Build time | 4 to 6 hours | |
| 🔄 Payback period | Same day | |
The build cost is essentially zero relative to the return. If you pay an analyst $42.50 per hour and they spend the first hour after deployment reviewing automated alerts instead of manually querying APIs, the workflow has already paid for itself.
Scale this across a team of four analysts running similar workloads: 🚀 roughly $566,400 in annual analyst capacity freed up. That is capacity that can be redirected to threat hunting, detection rule development, incident response readiness, and actual security improvement.
CISOs and security directors reading this: this is not a hypothetical. This is arithmetic.
🧠 What I Actually Learned Building This
⚡ Score mismatches are common and meaningful. I expected the two sources to roughly agree most of the time. They do not. URLScan frequently returns higher scores than VirusTotal for newer phishing pages because its ML engine processes visual and behavioral signals that signature-based AV engines miss. When sources disagree significantly, that disagreement itself is a signal worth flagging.
🎯 The maximum score approach for high-confidence detections was a deliberate counter to averaging. Averaging made the system too forgiving of genuinely malicious URLs when one source scored high and the other was neutral. Real adversaries understand that different sources have different detection profiles. A threat actor who evades VirusTotal but triggers URLScan at 85 should not get an averaged score of 42 and a LOW verdict. That is how things get missed.
⏳ Wait nodes are unglamorous and critical. Both VirusTotal and URLScan require submission followed by polling. Submit, wait 30 seconds, retrieve. If you skip the wait and poll immediately, you get empty or incomplete results. Building reliable retry logic into the Get Scan node (5 retries, 5-second intervals) was the difference between a flaky prototype and a stable workflow.
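The submit-wait-retrieve pattern with retries can be sketched as a small polling helper. This is an illustration of the pattern, not the actual Get Scan node — `fetchReport` stands in for the real HTTP call:

```javascript
// Sketch of the poll-with-retries pattern behind the Get Scan node:
// try up to `retries` times, sleeping `intervalMs` between attempts.
// `fetchReport` is a placeholder for the actual HTTP request.
async function pollForReport(fetchReport, retries = 5, intervalMs = 5000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    const report = await fetchReport();
    // Treat a missing or empty body as "scan not ready yet".
    if (report && Object.keys(report).length > 0) return report;
    if (attempt < retries) {
      await new Promise((resolve) => setTimeout(resolve, intervalMs));
    }
  }
  throw new Error(`Scan results not ready after ${retries} attempts`);
}
```

The key design point is the empty-body check: polling immediately after submission doesn't error, it just returns nothing, so without this check a "successful" request quietly produces an empty report downstream.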
🤝 Automation does not replace analyst judgment. It replaces analyst data collection. The analyst still decides what to do with a HIGH risk verdict. Block the IP? Escalate to IR? Tag it for hunting? That judgment requires context the automation does not have: asset criticality, user behavior patterns, known false positive sources in your environment. The workflow makes that judgment faster to reach because the analyst starts with a summary, not a blank page.
🔭 What's Coming Next: Future Improvements
This workflow is version one. It works, it saves time, and the ROI is real. But there are gaps that need to be taken care of now.
🚧 The Biggest Current Limitation: Single-Source IP Intelligence
Right now the IP path of this workflow relies entirely on AbuseIPDB.
That is a problem I am fully aware of.
AbuseIPDB is excellent at what it does — community-reported abuse history, ISP context, usage type classification. But it is one perspective on one IP. And a single source means a single blind spot.
Compare that to the URL side of this workflow where I deliberately built multi-source corroboration — VirusTotal and URLScan.io running in parallel, scores merged, disagreements handled explicitly by the algorithm. That approach catches things a single source would miss.
The IP path deserves the same treatment.
🛠️ The Tools I Plan to Add
Here is what the upgraded IP reputation pipeline will look like. (I will stick to free tiers. I understand some organizations have budget for commercial threat intel platforms, but this gives you an idea of what is possible at zero cost):
| 🔧 Tool | 📦 Free Tier | 🎯 What It Adds |
|---|---|---|
| 🌐 IPQualityScore | 5,000 requests/month | Fraud scoring, VPN/proxy/Tor detection |
| 🔍 Shodan | Limited free tier | Open ports, banners, exposed services |
| 📡 GreyNoise | 1,000 requests/month | Classifies IPs as internet scanners vs real threats |
| 🕵️ IPinfo.io | 50,000 requests/month | ASN, geolocation, hosting provider enrichment |
| 🌍 ip-api.com | 45 requests/minute | Geolocation, ISP, org — no API key required |
| 🔒 AlienVault OTX | Unlimited (rate limited) | Community threat pulses, malware associations |
| 📊 ThreatFox (Abuse.ch) | Free | IOC lookup, known malware family associations |
| 🛑 Pulsedive | 30 requests/minute | Risk scoring across multiple threat feeds |
🧠 Why Multi-Source Matters for IPs Too
The same logic that drove the URL hybrid scoring algorithm applies here.
An IP that AbuseIPDB rates at 20% confidence might simultaneously be flagged as a known Mirai botnet node on AlienVault OTX, classified as a mass internet scanner on GreyNoise, and show port 23 open on Shodan.
Each source alone tells you something. All sources together tell you the truth.
The goal is to build a hybrid IP scoring algorithm that mirrors what I already built for URLs — weighted multi-source scoring, high-reputation vendor overrides, and a clean binary verdict that an analyst can act on immediately.
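Since this is a v2 plan, here's a forward-looking sketch of what that mirrored IP scoring could look like — illustrative only, with the same max-vs-average logic and single-source penalty as the URL path:

```javascript
// Forward-looking sketch of the planned multi-source IP score (v2).
// Mirrors the URL algorithm: max on high confidence, average otherwise,
// single-source penalty, manual review when no data. Shape is illustrative.
function ipHybridScore(readings) {
  // readings: array of { source, score } with score 0-100 or null
  const avail = readings.filter((r) => r.score != null);
  if (avail.length === 0) {
    // No data is not the same as clean.
    return { score: 0, verdict: "MANUAL_REVIEW" };
  }
  const max = Math.max(...avail.map((r) => r.score));
  const avg = avail.reduce((sum, r) => sum + r.score, 0) / avail.length;
  // Any single high-confidence source escalates; otherwise average.
  let score = max >= 70 ? max : avg;
  // Single-source penalty, exactly as on the URL path.
  if (avail.length === 1) score = Math.min(100, score * 0.85 + 10);
  score = Math.round(score);
  return { score, verdict: score >= 50 ? "HIGH" : "LOW" };
}
```

This is where the AbuseIPDB-at-20% example above pays off: a 20 from AbuseIPDB averaged with high scores from GreyNoise and OTX still escalates, because the max branch fires on either high-confidence source.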
👀 Keep an Eye Out
I will be updating the URL and Domain Reputation Checker workflow with these integrations and publishing the updated JSON when it's ready.
If you want to follow along as the workflow evolves, stay tuned. The next version will be significantly more robust — and still entirely free-tier.
🎯 What This Means for Your SOC
If your team is still doing manual reputation lookups for every indicator in your queue, you are not running a SOC. You are running a lookup service.
The value a trained analyst provides is not the ability to copy and paste URLs into threat intel platforms. It is the ability to reason about threat context, recognize patterns, prioritize response, and make decisions under uncertainty. Every minute spent on mechanical lookups is a minute that judgment is not being applied to actual threats.
Automation built for this specific problem is not complex. Four to six hours of n8n development. A few free-tier API accounts. A Google Sheet and a Slack workspace you already have. The return is an order-of-magnitude reduction in time-per-indicator and a meaningful improvement in analyst capacity.
The question is not whether this kind of automation is worth building.
The question is what you are going to automate next. 🚀
💬 What is the most time-consuming manual workflow in your SOC? I would love to hear what you would automate first. Drop it in the comments.
📌 I am sharing the full n8n workflow JSON in a follow-up post. If you want to fork it and adapt it to your environment, that post will have everything you need.
#SOCAutomation #ThreatIntelligence #n8n #Cybersecurity #SecurityOperations #BlueTeam #VirusTotal #URLScan #WorkflowAutomation #SOCAnalyst