
Performing Dark Web Monitoring for Threats


Overview

Dark web monitoring involves systematically scanning Tor hidden services, underground forums, paste sites, and dark web marketplaces to identify threats targeting an organization, including leaked credentials, data breaches, threat actor discussions, vulnerability exploitation tools, and planned attacks. This guide covers setting up monitoring infrastructure, using Tor-based collection tools, implementing automated alerting for brand mentions and credential leaks, and analyzing dark web intelligence for actionable threat indicators.

Prerequisites

  • Tor Browser and Tor proxy (SOCKS5 on port 9050)
  • Python 3.9+ with requests, stem, beautifulsoup4, stix2 libraries
  • Understanding of Tor hidden service architecture (.onion domains)
  • API access to dark web monitoring services (Flare, SpyCloud, DarkOwl, Intel 471)
  • Awareness of legal and ethical boundaries for dark web research
  • Isolated VM for dark web browsing (no personal or corporate identity leakage)

Key Concepts

Dark Web Intelligence Sources

  • Underground Forums: Hacking forums where threat actors discuss TTPs, sell exploits, and share tools
  • Paste Sites: Platforms for sharing stolen data, credentials, and code snippets
  • Marketplaces: Dark web markets selling stolen data, RaaS, exploit kits, and access
  • Telegram/Discord: Alternative communication channels for cybercriminal groups
  • Ransomware Leak Sites: Blogs where ransomware groups post stolen data from victims
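
The source types above can be organized into a lightweight collection registry so each feed is polled on its own cadence. A minimal sketch, where the `DarkWebSource` class, the example entry, and the intervals are illustrative assumptions rather than a fixed schema:

```python
from dataclasses import dataclass

@dataclass
class DarkWebSource:
    """A monitored dark web intelligence source."""
    name: str
    source_type: str   # "forum", "paste", "market", "chat", "leak_site"
    url: str
    check_interval_hours: int

# Example watchlist entry; add forums, paste sites, and markets as needed
WATCHLIST = [
    DarkWebSource(
        "ransomwatch", "leak_site",
        "https://raw.githubusercontent.com/joshhighet/ransomwatch/main/posts.json",
        6,
    ),
]

def due_sources(watchlist, hours_since_last_run):
    """Return sources whose check interval has elapsed since the last run."""
    return [s for s in watchlist if s.check_interval_hours <= hours_since_last_run]
```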

Collection Methods

  • Automated Crawling: Tor-based web crawlers scanning hidden services
  • API-Based Monitoring: Commercial dark web monitoring APIs (Flare, DarkOwl, Intel 471)
  • Manual HUMINT: Analyst-driven research on specific forums and marketplaces
  • Credential Monitoring: Breach databases and paste site monitoring for leaked credentials
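
Before feeding collected URLs into an automated crawler, it helps to validate that they are well-formed v3 hidden-service addresses (v2 addresses were retired by the Tor Project in 2021). A minimal sketch; the regex checks format only, not reachability:

```python
import re

# v3 onion addresses are 56 base32 characters (a-z, 2-7) followed by .onion
ONION_V3_RE = re.compile(r"^(?:https?://)?([a-z2-7]{56})\.onion(?:[:/].*)?$")

def extract_onion_host(url):
    """Return the bare v3 onion hostname if the URL is well-formed, else None."""
    match = ONION_V3_RE.match(url.strip().lower())
    return f"{match.group(1)}.onion" if match else None
```

Deduplicating crawl targets by the extracted host avoids re-fetching the same hidden service under many paths.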

OPSEC for Dark Web Research

  • Use dedicated VMs with no personal data
  • Route all traffic through Tor (Whonix or Tails recommended)
  • Never use personal accounts or identifiable information
  • Use separate email addresses and personas for forum registration
  • Disable JavaScript in Tor Browser (set the security level to "Safest") to reduce attack surface
  • Never download or execute files from dark web sources on production systems

Practical Steps

Step 1: Set Up Tor-Based HTTP Client

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def create_tor_session():
    """Create a requests session routed through the Tor SOCKS5 proxy.

    Requires the requests[socks] extra (PySocks) for socks5h:// support.
    """
    session = requests.Session()
    # socks5h resolves DNS through the proxy, which is required for .onion hosts
    session.proxies = {
        "http": "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }
    session.headers.update({
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; rv:109.0) Gecko/20100101 Firefox/115.0",
    })
    # Retry transient failures; hidden services are frequently slow or offline
    adapter = HTTPAdapter(max_retries=Retry(total=3, backoff_factor=2))
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    return session


def verify_tor_connection(session):
    """Verify that traffic is routed through Tor."""
    try:
        resp = session.get("https://check.torproject.org/api/ip", timeout=30)
        data = resp.json()
        return {
            "is_tor": data.get("IsTor", False),
            "ip": data.get("IP", ""),
        }
    except Exception as e:
        return {"error": str(e)}

Step 2: Monitor Paste Sites for Credential Leaks

import re
from datetime import datetime

def monitor_paste_sites(session, organization_domains):
    """Check breach databases for compromises affecting organization domains.

    Note: Have I Been Pwned is a clearnet API, so it is queried directly
    rather than through the Tor session.
    """
    findings = []

    # Fetch the breach catalog once, then filter locally per domain
    try:
        resp = requests.get(
            "https://haveibeenpwned.com/api/v3/breaches",
            headers={"hibp-api-key": "YOUR_HIBP_KEY"},
            timeout=30,
        )
        resp.raise_for_status()
        breaches = resp.json()
    except Exception as e:
        print(f"[-] HIBP error: {e}")
        return findings

    for domain in organization_domains:
        for breach in breaches:
            if domain.lower() in breach.get("Domain", "").lower():
                findings.append({
                    "source": "HIBP",
                    "breach_name": breach["Name"],
                    "breach_date": breach.get("BreachDate"),
                    "data_classes": breach.get("DataClasses", []),
                    "pwn_count": breach.get("PwnCount", 0),
                    "domain": domain,
                })

    return findings


def search_for_keywords(session, keywords, onion_paste_urls):
    """Search dark web paste sites for specific keywords."""
    results = []

    for paste_url in onion_paste_urls:
        try:
            resp = session.get(paste_url, timeout=60)
            if resp.status_code == 200:
                content = resp.text.lower()
                for keyword in keywords:
                    if keyword.lower() in content:
                        results.append({
                            "url": paste_url,
                            "keyword": keyword,
                            "timestamp": datetime.utcnow().isoformat(),
                            "snippet": extract_context(content, keyword.lower()),
                        })
        except Exception as e:
            print(f"[-] Error fetching {paste_url}: {e}")

    return results


def extract_context(text, keyword, context_chars=200):
    """Extract text context around a keyword match."""
    idx = text.find(keyword)
    if idx == -1:
        return ""
    start = max(0, idx - context_chars)
    end = min(len(text), idx + len(keyword) + context_chars)
    return text[start:end]
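
Snippets extracted this way may themselves contain plaintext credentials, so it is prudent to redact them before storing findings. A hedged sketch that masks the local part of email addresses; extend the patterns to match your own data-handling policy:

```python
import re

EMAIL_RE = re.compile(r"\b([A-Za-z0-9._%+-]+)@([A-Za-z0-9.-]+\.[A-Za-z]{2,})\b")

def redact_snippet(snippet):
    """Mask email local parts so stored snippets do not retain usable credentials."""
    return EMAIL_RE.sub(lambda m: "***@" + m.group(2), snippet)
```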

Step 3: Monitor Ransomware Leak Sites

def check_ransomware_leak_sites(session, organization_name):
    """Check known ransomware group leak sites for organization mentions."""
    # Use Ransomwatch API (clearnet aggregator of ransomware leak sites)
    try:
        resp = requests.get(
            "https://raw.githubusercontent.com/joshhighet/ransomwatch/main/posts.json",
            timeout=30,
        )
        if resp.status_code == 200:
            posts = resp.json()
            matches = []
            for post in posts:
                post_title = post.get("post_title", "").lower()
                if organization_name.lower() in post_title:
                    matches.append({
                        "group": post.get("group_name", ""),
                        "title": post.get("post_title", ""),
                        "discovered": post.get("discovered", ""),
                        "url": post.get("post_url", ""),
                    })
            return matches
    except Exception as e:
        print(f"[-] Ransomwatch error: {e}")
    return []
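
Leak-site post titles often mangle victim names ("Acme, Inc." vs "ACME-INC"), so the exact substring check above can miss hits. A hedged sketch of a normalized comparison that strips punctuation and whitespace before matching:

```python
import re

def normalize(text):
    """Lowercase and strip everything except letters and digits."""
    return re.sub(r"[^a-z0-9]", "", text.lower())

def title_mentions_org(post_title, organization_name):
    """True if the normalized org name appears in the normalized post title."""
    return normalize(organization_name) in normalize(post_title)
```

Normalized matching trades some false positives for far fewer misses; review matches manually before escalating.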

Step 4: Generate Dark Web Intelligence Report

def generate_dark_web_report(findings, organization):
    """Generate structured dark web intelligence report."""
    report = {
        "organization": organization,
        "report_date": datetime.utcnow().isoformat(),
        "executive_summary": "",
        "credential_leaks": [],
        "ransomware_mentions": [],
        "dark_web_mentions": [],
        "recommendations": [],
    }

    for finding in findings:
        if finding.get("source") == "HIBP":
            report["credential_leaks"].append(finding)
        elif finding.get("group"):
            report["ransomware_mentions"].append(finding)
        else:
            report["dark_web_mentions"].append(finding)

    # Generate executive summary
    cred_count = len(report["credential_leaks"])
    ransom_count = len(report["ransomware_mentions"])
    report["executive_summary"] = (
        f"Monitoring identified {cred_count} credential leak sources "
        f"and {ransom_count} ransomware group mentions for {organization}."
    )

    if ransom_count > 0:
        report["recommendations"].append(
            "CRITICAL: Organization mentioned on ransomware leak site. "
            "Initiate incident response immediately."
        )
    if cred_count > 0:
        report["recommendations"].append(
            "HIGH: Leaked credentials detected. Force password resets for "
            "affected accounts and enable MFA."
        )

    return report
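
For distribution, the report dict can be rendered into a short Markdown brief. A minimal sketch, assuming the report structure built by generate_dark_web_report above:

```python
def render_report_markdown(report):
    """Render the dark web intelligence report dict as a Markdown brief."""
    lines = [
        f"# Dark Web Intelligence Report: {report['organization']}",
        f"*Generated: {report['report_date']}*",
        "",
        report["executive_summary"],
        "",
        "## Recommendations",
    ]
    # Fall back to an explicit "None" entry when there are no recommendations
    lines += [f"- {rec}" for rec in report["recommendations"]] or ["- None"]
    return "\n".join(lines)
```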

Validation Criteria

  • Tor connection established and verified via check.torproject.org
  • Credential leak monitoring returns results from HIBP and paste sites
  • Ransomware leak site monitoring identifies relevant mentions
  • Dark web intelligence report generated with actionable recommendations
  • All monitoring performed within legal and ethical boundaries
  • OPSEC maintained: no personal or corporate identity exposure

Compliance Framework Mapping

This skill supports compliance evidence collection across multiple frameworks:

  • SOC 2: CC7.1 (Monitoring), CC7.2 (Anomaly Detection)
  • ISO 27001: A.6.1 (Threat Intelligence), A.16.1 (Security Incident Management)
  • NIST 800-53: PM-16 (Threat Awareness), RA-3 (Risk Assessment), SI-5 (Security Alerts)
  • NIST CSF: ID.RA (Risk Assessment), DE.AE (Anomalies & Events)

Claw GRC Tip: When this skill is executed by a registered agent, compliance evidence is automatically captured and mapped to the relevant controls in your active frameworks.

Deploying This Skill with Claw GRC

Agent Execution

Register this skill with your Claw GRC agent for automated execution:

# Install via CLI
npx claw-grc skills add performing-dark-web-monitoring-for-threats

# Or load dynamically via MCP
grc.load_skill("performing-dark-web-monitoring-for-threats")

Audit Trail Integration

When executed through Claw GRC, every step of this skill generates tamper-evident audit records:

  • SHA-256 chain hashing ensures no step can be modified after execution
  • Evidence artifacts (configs, scan results, logs) are automatically attached to relevant controls
  • Trust score impact — successful execution increases your agent's trust score

Continuous Compliance

Schedule this skill for recurring execution to maintain continuous compliance posture. Claw GRC monitors for drift and alerts when re-execution is needed.

References

  • Tor Project
  • Have I Been Pwned API
  • Ransomwatch
  • DarkOwl
  • Intel 471
  • Flare Systems

Tags

threat-intelligence, cti, ioc, mitre-attack, stix, dark-web, tor, threat-monitoring

Related Skills

  • Tracking Threat Actor Infrastructure
  • Analyzing Campaign Attribution Evidence
  • Analyzing Threat Actor TTPs with MITRE ATT&CK
  • Building an IOC Enrichment Pipeline with OpenCTI
  • Building a Threat Intelligence Platform
  • Collecting Threat Intelligence with MISP
