Monitoring Dark Web Sources
When to Use
Use this skill when:
- Establishing continuous monitoring for organizational domain names, executive names, and product brands on dark web forums
- Investigating a reported data breach claim found on a ransomware leak site or paste site
- Enriching an incident investigation with context about stolen credentials or planned attacks
Do not use this skill without proper operational security measures — dark web browsing without isolation exposes analyst infrastructure to adversary counter-intelligence.
Prerequisites
- Commercial dark web monitoring service (Recorded Future, Flashpoint, Intel 471, or Cybersixgill)
- Isolated operational environment: Whonix running in VMs, or Tails booted from removable media, with no persistent storage
- Keyword watchlist: organization domain, key executive names, product names, IP ranges, known credentials
- Legal guidance confirming passive monitoring is authorized in your jurisdiction
Workflow
Step 1: Establish Keyword Monitoring via Commercial Services
Configure dark web monitoring keywords in your CTI platform (e.g., Recorded Future Exposure module):
- Domain variations: company.com, @company.com, company[dot]com
- Executive names: CEO, CISO, CFO full names
- Product/brand names
- Internal codenames or project names (if suspected breach scope is broad)
- Known email domains for credential monitoring
Most commercial services (Flashpoint, Intel 471, Cybersixgill) crawl forums like XSS, Exploit[.]in, BreachForums, and Russian-language cybercriminal communities without analyst exposure.
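Watchlist variants for a domain can be generated programmatically before loading them into the monitoring platform. A minimal sketch, assuming the `[dot]` defanging convention from the list above (the function name is illustrative):

```shell
#!/bin/sh
# Expand a base domain into the keyword variants used for monitoring:
# plain, credential-style (@-prefixed), and defanged ([dot]).
expand_domain_keywords() {
  domain="$1"
  printf '%s\n' "$domain"
  printf '@%s\n' "$domain"
  echo "$domain" | sed 's/\./[dot]/g'
}

expand_domain_keywords "company.com"
```

Running it for `company.com` prints the three variants shown in the list above, one per line, ready to paste into a platform watchlist.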
Step 2: Manual Investigation with Operational Security
For investigations requiring direct dark web access:
Environment setup:
- Use a dedicated physical machine or an isolated VM (Whonix + VirtualBox); note that a fully air-gapped machine cannot reach Tor, so isolation here means no link to production networks or analyst identity
- Connect via Tor Browser only — never via standard browser
- Use a cover identity with no links to organization
- Never log in with real credentials to any dark web site
- Document all sessions in investigation log with timestamps
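The session log can be as simple as an append-only flat file. A minimal sketch, assuming a pipe-delimited layout and an illustrative `LOG_FILE` path (adapt both to your case-management system):

```shell
#!/bin/sh
# Append timestamped entries to the investigation session log.
# LOG_FILE and the pipe-delimited layout are illustrative assumptions.
LOG_FILE="${LOG_FILE:-investigation.log}"

log_session() {
  case_id="$1"
  action="$2"
  # ISO 8601 UTC timestamp | case ID | analyst action
  printf '%s|%s|%s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$case_id" "$action" >> "$LOG_FILE"
}

log_session "CASE-0001" "Opened Tor session; reviewed forum thread read-only"
```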
Paste site monitoring (clearnet-accessible, no Tor required):
```shell
# Hunt paste sites via API
curl "https://psbdmp.ws/api/search/company.com" | jq '.data[].id'
curl "https://pastebin.com/search?q=company.com"  # Rate-limited public search
```
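The queries above can be scripted across the whole watchlist. This sketch only prints the `curl` commands rather than executing them, so it stays offline-safe; the keyword list is illustrative:

```shell
#!/bin/sh
# Build paste-site search commands for every watchlist keyword.
# Printing instead of executing keeps the sketch offline; pipe to sh to run.
KEYWORDS="company.com @company.com"

build_paste_queries() {
  for kw in $KEYWORDS; do
    printf 'curl -s "https://psbdmp.ws/api/search/%s" | jq -r ".data[].id"\n' "$kw"
  done
}

build_paste_queries
```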
Step 3: Investigate Ransomware Leak Sites
Ransomware groups maintain .onion leak sites. Monitor these through commercial services rather than direct access. When a claim appears about your organization:
- Capture screenshot evidence via commercial service (do not access directly)
- Assess legitimacy: Does the threat actor's claimed data align with any known internal systems?
- Check timestamp: Is this claim recent or historical?
- Cross-reference with any known security incidents or phishing campaigns from that timeframe
- Engage IR team if claim appears credible before public disclosure
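The timestamp check above can be automated when the claim's posting date is known. A sketch assuming GNU `date` and an arbitrary 30-day freshness threshold:

```shell
#!/bin/sh
# Classify a leak-site claim as recent or historical by posting date.
# GNU date is assumed; the 30-day threshold is an illustrative cutoff.
is_recent_claim() {
  claim_epoch=$(date -d "$1" +%s)
  now_epoch=$(date +%s)
  age_days=$(( (now_epoch - claim_epoch) / 86400 ))
  if [ "$age_days" -le 30 ]; then
    echo "recent ($age_days days old)"
  else
    echo "historical ($age_days days old)"
  fi
}

is_recent_claim "2025-01-01"
```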
Notable ransomware leak-site operators (as of early 2025): Cl0p, RansomHub, and Play remain active; LockBit (disrupted February 2024) and ALPHV/BlackCat (shut down after a December 2023 takedown) show how quickly the landscape shifts, so refresh this list regularly.
Step 4: Credential Exposure Monitoring
For leaked credential monitoring:
- Have I Been Pwned Enterprise: Domain-level notification for credential exposures in breach datasets
- SpyCloud: Commercial credential monitoring that recovers plaintext passwords (including cracked hashes) from criminal markets
- Flare Systems: Automated monitoring of paste sites and dark web markets for credential dumps
When credential exposures are confirmed:
- Force password reset for affected accounts immediately
- Check if credentials provide access to any organizational systems (SSO, VPN)
- Review access logs for unauthorized activity during the window between credential exposure and detection
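That access-log review can be scripted once the exposure and detection dates are known. Because ISO 8601 timestamps sort lexicographically, plain string comparison is enough; the log file, layout, and dates below are illustrative:

```shell
#!/bin/sh
# Review auth logs for the exposure-to-detection window.
# ISO 8601 timestamps sort lexicographically, so string comparison works.
# The log file, layout, and dates below are illustrative.
EXPOSED="2025-01-10"
DETECTED="2025-02-01"

printf '%s\n' \
  '2025-01-05T08:00:00Z login ok user=alice' \
  '2025-01-15T23:41:00Z login ok user=alice src=unusual-geo' \
  '2025-02-10T09:12:00Z login ok user=alice' > sample_auth.log

# Keep only events inside the window for manual review.
awk -v start="$EXPOSED" -v end="$DETECTED" '$1 >= start && $1 <= end' sample_auth.log
```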
Step 5: Document and Escalate Findings
For each dark web finding:
- Capture evidence (commercial service screenshot, paste site archive)
- Classify severity: P1 (imminent attack threat or active data exposure), P2 (credential exposure), P3 (general mention)
- Notify appropriate stakeholders within defined SLAs
- Open investigation ticket and link to evidence artifacts
- Apply TLP:RED for any findings referencing named executives or specific attack plans
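Because dark web content disappears quickly, hashing captured evidence at collection time makes later tampering detectable. A sketch using `sha256sum` with illustrative file names:

```shell
#!/bin/sh
# Preserve dark web evidence with content hashes so tampering is detectable.
# File names follow an illustrative CASE-#### convention.
printf 'leak-site claim text, exported from commercial service' > evidence-CASE-0001.txt

# Append the SHA-256 to a manifest; verify later with: sha256sum -c evidence-manifest.txt
sha256sum evidence-CASE-0001.txt >> evidence-manifest.txt
cat evidence-manifest.txt
```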
Key Concepts
| Term | Definition |
|---|---|
| Dark Web | Tor-accessible hidden services (.onion domains) not indexed by standard search engines; hosts both legitimate and criminal content |
| Paste Site | Clearnet text-sharing sites (Pastebin, Ghostbin) frequently used to publish stolen data or malware configurations |
| Ransomware Leak Site | .onion site operated by ransomware group to publish stolen victim data as extortion leverage |
| Operational Security (OPSEC) | Protecting analyst identity and organizational affiliation during dark web investigation |
| Credential Stuffing | Automated use of leaked username/password pairs against authentication systems |
| Stealer Logs | Data packages exfiltrated by infostealer malware containing saved browser credentials, cookies, and session tokens |
Tools & Systems
- Recorded Future Dark Web Module: Automated monitoring of dark web sources with alerting on organization-specific keywords
- Flashpoint: Dark web forum monitoring with human intelligence augmentation for criminal community context
- Intel 471: Closed-source access to cybercriminal communities with structured intelligence on threat actors
- SpyCloud: Credential exposure monitoring with recaptured plaintext passwords from criminal markets
- Have I Been Pwned Enterprise: Domain-level breach notification API for credential monitoring at scale
Common Pitfalls
- Direct access without OPSEC: Accessing dark web forums without Tor and a cover identity can expose analyst IP, browser fingerprint, and organization affiliation to adversaries.
- Overreacting to unverified claims: Ransomware groups and forum posters fabricate attack claims for extortion or reputation. Verify before escalating to incident response.
- Missing clearnet sources: Many dark web intelligence programs overlook Telegram channels, Discord servers, and paste sites, which operate on the clearnet and host significant criminal activity.
- Inadequate legal review: Dark web monitoring must be reviewed by legal counsel — passive monitoring is generally lawful but active participation in criminal markets is not.
- No evidence preservation: Dark web content disappears rapidly. Capture timestamped evidence immediately upon discovery using commercial service exports.
Verification Criteria
Confirm successful execution by validating:
- [ ] Monitoring keywords are configured in the commercial platform and generating alerts
- [ ] Manual investigations used only the isolated Tor environment with a cover identity
- [ ] Reported claims were verified before escalation to incident response
- [ ] Confirmed credential exposures triggered password resets and access-log review
- [ ] Findings are classified, ticketed, and preserved as timestamped evidence for audit
Compliance Framework Mapping
This skill supports compliance evidence collection across multiple frameworks:
- SOC 2: CC7.1 (Monitoring), CC7.2 (Anomaly Detection)
- ISO 27001:2022: A.5.7 (Threat Intelligence), A.5.24 (Incident Management Planning)
- NIST 800-53: PM-16 (Threat Awareness), RA-3 (Risk Assessment), SI-5 (Security Alerts)
- NIST CSF: ID.RA (Risk Assessment), DE.AE (Anomalies & Events)
Claw GRC Tip: When this skill is executed by a registered agent, compliance evidence is automatically captured and mapped to the relevant controls in your active frameworks.
Deploying This Skill with Claw GRC
Agent Execution
Register this skill with your Claw GRC agent for automated execution:
# Install via CLI
npx claw-grc skills add monitoring-darkweb-sources
# Or load dynamically via MCP
grc.load_skill("monitoring-darkweb-sources")
Audit Trail Integration
When executed through Claw GRC, every step of this skill generates tamper-evident audit records:
- SHA-256 chain hashing ensures no step can be modified after execution
- Evidence artifacts (configs, scan results, logs) are automatically attached to relevant controls
- Trust score impact — successful execution increases your agent's trust score
Continuous Compliance
Schedule this skill for recurring execution to maintain continuous compliance posture. Claw GRC monitors for drift and alerts when re-execution is needed.