Threat Intelligence · Intermediate

Analyzing Campaign Attribution Evidence



Overview

Campaign attribution analysis systematically evaluates evidence to determine which threat actor or group is responsible for a cyber operation. This guide covers collecting and weighting attribution indicators using the Diamond Model and the Analysis of Competing Hypotheses (ACH), then analyzing infrastructure overlaps, TTP consistency, malware code similarities, operational timing patterns, and language artifacts to build confidence-weighted attribution assessments.

Prerequisites

  • Python 3.9+ with the `attackcti`, `stix2`, and `networkx` libraries
  • Access to threat intelligence platforms (MISP, OpenCTI)
  • Understanding of Diamond Model of Intrusion Analysis
  • Familiarity with MITRE ATT&CK threat group profiles
  • Knowledge of malware analysis and infrastructure tracking techniques

Key Concepts

Attribution Evidence Categories

  1. Infrastructure Overlap: Shared C2 servers, domains, IP ranges, hosting providers
  2. TTP Consistency: Matching ATT&CK techniques and sub-techniques across campaigns
  3. Malware Code Similarity: Shared code bases, compilers, PDB paths, encryption routines
  4. Operational Patterns: Timing (working hours, time zones), targeting patterns, operational tempo
  5. Language Artifacts: Embedded strings, variable names, error messages in specific languages
  6. Victimology: Target sector, geography, and organizational profile consistency
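
Evidence items from these categories can be captured as structured records before scoring. A minimal sketch, with hypothetical field values; the field names mirror those used by the analyzer in Step 1:

```python
# Illustrative evidence record (the indicator and weight are made-up samples).
evidence_item = {
    "category": "infrastructure_overlap",  # one of the six categories above
    "description": "C2 domain re-registered from a 2025 campaign",
    "value": "update-check.example",       # the concrete indicator observed
    "confidence": 30,                      # analyst-assigned weight, 0-100
}

print(evidence_item["category"], evidence_item["confidence"])
```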

Confidence Levels

  • High Confidence: Multiple independent evidence categories converge on same actor
  • Moderate Confidence: Several evidence categories match, some ambiguity remains
  • Low Confidence: Limited evidence, possible false flags or shared tooling

Analysis of Competing Hypotheses (ACH)

A structured analytic technique that evaluates evidence against multiple competing hypotheses. Each piece of evidence is scored as consistent, inconsistent, or neutral with respect to each hypothesis; the hypothesis with the least inconsistent evidence, rather than the one with the most supporting evidence, is favored.
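
The matrix logic can be sketched in a few lines; the actors and evidence items below are hypothetical:

```python
# Toy ACH matrix: +1 consistent, -1 inconsistent, 0 neutral.
# ACH favors the hypothesis with the FEWEST inconsistencies,
# not the one with the most confirmations.
matrix = {
    "Actor-A": {"shared C2 IP": 1, "Cyrillic strings": 1, "novel loader": -1},
    "Actor-B": {"shared C2 IP": -1, "Cyrillic strings": -1, "novel loader": 1},
}

def inconsistencies(scores):
    return sum(1 for s in scores.values() if s < 0)

favored = min(matrix, key=lambda actor: inconsistencies(matrix[actor]))
print(favored)  # Actor-A (1 inconsistency vs. 2 for Actor-B)
```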

Practical Steps

Step 1: Collect Attribution Evidence

from datetime import datetime, timezone

class AttributionAnalyzer:
    def __init__(self):
        self.evidence = []
        self.hypotheses = {}

    def add_evidence(self, category, description, value, confidence):
        """Record an evidence item; confidence is an analyst-assigned weight (0-100)."""
        self.evidence.append({
            "category": category,
            "description": description,
            "value": value,
            "confidence": confidence,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def add_hypothesis(self, actor_name, actor_id=""):
        self.hypotheses[actor_name] = {
            "actor_id": actor_id,
            "consistent_evidence": [],
            "inconsistent_evidence": [],
            "neutral_evidence": [],
            "score": 0,
        }

    def evaluate_evidence(self, evidence_idx, actor_name, assessment):
        """Assess evidence against a hypothesis: consistent/inconsistent/neutral."""
        if assessment == "consistent":
            self.hypotheses[actor_name]["consistent_evidence"].append(evidence_idx)
            self.hypotheses[actor_name]["score"] += self.evidence[evidence_idx]["confidence"]
        elif assessment == "inconsistent":
            self.hypotheses[actor_name]["inconsistent_evidence"].append(evidence_idx)
            self.hypotheses[actor_name]["score"] -= self.evidence[evidence_idx]["confidence"] * 2
        else:
            self.hypotheses[actor_name]["neutral_evidence"].append(evidence_idx)

    def rank_hypotheses(self):
        """Rank hypotheses by attribution score."""
        ranked = sorted(
            self.hypotheses.items(),
            key=lambda x: x[1]["score"],
            reverse=True,
        )
        return [
            {
                "actor": name,
                "score": data["score"],
                "consistent": len(data["consistent_evidence"]),
                "inconsistent": len(data["inconsistent_evidence"]),
                "confidence": self._score_to_confidence(data["score"]),
            }
            for name, data in ranked
        ]

    def _score_to_confidence(self, score):
        if score >= 80:
            return "HIGH"
        elif score >= 40:
            return "MODERATE"
        else:
            return "LOW"

Step 2: Infrastructure Overlap Analysis

def analyze_infrastructure_overlap(campaign_a_infra, campaign_b_infra):
    """Compare infrastructure between two campaigns for attribution."""
    overlap = {
        "shared_ips": set(campaign_a_infra.get("ips", [])).intersection(
            campaign_b_infra.get("ips", [])
        ),
        "shared_domains": set(campaign_a_infra.get("domains", [])).intersection(
            campaign_b_infra.get("domains", [])
        ),
        "shared_asns": set(campaign_a_infra.get("asns", [])).intersection(
            campaign_b_infra.get("asns", [])
        ),
        "shared_registrars": set(campaign_a_infra.get("registrars", [])).intersection(
            campaign_b_infra.get("registrars", [])
        ),
    }

    overlap_score = 0
    if overlap["shared_ips"]:
        overlap_score += 30
    if overlap["shared_domains"]:
        overlap_score += 25
    if overlap["shared_asns"]:
        overlap_score += 15
    if overlap["shared_registrars"]:
        overlap_score += 10

    return {
        "overlap": {k: list(v) for k, v in overlap.items()},
        "overlap_score": overlap_score,
        "assessment": "STRONG" if overlap_score >= 40 else "MODERATE" if overlap_score >= 20 else "WEAK",
    }
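
A quick worked example of the scoring above, using fabricated indicators (TEST-NET addresses and example domains):

```python
# Fabricated campaign infrastructure for illustration only.
campaign_a = {"ips": {"203.0.113.10", "203.0.113.11"},
              "domains": {"update-check.example"},
              "asns": {64500}, "registrars": {"ExampleRegistrar"}}
campaign_b = {"ips": {"203.0.113.11"},
              "domains": {"cdn-sync.example"},
              "asns": {64500}, "registrars": {"ExampleRegistrar"}}

# Same weighting as analyze_infrastructure_overlap above.
weights = {"ips": 30, "domains": 25, "asns": 15, "registrars": 10}
score = sum(w for key, w in weights.items() if campaign_a[key] & campaign_b[key])
print(score)  # 30 (IPs) + 15 (ASNs) + 10 (registrars) = 55 -> STRONG (>= 40)
```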

Step 3: TTP Comparison Across Campaigns

# Actor technique lists can be fetched via attackcti's attack_client if needed;
# this comparison operates on plain iterables of ATT&CK technique IDs.

def compare_campaign_ttps(campaign_techniques, known_actor_techniques):
    """Compare campaign TTPs against known threat actor profiles."""
    campaign_set = set(campaign_techniques)
    actor_set = set(known_actor_techniques)

    common = campaign_set.intersection(actor_set)
    unique_campaign = campaign_set - actor_set
    unique_actor = actor_set - campaign_set

    union = campaign_set | actor_set
    jaccard = len(common) / len(union) if union else 0

    return {
        "common_techniques": sorted(common),
        "common_count": len(common),
        "unique_to_campaign": sorted(unique_campaign),
        "unique_to_actor": sorted(unique_actor),
        "jaccard_similarity": round(jaccard, 3),
        "overlap_percentage": round(len(common) / len(campaign_set) * 100, 1) if campaign_set else 0,
    }
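
A sanity check of the similarity math with toy data; the technique IDs are real ATT&CK identifiers, but the sets are hypothetical:

```python
# Hypothetical campaign and actor-profile technique sets.
campaign = {"T1566.001", "T1059.001", "T1027", "T1071.001"}
actor_profile = {"T1566.001", "T1059.001", "T1027", "T1550.002", "T1078"}

common = campaign & actor_profile
jaccard = len(common) / len(campaign | actor_profile)
print(len(common), round(jaccard, 3))  # 3 techniques in common, Jaccard 0.5
```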

Step 4: Generate Attribution Report

from datetime import date

def generate_attribution_report(analyzer):
    """Generate a structured attribution assessment report."""
    rankings = analyzer.rank_hypotheses()

    report = {
        "assessment_date": date.today().isoformat(),
        "total_evidence_items": len(analyzer.evidence),
        "hypotheses_evaluated": len(analyzer.hypotheses),
        "rankings": rankings,
        "primary_attribution": rankings[0] if rankings else None,
        "evidence_summary": [
            {
                "index": i,
                "category": e["category"],
                "description": e["description"],
                "confidence": e["confidence"],
            }
            for i, e in enumerate(analyzer.evidence)
        ],
    }

    return report
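
Reports in this shape serialize cleanly for sharing with stakeholders. A sketch with made-up rankings, shaped like the report dictionary above:

```python
import json
from datetime import date

# Hypothetical output resembling generate_attribution_report's result.
report = {
    "assessment_date": date.today().isoformat(),
    "rankings": [
        {"actor": "Actor-A", "score": 85, "consistent": 5,
         "inconsistent": 1, "confidence": "HIGH"},
        {"actor": "Actor-B", "score": 20, "consistent": 2,
         "inconsistent": 3, "confidence": "LOW"},
    ],
}
# The top-ranked hypothesis becomes the primary attribution.
report["primary_attribution"] = report["rankings"][0]

print(json.dumps(report, indent=2)[:120])
```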

Validation Criteria

  • Evidence collection covers all six attribution categories
  • ACH matrix properly evaluates evidence against competing hypotheses
  • Infrastructure overlap analysis identifies shared indicators
  • TTP comparison uses ATT&CK technique IDs for precision
  • Attribution confidence levels are properly justified
  • Report includes alternative hypotheses and false flag considerations

Compliance Framework Mapping

This skill supports compliance evidence collection across multiple frameworks:

  • SOC 2: CC7.1 (Monitoring), CC7.2 (Anomaly Detection)
  • ISO 27001: A.6.1 (Threat Intelligence), A.16.1 (Security Incident Management)
  • NIST 800-53: PM-16 (Threat Awareness), RA-3 (Risk Assessment), SI-5 (Security Alerts)
  • NIST CSF: ID.RA (Risk Assessment), DE.AE (Anomalies & Events)

Claw GRC Tip: When this skill is executed by a registered agent, compliance evidence is automatically captured and mapped to the relevant controls in your active frameworks.

Deploying This Skill with Claw GRC

Agent Execution

Register this skill with your Claw GRC agent for automated execution:

# Install via CLI
npx claw-grc skills add analyzing-campaign-attribution-evidence

# Or load dynamically via MCP
grc.load_skill("analyzing-campaign-attribution-evidence")

Audit Trail Integration

When executed through Claw GRC, every step of this skill generates tamper-evident audit records:

  • SHA-256 chain hashing ensures no step can be modified after execution
  • Evidence artifacts (configs, scan results, logs) are automatically attached to relevant controls
  • Trust score impact — successful execution increases your agent's trust score

Continuous Compliance

Schedule this skill for recurring execution to maintain continuous compliance posture. Claw GRC monitors for drift and alerts when re-execution is needed.

References

  • Diamond Model of Intrusion Analysis
  • MITRE ATT&CK Groups
  • Analysis of Competing Hypotheses
  • Threat Attribution Framework


Tags

threat-intelligence · cti · ioc · mitre-attack · stix · attribution · campaign-analysis

Related Skills

  • Analyzing Threat Actor TTPs with MITRE ATT&CK
  • Building an IOC Enrichment Pipeline with OpenCTI
  • Building a Threat Intelligence Platform
  • Collecting Threat Intelligence with MISP
  • Implementing STIX/TAXII Feed Integration
  • Performing Indicator Lifecycle Management


