
Implementing SIEM Use Cases for Detection

Implements SIEM detection use cases by designing correlation rules, threshold alerts, and behavioral analytics mapped to MITRE ATT&CK techniques across Splunk, Elastic, and Sentinel.

6 min read · 12 code examples · 4 MITRE techniques

Prerequisites

  • SIEM platform (Splunk ES, Elastic Security, or Microsoft Sentinel) with production data
  • ATT&CK Navigator for coverage gap analysis
  • Log sources normalized to CIM/ECS field standards
  • Use case documentation framework (wiki, Git repo, or detection engineering platform)
  • Testing environment with attack simulation tools (Atomic Red Team, MITRE Caldera)

MITRE ATT&CK Coverage

T1110.001 · T1110.003 · T1204.002 · T1059.001


When to Use

Use this skill when:

  • SOC teams need to build or expand their SIEM detection library from scratch
  • Threat assessments identify ATT&CK technique gaps requiring new detection rules
  • Detection engineers need a structured process for use case design, testing, and deployment
  • Compliance requirements mandate specific detection capabilities (PCI DSS, HIPAA, SOX)

Do not use for ad-hoc hunting queries: use cases are formalized, tested, and maintained detection rules, not exploratory searches.


Workflow

Step 1: Assess Detection Coverage Gaps

Map current detection rules to ATT&CK and identify gaps:

import json

# Load current detection rules mapped to ATT&CK
current_rules = [
    {"name": "Brute Force Detection", "techniques": ["T1110.001", "T1110.003"]},
    {"name": "Malware Hash Match", "techniques": ["T1204.002"]},
    {"name": "Suspicious PowerShell", "techniques": ["T1059.001"]},
]

# Load ATT&CK Enterprise techniques
with open("enterprise-attack.json") as f:
    attack = json.load(f)

all_techniques = set()
for obj in attack["objects"]:
    # Skip revoked/deprecated entries so the coverage denominator
    # only counts live techniques
    if obj["type"] != "attack-pattern" or obj.get("revoked") or obj.get("x_mitre_deprecated"):
        continue
    for ref in obj.get("external_references", []):
        if ref.get("source_name") == "mitre-attack":
            all_techniques.add(ref["external_id"])

covered = set()
for rule in current_rules:
    covered.update(rule["techniques"])

gaps = all_techniques - covered
print(f"Total techniques: {len(all_techniques)}")
print(f"Covered: {len(covered)} ({len(covered)/len(all_techniques)*100:.1f}%)")
print(f"Gaps: {len(gaps)}")

# Prioritize gaps by threat relevance
priority_techniques = [
    "T1003", "T1021", "T1053", "T1547", "T1078",
    "T1055", "T1071", "T1105", "T1036", "T1070"
]
priority_gaps = [t for t in priority_techniques if t in gaps]
print(f"Priority gaps: {priority_gaps}")

Step 2: Design Use Case Specification

Document each use case with a standardized template:

use_case_id: UC-2024-015
name: Credential Dumping via LSASS Access
description: Detects tools accessing LSASS process memory for credential extraction
mitre_attack:
  tactic: Credential Access (TA0006)
  technique: T1003.001 - LSASS Memory
  data_sources:
    - Process: OS API Execution (Sysmon EventCode 10)
    - Process: Process Access (Windows Security 4663)
log_sources:
  - index: sysmon, sourcetype: XmlWinEventLog:Microsoft-Windows-Sysmon/Operational
  - index: wineventlog, sourcetype: WinEventLog:Security
severity: High
confidence: Medium-High
false_positive_sources:
  - Antivirus products scanning LSASS
  - CrowdStrike Falcon sensor
  - Windows Defender ATP
  - SCCM client
tuning_notes: >
  Maintain exclusion list for known security tools that legitimately access LSASS.
  Review exclusions quarterly for newly deployed security products.
sla: Alert within 5 minutes of detection
owner: detection_engineering_team
status: Production
created: 2024-03-15
last_tested: 2024-03-15
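
Specs in this format lend themselves to automated checks before a rule ever reaches the SIEM. A minimal validator sketch follows; the required-field set mirrors the template above, while the allowed status vocabulary is an assumption, not a standard:

```python
# Minimal use-case spec validator. Field names mirror the YAML template
# above; the VALID_STATUSES set is an assumption, not a standard.
REQUIRED_FIELDS = {
    "use_case_id", "name", "description", "mitre_attack",
    "log_sources", "severity", "owner", "status",
}
VALID_STATUSES = {"Proposed", "Development", "Testing",
                  "Staging", "Production", "Deprecated"}

def validate_use_case(spec: dict) -> list[str]:
    """Return a list of problems; an empty list means the spec passes."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - spec.keys())]
    technique = spec.get("mitre_attack", {}).get("technique", "")
    if technique and not technique.startswith("T"):
        problems.append(f"technique ID should start with 'T': {technique}")
    if "status" in spec and spec["status"] not in VALID_STATUSES:
        problems.append(f"unknown status: {spec['status']}")
    return problems

spec = {
    "use_case_id": "UC-2024-015",
    "name": "Credential Dumping via LSASS Access",
    "description": "Detects tools accessing LSASS process memory",
    "mitre_attack": {"technique": "T1003.001 - LSASS Memory"},
    "log_sources": ["sysmon", "wineventlog"],
    "severity": "High",
    "owner": "detection_engineering_team",
    "status": "Production",
}
print(validate_use_case(spec))  # → []
```

Running a check like this in CI on every spec commit keeps malformed use cases out of the detection library.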

Step 3: Implement Detection Logic Across Platforms

Splunk ES Correlation Search:

| tstats summariesonly=true count from datamodel=Endpoint.Processes
  where Processes.process_name="lsass.exe"
  by Processes.dest, Processes.user, Processes.process_name,
     Processes.parent_process_name, Processes.parent_process
| `drop_dm_object_name(Processes)`
| lookup lsass_access_whitelist parent_process AS parent_process OUTPUT is_whitelisted
| where isnull(is_whitelisted) OR is_whitelisted!="true"
| `credential_dumping_lsass_filter`

Or using raw Sysmon data:

index=sysmon EventCode=10 TargetImage="*\\lsass.exe"
GrantedAccess IN ("0x1010", "0x1038", "0x1fffff", "0x40")
NOT [| inputlookup lsass_whitelist.csv | fields SourceImage]
| stats count, values(GrantedAccess) AS access_flags by Computer, SourceImage, SourceUser
| where count > 0

Elastic Security EQL Rule:

process where event.type == "access" and
  process.name == "lsass.exe" and
  not process.executable : (
    "?:\\Windows\\System32\\svchost.exe",
    "?:\\Windows\\System32\\csrss.exe",
    "?:\\Program Files\\CrowdStrike\\*",
    "?:\\ProgramData\\Microsoft\\Windows Defender\\*"
  )

Microsoft Sentinel KQL Rule:

DeviceProcessEvents
| where Timestamp > ago(1h)
| where FileName == "lsass.exe"
| where ActionType == "ProcessAccessed"
| where InitiatingProcessFileName !in ("svchost.exe", "csrss.exe", "MsMpEng.exe")
| project Timestamp, DeviceName, InitiatingProcessFileName,
          InitiatingProcessCommandLine, AccountName
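
The three rules above encode the same exclusion list three times, which invites drift. One option, sketched here as a hypothetical helper (not part of any platform's tooling), is to keep a single canonical list and render each platform's fragment from it; the output syntax mirrors the example rules above:

```python
# Hypothetical helper: keep one canonical exclusion list and render the
# platform-specific fragment, so the Splunk/Elastic/Sentinel rules
# cannot drift apart. Output syntax mirrors the example rules above.
EXCLUSIONS = ["svchost.exe", "csrss.exe", "MsMpEng.exe"]

def render_exclusions(platform: str, names: list[str]) -> str:
    quoted = ", ".join(f'"{n}"' for n in names)
    if platform == "kql":   # Sentinel: !in () list
        return f"InitiatingProcessFileName !in ({quoted})"
    if platform == "eql":   # Elastic: negated match list
        return f"not process.name : ({quoted})"
    if platform == "spl":   # Splunk: NOT with an OR-chain of wildcards
        clause = " OR ".join(f'SourceImage="*\\\\{n}"' for n in names)
        return f"NOT ({clause})"
    raise ValueError(f"unknown platform: {platform}")

print(render_exclusions("kql", EXCLUSIONS))
# → InitiatingProcessFileName !in ("svchost.exe", "csrss.exe", "MsMpEng.exe")
```

Regenerating the fragments from one source of truth means a whitelist change is a one-line edit reviewed once, not three edits reviewed separately.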

Step 4: Test with Attack Simulation

Validate detection rules using Atomic Red Team:

# Install Atomic Red Team
IEX (IWR 'https://raw.githubusercontent.com/redcanaryco/invoke-atomicredteam/master/install-atomicredteam.ps1' -UseBasicParsing)
Install-AtomicRedTeam -getAtomics

# Execute T1003.001 - Credential Dumping
Invoke-AtomicTest T1003.001 -TestNumbers 1,2,3

# Execute T1053.005 - Scheduled Task
Invoke-AtomicTest T1053.005 -TestNumbers 1

# Execute T1547.001 - Registry Run Key
Invoke-AtomicTest T1547.001 -TestNumbers 1,2

Verify detection in SIEM:

index=sysmon EventCode=10 TargetImage="*\\lsass.exe"
earliest=-1h
| stats count by Computer, SourceImage, GrantedAccess
| where count > 0

Document test results:

TEST RESULTS - UC-2024-015
Atomic Test T1003.001-1 (Mimikatz):      DETECTED (alert fired in 47s)
Atomic Test T1003.001-2 (ProcDump):      DETECTED (alert fired in 32s)
Atomic Test T1003.001-3 (Task Manager):  FALSE NEGATIVE (excluded by whitelist - expected)
False Positive Rate (7-day backtest):    2 events (CrowdStrike scan - added to whitelist)
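
A few lines of Python turn raw simulation records into a summary like the one above; the record layout here is illustrative:

```python
# Summarize attack-simulation results. The record layout is illustrative;
# adapt field names to however your team logs test runs.
tests = [
    {"test": "T1003.001-1 (Mimikatz)",     "detected": True,  "latency_s": 47},
    {"test": "T1003.001-2 (ProcDump)",     "detected": True,  "latency_s": 32},
    {"test": "T1003.001-3 (Task Manager)", "detected": False, "latency_s": None},
]

detected = [t for t in tests if t["detected"]]
rate = len(detected) / len(tests) * 100
worst = max(t["latency_s"] for t in detected)
print(f"Detection rate: {rate:.0f}% | worst alert latency: {worst}s")
# → Detection rate: 67% | worst alert latency: 47s
```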

Step 5: Deploy and Monitor Use Case Health

Track detection rule effectiveness:

-- Use case firing frequency
index=notable
| stats count AS fires, dc(src) AS unique_sources,
        dc(dest) AS unique_dests,
        sum(eval(if(status_label="Resolved - True Positive", 1, 0))) AS true_positives
  by rule_name
| eval true_positive_rate = round(true_positives / fires * 100, 1)
| sort - fires
| table rule_name, fires, unique_sources, unique_dests, true_positive_rate

-- Detection latency monitoring
index=notable
| eval detection_latency = _time - orig_time
| stats avg(detection_latency) AS avg_latency_sec,
        perc95(detection_latency) AS p95_latency_sec
  by rule_name
| eval avg_latency_min = round(avg_latency_sec / 60, 1)
| sort - avg_latency_sec

Step 6: Maintain Use Case Library

Establish lifecycle management for all detection use cases:

USE CASE LIFECYCLE
==================
1. PROPOSED    -> New detection need identified (threat intel, gap analysis, incident finding)
2. DEVELOPMENT -> Query written, false positive analysis, tuning
3. TESTING     -> Atomic Red Team validation, 7-day backtest
4. STAGING     -> Deployed in alert-only mode (no incident creation) for 14 days
5. PRODUCTION  -> Full production with incident creation and SOAR integration
6. REVIEW      -> Quarterly review of effectiveness, false positive rate, relevance
7. DEPRECATED  -> Technique no longer relevant or replaced by better detection
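
The lifecycle can be enforced as a transition table so a use case cannot jump straight from DEVELOPMENT to PRODUCTION. A minimal sketch; the back-edges (TESTING/STAGING back to DEVELOPMENT) are an assumption about how failed validation is handled:

```python
# Lifecycle transition table. Back-edges (TESTING/STAGING -> DEVELOPMENT)
# are an assumption about how failed validation is handled.
ALLOWED = {
    "PROPOSED":    {"DEVELOPMENT"},
    "DEVELOPMENT": {"TESTING"},
    "TESTING":     {"STAGING", "DEVELOPMENT"},
    "STAGING":     {"PRODUCTION", "DEVELOPMENT"},
    "PRODUCTION":  {"REVIEW"},
    "REVIEW":      {"PRODUCTION", "DEPRECATED"},
    "DEPRECATED":  set(),
}

def advance(current: str, target: str) -> str:
    """Move a use case to the next stage, rejecting illegal jumps."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

state = "PROPOSED"
for nxt in ("DEVELOPMENT", "TESTING", "STAGING", "PRODUCTION"):
    state = advance(state, nxt)
print(state)  # → PRODUCTION
```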

Key Concepts

  • Use Case: Formalized detection rule with documented logic, testing, tuning, and lifecycle management
  • Detection Engineering: Practice of designing, testing, and maintaining SIEM detection rules as a software development discipline
  • Correlation Search: SIEM query that combines events from multiple sources to identify attack patterns
  • False Positive Rate: Percentage of alerts that are benign activity; target <20% for production use cases
  • Detection Latency: Time between event occurrence and alert generation; target <5 minutes for critical detections
  • ATT&CK Coverage: Percentage of relevant ATT&CK techniques with at least one production detection rule

Tools & Systems

  • Splunk ES: Enterprise SIEM with correlation searches, risk-based alerting, and Incident Review
  • Elastic Security: SIEM with detection rules, EQL sequences, and ML-based anomaly detection
  • Microsoft Sentinel: Cloud SIEM with KQL analytics rules, Fusion ML engine, and Lighthouse multi-tenant
  • Atomic Red Team: Open-source attack simulation framework for testing detection rules against ATT&CK techniques
  • ATT&CK Navigator: MITRE visualization tool for mapping and tracking detection coverage across techniques

Common Scenarios

  • Post-Incident Use Case: After a ransomware incident, build detection for the initial access vector discovered during investigation
  • Compliance-Driven: PCI DSS requires detection of admin account misuse; build use cases for 4672/4720/4732 events
  • Threat-Intel Driven: A new APT group targets your sector; build use cases for their documented TTPs
  • Red Team Findings: A purple team exercise identifies blind spots; convert findings into production detection rules
  • SIEM Migration: Migrating from QRadar to Splunk; convert and validate all existing use cases on the new platform

Output Format

USE CASE DEPLOYMENT REPORT
==========================
Quarter:      Q1 2024
Total Use Cases: 147 (Production: 128, Staging: 12, Development: 7)

New Deployments This Quarter:
  UC-2024-012  Kerberoasting Detection (T1558.003)     - Production
  UC-2024-013  DLL Side-Loading (T1574.002)            - Production
  UC-2024-014  Scheduled Task Persistence (T1053.005)  - Production
  UC-2024-015  LSASS Memory Access (T1003.001)         - Staging

ATT&CK Coverage:
  Overall: 67% of relevant techniques (up from 61%)
  Initial Access:      78%
  Execution:           82%
  Persistence:         71%
  Credential Access:   65%
  Lateral Movement:    58% (priority gap area)

Health Metrics:
  Avg True Positive Rate:    74% (target: >70%)
  Avg Detection Latency:     2.3 min (target: <5 min)
  Use Cases Deprecated:      3 (replaced by improved versions)
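
The headline counts in a report like this can be rolled up directly from the use-case inventory; the inventory records below are illustrative:

```python
from collections import Counter

# Roll the use-case inventory up into the quarterly headline counts.
# The inventory below is illustrative; in practice it comes from your
# documentation repo or detection engineering platform.
inventory = (
    [{"status": "Production"}] * 128
    + [{"status": "Staging"}] * 12
    + [{"status": "Development"}] * 7
)

by_status = Counter(uc["status"] for uc in inventory)
print(f"Total Use Cases: {sum(by_status.values())} "
      f"(Production: {by_status['Production']}, "
      f"Staging: {by_status['Staging']}, "
      f"Development: {by_status['Development']})")
# → Total Use Cases: 147 (Production: 128, Staging: 12, Development: 7)
```

Generating the report from the inventory, rather than hand-editing it, keeps the numbers honest quarter over quarter.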

Verification Criteria

Confirm successful execution by validating:

  • [ ] All prerequisite tools and access requirements are satisfied
  • [ ] Each workflow step completed without errors
  • [ ] Output matches expected format and contains expected data
  • [ ] No security warnings or misconfigurations detected
  • [ ] Results are documented and evidence is preserved for audit

Compliance Framework Mapping

This skill supports compliance evidence collection across multiple frameworks:

  • SOC 2: CC7.1 (Monitoring), CC7.2 (Anomaly Detection), CC7.3 (Incident Identification)
  • ISO 27001: A.12.4 (Logging & Monitoring), A.16.1 (Security Incident Management)
  • NIST 800-53: AU-6 (Audit Review), SI-4 (System Monitoring), IR-5 (Incident Monitoring)
  • NIST CSF: DE.AE (Anomalies & Events), DE.CM (Continuous Monitoring)

Claw GRC Tip: When this skill is executed by a registered agent, compliance evidence is automatically captured and mapped to the relevant controls in your active frameworks.

Deploying This Skill with Claw GRC

Agent Execution

Register this skill with your Claw GRC agent for automated execution:

# Install via CLI
npx claw-grc skills add implementing-siem-use-cases-for-detection

# Or load dynamically via MCP
grc.load_skill("implementing-siem-use-cases-for-detection")

Audit Trail Integration

When executed through Claw GRC, every step of this skill generates tamper-evident audit records:

  • SHA-256 chain hashing ensures no step can be modified after execution
  • Evidence artifacts (configs, scan results, logs) are automatically attached to relevant controls
  • Trust score impact: successful execution increases your agent's trust score

Continuous Compliance

Schedule this skill for recurring execution to maintain continuous compliance posture. Claw GRC monitors for drift and alerts when re-execution is needed.


Tags

soc · siem · use-cases · detection-engineering · mitre-attack · splunk · elastic · sentinel

Related Skills

  • Building Detection Rules with Sigma (Security Operations · 5 min · intermediate)
  • Implementing SIEM Use Case Tuning (Security Operations · 3 min · intermediate)
  • Building Detection Rules with Splunk SPL (Security Operations · 5 min · intermediate)
  • Performing Threat Hunting with Elastic SIEM (Security Operations · 5 min · intermediate)
  • Analyzing Windows Event Logs in Splunk (Security Operations · 5 min · intermediate)
  • Building Threat Intelligence Enrichment in Splunk (Security Operations · 4 min · intermediate)

Skill Details

  • Domain: Security Operations
  • Difficulty: Intermediate
  • Read Time: 6 min
  • Code Examples: 12
  • MITRE IDs: 4

