
Detecting S3 Data Exfiltration Attempts

Detect data exfiltration attempts from AWS S3 buckets by analyzing CloudTrail S3 data events, VPC Flow Logs, GuardDuty findings, Amazon Macie alerts, and S3 access patterns to identify unauthorized bulk downloads and cross-account data transfers.

6 min read · 7 code examples

Prerequisites

  • CloudTrail configured with S3 data event logging (`GetObject`, `PutObject`, `CopyObject`)
  • GuardDuty enabled with S3 Protection feature activated
  • Amazon Macie enabled for sensitive data discovery in target buckets
  • CloudWatch Logs or Athena for querying CloudTrail logs at scale
  • VPC endpoint policies configured for S3 access monitoring


When to Use

  • When GuardDuty detects anomalous S3 access patterns such as bulk downloads from unusual IPs
  • When investigating a suspected data breach involving S3-stored sensitive data
  • When building detection rules for S3 data loss prevention monitoring
  • When responding to Macie alerts about sensitive data being accessed or moved
  • When compliance requires monitoring and logging of all access to classified data stores

Do not use for preventing data exfiltration (use S3 bucket policies, VPC endpoints, and SCPs), for data classification (use Amazon Macie discovery jobs), or for network-level exfiltration detection (use VPC Flow Logs with network analysis tools).


Workflow

Step 1: Enable S3 Data Event Logging in CloudTrail

Configure CloudTrail to capture all S3 object-level operations for forensic analysis.

# Enable S3 data events on an existing trail
aws cloudtrail put-event-selectors \
  --trail-name management-trail \
  --event-selectors '[{
    "ReadWriteType": "All",
    "IncludeManagementEvents": true,
    "DataResources": [{
      "Type": "AWS::S3::Object",
      "Values": ["arn:aws:s3:::sensitive-data-bucket/", "arn:aws:s3:::customer-records/"]
    }]
  }]'

# Verify data event configuration
aws cloudtrail get-event-selectors --trail-name management-trail \
  --query 'EventSelectors[*].DataResources' --output json

# Enable GuardDuty S3 Protection
aws guardduty update-detector \
  --detector-id $(aws guardduty list-detectors --query 'DetectorIds[0]' --output text) \
  --data-sources '{"S3Logs":{"Enable":true}}'
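After enabling data events, it is worth confirming that every sensitive bucket is actually covered by a selector. A minimal Python sketch of such a check (the `uncovered_buckets` helper is hypothetical; `sample` mimics the shape of `get-event-selectors` output):

```python
import json

def uncovered_buckets(event_selectors_json, required_buckets):
    """Return the sensitive buckets not covered by any S3 object data resource."""
    selectors = json.loads(event_selectors_json)
    prefixes = [
        value
        for sel in selectors.get("EventSelectors", [])
        for res in sel.get("DataResources", [])
        if res.get("Type") == "AWS::S3::Object"
        for value in res.get("Values", [])
    ]
    return [
        bucket for bucket in required_buckets
        if not any(f"arn:aws:s3:::{bucket}/".startswith(p) for p in prefixes)
    ]

# Shaped like `aws cloudtrail get-event-selectors` output
sample = json.dumps({
    "EventSelectors": [{
        "ReadWriteType": "All",
        "DataResources": [{
            "Type": "AWS::S3::Object",
            "Values": ["arn:aws:s3:::sensitive-data-bucket/"],
        }],
    }]
})

print(uncovered_buckets(sample, ["sensitive-data-bucket", "customer-records"]))
# → ['customer-records']
```

A bucket covered only under a key prefix (e.g. `arn:aws:s3:::bucket/logs/`) is deliberately reported as uncovered, since objects outside that prefix would not be logged.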

Step 2: Query CloudTrail for Anomalous S3 Access Patterns

Analyze CloudTrail logs for bulk download activity, unusual access times, and unfamiliar source IPs.

# Athena query: Top S3 downloaders by volume in last 24 hours
cat << 'EOF'
SELECT
  useridentity.arn as principal,
  sourceipaddress,
  COUNT(*) as request_count,
  SUM(CAST(json_extract_scalar(additionaleventdata, '$.bytesTransferredOut') AS double)) as bytes_downloaded
FROM cloudtrail_logs
WHERE eventname = 'GetObject'
  AND eventsource = 's3.amazonaws.com'
  AND from_iso8601_timestamp(eventtime) > date_add('hour', -24, now())
GROUP BY useridentity.arn, sourceipaddress
ORDER BY request_count DESC
LIMIT 50
EOF

# CloudWatch Logs Insights: S3 GetObject requests from unusual IPs
aws logs start-query \
  --log-group-name cloudtrail-logs \
  --start-time $(date -d "24 hours ago" +%s) \
  --end-time $(date +%s) \
  --query-string '
    fields @timestamp, userIdentity.arn, sourceIPAddress, requestParameters.bucketName, requestParameters.key
    | filter eventName = "GetObject"
    | stats count() as requestCount by sourceIPAddress, userIdentity.arn
    | sort requestCount desc
    | limit 25
  '

# Detect cross-account copies (potential exfiltration)
aws logs start-query \
  --log-group-name cloudtrail-logs \
  --start-time $(date -d "7 days ago" +%s) \
  --end-time $(date +%s) \
  --query-string '
    fields @timestamp, userIdentity.arn, sourceIPAddress, requestParameters.bucketName
    | filter eventName in ["CopyObject", "ReplicateObject", "UploadPart"]
    | filter userIdentity.accountId != "OUR_ACCOUNT_ID"
    | sort @timestamp desc
    | limit 100
  '
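The queries above rank principals by raw request count; a follow-up pass can separate genuine outliers from merely busy principals. One illustrative approach is a z-score over per-principal `GetObject` counts. The sketch below assumes events have already been parsed from CloudTrail JSON into dicts, and the thresholds are arbitrary examples, not tuned values:

```python
from collections import Counter
from statistics import mean, stdev

def flag_bulk_downloaders(events, z_threshold=3.0, min_requests=100):
    """Flag (principal, source IP) pairs whose GetObject count is a statistical outlier."""
    counts = Counter(
        (e["userIdentity"]["arn"], e["sourceIPAddress"])
        for e in events
        if e.get("eventName") == "GetObject"
    )
    values = list(counts.values())
    if len(values) < 2:
        return []  # not enough peers to establish a baseline
    mu, sigma = mean(values), stdev(values)
    return [
        {"principal": arn, "source_ip": ip, "requests": n}
        for (arn, ip), n in counts.items()
        if n >= min_requests and sigma > 0 and (n - mu) / sigma > z_threshold
    ]

# Synthetic sample: 20 developers with normal activity, one noisy principal
events = (
    [{"eventName": "GetObject",
      "userIdentity": {"arn": f"arn:aws:iam::111122223333:user/dev-{i}"},
      "sourceIPAddress": "10.0.0.5"}
     for i in range(20) for _ in range(40)]
    + [{"eventName": "GetObject",
        "userIdentity": {"arn": "arn:aws:iam::111122223333:user/compromised"},
        "sourceIPAddress": "185.0.0.1"}
       for _ in range(5000)]
)
print(flag_bulk_downloaders(events))  # flags only the 5000-request principal
```

In production the same logic belongs in the Athena query itself (e.g. `avg()`/`stddev()` window functions), but the principle is the same: compare each principal against its peers, not against a fixed number.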

Step 3: Review GuardDuty S3 Findings

Check for GuardDuty S3-specific finding types that indicate exfiltration activity.

# List active S3 exfiltration-related findings
aws guardduty list-findings \
  --detector-id $(aws guardduty list-detectors --query 'DetectorIds[0]' --output text) \
  --finding-criteria '{
    "Criterion": {
      "type": {
        "Eq": [
          "Exfiltration:S3/MaliciousIPCaller",
          "Exfiltration:S3/ObjectRead.Unusual",
          "Discovery:S3/MaliciousIPCaller.Custom",
          "Discovery:S3/BucketEnumeration.Unusual",
          "UnauthorizedAccess:S3/MaliciousIPCaller.Custom",
          "UnauthorizedAccess:S3/TorIPCaller",
          "Impact:S3/AnomalousBehavior.Delete"
        ]
      }
    }
  }' --output json

# Get detailed finding information
aws guardduty get-findings \
  --detector-id $(aws guardduty list-detectors --query 'DetectorIds[0]' --output text) \
  --finding-ids FINDING_IDS \
  --query 'Findings[*].{Type:Type,Severity:Severity,Resource:Resource.S3BucketDetails[0].Name,Action:Service.Action}' \
  --output table
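Once findings are exported, a small triage pass can order them for the on-call analyst. This sketch assumes finding dicts shaped like the `get-findings` projection above; the 7.0 cutoff matches the start of GuardDuty's "High" severity band, but is a default to adjust, not a mandate:

```python
def triage_findings(findings, min_severity=7.0):
    """Keep exfiltration-related S3 findings at or above a severity cutoff, worst first."""
    relevant = [
        f for f in findings
        if f["Type"].startswith(("Exfiltration:S3/", "UnauthorizedAccess:S3/"))
        and f["Severity"] >= min_severity
    ]
    return sorted(relevant, key=lambda f: f["Severity"], reverse=True)

# Shaped like the `get-findings` output projected above
findings = [
    {"Type": "Exfiltration:S3/ObjectRead.Unusual", "Severity": 8.0,
     "Resource": {"S3BucketDetails": [{"Name": "sensitive-data-bucket"}]}},
    {"Type": "Discovery:S3/BucketEnumeration.Unusual", "Severity": 5.0,
     "Resource": {"S3BucketDetails": [{"Name": "customer-records"}]}},
    {"Type": "UnauthorizedAccess:S3/TorIPCaller", "Severity": 8.5,
     "Resource": {"S3BucketDetails": [{"Name": "sensitive-data-bucket"}]}},
]
for f in triage_findings(findings):
    print(f["Severity"], f["Type"])
```

Discovery-stage findings are filtered out here only to keep the example small; in practice they are early warning and deserve their own queue.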

Step 4: Analyze Macie Findings for Sensitive Data Access

Review Macie findings to correlate data sensitivity with access anomalies.

# List Macie findings for sensitive data exposure
aws macie2 list-findings \
  --finding-criteria '{
    "criterion": {
      "category": {"eq": ["CLASSIFICATION"]},
      "severity.description": {"eq": ["High"]}
    }
  }' \
  --sort-criteria '{"attributeName": "updatedAt", "orderBy": "DESC"}' \
  --max-results 25

# Get detailed finding with data classification
aws macie2 get-findings \
  --finding-ids FINDING_IDS \
  --query 'findings[*].{Type:type,Severity:severity.description,Bucket:resourcesAffected.s3Bucket.name,SensitiveDataTypes:classificationDetails.result.sensitiveData[*].category}' \
  --output table

# Run a sensitive data discovery job on target bucket
aws macie2 create-classification-job \
  --job-type ONE_TIME \
  --name "exfiltration-investigation" \
  --s3-job-definition '{
    "bucketDefinitions": [{
      "accountId": "ACCOUNT_ID",
      "buckets": ["sensitive-data-bucket"]
    }]
  }'
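Correlating the two services is ultimately a set intersection: which of the downloaded objects did Macie classify as sensitive? A toy sketch with hypothetical object keys and simplified category labels:

```python
def sensitive_objects_accessed(downloaded_keys, classifications):
    """Map each downloaded object key to its Macie categories, keeping only sensitive hits."""
    sensitive = {c["key"]: c["categories"] for c in classifications}
    return {k: sensitive[k] for k in downloaded_keys if k in sensitive}

# Hypothetical keys; category labels are illustrative, not Macie's exact enums
classifications = [
    {"key": "exports/customers.csv", "categories": ["PII"]},
    {"key": "finance/cards.json", "categories": ["FINANCIAL_INFORMATION"]},
]
downloaded = ["exports/customers.csv", "public/logo.png"]
print(sensitive_objects_accessed(downloaded, classifications))
# → {'exports/customers.csv': ['PII']}
```

The resulting map feeds directly into the "DATA IMPACT ASSESSMENT" section of the report format below: count of sensitive objects, union of categories, worst severity.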

Step 5: Build Automated Detection Rules

Create CloudWatch alarms and EventBridge rules for real-time exfiltration detection.

# CloudWatch metric filter for high-volume S3 downloads
aws logs put-metric-filter \
  --log-group-name cloudtrail-logs \
  --filter-name s3-bulk-download \
  --filter-pattern '{ ($.eventName = "GetObject") && ($.eventSource = "s3.amazonaws.com") }' \
  --metric-transformations '[{
    "metricName": "S3GetObjectCount",
    "metricNamespace": "SecurityMetrics",
    "metricValue": "1",
    "defaultValue": 0
  }]'

# Alarm for anomalous download volume (>1000 objects/hour)
aws cloudwatch put-metric-alarm \
  --alarm-name s3-exfiltration-alert \
  --metric-name S3GetObjectCount \
  --namespace SecurityMetrics \
  --statistic Sum \
  --period 3600 \
  --threshold 1000 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:ACCOUNT:security-alerts

# EventBridge rule for GuardDuty S3 findings
aws events put-rule \
  --name guardduty-s3-exfiltration \
  --event-pattern '{
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {
      "type": [{"prefix": "Exfiltration:S3/"}]
    }
  }'
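Note that the CloudWatch alarm above evaluates fixed one-hour periods, so a burst straddling two periods can slip under the threshold. If you post-process events yourself, a sliding window avoids that edge case; a minimal sketch, with the threshold and window shrunk for illustration:

```python
from collections import deque

class SlidingWindowThreshold:
    """Fire when more than `threshold` events arrive within `window` seconds."""

    def __init__(self, threshold, window):
        self.threshold = threshold
        self.window = window
        self.times = deque()

    def record(self, ts):
        """Record one event at (nondecreasing) epoch time `ts`; True means alert."""
        self.times.append(ts)
        # Evict events that have aged out of the window
        while self.times and ts - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) > self.threshold

detector = SlidingWindowThreshold(threshold=3, window=60)
alerts = [detector.record(t) for t in (0, 10, 20, 30, 120)]
print(alerts)  # → [False, False, False, True, False]
```

The fourth event is the fourth within 60 seconds, so it fires; by t=120 the earlier events have aged out and the counter resets.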

Step 6: Implement Preventive Controls

Deploy bucket policies and VPC endpoint policies to restrict data movement paths.

# VPC endpoint policy restricting S3 access to specific buckets
aws ec2 modify-vpc-endpoint \
  --vpc-endpoint-id vpce-ENDPOINT_ID \
  --policy-document '{
    "Statement": [{
      "Sid": "RestrictToOwnBuckets",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::approved-bucket-1/*", "arn:aws:s3:::approved-bucket-2/*"]
    }]
  }'

# Bucket policy denying access from outside the VPC
aws s3api put-bucket-policy --bucket sensitive-data-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyNonVpcAccess",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::sensitive-data-bucket/*",
    "Condition": {
      "StringNotEquals": {
        "aws:sourceVpce": "vpce-ENDPOINT_ID"
      }
    }
  }]
}'
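When auditing many buckets, it helps to verify programmatically that the VPC-only deny is actually in place. A rough sketch of such a check, keyed on the `aws:sourceVpce` condition used above (the `vpce-0abc123` endpoint ID is a placeholder):

```python
import json

def enforces_vpc_only_read(policy_json):
    """True if any statement denies s3:GetObject without the approved VPC endpoint."""
    for stmt in json.loads(policy_json).get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        cond = stmt.get("Condition", {}).get("StringNotEquals", {})
        if (stmt.get("Effect") == "Deny"
                and "s3:GetObject" in actions
                and "aws:sourceVpce" in cond):
            return True
    return False

policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyNonVpcAccess",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::sensitive-data-bucket/*",
        "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-0abc123"}},
    }],
})
print(enforces_vpc_only_read(policy))  # → True
```

This is a presence check, not a full policy evaluation; wildcard actions, NotAction, and condition-key case variants would need IAM's actual evaluation logic (e.g. via the IAM policy simulator).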

Key Concepts

  • S3 Data Events: CloudTrail object-level logging that captures GetObject, PutObject, DeleteObject, and CopyObject API calls with request details
  • GuardDuty S3 Protection: threat detection feature analyzing CloudTrail S3 data events to identify anomalous access patterns and exfiltration attempts
  • Amazon Macie: data security service that discovers and classifies sensitive data in S3 and generates findings for data exposure risks
  • VPC Endpoint Policy: access control policy on an S3 VPC endpoint that restricts which buckets and actions can be accessed through the endpoint
  • Data Exfiltration: unauthorized transfer of data from an organization's S3 storage to an external location controlled by an attacker
  • Anomalous Behavior Detection: machine-learning-based identification of S3 access patterns that deviate from established baselines for a principal

Tools & Systems

  • AWS CloudTrail: Audit logging of S3 object-level operations for forensic analysis and anomaly detection
  • Amazon GuardDuty: ML-based threat detection with S3-specific finding types for exfiltration and unauthorized access
  • Amazon Macie: Sensitive data discovery and classification for correlating access anomalies with data sensitivity
  • Amazon Athena: SQL query engine for analyzing CloudTrail logs at scale to identify bulk download patterns
  • CloudWatch Logs Insights: Real-time log analysis for building detection queries against CloudTrail data

Common Scenarios

Scenario: Compromised IAM Credentials Used for Bulk S3 Data Download

Context: GuardDuty reports an Exfiltration:S3/ObjectRead.Unusual finding indicating that a developer's access key is downloading thousands of objects from a sensitive data bucket at 3 AM from an IP address in a foreign country.

Approach:

  1. Immediately deactivate the compromised access key
  2. Query CloudTrail for all S3 actions by the compromised principal in the last 72 hours
  3. Identify which buckets and objects were accessed using Athena queries
  4. Cross-reference accessed objects with Macie classifications to assess data sensitivity
  5. Check for CopyObject calls to external accounts (cross-account exfiltration)
  6. Review how the credentials were compromised (TruffleHog scan, phishing investigation)
  7. Implement VPC endpoint policies to restrict future S3 access to approved network paths

Pitfalls: CloudTrail S3 data events can generate massive log volume. Use Athena with partitioned tables rather than CloudWatch Logs Insights for queries spanning more than 24 hours. GuardDuty baseline learning requires 7-14 days, so new accounts may generate false positives for normal access patterns.
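Step 2 of the approach boils down to ordering one principal's S3 actions chronologically, which the report's timeline section then summarizes. A sketch over already-parsed CloudTrail events (field names follow the CloudTrail record format; the sample values are illustrative):

```python
def build_timeline(events, principal_arn):
    """Chronological (time, action, bucket) rows for one principal's S3 activity."""
    return sorted(
        (e["eventTime"], e["eventName"],
         e.get("requestParameters", {}).get("bucketName", "-"))
        for e in events
        if e["userIdentity"]["arn"] == principal_arn
    )

jane = "arn:aws:iam::123456789012:user/developer-jane"
events = [
    {"eventTime": "2026-02-23T02:47:12Z", "eventName": "GetObject",
     "userIdentity": {"arn": jane},
     "requestParameters": {"bucketName": "sensitive-data-bucket"}},
    {"eventTime": "2026-02-23T02:45:03Z", "eventName": "ListObjects",
     "userIdentity": {"arn": jane},
     "requestParameters": {"bucketName": "sensitive-data-bucket"}},
]
for row in build_timeline(events, jane):
    print(*row)  # enumeration first, then the download
```

Sorting on the ISO-8601 `eventTime` string is safe because it sorts lexicographically in chronological order (within a single UTC offset).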

Output Format

S3 Data Exfiltration Investigation Report
============================================
Account: 123456789012
Detection Source: GuardDuty Exfiltration:S3/ObjectRead.Unusual
Investigation Date: 2026-02-23

INCIDENT TIMELINE:
  2026-02-23 02:47 UTC - First anomalous GetObject from 185.x.x.x
  2026-02-23 02:47-04:12 UTC - 12,847 GetObject requests
  2026-02-23 04:15 UTC - GuardDuty finding generated
  2026-02-23 04:20 UTC - PagerDuty alert received by SOC
  2026-02-23 04:25 UTC - Access key deactivated

COMPROMISED PRINCIPAL:
  ARN: arn:aws:iam::123456789012:user/developer-jane
  Access Key: AKIA...WXYZ
  Source IP: 185.x.x.x (Tor exit node)

DATA IMPACT ASSESSMENT:
  Buckets accessed: 3
  Objects downloaded: 12,847
  Total data volume: 4.7 GB
  Sensitive data types: PII (SSN, email), Financial (credit card)
  Macie severity: CRITICAL

CONTAINMENT ACTIONS:
  [x] Access key deactivated
  [x] User password reset and MFA re-enrolled
  [x] VPC endpoint policy applied to sensitive buckets
  [x] Bucket policy restricting to VPC-only access
  [x] TruffleHog scan initiated on developer repositories

Verification Criteria

Confirm successful execution by validating:

  • [ ] All prerequisite tools and access requirements are satisfied
  • [ ] Each workflow step completed without errors
  • [ ] Output matches expected format and contains expected data
  • [ ] No security warnings or misconfigurations detected
  • [ ] Results are documented and evidence is preserved for audit

Compliance Framework Mapping

This skill supports compliance evidence collection across multiple frameworks:

  • SOC 2: CC6.1 (Logical Access), CC6.6 (System Boundaries), CC7.1 (Monitoring)
  • ISO 27001: A.8.1 (Asset Management), A.13.1 (Network Security), A.14.1 (System Acquisition)
  • NIST 800-53: AC-3 (Access Enforcement), SC-7 (Boundary Protection), CM-7 (Least Functionality)
  • NIST CSF: PR.AC (Access Control), PR.DS (Data Security), DE.CM (Continuous Monitoring)

Claw GRC Tip: When this skill is executed by a registered agent, compliance evidence is automatically captured and mapped to the relevant controls in your active frameworks.

Deploying This Skill with Claw GRC

Agent Execution

Register this skill with your Claw GRC agent for automated execution:

# Install via CLI
npx claw-grc skills add detecting-s3-data-exfiltration-attempts

# Or load dynamically via MCP
grc.load_skill("detecting-s3-data-exfiltration-attempts")

Audit Trail Integration

When executed through Claw GRC, every step of this skill generates tamper-evident audit records:

  • SHA-256 chain hashing ensures no step can be modified after execution
  • Evidence artifacts (configs, scan results, logs) are automatically attached to relevant controls
  • Trust score impact — successful execution increases your agent's trust score

Continuous Compliance

Schedule this skill for recurring execution to maintain continuous compliance posture. Claw GRC monitors for drift and alerts when re-execution is needed.

Use with Claw GRC Agents

This skill is fully compatible with Claw GRC's autonomous agent system. Deploy it to any registered agent via MCP, and every execution will be logged in the tamper-evident audit trail.

# Load this skill in your agent
npx claw-grc skills add detecting-s3-data-exfiltration-attempts
# Or via MCP
grc.load_skill("detecting-s3-data-exfiltration-attempts")

Tags

cloud-security · aws · s3 · data-exfiltration · guardduty · macie · threat-detection

Related Skills

  • Auditing AWS S3 Bucket Permissions (Cloud Security · 6m · intermediate)
  • Detecting AWS CloudTrail Anomalies (Cloud Security · 3m · intermediate)
  • Detecting AWS GuardDuty Findings Automation (Cloud Security · 4m · intermediate)
  • Detecting Cloud Cryptomining Activity (Cloud Security · 7m · intermediate)
  • Detecting Compromised Cloud Credentials (Cloud Security · 7m · intermediate)
  • Implementing AWS Macie for Data Classification (Cloud Security · 3m · intermediate)

Skill Details

  • Domain: Cloud Security
  • Difficulty: Intermediate
  • Read Time: 6 min
  • Code Examples: 7
