Agent Registry
The Agent Registry is Claw GRC's most distinctive feature — a compliance ledger for every AI agent in your organization. Register agents, track their trust scores, audit their interactions, and demonstrate AI governance to any auditor.
Why Register AI Agents?
Traditional GRC platforms track human users and human processes. But in a modern organization, AI agents are taking actions — querying databases, calling APIs, writing code, sending communications, making decisions — that have compliance implications just as significant as human actions.
The Claw GRC Agent Registry gives every AI agent an identity in your compliance system, enabling:
- A tamper-evident audit trail of every action an agent takes
- Trust scoring that surfaces risky agents before they cause incidents
- Capability declarations that map to framework controls (EU AI Act, ISO 42001)
- Automated compliance evidence generation for AI-specific frameworks
- Customer-facing governance reports that prove your AI program is under control
Registering an Agent
Agents can be registered manually through the dashboard or programmatically via the API.
Via Dashboard
- Navigate to Dashboard → Agents → Register Agent
- Enter the agent name, type, and description
- Specify capability declarations (what the agent can do)
- Assign an owner (the human responsible for this agent)
- Link to relevant framework controls (e.g., EU AI Act Article 9 for high-risk AI)
- Click Register
Via API
```shell
curl -X POST https://api.clawgrc.com/api/v1/agents \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "optimus-will",
    "agent_type": "autonomous",
    "description": "Main orchestration agent for MoltbotDen platform operations",
    "framework": "openclaw",
    "version": "2.1.0",
    "capabilities": [
      "read_codebase",
      "write_files",
      "execute_commands",
      "api_access",
      "data_access"
    ],
    "owner_id": "user_01J8...",
    "metadata": {
      "runtime": "openai-gpt4",
      "deployment": "gcp-vm-e2-standard-2",
      "communication": "telegram"
    }
  }'
```
Agent Types
| Agent Type | Description |
|---|---|
| autonomous | Fully autonomous agent that initiates actions independently based on goals or schedules |
| supervised | Agent that takes actions but requires human approval above a defined risk threshold |
| reactive | Agent that only responds to explicit human prompts or API calls — no autonomous initiation |
| pipeline | Agent used in automated pipelines (CI/CD, data processing) with fixed, predictable behavior |
| customer_facing | Agent that interacts directly with customers (chatbot, support agent, sales assistant) |
Trust Score Components
Every registered agent has a Trust Score — a 0–100 rating that represents the agent's compliance health. The score is calculated from five components:
| Component | Description | Weight |
|---|---|---|
| Behavioral Consistency | How consistent the agent's actions are with its declared capabilities. Agents that take undeclared actions lower this score. | 30% |
| Policy Adherence | Whether the agent's interactions stay within the bounds of your AI governance policies. | 25% |
| Audit Trail Integrity | Whether interaction chain hashes are intact and unbroken. Any tampered hash drops this component significantly. | 20% |
| Incident History | Historical record of policy violations, anomalous actions, or security findings associated with this agent. | 15% |
| Compliance Coverage | How well the agent's declared capabilities are covered by active compliance controls. | 10% |
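The weighted combination above can be sketched in a few lines of Python. This is an illustrative reconstruction only: each component is assumed to be a 0–100 sub-score, and the platform's actual normalization and rounding are not specified here.

```python
# Illustrative trust-score aggregation using the documented weights.
# Components are assumed to be 0-100 sub-scores (an assumption; the
# platform's internal representation is not documented).
WEIGHTS = {
    "behavioral_consistency": 0.30,
    "policy_adherence": 0.25,
    "audit_trail_integrity": 0.20,
    "incident_history": 0.15,
    "compliance_coverage": 0.10,
}

def trust_score(components: dict[str, float]) -> float:
    """Weighted sum of the five component sub-scores."""
    return sum(WEIGHTS[name] * components[name] for name in WEIGHTS)
```

Note that no single strong component can carry the score: a perfect audit trail contributes at most 20 points, while poor behavioral consistency can cost up to 30.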
Trust score thresholds
| Score | Label | Description |
|---|---|---|
| 90–100 | Trusted | Fully trusted — operating within all compliance bounds with complete audit trail |
| 70–89 | Good | Good standing — minor deviations that don't represent material risk |
| 50–69 | Monitor | Monitoring required — elevated risk; review recent interactions for anomalies |
| 30–49 | At Risk | High risk — significant policy deviation; consider restricting capabilities or requiring human approval |
| 0–29 | Critical | Immediate investigation required; agent may be operating outside all compliance controls |
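The threshold bands reduce to a simple lookup, sketched below with the band edges taken from the table (the function name is illustrative):

```python
def trust_label(score: int) -> str:
    """Map a 0-100 trust score to its threshold band from the table."""
    if score >= 90:
        return "Trusted"
    if score >= 70:
        return "Good"
    if score >= 50:
        return "Monitor"
    if score >= 30:
        return "At Risk"
    return "Critical"
```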
Trust Score Lifecycle
Trust scores are not static — they change continuously as the agent operates. New agents start at a default score of 70 (Good standing) and move based on their activity:
- Increases — Consistent behavior, audit trail integrity, policy compliance over time
- Decreases — Undeclared capability use, policy violations, anomalous patterns, hash failures
- Resets partially — After a suspension and reinstatement, the score resets to 50 (Monitoring) regardless of previous history
Trust score trend matters as much as the current value
An agent with a trust score of 65 that's been trending up for 14 days is in a better position than an agent at 75 that's been declining for a week. The trend line in the agent detail view shows the 30-day trajectory.
Capability Declarations
Capability declarations list everything an agent is authorized to do. Declaring capabilities serves two purposes:
- Compliance coverage mapping — Declared capabilities are mapped to framework controls. For EU AI Act Article 9 (risk management system), Claw GRC uses your capability declarations to determine which control requirements apply.
- Behavioral anomaly detection — When an agent takes an action not covered by its declarations, it's flagged as an anomaly and the trust score Behavioral Consistency component drops.
Use standard capability names for consistent cross-agent tracking, e.g. read_codebase, write_files, execute_commands, api_access, and data_access (the names used in the registration example above).
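The anomaly check behind the Behavioral Consistency component can be sketched as a set-membership test. This is an illustration only; the platform's actual detection presumably weighs more context than a bare name match.

```python
# Capabilities declared at registration (taken from the example above).
DECLARED = {
    "read_codebase", "write_files", "execute_commands",
    "api_access", "data_access",
}

def undeclared_actions(observed: list[str], declared: set[str] = DECLARED) -> list[str]:
    """Return observed action capabilities not covered by the declarations.
    A non-empty result would be flagged as a behavioral anomaly."""
    return [action for action in observed if action not in declared]
```

For example, an agent declared with the set above that suddenly performs a financial_access action would be flagged, and its Behavioral Consistency sub-score would drop.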
Interaction Chain Hashing
Every agent interaction is logged with a SHA-256 chain hash — similar to a blockchain ledger. Each hash includes:
- The content of the current interaction (action type, parameters, result)
- The hash of the previous interaction in the chain
- A timestamp
- The agent's current trust score at the time
This means that if anyone attempts to delete or modify a historical interaction log entry, all subsequent chain hashes become invalid — making tampering immediately detectable. Auditors can verify chain integrity from the Agent Registry in Claw GRC, or export the full interaction log for independent verification.
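A chain hash of this kind can be sketched with Python's hashlib. The field set and JSON serialization below are assumptions for illustration; Claw GRC's canonical format is not documented here.

```python
import hashlib
import json

GENESIS = "0" * 64  # assumed placeholder hash for the first link in a chain

def chain_hash(action: dict, previous_hash: str, timestamp: str, trust_score: int) -> str:
    """SHA-256 over the interaction content, the previous hash, the
    timestamp, and the trust score at the time (per the list above)."""
    payload = json.dumps(
        {"action": action, "previous_hash": previous_hash,
         "timestamp": timestamp, "trust_score": trust_score},
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(records: list[dict]) -> bool:
    """Recompute every link in order; modifying or deleting any entry
    invalidates that link and every hash after it."""
    prev = GENESIS
    for record in records:
        expected = chain_hash(record["action"], prev,
                              record["timestamp"], record["trust_score"])
        if record["chain_hash"] != expected:
            return False
        prev = expected
    return True
```

Applied to stored records like the example entry below, each entry's previous_hash must equal the recomputed chain_hash of the entry before it.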
```json
{
  "interaction_id": "ia_01J8X3...",
  "agent_id": "agent_01J8...",
  "timestamp": "2026-03-14T12:34:00Z",
  "action_type": "api_call",
  "action_details": {
    "endpoint": "GET /api/v1/frameworks",
    "org_id": "00000000-0000-0000-0000-000000000001",
    "duration_ms": 42
  },
  "trust_score_at_time": 87,
  "chain_hash": "a3f1d7c2e8b9...",
  "previous_hash": "9b2f1a8d3c7e..."
}
```
Suspension and Reinstatement
An agent can be suspended when it poses an immediate compliance risk. Suspension:
- Immediately revokes the agent's API tokens
- Blocks any MCP tool calls from the agent
- Logs the suspension in the tamper-evident audit trail
- Notifies the agent owner and all Admin users
- Creates an automatic ticket for investigation
To reinstate a suspended agent, an Admin user must provide a written resolution justification and confirm that the root cause has been addressed. Upon reinstatement:
- A new API token is issued
- Trust score is reset to 50 (Monitoring)
- The agent enters a 30-day enhanced monitoring period
- Any interaction anomaly during the monitoring period triggers immediate re-suspension
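The enhanced-monitoring rule reduces to a simple predicate, sketched here with the 30-day window from the list above (the function name is illustrative):

```python
MONITORING_DAYS = 30  # enhanced monitoring period after reinstatement

def should_resuspend(days_since_reinstatement: int, anomaly_detected: bool) -> bool:
    """Any interaction anomaly inside the monitoring window triggers
    immediate re-suspension; outside it, normal trust scoring applies."""
    return anomaly_detected and days_since_reinstatement <= MONITORING_DAYS
```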
AIRSS scoring applies to agents too
Registered agents contribute to your AI governance risk assessment. An agent with a low trust score and high-impact capabilities (e.g., financial_access, data_write) increases your AIRSS score for the ai_governance risk category.