EU AI Act Compliance

AgentMD helps you align with the EU AI Act: it maps agent executions onto the Act's risk levels and supports the corresponding compliance workflows for AI agents.

Risk Classification

The EU AI Act classifies AI systems by risk level. AgentMD executions fall into these categories:

  • Minimal risk — Build, test, lint, and validation commands. No human oversight required. Most AgentMD runs fall here.
  • Limited risk — Deploy, migration, or production changes. Transparency and human-in-the-loop recommended.
  • High risk — Critical infrastructure, safety components. Requires risk management, human oversight, and audit trails.

AgentMD Compliance Features

  • Deterministic workflows — Commands are explicit and version-controlled in AGENTS.md. No opaque autonomous behavior.
  • Permission boundaries — permissions.shell allow/deny lists limit agent scope.
  • Guardrails — YAML frontmatter declares constraints (e.g., "Never modify production").
  • Human-in-the-loop — Policy rules with approval: always for sensitive operations.
  • Audit trails — Execution history and audit logs for traceability.
  • Kill switch — Cancel running executions from the dashboard.
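Several of these features come together in AGENTS.md frontmatter. The sketch below is illustrative only: permissions.shell and approval: always appear in this document, but the surrounding field names (guardrails, policy, match) are assumptions, not confirmed AgentMD schema.

```yaml
# Hypothetical AGENTS.md frontmatter sketch — exact AgentMD schema may differ.
permissions:
  shell:
    default: deny            # deny-by-default, with explicit allowlists
    allow:
      - npm test
      - npm run lint
    deny:
      - rm -rf
guardrails:
  - "Never modify production"   # declared constraint, per the Guardrails feature
policy:
  - match: "deploy*"            # assumed pattern-matching field
    approval: always            # human-in-the-loop for sensitive operations
```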

Compliance Workflow

  1. Classify your agent use case (minimal, limited, or high risk).
  2. Use permissions.shell.default: deny with explicit allowlists.
  3. Enable human approval for deploy, migrate, or production changes.
  4. Review execution history and success rates regularly.
  5. Export traces via OpenTelemetry for external governance platforms.
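Step 5 can be wired up with a standard OpenTelemetry Collector, which receives OTLP traces and forwards them to an external governance or observability backend. The config below is a minimal, generic sketch; the endpoints are placeholders, and AgentMD's exact OTLP export target is not specified here.

```yaml
# Minimal OpenTelemetry Collector pipeline: ingest OTLP traces over gRPC,
# batch them, and forward to an external backend.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
processors:
  batch: {}
exporters:
  otlphttp:
    endpoint: https://governance.example.com   # placeholder backend URL
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```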
