EU AI Act Compliance Guide
Ensuring AI compliance with EU regulations.
EU AI Act, Risk Classification, CE Marking
Best for
- Risk classification of AI systems across the four-tier framework (prohibited, high-risk, limited-risk, minimal-risk)
- Creating technical documentation packages for high-risk AI systems, including risk assessments and conformity declarations
- Mapping General Purpose AI Model (GPAI) obligations for foundation model providers above compute thresholds
- Building AI governance frameworks with role assignments for providers, deployers, distributors, and importers
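The four-tier framework in the first bullet can be sketched as a simple decision flow. A minimal sketch in Python: the tier names follow the AI Act's framework (Article 5 prohibitions, Article 6/Annex III high-risk, transparency-only limited-risk, everything else minimal-risk), but the function signature and inputs are illustrative assumptions, not legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"      # Article 5 practices (e.g. social scoring)
    HIGH_RISK = "high-risk"        # Article 6 / Annex III use cases
    LIMITED_RISK = "limited-risk"  # transparency obligations (e.g. chatbots)
    MINIMAL_RISK = "minimal-risk"  # no specific AI Act obligations

def classify(is_prohibited_practice: bool,
             in_annex_iii: bool,
             interacts_with_humans: bool) -> RiskTier:
    """Illustrative tier assignment; a hypothetical helper,
    not a substitute for legal classification."""
    if is_prohibited_practice:
        return RiskTier.PROHIBITED
    if in_annex_iii:
        return RiskTier.HIGH_RISK
    if interacts_with_humans:
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK

# Example: a CV-screening system (an Annex III employment use case)
print(classify(False, True, True).value)  # high-risk
```

In practice the tiers are evaluated in this order because they are not mutually exclusive: a system matching an Article 5 prohibition is out of scope for market entry regardless of any Annex III classification.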
What you'll get
- Risk classification matrix with detailed analysis of why your AI system falls into specific Annex III categories, with supporting legal citations
- Step-by-step compliance checklist with required documentation, responsible parties, and timeline milestones for market entry
- Governance framework template with role definitions, decision authorities, and oversight mechanisms mapped to specific AI Act obligations
Input
Detailed information about your AI system, including purpose, risk level, deployment context, and whether it's placed on the EU market or used within the EU.
Output
Structured compliance roadmap with risk classification, specific regulatory obligations, required documentation templates, and implementation timelines aligned with EU AI Act requirements.
What's inside
“You are an EU AI Act Compliance Specialist. You bridge legal compliance requirements and technical implementation, helping organizations build AI systems that meet EU regulatory obligations while maintaining innovation velocity. - Risk-classify every AI system using Articles 5, 6, and Annex III befo...”
Not designed for
- US AI regulation compliance (NIST AI RMF, state-level AI laws, or sector-specific rules)
- Non-EU AI ethics frameworks or voluntary AI principles without legal force
- Technical implementation of AI algorithms or model development guidance
- AI patent filing or intellectual property protection strategies
SupaScore
89
Evidence Policy
Standard: no explicit evidence policy.
Research Foundation: 7 sources (3 official docs, 2 industry frameworks, 1 academic, 1 web)
This skill was developed through independent research and synthesis. SupaSkills is not affiliated with or endorsed by any cited author or organisation.
Version History
v5.5 final distill
Pipeline v4: rebuilt with 3 helper skills
Initial release
Prerequisites
Use these skills first for best results.
Works well with
Need more depth?
Specialist skills that go deeper in areas this skill touches.
Common Workflows
Comprehensive AI Compliance Implementation
End-to-end workflow from initial AI bias assessment, through EU AI Act compliance mapping, to operational controls implementation and ongoing compliance program management.
© 2026 Kill The Dragon GmbH. This skill and its system prompt are protected by copyright. Unauthorised redistribution is prohibited.