AI & Machine Learning · Technology · Platinum

Auditing AI systems for fairness and compliance.

AI Ethics & Bias Auditor

NIST AI RMF, EU AI Act, fairness metrics

Expert · v5.0

Best for

  • Comprehensive fairness assessment of hiring ML algorithms using demographic parity and equalized odds
  • EU AI Act high-risk system compliance audit with NIST AI RMF mapping
  • Training data bias detection and historical discrimination pattern analysis
  • Algorithmic impact assessment documentation for regulatory submissions
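The demographic parity and equalized odds metrics named above can be sketched in a few lines. This is an illustrative implementation on fabricated hiring data, not the auditor's actual tooling; function names and the toy arrays are assumptions for the example.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Absolute gap in positive-prediction (selection) rates between two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

def equalized_odds_diff(y_true, y_pred, group):
    """Largest gap in TPR or FPR between two groups (equalized odds violation)."""
    gaps = []
    for label in (1, 0):  # label=1 gives the TPR gap, label=0 the FPR gap
        mask = y_true == label
        rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Fabricated hiring data: 1 = hired (pred) / qualified (true), two groups "a"/"b"
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(demographic_parity_diff(y_pred, group))          # 0.5 selection-rate gap
print(equalized_odds_diff(y_true, y_pred, group))      # 0.0 (error rates match)
```

Note the tension this makes visible: the toy predictions satisfy equalized odds exactly (both groups have identical TPR/FPR) while still showing a large demographic parity gap, which is why the skill insists on multi-metric tradeoff analysis rather than a single score.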

What you'll get

  • Quantitative bias assessment with calculated fairness metrics (e.g., demographic parity: 0.73, equalized odds violation: 0.12) across protected attributes, with statistical significance testing
  • EU AI Act compliance matrix mapping system components to regulatory requirements with risk classification and mandatory documentation gaps
  • Structured bias mitigation roadmap with prioritized recommendations, implementation complexity scores, and expected fairness metric improvements
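The "statistical significance testing" promised above is commonly a two-proportion z-test on group selection rates; the following is a minimal sketch under that assumption, with fabricated sample counts.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in selection rates, pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided normal approximation
    return z, p_value

# Fabricated audit sample: 90/200 group-A applicants selected vs 60/200 group-B
z, p = two_proportion_z(90, 200, 60, 200)
print(f"z = {z:.2f}, p = {p:.4f}")  # gap is significant at conventional thresholds
```

A significant result only establishes that the rate gap is unlikely to be sampling noise; whether it constitutes unlawful or unacceptable disparity is exactly the tradeoff question the audit report documents.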
Expects

Detailed information about the AI system including purpose, training data, model architecture, affected stakeholders, and relevant protected attributes for comprehensive bias assessment.

Returns

Structured audit report with quantified bias metrics, regulatory compliance assessment, documented fairness tradeoffs, and specific mitigation recommendations with implementation priorities.

What's inside

You are an AI Ethics & Bias Auditor. You conduct rigorous, multidimensional fairness assessments of AI/ML systems, combining quantitative metrics with sociotechnical analysis to identify, measure, and mitigate bias.
  • Move beyond single fairness metrics to explicit tradeoff analysis: document which ma...

Covers

What You Do Differently · Methodology · Watch For

Not designed for

  • Building ML models from scratch or writing production code
  • Legal advice on AI regulations or liability determinations
  • General data science or statistical modeling tasks
  • Marketing AI systems as 'bias-free' or providing business justifications for unfair outcomes

SupaScore

88.43 / 100

  • Research Quality (15%): 9.25
  • Prompt Engineering (25%): 8.85
  • Practical Utility (15%): 8.5
  • Completeness (10%): 9.4
  • User Satisfaction (20%): 8.65
  • Decision Usefulness (15%): 8.65

Evidence Policy

Standard: no explicit evidence policy.

ai-ethics · bias-detection · fairness-metrics · algorithmic-audit · responsible-ai · eu-ai-act · nist-ai-rmf · model-cards · impact-assessment · demographic-parity · equalized-odds · debiasing

Research Foundation: 8 sources (1 book, 4 official docs, 3 academic)

This skill was developed through independent research and synthesis. SupaSkills is not affiliated with or endorsed by any cited author or organisation.

Version History

v5.0 · 3/25/2026

v5.5 final distill

v2.0 · 2/19/2026

Pipeline v4: rebuilt with 3 helper skills

v1.0.0 · 2/14/2026

Initial release

Works well with

Need more depth?

Specialist skills that go deeper in areas this skill touches.

Common Workflows

Comprehensive AI Governance Pipeline

End-to-end AI governance implementation starting with bias assessment, building ethical frameworks, and establishing governance structures

© 2026 Kill The Dragon GmbH. This skill and its system prompt are protected by copyright. Unauthorised redistribution is prohibited. Terms of Service · Legal Notice