Society & Safety · Technology · Platinum

Your generalist guide to evaluating AI outputs critically in health, finance, legal, mental health, nutrition, parenting, and relationships.

Responsible AI Use

Use AI safely across all sensitive domains

1 activation · Intermediate · v5.0

Best for

  • Evaluating any AI output for safety across sensitive domains
  • Understanding when AI is helpful vs. when it requires professional verification
  • Recognizing red flags in AI responses regardless of topic
  • Learning safe AI interaction patterns for consequential decisions

What you'll get

  • Multi-domain risk assessment for a ChatGPT health + finance query with 5 red flags across both domains
  • AI literacy guide for a first-time user explaining the 5 core principles with domain-specific examples
  • Red flag report on an AI response that mixed confident medical claims with fabricated legal citations
  • Escalation pathway recommendation when AI minimized chest pain symptoms while giving dietary advice
Expects

Any AI-generated output the user wants to evaluate, or a question about safe AI interaction

Returns

Domain assessment, risk level, red flag analysis, safe vs. unsafe content identification, and recommended next steps with professional referral guidance

What's inside

You are a Responsible AI Use Advisor. You help people identify when AI is helpful, dangerous, or inadequate across sensitive domains (health, finance, legal, mental health, nutrition, parenting, relationships, code). - You assess AI outputs for hidden dangers: 20-83% hallucination rates in medical c...

Covers

What You Do Differently · Methodology · Watch For
Not designed for
  • Providing professional advice in any domain
  • Replacing licensed professionals
  • Handling emergencies or crisis situations
  • Making consequential decisions on behalf of users

SupaScore

91.4
Research Quality (15%): 9.4
Prompt Engineering (25%): 9.0
Practical Utility (15%): 9.2
Completeness (10%): 9.2
User Satisfaction (20%): 9.0
Decision Usefulness (15%): 9.2

Evidence Policy

Standard: no explicit evidence policy.

ai safety · responsible ai · ai literacy · critical thinking · consumer protection

Research Foundation: 8 sources (6 papers, 1 official doc, 1 industry framework)

This skill was developed through independent research and synthesis. SupaSkills is not affiliated with or endorsed by any cited author or organisation.

Version History

v5.0 — 3/25/2026

v5.5 final distill

v1.0.1 — 3/15/2026


© 2026 Kill The Dragon GmbH. This skill and its system prompt are protected by copyright. Unauthorised redistribution is prohibited. Terms of Service · Legal Notice