Apache Airflow Engineer
Expert in Apache Airflow — DAG design, task orchestration, monitoring, scaling, ELT pipeline automation, and production operations with best practices for reliability.
SupaScore: 84.9

Best for
- Design and implement DAG architectures for ELT data pipelines orchestrating dbt, Spark, and cloud data warehouse transformations
- Troubleshoot failed Airflow task dependencies, retry logic, and cross-DAG scheduling issues in production environments
- Optimize Airflow deployment on Kubernetes with KubernetesPodOperator for isolated task execution and horizontal scaling
- Implement monitoring, alerting, and SLA management for data pipeline orchestration with proper observability patterns
- Migrate legacy cron jobs and ETL workflows to Airflow with proper idempotency and recovery mechanisms
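To make the DAG-design and retry-logic points above concrete, here is a minimal TaskFlow-style sketch of an ELT DAG with default retry behaviour and a linear dependency chain. All task names, the schedule, and the S3 path are illustrative assumptions for this example, not part of the skill's deliverables; it assumes Airflow 2.x.

```python
# Hypothetical ELT DAG sketch (illustrative names, not a drop-in file).
# Assumes Airflow 2.x with the TaskFlow API installed.
from datetime import timedelta

import pendulum
from airflow.decorators import dag, task


@dag(
    schedule="@daily",
    start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
    catchup=False,
    default_args={
        "retries": 2,                         # retry transient failures
        "retry_delay": timedelta(minutes=5),  # wait between attempts
    },
    tags=["elt", "example"],
)
def daily_orders_elt():
    @task
    def extract() -> str:
        # Pull raw data and return a reference to it (path is made up).
        return "s3://example-bucket/raw/orders.json"

    @task
    def load(raw_path: str) -> str:
        # Load the raw file into a warehouse staging table.
        return "staging.orders"

    @task
    def transform(staging_table: str) -> None:
        # Trigger downstream transformations (e.g. a dbt run).
        pass

    # Dependency chain: extract -> load -> transform, wired via return values.
    transform(load(extract()))


daily_orders_elt()
```

Passing return values between `@task` functions is what creates the dependency edges here; Airflow moves the values via XCom, so tasks should return small references (paths, table names) rather than data itself.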
What you'll get
- Complete DAG Python file with TaskFlow API decorators, proper task dependencies, templated variables, and error handling configuration
- Airflow deployment architecture diagram with scheduler, executor, and database components plus Kubernetes scaling configuration
- Monitoring dashboard configuration with task success rates, SLA violations, and pipeline health metrics with alerting rules
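The monitoring deliverable above centres on two metrics: task success rate and SLA violations. As a plain-Python sketch of how such metrics might be derived, the record shape and function names below are assumptions for illustration, not an Airflow API:

```python
# Hypothetical metric helpers for a monitoring dashboard; the TaskRun
# record shape is made up for this sketch and is not an Airflow class.
from dataclasses import dataclass
from datetime import timedelta


@dataclass
class TaskRun:
    task_id: str
    state: str           # "success" or "failed"
    duration: timedelta  # wall-clock run time


def success_rate(runs: list[TaskRun]) -> float:
    """Fraction of runs that ended in success (0.0 if there are no runs)."""
    if not runs:
        return 0.0
    return sum(r.state == "success" for r in runs) / len(runs)


def sla_violations(runs: list[TaskRun], sla: timedelta) -> list[str]:
    """Task ids whose run time exceeded the SLA threshold."""
    return [r.task_id for r in runs if r.duration > sla]


runs = [
    TaskRun("extract", "success", timedelta(minutes=3)),
    TaskRun("load", "success", timedelta(minutes=12)),
    TaskRun("transform", "failed", timedelta(minutes=1)),
]
print(success_rate(runs))                               # ≈ 0.67
print(sla_violations(runs, sla=timedelta(minutes=10)))  # → ['load']
```

In a real deployment these numbers would come from Airflow's metadata database or its exported metrics rather than hand-built records; the point is only which quantities the dashboard tracks.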
Not designed for
- Heavy data processing or transformation execution (Airflow is an orchestrator, not a processing engine)
- Real-time streaming data processing (Airflow is designed for batch workflows with scheduled intervals)
- Simple single-step jobs that don't require orchestration or dependency management
- Airflow infrastructure provisioning without understanding DAG design patterns and operational requirements
Input
Specific data pipeline requirements, including data sources, transformation steps, scheduling frequency, SLA requirements, and dependency relationships between tasks.

Output
Complete DAG implementation with task definitions, dependency chains, error handling, monitoring configuration, and operational documentation following Airflow best practices.
Evidence Policy
Standard: no explicit evidence policy.
Research Foundation: 6 sources (4 official docs, 1 web, 1 book)
This skill was developed through independent research and synthesis. SupaSkills is not affiliated with or endorsed by any cited author or organisation.
Version History
Initial version
Prerequisites
Use these skills first for best results.
Works well with
Need more depth?
Specialist skills that go deeper in areas this skill touches.
Common Workflows
Modern Data Stack Pipeline Implementation
End-to-end data pipeline design starting with architecture planning, implementing Airflow orchestration, configuring dbt transformations, and establishing monitoring
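The dbt-orchestration step in this workflow can be sketched as a small DAG that shells out to dbt, running models before tests. The project directory path is a made-up placeholder, and this assumes Airflow 2.x with dbt available on the worker's PATH; it is one common pattern, not the only way to integrate the two tools.

```python
# Hypothetical Airflow + dbt orchestration sketch. Assumes Airflow 2.x
# and a dbt project at /opt/dbt/project (placeholder path).
import pendulum
from airflow.decorators import dag
from airflow.operators.bash import BashOperator


@dag(
    schedule="@daily",
    start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
    catchup=False,
    tags=["dbt", "example"],
)
def dbt_pipeline():
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/project",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/project",
    )
    # Build the models first, then validate them; dbt_test only runs
    # if dbt_run succeeds.
    dbt_run >> dbt_test


dbt_pipeline()
```

Separating `dbt run` from `dbt test` gives each step its own retries, logs, and SLA, which is the main reason to orchestrate dbt from Airflow rather than as one opaque command.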
Activate this skill in Claude Code
Sign up for free to access the full system prompt via REST API or MCP.
Start Free to Activate This Skill

© 2026 Kill The Dragon GmbH. This skill and its system prompt are protected by copyright. Unauthorised redistribution is prohibited. Terms of Service · Legal Notice