Apache Spark Data Processing Expert
Expert guidance for building and optimizing Apache Spark data processing pipelines, from PySpark development to cluster tuning and lakehouse architecture.
SupaScore: 84.1
Best for
- Optimizing PySpark ETL pipelines processing TB-scale data lakes
- Tuning Spark SQL queries with partition skew and broadcast join optimization (sketched in the first example after this list)
- Implementing Delta Lake ACID transactions with merge upsert patterns (second example after this list)
- Designing streaming pipelines with exactly-once semantics and checkpointing (third example after this list)
- Troubleshooting OOM errors and GC overhead in large Spark clusters
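As a taste of the partition-skew and broadcast-join guidance, here is a minimal PySpark sketch. The S3 paths, join key, and the 64 MB broadcast threshold are illustrative assumptions, not values prescribed by this skill; adjust them to your own data.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("skewed-join-sketch")
    # Adaptive Query Execution can split skewed shuffle partitions at runtime.
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.adaptive.skewJoin.enabled", "true")
    # Raise the automatic broadcast threshold if the dimension table is small
    # but exceeds the 10 MB default (the 64 MB value here is an assumption).
    .config("spark.sql.autoBroadcastJoinThreshold", str(64 * 1024 * 1024))
    .getOrCreate()
)

facts = spark.read.parquet("s3://example-bucket/facts/")       # large, skewed side
dims = spark.read.parquet("s3://example-bucket/dimensions/")   # small lookup side

# An explicit broadcast hint avoids shuffling the large side at all.
joined = facts.join(F.broadcast(dims), on="customer_id", how="left")
joined.write.mode("overwrite").parquet("s3://example-bucket/output/")
```

Broadcasting only helps when the small side fits comfortably in executor memory; for skew between two large tables, AQE's skew-join splitting or salting the join key is the usual fallback.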
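For the Delta Lake merge-upsert pattern, a sketch along these lines applies. The table path, key column, and session extensions are assumptions, and the example presumes the delta-spark package is installed on the cluster.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("delta-merge-sketch")
    # Required for Delta Lake support outside Databricks runtimes.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

updates = spark.read.parquet("s3://example-bucket/updates/")              # incoming changes
target = DeltaTable.forPath(spark, "s3://example-bucket/delta/customers")  # existing Delta table

# MERGE runs as a single ACID transaction: matched rows are updated,
# unmatched rows are inserted, and readers never see a partial write.
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```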
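The exactly-once streaming pattern typically combines a replayable source, a checkpoint location, and a transactional or idempotent sink. The Kafka broker, topic, and Delta sink paths below are placeholders used only for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("exactly-once-stream-sketch").getOrCreate()

# Kafka is replayable, so failed micro-batches can be reprocessed safely.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                      # placeholder topic
    .load()
    .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
)

# The checkpoint tracks source offsets and sink commits; pairing it with a
# transactional sink such as Delta is what yields end-to-end exactly-once.
query = (
    events.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/events")
    .trigger(processingTime="1 minute")
    .start("s3://example-bucket/delta/events")
)
query.awaitTermination()
```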
What you'll get
- Complete PySpark code with specific configuration parameters, partition strategies, and join optimizations for multi-TB ETL workflows
- Detailed cluster sizing recommendations with memory allocation, executor configuration, and adaptive query execution settings (an illustrative configuration sketch follows this list)
- Production streaming pipeline architecture with fault tolerance patterns, exactly-once processing guarantees, and monitoring setup
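As an illustration of the kind of configuration the cluster-sizing guidance covers, the sketch below assumes worker nodes with roughly 64 GB of RAM and 16 cores. Every value is an assumed starting point to validate against your own workload, not a recommendation from this skill.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("etl-sizing-sketch")
    # Executor layout: ~5 cores per executor keeps I/O throughput healthy
    # while leaving headroom for the OS and cluster manager daemons.
    .config("spark.executor.instances", "10")
    .config("spark.executor.cores", "5")
    .config("spark.executor.memory", "20g")
    # Off-heap headroom for shuffle buffers and native libraries; raising this
    # is often the first fix for container OOM kills.
    .config("spark.executor.memoryOverhead", "4g")
    # Aim for shuffle partitions of roughly 128-200 MB each.
    .config("spark.sql.shuffle.partitions", "400")
    # AQE coalesces small post-shuffle partitions and rebalances skewed ones.
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
    .getOrCreate()
)
```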
Not designed for
- Basic SQL query writing or small dataset analysis
- Setting up Hadoop clusters or low-level HDFS administration
- Machine learning model development (focuses on data processing, not ML algorithms)
- Real-time sub-second latency processing (Spark is micro-batch, not true streaming)
Provide specific details about data volume, format, processing patterns, current performance bottlenecks, and cluster configuration so the optimization recommendations can be targeted to your setup.
In return, you'll get production-ready PySpark code with detailed optimization strategies, cluster configuration tuning, and performance monitoring approaches tailored to your specific workload.
Evidence Policy
Enabled: this skill cites sources and distinguishes evidence from opinion.
Research Foundation: 8 sources (3 official docs, 2 books, 2 papers, 1 community practice)
This skill was developed through independent research and synthesis. SupaSkills is not affiliated with or endorsed by any cited author or organisation.
Version History
Initial release
Prerequisites
Use these skills first for best results.
Works well with
Need more depth?
Specialist skills that go deeper in areas this skill touches.
Common Workflows
Modern Data Lakehouse Implementation
Design lakehouse architecture, implement Spark processing pipelines, then build analytical transformations with dbt