
Apache Spark Data Processing Expert

Expert guidance for building and optimizing Apache Spark data processing pipelines, from PySpark development to cluster tuning and lakehouse architecture.

Gold · v1.0.0 · 0 activations · Data & Analytics · Technology · advanced

SupaScore

84.1

  • Research Quality (15%): 8.5
  • Prompt Engineering (25%): 8.5
  • Practical Utility (15%): 8.5
  • Completeness (10%): 8.5
  • User Satisfaction (20%): 8.2
  • Decision Usefulness (15%): 8.3

Best for

  • Optimizing PySpark ETL pipelines processing TB-scale data lakes
  • Tuning Spark SQL queries with partition skew and broadcast join optimization (see the sketch after this list)
  • Implementing Delta Lake ACID transactions with merge upsert patterns
  • Designing streaming pipelines with exactly-once semantics and checkpointing
  • Troubleshooting OOM errors and GC overhead in large Spark clusters
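
For the join-tuning item above, the usual combination is an explicit broadcast hint plus adaptive query execution. Here is a minimal PySpark sketch, assuming a Spark 3.x session; the bucket paths, table names (fact_events, dim_users), join key (user_id), and threshold value are illustrative placeholders, not recommendations.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Illustrative session config; tune thresholds to your own cluster and data.
    spark = (
        SparkSession.builder
        .appName("join-tuning-sketch")
        # Adaptive Query Execution coalesces small shuffle partitions and splits skewed ones at runtime.
        .config("spark.sql.adaptive.enabled", "true")
        .config("spark.sql.adaptive.skewJoin.enabled", "true")
        # Relations below this size are broadcast automatically in joins.
        .config("spark.sql.autoBroadcastJoinThreshold", str(64 * 1024 * 1024))
        .getOrCreate()
    )

    # Hypothetical inputs: a large, skewed fact table and a small dimension table.
    fact_events = spark.read.parquet("s3://your-bucket/fact_events/")
    dim_users = spark.read.parquet("s3://your-bucket/dim_users/")

    # The explicit broadcast hint keeps the multi-TB side from being shuffled for the join.
    enriched = fact_events.join(F.broadcast(dim_users), "user_id", "left")

    enriched.write.mode("overwrite").partitionBy("event_date").parquet("s3://your-bucket/enriched_events/")

With skew-join handling enabled, AQE splits oversized shuffle partitions at runtime, so hot keys are handled without manual salting.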

What you'll get

  • Complete PySpark code with specific configuration parameters, partition strategies, and join optimizations for multi-TB ETL workflows
  • Detailed cluster sizing recommendations with memory allocation, executor configuration, and adaptive query execution settings
  • Production streaming pipeline architecture with fault tolerance patterns, exactly-once processing guarantees, and monitoring setup (see the sketch below)
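
On the streaming item above: exactly-once delivery in Structured Streaming comes from a replayable source, offsets recorded in a checkpoint location, and an idempotent or transactional sink. A minimal sketch follows, using the built-in rate source as a stand-in for Kafka; the paths and trigger interval are placeholders.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("streaming-checkpoint-sketch").getOrCreate()

    # The built-in rate source stands in for a replayable source such as Kafka.
    events = spark.readStream.format("rate").option("rowsPerSecond", 100).load()

    # Exactly-once relies on: a replayable source, offsets and state persisted under
    # checkpointLocation, and an idempotent sink (file and Delta sinks qualify).
    query = (
        events.writeStream
        .format("parquet")                                  # swap for "delta" in a lakehouse setup
        .option("path", "/tmp/spark-skill-demo/output")     # placeholder paths
        .option("checkpointLocation", "/tmp/spark-skill-demo/checkpoints")
        .trigger(processingTime="30 seconds")
        .outputMode("append")
        .start()
    )

    query.awaitTermination()

Restarting the job with the same checkpointLocation resumes from the last committed offsets instead of reprocessing the stream from the beginning.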

Not designed for

  • Basic SQL query writing or small dataset analysis
  • Setting up Hadoop clusters or low-level HDFS administration
  • Machine learning model development (focuses on data processing, not ML algorithms)
  • Real-time sub-second latency processing (Spark Structured Streaming runs micro-batches by default, not record-at-a-time streaming)

Expects

Specific details about data volume, format, processing patterns, current performance bottlenecks, and cluster configuration to provide targeted optimization recommendations.

Returns

Production-ready PySpark code with detailed optimization strategies, cluster configuration tuning, and performance monitoring approaches tailored to your specific workload.

Evidence Policy

Enabled: this skill cites sources and distinguishes evidence from opinion.

apache-spark · pyspark · spark-sql · distributed-computing · data-pipeline · delta-lake · performance-tuning · etl · data-engineering · lakehouse · structured-streaming · cluster-optimization · big-data

Research Foundation: 8 sources (3 official docs, 2 books, 2 papers, 1 community practice)

This skill was developed through independent research and synthesis. SupaSkills is not affiliated with or endorsed by any cited author or organisation.

Version History

v1.0.0 · 2/16/2026

Initial release


Common Workflows

Modern Data Lakehouse Implementation

Design lakehouse architecture, implement Spark processing pipelines, then build analytical transformations with dbt

Data Lakehouse Designer → spark-data-processing-expert → dbt-analytics-engineer

