Staff Product Engineer / Product Specialist - Spark SME

1 month ago • All levels • DevOps

About the job

Position Summary:
We are seeking an Apache Spark Subject Matter Expert (SME) to design, optimize, and scale Spark-based data processing systems. The role requires hands-on experience with Spark architecture and its core functionality, with a focus on building resilient, high-performance distributed data systems. You will collaborate with engineering teams to deliver high-throughput Spark applications and solve complex data challenges in real-time processing, big data analytics, and streaming.

If you’re passionate about working in fast-paced, dynamic environments and want to be part of the cutting edge of data solutions, this role is for you.

We’re looking for someone who can:

    • Application Design: Design and optimize distributed Spark-based applications, ensuring low-latency, high-throughput performance for big data workloads.
    • Troubleshooting: Provide expert-level troubleshooting for data and performance issues in Spark jobs and clusters.
    • Data Processing Expertise: Work extensively with large-scale data pipelines using Spark's core components (Spark SQL, DataFrames, RDDs, Datasets, and Structured Streaming).
    • Performance Tuning: Conduct deep-dive performance analysis, debugging, and optimization of Spark jobs to reduce processing time and resource consumption.
    • Cluster Management: Collaborate with DevOps and infrastructure teams to manage Spark clusters on platforms like Hadoop/YARN, Kubernetes, or cloud platforms (AWS EMR, GCP Dataproc, etc.).
    • Real-time Data: Design and implement real-time data processing solutions using Apache Spark Streaming or Structured Streaming (see the sketch after this list).

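A minimal sketch of the real-time work described above, assuming a Scala Structured Streaming job with a placeholder socket source; the host, port, console sink, and object name are illustrative assumptions, not details of this role's actual stack:

    import org.apache.spark.sql.SparkSession

    object StreamingWordCount {
      def main(args: Array[String]): Unit = {
        // Local session for demonstration; a production job would target YARN or Kubernetes.
        val spark = SparkSession.builder()
          .appName("StreamingWordCount")
          .master("local[*]")
          .getOrCreate()

        import spark.implicits._

        // Read a text stream from a socket (hypothetical host/port, for illustration only).
        val lines = spark.readStream
          .format("socket")
          .option("host", "localhost")
          .option("port", 9999)
          .load()

        // Split each line into words and maintain a running count across micro-batches.
        val wordCounts = lines.as[String]
          .flatMap(_.split("\\s+"))
          .groupBy("value")
          .count()

        // Write the running aggregate to the console; a real pipeline would use a durable sink.
        val query = wordCounts.writeStream
          .outputMode("complete")
          .format("console")
          .start()

        query.awaitTermination()
      }
    }

A production version of this kind of job would typically read from a source such as Kafka and write to a durable sink with checkpointing enabled.
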
What makes you the right fit for this position:

    • Expert in Apache Spark: In-depth knowledge of Spark architecture, execution models, and core components (Spark Core, Spark SQL, Spark Streaming, etc.).
    • Data Engineering Practices: Solid understanding of ETL pipelines, data partitioning, shuffling, and serialization techniques to optimize Spark jobs (see the tuning sketch after this list).
    • Big Data Ecosystem: Knowledge of related big data technologies such as Hadoop, Hive, Kafka, HDFS, and YARN.
    • Performance Tuning and Debugging: Demonstrated ability to tune Spark jobs, optimize query execution, and troubleshoot performance bottlenecks.
    • Experience with Cloud Platforms: Hands-on experience in running Spark clusters on cloud platforms such as AWS, Azure, or GCP.
    • Containerization & Orchestration: Experience with containerized Spark environments using Docker and Kubernetes is a plus.
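
As a rough illustration of the partitioning, shuffling, and serialization practices listed above, the sketch below adjusts common tuning levers before a wide aggregation; all paths, column names, and configuration values are hypothetical placeholders rather than recommended settings:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.storage.StorageLevel

    object ShuffleTuningSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("ShuffleTuningSketch")
          // Common tuning levers: shuffle partition count and Kryo serialization.
          // The values below are placeholders, not recommendations for any specific workload.
          .config("spark.sql.shuffle.partitions", "200")
          .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
          .getOrCreate()

        // Hypothetical input path and column name, for illustration only.
        val events = spark.read.parquet("s3://example-bucket/events/")

        // Repartition on the aggregation key to spread skewed data before the wide
        // operation, and persist the reused DataFrame so it is not recomputed.
        val byUser = events
          .repartition(200, events("user_id"))
          .persist(StorageLevel.MEMORY_AND_DISK)

        val perUserCounts = byUser.groupBy("user_id").count()
        perUserCounts.write.mode("overwrite").parquet("s3://example-bucket/output/user_counts/")

        byUser.unpersist()
        spark.stop()
      }
    }

In practice, the right partition count, serializer, and storage level depend on data volume, skew, and cluster resources; making those calls is part of the tuning work this role covers.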