The Product Support Engineer / Product Specialist - "Hadoop SME" will be responsible for designing, optimizing, and scaling Spark-based data processing systems. The role requires hands-on experience with Spark architecture and its core functionalities, with a focus on building resilient, high-performance distributed data systems. You will collaborate with engineering teams to deliver high-throughput Spark applications and solve complex data challenges across real-time processing, big data analytics, and streaming. Expertise is expected in troubleshooting, data processing, performance tuning, and cluster management within a big data environment, with an emphasis on optimizing Spark jobs and integrating with related technologies such as Hadoop, Hive, and Kafka.