Staff Product Support Engineer - Hadoop Operations


Job Description


We are seeking a Staff Product Support Engineer / Product Specialist - Hadoop Operations SME who will be responsible for designing, optimizing, migrating, and scaling Hadoop and Spark-based data processing systems. This role requires hands-on experience with Hadoop and other core data operations, with a focus on building resilient, high-performance distributed data systems.

You will collaborate with customer engineering teams to deliver high-throughput Hadoop, NiFi, and Spark applications, solve complex data challenges across migrations, upgrades, and reliability, and optimize post-migration system performance.

This role requires flexibility to work in rotational shifts, based on team coverage needs and customer demand. Candidates should be comfortable supporting operations in a 24/7 environment and be willing to adjust their working hours accordingly.

We’re looking for someone who will:

  • Application Design: Design and optimize distributed Hadoop-based applications, ensuring low-latency, high-throughput performance for big data workloads.
  • Troubleshooting: Provide expert-level support for data or performance issues in Hadoop, NiFi, and Spark jobs and clusters.
  • Data Processing Expertise: Work extensively with large-scale data pipelines built on Hadoop, NiFi, and Spark's core components.
  • Performance Tuning: Conduct deep-dive performance analysis, debugging, and optimization of NiFi, Impala, and Spark jobs to reduce processing time and resource consumption.
  • Cluster Management: Collaborate with DevOps and infrastructure teams to manage NiFi, Impala, and Spark clusters on platforms such as Hadoop/YARN, Kubernetes, or managed cloud services (AWS EMR, GCP Dataproc, etc.).
  • Migration: Provide dedicated support to ensure the stability and reliability of the new ODP Hadoop environment during and after migration. Promptly address evolving technical challenges and optimize system performance after migration to ODP.
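As a small, hypothetical illustration of the performance-analysis work described above (not part of any specific product or codebase named here), a common first step when diagnosing a slow Spark shuffle stage is checking for partition skew — one oversized partition can dominate stage runtime. In practice the per-partition sizes would come from the Spark UI or event logs; the counts below are made up:

```python
from statistics import median

def skewed_partitions(sizes, factor=2.0):
    """Return indices of partitions whose size exceeds `factor` x the median.

    Skewed ("straggler") partitions are a frequent cause of long stage
    runtimes in shuffle-heavy Spark jobs.
    """
    if not sizes:
        return []
    m = median(sizes)
    return [i for i, s in enumerate(sizes) if s > factor * m]

# Hypothetical per-partition record counts from a shuffle stage:
counts = [1_000, 1_100, 950, 1_050, 9_800, 1_020]
print(skewed_partitions(counts))  # → [4] (partition 4 is the straggler)
```

Once a straggler is identified, typical remedies include repartitioning, salting hot keys, or adjusting shuffle parallelism.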

Good to have:

  • Master’s degree and experience with scripting languages (Scala, Python, Bash, PowerShell).
  • Familiarity with virtual machine technologies and multi-node environments (50+ nodes).
  • Proficiency with Linux, NFS, and Windows, including application installation, scripting, and working with the command line.
  • Working knowledge of application, server, and network security management concepts.
  • Certification from any of the leading cloud providers (AWS, Azure, GCP) and/or Kubernetes.
  • Knowledge of databases such as MySQL and PostgreSQL.
  • Involvement in other support-related activities, such as performing POCs and assisting with onboarding deployments of Acceldata and Hadoop distribution products.
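To sketch the kind of operational scripting the list above referss to, the snippet below summarizes HDFS capacity and DataNode liveness from NameNode JMX output. The bean and metric names follow standard Hadoop FSNamesystem JMX conventions, but the sample payload and the `hdfs_usage_report` helper are hypothetical, for illustration only:

```python
import json

# Hypothetical sample of the NameNode FSNamesystem JMX bean; in practice this
# JSON would be fetched from http://<namenode>:9870/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem
SAMPLE_JMX = json.dumps({
    "beans": [{
        "name": "Hadoop:service=NameNode,name=FSNamesystem",
        "CapacityTotal": 100 * 1024**3,  # 100 GiB raw capacity
        "CapacityUsed": 63 * 1024**3,    # 63 GiB used
        "NumLiveDataNodes": 48,
        "NumDeadDataNodes": 2,
    }]
})

def hdfs_usage_report(jmx_json: str, warn_pct: float = 80.0) -> dict:
    """Summarize HDFS capacity and DataNode liveness from NameNode JMX JSON."""
    bean = json.loads(jmx_json)["beans"][0]
    used_pct = 100.0 * bean["CapacityUsed"] / bean["CapacityTotal"]
    return {
        "used_pct": round(used_pct, 1),
        "live_nodes": bean["NumLiveDataNodes"],
        "dead_nodes": bean["NumDeadDataNodes"],
        "over_threshold": used_pct >= warn_pct,
    }

print(hdfs_usage_report(SAMPLE_JMX))
```

A script like this would typically feed an alerting check, e.g. paging when usage crosses the warning threshold or dead DataNodes appear.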

We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
