Senior PySpark Data Engineer

7 Years + • Pune, Maharashtra, India (On-Site)

Job Description

The Senior PySpark Data Engineer will design, develop, and optimize scalable data processing pipelines using PySpark. The role involves collaborating with data engineers, data scientists, and business analysts to understand data requirements. Responsibilities include implementing data transformations, managing large datasets in distributed storage systems, troubleshooting performance issues, and documenting data processes. The engineer will also support data migration and integration efforts across various platforms. The role requires efficient processing of large-scale data workloads, maintaining data security, and driving continuous improvement in data workflows. The candidate should have at least 7 years of experience in big data environments and hands-on PySpark development.
Must have:
  • Proficiency in PySpark.
  • Familiarity with Hadoop ecosystem components.
  • Experience with Linux/Unix operating systems.
  • Experience with data processing tools like Apache Kafka.
  • Experience with distributed data storage.
Good to have:
  • Experience with cloud-based big data platforms.
  • Knowledge of Python, Java or Scala.
  • Familiarity with data orchestration tools.
  • Experience with NoSQL databases.

Job Details

Job Summary

Synechron is seeking an experienced and technically proficient Senior PySpark Data Engineer to join our data engineering team. In this role, you will be responsible for developing, optimizing, and maintaining large-scale data processing solutions using PySpark. Your expertise will support our organization’s efforts to leverage big data for actionable insights, enabling data-driven decision-making and strategic initiatives.

Software Requirements

Required Skills:

  • Proficiency in PySpark
  • Familiarity with Hadoop ecosystem components (e.g., HDFS, Hive, Spark SQL)
  • Experience with Linux/Unix operating systems
  • Experience with data processing tools like Apache Kafka or similar streaming platforms

Preferred Skills:

  • Experience with cloud-based big data platforms (e.g., AWS EMR, Azure HDInsight)
  • Knowledge of Python (beyond PySpark), Java, or Scala relevant to big data applications
  • Familiarity with data orchestration tools (e.g., Apache Airflow, Luigi); a brief sketch follows this list
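
To give a concrete sense of the orchestration work referenced above, the sketch below schedules a nightly PySpark job with Apache Airflow. The DAG id, schedule, and spark-submit command are illustrative assumptions, not a statement of Synechron’s actual stack.

    # Hypothetical sketch: scheduling a nightly PySpark job with Apache Airflow.
    # The DAG id, schedule, and script path are illustrative assumptions.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="nightly_pyspark_pipeline",  # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                  # run once per day (Airflow 2.4+ parameter)
        catchup=False,                      # do not backfill missed runs
    ) as dag:
        # Submit the PySpark job via spark-submit; the path is an assumption.
        run_pipeline = BashOperator(
            task_id="run_daily_aggregation",
            bash_command="spark-submit /opt/jobs/daily_event_aggregation.py",
        )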

Overall Responsibilities

  • Design, develop, and optimize scalable data processing pipelines using PySpark.
  • Collaborate with data engineers, data scientists, and business analysts to understand data requirements and deliver solutions.
  • Implement data transformations, aggregations, and extraction processes to support analytics and reporting (see the sketch after this list).
  • Manage large datasets in distributed storage systems, ensuring data integrity, security, and performance.
  • Troubleshoot and resolve performance issues within big data workflows.
  • Document data processes, architectures, and best practices to promote consistency and knowledge sharing.
  • Support data migration and integration efforts across varied platforms.
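
To illustrate the transformation and aggregation work listed above, here is a minimal PySpark sketch. The events dataset, storage paths, and column names are hypothetical, chosen only to show the shape of such a pipeline.

    # Minimal sketch of a PySpark transformation/aggregation job.
    # Dataset, paths, and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily-event-aggregation").getOrCreate()

    # Read raw events from distributed storage (the path is an assumption).
    events = spark.read.parquet("hdfs:///data/raw/events/")

    # Derive a date column, then aggregate to daily event counts per customer.
    daily_counts = (
        events
        .withColumn("event_date", F.to_date("event_timestamp"))
        .groupBy("customer_id", "event_date")
        .agg(F.count("*").alias("event_count"))
    )

    # Write partitioned output for downstream analytics and reporting.
    (daily_counts.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("hdfs:///data/curated/daily_event_counts/"))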

Strategic Objectives:

  • Enable efficient and reliable data processing to meet organizational analytics and reporting needs.
  • Maintain high standards of data security, compliance, and operational reliability.
  • Drive continuous improvement in data workflows and infrastructure.

Performance Outcomes & Expectations:

  • Efficient processing of large-scale data workloads with minimal downtime.
  • Clear, maintainable, and well-documented code.
  • Active participation in team reviews, knowledge transfer, and innovation initiatives.

Technical Skills (By Category)

Programming Languages:

  • Required: PySpark (essential); Python (needed for scripting and automation)
  • Preferred: Java, Scala

Databases/Data Management:

  • Required: Experience with distributed data storage (HDFS, S3, or similar) and data warehousing solutions (Hive, Snowflake)
  • Preferred: Experience with NoSQL databases (Cassandra, HBase)

Cloud Technologies:

  • Required: Familiarity with deploying and managing big data solutions on cloud platforms such as AWS (EMR), Azure, or GCP
  • Preferred: Cloud certifications

Frameworks and Libraries:

  • Required: Spark SQL, Spark MLlib (basic familiarity)
  • Preferred: Integration with streaming platforms (e.g., Kafka) and data validation tools (a streaming sketch follows this list)
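
As a hedged illustration of the streaming integration mentioned above, the sketch below consumes a Kafka topic with Spark Structured Streaming. The broker address, topic name, and record schema are assumptions, and the spark-sql-kafka connector package must be available on the cluster.

    # Hypothetical sketch: consuming a Kafka topic with Spark Structured Streaming.
    # Broker, topic, and schema are illustrative assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("kafka-stream-ingest").getOrCreate()

    # Expected JSON payload of each Kafka record (assumed for illustration).
    schema = StructType([
        StructField("order_id", StringType()),
        StructField("amount", DoubleType()),
    ])

    raw = (spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # assumed address
        .option("subscribe", "orders")                     # assumed topic
        .load())

    # Kafka values arrive as bytes; parse the JSON payload into typed columns.
    orders = (raw
        .select(F.from_json(F.col("value").cast("string"), schema).alias("o"))
        .select("o.*"))

    # Console sink for validation; a production job would write to durable
    # storage with checkpointing enabled.
    query = orders.writeStream.format("console").outputMode("append").start()
    query.awaitTermination()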

Development Tools and Methodologies:

  • Required: Version control systems (e.g., Git), Agile/Scrum methodologies
  • Preferred: CI/CD pipelines, containerization (Docker, Kubernetes)

Security Protocols:

  • Optional: Basic understanding of data security practices and compliance standards relevant to big data management

Experience Requirements

  • At least 7 years of experience in big data environments, with hands-on PySpark development.
  • Proven ability to design and implement large-scale data pipelines.
  • Experience working with cloud and on-premises big data architectures.
  • Preference for candidates with domain-specific experience in finance, banking, or related sectors.
  • Candidates with substantial related experience and strong technical skills in big data, even from different domains, are encouraged to apply.

Day-to-Day Activities

  • Develop, test, and deploy PySpark data processing jobs to meet project specifications.
  • Collaborate in multi-disciplinary teams during sprint planning, stand-ups, and code reviews.
  • Optimize existing data pipelines for performance and scalability.
  • Monitor data workflows, troubleshoot issues, and implement fixes.
  • Engage with stakeholders to gather new data requirements, ensuring solutions are aligned with business needs.
  • Contribute to documentation, standards, and best practices for data engineering processes.
  • Support the onboarding of new data sources, including integration and validation.

Decision-Making Authority & Responsibilities:

  • Identify performance bottlenecks and propose effective solutions.
  • Decide on appropriate data processing approaches based on project requirements.
  • Escalate issues that impact project timelines or data integrity.

Qualifications

  • Bachelor’s degree in Computer Science, Information Technology, or a related field; equivalent experience will be considered.
  • Relevant certifications are preferred: Cloudera, Databricks, AWS Certified Data Analytics, or similar.
  • Commitment to ongoing professional development in data engineering and big data technologies.
  • Demonstrated ability to adapt to evolving data tools and frameworks.

Professional Competencies

  • Strong analytical and problem-solving skills, with the ability to model complex data workflows.
  • Excellent communication skills to articulate technical solutions to non-technical stakeholders.
  • Effective teamwork and collaboration in a multidisciplinary environment.
  • Adaptability to new technologies and emerging trends in big data.
  • Ability to prioritize tasks effectively and manage time in fast-paced projects.
  • Innovation mindset, actively seeking ways to improve data infrastructure and processes.

SYNECHRON’S DIVERSITY & INCLUSION STATEMENT

Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture, promoting equality and diversity in an environment that is respectful to all. We strongly believe that, as a global company, a diverse workforce helps build stronger, more successful businesses. We encourage applicants from diverse backgrounds to apply, regardless of race, ethnicity, religion, age, marital status, gender, sexual orientation, or disability. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.


All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.


About The Company

At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron’s progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs, we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and has 58 offices in 21 countries within key global markets. For more information on the company, please visit our website or LinkedIn community.
