Scala Data Engineer (Databricks, Cloud & Big Data Platform Expertise)



Job Summary

Synechron is seeking a skilled Scala Engineer to join our Data & AI (CEDA) team, supporting the development and deployment of scalable big data solutions. The successful candidate will leverage expertise in Scala and Databricks to build robust, extensible data solutions that serve global stakeholders with minimal localization. This role plays a critical part in enabling data-driven decision-making, platform engineering, and cloud-native development, contributing directly to our organization’s strategic data initiatives.

Software Requirements

Required Software Skills:

  • Scala: Proven experience in developing production-level applications
  • Databricks: Solid understanding of Databricks platform operations and integration
  • SQL / Spark SQL: Mastery in writing optimized queries and transformations for big data processing
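
For illustration only, here is a minimal sketch of the kind of Scala transformation work these skills describe. The record type, field names, and data are hypothetical, not from this posting; on Databricks the same groupBy/aggregate shape would be expressed with Spark's Dataset API, but the logic is shown on standard collections so it runs without a cluster.

```scala
// Hypothetical trade records; in a Databricks job these would be rows of a Dataset.
final case class Trade(desk: String, notional: Double)

// A typical aggregation: total notional per desk, largest first.
// With Spark this would be ds.groupByKey(_.desk) plus an aggregation;
// the transformation logic is identical on standard collections.
def notionalByDesk(trades: Seq[Trade]): Seq[(String, Double)] =
  trades
    .groupBy(_.desk)
    .view
    .mapValues(_.map(_.notional).sum)
    .toSeq
    .sortBy { case (_, total) => -total }

val sample = Seq(
  Trade("rates", 100.0),
  Trade("fx", 250.0),
  Trade("rates", 50.0)
)

println(notionalByDesk(sample)) // fx first: its total (250.0) exceeds rates (150.0)
```

In Spark SQL the same aggregation would be a GROUP BY with an ORDER BY on the summed column; keeping the transformation as a pure function like this also makes it straightforward to unit-test.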

Preferred Software Skills:

  • Java / Python: Experience in programming with Java or Python for data engineering tasks
  • GitLab or equivalent CI/CD tools: Ability to develop and manage CI/CD pipelines
  • Containerization: Experience with Docker, Kubernetes for deployment and runtime environments
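
As a sketch of the CI/CD and containerization skills above, a minimal GitLab pipeline for a Scala service might look like the following. The stage names, build image, and sbt layout are assumptions for illustration, not details from this posting.

```yaml
# Illustrative .gitlab-ci.yml sketch. The build image and project layout are
# assumptions; CI_REGISTRY_IMAGE and CI_COMMIT_SHORT_SHA are standard
# GitLab-provided variables.
stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: registry.example.com/build/scala-sbt:latest  # hypothetical image
  script:
    - sbt compile

test:
  stage: test
  image: registry.example.com/build/scala-sbt:latest  # hypothetical image
  script:
    - sbt test

deploy:
  stage: deploy
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```

The deploy stage here only pushes an image; in a Kubernetes setup a further step would roll the new tag out to the cluster.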

Overall Responsibilities

  • Design, develop, and optimize scalable big data pipelines using Apache Spark and Databricks
  • Implement complex data transformations and analysis solutions in Scala and Spark
  • Collaborate with cross-functional teams to understand data requirements and provide effective technical solutions
  • Develop, maintain, and enhance CI/CD pipelines to support continuous integration and deployment processes
  • Contribute to platform engineering activities on cloud platforms, particularly Microsoft Azure
  • Assist in containerizing applications and managing orchestration using Kubernetes
  • Ensure solutions are globally deployable with minimal localization, adhering to security and compliance standards
  • Participate in Agile sprint planning, stand-ups, and retrospective activities to promote iterative development

Technical Skills (By Category)

Programming Languages:

  • Required: Scala; SQL and Spark SQL
  • Preferred: Java, Python

Databases / Data Management:

  • Experience with relational and NoSQL databases
  • Proficient in writing and optimizing complex SQL/Spark queries and transformations

Cloud Technologies:

  • Strong experience with Microsoft Azure cloud platform (preferred)
  • Familiarity with cloud architecture best practices for scalability and security

Frameworks and Libraries:

  • Apache Spark, Databricks platform, Kubernetes (preferred)

Development Tools & Methodologies:

  • GitLab or similar version control and CI/CD systems
  • Agile development practices

Security & Compliance:

  • Knowledge of secure coding practices and applicable data privacy standards

Experience Requirements

  • Minimum of five years in data engineering or related big data development roles
  • Proven experience in executing complex data analysis and designing scalable data pipelines
  • Demonstrated expertise in building data transformations within SQL and Spark environments
  • Hands-on experience with platform engineering on cloud providers, particularly Microsoft Azure, is advantageous
  • Experience with containerization and orchestration tools like Docker and Kubernetes

Alternative Experience Pathways:

  • Candidates with extensive data engineering experience using Scala and Spark, even if cloud experience is limited, are encouraged to apply.
  • Experience in large-scale enterprise environments, financial institutions, or data-focused industries is preferred but not mandatory.

Day-to-Day Activities

  • Developing and maintaining big data processing pipelines using Spark and Databricks
  • Collaborating with data scientists, analysts, and other stakeholders to refine data requirements
  • Implementing and optimizing SQL/Spark queries for performance and scalability
  • Building and deploying CI/CD pipelines supporting automated testing and deployment
  • Containerizing applications and managing deployments on Kubernetes clusters
  • Participating in Agile ceremonies, planning sprints, and providing estimates on deliverables
  • Conducting code reviews, ensuring best practices, and maintaining high code quality
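
The automated-testing and code-quality activities above can be sketched as a plain assertion check that a CI job might run. The function names and data are illustrative; a real project would use a test framework such as ScalaTest or MUnit.

```scala
// Hypothetical helpers: normalize and deduplicate currency codes.
def normalizeCurrency(code: String): String = code.trim.toUpperCase

def dedupe(codes: Seq[String]): Seq[String] =
  codes.map(normalizeCurrency).distinct

// Assertion-style check a CI stage could run before deployment.
val checked = dedupe(Seq(" usd", "USD", "eur "))
assert(checked == Seq("USD", "EUR"), s"unexpected result: $checked")
```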

Qualifications

Educational Requirements:

  • Bachelor’s or Master’s degree in Computer Science, Information Technology, Mathematics, or related fields

Certifications (Preferred):

  • Azure Data Engineer Certification or similar cloud data certifications

Training & Development:

  • Commitment to ongoing professional development in big data, cloud technologies, and engineering practices

Professional Competencies

  • Strong problem-solving and analytical capabilities in complex data environments
  • Effective communication skills, capable of building collaborative relationships with stakeholders
  • Ability to work both independently and as part of a team in an Agile setting
  • Adaptability to evolving technologies and project requirements
  • Proactive attitude towards learning and applying new tools and frameworks
  • Time management skills, with an emphasis on prioritization and meeting deadlines

SYNECHRON’S DIVERSITY & INCLUSION STATEMENT

Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture – promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps us build stronger, more successful businesses as a global company. We encourage applicants of all backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.

All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
