Engineering Lead - Big Data

Ness Digital Engineering

Job Summary

As an Engineering Lead for the Big Data team at Ness Digital Engineering, you will be responsible for the analysis, design, architecture, development, and support of ETL pipelines using the Spark framework in the Databricks platform. This role involves driving change with cutting-edge technologies, extending and managing ETL pipelines, and ensuring data distribution to clients. You will contribute to a team focused on innovation, timely delivery, and articulating business value, fostering a culture of trust and continuous growth.
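The role centers on extract–transform–load pipelines. As a toy illustration of that shape only (plain-Python stand-in for the Spark/Databricks stack the role actually uses; the field names and data are hypothetical):

```python
import csv
import io
import sqlite3

# Toy ETL sketch: a plain-Python stand-in for the Spark/Databricks
# pipelines described above. Field names and sample data are invented
# for illustration; a real pipeline would use Spark DataFrames.

RAW = """entity_id,rating,as_of
1001,AA,2024-01-15
1002,,2024-01-15
1003,bbb,2024-01-16
"""

def extract(text):
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows with missing ratings, normalize casing."""
    return [
        {"entity_id": int(r["entity_id"]),
         "rating": r["rating"].upper(),
         "as_of": r["as_of"]}
        for r in rows if r["rating"]
    ]

def load(rows, conn):
    """Load: write the cleaned rows into a relational target."""
    conn.execute("CREATE TABLE IF NOT EXISTS ratings "
                 "(entity_id INTEGER, rating TEXT, as_of TEXT)")
    conn.executemany(
        "INSERT INTO ratings VALUES (:entity_id, :rating, :as_of)", rows)
    return conn.execute("SELECT COUNT(*) FROM ratings").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW)), conn)
print(loaded)  # 2 rows survive the missing-rating filter
```

The three stages map onto the responsibilities above: requirements and design decide what `transform` must do, while builds/deployments and support keep the `load` target consistent and available.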

Must Have

  • Demonstrate strong ownership and responsibility with release goals, including understanding requirements, technical specifications, design, architecture, implementation, unit testing, builds/deployments, and code management.
  • Ensure compliance through the adoption of enterprise standards and promotion of best practices and guiding principles.
  • Build and maintain the environment for speed, accuracy, consistency, and uptime.
  • Hands-on position requiring strong analytical, architecture, development, and debugging skills (development and operations).
  • Attain in-depth functional knowledge of the domain.
  • Understand Incident Management, Change Management, Problem Management, and root cause analysis.
  • Ensure data governance principles are adopted and that data quality checks and data lineage are implemented.
  • Drive and execute complex technical requirements.
  • Demonstrate excellent verbal and written communication skills.
  • Collaborate with team members across the globe.
  • Interface with cross-functional teams and downstream applications as needed.
  • Be a self-starter who is also an excellent team player.
  • Follow agile best practices; experience working with global teams is expected.
  • Minimum of 8 years of working experience in Technology (application development and production support).
  • 5+ years of experience in the development of pipelines that extract, transform, and load data.
  • Minimum of 3 years of experience in developing & supporting ETLs using Python/Scala on the Databricks/Spark platform.
  • Experience with Python, Spark, Hive, data-warehousing, and data modeling techniques.
  • Strong data engineering skills with AWS cloud platform.
  • Experience with streaming frameworks such as Kafka.
  • Knowledge of Core Java, Linux, SQL, and any scripting language.
  • Experience working with relational Databases, preferably Oracle.
  • Experience in continuous delivery through CI/CD pipelines, containers, and orchestration technologies.
  • Experience working in an Agile development environment.
  • Experience working with cross-functional teams, with strong interpersonal and written communication skills.
  • Desire and ability to quickly understand and work within new technologies.
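The governance bullet above asks for data quality checks at each hop. A minimal rule-based sketch of what such a check can look like (plain Python with invented rules for illustration; on Databricks these would typically be expressed as Spark-level expectations):

```python
# Minimal rule-based data quality check, illustrating the governance
# bullet above. The rules and rating scale here are hypothetical;
# real pipelines would encode the team's actual constraints.

RULES = {
    "entity_id": lambda v: isinstance(v, int) and v > 0,
    "rating": lambda v: v in {"AAA", "AA", "A", "BBB", "BB", "B"},
}

def quality_check(rows):
    """Split rows into (passed, failures).

    failures is a list of (row_index, field_name) pairs, which is the
    kind of record a lineage/quality report needs at each hop.
    """
    passed, failures = [], []
    for i, row in enumerate(rows):
        bad = [f for f, ok in RULES.items() if not ok(row.get(f))]
        if bad:
            failures.extend((i, f) for f in bad)
        else:
            passed.append(row)
    return passed, failures

rows = [
    {"entity_id": 1, "rating": "AA"},
    {"entity_id": -5, "rating": "AAA"},  # fails the entity_id rule
    {"entity_id": 2, "rating": "ZZ"},    # fails the rating rule
]
passed, failures = quality_check(rows)
print(len(passed), failures)  # 1 [(1, 'entity_id'), (2, 'rating')]
```

Keeping the failures alongside the passing rows, rather than silently dropping bad records, is what makes the lineage requirement above auditable.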

Good to Have

  • Knowledge of industry-wide visualization and analytics tools (e.g., Tableau, R).
  • Be in tune with emerging trends in cloud technologies & Big Data.
  • Regularly evaluate cloud applications, hardware, and software.
  • Work closely with cyber security team to monitor the organization’s cloud policy.
  • Focus on building a team culture that is based on trust and inspires continuous growth.

Perks & Benefits

  • Exposure to work on the latest cutting-edge technologies.
  • Opportunity to grow personally and professionally.
  • Exposure to cutting over legacy ETL pipelines to the Spark framework.

Job Description


Position at Ness Digital Engineering (India) Private Limited

The Role: Lead Big Data Developer.

The Team: The Content Externalization team at S&P Ratings is responsible for data distribution to both external & internal clients through various pipelines.

Each of our employees plays a vital role—uncovering the essential intelligence that our clients rely on day in and day out to make the decisions that matter. We pursue excellence in everything we do.

We value results, encourage teamwork, and embrace change. Our team is responsible for the design, architecture, development, and implementation of various ETL pipelines and distribution channels.

The team has broad and expert knowledge of the domain, technology stacks, and architectural patterns.

They foster knowledge sharing and collaboration that results in a unified strategy. Team members provide leadership, innovation, timely delivery, and the ability to articulate business value.

The Impact: As a member of the Content Externalization team, you will be responsible for the analysis, design, architecture, development, and support of ETL pipelines using the Spark framework on the Databricks platform. The ideal candidate has expertise with cutting-edge technologies and a desire to drive change and alignment across the enterprise. The role requires a hands-on problem solver and developer who helps extend and manage the ETL pipelines.

What’s in it for you:

➢ Exposure to work on the latest cutting-edge technologies.

➢ Opportunity to grow personally and professionally.

➢ Exposure to cutting over legacy ETL pipelines to the Spark framework.

Responsibilities:

➢ Demonstrate a strong sense of ownership and responsibility with release goals. This includes understanding requirements, technical specifications, design, architecture, implementation, unit testing, builds/deployments, and code management.

➢ Ensure compliance through the adoption of enterprise standards and promotion of best practices/guiding principles aligned with organization standards.

➢ Build and maintain the environment for speed, accuracy, consistency, and uptime.

➢ Hands-on position requiring strong analytical, architecture, development, and debugging skills spanning both development and operations.

➢ Attain in-depth functional knowledge of the domain we are working in.


➢ Understand Incident Management, Change Management, Problem Management, and root cause analysis.

➢ Ensure data governance principles are adopted and that data quality checks and data lineage are implemented at each hop of the data.

➢ Be in tune with emerging trends in cloud technologies & Big Data.

➢ Drive and execute complex technical requirements.

➢ Demonstrate excellent verbal and written communication skills.

➢ Collaborate with team members across the globe.

➢ Interface with cross-functional teams and downstream applications as needed.

➢ Be a self-starter who is also an excellent team player.

➢ Follow agile best practices; experience working with global teams is expected.

➢ Regularly evaluate cloud applications, hardware, and software.

➢ Work closely with cyber security team to monitor the organization’s cloud policy.

➢ Focus on building a team culture that is based on trust. Inspire your team members so that they focus on continuous growth.

What We’re Looking For

➢ Minimum of 8 years of working experience in Technology (application development and production support).

➢ 5+ years of experience in the development of pipelines that extract, transform, and load data into an information product that helps the organization reach its strategic goals.

➢ Minimum of 3 years of experience in developing & supporting ETLs using Python/Scala on the Databricks/Spark platform.

➢ Experience with Python, Spark, and Hive, and an understanding of data-warehousing and data modeling techniques.

➢ Knowledge of industry-wide visualization and analytics tools (e.g., Tableau, R).

➢ Strong data engineering skills with the AWS cloud platform.

➢ Experience with streaming frameworks such as Kafka.

➢ Knowledge of Core Java, Linux, SQL, and any scripting language.

➢ Experience working with relational databases, preferably Oracle.

➢ Experience in continuous delivery through CI/CD pipelines, containers, and orchestration technologies.

➢ Experience working in an Agile development environment.

➢ Experience working with cross-functional teams, with strong interpersonal and written communication skills.

➢ Candidates must have the desire and ability to quickly understand and work within new technologies.

Skills Required For This Role

Team Management, Communication, Problem Solving, Team Player, Data Analytics, Oracle, Agile Development, Linux, AWS, Tableau, Spark, CI/CD, Python, SQL, Scala, Java
