MicroStrategy Engineer
Ness
Job Summary
This role is for a Lead Big Data Developer within the Content Externalization team at S&P Ratings, focusing on data distribution to both external and internal clients. The team is responsible for the design, architecture, development, and implementation of various ETL pipelines and distribution channels. The ideal candidate will be responsible for the analysis, design, architecture, development, and support of ETL pipelines using the Spark framework on the Databricks platform, driving change with cutting-edge technologies, and acting as a hands-on problem solver and developer.
Must Have
- Demonstrate strong ownership and responsibility with release goals, including understanding requirements, technical specifications, design, architecture, implementation, unit testing, builds/deployments, and code management.
- Ensure compliance through the adoption of enterprise standards and promotion of best practice/guiding principles.
- Build and maintain the environment for speed, accuracy, consistency, and uptime.
- Possess strong analytical, architecture, development, and debugging skills for both development and operations.
- Attain in-depth functional knowledge of the domain.
- Understand Incident Management, Change Management, Problem Management, and root cause analysis.
- Ensure data governance principles are adopted, data quality checks and data lineage are implemented.
- Be in tune with emerging trends in cloud technologies and Big Data.
- Drive and execute complex technical requirements.
- Demonstrate excellent verbal and written communication skills.
- Collaborate with team members across the globe and interface with cross-functional teams.
- Be a self-starter and an excellent team player, following agile best practices.
- Regularly evaluate cloud applications, hardware, and software, and work closely with the cyber security team.
- 8+ years of working experience in technology (application development and production support).
- 5+ years of experience developing pipelines that extract, transform, and load data.
- 3+ years of experience developing and supporting ETLs using Python/Scala on the Databricks/Spark platform.
- Experience with Python, Spark, and Hive, and understanding of data-warehousing and data modeling techniques.
- Strong data engineering skills with AWS cloud platform.
- Experience with streaming frameworks such as Kafka.
- Knowledge of Core Java, Linux, SQL, and any scripting language.
- Experience in continuous delivery through CI/CD pipelines, containers, and orchestration technologies.
- Experience working in an Agile development environment.
Good to Have
- Experience in creating visualizations on Power BI/Tableau/MicroStrategy, preferably MicroStrategy.
- Knowledge of industry-wide visualization and analytics tools (e.g., R).
- Experience working with relational databases, preferably Oracle.
Perks & Benefits
- Exposure to the latest cutting-edge technologies.
- Opportunity to grow personally and professionally.
- Exposure to cutting over legacy ETL pipelines to the Spark framework.
Job Description
Position at Ness Digital Engineering (India) Private Limited
The Role: Lead Big Data Developer.
The Team: The Content Externalization team at S&P Ratings is responsible for data distribution to both external and internal clients through various pipelines.
Each of our employees plays a vital role—uncovering the essential intelligence that our clients rely on day in and day out to make the decisions that matter. We pursue excellence in everything we do.
We value results, encourage teamwork, and embrace change. Our team is responsible for the design, architecture, development, and implementation of various ETL pipelines and distribution channels.
The team has broad, expert knowledge of the domain, technology stacks, and architectural patterns.
They foster knowledge sharing and collaboration that results in a unified strategy. Team members provide leadership, innovation, timely delivery, and the ability to articulate business value.
The Impact: As a member of the Content Externalization Team, you will be responsible for the analysis, design, architecture, development, and support of ETL pipelines using the Spark framework on the Databricks platform. The ideal candidate should have expertise with cutting-edge technologies and a desire to drive change through alignment across the enterprise. The role requires the candidate to be a hands-on problem solver and developer who helps extend and manage the ETL pipelines.
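For a concrete flavor of the work (an illustrative sketch only, not taken from the posting): a minimal PySpark extract-transform-load step of the kind this role builds and supports on Databricks. All bucket, column, schema, and table names below are hypothetical.

```python
# Minimal PySpark ETL sketch -- illustrative only; paths, columns, and table
# names are hypothetical, not part of the actual role or its systems.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("content-externalization-etl").getOrCreate()

# Extract: read raw content from a hypothetical landing zone on S3.
raw = spark.read.parquet("s3://example-bucket/landing/ratings/")

# Transform: a simple data-quality check plus an audit column for lineage.
clean = (
    raw.filter(F.col("record_id").isNotNull())      # drop rows failing a basic quality check
       .withColumn("load_date", F.current_date())   # record when each row was loaded
)

# Load: append to a Delta table consumed by downstream distribution channels.
clean.write.format("delta").mode("append").saveAsTable("externalization.ratings_clean")
```

In practice a job like this would be scheduled as a Databricks notebook or workflow task, with the data-quality and lineage checks expanded well beyond this sketch.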
What’s in it for you:
➢ Exposure to the latest cutting-edge technologies.
➢ Opportunity to grow personally and professionally.
➢ Exposure to cutting over legacy ETL pipelines to the Spark framework.
Responsibilities:
➢ Demonstrate a strong sense of ownership and responsibility with release goals. This includes understanding requirements, technical specifications, design, architecture, implementation, unit testing, builds/deployments, and code management.
➢ Ensure compliance through the adoption of enterprise standards and the promotion of best practices and guiding principles aligned with organizational standards.
➢ Build and maintain the environment for speed, accuracy, consistency, and uptime.
➢ This is a hands-on position requiring strong analytical, architecture, development, and debugging skills spanning both development and operations.
➢ Attain in-depth functional knowledge of the domain we work in.
➢ Understand Incident Management, Change Management, Problem Management, and root cause analysis.
➢ Ensure data governance principles are adopted, and that data quality checks and data lineage are implemented at each hop of the data.
➢ Stay in tune with emerging trends in cloud technologies and Big Data.
➢ Drive and execute complex technical requirements.
➢ Demonstrate excellent verbal and written communication skills.
➢ Collaborate with team members across the globe.
➢ Interface with cross functional teams and downstream applications as needed.
➢ Be a self-starter who is also an excellent team player.
➢ Follow agile best practices; experience working with global teams is expected.
➢ Regularly evaluate cloud applications, hardware, and software.
➢ Work closely with the cyber security team to monitor the organization’s cloud policy.
➢ Focus on building a team culture based on trust, and inspire your team members to focus on continuous growth.
What We’re Looking For
➢ 8+ years of working experience in technology (application development and production support).
➢ 5+ years of experience developing pipelines that extract, transform, and load data into information products that help the organization reach its strategic goals.
➢ 3+ years of experience developing and supporting ETLs using Python/Scala on the Databricks/Spark platform.
➢ Experience creating visualizations in Power BI/Tableau/MicroStrategy, preferably MicroStrategy.
➢ Experience with Python, Spark, and Hive, and an understanding of data-warehousing and data modeling techniques.
➢ Knowledge of industry-wide visualization and analytics tools (e.g., Tableau, R).
➢ Strong data engineering skills on the AWS cloud platform.
➢ Experience with streaming frameworks such as Kafka.
➢ Knowledge of Core Java, Linux, SQL, and any scripting language.
➢ Experience working with relational databases, preferably Oracle.
➢ Experience in continuous delivery through CI/CD pipelines, containers, and orchestration technologies.
➢ Experience working in an Agile development environment.
➢ Experience working with cross functional teams, with strong interpersonal and written communication skills.
➢ Candidates must have the desire and ability to quickly understand and work with new technologies.