IN - Manager, Big Data Engineer, Data and Analytics, Advisory, Bengaluru


About the job

Summary

This Big Data Engineer role in Bengaluru requires 8-11 years of experience in Big Data, Hadoop, Scala, and Spark. You'll design, implement, and maintain data pipelines, work with large datasets, and collaborate with cross-functional teams.
Must have:
  • Big Data
  • Hadoop (HDFS)
  • Scala
  • Spark
Good to have:
  • Azure Cloud
  • Data Streaming
  • Docker
  • Kubernetes
Perks:
  • Vibrant Community
  • Inclusive Benefits

Line of Service

Advisory

Industry/Sector

FS X-Sector

Specialism

Data, Analytics & AI

Management Level

Manager

Job Description & Summary

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Why PwC

At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:

We are seeking a highly skilled and motivated Data Engineer with 8 to 11 years of experience to join our dynamic team. The ideal candidate must have strong hands-on expertise in Spark, Scala, Hadoop, and SQL, and demonstrated exposure to Azure cloud services. The Data Engineer will play a crucial role in designing, implementing, and maintaining robust data pipelines, ensuring the efficient flow and processing of large datasets.
  • Data Pipeline Development: Design, develop, and maintain scalable and efficient data pipelines using Spark and Scala. Implement ETL processes for ingesting, transforming, and loading data from various sources.
  • Big Data Technologies: Work with Hadoop ecosystem components such as HDFS, Hive, and HBase for efficient storage and retrieval of large-scale datasets. Optimise and tune Spark jobs to ensure optimal performance and resource utilisation.
  • SQL Expertise: Use strong SQL skills to query, analyse, and manipulate data stored in relational databases and data warehouses.
  • Security: Implement security and data protection measures at all levels (database and API services). Apply data masking and row-level and column-level security. Keep abreast of the latest security issues and incorporate necessary patches and updates.
  • Testing and Debugging: Write and maintain test code to validate functionality. Debug applications and troubleshoot issues as they arise.
  • Collaboration and Communication: Collaborate with cross-functional teams including database engineers, data integration engineers, reporting teams, and product development. Communicate complex data findings in a clear and actionable manner to non-technical stakeholders.
  • Continual Learning: Keep up to date with emerging tools, techniques, and technologies in data engineering, and engage in self-improvement and continuous learning to maintain expertise in the domain.
  • End-to-End Ownership: Maintain an end-to-end understanding of the project and its infrastructure across multiple technologies (Big Data analytics).
  • Problem Solving: Proactively identify problem areas and concerns related to data in the project, explore ways to tackle the issues, and come up with optimal solutions.
  • Documentation: Create FRS/SRS/design documents and other technical documents. Prepare “lessons learned” documentation for projects and engagements. Develop best practices and tools for project execution and management.

Nice to have:
  • Exposure to Azure Cloud.
  • Experience in the travel and logistics domain.
  • Familiarity with data streaming technologies (e.g., Apache Kafka).
  • Exposure to containerisation and orchestration tools (e.g., Docker, Kubernetes).
  • Knowledge of machine learning concepts and frameworks.

Broad experience and expertise requirements: 8 to 11 years of hands-on experience handling large data volumes and data engineering using Big Data, Hadoop (HDFS, Hive, HBase), Scala, Spark (Spark Core, Spark SQL, Spark Streaming), Python, PySpark, SQL, ETL, Databricks, data modelling, Azure Cloud, data pipelines, CI/CD, Docker, containers, Git, etc. Knowledge of and experience in handling structured, semi-structured, and unstructured data sets.

Specific past work experience requirements:
  • 8 years of relevant experience in the above technologies.
  • 8-11 years of consulting experience in the technology domain, handling data projects.
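The pipeline and masking duties above follow the standard extract, transform, load pattern. The sketch below illustrates that pattern together with a simple column-level masking step; it uses plain Python for brevity (the role itself works in Spark/Scala), and the record fields and masking rule are hypothetical examples, not part of this posting.

```python
# Illustrative ETL sketch: extract -> transform (with column-level masking) -> load.
# Plain Python stands in for the Spark/Scala stack; all field names are hypothetical.

def extract(rows):
    """Ingest raw records (in Spark this step would be a DataFrame read)."""
    return [dict(r) for r in rows]

def mask_email(email):
    """Column-level masking: keep the domain, hide most of the local part."""
    local, _, domain = email.partition("@")
    return local[0] + "***@" + domain

def transform(records):
    """Drop rows failing a basic quality check, then mask the sensitive column."""
    out = []
    for r in records:
        if not r.get("email"):
            continue  # incomplete row: exclude from the load
        r["email"] = mask_email(r["email"])
        out.append(r)
    return out

def load(records, sink):
    """Append transformed rows to the target store (here, an in-memory list)."""
    sink.extend(records)
    return len(records)

raw = [{"id": 1, "email": "alice@example.com"}, {"id": 2, "email": ""}]
sink = []
loaded = load(transform(extract(raw)), sink)
```

In Spark the same shape appears as a chain of DataFrame transformations, with the masking expressed as a column expression so it runs in parallel across the cluster.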

Mandatory skill sets:

Big Data, Hadoop (HDFS, Hive, HBase), Scala, Spark

Preferred skill sets:

Azure Cloud, Data Streaming, Docker, Kubernetes

Years of experience required:

8-11 years

Education qualification: BE/BTech, ME/MTech, MBA, MCA

Education (if blank, degree and/or field of study not specified)

Degrees/Field of Study required:

Degrees/Field of Study preferred:

Certifications (if blank, certifications not specified)

Required Skills

Spark SQL

Optional Skills

Desired Languages (If blank, desired languages not specified)

Travel Requirements

Available for Work Visa Sponsorship?

Government Clearance Required?

Job Posting End Date


About The Company

At PwC, our purpose is to build trust in society and solve important problems. We’re a network of firms in 152 countries with over 327,000 people who are committed to delivering quality in assurance, advisory and tax services. Find out more and tell us what matters to you by visiting us at www.pwc.com. PwC refers to the PwC network and/or one or more of its member firms, each of which is a separate legal entity.


Content on this page has been prepared for general information only and is not intended to be relied upon as accounting, tax or professional advice. Please reach out to your advisors for specific advice.

