IN_Senior Associate_Snowflake -- Data and Analytics_Advisory_Bangalore


Job Description

At PwC, our data and analytics engineers design and develop robust data solutions, transforming raw data into actionable insights for informed decision-making and business growth. This role focuses on building data infrastructure, systems, and implementing data pipelines, integration, and transformation. We seek skilled Cloud Data Engineers specializing in AWS, Azure, Databricks, and GCP, with a strong background in data ingestion, transformation, and warehousing, and expertise in PySpark or Spark for performance optimization.
Perks:
  • Inclusive benefits
  • Flexibility programmes
  • Mentorship
  • Support for wellbeing


Line of Service

Advisory

Industry/Sector

Not Applicable

Specialism

Data, Analytics & AI

Management Level

Senior Associate

Job Description & Summary

At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth.

In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC

At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.


Responsibilities:

Cloud Data Engineer (AWS/Azure/Databricks/GCP)

Experience: 3-8 years in data engineering

We are seeking skilled and dynamic Cloud Data Engineers specializing in AWS, Azure, Databricks, and GCP. The ideal candidate will have a strong background in data engineering, with a focus on data ingestion, transformation, and warehousing, along with excellent knowledge of PySpark or Spark and a proven ability to optimize Spark job performance.

Key Responsibilities:

  • Design, build, and maintain scalable data pipelines for a variety of cloud platforms including AWS, Azure, Databricks, and GCP.
  • Implement data ingestion and transformation processes to facilitate efficient data warehousing.
  • Utilize cloud services to enhance data processing capabilities:
      • AWS: Glue, Athena, Lambda, Redshift, Step Functions, DynamoDB, SNS.
      • Azure: Data Factory, Synapse Analytics, Functions, Cosmos DB, Event Grid, Logic Apps, Service Bus.
      • GCP: Dataflow, BigQuery, DataProc, Cloud Functions, Bigtable, Pub/Sub, Data Fusion.
  • Optimize Spark job performance to ensure high efficiency and reliability.
  • Stay proactive in learning and implementing new technologies to improve data processing frameworks.
  • Collaborate with cross-functional teams to deliver robust data solutions.
  • Work on Spark Streaming for real-time data processing as necessary.
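The ingestion-and-transformation pattern described in the responsibilities above can be sketched in a few lines of Python. This is only an illustrative sketch: an in-memory SQLite database stands in for a cloud warehouse such as Redshift, Synapse, or BigQuery, and all table and column names are hypothetical.

```python
import sqlite3

# In-memory SQLite stands in for a cloud warehouse (Redshift/Synapse/BigQuery).
conn = sqlite3.connect(":memory:")

# Ingest: load raw event rows as they might arrive from an upstream source.
conn.execute("CREATE TABLE raw_events (user_id INTEGER, amount REAL, country TEXT)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?, ?)",
    [(1, 10.0, "IN"), (1, 5.5, "IN"), (2, 7.25, "US"), (3, 2.0, "US")],
)

# Transform: aggregate the raw events into a warehouse-style summary table.
conn.execute("""
    CREATE TABLE sales_by_country AS
    SELECT country, COUNT(*) AS orders, ROUND(SUM(amount), 2) AS revenue
    FROM raw_events
    GROUP BY country
    ORDER BY country
""")

print(conn.execute("SELECT * FROM sales_by_country").fetchall())
# [('IN', 2, 15.5), ('US', 2, 9.25)]
```

In a real pipeline the same ingest/transform/load steps would run as PySpark jobs or SQL against the warehouse, orchestrated by a scheduler such as Airflow, Azure Data Factory, or GCP Dataflow.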

Qualifications:

  • 3-8 years of experience in data engineering with a strong focus on cloud environments.
  • Proficiency in PySpark or Spark is mandatory.
  • Proven experience with data ingestion, transformation, and data warehousing.
  • In-depth knowledge and hands-on experience with cloud services (AWS/Azure/GCP).
  • Demonstrated ability in performance optimization of Spark jobs.
  • Strong problem-solving skills and the ability to work independently as well as in a team.
  • Cloud Certification (AWS, Azure, or GCP) is a plus.
  • Familiarity with Spark Streaming is a bonus.

Mandatory skill sets:

Python, PySpark, SQL with AWS, Azure, or GCP

Preferred skill sets:

Python, PySpark, SQL with AWS, Azure, or GCP

Years of experience required:

3-8 years

Education qualification:

BE/BTech, ME/MTech, MBA, MCA

Education

Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering, MBA (Master of Business Administration)

Degrees/Field of Study preferred:

Certifications

Required Skills

Snowflake (Platform)

Optional Skills

Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 28 more}

Desired Languages

Travel Requirements

Not Specified

Available for Work Visa Sponsorship?

No

Government Clearance Required?

No
