Cloud Data Engineer

6 months ago • Up to 1 year of experience

Job Description

PwC seeks skilled Data Engineers with 0-1 year of experience in data engineering, focused on cloud environments (AWS, Azure, Databricks, GCP). Responsibilities include designing, building, and maintaining scalable data pipelines; implementing data ingestion and transformation processes for efficient data warehousing; optimizing Spark job performance; collaborating with cross-functional teams; and working on Spark Streaming for real-time data processing. Proficiency in PySpark or Spark, experience with data ingestion, transformation, and data warehousing, and in-depth knowledge of cloud services are essential. Strong problem-solving skills and teamwork are crucial. Cloud certifications are a plus.
Good To Have:
  • Cloud certification (AWS, Azure, GCP)
  • Familiarity with Spark Streaming
  • SQL
Must Have:
  • Design & maintain scalable data pipelines on cloud platforms
  • Implement data ingestion & transformation processes
  • Optimize Spark job performance for efficiency & reliability
  • Proficiency in PySpark or Spark
  • Experience with data warehousing

Line of Service

Advisory

Industry/Sector

Not Applicable

Specialism

Operations

Management Level

Specialist

Job Description & Summary

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals.

In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC

At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities: 

We are seeking skilled and dynamic Data Engineers. The ideal candidate will have a strong background in data engineering, with a focus on data ingestion, transformation, and warehousing. They should also possess excellent knowledge of PySpark or Spark and a proven ability to optimize the performance of Spark jobs.

Key Responsibilities

- Design, build, and maintain scalable data pipelines for a variety of cloud platforms, including AWS, Azure, Databricks, and GCP.
- Implement data ingestion and transformation processes to facilitate efficient data warehousing (see the illustrative sketch after this list).
- Optimize Spark job performance to ensure high efficiency and reliability.
- Stay proactive in learning and implementing new technologies to improve data processing frameworks.
- Collaborate with cross-functional teams to deliver robust data solutions.
- Work on Spark Streaming for real-time data processing as necessary.
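
By way of illustration only, and not as part of the posting itself, a minimal PySpark batch pipeline covering ingestion, transformation, and a warehouse write might look like the sketch below. All storage paths, table names, and columns are hypothetical assumptions.

```python
# Illustrative PySpark batch pipeline: ingest raw files, transform, load a warehouse table.
# All storage paths, table names, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders-ingestion")  # hypothetical job name
    .getOrCreate()
)

# Ingest: read raw CSV files landed in cloud object storage.
raw = spark.read.option("header", True).csv("s3://raw-zone/orders/")

# Transform: cast types, filter bad records, derive a partition column.
orders = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# One small performance optimization: broadcast a small dimension table
# so the join avoids a full shuffle.
customers = spark.read.parquet("s3://curated-zone/customers/")
enriched = orders.join(F.broadcast(customers), "customer_id", "left")

# Load: write partitioned Parquet for the warehouse layer.
(enriched.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://warehouse-zone/fact_orders/"))
```

The broadcast join hint above is just one example of the kind of Spark performance tuning the role describes.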

Qualifications:

- 0-1 years of experience in data engineering with a strong focus on cloud environments.
- Proficiency in PySpark or Spark.
- Proven experience with data ingestion, transformation, and data warehousing.
- In-depth knowledge of and hands-on experience with cloud services (AWS/Azure/GCP).
- Demonstrated ability in performance optimization of Spark jobs.
- Strong problem-solving skills and the ability to work independently as well as in a team.
- Cloud certification (AWS, Azure, or GCP) is a plus.
- Familiarity with Spark Streaming is a bonus (see the streaming sketch after this list).
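
For the Spark Streaming items above, a brief Structured Streaming sketch is shown below; the Kafka broker, topic, and storage locations are illustrative assumptions rather than details from this posting.

```python
# Illustrative Spark Structured Streaming job: read events from Kafka,
# aggregate per minute, and write results to cloud storage.
# All endpoints and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-streaming").getOrCreate()

# Ingest a stream of JSON events from a Kafka topic.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    .selectExpr("CAST(value AS STRING) AS json")
)

# Parse the fields we need and aggregate per minute, tolerating late data.
parsed = events.select(
    F.get_json_object("json", "$.event_type").alias("event_type"),
    F.get_json_object("json", "$.ts").cast("timestamp").alias("ts"),
)
counts = (
    parsed.withWatermark("ts", "10 minutes")
          .groupBy(F.window("ts", "1 minute"), "event_type")
          .count()
)

# Write the aggregates to the warehouse layer with checkpointing for recovery.
query = (
    counts.writeStream
    .outputMode("append")
    .format("parquet")
    .option("path", "s3://warehouse-zone/event_counts/")
    .option("checkpointLocation", "s3://checkpoints/event_counts/")
    .start()
)
query.awaitTermination()
```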

Mandatory skill sets: 

Python, PySpark, SQL

Preferred skill sets: 

Python, PySpark, SQL

Years of experience required: 

0-1 years 

Education qualification: 

BCA

Education (if blank, degree and/or field of study not specified)

Degrees/Field of Study required: Bachelor of Technology

Degrees/Field of Study preferred:

Certifications (if blank, certifications not specified)

Required Skills

Python Software Development

Optional Skills

Accepting Feedback, Active Listening, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis, Intellectual Curiosity, Java (Programming Language), Market Development {+ 7 more}

Desired Languages (If blank, desired languages not specified)

Travel Requirements

Not Specified

Available for Work Visa Sponsorship?

No

Government Clearance Required?

No

Job Posting End Date
