Line of Service
Advisory

Industry/Sector
Not Applicable

Specialism
SAP

Management Level
Senior Associate

Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
● Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources (see the illustrative sketch after this list).
● Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics.
● Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements.
● Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness.
● Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance.
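For illustration only, a minimal PySpark sketch of the kind of ETL pipeline described above: it reads raw JSON from Azure Data Lake Storage, applies a simple transformation, and writes the result back out as Parquet. The storage account, container, and column names are hypothetical placeholders, not part of the role description.

from pyspark.sql import SparkSession, functions as F

# Build (or reuse) a Spark session; on Azure Databricks a session is already provided as `spark`.
spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Hypothetical ADLS Gen2 locations -- replace with real storage account / container paths.
raw_path = "abfss://raw@examplestorageaccount.dfs.core.windows.net/orders/"
curated_path = "abfss://curated@examplestorageaccount.dfs.core.windows.net/orders/"

# Extract: load semi-structured JSON order events.
orders = spark.read.json(raw_path)

# Transform: keep completed orders, derive a date column, and aggregate revenue per customer per day.
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("daily_revenue"))
)

# Load: write the curated result as Parquet, partitioned by date for efficient downstream queries.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(curated_path)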
Requirements:
● Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark.
● Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database.
● Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices.
● Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL); a short dimensional-modeling sketch follows this list.
● Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.
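As a purely illustrative companion to the dimensional-modeling point above, the sketch below joins a hypothetical fact table to a date dimension in PySpark and rolls a measure up to quarter grain; the table paths and column names are assumptions, not part of the role.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

# Hypothetical curated-zone tables following a simple star schema.
fact_sales = spark.read.parquet("abfss://curated@examplestorageaccount.dfs.core.windows.net/fact_sales/")
dim_date = spark.read.parquet("abfss://curated@examplestorageaccount.dfs.core.windows.net/dim_date/")

# Join the fact table to its date dimension on the surrogate key,
# then aggregate the sales measure to fiscal-quarter grain.
quarterly_sales = (
    fact_sales
    .join(dim_date, on="date_key", how="inner")
    .groupBy("fiscal_year", "fiscal_quarter")
    .agg(F.sum("sales_amount").alias("total_sales"))
    .orderBy("fiscal_year", "fiscal_quarter")
)

quarterly_sales.show()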
Mandatory skill sets:
Spark, PySpark, Azure
Preferred skill sets:
Spark, PySpark, Azure
Years of experience required:
4 - 8
Education qualification:
B.Tech / M.Tech / MBA / MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills
Microsoft Azure Databricks, PySpark
Optional Skills
Desired Languages (If blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date
At PwC, our purpose is to build trust in society and solve important problems. We’re a network of firms in 152 countries with over 327,000 people who are committed to delivering quality in assurance, advisory and tax services. Find out more and tell us what matters to you by visiting us at www.pwc.com. PwC refers to the PwC network and/or one or more of its member firms, each of which is a separate legal entity.
Content on this page has been prepared for general information only and is not intended to be relied upon as accounting, tax or professional advice. Please reach out to your advisors for specific advice.