PaPM

4 Months ago • 4-12 Years

Job Description

Data engineers are responsible for building reliable and scalable data infrastructure. The role involves leading and managing a team of data engineers, overseeing data engineering projects, and ensuring technical excellence, playing a critical part in driving the success of data engineering initiatives and delivering high-quality data solutions. Key responsibilities include building data pipelines, designing data models, and collaborating with stakeholders to deliver data-driven solutions. The job requires expertise in the data engineering tools and technologies listed in the skills section to derive meaningful insights and support data-driven decisions.
Good To Have:
  • Experience with AWS Airflow
  • Experience with GCP BigQuery
  • Experience with Informatica IICS
  • Experience with Jenkins
Must Have:
  • Experience with Ab Initio
  • Knowledge of Agile methodologies
  • Familiarity with Apache Hadoop
  • Experience with AWS services
  • Experience with Azure services
  • Experience with Data Modeling
  • Experience with DevOps

Add these skills to join the top 1% applicants for this job

team-management
timeline-management
oracle
github
agile-development
linux
aws
azure
hadoop
spark
ci-cd
git
python
sql
shell
scala
perl
bitbucket
javascript
jenkins

Job Description

Data engineers are responsible for building reliable and scalable data infrastructure that enables organizations to derive meaningful insights, make data-driven decisions, and unlock the value of their data assets.

Job Description - Grade Specific

This role involves leading and managing a team of data engineers, overseeing data engineering projects, ensuring technical excellence, and fostering collaboration with stakeholders. The role holder plays a critical part in driving the success of data engineering initiatives and ensuring the delivery of reliable, high-quality data solutions that support the organization's data-driven objectives.

Skills (competencies)

Ab Initio
Agile (Software Development Framework)
Apache Hadoop
AWS Airflow
AWS Athena
AWS Code Pipeline
AWS EFS
AWS EMR
AWS Redshift
AWS S3
Azure ADLS Gen2
Azure Data Factory
Azure Data Lake Storage
Azure Databricks
Azure Event Hub
Azure Stream Analytics
Azure Synapse
Bitbucket
Change Management
Client Centricity
Collaboration
Continuous Integration and Continuous Delivery (CI/CD)
Data Architecture Patterns
Data Format Analysis
Data Governance
Data Modeling
Data Validation
Data Vault Modeling
Database Schema Design
Decision-Making
DevOps
Dimensional Modeling
GCP Big Table
GCP BigQuery
GCP Cloud Storage
GCP DataFlow
GCP DataProc
Git
Google Bigtable
Google Dataproc
Greenplum
HQL
IBM Data Stage
IBM DB2
Industry Standard Data Modeling (FSLDM)
Industry Standard Data Modeling (IBM FSDM)
Influencing
Informatica IICS
Inmon methodology
JavaScript
Jenkins
Kimball
Linux - Red Hat
Negotiation
Netezza
NewSQL
Oracle Exadata
Performance Tuning
Perl
Platform Update Management
Project Management
PySpark
Python
R
RDD Optimization
CentOS
SAS
Scala Spark
Shell Script
Snowflake
Spark
Spark Code Optimization
SQL
Stakeholder Management
Sun Solaris
Synapse
Talend
Teradata
Time Management
Ubuntu
Vendor Management
