Data Engineering Consultant

Endava

Job Summary

This Data Engineering Consultant role at Endava involves designing, implementing, and optimizing scalable data pipelines and architectures. The consultant bridges the gap between raw data and actionable insights, ensuring robustness, performance, and data governance. Close collaboration with analysts and data scientists is crucial for delivering high-quality solutions aligned with business objectives. Endava, a technology company with over two decades of experience, harnesses world-class engineering and industry expertise to create dynamic platforms and intelligent digital experiences for leading brands, driving innovation and business transformation.

Must Have

  • Design, implement, and optimize scalable data pipelines and architectures.
  • Develop and maintain efficient real-time and batch data pipelines.
  • Utilize frameworks and platforms such as Apache Spark, Databricks, Snowflake, or Airflow (see the sketch after this list).
  • Build ETL/ELT workflows, including validation and cleaning steps.
  • Automate data reconciliation, metadata management, and error handling.
  • Collaborate with Data Scientists, Architects, and Analysts.
  • Apply robust security measures and ensure regulatory compliance (GDPR).
  • Proficiency in programming languages: Python, SQL, Scala, Java.
  • Experience with Big Data technologies: Spark, Hadoop, Databricks, Snowflake.
  • Cloud experience: AWS (Glue, Redshift), Azure (Synapse, Data Factory, Fabric), GCP (BigQuery, Dataflow).
  • Data Modelling & Storage: Relational, NoSQL, Dimensional modelling.
  • DevOps & Automation: Docker, Kubernetes, Terraform, CI/CD for data.
  • Design fault-tolerant, highly available data architectures.
  • Enforce RBAC, encryption, and auditing for data security.
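
To make the pipeline items above concrete, here is a minimal PySpark batch job: read a raw extract, apply basic cleaning, and write a curated copy. It is a sketch only; the paths, column names, and session settings are hypothetical placeholders, not Endava's actual stack.

    # Minimal PySpark batch pipeline: read raw CSV, clean, write Parquet.
    # All paths and column names are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-batch").getOrCreate()

    raw = spark.read.option("header", True).csv("s3://raw-bucket/orders/")

    cleaned = (
        raw.dropDuplicates(["order_id"])                    # de-duplicate on the business key
           .filter(F.col("order_id").isNotNull())           # drop rows missing the key
           .withColumn("amount", F.col("amount").cast("double"))
    )

    cleaned.write.mode("overwrite").parquet("s3://curated-bucket/orders/")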

Perks & Benefits

  • Competitive salary package
  • Share plan
  • Company performance bonuses
  • Value-based recognition awards
  • Referral bonus
  • Career coaching
  • Global career opportunities
  • Non-linear career paths
  • Internal development programmes for management and technical leadership
  • Complex projects
  • Rotations
  • Internal tech communities
  • Training
  • Certifications
  • Coaching
  • Online learning platform subscriptions
  • Pass-it-on sessions
  • Workshops
  • Conferences
  • Hybrid work and flexible working hours
  • Employee assistance programme
  • Global internal wellbeing programme
  • Access to wellbeing apps
  • Hobby clubs and interest groups
  • Inclusion and diversity programmes
  • Events and celebrations

Job Description

Role Overview

A Data Engineering Consultant designs, implements, and optimizes scalable data pipelines and architectures. This role bridges the gap between raw data and actionable insights, ensuring robustness, performance, and data governance. Collaboration with analysts and data scientists is central to delivering high-quality solutions aligned with business objectives.

Key Responsibilities

Data Pipeline Development

  • Architect, implement, and maintain real-time and batch data pipelines that handle large datasets efficiently.
  • Employ frameworks and platforms such as Apache Spark, Databricks, Snowflake, or Airflow to automate ingestion, transformation, and delivery (see the orchestration sketch below).
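
As a hedged sketch of what automated ingestion, transformation, and delivery can look like under Airflow (assuming Airflow 2.4+ for the schedule argument), here is a minimal three-task DAG; the DAG id, schedule, and task bodies are hypothetical placeholders.

    # Minimal Airflow DAG: daily ingest -> transform -> deliver.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def ingest():     ...   # e.g. land source extracts in object storage
    def transform():  ...   # e.g. trigger the Spark job that cleans the data
    def deliver():    ...   # e.g. publish curated tables to the warehouse

    with DAG(dag_id="orders_daily", start_date=datetime(2024, 1, 1),
             schedule="@daily", catchup=False) as dag:
        ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        deliver_task = PythonOperator(task_id="deliver", python_callable=deliver)
        ingest_task >> transform_task >> deliver_task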

Data Integration & Transformation

  • Work with Data Analysts to understand source-to-target mappings and quality requirements.
  • Build ETL/ELT workflows with validation checks and cleaning steps that keep data reliable (see the validation sketch below).
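
One common shape for such validation is to split each load into valid rows and a quarantine set rather than failing the whole batch. The sketch below assumes PySpark; the column names, paths, and 5% threshold are hypothetical.

    # Row-level validation in PySpark: route failures to quarantine.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.read.parquet("s3://landing/customers/")   # hypothetical path

    rules = (F.col("customer_id").isNotNull()
             & F.col("email").contains("@")
             & (F.col("signup_date").cast("date") <= F.current_date()))

    valid = df.filter(rules)
    quarantine = df.filter(~rules)

    # Fail the run only if the reject rate exceeds the agreed threshold.
    if quarantine.count() > 0.05 * df.count():
        raise ValueError("more than 5% of rows failed validation")

    valid.write.mode("append").parquet("s3://curated/customers/")
    quarantine.write.mode("append").parquet("s3://quarantine/customers/")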

Automation & Process Optimization

  • Automate data reconciliation, metadata management, and error-handling procedures (a reconciliation sketch follows below).
  • Continuously refine pipeline performance, scalability, and cost-efficiency.
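
A reconciliation step can be as simple as comparing row counts between the source extract and the loaded target and failing loudly on drift. A minimal stand-alone sketch, with stand-in counts instead of real queries:

    # Source-to-target reconciliation with explicit error handling.
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("reconcile")

    def reconcile(source_count: int, target_count: int, table: str) -> None:
        """Raise if the loaded row count drifts from the source extract."""
        if source_count != target_count:
            log.error("%s: source=%d target=%d", table, source_count, target_count)
            raise RuntimeError(f"row-count mismatch on {table}")
        log.info("%s reconciled: %d rows", table, source_count)

    # In a real pipeline these counts would come from the source system and
    # the warehouse; the values here are stand-ins.
    reconcile(source_count=10_000, target_count=10_000, table="orders")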

Collaboration & Leadership

  • Coordinate with Data Scientists, Data Architects, and Analysts to ensure alignment with business goals.
  • Mentor junior engineers and enforce best practices such as version control and CI/CD for data pipelines (see the test sketch below).
  • Participate in technical presales activities and client engagement initiatives.
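
CI/CD for data pipelines usually means, at minimum, unit tests over pure transformation functions that run on every commit. A hedged pytest sketch, with a hypothetical pandas transform:

    # A unit test a CI job would run on every commit (pytest discovers it).
    import pandas as pd

    def normalise_amounts(df: pd.DataFrame) -> pd.DataFrame:
        """Cast amounts to numeric and drop rows with non-positive values."""
        out = df.assign(amount=pd.to_numeric(df["amount"], errors="coerce"))
        return out[out["amount"] > 0].reset_index(drop=True)

    def test_normalise_amounts_drops_bad_rows():
        df = pd.DataFrame({"amount": ["10.5", "-1", "oops"]})
        assert list(normalise_amounts(df)["amount"]) == [10.5]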

Governance & Compliance

  • Apply robust security measures (RBAC, encryption) and ensure regulatory compliance (GDPR); see the encryption sketch below.
  • Document data lineage and recommend improvements for data ownership and stewardship.
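
For the encryption side of governance, one widely used building block is symmetric field-level encryption via the cryptography package's Fernet API. A minimal sketch; in practice the key would come from a secrets manager (e.g. AWS KMS or Azure Key Vault), never be generated inline:

    # Field-level encryption of a sensitive value with Fernet.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # placeholder: load from a secrets manager
    fernet = Fernet(key)

    token = fernet.encrypt(b"jane.doe@example.com")   # store the token, not the value
    assert fernet.decrypt(token) == b"jane.doe@example.com"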

Qualifications

  • Programming: Python, SQL, Scala, Java.
  • Big Data: Apache Spark, Hadoop, Databricks, Snowflake, etc.
  • Cloud: AWS (Glue, Redshift), Azure (Synapse, Data Factory, Fabric), GCP (BigQuery, Dataflow).
  • Data Modelling & Storage: Relational (PostgreSQL, SQL Server), NoSQL (MongoDB, Cassandra), Dimensional modelling.
  • DevOps & Automation: Docker, Kubernetes, Terraform, CI/CD pipelines for data flows.

Architectural Competencies

  • Data Modelling: Designing dimensional, relational, and hierarchical data models (see the star-schema sketch below).
  • Scalability & Performance: Building fault-tolerant, highly available data architectures.
  • Security & Compliance: Enforcing role-based access control (RBAC), encryption, and auditing.
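
To illustrate the dimensional-modelling competency, here is a hedged PySpark sketch of a star-schema query: a fact table joined to a date dimension on a surrogate key, then aggregated. The table and column names are hypothetical.

    # Star-schema query: fact table joined to a conformed dimension.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    fact_sales = spark.read.parquet("s3://warehouse/fact_sales/")  # hypothetical
    dim_date = spark.read.parquet("s3://warehouse/dim_date/")      # hypothetical

    monthly = (fact_sales.join(dim_date, on="date_key")   # surrogate-key join
                         .groupBy("year", "month")
                         .agg({"net_amount": "sum"}))
    monthly.show()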

Skills Required For This Role

Data Analytics, Internal Audit, PostgreSQL, AWS, NoSQL, Azure, Terraform, Hadoop, Spark, MongoDB, CI/CD, Cassandra, Docker, Kubernetes, Python, Scala, SQL, Java
