Senior Data Engineer - R01557053

Brillio

Job Summary

Brillio is seeking a Senior Data Engineer with expertise in the Azure Data Stack. This role involves designing, building, and optimizing large-scale data pipelines and analytics solutions. Key responsibilities include developing and maintaining data pipelines using Azure Data Factory, Databricks, and Synapse Analytics, working with structured and unstructured data, and optimizing data models for scalability and performance. The engineer will collaborate with data scientists, BI developers, and business teams, and ensure compliance with enterprise data governance and security standards.

Must Have

  • Design, build, and optimize large-scale data pipelines and analytics solutions.
  • Develop and maintain data pipelines using Azure Data Factory, Databricks, and Synapse Analytics.
  • Work with structured/unstructured data for analytics and reporting.
  • Optimize data models and storage for scalability and performance.
  • Ensure compliance with enterprise data governance and security standards.
  • Hands-on expertise with Azure Data Factory, Azure Databricks, Synapse, Data Lake.
  • Strong proficiency in SQL, Python, PySpark.
  • Experience in ETL/ELT pipeline design, data warehousing, and real-time streaming.
  • Knowledge of Azure DevOps, CI/CD, and cloud data architecture best practices.

Good to Have

  • Working knowledge of Microsoft Fabric

Job Description

Primary Skills

  • AKS, Event Hub, Azure DevOps, Cosmos DB, Azure Functions

Job requirements

  • Brillio is looking for a Data Engineer with Azure Data Stack expertise to design, build, and optimize large-scale data pipelines and analytics solutions.
  • Key Responsibilities:
      • Develop and maintain data pipelines using Azure Data Factory, Databricks, and Synapse Analytics.
      • Work with structured/unstructured data for analytics and reporting.
      • Optimize data models and storage for scalability and performance.
      • Collaborate with data scientists, BI developers, and business teams.
      • Ensure compliance with enterprise data governance and security standards.
  • Required Skills:
      • Hands-on expertise with Azure Data Factory, Azure Databricks, Synapse, and Data Lake, plus working knowledge of Microsoft Fabric.
      • Strong proficiency in SQL, Python, and PySpark.
      • Experience in ETL/ELT pipeline design, data warehousing, and real-time streaming.
      • Knowledge of Azure DevOps, CI/CD, and cloud data architecture best practices.
      • Excellent communication and problem-solving abilities.

