Toppan Merrill is seeking a Senior Data Engineer to develop and optimize ETL/ELT data pipelines using Azure Databricks and Apache Spark (PySpark). The role involves implementing Unity Catalog for data governance, integrating with cloud data platforms like Azure Data Lake Storage, and automating workflows with CI/CD pipelines. The engineer will collaborate with data analysts, scientists, and business teams to translate requirements into scalable data engineering solutions, ensuring data quality and secure data sharing.
Job Description:
Responsibilities:
Develop & Optimize Data Pipelines
Build, test, and maintain ETL/ELT data pipelines using Azure Databricks & Apache Spark (PySpark).
Optimize performance and cost-efficiency of Spark jobs.
Ensure data quality through validation, monitoring, and alerting mechanisms.
Understand cluster types, configurations, and the appropriate use cases for serverless compute.
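By way of illustration (not a prescribed implementation), the sketch below shows the kind of PySpark ETL step this role involves: reading raw files from Azure Data Lake Storage, applying basic data-quality validation, and writing a curated Delta table. The storage path, table names, and columns are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Read raw JSON landed in ADLS Gen2 (hypothetical container and path).
raw = spark.read.format("json").load(
    "abfss://raw@examplestorage.dfs.core.windows.net/orders/"
)

# Basic data-quality validation: require a key, normalize types, reject negative amounts.
curated = (
    raw.dropna(subset=["order_id"])
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("amount") >= 0)
    .withColumn("ingested_at", F.current_timestamp())
)

# Persist as a Delta table for ACID guarantees and downstream consumption.
curated.write.format("delta").mode("overwrite").saveAsTable("curated.orders")
```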
Implement Unity Catalog for Data Governance
Design and enforce access control policies using Unity Catalog.
Manage data lineage, auditing, and metadata governance.
Enable secure data sharing across teams and external stakeholders.
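As a minimal sketch of the governance work described above, access policies in Unity Catalog can be expressed as SQL grants and audited in place. This assumes a Unity Catalog-enabled Databricks workspace; the catalog, schema, and group names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Grant an analyst group read-only access to a curated schema (hypothetical names).
spark.sql("GRANT USE CATALOG ON CATALOG sales TO `data-analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA sales.curated TO `data-analysts`")
spark.sql("GRANT SELECT ON SCHEMA sales.curated TO `data-analysts`")

# Audit the resulting grants on the schema.
spark.sql("SHOW GRANTS ON SCHEMA sales.curated").show(truncate=False)
```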
Integrate with Cloud Data Platforms
Work with Azure Data Lake Storage / Azure Blob Storage / Azure Event Hub to integrate Databricks with cloud-based data lakes, data warehouses, and event streams.
Implement Delta Lake for scalable, ACID-compliant storage.
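The following is an illustrative, not prescriptive, Delta Lake pattern for the integration work above: an incremental upsert via MERGE plus a time-travel read for auditing. All paths and table names are hypothetical.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Incremental changes landed by an upstream feed (hypothetical path).
updates = spark.read.format("parquet").load(
    "abfss://landing@examplestorage.dfs.core.windows.net/customers/"
)

# ACID upsert into the curated Delta table, keyed on customer_id.
target = DeltaTable.forName(spark, "curated.customers")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Time travel: reproduce the table as of an earlier version for auditing.
snapshot = spark.sql("SELECT * FROM curated.customers VERSION AS OF 0")
```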
Automate & Orchestrate Workflows
Develop CI/CD pipelines for data workflows using Azure Databricks Workflows or Azure Data Factory.
Monitor and troubleshoot failures in job execution and cluster performance.
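A minimal sketch of the orchestration and failure-monitoring side of this work, assuming the databricks-sdk Python package and an existing Databricks Workflows job; the job ID shown is hypothetical.

```python
from databricks.sdk import WorkspaceClient

# Credentials are resolved from the environment or a Databricks config profile.
w = WorkspaceClient()

# Trigger an existing job and wait for it to finish (job ID is hypothetical).
run = w.jobs.run_now(job_id=123).result()
print(run.state.life_cycle_state, run.state.result_state)

# Simple monitoring sweep: surface failed runs for alerting.
for r in w.jobs.list_runs(job_id=123, limit=10):
    if r.state and r.state.result_state and r.state.result_state.value == "FAILED":
        print(f"Run {r.run_id} failed: {r.state.state_message}")
```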
Collaborate with Stakeholders
Work with Data Analysts, Scientists, and Business Teams to understand requirements.
Translate business needs into scalable data engineering solutions.
API Expertise
Ability to pull data from a wide variety of APIs using different strategies and methods.
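To illustrate the API-extraction expertise called for above, here is a minimal sketch of pulling a cursor-paginated REST API into a Delta table with the requests library; the endpoint, auth scheme, and field names are all hypothetical.

```python
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def fetch_all(url: str, token: str) -> list:
    """Pull every page from a cursor-paginated REST endpoint (hypothetical API shape)."""
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {token}"
    records, params = [], {}
    while True:
        resp = session.get(url, params=params, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        records.extend(payload["items"])
        cursor = payload.get("next_cursor")
        if not cursor:
            break
        params["cursor"] = cursor
    return records

# Land the raw records in a Delta table for downstream transformation.
rows = fetch_all("https://api.example.com/v1/invoices", token="<redacted>")
spark.createDataFrame(rows).write.format("delta").mode("append").saveAsTable("raw.invoices")
```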
Required Skills & Experience:
Azure Databricks & Apache Spark (PySpark) – Strong experience in building distributed data pipelines.
Python – Proficiency in writing optimized and maintainable Python code for data engineering.
Unity Catalog – Hands-on experience implementing data governance, access controls, and lineage tracking.
SQL – Strong knowledge of SQL for data transformations and optimizations.
Delta Lake – Understanding of time travel, schema evolution, and performance tuning.
Workflow Orchestration – Experience with Azure Databricks Jobs or Azure Data Factory.
CI/CD & Infrastructure as Code (IaC) – Familiarity with the Databricks CLI, Databricks Asset Bundles (DABs), and DevOps principles.
Security & Compliance – Knowledge of IAM, role-based access control (RBAC), and encryption.
Preferred Qualifications:
Experience with MLflow for model tracking & deployment in Databricks.
Familiarity with streaming technologies (Kafka, Delta Live Tables, Azure Event Hub, Azure Event Grid).
Hands-on experience with dbt (Data Build Tool) for modular ETL development.
Certification in Databricks or Azure is a plus.
Experience with Azure Databricks Lakehouse connectors for Salesforce and SQL Server.
Experience with Azure Synapse Link for Dynamics 365 and Dataverse.
Familiarity with other data pipeline tools and strategies, such as Azure Functions, Microsoft Fabric, and Azure Data Factory.
Soft Skills:
Strong problem-solving and debugging skills.
Ability to work independently and in teams.
Excellent communication and documentation skills.