Responsibilities:
- Build scalable data pipelines using SQL, PySpark, and Delta Live Tables on Databricks (see the pipeline sketch after this list)
- Orchestrate workflows with Databricks Workflows or Airflow, implementing SLA-backed retries and alerting (see the Airflow sketch below)
- Design dimensional models governed by Unity Catalog and validated with Great Expectations
- Deliver Power BI solutions, including dashboards, semantic layers, and paginated reports, and migrate legacy SSRS reports to Power BI
- Optimize compute and cost through cache tuning, partitioning, and capacity monitoring
- Document pipeline logic and RLS rules in Git-controlled formats
- Collaborate cross-functionally and mentor team members
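To give a flavor of the pipeline work, here is a minimal Delta Live Tables sketch in PySpark. The landing path, dataset names, and expectation rule are illustrative assumptions, not details from this posting.

```python
import dlt
from pyspark.sql import functions as F

# NOTE: `spark` is provided implicitly by the Delta Live Tables runtime.

# Bronze: ingest raw JSON order events with Auto Loader (path is hypothetical).
@dlt.table(comment="Raw order events ingested from cloud storage.")
def raw_orders():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/landing/orders")
    )

# Silver: basic cleanup plus a declarative quality expectation;
# rows with a NULL order_id are dropped and counted in pipeline metrics.
@dlt.table(comment="Cleaned orders with quality checks applied.")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
def clean_orders():
    return dlt.read_stream("raw_orders").withColumn(
        "order_ts", F.to_timestamp("order_ts")
    )
```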
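On the orchestration side, the sketch below shows one common way to wire SLA-backed retries and failure alerting into an Airflow DAG. The DAG ID, job ID, schedule, and alert address are placeholders; the operator assumes the apache-airflow-providers-databricks package and Airflow 2.4+.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

# Retry and alerting defaults applied to every task in the DAG.
default_args = {
    "owner": "analytics-engineering",      # illustrative owner
    "retries": 3,                          # retry transient failures
    "retry_delay": timedelta(minutes=10),
    "email_on_failure": True,
    "email": ["data-alerts@example.com"],  # hypothetical alert address
    "sla": timedelta(hours=2),             # flag runs that miss the SLA
}

with DAG(
    dag_id="nightly_orders_pipeline",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",                  # nightly at 02:00
    catchup=False,
    default_args=default_args,
) as dag:
    # Trigger a pre-configured Databricks job by its job ID.
    run_pipeline = DatabricksRunNowOperator(
        task_id="run_orders_pipeline",
        databricks_conn_id="databricks_default",
        job_id=12345,                      # illustrative job ID
    )
```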
Must Have:
- 5+ years in analytics engineering
- 3+ years in production Databricks/Spark environments
- Advanced SQL and PySpark
- Delta Lake and Unity Catalog expertise
- Power BI mastery (DAX, security, paginated reports)
- SSRS to Power BI migration experience
- Git, CI/CD, and cloud platform familiarity
- Strong communication skills
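As a sketch of the Delta Lake and partitioning skills listed above, the snippet below writes a date-partitioned Unity Catalog table and then compacts it; the table and column names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Write a fact table partitioned by date so time-filtered queries prune scans.
(
    spark.table("main.silver.clean_orders")  # hypothetical source table
    .write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("main.gold.fct_orders")     # hypothetical Unity Catalog target
)

# Compact small files and co-locate rows by a frequently filtered column.
spark.sql("OPTIMIZE main.gold.fct_orders ZORDER BY (customer_id)")
```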
Perks:
- Competitive salary
- Comprehensive health insurance
- 401(k) plan
- Flexible work hours
- Professional development opportunities