Databricks Engineer

1 Day ago • 5+ Years • Atlanta, Georgia, United States • Software Development & Engineering


Job Details

Position Overview:
ShyftLabs is seeking a skilled Databricks Engineer to design, develop, and optimize big data solutions using the Databricks Unified Analytics Platform. This role requires strong expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to drive data-driven insights and ensure scalable, high-performance data architectures.

ShyftLabs is a growing data product company, founded in early 2020, that works primarily with Fortune 500 companies. We deliver digital solutions built to accelerate the growth of businesses across industries by focusing on creating value through innovation.


Job Responsibilities

    • Design, implement, and optimize big data pipelines in Databricks.
    • Develop scalable ETL workflows to process large datasets.
    • Leverage Apache Spark for distributed data processing and real-time analytics.
    • Implement data governance, security policies, and compliance standards.
    • Optimize data lakehouse architectures for performance and cost-efficiency.
    • Collaborate with data scientists, analysts, and engineers to enable advanced AI/ML workflows.
    • Monitor and troubleshoot Databricks clusters, jobs, and performance bottlenecks.
    • Automate workflows using CI/CD pipelines and infrastructure-as-code practices.
    • Ensure data integrity, quality, and reliability in all pipelines.

Basic Qualifications

    • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
    • 5+ years of hands-on experience with Databricks and Apache Spark.
    • Proficiency in SQL, Python, or Scala for data processing and analysis.
    • Experience with cloud platforms (AWS, Azure, or GCP) for data engineering.
    • Strong knowledge of ETL frameworks, data lakes, and Delta Lake architecture.
    • Experience with CI/CD tools and DevOps best practices.
    • Familiarity with data security, compliance, and governance best practices.
    • Strong problem-solving and analytical skills with an ability to work in a fast-paced environment.

Preferred Qualifications

    • Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer).
    • Hands-on experience with MLflow, Feature Store, or Databricks SQL.
    • Exposure to Kubernetes, Docker, and Terraform.
    • Experience with streaming data architectures (Kafka, Kinesis, etc.).
    • Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker).
    • Prior experience working with retail, e-commerce, or ad-tech data platforms.


We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.


About The Company

Here at ShyftLabs, we build data products to help enterprises deliver real impact through tailored data analytics, science, and AI solutions. From consulting to operations, we guide our customers through their data journey and ensure they are data and AI-empowered.

