Strong Middle Data Engineer (Azure Databricks)

Job Description

We are seeking a strong Middle Data Engineer with expertise in Python and PySpark to design and implement robust data solutions on Azure. Responsibilities include developing, optimizing, and maintaining complex data pipelines using Azure Databricks, implementing user stories, designing scalable Big Data solutions in Azure, ensuring pipeline reliability and performance, collaborating with cross-functional teams, applying OOP and software engineering principles, contributing to GitLab CI/CD, exploring LangChain integration, and resolving technical problems. The ideal candidate possesses 4+ years of hands-on data engineering experience, strong Python, PySpark, and Azure Databricks expertise, and a solid understanding of Azure, GitLab, and OOP concepts. Excellent communication and problem-solving skills are essential.
Good To Have:
  • LangChain knowledge
Must Have:
  • 4+ years Data Engineering experience
  • Python, PySpark, Azure Databricks expertise
  • Azure Big Data integration experience
  • Build and maintain production pipelines
  • Azure, GitLab, OOP knowledge
  • Strong analytical & problem-solving skills
Perks:
  • Flexible working format
  • Competitive salary & compensation
  • Personalized career growth
  • Professional development tools
  • Education reimbursement
  • Corporate events

We are looking for a strong Middle Data Engineer with deep expertise in Python and PySpark to join our team. This role is ideal for someone who thrives on designing and implementing robust data solutions in the cloud, with a focus on Azure ecosystems and modern data engineering practices. You’ll work on challenging projects, building and maintaining critical data pipelines, and delivering high-quality software solutions that drive business value.

Responsibilities:

  • Develop, optimize, and maintain complex data pipelines using Azure Databricks, Python, and PySpark.
  • Implement and deliver complex user stories with hands-on coding and design.
  • Design and manage scalable Big Data solutions in Azure cloud environments.
  • Ensure reliability and performance of data workflows and proactively monitor critical pipelines.
  • Collaborate with cross-functional teams to define, design, and ship new features.
  • Apply best practices in OOP and software engineering principles.
  • Contribute to version control and CI/CD processes using GitLab.
  • Explore and integrate LangChain components.
  • Analyze and resolve complex technical problems with a focus on continuous improvement.

Requirements

  • 4+ years of hands-on experience in data engineering.
  • Strong expertise with Python, PySpark, and Azure Databricks.
  • Solid experience in Cloud-based Big Data integration, specifically within the Azure ecosystem.
  • Strong data engineering mindset with experience building and maintaining production-level pipelines.
  • Solid knowledge of Azure, GitLab, and OOP concepts.
  • Strong analytical and problem-solving skills.
  • Excellent communication and collaboration abilities.
  • Upper-intermediate English or higher.
  • Passionate about delivering high-quality solutions and driving innovation.
  • Knowledge of LangChain would be a plus.

We offer:

  • Flexible working format: remote, office-based, or flexible
  • A competitive salary and good compensation package
  • Personalized career growth
  • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
  • Active tech communities with regular knowledge sharing
  • Education reimbursement
  • Memorable anniversary presents
  • Corporate events and team buildings
  • Other location-specific benefits

Set alerts for new jobs by N-ix
Set alerts for new jobs in Poland