Job Description
Join a cutting-edge initiative focused on predictive maintenance for jet engines. You’ll be part of a dynamic team developing scalable data solutions that power real-time analytics and decision-making in aviation technology.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Databricks and Apache Spark.
- Implement complex ETL workflows and optimize data transformations for performance and efficiency.
- Tune Spark jobs and optimize resource utilization to improve pipeline performance.
- Collaborate with data scientists, analysts, and stakeholders to deliver high-quality datasets.
- Ensure data security, governance, and compliance with industry standards.
- Troubleshoot production issues and implement robust monitoring solutions.
Tech Stack:
- Azure Cloud, Databricks, Python, .NET, SQL, Terraform, Azure DevOps
Qualifications:
- Strong proficiency in SQL.
- Hands-on experience with Python development.
- Proven background in Data Engineering.
- Familiarity with Big Data processing standards and tools.
- Experience working with Databricks (preferably on Azure).
Nice-to-Have Skills:
- Experience with Spark for data processing.
- Knowledge of Azure storage services.
- Familiarity with tools like Jupyter, Pandas, or Dask.
Additional Information:
Hybrid work – 2 days per week at the office in Warsaw, Katowice, Poznań, Lublin, Rzeszów, or Łódź.