Middle/Senior Data Engineer for Technology Office (#2476)

3-7 Years • Data Engineer

About the job

Summary

N-iX seeks a Middle/Senior Data Engineer proficient in Databricks, DBT, and Python to design, develop, and maintain efficient, scalable data pipelines. Responsibilities include integrating data across various sources, implementing data quality and security best practices, optimizing pipeline performance, troubleshooting issues, and maintaining comprehensive documentation. The ideal candidate has strong data engineering experience, understands data architecture, and collaborates effectively with cross-functional teams. Experience with cloud platforms (AWS, Azure, GCP), SQL, data modeling, and relational databases (RDBMS) is essential. The role involves migrating data to modern cloud platforms and staying current with the latest advances in data engineering.
Must have:
  • Databricks expertise (ETL, migration)
  • DBT proficiency (Snowflake, Databricks)
  • Advanced Python (data processing, ETL)
  • Cloud platform experience (AWS, Azure, GCP)
  • Strong SQL skills
  • Data modeling & schema design
Good to have:
  • Spark knowledge
  • Data governance frameworks
  • Data quality & security principles
  • Delta Lake, Iceberg, Parquet knowledge
  • CI/CD for data pipelines
Perks:
  • Flexible working format
  • Competitive salary
  • Personalized career growth
  • Professional development tools
  • Education reimbursement
  • Corporate events

N-iX is a software development service company with a 21-year history, leveraging Eastern European talent to serve Fortune 500 companies and tech startups. We operate in nine countries and employ over 2,000 professionals. Our Data and Analytics practice, within the Technology Office, specializes in data strategy, governance, and platforms, shaping the future for our clients.

We are seeking a Middle/Senior Data Engineer with expertise in Databricks, DBT, and Python to help us build and maintain efficient, scalable, and reliable data pipelines. The ideal candidate will have a strong background in data engineering, a deep understanding of data architecture, and the ability to work collaboratively with cross-functional teams to deliver impactful data solutions.

Responsibilities:

  • Design and Develop Data Pipelines: Build, maintain, and optimize scalable data pipelines using Databricks, DBT, and Python (see the pipeline sketch after this list)
  • Data Integration: Collaborate with data scientists, analysts, and other stakeholders to ensure seamless data integration across various sources and systems
  • Data Quality: Implement best practices for data quality, data governance, and data security to ensure the reliability and accuracy of data
  • Performance Optimization: Optimize performance of data processing and storage solutions to meet the requirements of low-latency and high-throughput applications
  • Troubleshooting: Troubleshoot and resolve any issues related to data pipelines, data transformations, and data storage
  • Documentation: Maintain comprehensive documentation of data pipelines, data models, and ETL processes
  • Stay Current: Stay up-to-date with the latest trends and advancements in data engineering and related technologies
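
For illustration, here is a minimal sketch of the kind of pipeline this role covers: a PySpark ETL job that lands raw JSON as a Delta table. It assumes a Databricks (or local delta-spark) environment; the storage path, column names, and table name are all hypothetical.

    from pyspark.sql import SparkSession, functions as F

    # On Databricks a `spark` session is provided; building one here keeps
    # the sketch self-contained.
    spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

    # Extract: read raw JSON events from cloud storage (hypothetical path)
    raw = spark.read.json("s3://example-bucket/raw/orders/")

    # Transform: drop incomplete records, type the timestamp, deduplicate
    orders = (
        raw.filter(F.col("order_id").isNotNull())
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .dropDuplicates(["order_id"])
    )

    # Load: persist as a Delta table for downstream DBT models
    orders.write.format("delta").mode("overwrite").saveAsTable("bronze.orders")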

Requirements:

  • Extensive experience with Databricks, including ETL processes and data migration; certification as a Databricks Engineer is preferred
  • Proficiency in using DBT (Data Build Tool) for transforming data in the warehouse (Snowflake, Databricks); see the model sketch after this list
  • Advanced programming skills in Python for data processing, ETL, and integration tasks
  • Experience with cloud platforms such as AWS, Azure, or GCP
  • Strong knowledge of SQL for querying and manipulating data
  • Proficiency in data modeling and schema design for relational and non-relational databases
  • Excellent problem-solving skills with the ability to analyze complex data issues and deliver effective solutions
  • Experience with relational database systems (RDBMS) and migrating data to modern cloud platforms
  • Strong interpersonal and communication skills to work effectively with cross-functional teams
  • English – Upper-Intermediate
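
As a concrete example of the DBT requirement above, here is a minimal sketch of a dbt Python model, assuming a Databricks target (where dbt.ref() returns a Spark DataFrame). The model, table, and column names are hypothetical.

    # models/marts/customer_orders.py -- a minimal dbt Python model sketch
    import pyspark.sql.functions as F

    def model(dbt, session):
        # Materialize the result as a table in the warehouse
        dbt.config(materialized="table")

        # Reference an upstream staging model (hypothetical name)
        orders = dbt.ref("stg_orders")

        # Aggregate order counts and lifetime value per customer
        return (
            orders.groupBy("customer_id")
                  .agg(
                      F.count("order_id").alias("order_count"),
                      F.sum("amount").alias("lifetime_value"),
                  )
        )

A SQL model with Jinja ref() calls is the more common DBT idiom; the Python form is shown here to keep all sketches in one language.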

Good to have:

  • Experience with big data technologies such as Spark
  • Knowledge of data governance frameworks, data quality management practices, and data security principles
  • Knowledge of popular data standards and formats (e.g., Delta Lake, Iceberg, Parquet, JSON, XML); see the upsert sketch after this list
  • Knowledge of continuous integration and continuous deployment (CI/CD) practices for data pipelines
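
To illustrate the Delta Lake point above, here is a sketch of an incremental upsert using the Delta MERGE API, the kind of idempotent load step that is straightforward to exercise from a CI/CD pipeline. It assumes a Databricks or delta-spark environment and reuses the hypothetical bronze.orders table from the earlier sketch.

    from pyspark.sql import SparkSession
    from delta.tables import DeltaTable

    spark = SparkSession.builder.getOrCreate()

    # New batch of records from a Parquet landing zone (hypothetical path)
    updates = spark.read.parquet("s3://example-bucket/landing/orders/")

    # Upsert into the existing Delta table, keyed on order_id
    target = DeltaTable.forName(spark, "bronze.orders")
    (
        target.alias("t")
              .merge(updates.alias("u"), "t.order_id = u.order_id")
              .whenMatchedUpdateAll()
              .whenNotMatchedInsertAll()
              .execute()
    )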

We offer:

  • Flexible working format - remote, office-based, or hybrid
  • A competitive salary and a good compensation package
  • Personalized career growth
  • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
  • Active tech communities with regular knowledge sharing
  • Education reimbursement
  • Memorable anniversary presents
  • Corporate events and team buildings
  • Other location-specific benefits