Middle/Senior Data Engineer


Job Description

N-iX is seeking a Middle/Senior Data Engineer to design, implement, and manage a new Data Lakehouse for an e-commerce client. This role involves building scalable, reliable, and high-quality data solutions on AWS, optimizing ETL pipelines with PySpark, Airflow, and Snowflake, and ensuring data quality. The engineer will collaborate with cross-functional teams to deliver actionable insights and continuously improve operational efficiency.


N-iX is looking for a Middle/Senior Data Engineer who would be involved in designing, implementing, and managing the new Data Lakehouse for our customer in the e-commerce domain. The ideal candidate has hands-on experience with AWS data services, Snowflake, and modern data approaches.

Our Client is a global full-service e-commerce and subscription billing platform on a mission to simplify software sales everywhere. For nearly two decades, we’ve helped SaaS, digital goods, and subscription-based businesses grow by managing payments, global tax compliance, fraud prevention, and recurring revenue at scale. Our flexible, cloud-based platform, combined with consultative services, helps clients accelerate growth, reach new markets, and build long-term customer relationships.

Data is at the heart of everything we do — powering insights, driving innovation, and shaping business decisions. We are building a next-generation data platform, and we’re looking for a Middle/Senior Data Engineer to help us make it happen.

As a Data Engineer, you will play a key role in designing and building our new Data Lakehouse on AWS, enabling scalable, reliable, and high-quality data solutions. You will work closely with senior engineers, data architects, and product managers to create robust data pipelines, develop data products, and optimize storage solutions that support business-critical analytics and decision-making.

Responsibilities:

  • Build and operate a modern Data Lakehouse on AWS (S3 + Iceberg) supporting ingestion, storage, transformation, and serving layers.
  • Design and optimize ETL pipelines using PySpark, Airflow (MWAA), and Snowflake for scalability and cost efficiency.
  • Automate workflows with Python scripts, integration validation, and monitoring across sources and layers.
  • Implement and enforce data quality controls (Glue Data Quality, Great Expectations) and contribute to governance best practices.
  • Collaborate with cross-functional teams (Data and Software Architects, Engineering Managers, Product Owners, and Data/Power BI Engineers) to refine data requirements and deliver trusted and actionable insights.
  • Support CI/CD practices via GitLab, ensuring version-controlled, testable, and auditable data processes.
  • Document data flows and business logic to maintain transparency, lineage, and knowledge transfer across teams.
  • Continuously improve operational efficiency by troubleshooting issues, monitoring performance, and suggesting technical enhancements.
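The data quality controls named above (Glue Data Quality, Great Expectations) share a common pattern: declarative expectations evaluated against a batch of records, with the batch accepted only if every check passes. A minimal pure-Python sketch of that pattern — all function, column, and dataset names are illustrative, not the client's actual checks:

```python
# Sketch of declarative data-quality checks, in the spirit of
# Great Expectations / Glue Data Quality. Names are illustrative.

def expect_not_null(rows, column):
    """Every row must carry a non-None value in `column`."""
    failed = [r for r in rows if r.get(column) is None]
    return {"check": f"not_null:{column}", "passed": not failed,
            "failed_rows": len(failed)}

def expect_unique(rows, column):
    """Values in `column` must be unique across the batch."""
    seen, dupes = set(), 0
    for r in rows:
        v = r.get(column)
        if v in seen:
            dupes += 1
        seen.add(v)
    return {"check": f"unique:{column}", "passed": dupes == 0,
            "failed_rows": dupes}

def run_suite(rows, checks):
    """Run all checks; accept the batch only if every check passes."""
    results = [check(rows) for check in checks]
    return all(r["passed"] for r in results), results

# A tiny batch with two deliberate defects.
orders = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 2, "amount": None},   # fails the not-null check
    {"order_id": 2, "amount": 5.0},    # duplicate order_id
]
ok, results = run_suite(orders, [
    lambda rows: expect_not_null(rows, "amount"),
    lambda rows: expect_unique(rows, "order_id"),
])
print(ok)  # False: both checks fail for this batch
```

In a production pipeline the same idea runs as a Great Expectations suite or a Glue Data Quality ruleset gating each layer before data is promoted downstream.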

Requirements:

  • 3+ years of hands-on experience in Data Engineering, preferably in lakehouse or hybrid architectures.
  • Proficiency in PySpark for large-scale transformations across layered datasets.
  • Experience with Airflow (MWAA) for orchestrating end-to-end pipelines, dependencies, and SLA-driven workloads.
  • Knowledge of AWS services used in modern data platforms: S3 + Iceberg, Glue (Catalog + Data Quality), Athena, EMR.
  • Experience in Snowflake for analytics serving and cross-platform ingestion.
  • Proficiency in Python for automation, validation, and auxiliary data workflows.
  • Understanding of data modeling and harmonization principles, including SCD handling and cross-source entity resolution.
  • Familiarity with CI/CD pipelines in Git/GitLab, ensuring tested, version-controlled, and production-ready deployments.
  • Experience working with BI ecosystems (e.g., Power BI, dbt-like transformations, semantic layers).
  • Upper-Intermediate English or higher, with the ability to document and explain complex concepts.
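The SCD handling called out in the requirements most often means SCD Type 2: when a tracked attribute of a dimension row changes, the current version is closed out (expired) and a new versioned row is appended. A compact pure-Python illustration of that bookkeeping — the table and column names here are hypothetical:

```python
from datetime import date

# Hypothetical customer dimension with SCD Type 2 bookkeeping columns:
# valid_from / valid_to / is_current.
dim = [
    {"customer_id": 42, "country": "DE", "valid_from": date(2023, 1, 1),
     "valid_to": None, "is_current": True},
]

def apply_scd2(dim, update, as_of):
    """Expire the current row for this key and append the new version,
    but only when a tracked attribute ('country') actually changed."""
    for row in dim:
        if row["customer_id"] == update["customer_id"] and row["is_current"]:
            if row["country"] == update["country"]:
                return dim               # no change: keep history as-is
            row["valid_to"] = as_of      # close out the old version
            row["is_current"] = False
    dim.append({**update, "valid_from": as_of,
                "valid_to": None, "is_current": True})
    return dim

# Customer 42 moves from DE to FR on 2024-06-01.
apply_scd2(dim, {"customer_id": 42, "country": "FR"}, date(2024, 6, 1))
current = [r for r in dim if r["is_current"]]
print(len(dim), current[0]["country"])  # 2 FR
```

In the stack this role describes, the same logic would typically run as a PySpark `MERGE INTO` against an Iceberg table or a Snowflake `MERGE` statement rather than an in-memory loop.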

We offer*:

  • Flexible working format: remote, office-based, or a mix of both
  • A competitive salary and good compensation package
  • Personalized career growth
  • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
  • Active tech communities with regular knowledge sharing
  • Education reimbursement
  • Memorable anniversary presents
  • Corporate events and team-building activities
  • Other location-specific benefits

*not applicable for freelancers
