Architect and maintain end-to-end scalable data pipelines using AWS services (S3, Athena, Redshift, etc.);
Design and implement transformation pipelines with DBT and ensure efficient orchestration with Airflow;
Optimize data lake architecture, ensuring scalability, security, and performance;
Develop and manage Kafka Connect ingestion pipelines, handling complex data sources;
Work closely with business stakeholders to ensure alignment of data solutions with organizational goals;
Provide technical leadership, mentor junior and mid-level engineers, and promote engineering best practices;
Collaborate with DevOps to enhance infrastructure and CI/CD processes for the data platform.
5+ years of experience in Data Engineering or similar roles;
Expertise in AWS services (S3, Athena, Glue, Redshift, Lambda, etc.);
Advanced proficiency in SQL, Python, and data transformation tools (e.g., DBT);
Experience with Kafka Connect and managing ingestion pipelines;
Strong knowledge of data warehousing concepts and performance optimization;
Proficient in using orchestration tools like Airflow;
Familiarity with Infrastructure as Code tools like Terraform.
Strong communication and presentation skills;
Attention to detail;
Proactive and result-oriented mindset.
GROWE TOGETHER: Our team is our main asset. We work together and support each other to achieve our common goals;
DRIVE RESULT OVER PROCESS: We set ambitious, clear, measurable goals in line with our strategy, driving Growe to success;
BE READY FOR CHANGE: We see challenges as opportunities to grow and evolve. We adapt today to win tomorrow.