We are seeking a Mid-level Data Engineer with proven expertise in AWS, Snowflake, and dbt to design and build scalable data pipelines and modern data infrastructure. You will play a key role in shaping the data ecosystem, ensuring data availability, quality, and performance across business units. Key responsibilities include developing data pipelines in Python and applying data modeling and ELT best practices. The role also requires experience with CI/CD and version control systems such as Git; Snowflake performance tuning, storage-layer management, and cost optimization; and production-level proficiency with dbt, including modular development, testing, and deployment. Strong communication and collaboration skills are essential.