We are seeking a skilled Data Engineer with 5+ years of experience to design, build, and maintain scalable data pipelines and infrastructure. The ideal candidate will have hands-on experience with data warehousing, ETL processes, and big data technologies to enable efficient data flow and support analytics across the organization.
Key Responsibilities:
Design, develop, and optimize data pipelines and workflows for batch and real-time processing.
Build and maintain data warehouse and data lake architectures.
Integrate data from multiple sources using ETL/ELT tools.
Collaborate with data scientists, analysts, and software engineers to support data-driven initiatives.
Ensure data quality, security, and compliance standards are met.
Monitor and troubleshoot pipeline performance and issues.
Requirements:
Bachelor’s degree in Computer Science, Engineering, or a related field.
5+ years of experience in data engineering or a related role.
Strong programming skills in Python, SQL, and/or Scala.
Hands-on experience with data tools such as Apache Spark, Kafka, Airflow, and dbt.
Familiarity with cloud platforms (AWS, Azure, or GCP) and data services such as S3, Redshift, BigQuery, or Snowflake.
Experience with data modeling and working with both structured and unstructured data.