Responsibilities:
• Design and build scalable data pipelines to support AI/ML workflows.
• Ensure data is accessible, clean, and ready for analysis by the AI/ML team.
• Manage and optimize databases for storing and retrieving large datasets.
• Collaborate with AI engineers to integrate model outputs into existing data structures.
• Work with infrastructure teams to ensure seamless integration of data processing tools.
Qualifications:
• Degree in Computer Science, Engineering, or a related field.
• 5+ years of strong experience with data engineering tools such as Apache Spark, Hadoop, or similar.
• Experience with SQL/NoSQL databases, data warehousing solutions, and cloud platforms.
• Strong Python/Java skills for data processing and workflow automation.