ROLE OVERVIEW
You’ll work at the intersection of data science and engineering to build, deploy, and scale machine learning systems. This includes improving ML infrastructure, designing reliable real-time data systems, and ensuring models run efficiently and reliably in production.
ROLE RESPONSIBILITIES
- Consult with data scientists on training machine learning models
- Support improvements and additions to the ML infrastructure, including hands-on data engineering and DevOps work
- Design systems to meet throughput and latency requirements
- Implement NFRs (Non-Functional Requirements) to ensure a high degree of system reliability
THE SKILLS AND EXPERIENCE WE ARE LOOKING FOR
- Prior experience with productionising ML systems is a must.
- Prior experience training machine learning models is highly desirable.
- Advanced knowledge of Python and familiarity with SQL.
- Good working knowledge of Terraform for Infrastructure as Code (IaC).
- A solid understanding of, and hands-on experience with, real-time and event-driven systems such as Kafka, Kafka Connect, and Pub/Sub.
- Solid experience with Kubernetes, Docker, and deployment strategies (canary, blue-green, etc.).
- Experience setting up CI/CD pipelines using tools such as CircleCI, Drone, GitHub Actions, or ArgoCD.
- Working experience with Big Data technologies such as Spark, Dataflow, and Flink.
- Experience with system design, keeping performance and efficiency in mind while remaining aware of trade-offs.
- Experience applying software engineering rigor to ML, including CI/CD/CT, unit testing, and automation.
- Hands-on experience with MLOps tools such as Kubeflow, DVC, or MLflow.
- Experience with cloud providers such as GCP, AWS, or Azure.
- Prior experience in, or a strong interest in, the FinTech space.
Please note that all appointments are subject to our background checking process, which may include credit, criminal, and any other job-inherent checks.