Machine Learning Operations (MLOps) Engineer (GCP)

Posted 2 days ago • 3+ Years • Research & Development

Job Summary

Job Description

We are seeking a seasoned Machine Learning Operations (MLOps) Engineer with expertise in Google Cloud Platform (GCP) to build and optimize machine learning platforms. The role involves developing CI/CD workflows for ML models and data pipelines, automating model training, validation, and deployment. Responsibilities include monitoring and maintaining ML models in production, ensuring reproducibility and traceability of experiments, and managing model versioning. Collaboration with data scientists and software engineers is crucial, as is optimizing model inference infrastructure for latency, throughput, and cost efficiency. Implementing data and model governance policies and staying current with evolving GCP MLOps practices are key. The position requires strong problem-solving skills and effective remote work capabilities.
Must have:
  • Build and optimize ML Platforms
  • Develop CI/CD workflows for ML models
  • Automate model training, validation, and deployment
  • Monitor and maintain ML models in production
  • Ensure reproducibility and traceability of experiments
  • Manage model versioning and rollbacks
  • Optimize model inference infrastructure
  • Implement data and model governance policies
  • 3+ years of relevant industry experience
  • Experience with deep learning frameworks
  • Experience with MLOps on Google Cloud Platform (GCP)
  • Strong problem-solving skills
  • Effective remote work capabilities
  • Expertise in Google Cloud Platform (GCP) and Vertex AI
  • Experience building agentic AI systems
  • Understanding of large language model (LLM) architectures
Good to have:
  • Collaborate with cross-functional teams
  • Solid understanding of machine learning algorithms
  • Solid understanding of core computer science concepts

Job Details

As a full spectrum cloud integrator, we help hundreds of companies realize the value, efficiency, and productivity of the cloud. We take customers on their journey to enable, operate, and innovate using cloud technologies – from migration strategy to operational excellence and immersive transformation.
 
If you like a challenge, you’ll love it here, because we solve complex business problems every day, building and promoting great technology solutions that impact our customers’ success. The best part is, we’re committed to you and your growth, both professionally and personally.

We are looking for a seasoned Machine Learning Operations (MLOps) Engineer to build and optimize machine learning platforms. This role requires deep expertise in machine learning engineering and infrastructure, with a strong focus on developing scalable inference systems. Proven experience in building and deploying ML platforms in production environments is essential. This remote position also requires excellent communication skills and the ability to independently tackle complex challenges with innovative solutions.
If you get a thrill working with cutting-edge technology and love to help solve customers’ problems, we’d love to hear from you. It’s time to rethink the possible. Are you ready?
 

What you will be doing:

    • Build and optimize ML Platforms to support cutting-edge machine learning and deep learning models.
    • Collaborate closely with cross-functional teams to translate business objectives into scalable engineering solutions.
    • Develop CI/CD workflows for ML models and data pipelines using tools like Cloud Build, GitHub Actions, or Jenkins.
    • Automate model training, validation, and deployment across development, staging, and production environments.
    • Monitor and maintain ML models in production using Vertex AI Model Monitoring, logging (Cloud Logging), and performance metrics.
    • Ensure reproducibility and traceability of experiments using ML metadata tracking tools like Vertex AI Experiments or MLflow.
    • Manage model versioning and rollbacks using Vertex AI Model Registry or custom model management solutions (see the first sketch after this list).
    • Collaborate with data scientists and software engineers to translate model requirements into robust and scalable ML systems.
    • Optimize model inference infrastructure for latency, throughput, and cost efficiency using GCP services such as Cloud Run, Kubernetes Engine (GKE), or custom serving frameworks (see the second sketch after this list).
    • Implement data and model governance policies, including auditability, security, and access control using IAM and Cloud DLP.
    • Stay current with evolving GCP MLOps practices, tools, and frameworks to continuously improve system reliability and automation.
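
To make the experiment-tracking and versioning items above concrete, here is a minimal, illustrative sketch using the google-cloud-aiplatform Python SDK: it logs a training run to Vertex AI Experiments and registers the trained artifact as a new version in the Vertex AI Model Registry. The project ID, bucket, display names, serving container, and parent model resource name are placeholders for illustration, not details taken from this posting.

    # Sketch 1: experiment tracking + model registration (placeholder names throughout).
    from google.cloud import aiplatform

    aiplatform.init(
        project="my-gcp-project",               # placeholder project ID
        location="us-central1",
        staging_bucket="gs://my-ml-artifacts",  # placeholder bucket
        experiment="churn-model-experiments",   # Vertex AI Experiments name
    )

    # Record parameters and metrics so each run is reproducible and traceable.
    aiplatform.start_run("run-2024-01-15")
    aiplatform.log_params({"learning_rate": 0.01, "epochs": 20})
    # ... training itself would run elsewhere, e.g. in a Vertex AI custom job ...
    aiplatform.log_metrics({"val_auc": 0.91})
    aiplatform.end_run()

    # Register the artifact as a new version of an existing Model Registry entry.
    model = aiplatform.Model.upload(
        display_name="churn-model",
        artifact_uri="gs://my-ml-artifacts/churn-model/v2/",   # placeholder path
        serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
        ),
        parent_model="projects/my-gcp-project/locations/us-central1/models/1234567890",
    )
    print(model.resource_name, model.version_id)

Because the upload points at a parent_model, the registry records the artifact as another version of the same entry rather than a separate model, which is what makes later rollbacks straightforward.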
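
Likewise, the inference-optimization and rollback items might look something like the following sketch: a canary-style rollout of that new model version behind an existing Vertex AI endpoint with a small traffic share, and a rollback path that simply undeploys the canary. The endpoint and model resource names, machine type, and sample payload are assumptions for illustration only.

    # Sketch 2: canary deployment and rollback on a Vertex AI endpoint (placeholder IDs).
    from google.cloud import aiplatform

    aiplatform.init(project="my-gcp-project", location="us-central1")

    endpoint = aiplatform.Endpoint(
        "projects/my-gcp-project/locations/us-central1/endpoints/9876543210"
    )
    new_model = aiplatform.Model(
        "projects/my-gcp-project/locations/us-central1/models/1234567890@2"
    )

    # Route 10% of traffic to the new version; the current version keeps the rest,
    # so latency, throughput, and cost can be compared before a full rollout.
    new_model.deploy(
        endpoint=endpoint,
        deployed_model_display_name="churn-model-v2-canary",
        traffic_percentage=10,
        machine_type="n1-standard-4",
        min_replica_count=1,
        max_replica_count=3,
    )

    # Smoke test against the live endpoint (feature names are made up).
    print(endpoint.predict(instances=[{"tenure": 12, "monthly_charges": 70.5}]))

    # Rollback path: undeploy the canary so traffic returns to the stable version.
    # Depending on SDK version, an explicit traffic_split may be required if the
    # canary still holds live traffic.
    for deployed in endpoint.list_models():
        if deployed.display_name == "churn-model-v2-canary":
            endpoint.undeploy(deployed_model_id=deployed.id)

The same deploy call is where machine type, replica counts, and optional accelerators are tuned for the latency, throughput, and cost trade-offs mentioned above.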

Qualifications and skills:

    • Bachelor's degree in Computer Science, Information Technology, or a related field.
    • 3+ years of relevant industry experience.
    • Proven track record in designing and implementing cost-effective, scalable machine learning inference systems.
    • Hands-on experience with leading deep learning and LLM frameworks such as TensorFlow, PyTorch, Hugging Face, and LangChain.
    • Proven experience in implementing MLOps solutions on Google Cloud Platform (GCP) using services such as Vertex AI, Cloud Storage, BigQuery, Cloud Functions, and Dataflow.
    • Solid understanding of machine learning algorithms, natural language processing (NLP), and statistical modeling.
    • Solid understanding of core computer science concepts, including algorithms, distributed systems, data structures, and database management.
    • Strong problem-solving skills, with the ability to tackle complex challenges using critical thinking and propose innovative solutions.
    • Effective in remote work environments, with excellent written and verbal communication skills. Proven ability to collaborate with team members and stakeholders to ensure clear understanding of technical requirements and project goals.
    • Expertise in public cloud platforms, particularly Google Cloud Platform (GCP) and Vertex AI.
    • Proven experience in building and scaling agentic AI systems in production environments.
    • In-depth understanding of large language model (LLM) architectures, parameter scaling, optimization strategies, and deployment trade-offs.
    • Location - Remote in Egypt
    • #LI-JB2
