AI Engineer (SA25)
BBD
Job Summary
BBD is seeking a highly skilled AI Engineer to design, build, and deploy scalable AI and Machine Learning solutions. This role involves working at the intersection of Data Engineering, DevOps, and Machine Learning to create robust Generative AI solutions and traditional ML pipelines. The engineer will leverage cutting-edge frameworks and cloud services to solve complex business problems, ensuring AI systems are reliable, efficient, and secure.
Must Have
- Design and implement end-to-end AI/ML pipelines.
- Build and optimise Generative AI applications using RAG architectures and LLM frameworks.
- Develop and maintain scalable data processing workflows using modern big data tools.
- Implement MLOps best practices for model lifecycle management, versioning, and automated deployment.
- Collaborate with data scientists and stakeholders to translate business requirements into technical solutions.
- Ensure data quality, governance, and security across all AI and data platforms.
- Minimum of 5 years' professional experience.
- At least 2 years' experience within the AI/ML space.
- At least a year of Generative AI experience is highly desirable.
Good to Have
- NVIDIA Certified Associate: AI in the Data Center or similar GenAI certifications
- DeepLearning.AI (Start or Advance Your Career in AI): specialised courses, e.g. Generative AI with LLMs
- AWS Certified Machine Learning – Specialty
- AWS Certified Solutions Architect – Associate
- Microsoft Certified: Azure AI Engineer Associate (AI-102)
- Microsoft Certified: Azure Data Scientist Associate (DP-100)
- Databricks Certified Machine Learning Professional
- Databricks Certified Generative AI Engineer Associate
- Databricks Certified Data Engineer Professional
Perks & Benefits
- Collaboration, innovation, and inclusion culture
- Relaxed yet professional work environment
- Flat management structure
- Support for career growth and continuous learning
- Diverse project teams
- Flexible, hybrid working environment
- Access to hubs for networking and knowledge sharing
- Snacks, great coffee, and catered lunches
- Social, sport, and cultural gatherings
- Award nominations and shoutouts
- Exceptional bonuses for exceptional performance
Job Description
The company
BBD is an international custom software solutions company that solves real-world problems with innovative solutions and modern technology stacks. With extensive experience across various sectors and a wide array of technologies, BBD’s core services encompass digital enablement, software engineering and solutions support, which includes cloud engineering, data science, product design and managed services.
Over the past 40 years, we have built a reputation for hiring the best talent and collaborating with client teams to deliver exceptional value through software. As the company has grown, this unwavering commitment to quality and continuous innovation has ensured clients get the full benefit of software tailored to their unique environment.
The culture
BBD’s culture is one that encourages collaboration, innovation and inclusion. Our relaxed yet professional work environment extends into a flat management structure. At BBD, you are not just a number, but a valuable member of the team, working with like-minded, passionate individuals on challenging projects in interesting spaces. We deeply believe in the importance of each individual taking control of their career growth, with the support, encouragement and guidance of the company. We do this for every BBDer, creating the space and opportunity to continue learning, growing and expanding their skillsets. We also proudly support and ensure diverse project teams as varied perspectives will always make for stronger solutions.
With hubs in 7 cities, we have mastered distributed development and support a flexible, hybrid working environment. Our hubs are also a great place to get to know people, share knowledge, and enjoy snacks, great coffee and catered lunches as well as social, sport and cultural gatherings.
Lastly, recognition is deeply ingrained in the BBD culture and we use every appropriate opportunity to show this through our award nominations, shoutouts and of course the exceptional bonuses that come from exceptional performance.
The role
We are looking for a highly skilled AI Engineer to join our team. In this role, you will be responsible for designing, building, and deploying scalable AI and Machine Learning solutions. You will work at the intersection of Data Engineering, DevOps, and Machine Learning to create robust Generative AI solutions and traditional ML pipelines. You will leverage cutting-edge frameworks and cloud services to solve complex business problems, ensuring our AI systems are reliable, efficient, and secure.
Key responsibilities
- Design and implement end-to-end AI/ML pipelines, from data ingestion to model deployment and monitoring
- Build and optimise Generative AI applications using RAG (Retrieval-Augmented Generation) architectures and LLM frameworks
- Develop and maintain scalable data processing workflows using modern big data tools
- Implement MLOps best practices for model lifecycle management, versioning and automated deployment (see the tracking sketch after this list)
- Collaborate with data scientists and stakeholders to translate business requirements into technical solutions
- Ensure data quality, governance, and security across all AI and data platforms
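To give a flavour of the MLOps responsibility above, a minimal experiment-tracking sketch with MLflow might look like the following. The dataset, model, experiment name and logged values are illustrative assumptions, not BBD project code.

```python
# Hedged MLOps sketch: tracking a training run with MLflow so that parameters,
# metrics and the model artifact are versioned for later review and deployment.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real feature set.
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-baseline")  # assumed experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Log parameters, metrics and the model artifact for lifecycle management.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")
```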
Requirements
- A minimum of 5 years' professional experience, with at least 2 years in the AI/ML space; a year of Generative AI experience is highly desirable
Skills and Experience
Core skills, tools and frameworks
- Programming mastery: Expert-level proficiency in Python, with a strong grasp of Object-Oriented Programming (OOP) principles, design patterns, and asynchronous programming
- AI & LLM frameworks: Experience with modern AI frameworks such as LangChain, Langflow, and AutoGen. Ability to build agents and orchestrate complex LLM workflows
- GenAI architecture: Deep understanding of Generative AI patterns, including RAG (Retrieval-Augmented Generation), Vector Search implementation (e.g., Pinecone, Chroma, Milvus), and advanced Prompt Engineering techniques (see the retrieval sketch after this list)
- ML engineering & MLOps: Practical experience with MLOps tools like MLflow for experiment tracking and model registry, implementation of Feature Stores, and Model Serving infrastructures (e.g., TensorFlow Serving, TorchServe, KServe)
- Big Data processing: Proficiency in PySpark, Spark SQL, and Delta Lake for handling large-scale datasets and optimising data pipelines
- Version control & CI/CD: Background in Git workflows and CI/CD pipelines using tools like GitHub Actions or Azure DevOps for automated testing and deployment
- Orchestration: Experience scheduling and managing complex workflows using Apache Airflow, Databricks Workflows, or Delta Live Tables (DLT)
- Data architecture: Familiarity with modern data architecture patterns such as the Medallion Architecture (Bronze/Silver/Gold) and data modelling techniques (ETL/ELT)
- Data governance: Knowledge of data governance frameworks, specifically Unity Catalog, including data lineage, access controls, and security policies
- Streaming: Experience with real-time data processing technologies like Apache Kafka, AWS Kinesis, or Spark Structured Streaming
- Data quality: Implementation of data quality checks and validation using libraries like Great Expectations or Deequ
- BI & visualisation: Ability to create dashboards and visualise insights using tools like PowerBI, Tableau, or Databricks SQL Dashboards
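As a flavour of the RAG and Vector Search work referenced above, a minimal retrieval sketch using Chroma's in-memory client might look like this. The collection name, documents and prompt template are illustrative assumptions; in practice the retrieved context would be passed to an LLM through a framework such as LangChain.

```python
# Minimal RAG-style retrieval sketch with an in-memory Chroma vector store.
import chromadb

client = chromadb.Client()  # in-memory; use PersistentClient for a local store
collection = client.create_collection(name="policy_docs")  # assumed collection name

# Index a few documents; Chroma embeds them with its default embedding function.
collection.add(
    documents=[
        "Leave requests must be submitted at least two weeks in advance.",
        "Expense claims are reimbursed within 30 days of approval.",
    ],
    ids=["doc-1", "doc-2"],
)

# Retrieve the chunk most relevant to the user's question.
question = "How long does expense reimbursement take?"
results = collection.query(query_texts=[question], n_results=1)
context = "\n".join(results["documents"][0])

# Assemble a grounded prompt for the downstream LLM call.
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)
```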
AWS skills
- Storage: Comprehensive knowledge of Amazon S3 for data lake storage, lifecycle policies, and security configurations
- IAM & security: Proficiency in AWS IAM (Identity and Access Management) for managing roles, policies, and secure access to resources
- Networking: Understanding of VPCs, subnets, security groups, and general AWS networking concepts to ensure secure deployment
- AI/ML services: Hands-on experience with managed AI services such as Amazon Bedrock for building GenAI applications and Amazon SageMaker for model training and deployment (see the Bedrock sketch after this list)
- Streaming: Experience configuring and managing Amazon Kinesis for real-time data ingestion and processing
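For the AI/ML services point above, a hedged sketch of invoking a foundation model through Amazon Bedrock's Converse API could look like the following; the model ID and region are assumptions, and credentials are resolved through the standard AWS chain.

```python
# Hedged sketch: calling a Bedrock-hosted model via the Converse API with boto3.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model; any enabled model ID works
    messages=[{"role": "user", "content": [{"text": "Summarise RAG in one sentence."}]}],
    inferenceConfig={"maxTokens": 200, "temperature": 0.2},
)

# The Converse API returns the assistant message as a list of content blocks.
print(response["output"]["message"]["content"][0]["text"])
```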
Azure skills
- Storage: Expertise in Azure Data Lake Storage Gen2 (ADLS Gen2), including hierarchical namespaces and ACLs
- Azure OpenAI Service: Experience deploying and managing LLMs via Azure OpenAI Service, including model fine-tuning and content filtering (see the sketch after this list)
- AI Search: Implementation of Azure AI Search (formerly Cognitive Search) with vector capabilities for semantic search applications
- Core Azure knowledge: Solid understanding of core Azure infrastructure, resource management (ARM templates / Bicep), and monitoring
- Networking & security: Proficiency in Azure networking (VNETs, Private Endpoints) and security best practices (Managed Identities, Key Vault)
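For the Azure OpenAI Service point above, a hedged sketch using the openai package's AzureOpenAI client might look like the following; the endpoint, API version and deployment name are placeholders, and in production a Managed Identity via azure-identity would typically replace the raw API key.

```python
# Hedged sketch: chat completion against a model deployed in Azure OpenAI Service.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # or Entra ID auth via azure-identity
    api_version="2024-02-01",                            # assumed API version
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # the *deployment* name configured in Azure, not the base model name
    messages=[
        {"role": "system", "content": "You answer concisely."},
        {"role": "user", "content": "What does Azure AI Search add to a RAG pipeline?"},
    ],
)
print(response.choices[0].message.content)
```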
Additional Skills & Advantageous Certificates
Databricks skills
- Mosaic AI: Utilisation of Mosaic AI tools for Large Language Models, Vector Search integration, and Model Serving endpoints
- MLflow: Advanced usage of MLflow within Databricks for full lifecycle management of ML models and experiments
- Unity Catalog: Implementation of Unity Catalog for centralised governance of data and AI assets (models, functions)
- Jobs & DLT: Building reliable production pipelines using Databricks Jobs, Workflows, and Delta Live Tables (DLT)
- Optimisation: Performance tuning of Spark SQL and PySpark jobs to ensure cost-effectiveness and low latency
- Automation: Scripting and automation using the Databricks CLI and REST APIs
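As a flavour of the automation point above, a hedged sketch using the Databricks Python SDK (which wraps the same REST APIs exposed by the CLI) might look like this; the job name is an assumption, and authentication follows the standard Databricks configuration chain.

```python
# Hedged sketch: listing and triggering Databricks Jobs via the Python SDK.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # resolves DATABRICKS_HOST/DATABRICKS_TOKEN or a configured profile

# Enumerate jobs in the workspace, a common starting point for CI/CD housekeeping.
jobs = list(w.jobs.list())
for job in jobs:
    print(job.job_id, job.settings.name if job.settings else "<unnamed>")

# Trigger a named job (assumed name); calling .result() on the returned waiter would block until completion.
target = next(j for j in jobs if j.settings and j.settings.name == "nightly_feature_pipeline")
run = w.jobs.run_now(job_id=target.job_id)
print(f"Triggered run {run.run_id}")
```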
Certifications (nice to have)
Core & General
- NVIDIA Certified Associate: AI in the Data Center or similar GenAI certifications
- DeepLearning.AI (Start or Advance Your Career in AI): specialised courses, e.g. Generative AI with LLMs
Cloud Platform: AWS
- AWS Certified Machine Learning – Specialty: Validates expertise in building, training, tuning and deploying ML models on AWS
- AWS Certified Solutions Architect – Associate
Cloud Platform: Azure
- Microsoft Certified: Azure AI Engineer Associate (AI-102): Validates the ability to build, manage, and deploy AI solutions leveraging Azure AI services (formerly Azure Cognitive Services and Azure Applied AI Services)
- Microsoft Certified: Azure Data Scientist Associate (DP-100): Focuses on designing and implementing a data science solution on Azure
Data Platform: Databricks
- Databricks Certified Machine Learning Professional: Demonstrates ability to use Databricks Machine Learning and MLflow for production ML tasks
- Databricks Certified Generative AI Engineer Associate: Validates understanding of Generative AI concepts and the ability to develop LLM applications on Databricks
- Databricks Certified Data Engineer Professional: For candidates with a stronger data engineering background
Internal candidate profile
We are open to training internal candidates who demonstrate strong engineering fundamentals and a passion for data. Ideal internal candidates might currently be in the following roles:
Python Back-end Engineers:
- Background: Strong experience in Python, API development (FastAPI/Flask), and system architecture
- Gap to bridge: Need to learn specific AI frameworks (LangChain), ML concepts (embeddings, vectors), and data-centric tools (Spark, Vector DBs)
Data Engineers:
- Background: Proficient in Spark, SQL, pipelines, and cloud infrastructure
- Gap to bridge: Need to deepen knowledge in Model Serving, MLflow, LLM architectures (RAG), and non-deterministic software behaviour (prompt engineering)
Data Scientists:
- Background: Strong in statistics, model building, and experimentation in notebooks
- Gap to bridge: Need to improve software engineering practices (CI/CD, testing, modular code) and learn production deployment patterns (serving, monitoring)
BBD is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, family, gender identity or expression, genetic information, marital status, political affiliation, race, religion or any other characteristic protected by applicable laws, regulations or ordinances.