Cloud Engineer (NL25)

BBD

Job Summary

BBD is seeking a specialised Cloud Engineer to build, secure, and maintain cloud infrastructure for modern data platforms, focusing on huge-scale storage, high-throughput networking, and specialised compute clusters. This role bridges DevOps and Data Engineering, ensuring robust, secure, and automated platforms for data scientists and analysts. Responsibilities include designing scalable cloud infrastructure using Infrastructure as Code, implementing robust security and governance, building CI/CD pipelines, establishing monitoring for data workloads, and creating self-service infrastructure patterns, primarily on Databricks, AWS, and Azure.

Must Have

  • Minimum 5 years of professional cloud engineering experience
  • Deep expertise in Infrastructure as Code (Terraform)
  • Proficiency with Docker and Kubernetes (EKS, AKS)
  • Advanced proficiency with Git, Azure DevOps, and GitHub Actions
  • Strong coding skills in Python and Bash
  • Familiarity with CLIs (DBX CLI, AWS CLI, Azure CLI) and REST APIs
  • Experience implementing Least Privilege access, RBAC, Encryption, and Secrets Management
  • Experience designing complex cloud networks (VNETs/VPCs, Private Link/Endpoints, DNS, Firewalls)
  • Understanding of modern data patterns like Medallion architecture, Delta Lake, and Data Vault
  • Experience configuring cost management budgets, logging (CloudWatch/Azure Monitor), and alerting systems
  • Familiarity with data engineering tools like PySpark, SQL, dbt, and Spark Structured Streaming
  • Expert configuration of AWS S3, IAM, VPC, Lambda, Step Functions, Glue, Kinesis, and EMR
  • Experience managing Azure ADLS Gen2, Blob Storage, Microsoft Entra ID, Virtual Networks, NSGs, ADF, and Synapse integrations
  • Automated deployment of Databricks workspaces and settings via Terraform provider
  • Experience configuring Unity Catalog, Cluster Policies, and Instance Profiles for Databricks
  • Experience configuring Private Link, VNet Injection, and IP Access Lists for Databricks security
  • Experience managing infrastructure for Databricks Workflows, Delta Live Tables, and Airflow integrations
  • Experience provisioning infrastructure for MLflow, Mosaic AI, and Vector Search

Good to Have

  • AWS Certified Solutions Architect – Associate / Professional
  • Microsoft Certified: Azure Solutions Architect Expert
  • HashiCorp Certified: Terraform Associate
  • Certified Kubernetes Administrator (CKA)
  • Databricks Certified Data Engineer Professional
  • AWS Certified Data Engineer – Associate
  • Microsoft Certified: Azure Data Engineer Associate (DP-203)

Perks & Benefits

  • Flexible, hybrid working environment
  • Snacks, great coffee and catered lunches at hubs
  • Social, sport and cultural gatherings
  • Awards Nominations and shoutouts
  • Exceptional bonuses for exceptional performance

Job Description

The Company

BBD is an international custom software solutions company that solves real-world problems with innovative solutions and modern technology stacks. With extensive experience across various sectors and a wide array of technologies, BBD’s core services encompass digital enablement, software engineering and solutions support, which includes cloud engineering, data science, product design and managed services.

Over the past 40 years, we have built a reputation for hiring the best talent and collaborating with client teams to deliver exceptional value through software. As the company has grown, this unwavering commitment to quality and continuous innovation has ensured clients get the full benefit of software tailored to their unique environment.

The culture

BBD’s culture is one that encourages collaboration, innovation and inclusion. Our relaxed yet professional work environment extends into a flat management structure. At BBD, you are not just a number, but a valuable member of the team, working with like-minded, passionate individuals on challenging projects in interesting spaces. We deeply believe in the importance of each individual taking control of their career growth, with the support, encouragement and guidance of the company. We do this for every BBDer, creating the space and opportunity to continue learning, growing and expanding their skillsets. We also proudly support and ensure diverse project teams, as varied perspectives will always make for stronger solutions.

With hubs in 7 cities, we have mastered distributed development and support a flexible, hybrid working environment. Our hubs are also a great place to get to know people, share knowledge, and enjoy snacks, great coffee and catered lunches as well as social, sport and cultural gatherings.

Lastly, recognition is deeply ingrained in the BBD culture and we use every appropriate opportunity to show this through our Awards Nominations, shoutouts and, of course, the exceptional bonuses that come from exceptional performance.

The role

We are looking for a specialised Cloud Engineer to build, secure and maintain the underlying infrastructure for our modern data platforms. Unlike a traditional Cloud Engineer who might focus on general application hosting, this role is dedicated to the unique requirements of data workloads: huge-scale storage, high-throughput networking, and specialised compute clusters for Spark/SQL.

You will bridge the gap between pure DevOps and Data Engineering, ensuring our data scientists and analysts have a robust, secure, and automated platform on which to build their pipelines and models. You will be responsible for the “plumbing” of the data ecosystem, primarily focusing on Databricks, AWS, and Azure.

Responsibilities

  • Platform architecture: Design and deploy scalable cloud infrastructure for data lakes and analytics platforms using Infrastructure as Code (Terraform)
  • Security & governance: Implement robust identity management (IAM/Entra ID), network security (Private Link/VPC), and data governance controls (Unity Catalog)
  • Automation: Build CI/CD pipelines for infrastructure and data products, automating the provisioning of compute resources and workspaces
  • Observability: Establish monitoring for cost, performance, and reliability of data workloads, ensuring efficient resource utilisation (an illustrative sketch follows this list)
  • Enabling data teams: Create self-service infrastructure patterns that allow Data Engineers to deploy pipelines without managing the underlying servers
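
For illustration only, a cost guardrail of the kind described under "Observability" might look like the minimal Python sketch below, which uses boto3 to create an AWS monthly cost budget with an alert at 80% of actual spend. The account ID, budget amount and notification address are placeholders, not details of this role.

```python
import boto3

# Minimal sketch: a monthly cost budget with an email alert at 80% of actual spend.
# The account ID, amount and subscriber address are illustrative placeholders.
budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",  # placeholder AWS account ID
    Budget={
        "BudgetName": "data-platform-monthly",
        "BudgetLimit": {"Amount": "5000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "platform-team@example.com"}
            ],
        }
    ],
)
```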

Requirements

  • A minimum of 5 years of professional cloud engineering experience; experience in data engineering and Databricks is highly desirable

Skills and Experience

Core skills, tools & frameworks:

  • Infrastructure as Code (IaC): Deep expertise in Terraform (creating modules, managing state, workspaces). Experience with CloudFormation, Bicep, or Crossplane is also valuable
  • Containerisation & orchestration: Proficiency with Docker and Kubernetes (EKS, AKS, or self-managed) for managing data workloads
  • CI/CD & version control: Advanced proficiency with Git, Azure DevOps and GitHub Actions for automating infrastructure deployment and policy checks
  • Scripting & automation: Strong coding skills in Python and Bash for code and automation. Familiarity with CLIs (DBX CLI, AWS CLI, Azure CLI) and REST APIs
  • Cloud security: Implementation of Least Privilege access, Role-Based Access Control (RBAC), Encryption at rest/transit, and Secrets Management (Key Vault/Secrets Manager)
  • Cloud networking: Designing complex networks involving VNETs/VPCs, Private Link/Endpoints, DNS resolution, and Firewalls to isolate data traffic
  • Data platform architecture: Understanding of modern data patterns like the Medallion architecture, Delta Lake, and Data Vault
  • Observability: Configuring cost management budgets, logging (CloudWatch/Azure Monitor), and alerting systems
  • Data workload understanding: Familiarity with how Data Engineers use tools like PySpark, SQL, dbt, and Spark Structured Streaming to better support them
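
As a purely illustrative example of the secrets-management pattern listed under "Cloud security", the minimal Python sketch below reads a credential from AWS Secrets Manager at runtime instead of hard-coding it. The secret name is a placeholder.

```python
import json

import boto3


def get_database_credentials(secret_id: str = "data-platform/warehouse") -> dict:
    """Fetch a secret at runtime rather than hard-coding credentials.

    The secret name is a placeholder; in practice it would be injected via
    configuration, and access would be limited by a least-privilege IAM policy.
    """
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])
```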

AWS platform skills:

  • Storage: Expert configuration of S3 (lifecycle policies, access points, intelligent tiering); an illustrative sketch follows this list
  • Identity: Complex IAM Role & Policy management (cross-account roles, identity federation)
  • Networking: VPC design, Gateway Endpoints, Transit Gateways, and Route53 DNS
  • Serverless: Automation using AWS Lambda and Step Functions
  • Data services: Operational knowledge of Glue, Kinesis, and EMR
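
As a minimal, illustrative sketch of the lifecycle configuration mentioned under "Storage" above, the following Python snippet uses boto3 to transition objects to S3 Intelligent-Tiering and later expire them; the bucket name, prefix and day counts are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Illustrative lifecycle rule: move raw data to Intelligent-Tiering after 30 days
# and expire it after a year. Bucket name, prefix and day counts are placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-lake-raw",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "raw-zone-tiering",
                "Filter": {"Prefix": "raw/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```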

Azure platform skills:

  • Storage: Managing ADLS Gen2 (hierarchical namespace, ACLs) and Blob Storage; an illustrative sketch follows this list
  • Identity: Microsoft Entra ID (formerly Azure AD) configuration, Service Principals, and Managed Identities
  • Networking: Virtual Networks, Private Link integration, and Network Security Groups (NSGs)
  • Data services: Support for Azure Data Factory (ADF) and Synapse integrations
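
For illustration, the ADLS Gen2 ACL management mentioned under "Storage" above could be scripted with the Azure Python SDK roughly as sketched below (using azure-identity and azure-storage-file-datalake); the storage account, container and directory path are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Illustrative only: the storage account, container and directory are placeholders.
service = DataLakeServiceClient(
    account_url="https://exampleaccount.dfs.core.windows.net",
    credential=DefaultAzureCredential(),  # e.g. a Managed Identity or Service Principal
)

directory = service.get_file_system_client("datalake").get_directory_client("bronze/events")

# Grant the owning user full access, the owning group read/execute, and deny others.
directory.set_access_control(acl="user::rwx,group::r-x,other::---")
```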

Databricks (platform & infrastructure):

  • Workspace administration: Automated deployment of workspaces and settings via the Databricks Terraform provider
  • Unity Catalog: Configuring the governance layer, including metastores, catalogs, and external locations
  • Compute management: Creating and enforcing Cluster Policies and Instance Profiles to control costs and security; an illustrative sketch follows this list
  • Security: Configuring Private Link for front-end/back-end connectivity, VNet Injection, and IP Access Lists
  • Orchestration: Managing infrastructure for Databricks Workflows (Jobs), Delta Live Tables (DLT), and Airflow integrations
  • AI/ML support: Provisioning infrastructure for MLflow, Mosaic AI, and Vector Search
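
As an illustrative sketch of the compute guardrails mentioned under "Compute management", the Python snippet below creates a simple cluster policy with the Databricks SDK; in this role the same definition would more typically be managed through the Databricks Terraform provider. The policy name, limits and tags are placeholders, and the specific attributes shown are an assumption rather than a prescribed standard.

```python
import json

from databricks.sdk import WorkspaceClient

# Authentication is resolved from the environment / Databricks config profile;
# no workspace details are hard-coded here.
w = WorkspaceClient()

# Placeholder guardrails: force auto-termination, cap cluster size and tag clusters
# so that compute cost can be attributed to a team.
policy_definition = {
    "autotermination_minutes": {"type": "range", "maxValue": 60, "defaultValue": 30},
    "num_workers": {"type": "range", "maxValue": 8},
    "custom_tags.team": {"type": "fixed", "value": "data-platform"},
}

w.cluster_policies.create(
    name="data-engineering-small",
    definition=json.dumps(policy_definition),
)
```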

Other

Certifications (nice to have)

While practical experience is paramount, the following certifications demonstrate a strong baseline of knowledge:

General cloud & DevOps:

  • AWS Certified Solutions Architect – Associate / Professional
  • Microsoft Certified: Azure Solutions Architect Expert
  • HashiCorp Certified: Terraform Associate
  • Certified Kubernetes Administrator (CKA)

Data & platform specific:

  • Databricks Certified Data Engineer Professional (highly desirable)
  • AWS Certified Data Engineer – Associate
  • Microsoft Certified: Azure Data Engineer Associate (DP-203)

Internal candidate profile

We are open to training internal candidates who demonstrate strong engineering fundamentals and a passion for data. Ideal internal candidates might currently be in the following roles:

  • Cloud / Platform Engineers: Who have mastered the “Ops” side of things (Terraform, Networking) and are curious to learn specific data technologies like Spark and Databricks
  • DevOps Engineers: Who are experts in CI/CD and automation and want to apply those patterns to the rapidly growing field of DataOps and MLOps
  • Data Engineers: Who find themselves more interested in how the platform runs (infrastructure, security, costs) than in writing the ETL pipelines themselves

BBD is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, family, gender identity or expression, genetic information, marital status, political affiliation, race, religion or any other characteristic protected by applicable laws, regulations or ordinances.
