Senior Software Engineer in Data Engineering

Rockstar Games

Job Summary

Rockstar Games is seeking a Senior Software Engineer in Data Engineering to build a cutting-edge game analytics platform and tools. This role involves developing complex ingestion and transformation processes with an emphasis on reliability and performance. The candidate will collaborate with Data Engineers, Machine Learning Engineers, and other Software Engineers to empower Analysts and Data Scientists, delivering data-driven insights and applications to company stakeholders. This is a full-time, in-office position in Carlsbad, CA.

Perks & Benefits

  • Bonus and/or equity awards
  • Full range of medical benefits
  • Full range of financial benefits
  • Other benefits
  • Equal opportunity, dignity and respect in the work environment
  • Reasonable accommodations for qualified job applicants with disabilities

Job Description

WHAT WE DO

  • The Rockstar Games Online Services team builds the technology foundation that powers our games and delivers world-class player experiences.
  • Our Data Engineering group manages petabyte-scale data, integrating dozens of streaming and batch sources with strict requirements for reliability, compliance, and low-latency processing.
  • We approach data engineering with the rigor of software engineering by applying modern practices such as clean architecture, modular design, and automated testing. Our focus is on building scalable, reusable platforms and frameworks that enable our partners in Data Science and Game Development to deliver insights and unlock new possibilities for players.

RESPONSIBILITIES

  • Design, build, and maintain high-throughput streaming and batch data processing services, with a primary focus on raw and bronze-level ingestion.
  • Develop and operate stream-based applications responsible for real-time data transformation, enrichment, validation, and routing (see the sketch after this list).
  • Own and evolve event schemas and data contracts, including Avro schemas and Schema Registry governance.
  • Ensure scalability, fault tolerance, and performance of streaming and ingestion pipelines under heavy load.
  • Contribute to platform-level concerns such as deployment automation, observability, operational tooling, and CI/CD.
  • Participate in the design and implementation of cloud-native data infrastructure supporting real-time and batch workloads.
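
The transform/validate/route pattern described above might look roughly like the following minimal Kafka Streams sketch. The topic names, the string payloads, and the validation rule are invented placeholders for illustration, not details from this posting.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class EventRouter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "event-router");      // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> raw = builder.stream("raw-events");          // hypothetical input topic

        // Validation + routing: malformed records go to a dead-letter topic
        // instead of failing the pipeline.
        raw.filter((key, value) -> value == null || !value.startsWith("{"))
           .to("raw-events-dlq");                                            // hypothetical DLQ topic

        // Transformation/enrichment: a trivial stand-in for real logic.
        raw.filter((key, value) -> value != null && value.startsWith("{"))
           .mapValues(String::trim)
           .to("bronze-events");                                             // hypothetical output topic

        new KafkaStreams(builder.build(), props).start();
    }
}
```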

REQUIREMENTS

  • 5+ years of professional experience building production software systems, preferably in a distributed or data-intensive environment.
  • Strong experience with Java (and/or Scala) as well as Python, in backend or data processing applications.
  • Experience designing and operating streaming systems using Kafka or Kafka Streams (or similar).
  • Experience working with event-driven architectures, including schema evolution and compatibility (see the sketch after this list).
  • Experience building real-time and/or near-real-time data pipelines at scale.
  • Solid understanding of distributed systems concepts (partitioning, fault tolerance, backpressure, exactly-once/at-least-once semantics).
  • Familiarity with Avro, Protobuf, or similar serialization formats and schema governance practices.
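
As a concrete illustration of the schema evolution and compatibility requirement, here is a minimal sketch using Apache Avro's built-in compatibility checker. The PlayerEvent schema and the added sessionId field are hypothetical examples, not schemas from this role.

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaCompatibility;
import org.apache.avro.SchemaCompatibility.SchemaPairCompatibility;

public class CompatibilityCheck {
    // v1 of a hypothetical event schema.
    static final Schema V1 = new Schema.Parser().parse("""
        {"type":"record","name":"PlayerEvent","fields":[
          {"name":"playerId","type":"string"},
          {"name":"ts","type":"long"}]}""");

    // v2 adds an optional field with a default.
    static final Schema V2 = new Schema.Parser().parse("""
        {"type":"record","name":"PlayerEvent","fields":[
          {"name":"playerId","type":"string"},
          {"name":"ts","type":"long"},
          {"name":"sessionId","type":["null","string"],"default":null}]}""");

    public static void main(String[] args) {
        // Can a v2 reader decode records written with the v1 schema?
        SchemaPairCompatibility result =
            SchemaCompatibility.checkReaderWriterCompatibility(V2, V1);
        System.out.println(result.getType()); // COMPATIBLE
    }
}
```

Adding an optional field with a default is a backward-compatible change, which is the kind of evolution a Schema Registry compatibility policy would typically permit.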

PLUSES

Please note that these are desirable skills and are not required to apply for the position.

  • Experience with Databricks, particularly for ingestion, bronze-layer processing, or structured streaming (see the sketch after this list).
  • Experience deploying and scaling applications in containerized environments (e.g., Kubernetes, AKS).
  • Experience working with artifact repositories (e.g., Artifactory, ProGet, Maven repositories).
  • Experience with Infrastructure-as-Code (e.g., Terraform, Databricks Asset Bundles).
  • Familiarity with the Microsoft Azure cloud ecosystem.
  • Familiarity with Apache Spark.
  • Familiarity with CI/CD pipelines, automated testing, and deployment workflows.
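
For the Databricks and structured streaming items above, raw-to-bronze ingestion might look like the following sketch, assuming Spark with the Delta Lake format available (as on Databricks). The broker address, topic, and paths are placeholders.

```java
import java.util.concurrent.TimeoutException;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQueryException;

public class BronzeIngest {
    public static void main(String[] args) throws TimeoutException, StreamingQueryException {
        SparkSession spark = SparkSession.builder().appName("bronze-ingest").getOrCreate();

        // Read raw events from Kafka; broker and topic are placeholders.
        Dataset<Row> raw = spark.readStream()
            .format("kafka")
            .option("kafka.bootstrap.servers", "localhost:9092")
            .option("subscribe", "raw-events")
            .load();

        // Land records as-is into a bronze Delta table; the checkpoint
        // location provides restart safety. Paths are placeholders.
        raw.writeStream()
            .format("delta")
            .option("checkpointLocation", "/tmp/checkpoints/bronze-events")
            .start("/tmp/bronze/events")
            .awaitTermination();
    }
}
```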
