Senior Data Engineer

Fandango (NBCUniversal)

Job Summary

As a Senior Data Engineer at Fandango, you will be responsible for architecting, designing, and building scalable data solutions for analytics, insights, and personalization. This role involves leading complex data initiatives, mentoring junior engineers, and ensuring the resilience and efficiency of the data infrastructure. You will develop data pipelines, data lakes, and data warehouses using AWS services; design ETL/ELT frameworks; optimize PySpark, Python, and SQL workflows; and implement data quality measures to support data-driven applications and machine learning use cases.

Job Description

As a Senior Data Engineer, you will architect, design, and build data solutions that power analytics, insights, and personalization across Fandango’s digital platforms. You’ll lead complex data initiatives, mentor junior engineers, and ensure that our data infrastructure is scalable, efficient, and resilient.

Responsibilities

  • Architect, design, and develop scalable data pipelines, data lakes, and data warehouses (AWS Redshift, S3, Glue, EMR) to support analytics and product intelligence.
  • Lead the design of ETL/ELT frameworks and data models (relational and dimensional) to power reporting, dashboards, and data-driven applications.
  • Build and optimize PySpark, Python, and SQL workflows for large-scale batch and streaming data processing.
  • Orchestrate workflows using Airflow or AWS MWAA and implement data ingestion from APIs, third-party sources, and internal microservices.
  • Implement data quality, validation, and observability frameworks across all data pipelines.
  • Partner with data scientists, analysts, and product teams to ensure high-quality, well-modeled data for analytics and ML use cases.
  • Drive best practices in version control, testing, CI/CD, and infrastructure as code (Terraform).
  • Troubleshoot, tune, and optimize database performance; create indexes and partitioning strategies to improve query efficiency.
  • Support production data systems, ensuring reliability and timeliness of data delivery.
  • Mentor other data engineers, contributing to design reviews, code quality, and operational excellence.
  • Report progress and blockers to leadership, and collaborate with cross-functional IT and infrastructure teams.

#LI-Remote

Qualifications

  • 7+ years of experience in data engineering, data warehousing, or OLAP environments.
  • 5+ years of SQL experience, including performance tuning and query optimization.
  • 4+ years of Python or PySpark development for data pipelines and transformations.
  • 3+ years of experience building and maintaining pipelines using AWS Glue, Airflow, EMR, Lambda, or equivalent orchestration tools.
  • Strong experience in AWS Data Services (S3, Redshift, Athena, DynamoDB) and distributed data processing frameworks.
  • Proven experience in data modeling (dimensional and relational) and schema design for analytical workloads.
  • Demonstrated ability to troubleshoot complex data issues and lead root-cause analysis in production systems.
  • Strong understanding of ETL/ELT design patterns, data governance, and metadata management.
  • Excellent collaboration and communication skills with technical and non-technical stakeholders.

Desired Qualifications

  • Experience with real-time data streaming (Kafka, Kinesis, or similar).
  • Familiarity with Terraform, CI/CD, and automated testing frameworks.
  • Hands-on experience with BI tools such as Tableau, QuickSight, or Looker.
  • Exposure to NoSQL systems (MongoDB, DynamoDB, or Cassandra).
  • Background in data observability and data quality monitoring tools (Great Expectations, Soda, Datadog).
  • Strong Agile/Scrum experience and mentorship background.

This position has been designated as fully remote, meaning the employee is expected to contribute from a non-NBCUniversal worksite, most commonly their residence.

Additional Information

As part of our selection process, external candidates may be required to attend an in-person interview with an NBCUniversal employee at one of our locations prior to a hiring decision. NBCUniversal's policy is to provide equal employment opportunities to all applicants and employees without regard to race, color, religion, creed, gender, gender identity or expression, age, national origin or ancestry, citizenship, disability, sexual orientation, marital status, pregnancy, veteran status, membership in the uniformed services, genetic information, or any other basis protected by applicable law.

If you are a qualified individual with a disability or a disabled veteran, you have the right to request a reasonable accommodation if you are unable or limited in your ability to use or access nbcunicareers.com as a result of your disability. You can request reasonable accommodations by emailing AccessibilitySupport@nbcuni.com.

Although you'll be hired as an NBCU employee, your employment and the responsibilities associated with this job likely will transition to Versant in the future. By joining at this pivotal time, you'll be a part of this exciting company as it takes shape.

For LA County and City Residents Only: NBCUniversal will consider for employment qualified applicants with criminal histories, or arrest or conviction records, in a manner consistent with relevant legal requirements, including the City of Los Angeles' Fair Chance Initiative For Hiring Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, where applicable.
