Senior Research Engineer - Interactive Avatars

Synthesia

Job Summary

As a Senior Research Engineer, you will join Synthesia's R&D Department, focusing on cutting-edge Generative AI challenges, specifically avatar-centric interactive video diffusion models. You will work on applying research to directly impact global solutions used by over 60,000 businesses. This role involves adapting diffusion models, developing real-time video streaming methods, improving perceptual layers for interactive agents, enhancing visual quality, and building robust evaluation frameworks. You will collaborate with the data team and stay updated on relevant research to shape the future of AI video agents.


Job Description

Senior Research Engineer - Interactive Avatars

Welcome to the video-first world

From your everyday PowerPoint presentations to Hollywood movies, AI will transform the way we create and consume content.

Today, people want to watch and listen, not read — both at home and at work. If you’re reading this and nodding, check out our brand video.

Despite the clear preference for video, communication and knowledge sharing in the business environment are still dominated by text, largely because high-quality video production remains complex and challenging to scale. Until now.

Meet Synthesia

We're on a mission to make video easy for everyone. Born in an AI lab, our AI video communications platform simplifies the entire video production process, making it easy for everyone, regardless of skill level, to create, collaborate, and share high-quality videos. Whether it's for delivering essential training to employees and customers or marketing products and services, Synthesia enables large organizations to communicate and share knowledge through video quickly and efficiently. We’re trusted by leading brands such as Heineken, Zoom, Xerox, McDonald’s and more. Read stories from happy customers and what 1,200+ people say on G2.

In February 2024, G2 named us the fastest-growing company in the world. Today, we're at a $2.1bn valuation, and we recently raised our Series D. This brings our total funding to over $330M from top-tier investors, including Accel, Nvidia, Kleiner Perkins, and Google, and top founders and operators from Stripe, Datadog, Miro, Webflow, and Facebook.

What you'll do at Synthesia:

As a Senior Research Engineer, you will join a team of 40+ Researchers and Engineers within the R&D Department working on cutting-edge challenges in the Generative AI space, with a focus on avatar-centric interactive video diffusion models. Within the team, you'll have the opportunity to work on the applied side of our research efforts and directly impact solutions used worldwide by over 60,000 businesses.

This is a unique opportunity for experts in machine learning and diffusion models to shape the future of AI video agents that can think, act, and react like humans. As part of our Interactive Avatars Team, you’ll work on cutting-edge research with a clear focus on turning breakthrough ideas into real product capabilities. You’ll join a team that moves fast, iterates often, and builds models that ship and make a meaningful impact. Example tasks and responsibilities include:

  • Adapt diffusion models to incorporate diverse conditioning signals (e.g., audio, motion, interaction cues); see the sketch after this list.
  • Develop methods for streaming infinitely long video sequences at real-time rates.
  • Work on the perceptual layer of interactive agents, including understanding user audio and generating appropriate contextual reactions.
  • Improve lip-sync accuracy, motion realism, and overall visual quality in video diffusion models.
  • Build robust evaluation frameworks and test suites to enable continuous quality tracking.
  • Collaborate closely with our data team to define data needs and ensure high-quality datasets.
  • Stay up to date with research in world models, interactive human/agent modeling, diffusion models, and related areas.
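
For a sense of what the first responsibility above can look like in practice, here is a minimal, illustrative PyTorch sketch of one common way to condition a video denoiser on audio: project the audio features into the video token space and let the video tokens cross-attend to them. All module names, dimensions, and shapes are hypothetical placeholders, not Synthesia's actual architecture.

# Illustrative only: a toy block in which noisy video tokens cross-attend
# to audio features. Hypothetical names and shapes, not Synthesia's model.
import torch
import torch.nn as nn

class AudioConditionedBlock(nn.Module):
    def __init__(self, video_dim=256, audio_dim=128, heads=4):
        super().__init__()
        # Project audio features into the video token dimension
        self.audio_proj = nn.Linear(audio_dim, video_dim)
        self.cross_attn = nn.MultiheadAttention(video_dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(video_dim)
        self.mlp = nn.Sequential(
            nn.Linear(video_dim, 4 * video_dim),
            nn.GELU(),
            nn.Linear(4 * video_dim, video_dim),
        )

    def forward(self, video_tokens, audio_tokens):
        # video_tokens: (batch, n_video_tokens, video_dim), noisy latents
        # audio_tokens: (batch, n_audio_frames, audio_dim), e.g. speech features
        ctx = self.audio_proj(audio_tokens)
        attended, _ = self.cross_attn(self.norm(video_tokens), ctx, ctx)
        x = video_tokens + attended        # residual cross-attention
        return x + self.mlp(self.norm(x))  # residual feed-forward

if __name__ == "__main__":
    block = AudioConditionedBlock()
    video = torch.randn(2, 64, 256)   # e.g. 64 spatio-temporal tokens per clip
    audio = torch.randn(2, 32, 128)   # e.g. 32 frames of audio features
    print(block(video, audio).shape)  # torch.Size([2, 64, 256])

In a full system a block like this would sit inside a diffusion backbone (for example a video DiT) and be trained with a standard denoising objective; it is included here only to ground the kind of conditioning work described above.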

What we're looking for:

  • Comfortable owning and executing on the responsibilities listed above.
  • Strong ML (e.g., diffusion, GANs, VAEs) and computer vision background with relevant industry experience.
  • Hands-on experience with diffusion models (ideally avatar-centric or video-focused) and up to date with recent advances.
  • Proficient in PyTorch and familiar with modern ML frameworks and tooling.
  • Strong Python engineering skills, confident with git and version control, and a commitment to clean, maintainable research code.
  • Outcome-driven, detail-oriented, and motivated to push state-of-the-art research into real product impact.
  • Clear communicator of hypotheses, experiments, and results.

What will make you stand out:

  • Experience with audio-conditioned video diffusion models and deep knowledge of recent video DiT architectures.
  • Demonstrated ability to own the full model development pipeline end to end, from data preparation to model design, training, and evaluation.
  • A strong publication record in areas such as world models, interactive agents, or video diffusion models.

Why join us?

We’re living the golden age of AI. The next decade will yield the next iconic companies, and we dare to say we have what it takes to become one. Here’s why:

Our culture

At Synthesia we’re passionate about building, not talking, planning or politicising. We strive to hire the smartest, kindest and most unrelenting people and let them do their best work without distractions. Our work principles serve as our charter for how we make decisions, give feedback and structure our work to empower everyone to go as fast as possible. You can find out more about these principles here.

Serving 50,000+ customers (and 50% of the Fortune 500)

We’re trusted by leading brands such as Heineken, Zoom, Xerox, McDonald’s and more. Read stories from happy customers and what 1,200+ people say on G2.

Proprietary AI technology

Since 2017, we’ve been pioneering advancements in Generative AI. Our AI technology is built in-house by a team of world-class AI researchers and engineers. Learn more about our AI Research Lab and the team behind it.

AI Safety, Ethics and Security

AI safety, ethics, and security are fundamental to our mission. While the full scope of Artificial Intelligence's impact on our society is still unfolding, our position is clear: People first. Always. Learn more about our commitments to AI Ethics, Safety & Security.

The good stuff...

  • Competitive compensation (salary + stock options + bonus)
  • Hybrid work setting with an office in London, Amsterdam, Zurich, Munich, or remote in Europe.
  • 25 days of annual leave + public holidays
  • Great company culture with the option to join regular planning and socials at our hubs
  • Other benefits depending on your location

You can see more about Who we are and How we work here: https://www.synthesia.io/careers
