Research Scientist – Speech and Audio Understanding (Large Models & Multimodal Systems)

All levels • $149,000 to $279,800 per year
Audio

Job Description

This role involves joining a core research team focused on speech and audio within large-scale, native multimodal model systems. The Research Scientist will develop general-purpose, end-to-end large speech models for multilingual ASR, speech translation, speech synthesis, paralinguistic understanding, and general audio understanding. Key responsibilities include advancing research on speech representation learning, exploring audio/speech alignment and fusion in multimodal models, and building high-quality multimodal speech datasets. Candidates should have a Ph.D., or a Master's degree with relevant experience, a solid understanding of speech processing, acoustic modeling, and large model architectures, and proficiency in deep learning frameworks.
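As a rough illustration of the multilingual ASR and speech translation scope described above, the following is a minimal sketch built on the open-source openai-whisper package; the toolkit choice and the placeholder file name clip.wav are assumptions for illustration only, not tools or data named in this posting.

    # Illustrative only: a minimal multilingual ASR / speech-translation pipeline
    # built on the open-source openai-whisper package (pip install openai-whisper).
    # The posting does not prescribe any toolkit; "clip.wav" is a placeholder file.
    import whisper

    model = whisper.load_model("small")  # multilingual checkpoint

    # Multilingual ASR: the language is auto-detected and the transcript stays in it.
    asr = model.transcribe("clip.wav", task="transcribe")
    print(asr["language"], asr["text"])

    # Speech translation: the same model decodes directly into English text.
    st = model.transcribe("clip.wav", task="translate")
    print(st["text"])

The end-to-end large speech models described in this role generalize such single-task pipelines into one model covering recognition, translation, synthesis, and paralinguistic/audio understanding.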
Good To Have:
  • Experience with multilingual, multitask, or end-to-end speech systems.
  • In-depth research or practical experience in speech representation pretraining (e.g., HuBERT, Wav2Vec, Whisper).
  • In-depth research or practical experience in multimodal alignment and cross-modal modeling (e.g., audio-visual-text).
  • Experience driving state-of-the-art (SOTA) performance on audio understanding tasks with large models.
  • Experience with large-scale training and distributed systems.
Must Have:
  • Develop general-purpose, end-to-end large speech models covering multilingual ASR, speech translation, speech synthesis, paralinguistic understanding, and general audio understanding.
  • Advance research on speech representation learning and encoder/decoder architectures to build unified acoustic representations for multi-task and multimodal applications.
  • Explore representation alignment and fusion mechanisms between audio/speech and other modalities in large multimodal models.
  • Build and maintain high-quality multimodal speech datasets.
  • Ph.D. in Computer Science, Electrical Engineering, Artificial Intelligence, Linguistics, or a related field; or Master’s degree with several years of relevant experience.
  • Solid understanding of speech and audio signal processing, acoustic modeling, language modeling, and large model architectures.
  • Proficient in one or more core speech system development pipelines such as ASR, TTS, or speech translation.
  • Proficient in deep learning frameworks such as PyTorch or TensorFlow.
  • Familiar with Transformer-based architectures and their applications in speech and multimodal training/inference.
Perks:
  • Medical benefits
  • Dental benefits
  • Vision benefits
  • Life and disability benefits
  • Participation in the Company’s 401(k) plan
  • Sign-on payment (case-by-case)
  • Relocation package (case-by-case)
  • Restricted stock units (case-by-case)
  • 15 to 25 days of vacation per year (depending on tenure)
  • Up to 13 days of holidays throughout the calendar year
  • Up to 10 days of paid sick leave per year

What the Role Entails

Job Responsibilities:

We are building large-scale, native multimodal model systems that jointly support vision, audio, and text to enable comprehensive perception and understanding of the physical world. You will join the core research team focused on speech and audio, contributing to the following key research areas:
  • Develop general-purpose, end-to-end large speech models covering multilingual automatic speech recognition (ASR), speech translation, speech synthesis, paralinguistic understanding, and general audio understanding.
  • Advance research on speech representation learning and encoder/decoder architectures to build unified acoustic representations for multi-task and multimodal applications.
  • Explore representation alignment and fusion mechanisms between audio/speech and other modalities in large multimodal models, enabling joint modeling with image and text, as sketched below.
  • Build and maintain high-quality multimodal speech datasets, including automatic annotation and data synthesis technologies.
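As referenced in the third research area above, one common fusion pattern is an adapter that downsamples speech-encoder states and projects them into a language model's embedding space; the module, dimensions, and random tensors below are hypothetical stand-ins, not the team's actual design.

    # Hypothetical adapter-style fusion sketch (not the actual architecture):
    # speech-encoder states are downsampled, projected into the text embedding
    # space, and prepended to text token embeddings before a decoder language model.
    import torch
    import torch.nn as nn

    class AudioToTextAdapter(nn.Module):
        """Illustrative module: maps audio features to the LM embedding size."""
        def __init__(self, audio_dim: int, text_dim: int, stride: int = 4):
            super().__init__()
            # Strided convolution shortens the audio sequence before projection.
            self.downsample = nn.Conv1d(audio_dim, audio_dim,
                                        kernel_size=stride, stride=stride)
            self.proj = nn.Sequential(nn.Linear(audio_dim, text_dim), nn.GELU(),
                                      nn.Linear(text_dim, text_dim))

        def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
            # audio_feats: (batch, frames, audio_dim) from a speech encoder
            x = self.downsample(audio_feats.transpose(1, 2)).transpose(1, 2)
            return self.proj(x)  # (batch, frames // stride, text_dim)

    # Toy usage with random stand-ins for encoder outputs and token embeddings.
    adapter = AudioToTextAdapter(audio_dim=768, text_dim=1024)
    audio_feats = torch.randn(2, 1500, 768)
    text_embeds = torch.randn(2, 16, 1024)
    fused = torch.cat([adapter(audio_feats), text_embeds], dim=1)
    print(fused.shape)  # torch.Size([2, 391, 1024])

The fused sequence can then be consumed by a text decoder, which is one simple way to realize joint audio-text modeling of the kind described above.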

Who We Look For

  • Ph.D. in Computer Science, Electrical Engineering, Artificial Intelligence, Linguistics, or a related field; or Master’s degree with several years of relevant experience.
  • Solid understanding of speech and audio signal processing, acoustic modeling, language modeling, and large model architectures.
  • Proficient in one or more core speech system development pipelines such as ASR, TTS, or speech translation; experience with multilingual, multitask, or end-to-end systems is a plus.
  • Candidates with in-depth research or practical experience in the following areas are strongly preferred:
      • Speech representation pretraining (e.g., HuBERT, Wav2Vec, Whisper); see the sketch after this list
      • Multimodal alignment and cross-modal modeling (e.g., audio-visual-text)
      • Driving state-of-the-art (SOTA) performance on audio understanding tasks with large models
  • Proficient in deep learning frameworks such as PyTorch or TensorFlow; experience with large-scale training and distributed systems is a plus.
  • Familiar with Transformer-based architectures and their applications in speech and multimodal training/inference.
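Referencing the speech representation pretraining item above, here is a minimal sketch that uses a pretrained wav2vec 2.0 checkpoint as a frozen feature extractor via the Hugging Face transformers library; the checkpoint name and the random one-second waveform are illustrative placeholders.

    # Illustrative only: extract self-supervised speech representations with a
    # pretrained wav2vec 2.0 checkpoint from Hugging Face transformers.
    import torch
    from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

    name = "facebook/wav2vec2-base"
    extractor = Wav2Vec2FeatureExtractor.from_pretrained(name)
    model = Wav2Vec2Model.from_pretrained(name).eval()

    # Placeholder input: one second of random 16 kHz "audio".
    waveform = torch.randn(16000)
    inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, ~49 frames, 768)
    print(hidden.shape)

Frame-level representations like these are a typical starting point for the unified acoustic representations and multimodal alignment work described in the responsibilities above.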

The expected base pay range for this position in the location(s) listed above is $149,000.00 to $279,800.00 per year. Actual pay may vary depending on job-related knowledge, skills, and experience. Employees hired for this position may be eligible for a sign-on payment, relocation package, and restricted stock units, which will be evaluated on a case-by-case basis. Subject to the terms and conditions of the plans in effect, hired applicants are also eligible for medical, dental, vision, life and disability benefits, and participation in the Company’s 401(k) plan. Employees are also eligible for 15 to 25 days of vacation per year (depending on tenure), up to 13 days of holidays throughout the calendar year, and up to 10 days of paid sick leave per year. Your benefits may be adjusted to reflect your location, employment status, duration of employment with the company, and position level. Benefits may also be pro-rated for those who start working during the calendar year.

Equal Employment Opportunity at Tencent

As an equal opportunity employer, we firmly believe that diverse voices fuel our innovation and allow us to better serve our users and the community. We foster an environment where every employee of Tencent feels supported and inspired to achieve individual and common goals.

Who we are

Tencent is a world-leading internet and technology company that develops innovative products and services to improve the quality of life for people around the world.
