Research Scientist - Multimodal Language Models

7 months ago • All levels • $200,000–$300,000 PA
Research Development

Job Description

Luma is seeking experienced researchers to build multimodal AI, focusing on integrating vision and audio with language models. The role involves end-to-end work on cutting-edge multimodal language models, with a strong emphasis on audio and visual data, and contributions will directly shape research projects and product roadmaps. Responsibilities include designing and implementing novel AI algorithms, building evaluation and benchmarking tools, developing large-scale training and inference methods, ensuring efficient model implementation, processing multimodal data, collaborating with teams across Luma, and building product prototypes based on multimodal generative AI.
Good To Have:
  • Experience with interleaved audio, video, image, and/or text data
Must Have:
  • Expertise in Python & PyTorch
  • Experience with the full AI development pipeline
  • Experience with large-scale text data
  • Hands-on experience with LLMs, VLMs, ALMs, or generative video models
Perks:
  • Competitive equity packages in the form of stock options
  • Comprehensive benefits plan

Luma’s mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, the next step-function change will come from vision and audio. So we are working on training and scaling up multimodal foundation models for systems that can see, hear, and understand; show and explain; and eventually interact with our world to effect change.

We are looking for researchers with significant experience solving hard problems in multimodal language models. You will work end-to-end on cutting-edge multimodal language models with a strong emphasis on audio and visual data. Your contributions will be pivotal in shaping research projects and product roadmaps.

Responsibilities

  • Design and implement novel AI algorithms and architectures for multimodal language models.

  • Build tools to evaluate and benchmark multimodal language models.

  • Develop large-scale AI training and inference methods.

  • Ensure efficient implementation of models & systems for data processing and training.

  • Build tools to analyze and process multimodal data.

  • Collaborate with research and engineering teams across Luma to transfer research to products and services.

  • Implement cutting-edge product prototypes based on multimodal generative AI.

Experience

  • Expertise in Python & PyTorch, including practical experience working with the full development pipeline from data processing & data loading to training, inference, and optimization.

  • Experience working with large-scale text data, or (bonus) interleaved data spanning audio, video, image, and/or text.

  • Hands-on experience developing or benchmarking at least one of the following: LLMs, Vision Language Models, Audio Language Models, or generative video models.

Compensation

  • The pay range for this position in California is $200,000–$300,000 per year; however, base pay offered may vary depending on job-related knowledge, skills, candidate location, and experience. We also offer competitive equity packages in the form of stock options and a comprehensive benefits plan.
