Research Engineer - Multimodal Language Models

6 months ago • All levels • $200,000 - $300,000 per year
Research Development

Job Description

Luma is building multimodal AI to expand human imagination and capabilities. This role involves training and scaling multimodal foundation models that can see, understand, show, explain, and interact with the world. The engineer will work end-to-end on cutting-edge multimodal language models with a focus on audio and visual data, significantly contributing to research projects and product roadmaps. Responsibilities include designing large-scale annotation efforts, building evaluation tools for multimodal models, developing AI training and inference methods, and ensuring efficient implementation of models and systems.
Must Have:
  • Expertise in Python and PyTorch
  • Experience with the full development pipeline
  • Experience processing large-scale text data
  • Experience with LLMs, Vision Language Models, or Audio Language Models
Good To Have:
  • Experience with interleaved data (audio, video, image, text)
  • Experience developing or benchmarking generative video models
  • Experience designing annotation tools
  • Experience with synthetic data
Perks:
  • Competitive equity packages (stock options)
  • Comprehensive benefits plan


Luma’s mission is to build multimodal AI to expand human imagination and capabilities.

We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, the next step-function change will come from vision. So we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.


We are looking for engineers with significant experience solving hard problems in PyTorch, multimodal data, and distributed systems. You will work as part of a team to build cutting-edge multimodal language models end-to-end, with a strong emphasis on audio and visual data. Your contributions will be pivotal in shaping research projects and product roadmaps.

Responsibilities

  • Design and develop large-scale annotation efforts for model post-training.

  • Build tools to evaluate and benchmark multimodal language models.

  • Develop large-scale AI training and inference methods.

  • Ensure efficient implementation of models & systems for data processing and training.

  • Build tools to visualize, evaluate and filter datasets.

  • Collaborate with research and engineering teams across Luma to transfer research to products and services.

  • Implement cutting-edge product prototypes based on multimodal generative AI.

Experience

  • Expertise in Python & PyTorch, including practical experience with the full development pipeline, from data processing, preparation, and data loading to training and inference (a minimal illustrative sketch follows this list).

  • Experience processing large-scale text data, or (bonus) interleaved data spanning audio, video, image, and/or text.

  • Hands-on experience developing or benchmarking at least one of the following: LLMs, Vision Language Models, Audio Language Models, or generative video models.


  • Good to have:

    • Experience in design and development of annotation tools

    • Experience in synthetic data
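
Illustrative sketch of the kind of Python/PyTorch pipeline referenced above: a toy end-to-end flow from a synthetic dataset through a DataLoader, a small model, a training loop, and inference. The dataset, model, and hyperparameters are placeholder assumptions chosen only to keep the example self-contained and runnable; they are not Luma's actual stack.

# Minimal illustrative PyTorch pipeline: toy dataset -> DataLoader -> training -> inference.
# Everything here (dataset, model, hyperparameters) is a placeholder assumption.
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader


class ToyTextDataset(Dataset):
    """Synthetic token sequences standing in for preprocessed text/multimodal data."""

    def __init__(self, num_samples: int = 256, seq_len: int = 16, vocab_size: int = 100):
        self.data = torch.randint(0, vocab_size, (num_samples, seq_len))
        self.labels = self.data.sum(dim=1) % 2  # arbitrary binary target

    def __len__(self) -> int:
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx], self.labels[idx]


class TinyClassifier(nn.Module):
    """Embedding + mean-pool + linear head; a stand-in for a real multimodal model."""

    def __init__(self, vocab_size: int = 100, dim: int = 32, num_classes: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.head(self.embed(tokens).mean(dim=1))


def main() -> None:
    loader = DataLoader(ToyTextDataset(), batch_size=32, shuffle=True)
    model = TinyClassifier()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Training loop (short demo run).
    model.train()
    for epoch in range(2):
        for tokens, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(tokens), labels)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")

    # Inference on a single batch.
    model.eval()
    with torch.no_grad():
        tokens, _ = next(iter(loader))
        preds = model(tokens).argmax(dim=1)
        print("predictions:", preds[:8].tolist())


if __name__ == "__main__":
    main()

In a production multimodal setting, the toy dataset and classifier above would be replaced by large-scale interleaved audio/video/image/text loaders and foundation models, with distributed training and inference layered on top.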

Compensation

  • The pay range for this position in California is $200,000 - $300,000/yr; however, base pay offered may vary depending on job-related knowledge, skills, candidate location, and experience. We also offer competitive equity packages in the form of stock options and a comprehensive benefits plan. 


