Meta's Llama LLM Research team is seeking a Research Scientist specializing in Vision Large Language Models (VLLMs). The role involves leading and collaborating on cutting-edge research in multimodal reasoning and generation; designing, coding, and evaluating experiments and organizing their results; working within a large team; publishing research; mentoring colleagues; and fostering cross-functional collaboration. The ideal candidate has expertise in vision encoders, data filtering and curation, RLHF, responsible AI, and model controllability, with a focus on applying research to Meta's product development, along with experience in generative AI and LLMs and a strong publication record.
Good To Have:
- Generative AI and LLM research experience
- First-author publications at top AI conferences
- Experience with RLHF and responsible AI
Must Have:
- PhD in CS/AI or related field
- 3+ years of experience in AI
- Publications in ML/CV/NLP/audio
- Experience with large AI models and datasets
- Software proficiency (Python, PyTorch)
- Ability to lead research on VLLMs