Luma’s mission is to build multimodal AI that expands human imagination and capabilities. We believe multimodality is critical for intelligence, and that the next step-function change, the one that takes us beyond language models to more aware, capable, and useful systems, will come from vision. We are therefore training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change. We are looking for engineers with significant experience solving hard problems in PyTorch, CUDA, and distributed systems. You will work alongside the rest of the research team to build and train cutting-edge foundation models, designed from the ground up to scale, on thousands of GPUs.
The Training Infrastructure team at Luma builds and maintains the distributed systems that enable training our large-scale multimodal models across thousands of GPUs. By providing reliable, efficient, and scalable training infrastructure, the team lets our researchers focus on innovation and push the boundaries of what's possible in AI model development.