Applied Research Engineer - Multimodal Reasoning, SIML
3 months ago • All levels • $143,100–$264,200 p.a.
Research Development
Job Description
We are seeking senior technical leaders experienced in architecting and deploying production-scale multimodal machine learning. The ideal candidate will lead cross-functional efforts in ML modeling, prototyping, validation, and private learning. Strong ML fundamentals and the ability to contextualize research contributions within the state of the art are essential. Experience training and adapting large language models is crucial. The team focuses on multimodal machine learning and system experiences, including Spotlight Search, Photos Memories, Generative Playgrounds, Stickers, and Smart Wallpapers. We scale production ML workflows through distributed training and optimize LLMs for on-device user experiences.
Good To Have:
Experience with validation and private learning
Knowledge of the state of the art in ML research
Must Have:
Senior technical leadership in multimodal ML
Experience in ML modeling and prototyping
Strong ML fundamentals
Experience training/adapting large language models
Ability to scale ML workflows
Do you believe generative models can transform creative workflows and the smart assistants used by billions? Do you believe they can fundamentally shift how people interact with devices and communicate? Our Scene Understanding team strives to turn cutting-edge research into compelling user experiences that realize these goals and more, working on Apple Intelligence technologies such as Image Playground, Genmoji, Generative Memories, Semantic Search, and many others.

We are looking for senior technical leaders experienced in architecting and deploying production-scale multimodal ML. The ideal candidate can lead diverse cross-functional efforts spanning ML modeling, prototyping, validation, and private learning. Solid ML fundamentals and the ability to place research contributions in the context of the state of the art are essential to the role, as is experience training and adapting large language models.

We are the Intelligence System Experience (ISE) team within Apple’s software organization. The team works at the intersection of multimodal machine learning and system experiences: Spotlight Search, Photos Memories, Generative Playgrounds, Stickers, and Smart Wallpapers are all areas the team has had a significant part in delivering through core ML technologies. These user-facing experiences are backed by production ML workflows, which our team scales through distributed training. We also focus on optimizing and adapting LLMs to best suit on-device user experiences.
SELECTED REFERENCES TO OUR TEAM’S WORK:
- https://machinelearning.apple.com/research/introducing-apple-foundation-models
- https://machinelearning.apple.com/research/stable-diffusion-coreml-apple-silicon
- https://machinelearning.apple.com/research/on-device-scene-analysis
- https://machinelearning.apple.com/research/panoptic-segmentation