Join the exciting Generative AI team at Qualcomm, focused on integrating cutting-edge GenAI models on Qualcomm chipsets. The team uses the extensive heterogeneous computing capabilities of Qualcomm chips to run inference of GenAI models on-device, with no need for a connection to the cloud. Our inference engine is designed to help developers run neural network models trained in a variety of frameworks on Snapdragon platforms at blazing speeds while sipping the smallest amount of power. Utilize this power-efficient hardware and software stack to run Large Language Models (LLMs) and Large Vision Models (LVMs) at near-GPU speeds!
In this role, you will spearhead the development and commercialization of the Qualcomm AI Runtime (QAIRT) SDK on Qualcomm SoCs. As an AI inferencing expert, you'll push the limits of performance on large models. Your mastery of deploying large C/C++ software stacks using best practices will be essential. You'll stay on the cutting edge of GenAI advancements, understanding LLMs/Transformers and the nuances of edge-based GenAI deployment. Most importantly, your passion for the role of the edge in AI's evolution will be your driving force.