Are you excited about the impact that optimizing deep learning models can have on enabling transformative user experiences? The field of ML compression research continues to grow rapidly, and new techniques for quantization, pruning, and related optimizations are increasingly available to be ported and adopted by ML developers looking to ship more models within a constrained memory budget and make them run faster. We are passionate about productizing state-of-the-art model optimization algorithms and pushing their envelope, to further compress and speed up the thousands of deep learning models that ship as part of Apple internal and external apps, running locally on millions of Apple devices.

We work on a Python library that implements a variety of training-time and post-training quantization algorithms, provides them to developers as simple-to-use, turnkey APIs, and ensures that these optimizations work seamlessly with the Core ML inference stack and Apple hardware. We collaborate heavily with researchers at Apple, ML software and hardware architecture teams, and internal and external product teams shipping state-of-the-art optimized models on Apple devices.

If you are excited about making a big impact and playing a critical role in growing the user base and driving the adoption of a relatively new library, this is a great opportunity for you. We are looking for someone who is highly self-motivated and eager to lead the testing and automation initiatives for a model optimization library for on-device execution. If you are passionate about maintaining high code quality and testability in production code, and have experience setting up and maintaining CI pipelines for software projects, we strongly encourage you to apply.
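To give a flavor of the kind of algorithm such a library productizes, here is a minimal sketch of post-training weight quantization in plain NumPy. This is an illustration only, not the library's actual API: symmetric per-tensor int8 quantization maps float32 weights to 8-bit integers plus a single scale factor, cutting storage by 4x while keeping reconstruction error bounded by half a quantization step.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

# Hypothetical example weights; a real workflow would quantize a trained model's layers.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Rounding to the nearest integer bounds the per-weight error by scale / 2.
max_err = np.abs(w - w_hat).max()
```

Production libraries add refinements on top of this idea (per-channel scales, asymmetric zero points, calibration data, quantization-aware training), but the core transform is the one above.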