AIML - ML Engineer, Responsible AI
Apple
Job Summary
Would you like to play a part in building the next generation of generative AI applications at Apple? We're looking for scientists and engineers to work on ambitious projects that will impact the future of Apple, our products, and the broader world. In this role, you'll have the opportunity to tackle challenging problems in machine learning, with a particular focus on generative AI. As a member of the Apple HCMI group, you will work on Apple's generative models that will power a wide array of new features. Our team is currently working on large generative models for vision and language, with particular interest in safety, robustness, and uncertainty in models.
Must Have
- Develop models, tools, metrics, and datasets for evaluating the safety of generative models across the model deployment lifecycle
- Develop methods, models, and tools to interpret and explain failures in language and diffusion models
- Build and maintain human annotation and red teaming pipelines to assess quality and risk of various Apple products
- Prototype, implement, and evaluate new ML models and algorithms for red teaming LLMs
- Work with highly sensitive material, including exposure to offensive and controversial content
- Strong engineering skills and experience writing production-quality code in Python, Swift, or other programming languages
- Background in generative models, natural language processing, LLMs, or diffusion models
- Experience with failure analysis, quality engineering, or robustness analysis for AI/ML-based features
- Experience working with crowd-based annotations and human evaluations
- Experience working on explainability and interpretation of AI/ML models
Good to Have
- BS, MS, or PhD in Computer Science, Machine Learning, or a related field, or an equivalent qualification acquired through other avenues
- Proven track record of contributing to diverse teams in a collaborative environment
Perks & Benefits
- Comprehensive medical and dental coverage
- Retirement benefits
- A range of discounted products and free services
- Reimbursement for certain educational expenses, including tuition, for formal education related to advancing your career at Apple
- Opportunity to become an Apple shareholder through participation in Apple’s discretionary employee stock programs
- Eligible for discretionary restricted stock unit awards
- Ability to purchase Apple stock at a discount through voluntary participation in Apple’s Employee Stock Purchase Plan
- Eligible for discretionary bonuses or commission payments
- Relocation