AI Safety - Machine Learning and Applied Research

Apple Inc.
Seattle, United States of America
1 month ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Intermediate

Job location

Seattle, United States of America

Tech stack

Artificial Intelligence
Apple Products
Computer Vision
Machine Learning
Large Language Models
Information Technology

Job description

Our team leads Responsible AI & Safety initiatives for global generative AI products, operating at the intersection of policy, product, and GenAI. We're seeking candidates who will shape safety policies in partnership with leadership, design, engineering, legal, and regulatory stakeholders, ensuring our safeguards advance both user protection and product innovation.

You will collaborate closely with top machine learning researchers and engineers, software engineers, and design teams to develop and deliver groundbreaking solutions for Apple products. We believe that the most exciting problems in machine learning research arise at the intersection of emerging technologies and real-world use cases. This is also where the most critical breakthroughs come from.

You will also work on producing safety evaluations that uphold Apple's Responsible AI values. This work requires thoughtful data sampling, creation, and curation of evaluation datasets; high-quality, detailed annotations and careful auto-grading to assess feature performance; and mindful analysis to understand what each evaluation means for the user experience.

Requirements

  • 3+ years of proven ability in machine learning, including work with generative models (Transformers, LLMs, VLMs), NLP, or Computer Vision
  • 4+ years of research or product deployment experience in areas related to responsible AI, with publications in top ML venues (e.g., ACL, CHI, CVPR, EMNLP, FAccT, ICML, Interspeech, NeurIPS, UIST)
  • Strong research fundamentals, machine learning principles, and development methodologies around LLMs, foundation models, and diffusion models
  • Experience working with generative models for evaluation and/or product development, and up-to-date knowledge of common challenges and failure modes
  • PhD, MS, or BS in Computer Science, Machine Learning, or a related field, or an equivalent qualification acquired through other avenues
  • Experience working in the Responsible AI space
  • Curiosity about fairness and bias in generative AI systems, and a strong desire to help make the technology more equitable
  • Proven success contributing in a highly cross-functional environment
  • Experience shipping complex AI systems at global scale

Apply for this position