Research Scientist, Applied Machine Learning Security (Agent Systems)

Apple Inc.
Cupertino, United States of America

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Compensation
$181,100 – $318,400

Job location

Cupertino, United States of America

Tech stack

Machine Learning
Large Language Models

Job description

This role focuses on applied security research for production ML systems, with an emphasis on agentic and tool-using models deployed at scale. You will lead research efforts that surface real security risks in shipped or near-shipped systems, and you will drive mitigations that integrate cleanly into Apple's ML platforms and products.

You will operate at the boundary between research, platform engineering, and product security, conducting original research grounded in real system behavior and translating it into concrete design changes, launch requirements, and long-term hardening strategies. Impact is measured by risk reduction in production, not theoretical results alone.

Requirements

Ph.D. or equivalent experience in machine learning, security, systems, or a related field.

Demonstrated experience in applied ML security, adversarial ML, or systems security with real-world impact.

Strong experimental and engineering skills, with an emphasis on reproducibility and operational relevance.

Preferred Qualifications

Experience researching or securing LLM-based or tool-augmented ML systems.

Ability to work fluidly across research, engineering, and security review processes.

Track record of influencing production systems through research-driven insights.

Publications in top venues are a plus, but production impact is the primary signal.

Benefits & conditions

At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $181,100 and $318,400, and your base pay will depend on your skills, qualifications, experience, and location.

Apple employees also have the opportunity to become an Apple shareholder through participation in Apple's discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple's Employee Stock Purchase Plan. You'll also receive benefits including comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and, for formal education related to advancing your career at Apple, reimbursement for certain educational expenses, including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits.

Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.

About the company

At Apple, we believe privacy is a fundamental human right. Our Security Engineering & Architecture (SEAR) organization is at the forefront of protecting billions of users worldwide, building security into every product, service, and experience we create. The SEAR ML Security Engineering team combines cutting-edge machine learning with world-class security engineering to defend against evolving threats at unprecedented scale. We're responsible for developing intelligent security systems for Apple Intelligence that protect Apple's ecosystem while preserving the privacy our users expect and deserve.

We're seeking a staff-level ML Security Research Scientist who operates at the intersection of applied research and production impact. You'll lead original security research on agentic ML systems deployed at scale, driving secure agentic design directly into shipping products, identifying real vulnerabilities in tool-using models, and designing adversarial evaluations that reflect actual attacker behavior. You'll work at the boundary between research, platform engineering, and product security, translating findings into architectural decisions, launch requirements, and long-term hardening strategies that protect billions of users. Your impact will be measured by risk reduction in production systems that ship.
