AI Security Engineer
Job description
The AI Security Engineer is responsible for securing the enablement and use of AI, GenAI, LLM, and agentic technologies across the enterprise, balancing business velocity with protection of Applied Materials' intellectual property, sensitive data, and customer trust.
This role drives AI security governance, risk management, technical guardrails, and operational oversight for AI systems and AI-integrated applications across the full lifecycle, from intake and design through deployment, monitoring, and incident response. The role serves as a key focal point for AI security execution in the US and partners closely with global counterparts and cross-pillar security teams to deliver scalable, measurable, and auditable AI security controls.
Technical Mindset & Operating Style
- Highly technology-savvy and continuously current on rapidly evolving AI/LLM platforms, agent frameworks, developer tooling, and emerging attack techniques through hands-on experimentation and learning.
- Brings strong engineering intuition through prior software development experience or equivalent hands-on technical background, enabling effective architecture reviews, threat modeling, and pragmatic security guidance.
- Comfortable reading, writing, and reviewing code (e.g., Python, TypeScript, or similar) to understand AI workflows, model integrations, APIs, pipelines, and real-world failure modes.
- Practical experience experimenting with AI tooling, copilots, agents, and "vibe-coding" workflows, with an understanding of how developers prototype, iterate, and ship AI-enabled systems.
- Able to translate modern developer behaviors (prompt-driven development, agent orchestration, rapid iteration) into realistic, enforceable security controls rather than theoretical policy.
- Uses technical credibility to influence engineering teams, accelerate adoption of secure AI patterns, and ensure security enables, rather than blocks, innovation.
AI Security Governance & Intake
- Own enterprise AI discovery, inventory, and intake workflows covering AI use cases, models, tools, agents, and integrations
- Define and enforce AI risk tiering and classification (data sensitivity, model risk, autonomy level, exposure)
- Partner with AI Governance, Legal, Privacy, and Risk teams to establish approval, exception, and waiver processes
- Ensure AI security controls align with enterprise risk management and audit expectations
AI Threat Modeling & Risk Management
- Lead AI-specific threat modeling, including prompt injection, data leakage, model poisoning, tool abuse, agentic risk, and supply-chain threats
- Define secure AI architecture patterns and prohibited design patterns
- Conduct and oversee risk assessments for LLM-integrated applications, internal copilots, and external AI services
- Track AI security risks and exceptions through remediation and closure
Technical Controls & Guardrails
- Define and operationalize AI security guardrails, including:
  - Authentication and authorization for AI systems
  - Data boundaries, retention, and usage controls
  - Output/content controls and policy enforcement
  - Identity, secrets, and key management for AI workloads
- Lead security requirements for agent frameworks, MCP servers/clients, AI gateways, and proxies
- Partner with AppSec and Platform teams to deliver secure "paved-road" AI solutions for engineering teams
Secure AI Lifecycle, Testing & Monitoring
- Establish secure AI lifecycle gates (pre-prod, prod, post-deployment)
- Own AI security testing and validation, including red teaming, abuse testing, and guardrail effectiveness
- Define requirements for telemetry, audit logging, and retention for AI sessions, tool calls, and memory usage
- Integrate AI signals into SIEM, detection, and incident response workflows
Incident Response & Continuous Improvement
- Own AI-specific detection use cases and alerting strategies
- Partner with IR teams to develop and maintain AI incident response posture and integration with SIEM tools
- Lead post-incident reviews and drive control improvements
- Publish executive and operational AI security metrics and dashboards
Requirements
- 10+ years in security architecture, application security, cloud/platform security, or related fields
- Demonstrated experience securing AI/ML or LLM-based systems in enterprise environments
- Strong background in threat modeling, secure design, and risk management
- Experience working cross-functionally with engineering, product, legal, and compliance teams
- Strong written and verbal communication skills, including executive-level communication
- Prior experience as a software engineer, platform engineer, or security engineer with significant coding responsibilities
- Experience with AI governance frameworks or enterprise risk management programs
- Familiarity with security testing, red teaming, and detection engineering
- Experience building security programs with clear KPIs, metrics, and audit readiness
Benefits & conditions
The salary offered to a selected candidate will be based on multiple factors including location, hire grade, job-related knowledge, skills, experience, and with consideration of internal equity of our current team members. In addition to a comprehensive benefits package, candidates may be eligible for other forms of compensation such as participation in a bonus and a stock award program, as applicable.
For all sales roles, the posted salary range is the Target Total Cash (TTC) range for the role, which is the sum of base salary and target bonus amount at 100% goal achievement.