AI Security Engineer
Job description
We are seeking a highly skilled and proactive AI Security Engineer to join our Information Security team. This role is responsible for enabling the secure adoption of artificial intelligence technologies across the enterprise by designing, implementing, and enforcing security controls for AI platforms, tools, and integrations.
AI Platform & Application Security
- Conduct security reviews and risk assessments for internal and third-party AI platforms, tools, agents, and APIs.
- Support secure onboarding of generative AI tools, AI-assisted development platforms, and SaaS-based AI services.
- Define security requirements and architectural guardrails for AI use cases prior to production deployment.
Security Controls, Monitoring & Enforcement
- Design and implement preventive and detective controls for AI services using enterprise security tooling (firewalls, XDR, CASB, CSPM, SIEM).
- Develop detections and alerts for:
  - Unsanctioned or unapproved AI tool usage
  - Risky AI-driven data access patterns
  - Unauthorized OAuth integrations and API activity
- Enforce policy-based controls to prevent sensitive or regulated data exposure to AI systems.
Data Protection & Governance
- Partner with Legal, Privacy, and Security teams to align AI usage with data classification, DLP policies, and regulatory obligations.
- Translate AI governance policy into technical enforcement mechanisms.
- Support internal audits, risk assessments, and approvals related to AI adoption.
Cloud, Identity & API Security
- Secure AI workloads and integrations across cloud platforms (Azure, AWS, GCP).
- Govern identity, access, and OAuth permissions for AI tools accessing enterprise data sources.
- Implement secure authentication, authorization, secrets management, and API protection patterns.
Threat Modeling & Incident Response
- Perform threat modeling for AI systems, workflows, and integrations.
- Investigate and respond to AI-related security incidents, policy violations, or data exposure concerns.
- Contribute AI-specific detections, response procedures, and automation into SOAR and incident response playbooks.
Cross-Functional Collaboration
- Serve as the security subject-matter expert for AI initiatives across Engineering, IT, and Product teams.
- Provide security guidance and secure design recommendations to application and platform teams.
- Stay current on emerging AI threats, abuse techniques, and industry best practices.
Requirements
The ideal candidate has experience securing and governing AI systems, including generative AI platforms and AI integrations, with a deep understanding of AI-specific security risks and how to mitigate them through policy, architecture, and controls. This role focuses on real-world AI security enablement, governance enforcement, and protection of enterprise data.
- Strong background in security engineering, cloud security, or application security.
- Experience securing SaaS platforms, APIs, and cloud workloads.
- Hands-on experience with enterprise security tools (firewalls, XDR/EDR, SIEM, CASB, CSPM).
- Solid understanding of identity and access management, including OAuth and API authorization.
- Ability to translate security policy, governance, and risk requirements into enforceable technical controls.
- Experience securing AI platforms, generative AI tools, or LLM-based services.
- Familiarity with AI risk domains such as prompt injection, model misuse, data leakage, training data exposure, and third-party AI services.
- Experience assessing or governing AI integrations, agents, plugins, or API-based AI services in enterprise environments.
- Understanding of AI governance, risk management, or responsible AI principles (policy, controls, or assurance, not model development).
- Strong communication skills with the ability to explain complex security concepts to technical and non-technical stakeholders.