Senior AI Security & Robustness Engineer
Job description
We are seeking a Senior AI Security & Robustness Engineer to strengthen the adversarial robustness, privacy, and trustworthiness of AI models deployed across edge, embedded, hybrid, and cloud environments. You will shape the frameworks, defenses, and best practices that secure classical, deep learning, and foundation models against real-world attacks. This includes model protection (obfuscation, watermarking), secure ML lifecycle management, and evaluation under adversarial threat models.
Responsibilities
This is a hands-on and high-impact role, blending applied research and production engineering:
- Design, test, and deploy adversarial defenses for ML models across varied deployment architectures (edge, hybrid, cloud)
- Own robustness evaluation pipelines, red-teaming exercises, and model penetration testing
- Secure ML artifacts via fingerprinting, obfuscation, and model watermarking
- Implement privacy-preserving learning techniques (e.g., federated learning, DP-SGD)
- Contribute to threat modeling and secure ML lifecycle governance
- Develop and maintain tooling for continuous robustness testing and secure MLOps workflows
- Collaborate with research and product teams to transition prototype defenses into production
- Publish and communicate findings internally and externally when appropriate
Requirements
- Master's or PhD in Computer Science, Cybersecurity, Applied Mathematics, Electrical Engineering, or a related field
- Strong foundations in deep learning, optimization, statistics, and reliability evaluation
- Expertise in adversarial ML methods and evaluation frameworks
- Hands-on proficiency with PyTorch (preferred) or TensorFlow
- Experience deploying hardened models to embedded or otherwise constrained environments
- Experience with secure ML lifecycle concepts and threat modeling
- Experience with at least one ML security tool (e.g., ART, CleverHans, Foolbox)
- Experience with model IP protection: watermarking, fingerprinting, and secure model storage
- Strong communication and cross-functional collaboration skills in English
Desired Qualifications
- Experience with federated learning frameworks (e.g., Flower, OpenFL)
- Familiarity with cryptographic principles and secure computation techniques
- Experience with MLOps tooling (e.g., MLflow, W&B, CI/CD)
- Publications in top AI and/or security venues (NeurIPS, ICML, AAAI, IEEE S&P, USENIX, ACM CCS, etc.)
- Contributions to open-source ML security projects