Embedded AI Platform Security Engineer
Job description
Join our Innovation Team, where we explore cutting-edge concepts at the intersection of Machine Learning and Security. Our mission is to develop forward-looking solutions (such as model protection, privacy-preserving ML, security for agentic AI, and anomaly detection) that will later be integrated into our Edge products. This requires strong innovation skills combined with a hands-on mindset.
If you are passionate about building secure AI systems, exploring new ideas, and turning concepts into prototypes, this role is for you.
Responsibilities:
- Design and implement security solutions for customer-provided ML models running on embedded hardware using secure execution environments and enclaves.
- Develop secure boot and firmware integrity mechanisms for AI-enabled embedded platforms.
- Implement runtime hardening and isolation for ML workloads.
- Enable model protection using Trusted Execution Environments (TEEs) and secure enclaves.
- Ensure secure loading and execution of ML models (ONNX, TensorFlow Lite) on edge devices.
Requirements
- Degree in Computer Science, Cybersecurity, or Cryptography and a strong interest in applied ML
- At least 5 years of experience in embedded programming (C/C++, Rust)
- Good understanding of secure boot, firmware integrity, and runtime security
- Experience with or interest in TEEs, TPM integration, and secure enclaves
- Familiarity with ML model formats and deployment workflows
- Knowledge of cryptographic primitives for integrity and confidentiality
Please note: The successful candidate will be responsible for security-related tasks. The assignment may fall within the scope of security certifications; a conscientious and reliable way of working is therefore necessary.