Principal Engineer - Edge AI & Intelligent Sensing
Job description
- Lead the design of AI-first perception pipelines for real-time localization, mapping, and scene understanding.
- Architect models that integrate deep learning (CNNs, transformers, foundation models) with classical robotics algorithms.
- Develop algorithms for pose estimation, depth prediction, loop closure, semantic scene understanding, and multimodal fusion (a toy fusion sketch follows this list).
- Design and curate datasets for sensor-rich, dynamic environments - including image, depth, IMU, LiDAR, and multi-timescale time series.
- Collaborate with robotics, embedded, and hardware teams to bring models from research into robust, production-grade systems.
- Drive advances in self-supervised learning, multimodal fusion, and Edge AI model compression/acceleration.
- Contribute to AI platform architecture, tooling, and MLOps for continuous deployment.
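To make the multimodal-fusion bullet above concrete, here is a toy PyTorch sketch of a late-fusion head that combines image features with an encoded IMU window to regress a pose. Every module name, dimension, and design choice is hypothetical and illustrative, not a description of the team's actual architecture.

```python
# Illustrative only: a toy late-fusion head for pose regression. All names,
# dimensions, and the 3+4 (translation + quaternion) output are hypothetical.
import torch
import torch.nn as nn


class ToyMultimodalFusion(nn.Module):
    def __init__(self, img_dim: int = 512, imu_dim: int = 6, hidden: int = 128):
        super().__init__()
        # Encode a 6-axis IMU window (accel + gyro) with a small GRU.
        self.imu_encoder = nn.GRU(imu_dim, hidden, batch_first=True)
        # Map the concatenated features to a 7-DoF pose estimate.
        self.head = nn.Sequential(
            nn.Linear(img_dim + hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 7),  # translation (3) + quaternion (4)
        )

    def forward(self, img_feat: torch.Tensor, imu_seq: torch.Tensor) -> torch.Tensor:
        # img_feat: (B, img_dim) from any vision backbone; imu_seq: (B, T, imu_dim).
        _, h = self.imu_encoder(imu_seq)  # h: (num_layers, B, hidden)
        fused = torch.cat([img_feat, h[-1]], dim=-1)
        return self.head(fused)  # (B, 7) pose estimate


model = ToyMultimodalFusion()
pose = model(torch.randn(2, 512), torch.randn(2, 50, 6))  # -> shape (2, 7)
```

In a real pipeline the image features would come from a learned backbone and the head would be trained with a geometric loss; the point here is only the shape of the fusion, not a recommended design.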
Requirements
We are seeking a Principal AI Engineer with deep expertise in machine learning for perception, multimodal intelligence, or time-series understanding. Robotics experience is welcome but not required; what matters is your ability to bring modern AI methods to embodied systems and help shape the future of autonomous machines.
- 10+ years in applied AI, perception, or machine learning (robotics background not required).
- M.S. or Ph.D. in Computer Science, Robotics, EE, Applied Math, or similar.
- Demonstrated expertise in AI-based perception (vision transformers, depth estimation, optical flow, implicit representations, or multimodal fusion) or in time-series modeling, dynamical systems learning, or predictive models.
- Strong experience taking AI systems from concept to deployment.
- Programming proficiency in Python and ML frameworks (PyTorch preferred).
- Experience collaborating with cross-functional engineering teams (embedded, systems, hardware).
Highly Valued (Not Required)
Experience in any of the following:
- Visual-inertial odometry (VIO), SLAM, or multi-sensor fusion.
- Foundation models for robotics, self-supervised learning, or large-scale representation learning.
- Open-source contributions (e.g., DROID-SLAM, RTAB-Map, OpenVINS).
- Real-time or resource-constrained deployment (CUDA, TensorRT, edge accelerators); a brief export sketch follows this list.
- Robotics simulators (Isaac Sim, Gazebo, Unreal).
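As a pointer for the deployment bullet above, below is a minimal sketch, assuming PyTorch and torchvision, of exporting a model to ONNX as a common first step toward TensorRT or other edge accelerators. The model choice, input shape, and file name are hypothetical placeholders, not a description of this role's actual pipeline.

```python
# Illustrative only: ONNX export as a typical first step toward TensorRT /
# edge deployment. Model, input shape, and file name are hypothetical.
import torch
import torchvision

model = torchvision.models.mobilenet_v3_small(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)  # fixed input shape for the edge target

torch.onnx.export(
    model,
    dummy,
    "mobilenet_v3_small.onnx",
    input_names=["image"],
    output_names=["logits"],
    opset_version=17,
)
# The resulting graph can then be compiled with TensorRT's bundled CLI,
# e.g. `trtexec --onnx=mobilenet_v3_small.onnx`.
```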