Research Engineer - Dexterous Manipulation (Egocentric Models)
Job description
We are seeking an expert in dexterous manipulation and large-scale modeling to lead the development of our physical foundation models. The goal of this position is to leverage internet-scale egocentric video to build Vision-Language-Action (VLA) models that enable our humanoid robots to interact with the world with human-like fluidity. You bring a deep understanding of how to bridge the gap between observing human actions in video and executing high-DOF (20+) motor control.
- Scalable Egocentric Pre-training: Architect and implement large-scale pre-training objectives for egocentric video datasets to learn generalizable representations of hand-object interactions and spatial-temporal dynamics.
- VLA Foundation Modeling: Develop and scale multi-modal Foundation Models that unify visual perception and natural language instructions into actionable robotic trajectories.
- Generative Policy Design: Design and optimize generative action heads using Diffusion Models and Flow-matching techniques to capture the multi-modal distribution of complex human movements.
- Humanoid Motion Alignment: Develop novel algorithms to align human-centric video representations with the kinematic constraints of 20+ DoF humanoid systems, ensuring fluid and stable execution.
- Reinforcement Learning & Fine-tuning: Utilize Offline RL and high-fidelity simulation fine-tuning to optimize foundation model performance for high-success-rate physical manipulation.
- Cross-Functional Research: Translate cutting-edge research in scaling laws and world models into production-ready architectures that enhance robot reliability and autonomy.
Requirements
- PhD or Master's degree in Robotics, Machine Learning, or a closely related field, with a strong focus on data-driven manipulation, egocentric vision, or foundation models.
- Experience with Humanoid or Dexterous Manipulation, including a deep understanding of contact-rich physics.
- Excellent knowledge of Python, PyTorch, and the distributed training of large-scale neural networks (FSDP, NCCL).
- Proven expertise in Diffusion Models, Flow Matching, and Transformers.
- Hands-on experience deploying learning-based controllers on real robot hardware.
- Experience with Reinforcement Learning and simulation environments (e.g., IsaacLab, MuJoCo).
Benefits & conditions
- Competitive compensation package
- A front-row seat at one of Europe's most ambitious robotics companies
- An energetic, collaborative team with a bias for action