Machine Learning Research Engineer - Perception & Foundation Models

Zendar
Paris, France
6 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
€ 75K – € 95K

Job location

Paris, France

Tech stack

Computer Vision
Python
Machine Learning
Object Detection
TensorFlow
Sensor Fusion
PyTorch
Deep Learning
Lidar

Job description

Zendar's "Semantic Spectrum" technology extracts rich scene understanding from radar sensing. As a Senior ML Research Engineer in Paris, your goal is to evolve this technology into a multi-modal foundation model architecture.

You will design and implement the architecture end-to-end. This involves training models from scratch on massive datasets, defining evaluation metrics for long-tail validation, and partnering with platform teams to ensure successful deployment in real-time embedded systems.

Why this role is exciting:

  • Ownership: You will drive architectural decisions, making rigorous tradeoffs between competing approaches.
  • Scale: You will work with a real-world dataset covering tens of thousands of kilometers across multiple continents.
  • Impact: You will see your work validated on real vehicles, bridging the gap between research and production.

What You'll Do:

  • Architect Multi-Sensor Strategies: Own the technical strategy for multi-sensor perception models. Design fusion architectures for streaming inputs (camera/radar/Lidar) using early fusion and temporal fusion (see the illustrative sketch after this list).
  • Deliver Production-Ready Models: Build and deploy models for:
      • Full-Scene Understanding: Occupancy grids, free-space, and dynamic occupancy.
      • 3D Perception: Object detection and tracking.
      • Static Environment: Lane line and road structure estimation.
  • Drive Reliability: Target "four nines" reliability behavior in defined conditions, focusing on the messy long tail of real-world driving.
  • Optimize for Real-Time: Partner with embedded teams to ensure models meet strict constraints (latency, memory, throughput) and integrate cleanly via stable interfaces.
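
To make the early-fusion and temporal-fusion responsibilities above concrete, here is a minimal PyTorch sketch. It is purely illustrative: the module name EarlyTemporalFusion, the per-sensor BEV token shapes, and all dimensions are assumptions made for this example, not Zendar's actual architecture. It projects per-sensor tokens into a shared space, fuses them with a transformer encoder (early fusion), and cross-attends to a short rolling history of past frames (temporal fusion).

    # Illustrative sketch only: names, shapes, and dimensions are hypothetical.
    import torch
    import torch.nn as nn

    class EarlyTemporalFusion(nn.Module):
        """Fuse camera/radar/Lidar BEV tokens, then attend over recent frames."""

        def __init__(self, dims=None, d_model=256, n_heads=8, n_layers=2, history=4):
            super().__init__()
            dims = dims or {"camera": 256, "radar": 128, "lidar": 192}
            # Early fusion: project every sensor into one shared embedding space.
            self.proj = nn.ModuleDict({k: nn.Linear(d, d_model) for k, d in dims.items()})
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.spatial_fusion = nn.TransformerEncoder(layer, num_layers=n_layers)
            # Temporal fusion: cross-attend from the current frame to past frames.
            self.temporal_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.history = history
            self.memory = []  # rolling window of past fused tokens, each (B, N, d_model)

        def forward(self, feats):
            # feats: dict of sensor name -> BEV tokens of shape (B, N_sensor, dim_sensor)
            tokens = torch.cat([self.proj[k](v) for k, v in feats.items()], dim=1)
            fused = self.spatial_fusion(tokens)        # joint multi-sensor tokens
            if self.memory:
                past = torch.cat(self.memory, dim=1)   # (B, T*N, d_model)
                fused = fused + self.temporal_attn(fused, past, past)[0]
            # Detach so gradients do not flow across the frame boundary.
            self.memory = (self.memory + [fused.detach()])[-self.history:]
            return fused

    if __name__ == "__main__":
        model = EarlyTemporalFusion()
        frame = {"camera": torch.randn(2, 100, 256),
                 "radar": torch.randn(2, 80, 128),
                 "lidar": torch.randn(2, 120, 192)}
        print(model(frame).shape)   # torch.Size([2, 300, 256])

In a production stack the positional encodings, memory handling, and streaming interface would be far more involved; the sketch only shows the shape of the early-plus-temporal fusion idea.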

Requirements

  • Experience: 5+ years (or a PhD) designing and implementing ML systems, with demonstrated ownership of research-to-production outcomes.
  • Deep Learning Expertise: Strong background in perception, specifically transformer-based architectures, temporal modeling, and multi-modal learning.
  • Training Mastery: Demonstrated experience training large models from scratch (not just fine-tuning) on large-scale datasets.
  • Engineering Proficiency: Proficient in Python and a major deep learning framework (PyTorch or TensorFlow).
  • Strategic Thinking: Ability to lead architectural discussions, articulate tradeoffs, quantify risks, and set realistic milestones.

Bonus Points:

  • Sensor Knowledge: Experience with multi-sensor fusion (camera, radar, Lidar) and the nuances of real-world sensor noise.
  • Advanced Education: PhD in Machine Learning, Computer Vision, or Robotics.
  • Foundation Models: Experience with multi-modal pretraining, self-supervised learning, and scaling laws/strategies for autonomy.
  • Modern Architectures: Familiarity with "Transfusion-style" paradigms (transformer-based fusion across modalities and time) and BEV-centric perception.
  • Advanced Perception Tasks: Experience with 3D detection, occupancy networks, tracking, and streaming inference (a toy BEV occupancy head is sketched after this list).
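
As a rough illustration of the "BEV-centric" and occupancy-network ideas listed above, here is a minimal, hypothetical PyTorch head that turns fused BEV tokens (for example, the output of a fusion block like the one sketched earlier) into a per-cell occupancy probability map. The class name, grid size, and layer choices are assumptions made for this sketch only.

    # Illustrative sketch only: names, grid size, and layers are hypothetical.
    import torch
    import torch.nn as nn

    class BEVOccupancyHead(nn.Module):
        """Predict a per-cell occupancy probability map from fused BEV tokens."""

        def __init__(self, d_model=256, grid_hw=(50, 50)):
            super().__init__()
            self.grid_hw = grid_hw
            self.head = nn.Sequential(
                nn.Conv2d(d_model, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(64, 1, kernel_size=1),   # one occupancy logit per BEV cell
            )

        def forward(self, tokens):
            # tokens: (B, H*W, d_model) fused BEV tokens laid out on a regular grid
            b, n, d = tokens.shape
            h, w = self.grid_hw
            grid = tokens.transpose(1, 2).reshape(b, d, h, w)
            return torch.sigmoid(self.head(grid)).squeeze(1)   # (B, H, W) in [0, 1]

    if __name__ == "__main__":
        head = BEVOccupancyHead()
        print(head(torch.randn(2, 2500, 256)).shape)   # torch.Size([2, 50, 50])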

Benefits & conditions

  • Competitive salary ranging from €75,000 to €95,000 annually depending on experience, plus equity
  • Hybrid work model: in office 3 days per week (Monday, Tuesday, Thursday), the rest… work from wherever!
  • Modern Workspace: Fully equipped, modern office in the heart of Paris
  • Transportation/Commute: Commuter benefits (e.g., partial reimbursement for public transport or cycling programs, where applicable)
  • Subsidized meal vouchers (tickets restaurant)
  • Wellness pass (formerly Gymlib)

About the company

Zendar is looking for a Senior Machine Learning Research Engineer to join our Paris office. We are currently deploying one of the world's most advanced 360-degree radar-based perception systems, and we are now expanding our capabilities to deliver full-scene perception using early fusion of camera and radar, scaling these technologies across the automotive and robotics industries. This is a unique opportunity to join a team that is not bogged down by legacy code: you will define, own, and build a next-generation perception stack that enables reliable autonomy at scale.

Autonomous vehicles need to understand the world around them not only in bright daylight, but also at night, in fog or rain, or when the sun is shining straight into the sensor. At Zendar, we make this possible by developing the highest-resolution, most information-rich radar in the world. What makes radar powerful - its long wavelength, which makes it robust to all sorts of weather and lighting conditions - also makes it challenging to work with. We have used our deep understanding of radar physics to build radar perception models that bring a rich and complete understanding of the environment around the AV, from free space to object detections to road structure. Check out what our technology can do here (https://www.youtube.com/watch?v=MUxE2T2Qe8g) - all produced with only radar information, no camera and no lidar!

Zendar has a diverse and dynamic team of hardware, machine learning, signal processing and software engineers with a deep background in sensing technology. We have a global team of 60 people distributed across our sites in Berkeley, Lindau (Germany), and Paris. Zendar is backed by Tier-1 VCs, has raised more than $80M in funding, and has established strong partnerships with industry leaders.

Apply for this position