Senior Computer Vision Engineer (Perception & Autonomy)

Twentyfour GmbH
München, Germany
11 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Remote
München, Germany

Tech stack

Computer Vision
C++
Nvidia CUDA
Global Positioning Systems (GPS)
Python
Motion Planning
Object Detection
OpenCV
TensorFlow
Sensor Fusion
PyTorch
Deep Learning
Lidar
Data Generation

Job description

  • Design and deploy lightweight, low-latency deep learning models for 2D and 3D object detection (e.g., YOLO, SSD).
  • Implement robust multi-object tracking (MOT) to maintain awareness of dynamic obstacles across frames (a minimal tracking sketch follows this list).
  • Build semantic scene understanding to distinguish traversable space, vegetation, and hard obstacles.
  • Develop pipelines for depth estimation, structure-from-motion (SfM), and monocular depth cues.
  • Work on occupancy grid mapping for 3D environment representation and downstream path planning.
  • Implement visual odometry and SLAM to ensure precise localization in GPS-denied environments.
  • Address challenging visual conditions such as glare, shadows, motion blur, and low-light scenes.
  • Optimize perception models for embedded edge devices (e.g., NVIDIA Jetson Orin) using TensorRT, CUDA, and INT8 quantization (an export sketch follows this list).
  • Balance inference speed and accuracy to achieve high frame rates with minimal end-to-end latency.
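
To give a concrete sense of the tracking responsibility above, the sketch below shows a minimal tracking-by-detection loop in Python: one constant-velocity Kalman filter per track, with new detections assigned to predicted boxes by IoU using the Hungarian method. It is an illustrative sketch only, not Twentyfour's actual stack; all names are hypothetical, and detections are assumed to be [x1, y1, x2, y2] boxes from an upstream detector (e.g., a YOLO-family model).

```python
# Minimal tracking-by-detection sketch (hypothetical, for illustration only):
# a constant-velocity Kalman filter per track plus IoU-based assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


class Track:
    """Constant-velocity Kalman filter over box centre, size, and velocity."""

    def __init__(self, box, track_id, dt=1.0):
        cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
        # State vector: [cx, cy, w, h, vx, vy]
        self.x = np.array([cx, cy, box[2] - box[0], box[3] - box[1], 0.0, 0.0])
        self.P = np.eye(6) * 10.0            # state covariance
        self.F = np.eye(6)                   # constant-velocity transition
        self.F[0, 4] = self.F[1, 5] = dt
        self.H = np.eye(4, 6)                # we observe [cx, cy, w, h]
        self.Q = np.eye(6) * 0.01            # process noise
        self.R = np.eye(4)                   # measurement noise
        self.id = track_id

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, box):
        cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
        z = np.array([cx, cy, box[2] - box[0], box[3] - box[1]])
        y = z - self.H @ self.x              # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

    def box(self):
        cx, cy, w, h = self.x[:4]
        return np.array([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])


def associate(tracks, detections, iou_threshold=0.3):
    """Match predicted track boxes to a list of detections by maximising IoU."""
    if len(tracks) == 0 or len(detections) == 0:
        return [], list(range(len(detections)))
    cost = np.array([[1.0 - iou(t.box(), d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] < 1.0 - iou_threshold]
    matched = {c for _, c in matches}
    unmatched = [i for i in range(len(detections)) if i not in matched]
    return matches, unmatched
```

A per-frame loop would call predict() on every track, run associate(), update() the matched tracks, and spawn new Track objects for unmatched detections; track ageing, deletion, and appearance cues are omitted for brevity.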
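
For the edge-deployment bullet, one common route (assumed here, not stated in the posting) is to export the trained PyTorch network to ONNX and then build a TensorRT engine with INT8 calibration on the target Jetson, e.g. via trtexec. The sketch below uses a stock torchvision backbone purely as a stand-in; the real detector, input resolution, opset, and file names would differ.

```python
# Hypothetical PyTorch -> ONNX export as a first step toward a TensorRT engine.
# The model and file name are placeholders, not the company's actual network.
import torch
import torchvision

model = torchvision.models.mobilenet_v3_small(weights=None).eval()
dummy = torch.randn(1, 3, 320, 320)           # batch, channels, height, width

torch.onnx.export(
    model,
    dummy,
    "detector_backbone.onnx",
    input_names=["images"],
    output_names=["logits"],
    opset_version=17,
    dynamic_axes={"images": {0: "batch"}},    # allow variable batch size
)
```

The exported file would then typically be compiled with the TensorRT builder or trtexec on the Jetson itself, supplying a representative calibration set for INT8 quantization and profiling the resulting latency.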

Requirements

  • Have deep expertise in computer vision fundamentals, including epipolar and projective geometry, calibration, and photogrammetry (a calibration sketch follows this list).
  • Are highly proficient with PyTorch or TensorFlow, and OpenCV.
  • Write production-grade C++ and use Python effectively for prototyping and experimentation.
  • Have hands-on experience with visual odometry, SLAM, or visual-inertial systems (e.g., ORB-SLAM, VINS-Mono).
  • Bring a strong mathematical foundation in linear algebra, probability, filtering (Kalman / Particle Filters), and optimization.
  • Enjoy translating research ideas into robust, real-world systems on constrained hardware.

Nice to have

  • Experience with sensor fusion across camera, IMU, LiDAR, or radar.
  • Familiarity with ROS2 for perception and autonomy pipelines.
  • Experience working with thermal / IR sensors or event-based cameras.
  • Exposure to synthetic data generation for training computer vision models.
  • Background in robotics, aerospace, defense, or other safety-critical systems.
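
As a small illustration of the geometry and calibration expertise asked for above, the sketch below runs OpenCV's standard chessboard pipeline to recover camera intrinsics and distortion. The image folder, board dimensions, and square size are placeholders.

```python
# Minimal intrinsic calibration sketch with OpenCV (illustrative values only;
# assumes at least one image in the folder contains a detectable chessboard).
import glob
import cv2
import numpy as np

pattern = (9, 6)       # inner corners per row and column of the chessboard
square = 0.025         # square size in metres

# 3D reference points of the board in its own frame (the z = 0 plane)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):          # hypothetical folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3),
        )
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None
)
print(f"RMS reprojection error: {rms:.3f} px")
print("Camera matrix K:\n", K)
```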

Benefits & conditions

  • Competitive salary and equity options.
  • A high-impact role shaping the perception systems of next-generation UAV platforms.
  • Direct influence on core autonomy capabilities, not just incremental features.
  • A collaborative, ambitious, and technically rigorous startup environment.
  • Flexible working conditions, with remote work and relocation support for top candidates.

About the company

Twentyfour Industries is committed to building a fair, inclusive, and high-performance workplace where people from all backgrounds can contribute and thrive. Our team brings together individuals with different perspectives, experiences, and skills to shape the future of European security and technology.
