Working Student - Machine Learning
Job description
Conventional frame-based pipelines and large neural networks are often too slow and power-hungry for always-on, real-time AR. They process every frame exhaustively, move large amounts of data through memory, and quickly hit strict latency, energy, and bandwidth limits on embedded hardware. Event-based sensing and processing, combined with other efficiency-oriented techniques, open up a fundamentally different design space. By exploiting temporal and spatial sparsity, we can:
- Turn always-on perception into something that fits within strict power budgets
- Push more intelligence closer to the sensor, reducing latency and data movement
- Co-design models and systems that are built for edge hardware, rather than shrinking down server-scale architectures
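To make the sparsity idea concrete, here is a toy NumPy sketch (an illustration only, not how any particular event camera or product pipeline works) of the core principle: instead of processing every pixel of every frame, report only the pixels whose brightness changed beyond a threshold.

```python
import numpy as np

def frame_to_events(prev, curr, threshold=15):
    """Emit (y, x, polarity) events where brightness changed by more than
    `threshold`, mimicking how an event sensor reports only changes."""
    diff = curr.astype(np.int32) - prev.astype(np.int32)
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[ys, xs])  # +1 brighter, -1 darker
    return np.stack([ys, xs, polarity], axis=1)

# A mostly static scene: only two of sixteen pixels change between frames.
prev = np.full((4, 4), 100, dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 180   # brightening pixel -> positive event
curr[3, 0] = 20    # darkening pixel  -> negative event

events = frame_to_events(prev, curr)
```

Here 14 of 16 pixels produce no output at all; downstream compute scales with the number of events rather than the frame size, which is exactly the property that makes always-on perception tractable under tight power budgets.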
In this project, you will explore how to combine modern deep learning with event-based and embedded processors to push the limits of what AR glasses can do on-device. You will help answer questions such as:
- How can we architect models that are both accurate and ultra-efficient for real-world AR tasks on event-driven or low-power hardware?
- What are the right trade-offs between accuracy, latency, memory, and energy for different AR scenarios?
- How do we turn promising research ideas into practical, measurable improvements on realistic platforms and workloads?
Your work will directly inform how future AR experiences can run locally, responsively, and efficiently on next-generation devices.
As a thesis student, you will define and drive a focused research direction in efficient on-device ML for AR, with a particular emphasis on event-driven or embedded processors. Possible directions within this space include:
- Design and prototype ML models tailored to AR use cases under embedded constraints (e.g., event-based vision models, lightweight CNNs/Vision Transformers, or hybrid frame+event pipelines).
- Set up datasets and baselines relevant to AR tasks (e.g., detection, tracking, segmentation, gesture/interaction), and define evaluation metrics across accuracy, latency, memory usage, and energy.
- Implement and train models in PyTorch, including data pipelines, training loops, and evaluation scripts that are easy to extend and reproduce.
- Explore efficiency techniques such as sparsity, pruning, quantization (PTQ/QAT), or event-based representations, and study their impact on performance-efficiency trade-offs.
- Profile models under embedded-like conditions using simulators, profiling tools, or edge accelerators to understand system-level behavior (e.g., FLOPs, latency, memory footprint, bandwidth).
- Communicate your findings through ablation studies, a clear thesis report (and optionally a paper-style write-up), and a reproducible codebase with pre-trained checkpoints.
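As a minimal illustration of one efficiency technique listed above, the following NumPy sketch shows symmetric per-tensor int8 post-training quantization of a weight matrix (a toy version of PTQ; real toolchains such as PyTorch's quantization workflows add calibration, per-channel scales, and fused kernels):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: map floats to [-127, 127]
    with a single scale, the simplest form of post-training quantization."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = q.astype(np.float32) * scale   # dequantized approximation
err = np.abs(w - w_hat).max()          # worst-case rounding error
# Weights shrink 4x (int8 vs. float32); the rounding error is bounded
# by half a quantization step, i.e. scale / 2.
```

Studying how such an approximation error propagates to task accuracy, and how the 4x memory reduction translates to latency and energy on real hardware, is precisely the kind of trade-off analysis this project involves.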
Expected Outcomes
By the end of the project, you are expected to:
- Demonstrate proofs of concept on AR hardware (e.g., Spectacles) showcasing real-world impact
- Deliver measurable improvements in runtime performance, efficiency, and adaptability for representative AR tasks
- Provide insights into model-system co-design for low-power, on-device ML
- Contribute to ML frameworks, tooling, or deployment strategies for embedded AR systems
- Produce a high-quality thesis report (and optionally a paper-style write-up) with reproducible code and results
Requirements
- Currently enrolled in a Master's program (e.g., Computer Science, Electrical/Computer Engineering, Artificial Intelligence, Robotics, or a related field).
- Degree program allows a Master's thesis / graduation project in collaboration with an external organization.
- Strong background in:
  - Linear algebra, probability, and optimization
  - Deep learning fundamentals, including backpropagation, regularization, and basic model architectures
- Hands-on experience training deep learning models for computer vision, including:
  - Experience with PyTorch (preferred) or a similar framework
  - Comfort implementing and training CNNs and/or vision transformers
- Proficiency in Python and standard ML tooling (e.g., NumPy, PyTorch, Git, basic experiment management).
- Interest in turning research ideas into robust, reproducible codebases that others can build on.
Preferred qualifications
- Experience with one or more of:
  - Event-based or streaming vision, or other non-conventional sensor modalities
  - Model compression techniques: pruning, sparsity, quantization, or knowledge distillation
  - Efficient architectures for embedded or real-time applications (e.g., lightweight backbones, dynamic computation, conditional execution)
- Familiarity with embedded / on-device ML toolchains (e.g., TensorFlow Lite, ONNX Runtime, or similar frameworks).
- Experience with AI-assisted development and research tools (e.g., experiment tracking, ML tooling, or LLM-based coding and analysis assistants).
- Exposure to performance profiling and basic systems concepts: FLOPs, latency, memory access patterns, and bandwidth.
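As a flavor of the systems concepts above, the FLOP count of a dense convolution follows directly from its shape (a back-of-the-envelope estimate, counting one multiply plus one add per MAC; it ignores stride, padding details, and bias terms):

```python
def conv2d_flops(h_out, w_out, c_in, c_out, k):
    """FLOPs of a dense KxK convolution producing an h_out x w_out map:
    each output element needs c_in * k * k multiply-accumulates."""
    macs = h_out * w_out * c_out * c_in * k * k
    return 2 * macs  # 1 multiply + 1 add per MAC

# A single 3x3 conv, 64 -> 64 channels, on a 56x56 feature map:
flops = conv2d_flops(56, 56, 64, 64, 3)
# ~0.23 GFLOPs for one layer -- a useful first-order number when budgeting
# a model against an embedded accelerator's throughput.
```

Estimates like this are only a starting point; actual latency on embedded hardware is often dominated by memory access patterns and bandwidth rather than raw FLOPs, which is why the project also involves profiling on realistic platforms.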
Practical details
- Project type: Master's Thesis / Graduation Project
- Focus: Efficient on-device ML for AR applications on embedded and/or event-driven processors
- Duration & scope: 8 to 12 months, aligned with university and team requirements
- Location: Eindhoven, the Netherlands, with a minimum of 4 days per week in the office
- Start date: Flexible, to be agreed based on candidate and university timelines