Senior ML/AI Speech Enhancement and Denoising SW Engineer
Job description
We are seeking an experienced Senior ML/AI Audio Denoising Software Engineer to contribute to the development of advanced deep learning models for next-generation audio and speech systems on hearables (earbuds, headphones, head-mounted devices) and wearables. You will participate in the full lifecycle of ML-based solutions, from research and prototyping to optimization, deployment on embedded hardware, and customer support, while collaborating with cross-functional teams in AI, DSP, acoustics, hardware, and systems.
Responsibilities
- Research and develop state-of-the-art ML models for audio applications: speech enhancement, separation, echo cancellation, spatial audio, and classification.
- Help drive the creation and curation of large-scale audio datasets, data augmentation strategies, and evaluation metrics.
- Optimize and update ML pipelines for training, validation, and deployment with scalability and reproducibility in mind.
- Optimize models for edge/embedded platforms (quantization, pruning, distillation, hardware accelerators).
- Collaborate with acoustics and hardware teams to integrate ML algorithms into products, considering latency, power, and robustness.
- Mentor junior engineers, review code, and contribute to a culture of technical excellence.
- Publish and patent innovative methods in ML for audio, speech enhancement, hearing augmentation, and acoustics.
- Stay at the forefront of research in audio ML, evaluating new architectures and techniques.
Requirements
- PhD (or Master's with industry experience) in Machine Learning, Computer Science, Electrical Engineering, or a related field, in an audio-related context.
- 5+ years of experience developing and deploying ML models, with at least 3 years specifically in audio/speech.
- Proven expertise in deep learning architectures (CNNs, RNNs, Transformers, diffusion models, generative approaches) applied to acoustic data.
- Strong proficiency in Python and ML frameworks (PyTorch, TensorFlow), including model training and optimization.
- Hands-on experience with ML deployment on embedded/mobile platforms (C/C++, ONNX, TensorRT, TFLite, CoreML).
- Track record of publications, patents, or productized audio ML solutions.
- Excellent communication skills and the ability to drive cross-functional collaboration.
Preferred Qualifications
- Experience working in small teams or start-ups.
- Good understanding of active noise cancellation (ANC) principles.
- Expertise in large-scale training (multi-GPU, distributed training, cloud pipelines).
- Knowledge of psychoacoustics and perceptual evaluation methods.
- Strong connections with academic or industrial research communities.