Machine Learning Engineer

European Tech Recruit
Municipality of Madrid, Spain
6 days ago

Role details

Contract type
Temporary contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Intermediate

Job location

Municipality of Madrid, Spain

Tech stack

Artificial Intelligence
Amazon Web Services (AWS)
Computer Vision
Software Debugging
Python
Machine Learning
Quantum Computing
PyTorch
Large Language Models
Deep Learning
Information Technology
HuggingFace
Docker

Job description

We are working with a leading Quantum AI start-up on a mission to make AI faster, greener, and more accessible. In this role you will collaborate with world-leading experts in quantum computing and AI, building breakthrough solutions that deliver real-world impact for global clients.

This is your chance to work on cutting-edge projects at the intersection of LLMs, quantum-inspired tech, and applied AI innovation.

This is a 9-month fixed-term contract (with potential extension), based in Madrid with hybrid working flexibility.

What you'll be doing

  • Invent new ways to compress and optimise Large Language Models with quantum-inspired methods.
  • Benchmark, stress-test, and fine-tune LLMs to boost accuracy, efficiency, and robustness.
  • Build and deploy LLM-powered apps - from RAG systems to AI agents.
  • Act as an LLM specialist, spotting opportunities where quantum AI can make the impossible possible.
  • Design and train custom deep learning models, not just for language, but also in computer vision and beyond.
  • Keep experiments transparent with clear documentation, while driving innovation at speed.
  • Mentor teammates, share knowledge, and help grow a culture of technical excellence.
  • Stay ahead of the curve with the latest research, tools, and breakthroughs in AI.

Requirements

  • A degree in AI, Computer Science, or Data Science (BSc, MSc, or PhD).
  • 2+ years of hands-on deep learning experience - designing, training, or fine-tuning transformers or vision models.
  • Strong track record with transformer models (Hugging Face Transformers, Accelerate, Datasets, etc.).
  • Solid grasp of deep learning theory, including training and inference.
  • Excellent Python skills, with PyTorch and Hugging Face expertise.
  • Understanding of GPU architectures and LLM infrastructure.
  • Hands-on experience with AWS (or similar), Docker, and deploying models in production.
  • Problem-solver with sharp debugging, testing, and performance optimisation skills.
  • Bonus points for published research in AI/deep learning.

Keywords: Machine Learning / GPU / LLM / Large Language / Deep Learning / AWS / Orchestration / Docker / Mistral AI / OpenAI / LangChain / TPU / Hugging Face / PyTorch / Transformer Models / Fine-Tuning
