Machine Learning and AI Platform Engineer (LLMs) | Barcelona | Fixed term contract until July 2026 | €50k-75k p/a | Hybrid remote WFH | Quantum Computing | Artificial Intelligence
Requirements
- Strong Python engineering background with experience supporting ML systems
- Experience running production services on cloud platforms (AWS, Azure or similar)
- Familiarity with containers, orchestration and automated deployment pipelines
- Understanding of the ML lifecycle and how models behave once deployed
- Comfort working across engineering, infrastructure and data teams
Nice to have (but not essential)
- Experience with ML pipelines, experiment tracking or model serving
- Exposure to large language models, RAG systems or inference optimisation
- Observability, monitoring or platform reliability experience
- Background in data engineering, backend engineering or platform teams
You don't need to come from a role called "MLOps", though MLOps experience is highly desirable. If you've been enabling ML systems in production, we want to speak with you.
Benefits & conditions
- Hybrid working: 2-3 days a week in the office, remote WFH the rest of the time
- Open to EU citizens only (sponsorship not available)
- Fixed-term contract until 1st July 2026
- Salary c. €50k-60k p/a for a mid-level hire, €65k-75k p/a for a senior
- Signing bonus, retention bonus and relocation fund available
We're working with a fast-growing deep-tech company building large-scale AI systems used by enterprise clients across multiple industries.
They're looking for a Machine Learning Platform Engineer / AI Platform Engineer to help design, deploy and operate production-grade machine learning and AI systems. This is a hands-on engineering role focused on reliability, scalability and automation rather than pure research.
What you'll be working on
- Deploying and operating machine learning and AI models in real production environments
- Building automated pipelines covering training, validation, deployment and monitoring
- Running ML workloads on cloud-native platforms using containers, orchestration and CI/CD
- Improving reliability, performance and observability of ML and AI services
- Collaborating closely with ML engineers, data scientists, backend engineers and infrastructure teams