Senior AI Engineer
SR2
Charing Cross, United Kingdom
2 days ago
Role details
Contract type: Temporary contract
Employment type: Full-time (> 32 hours)
Working hours: Regular working hours
Languages: English
Experience level: Senior
Compensation: £156K
Job location: Remote (Charing Cross, United Kingdom)
Tech stack
Artificial Intelligence
Airflow
Amazon Web Services (AWS)
Azure
Cloud Computing
DevOps
GitHub
Python
Machine Learning
Performance Tuning
Software Version Management
Data Logging
Large Language Models
GitLab CI
Kubernetes
Machine Learning Operations
Docker
Jenkins
Job description
This is a role for someone who enjoys engineering robust ML platforms rather than purely building models - enabling data scientists and ML engineers to deploy, monitor and scale AI workloads efficiently.
You'll take ownership of the AI platform layer, designing and improving the systems that allow machine learning models to move from experimentation to reliable production services. Working closely with ML Engineers, DevOps and Back End teams, you'll ensure AI workloads are scalable, observable and secure.
Responsibilities
- Design and maintain ML infrastructure and model deployment frameworks
- Build and optimise CI/CD pipelines for ML workflows
- Implement monitoring, logging and observability for production models (see the sketch after this list)
- Improve reproducibility, versioning and governance of models and datasets
- Containerise and orchestrate ML services in cloud-native environments
- Support performance tuning and cost optimisation of AI workloads
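To make the monitoring and observability responsibility concrete, here is a minimal sketch of an instrumented model endpoint, assuming a FastAPI-served scikit-learn model with Prometheus metrics; the route, metric names and model path are illustrative assumptions, not anything specified in this posting.

```python
# Minimal sketch: an instrumented inference endpoint of the kind the
# monitoring/observability responsibility implies. The route, metric
# names and model path are illustrative assumptions.
import logging
import time

import joblib
from fastapi import FastAPI
from prometheus_client import Counter, Histogram, make_asgi_app
from pydantic import BaseModel

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("inference")

PREDICTIONS = Counter("predictions_total", "Total prediction requests served")
LATENCY = Histogram("prediction_latency_seconds", "Prediction latency in seconds")

app = FastAPI()
app.mount("/metrics", make_asgi_app())  # Prometheus scrape endpoint

model = joblib.load("model.joblib")  # assumed artifact name


class Features(BaseModel):
    values: list[float]


@app.post("/predict")
def predict(features: Features):
    start = time.perf_counter()
    prediction = model.predict([features.values]).tolist()
    elapsed = time.perf_counter() - start
    LATENCY.observe(elapsed)
    PREDICTIONS.inc()
    log.info("prediction served in %.4fs", elapsed)
    return {"prediction": prediction}
```

Served with something like uvicorn, an endpoint in this shape lets Prometheus scrape /metrics while structured logs flow to whatever aggregator the platform uses.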
Requirements
- Strong Python engineering background
- Proven experience in MLOps or ML platform engineering
- Experience with Docker and Kubernetes in production environments
- Strong knowledge of cloud platforms (AWS, Azure or GCP)
- Experience building CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins, etc.); see the sketch after this list
- Understanding of model life cycle management and deployment strategies
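As one concrete reading of the CI/CD requirement, the sketch below shows the kind of pytest smoke test a GitHub Actions, GitLab CI or Jenkins pipeline might run before promoting a model; the artifact layout, accuracy threshold and feature dimension are assumptions for illustration.

```python
# Minimal sketch: a pytest-style smoke test a CI/CD pipeline might run
# before promoting a model image. The artifact paths, accuracy gate and
# feature dimension are illustrative assumptions.
import json
from pathlib import Path

import joblib
import numpy as np

MODEL_PATH = Path("artifacts/model.joblib")    # assumed artifact layout
METRICS_PATH = Path("artifacts/metrics.json")  # assumed metrics file
MIN_ACCURACY = 0.85                            # assumed promotion gate


def test_model_artifact_loads():
    model = joblib.load(MODEL_PATH)
    assert hasattr(model, "predict")


def test_model_meets_accuracy_gate():
    metrics = json.loads(METRICS_PATH.read_text())
    assert metrics["accuracy"] >= MIN_ACCURACY


def test_model_handles_expected_input_shape():
    model = joblib.load(MODEL_PATH)
    sample = np.zeros((1, 4))                  # assumed feature dimension
    assert model.predict(sample).shape == (1,)
```

A gate like this typically runs after training and before the container image is pushed, so a failing check blocks promotion automatically.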
Desirable
- Experience with MLflow, Kubeflow, Airflow or similar tooling (see the sketch at the end of this list)
- Familiarity with feature stores and data versioning tools
- Knowledge of LLM deployment or large-scale inference systems
- Experience working in high-growth or product-led environments
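As an illustration of the MLflow and versioning items above, here is a minimal sketch of logging and registering a model so that each training run yields a traceable model version; the experiment name, registered model name and local SQLite backend are assumptions for the example.

```python
# Minimal sketch: model versioning with MLflow. The experiment name,
# registered model name and SQLite backend are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# A local SQLite backend is enough to exercise the model registry here;
# in production this would point at a shared tracking server.
mlflow.set_tracking_uri("sqlite:///mlflow.db")
mlflow.set_experiment("demo-experiment")

X, y = load_iris(return_X_y=True)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=50).fit(X, y)
    mlflow.log_param("n_estimators", 50)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering under a name creates a new model version on each run,
    # giving every deployment a traceable lineage back to its training run.
    mlflow.sklearn.log_model(model, "model", registered_model_name="demo-classifier")
```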