Senior Machine Learning Engineer

Sage Group plc
Newcastle upon Tyne, United Kingdom
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Newcastle upon Tyne, United Kingdom

Tech stack

API
Artificial Intelligence
Amazon Web Services (AWS)
Systems Engineering
Code Review
Continuous Integration
Python
Machine Learning
Software Engineering
Large Language Models
FastAPI
Integration Tests
SOLID Principles
Performance Monitoring
Machine Learning Operations
REST
Software Version Control
Docker
Microservices

Job description

We are looking for a Senior ML Engineer to take technical ownership of our machine learning production environment. You will lead the transition of experimental models into production-grade services that are reliable, scalable, and cost-effective. Your mission is to build the "highway" that allows our data science team to deploy models rapidly while ensuring those models are observable and fiscally responsible. You will own the entire ML lifecycle, from automated training pipelines to real-time inference clusters, and serve as a key software engineering contributor to our AI product stack.

  • Lifecycle & Pipeline Architecture: Design and own the automated Continuous Training (CT) and deployment pipelines. Architect reusable, modular infrastructure for model training and serving, ensuring the entire lifecycle is versioned and reproducible.

  • Software Engineering Best Practices: Lead the team in adopting professional engineering standards. This includes owning the strategy for unit/integration testing, peer code reviews, and applying SOLID principles to ML codebases to ensure they remain modular and maintainable.
  • ML Observability: Establish and own the telemetry framework for the AI stack. Implement proactive monitoring for system health and model-specific metrics, such as data drift, concept drift, and prediction accuracy.
  • FinOps & Cost Management: Own the strategy for AI cloud spend. Build monitoring and alerting frameworks to track compute costs (training and inference) and implement optimization strategies like auto-scaling and spot-instance usage.
  • AI Systems Engineering: Act as a lead software engineer to integrate models into the product ecosystem. Develop high-performance, secure APIs and microservices that wrap our ML capabilities for production consumption.
  • Data & Model Governance: Own the versioning strategy for the "Holy Trinity" of ML: code, data, and model artifacts. Ensure clear documentation and audit trails for all production deployments.

Working at Sage means you're supporting millions of small and medium-sized businesses globally with technology to work faster and smarter. We leverage the future of AI, meaning business owners spend less time doing routine tasks, like entering invoices and generating reports, and more time pursuing their ambitions.
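To give a flavour of the ML Observability responsibility above: data drift is commonly quantified by comparing a live feature sample against its training-time baseline. A minimal, stdlib-only sketch using the Population Stability Index (PSI) is shown below; the function name, bin count, and the example samples are illustrative assumptions, not part of Sage's stack.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live sample.

    Rule-of-thumb thresholds: below ~0.1 is usually read as stable,
    above ~0.25 as significant drift. Purely illustrative.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def hist(values):
        counts = [0] * bins
        for v in values:
            # Clamp out-of-range live values into the edge buckets.
            i = min(max(int((v - lo) / width), 0), bins - 1)
            counts[i] += 1
        total = len(values)
        # Smooth empty buckets so the log term stays finite.
        return [max(c / total, 1e-6) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]        # training-time feature sample
shifted  = [0.1 * i + 3.0 for i in range(100)]  # live traffic with a shifted mean
```

A monitoring job would compute this per feature on a schedule and alert when the score crosses a threshold, alongside system-health and prediction-accuracy metrics.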

Requirements

  • Demonstrating strong software engineering fundamentals, including production-quality Python, testing, CI/CD practices, and version control
  • Designing and operating reliable, versioned REST APIs using an API-first approach
  • Building, deploying, and operating backend services in cloud environments, with AWS as the primary platform (experience on other major clouds considered transferable)
  • Using containerisation and modern deployment approaches, including Docker, automated pipelines, and basic observability
  • Working effectively with real-world data and production systems in collaboration with product, data, and platform teams
  • Bringing either hands-on experience delivering machine-learning systems in production or a very strong software-engineering background with clear motivation to grow into ML and MLOps

Desirable skills (strong differentiators):

  • Using AWS SageMaker for training, deploying, and operating machine-learning workloads, or demonstrating equivalent experience on similar cloud ML platforms
  • Exposing machine-learning models via APIs (e.g. FastAPI-based inference services) and operating them reliably at scale
  • Applying MLOps practices, including model and version management, monitoring, and handling model or data drift
  • Implementing advanced service patterns such as asynchronous processing, event-driven architectures, or multi-version services
  • Serving LLM or GenAI-based capabilities in production, including model serving, RAG pipelines, and inference controls
  • Designing reusable, platform-level services and shared ML patterns rather than one-off implementations
  • Managing cloud operational trade-offs, including cost efficiency, latency, scalability, and reliability
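The model/version-management and multi-version-service bullets above can be sketched with a tiny in-memory registry that pins callers to an explicit model version while "latest" tracks the newest deployment. This is a hypothetical stand-in for a real registry (e.g. SageMaker Model Registry or MLflow); the `ModelRegistry` class and the "churn" model are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRegistry:
    """Minimal sketch of version-pinned model serving (assumption, not
    a real product API). Maps model name -> {version -> predict fn}."""
    _versions: dict = field(default_factory=dict)

    def register(self, name, version, predict_fn):
        # Each registered version stays addressable for reproducibility.
        self._versions.setdefault(name, {})[version] = predict_fn

    def predict(self, name, payload, version="latest"):
        versions = self._versions[name]
        if version == "latest":
            version = max(versions)  # highest registered version wins
        return versions[version](payload)

registry = ModelRegistry()
registry.register("churn", 1, lambda features: 0.3)
registry.register("churn", 2, lambda features: 0.7)
```

In production the predict functions would be containerised inference services behind an API, but the pattern is the same: consumers can pin a version for stability or follow "latest", and old versions remain available for rollback and audit.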
