Staff / Principal Machine Learning Engineer, Serving

Inworld AI
Bristol, United Kingdom
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Compensation
£200K

Job location

Bristol, United Kingdom

Tech stack

C++
Nvidia CUDA
Distributed Systems
Python
Machine Learning
System Programming
Rust
Graphics Processing Unit (GPU)
Load Balancing
Large Language Models
Caching
Backend
Kubernetes
Free and Open-Source Software
Machine Learning Operations

Requirements

A year ago, reliably working agentic systems and sub-second multimodal inference at scale barely existed. Nobody has a decade of experience here, so we're not screening against a resume template: we're looking for strong people from varied backgrounds who learn fast, thrive in ambiguity, and can show us what they've built, broken, and understood.

Experience We Find Useful

You don't need all of this. But you need enough to make a case.

  • Inference Optimization. Deep understanding of modern serving frameworks such as vLLM or TensorRT-LLM, and of the techniques behind them.
  • Model Acceleration. Hands-on experience with quantization, distillation, caching strategies, continuous batching, paged attention, and speculative decoding.
  • High-Performance Systems. Proficiency in C++, CUDA, Rust, or highly optimized Python. You know how to profile code and squeeze every ounce of performance out of NVIDIA GPUs.
  • Distributed Systems & Scaling. Experience with Kubernetes, Ray, custom load balancing, multi-GPU/multi-node inference, and reliably handling thousands of concurrent connections.
  • Public work. Non-trivial systems programming projects, open-source contributions to major inference engines, or deep-dive technical write-ups.
  • Full-cycle ownership. You can take a model from the research team, containerize it, optimize its serving, and ensure it runs reliably in production.
  • Background. PhD in CS, Physics, Math, or equivalent practical experience building backend or ML systems.
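To give a flavour of the serving problems this role centres on, here is a toy sketch of continuous batching, one of the techniques listed above. All names are hypothetical and the decode step is a stand-in for a real forward pass; production engines such as vLLM implement this far more elaborately.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Request:
    prompt: str
    max_new_tokens: int
    generated: list = field(default_factory=list)

def fake_decode_step(batch):
    # Stand-in for one model forward pass: emit one token per active request.
    for req in batch:
        req.generated.append(f"tok{len(req.generated)}")

def continuous_batching(requests, max_batch_size=2):
    """Toy scheduler: finished requests leave the batch immediately and
    queued requests join mid-flight, so batch slots never sit idle."""
    queue = deque(requests)
    active, finished = [], []
    while queue or active:
        # Admit waiting requests into any free batch slots.
        while queue and len(active) < max_batch_size:
            active.append(queue.popleft())
        fake_decode_step(active)
        # Retire requests that hit their token budget; keep the rest.
        still_running = []
        for req in active:
            (finished if len(req.generated) >= req.max_new_tokens
             else still_running).append(req)
        active = still_running
    return finished

reqs = [Request("a", 3), Request("b", 1), Request("c", 2)]
done = continuous_batching(reqs)
print([len(r.generated) for r in done])  # → [1, 3, 2]
```

The key contrast with static batching is visible in the loop: request "b" finishes after one step and its slot is immediately handed to "c", rather than the whole batch waiting for the longest request.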

Who Thrives Here

  • You don't need a roadmap to start walking; you're comfortable picking a direction and building the map as you go.
  • You believe engineering isn't finished until it's shipped and stable. You have a bias for impact over purely theoretical optimizations.
  • You don't just ship code; you obsess over the why. You're the first to question an architecture if you think there's a better way to solve the core latency or throughput problem.
  • You aren't satisfied with "the PM said so." You thrive on deep context and want to understand the fundamental logic behind every decision we make.

Benefits & conditions

The base salary range for this full-time position is £140,000 - £200,000. In addition to base pay, total compensation includes equity and benefits. Within the range, individual pay is determined by work location, level, and additional factors, including competencies, experience, and business needs. The base pay range is subject to change and may be modified in the future.

About the company

Inworld is a product-oriented research lab of top AI researchers and engineers, developing best-in-class realtime multimodal models and the only realtime orchestration platform optimized for thousands of queries per second. We've raised more than $125M from Lightspeed, Section 32, Kleiner Perkins, Microsoft's M12 venture fund, Founders Fund, Meta and Stanford, among others. Our technology has powered experiences from companies such as NVIDIA, Microsoft Xbox, Niantic, Logitech Streamlabs, Wishroll, Little Umbrella and Bible Chat. We've also been recognized by CB Insights as one of the 100 most promising AI companies globally and have been named one of LinkedIn's Top 10 Startups in the USA.

Apply for this position