Machine Learning Performance Engineer

G-Research
Charing Cross, United Kingdom
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English

Job location

Charing Cross, United Kingdom

Tech stack

C++
Profiling
Nvidia CUDA
Computer Programming
Data Structures
Linux
Memory Management
Python
Machine Learning
PyTorch
Deep Learning
Kubernetes
Information Technology

Job description

We tackle the most complex problems in quantitative finance by bringing scientific clarity to financial complexity.

From our London HQ, we unite world-class researchers and engineers in an environment that values deep exploration and methodical execution - because the best ideas take time to evolve. Together we're building a world-class platform to amplify our teams' most powerful ideas.

As part of our engineering team, you'll shape the platforms and tools that drive high-impact research - designing systems that scale, accelerate discovery and support innovation across the firm.

Take the next step in your career.

The role

We are seeking an exceptional ML Performance Engineer to optimise large-scale workloads across our GPU and CPU infrastructure.

This is a hands-on, impactful role. You will design and implement techniques that improve the performance and capabilities of research workloads on cutting-edge compute infrastructure, ensuring our researchers and engineers can make the best use of current and future systems.

You will work directly with internal research teams and infrastructure engineers to profile and analyse workloads, eliminate bottlenecks and develop reference solutions.

Your work will influence long-term platform evolution and help shape the architecture, software stack and tooling that underpin large-scale machine learning computation.

Responsibilities include:

  • Collaborating with researchers, senior stakeholders and engineers to understand their compute challenges and design optimised solutions.
  • Profiling, benchmarking and tuning large-scale training and inference workloads across distributed CPU, GPU and memory-intensive jobs.
  • Developing reference implementations, libraries and tools to improve job efficiency and reliability.
  • Collaborating closely with systems, architecture and platform teams to evolve our compute stack.
  • Influencing long-term platform and infrastructure decisions.

Requirements

  • Bachelor's, Master's or PhD degree in computer science, or equivalent experience.
  • Proven track record of profiling, benchmarking and optimising distributed workloads.
  • Strong knowledge of Python, C++, and CUDA.
  • Strong understanding of one or more deep learning frameworks, such as PyTorch.
  • Strong background in data structures, algorithms, and parallel programming on heterogeneous systems.
  • Deep understanding of Linux OS fundamentals, such as scheduling, memory management, NUMA, networking, and filesystems.
  • Experience with HPC schedulers and Kubernetes-based workload orchestration.
  • Familiarity with profiling and monitoring tools, such as nsys, ncu, eBPF-based tools, and performance counters.
  • Strong communication skills with the ability to collaborate across research, infrastructure and engineering teams.

Benefits & conditions

  • Highly competitive compensation plus annual discretionary bonus
  • Lunch provided (via Just Eat for Business) and dedicated barista bar
  • 35 days' annual leave
  • 9% company pension contributions
  • Informal dress code and excellent work/life balance
  • Comprehensive healthcare and life assurance
  • Cycle-to-work scheme
  • Monthly company events
