Staff AI Infrastructure Engineer

BIOHUB LLC
Redwood City, United States of America
9 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
$331K

Job location

Redwood City, United States of America

Tech stack

Artificial Intelligence
Computing Platforms
Build Automation
Bash
C++
Computer Clusters
Software Debugging
Linux
Distributed Systems
InfiniBand
Python
Language Modeling
Node.js
Open Source Technology
Posix
Remote Direct Memory Access
Prometheus
TCP/IP
Weka
AI Infrastructure
Graphics Processing Unit (GPU)
Cloud Platform System
High Performance Computing
PyTorch
Grafana
Kubernetes
Slurm
Machine Learning Operations
Dynatrace

Job description

The AI Cluster Production Engineering team is part of the AI Compute Platform organization at Biohub, a non-profit research lab committed to open science and open-source AI. We own the design, operation, and reliability of large-scale multi-GPU clusters that power frontier AI biology research: protein language models, genomic foundation models, and scientific reasoning systems built to be shared, not monetized. Our clusters run Slurm on Kubernetes infrastructure and support everything from day-to-day researcher workflows to multi-node hero training runs at thousands of GPUs. The team works at the intersection of AI tooling, distributed systems, HPC, and frontier AI, debugging deep infrastructure problems and building systems critical to the entire organization.

Responsibilities

  • Own reliability, observability, and incident response for multi-site GPU clusters running Slurm on Kubernetes. Build the systems, automation, and processes that keep clusters healthy and enable fast, efficient recovery when things break.

  • Debug and resolve deep infrastructure failures across storage, networking, scheduling, and GPU compute layers. Build the tooling and operational patterns that make these failures easier to detect, diagnose, and prevent.
  • Design and execute GPU cluster scaling plans, systematically validating storage, networking, interconnect, and scheduler behavior as clusters grow to support larger training runs.
  • Build automation and tooling to manage cluster operations at scale: capacity planning, GPU utilization monitoring, workload manager policy management, and pod lifecycle automation.
  • Drive configuration-as-code practices, ensuring cluster state is reproducible and auditable, and managed through version-controlled pipelines.
  • Collaborate directly with AI researchers and hero run leads to understand training workload patterns and design infrastructure that meets frontier-scale requirements.
  • Own the vendor relationship on technical issues - escalating SEV1s, coordinating across multiple partners and network backbone teams, and holding them accountable for root- and proximate-cause analysis and SLAs.
  • Contribute to capacity planning: projecting GPU demand, managing cluster expansion across GPU generations, and coordinating multi-cluster strategy.
  • Improve operational resilience, reducing mean time to detect and resolve incidents, reducing toil through automation, and developing runbooks that scale the team's operational knowledge beyond any individual.
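As a flavor of the automation work above, GPU utilization monitoring typically pulls per-GPU gauges from NVIDIA's dcgm-exporter via Prometheus. Here is a minimal, hedged sketch: `DCGM_FI_DEV_GPU_UTIL` is the standard dcgm-exporter utilization gauge, but the Prometheus URL, the 5% idle threshold, and the sample data are illustrative assumptions, not details from this posting.

```python
import json
import urllib.parse
import urllib.request

# Standard per-GPU utilization gauge (0-100) exported by NVIDIA dcgm-exporter.
DCGM_GPU_UTIL = "DCGM_FI_DEV_GPU_UTIL"

def idle_gpus(samples, threshold=5.0):
    """Given {gpu_id: utilization_percent}, return ids below the threshold.

    These are candidates for reclaiming or repacking in capacity planning.
    """
    return sorted(g for g, util in samples.items() if util < threshold)

def query_prometheus(base_url, promql):
    """Run an instant PromQL query and return the result vector.

    base_url is an assumption (e.g. http://prometheus:9090); adjust per cluster.
    """
    url = f"{base_url}/api/v1/query?query={urllib.parse.quote(promql)}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["data"]["result"]

if __name__ == "__main__":
    # Hypothetical snapshot: GPU 2 is nearly idle.
    snapshot = {0: 97.0, 1: 88.5, 2: 1.0, 3: 54.0}
    print(idle_gpus(snapshot))  # prints [2]
```

In practice a check like this would feed an alerting rule or a nightly utilization report rather than a one-off script.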

Requirements

  • 8+ years of AI/ML infrastructure engineering experience, with deep expertise in at least one of: HPC/Slurm cluster operations, Kubernetes at scale, distributed systems debugging, or GPU compute infrastructure.
  • Strong Linux systems fundamentals - networking (TCP/IP, InfiniBand, RDMA, MTU/MSS/PMTUD), storage (NFS, VAST, WEKA, POSIX semantics), kernel internals (cgroups, namespaces, eBPF, sysctls).
  • Hands-on experience with Kubernetes and cloud-native infrastructure - pod lifecycle, CNI plugins (Cilium preferred), StatefulSets, Helm, ArgoCD, or equivalent GitOps tooling.
  • Experience with HPC workload managers - Slurm strongly preferred (QoS, partitions, preemption, accounting, Sunk/CoreWeave patterns a plus).
  • Debugging instinct: ability to form hypotheses quickly, design controlled experiments, and root cause complex multi-system failures under pressure. You enjoy finding the hard bugs.
  • Proficiency in Python and Bash for automation and tooling. Go, Rust, or C/C++ a plus.
  • Experience with observability stacks - Prometheus/VictoriaMetrics, Grafana, DCGM metrics, distributed tracing. You know how to instrument systems you don't control.
  • Excellent communication - you can write a crisp incident summary for researchers, a technical escalation to a vendor CTO, and a system design doc for teammates, all in the same day.
  • Bonus: experience with distributed AI training infrastructure (NCCL, PyTorch DDP, multi-node job debugging, checkpoint/restart patterns, container environments for large-scale training).
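The checkpoint/restart pattern mentioned in the bonus bullet is often wired up through Slurm's requeue and signal mechanisms. The sketch below is illustrative only: `--requeue`, `--signal`, and `scontrol requeue` are real Slurm features, but the job parameters, paths, and the `train.py` trainer (assumed to resume from the latest checkpoint in its checkpoint directory) are hypothetical.

```shell
#!/bin/bash
#SBATCH --job-name=llm-pretrain
#SBATCH --nodes=64
#SBATCH --gpus-per-node=8
#SBATCH --requeue            # allow Slurm to requeue the job after node failure
#SBATCH --signal=B:USR1@120  # signal the batch script 120s before timeout/preemption

# Illustrative shared-filesystem path -- adjust for your cluster.
CKPT_DIR=/shared/checkpoints/llm-pretrain

# On the early-warning signal, requeue this job so the next attempt
# restarts from the most recent checkpoint instead of from scratch.
trap 'scontrol requeue "$SLURM_JOB_ID"' USR1

# train.py (hypothetical) writes periodic checkpoints to CKPT_DIR and
# resumes from the latest one if it exists, so a requeued or restarted
# job picks up where the failed attempt left off.
srun torchrun --nnodes="$SLURM_NNODES" --nproc-per-node=8 \
    train.py --resume-from "$CKPT_DIR"
```

The key design choice is making restart the default path: every launch resumes if a checkpoint exists, so node failures and preemptions cost at most one checkpoint interval of work.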

Benefits & conditions

The Redwood City, CA base pay range for a new hire in this role is $241,000 - $331,000. New hires are typically brought in at the lower portion of the range, leaving room to grow within it over time. Actual placement in the range is based on job-related skills and experience, as evaluated throughout the interview process.

We're thankful to have an incredible team behind our work. To honor their commitment, we offer a wide range of benefits to support the people who make all we do possible.

  • A generous employer match on employee 401(k) contributions to support planning for the future.
  • Paid time off to volunteer at an organization of your choice.
  • Funding for select family-forming benefits.
  • Relocation support for employees who need assistance moving.

About the company

Biohub is the first large-scale initiative bringing frontier AI models, massive compute, and frontier experimental capabilities under one roof. We're building a general-purpose system to accelerate scientific discovery, integrating those models with biological foundation models and lab capabilities, with the ultimate goal of curing disease. Our technology powers scientists around the world, translating AI capabilities into tools that accelerate research everywhere.

Apply for this position