Software Engineer, Compute Infrastructure

OpenAI Inc.
New York, United States of America
Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Compensation
$230K - $405K

Job location

New York, United States of America

Tech stack

API
Artificial Intelligence
Computing Platforms
Compute as a Service (CaaS)
Data Centers
ETL
Software Debugging
Programming Tools
Microprocessors
Distributed Systems
Network Interface Controllers
Firmware
Network Protocols
Performance Tuning
Remote Direct Memory Access
Reliability Engineering
Software Engineering
System Software
Graphics Processing Unit (GPU)
Computer Networking Systems
High Performance Computing
Reliability of Systems
Kubernetes
Hardware Infrastructure

Job description

We are looking for engineers who want to build the compute platform behind OpenAI's research and products. You may be strongest in low-level systems, high-performance computing, distributed infrastructure, reliability, CaaS, agent infrastructure, developer platforms, tooling, or the user experience around infrastructure. What matters is that you can reason carefully about complex systems, write durable software, and raise the quality and velocity of the people around you.

Depending on your background and interests, you might work close to hardware, close to users, on CaaS and agent infrastructure, or on the control planes and data planes in between. You could help bring new supercomputing capacity online, optimize training workloads from profiler traces and benchmarks, improve NCCL and collective communication behavior, reason about GPUs, NICs, topology, firmware, thermals, and failure modes, or design abstractions that make heterogeneous clusters feel like one coherent platform.

We do not expect every candidate to have worked at every layer. Some engineers will go deep on systems performance, kernel or runtime behavior, large-scale networking protocols, RDMA, NCCL, GPU hardware behavior, benchmarking, scheduling, or hardware reliability; others will make the platform more usable through APIs, tools, workflows, and developer experience. The common thread is strong engineering judgment and excitement about making enormous compute systems faster, more reliable, and easier to use.

This is a general opening for Compute Infrastructure. We will consider candidates for teams across Compute Infrastructure and match you based on your strengths, the problems that motivate you, and where the infrastructure needs are highest.

Where you might work

  • Compute Foundations: Build the low-level platform primitives that make heterogeneous hardware, providers, and data centers repeatable, automatable, and operable at scale.
  • Fleet / Orchestration: Turn raw capacity into reliable, efficient clusters and scheduling systems that researchers and product teams can use with minimal friction.
  • Core Network Engineering: Build and operate the high-performance networking fabrics, protocols, and observability needed for the largest training and serving workloads.
  • Hardware Health and Observability: Detect, diagnose, remediate, and prevent hardware and fleet-health issues so the supply of usable compute stays high across providers and accelerator generations.
  • Storage: Build scalable, performant, durable storage abstractions that keep data movement and storage access from becoming a bottleneck to research or products.
  • Agent Infrastructure: Build sandboxed execution infrastructure for agentic workloads across research and production, with strong isolation, reliability, and scale.

In this role, you will:

  • Build and deeply optimize reliable system software for large-scale compute systems that run some of the world's most demanding AI workloads
  • Design and operate infrastructure across accelerators, CPUs, NICs, switches, networking protocols, storage, data centers, cluster orchestration, scheduling, and fleet health
  • Profile, benchmark, and optimize training workloads across compute, memory, storage, networking, NCCL and collective communication, and cluster scheduling bottlenecks
  • Create hardware-aware automation that makes provisioning, firmware and driver upgrades, incident response, and day-to-day operations faster and less error-prone
  • Build CaaS, agent infrastructure, profiling, observability, benchmarking, and platform tools that help researchers, product engineers, and operators launch, debug, and optimize workloads with less friction
  • Turn operational lessons into better systems, stronger abstractions, and clearer ownership boundaries across teams
  • Collaborate across research, engineering, security, networking, hardware, and data center teams to make compute capacity more capable and easier to use

You might thrive in this role if you:

  • Have built or operated distributed systems, infrastructure platforms, high-performance computing environments, large-scale networking systems, Kubernetes clusters, developer tools, or production systems with demanding reliability requirements
  • Enjoy working across layers of the stack and are comfortable moving between software, hardware, networking, systems performance, reliability, and user needs
  • Care about making complex infrastructure understandable, observable, and usable for the people depending on it
  • Can diagnose hard problems under real operational pressure while still investing in long-term engineering quality
  • Like building leverage for others, whether through APIs, automation, debugging tools, CaaS and agent infrastructure primitives, workflow improvements, or better platform abstractions
  • Are motivated by scale, efficiency, reliability, and disciplined measurement through benchmarks, profiles, and production evidence
  • Communicate clearly, take ownership, and work well with teams whose constraints and goals differ from your own

Requirements

  • Strong software engineering skills and experience building, operating, or improving production infrastructure systems
  • Experience in one or more relevant areas such as distributed systems, operating systems, networking protocols, RDMA, NCCL or collective communication, storage, Kubernetes, scheduling, observability, reliability engineering, high-performance computing, GPU infrastructure, CaaS, agent infrastructure, hardware-aware performance optimization, benchmarking, developer experience, or infrastructure tooling
  • Ability to debug complex system behavior across software, hardware, networking, and workload layers, then turn findings into robust improvements
  • Comfort with ambiguity, strong ownership, and a bias toward practical, durable solutions
  • Interest in working on infrastructure that directly enables frontier AI research and product impact

Benefits & conditions

Compensation
$230K - $405K
Benefits
Medical insurance, dental insurance, vision insurance, parental leave, paid time off, paid holidays, 401(k) retirement plan
Location
New York, New York, United States
Apr 28, 2026

About the Team:

Compute Infrastructure builds the platform that turns enormous amounts of compute into a reliable engine for frontier AI. We design, provision, schedule, operate, and optimize the systems that connect accelerators, CPUs, networks, storage, data centers, orchestration software, agent infrastructure, developer tools, and observability into one coherent experience for researchers and product teams.

Our work spans the entire stack: capacity planning and cluster lifecycle, bare-metal automation, distributed systems, Kubernetes and scheduling, deep system optimization, high-performance networking, storage, fleet health, reliability, workload profiling, benchmarking, and the developer experience that lets teams use enormous compute systems with confidence. At this scale, small improvements to communication, scheduling, hardware efficiency, or debugging workflows can compound into meaningful research velocity. We are hiring across Compute Infrastructure rather than for a single narrow team, and we use this opening to match strong engineers to the problems where they can have the most leverage.

About the company

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of what AI systems can do and seek to deploy them safely to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core; to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
