Fellow GPU Performance Optimization Engineer
Job description
We are seeking a Fellow GPU Performance Optimization Engineer to join our Models and Applications team. This role focuses on maximizing the performance and efficiency of large-scale AI training workloads on AMD GPU platforms. You will drive innovations across the full software-hardware stack, optimizing distributed training at scale and pushing the limits of system throughput, scalability, and utilization for generative AI workloads.
- Lead performance optimization of large-scale AI training workloads on AMD GPU platforms across single-node and multi-node environments.
- Identify and eliminate system bottlenecks across compute, memory, and communication (e.g., kernel efficiency, memory bandwidth, network utilization).
- Optimize distributed training strategies (data, tensor, and pipeline parallelism, ZeRO, etc.) for scalability and efficiency on AMD hardware.
- Drive cross-stack optimizations spanning kernels, compilers, runtimes, communication libraries, and ML frameworks.
- Develop and apply advanced profiling, benchmarking, and performance modeling methodologies.
- Collaborate with hardware, compiler, and framework teams to influence next-generation GPU architecture and software stack design.
- Contribute to and lead open-source efforts to improve ecosystem performance on AMD platforms.
- Define best practices and guide teams on performance tuning for large-scale training workloads.
- Stay at the forefront of advancements in large-scale training systems and performance optimization techniques.

AMD may use Artificial Intelligence to help screen, assess, or select applicants for this position. AMD's "Responsible AI Policy" is available here.
Requirements
This position requires deep expertise in GPU performance analysis, distributed systems, and ML workloads, along with the ability to influence architecture, software ecosystems, and best practices across the organization.
THE PERSON:
The ideal candidate is a recognized technical leader with deep expertise in GPU performance optimization, large-scale distributed training, and system-level bottleneck analysis. You have a strong understanding of GPU architecture, interconnects, memory hierarchies, and communication patterns, and can translate this knowledge into measurable improvements in training efficiency at scale.
You are comfortable operating across layers, from kernels and runtimes to frameworks and distributed strategies, and have a track record of driving impactful optimizations and influencing technical direction.
- Deep expertise in GPU architecture and performance characteristics (compute units, memory hierarchy, and interconnects such as PCIe, Infinity Fabric, and RDMA).
- Strong experience with performance profiling tools (e.g., ROCm tools, Nsight-like systems, custom profilers) and bottleneck analysis.
- Proven experience optimizing large-scale distributed training workloads across thousands of GPUs.
- Experience with distributed training frameworks such as Megatron-LM, torchtitan, MaxText, or equivalent.
- Strong understanding of communication libraries and patterns (e.g., NCCL/RCCL, collective operations, and overlap of compute and communication).
- Expertise in ML frameworks (PyTorch, JAX, TensorFlow) with a focus on performance tuning.
- Proficiency in Python and at least one systems language (C++, CUDA, or HIP), including debugging and low-level optimization.
- Experience with compiler stacks, kernel optimization, or graph-level optimization is a strong plus.
- Demonstrated technical leadership and the ability to influence cross-functional teams.
ACADEMIC CREDENTIALS:
- Ph.D. in Computer Science, Computer Engineering, or a related field preferred, or equivalent industry experience with significant technical impact.
Benefits & conditions
$252,000.00 to $378,000.00 per year