Software Development Engineer AI/ML, Inference Serving, AWS Neuron

Amazon.com, Inc.
Cupertino, United States of America
1 month ago

Role details

Contract type
Permanent position
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
$193,300 – $261,500

Job location

Cupertino, United States of America

Tech stack

Java
Artificial Intelligence
Amazon Web Services (AWS)
C Sharp (Programming Language)
C++
Code Review
Computer Programming
Software Debugging
Software Design Patterns
Machine Learning
Object-Oriented Software Development
Open Source Technology
Performance Tuning
Software Prototyping
Software Engineering
PyTorch
Large Language Models
Reliability of Systems
Information Technology
Software Coding
Software Version Control

Job description

AWS Neuron is the software stack powering AWS Inferentia and Trainium machine learning accelerators, designed to deliver high-performance, low-cost inference at scale. The Neuron Serving team develops infrastructure to serve modern machine learning models, including large language models (LLMs) and multimodal workloads, reliably and efficiently on AWS silicon. We are seeking a Software Development Engineer to lead and architect our next-generation model serving infrastructure, with a particular focus on large-scale generative AI applications.

  • Architect and lead the design of distributed ML serving systems optimized for generative AI workloads
  • Drive technical excellence in performance optimization and system reliability across the Neuron ecosystem
  • Design and implement scalable solutions for both offline and online inference workloads
  • Lead integration efforts with frameworks such as vLLM, SGLang, Torch XLA, TensorRT, and Triton
  • Develop and optimize system components for tensor/data parallelism and disaggregated serving
  • Implement and optimize custom PyTorch operators and NKI kernels
  • Mentor team members and provide technical leadership across multiple work streams
  • Drive architectural decisions that impact the entire Neuron serving stack
  • Collaborate with customers, product owners, and engineering teams to define technical strategy
  • Author technical documentation, design proposals, and architectural guidelines
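The tensor-parallelism work named in the responsibilities above can be illustrated with a minimal, framework-free sketch. This is plain Python, not Neuron SDK or vLLM code, and all names are invented for the example: a column-parallel split of a linear layer, where each simulated device computes a slice of the output and the slices are concatenated (an all-gather) to reproduce the unsharded result.

```python
# Illustrative sketch only (not Neuron SDK code): column-parallel sharding of a
# linear layer, one of the tensor-parallelism patterns used in LLM serving.
# Each "device" holds a column slice of the weight matrix; per-device outputs
# are concatenated (all-gathered) to match the single-device result.

def matmul(x, w):
    """Multiply a vector x (length K) by a weight matrix w (K x N)."""
    n = len(w[0])
    return [sum(x[k] * w[k][j] for k in range(len(x))) for j in range(n)]

def shard_columns(w, num_devices):
    """Split w (K x N) into num_devices column slices (N divisible by shards)."""
    n = len(w[0])
    step = n // num_devices
    return [[row[d * step:(d + 1) * step] for row in w] for d in range(num_devices)]

def tensor_parallel_matmul(x, w, num_devices):
    """Run each shard 'on its own device', then concatenate the outputs."""
    outputs = [matmul(x, shard) for shard in shard_columns(w, num_devices)]
    return [v for out in outputs for v in out]

x = [1.0, 2.0]
w = [[1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0]]
# Sharded execution agrees with the unsharded baseline.
assert tensor_parallel_matmul(x, w, 2) == matmul(x, w)
```

In practice the per-device weights never leave their accelerator and the concatenation is a collective communication operation; the sketch only shows the numerical invariant that sharded and unsharded execution must agree.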

A day in the life

You'll lead critical technical initiatives while mentoring team members. You'll collaborate with cross-functional teams of applied scientists, systems engineers, and product managers to architect and deliver state-of-the-art inference capabilities. Your day might involve:

  • Leading design reviews and architectural discussions
  • Rapidly prototyping software to show customer value
  • Debugging complex performance issues across the stack
  • Mentoring junior engineers on system design and optimization
  • Collaborating with research teams on new ML serving capabilities
  • Driving technical decisions that shape the future of Neuron's inference stack

About the team

The Neuron Serving team is at the forefront of scalable and resilient AI infrastructure at AWS. We focus on developing model-agnostic inference innovations, including disaggregated serving, distributed KV cache management, CPU offloading, and container-native solutions. Our team is dedicated to upstreaming Neuron SDK contributions to the open-source community, enhancing performance and scalability for AI workloads. We're committed to pushing the boundaries of what's possible in large-scale ML serving.
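As a rough illustration of the cache-management ideas mentioned above (KV cache management with CPU offloading), here is a hypothetical, stdlib-only sketch; the class and method names are invented for the example and are not Neuron or vLLM APIs.

```python
# Hypothetical sketch, not the team's implementation: a tiny per-sequence KV
# cache with capacity-bounded "device" storage and CPU offload on eviction.
from collections import OrderedDict

class KVCache:
    def __init__(self, device_capacity):
        self.device = OrderedDict()   # hot entries in LRU order ("on device")
        self.cpu = {}                 # offloaded (evicted) entries ("on host")
        self.capacity = device_capacity

    def put(self, seq_id, kv_block):
        self.device[seq_id] = kv_block
        self.device.move_to_end(seq_id)
        while len(self.device) > self.capacity:
            victim, block = self.device.popitem(last=False)  # evict LRU entry
            self.cpu[victim] = block                         # offload to CPU

    def get(self, seq_id):
        if seq_id in self.device:
            self.device.move_to_end(seq_id)
            return self.device[seq_id]
        if seq_id in self.cpu:        # device miss: reload from host memory
            self.put(seq_id, self.cpu.pop(seq_id))
            return self.device[seq_id]
        return None

cache = KVCache(device_capacity=2)
cache.put("seq-a", [0.1])
cache.put("seq-b", [0.2])
cache.put("seq-c", [0.3])
assert "seq-a" in cache.cpu         # least-recently-used entry was offloaded
assert cache.get("seq-a") == [0.1]  # and is transparently reloaded on access
```

Real serving stacks manage fixed-size paged KV blocks and move them between accelerator HBM and host memory; the sketch only shows the eviction/reload bookkeeping in miniature.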

Recent shares:

  • https://github.com/aws-neuron/upstreaming-to-vllm/releases/tag/2.25.0
  • https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/nxd-inference/developer_guides/disaggregated-inference.html

Requirements

  • 5+ years of programming in a modern language such as Java, C++, or C#, including object-oriented design experience
  • 5+ years of experience leading the design or architecture (design patterns, reliability, scaling) of new and existing systems
  • 5+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
  • 5+ years of non-internship professional software development experience
  • Experience as a mentor, tech lead, or leader of an engineering team

Preferred Qualifications

  • Master's degree in computer science or equivalent
  • Deep expertise in ML frameworks and libraries such as JAX, PyTorch, vLLM, SGLang, Dynamo, Torch XLA, and TensorRT

Benefits & conditions

The base salary range for this position is listed below. Your Amazon package will include sign-on payments and restricted stock units (RSUs). Final compensation will be determined based on factors including experience, qualifications, and location. Amazon also offers comprehensive benefits including health insurance (medical, dental, vision, prescription, Basic Life & AD&D insurance and option for Supplemental life plans, EAP, Mental Health Support, Medical Advice Line, Flexible Spending Accounts, Adoption and Surrogacy Reimbursement coverage), 401(k) matching, paid time off, and parental leave. Learn more about our benefits at https://amazon.jobs/en/benefits.

Cupertino, CA, USA: $193,300.00 – $261,500.00 USD annually

Apply for this position