Infrastructure Engineer (Storage)

Lightning AI
New York, United States of America

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
$200K

Job location

Remote
New York, United States of America

Tech stack

Board Bringup
Artificial Intelligence
Amazon Web Services (AWS)
Systems Engineering
Data Centers
Data Transmissions
Data Security
Software Debugging
Linux
Distributed Data Store
Distributed Systems
Storage Area Network (SAN)
Python
Linux System Administration
Machine Learning
Performance Tuning
Remote Direct Memory Access
Ceph
High Performance Computing
Low Latency
Bare Metal

Job description

  • Focus: We complete one goal at a time with care, collaborating as a team to deliver features with precision.
  • Balance: Sustained performance comes from rest and recovery. We ensure a healthy work-life balance to keep you at your best.
  • Craftsmanship: Innovation through excellence. Every detail matters, and we take pride in mastering our craft.
  • Minimal: Simplicity drives our innovation. We eliminate complexity through discipline and focus on what truly matters.

In this role, you will focus on building and operating the storage systems that power large-scale AI/ML training, inference, and HPC workloads. You will work at the intersection of software, hardware, and operations, developing automation, improving reliability, and scaling distributed storage systems across our bare-metal infrastructure.

You will help own the data plane of our storage infrastructure, supporting high-throughput, low-latency data access for some of the most demanding AI workloads. You'll play a key role in managing and evolving our storage stack (including VAST and S3-compatible systems like Ceph), ensuring performance, reliability, and efficiency at scale.

Responsibilities

  • Operate and scale distributed storage systems, including VAST and S3-compatible object storage (e.g., Ceph)

  • Improve performance, reliability, and efficiency of storage systems supporting large-scale AI/ML workloads
  • Troubleshoot complex storage and data path issues across hardware and software layers
  • Optimize storage performance to support high-throughput, low-latency AI training and inference workloads
  • Build and maintain automation for provisioning, managing, and monitoring storage infrastructure
  • Develop Python-based tools and workflows to reduce manual operational overhead
  • Improve lifecycle management of storage clusters, from deployment through maintenance and scaling
  • Manage and operate Linux-based systems in production, including bare-metal environments
  • Partner with infrastructure and data center teams on hardware bring-up, upgrades, and issue resolution
  • Support capacity planning, utilization tracking, and forecasting for storage systems
  • Leverage monitoring and telemetry to diagnose issues and improve system performance and reliability
  • Work closely with Infrastructure Engineering, Network Engineering, and Platform teams to integrate storage into the broader platform
  • Contribute to design discussions around new infrastructure deployments and scaling strategies
  • Help define best practices for operating storage systems in high-performance computing environments

Requirements

  • 5+ years of experience in infrastructure engineering, systems engineering, or related roles
  • Hands-on experience operating distributed storage systems (e.g., VAST, Ceph, or similar)
  • Strong Linux systems experience in production environments
  • Proficiency in Python or similar scripting/programming languages for automation
  • Experience working with bare-metal infrastructure and hardware-oriented systems
  • Ability to debug complex issues across system boundaries (storage, OS, hardware, networking)
  • Experience with storage networking protocols (e.g., NFS or similar)
  • Experience with capacity planning, monitoring, and performance tuning

Nice to have

  • Experience with VAST storage systems in production environments
  • Experience operating S3-compatible object storage at scale
  • Data center operations experience, including working with physical hardware
  • Familiarity with AI/ML or HPC workloads and their storage requirements
  • Background in high-performance or low-latency distributed systems
  • Familiarity with high-performance data transfer technologies (e.g., RDMA, GPU Direct Storage)
  • Experience supporting GPU-based workloads or large-scale compute clusters

Benefits & conditions

We offer a comprehensive and competitive benefits package designed to support our employees' health, well-being, and long-term success. Benefits may vary by location, team, and role.

  • Comprehensive medical, dental and vision coverage (U.S.); Private medical and dental insurance (U.K.)

  • Retirement and financial wellness support (U.S.); Pension contribution (U.K.)
  • Generous paid time off, plus holidays
  • Paid parental leave
  • Professional development support
  • Wellness and work-from-home stipends
  • Flexible work environment

About the company

Lightning AI is the company behind PyTorch Lightning. Founded in 2019, we build an end-to-end platform for developing, training, and deploying AI systems, designed to take ideas from research to production with less friction. Through our merger with Voltage Park, a neocloud and AI Factory, Lightning AI combines developer-first software with cost-efficient, large-scale compute. Teams get the tools they need for experimentation, training, and production inference, with security, observability, and control built in. We serve solo researchers, startups, and large enterprises. Lightning AI operates globally with offices in New York City, San Francisco, Seattle, and London, and is backed by Coatue, Index Ventures, Bain Capital Ventures, and Firstminute.
