Principal Architect, Memory-Centric Computing - AI Infrastructure
Job description
This role exists to bring rigor to that question. You will build workload-grounded models that evaluate the full solution space, quantify where each approach wins and why, and translate those findings into architecture decisions that directly shape product strategy and investment. You will work closely with architects across compute, networking, storage, and software, and present directly to senior technical leadership. This is a principal individual contributor role: you personally build the models, own the conclusions, and drive the decisions.
Location: Daily onsite presence at our San Jose, CA office / U.S. headquarters, in alignment with our Flexible Work policy.
What You'll Do
Architecture Strategy & Trade Studies
- Define and evaluate the memory solution space - GPU-side shared memory, DRAM and Flash capacity tiers, pooled/disaggregated memory, and fabric-attached approaches - with quantified value propositions across performance, power, cost/TCO, density, and operability
- Identify break-even conditions and decision criteria across solution approaches; produce architecture briefs and sensitivity analyses ready for executive audiences
Workload-Driven Analysis
- Ground every architectural comparison in real AI behavior: large model training/inference (including long-context and KV-cache dynamics), MoE and sparse workloads, multi-step agentic pipelines, and recommendation/embedding workloads
- Build and maintain a workload methodology - microbenchmarks, proxy models, traces - tied to throughput, latency, tail latency, utilization, and SLA impact
Memory Hierarchy & Tiered Design
- Architect and compare memory hierarchies spanning local high-bandwidth memory, DRAM capacity tiers, Flash (NVMe/NVMe-oF), pooled/remote memory, and storage-class approaches; evaluate placement, caching, prefetching, eviction, QoS, and contention policies across tiers
- Define the software exposure and operational model - runtime, OS, and library expectations - with deployability and observability as first-class requirements
Connectivity & Pooling Approaches
- Evaluate the connectivity and pooling solution space as complementary or competing answers to the memory capacity and bandwidth problem - including GPU-side shared memory (e.g., NVLink-class, Vera Rubin-style), fabric-attached pooling (e.g., CXL-class), and emerging interconnect directions (UALink/UEth-class)
- Quantify how latency, bandwidth, congestion, topology, and coherency assumptions affect end-to-end AI performance across approaches; drive cross-domain alignment on connectivity trade decisions
Hands-On Modeling & Validation
- Build and extend system simulators and trace-driven models spanning compute, memory, Flash/storage, and IO; write analysis code (Python, C/C++) to automate experiments and process results
- Profile and instrument GPU/CPU/system stacks to validate model assumptions; run disciplined studies with baselines, parameter sweeps, and reproducible documentation
Requirements
- Cross-domain reasoning. The core requirement. You connect AI workload behavior, memory hierarchy (including DRAM and Flash tiers), connectivity/fabric, and storage/IO into coherent, quantified arguments - evaluating a broad solution space rather than advocating for any single technology.
- Proven impact. 12+ years in system architecture, performance engineering, or infrastructure modeling with a track record of studies that influenced product direction, investment decisions, or platform strategy.
- AI infrastructure fluency. Working knowledge of training and inference bottlenecks, data movement patterns, and memory pressure across transformers, MoE, and recommendation workloads. Engineering literacy required; researcher depth is not.
- Memory and storage grounding. Solid understanding of memory hierarchy and tiering principles across DRAM and Flash; storage/IO fundamentals including tail latency, QoS, and NVMe/NVMe-oF behavior; and connectivity/fabric options for shared, pooled, and disaggregated memory.
- Analytical rigor. Credible quantitative modeling, clean experimental methodology, and the ability to defend assumptions under scrutiny from both hardware and software engineers.
- Communication that moves decisions. Converts complex multi-domain analysis into clear recommendations for engineering and executive audiences, written and verbal.
- Hands-on experience with GPU-side shared memory architectures, DRAM/Flash tiering for AI workloads, or fabric-attached memory pooling/disaggregation.
- Familiarity with NVLink-class fabrics, CXL-class pooling, or emerging interconnect standards (UALink/UEth-class).
- Prior ownership of benchmarking strategy for memory-intensive or storage-tiered AI workloads.
- Familiarity with inference caching, KV-cache management, or Flash-backed serving at scale.
- Experience with discrete-event or trace-driven system simulation.
- You're inclusive, adapting your style to the situation and diverse global norms of our people.
- An avid learner, you approach challenges with curiosity and resilience, seeking data to help build understanding.
- You're collaborative, building relationships, humbly offering support, and openly welcoming others' approaches.
- Innovative and creative, you proactively explore new ideas and adapt quickly to change.
Benefits & conditions
The pay range below is for all roles at this level across all US locations and functions. Pay within this range varies by work location and may also depend on job-related knowledge, skills, and experience. We also offer incentive opportunities that reward employees based on individual and company performance.
This is in addition to a diverse package of benefits centered on the wellbeing of our employees and their loved ones. Beyond the usual Medical/Dental/Vision/401(k), our inclusive rewards plan empowers our people to care for their whole selves. An investment in your future is an investment in ours.
- Give Back: With a charitable giving match and frequent opportunities to get involved, we take an active role in supporting the community.
- Enjoy Time Away: You'll start with 4+ weeks of paid time off a year, plus holidays and sick leave, to rest and recharge.
- Care for Family: Whatever family means to you, we want to support you along the way, including a stipend for fertility care or adoption, medical travel support, and virtual vet care for your fur babies.
- Prioritize Emotional Wellness: With on-demand apps and free confidential therapy sessions, you'll have support no matter where you are.
- Stay Fit: Eating well and being active are important parts of a healthy life. Our onsite cafe and gym, plus virtual classes, make it easier.
- Embrace Flexibility: Benefits are best when you have the space to use them. That's why we facilitate a flexible environment so you can find the right balance for you.
Base Pay Range: $219,000-$351,000 USD