Member of Technical Staff, AI Systems Engineer
Job description
We are building next-generation custom AI silicon designed to accelerate AI workloads with unprecedented efficiency. We are looking for an exceptional Systems Engineer to bridge the gap between our custom hardware and modern AI inference frameworks.

As a Senior AI Systems Engineer, you will own the software integration layer between our custom AI chip's proprietary SDK and SGLang, a state-of-the-art serving framework for Large Language Models (LLMs) and Vision-Language Models. You will be responsible for ensuring that our silicon can seamlessly run SGLang inference workloads at peak performance, bypassing the traditional CUDA ecosystem entirely.
Responsibilities
- Framework Integration: Architect and develop the backend integration to make our custom AI chip a first-class citizen in SGLang.
- Custom Operator Development: Write custom C++/PyTorch extensions that map SGLang's primitive operations (e.g., RadixAttention, FlashAttention, matrix multiplications) onto our custom chip's proprietary software layer (see the illustrative sketch after this list).
- Performance Optimization: Profile and optimize end-to-end LLM inference latency, throughput, and memory utilization (e.g., PagedAttention-style KV-cache management) on our hardware.
- Cross-Functional Collaboration: Work closely with our hardware architecture and compiler teams to provide feedback on our custom software stack and silicon design based on framework-level bottlenecks.
- Testing & Deployment: Build robust testing pipelines to validate model accuracy and performance parity against standard GPU baselines.
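To give a concrete sense of the operator-development work described above, here is a minimal, illustrative sketch of registering a custom PyTorch operator that a serving framework could dispatch to an accelerator backend, using the torch.library.custom_op API (PyTorch 2.4+). It is a hypothetical example only: the `customchip` namespace and the commented-out SDK call are placeholders, not our actual software stack.

```python
import torch

# Illustrative only: a hypothetical "customchip" backend. In a real
# integration, the op body would call into the vendor SDK rather than
# falling back to PyTorch's own matmul.

@torch.library.custom_op("customchip::matmul", mutates_args=())
def customchip_matmul(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # e.g. return customchip_sdk.matmul(a, b)   # hypothetical SDK call
    return a @ b  # placeholder CPU fallback so the sketch runs as-is

# A "fake" (meta) implementation lets torch.compile and shape propagation
# trace the op without executing it on real hardware.
@customchip_matmul.register_fake
def _(a, b):
    return a.new_empty(a.shape[0], b.shape[1])

if __name__ == "__main__":
    x = torch.randn(4, 8)
    w = torch.randn(8, 16)
    out = torch.ops.customchip.matmul(x, w)
    print(out.shape)  # torch.Size([4, 16])
```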
Requirements
- BS, MS, or PhD in Computer Science, Computer Engineering, or a related field.
- Software engineering experience focusing on systems programming, ML infrastructure, or AI compilers.
- Expertise in Python: Deep understanding of memory management and concurrent programming.
- Experience with LLM Inference Engines: Hands-on experience modifying or extending frameworks like SGLang, vLLM, DeepSpeed-FastGen, or TensorRT-LLM.
- PyTorch Internals: Strong experience writing PyTorch C++ extensions and custom operators.
- Hardware Interfacing: Proven track record of integrating machine learning workloads with hardware accelerators (GPUs, TPUs, NPUs) using custom SDKs, APIs, or low-level drivers.
Nice-to-Have Qualifications
- Prior experience working on non-CUDA software ecosystems (e.g., AMD ROCm, AWS Neuron, Google XLA).
- Familiarity with AI compilers and intermediate representations (MLIR, Apache TVM, OpenAI Triton).
- Strong understanding of underlying LLM architectures (Transformers, MoE) and state-of-the-art attention algorithms (FlashAttention v2/v3).
- Previous experience at an AI silicon startup or working on custom accelerators (e.g., Google TPU, AWS Trainium).
This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.