Software Engineer, Systems ML - Compilers / Backend
Job description
We are seeking a software engineer to support development of the compiler toolchain for state-of-the-art deep learning hardware optimized for AR/VR systems. You will help architect, design, and implement a clean-slate compiler as part of a team that includes compiler, machine learning algorithms and software, firmware, and ASIC experts. You will contribute to a full-stack development effort compiling PyTorch models down to binaries for custom hardware accelerator blocks.
- Analyze and design effective compiler passes and optimizations; implement and/or enhance code generation targeting machine learning accelerators
- Work with algorithm research teams to map ML graphs to hardware implementations, model data-flows, create cost-benefit analysis and estimate silicon power and performance
- Work with hardware architects to co-design hardware features that maximize performance, power efficiency and programmability
- Contribute to the development of machine-learning libraries, intermediate representations, export formats, and analysis tools
- Analyze and improve the efficiency, scalability, and stability of our toolchains. Optimize and tune kernels and compiled code to achieve latency targets for ML inference
- Conduct design and code reviews. Evaluate code performance, debug, diagnose and drive resolution of compiler and cross-disciplinary system issues
- Interface with other compiler-focused teams to evaluate and incorporate their innovations, and to share ours in turn
Requirements
- Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
- 2+ years of experience developing compilers, toolchains, runtimes, or similar code-optimization software
- Experience in software design and in programming with Python and/or C/C++ for development, debugging, testing, and performance analysis
- Experience in AI framework development or in accelerating models on hardware architectures (GPUs, TPUs, custom AI ASICs)
- Experience working and communicating cross-functionally in a team environment
- Experience with machine-code generation or compiler back-ends for on-device inference workloads
- Experience working on and contributing to an active compiler toolchain codebase, such as LLVM, MLIR, GCC, MSVC, or Glow
- Experience with deep learning algorithms and techniques, e.g., convolutional neural networks, recurrent networks, etc.
- Experience developing high-performance kernels or runtime components and tuning them for inference-specific accelerator platforms
- Experience developing in a mainstream machine-learning framework, e.g., PyTorch, MLIR, TensorFlow, or Caffe