Machine Learning Systems Engineer
Job description
As a Machine Learning Systems Engineer, you will contribute directly to our machine learning infrastructure and to the ScalarLM open source codebase, and you will build large-scale language model applications on top of it. You'll operate at the intersection of high-performance computing, distributed systems, and cutting-edge machine learning research, developing the fundamental infrastructure that enables researchers and organizations worldwide to train and deploy large language models at scale.
This is an opportunity to take on technically demanding projects, contribute to foundational systems, and help shape the next generation of intelligent computing.
You will:
- Contribute code and performance improvements to the open source project.
- Develop and optimize distributed training algorithms for large language models.
- Implement high-performance inference engines and optimization techniques.
- Work on integration across the vLLM, Megatron-LM, and HuggingFace ecosystems (see the inference sketch after this list).
- Build tools for seamless model training, fine-tuning, and deployment.
- Optimize performance on advanced GPU architectures.
- Collaborate with the open source community on feature development and bug fixes.
- Research and implement new techniques for self-improving AI agents.
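To give a feel for the inference side of this work, here is a minimal, hedged sketch of offline generation with vLLM. The model name is a small placeholder chosen purely for illustration, and exact API details may vary across vLLM versions.

```python
# Minimal vLLM offline-inference sketch (illustrative only).
# The model name below is a placeholder, not a project requirement.
from vllm import LLM, SamplingParams

prompts = ["Explain bulk synchronous parallel computing in one sentence."]
sampling_params = SamplingParams(temperature=0.7, max_tokens=64)

llm = LLM(model="facebook/opt-125m")              # placeholder model
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```

Integration work in this role goes well beyond a single generate call, for example wiring the same models through Megatron-LM training and HuggingFace tooling.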
Requirements
- Experience Level: 3+ years of experience in machine learning engineering or research
- Programming Languages: Proficiency in both C/C++ and Python
- High Performance Computing: Deep understanding of HPC concepts, including:
- MPI (Message Passing Interface) programming and optimization (see the sketch after this list)
- Bulk Synchronous Parallel (BSP) computing models
- Multi-GPU and multi-node distributed computing
- CUDA/ROCm programming experience preferred
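As a rough illustration of the MPI and BSP items above, the sketch below runs one BSP-style superstep with mpi4py: each rank computes a local partial result, then a collective allreduce serves as the communication and synchronization phase. The use of mpi4py and the launch command are assumptions for illustration; production code in this area is often written against MPI in C/C++ or via framework-level collectives.

```python
# One BSP-style superstep with mpi4py (illustrative only).
# Launch with e.g.: mpirun -np 4 python superstep_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Local computation phase: each rank produces a partial result.
local = np.full(4, float(rank))

# Communication + synchronization phase: sum partial results across ranks.
total = np.empty_like(local)
comm.Allreduce(local, total, op=MPI.SUM)

if rank == 0:
    print("Summed across ranks:", total)
```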
- Machine Learning Foundations:
- Solid understanding of gradient descent and backpropagation algorithms
- Experience with transformer architectures and the ability to explain their mechanics
- Knowledge of deep learning training and its applications
- Understanding of distributed training techniques (data parallelism, model parallelism, pipeline parallelism, large batch training, optimization)
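As a deliberately tiny example of the data-parallel item above, this sketch takes one SGD step with PyTorch DistributedDataParallel, which averages gradients across ranks during the backward pass. It assumes launch via torchrun (which sets the rank and world-size environment variables); the linear model and random data are placeholders.

```python
# One data-parallel SGD step with PyTorch DDP (toy model and data).
# Launch with e.g.: torchrun --nproc_per_node=2 ddp_step.py
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="gloo")    # use "nccl" on GPU nodes
model = torch.nn.Linear(16, 1)
ddp_model = DDP(model)
optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

inputs = torch.randn(8, 16)                # in practice, each rank loads its own shard
targets = torch.randn(8, 1)

optimizer.zero_grad()
loss = torch.nn.functional.mse_loss(ddp_model(inputs), targets)
loss.backward()                            # DDP averages gradients across ranks here
optimizer.step()

dist.destroy_process_group()
```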
Research and Development
- Publications: Experience with machine learning research and publications preferred
- Research Skills: Ability to read, understand, and implement techniques from recent ML research papers
- Open Source: Demonstrated commitment to open source development and community collaboration
Experience
- Experience with large-scale distributed training frameworks (Megatron-LM, DeepSpeed, FairScale, etc.).
- Familiarity with inference optimization frameworks (vLLM, TensorRT, etc.).
- Experience with containerization (Docker, Kubernetes) and cluster management.
- Background in systems programming and performance optimization.
- PhD or MS in Computer Science, Computer Engineering, Machine Learning, or related field.
- Experience with SLURM, Kubernetes, or other cluster orchestration systems.
- Knowledge of mixed precision training, data parallel training, and scaling laws.
- Experience with transformer architectures, PyTorch, and decoding algorithms.
- Familiarity with the high-performance GPU programming ecosystem.
- Previous contributions to major open source ML projects.
- Experience with MLOps and model deployment at scale.
- Understanding of modern attention mechanisms (multi-head attention, grouped query attention, etc.).
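For the attention item above, the following toy sketch shows the shape-level idea behind grouped-query attention: fewer key/value heads than query heads, with each K/V head shared by a group of query heads. Head counts and dimensions are arbitrary placeholders, and the expansion via repeat_interleave is just one way to express the grouping.

```python
# Toy grouped-query attention: n_kv_heads < n_q_heads (illustrative shapes).
import torch
import torch.nn.functional as F

batch, seq, head_dim = 2, 10, 64
n_q_heads, n_kv_heads = 8, 2               # each K/V head serves 4 query heads

q = torch.randn(batch, n_q_heads, seq, head_dim)
k = torch.randn(batch, n_kv_heads, seq, head_dim)
v = torch.randn(batch, n_kv_heads, seq, head_dim)

# Expand K/V so every query head in a group attends to its shared K/V head.
group = n_q_heads // n_kv_heads
k = k.repeat_interleave(group, dim=1)
v = v.repeat_interleave(group, dim=1)

out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)                           # torch.Size([2, 8, 10, 64])
```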
Benefits & conditions
At RelationalAI, you will:
- Work from anywhere in the world
- Earn competitive salary + equity
- Enjoy open PTO, flexible schedules, and recharge weeks
- Access global benefits, mental-health support, and learning stipends
- Join a transparent, inclusive, and globally connected culture that values curiosity, excellence, and impact
- Regular team offsites and global events - Building strong connections while working remotely through gatherings that bring everyone together.
- A culture of transparency & knowledge-sharing - Open communication through team standups, fireside chats, and open meetings.
Country Hiring Guidelines:
RelationalAI hires people from around the world. All of our roles are remote; however, some locations might carry specific eligibility requirements.
Because of this, understanding your location and any visa-support needs helps us better prepare to onboard new colleagues.