Software Engineer, Technical Lead, Inference
Job description
As the Technical Lead for the Inference team, you will drive the architecture and optimization of our inference backbone, ensuring high performance, scalability, and efficiency in a dynamic environment. You will lead the acquisition and automation of benchmarks, collaborate with cross-functional teams, and innovate solutions to enhance our AI-powered applications.
What you will do
- Architect and optimize the inference stack for high-volume, low-latency, and high-availability environments.
- Lead the acquisition and automation of benchmarks at both micro and macro scales.
- Introduce new techniques and tools to improve performance, latency, throughput, and efficiency in our model inference stack.
- Build tools to identify bottlenecks and sources of instability, and design solutions to address them.
- Collaborate with machine learning researchers, engineers, and product managers to bring cutting-edge technologies into production.
- Optimize code and infrastructure to maximize hardware utilization and efficiency.
- Mentor and guide team members, fostering a culture of collaboration, innovation, and continuous learning.
Job location
This role is primarily based at our HQ in Paris, France. We will prioritize candidates who either reside in Paris or are open to relocating. We strongly believe in the value of in-person collaboration to foster strong relationships and seamless communication within our team. Our remote work policy is designed to offer flexibility, enhance work-life balance, and boost productivity. The number of remote workdays is determined by each manager, taking into account individual autonomy and specific circumstances, such as increased flexibility during the summer months. Regardless of the arrangement, we expect all employees to maintain open lines of communication with their teams and be available during core working hours.
Requirements
- Extensive experience in C++ and Python, with a strong focus on backend development and performance optimization.
- Deep understanding of modern ML architectures and experience with performance optimization for inference.
- Proven track record with large-scale distributed systems, particularly performance-critical ones.
- Familiarity with PyTorch, TensorRT, CUDA, and NCCL.
- Strong grasp of infrastructure, continuous integration, and continuous delivery (CI/CD) principles.
- Ability to lead and mentor team members, driving projects from concept to implementation.
- Results-oriented mindset with a bias towards flexibility and impact.
- Passion for staying ahead of emerging technologies and applying them to AI-driven solutions.
- Humble attitude, eagerness to help colleagues, and a desire to see the team succeed.