Job description
You will design and execute rigorous benchmarks and define dataset standards. Collaborating closely with our R&D team, you will build the evaluation infrastructure that guides the evolution of Pathway's post-transformer models.
You Will
- Proactively identify, prioritize, and curate relevant public and client-driven benchmarks across our target use cases and markets.
- Evaluate candidate benchmarks for clarity, data quality, evaluation methodology, and fit with our model roadmap.
- Run benchmarks with baseline models to validate setup, uncover edge cases, and derisk R&D runs.
- Hand off benchmark-ready packages to R&D (specs, data, evaluation scripts, expected metrics, constraints).
- Maintain a shared vocabulary and documentation around benchmarks, datasets, and evaluation formats that GTM and R&D can both use.
- Track and organize benchmark results, model leaderboards, and what good looks like for different customers and scenarios.
- Contribute to demos and public-facing proof points based on benchmark outcomes.
You will play a key role in defining and driving the benchmarking process for AI model evaluation. Your work will directly influence what we build, how we talk about it, and how customers and the market experience BDH.
Why You Should Apply
- Join an intellectually stimulating work environment.
- Be a pioneer: you get to work on a new class of "Live AI" challenges around long sequences and changing data.
- Be part of an early-stage AI startup that believes in impactful research and foundational changes.
Requirements
Type of contract: Full-time, permanent.
Preferable joining date: Immediate. The positions are open until filled - please apply immediately.
Compensation: Based on profile and location.
Location: Remote work, with the possibility to work or meet with other team members in one of our offices: Palo Alto, CA; Paris, France; or Wroclaw, Poland. Candidates based anywhere in the EU, UK, United States, and Canada will be considered.
If you meet our broad requirements but are missing some experience, don't hesitate to reach out to us.
Cover letter
It's always a pleasure to say hi! If you could leave us 2-3 lines, we'd really appreciate it.
You are expected to meet at least one of the following criteria:
- You have published at least one paper at NeurIPS, ICLR, or ICML, where you were the lead author or made significant conceptual and code contributions.
- You have significantly contributed to an LLM training effort which became newsworthy (topped a Hugging Face benchmark, best-in-class model, etc.), preferably using multiple GPUs.
- You have spent at least 6 months working in a leading machine learning research center (e.g. Google Brain / DeepMind, Apple, Meta, Anthropic, Nvidia, or MILA).
- You were an ICPC World Finalist, or an IOI, IMO, or IPhO medalist in high school.
You
- Have experience with ML/LLM evaluation, data science, or technical product roles, ideally around benchmarks or experimentation.
- Are comfortable reading papers, leaderboards, and GitHub repos, and turning them into clear, repeatable benchmark specs.
- Can talk comfortably with both engineers and customers, and translate between technical detail and business value.
- Care about high-quality data, reproducible experiments, and crisp documentation.
- Are respectful of others.
- Are fluent in English.
Bonus Points
- Published or open-sourced work on LLM evaluation, benchmarking, or data quality.
- Experience designing custom benchmarks or evaluation protocols for novel model capabilities.