Senior Data Scientist - AI Evaluation

RELX Group

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
£63K

Job location

Tech stack

Artificial Intelligence
Airflow
Data analysis
Code Review
Computational Linguistics
Statistical Hypothesis Testing
Python
SQL Databases
Version Control
Large Language Models
Git
Power Analysis (Statistics)

Job description

About the Role

As a Senior Data Scientist III, you will design and implement end-to-end evaluation studies and pipelines for AI products. You'll translate product requirements into statistically sound test designs and metrics, build reproducible Python/SQL pipelines, run analyses and QC, and deliver concise readouts that drive roadmap decisions and risk mitigation. You'll collaborate closely with SMEs, contribute to our shared evaluation libraries, and produce audit-ready documentation aligned with Responsible AI and governance expectations.

Responsibilities

· Study design & metrics - Translate product questions into hypotheses, tasks/rubrics, datasets, and success criteria; define metrics (accuracy/correctness, groundedness, reliability, safety/bias/toxicity) with acceptance thresholds.

· Pipelines & tooling - Build and maintain Python/SQL evaluation pipelines (data prep, prompt/rubric generation, LLM-as-judge with guardrails, scoring, QC, reporting); contribute to shared packages and CI.

· Statistical rigor - Plan for power, confidence intervals, inter-rater reliability (e.g., Cohen's κ/ICC), calibration, and significance testing; document assumptions and limitations (an illustrative sketch follows this list).

· SME integration - Partner with SME Ops and domain leads to create clear rater guidance, run calibration, monitor IRR, and incorporate feedback loops.

· Analytics & reporting - Create analyses that highlight regressions, safety risks, and improvement opportunities; deliver crisp write-ups and executive-level summaries.

· Governance & compliance - Produce audit-ready artifacts (evaluation plans, datasheets/model cards, risk logs); follow privacy/security guardrails and Responsible AI practices.

· Quality & reliability - Implement test hygiene (dataset/versioning, golden sets, seed control), observability, and failure analysis; help run post-release regression monitoring.

· Collaboration - Work closely with Product and Engineering to scope, estimate, and land evaluation work; participate in code reviews and design sessions alongside fellow Data Scientists.
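
The statistical-rigor bullet above can be made concrete with a short, hedged Python sketch. It is illustrative only: the effect size, rater labels, and accuracy counts are invented for the example, and it assumes statsmodels and scikit-learn are available.

```python
# Illustrative planning and reliability checks for an evaluation study.
# All numbers below are made up for the example.
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.power import TTestIndPower
from statsmodels.stats.proportion import proportion_confint

# Power analysis: items needed per arm to detect a medium effect
# (Cohen's d = 0.5) at alpha = 0.05 with 80% power.
n_per_arm = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"~{n_per_arm:.0f} items per arm")

# Inter-rater reliability: Cohen's kappa between two raters on the same items.
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(f"Cohen's kappa: {cohen_kappa_score(rater_a, rater_b):.2f}")

# Wilson 95% confidence interval for an observed accuracy of 42/50.
low, high = proportion_confint(count=42, nobs=50, alpha=0.05, method="wilson")
print(f"Accuracy 95% CI: [{low:.2f}, {high:.2f}]")
```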

Requirements

Do you have hands-on experience designing reliable evaluations for LLM/NLP features? Do you enjoy turning messy product questions into clear study designs, metrics, and production-ready code?

· Education/Experience: Master's degree plus 3 years of experience, or Bachelor's degree plus 5 years, in CS, Data Science, Statistics, Computational Linguistics, or a related field; strong track record shipping evaluation or ML analytics work.

· Technical: Strong Python and SQL; experience with LLM/NLP evaluation, data/versioning, testing/CI, and cloud-based workflows; familiarity with prompt/rubric design and LLM-as-judge patterns.

· Statistics: Comfortable with power analysis, CIs, hypothesis testing, inter-rater reliability, and error/slice analysis (see the slice-analysis sketch after this list).

· Practices: Git, code reviews, reproducibility, documentation; ability to turn ambiguous product needs into executable study plans.

· Communication: Clear written/oral communication; ability to produce crisp dashboards and decision-ready summaries for non-technical stakeholders.

· Mindset: Ownership, curiosity, bias-for-action, and collaborative ways of working.
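
As a hedged illustration of the error/slice analysis mentioned above, the sketch below computes per-slice accuracy with a percentile-bootstrap confidence interval. The DataFrame columns, slice names, and values are hypothetical, not a prescribed schema.

```python
# Illustrative slice analysis with bootstrap confidence intervals.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=7)  # fixed seed for reproducibility

# Hypothetical per-item results: the content slice an item belongs to
# and whether the model answered it correctly.
results = pd.DataFrame({
    "slice": ["cardiology", "cardiology", "oncology", "oncology", "oncology", "pharma"],
    "correct": [1, 0, 1, 1, 0, 1],
})

def bootstrap_ci(values, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for the mean of a 0/1 correctness array."""
    means = [rng.choice(values, size=len(values), replace=True).mean()
             for _ in range(n_boot)]
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

for name, group in results.groupby("slice"):
    low, high = bootstrap_ci(group["correct"].to_numpy())
    acc = group["correct"].mean()
    print(f"{name}: accuracy={acc:.2f}, 95% CI=[{low:.2f}, {high:.2f}]")
```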

Nice to have

· Experience with evaluation of retrieval-augmented or agentic systems and/or with safety/bias/toxicity measurements.

· Familiarity with lightweight orchestration (e.g., Airflow/Prefect) and containerization basics (see the orchestration sketch after this list).

· Exposure to healthcare or education content or working with clinician/academic SMEs.
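
For the orchestration point above, here is one possible minimal sketch using Airflow's TaskFlow API (Airflow 2.x). The DAG name, task breakdown, and returned values are hypothetical and only show the general shape of an evaluation pipeline, not this team's actual setup.

```python
# Illustrative evaluation pipeline orchestrated with Airflow's TaskFlow API.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule=None, start_date=datetime(2024, 1, 1), catchup=False)
def eval_pipeline():
    @task
    def prepare_dataset() -> str:
        # e.g., pull a versioned golden set for the run
        return "golden_set_v1"

    @task
    def score(dataset: str) -> dict:
        # e.g., run scoring (possibly LLM-as-judge) against the dataset
        return {"dataset": dataset, "accuracy": 0.87}

    @task
    def report(metrics: dict) -> None:
        # e.g., publish a readout for stakeholders
        print(metrics)

    report(score(prepare_dataset()))

eval_pipeline()
```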

About the company

About our Team

Elsevier's AI Evaluation team designs, builds, and operates NLP/LLM evaluation solutions used across multiple product lines. We partner with Product, Technology, Domain SMEs, and Governance to ensure our AI features are safe, effective, and continuously improving.

RELX is a global provider of information-based analytics and decision tools for professional and business customers, enabling them to make better decisions, get better results and be more productive. Our purpose is to benefit society by developing products that help researchers advance scientific knowledge; doctors and nurses improve the lives of patients; lawyers promote the rule of law and achieve justice and fair results for their clients; businesses and governments prevent fraud; consumers access financial services and get fair prices on insurance; and customers learn about markets and complete transactions. Our purpose guides our actions beyond the products that we develop. It defines us as a company. Every day across RELX our employees are inspired to undertake initiatives that make unique contributions to society and the communities in which we operate.
