Lies, Damned Lies and Large Language Models
Jodie Burchell
Would you like to use large language models (LLMs) in your own project, but are you troubled by their tendency to frequently “hallucinate”, or produce incorrect information? Have you ever wondered whether there is a way to easily measure an LLM’s hallucination rate and compare it against other models? And would you like to learn how to help LLMs produce more accurate information? In this talk, we’ll look at some of the main reasons that hallucinations occur in LLMs, and then focus on how we can measure one specific type of hallucination: the tendency of models to regurgitate misinformation that they have learned from their training data. We’ll explore how we can easily measure this type of hallucination in LLMs using a dataset called TruthfulQA in conjunction with Python tooling, including Hugging Face’s datasets library and LangChain. We’ll end by looking at initiatives to reduce hallucinations in LLMs, and how complex this can be.
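To give a flavour of the approach the abstract describes, here is a minimal sketch (not the speaker's exact code) of loading TruthfulQA with Hugging Face's datasets library and collecting answers from a chat model through LangChain. It assumes the datasets and langchain-openai packages, an OPENAI_API_KEY in the environment, and an illustrative model name; the actual talk may use different models and tooling.

```python
# Minimal sketch: sample TruthfulQA questions and collect model answers
# for later hallucination scoring. Model name and sample size are illustrative.
from datasets import load_dataset
from langchain_openai import ChatOpenAI

# The "generation" config pairs each open-ended question with reference
# correct_answers and incorrect_answers; only a "validation" split exists.
truthful_qa = load_dataset("truthful_qa", "generation")["validation"]

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # assumed model name

results = []
for row in truthful_qa.select(range(10)):  # small sample for the sketch
    answer = llm.invoke(row["question"]).content
    results.append(
        {
            "question": row["question"],
            "model_answer": answer,
            "correct_answers": row["correct_answers"],
            "incorrect_answers": row["incorrect_answers"],
        }
    )
```

Scoring the collected answers against the correct and incorrect references, for example with a judge model or a similarity metric, is where the actual measurement of the hallucination rate happens; the talk covers how this is done and how the results can be compared across models.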