Martin O'Hanlon - Make LLMs make sense with GraphRAG

How do you stop LLMs from making things up? By connecting them to a knowledge graph that acts as their factual 'left brain'.

#1 (about 2 minutes)

Understanding the problem of LLM hallucinations

Large language models are powerful but often invent facts and present them as truth, a problem known as hallucination.

#2 (about 5 minutes)

Demonstrating how context can ground LLM responses

A live demo in the OpenAI playground shows how an LLM hallucinates a weather report but provides a factual response when given context.

#3 (about 2 minutes)

Introducing retrieval-augmented generation (RAG)

Retrieval-augmented generation is an architectural pattern that improves LLM outputs by augmenting the prompt with retrieved, factual information.
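The pattern can be sketched in a few lines: retrieve relevant facts, then prepend them to the user's question before it reaches the model. The fact store and keyword retriever below are illustrative stand-ins, not part of the talk; a real system would use a vector or graph store and then send the prompt to an LLM.

```python
# Minimal sketch of the RAG pattern: retrieve facts, augment the prompt.
# FACTS and the naive keyword retriever are made up for illustration.

FACTS = [
    "Sandymount beach is closed today due to a water-quality warning.",
    "Dollymount beach is open and lifeguarded until 6 pm.",
]

def retrieve(question: str, facts: list[str]) -> list[str]:
    """Naive keyword retrieval: return facts sharing a word with the question."""
    words = set(question.lower().split())
    return [f for f in facts if words & set(f.lower().split())]

def build_prompt(question: str, facts: list[str]) -> str:
    """Augment the user question with the retrieved context."""
    context = "\n".join(retrieve(question, facts))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Is Sandymount beach open today?", FACTS)
```

The LLM call itself is unchanged; only the prompt is enriched, which is what makes RAG an architectural pattern rather than a model change.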

#4 (about 5 minutes)

Understanding the fundamentals of graph databases

Graph databases like Neo4j model data using nodes for entities, labels for categorization, and relationships to represent connections between them.
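The shape of that data model can be shown with an in-memory toy: nodes carry labels and properties, and relationships are typed, directed connections between nodes. This is only an illustration of the concepts; Neo4j itself would be queried with Cypher, and the node IDs and relationship type here are invented.

```python
# Illustrative property-graph shape: labelled nodes plus typed,
# directed relationships. Not Neo4j -- just the concepts it models.

nodes = {
    "p1": {"labels": ["Person"], "props": {"name": "Martin"}},
    "t1": {"labels": ["Talk"], "props": {"title": "Make LLMs make sense with GraphRAG"}},
}

# Relationships as (start_node, TYPE, end_node) triples.
relationships = [("p1", "PRESENTS", "t1")]

def neighbours(node_id: str, rel_type: str) -> list[str]:
    """Follow outgoing relationships of a given type from a node."""
    return [end for start, rtype, end in relationships
            if start == node_id and rtype == rel_type]

talks = neighbours("p1", "PRESENTS")
```

In Cypher the equivalent traversal would be a pattern match such as `MATCH (p:Person)-[:PRESENTS]->(t:Talk) RETURN t`.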

#5 (about 6 minutes)

Using graphs for specific, fact-based queries

While vector embeddings are good for fuzzy matching, knowledge graphs excel at providing context for highly specific, fact-based questions.
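The contrast can be made concrete: a fuzzy query ranks documents by embedding similarity, while a specific factual question is a direct graph lookup. The three-dimensional "embeddings" and the fact key below are made up for illustration; a real system would use a learned embedding model and a real graph store.

```python
# Toy contrast: fuzzy vector matching vs an exact graph lookup.
# Vectors and facts here are invented for illustration only.
import math

docs = {
    "beach weather report": [0.9, 0.1, 0.2],
    "surfing conditions":   [0.8, 0.3, 0.1],
}
graph_facts = {("Sandymount", "STATUS"): "closed"}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Fuzzy: rank documents by similarity to a query embedding.
query_vec = [0.85, 0.2, 0.15]
best_doc = max(docs, key=lambda d: cosine(query_vec, docs[d]))

# Specific: a fact-based question is a direct, unambiguous lookup.
status = graph_facts[("Sandymount", "STATUS")]
```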

#6 (about 3 minutes)

Demonstrating GraphRAG with a practical example

A live demo shows how adding factual context from a knowledge graph, such as a beach closure, dramatically improves the LLM's recommendation.

#7 (about 2 minutes)

Summarizing the two main uses of GraphRAG

GraphRAG serves two key purposes: extracting entities from unstructured text to build a knowledge graph and using that graph to provide better context for LLMs.
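Both roles can be sketched with toy stand-ins: (1) extract entity relationships from unstructured text to build a graph, and (2) serialise graph facts back into plain text as context for an LLM prompt. A real pipeline would use an LLM or NLP model for the extraction step; the regex below is a deliberately crude placeholder.

```python
# Sketch of the two GraphRAG roles: text -> graph, then graph -> context.
# The regex-based extractor is a toy stand-in for LLM entity extraction.
import re

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    """Toy extraction: pull '<X> is <Y>.' statements as (X, 'IS', Y) triples."""
    return [(a.strip(), "IS", b.strip())
            for a, b in re.findall(r"([A-Z][\w ]+?) is ([\w -]+)\.", text)]

def graph_to_context(triples: list[tuple[str, str, str]]) -> str:
    """Serialise graph facts into plain text for prompt augmentation."""
    return "\n".join(f"{a} {rel} {b}" for a, rel, b in triples)

triples = extract_triples("Sandymount beach is closed. Dollymount beach is open.")
context = graph_to_context(triples)
```

The first function is GraphRAG building the knowledge graph; the second is GraphRAG reading it back out to ground the model's answers.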
