Sergio Perez & Harshita Seth
Adding knowledge to open-source LLMs
#1 · about 4 minutes
Understanding the LLM training pipeline and knowledge gaps
LLMs are trained through pre-training and alignment, but require new knowledge to stay current, adapt to specific domains, and acquire new skills.
#2 · about 5 minutes
Adding domain knowledge with continued pre-training
Continued pre-training adapts a foundation model to a specific domain by training it further on specialized, unlabeled data using self-supervised learning.
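The key point — same self-supervised objective, new unlabeled data — can be illustrated with a deliberately tiny stand-in for a language model: a bigram next-token counter. The corpora and the bigram model below are illustrative, not from the talk; real continued pre-training applies the identical idea with gradient updates on a transformer.

```python
from collections import Counter, defaultdict

def self_supervised_update(model, tokens):
    """One pass of the self-supervised objective: for every adjacent
    token pair, count how often `nxt` follows `prev`. Pre-training and
    continued pre-training run the exact same update, only on
    different (always unlabeled) text."""
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, token):
    """Most likely next token under the current counts."""
    return model[token].most_common(1)[0][0]

# "Pre-training" on general text.
model = defaultdict(Counter)
general = "the cat sat on the mat and the cat slept".split()
self_supervised_update(model, general)

# "Continued pre-training" on specialized, unlabeled domain text
# shifts the model's predictions toward domain vocabulary.
medical = "the patient presented with the patient history the patient".split()
self_supervised_update(model, medical)

print(predict_next(model, "the"))  # "patient" after domain adaptation
```

Before the domain pass the model's most likely continuation of "the" came from the general corpus; afterwards the same objective, applied to domain text, dominates.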
#3 · about 6 minutes
Developing skills and reasoning with supervised fine-tuning
Supervised fine-tuning uses instruction-based datasets to teach models specific tasks, chat capabilities, and complex reasoning through techniques like chain of thought.
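A minimal sketch of how an instruction-based SFT example is typically assembled, assuming the common convention of masking prompt positions with an ignore index so only the response contributes to the loss. The prompt template and whitespace tokenization are illustrative stand-ins for a real chat template and tokenizer.

```python
IGNORE_INDEX = -100  # conventional "skip this position" label in loss functions

def build_sft_example(instruction: str, response: str):
    """Pair an instruction with its target response for supervised
    fine-tuning. Prompt positions get IGNORE_INDEX as their label so
    the model is only trained to produce the response tokens.
    Whitespace splitting stands in for a real tokenizer."""
    prompt = f"### Instruction:\n{instruction}\n### Response:\n".split()
    target = response.split()
    inputs = prompt + target
    labels = [IGNORE_INDEX] * len(prompt) + target
    return inputs, labels

# Chain-of-thought SFT data simply places the reasoning steps inside the
# response, so the model learns to emit them before the final answer.
inputs, labels = build_sft_example(
    "What is 17 + 25?",
    "First add the tens: 10 + 20 = 30. Then the ones: 7 + 5 = 12. Answer: 42.",
)
print(labels[0], labels[-1])
```

Masking the prompt is a design choice: without it the model also spends capacity re-predicting the instruction text rather than learning the instruction-to-response mapping.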
#4 · about 8 minutes
Aligning models with human preferences using reinforcement learning
Preference alignment refines model behavior using reinforcement learning, evolving from complex RLHF with reward models to simpler methods like DPO.
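The "simpler method" mentioned here, DPO, replaces the learned reward model and RL loop with a direct loss over preference pairs. A sketch of that loss for a single pair (the β value and log-probabilities below are invented for illustration):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Each argument is the summed log-probability a model assigns to the
    chosen or rejected response; the `ref_*` values come from the frozen
    reference model. Loss = -log sigmoid(beta * margin), where the margin
    compares how much more the policy (vs. the reference) prefers the
    chosen response over the rejected one."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Policy favors the chosen response more than the reference does:
# positive margin, so the loss drops below log(2) (its value at zero margin).
print(dpo_loss(-5.0, -9.0, -6.0, -8.0))
```

Minimizing this pushes the policy to widen the chosen-over-rejected gap relative to the reference model, with β controlling how far it may drift from the reference.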
#5 · about 2 minutes
Using frameworks like NeMo RL to simplify model alignment
Frameworks like the open-source NeMo RL abstract away the complexity of implementing advanced alignment techniques such as reinforcement learning.