Jodie Burchell
A beginner’s guide to modern natural language processing
#1 · about 5 minutes
Understanding the core challenge of natural language processing
Machine learning models require numerical inputs, so raw text must be converted into a numerical format called a vector or text embedding.
#2 · about 6 minutes
Exploring bag-of-words methods for text vectorization
Binary and count vectorization create features based on the presence or frequency of words in a document, ignoring their original context.
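The two bag-of-words variants described above can be sketched with scikit-learn's CountVectorizer, whose `binary=True` flag switches from word counts to simple presence/absence; the toy documents here are invented for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat on the mat", "the dog sat"]

# Count vectorization: each feature is how often a word occurs.
count_vec = CountVectorizer()
counts = count_vec.fit_transform(docs).toarray()

# Binary vectorization: each feature is just presence (1) or absence (0).
binary_vec = CountVectorizer(binary=True)
binary = binary_vec.fit_transform(docs).toarray()

print(sorted(count_vec.vocabulary_))  # ['cat', 'dog', 'mat', 'on', 'sat', 'the']
print(counts[0])   # "the" appears twice in the first document: [1 0 1 1 1 2]
print(binary[0])   # but only contributes a 1 in the binary version: [1 0 1 1 1 1]
```

In both cases word order is discarded: "the cat sat" and "sat the cat" would produce identical vectors.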
#3 · about 4 minutes
How Word2Vec captures word meaning in vector space
The Word2Vec model learns numerical representations for words by analyzing their surrounding context, grouping similar words closer together in a multi-dimensional space.
#4 · about 5 minutes
Training a Word2Vec model in Python using Gensim
A practical demonstration shows how to clean text data and train a custom Word2Vec model to generate embeddings for a specific vocabulary.
#5 · about 3 minutes
Creating document embeddings by averaging word vectors
A simple yet effective method to represent an entire document is to retrieve the embedding for each word and calculate their average vector.
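The averaging method described above can be written in a few lines of NumPy; the tiny 4-dimensional embeddings below stand in for real Word2Vec output:

```python
import numpy as np

# Toy embeddings standing in for a trained Word2Vec model.
embeddings = {
    "cats": np.array([1.0, 0.0, 0.0, 2.0]),
    "sleep": np.array([0.0, 1.0, 0.0, 0.0]),
    "often": np.array([0.0, 0.0, 1.0, 0.0]),
}

def document_vector(tokens, embeddings):
    """Average the embeddings of the in-vocabulary tokens."""
    vectors = [embeddings[t] for t in tokens if t in embeddings]
    if not vectors:  # no known words: fall back to a zero vector
        return np.zeros(len(next(iter(embeddings.values()))))
    return np.mean(vectors, axis=0)

doc = ["cats", "sleep", "often", "unknownword"]
print(document_vector(doc, embeddings))  # average of the three known vectors
```

Out-of-vocabulary tokens are simply skipped, so every document ends up as a single fixed-length vector regardless of its word count.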
#6 · about 2 minutes
Evaluating the performance of the Word2Vec classifier
The classifier trained on averaged word embeddings achieves 95% accuracy, with errors often occurring on headlines with misleading topics or tones.
#7 · about 3 minutes
Overcoming context limitations with transformer models
Transformer models use a self-attention mechanism to weigh the importance of other words in a sentence, allowing them to understand a word's meaning in its specific context.
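The self-attention mechanism described above can be sketched in NumPy as scaled dot-product attention; the random token vectors and single attention head are simplifying assumptions (real transformers use many heads plus learned positional information):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    # Each row of `weights` says how strongly that token attends
    # to every other token in the sentence.
    weights = softmax(Q @ K.T / np.sqrt(d_k))
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))            # 3 tokens, 4-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
print(out.shape)            # (3, 4): one contextualized vector per token
print(weights.sum(axis=1))  # each row of attention weights sums to 1
```

The output for each token is a weighted mix of all tokens' values, which is exactly how the same word ends up with different vectors in different sentences.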
#8 · about 5 minutes
Understanding how the BERT model is pre-trained
BERT learns a deep understanding of language by being pre-trained on tasks like predicting masked words and predicting whether one sentence actually follows another, enabling it to be fine-tuned for specific applications.
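The masked-word objective above amounts to hiding some tokens and asking the model to recover them. A simplified sketch of how such training pairs are built (BERT's real recipe also sometimes keeps or randomly replaces the chosen tokens instead of masking them):

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """Build a masked-LM training pair: masked input plus the original labels."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(MASK)
            labels.append(tok)   # the model must predict this token
        else:
            masked.append(tok)
            labels.append(None)  # not part of the loss
    return masked, labels

tokens = "the model learns language by filling in the blanks".split()
masked, labels = mask_tokens(tokens)
print(masked)
```

Because the labels come from the text itself, this pre-training needs no human annotation, which is what makes it possible to learn from huge corpora.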
#9 · about 7 minutes
Fine-tuning a BERT model with the Transformers library
Using the Hugging Face Transformers library, a pre-trained DistilBERT model is fine-tuned for the clickbait classification task, requiring specific tokenization with attention masks.
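The attention masks mentioned above mark which positions hold real tokens and which are padding. A library-free sketch of what a Hugging Face tokenizer produces with `padding=True` (the token ids below are made up for illustration):

```python
def pad_batch(batch, pad_id=0):
    """Pad variable-length token-id sequences and build attention masks.

    Mirrors the shape of a Hugging Face tokenizer's output with padding=True:
    mask 1 = real token the model should attend to, 0 = padding to ignore.
    """
    max_len = max(len(seq) for seq in batch)
    input_ids, attention_mask = [], []
    for seq in batch:
        pad = max_len - len(seq)
        input_ids.append(seq + [pad_id] * pad)
        attention_mask.append([1] * len(seq) + [0] * pad)
    return {"input_ids": input_ids, "attention_mask": attention_mask}

batch = pad_batch([[101, 2023, 102], [101, 2023, 2003, 8398, 102]])
print(batch["input_ids"][0])       # [101, 2023, 102, 0, 0]
print(batch["attention_mask"][0])  # [1, 1, 1, 0, 0]
```

Without the mask, the model would attend to the padding ids as if they were words, which is why fine-tuning code must pass both tensors to the model.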
#10 · about 2 minutes
Choosing the right text processing model for your task
While the fine-tuned BERT model achieves the highest accuracy at 99%, simpler methods like count vectorization can outperform Word2Vec and may be sufficient depending on the use case.
#11 · about 2 minutes
Using word embeddings to improve downstream NLP tasks
Word embeddings can be combined with other techniques, such as TF-IDF weighting, to extract more signal and improve performance on tasks like sentiment analysis.
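One way to realize the TF-IDF weighting idea above is to replace the plain average of word vectors with a TF-IDF-weighted average, so distinctive words contribute more; the toy embeddings and documents are invented for illustration:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy 2-dimensional embeddings standing in for a trained Word2Vec model.
embeddings = {
    "great": np.array([1.0, 0.0]),
    "movie": np.array([0.0, 1.0]),
    "plot":  np.array([1.0, 1.0]),
}

docs = ["great movie", "great plot", "movie plot"]
tfidf = TfidfVectorizer()
weights = tfidf.fit_transform(docs)
vocab = tfidf.vocabulary_  # word -> column index

def tfidf_weighted_vector(doc_index):
    """Average word embeddings, weighting each by its TF-IDF score."""
    total, weight_sum = np.zeros(2), 0.0
    for word, col in vocab.items():
        w = weights[doc_index, col]
        if w > 0 and word in embeddings:
            total += w * embeddings[word]
            weight_sum += w
    return total / weight_sum if weight_sum else total

print(tfidf_weighted_vector(0))
```

Common, low-information words get small TF-IDF weights and therefore barely move the document vector, which is the extra signal the plain average throws away.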
#12 · about 2 minutes
Addressing overfitting and feature leakage in production
Preventing overfitting involves using validation sets, ensuring representative data samples, and checking for feature leakage where a feature inadvertently reveals the outcome.
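The validation-set check described above can be sketched with scikit-learn: compare training and validation accuracy, and treat a large gap as the overfitting signal. The synthetic dataset and the deliberately unconstrained tree are assumptions made for the demonstration:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data standing in for the real text features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# An unconstrained decision tree memorizes the training set.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
train_acc = model.score(X_train, y_train)
val_acc = model.score(X_val, y_val)

# A big train/validation gap is the classic overfitting symptom.
print(f"train={train_acc:.2f} validation={val_acc:.2f}")
```

Feature leakage shows up the opposite way: validation accuracy looks suspiciously perfect, which is the cue to check whether some feature encodes the outcome.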
#13 · about 2 minutes
Handling out-of-vocabulary and rare terms in NLP
For rare or out-of-vocabulary terms that models struggle with, symbolic rule-based approaches can be used as a complementary system to handle important edge cases.
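The complementary rule-based system described above can be as simple as a keyword table consulted before the statistical model; the rules and the stand-in classifier below are hypothetical:

```python
def classify(text, model_predict, rules):
    """Apply hand-written rules first, then fall back to the trained model.

    `rules` maps keyword -> label for rare terms the model was never
    trained on; everything else goes to the statistical model.
    """
    lowered = text.lower()
    for keyword, label in rules.items():
        if keyword in lowered:
            return label
    return model_predict(text)

# Hypothetical rules for domain terms missing from the training vocabulary.
rules = {"zero-day": "security", "quantitative easing": "finance"}
fallback_model = lambda text: "general"  # stand-in for the trained classifier

print(classify("New zero-day exploit found", fallback_model, rules))  # security
print(classify("Cats are great", fallback_model, rules))              # general
```

Because the rules fire before the model, important edge cases get a guaranteed answer while the model still handles the bulk of the traffic.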
#14 · about 3 minutes
Advice for starting a career in data science
Aspiring data scientists should focus on gaining hands-on experience with real-world datasets and building a portfolio of projects to develop an intuition for common issues.