Jodie Burchell
A beginner’s guide to modern natural language processing
#1 · about 5 minutes
Understanding the core challenge of natural language processing
Machine learning models require numerical inputs, so raw text must be converted into a numerical format called a vector or text embedding.
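A toy sketch of the idea: before any modeling, each word has to be mapped to numbers somehow. The vocabulary below is invented for illustration.

```python
# One naive text-to-numbers mapping: give each unique word an integer id.
vocab = {"the": 0, "cat": 1, "sat": 2}      # invented toy vocabulary
sentence = ["the", "cat", "sat"]
ids = [vocab[word] for word in sentence]    # [0, 1, 2]
# Real embeddings go further, replacing each id with a dense float vector.
```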
#2 · about 6 minutes
Exploring bag-of-words methods for text vectorization
Binary and count vectorization create features based on the presence or frequency of words in a document, ignoring their original context.
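A minimal sketch of both schemes, assuming scikit-learn's CountVectorizer (the talk's exact tooling for this step isn't stated here); the two documents are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["cats chase mice", "mice chase mice"]   # invented toy corpus

# Count vectorization: each feature is a word's frequency in the document.
counts = CountVectorizer().fit_transform(docs).toarray()
print(counts)   # [[1 1 1], [0 1 2]]  columns: cats, chase, mice

# Binary vectorization: only presence/absence (0 or 1) is recorded.
binary = CountVectorizer(binary=True).fit_transform(docs).toarray()
print(binary)   # [[1 1 1], [0 1 1]]
```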
#3 · about 4 minutes
How Word2Vec captures word meaning in vector space
The Word2Vec model learns numerical representations for words by analyzing their surrounding context, grouping similar words closer together in a multi-dimensional space.
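Closeness in that space is usually measured with cosine similarity; a sketch with hypothetical 3-dimensional vectors (real Word2Vec embeddings have tens to hundreds of dimensions):

```python
import numpy as np

def cosine_similarity(a, b):
    # Similar words end up with vectors pointing in similar directions.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical embeddings, invented for illustration.
king  = np.array([0.90, 0.80, 0.10])
queen = np.array([0.85, 0.82, 0.15])
apple = np.array([0.10, 0.20, 0.95])
print(cosine_similarity(king, queen))   # high: close together in the space
print(cosine_similarity(king, apple))   # lower: far apart
```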
#4 · about 5 minutes
Training a Word2Vec model in Python using Gensim
A practical demonstration shows how to clean text data and train a custom Word2Vec model to generate embeddings for a specific vocabulary.
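A minimal version of that workflow with Gensim 4.x; the tokenized sentences and hyperparameters below are illustrative, not the talk's exact data or settings.

```python
from gensim.models import Word2Vec

# Pre-tokenized toy corpus standing in for the cleaned text data.
sentences = [
    ["celebrity", "shocks", "fans", "with", "announcement"],
    ["scientists", "publish", "new", "climate", "study"],
    ["fans", "react", "to", "celebrity", "announcement"],
]

# vector_size, window, and min_count are illustrative defaults.
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1)

vector = model.wv["celebrity"]         # the word's 100-dimensional embedding
print(model.wv.most_similar("fans"))   # nearest neighbours in the space
```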
#5 · about 3 minutes
Creating document embeddings by averaging word vectors
A simple yet effective method to represent an entire document is to retrieve the embedding for each word and calculate their average vector.
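A sketch of that averaging step; a plain dict of word vectors is used here so the example runs standalone, but a trained Gensim KeyedVectors object works the same way.

```python
import numpy as np

def document_embedding(tokens, word_vectors, dim=100):
    """Average the embedding of every in-vocabulary token."""
    found = [word_vectors[t] for t in tokens if t in word_vectors]
    if not found:                    # no known words: zero-vector fallback
        return np.zeros(dim)
    return np.mean(found, axis=0)

# Toy vectors, invented for illustration; "chase" is out of vocabulary.
word_vectors = {"cats": np.ones(100), "mice": np.zeros(100)}
doc_vec = document_embedding(["cats", "chase", "mice"], word_vectors)
```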
#6 · about 2 minutes
Evaluating the performance of the Word2Vec classifier
The classifier trained on averaged word embeddings achieves 95% accuracy, with errors often occurring on headlines with misleading topics or tones.
#7 · about 3 minutes
Overcoming context limitations with transformer models
Transformer models use a self-attention mechanism to weigh the importance of other words in a sentence, allowing them to understand a word's meaning in its specific context.
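A stripped-down sketch of scaled dot-product self-attention in NumPy; real transformers add learned query/key/value projections and multiple heads, so here Q = K = V = X for illustration only.

```python
import numpy as np

def self_attention(X):
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)       # pairwise affinity between words
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the words
    return weights @ X                  # each word mixes in its context

X = np.random.rand(4, 8)    # 4 "words", 8-dimensional embeddings
contextual = self_attention(X)
```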
#8 · about 5 minutes
Understanding how the BERT model is pre-trained
BERT learns a deep understanding of language by being pre-trained on tasks like predicting masked words and predicting whether one sentence follows another, enabling it to be fine-tuned for specific applications.
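The masked-word objective can be poked at directly with the Hugging Face fill-mask pipeline; the prompt below is invented, and distilbert-base-uncased is used as a small stand-in for full BERT.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="distilbert-base-uncased")
# The model predicts the most likely tokens for the [MASK] position.
for pred in fill("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```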
#9 · about 7 minutes
Fine-tuning a BERT model with the Transformers library
Using the Hugging Face Transformers library, a pre-trained DistilBERT model is fine-tuned for the clickbait classification task, requiring specific tokenization with attention masks.
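A sketch of the tokenization step that produces those attention masks, using the Transformers AutoTokenizer/AutoModel classes; the two headlines are invented, and the full fine-tuning loop (e.g. via the Trainer API) is omitted.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2   # clickbait vs. not clickbait
)

# Padding/truncation yield fixed-length input_ids plus an attention_mask
# marking which positions are real tokens and which are padding.
batch = tokenizer(
    ["You won't believe what happened next", "Central bank raises rates"],
    padding=True, truncation=True, return_tensors="pt",
)
logits = model(**batch).logits   # one score per class, per headline
```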
#10 · about 2 minutes
Choosing the right text processing model for your task
While the fine-tuned BERT model achieves the highest accuracy at 99%, simpler methods like count vectorization can outperform Word2Vec and may be sufficient depending on the use case.
#11 · about 2 minutes
Using word embeddings to improve downstream NLP tasks
Word embeddings can be combined with other techniques, such as TF-IDF weighting, to extract more signal and improve performance on tasks like sentiment analysis.
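A sketch of TF-IDF-weighted averaging: each word's vector is scaled by its inverse document frequency, so rare, informative words dominate the document embedding. The corpus and random embeddings below are invented to make the example runnable.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_weighted_embedding(doc, word_vectors, analyzer, idf, dim=100):
    tokens = [t for t in analyzer(doc) if t in word_vectors and t in idf]
    if not tokens:
        return np.zeros(dim)
    weights = np.array([idf[t] for t in tokens])
    vectors = np.array([word_vectors[t] for t in tokens])
    return weights @ vectors / weights.sum()   # IDF-weighted average

corpus = ["cats chase mice", "mice chase mice", "dogs chase cats"]
vec = TfidfVectorizer().fit(corpus)
idf = dict(zip(vec.get_feature_names_out(), vec.idf_))
word_vectors = {w: np.random.rand(100) for w in idf}  # stand-in embeddings
doc_vec = tfidf_weighted_embedding("cats chase mice", word_vectors,
                                   vec.build_analyzer(), idf)
```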
#12 · about 2 minutes
Addressing overfitting and feature leakage in production
Preventing overfitting involves using validation sets, ensuring representative data samples, and checking for feature leakage where a feature inadvertently reveals the outcome.
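A minimal sketch of the validation-set discipline, using synthetic stand-in data; the key point is that any fitting (including of vectorizers or scalers) must happen after the split, or information leaks from validation into training.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in features; in practice these would be text embeddings.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Split first; fitting a vectorizer on the full dataset before splitting
# is itself a form of leakage.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = LogisticRegression().fit(X_train, y_train)
print(clf.score(X_val, y_val))   # report validation, not training, accuracy
```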
#13 · about 2 minutes
Handling out-of-vocabulary and rare terms in NLP
For rare or out-of-vocabulary terms that models struggle with, symbolic rule-based approaches can be used as a complementary system to handle important edge cases.
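One way such a hybrid can look: hand-written rules catch known high-stakes phrasings first, and everything else falls through to the learned model. The phrase list and model hook below are hypothetical placeholders.

```python
# Hypothetical rule list for phrasings the model rarely or never saw.
CLICKBAIT_PHRASES = ("you won't believe", "will shock you")

def classify(headline, model_predict):
    text = headline.lower()
    if any(phrase in text for phrase in CLICKBAIT_PHRASES):
        return "clickbait"           # symbolic rule handles the edge case
    return model_predict(headline)   # otherwise defer to the learned model
```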
#14 · about 3 minutes
Advice for starting a career in data science
Aspiring data scientists should focus on gaining hands-on experience with real-world datasets and building a portfolio of projects to develop an intuition for common issues.