Vijay Krishan Gupta & Gauravdeep Singh Lotey
Creating Industry-Ready Solutions with LLMs
#1 · about 3 minutes
Understanding LLMs and the transformer self-attention mechanism
Large Language Models (LLMs) are defined by their parameters and training data, with the transformer's self-attention mechanism being key to resolving ambiguity in language.
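The scaled dot-product self-attention the chapter describes can be sketched in a few lines of NumPy. This is an illustrative single-head version (weights and inputs are random placeholders, not from the talk): every token's query is compared against every token's key, and the resulting softmax weights decide how much of each token's value flows into the output, which is how ambiguous words pick up meaning from context.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # every token scored against every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax -> attention distribution
    return weights @ V                          # each output mixes values by relevance

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                     # 4 tokens, 8-dim embeddings (toy sizes)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)             # shape (4, 8): one vector per token
```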
#2 · about 4 minutes
Exploring the business adoption and emergent abilities of LLMs
Businesses are rapidly adopting LLMs due to their emergent abilities like in-context learning, instruction following, and chain-of-thought reasoning, which go beyond their original design.
#3 · about 9 minutes
Demo of an enterprise assistant for integrated systems
The Simplify Path demo showcases a unified chatbot interface that integrates with various enterprise systems like HRMS, Jira, and Salesforce for both informational queries and transactional tasks.
#4 · about 3 minutes
Demo of a document compliance checker for pharmaceuticals
The Doc Compliance tool validates pharmaceutical documents against a source-of-truth compliance document to ensure all parameters meet regulatory requirements.
#5 · about 3 minutes
Demo of a chatbot builder for any website
Web Water is a product that converts any website into an interactive chatbot by scraping its HTML, text, and media content to answer user questions.
#6 · about 5 minutes
Navigating the common challenges of building with LLMs
Key challenges in developing LLM applications include managing hallucinations, ensuring data privacy for sensitive industries, improving usability, and addressing the lack of repeatability.
#7 · about 7 minutes
Using prompt optimization to improve LLM usability
Prompt optimization techniques, such as defining a role, using zero-shot, few-shot, and chain-of-thought prompting, can significantly improve the quality and relevance of LLM outputs.
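The three prompting patterns the chapter names can be shown as plain template strings. This is a hedged sketch (the role, tickets, and wording are invented examples, not the talk's prompts): zero-shot states the task directly, few-shot adds worked examples, and chain-of-thought asks the model to reason before answering.

```python
# Illustrative prompt templates; the role and example tickets are hypothetical.
role = "You are a support assistant for an enterprise HRMS."

# Zero-shot: task description only.
zero_shot = f"{role}\nClassify the sentiment of this ticket as positive or negative:\n{{ticket}}"

# Few-shot: a couple of labeled examples steer the output format.
few_shot = (
    f"{role}\n"
    "Ticket: 'Payroll was processed on time, thanks!' -> positive\n"
    "Ticket: 'My leave request has been stuck for weeks.' -> negative\n"
    "Ticket: '{ticket}' -> "
)

# Chain-of-thought: explicitly request step-by-step reasoning.
chain_of_thought = (
    f"{role}\n"
    "Question: {question}\n"
    "Let's think step by step before giving the final answer."
)

prompt = few_shot.format(ticket="The portal logged me out mid-request.")
```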
#8 · about 4 minutes
Advanced techniques like RAG, function calling, and fine-tuning
Overcome LLM limitations by using Retrieval-Augmented Generation (RAG) for domain-specific knowledge, function calling for real-time tasks, and fine-tuning for specialized models.
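The function-calling pattern mentioned here can be sketched without any particular LLM SDK: the application advertises tool schemas, the model answers with a structured "call", and the application executes it and returns the result. Everything below is illustrative — `get_leave_balance` is a hypothetical tool and the model's JSON reply is hard-coded to stand in for a real completion.

```python
import json

def get_leave_balance(employee_id: str) -> dict:
    # Stub: a real system would query the HRMS here.
    return {"employee_id": employee_id, "days_remaining": 12}

TOOLS = {"get_leave_balance": get_leave_balance}

# Schema the model would be shown so it knows the tool exists.
tool_schema = {
    "name": "get_leave_balance",
    "description": "Fetch remaining leave days for an employee.",
    "parameters": {"employee_id": {"type": "string"}},
}

# Stand-in for the LLM's structured response to "How many leave days do I have?"
model_reply = json.dumps(
    {"tool": "get_leave_balance", "arguments": {"employee_id": "E-1042"}}
)

call = json.loads(model_reply)
result = TOOLS[call["tool"]](**call["arguments"])  # app executes the requested tool
```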
#9 · about 10 minutes
Code walkthrough for building a RAG-based chatbot
A practical code demonstration shows how to build a RAG pipeline using LangChain, ChromaDB for vector storage, and an open-source Llama 2 model to answer questions from a specific document.
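The retrieval step at the heart of that pipeline can be illustrated without the real stack. The talk's walkthrough uses LangChain, ChromaDB, and Llama 2; in this dependency-free sketch, bag-of-words cosine similarity stands in for real embeddings and the document chunks are invented, so only the retrieve-then-augment flow is faithful.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": word counts in place of a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

chunks = [
    "Batch records must be reviewed by quality assurance before release.",
    "Vector stores index document chunks by embedding similarity.",
    "Jira tickets can be created through the REST API.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]   # stand-in for the vector store

question = "Who must review batch records before release?"
q_vec = embed(question)
best_chunk = max(index, key=lambda pair: cosine(q_vec, pair[1]))[0]

# The retrieved chunk is injected into the prompt sent to the LLM:
prompt = f"Answer using only this context:\n{best_chunk}\n\nQuestion: {question}"
```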
#10 · about 9 minutes
Q&A on integration, offline RAG, and the future of LLMs
The discussion covers integrating LLMs into organizations, running RAG offline, suitability for small businesses, and the evolution towards large action models (LAMs).