Simon A.T. Jiménez

Three years of putting LLMs into Software - Lessons learned

What if LLMs are just a new, unreliable kind of API? Learn the critical lessons from three years of building real-world software with them.

#1 (about 4 minutes)

Understanding the fundamental nature of LLMs

LLMs are unreliable pattern matchers that appear intelligent but lack true understanding, requiring developers to manage context and anticipate failures.

#2 (about 4 minutes)

Controlling LLM output with API parameters

API parameters like temperature and top_p allow for control over the determinism and creativity of LLM responses by manipulating token selection probabilities.
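
As a concrete illustration, the same request can be made nearly deterministic or deliberately varied by adjusting these two parameters. The snippet below is a minimal sketch assuming the OpenAI Python SDK; the model name, prompt, and helper function are illustrative, not taken from the talk.

```python
# Minimal sketch, assuming the OpenAI Python SDK and an OPENAI_API_KEY
# in the environment. Model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

def summarize(text: str, deterministic: bool = True) -> str:
    # temperature rescales the token probability distribution: near 0 the model
    # almost always picks the most probable token; higher values flatten the
    # distribution and increase variety.
    # top_p restricts sampling to the smallest set of tokens whose cumulative
    # probability exceeds p, cutting off the unlikely tail.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarize in one sentence:\n{text}"}],
        temperature=0.0 if deterministic else 1.0,
        top_p=1.0 if deterministic else 0.9,
    )
    return response.choices[0].message.content
```

With `deterministic=True` repeated calls tend to return the same summary, which is usually what you want inside software pipelines; the looser settings are better suited to brainstorming-style features.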

#3 (about 7 minutes)

Viewing LLMs as a new kind of API

LLMs should be treated as a new type of API for text manipulation, not as intelligent agents, because they are advanced pattern matchers with significant limitations.
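
In practice, treating the LLM as an API means wrapping calls the way you would wrap any flaky third-party service: define an output contract, validate the response against it, and retry or fall back when it is violated. The sketch below is an assumption-laden illustration of that pattern (the JSON contract, function names, and retry policy are mine, not the speaker's).

```python
# Sketch: call an LLM like an unreliable external API.
# The client call is injected so any provider can be used.
import json
from typing import Callable, Optional

def extract_invoice_total(
    invoice_text: str,
    call_llm: Callable[[str], str],  # your LLM client call, injected
    retries: int = 2,
) -> Optional[float]:
    prompt = (
        'Return ONLY a JSON object like {"total": 123.45} '
        "with the total amount of this invoice:\n\n" + invoice_text
    )
    for _ in range(retries + 1):
        raw = call_llm(prompt)
        try:
            # Validate the contract instead of trusting the output.
            return float(json.loads(raw)["total"])
        except (json.JSONDecodeError, KeyError, TypeError, ValueError):
            continue  # malformed output: retry, then give up
    return None  # caller decides the fallback, e.g. manual review
```

The point of the design is that the rest of the system never sees raw model output, only a value that has already passed validation or an explicit failure it must handle.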

#4 (about 5 minutes)

Implementing practical LLM use cases in software

LLMs can be used for tasks like audio transcription, image analysis for OCR, and text reformulation by providing clear instructions and examples in the prompt.
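
For the text-reformulation use case, "clear instructions and examples in the prompt" typically means a few-shot template like the sketch below. The wording and the example pair are illustrative, not quoted from the talk.

```python
# Sketch of a few-shot reformulation prompt: explicit instructions plus one
# worked example so the model can pattern-match the desired output format.
REFORMULATE_PROMPT = """Rewrite the customer message in polite, formal English.
Keep the meaning. Do not add information.

Example
Input: "this thing broke after 2 days, i want my money back now"
Output: "The product stopped working after two days; I would like to request a refund."

Input: "{message}"
Output:"""

def build_prompt(message: str) -> str:
    return REFORMULATE_PROMPT.format(message=message)
```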

#5 (about 4 minutes)

Navigating legal compliance and data privacy

Using paid APIs with data privacy contracts, implementing human-in-the-loop workflows, and understanding the European AI Act are crucial for legal compliance.

#6 (about 2 minutes)

Understanding the security risks of AI integrations

Integrating LLMs with external APIs or internal data creates significant security risks like prompt injection, requiring careful control over the AI's permissions and actions.
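
One common way to keep that control is to never let model output trigger an action directly: the application checks every proposed action against an explicit allow-list, and anything with side effects goes to a human. The gatekeeping sketch below uses hypothetical tool names and helpers; it illustrates the idea rather than the speaker's implementation.

```python
# Sketch: the model may *suggest* a tool call, but only allow-listed,
# read-only actions run automatically; side effects need human review.
# Tool names and helper functions are hypothetical.
ALLOWED_ACTIONS = {"search_docs", "get_order_status"}  # read-only tools
REQUIRES_HUMAN = {"send_email", "refund_order"}        # side effects

def run_tool(name: str, args: dict) -> str:
    return f"ran {name} with {args}"  # placeholder implementation

def queue_for_human_review(action: dict) -> str:
    return f"queued for review: {action['name']}"  # placeholder implementation

def dispatch(proposed_action: dict) -> str:
    name = proposed_action.get("name")
    if name in ALLOWED_ACTIONS:
        return run_tool(name, proposed_action.get("arguments", {}))
    if name in REQUIRES_HUMAN:
        return queue_for_human_review(proposed_action)
    # Anything else, including actions smuggled in via prompt injection,
    # is rejected outright.
    return "Action not permitted."
```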
