Anshul Jindal

LLMOps-driven fine-tuning, evaluation, and inference with NVIDIA NIM & NeMo Microservices

What if deploying custom LLMs were fully automated? Learn to build a repeatable, end-to-end pipeline from fine-tuning to inference with NVIDIA NeMo and NIM.
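As a taste of the inference stage covered in the talk, here is a minimal sketch of querying a deployed NIM endpoint through its OpenAI-compatible chat-completions API. The base URL and model name are placeholders assumed for illustration, not values from the talk.

```python
# Minimal sketch: call a NIM inference endpoint via its OpenAI-compatible API.
# NIM_BASE_URL and MODEL_NAME are assumed placeholder values.
import requests

NIM_BASE_URL = "http://localhost:8000/v1"   # assumed local NIM deployment
MODEL_NAME = "my-finetuned-model"           # hypothetical fine-tuned model id

response = requests.post(
    f"{NIM_BASE_URL}/chat/completions",
    json={
        "model": MODEL_NAME,
        "messages": [
            {"role": "user", "content": "Summarize LLMOps in one sentence."}
        ],
        "max_tokens": 128,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```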
