LLM Ops: How to Monitor & Evaluate Large Language Models in Production | Jeremy @ Pattern

LLM Ops is here, and it's reshaping how we manage AI in production. In this Forge Utah MLOps Chapter talk, Jeremy, Lead AI Engineer at Pattern, introduces the emerging field of LLM Ops: the practices, tools, and challenges of monitoring and evaluating large language models in real-world applications. Jeremy shares insights from building groundbreaking AI-powered products like Pattern's "Content Brief" and offers a practical look at how LLMs can be responsibly deployed, measured, and improved over time.

What you'll learn:
- What LLM Ops is, and why it matters
- How to think about observability for generative AI
- Tools and frameworks for evaluation
- Lessons from deploying LLMs in ecommerce and content workflows

👨‍💻 Speaker: Jeremy (Lead AI Engineer @ Pattern)
🎓 Background: Brigham Young University AI researcher (deepfake detection)
🏢 Hosted by: Forge Utah. Join the Slack to get involved in the MLOps Utah chapter!

#LLMOps #MLOps #AIEngineering #MachineLearning #LLM #Observability #ForgeUtah #UtahTech #OpenSourceAI #AIinProduction