Automated Reasoning to Prevent LLM Hallucination with Byron Cook - 712

Today, we're joined by Byron Cook, VP and Distinguished Scientist in the Automated Reasoning Group at AWS, to dig into the underlying technology behind the newly announced Automated Reasoning Checks feature of Amazon Bedrock Guardrails. Automated Reasoning Checks uses mathematical proofs to help LLM users safeguard against hallucinations. We explore recent advancements in the field of automated reasoning and some of the ways it is applied, both broadly and across AWS, where it is used to enhance security, cryptography, virtualization, and more. We discuss how the new feature helps users generate, refine, validate, and formalize policies, and how those policies can be deployed alongside LLM applications to ensure the accuracy of generated text. Finally, Byron shares the benchmarks the team has applied, the use of techniques like 'constrained coding' and 'backtracking,' and the future co-evolution of automated reasoning and generative AI.

🎧 / 🎥 Listen or watch the full episode on our page: https://twimlai.com/go/712

🔔 Subscribe to our channel for more great content just like this: https://youtube.com/twimlai?sub_confirmation=1

🗣️ CONNECT WITH US!
===============================
Subscribe to the TWIML AI Podcast: https://twimlai.com/podcast/twimlai/
Follow us on Twitter: https://twitter.com/twimlai
Follow us on LinkedIn: https://www.linkedin.com/company/twimlai/
Join our Slack Community: https://twimlai.com/community/
Subscribe to our newsletter: https://twimlai.com/newsletter/
Want to get in touch?
Send us a message: https://twimlai.com/contact/

📖 CHAPTERS
===============================
00:00 - Introduction
02:08 - Automated reasoning
08:01 - Applications of automated reasoning at AWS
12:13 - Breakthroughs in automated reasoning
17:55 - Relation to RL
19:17 - Automated reasoning in Bedrock guardrails
23:02 - Validating the policies
26:02 - Automated reasoning in GenAI
33:57 - Automated reasoning eliminates hallucinations
45:02 - Intractable problems and real-world customer insights
47:09 - Benchmarks of automated reasoning in genAI
50:18 - Constrained coding
52:11 - Future directions

🔗 LINKS & RESOURCES
===============================
AWS re:Invent 2024 - https://reinvent.awsevents.com/
Prevent factual errors from LLM hallucinations with mathematically sound Automated Reasoning checks - https://aws.amazon.com/blogs/aws/prevent-factual-errors-from-llm-hallucinations-with-mathematically-sound-automated-reasoning-checks-preview/
Amazon Bedrock Guardrails - https://aws.amazon.com/bedrock/guardrails/

📸 Camera: https://amzn.to/3TQ3zsg
🎙️ Microphone: https://amzn.to/3t5zXeV
🚦 Lights: https://amzn.to/3TQlX49
🎛️ Audio Interface: https://amzn.to/3TVFAIq
🎚️ Stream Deck: https://amzn.to/3zzm7F5
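To give a flavor of the core idea discussed in the episode: Automated Reasoning Checks compiles a policy into formal logic and then proves whether a claim in generated text is consistent with it. The real feature uses industrial-strength solvers; below is only a toy, self-contained Python sketch of that consistency check, with a hypothetical "senior employees are remote-eligible" policy invented for illustration.

```python
from itertools import product

# Hypothetical policy rule: every senior employee is eligible for remote work.
def policy(senior: bool, remote_eligible: bool) -> bool:
    return (not senior) or remote_eligible  # senior => remote_eligible

# Claim extracted from hypothetical LLM output:
# "a senior employee is not eligible for remote work."
def claim(senior: bool, remote_eligible: bool) -> bool:
    return senior and not remote_eligible

def consistent(*formulas) -> bool:
    # Exhaustive model search over the two Boolean variables:
    # the formulas are jointly satisfiable iff some truth assignment
    # makes every one of them true.
    return any(all(f(*vals) for f in formulas)
               for vals in product([False, True], repeat=2))

print(consistent(policy))         # True: the policy alone is satisfiable
print(consistent(policy, claim))  # False: the claim provably contradicts it
```

An unsatisfiable conjunction of policy and claim is a mathematical proof that the generated statement violates the policy, which is how such a check can flag a hallucination rather than merely score its likelihood.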