RAG is a powerful technique that supplies your agent with external information and can improve agent performance. However, retrieval on its own can pull in documents that are irrelevant to the user's question.
What if you could catch and fix that before the agent generates an answer? In this video, we show how to use in-the-loop evaluations to filter out noisy retrieval results and boost answer quality.
OpenEvals: https://github.com/langchain-ai/openevals
Corrective RAG agent repo: https://github.com/jacoblee93/corrective-local-rag-qwen?tab=readme-ov-file
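The corrective step described above amounts to grading each retrieved document against the question and dropping the ones that don't pass. Here is a minimal sketch of that loop, not the repo's actual code: the keyword-overlap grader below is a hypothetical stand-in for the LLM-as-judge evaluators a library like OpenEvals provides.

```python
from dataclasses import dataclass


@dataclass
class Document:
    content: str


def is_relevant(question: str, doc: Document) -> bool:
    # Placeholder grader: simple keyword overlap stands in for an
    # LLM-as-judge relevance score (used only for illustration).
    question_terms = set(question.lower().split())
    doc_terms = set(doc.content.lower().split())
    return len(question_terms & doc_terms) >= 2


def reflect_on_retrieval(question: str, docs: list[Document]) -> list[Document]:
    """Corrective step: keep only documents the grader marks relevant."""
    return [doc for doc in docs if is_relevant(question, doc)]


docs = [
    Document("LangGraph lets you build stateful agent workflows."),
    Document("Bananas are rich in potassium."),
]
filtered = reflect_on_retrieval("How do I build agent workflows with LangGraph?", docs)
print([d.content for d in filtered])
```

The key design point is that the grader runs between retrieval and generation, so off-topic documents never reach the model's context; in production you would swap the placeholder for an actual evaluator.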
0:00 – Intro: What we’re building today
0:28 – What is RAG? (Concept overview)
1:45 – Evaluating RAG with OpenEvals
2:35 – Baseline Agent (No Reflection)
3:26 – Improved Architecture: Reflection Steps
4:09 – Code Walkthrough
6:18 – Live Demo & Trace in LangSmith