Reliable, fully local RAG agents with Llama 3.2 3B
Meta has released Llama 3.2, a new set of compact models designed for on-device use cases such as locally running assistants. Here, we show how LangGraph can enable these kinds of local assistants by building a multi-step RAG agent that combines ideas from three advanced RAG papers (Adaptive RAG, Corrective RAG, and Self-RAG) into a single control flow, and we show that LangGraph makes it possible to run this complex agent reliably on a local model. A minimal sketch of the control flow is shown below.
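The sketch below illustrates the general shape of such a control flow in LangGraph: retrieve, grade documents for relevance (the Corrective RAG idea), fall back to web search when retrieval fails, then generate. The model tag `llama3.2:3b` served via Ollama, the placeholder retrieval and web-search nodes, and the grading prompts are assumptions for illustration; the linked tutorial builds a real local vector store and adds further self-checks (e.g. hallucination grading) after generation.

```python
# Minimal LangGraph sketch of a local, multi-step RAG control flow.
# Assumptions: Ollama is serving "llama3.2:3b"; retrieval and web search
# are stubbed out here (the full tutorial uses a local vector store).

from typing import List
from typing_extensions import TypedDict

from langchain_ollama import ChatOllama
from langgraph.graph import StateGraph, START, END

llm = ChatOllama(model="llama3.2:3b", temperature=0)  # assumed local model tag


class RAGState(TypedDict):
    question: str
    documents: List[str]
    generation: str


def retrieve(state: RAGState) -> dict:
    # Placeholder retrieval; swap in a real local vector store retriever.
    docs = ["LangGraph lets you express agents as explicit graphs of steps."]
    return {"documents": docs}


def grade_documents(state: RAGState) -> dict:
    # Corrective-RAG idea: keep only documents the LLM judges relevant.
    relevant = []
    for doc in state["documents"]:
        verdict = llm.invoke(
            "Is this document relevant to the question? Answer yes or no.\n"
            f"Question: {state['question']}\nDocument: {doc}"
        )
        if "yes" in verdict.content.lower():
            relevant.append(doc)
    return {"documents": relevant}


def web_search(state: RAGState) -> dict:
    # Fallback when retrieval fails; a real implementation would call a search tool.
    return {"documents": ["(web search results would go here)"]}


def generate(state: RAGState) -> dict:
    context = "\n\n".join(state["documents"])
    answer = llm.invoke(
        "Answer the question using only this context.\n"
        f"Context:\n{context}\n\nQuestion: {state['question']}"
    )
    return {"generation": answer.content}


def decide_to_generate(state: RAGState) -> str:
    # Adaptive/corrective routing: generate if relevant docs remain, else search.
    return "generate" if state["documents"] else "web_search"


graph = StateGraph(RAGState)
graph.add_node("retrieve", retrieve)
graph.add_node("grade_documents", grade_documents)
graph.add_node("web_search", web_search)
graph.add_node("generate", generate)

graph.add_edge(START, "retrieve")
graph.add_edge("retrieve", "grade_documents")
graph.add_conditional_edges(
    "grade_documents",
    decide_to_generate,
    {"generate": "generate", "web_search": "web_search"},
)
graph.add_edge("web_search", "generate")
graph.add_edge("generate", END)

app = graph.compile()
print(app.invoke({"question": "What does LangGraph do?"})["generation"])
```

Because every step is an explicit node with an explicit routing decision, a small local model only ever has to handle one narrow task at a time (grade, route, or generate), which is what makes this kind of agent reliable on a 3B model.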
Code:
https://langchain-ai.github.io/langgraph/tutorials/rag/langgraph_adaptive_rag_local/
Llama 3.2:
https://huggingface.co/blog/llama32#what-is-special-about-llama-32-1b-and-3b
Full course on LangGraph:
https://academy.langchain.com/courses/intro-to-langgraph