Anthropic's new improved RAG: Explained (for all LLMs)

5,588 listens
NEW contextual retrieval for a better RAG experience. Anthropic's new, improved RAG explained in detail, with prompt caching, contextual BM25 (cBM25), and contextual reranking (cReRanking), plus code and the official cookbook by Anthropic. This new idea can easily be implemented with any other LLM (from Google to Mistral).

00:00 The Problem with RAG
02:55 Add BM25 for exact term match
05:15 My explanation of the Vector Space failure
09:00 Anthropic's new Contextual Retrieval (new idea)
12:33 Generating the prompt for Contextual Retrieval
13:55 Detailed code for Contextual Retrieval
17:10 Contextual Retrieval Preprocessing
17:50 Prompt caching (explained)
20:42 Absolute improvements
22:10 ReRanking for Contextual prompts
23:39 Recommendations for NEW ContextualRAG
29:18 Performance benchmarks ContextualRAG
32:35 Anthropic GitHub cookbook (code)

All rights w/ authors:
Introducing Contextual Retrieval
https://www.anthropic.com/news/contextual-retrieval
https://github.com/anthropics/anthropic-cookbook/blob/main/skills/contextual-embeddings/guide.ipynb
https://github.com/anthropics/anthropic-cookbook/tree/main/skills/contextual-embeddings/contextual-rag-lambda-function

#ai #anthropic #coding
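The core idea of contextual retrieval is that each chunk gets a short, document-aware context sentence prepended before indexing, so lexical search (BM25) can match terms that live in the document but not in the isolated chunk. Below is a minimal, self-contained sketch of that effect. Assumptions to note: the context sentences are hard-coded here, whereas in Anthropic's pipeline they are generated by Claude from the full document (with prompt caching to amortize cost); the BM25 implementation is a plain-Python Okapi BM25, not the library used in the cookbook; the corpus and query are toy examples.

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase and split on whitespace, stripping basic punctuation."""
    return [t.strip(".,;:'\"()") for t in text.lower().split()]

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Okapi BM25 score of each tokenized document against the query."""
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    df = Counter()                       # document frequency per term
    for d in docs_tokens:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

chunks = [
    "the company's revenue grew by 3% over the previous quarter",
    "headcount remained flat compared to the previous quarter",
]

# Hypothetical LLM-generated contexts (illustrative only -- in the real
# pipeline Claude produces these from the whole document):
contexts = [
    "This chunk is from ACME Corp's Q2 2023 SEC filing.",
    "This chunk is from ACME Corp's Q2 2023 SEC filing.",
]
contextualized = [ctx + " " + ch for ctx, ch in zip(contexts, chunks)]

query = tokenize("ACME Q2 2023 revenue growth")
bare_scores = bm25_scores(query, [tokenize(c) for c in chunks])
ctx_scores = bm25_scores(query, [tokenize(c) for c in contextualized])
```

The bare chunks never mention "ACME", "Q2", or "2023", so a query carrying those terms barely matches them; after prepending the context sentence, the revenue chunk picks up those terms and its BM25 score rises, which is exactly the failure mode contextual BM25 is meant to fix.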