In this video, we will build a RAG (Retrieval-Augmented Generation) system using DeepSeek, LangChain, and Streamlit to chat with PDFs and answer complex questions about your local documents. I will guide you step by step through setting up the DeepSeek-R1 model with Ollama, which features strong reasoning capabilities, integrating it into a LangChain-powered RAG pipeline, and then building a simple Streamlit interface so you can query your PDFs in real time. If you're curious about DeepSeek, reasoning models, LangChain, or how to build your own AI chatbot that handles complicated queries, this video is for you.
You can find the source code here: https://github.com/NarimanN2/ollama-playground
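For a quick reference, here is a minimal sketch of the kind of pipeline the video builds, assuming the langchain-ollama, langchain-community, pdfplumber, and streamlit packages and a locally pulled deepseek-r1:1.5b model. The model tag, chunk sizes, prompt, and temp-file path are illustrative, not the exact code from the video:

```python
# Hedged sketch: chat with a PDF using DeepSeek-R1 (via Ollama), LangChain, and Streamlit.
# Assumes `ollama pull deepseek-r1:1.5b` has been run; model tag and parameters are illustrative.
import streamlit as st
from langchain_community.document_loaders import PDFPlumberLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import OllamaEmbeddings, OllamaLLM

embeddings = OllamaEmbeddings(model="deepseek-r1:1.5b")
vector_store = InMemoryVectorStore(embeddings)
model = OllamaLLM(model="deepseek-r1:1.5b")

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below. "
    "If the answer is not in the context, say you don't know.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)

uploaded = st.file_uploader("Upload a PDF", type="pdf")
if uploaded:
    # PDFPlumberLoader expects a file path, so persist the upload to a temp file first.
    path = f"/tmp/{uploaded.name}"
    with open(path, "wb") as f:
        f.write(uploaded.getvalue())

    # Load, split, and index the PDF so relevant chunks can be retrieved per question.
    docs = PDFPlumberLoader(path).load()
    chunks = RecursiveCharacterTextSplitter(
        chunk_size=1000, chunk_overlap=200  # illustrative chunking parameters
    ).split_documents(docs)
    vector_store.add_documents(chunks)

    question = st.chat_input("Ask a question about the PDF")
    if question:
        st.chat_message("user").write(question)
        related = vector_store.similarity_search(question, k=4)
        context = "\n\n".join(doc.page_content for doc in related)
        answer = (prompt | model).invoke({"question": question, "context": context})
        st.chat_message("assistant").write(answer)
```

Save it as something like app.py and start it with `streamlit run app.py`; see the repository above for the full version shown in the video.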
#deepseek #ollama #langchain #python #streamlit
0:00 Demo
1:06 Introduction
1:56 How to Run DeepSeek Models
3:16 Project Overview
7:31 Build RAG with DeepSeek and LangChain
14:17 Chatbot with Streamlit
17:29 Chat with PDF in Action
22:28 Conclusion