The Model Context Protocol (MCP) is an open, standardized protocol that connects AI models with various data sources and tools. It functions like a "USB-C port for AI applications," allowing LLMs to seamlessly interact with local files, databases, APIs, and custom tools. But it can be challenging to understand at first. Here, we break down how MCP works, build an MCP server for the LangGraph docs from scratch, and connect it to Claude, Cursor, and Windsurf.
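Under the hood, MCP is a client-server protocol built on JSON-RPC 2.0, typically exchanged over stdio: the host application initializes the server, lists its tools, and calls them on the model's behalf. The sketch below shows the rough shape of those messages using only the standard library; the method names come from the MCP specification, but the payloads are simplified and the tool name (`langgraph_query_tool`) is illustrative.

```python
import json

def make_request(request_id, method, params):
    """Build a JSON-RPC 2.0 request object, the envelope MCP messages use."""
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}

# 1. The host initializes the server and negotiates a protocol version.
#    (Real initialize params also include capabilities; simplified here.)
init = make_request(1, "initialize", {
    "protocolVersion": "2024-11-05",
    "clientInfo": {"name": "example-host", "version": "0.1"},  # illustrative host
})

# 2. The host asks which tools the server exposes.
list_tools = make_request(2, "tools/list", {})

# 3. The host invokes a tool with arguments chosen by the model.
call = make_request(3, "tools/call", {
    "name": "langgraph_query_tool",            # assumed tool name
    "arguments": {"query": "What is LangGraph?"},
})

for msg in (init, list_tools, call):
    print(json.dumps(msg))
```

The real exchange also carries responses and notifications, but this request flow (initialize, then tools/list, then tools/call) is the core loop a host runs against any MCP server.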
Video notes:
https://mirror-feeling-d80.notion.site/MCP-From-Scratch-1b9808527b178040b5baf83a991ed3b2?pvs=4
Chapters:
00:00 - Introduction to MCP
00:21 - Demo: MCP Tool in Cursor
00:40 - Demo: MCP Tool in Windsurf
00:54 - Demo: MCP Tool in Claude Desktop
01:15 - Motivation for MCP
01:38 - Building a Tool from Scratch
02:00 - Loading LangGraph Docs and Creating a Vector Store
02:44 - Testing the Basic Vector Store Query
03:00 - Creating a LangChain Tool
03:22 - Binding Tools to LLMs
04:00 - Bridge to MCP: Connecting Tools to AI Applications
04:24 - MCP as a Client-Server Protocol
05:00 - How MCP Works with Host Applications
05:37 - Server Initialization by Host Applications
06:05 - Defining an MCP Server
06:41 - Adding Resources to MCP
07:05 - Running the MCP Inspector
07:27 - Testing Tools in the Inspector
08:06 - Configuring MCP Servers for Different Hosts
08:48 - Demo: MCP Tool in Cursor (Revisited)
09:10 - Demo: MCP Server in Windsurf (Revisited)
09:32 - Demo: MCP in Claude Desktop (Revisited)
09:56 - Using MCP Resources in Claude Desktop
10:19 - Recap: Augmenting LLMs with Context and Tools
10:57 - Conclusion and Final Thoughts
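For reference, each host application (chapters 08:06 onward) registers MCP servers through a small JSON config, typically under an `mcpServers` key. A hedged example, assuming the server lives in an illustrative script path and is launched with `uv` (the server name, path, and args are placeholders, not the exact command from the video):

```json
{
  "mcpServers": {
    "langgraph-docs-mcp": {
      "command": "uv",
      "args": ["run", "--with", "mcp", "python", "/absolute/path/to/langgraph_mcp_server.py"]
    }
  }
}
```

Claude Desktop reads this from its `claude_desktop_config.json`, while Cursor and Windsurf have their own MCP settings files with the same overall shape.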