I trained an AI to think like me — here’s what happened


55,067 listens
I fine-tuned a local LLM on my Obsidian second brain to explore the future of augmented intelligence. This is a walkthrough of fine-tuning DeepSeek R1 Llama 8B, a look at embedding visualizations, and my reflections on what this means for the future of personal AI. I also cover how I use AI today and what we need for true augmented intelligence.

Key topics:
- Fine-tuning LLMs on personal knowledge bases
- Visualizing note embeddings and knowledge clusters
- Creating targeted QA datasets
- Using Llama Factory for efficient fine-tuning
- How I use AI today
- The future of augmented intelligence

🕐 TIMESTAMPS:
0:37 - My Obsidian Second Brain
1:00 - Creating Embeddings
2:03 - Exploring Embeddings & Note Clusters
5:43 - Building QA Dataset
6:17 - Creating QA Pairs
7:04 - Fine-tuning Process
9:44 - Was it Actually Useful?
10:10 - How I Use AI in my Second Brain Today
10:45 - How I Use AI for Augmented Intelligence
12:07 - Future of Augmented Intelligence

Code lives here: https://github.com/bitsofchris/deep-learning/tree/main/code/06_obsidian-rag-fine-tuning
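The pipeline starts from the Obsidian vault itself: reading every Markdown note and splitting it into chunks small enough to embed. A minimal sketch of that step might look like this; the paragraph-based splitting and the 1000-character chunk size are my illustrative assumptions, not the repo's actual logic:

```python
from pathlib import Path

def load_vault_notes(vault_dir: str) -> dict[str, str]:
    """Read every Markdown note in an Obsidian vault into {note name: text}."""
    return {
        p.stem: p.read_text(encoding="utf-8")
        for p in Path(vault_dir).rglob("*.md")
    }

def chunk_text(text: str, max_chars: int = 1000) -> list[str]:
    """Split one note into paragraph-aligned chunks, each under max_chars
    (a single oversized paragraph still becomes its own chunk)."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks
```

Each chunk, rather than each whole note, then becomes one unit for embedding and for QA-pair generation.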
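The "Creating Embeddings" and note-cluster exploration steps (0:37–5:43) can be sketched roughly as follows. This is not the repo's code: it assumes the note-chunk embeddings have already been computed (for example with a sentence-transformers model), stands in random vectors for them, and uses a plain-NumPy SVD as a minimal PCA in place of a dedicated PCA/UMAP library:

```python
import numpy as np

def project_2d(embeddings: np.ndarray) -> np.ndarray:
    """Project high-dimensional note embeddings onto their top-2
    principal components (PCA via SVD) for a 2D scatter plot."""
    centered = embeddings - embeddings.mean(axis=0)
    # Right singular vectors of the centered matrix are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

# Stand-in for real embeddings of note chunks, e.g. 384-dim
# sentence-transformer vectors; shape is (num_chunks, embedding_dim).
rng = np.random.default_rng(0)
fake_embeddings = rng.normal(size=(200, 384))
coords = project_2d(fake_embeddings)  # one 2D point per note chunk
```

Coloring the resulting points by folder or tag is one simple way to see whether related notes actually land near each other.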
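For the QA dataset and fine-tuning steps (5:43–7:04), LLaMA-Factory can consume Alpaca-style JSON records with `instruction`/`input`/`output` keys. A minimal sketch of packaging QA pairs into that shape might look like this; in the video the pairs are generated from note chunks, while the pair and filename below are purely illustrative:

```python
import json

def to_alpaca(qa_pairs: list[tuple[str, str]]) -> list[dict]:
    """Convert (question, answer) tuples into Alpaca-style records
    for supervised fine-tuning with LLaMA-Factory."""
    return [
        {"instruction": question, "input": "", "output": answer}
        for question, answer in qa_pairs
    ]

# Illustrative QA pair; real pairs would be generated from vault chunks.
pairs = [("What is a second brain?",
          "A personal knowledge base of linked notes, e.g. an Obsidian vault.")]
records = to_alpaca(pairs)

with open("qa_dataset.json", "w", encoding="utf-8") as f:
    json.dump(records, f, indent=2, ensure_ascii=False)
```

The resulting JSON file is then registered in LLaMA-Factory's dataset config and referenced by name in the fine-tuning run.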