The future of AI and AI agents is running everything locally: your LLMs, databases, agent tooling, knowledge bases, and automation platforms. No cloud dependencies, no data leaving your machine, and full control over your AI infrastructure. There are pros and cons to local vs. cloud AI, but the advantages of cloud AI are shrinking fast, and we're heading toward a future where running your own LLMs is a must - so master this now.
In this masterclass, I'll walk you through everything you need to know about local AI: what it is, why it's critical, how you can run your own LLMs and infrastructure, how to create local AI agents with both n8n and Python, and finally how to deploy everything to a private server so you can productionize your local AI agents.
Nothing in this video is sponsored - all platforms, LLMs, and frameworks/tools are my genuine recommendations for running all of your AI locally.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you're looking to join a community for early AI adopters to master AI & AI Agents and transform your career or business, check out Dynamous here:
https://dynamous.ai
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are a LOT of resources for this masterclass. Here they are!
The Local AI Package:
https://github.com/coleam00/local-ai-packaged
The agents built in this masterclass:
https://github.com/coleam00/ottomator-agents/tree/main/python-local-ai-agent
The full parts list for the PC I built to run local LLMs (I bought the 3090s used for $700 each):
https://pcpartpicker.com/user/coleam00/saved/#view=3zHNvK
~~~~~~~
Ollama FAQs: https://github.com/ollama/ollama/blob/main/docs/faq.md
Ollama OpenAI API compatibility (see the Python sketch after this list): https://ollama.com/blog/openai-compatibility
Supabase self-hosting docs: https://supabase.com/docs/guides/self-hosting
My YouTube video for RAG with Local AI:
https://youtu.be/T2QWhXpnT5I
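Quick illustration of the OpenAI API compatibility linked above: Ollama serves an OpenAI-style endpoint at http://localhost:11434/v1, so the official openai Python SDK can talk to a local model. A minimal sketch, assuming Ollama is running and you've already pulled qwen3 (swap in any model from the list below - the api_key value is just a required placeholder, not a real key):

# pip install openai
from openai import OpenAI

# Point the OpenAI SDK at the local Ollama server instead of api.openai.com.
# Ollama ignores the API key, but the SDK requires a non-empty value.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="qwen3",  # any model you've pulled with `ollama pull`
    messages=[{"role": "user", "content": "Say hello from a local LLM!"}],
)

print(response.choices[0].message.content)

This is exactly why the agent frameworks covered in the video work with local LLMs out of the box: anything that speaks the OpenAI API can be pointed at Ollama by changing one base URL.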
~~~~~~~
Models on Ollama referenced in this video (see the quantized-pull sketch after this list):
DeepSeek R1: https://ollama.com/library/deepseek-r1
Qwen 3: https://ollama.com/library/qwen3
Qwen 2.5: https://ollama.com/library/qwen2.5
Mistral: https://ollama.com/library/mistral
Devstral: https://ollama.com/library/devstral
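The quantization chapters (28:07 and 33:08) cover choosing quantized tags for these models. Here's a small sketch using the official ollama Python client (pip install ollama) - the exact tag name below is an assumption for illustration, so check the model pages above for the tags each model actually offers:

# pip install ollama
import ollama

# Pull a specific quantized build instead of the default tag.
# Tag shown is an assumption - browse the model's Ollama page for real tags.
ollama.pull("qwen2.5:7b-instruct-q4_K_M")

# Quick chat to confirm the model runs.
response = ollama.chat(
    model="qwen2.5:7b-instruct-q4_K_M",
    messages=[{"role": "user", "content": "What quantization are you running at?"}],
)
print(response["message"]["content"])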
~~~~~~~
Cloud platforms mentioned in this video for hosting the Local AI Package:
DigitalOcean (CPU and GPU instances): https://www.digitalocean.com/
TensorDock (GPU instances): https://tensordock.com/
Hostinger (CPU instances): https://www.hostinger.com/vps-hosting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
00:00 - Welcome to the Local AI Masterclass
00:41 - Agenda for the Local AI Masterclass
03:23 - Dynamous AI Agent Mastery
04:26 - What is Local AI?
06:08 - Run Your 1st Local LLM in 5 Minutes w/ Ollama
10:11 - Why Local AI? (Local AI vs. Cloud AI)
16:26 - Hardware Requirements for Local LLMs
24:29 - Specific Local LLM Recommendations
28:07 - Quantization (Run Bigger LLMs)
33:08 - Downloading Quantized LLMs in Ollama
35:16 - Offloading (Splitting LLMs between GPU + CPU)
37:11 - Critical Ollama Configuration
41:45 - Ollama's OpenAI API Compatibility
45:58 - OpenAI Compatible Demo
53:20 - Introducing the Local AI Package
57:42 - Instructions for Installing the Local AI Package
1:09:07 - Customizing the Local AI Package
1:11:59 - Running the Local AI Package
1:20:24 - Testing Our Local AI Services
1:24:41 - Testing Ollama within Open WebUI
1:29:49 - Building a Local n8n AI Agent
1:47:18 - Building a Local Python AI Agent
1:57:37 - Containerizing Our Local Python Agent
2:02:31 - Introduction to Deploying & Cloud Provider Options
2:07:11 - Deploying the Local AI Package to the Cloud
2:23:18 - Testing Our Deployed Local AI Package
2:25:32 - Deploying Our Python AI Agent to the Cloud
2:32:12 - Testing Our Full Agent Setup
2:36:08 - Additional Resources
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Join me as I push the limits of what is possible with AI.
I'll be uploading videos every Wednesday at 7:00 PM CDT!