Run LLMs Locally with Docker Model Runner | Simplify AI Dev with Docker Desktop
Docker just made running LLMs locally a breeze!
In this video, I walk you through Docker's latest feature — Docker Model Runner — which allows developers to run and test large language models (LLMs) right from their local environment using Docker Desktop 4.40.
Guest speaker: Kevin Wittek (https://www.linkedin.com/in/kevin-wittek/)
We'll explore:
- Why Docker Model Runner?
- What Docker Model Runner is
- How it simplifies local AI development
- GPU acceleration on Apple silicon
- Model packaging as OCI artifacts
- Integration with Hugging Face
- The future roadmap
Whether you're building GenAI apps or experimenting with LLMs, this is a game-changer for your local dev loop.
📌 Try Docker Model Runner: https://docs.docker.com/desktop/features/model-runner/
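Quick start (a rough sketch, assuming Docker Desktop 4.40+ with Model Runner enabled; ai/smollm2 is just an example model from Docker Hub's ai/ namespace):

docker model pull ai/smollm2           # pull a model packaged as an OCI artifact
docker model list                      # list the models available locally
docker model run ai/smollm2 "Hello!"   # send a one-shot prompt from the CLI

Model Runner also exposes an OpenAI-compatible API, so existing GenAI apps can point at it. Assuming host-side TCP access is enabled in Docker Desktop settings (default port 12434):

curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ai/smollm2", "messages": [{"role": "user", "content": "Hello!"}]}'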
Subscribe for more hands-on content around AI, Kubernetes, and cloud-native tools!
►►►Connect with me ►►►
► Kubesimplify: https://kubesimplify.com/newsletter
► Newsletter: https://saiyampathak.com/newsletter
► Discord: https://saiyampathak.com/discord
► Twitch: https://saiyampathak.com/twitch
► YouTube: https://saiyampathak.com/youtube
► GitHub: https://github.com/saiyam1814
► LinkedIn: https://www.linkedin.com/in/saiyampathak/
► Website: https://saiyampathak.medium.com/
► Instagram: http://instagram.com/saiyampathak/
► Twitter: https://twitter.com/saiyampathak