ComfyUI Tutorial Series: Ep13 - Exploring Ollama, LLaVA, Gemma Models
In this episode, we’ll show you how to use Ollama with ComfyUI to run and test models like LLaMA, Gemma, and LLaVA. I’ll guide you through installing Ollama, choosing models, and using them to generate prompts from text and images. Whether you're new to AI or looking for ways to improve your workflow, this tutorial will make things easier for you.
What You'll Learn:
- How to install and use Ollama
- Running models like LLaVA and Gemma in ComfyUI
- Generating text and image prompts easily
Join us and explore new possibilities with these models!
Get the workflows and instructions from Discord:
https://discord.gg/gggpkVgBf3
Unlock exclusive perks by joining our channel:
https://www.youtube.com/channel/UCmMbwA-s3GZDKVzGZ-kPwaQ/join
#comfyui #ollama
---
Install Ollama from:
https://ollama.com/
Install these custom nodes if you don't have them already:
ComfyUI Ollama created by stavsap
ComfyUI Easy Use
Restart ComfyUI
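The ComfyUI Ollama nodes talk to the local Ollama server over its REST API. As a rough sketch of what happens under the hood when you generate a prompt from an image, here is a minimal example against Ollama's /api/generate endpoint (the function names are my own; it assumes the Ollama server is running on its default port 11434 and a vision model like llava has been pulled):

```python
import base64
import json
import urllib.request

def build_generate_body(model, prompt, images=None):
    """Build the JSON body for Ollama's /api/generate endpoint.
    `images` is a list of base64-encoded image strings, used by
    multimodal models such as LLaVA."""
    body = {"model": model, "prompt": prompt, "stream": False}
    if images:
        body["images"] = images
    return body

def describe_image(path, model="llava",
                   url="http://localhost:11434/api/generate"):
    """Send an image to a local vision model and return its description."""
    with open(path, "rb") as f:
        img = base64.b64encode(f.read()).decode("ascii")
    prompt = "Describe this image as a detailed image generation prompt."
    data = json.dumps(build_generate_body(model, prompt, [img])).encode()
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The custom nodes do the equivalent of this for you inside the workflow, so you normally never need to write this code yourself.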
---
Open a command window (press Start, type cmd, and press Enter).
You can check which models you have installed with the command:
ollama list
To remove a model, use:
ollama rm model_name
After installing a model, you can chat with it in the command window using ollama run model_name; to exit the chat, use the command:
/bye
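Putting the commands above together, a typical session might look like this (gemma is just an example; use any model name from the Ollama library):

```shell
# download a model from the Ollama library (one-time)
ollama pull gemma

# list the models you have installed
ollama list

# chat with a model interactively (type /bye to exit)
ollama run gemma

# remove a model you no longer need
ollama rm gemma
```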
---
!!! IMPORTANT
If you are using an LLM in ComfyUI, the Ollama server must be running. You can start it by opening the Ollama app. Keep in mind that Ollama uses VRAM while it is running, so when you are not using it, quit it from the taskbar notification area by right-clicking the Ollama icon and selecting "Quit."
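If you are unsure whether the server is up, you can check from a small script: Ollama listens on port 11434 by default and answers on its root endpoint. This is a minimal sketch (the helper name is my own):

```python
import urllib.request
import urllib.error

def ollama_running(url="http://localhost:11434/"):
    """Return True if the local Ollama server responds.

    Ollama listens on port 11434 by default and replies
    'Ollama is running' on its root endpoint.
    """
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print("Ollama server up:", ollama_running())
```

If this prints False, start the Ollama app (or run ollama serve) before running your ComfyUI workflow.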