This video covers how to convert a still image into a dynamic video sequence for storytelling in ComfyUI using the WAN 2.1 video models, as well as how to install them.
Workflow Downloads: https://goshnii.gumroad.com/
Eleven Labs: https://try.elevenlabs.io/itm1hcyhi76c
Artlist SFX for Creators: https://bit.ly/3TdAqIA (get 2 extra months free)
Topaz: https://topazlabs.com/ref/2497/
Watch Next: WAN 2.1 Video to Video using WanFun ControlNet in ComfyUI
https://youtu.be/iWdJXbLIdRw
✅ Minimum Specs:
🔹 GPU: NVIDIA RTX 3060 (12GB VRAM) or higher
🔹 RAM: 16GB+ (helps with stability)
🔹 VRAM Usage: You can run lower resolutions (480p/720p) on GPUs with 8GB VRAM, but you may need optimizations such as GGUF models.
✅ Ideal Specs for Smooth Performance:
🔹 GPU: NVIDIA RTX 3090, 4080, 4090 (More VRAM = better results at higher resolutions)
🔹 RAM: 32GB+
🔹 Storage: SSD (NVMe preferred) for faster loading
🔹 VRAM: 12GB+ recommended for 1080p+ generations (when available)
⚡ Mac Users?
M1/M2/M3 Macs can run it using Metal Performance Shaders (MPS), but performance may vary.
If your GPU is struggling, consider reducing batch size or resolution, or using FP8/quantized models.
Video Models for WAN 2.1 (Hugging Face)
https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files
GGUF Model List:
https://huggingface.co/city96/Wan2.1-I2V-14B-480P-gguf/tree/main
GGUF custom node installation video
https://youtu.be/AzeZkosyqp4?si=-D0AVy99JRvW-NKU
RunwayML
https://app.runwayml.com
#comfyui #wan2.1 #wan2.1comfyui #imagetovideo
*There are affiliate links here, which means I get rewarded when someone makes a qualifying purchase. You won't pay anything extra, and it supports this channel.