A high-level overview of the excellent ControlNet research paper, which has recently been used to give Stable Diffusion users highly fine-grained control over the image generation process.
Discord: https://discord.gg/CNTQPUqK
======= Links =======
ControlNet paper: https://arxiv.org/abs/2302.05543
Huggingface ControlNet models: https://huggingface.co/lllyasviel/ControlNet/tree/main/models
Huggingface Depth-2-img models: https://huggingface.co/stabilityai/stable-diffusion-2-depth
ControlNet GitHub: https://github.com/lllyasviel/ControlNet
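If you want to try it yourself, here is a rough sketch of running ControlNet through the diffusers library. Note this assumes the diffusers-format Canny checkpoint ("lllyasviel/sd-controlnet-canny") and the "runwayml/stable-diffusion-v1-5" base model rather than the raw .pth files linked above, plus a CUDA GPU:

```python
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

# Load a ControlNet trained on Canny edge maps alongside a Stable Diffusion 1.5 base model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Turn a source image into a Canny edge map: the edges pin down the composition,
# while the text prompt decides style and content.
source = Image.open("input.png").convert("RGB")  # placeholder path: any image to borrow structure from
edges = cv2.Canny(np.array(source), 100, 200)
edges = Image.fromarray(np.stack([edges] * 3, axis=-1))

result = pipe(
    "a portrait of a robot, studio lighting",
    image=edges,
    num_inference_steps=30,
).images[0]
result.save("controlnet_canny_out.png")
```

Swapping the checkpoint (depth, pose, scribble, etc.) and the matching conditioning image gives the other kinds of control covered in the video.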
======= Music =======
From Youtube Audio Library:
Organic Guitar House - Dyalla
======= Media =======
Thumbnail Source: https://www.reddit.com/r/StableDiffusion/comments/1139mnd/controlnet_is_awesome_here_are_some_memes_made/
Anime Motion Tracking: https://www.youtube.com/watch?v=EAXUInT70TA&ab_channel=VladimirChopine%5BGeekatPlay%5D
Carpark Lady: https://www.reddit.com/r/StableDiffusion/comments/11jgbby/controlnet_ebsynth/