This extension integrates AnimateDiff (with CLI) into AUTOMATIC1111 Stable Diffusion WebUI, with ControlNet support. You can generate GIFs in exactly the same way as you generate images after enabling this extension.
This extension implements AnimateDiff in a different way. It does not require you to clone the whole SD1.5 repository. It also applies (probably) the fewest modifications to `ldm`, so you do not need to reload your model weights if you don't want to.
Batch size on WebUI will be replaced by the GIF frame number internally: 1 full GIF is generated in 1 batch. If you want to generate multiple GIFs at once, please change batch number.
Batch number is NOT the same as batch size. In A1111 WebUI, batch number is above batch size. Batch number means the number of sequential steps, while batch size means the number of parallel steps. You do not have to worry too much when you increase batch number, but you do need to worry about your VRAM when you increase batch size (which, in this extension, is the video frame number). You do not need to change batch size at all when you are using this extension.
You might also be interested in another extension I created: Segment Anything for Stable Diffusion WebUI.
- Update your WebUI to v1.6.0 and ControlNet to v1.1.410, then install this extension via link. I do not plan to support older versions.
- Download motion modules and put the model weights under `stable-diffusion-webui/extensions/sd-webui-animatediff/model/`. If you want to use another directory to save the model weights, please go to `Settings/AnimateDiff`. See model zoo for a list of available motion modules.
- Enable `Pad prompt/negative prompt to be same length` in `Settings/Optimization` and click `Apply settings`. You must do this to prevent generating two separate unrelated GIFs. Checking `Batch cond/uncond` is optional; it can improve speed but increases VRAM usage.
- Go to txt2img if you want to try txt2gif and img2img if you want to try img2gif.
- Choose an SD1.5 checkpoint, write prompts, and set configurations such as image width/height. If you want to generate multiple GIFs at once, please change batch number instead of batch size.
- Enable the AnimateDiff extension, set up each parameter, and click `Generate`.
  - `Number of frames` — Choose whatever number you like.
    - If you enter 0 (default):
      - If you submit a video via `Video source`, enter a video path via `Video path`, or enable ANY batch ControlNet, the number of frames will be the number of frames in the video (the shortest one if more than one video is submitted).
      - Otherwise, the number of frames will be your `Batch size` described below.
    - If you enter something smaller than your `Batch size` other than 0: you will get the first `Number of frames` frames as your output GIF from your whole generation. All following frames will not appear in your generated GIF, but will be saved as PNGs as usual. See the sketch after this list for how the frame count is resolved.
  - `FPS` — Frames per second, which is how many frames (images) are shown every second. If 16 frames are generated at 8 frames per second, your GIF's duration is 2 seconds. If you submit a source video, your FPS will be the same as the source video.
  - `Display loop number` — How many times the GIF is played. A value of `0` means the GIF never stops playing.
  - `Batch size` — How many frames will be passed into the motion module at once. The model is trained with 16 frames, so it gives the best results when the number of frames is set to `16`. Choose [1, 24] for V1 motion modules and [1, 32] for V2 motion modules.
  - `Closed loop` — If you enable this option and your number of frames is greater than your batch size, this extension will try to make the last frame the same as the first frame.
  - `Stride` — Max motion stride as a power of 2 (default: 1).
  - `Overlap` — Number of frames to overlap in context. If overlap is -1 (default), your overlap will be `Batch size` // 4 (also covered in the sketch after this list).
  - `Save` — Format of the output. Choose at least one of "GIF" | "MP4" | "PNG". Check "TXT" if you want infotext, which will live in the same directory as the output GIF.
  - `Reverse` — Append reversed frames to your output. See #112 for instructions.
  - `Frame Interpolation` — Interpolate between frames with Deforum's FILM implementation. Requires the Deforum extension. See #128.
  - `Interp X` — Replace each input frame with X interpolated output frames. See #128.
  - `Video source` — [Optional] Video source file for video-to-video generation. You MUST enable ControlNet. It will be the source control for ALL ControlNet units that you enable without submitting a control image or a path to the ControlNet panel. You can of course submit a control image via the `Single Image` tab or an input directory via the `Batch` tab; they will override this video source input and work as usual.
  - `Video path` — [Optional] Folder of source frames for video-to-video generation, but lower priority than `Video source`. You MUST enable ControlNet. It will be the source control for ALL ControlNet units that you enable without submitting a control image or a path to ControlNet. You can of course submit a control image via the `Single Image` tab or an input directory via the `Batch` tab; they will override this video path input and work as usual.
    - For people who want to inpaint videos: enter a folder which contains two sub-folders, `image` and `mask`. They should contain the same number of images, and this extension will match them according to the same sequence. You should not upload a video directly to `Video source` mentioned above. Using my Segment Anything extension can make your life much easier.
- You should see the output GIF in the output gallery. You can access the GIF output at `stable-diffusion-webui/outputs/{txt2img or img2img}-images/AnimateDiff`. You can also access the image frames at `stable-diffusion-webui/outputs/{txt2img or img2img}-images/{date}`.
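The sketch below summarizes how `Number of frames` and `Overlap` fall back to their defaults, as described in the parameter list above. These are hypothetical helper functions written for illustration, not the extension's actual code.

```python
from typing import Optional

def resolve_frame_count(number_of_frames: int, batch_size: int,
                        source_video_frames: Optional[int] = None) -> int:
    """Hypothetical helper: how the final frame count falls back to defaults."""
    if number_of_frames == 0:
        # A submitted Video source / Video path / batch ControlNet decides the
        # length; otherwise fall back to the context batch size.
        return source_video_frames if source_video_frames else batch_size
    # Frames beyond this count are still saved as PNGs, just not put in the GIF.
    return number_of_frames

def resolve_overlap(overlap: int, batch_size: int) -> int:
    """Hypothetical helper: -1 means 'use a quarter of the context batch size'."""
    return batch_size // 4 if overlap == -1 else overlap

print(resolve_frame_count(0, 16, source_video_frames=120))  # -> 120
print(resolve_overlap(-1, 16))                              # -> 4
```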
You need to go to img2img and submit an init frame via the A1111 panel. You can optionally submit a last frame via the extension panel (experimental feature, not tested, not sure if it will work).
By default, your `init_latent` will be changed to

```
init_alpha = (1 - frame_number ^ latent_power / latent_scale)
init_latent = init_latent * init_alpha + random_tensor * (1 - init_alpha)
```

If you upload a last frame, your `init_latent` will be changed in a similar way. Read this code to understand how it works.
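As a rough illustration of the formula above, here is a minimal sketch assuming `init_latent` holds one latent per frame in a tensor of shape `[frames, C, H, W]`. The function name and the clamping of negative alphas to 0 are my assumptions for illustration; read the extension's source for the exact behavior.

```python
import torch

def blend_init_latent(init_latent: torch.Tensor,
                      latent_power: float = 1.0,
                      latent_scale: float = 32.0) -> torch.Tensor:
    """Illustrative sketch: blend each frame's init latent with random noise."""
    blended = init_latent.clone()
    for frame_number in range(init_latent.shape[0]):
        init_alpha = 1 - frame_number ** latent_power / latent_scale
        init_alpha = max(init_alpha, 0.0)  # assumed: keep alpha non-negative
        random_tensor = torch.randn_like(init_latent[frame_number])
        blended[frame_number] = (init_latent[frame_number] * init_alpha
                                 + random_tensor * (1 - init_alpha))
    return blended
```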
Just like how you use ControlNet. Here is a sample. You will get a list of generated frames. You will have to view the GIF in your file system, as mentioned at #WebUI item 4.

```python
'alwayson_scripts': {
'AnimateDiff': {
'args': [{
'enable': True, # enable AnimateDiff
'video_length': 16, # video frame number, 0-24 for v1 and 0-32 for v2
'format': ['GIF', 'PNG'], # 'GIF' | 'MP4' | 'PNG' | 'TXT'
'loop_number': 0, # 0 = infinite loop
'fps': 8, # frames per second
'model': 'mm_sd_v15_v2.ckpt', # motion module name
'reverse': [], # 0 | 1 | 2 - 0: Add Reverse Frame, 1: Remove head, 2: Remove tail
# parameters below are for img2gif only.
'latent_power': 1,
'latent_scale': 32,
'last_frame': None,
'latent_power_last': 1,
'latent_scale_last': 32
}
]
}
},
```
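For completeness, here is a minimal end-to-end sketch of sending such a payload to the WebUI API. It assumes the WebUI was launched with `--api` and is reachable at `http://127.0.0.1:7860`; the prompt, size, and step values are placeholders.

```python
import requests

# Minimal txt2img request with AnimateDiff enabled via alwayson_scripts.
payload = {
    "prompt": "1girl, walking on the beach, masterpiece",
    "negative_prompt": "low quality",
    "steps": 20,
    "width": 512,
    "height": 512,
    "alwayson_scripts": {
        "AnimateDiff": {
            "args": [{
                "enable": True,
                "video_length": 16,
                "format": ["GIF", "PNG"],
                "fps": 8,
                "model": "mm_sd_v15_v2.ckpt",
            }]
        }
    },
}

response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
response.raise_for_status()
# The response contains base64-encoded frames; the assembled GIF is written to
# the output directory described in WebUI item 4.
print(len(response.json()["images"]), "frames returned")
```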
- `mm_sd_v14.ckpt` & `mm_sd_v15.ckpt` & `mm_sd_v15_v2.ckpt` by @guoyww: Google Drive | HuggingFace | CivitAI | Baidu NetDisk
- `mm_sd_v14.safetensors` & `mm_sd_v15.safetensors` & `mm_sd_v15_v2.safetensors` by @neph1: HuggingFace
- `mm-Stabilized_high.pth` & `mm-Stabilized_mid.pth` by @manshoety: HuggingFace
- `temporaldiff-v1-animatediff.ckpt` by @CiaraRowles: HuggingFace
- `2023/07/20` v1.1.0: Fix gif duration, add loop number, remove auto-download, remove xformers, remove instructions on gradio UI, refactor README, add sponsor QR code.
- `2023/07/24` v1.2.0: Fix incorrect insertion of motion modules, add option to change the path to save motion modules in `Settings/AnimateDiff`, fix loading different motion modules.
- `2023/09/04` v1.3.0: Support any community models with the same architecture; fix grey problem via #63 (credit to @TDS4874 and @opparco).
- `2023/09/11` v1.4.0: Support official v2 motion module (different architecture: GroupNorm not hacked, UNet middle layer has motion module).
- `2023/09/14` v1.4.1: Always change `beta`, `alpha_cumprod` and `alpha_cumprod_prev` to resolve grey problem in other samplers.
- `2023/09/16` v1.5.0: Randomize init latent to support better img2gif, credit to this forked repo; add other output formats and infotext output, credit to @zappityzap; add appending reversed frames; refactor code to ease maintenance.
- `2023/09/19` v1.5.1: Support xformers, sdp, sub-quadratic attention optimization - VRAM usage decreases to 5.60GB with default settings. See the first FAQ item for more information.
- `2023/09/22` v1.5.2: Option to disable xformers at `Settings/AnimateDiff` due to a bug in xformers, API support, option to enable GIF palette optimization at `Settings/AnimateDiff` (credit to @rkfg), gifsicle optimization moved to `Settings/AnimateDiff`.
- `2023/09/25` v1.6.0: Motion LoRA supported. Download and use them like any other LoRA (example: download a motion LoRA to `stable-diffusion-webui/models/Lora` and add `<lora:v2_lora_PanDown:0.8>` to your positive prompt). Motion LoRA only supports V2 motion modules.
- `2023/09/27` v1.7.0: ControlNet supported. Please closely follow the instructions in How to Use, especially the explanation of the `Video source` and `Video path` attributes. ControlNet is way more complex than what I can test, so I ask you to test for me. Please submit an issue whenever you find a bug. You may want to check `Do not append detectmap to output` in `Settings/ControlNet` to avoid having a series of control images in your output gallery. Safetensors for some motion modules are also available now.
- `2023/09/29` v1.8.0: Infinite generation (with/without ControlNet) supported.
- `2023/10/01` v1.8.1: Now you can uncheck `Batch cond/uncond` in `Settings/Optimization` if you want. This will reduce your VRAM (5.31GB -> 4.21GB for SDP) but take a longer time.
- `2023/10/08` v1.9.0: Prompt travel supported. You must have ControlNet installed (you do not need to enable ControlNet) to try it. See FAQ for how to trigger this feature.
- `2023/10/11` v1.9.1: Use state_dict key to guess mm version, replace match-case with if-else to support python<3.10, option to save PNG to a custom dir (see `Settings/AnimateDiff` for details), move hints to js, install imageio[ffmpeg] automatically when MP4 save fails.
- Q: How much VRAM do I need?

  A: Actual VRAM usage depends on your image size and context batch size. You can try to reduce image size or context batch size to reduce VRAM usage. I list some data tested on Ubuntu 22.04, NVIDIA 4090, torch 2.0.1+cu117, H=W=512, frame=16 (default setting) below. `w/` / `w/o` means `Batch cond/uncond` in `Settings/Optimization` is checked/unchecked.

  | Optimization    | VRAM w/ | VRAM w/o |
  | --------------- | ------- | -------- |
  | No optimization | 12.13GB |          |
  | xformers/sdp    | 5.60GB  | 4.21GB   |
  | sub-quadratic   | 10.39GB |          |
- Q: Can I use SDXL to generate GIFs?

  A: You will have to wait for someone to train SDXL-specific motion modules, which will have a different model architecture. This extension essentially injects multiple motion modules into the SD1.5 UNet. It does not work for other variations of SD, such as SD2.1 and SDXL.
- Q: How should I write prompts to trigger prompt travel?

  A: See the example below. The first line is the head prompt, which is optional; you can write no/single/multiple lines of head prompts. The second and third lines are for prompt interpolation, in the format `frame number`: `prompt`. The last line is the tail prompt, which is optional; you can write no/single/multiple lines of tail prompts. If you don't need this feature, just write prompts in the old way.

  ```
  1girl, yoimiya (genshin impact), origen, line, comet, wink, Masterpiece, BestQuality. UltraDetailed, <lora:LineLine2D:0.7>, <lora:yoimiya:0.8>,
  0: closed mouth
  8: open mouth smile
  ```
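If it helps to see the prompt travel format as code, below is a hypothetical parser sketch, written for this README rather than taken from the extension, that splits such a prompt into head prompts, frame-indexed prompts, and tail prompts.

```python
import re

def parse_prompt_travel(prompt: str):
    """Illustrative only: split a prompt into head, {frame: prompt}, and tail parts."""
    head, keyframes, tail = [], {}, []
    keyframe_re = re.compile(r"^\s*(\d+)\s*:\s*(.*)$")
    for line in prompt.splitlines():
        match = keyframe_re.match(line)
        if match:
            keyframes[int(match.group(1))] = match.group(2).strip()
        elif not keyframes:
            head.append(line)   # lines before the first "frame: prompt" line
        else:
            tail.append(line)   # lines after the last "frame: prompt" line
    return head, keyframes, tail

head, keyframes, tail = parse_prompt_travel(
    "1girl, masterpiece,\n0: closed mouth\n8: open mouth smile"
)
print(keyframes)  # {0: 'closed mouth', 8: 'open mouth smile'}
```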
Coming soon.
| AnimateDiff | Extension v1.2.0 | Extension v1.3.0 | img2img |
| --- | --- | --- | --- |
Note that I did not modify random tensor generation when producing v1.3.0 samples.
| No LoRA | PanDown | PanLeft |
| --- | --- | --- |
You can sponsor me via WeChat, AliPay or PayPal. You can also support me via patreon, ko-fi or afdian.