😊 EasyAnimate is an end-to-end solution for generating high-resolution and long videos. We can train transformer-based diffusion generators, train VAEs for processing long videos, and preprocess metadata.
😊 We use DiT, with a transformer as the diffuser, for video and image generation.
😊 Welcome!
English | 简体中文 | 日本語
- Table of Contents
- Introduction
- Quick Start
- Video Result
- How to use
- Model zoo
- TODO List
- Contact Us
- Reference
- License
EasyAnimate is a pipeline based on the transformer architecture, designed for generating AI images and videos, and for training baseline models and Lora models for Diffusion Transformer. We support direct prediction from pre-trained EasyAnimate models, allowing for the generation of videos with various resolutions, approximately 6 seconds in length, at 8fps (EasyAnimateV5, 1 to 49 frames). Additionally, users can train their own baseline and Lora models for specific style transformations.
We support quick launches from different platforms; refer to Quick Start.
New Features:
- Updated to version v5.1: Qwen2-VL is used as the text encoder, and Flow is used as the sampling method. It supports bilingual prediction in both Chinese and English. In addition to common controls such as Canny and Pose, it also supports trajectory control and camera control. [2025.01.21]
- Use reward backpropagation to train Lora and optimize the generated videos, aligning them better with human preferences; details here. EasyAnimateV5-7b is released now. [2024.11.27]
- Updated to v5, supporting video generation up to 1024x1024, 49 frames, 6s, 8fps, with expanded model scale to 12B, incorporating the MMDIT structure, and enabling control models with diverse inputs; supports bilingual predictions in Chinese and English. [2024.11.08]
- Updated to v4, allowing for video generation up to 1024x1024, 144 frames, 6s, 24fps; supports video generation from text, image, and video, with a single model handling resolutions from 512 to 1280; bilingual predictions in Chinese and English enabled. [2024.08.15]
- Updated to v3, supporting video generation up to 960x960, 144 frames, 6s, 24fps, from text and image. [2024.07.01]
- ModelScope-Sora "Data Director" Creative Race: the third Data-Juicer Big Model Data Challenge is now officially launched! It uses EasyAnimate as the base model and explores the impact of data processing on model training. Visit the competition website for details. [2024.06.17]
- Updated to v2, supporting video generation up to 768x768, 144 frames, 6s, 24fps. [2024.05.26]
- Code Created! Now supporting Windows and Linux. [2024.04.12]
Function:
Our UI is as follows:
DSW offers free GPU time, which a user can apply for once; it is valid for 3 months after the application.
Aliyun provides free GPU time in its Free Tier. Claim it and use it in Aliyun PAI-DSW to start EasyAnimate within 5 minutes!
Our ComfyUI is as follows, please refer to ComfyUI README for details.
If you are using docker, please make sure that the graphics card driver and CUDA environment have been installed correctly in your machine.
Then execute the following commands:
# pull image
docker pull mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:easyanimate
# enter image
docker run -it -p 7860:7860 --network host --gpus all --security-opt seccomp:unconfined --shm-size 200g mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:easyanimate
# clone code
git clone https://github.com/aigc-apps/EasyAnimate.git
# enter EasyAnimate's dir
cd EasyAnimate
# download weights
mkdir models/Diffusion_Transformer
mkdir models/Motion_Module
mkdir models/Personalized_Model
# Please use the huggingface link or modelscope link to download the EasyAnimateV5.1 model.
# https://huggingface.co/alibaba-pai/EasyAnimateV5.1-12b-zh-InP
# https://modelscope.cn/models/PAI/EasyAnimateV5.1-12b-zh-InP
# https://huggingface.co/alibaba-pai/EasyAnimateV5.1-12b-zh
# https://modelscope.cn/models/PAI/EasyAnimateV5.1-12b-zh
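If you prefer to download the weights from a script, here is a minimal sketch using huggingface_hub (the repo IDs come from the Hugging Face links above; the target folders match the models/ layout described later in this README):

```python
# Minimal sketch: fetch the EasyAnimateV5.1 weights with huggingface_hub.
# Repo IDs are taken from the links above; target folders follow the
# models/Diffusion_Transformer layout shown further down in this README.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="alibaba-pai/EasyAnimateV5.1-12b-zh-InP",
    local_dir="models/Diffusion_Transformer/EasyAnimateV5.1-12b-zh-InP",
)
snapshot_download(
    repo_id="alibaba-pai/EasyAnimateV5.1-12b-zh",
    local_dir="models/Diffusion_Transformer/EasyAnimateV5.1-12b-zh",
)
```

The ModelScope links above can be used the same way with modelscope's own snapshot_download if you prefer that hub.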
We have verified EasyAnimate execution on the following environment:
Details for Windows:
- OS: Windows 10
- python: python3.10 & python3.11
- pytorch: torch2.2.0
- CUDA: 11.8 & 12.1
- CUDNN: 8+
- GPU: Nvidia-3060 12G
Details for Linux:
- OS: Ubuntu 20.04, CentOS
- python: python3.10 & python3.11
- pytorch: torch2.2.0
- CUDA: 11.8 & 12.1
- CUDNN: 8+
- GPU: Nvidia-V100 16G & Nvidia-A10 24G & Nvidia-A100 40G & Nvidia-A100 80G
You need about 60GB of free disk space (for saving the weights); please check!
The video sizes that EasyAnimateV5.1-12B can generate with different amounts of GPU memory are listed below:
GPU memory | 384x672x25 | 384x672x49 | 576x1008x25 | 576x1008x49 | 768x1344x25 | 768x1344x49 |
---|---|---|---|---|---|---|
16GB | 🧡 | 🧡 | ❌ | ❌ | ❌ | ❌ |
24GB | 🧡 | 🧡 | 🧡 | 🧡 | 🧡 | ❌ |
40GB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
80GB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
Because the weights of qwen2-vl-7b are stored in float16, it cannot run on a 16GB GPU. If your GPU memory is 16GB, please visit Hugging Face or ModelScope to download the quantized version of qwen2-vl-7b to replace the original text encoder, and install the corresponding dependency libraries (auto-gptq, optimum).
The video sizes that EasyAnimateV5-7B can generate with different amounts of GPU memory are listed below:
GPU memory | 384x672x25 | 384x672x49 | 576x1008x25 | 576x1008x49 | 768x1344x25 | 768x1344x49 |
---|---|---|---|---|---|---|
16GB | 🧡 | 🧡 | ❌ | ❌ | ❌ | ❌ |
24GB | ✅ | ✅ | ✅ | 🧡 | 🧡 | ❌ |
40GB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
80GB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
✅ indicates it can run under "model_cpu_offload", 🧡 indicates it can run under "model_cpu_offload_and_qfloat8", ⭕️ indicates it can run under "sequential_cpu_offload", and ❌ means it can't run. Please note that running with sequential_cpu_offload is slower.
Some GPUs that do not support torch.bfloat16, such as the 2080 Ti and V100, require changing weight_dtype in app.py and the predict files to torch.float16 in order to run.
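As a small, hedged illustration, this change is usually a single assignment near the top of the script; the name weight_dtype is the one referenced above, so verify it against your copy of app.py or the predict files:

```python
import torch

# For GPUs without bfloat16 support (e.g. 2080 Ti, V100), switch the dtype
# used to load the weights. "weight_dtype" is the name referenced above;
# check the exact variable in app.py / predict_*.py for your version.
weight_dtype = torch.float16  # instead of torch.bfloat16
```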
The generation time for EasyAnimateV5.1-12B using different GPUs over 25 steps is as follows:
GPU | 384x672x25 | 384x672x49 | 576x1008x25 | 576x1008x49 | 768x1344x25 | 768x1344x49 |
---|---|---|---|---|---|---|
A10 24GB | ~120s (4.8s/it) | ~240s (9.6s/it) | ~320s (12.7s/it) | ~750s (29.8s/it) | ❌ | ❌ |
A100 80GB | ~45s (1.75s/it) | ~90s (3.7s/it) | ~120s (4.7s/it) | ~300s (11.4s/it) | ~265s (10.6s/it) | ~710s (28.3s/it) |
(Obsolete) EasyAnimateV3:
The video sizes that EasyAnimateV3 can generate with different amounts of GPU memory are listed below:
GPU memory | 384x672x72 | 384x672x144 | 576x1008x72 | 576x1008x144 | 720x1280x72 | 720x1280x144 |
---|---|---|---|---|---|---|
12GB | ⭕️ | ⭕️ | ⭕️ | ⭕️ | ❌ | ❌ |
16GB | ✅ | ✅ | ⭕️ | ⭕️ | ⭕️ | ❌ |
24GB | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
40GB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
80GB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
✅ indicates it can run, ⭕️ indicates it can run with low_gpu_memory_mode=True but at a slower speed, and ❌ means it can't run.
It is best to place the weights along the specified paths:
EasyAnimateV5.1:
📦 models/
├── 📂 Diffusion_Transformer/
│   ├── 📂 EasyAnimateV5.1-12b-zh-InP/
│   └── 📂 EasyAnimateV5.1-12b-zh/
├── 📂 Personalized_Model/
│   └── your trained transformer model / your trained lora model (for UI load)
Trajectory Control:
Generic Control Video (Canny, Pose, Depth, etc.):
Camera Control: Pan Up | Pan Left | Pan Right | Pan Down | Pan Up + Pan Left | Pan Up + Pan Right
Since EasyAnimateV5 and V5.1 have a very large number of parameters, we need memory-saving options to fit consumer-grade graphics cards. We provide GPU_memory_mode in each prediction file, allowing you to choose from model_cpu_offload, model_cpu_offload_and_qfloat8, or sequential_cpu_offload.
- model_cpu_offload means the entire model will move to the CPU after use, saving some memory.
- model_cpu_offload_and_qfloat8 means the entire model will move to the CPU after use and applies float8 quantization to the transformer model, saving more memory.
- sequential_cpu_offload means each layer of the model moves to CPU after use, which is slower but saves a lot of memory.
qfloat8 may reduce model performance but saves more memory. If memory is sufficient, it's recommended to use model_cpu_offload.
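As a rough rule of thumb (an illustrative sketch, not code taken from the predict scripts), the choice of GPU_memory_mode can be driven by available VRAM; the thresholds below are assumptions based on the memory tables above:

```python
# Illustrative helper for choosing GPU_memory_mode; the option names are from
# this README, the VRAM thresholds are assumptions based on the tables above.
def pick_gpu_memory_mode(vram_gb: float) -> str:
    if vram_gb >= 40:
        return "model_cpu_offload"               # fastest, needs the most VRAM
    if vram_gb >= 24:
        return "model_cpu_offload_and_qfloat8"   # float8 transformer, saves memory
    return "sequential_cpu_offload"              # slowest, smallest footprint


print(pick_gpu_memory_mode(24.0))  # -> model_cpu_offload_and_qfloat8
```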
For more details, see the ComfyUI README.
- Step 1: Download the corresponding weights and place them in the models folder.
- Step 2: Use different files for predictions based on the weights and prediction goals.
- Text-to-Video:
- Modify the prompt, neg_prompt, guidance_scale, and seed in the predict_t2v.py file (see the sketch after this list).
- Then run the predict_t2v.py file and wait for the results, which are stored in the samples/easyanimate-videos folder.
- Image-to-Video:
- Modify validation_image_start, validation_image_end, prompt, neg_prompt, guidance_scale, and seed in the predict_i2v.py file.
- validation_image_start is the starting image, and validation_image_end is the ending image of the video.
- Then run the predict_i2v.py file and wait for the results, which are stored in the samples/easyanimate-videos_i2v folder.
- Video-to-Video:
- Modify validation_video, validation_image_end, prompt, neg_prompt, guidance_scale, and seed in the predict_v2v.py file.
- validation_video is the reference video for video-to-video. You can run a demo with the following video: Demo Video
- Then run the predict_v2v.py file and wait for the results, which are stored in samples/easyanimate-videos_v2v folder.
- Generic Control Video (Canny, Pose, Depth, etc.):
- Modify control_video, validation_image_end, prompt, neg_prompt, guidance_scale, and seed in the predict_v2v_control.py file.
- control_video is the control video for video generation, extracted using Canny, Pose, Depth, etc. You can run a demo with the following video: Demo Video
- Then run the predict_v2v_control.py file and wait for the results, which are stored in samples/easyanimate-videos_v2v_control folder.
- Trajectory Control Video:
- Modify control_video, ref_image, validation_image_end, prompt, neg_prompt, guidance_scale, and seed in the predict_v2v_control.py file.
- control_video is the control video, and ref_image is the reference first frame image. You can run a demo with the following image and video: Demo Image, Demo Video
- Then run the predict_v2v_control.py file and wait for the results, which are stored in samples/easyanimate-videos_v2v_control folder.
- Interaction via ComfyUI is recommended.
- Camera Control Video:
- Modify control_camera_txt, ref_image, validation_image_end, prompt, neg_prompt, guidance_scale, and seed in the predict_v2v_control.py file.
- control_camera_txt is the control file for camera control video, and ref_image is the reference first frame image. You can run a demo with the following image and control file: Demo Image, Demo File (from CameraCtrl)
- Then run the predict_v2v_control.py file and wait for the results, which are stored in samples/easyanimate-videos_v2v_control folder.
- Interaction via ComfyUI is recommended.
- Step 3: To combine with other backbones and Lora models you have trained yourself, modify predict_t2v.py and lora_path in the predict_t2v.py file as needed.
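Below is a minimal sketch of the kind of edits Step 2 describes for text-to-video; the names prompt, neg_prompt, guidance_scale, and seed come from this section, and the values are purely illustrative, so check predict_t2v.py in your checkout for the exact variables and defaults:

```python
# Hedged sketch of the Step 2 edits for text-to-video (predict_t2v.py).
# Variable names follow this section; the values are illustrative only.
prompt = "A group of young men in suits and sunglasses are walking down a city street."
neg_prompt = "blurry, low quality, distorted"
guidance_scale = 6.0  # illustrative value
seed = 43             # illustrative value

# Then run `python predict_t2v.py`; results are written to
# samples/easyanimate-videos (see Step 2 above).
```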
WebUI supports text-to-video, image-to-video, video-to-video, and control-based video generation (such as Canny, Pose, Depth, etc.).
- Step 1: Download the corresponding weights and place them in the models folder.
- Step 2: Run the app.py file to enter the Gradio page.
- Step 3: Choose the generation model on the page, fill in prompt, neg_prompt, guidance_scale, seed, etc., click Generate, and wait for the results, which are stored in the sample folder.
A complete EasyAnimate training pipeline should include data preprocessing, Video VAE training, and Video DiT training. Among these, Video VAE training is optional because we have already provided a pre-trained Video VAE.
We provide two simple demos:
- Train a Lora model using image data. For more details, you can refer to the wiki.
- Perform SFT model training using video data. For more details, you can refer to the wiki.
For a complete data preprocessing pipeline for long-video segmentation, cleaning, and captioning, refer to the README in the video captions section.
If you want to train a text-to-image-and-video generation model, you need to arrange the dataset in the following format.
📦 project/
├── 📂 datasets/
│   ├── 📂 internal_datasets/
│       ├── 📂 train/
│       │   ├── 📄 00000001.mp4
│       │   ├── 📄 00000002.jpg
│       │   └── 📄 .....
│       └── 📄 json_of_internal_datasets.json
The json_of_internal_datasets.json is a standard JSON file. The file_path in the JSON can be set as a relative path, as shown below:
[
{
"file_path": "train/00000001.mp4",
"text": "A group of young men in suits and sunglasses are walking down a city street.",
"type": "video"
},
{
"file_path": "train/00000002.jpg",
"text": "A group of young men in suits and sunglasses are walking down a city street.",
"type": "image"
},
.....
]
You can also set the path as an absolute path, as follows:
[
{
"file_path": "/mnt/data/videos/00000001.mp4",
"text": "A group of young men in suits and sunglasses are walking down a city street.",
"type": "video"
},
{
"file_path": "/mnt/data/train/00000001.jpg",
"text": "A group of young men in suits and sunglasses are walking down a city street.",
"type": "image"
},
.....
]
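If you build the metadata file from a folder of clips and captions, a minimal sketch (with a hypothetical captions dict standing in for however you store text labels) could look like this:

```python
# Hedged sketch: build json_of_internal_datasets.json from relative paths.
# The `captions` dict is a hypothetical stand-in for your own labels.
import json
from pathlib import Path

captions = {
    "train/00000001.mp4": "A group of young men in suits and sunglasses are walking down a city street.",
    "train/00000002.jpg": "A group of young men in suits and sunglasses are walking down a city street.",
}

entries = []
for rel_path, text in captions.items():
    media_type = "video" if Path(rel_path).suffix.lower() == ".mp4" else "image"
    entries.append({"file_path": rel_path, "text": text, "type": media_type})

with open("datasets/internal_datasets/json_of_internal_datasets.json", "w") as f:
    json.dump(entries, f, indent=2, ensure_ascii=False)
```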
Video VAE training is optional since we have already provided a pre-trained Video VAE. If you want to train the video VAE, you can refer to the README in the video VAE section.
If the file_path is a relative path during data preprocessing, set scripts/train.sh as follows:
export DATASET_NAME="datasets/internal_datasets/"
export DATASET_META_NAME="datasets/internal_datasets/json_of_internal_datasets.json"
If the file_path is an absolute path during data preprocessing, set scripts/train.sh as follows:
export DATASET_NAME=""
export DATASET_META_NAME="/mnt/data/json_of_internal_datasets.json"
Then, run scripts/train.sh:
sh scripts/train.sh
For details on setting some parameters, please refer to Readme Train and Readme Lora.
(Obsolete) EasyAnimateV1:
If you want to train EasyAnimateV1, please switch to the git branch v1.
EasyAnimateV5.1:
12B:
Name | Type | Storage Space | Hugging Face | Model Scope | Description |
---|---|---|---|---|---|
EasyAnimateV5.1-12b-zh-InP | EasyAnimateV5.1 | 39 GB | 🤗Link | 😄Link | Official image-to-video weights. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, and supports multilingual prediction. |
EasyAnimateV5.1-12b-zh-Control | EasyAnimateV5.1 | 39 GB | 🤗Link | 😄Link | Official video control weights, supporting various control conditions such as Canny, Depth, Pose, MLSD, and trajectory control. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, and supports multilingual prediction. |
EasyAnimateV5.1-12b-zh-Control-Camera | EasyAnimateV5.1 | 39 GB | 🤗Link | 😄Link | Official camera-control weights, supporting control of the generation direction by inputting camera motion trajectories. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, and supports multilingual prediction. |
EasyAnimateV5.1-12b-zh | EasyAnimateV5.1 | 39 GB | 🤗Link | 😄Link | Official text-to-video weights. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, and supports multilingual prediction. |
(Obsolete) EasyAnimateV5:
7B:
Name | Type | Storage Space | Hugging Face | Model Scope | Description |
---|---|---|---|---|---|
EasyAnimateV5-7b-zh-InP | EasyAnimateV5 | 22 GB | 🤗Link | 😄Link | Official 7B image-to-video weights. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, and supports bilingual prediction in Chinese and English. |
EasyAnimateV5-7b-zh | EasyAnimateV5 | 22 GB | 🤗Link | 😄Link | Official 7B text-to-video weights. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, and supports bilingual prediction in Chinese and English. |
EasyAnimateV5-Reward-LoRAs | EasyAnimateV5 | - | 🤗Link | 😄Link | Official reward-backpropagation LoRAs, which optimize the videos generated by EasyAnimateV5-12b to better match human preferences. |
12B:
Name | Type | Storage Space | Hugging Face | Model Scope | Description |
---|---|---|---|---|---|
EasyAnimateV5-12b-zh-InP | EasyAnimateV5 | 34 GB | 🤗Link | 😄Link | Official image-to-video weights. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, and supports bilingual prediction in Chinese and English. |
EasyAnimateV5-12b-zh-Control | EasyAnimateV5 | 34 GB | 🤗Link | 😄Link | Official video control weights, supporting various control conditions such as Canny, Depth, Pose, MLSD, etc. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, and supports bilingual prediction in Chinese and English. |
EasyAnimateV5-12b-zh | EasyAnimateV5 | 34 GB | 🤗Link | 😄Link | Official text-to-video weights. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, and supports bilingual prediction in Chinese and English. |
EasyAnimateV5-Reward-LoRAs | EasyAnimateV5 | - | 🤗Link | 😄Link | Official reward-backpropagation LoRAs, which optimize the videos generated by EasyAnimateV5-12b to better match human preferences. |
(Obsolete) EasyAnimateV4:
Name | Type | Storage Space | Hugging Face | Model Scope | Description |
---|---|---|---|---|---|
EasyAnimateV4-XL-2-InP | EasyAnimateV4 | Before extraction: 8.9 GB / After extraction: 14.0 GB | 🤗Link | 😄Link |
(Obsolete) EasyAnimateV3:
Name | Type | Storage Space | Hugging Face | Model Scope | Description |
---|---|---|---|---|---|
EasyAnimateV3-XL-2-InP-512x512 | EasyAnimateV3 | 18.2GB | 🤗Link | 😄Link | EasyAnimateV3 official weights for 512x512 text- and image-to-video generation. Trained with 144 frames at 24 fps. |
EasyAnimateV3-XL-2-InP-768x768 | EasyAnimateV3 | 18.2GB | 🤗Link | 😄Link | EasyAnimateV3 official weights for 768x768 text- and image-to-video generation. Trained with 144 frames at 24 fps. |
EasyAnimateV3-XL-2-InP-960x960 | EasyAnimateV3 | 18.2GB | 🤗Link | 😄Link | EasyAnimateV3 official weights for 960x960 text- and image-to-video generation. Trained with 144 frames at 24 fps. |
(Obsolete) EasyAnimateV2:
Name | Type | Storage Space | Url | Hugging Face | Model Scope | Description |
---|---|---|---|---|---|---|
EasyAnimateV2-XL-2-512x512 | EasyAnimateV2 | 16.2GB | - | 🤗Link | 😄Link | EasyAnimateV2 official weights for 512x512 resolution. Trained with 144 frames at 24 fps. |
EasyAnimateV2-XL-2-768x768 | EasyAnimateV2 | 16.2GB | - | 🤗Link | 😄Link | EasyAnimateV2 official weights for 768x768 resolution. Trained with 144 frames at 24 fps. |
easyanimatev2_minimalism_lora.safetensors | Lora of Pixart | 485.1MB | Download | - | - | A Lora trained on a specific style of images. The images can be downloaded from the Url. |
(Obsolete) EasyAnimateV1:
Name | Type | Storage Space | Url | Description |
---|---|---|---|---|
easyanimate_v1_mm.safetensors | Motion Module | 4.1GB | download | Trained with 80 frames at 12 fps |
Name | Type | Storage Space | Url | Description |
---|---|---|---|---|
PixArt-XL-2-512x512.tar | Pixart | 11.4GB | download | Pixart-Alpha official weights |
easyanimate_portrait.safetensors | Checkpoint of Pixart | 2.3GB | download | Trained on internal portrait datasets |
easyanimate_portrait_lora.safetensors | Lora of Pixart | 654.0MB | download | Trained on internal portrait datasets |
- Support models with more parameters.
- Use DingTalk to search group 77450006752 or scan the QR code to join.
- Scan the image to join the WeChat group; if the code has expired, add this student as a friend first and they will invite you.
- CogVideo: https://github.com/THUDM/CogVideo/
- Flux: https://github.com/black-forest-labs/flux
- magvit: https://github.com/google-research/magvit
- PixArt: https://github.com/PixArt-alpha/PixArt-alpha
- Open-Sora-Plan: https://github.com/PKU-YuanGroup/Open-Sora-Plan
- Open-Sora: https://github.com/hpcaitech/Open-Sora
- Animatediff: https://github.com/guoyww/AnimateDiff
- HunYuan DiT: https://github.com/tencent/HunyuanDiT
- ComfyUI-KJNodes: https://github.com/kijai/ComfyUI-KJNodes
- ComfyUI-EasyAnimateWrapper: https://github.com/kijai/ComfyUI-EasyAnimateWrapper
- ComfyUI-CameraCtrl-Wrapper: https://github.com/chaojie/ComfyUI-CameraCtrl-Wrapper
- CameraCtrl: https://github.com/hehao13/CameraCtrl
- DragAnything: https://github.com/showlab/DragAnything
This project is licensed under the Apache License (Version 2.0).