I may stop developing this repo for now.
AnimateDiff was not originally designed for the I2V task,
and after spending a lot of time reading the diffusers source code,
I suspect this route may not end up being the best compared with the webui approach (ldm injection).
Still, it has potential; I believe new motion models trained on bigger datasets / specific motions will be released soon.
Still under development.
- update diffusers to 0.20.1
- support IP-Adapter
- refactor the code and make animatediff a diffusers plugin, like sd-webui-animatediff
- controlnet from TDS4874
- solve/locate the color degradation problem; check the TDS_ solution. It seems the color problems came from the DDIM params (see the scheduler sketch after this list).
- controlnet reference mode
- controlnet multi module mode
- ddim inversion from Tune-A-Video
- support AnimateDiff v2
- support AnimateDiff MotionLoRA
- support FreeU
- apply controlnet on keyframes
- controlnet inpainting mode
- support AnimateDiff v3 (w/o SparseCtrl)
- apply prompts per keyframe
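
Regarding the DDIM point in the list above, here is a minimal sketch of the scheduler parameters usually involved when colors drift. The concrete values are assumed, typical SD 1.5 settings for illustration, not this repo's verified config; `clip_sample` and the beta schedule are the usual suspects.

```python
from diffusers import DDIMScheduler

# Illustrative DDIM settings commonly used with SD 1.5 / AnimateDiff-style pipelines.
# All concrete values here are assumptions for the sketch, not this repo's config;
# clip_sample=True or a mismatched beta schedule is a typical cause of washed-out colors.
scheduler = DDIMScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="linear",
    clip_sample=False,
    steps_offset=1,
)
# pipe.scheduler = scheduler  # attach to an existing diffusers pipeline
```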
inpainting + canny

tail + tail
Zoom In / Zoom Out results from this old branch
results from this old branch
all / without denoise strength / without IP-Adapter / without controlnet (first frame)
Below are old results from this old branch
The first image is from pikalabs; the second was generated with sd1.5.
The first used IP-Adapter + init-image denoise; the second used only IP-Adapter.
- 23.8.22: Dropped the local training scripts; using the authors' repo for training experiments (I2V). First step: implement image injection following IP-Adapter, already tested in AI_power.
- 23.8.8: Here are some of my results, referencing talesofai's fork and diffusers to do image latent injection (see the sketch at the end of this log).
Character model: Yoimiya (with an initial reference image).
- 23.8.9: Tested the sd-webui-text2video noise-add policy; got bad results.
Check README.md.
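
For context, below is a minimal sketch of the image latent injection idea mentioned in the 23.8.8 entry: encode the init image with the SD VAE and blend it into the first frame's latent before denoising. The helper name, tensor layout, and the `strength` blend are assumptions for illustration, not this repo's exact code.

```python
import torch
from diffusers import AutoencoderKL
from diffusers.image_processor import VaeImageProcessor


def inject_init_image_latent(latents, init_image, vae: AutoencoderKL, strength=1.0):
    """Hypothetical helper: blend an init image's VAE latent into frame 0.

    latents is assumed to be the AnimateDiff noise tensor shaped
    (batch, channels, num_frames, height // 8, width // 8).
    """
    processor = VaeImageProcessor(vae_scale_factor=8)
    image = processor.preprocess(init_image).to(device=latents.device, dtype=latents.dtype)
    with torch.no_grad():
        init_latent = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor
    # strength=1.0 replaces the first frame's noise outright; lower values keep some
    # noise so the sampler can still refine the injected frame.
    latents[:, :, 0] = strength * init_latent + (1.0 - strength) * latents[:, :, 0]
    return latents
```

With strength below 1.0 this behaves like an img2img denoise strength, which is presumably what the "init-image-denoise" results above refer to.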