
ComfyUI AnimateDiff examples (Reddit roundup)

Sand to water: my txt2video workflow for ComfyUI-AnimateDiff-IPAdapter-PromptScheduler.
Test with a lower resolution first, around 512.
From only 3 frames it followed the prompt exactly and imagined all the weight of the motion and timing! And the SparseCtrl RGB is likely helping as a clean-up tool, blending different batches together to achieve something flicker-free.
Bad Apple. A classic.
Not to mention CPU/MOBO prices are also going insane.
AnimateLCM-I2V is also extremely useful for maintaining coherence at higher resolutions (with ControlNet and SD LoRAs active, I could easily upscale from a 512x512 source to 1024x1024 in a single pass).
Warning, the workflow is quite pushed together; I don't really like noodles going everywhere.
This is my new workflow for txt2video, highly optimized using XL-Turbo, SD 1.5 and LCM.
That's 236 days of 24 hr rented GPU power at 30 cents/hr 😉
Ooooh boy! I guess you guys know what this implies. Posting it below.
Then switch to this model in the checkpoint node. Save this file as a .json and simply drag it into ComfyUI.
I think I have a basic setup to start replicating this, at least for techy people: I'm using ComfyUI together with the comfyui-animatediff nodes. Install those and then go to /animatediff/nodes.py; at the end of inject_motion_modules (around line 340) you can set the frames (for example, to the last frame only) - play around with it.
I have been using ComfyUI and AnimateDiff to try to make audio-reactive videos for music. Just need a few more frames, a clean loop, and a few lip-flap sequences.
Workflow link: https://app.flowt.ai/c/ilKpVL
Don't use highres fix or an upscaler in ComfyUI, it is glitchy; try normal first. Use 10 frames first for testing.
I'm currently working on advanced workflows and examples ;)
Holy shit, I remember getting a 2080 for $600, and now the 4000 series is almost three times as expensive as cards were even 4 years ago.
Does anyone have an idea how to stabilise SDXL? I get either rapid movement in every frame or almost no movement.
Upload the video to the Video source canvas.
Firstly, download an AnimateDiff model.
Fork of the ltdrdata/ComfyUI-Manager notebook with a few enhancements, namely: install AnimateDiff (Evolved); UI for enabling/disabling model downloads; UI for downloading custom resources (and saving to a Drive directory); simplified, user-friendly UI (hidden code editors, removed optional downloads and alternate run setups). Hope it can be of use.
Not sure what you've seen, but I'd love to see some examples of dynamic motion, since most of the ones I've seen, including this post, are of a person turning their head.
Hotshot XL vibes.
That's why I programmed an alternative that works with batch count and recently released it.
People can then share their workflows by sharing images so that others can create similar things.
Such a beautiful creation, thanks for sharing.
Try changing the SD model; some models do not work well with AnimateDiff.
Our model was specifically trained on longer videos, so the results are more consistent than these limited tricks.
Tbh, this looks more like Gen-2 or Pika than AnimateDiff - I haven't seen that much consistency with AnimateDiff yet - also, the image inits are definitely coming from MJ!
Oh wow, that's awesome, looks great!
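Related to the save-it-as-a-.json-and-drag-it-into-ComfyUI tip above: a saved workflow can also be queued programmatically. This is a minimal sketch, assuming ComfyUI is running locally on its default port 8188 and that the workflow was exported with "Save (API Format)"; the file name is only a placeholder.

import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"  # default local ComfyUI address; adjust if yours differs

def queue_workflow(path: str) -> dict:
    # Load an API-format workflow .json and ask ComfyUI to execute it.
    with open(path, "r", encoding="utf-8") as f:
        workflow = json.load(f)
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(COMFY_URL, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # response includes the prompt_id assigned to the queued job

if __name__ == "__main__":
    print(queue_workflow("animatediff_txt2video_api.json"))  # placeholder file name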
Alternatively you can save a workflow in its own separate file.
Nice idea to use this as a base.
This works fine, but it is very inefficient.
The video has three examples created using still images, simple masks, IP-Adapter and the inpainting ControlNet with AnimateDiff in ComfyUI.
Because it's changing so rapidly, some of the nodes used in certain workflows may have become deprecated, so changes may be needed.
There is obviously some part of how AnimateDiff works that I am not aware of that makes it work with a lot of models but not function with others.
Video animation using ComfyUI + AnimateDiff + ControlNet and LoRA.
I have heard it only works for SDXL, but it seems to be working somehow for me.
Making a bit of progress this week in ComfyUI.
Step 1: Upload the video.
How did you get SDXL AnimateDiff to work this well? I had all grainy, low-quality results and had to switch back to 1.5 models.
Tried to make a little short film.
The water one uses only a prompt, and the octopus tentacles (in the reply below) have both a text prompt and IP-Adapter hooked in.
Given the models I'm using, it doesn't tolerate high resolutions well.
I am using AnimateDiff in ComfyUI and I love it.
AnimateDiff with LCM workflow.
Add a Load Image node to upload the picture you want to modify.
Adjust the weight type of the IPAdapter to suit the situation.
You can find various AD workflows here.
I have 32 GB on a 4090 system with a 7800X3D; do you think I would benefit from upgrading to 64, or is that overkill? Running Auto1111 with ADetailer combined with AnimateDiff, and also ComfyUI vid2vid with AnimateDiff.
The ComfyUI Manager extension has not been updated yet to point to the fixed fork instead of the original.
Making HotshotXL + AnimateDiff ComfyUI experiments in SDXL.
#ComfyUI - hope you all explore the same.
Ohh, you are thinking about it wrong! AnimateDiff can only animate up to 24 (version 1) or 36 (version 2) frames at once (but anything much more or less than 16 kinda looks awful).
Anyone used both of them? Is there any option to make the animation more coherent and slower? I've experimented with AnimateDiff, but my animations seem to be much faster than this.
Please read the AnimateDiff repo README and Wiki for more information about how it works at its core.
Looking to dive into AnimateDiff and hoping to learn from the mistakes of those that walked the path before me.
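To put the frame limits mentioned above (the 16-frame sweet spot and the 24/36-frame module caps) into concrete clip lengths, here is a trivial arithmetic sketch; the frame rates are common choices assumed for illustration, not values taken from any of these workflows.

# How long a clip do N frames give at a given frame rate? Plain arithmetic, no ComfyUI needed.
def clip_seconds(frames: int, fps: int) -> float:
    return frames / fps

for frames in (16, 24, 36):          # sweet spot, v1 cap, v2 cap (per the comment above)
    for fps in (8, 12, 24):          # assumed example frame rates
        print(f"{frames:>2} frames @ {fps:>2} fps -> {clip_seconds(frames, fps):.2f} s")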
Are people using Auto1111 or ComfyUI for AnimateDiff? Is Auto just as well suited for this as Comfy, or are there significant advantages to one over the other here?
The goal is to have AnimateDiff follow the girl's motion in the video.
This is amazing work! Very nice - can you tell me how much VRAM you have?
In this guide, I will demonstrate the basics of AnimateDiff and the most common techniques to generate various types of animations.
Fast test render: Euler a, 10 steps (0:27). Medium quality: Euler a, 30 steps or DPM++ 2S a Karras, 15 steps (1:04). High quality: DPM2 a Karras, 30 steps or DPM++ 2S a Karras, 35 steps (2:01).
Example of what that all means in clear terms, generated from "a highly realistic video of batman running in a mystic forest, depth of field, epic lights, high quality, trending on artstation", according to the GitHub README.
Once you've trained a LoRA you can just use it the 'normal' way through the ComfyUI-AnimateDiff-Evolved AnimateDiff LoRA loader.
Image files created with ComfyUI store the generated image and the ComfyUI configuration (called a workflow) used to generate it.
[🔥 ComfyUI - Creating Character Animation with One Image using AnimateDiff x IPAdapter]
Scoring samplers for AnimateDiff videos.
I don't know how these light effects appeared; I didn't use any prompts about light sources. It could be because my prompts used words related to space stations and control centers, and these scenes just happened to be filled with various light sources from machines.
AnimateDiff definitely has more potential, so I'm excited to see where things go.
Stop pounding your head against the wall and read the manual first. What you want is something called 'Simple ControlNet interpolation' in there.
If anyone wants my workflow for this GIF, it's here.
There is new stuff everywhere - AnimateDiff is going to blow up like ControlNet. Very nice to see new motion modules, but the different versions of AnimateDiff seem to be starting to cause issues! Thanks for sharing guoyww's motion module anyway.
Uses AnimateDiff to blend the embedded image with the prompt, converting it into a 21-frame video.
Also, it seems to work well from what I've seen! Great stuff.
Uses one character image for the IPAdapter.
The obtained result is as follows: when I removed the prompt, I couldn't achieve a similar result.
If you install this fork instead (for now you can delete your ComfyUI AnimateDiff repo and git clone the one you linked directly) and rerun your workflows, you should see a massive improvement.
ComfyUI Tutorial: Creating Animation using AnimateDiff, SDXL and LoRA.
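As a quick reference for the sampler/step tiers listed above: the times in parentheses there were that poster's render times, not guarantees. Below is a small lookup sketch; the preset names are labels invented here, not ComfyUI node inputs, and the sampler names are kept as written in the comment.

# Sampler/step presets transcribed from the comment above.
QUALITY_PRESETS = {
    "fast_test": ("Euler a", 10),
    "medium":    ("Euler a", 30),         # or ("DPM++ 2S a Karras", 15)
    "high":      ("DPM2 a Karras", 30),   # or ("DPM++ 2S a Karras", 35)
}

sampler, steps = QUALITY_PRESETS["fast_test"]
print(f"Test render with {sampler} at {steps} steps")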
It's better for your health 😁
From the Kosinkadink GitHub page: "context_options: optional context window to use while sampling; if passed in, total animation length has no limit."
I would like to know what that part is, so that I can either mitigate it and successfully use the models I wish to use, or so that I can make an entirely new model blend from scratch that plays ball with it.
People want to find workflows that use AnimateDiff (and AnimateDiff Evolved!) to make animation, do txt2vid, vid2vid, animated ControlNet, IP-Adapter, etc.
Install the ComfyUI dependencies. If you have another Stable Diffusion UI you might be able to reuse the dependencies. Launch ComfyUI by running python main.py --force-fp16. Note that --force-fp16 will only work if you installed the latest pytorch nightly. Remember to add your models, VAE, LoRAs etc. to the corresponding Comfy folders, as discussed in ComfyUI manual installation.
Upscaled in Topaz.
Also, would love to see a small breakdown on YT or here, since a lot of us can't access TikTok.
Use the Epic Realism model or MeinaMix.
AnimateDiff v3 - SparseCtrl scribble sample.
AnimateDiffCombine: combines GIF frames and produces the GIF image. frame_rate: number of frames per second. loop_count: use 0 for an infinite loop. format: supports image/gif, image/webp (better compression), video/webm, video/h264-mp4, video/h265-mp4; to use the video formats, you'll need ffmpeg installed. save_image: whether the GIF should be saved to disk.
ComfyUI Update: Stable Video Diffusion on 8 GB VRAM with 25 frames and more.
🙌 Finally got #SDXL Hotshot #AnimateDiff to give a nice output and create some super cool animation and movement using prompt interpolation.
9 AnimateDiff Comfy workflows that will steal your weekend (but in return may give you immense creative satisfaction).
And now: Tutorial for ControlNet keyframe interpolation in animatediff-cli-prompt-travel.
For example, the Checkpoint Loader is plugged into every Sampler in that workflow already! Without all the noodles! In the top-left corner: THE LOADER. This is where you put all the nodes that load anything: the checkpoint, the VAE (if it's not embedded), but also LoRAs, IP-Adapters (and everything coming with them), AnimateDiff…
Hi guys, my computer doesn't have enough VRAM to run certain workflows, so I've been working on an open-source custom node that lets me run my workflows using cloud GPU resources!
Why are you calling this "cloud VRAM"? It insinuates it's different than just renting a cloud GPU.
Produced using the SD15 model in ComfyUI.
Oftentimes I just get meh results with not much interesting motion when I play around with the prompt boxes, so I'm just trying to get an idea of your methodology behind setting up and tweaking the prompt-composition part of the flow.
You'll be pleasantly surprised by how rapidly AnimateDiff is advancing in ComfyUI.
Let's use this reference video as an example.
In contrast, this serverless implementation only charges for actual GPU usage.
Such an obvious idea in hindsight! Looks great.
Cheers! I use a LoRA / sampler: DPM++ 3M with 40 steps / 24 fps.
To establish the look and feel that I want, I started by taking just the first video frame and altering the prompt until I had something I liked.
Let's say you run it overnight, every night, for 8 hrs.
Step 3: Update ComfyUI. Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options). Step 5: Drag and drop a sample image into ComfyUI. Step 6: The FUN begins! If the queue didn't start automatically, press Queue Prompt.
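To make the context_options idea quoted above a bit more concrete, here is a rough sketch of the sliding-window arithmetic: the frames are split into overlapping windows no longer than the motion module's limit, so total length is effectively unbounded. The window length and overlap are assumed example values; the real scheduling inside AnimateDiff-Evolved is more sophisticated than this.

# Split a long frame range into overlapping context windows (illustration only).
def context_windows(total_frames: int, context_length: int = 16, overlap: int = 4):
    stride = context_length - overlap
    windows, start = [], 0
    while start < total_frames:
        end = min(start + context_length, total_frames)
        windows.append(range(start, end))
        if end == total_frames:
            break
        start += stride
    return windows

for w in context_windows(48):
    print(f"frames {w[0]:>2}-{w[-1]:>2} ({len(w)} frames)")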
Use cloud VRAM for SDXL, AnimateDiff, and upscaler workflows, from your local ComfyUI.
Check out the AnimateDiff Evolved GitHub.
Thanks for the workflow; playing with AnimateDiff is still on my "menu" :D It's insane to see how fast video generation evolves.
You'll have to play around with the denoise value to find a sweet spot.
Improved AnimateDiff integration for ComfyUI, as well as advanced sampling options dubbed Evolved Sampling, usable outside of AnimateDiff.
Adding LoRAs in my next iteration.
Firstly, download an AnimateDiff model. Here is ComfyUI's workflow: Checkpoint: first, download the inpainting model Dreamshaper 8-inpainting and place it in the models/checkpoints folder inside ComfyUI.
Remove negative embeddings; they cause artifacts.
I also tried some variations of the sand one.
Has anyone noticed that when using AnimateDiff with ComfyUI, the resulting videos have worse quality than the same videos generated in A1111? I can post some examples in the comments if needed.
I followed the provided reference and used the workflow below, but I am unable to replicate the image-to-video example.
Also, I have the BlenderNeko nodes for the CLIPTextEncode set to a1111 interpretation (tried both mean and normal with no real differences).
I'm using the animatediff-evolved nodes for ComfyUI.
I am playing around with AnimateDiff using ComfyUI, but I always get some text at the bottom of the output and I don't know why. I used multiple different models and VAEs, but same issue.
For now I got this: a gorgeous woman with long light-blonde hair wearing a low-cut tanktop, standing in the rain on top of a mountain, highly detailed, artstation, concept art, sharp focus, illustration, art by…
I would like to take a real video and gradually denoise it into AnimateDiff-generated frames. I can achieve this by repeating the entire sampling process across multiple KSamplers, with different denoise settings for each KSampler.
I'm trying to do a video-to-video workflow with travelling prompts.
This is achieved by making ComfyUI multi-tenant, enabling multiple users to share a GPU without sharing private workflows and files.
I guess he meant a RunPod serverless worker.
Comfy, AnimateDiff, ControlNet and QR Monster, workflow in the comments.
To follow along, you'll need to install ComfyUI and the ComfyUI Manager (optional but recommended), a node-based interface used to run Stable Diffusion models.
Look for the example that uses ControlNet lineart.
I'm also using the cardos-anime model that was used in the repo examples.
This always leads to VRAM issues when you want to create longer or larger videos.
AnimateDiff on ComfyUI with prompt scheduling.
It's much more coherent than I would have predicted earlier this year.
I found a way to use the audio scheduler and batch value scheduler to change the prompt values throughout the video, and I am looking for a way to change other AnimateDiff values the same way - specifically the motion_scale or lora_strength values.
Feel free to ask questions or ask for clarifying tests; I'll respond when I can. https://discord.gg/JqqXdnn5
From the AnimateDiff repository, there is an image-to-video example.
ComfyUI + AnimateDiff.
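For the gradually-denoise-a-real-video idea above (several passes, each KSampler with a different denoise value), here is a tiny sketch that only generates the per-pass denoise values as a linear ramp; the start/end values and pass count are assumptions, and the sampling itself still has to be wired up in ComfyUI.

# Produce a linear denoise schedule for N successive sampling passes (illustration only).
def denoise_schedule(passes: int, start: float = 0.2, end: float = 0.75) -> list[float]:
    if passes < 2:
        return [end]
    step = (end - start) / (passes - 1)
    return [round(start + i * step, 3) for i in range(passes)]

print(denoise_schedule(4))  # e.g. [0.2, 0.383, 0.567, 0.75]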
A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs, and with fast renders (10 minutes on a laptop RTX 3060).
ComfyUI + AnimateDiff + ControlNet + LatentUpscale.
So how do you guys feel about AnimateDiff vs Stable Video - is there a clear winner?
I'm always trying to implement what I've learned from the workflows downloaded from others. I don't really know if anybody else is applying AYS to AnimateDiff, so this time I'm sharing my workflow; hope it helps others the same way others helped me to learn.
🍬 #HotshotXL #AnimateDiff experimental video using only the prompt scheduler in #ComfyUI.
The problem is that AnimateDiff works with batch size (all frames generated in one batch).
That would be any AnimateDiff txt2vid workflow with an image input added to its latent, or a vid2vid workflow with the Load Video node and whatever's after it, before the VAE encoding, replaced with a Load Image node.
This is ready to go for a StarCraft-style animated head in a box.
An image of the node graph might help (although those aren't that useful to scan at thumbnail size), but the ability to search by nodes or features used, and by the generation of models, would be even more useful.
That's exactly it! :-) I thought about how we often wait and search for news about new "seasons" of our favorite series, and the idea came to me to turn Netflix's opening into seasons of the year as a playful take on this wait for series seasons.
I haven't actually used it for SDXL yet because I rarely go over 1024x1024, but I can say it can do 1024x1024 for SD 1.5 models, though results may vary; somehow no problem for me, and it almost makes them feel like SDXL models. If it's actually working, then it's working really well at getting rid of double people.
You'll still be paying for an idle GPU unless you terminate it.
On the txt2img page, scroll down to the AnimateDiff section.
The apply_ref_when_disabled can be set to True to allow the img_encoder to do its thing even when the end_percent is reached.
Thanks for all your quick updates and new implementations; works great on an RTX 2060 Super 8 GB!! The fp16 versions of the models give the same result and use the same VRAM, but greatly reduce disk space.
The entire Comfy workflow is there, which you can use.
Sliding-window tricks are being used together with AnimateDiff to create longer videos than 16 frames (for example, this is what happens behind the scenes in ComfyUI).
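Related to the fp16 comment above: a hedged sketch of converting a .safetensors checkpoint to half precision to shrink it on disk (whether results stay identical is the commenter's claim, not something this script checks). The file names are placeholders, and it assumes the safetensors and torch packages are installed.

from safetensors.torch import load_file, save_file

def to_fp16(src: str, dst: str) -> None:
    # Cast all floating-point tensors in the checkpoint to fp16; leave everything else untouched.
    state = load_file(src)
    half = {k: (v.half() if v.is_floating_point() else v) for k, v in state.items()}
    save_file(half, dst)

to_fp16("motion_module.safetensors", "motion_module_fp16.safetensors")  # placeholder file names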