Support loading a batch of multiple images with LoadImage #628

You can specify a batch count for EmptyLatentImage. However, LoadImage can only load a single image. It would be nice to have batch support for that (even if it is not exposed to the frontend).
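For illustration, here is a rough sketch of what such a node could look like, following ComfyUI's custom-node conventions. The class name `LoadImageBatch`, the `directory` input, and the directory-scanning logic are all hypothetical, not part of ComfyUI. ComfyUI represents images as float tensors of shape [batch, height, width, channels] in the 0..1 range.

```python
# Hypothetical LoadImageBatch node, sketched against ComfyUI's custom-node
# conventions; the "directory" input and all names below are illustrative.
import os

import numpy as np
import torch
from PIL import Image


class LoadImageBatch:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"directory": ("STRING", {"default": ""})}}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "load_batch"
    CATEGORY = "image"

    def load_batch(self, directory):
        tensors = []
        for name in sorted(os.listdir(directory)):
            if not name.lower().endswith((".png", ".jpg", ".jpeg", ".webp")):
                continue
            img = Image.open(os.path.join(directory, name)).convert("RGB")
            arr = np.asarray(img).astype(np.float32) / 255.0
            tensors.append(torch.from_numpy(arr))  # [H, W, C]
        # torch.stack needs every image to have identical dimensions;
        # mixed sizes would have to be resized/padded first (see the
        # discussion further down the thread).
        return (torch.stack(tensors, dim=0),)  # [B, H, W, C]


NODE_CLASS_MAPPINGS = {"LoadImageBatch": LoadImageBatch}
```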
Comments
Also: the ability to load one (or more) images and duplicate their latents into a batch, to be able to support img2img variants.
This needs a whole new file selection system to support it: something that loads the images in a selectable way. But I don't know of any init system that takes a batch of inits; usually it's just one init for each batch generation. My WAS Node Suite has a batch load node for directories, but it only works with prompt batching, not batch generation.
This is pretty simple: you just have to repeat the tensor along the batch dimension; I have a couple of nodes for it. Loading multiple images is the harder part.
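For concreteness, a minimal sketch of the repeat trick (the helper name is illustrative, not an existing node):

```python
import torch


def repeat_batch(samples: torch.Tensor, batch_size: int) -> torch.Tensor:
    # Duplicate a single-item tensor along the batch dimension (dim 0).
    # Works for ComfyUI images ([B, H, W, C]) and latents ([B, C, H, W])
    # alike, since only dim 0 is repeated.
    return samples.repeat(batch_size, *([1] * (samples.dim() - 1)))


# e.g. turn one 512x512 latent into a batch of 4 for img2img variants
latent = torch.randn(1, 4, 64, 64)
batch = repeat_batch(latent, 4)  # shape: [4, 4, 64, 64]
```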
So you can do a batch run with batch inits just by supplying a batched init? Out of curiosity, and slightly off-topic: do you know if ComfyUI supports torchvision transform tensors, or does it use a different format?
Yep, this works with both latents and images, and you can increase the batch size at any point in the workflow, not just when encoding. I haven't done anything with torchvision, so I'm not entirely sure, but looking at the docs it just uses a different shape: you should be able to work with the tensors if you first permute to (0, 3, 1, 2) and then back to (0, 2, 3, 1) at the end.

Edit: I made a crude dir loading node that works with batch gens. The biggest issue (other than the lack of a picker) is that you can't torch.stack the images if their dimensions differ (as expected), so they need to be resized or have padding added. Not sure if there's any way around that. A sketch of both points follows below.
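A small sketch of both points, assuming torchvision is installed (the blur transform and the 512x512 target size are just placeholders):

```python
import torch
import torchvision.transforms.functional as TF

# ComfyUI images are [B, H, W, C]; torchvision transforms expect [B, C, H, W].
images = torch.rand(1, 512, 512, 3)
chw = images.permute(0, 3, 1, 2)                # -> [1, 3, 512, 512]
blurred = TF.gaussian_blur(chw, kernel_size=5)  # any torchvision transform
back = blurred.permute(0, 2, 3, 1)              # -> [1, 512, 512, 3]

# torch.stack rejects mixed sizes, so resize (or pad) to a common shape first.
mixed = [torch.rand(3, 480, 640), torch.rand(3, 512, 512)]
common = [TF.resize(img, [512, 512]) for img in mixed]
batch = torch.stack(common, dim=0).permute(0, 2, 3, 1)  # [B, H, W, C] again
```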