Protect against ludicrous image sizes #2832
Thank you for the suggestion.
This is heavily dependent on your computer and how much RAM you have. We would need some kind of on-the-fly estimate of RAM usage per image, and of what maximum size makes sense per device, in order to make this meaningful. 4K is just fine for my computer, but for a crappy potato laptop it isn't, so we'd need a way to automatically determine what is acceptable for the individual device. I can't think of a way we could realistically do something like that.
After a quick search, it appears Python does have a way to place a hard cap on memory usage. The problem is that hitting that cap will just result in a bunch of errors and most likely crash chaiNNer's backend. While not ideal, I suppose that's still better than your entire PC crashing, so it's probably worthwhile to implement. But it won't behave like Minecraft, where things just get really laggy when you reach your memory cap; it'll just completely break.
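For reference, a minimal sketch of that kind of hard cap using Python's resource module (Unix-only, since resource isn't available on Windows). Once the limit is hit, allocations simply raise MemoryError, which is exactly the hard failure described above rather than a graceful slowdown:

import resource

def cap_memory(max_bytes: int) -> None:
    # limit this process's total address space; allocations beyond the cap
    # raise MemoryError instead of exhausting system RAM
    _soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, hard))

# e.g. cap the backend at 8 GiB
cap_memory(8 * 1024**3)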
Those are great points. I don't have any Python experience, so I wouldn't have known this was basically impossible to achieve smoothly, and I'm on a pretty good PC myself, so the specs of slower/older computers didn't cross my mind... but I'm glad my suggestion at least gave some food for thought. Thanks for being patient with me in your reply. :)
Couldn't we use a conservative heuristic? We know exactly how much memory every image takes up (height × width × channels × bytes per value), so we can estimate an output image's size before computing it. This would prevent the worst at runtime and give us a condition to check in the UI to prevent such chains from being executed. What do you think @joeyballentine?

In code, this can simply be implemented by calling a function at the start of a node to check whether the output image fits into RAM. So e.g. the Resize node could look like this:

# ...
def resize_node(img: np.ndarray, new_width: int, new_height: int) -> np.ndarray:
    h, w, c = get_h_w_c(img)
    # check whether the output image fits into RAM before doing the operation
    assert_image_size((new_height, new_width, c))
    # perform the actual operation
    return resize(img, new_width, new_height)
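For illustration, a possible shape for that assert_image_size helper. psutil, the 4-bytes-per-value figure (assuming float32 image data), and the 80% safety margin are placeholder assumptions, not existing chaiNNer code:

import psutil  # assumed available for querying free system memory

BYTES_PER_VALUE = 4   # assuming float32 image data
SAFETY_FACTOR = 0.8   # leave headroom for other allocations

def assert_image_size(shape: tuple[int, int, int]) -> None:
    h, w, c = shape
    required = h * w * c * BYTES_PER_VALUE
    available = psutil.virtual_memory().available * SAFETY_FACTOR
    if required > available:
        raise RuntimeError(
            f"Output image ({w}x{h}, {c} channels) would need about "
            f"{required / 1024**2:.0f} MiB, more than the available RAM allows."
        )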
Sure, that would be good enough probably
Motivation
Sometimes when upscaling images in a batch, resizing them, etc., it's possible to crash the entire computer because I unwittingly ran an operation that would result in an image 8192+ pixels wide/tall, and processing it takes up 100% of both CPU and GPU memory. Somewhat related to #2671.
Description
When an operation will result in an image larger than 4K resolution at any point, pop up a confirmation window saying something like: "Warning: the current configuration will likely freeze your entire computer due to massive image sizes. Do you wish to continue?"
Alternatives
Nodes could visibly turn bright orange with caution stripes or something if they detect an overly large image, and hovering over them would show tooltip-esque text explaining the potential memory overuse. Here's a very rough mockup:
Another alternative, which deals with the root of the issue, would be to have a setting that hard-limits how much memory chaiNNer is allowed to use at any given time, so I could set it to half my installed system memory, or half of my GPU memory, for example.
The following screenshot is from a Minecraft launcher and controls Java memory limits, but it communicates the same concept:
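For the GPU half of that idea, here is a rough, purely illustrative sketch; it assumes the backend uses PyTorch and that such a setting would be applied at startup. PyTorch can cap how much of a device's memory the process may allocate, and going past the cap raises an out-of-memory error instead of starving the rest of the system:

import torch  # assumed: the upscaling backend already uses PyTorch

def apply_gpu_memory_limit(fraction: float, device: int = 0) -> None:
    # fraction is relative to the device's total memory, e.g. 0.5 for half
    if torch.cuda.is_available():
        torch.cuda.set_per_process_memory_fraction(fraction, device)

# e.g. allow chaiNNer to use at most half of the GPU's memory
apply_gpu_memory_limit(0.5)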