
add memories.image.highres/convert_all_images_formarts/format/quality/_max_x/_max_y/ #653

Open
wants to merge 5 commits into base: master

Conversation

@JanisPlayer commented May 17, 2023

Solution to my suggestion:
#652
memories.image.highres_format supports jpeg and webp
memories.image.highres_quality: 0-100
memories.image.highres_max_x & memories.image.highres_max_y: maximum resolution of the picture

Example:
'memories.image.highres.convert_all_images_formarts_enabled' => 'true',
'memories.image.highres.format' => 'webp',
'memories.image.highres.quality'=> '80',
'memories.image.highres_max_x'=> '6144',
'memories.image.highres_max_y'=> '6144',

To do:

Line 283 needs to be changed, and the setting name memories.image.highres.convert_all_images_formarts_enabled is wrong.


The config settings should still be renamed to something like this, but before I pick names that are too long again, I'll just ask first:
'memories.image.highres.convert_all_images_formarts_enabled' => 'true',
'memories.image.highres.format' => 'webp',
'memories.image.highres.quality'=> '80',
'memories.image.highres.max_x'=> '6144',
'memories.image.highres.max_y'=> '6144',


And please try it yourself. I've tried it and I'm happy with it, but I don't know whether you'll be happy with the result.

Unfortunately, I don't know how to build a settings UI for an app; otherwise I would have done it and added the settings there.
But in the short term I don't think my attempt would be good enough, so I'll leave that to you if you wish.
I'm just a hobby developer, so please watch out for mistakes.
I can also add the documentation if you want.

I also had problems with:
composer run cs:fix

In Factory.php line 319:

"./composer.json" does not match the expected JSON schema:
- name : Does not match the regex pattern ^[a-z0-9]([_.-]?[a-z0-9]+)*/[a-z0-9](([_.]|-{1,2})?[a-z0-9]+)*$

Maybe you can tell me what I did wrong.

…ew.x, memories.preview.y

memories.preview.format supports jpeg, webp, avif
memories.preview.quality: between 0-100
memories.preview.x & memories.preview.y: maximum size

Example:
'memories.preview.format'=> 'webp',
'memories.preview.quality'=> '80',
'memories.preview.x'=> '2048',
'memories.preview.y'=> '2048',
@pulsejet (Owner) commented May 17, 2023

Okay so there are some major issues here:

  1. By design, we can't use more efficient formats for this API, since it works on the fly. Even if the default is JPEG, it is very likely that server admins will find this setting and set it to a "better" format like WebP/AVIF. Unfortunately that'll make the user experience much worse since e.g. AVIF encoding is horribly slow.
  2. We can't solve the above problem with caching, since the whole purpose of this API is to not cache high-res thumbs to save storage space.
  3. More fundamentally, I'm missing the problem here. If you simply desire to cap the max size of image that will be loaded, and you want that to be cached, you can simply turn off loading full-res images altogether and use larger thumb sizes (see docs)

Now it might be worth considering capping the max size of the loaded image and not caching it, if that's what you desire; I just want to note that this will not be very beneficial in most cases, since network bandwidth tends to be much cheaper than CPU time in general, and image processing is very expensive.

A second missing feature that would definitely be useful is disallowing loading max-res altogether, as a global admin setting.

@JanisPlayer (Author) commented May 17, 2023

Yes, that's why I want it to be optional and with limited resolution.
I did a test myself, and the previews load much faster with these settings.
This way, I don't have to generate my regular previews in high resolution anymore.
Nextcloud is currently working with me on WebP support,
but even then, the images are still too large at 200-500 KB for 30 thousand images.
Here's the link to the pull request: nextcloud/server#38032.
That's why I resize them to a maximum of 1024 pixels with 80% quality.
Now I found your app, and it does exactly what I want except for that.
Maybe you could consider making it an optional feature.
You need to test it, as the bandwidth saved makes a big difference.
My full-screen images are 4 MB or larger, and loading them is really difficult, especially on mobile devices.
These WebP images, on the other hand, are only around 200 KB in size and have minimal loss in quality when viewed on a smartphone.
But perhaps you should add a warning not to increase the resolution too much and only use this function when zooming in.
However, I think this would be a great solution for users with limited storage but many photos.
It's just an idea for a specific niche.
I have the computing power for it—I even have a dedicated server that generates preview images for me using Imaginary. Would Imaginary be a possible solution?
I think the best thing is to see it as an alternative to full-screen images for powerful servers, maybe even limited to certain users.

Edit:
So I tested it on my own VPS with higher resolutions that I require, ranging from 4000x4000 to 10000x10000 pixels with 80% quality, using WebP for such high resolutions.
The images look good, very small in size, yet closely resembling the original.
The performance is now acceptable even for a single user.
However, it could become problematic with up to 10 simultaneous image requests on a standard VPS with 4 V-Cores and a Geekbench 5 score of 1481 points.
But I believe the performance can be easily estimated with WebP.
So, if used, the function only performs well for zooming, since rapid browsing triggers the conversion repeatedly, and it isn't interrupted when switching images.
Even with caching, it's difficult to predict what the user will load next.
Perhaps it could be an idea for the Preview app to store highly zoomed images at a very high resolution for 1 month.
However, caching locally and on the web is challenging to implement.
It would be a niche feature separate from the maximum resolution and recommended only for zooming and perhaps only for specific users or with a limit on requests.
Otherwise, I don't see a way to maintain stability with many users simultaneously without limiting it or storing the preview indefinitely.
So, I think users with limited resources and a large user base will face limitations.
Users like me, who currently have sufficient resources and not too many users but always feel a lack of storage, are quite happy to receive almost the original image with minimal bandwidth usage.
In summary, yes, it needs optimization, with limits on requests and perhaps even utilizing an additional server if available.
It needs clear communication that it consumes resources but saves bandwidth and storage.
Furthermore, it needs to be determined if other users have the same problem as me.

@pulsejet (Owner) commented May 18, 2023

No idea what kind of machine you're using, but WebP/AVIF encoding (at least with Imagick) is basically unusable.

On my dev machine (AMD EPYC 64 core/128 thread, 256G RAM):

| Format | 12MP | 64MP | Size (12MP, 95%) | Size (12MP, 90%) |
|--------|------|------|------------------|------------------|
| JPEG | 0.21s | 1.3s | 3578KB | 1982KB |
| WebP | 1.7s | 27.1s | 3029KB | 1364KB |
| AVIF | 69.5s | Crash | 1882KB | ? |

As I mentioned, I'm okay with a config option to restrict the max size and encode quality of the "full" image (at the admin's risk), but WebP is a no-go for now.

Also the props need to be named more accurately, e.g. memories.image.highres_max_x, and the conversion needs to happen only when a query param is specified (since we use the same API for native sharing, where we explicitly want to share the full res image)
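
For illustration, a minimal sketch of that gate (the parameter names and the helper are hypothetical, not the PR's actual code): conversion only runs when the client explicitly asks for it, so the native-sharing path, which sends no such parameter, always receives the original bytes.

<?php
// Illustrative sketch only: gate the conversion on an explicit query parameter.
// "quality", "max_x" and "max_y" are hypothetical parameter names.
function shouldConvert(array $queryParams): bool
{
    return isset($queryParams['quality'])
        || isset($queryParams['max_x'])
        || isset($queryParams['max_y']);
}

// Possible use inside the controller (IRequest::getParams() returns all
// request parameters as an array):
//
//   if (shouldConvert($this->request->getParams())) {
//       [$blob, $mimetype] = $this->getImageJPEG($blob, $mimetype);
//   }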

@pulsejet (Owner) commented May 18, 2023

> My full-screen images are 4 MB or larger, and loading them is really difficult, especially on mobile devices. These WebP images, on the other hand, are only around 200 KB in size and have minimal loss in quality when viewed on a smartphone.

That's comparing apples and oranges. If you make a fair comparison of JPEG with WebP, the differences are much smaller (see the representative numbers above). An improvement of <20% but over 5-20 times slower.

AVIF, on the other hand, is very efficient, but no good encoders exist.

pulsejet added a commit that referenced this pull request May 18, 2023
We no longer use this API for image editing, so this is
an acceptable compromise for now

Signed-off-by: Varun Patil <[email protected]>

#653
@pulsejet (Owner) commented May 18, 2023

BTW I bumped down the default quality to 85; it was 95 earlier because this API was used for image editing.

Also, there is indeed a "proper" solution to this whole problem, but that's unfortunately very hard to implement. When the user zooms in, we don't need to load the entire image at all, but only those parts that are visible in the viewport. So theoretically we could decode the image just once, store a temporary bitmap and cut out parts of the bitmap for the frontend on-demand. Since these parts are downsized to match the viewport, it might be reasonably possible to use webp here (and it'll be super efficient even with JPEG).

There's some ways to do the UI (see https://dimsemenov.github.io/photoswipe-deep-zoom-plugin/), but the backend is very non-trivial. This is also how Google Photos solves this problem.
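
For illustration, a rough Imagick sketch of that idea (not part of this PR; the path, coordinates and sizes are made up): decode the source once, then crop and downscale only the region the viewport actually needs.

<?php
// Sketch of the "decode once, serve viewport tiles" approach described above.
// A real implementation would cache the decoded bitmap and take the region
// from the frontend request; all values here are placeholders.

$image = new Imagick('/tmp/original.jpg');

// Region requested by the frontend, in original-image coordinates
$x = 4096; $y = 2048; $w = 1024; $h = 1024;

// Size of the client viewport the tile will be displayed in
$viewW = 512; $viewH = 512;

$tile = clone $image;
$tile->cropImage($w, $h, $x, $y);        // cut out only the visible region
$tile->scaleImage($viewW, $viewH, true); // downsize to match the viewport
$tile->setImageFormat('jpeg');           // small tiles encode quickly, even as WebP
$tile->setImageCompressionQuality(85);

header('Content-Type: image/jpeg');
echo $tile->getImageBlob();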

@JanisPlayer (Author) commented May 18, 2023

> By design, we can't use more efficient formats for this API, since it works on the fly. Even if the default is JPEG, it is very likely that server admins will find this setting and set it to a "better" format like WebP/AVIF. Unfortunately that'll make the user experience much worse since e.g. AVIF encoding is horribly slow.

Yes, when it comes to AVIF, I would advise against resolutions above 2048 pixels. It is well suited to generating Full HD images for mobile data. However, it's questionable whether AVIF is worth it: the computational power required on a standard server is substantial compared to what it saves. It's not suitable for generating such high-resolution previews in a short time, especially through a PHP module. I agree that AVIF could be used as a benchmark.

> That's comparing apples and oranges. If you make a fair comparison of JPEG with WebP, the differences are much smaller (see the representative numbers above). An improvement of <20% but over 5-20 times slower.

WebP, yes: the processing times are acceptable. Block artifacts are not as quickly visible as in JPEG, but the image appears somewhat blurred due to the overall reduction. The quality varies depending on the settings; with my very high settings, it's questionable whether it really makes sense, and I agree with that. The perceived quality is somewhat better, around 25% to 35%, I would say. With many users, WebP can also be a disadvantage: saving 200 KB per image is not that significant if sticking with the faster format lets more users be served simultaneously.

JPEG remains the fastest option, and I fully agree with that. So, in summary, it heavily depends on the settings and the number of users on the cloud. However, AVIF is quite challenging for resolutions above Full HD, but it's impressive how beautiful the images still look and how the CPU melts.

> BTW I bumped down the default quality to 85; it was 95 earlier because this API was used for image editing.

Yes, I think I observed this solution in Google Photos as well. It's actually very efficient. With different zoom levels, DPI, and precise client positioning, the communication between the server and the client needs to be well coordinated for it to work. But yes, in essence, you only need to inform the server about the visible area and the screen resolution.
I've never checked how they do it, just guesswork. Edit: Yes, that's the solution, and I see you even wrote it in the sentence right next to it. To my surprise, they use JPEG for this. :D

Now comes the more complicated part: whether it's possible to implement this 1:1 with Imagick is questionable. However, it's definitely possible to generate a certain area and send it to the client. Whether it's equally efficient would need to be tested.

> There's some ways to do the UI (see https://dimsemenov.github.io/photoswipe-deep-zoom-plugin/), but the backend is very non-trivial. This is also how Google Photos solves this problem.

Wow, that looks really great; that would actually be the perfect solution.
It looks nice, saves power, and could be driven from PHP via curl, which then processes the image.
So the app would send the image to another server, which processes it for the app.
That would be possible; the solution is nice. :)

> Also the props need to be named more accurately, e.g. memories.image.highres_max_x, and the conversion needs to happen only when a query param is specified (since we use the same API for native sharing, where we explicitly want to share the full-res image).

And yes, I should have named and explained it better.

@pulsejet (Owner)
> ... and how the CPU melts.

Haha very true.

> Now comes the more complicated part: whether it's possible to implement this 1:1 with Imagick is questionable. However, it's definitely possible to generate a certain area and send it to the client. Whether it's equally efficient would need to be tested.

It would actually be much easier if the server were implemented in something other than PHP. The problem with PHP is that

  1. we can't use in-memory cache easily (this isn't PHP specific since there might be more than one server)
  2. each request is bootstrapped in a very expensive process
  3. no websockets

To get around the last two, we either need a dedicated PHP file to skip request bootstrapping (otherwise loading each chunk is too expensive) or run a separate server (with a separate endpoint) for this. For both, we also need an entirely separate (fast) auth mechanism (this part might be relatively easy, e.g. with JWT).
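
For illustration, a minimal sketch of such a dedicated endpoint with a lightweight token check (the HMAC scheme stands in for the JWT idea; the file name, env variable and parameters are assumptions, not existing code):

<?php
// tile.php - hypothetical standalone endpoint that skips the full Nextcloud
// request bootstrap and authenticates with a short-lived signed token.

$secret = getenv('MEMORIES_TILE_SECRET') ?: '';

$fileId  = $_GET['file']  ?? '';
$expires = (int)($_GET['exp'] ?? 0);
$token   = $_GET['token'] ?? '';

// The main app would have issued: hash_hmac('sha256', "$fileId|$expires", $secret)
$expected = hash_hmac('sha256', $fileId . '|' . $expires, $secret);

if ('' === $secret || $expires < time() || !hash_equals($expected, $token)) {
    http_response_code(403);
    exit('invalid or expired token');
}

// ... resolve $fileId to a file, crop/scale the requested tile, stream it ...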

The next (very) hard part is the UI. Unfortunately the viewer has become very complex due to the various corner cases (slideshow, live photo, full res on zoom in, videos of different types etc.). Using the above library directly would likely break things.

Unfortunately, it is only worthwhile for 1080p users and would have to be limited accordingly, and even then the utilization is very high.
'memories.image.highres_convert_all_images_formarts_enabled' => 'true',
'memories.image.highres_format' => 'webp',
'memories.image.highres_quality'=> '80',
'memories.image.highres_max_x'=> '6144',
'memories.image.highres_max_y'=> '6144',
@JanisPlayer JanisPlayer changed the title add memories.preview.format, memories.preview.quality, memories.previ… add memories.image.highres/convert_all_images_formarts/format/quality/_max_x/_max_y/ May 18, 2023
Comment on lines 385 to 397
switch ($format) {
    case 'jpeg':
        $format = 'jpeg';
        break;
    case 'webp':
        $format = 'webp';
        break;
    /*case 'avif': // CPU benchmark
        $format = 'avif';
        break;*/
    default:
        $format = 'jpeg';
}
@pulsejet (Owner):

Don't need this check. If the admin sets the value to something invalid, it's not our problem.

// Convert to JPEG
try {
    $image->autoOrient();
    $image->setImageFormat('jpeg');
    $image->setImageCompressionQuality(95);
    $format = $this->config->getSystemValueString('memories.highres_format', 'jpeg');
@pulsejet (Owner):

Util::getSystemConfig. Also see Util::systemConfigDefaults.

The config values need to be correctly scoped, e.g. memories.image.highres.format

@@ -276,7 +276,8 @@ public function decodable(string $id): Http\Response
$blob = $file->getContent();

// Convert image to JPEG if required
if (!\in_array($mimetype, ['image/png', 'image/webp', 'image/jpeg', 'image/gif'], true)) {
$highres_enabled = $this->config->getSystemValueString('memories.image.highres_convert_all_images_formarts_enabled', 'false');
@pulsejet (Owner):

This is not desirable. What we really want is a cap on the resolution, so the original image should be used to determine if we want to scale it.

E.g. if the cap is set to 12MP, then scaling a 4MP image is meaningless (even decoding it is a resource waste)
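
One way that check could look (an illustrative sketch, not the PR code): probe the dimensions from the blob header first and only decode when the pixel count exceeds the configured cap.

// Sketch: skip decoding entirely when the original is already below the cap.
// The 12MP figure and the getImageJPEG() call are placeholders.
$maxPixels = 12 * 1000 * 1000; // cap from config, expressed in pixels

$probe = new Imagick();
$probe->pingImageBlob($blob);  // reads header info only, no full decode
$pixels = $probe->getImageWidth() * $probe->getImageHeight();
$probe->clear();

if ($pixels > $maxPixels) {
    // Only now is the expensive decode + downscale + re-encode worth doing
    [$blob, $mimetype] = $this->getImageJPEG($blob, $mimetype);
}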

@pulsejet (Owner):

Also need to always exclude GIF, since webp animation depends on library support

@JanisPlayer (Author), May 18, 2023:

Yes, I noticed when testing that the WebP image wasn't actually animated.
Then I wrote code for each case.
But it was so terribly big that I deleted it, and the best version was this:

            // Convert image to JPEG if required
            // You might want to lower the maximum execution time here
            // and raise it back to the default value when the image is finished.
            // Also set a maximum number of concurrent executions, which might prevent thrashing:
            // a JSON (tmp) database where you store the entries as a number and delete them when they are done or older than 5 minutes.
            $highres_enabled = $this->config->getSystemValueString('memories.image.highres.convert_all_images_formarts_enabled', 'false');
            $format = $this->config->getSystemValueString('memories.image.highres.format', 'jpeg');
            if ($highres_enabled == 'true') {
              switch ($format) {
                case 'jpeg':
                  if (!\in_array($mimetype, ['image/png', 'image/webp', 'image/gif'], true)) {
                      [$blob, $mimetype] = $this->getImageJPEG($blob, $mimetype);
                  }
                  break;
                case 'webp':
                  if (!\in_array($mimetype, ['image/gif'], true)) {
                      [$blob, $mimetype] = $this->getImageJPEG($blob, $mimetype);
                  }
                  break;
              }
            } else {
              if (!\in_array($mimetype, ['image/png', 'image/webp', 'image/jpeg', 'image/gif'], true)) {
                [$blob, $mimetype] = $this->getImageJPEG($blob, $mimetype);
              }
              // why did i do that? :D
              /* Set maximum width and height
              $maxWidth = (int)$this->config->getSystemValue('memories.image.highres_max_x', '0');
              $maxHeight = (int)$this->config->getSystemValue('memories.image.highres_max_y', '0');

              if (!\in_array($mimetype, ['image/png', 'image/webp', 'image/jpeg', 'image/gif'], true)) {
                // Check if the image exceeds the maximum resolution
                //$imageInfo = getimagesize($blob); // Doesn't work with a blob; needs Imagick
                //$width = $imageInfo[0];
                //$height = $imageInfo[1];

                $img = imagecreatefromstring($blob); // Better than Imagick?
                $width = imagesx($img);
                $height = imagesy($img);

                if ($maxWidth > 0 && $maxHeight > 0) {
                  if ($width > $maxWidth || $height > $maxHeight) {
                    [$blob, $mimetype] = $this->getImageJPEG($blob, $mimetype);
                  }
                } else {
                  [$blob, $mimetype] = $this->getImageJPEG($blob, $mimetype);
                }
              }*/
            }

I think I should go to sleep. :D

I have to admit, if a user always loads images at full resolution with this, they must have a PC from the future.
PHP lets the script run for as long as the config allows and thus permits an unlimited number of calls, so "always full res" should be handled very carefully.
But I think it's more of an experimental feature anyway.

Thank you for your support. :)
I learned some new things; using a bitmap to divide the image into parts, as Google does, was very interesting.
I'll try to finish adapting the code after sleeping.
I imagine you've tried what I'm doing here before; speaking from experience? :D

Edit: I added the funny code above, so always get enough sleep, otherwise you end up writing such funny things. :D

Update:
https://imagemagick.org/script/resources.php
https://www.php.net/manual/de/imagick.setresourcelimit.php
putenv("MAGICK_THREAD_LIMIT=maxcores");
$image->setResourceLimit($image::RESOURCETYPE_THREAD, maxcores);
putenv("MAGICK_THROTTLE_LIMIT=limit");
putenv("MAGICK_TIME_LIMIT=sec");
These parameters can be used to speed up the process and set a timeout.
However, a limit on concurrent conversions must still be tracked somewhere, for example in database entries or a JSON file.
That store holds the current number of running conversions and the timestamp of the last one, and deletes the value after the timeout period; resetting after a certain time helps avoid stale entries after server restarts.
This was the only way I could prevent server overload caused by multiple users.
However, it does require a few lines of code and is only useful in environments that need protection against such abuse or have a high number of users using this function.
The resolutions are stored in the database for each image, for whatever purpose I needed them back then.
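
For what it's worth, a rough sketch of such a JSON-file limiter (purely illustrative; file path, limit and timeout are assumptions):

<?php
// Count running conversions in a small JSON file and refuse new ones above a
// limit. Entries older than $staleAfter seconds are dropped, so a crash or
// server restart cannot block conversions forever.
function acquireConversionSlot(string $stateFile, int $maxConcurrent, int $staleAfter = 300): ?string
{
    $fp = fopen($stateFile, 'c+');
    if (!$fp || !flock($fp, LOCK_EX)) {
        return null;
    }

    $state = json_decode(stream_get_contents($fp), true) ?: [];
    $now = time();

    // Drop stale entries (finished long ago, crashed, or left over from a restart)
    $state = array_filter($state, fn ($ts) => ($now - $ts) < $staleAfter);

    $slot = null;
    if (count($state) < $maxConcurrent) {
        $slot = uniqid('', true);
        $state[$slot] = $now;
    }

    ftruncate($fp, 0);
    rewind($fp);
    fwrite($fp, json_encode($state));
    flock($fp, LOCK_UN);
    fclose($fp);

    // The caller removes $slot from the file again once the conversion is done
    return $slot;
}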

Comment on lines 417 to 424
$aspectRatio = $width / $height;
if ($width > $height) {
    $newWidth = $maxWidth;
    $newHeight = $maxWidth / $aspectRatio;
} else {
    $newHeight = $maxHeight;
    $newWidth = $maxHeight * $aspectRatio;
}
@pulsejet (Owner):

$image->scaleImage($maxWidth, $maxHeight, true); does the same thing.

$blob = $image->getImageBlob();
$mimetype = $image->getImageMimeType();

// getImageMimeType() doesn't work for webp; you could use pathinfo() and strtolower(), but I kept it shorter
@pulsejet (Owner):

getImageMimeType seems to work fine for me. What does it return for you? Imagick version?

@JanisPlayer (Author):

ImageMagick 6.9.11-60 Q16 x86_64 2021-01-25 https://imagemagick.org
is already the newest version (8:6.9.11.60+dfsg-1.3ubuntu0.22.04.3)
imagemagick/jammy-updates,jammy-security 8:6.9.11.60+dfsg-1.3ubuntu0.22.04.3 i386
imagemagick/jammy 8:6.9.11.60+dfsg-1.3build2 i386
It doesn't work, and I don't know why.
But actually that's not too bad: even if the format is missing, the browser still displays the image.
I've also tried installing ImageMagick 7.1, but I couldn't get it to work with PHP, so I haven't been able to test it.
When I save the images from the browser, they are saved without a file extension, but the browser still knows it is an image.
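
A possible fallback here (an untested suggestion, not something from the PR): when the old ImageMagick 6.x build returns an empty mime type for WebP, derive it from the format that was just set.

// Fallback sketch for builds where getImageMimeType() returns nothing for WebP
$mimetype = $image->getImageMimeType();
if (!$mimetype) {
    // e.g. "WEBP" -> "image/webp"
    $mimetype = 'image/' . strtolower($image->getImageFormat());
}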

…eMimeType

[Line 283](https://github.com/JanisPlayer/memories/blob/master/lib/Controller/ImageController.php#L283) needs to be changed, and the setting name memories.image.highres.convert_all_images_formarts_enabled is wrong.
It was unnecessary as it was just an idea to increase performance.
@JanisPlayer (Author)

I just reviewed the code that I wrote back then, and I realized that there might be something we could do without.
memories.image.highres.convert_all_images_formats_enabled seems unnecessary and doesn't make sense based on its description.
memories.image.highres.format: since JPEG has the fastest encoder performance, as demonstrated above, we could rename this option to something like memories.image.highres.compression and use it only to enable compression.
Since you mentioned focusing more on performance and given that JPEG is still unbeatable in that regard, WebP could only be used for images with an alpha channel or animation, to reduce traffic.
This way, performance issues would be resolved without complex limitations, although other formats would no longer be possible.
What remains, however, is a smaller file size.
For this, it would actually only be necessary to check the database to see if the file is truly smaller than the original.
So, this would be my idea of how to rework the whole thing.
Its sole purpose would then be to minimize traffic.
In fact, a setting in the admin area of the web interface would be interesting, as would detecting whether the page was loaded on a mobile device and letting the device decide.
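
A minimal sketch of that reworked behaviour (illustrative only; it assumes Imagick's alpha detection works for the input format, which, as noted later in this thread, is not always the case): keep JPEG as the default and switch to WebP only when the source actually has an alpha channel.

<?php
// Sketch: JPEG stays the default; WebP is used only where it pays off.
// Input path and quality value are placeholders.
$blob = file_get_contents('input.png');

$image = new Imagick();
$image->readImageBlob($blob);

// Truthy when the image carries an alpha channel
// (bool on current Imagick, an int constant on older builds)
$hasAlpha = (bool) $image->getImageAlphaChannel();

$image->setImageFormat($hasAlpha ? 'webp' : 'jpeg');
$image->setImageCompressionQuality(85);

$blob = $image->getImageBlob();
$mimetype = $hasAlpha ? 'image/webp' : 'image/jpeg';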

@pulsejet (Owner)

@JanisPlayer I'll try to take a look later today and get back. Basically I agree WebP would be useful if there's an alpha channel etc. What's most important here is to let the admin know what they're doing with good documentation. Especially if the output resolution is capped then WebP encoding performance may be acceptable.

> For this, it would actually only be necessary to check the database to see if the file is truly smaller than the original.

Sure, but that likely needs you to re-encode the image anyway. If that's discarded it's a wasted effort.

Memories in general needs a better server-side cache which may solve a lot of these issues. With a well-sized cache these WebP (or even AVIF) images can be used easily.

@JanisPlayer (Author) commented Sep 6, 2023

> Sure, but that likely needs you to re-encode the image anyway. If that's discarded it's a wasted effort.

"I experimented with estimating the potential file size of a JPEG, not the resolution, to determine whether compression would be worthwhile. However, accurately predicting the file size is challenging, and the estimates are often imprecise.

I then attempted to set a target file size for the JPEG. In Imagick Desktop, there's a command for this purpose. In PHP, I had to take a different approach.

To make this process efficient, it's necessary to reduce the steps and estimate how much each step could reduce the file size with minimal effort. This allows for resizing the image to a target file size at a reasonably good speed. However, there's room for improvement in the estimation techniques.

The primary challenge is that I can't reliably predict whether data savings compared to the original file will occur with the initial compression. Additionally, achieving an ideal target file size for users without creating multiple JPEGs at different qualities can be difficult.

While there are ways to enhance this process, it often involves generating multiple versions of a JPEG if the initial estimates don't meet the desired file size, especially when the goal is to keep the file size below a certain threshold compared to the original."

I also tried reducing the resolution of the image to lower computational workload and memory usage. While it can be estimated to some extent, achieving perfection in this regard is still challenging. The only approach that works quite well is when the preview has lower resolution than the original, as that can be estimated more accurately. However, finding a solution with minimal computational effort and still producing good results for this scenario is genuinely difficult. So, I'm not sure if that's possible.

<?php
error_reporting(E_ALL);
ini_set('display_errors', 1);

// Path to the input image
$inputImagePath = 'input.avif';

// Target size in KB
$targetSizeKB = 100;
//$targetSizeKB = 100 * 60 / 100; // Scaling down to 70% resolution while maintaining 55-60% of the file size yields almost the same result as using 100% resolution and 100% file size. However, that still doesn't suffice.

// Create an Imagick object
$image = new Imagick($inputImagePath);
$image->stripImage(); // Remove metadata
$image->setImageFormat('jpeg');
$image->setImageCompression(Imagick::COMPRESSION_JPEG);
//$image->scaleImage(
//    (int)($image->getImageWidth() * 0.7),
//    (int)($image->getImageHeight() * 0.7)
//);

$original_image = $image->getImageBlob();

// Compress the image to the desired size
$image->setImageCompressionQuality(100); // Quality setting

// Iterative compression to reach the target size
$newQuality = $image->getImageCompressionQuality() - 5;
while (strlen($image->getImageBlob()) > ($targetSizeKB * 1024)) { // If it could better predict the size of the next step based on the data from previous steps, it would be more efficient.
    $newQuality = $newQuality - 20;
    if ($newQuality <= 0) { // Minimum quality
        break;
    }
    $image->readImageBlob($original_image);
    $image->setImageCompressionQuality($newQuality);
}

header("Content-Type: image/jpeg");
echo $image->getImageBlob();

// Release the Imagick object
$image->clear();
$image->destroy();
unset($original_image);
?>

<?php
// Very rough estimate of the JPEG file size from pixel dimensions and quality (no decoding needed)

// Pixel dimensions of the image
$width = 1920; // Width
$height = 1080; // Height

// Quality setting (in percentage)
$quality = 90; // Example: 90 percent

// Calculate the estimated file size in bytes
$estimatedSize = ($width * $height * $quality) / 100;

// Convert to kilobytes (KB)
$estimatedSizeKB = $estimatedSize / 1024;

// Estimated JPEG size is 70% smaller.
$estimatedSizeKB = $estimatedSizeKB * 30 / 100;

echo "Estimated file size: " . round($estimatedSizeKB, 2) . " KB";

> I'll try to take a look later today and get back. Basically I agree WebP would be useful if there's an alpha channel etc. What's most important here is to let the admin know what they're doing with good documentation. Especially if the output resolution is capped then WebP encoding performance may be acceptable.

Yes, WebP often results in better file sizes compared to PNG, and it's relatively easy to implement. However, I recently noticed that Imagick's ability to detect whether an image uses an alpha channel or not may not work with every format; that could be the only issue.

Absolutely, I completely agree. Documentation plays a crucial role in situations like this. If it's misconfigured or not properly estimated by the admin, it can significantly slow down Nextcloud or potentially lead to issues.

> There's some ways to do the UI (see https://dimsemenov.github.io/photoswipe-deep-zoom-plugin/), but the backend is very non-trivial. This is also how Google Photos solves this problem.

I also once asked ChatGPT whether there is a PHP solution. It actually kept trying to rewrite this project using GD's imagecopyresampled and provided me with some very interesting versions that needed correction. I also examined in the network traffic how Google delivers the initial image. First, a preview is created at the display resolution that the user can view. Then, at different zoom levels, similar to the alternative you and I mentioned, the image is divided further until the last zoom level splits it into many blocks. It is clear that encoding the new images takes time and cannot be done live.

After all the experiments, I've learned one thing: it's challenging to implement this live while saving enough data, as initially assumed. So, the best solution, if it needs to be compressed and consume fewer data, might be as follows:

First, make a rough estimate: is the resolution lower than the original image? What is the quality? This is quite imprecise except for the resolution, but it is sufficient for a rough estimate and a check of whether the original image exceeds a maximum size that would always be impractical for the user. Then, perhaps another solution as mentioned above: recompress in decreasing resolution steps until a target value below the original is reached, and try this value at full resolution. The downside of this method is that it's about 5-10% inaccurate at 70% of the original resolution, and the accuracy worsens as the resolution deviates further. It's still adequate for a usable estimate, but it doesn't save a tremendous amount of processing power, so producing a smaller file takes a lot of effort. I think it's not worth it; it's better to just output the original if the initial estimate fails.

So, I did my best, but unfortunately, it seems quite challenging, and the solutions for Deep Zoom that come to mind are almost like generating previews at different resolutions or rewriting Nextcloud in that regard, which, of course, consumes storage depending on the settings but causes less traffic, although it also depends on that. I had more fun experimenting with code for myself and can now say that solutions exist, but whether they are really useful is questionable.

I can only say that compression is possible if the conditions and settings are optimal. If the estimate is off, performance is consumed. And better solutions simply require a new form of preview generator, not just in one app. Whether it's worth it, you'd have to test again. I would say that if you achieve the goal of not sending images that are larger than 3 MB, that's already a significant advantage.

Sorry for the many edits and the length, but I was curious about what is possible. 😄
