When is a lazy-loaded image "about to intersect the viewport" #5408
What is "the viewport" as specified here? Layout viewport or visual viewport? Needs to reference the non-existent CSS Viewport spec (which I'm working on).
+Scott Little <[email protected]>
Hi! I think it should be configurable, like the Intersection Observer API. In this case, we could use
How would you configure it? Would you configure it differently for different situations? What should the default be?
One way is to have attributes that mimic what the IntersectionObserver API provides. That's what a lot of JS-based lazy loaders do. I could also imagine a more configurable API that could be media-query-esque but respond to different effective connection speeds. But at minimum, parity with IntersectionObserver would go a long way.
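For illustration, a minimal sketch (not any particular library) of the kind of IntersectionObserver-based loader these libraries build on; the data-src convention and the 200px default are hypothetical. The point is that rootMargin and threshold are exactly the knobs a declarative version could mirror.

```js
// Minimal lazy loader exposing the same knobs as IntersectionObserver itself.
function lazyLoadImages({ rootMargin = '200px 0px', threshold = 0 } = {}) {
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const img = entry.target;
      img.src = img.dataset.src;   // swap in the real URL
      obs.unobserve(img);          // load once, then stop observing
    }
  }, { rootMargin, threshold });

  document.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));
}

// e.g. start loading when an image is within 200px of the viewport
lazyLoadImages({ rootMargin: '200px 0px' });
```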
Ok, but I meant, if it was possible to configure the margin, how big would you make it?
By default, I typically do one viewport height's distance.
If I was trying to be super smart about it, I'd factor in:
Especially numbers 3 & 4. If the image is just below the fold on initial pageload and the user hasn't scrolled at all yet, I'd want to load the image at the earlier of 2 events: the user begins to scroll or window.onload has fired.
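A rough sketch of that policy, under the same hypothetical data-src convention as above: a rootMargin of one viewport height, with below-the-fold fetching held back until the earlier of the first scroll or the window load event.

```js
function lazyLoadBelowTheFold() {
  let started = false;
  const start = () => {
    if (started) return;   // run once, whichever event fires first
    started = true;
    const observer = new IntersectionObserver((entries, obs) => {
      for (const entry of entries) {
        if (!entry.isIntersecting) continue;
        entry.target.src = entry.target.dataset.src;
        obs.unobserve(entry.target);
      }
    }, { rootMargin: `${window.innerHeight}px 0px` });  // one viewport height
    document.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));
  };
  // Earlier of the two events: first scroll, or window load.
  window.addEventListener('scroll', start, { once: true, passive: true });
  window.addEventListener('load', start, { once: true });
}
lazyLoadBelowTheFold();
```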
Besides the above-mentioned things, another factor to include is scrolling speed. At the extreme, how fast the user is actually scrolling, but in a simpler form, how fast users typically scroll on a given device. Just guessing, but I'd expect that, usually, users would scroll more than a viewport height faster on a typical mobile device than on a typical desktop/laptop device. I'm not sure whether waiting for the load event before loading any images below the fold is right. Often the slowest outliers in a page are ad frames, and users can read and scroll without waiting for those.
Correct, that's why I said to wait for load OR scroll, whichever is earlier. But if ad frames are a sticking point, amend my earlier statement with window.load (net of subresources). The reason to wait is that while users often do scroll before "above the fold" is completely loaded, it's less likely, and you don't want bandwidth contention between above-the-fold images, CSS, and JS vs. below-the-fold images.
Research of JS libraries that do lazy-loading
Analysis summary
To do
I haven't yet looked at httparchive to see how web pages typically configure the
Comments on Twitter
See this twitter thread: https://twitter.com/bocoup/status/1243580618811666432 A few points:
I tried to figure out how commonly these libraries are used in httparchive. This was a bit tricky, and the actual usage might be different from this, but I hope this gives an indication.
This roughly matches the number of stars on GitHub, though -- lazysizes is most common, followed by lazyload.
Looking at only pages that configure the
OK, so, what can we conclude?
I think the browser is usually in a better position to determine when to load images based on the user's connectivity and scrolling pattern and such. But this should be in the same ballpark as what web developers are doing, and should be consistent between browsers, so that web developers want to use the native feature over JS libraries. Ideally the behavior should be smart enough that users aren't annoyed by seeing images start loading after they scroll (which JS libraries often fail at, as far as I can tell). I think the browser should also have some margin for images in element scroll containers (for image carousels) and iframes, not just the top-level page scrolling. In some situations the web developer is in a better position to predict when it's a good time to load an image (because the page might be driving scrolling, e.g. an image carousel). There is an API already for "please load this image now", though -- set loading to eager.
@zcorpan what amazing research you've done here. Indeed, lazysizes has the lion's share of usage here, but I'd hesitate to draw any conclusions about its default configuration being a signal to constrain what browsers make available to developers. What makes lazysizes so awesome is:
But what makes it even more awesome is the
@zcorpan asked me whether I can describe the rationale and the functionality of lazySizes' flexible expand feature. The rationale of this feature is the idea that lazy-loaded elements that are currently not inside the viewport should not consume network bandwidth while other in-viewport elements are currently loading. In the end it should give you a better UX. On one hand we preload things before the user can see them, so the user doesn't have to wait. On the other hand, as soon as the user sees something that needs to load, we don't preload, because this would divide the bandwidth for currently unneeded elements. I can describe some mechanics because they might be interesting for some implementation ideas.
Depending on the loading state of the document and how many lazy elements are currently loading, lazysizes switches between those visibility checks and expand values. For example, while the page has not loaded and the user has not scrolled (you had the same idea with ad frames as me), we use the shrunk expand value and do all visibility checks. After that we switch between them based on how many elements are currently loading. So first we start with the most conservative check (0 margin + all visibility checks). After that, if we have no currently loading elements, we expand our search.
About scroll speed:
About the scroll container check:
I'm currently on nicotine detox so I really have difficulty concentrating, sorry for that.
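As a rough, hypothetical sketch of the adaptive-expand mechanics described above (not the actual lazysizes code): keep the margin at zero while the page is still loading or other lazy images are in flight, and expand it once the network is otherwise idle. The 360px expanded value and the data-src convention are invented for the example.

```js
const SHRUNK_MARGIN = 0;      // px: only load what is (nearly) in view
const EXPANDED_MARGIN = 360;  // px: hypothetical "preload ahead" distance
let inFlight = 0;             // lazy images currently downloading
let pageLoaded = false;
window.addEventListener('load', () => { pageLoaded = true; }, { once: true });

// Small margin while the page or other lazy images are still loading,
// larger margin once the network is otherwise idle.
const currentMargin = () =>
  (!pageLoaded || inFlight > 0) ? SHRUNK_MARGIN : EXPANDED_MARGIN;

function maybeLoad(img) {
  const rect = img.getBoundingClientRect();
  const margin = currentMargin();
  const near = rect.top < window.innerHeight + margin && rect.bottom > -margin;
  if (!near) return;
  inFlight++;
  const done = () => { inFlight--; };
  img.addEventListener('load', done, { once: true });
  img.addEventListener('error', done, { once: true });
  img.src = img.dataset.src;
  delete img.dataset.src;   // don't process this image again
}

// Re-evaluate candidates on scroll/resize (throttling omitted for brevity).
const check = () =>
  document.querySelectorAll('img[data-src]').forEach(maybeLoad);
window.addEventListener('scroll', check, { passive: true });
window.addEventListener('resize', check);
check();
```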
Sounds like we need more attributes to make it more configurable and cover more use cases, but with sensible defaults that different browser vendors reach a consensus about. Warning: assumptions ahead. Other times we might want an image to download eventually. Sometimes we might even change our mind after DOMContentLoaded and want to set an attribute to say that this image, which was flagged as "only download if about to be seen", should now also be downloaded after onload, and the lazy-load thread would notice this flag. Product images below the fold might have SEO juice and so would want to be loaded after the onload event, although I am not sure about this. The bots would still have the image URL, alt text, title text etc. Responsive images can further complicate matters on whether the designer needs the image to be there to hold the layout together, although they shouldn't be doing this. Will the picture element have loading="lazy" for each of its different media query attributes or just for the tag itself? Nowadays Microsoft has thousands of HTML/CSS tests. Has anyone heard from them about their defaults, or are they only using what Chromium provides? Sorry for the rambling, I just want this to be really useful in different cases.
@aFarkas , thank you, that is very useful!
So I think there are two common cases for fast scrolling on touch devices:
For the first case, I think the browser already knows where the scroll position will end up, and could start loading those images as soon as the scrolling momentum is known. For the second case, it seems a bit harder to get right. On desktop browsers (without touch), the scrolling patterns are probably different. If the user uses the scrollbar thumb to quickly scroll somewhere, there is no scrolling momentum to predict the final scroll position.
I'm not convinced of this. I think we should improve the defaults first, and then see what the remaining problems are (if any).
Yes. 🙂
Could they remove the entire footer?
When would you want to do this? Do you have a URL where this is done today?
I think this doesn't change anything for this issue.
You can set the right aspect ratio for the image with the width and height attributes.
The
Which tests do you mean?
I assume the latter for this case.
There are other things, such as links in the footer that some people want to see and will scroll to the bottom for. I was just giving an example of an image that most of the time wouldn't need to be downloaded, but would be if it is going to be seen soon.
I'm not sure, just another scenario I thought of. Maybe something like this: some user interaction would cause the browser to scroll into view an element that is way down the page, while nearby an image was set to lazy-load when about to be seen (and would be loaded if the user manually scrolled down there), but because of some earlier interactions you are confident that the scroll-into-view is likely to happen, and so you want the image to download after onload as a sort of preload. Pretty contrived example, probably not worth worrying about, and I don't have any URL examples.
It's so long ago, I can't remember any real details. I think it was when they were working on IE8 and trying to be better with standards, mostly CSS. Around the Acid3 era, I think. Some people there developed lots of tests to check they were following standards, found some issues with the descriptions/explanations of some of the standards, and in doing so helped make them better. I have never worked for Microsoft, so I don't know any internal details.
Ok, then I think a normal
You can tell the image to load by changing the loading attribute from lazy to eager (see the sketch after this comment).
As for tests, ok. We'll write new tests for this issue in https://github.com/web-platform-tests/wpt when we change the spec. 🙂 Cc @gregwhitworth for any input from MS.
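For example (relying on the HTML spec's behavior when the loading attribute changes state; the preload-now class is just an illustrative way to select the images you want to force-load):

```js
// Switching a deferred image from lazy to eager triggers its load right away.
for (const img of document.querySelectorAll('img[loading=lazy].preload-now')) {
  img.loading = 'eager';
}
```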
Regarding footers, I think the web platform is missing lazy CSS images |
For WebKit the current approach is to use compositor information (https://bugs.webkit.org/show_bug.cgi?id=203557). On my 15" MacBook Pro this typically gives values around 1800px on my test page (https://mathiasbynens.be/demo/img-loading-lazy) and on an iPhone SE (simulator) around 800px.
Thanks, @rwlbuis. Can you give a summary of the approach taken in your patch, and the rationale?
I must reiterate this. No matter whether you have a fixed "margin" of 100px, 300px, 800px or 1800px, a flexible/adaptive value is always much more powerful. Think of the default situation during the onload phase: you have two images in view, but due to your extended margin value of 100-1800px you are loading, for example, 6 images in parallel. Those 4 unnecessary image downloads are cutting the bandwidth literally in half. Of course, as soon as those two images are loaded you can start to preload those 4 images. Also, in earlier versions of lazysizes I had much higher extended margin values than now, and a lot of developers were complaining about it (partially because they did not understand how the adaptive margin speeds up in-view images compared to out-of-view images). By cutting it down to a max of the
I agree that this needs to be specified more precisely, rather than just saying "it's based on something implemented in WebKit". WebKit changes the compositor coverage for scrollables based on scrolling velocity, in ways that could change in future. I don't think web-facing behavior should be built on top of it (sorry, I did suggest it initially, but now think that was a mistake).
Hey folks. I wanted to provide some background for how we arrived at the current thresholds in Chromium, in case it helps with alignment on the question "when should we consider an image is about to intersect with the viewport".
Scroll speed: We believe how fast users typically scroll on a given device matters (perhaps similar to @othermaciej's point). We attempted to optimize for perceived performance by setting conservative thresholds we believed would minimize how often users would quickly scroll down to an image that has not yet loaded - ideally, you shouldn't be staring at some blank pixels. Part of this is to work around a platform limitation: you cannot easily configure a placeholder for a natively lazy-loaded image without using JavaScript. JavaScript lazy-loaders often have more flexibility here. It's often possible to, say, use a generic placeholder image, LQIP, SQIP, etc., but the platform doesn't exactly solve for this. We can reserve dimensions for the image, maybe even set some UA-specific background-color, but nothing as close (yet) to what's possible in userland.
Network quality: As captured in our implementation, we adjust thresholds based on the user's effective connection type. Given how widely Chromium is used in regions where network quality can be highly variable, we wanted to balance giving users on a fast connection different thresholds (i.e. load more images on 4G) while keeping in mind quality and data-plan costs, loading less if you're on, say, slow 2G/3G.
Now, I personally believe Chromium's current thresholds are different enough from what users get by default with libraries like LazySizes that they can sometimes come across as unintuitive. Like @mikesherov, I often configure my JS lazy-loading libraries to use one viewport height's distance for rootMargin. The data savings here can be significant (e.g. ~40-50%). In contrast, Chromium's current thresholds might get you ~10-15%.
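As a userland illustration of the network-quality point (a sketch, not Chromium's internal logic): where the Network Information API is available, a page could pick its own rootMargin from navigator.connection.effectiveType. The multipliers below are invented; a data-saving policy could equally shrink the margin on slow connections rather than grow it to hide latency.

```js
// Pick a rootMargin based on effective connection type, falling back to one
// viewport height when the Network Information API isn't available.
function lazyRootMargin() {
  const ect = navigator.connection && navigator.connection.effectiveType;
  const vh = window.innerHeight;
  switch (ect) {
    case '4g':      return `${vh}px 0px`;      // fast: one viewport ahead
    case '3g':      return `${vh * 2}px 0px`;  // slower: start earlier
    case '2g':
    case 'slow-2g': return `${vh * 3}px 0px`;  // slowest: start much earlier
    default:        return `${vh}px 0px`;
  }
}

const connectionAwareObserver = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    entry.target.src = entry.target.dataset.src;
    obs.unobserve(entry.target);
  }
}, { rootMargin: lazyRootMargin() });
document.querySelectorAll('img[data-src]').forEach(img => connectionAwareObserver.observe(img));
```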
+1. I would separate this out into two questions: what should the defaults be, and what should the API surface for supporting configuration be? FWIW, I would personally love to give developers control over lazy-loading sensitivity, whether this is done in a preset manner (e.g.
If I was throwing longer-term questions and ideas out there...
I can echo that feedback. Here's a scenario I'm seeing on a website I currently maintain (and I believe this is a common pattern):

<link rel="stylesheet" href="stylesheet.css">
<!-- In viewport -->
<div style="background-image: url(hero.jpg)">
</div>
<!-- Below viewport -->
<img loading="lazy" src="product1.jpg" alt="">
<img loading="lazy" src="product2.jpg" alt="">
<img loading="lazy" src="product3.jpg" alt="">
<img loading="lazy" src="product4.jpg" alt="">

Browsers will start to download the stylesheet and the product images. Once the stylesheet is downloaded and layout performed, hero.jpg will start downloading, but it is now competing for bandwidth with images that are irrelevant at the moment. During the initial load, Firefox's current behaviour has my preference.
Would prefer a
We do somewhat, but existing JS solutions show empty pixels often enough that maybe having defaults that match them is good enough. Aggressive fetching seems worse than empty pixels. I do think that giving authors some customizability of lazy loading would be reasonable, but I'm not sure what that would look like declaratively. Maybe it would be OK for authors who want something more than the default behavior to fall back to Intersection Observer.
To keep this on track, I'd like to scope this issue to getting consistency in the behavior for the feature as-is. New API for placeholder image or customizing the thresholds should be separate issues.
Cases to consider
Scrolling vertically & horizontally for:
Input to the decision model
The things that an implementation could use as input for the decision:
Not inputs to the decision model
Privacy
The implemented behavior should not expose information about the user that the page doesn't already have access to otherwise. For example, if the implementation doesn't expose battery levels, the battery level should not be an input to the model. The "typical scrolling speed on the current device" shouldn't be so precise as to help fingerprint a user.
Issues
So, I'm not sure how this would work. In particular, for the image carousel use case, using only the implicit root (which I think browser implementations do now) would mean that there is no threshold for the element scroll container case, so those images would only start loading after they are partially in view. There is likely a performance hit to observing all scrollable elements when lazy images are used. Is there a good way to make it "do what I want" without adding more API surface? Or is the web developer explicitly setting the root the best way to solve this? Edit: filed w3c/IntersectionObserver#431
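In the meantime, the "web developer explicitly sets the root" option for a carousel might look like this sketch (the #carousel id, the data-src attribute, and the one-carousel-width margin are illustrative):

```js
const carousel = document.querySelector('#carousel');
const carouselObserver = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    entry.target.src = entry.target.dataset.src;
    obs.unobserve(entry.target);
  }
}, {
  root: carousel,                  // the element scroll container, not the viewport
  rootMargin: '0px 100% 0px 100%'  // expand one carousel-width to each side
});
carousel.querySelectorAll('img[data-src]').forEach(img => carouselObserver.observe(img));
```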
Should lazy images in iframes use the implicit root, or the images' node document as the intersection root? The former takes away the rootMargin if the origins aren't similar-origin, per the IntersectionObserver spec. Edit: in #5510 we've set
It seems that Chrome currently has (for the top-level document) logic that if the lazy image is within ~3000px of the visible viewport, it starts loading it. Firefox starts to load it once it should already be rendering the first row or column of pixels of the image. Clearly Firefox's logic is always going to cause visibly delayed rendering. On the other hand, Chrome will often load the whole page. How about keeping track of a preload margin per site instead? Maybe start with 500px but keep track of how many pixels of extra margin you had at the time the image was fully loaded; if you had more than, say, 50px extra margin, reduce the margin. If you had less than 50px extra margin, increase the margin. How much to change the margin at once? I'd suggest trying to target the 50px extra margin and doing a binary search towards it. For example, start loading an image by default when it's closer than 500px from the viewport. Once that image is fully loaded and the user is still 400px from the image, you can compute that the image took "100px" worth of loading time and your preload margin should be closer to 150px (includes the 50px extra margin above). Split the difference and use 0.5*(500px - 150px) = 175px as the new safety margin. This would result in a pretty fast converging algorithm needing only one integer value of memory per site. I think one value per site is required because different sites have such huge variance in loading speed. Being logically a binary search, it should be able to quickly adjust to scrolling speed changes even within a single infinite scroll page. The extra margin above is needed to combat the issue that different images will have different byte sizes even if the pixel dimensions were the same. With suitable tuning, the above algorithm should be pretty good at getting the images just in time unless the user changes scrolling speed very rapidly.
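A sketch of one reading of this idea. The update rule here moves the margin halfway toward "the margin that would have been just enough, plus 50px of slack", which differs slightly from the 175px arithmetic in the comment; persisting the per-site value is omitted.

```js
const TARGET_EXTRA = 50;   // px of margin we want left over when the load finishes
let preloadMargin = 500;   // px; the single remembered value for this site

// Call when a lazily loaded image finishes loading, passing the distance (px)
// still remaining between the image and the viewport at that moment.
function updateMargin(remainingDistance) {
  // Margin actually consumed while the image loaded, plus the target slack.
  const needed = (preloadMargin - remainingDistance) + TARGET_EXTRA;
  // Binary-search style: move halfway from the current margin toward the needed one.
  preloadMargin = Math.max(0, Math.round((preloadMargin + needed) / 2));
}

// Example from the comment: loading started at 500px out and finished while the
// user was still 400px away, so only ~100px of margin was needed; the stored
// margin moves from 500px toward 150px.
updateMargin(400);
```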
If the user is currently scrolling fast, it might be sensible to load only one lazy-loaded image in parallel, to be able to skip more images if the user scrolls so fast that all images cannot be loaded in any case. That should reduce the latency to start loading the visible images once the user slows down enough.
Thanks @mikkorantalainen, that sounds like an interesting approach. It's difficult to evaluate how well it would work in practice without an experimental implementation. It seems to me, though, that it may end up with too small a margin if the user scrolls slowly for a while and then scrolls quickly, for example. If we'd like images to be available when users quickly scroll a screen length or so, I think the implementation needs to work from the assumption that the user can do so at any time.
Sorry for the late reply here, I'd just like to add some more explanation for Chrome's choices of thresholds to what zcorpan and Addy mentioned earlier. Chrome currently uses relatively conservative thresholds, as other folks have mentioned above - typically 3000px on a fast network, and larger thresholds on slower networks (since the images are expected to need longer to load in). These current thresholds used for loading=lazy are the same ones that were developed for the Automatic LazyLoad behavior that Android Chrome users who've turned on Lite Mode will see, which attempts to lazily load page content where suitable (even if it's not marked loading=lazy) in order to reduce data usage and speed up critical content. The main regression metric that we've focused on in Chrome for these thresholds is what we're calling image visible load time, which measures how long an image is in the viewport before it finishes loading. The goal was to choose thresholds large enough that we can minimize visible load time regressions, such that typically the user experience would match what they'd see without lazy load, plus the data savings and speedups of critical content. The initial thresholds are purposely overly conservative, since that way the user experience errs more on the side of matching what users would see without any lazy loading. I am experimenting with more aggressive thresholds (1250px on 4G-speed networks) that get some additional data savings without any significant regressions in visible load time. I'm hoping to launch these more aggressive thresholds for Chrome soon. I've also experimented with even more aggressive thresholds (750px on 4G-speed networks), but at that point the visible load time regressions start to become more noticeable. I've also experimented with using less conservative thresholds for slow networks (e.g. 2G-speed networks), but from the data so far, it looks like there isn't much room to get more aggressive for slow networks.
I've implemented my earlier comment in #5917 except not including this:
This has to do with fetch priority, and I think is a bit orthogonal to this issue. Image priority could depend on in-viewportness regardless of the
The issues in IntersectionObserver that are related to this issue: |
This was discussed a few days ago in the WHATWG TPAC breakout session. Minutes at https://www.w3.org/2020/10/26-whatwg-minutes.html#lazy
There will be another TPAC breakout session tomorrow (30 October 14:00–15:00 UTC) to discuss changes to IntersectionObserver to better support lazy-loading use cases. https://www.w3.org/2020/10/TPAC/breakout-schedule.html#intersectionobserver
https://html.spec.whatwg.org/multipage/images.html#updating-the-image-data:lazy-loading-attribute
https://html.spec.whatwg.org/multipage/rendering.html#intersect-the-viewport
When to start loading a lazy-loaded image is a key aspect of the feature, but the spec doesn't give advice beyond what is quoted above. Right now, different implementations do different things: Chromium starts loading early (I think currently 3000px to 8000px before entering the viewport, depending on effective network speed and latency), Gecko and WebKit start loading late (when at least 1px is visible). See https://www.ctrl.blog/entry/lazy-loading-viewports.html -- they argue that the implemented extremes are too early and too late; nobody has the goldilocks "just right" behavior, yet.
From my experiments, it seems Chromium only applies the "margin" for top-level page scrolling. For images that are in scrollable elements, or in iframes, the loading starts when the element is at least 1px visible. The spec doesn't differentiate between different cases of "about to become visible". The element scroll container case is common for image carousels.
See this demo: https://lazy-img-demo.glitch.me/
To view the same demo in an iframe: https://glitch.com/edit#!/lazy-img-demo - click "Show" and then "Next to The Code".
I'm curious what JS libraries that implement lazy-loaded images do. Have they iterated on this, and know something we could apply here?
Usually, details like this are left to the UA to optimize. However, I think it's important to get some consistency in implementations for web developers to be able to use the feature and know that browsers won't load all images anyway (because their scrollable area is smaller than the browser's lazy margins) and won't load images too late, resulting in users always seeing images load after they're within the viewport.
cc @domfarolino @bengreenstein @emilio @smfr @othermaciej @rwlbuis