Improve spacing between srcset widths #185
Conversation
The largest size was being repeated in certain conditions: the value stored in `prev` was exact (possibly fractional), while the number saved in the list of resolutions is rounded to an even number, yet it was the `prev` value that was being compared against `maxWidth`. This fix removes the code which pushes `maxWidth` onto the list *after* the loop finishes, and compares the last value in the list to `maxWidth` rather than the running exact fractional width. Together these eliminate the places where the bug could subtly appear. The loop then needs refactoring: the list is initialized to already contain `minWidth`, we exit early if `maxWidth` is equal (which has the side benefit that the cache won't be polluted with trivial cases), and the loop body pushes the *new* rather than the previous value, after rounding and clamping.
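To illustrate, here is a minimal sketch of the refactored loop as described; the function and variable names, and the `1 + 2 * widthTolerance` increment, are assumptions for illustration rather than the library's actual code.

```js
// Hypothetical sketch of the refactored loop described above.
function targetWidths(minWidth, maxWidth, widthTolerance) {
  const incrementRatio = 1 + 2 * widthTolerance; // assumed increment scheme
  const widths = [Math.round(minWidth)]; // list starts out containing minWidth
  if (minWidth === maxWidth) {
    return widths; // exit early for the trivial single-width case
  }
  let width = minWidth; // running exact (possibly fractional) width
  while (widths[widths.length - 1] < maxWidth) {
    width *= incrementRatio;
    // Push the *new* value, rounded to an even number and clamped to maxWidth,
    // so the value compared against maxWidth is the one actually in the list.
    widths.push(Math.min(2 * Math.round(width / 2), maxWidth));
  }
  return widths;
}

// e.g. targetWidths(320, 1440, 0.1)
//   -> [320, 384, 460, 552, 664, 796, 956, 1146, 1376, 1440]
// which matches the "before the change" list in the description below.
```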
This better describes its purpose, and performing the "double and add one" at this stage means the logic is simpler later. It does still make sense to use the raw user-supplied value in the cache key, however.
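Something like the following shape seems to be implied (the names are illustrative, not the library's):

```js
// Illustrative only: the cache key keeps the raw user-supplied tolerance,
// while the ratio is normalized once up front.
function srcsetCacheEntry(minWidth, maxWidth, widthTolerance) {
  const cacheKey = `${minWidth}/${maxWidth}/${widthTolerance}`; // raw value in key
  const incrementRatio = widthTolerance * 2 + 1; // "double and add one" done here
  return { cacheKey, incrementRatio };
}
```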
I just identified a bug with this: if I specify an odd `maxWidth`, I can get a repeated value. Working on it.
With some basic maths we can calculate an adjusted increment ratio from the width tolerance, which leads to a sequence of widths that remains regular(ly exponential) right up to and including the largest width. This avoids the commonly seen case where the largest two sizes are spaced significantly closer together than any other pair.
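The calculation is the same one used in the brute-force script further down. In outline, with `requestedRatio` standing for whatever per-step ratio the width tolerance implies:

```js
// Keep the number of exponential steps the requested ratio would need,
// then solve for the ratio that lands exactly on maxWidth.
function adjustedRatio(minWidth, maxWidth, requestedRatio) {
  const numSteps = Math.ceil(Math.log(maxWidth / minWidth) / Math.log(requestedRatio));
  return Math.pow(maxWidth / minWidth, 1 / numSteps);
}

console.log(adjustedRatio(320, 1440, 1.2)); // ≈ 1.182: same step count, slightly tighter ratio
```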
I take it from the lack of comments so far that this hasn't been reviewed yet, so I'm going to meld my fix into the existing commits.
Force-pushed from f336386 to dd1b597.
The failing set of cases included the odd-`maxWidth` case mentioned above. I fixed this with a check for whether the running fractional width rounds to the target width; if so, the loop doesn't run again. This probably never would have occurred in normal usage!
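In sketch form, grafted onto the loop from the earlier sketch (the exact form of the check is my guess from the description):

```js
while (widths[widths.length - 1] < maxWidth) {
  width *= incrementRatio;
  // Guess at the fix: if the exact running width already rounds to the
  // target width, emit maxWidth itself and stop, instead of emitting a
  // near-identical rounded value and then maxWidth again next time around.
  if (Math.round(width) >= maxWidth) {
    widths.push(maxWidth);
    break;
  }
  widths.push(2 * Math.round(width / 2));
}
```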
Hey @tremby! @imgix would really like to thank you so much for the activity on this repo! Unfortunately this is a change that we are not on board with, and I'd like to explain a bit why, because we value our contributors' contributions and don't want you to think that we are just dismissing this without reason.

Firstly, we place large weight on changes to this behaviour, since this would require changes to ~20 of our libraries; we require 100% consistency across all our SDK libraries for the srcset generation feature. This is important for cache performance.

Mainly, I think the fundamental problem with this PR for us is that it shifts the goal of the srcset feature away from the goal it was designed for, to an arbitrary goal. Our original goal with this srcset feature was to achieve a suitable trade-off between visual performance, bandwidth usage, and cache hits. This is the reasoning behind the width tolerance of 8%, which was very intentionally chosen. Therefore, it's important for us that the widths scale up according to this increment percentage every time, because this will ensure optimal performance across the majority (99%) of browser widths and images. If we were to adopt this PR, we would be shifting our goal towards something like "smoothness of the srcset widths curve", or "making sure that every value is correctly spaced from every other value", rather than the aforementioned goal. This is not a trade-off we are willing to make, as we value the original goal of this feature, even if that causes small irregularities. We do understand that the issue you raised is an issue/"bug" with our srcset-generation logic, but as with most problems, no solution is perfect, and in this case this is a trade-off we feel we should make.

Furthermore, I think the irregularity at the top end of the srcset range is a small issue, since it will only affect a very small subset of users. Also, I think the goal of optimizing for this 100-8192px range is the wrong way to look at it. Honestly, the 8192px hard limit is a constraint of the imgix API, and this will be changed in the future. It would be bad for cache performance if, when we change this, all the widths changed and thus all the caches were invalidated. Instead, it is more important for us that these images remain in the cache, independent of this hard limit.

And, finally, another small reason why I didn't like this change is that it makes the program not directly follow the developer's input. Right now, the width tolerance is very direct and easy for developers to understand - "If I set width_tolerance to 0.10, that means each width will increase by 10% every time" - whereas with this change, they would no longer have control over how much the widths change every time, which could be frustrating, and we don't want to take this clarity away from them.

Anyway! That was a bit of a wall of text, but I would like to thank you again for investigating this real issue in our logic and working on a fix for us! We really do appreciate it, and we are happy to review/discuss any other changes in the future that you might think would improve our libraries.
I'm rather confused about this decision.
Show me the libraries and I could do the legwork for you!
I strongly disagree that this changeset represents a shift in goal. I will discuss below why I don't believe this is a shift in goal at all, let alone to an arbitrary one.
But that 8% is just a default, and it's overridable by the developer. In my opinion it's a much-too-small default (I'd be interested to understand exactly how 8% was decided upon), but that's not the point anyway. The fact is that my changes result in the exact same number of steps being produced as before the change. So I'm really not sure how you think this changeset reduces visual performance, increases bandwidth usage, or reduces cache hits.
I don't follow your logic here at all; the developer chooses the width tolerance in the first place. (Though I absolutely agree that performance is important (and I don't think this change harms it in any way), I can't help but be surprised at the level of weight you're putting on it here, given both the (very low, in my opinion) 8% default tolerance and the way cacheability and performance are categorically ignored over in your React background image component! imgix/react-imgix#672)
Again I don't follow. I don't see how this change affects "visual performance, bandwidth usage, and cache hits" other than if somebody updates and so their srcset changes, and so there are some cache misses the first time each size is requested. Again, I'll point out that the same number of steps are being generated under this algorithm. The way I see it your initial goals are still all met, and additionally the spacings are smooth, which can only be an improvement.
Surely nobody intentionally leaves `maxWidth` at the 8192px default. The irregularity at the top end of the srcset range would affect many more users where `maxWidth` has been set to a realistic value for the layout, since the top of the range is then exactly where common viewport sizes sit.
There's no optimization for that particular range in this code. I wonder if you misunderstood how the code works?
Nothing to do with the 100-8192 range is hardcoded. Would it help if I explained how the calculations work in more detail than the code comments already do?
This is fair enough, and would affect existing images for users who upgrade their library. But it's not like everybody's going to do so, and certainly not simultaneously. It's also a one-time thing. You know your infrastructure better than I do, but I'd be surprised if this is a big issue.
As for "If I set width_tolerance to 0.10, that means each width will increase by 10% every time": I seriously doubt any developer is thinking this way. I don't think many developers care exactly what increment is between the candidate sizes, and I'd wager that many of those who do probably expect a linear increase in widths rather than an exponential one anyway. (I mean, do you know what 100 × 1.1 × 1.1 × 1.1 × 1.1 is? I don't, but it's not 140.) As I said above, I'd expect a larger default anyway.

But the option is called "width tolerance", not "precise scale factor". The algorithm both before and after this change works as advertised and sticks to that given amount of tolerance. That is, "there should be an image available no more than 8% different in width from the size at which it's being rendered". My change to the code causes the scale factor to stay the same or to shrink minimally, never to increase, so the tolerance the developer requested is always still met. Even if a developer interprets "tolerance" as "scale factor", the existing code does not follow that requested factor for each increment in most cases anyway -- the last gap is usually smaller, which is exactly what this change addresses.
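A quick way to see that the requested tolerance is always still met, using the adjusted-ratio formula from earlier and, for the example values, the 320-1440 range from the description below:

```js
// Since numSteps = ceil(log(R) / log(r)) with R = max/min, we have
// r^numSteps >= R, and therefore adjustedRatio = R^(1/numSteps) <= r:
// the per-step factor can only shrink, never grow.
const r = 1.2;        // requested per-step ratio
const R = 1440 / 320; // overall range, as in the example below
const numSteps = Math.ceil(Math.log(R) / Math.log(r)); // 9
console.log(Math.pow(R, 1 / numSteps)); // ≈ 1.182 <= 1.2
```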
Any developer who wants that amount of control is already specifying the exact sizes they want with the option to supply an explicit list of widths.

I wonder if you'll reconsider this.
The only reduced performance I can begin to see here is if more different images are requested from your server, as users resize their browser viewports or use different-sized devices, after this change compared to before. Yes, the gaps between candidate images will for the most part be slightly smaller, and so it's possible more different image sizes will be requested, assuming requests for the different candidate widths follow something resembling a bell curve. But seeing that as "bad" requires the leap in logic that an 8% gap between image widths is acceptable (or perfect -- you did say it was very intentionally chosen) while slightly less than 8% is unacceptable. I really have to question this. If that is indeed the line of thinking, it seems to me the solution would be just to increase the default a little.

If my maths is right, the most my code can reduce the requested ratio is to cut it exactly in half (actually a mite under that, since √1.08 ≈ 1.0392 rather than 1.04), and this only happens in precisely the cases where the existing library's code would make exactly three candidates (where in my code `numSteps` comes out to 2). The most realistic examples of this edge case turned up in a brute-force search over the domain of min from 10 to 2000 and max from 20 to 8190, in increments of 5 pixels each, always with an 8% requested step. This totals 567,138 possible ranges.

Taking into account only srcsets with 3 or more members (since 2-member sets would heavily skew the results, and would be the same before and after this code change anyway), over the entire range I brute-forced just now, including the edge cases I mentioned above, the average calculated step was 7.7175%, with a median of 7.8131%. Surely we can agree that this is not a big change from 8%. Maybe the piece of the puzzle I'm missing is why exactly 8% is special.

The code for my brute-force search:

```js
const ratios = [];
for (let min = 10; min <= 2000; min += 5) {
  for (let max = 20; max <= 8192; max += 5) { // effectively 8190, stepping by 5
    if (max <= min) continue;
    // Number of exponential steps the requested 8% ratio implies,
    // and the adjusted ratio which lands exactly on max.
    const numSteps = Math.ceil(Math.log(max / min) / Math.log(1.08));
    if (numSteps <= 1) continue; // skip 2-member srcsets
    const adjustedIncrementRatio = Math.pow(max / min, 1 / numSteps);
    ratios.push(adjustedIncrementRatio);
    if (adjustedIncrementRatio <= 1.04) {
      console.log("min", min, "max", max, "steps", numSteps, "ratio", adjustedIncrementRatio);
    }
  }
}
ratios.sort((a, b) => a - b); // numeric sort (the default sort compares as strings)
console.log("avg over", ratios.length, "sets is", ratios.reduce((acc, r) => acc + r, 0) / ratios.length);
console.log("median is", ratios[Math.floor(ratios.length / 2)]);
console.log("min is", ratios[0]);
```
Hey @tremby, I wanted to start by saying thanks again for the time and effort put into this PR and the ensuing discussion. Our team is always excited to have these types of discussions with users, as it helps inform our understanding of how our SDK can be improved.

Oftentimes, part of this process requires that our team make choices about contributions based on the larger context that we have at an organizational level. We do our best to incorporate outside contributions in each project, assuming they are aligned enough with that larger context, but can't promise everything will make it in. So to that end, I ask that you respect the project maintainers' decision not to merge this change and try to understand their position.

I also ask that you please kindly refrain from using a hostile tone when partaking in these discussions. As I said, this type of discourse is very important to us, but will only get so far if there is an aggressive atmosphere. Ultimately we are all working towards the same goal, so let's try to be respectful even if there are differing ideas about what should/shouldn't be implemented.
Description
In this changeset I derive an improved increment ratio for srcset generation from the width tolerance.
Before this patch, the widths increase at a regular exponential rate until the second-largest width; the largest width is then simply set equal to the maximum width. In all but a few very particular cases this means the largest two widths are much more closely spaced than the others, at least on the exponential scale. It is possible to have candidate widths of 8190 and 8192 pixels, for example, if the right `minWidth`, `maxWidth`, and `widthTolerance` values are chosen. (Obviously this is an extreme case.)
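For illustration, here is a reconstruction of the pre-patch loop as described at the top of this conversation; the specific parameter values are mine, chosen to exhibit the 2px gap:

```js
// Old behaviour, per the description above: push the even-rounded running
// value, multiply, and append maxWidth after the loop.
const minWidth = 7060, maxWidth = 8192, widthTolerance = 0.08;
const ratio = 1 + 2 * widthTolerance; // 1.16, assuming the double-and-add-one scheme
let prev = minWidth;
const widths = [];
while (prev < maxWidth) {
  widths.push(2 * Math.round(prev / 2));
  prev *= ratio;
}
widths.push(maxWidth);
console.log(widths); // [7060, 8190, 8192] -- the last two only 2px apart
```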
Checklist: Bug Fix
Steps to Test
I think the changes to the tests already cover this, but as a realistic example where the difference is easier to see:
Before the change:
320,384,460,552,664,796,956,1146,1376,1440
(Note how the final gap, 1376 → 1440, is much smaller than all the preceding ones.)
After the change:
320,378,448,528,624,738,872,1030,1218,1440
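Computing the gaps between consecutive widths makes the difference plain:

```js
const gaps = (widths) => widths.slice(1).map((w, i) => w - widths[i]);
console.log(gaps([320, 384, 460, 552, 664, 796, 956, 1146, 1376, 1440]));
// [64, 76, 92, 112, 132, 160, 190, 230, 64] -- the final gap collapses
console.log(gaps([320, 378, 448, 528, 624, 738, 872, 1030, 1218, 1440]));
// [58, 70, 80, 96, 114, 134, 158, 188, 222] -- steadily increasing
```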
Note
I've built this on top of #184 since that was a bug I found while writing this and they touch some of the same code. As such, you may not need to review all commits; only the last two are strictly relevant. I can rebase it to be straight on top of main if you like.