Not ready to merge. A bunch of this is rewrites from my branches, which existed before the big changes in 3d-tiles happened.

The `Source/Core/load.+?\.js` functions have been updated to include an `xhrHandler`, which is a callback to which the `XMLHttpRequest` is passed. This is stored in the request as `request.xhr`.
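The `xhrHandler` plumbing might look something like the sketch below. This is an illustration of the idea, not the actual `Source/Core/load*` code; `FakeXhr` is a stand-in so the example also runs outside a browser.

```javascript
// Minimal stand-in for environments without XMLHttpRequest (e.g. Node).
// For illustration only; a browser would use the real XMLHttpRequest.
function FakeXhr() {}
FakeXhr.prototype.open = function(method, url, async) { this.url = url; };
FakeXhr.prototype.send = function() {};

var Xhr = typeof XMLHttpRequest !== 'undefined' ? XMLHttpRequest : FakeXhr;

// Sketch of a load function that hands the xhr to an optional callback
// before sending, so the caller can keep a reference for later abort().
function loadWithXhrHandler(url, xhrHandler) {
    var xhr = new Xhr();
    if (typeof xhrHandler === 'function') {
        xhrHandler(xhr); // e.g. the caller stores it as request.xhr
    }
    xhr.open('GET', url, true);
    xhr.send();
    return xhr;
}

// Usage: capture the xhr on the request object so it can be canceled later.
var request = {};
loadWithXhrHandler('http://example.com/tile.b3dm', function(xhr) {
    request.xhr = xhr;
});
```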
`Source/Core/RequestScheduler.js` returns a different `Scheduler` object instead of `RequestScheduler`, mostly because I still wanted to look at the old implementation. Budgets and request servers are not currently being used.
`Cesium3DTileset` now time slices tile processing, which makes moving around much smoother when many requests are coming in. The terrain does not do this (at least I don't think so), so Cesium gets very slow when many terrain tiles are requested. Tile processing also re-sorts the tiles so the nearest and visible tiles get processed first.

The `Scheduler` keeps a heap of all the requests that have been made. It has available requests if `maxRequests - activeRequests + deferableRequests > 0`. This is not what it originally meant, but a deferable request is one that is allowed to be canceled / rescheduled. When new requests come in, they're inserted into this heap. At the end of every frame, every active and deferable request is moved out of the active list and put back into the request heap. Then, until we've met the max number of requests, we pop requests off the heap and start them. Example: if we have a maximum of 50 requests, at the end of every frame we keep only the non-deferable and top 50 requests; all other requests get stopped. When they come back to the top of the heap, they will be started again.

Request sorting: if two requests both have a screen space error, they are sorted by screen space error; otherwise they are sorted by distance.
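The sorting rule and end-of-frame reshuffle could be sketched as below. The field names (`screenSpaceError`, `distance`, `deferable`) are assumptions for illustration, not the actual `Scheduler` API, and a sorted array stands in for the real heap.

```javascript
// Hypothetical comparator: requests that both carry a screen space error
// sort by it (larger error = higher priority); otherwise sort by distance
// (nearer = higher priority).
function compareRequests(a, b) {
    if (a.screenSpaceError !== undefined && b.screenSpaceError !== undefined) {
        return b.screenSpaceError - a.screenSpaceError; // larger SSE first
    }
    return a.distance - b.distance; // nearer first
}

// End-of-frame step: deferable active requests go back into the pool, then
// the top maxRequests are kept (a real implementation would (re)start them).
function reshuffle(active, heap, maxRequests) {
    var keep = active.filter(function(r) { return !r.deferable; });
    var pool = heap.concat(active.filter(function(r) { return r.deferable; }));
    pool.sort(compareRequests); // stand-in for heap ordering
    while (keep.length < maxRequests && pool.length > 0) {
        keep.push(pool.shift());
    }
    return { active: keep, heap: pool };
}
```

A request that falls out of the top `maxRequests` just sits in the pool; when it bubbles back to the top on a later frame, it is started again, which matches the behavior described above.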
`Cesium3DTileset` keeps track of all the tiles it has requested and updates their distance and screen space error so the requests can be reshuffled.

`XMLHttpRequest.prototype.abort` is perhaps not supported in some browsers? I had thoughts of adding a feature check for it.

Currently this approach is reasonable if there is only a 3D tileset. For me, the Philly dataset was loading about 5-10 seconds faster, even without HTTP/2 enabled. However, sometimes it takes longer for tiles to show up because tile processing is time sliced.
Adding in terrain causes many, many more requests in sort of an odd way. It seems like it's continuously canceling and restarting requests. Perhaps this is because terrain and 3d tile request distances are computed differently? I am led to believe this because if both a tileset and terrain are visible, the terrain never loads until the tileset has finished loading.
I tried adding request cancellation to the terrain as well, but everything that was being requested seemed to have been visited that frame, even if I moved the camera fast. I may have been missing something here.
It's difficult to balance how we want tiles to load: while requesting more tiles at once / using HTTP/2 reduces total load time, it generally takes longer for the closest tile to appear because other tiles are loading as well. The reason that total time decreases with more requests is that just 6 requests may not saturate the available network bandwidth. I think the user experience is better when the nearest data loads faster, but that's very subjective.
This is what I was going to try next: perhaps continue to use the above heap method, but additionally throttle to just 6 requests per server/dataset. The hope is that each data provider serves us at most 6 things at a time, so there's still good throughput per server, but since the overall max number of requests is still large, we saturate our download bandwidth. If there are multiple data providers, the number of requests will likely go above the browser max (especially if we're not using HTTP/2), but that's fine because we made the requests in order of priority. Anything past the browser max will just be automatically stalled by the browser.
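That per-server throttle might be sketched like this. All names here are assumptions for illustration, not an existing Cesium API:

```javascript
// Hypothetical per-server throttle: a request may only start when its
// server has fewer than maxRequestsPerServer active requests.
var maxRequestsPerServer = 6;
var activePerServer = {}; // server name -> count of active requests

function canStart(request) {
    return (activePerServer[request.server] || 0) < maxRequestsPerServer;
}

function requestStarted(request) {
    activePerServer[request.server] = (activePerServer[request.server] || 0) + 1;
}

function requestDone(request) {
    activePerServer[request.server] -= 1;
}
```

The global heap would still pick the highest-priority request overall; a request to a saturated server is skipped until one of that server's active requests finishes, keeping per-server concurrency at 6 while the total stays high.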
Yes, HTTP/2 has the benefits of compressed headers, binary data, and less TCP handshaking, but for us downloading data is the biggest bottleneck. From what I can gather, HTTP/2 is most beneficial when you need multiple resources to draw anything at all. We don't need one tile to have loaded in order to draw a different tile, because they're totally independent, so we might want to limit the concurrency here so that each tile downloads faster. However, it could definitely be beneficial if we added HTTP/2 PUSH on the server when there's additional data (such as textures or shaders) that are separate resources needed to draw the tile. It's good to shard them as separate resources on the same server so that they're cached separately and the size of the tile is smaller. HTTP/2 PUSH eliminates the roundtrip to fetch the textures/shaders, but still respects cached data.
Something else worth testing is whether domain sharding is still beneficial. People say NOT to domain shard when using HTTP/2 so that there's only one TCP handshake. But with only one TCP connection, you have less total bandwidth. I think most people say this because they're talking about websites where there are only a couple of resources from each domain; for us, the overhead of multiple TCP handshakes may be outweighed by the benefits of having multiple TCP connections. HTTP/2 multiplexes the connection, but does that limit it to just that one TCP connection? Would it be faster if I had 10 simultaneous TCP connections all using HTTP/2?