This repository has been archived by the owner on Apr 12, 2024. It is now read-only.
It takes an unreasonable amount of time for the docs app to initialize. On a desktop we are talking up to 4 seconds; on a phone or lower-powered machine it can be up to 20 seconds!
Some investigation:
* we need to be able to provide a number of the larger files zipped up if the browser supports it - this primarily involves configuring the server
* we need to provide better cache expiration values to allow browsers to cache more effectively - this involves configuring the server but may also require dgeni (or some other tool) to append cache-busting extras onto the file paths
* we are downloading a large file (pages-data.js) containing mappings of page paths to partials (and titles). With a small configuration change to dgeni-packages we could have a straightforward one-to-one mapping between page paths and their partials. If each partial also contained its page title and breadcrumb information, we would not need this large file at all.
* the lunr full-text search is the main cause of the delay in loading - it builds the search index on every load from information in the pages-data.js file. We need a way to optimize this and/or move it off the main initial rendering thread, since rendering is blocked until it has completed.
Lunr Search
The search index is generated on page load from a set of terms in the pages-data.js file. This file is large (~470KB) and currently the app blocks on downloading it. Generating the search index takes a number of seconds, during which the JavaScript event loop is blocked. Currently this happens at application bootstrap.
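To make the cost concrete, here is a minimal sketch (not the real lunr code) of the kind of synchronous indexing described above. The data shape and field names (`titleWords`, `keywords`) are assumptions; the point is that the whole loop runs on the main thread, so nothing renders until it completes.

```javascript
// Build a naive term -> page-paths index from pages-data-style input.
// Runs synchronously: with thousands of pages this blocks rendering.
function buildSearchIndex(pages) {
  var index = {};  // term -> list of page paths
  Object.keys(pages).forEach(function (path) {
    var page = pages[path];
    var terms = (page.titleWords + ' ' + page.keywords)
      .toLowerCase()
      .split(/\s+/);
    terms.forEach(function (term) {
      if (!term) return;
      (index[term] = index[term] || []).push(path);
    });
  });
  return index;
}
```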
Load the data asynchronously
The obvious initial idea is to move the data into a file that is loaded via $http after the application has bootstrapped, since this will allow the initial page to render while we wait for the data to arrive.
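A sketch of what that could look like, assuming a hypothetical `pages-data.json` endpoint and an `$http`-style function returning a promise of `{ data: ... }` (the names here are illustrative, not the app's actual API). The index is only built once the data arrives, and queries wait on that promise instead of the whole app blocking at bootstrap:

```javascript
// Create a search service that fetches its data after bootstrap.
// $http: function(url) -> Promise of { data }, buildIndex: data -> index.
function createDocsSearch($http, buildIndex) {
  // Start the download immediately, but let the initial page render meanwhile.
  var indexReady = $http('partials/pages-data.json').then(function (response) {
    return buildIndex(response.data);
  });
  // Queries resolve once the index is ready.
  return function search(q) {
    return indexReady.then(function (index) {
      return index.query(q);
    });
  };
}
```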
Process the data in a WebWorker
Second, since the actual processing takes some time, we could consider moving the processing into a WebWorker for browsers that support it.
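The main-thread side of such a worker could look roughly like this (the worker itself, e.g. created with `new Worker('search-worker.js')`, and the message shape are assumptions). Queries are tagged with an id so each response from the worker can be matched back to its pending promise:

```javascript
// Wrap a worker-like object (postMessage/onmessage) as an async search function.
function createWorkerSearch(worker) {
  var nextId = 0;
  var pending = {};  // id -> resolve callback for in-flight queries
  worker.onmessage = function (event) {
    var msg = event.data;
    if (pending[msg.id]) {
      pending[msg.id](msg.results);  // resolve the matching query
      delete pending[msg.id];
    }
  };
  return function search(q) {
    return new Promise(function (resolve) {
      var id = nextId++;
      pending[id] = resolve;
      worker.postMessage({ id: id, q: q });
    });
  };
}
```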
Cache the data in LocalStorage
Finally, we could consider caching the index in localStorage, if available. This would require us to work out how to invalidate the cache when the index is stale. Perhaps we should just store the index keyed on the version of Angular. We could use an LRU cache algorithm.
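A sketch of the version-keyed caching idea, with hypothetical names (`loadOrBuildIndex`, the `ngDocs.searchIndex.` key prefix); `storage` is any localStorage-like object. Keying on the Angular version means a new release naturally invalidates the old entry:

```javascript
// Return a cached search index for this Angular version, or build and cache one.
function loadOrBuildIndex(storage, angularVersion, buildIndex) {
  var key = 'ngDocs.searchIndex.' + angularVersion;
  var cached = storage.getItem(key);
  if (cached) {
    return JSON.parse(cached);               // cache hit: skip the slow build
  }
  var index = buildIndex();                  // cache miss: build and store
  storage.setItem(key, JSON.stringify(index));
  return index;
}
```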
What if we generated this stuff from Workers? (Non-nested) WebWorkers are available on almost all target browsers. Then at least we wouldn't be blocking execution on the main thread, although we would have to wait for the search index to be built if a user tries to search before it's ready.
On top of that, we could be doing a better job of caching this stuff; I like the localStorage idea. We'd want to use a cache key which includes the version/build number, I think.
This commit refactors how the search index is built. The docsSearch service
is now defined by a provider, which returns a different implementation of
the service depending upon whether the current browser supports WebWorkers
or not.
* **WebWorker supported**: The index is then built and stored in a new worker.
The service posts and receives messages to and from this worker to make
queries on the search index.
* **WebWorker not supported**: The index is built locally but with a 500ms
delay so that the initial page can render before the browser is blocked as
the index is built.
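The selection logic described above could be sketched like this (the factory names are assumptions; the real provider wires this through AngularJS's `$get`). Feature-detect `Worker` support and pick an implementation; the fallback defers the build so the first paint isn't blocked:

```javascript
// Pick a search implementation based on WebWorker support.
function chooseSearchFactory(global, webWorkerFactory, localFactory) {
  return ('Worker' in global) ? webWorkerFactory : localFactory;
}

// Fallback path: build the index after a delay (500ms per the commit message)
// so the initial page can render before the browser is blocked.
function delayedLocalSearch(buildIndex, delayMs) {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(buildIndex()); }, delayMs || 500);
  });
}
```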
Also the way that the current app is identified has been modified so we can
slim down the js data files (pages-data.js) to again improve startup time.
Closes #9204
Closes #9203