(Imported from Trac #448, reported by @dcoutts on 2009-01-10)
The way in which cabal-install currently uses the HTTP library API is quite suboptimal. It starts a new Browser session for each connection, which means it cannot take advantage of HTTP pipelining, connection pooling and other goodies.
One design would involve making an HTTP download object and making requests through that. The download object would start a thread to make the actual requests; requests would be serialised, e.g. using a Chan. It probably wants to work by directing that URLs be downloaded to local files. On top of this we should build some caching mechanism so that we can check, for example, whether our package index is up to date before downloading the whole thing. It would also be nice to check the Content-Length and Content-MD5 headers to catch short or corrupted downloads.
Something like:
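A rough sketch of the kind of interface this suggests; the names (`Downloader`, `newDownloader`, `downloadURI`) and the use of an `MVar` for replies are hypothetical, not the actual design:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, takeMVar)
import Network.URI (URI)

-- One queued request: fetch a URI into a local file, reply when done.
data Request = Request URI FilePath (MVar (Either IOError ()))

-- A long-lived download session. All requests go through one Chan, so a
-- single worker thread can reuse one Browser session (connection pooling,
-- pipelining, etc.).
newtype Downloader = Downloader (Chan Request)

newDownloader :: IO Downloader
newDownloader = do
  chan <- newChan
  _ <- forkIO (worker chan)
  return (Downloader chan)
  where
    worker chan = do
      Request uri target reply <- readChan chan
      putMVar reply =<< fetch uri target
      worker chan

-- Fetch a URI into a file via the shared session, blocking until done.
downloadURI :: Downloader -> URI -> FilePath -> IO (Either IOError ())
downloadURI (Downloader chan) uri target = do
  reply <- newEmptyMVar
  writeChan chan (Request uri target reply)
  takeMVar reply

-- Stub standing in for the real request code, which would run inside the
-- shared Browser session and do the cache/length/MD5 checks.
fetch :: URI -> FilePath -> IO (Either IOError ())
fetch _uri _target = return (Right ())
```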
The download function is given the URL and the local file path. If the local file exists, it is taken to be the cached file. It should download the URL to a temp file in the target directory and, if all is successful, atomically rename it over the target file (see the sketch below).

Perhaps error handling needs to be more explicit than just using IOErrors.
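A minimal sketch of that download step, assuming a hypothetical `fetchBody` for the actual HTTP request. The rename is atomic precisely because the temp file is created in the target's own directory, i.e. on the same filesystem:

```haskell
import Control.Exception (onException)
import qualified Data.ByteString.Lazy as BS
import Network.URI (URI)
import System.Directory (removeFile, renameFile)
import System.FilePath (takeDirectory)
import System.IO (hClose, openTempFile)

-- Download to a temp file next to the target, then atomically rename it
-- over the target, so a failed download never clobbers a good cached copy.
downloadTo :: URI -> FilePath -> IO ()
downloadTo uri target = do
  (tmpPath, hnd) <- openTempFile (takeDirectory target) "download.tmp"
  let cleanup = hClose hnd >> removeFile tmpPath
  (do body <- fetchBody uri        -- hypothetical HTTP fetch
      BS.hPut hnd body
      hClose hnd
      renameFile tmpPath target)   -- atomic within one filesystem
    `onException` cleanup

-- Stub; the real code would issue the request through the shared session.
fetchBody :: URI -> IO BS.ByteString
fetchBody _uri = return BS.empty
```

If IOErrors are too blunt, `downloadTo` could instead return an `Either` with a dedicated download-error type.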