Propagator: Balance concurrent up/down-load #1633
Comments
Status: With the new job scheduler that @ogoffart did for 1.8, this might be easier to implement now.
Awesome, so the concept exists. Nice! Will do that surely.
We probably should delete them indeed.
Ok, will see what we can do about it. @ogoffart @guruz @DeepDiver1975 @cdamken During the loop over all files to synchronise, we structure our directories so that they are synced in order of insertion. It does not matter that one tree contains 100 GB of data to sync while another holds only 1 MB, but of very important documents, added to the sync list later. We could do this a little smarter: once we have all the "directories" objects, a post-loop pass can give each folder ordered sync instructions, i.e. which files/folders to start with inside that folder, according to their total size. Imagine you have the following folder structure to sync. In the current implementation, these files will be synced in the order they are met in the structure below: ownCloud Root. However, a post-loop pass could quickly restructure it as follows, looking at the size of the files to sync within each folder (of course not by priority, because ownCloud does not support something like FOLDER PRIORITIES): ownCloud Root
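The post-loop reordering described above could be sketched roughly like this. This is only an illustration, not client code: the tree representation, `subtree_size`, and `order_children` are all hypothetical names, and the client's real data structures differ.

```python
# Hypothetical sketch of the idea above: within each folder, visit children
# smallest-subtree-first, so a 1 MB important document is not queued behind
# a 100 GB directory. Nodes are plain dicts: {"name", "size", "children"}.

def subtree_size(node):
    """Total bytes under a node: its own size plus all descendants."""
    return node["size"] + sum(subtree_size(c) for c in node.get("children", []))

def order_children(node):
    """Return a copy of the tree with children sorted by ascending
    subtree size (a stand-in for the post-loop 'ordered sync instructions')."""
    children = sorted(
        (order_children(c) for c in node.get("children", [])),
        key=subtree_size,
    )
    return {**node, "children": children}

tree = {"name": "ownCloud Root", "size": 0, "children": [
    {"name": "big", "size": 100 * 10**9, "children": []},
    {"name": "docs", "size": 0, "children": [
        {"name": "important.txt", "size": 10**6, "children": []},
    ]},
]}
ordered = order_children(tree)
print([c["name"] for c in ordered["children"]])  # docs first, then big
```

A priority field per folder would be more flexible, but as noted above the server does not support folder priorities, so size is the only signal available client-side.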
When synchronizing, it makes sense to try to keep at least 1 PUT and 1 GET running at the same time, so we don't waste time by leaving either the upstream or the downstream direction of the connection idle.
(MOVE, MKCOL, DELETE do not matter in this issue I'd say)
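A minimal sketch of that balancing policy, under the assumption that the propagator can be modeled as two job queues. `BalancedScheduler`, `enqueue`, and `next_job` are illustrative names, not the actual propagator API:

```python
from collections import deque

class BalancedScheduler:
    """Toy scheduler: when jobs of both kinds are queued, keep at least one
    PUT (upload) and one GET (download) in flight, so both directions of the
    connection are used. Hypothetical API, not the real propagator."""

    def __init__(self):
        self.queues = {"PUT": deque(), "GET": deque()}
        self.running = {"PUT": 0, "GET": 0}

    def enqueue(self, kind, job):
        self.queues[kind].append(job)

    def next_job(self):
        # First, start the direction that currently has nothing running.
        for kind in ("PUT", "GET"):
            if self.running[kind] == 0 and self.queues[kind]:
                self.running[kind] += 1
                return kind, self.queues[kind].popleft()
        # Both directions busy (or one queue empty): drain the longer queue.
        kind = max(self.queues, key=lambda k: len(self.queues[k]))
        if self.queues[kind]:
            self.running[kind] += 1
            return kind, self.queues[kind].popleft()
        return None  # nothing left to schedule

    def finished(self, kind):
        self.running[kind] -= 1
```

With two PUTs and one GET queued, the first two jobs handed out are one of each kind, rather than two uploads back to back. MOVE, MKCOL and DELETE are ignored here, matching the note above that they do not matter for this issue.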