Feature request: Faster ls-files #10
It doesn't actually try to open the files, although it does need to iterate through every entry in the database, including leaf nodes. There is nothing obviously wrong, although there are some things I can try tweaking, but I would need to somehow reproduce this on my end, and I don't have nearly that much data to add.
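For context, here is a minimal sketch of what that full scan amounts to, assuming the filestore keeps its entries in a leveldb database (the filestore-db directory mentioned in the original report below) and reading it with the goleveldb client directly. The key layout and value decoding are illustrative, not the filestore's actual code:

```go
package main

import (
	"fmt"
	"log"

	"github.com/syndtr/goleveldb/leveldb"
)

func main() {
	// Open the filestore's backing leveldb database.
	// "filestore-db" is the directory named in the report below.
	db, err := leveldb.OpenFile("filestore-db", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// ls-files has no secondary index to consult, so it must walk
	// every key/value pair -- leaf blocks included -- which is why
	// the listing time scales with the size of the whole database.
	iter := db.NewIterator(nil, nil)
	defer iter.Release()

	entries := 0
	for iter.Next() {
		// iter.Key() is the block's datastore key; iter.Value() holds
		// the stored record that would have to be decoded to recover
		// the file path and multihash. Decoding is omitted here.
		entries++
	}
	if err := iter.Error(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("entries scanned:", entries)
}
```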
@jefft0 try updating to the latest version (either the master or the kevina/filestore branch). That should cut the time down to around 5 minutes.
That's an improvement. Now down to 18 minutes.
That is probably about as good as I can get it without doing anything fancy. It still has to scan through the entire multi-GB database. I leave it up to you whether you want to maintain your own external list of files. Once the different use cases for the filestore become clearer, I might add some sort of index to speed this up in the future.
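One hypothetical shape for such an index, purely as a sketch: write a second, small record under a dedicated key prefix at add time, so that a later listing only has to scan the index entries rather than every block. Nothing below is the filestore's actual schema; the `/index/` prefix and record format are invented for illustration:

```go
package main

import (
	"fmt"
	"log"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/util"
)

// The "/index/" prefix and the path -> multihash value
// format are hypothetical, not the filestore's schema.
const indexPrefix = "/index/"

// addToIndex records one path -> multihash entry at add time.
func addToIndex(db *leveldb.DB, path, multihash string) error {
	return db.Put([]byte(indexPrefix+path), []byte(multihash), nil)
}

// listFiles scans only the small index key range instead of the
// whole database, which is what would make ls-files fast.
func listFiles(db *leveldb.DB) error {
	iter := db.NewIterator(util.BytesPrefix([]byte(indexPrefix)), nil)
	defer iter.Release()
	for iter.Next() {
		path := string(iter.Key()[len(indexPrefix):])
		fmt.Printf("%s\t%s\n", path, string(iter.Value()))
	}
	return iter.Error()
}

func main() {
	db, err := leveldb.OpenFile("filestore-db", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := listFiles(db); err != nil {
		log.Fatal(err)
	}
}
```

The trade-off is keeping the index in sync with adds and removals, which is presumably why it is deferred until the use cases are clearer.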
Sounds good. |
This is still something I want to do somehow. Reopening. |
As I mentioned on the pull request, I did
ipfs filestore add -r
on a collection of about 80,000 files (5,333 ldb files in filestore-db). I need to get all the filenames with their multihash. The problem is that
ipfs filestore ls-files > inventory.txt
takes 56 minutes. Is there an easy way to make ls-files faster? Maybe it is opening and closing the same files? Maybe cache an intermediate result in memory? (If not, then I'll need to maintain my own external list of files and update that each time I do filestore add.)
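If the external-list route turns out to be necessary, one hedged sketch of the workaround: run the add through a small wrapper that captures the command's output and appends hash/path pairs to inventory.txt. This assumes filestore add reports each file on a line of the form `added <hash> <path>`, the same shape ipfs add uses; if the actual output differs, the parsing below would need adjusting:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Append-only inventory so repeated adds accumulate entries.
	inv, err := os.OpenFile("inventory.txt",
		os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
	if err != nil {
		log.Fatal(err)
	}
	defer inv.Close()

	// Run the add and read its stdout line by line.
	// Usage (hypothetical): go run addlog.go /path/to/files
	cmd := exec.Command("ipfs", "filestore", "add", "-r", os.Args[1])
	out, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	scanner := bufio.NewScanner(out)
	for scanner.Scan() {
		// Assumed output shape: "added <hash> <path>". SplitN with a
		// limit of 3 keeps paths containing spaces intact.
		fields := strings.SplitN(scanner.Text(), " ", 3)
		if len(fields) == 3 && fields[0] == "added" {
			fmt.Fprintf(inv, "%s\t%s\n", fields[1], fields[2])
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}
```

The same line-splitting could of course be done with a shell pipeline; the point is only that the hash/path pairs are already printed at add time, so keeping the inventory current costs nothing beyond capturing them.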