Cannot read Europe file #5
It even crashes when I use just one thread.
That's strange. I'm regularly using osmpbf to parse planet.osm.pbf. I'll try to get your program running.
Does the filteredCount example work for you? It works for me: $ /usr/bin/time -v ./filteredCount -k highway /data/osm/pbfs/europe-160714.osm.pbf
Still crashes even on the smaller 4 GB Asia PBF, after about 5 seconds of running, at the same place. Maybe you fixed that in your latest commit 85718e1, which I currently don't have compiled. I will try it tomorrow to see if it resolves the issue.
The commit you mention should not fix the issue. Did you try the filteredCount example on the Europe export?
I have created pull request #6 to fix reading files larger than 3 GB. Once the mman owner also commits the small offset fix (I was inspired by you), we will have complete Windows support: alitrack/mman-win32#6.
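For context, the offset fix mentioned here typically boils down to alignment: both POSIX mmap and Windows MapViewOfFile require the file offset to be a multiple of the system's allocation granularity (the page size on POSIX, commonly 64 KiB on Windows). A minimal sketch of that rounding, with AlignedOffset and alignOffset as illustrative names rather than the actual mman-win32 code:

```cpp
#include <cstdint>

// To map an arbitrary file offset, round it down to the allocation
// granularity and remember the delta into the mapped view. In real code
// `granularity` would come from sysconf(_SC_PAGESIZE) on POSIX or
// GetSystemInfo's dwAllocationGranularity on Windows.
struct AlignedOffset {
    std::uint64_t mapOffset;  // aligned offset passed to mmap/MapViewOfFile
    std::uint64_t delta;      // extra bytes to skip inside the mapped view
};

AlignedOffset alignOffset(std::uint64_t wantedOffset, std::uint64_t granularity) {
    AlignedOffset a;
    a.mapOffset = (wantedOffset / granularity) * granularity;  // round down
    a.delta = wantedOffset - a.mapOffset;                      // remainder
    return a;
}
```

The caller then maps starting at mapOffset and adds delta to the returned pointer to reach the bytes it actually wanted.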
I don't think I have properly fixed it though, because for the Europe extract it's working fine, but for the planet file it starts to use all the memory. For the world extract it is
I think memory-mapped files on Windows can't automatically free up pages for larger files. By the way, do we need to use a memory-mapped file at all? Can't we just read the file sequentially? A lock is used anyway to prevent the threads from reading the file at once, so I don't understand the point of mapping it all into memory when only one thread can read a block at a time.
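The sequential alternative suggested here could look like the following sketch: since a mutex already serializes file access, each worker simply reads its next block under the lock instead of touching a whole-file mapping. readNextBlob and fileMutex are hypothetical names for illustration, not part of the osmpbf API:

```cpp
#include <cstdint>
#include <fstream>
#include <mutex>
#include <vector>

// One reader at a time: the mutex serializes access to the stream, so the
// OS only ever keeps a small read-ahead window in memory instead of a
// multi-gigabyte mapping. blobSize would come from the preceding
// BlobHeader in a real PBF reader.
std::mutex fileMutex;

bool readNextBlob(std::ifstream& in, std::vector<char>& blob, std::uint32_t blobSize) {
    std::lock_guard<std::mutex> lock(fileMutex);  // serialize file access
    blob.resize(blobSize);
    in.read(blob.data(), static_cast<std::streamsize>(blobSize));
    return static_cast<std::uint32_t>(in.gcount()) == blobSize;  // full read?
}
```

After a worker has copied its blob out under the lock, it can decompress and parse it in parallel with the other threads.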
I ended up using https://github.com/osmcode/libosmium after all. It seems memory-mapped files only work well on UNIX, because of better support there. For the world extract it was running with an average of 500 MB / 400 MB (commit / working set). I haven't studied their code thoroughly, but they are using memory buffers.
I am experiencing an issue with reading the 17 GB Europe extract. The application always crashes on line 186 in parsehelpers.h while trying to do
blobsRead += dbufs.size();
Somehow the variables inFIle, processor, mtx, blobsRead, doProcessing, threadPrivateProcesor and maxBlobstoRead can't be read by a thread; at least that's what I see when I break on the exception in the debugger. Please try the code I am trying to run and see if you experience the same issues. The parser works with the 1.4 GB Africa extract, though. Both the Europe and Africa extracts were downloaded from http://download.geofabrik.de/.
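The symptom described here (captured variables showing as unreadable in the debugger) is what one typically sees when worker threads outlive the scope whose variables they capture by reference. A minimal sketch of the shared-counter pattern under discussion, with names mirroring the report (mtx, blobsRead, maxBlobsToRead) but an illustrative loop body rather than the actual parsehelpers.h code:

```cpp
#include <cstdint>
#include <mutex>
#include <thread>
#include <vector>

// Worker threads advance a shared counter under a mutex. The key point is
// joining every thread before the captured locals go out of scope; if the
// function returned first, the lambdas' references would dangle, which is
// exactly the "variables can't be read" picture in a debugger.
std::uint64_t countBlobs(std::uint64_t maxBlobsToRead, int threadCount) {
    std::mutex mtx;
    std::uint64_t blobsRead = 0;

    std::vector<std::thread> workers;
    for (int i = 0; i < threadCount; ++i) {
        workers.emplace_back([&]() {
            for (;;) {
                std::lock_guard<std::mutex> lock(mtx);    // serialize access
                if (blobsRead >= maxBlobsToRead) return;  // all blobs consumed
                blobsRead += 1;  // stands in for blobsRead += dbufs.size();
            }
        });
    }
    for (auto& t : workers) t.join();  // join before captures go out of scope
    return blobsRead;
}
```

With the join in place every thread finishes while mtx and blobsRead are still alive, so the increment is always well-defined.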
My example counter application, written according to your examples.