I realized that even with just the base data (major cities, before any downloaded areas), `search.db` is already 183 MB. But if you gzip it, it's 16 MB. No wonder; it's mostly text.
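For reference, a quick way to reproduce that ratio (a minimal sketch, assuming `search.db` sits in the current directory):

```python
import gzip
import os
import shutil

DB_PATH = "search.db"  # assumed location of the search database

raw_size = os.path.getsize(DB_PATH)

# Compress a copy to see how much redundancy gzip finds in the raw file.
with open(DB_PATH, "rb") as src, gzip.open(DB_PATH + ".gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

gz_size = os.path.getsize(DB_PATH + ".gz")
print(f"raw: {raw_size / 1e6:.1f} MB, gzipped: {gz_size / 1e6:.1f} MB "
      f"({raw_size / gz_size:.1f}x smaller)")
```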
I bet there's an SQL feature or extension we could use to make it smaller, especially since it's read-only 99% of the time. The goal is to find and implement something like that for the search database.
Though if there's a notable read-performance hit, that would give me pause. In that case maybe we could look into something awful like decompressing the whole thing on startup.
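If it came to that, the startup path could be as dumb as this (a rough sketch only; shipping a gzipped `search.db.gz` and the file names are assumptions, not how the app currently works):

```python
import gzip
import shutil
import sqlite3
import tempfile

# Hypothetical path: a gzipped copy of the database shipped with the app.
SHIPPED_DB_GZ = "search.db.gz"

def open_search_db() -> sqlite3.Connection:
    # Unpack to a temp file on each launch so only the gzipped copy
    # lives on disk permanently; trades startup time for storage.
    tmp = tempfile.NamedTemporaryFile(suffix=".db", delete=False)
    with gzip.open(SHIPPED_DB_GZ, "rb") as src:
        shutil.copyfileobj(src, tmp)
    tmp.close()
    # Open read-only, since the search data is almost never written.
    return sqlite3.connect(f"file:{tmp.name}?mode=ro", uri=True)
```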
Or maybe we just need to run `VACUUM` or turn on `auto_vacuum`. I'd guess `auto_vacuum` only kicks in on writes. I just tried `VACUUM` and it at least cuts the size in half. https://www.sqlite.org/lang_vacuum.html
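Something along these lines (a sketch; the `search.db` path is assumed). Note that per the SQLite docs, enabling `auto_vacuum` on an existing database only takes effect after a full `VACUUM`, so the order matters:

```python
import sqlite3

DB_PATH = "search.db"  # assumed location

# Autocommit mode, since VACUUM cannot run inside a transaction.
conn = sqlite3.connect(DB_PATH, isolation_level=None)

# How much space is currently sitting in free pages?
page_size = conn.execute("PRAGMA page_size").fetchone()[0]
freelist = conn.execute("PRAGMA freelist_count").fetchone()[0]
print(f"reclaimable free space: {page_size * freelist / 1e6:.1f} MB")

# Changing auto_vacuum on an existing database requires a VACUUM afterwards.
conn.execute("PRAGMA auto_vacuum = FULL")
conn.execute("VACUUM")
conn.close()
```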