Hi, I am currently using SuRVoS with a big dataset and I was wondering if there is a way to prepare the dataset myself (name it data.h5, place it in a dedicated folder, and name the internal dataset '/data') and then, from the SuRVoS menu, load that prepared dataset instead of opening the original file.
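For reference, this is roughly how I would prepare the file myself; it is only a sketch, and the source path, dataset name, and slab size below are placeholders for my own data:

```python
import h5py

# Copy an existing volume into data.h5 under '/data' one slab of z-slices
# at a time, so the full 57 GB array never has to sit in RAM.
src_path = "raw_volume.h5"   # placeholder: my original file
src_dset = "/entry/data"     # placeholder: dataset path inside it
dst_path = "data.h5"         # the file I would like SuRVoS to load
slab = 64                    # number of z-slices copied per iteration

with h5py.File(src_path, "r") as fin, h5py.File(dst_path, "w") as fout:
    src = fin[src_dset]
    dst = fout.create_dataset("/data", shape=src.shape, dtype=src.dtype,
                              chunks=True)
    for z in range(0, src.shape[0], slab):
        dst[z:z + slab] = src[z:z + slab]
```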
I would like to do that because my dataset is 57 GB, and loading it into memory (RAM) in order to make a copy is not an option. I know SuRVoS makes a copy of the input dataset because it normalizes it. However, I don't want to normalize my dataset, or at least I would like to have the option not to.
I know SuRVoS was not designed for big datasets, but I am planning to process my dataset region by region (see the sketch below), and my only obstacle is that SuRVoS has to load it as a whole at the beginning. I hope this change is easy to make, as it would allow people to use SuRVoS with big datasets.
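To illustrate what I mean by region by region: with h5py I can already pull a sub-volume out of the prepared file without touching the rest of it (the coordinates here are arbitrary, just an example):

```python
import h5py

# Read only one region of interest; h5py slices the file lazily,
# so only this block is loaded into memory.
with h5py.File("data.h5", "r") as f:
    region = f["/data"][0:256, 512:1024, 512:1024]

print(region.shape, region.dtype)
```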
Aha! Link: https://dls1.aha.io/features/D-28