@SorooshMani-NOAA as we discussed before, analyze_ensemble.py might not run successfully for big storms / large ensembles due to memory issues.
A quick fix, as you suggested, is to add a Dask cluster under if __name__ == '__main__': and before the _analyze(tracks_dir, analyze_dir, mann_coef) call, as follows:
from dask_jobqueue import SLURMCluster
from distributed import Client

cluster = SLURMCluster(
    cores=16,
    processes=1,
    memory="500GB",
    account="nos-surge",    # "nos-surge" for Hercules, "compute" for PW
    walltime="04:00:00",
    interface="eth0",       # only needed for PW
    header_skip=['--mem'],  # drop the --mem directive from the generated job script
)
cluster.scale(6)            # request 6 SLURM worker jobs
client = Client(cluster)
And then running the script manually.
Could you please add this to the main script, perhaps with user-defined inputs in input.conf? Thanks.
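A minimal sketch of what that could look like, assuming the cluster settings are read from a [dask_slurm] section of input.conf; the section name, key names, and the start_cluster helper below are hypothetical placeholders, not existing options:

from configparser import ConfigParser

from dask_jobqueue import SLURMCluster
from distributed import Client


def start_cluster(conf_path='input.conf'):
    # Read user-defined Dask/SLURM settings; [dask_slurm] and its keys are
    # placeholders for whatever names we end up choosing.
    conf = ConfigParser()
    conf.read(conf_path)
    opts = conf['dask_slurm']

    cluster = SLURMCluster(
        cores=opts.getint('cores', fallback=16),
        processes=opts.getint('processes', fallback=1),
        memory=opts.get('memory', fallback='500GB'),
        account=opts.get('account', fallback='nos-surge'),   # "compute" for PW
        walltime=opts.get('walltime', fallback='04:00:00'),
        interface=opts.get('interface', fallback='eth0'),    # only needed for PW
        header_skip=['--mem'],
    )
    cluster.scale(opts.getint('n_workers', fallback=6))
    return Client(cluster)


if __name__ == '__main__':
    client = start_cluster()
    # _analyze(tracks_dir, analyze_dir, mann_coef)  # existing call in analyze_ensemble.py

This way the defaults from the snippet above stay in place, while users can override them per machine (Hercules vs. PW) from input.conf.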
Sure @FariborzDaneshvar-NOAA. One thing we need to test, though, is what happens when we run this Dask code inside the Singularity container. I'm not sure how that would work with spawning new instances, etc.