## Proposal

Currently we only have `slurm_memory_gigabytes_per_cpu` for controlling the Slurm memory allocation. Unfortunately, I don't think `--mem-per-cpu` is the most intuitive argument to configure. For one, Slurm might allocate double the memory you requested because of hyperthreading (see discussion here). Secondly, multithreading isn't especially popular in R, particularly when we already have parallelism provided by `targets`. Instead, I prefer `--mem`, which simply sets the total memory per job.
Of course, you can currently configure this using:
```r
script_lines = c(
  "#SBATCH --mem 500G",
  ...
)
```
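For context, here is a fuller sketch of where that workaround plugs in, assuming this issue concerns `crew_controller_slurm()` from crew.cluster (the controller `name` is illustrative):

```r
library(crew.cluster)

# Current workaround: pass the raw #SBATCH directive through script_lines
# so the generated job script requests 500 GB of total memory per job.
controller <- crew_controller_slurm(
  name = "big_memory_jobs",
  script_lines = c(
    "#SBATCH --mem 500G"
  )
)
```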
However, making it a first-class argument would be even more user-friendly.
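For concreteness, a first-class argument might look like the hypothetical sketch below; the name `slurm_memory_gigabytes` and its rendering are my own suggestion, not an agreed design:

```r
# Hypothetical: a total-memory argument analogous to
# slurm_memory_gigabytes_per_cpu, rendered as "#SBATCH --mem=500G"
# in the generated job script.
controller <- crew_controller_slurm(
  name = "big_memory_jobs",
  slurm_memory_gigabytes = 500  # hypothetical argument, not yet in the package
)
```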