Use queue options directly from ert
oyvindeide committed Dec 13, 2024
1 parent 4301b63 commit dab7013
Showing 23 changed files with 199 additions and 694 deletions.
185 changes: 4 additions & 181 deletions docs/everest/config_generated.rst
@@ -870,25 +870,6 @@ Type: *Optional[SimulatorConfig]*

Simulation settings

**name (optional)**
Type: *Optional[str]*

Specifies which queue to use


**cores (optional)**
Type: *Optional[PositiveInt]*

Defines the number of simultaneously running forward models.

When using queue system lsf, this corresponds to the number of nodes used at one
time, whereas when using the local queue system, cores refers to the number of
cores you want to use on your system.

This number is specified in Ert as MAX_RUNNING.



**cores_per_node (optional)**
Type: *Optional[PositiveInt]*

@@ -906,20 +887,6 @@ Simulation settings
Whether the batch folder for a successful simulation needs to be deleted.


**exclude_host (optional)**
Type: *Optional[str]*

Comma separated list of nodes that should be
excluded from the slurm run.


**include_host (optional)**
Type: *Optional[str]*

Comma separated list of nodes that
should be included in the slurm run


**max_runtime (optional)**
Type: *Optional[NonNegativeInt]*

@@ -929,18 +896,10 @@ Simulation settings



**options (optional)**
Type: *Optional[str]*

Used to specify options to LSF.
An example of setting a memory requirement is:
* rusage[mem=1000]


**queue_system (optional)**
-Type: *Optional[Literal['lsf', 'local', 'slurm', 'torque']]*
+Type: *Optional[LocalQueueOptions, LsfQueueOptions, SlurmQueueOptions, TorqueQueueOptions]*

-Defines which queue system the everest server runs on.
+Defines which queue system everest submits jobs to.


**resubmit_limit (optional)**
@@ -956,54 +915,6 @@ Simulation settings
If not specified, a default value of 1 will be used.


**sbatch (optional)**
Type: *Optional[str]*

sbatch executable to be used by the slurm queue interface.


**scancel (optional)**
Type: *Optional[str]*

scancel executable to be used by the slurm queue interface.


**scontrol (optional)**
Type: *Optional[str]*

scontrol executable to be used by the slurm queue interface.


**sacct (optional)**
Type: *Optional[str]*

sacct executable to be used by the slurm queue interface.


**squeue (optional)**
Type: *Optional[str]*

squeue executable to be used by the slurm queue interface.


**server (optional)**
Type: *Optional[str]*

Name of LSF server to use. This option is deprecated and no longer required


**slurm_timeout (optional)**
Type: *Optional[int]*

Timeout for cached status used by the slurm queue interface


**squeue_timeout (optional)**
Type: *Optional[int]*

Timeout for cached status used by the slurm queue interface.


**enable_cache (optional)**
Type: *bool*

@@ -1019,72 +930,6 @@ Simulation settings
optimizer.


**qsub_cmd (optional)**
Type: *Optional[str]*

The submit command


**qstat_cmd (optional)**
Type: *Optional[str]*

The query command


**qdel_cmd (optional)**
Type: *Optional[str]*

The kill command


**qstat_options (optional)**
Type: *Optional[str]*

Options to be supplied to the qstat command. This defaults to -x, which tells the qstat command to include exited processes.


**cluster_label (optional)**
Type: *Optional[str]*

The name of the cluster you are running simulations in.


**memory_per_job (optional)**
Type: *Optional[str]*

You can specify the amount of memory you will need for running your job. This will ensure that not too many jobs will run on a single shared memory node at once, possibly crashing the compute node if it runs out of memory.
You can get an indication of the memory requirement by watching the course of a local run using the htop utility. Whether you should set the peak memory usage as your requirement or a lower figure depends on whether the jobs will hit their peak memory usage at the same time.
The option to be supplied will be used as a string in the qsub argument. You must specify the unit, either gb or mb.



**keep_qsub_output (optional)**
Type: *Optional[int]*

Set to 1 to keep error messages from qsub. Usually only to be used if something is seriously wrong with the queue environment/setup.


**submit_sleep (optional)**
Type: *Optional[float]*

To avoid stressing the TORQUE/PBS system you can instruct the driver to sleep for every submit request. The argument to the SUBMIT_SLEEP is the number of seconds to sleep for every submit, which can be a fraction like 0.5


**queue_query_timeout (optional)**
Type: *Optional[int]*


The driver allows the backend TORQUE/PBS system to be flaky, i.e. it may intermittently not respond and give error messages when submitting jobs or asking for job statuses. The timeout (in seconds) determines how long ERT will wait before it will give up. Applies to job submission (qsub) and job status queries (qstat). Default is 126 seconds.
ERT will do exponential sleeps, starting at 2 seconds, and the provided timeout is a maximum. Let the timeout be sums of series like 2+4+8+16+32+64 in order to be explicit about the number of retries. Set to zero to disallow flakiness; setting it to 2 will allow for one re-attempt, and 6 will give two re-attempts. Example allowing six retries:



**project_code (optional)**
Type: *Optional[str]*

String identifier used to map hardware resource usage to a project or account. The project or account does not have to exist.



install_jobs (optional)
-----------------------
@@ -1246,32 +1091,10 @@ requirements of the forward models.



**exclude_host (optional)**
Type: *Optional[str]*

Comma separated list of nodes that should be
excluded from the slurm run


**include_host (optional)**
Type: *Optional[str]*

Comma separated list of nodes that
should be included in the slurm run


**options (optional)**
Type: *Optional[str]*

Used to specify options to LSF.
An example of setting a memory requirement is:
* rusage[mem=1000]


**queue_system (optional)**
-Type: *Optional[Literal['lsf', 'local', 'slurm']]*
+Type: *Optional[LocalQueueOptions, LsfQueueOptions, SlurmQueueOptions, TorqueQueueOptions]*

-Defines which queue system the everest server runs on.
+Defines which queue system everest submits jobs to.



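The documentation change above removes the flat, queue-specific simulator keys (cores, options, sbatch, qsub_cmd and friends) and instead lets queue_system carry one of ert's queue option classes directly. A minimal sketch of what such a field could look like, assuming pydantic v2; SimulatorConfig and its single field here are illustrative stand-ins, not the actual everest model:

```python
from pydantic import BaseModel

from ert.config.queue_config import (
    LocalQueueOptions,
    LsfQueueOptions,
    SlurmQueueOptions,
    TorqueQueueOptions,
)


class SimulatorConfig(BaseModel):  # hypothetical stand-in for everest's model
    # queue_system now holds the full option set for one queue system,
    # rather than a bare 'lsf'/'local'/'slurm'/'torque' string.
    queue_system: (
        LocalQueueOptions | LsfQueueOptions | SlurmQueueOptions | TorqueQueueOptions
    ) = LocalQueueOptions()


# max_running corresponds to MAX_RUNNING in ert (see the removed `cores` docs).
cfg = SimulatorConfig(queue_system=LsfQueueOptions(max_running=8))
print(cfg.queue_system.name)  # -> "lsf"
```

Keeping one options object per queue system is what lets the slurm-, torque- and LSF-specific keys documented side by side above be deleted.
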
13 changes: 5 additions & 8 deletions src/ert/config/queue_config.py
@@ -89,7 +89,7 @@ def driver_options(self) -> dict[str, Any]:

@pydantic.dataclasses.dataclass
class LocalQueueOptions(QueueOptions):
-name: Literal[QueueSystem.LOCAL] = QueueSystem.LOCAL
+name: Literal[QueueSystem.LOCAL, "local", "LOCAL"] = "local"

@property
def driver_options(self) -> dict[str, Any]:
@@ -98,7 +98,7 @@ def driver_options(self) -> dict[str, Any]:

@pydantic.dataclasses.dataclass
class LsfQueueOptions(QueueOptions):
-name: Literal[QueueSystem.LSF] = QueueSystem.LSF
+name: Literal[QueueSystem.LSF, "lsf", "LSF"] = "lsf"
bhist_cmd: NonEmptyString | None = None
bjobs_cmd: NonEmptyString | None = None
bkill_cmd: NonEmptyString | None = None
@@ -121,7 +121,7 @@ def driver_options(self) -> dict[str, Any]:

@pydantic.dataclasses.dataclass
class TorqueQueueOptions(QueueOptions):
-name: Literal[QueueSystem.TORQUE] = QueueSystem.TORQUE
+name: Literal[QueueSystem.TORQUE, "torque", "TORQUE"] = "torque"
qsub_cmd: NonEmptyString | None = None
qstat_cmd: NonEmptyString | None = None
qdel_cmd: NonEmptyString | None = None
@@ -157,7 +157,7 @@ def check_memory_per_job(cls, value: str | None) -> str | None:

@pydantic.dataclasses.dataclass
class SlurmQueueOptions(QueueOptions):
-name: Literal[QueueSystem.SLURM] = QueueSystem.SLURM
+name: Literal[QueueSystem.SLURM, "SLURM", "slurm"] = "slurm"
sbatch: NonEmptyString = "sbatch"
scancel: NonEmptyString = "scancel"
scontrol: NonEmptyString = "scontrol"
@@ -310,7 +310,6 @@ def from_dict(cls, config_dict: ConfigDict) -> QueueConfig:
)

queue_options = all_validated_queue_options[selected_queue_system]
-queue_options_test_run = all_validated_queue_options[QueueSystem.LOCAL]
queue_options.add_global_queue_options(config_dict)

if queue_options.project_code is None:
@@ -345,7 +344,6 @@ def from_dict(cls, config_dict: ConfigDict) -> QueueConfig:
max_submit,
selected_queue_system,
queue_options,
-queue_options_test_run,
stop_long_running=bool(stop_long_running),
)

@@ -355,8 +353,7 @@ def create_local_copy(self) -> QueueConfig:
self.realization_memory,
self.max_submit,
QueueSystem.LOCAL,
-self.queue_options_test_run,
-self.queue_options_test_run,
+LocalQueueOptions(),
stop_long_running=bool(self.stop_long_running),
)

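Two things fall out of the queue_config.py hunks above: the name literals now accept plain strings in either case, and the separate queue_options_test_run field is gone, with a test run simply constructing default local options. A small sketch of the widened validation, assuming the classes behave as ordinary pydantic dataclasses (field values are illustrative):

```python
from ert.config.queue_config import LocalQueueOptions, SlurmQueueOptions

# Values coming straight from a text config validate without first being
# mapped to the QueueSystem enum.
LocalQueueOptions(name="local")
LocalQueueOptions(name="LOCAL")
slurm = SlurmQueueOptions(name="slurm", sbatch="/usr/bin/sbatch")

# create_local_copy() now just builds fresh local options instead of
# carrying a queue_options_test_run copy around.
test_run_options = LocalQueueOptions(max_running=1)
print(slurm.name, test_run_options.max_running)
```
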
3 changes: 2 additions & 1 deletion src/ert/gui/simulation/experiment_panel.py
@@ -35,6 +35,7 @@
)
from ert.trace import get_trace_id

+from ...config.queue_config import LocalQueueOptions
from ..summarypanel import SummaryPanel
from .combobox_with_description import QComboBoxWithDescription
from .ensemble_experiment_panel import EnsembleExperimentPanel
@@ -376,7 +377,7 @@ def populate_clipboard_debug_info(self) -> None:
queue_opts = self.config.queue_config.queue_options

if isinstance(self.get_current_experiment_type(), SingleTestRun):
-queue_opts = self.config.queue_config.queue_options_test_run
+queue_opts = LocalQueueOptions(max_running=1)

for field in fields(queue_opts):
field_value = getattr(queue_opts, field.name)
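In the GUI, the single-test-run debug info no longer reads a stored queue_options_test_run but builds LocalQueueOptions(max_running=1) on the spot. Because pydantic dataclasses are ordinary dataclasses, the fields() loop shown above can introspect the new options directly; a standalone sketch of that iteration (the output format is illustrative):

```python
from dataclasses import fields

from ert.config.queue_config import LocalQueueOptions

opts = LocalQueueOptions(max_running=1)
for field in fields(opts):  # pydantic dataclasses support dataclasses.fields()
    print(f"{field.name}: {getattr(opts, field.name)}")
```
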
2 changes: 0 additions & 2 deletions src/ert/resources/site-config
@@ -3,7 +3,5 @@
WORKFLOW_JOB_DIRECTORY workflows/jobs/shell
WORKFLOW_JOB_DIRECTORY workflows/jobs/internal-gui/config

-JOB_SCRIPT job_dispatch.py
-
QUEUE_SYSTEM LOCAL
QUEUE_OPTION LOCAL MAX_RUNNING 1
32 changes: 16 additions & 16 deletions src/ert/scheduler/__init__.py
@@ -19,23 +19,23 @@


def create_driver(queue_options: QueueOptions) -> Driver:
-    if queue_options.name == QueueSystem.LOCAL:
-        return LocalDriver()
-    elif queue_options.name == QueueSystem.TORQUE:
-        return OpenPBSDriver(**queue_options.driver_options)
-    elif queue_options.name == QueueSystem.LSF:
-        return LsfDriver(**queue_options.driver_options)
-    elif queue_options.name == QueueSystem.SLURM:
-        return SlurmDriver(
-            **dict(
-                {"user": getpwuid(getuid()).pw_name},
-                **queue_options.driver_options,
-            )
-        )
-    else:
-        raise NotImplementedError(
-            "Only LOCAL, SLURM, TORQUE and LSF drivers are implemented"
-        )
+    match str(queue_options.name).upper():
+        case QueueSystem.LOCAL:
+            return LocalDriver()
+        case QueueSystem.TORQUE:
+            return OpenPBSDriver(**queue_options.driver_options)
+        case QueueSystem.LSF:
+            return LsfDriver(**queue_options.driver_options)
+        case QueueSystem.SLURM:
+            return SlurmDriver(
+                **dict(
+                    {"user": getpwuid(getuid()).pw_name},
+                    **queue_options.driver_options,
+                )
+            )
+    raise NotImplementedError(
+        "Only LOCAL, SLURM, TORQUE and LSF drivers are implemented"
+    )


__all__ = [
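create_driver now normalises the queue name with str(...).upper() before matching, so the lower-case names carried by the new option classes select the same drivers as the enum members did. A minimal usage sketch; constructing a driver does not submit anything to a cluster:

```python
from ert.config.queue_config import LocalQueueOptions
from ert.scheduler import create_driver

driver = create_driver(LocalQueueOptions(name="LOCAL"))
print(type(driver).__name__)  # -> "LocalDriver"
```
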