diff --git a/CHANGELOG.md b/CHANGELOG.md index e79fc9ab..aa3f05eb 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,7 +5,7 @@ All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). -## Unreleased on the [23.2.x](https://github.com/PySlurm/pyslurm/tree/23.2.x) branch +## Unreleased on the [23.11.x](https://github.com/PySlurm/pyslurm/tree/23.11.x) branch - New Classes to interact with Database Associations (WIP) - `pyslurm.db.Association` @@ -13,10 +13,21 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 - New Classes to interact with Database QoS (WIP) - `pyslurm.db.QualityOfService` - `pyslurm.db.QualitiesOfService` + +## [23.11.0](https://github.com/PySlurm/pyslurm/releases/tag/v23.11.0) - 2024-01-27 + +### Added + +- Support for Slurm 23.11.x - Add `truncate_time` option to `pyslurm.db.JobFilter`, which is the same as -T / --truncate from sacct. -- Add new Attributes to `pyslurm.db.Jobs` that help gathering statistics for a +- Add new attributes to `pyslurm.db.Jobs` that help gathering statistics for a collection of Jobs more convenient. +- Add new attribute `gres_tasks_per_sharing` to `pyslurm.Job` and + `pyslurm.JobSubmitDescription` + +### Fixed + - Fix `allocated_gres` attribute in the `pyslurm.Node` Class returning nothing. - Add new `idle_memory` and `allocated_tres` attributes to `pyslurm.Node` class - Fix Node State being displayed as `ALLOCATED` when it should actually be @@ -24,79 +35,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 - Fix crash for the `gres_per_node` attribute of the `pyslurm.Job` class when the GRES String received from Slurm contains no count. -## [23.2.2](https://github.com/PySlurm/pyslurm/releases/tag/v23.2.2) - 2023-07-18 - -### Added - -- Ability to modify Database Jobs -- New classes to interact with the Partition API - - [pyslurm.Partition][] - - [pyslurm.Partitions][] -- New attributes for a Database Job: - - `extra` - - `failed_node` -- Added a new Base Class [MultiClusterMap][pyslurm.xcollections.MultiClusterMap] that some Collections inherit from. 
-- Added `to_json` function to all Collections - -### Fixed - -- Fixes a problem that prevented loading specific Jobs from the Database if - the following two conditions were met: - - no start/end time was specified - - the Job was older than a day - -### Changed - -- Improved Docs -- Renamed `JobSearchFilter` to [pyslurm.db.JobFilter][] -- Renamed `as_dict` function of some classes to `to_dict` - -## [23.2.1](https://github.com/PySlurm/pyslurm/releases/tag/v23.2.1) - 2023-05-18 - -### Added - -- Classes to interact with the Job and Submission API - - [pyslurm.Job](https://pyslurm.github.io/23.2/reference/job/#pyslurm.Job) - - [pyslurm.Jobs](https://pyslurm.github.io/23.2/reference/job/#pyslurm.Jobs) - - [pyslurm.JobStep](https://pyslurm.github.io/23.2/reference/jobstep/#pyslurm.JobStep) - - [pyslurm.JobSteps](https://pyslurm.github.io/23.2/reference/jobstep/#pyslurm.JobSteps) - - [pyslurm.JobSubmitDescription](https://pyslurm.github.io/23.2/reference/jobsubmitdescription/#pyslurm.JobSubmitDescription) -- Classes to interact with the Database Job API - - [pyslurm.db.Job](https://pyslurm.github.io/23.2/reference/db/job/#pyslurm.db.Job) - - [pyslurm.db.Jobs](https://pyslurm.github.io/23.2/reference/db/job/#pyslurm.db.Jobs) - - [pyslurm.db.JobStep](https://pyslurm.github.io/23.2/reference/db/jobstep/#pyslurm.db.JobStep) - - [pyslurm.db.JobFilter](https://pyslurm.github.io/23.2/reference/db/jobsearchfilter/#pyslurm.db.JobFilter) -- Classes to interact with the Node API - - [pyslurm.Node](https://pyslurm.github.io/23.2/reference/node/#pyslurm.Node) - - [pyslurm.Nodes](https://pyslurm.github.io/23.2/reference/node/#pyslurm.Nodes) -- Exceptions added: - - [pyslurm.PyslurmError](https://pyslurm.github.io/23.2/reference/exceptions/#pyslurm.PyslurmError) - - [pyslurm.RPCError](https://pyslurm.github.io/23.2/reference/exceptions/#pyslurm.RPCError) -- [Utility Functions](https://pyslurm.github.io/23.2/reference/utilities/#pyslurm.utils) - -### Changed - -- Completely overhaul the documentation, switch to mkdocs -- Rework the tests: Split them into unit and integration tests - -### Deprecated - -- Following classes are superseded by new ones: - - [pyslurm.job](https://pyslurm.github.io/23.2/reference/old/job/#pyslurm.job) - - [pyslurm.node](https://pyslurm.github.io/23.2/reference/old/node/#pyslurm.node) - - [pyslurm.jobstep](https://pyslurm.github.io/23.2/reference/old/jobstep/#pyslurm.jobstep) - - [pyslurm.slurmdb_jobs](https://pyslurm.github.io/23.2/reference/old/db/job/#pyslurm.slurmdb_jobs) - -## [23.2.0](https://github.com/PySlurm/pyslurm/releases/tag/v23.2.0) - 2023-04-07 - -### Added - -- Support for Slurm 23.02.x ([f506d63](https://github.com/PySlurm/pyslurm/commit/f506d63634a9b20bfe475534589300beff4a8843)) - ### Removed -- `Elasticsearch` debug flag from `get_debug_flags` -- `launch_type`, `launch_params` and `slurmctld_plugstack` keys from the - `config.get()` output -- Some constants (mostly `ESLURM_*` constants that do not exist - anymore) +- `route_plugin`, `job_credential_private_key` and `job_credential_public_certificate` + keys are removed from the output of `pyslurm.config().get()` +- Some deprecated and unused Slurm constants diff --git a/README.md b/README.md index 93f5552c..85855627 100644 --- a/README.md +++ b/README.md @@ -8,16 +8,16 @@ pyslurm is the Python client library for the [Slurm Workload Manager](https://sl * [Python](https://www.python.org) - >= 3.6 * [Cython](https://cython.org) - >= 0.29.36 -This Version is for Slurm 23.02.x +This Version is for Slurm 23.11.x ## 
Versioning In pyslurm, the versioning scheme follows the official Slurm versioning. The first two numbers (`MAJOR.MINOR`) always correspond to Slurms Major-Release, -for example `23.02`. +for example `23.11`. The last number (`MICRO`) is however not tied in any way to Slurms `MICRO` version, but is instead PySlurm's internal Patch-Level. For example, any -pyslurm 23.02.X version should work with any Slurm 23.02.X release. +pyslurm 23.11.X version should work with any Slurm 23.11.X release. ## Installation @@ -29,8 +29,8 @@ the corresponding paths to the necessary files. You can specify those with environment variables (recommended), for example: ```shell -export SLURM_INCLUDE_DIR=/opt/slurm/23.02/include -export SLURM_LIB_DIR=/opt/slurm/23.02/lib +export SLURM_INCLUDE_DIR=/opt/slurm/23.11/include +export SLURM_LIB_DIR=/opt/slurm/23.11/lib ``` Then you can proceed to install pyslurm, for example by cloning the Repository: diff --git a/pyslurm/__version__.py b/pyslurm/__version__.py index de714f89..07f5afe1 100644 --- a/pyslurm/__version__.py +++ b/pyslurm/__version__.py @@ -5,4 +5,4 @@ # The last Number "Z" is the current Pyslurm patch version, which should be # incremented each time a new release is made (except when migrating to a new # Slurm Major release, then set it back to 0) -__version__ = "23.2.2" +__version__ = "23.11.0" diff --git a/pyslurm/core/job/job.pxd b/pyslurm/core/job/job.pxd index 8256c097..ab5e9197 100644 --- a/pyslurm/core/job/job.pxd +++ b/pyslurm/core/job/job.pxd @@ -347,9 +347,11 @@ cdef class Job: gres_per_node (dict): Generic Resources (e.g. GPU) this Job is using per Node. profile_types (list): - Types for which detailed accounting data is collected. + Types for which detailed accounting data is collected. gres_binding (str): Binding Enforcement of a Generic Resource (e.g. GPU). + gres_tasks_per_sharing (str): + Task Sharing of a Generic Resource (e.g. GPU). kill_on_invalid_dependency (bool): Whether the Job should be killed on an invalid dependency. spreads_over_nodes (bool): diff --git a/pyslurm/core/job/job.pyx b/pyslurm/core/job/job.pyx index fdb6ed0c..f13a45cf 100644 --- a/pyslurm/core/job/job.pyx +++ b/pyslurm/core/job/job.pyx @@ -74,7 +74,7 @@ cdef class Jobs(MultiClusterMap): """Retrieve all Jobs from the Slurm controller Args: - preload_passwd_info (bool, optional): + preload_passwd_info (bool, optional): Decides whether to query passwd and groups information from the system. Could potentially speed up access to attributes of the Job @@ -246,7 +246,7 @@ cdef class Job: job_info_msg_t *info = NULL Job wrap = None - try: + try: verify_rpc(slurm_load_job(&info, job_id, slurm.SHOW_DETAIL)) if info and info.record_count: @@ -282,7 +282,7 @@ cdef class Job: cdef _swap_data(Job dst, Job src): cdef slurm_job_info_t *tmp = NULL if dst.ptr and src.ptr: - tmp = dst.ptr + tmp = dst.ptr dst.ptr = src.ptr src.ptr = tmp @@ -305,7 +305,7 @@ cdef class Job: Implements the slurm_signal_job RPC. Args: - signal (Union[str, int]): + signal (Union[str, int]): Any valid signal which will be sent to the Job. Can be either a str like `SIGUSR1`, or simply an [int][]. steps (str): @@ -315,7 +315,7 @@ cdef class Job: signaled. The value `batch` in contrast means, that only the batch-step will be signaled. With `all` every step is signaled. - hurry (bool): + hurry (bool): If True, no burst buffer data will be staged out. The default value is False. 
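
A usage sketch may help here, since the `steps` and `hurry` semantics are easy to misread from the docstring alone. This assumes the method carrying the docstring above is `Job.send_signal` (the method name sits outside this hunk) and uses `9999` as a placeholder job id:

```python
import pyslurm

job = pyslurm.Job(9999)

# Default steps="children": every step except the batch-step is signaled
job.send_signal("SIGUSR1")

# Signal only the batch-step, and skip burst buffer stage-out
job.send_signal(signal=9, steps="batch", hurry=True)
```
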
@@ -338,7 +338,7 @@ cdef class Job: flags |= slurm.KILL_FULL_JOB elif steps.casefold() == "batch": flags |= slurm.KILL_JOB_BATCH - + if hurry: flags |= slurm.KILL_HURRY @@ -417,7 +417,7 @@ cdef class Job: Examples: >>> import pyslurm - >>> + >>> >>> # Setting the new time-limit to 20 days >>> changes = pyslurm.JobSubmitDescription(time_limit="20-00:00:00") >>> pyslurm.Job(9999).modify(changes) @@ -442,10 +442,10 @@ cdef class Job: Examples: >>> import pyslurm - >>> + >>> >>> # Holding a Job (in "admin" mode by default) >>> pyslurm.Job(9999).hold() - >>> + >>> >>> # Holding a Job in "user" mode >>> pyslurm.Job(9999).hold(mode="user") """ @@ -483,11 +483,11 @@ cdef class Job: Examples: >>> import pyslurm - >>> + >>> >>> # Requeing a Job while allowing it to be >>> # scheduled again immediately >>> pyslurm.Job(9999).requeue() - >>> + >>> >>> # Requeing a Job while putting it in a held state >>> pyslurm.Job(9999).requeue(hold=True) """ @@ -509,7 +509,7 @@ cdef class Job: Raises: RPCError: When sending the message to the Job was not successful. - + Examples: >>> import pyslurm >>> pyslurm.Job(9999).notify("Hello Friends!") @@ -539,7 +539,7 @@ cdef class Job: # # The copyright notices for the file this function was taken from is # included below: - # + # # Portions Copyright (C) 2010-2017 SchedMD LLC . # Copyright (C) 2002-2007 The Regents of the University of California. # Copyright (C) 2008-2010 Lawrence Livermore National Security. @@ -621,7 +621,7 @@ cdef class Job: @property def nice(self): - if self.ptr.nice == slurm.NO_VAL: + if self.ptr.nice == slurm.NO_VAL: return None return self.ptr.nice - slurm.NICE_OFFSET @@ -647,7 +647,7 @@ cdef class Job: @property def state_reason(self): - if self.ptr.state_desc: + if self.ptr.state_desc: return cstr.to_unicode(self.ptr.state_desc) return cstr.to_unicode(slurm_job_reason_string(self.ptr.state_reason)) @@ -808,7 +808,7 @@ cdef class Job: def cpus_per_task(self): if self.ptr.cpus_per_tres: return None - + return u16_parse(self.ptr.cpus_per_task, on_noval=1) @property @@ -1031,7 +1031,7 @@ cdef class Job: task_str = cstr.to_unicode(self.ptr.array_task_str) if not task_str: return None - + if "%" in task_str: # We don't want this % character and everything after it # in here, so remove it. @@ -1042,7 +1042,7 @@ cdef class Job: @property def end_time(self): return _raw_time(self.ptr.end_time) - + # https://github.com/SchedMD/slurm/blob/d525b6872a106d32916b33a8738f12510ec7cf04/src/api/job_info.c#L480 cdef _calc_run_time(self): cdef time_t rtime @@ -1153,6 +1153,15 @@ cdef class Job: else: return None + @property + def gres_tasks_per_sharing(self): + if self.ptr.bitflags & slurm.GRES_MULT_TASKS_PER_SHARING: + return "multiple" + elif self.ptr.bitflags & slurm.GRES_ONE_TASK_PER_SHARING: + return "one" + else: + return None + @property def kill_on_invalid_dependency(self): return u64_parse_bool_flag(self.ptr.bitflags, slurm.KILL_INV_DEP) @@ -1191,7 +1200,7 @@ cdef class Job: """Retrieve the resource layout of this Job on each node. !!! warning - + Return type may still be subject to change in the future Returns: @@ -1204,13 +1213,13 @@ cdef class Job: # # The copyright notices for the file that contains the original code # is below: - # + # # Portions Copyright (C) 2010-2017 SchedMD LLC . # Copyright (C) 2002-2007 The Regents of the University of California. # Copyright (C) 2008-2010 Lawrence Livermore National Security. # Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). # Written by Morris Jette et. al. - # CODE-OCEC-09-009. 
All rights reserved. + # CODE-OCEC-09-009. All rights reserved. # # Slurm is licensed under the GNU General Public License. For the full # text of Slurm's License, please see here: @@ -1218,11 +1227,11 @@ cdef class Job: # # Please, as mentioned above, also have a look at Slurm's DISCLAIMER # under pyslurm/slurm/SLURM_DISCLAIMER - # + # # TODO: Explain the structure of the return value a bit more. cdef: slurm.job_resources *resources = self.ptr.job_resrcs - slurm.hostlist_t hl + slurm.hostlist_t *hl uint32_t rel_node_inx int bit_inx = 0 int bit_reps = 0 @@ -1299,9 +1308,9 @@ cdef class Job: free(host) slurm.slurm_hostlist_destroy(hl) - return output + return output + - # https://github.com/SchedMD/slurm/blob/d525b6872a106d32916b33a8738f12510ec7cf04/src/api/job_info.c#L99 cdef _threads_per_core(char *host): # TODO diff --git a/pyslurm/core/job/sbatch_opts.pyx b/pyslurm/core/job/sbatch_opts.pyx index 3ba61fb5..cbce9bff 100644 --- a/pyslurm/core/job/sbatch_opts.pyx +++ b/pyslurm/core/job/sbatch_opts.pyx @@ -28,9 +28,9 @@ from pathlib import Path SBATCH_MAGIC = "#SBATCH" -class _SbatchOpt(): - def __init__(self, short_opt, long_opt, - our_attr_name, attr_param=None, is_boolean=False, +class SbatchOpt(): + def __init__(self, short_opt=None, long_opt=None, + our_attr_name=None, attr_param=None, is_boolean=False, has_optional_args=False): self.short_opt = short_opt self.long_opt = long_opt @@ -39,102 +39,127 @@ class _SbatchOpt(): self.is_boolean = is_boolean self.has_optional_args = has_optional_args + def set(self, val, desc, overwrite): + if self.our_attr_name is None: + return None + + if getattr(desc, self.our_attr_name) is None or overwrite: + val = self.attr_param if val is None else val + setattr(desc, self.our_attr_name, val) + + +class SbatchOptGresFlags(SbatchOpt): + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + + def set(self, val, desc, overwrite): + for flag in val.split(","): + flag = flag.casefold() + + if flag == "enforce-binding" or flag == "disable-binding": + if desc.gres_binding is None or overwrite: + desc.gres_binding = flag + elif flag == "one-task-per-sharing" or flag == "multiple-tasks-per-sharing": + if desc.gres_tasks_per_sharing is None or overwrite: + desc.gres_tasks_per_sharing = flag + # Sorted by occurrence in the sbatch manpage - keep in order. 
SBATCH_OPTIONS = [ - _SbatchOpt("A", "account", "account"), - _SbatchOpt(None, "acctg-freq", "accounting_gather_frequency"), - _SbatchOpt("a", "array", "array"), - _SbatchOpt(None, "batch", "batch_constraints"), - _SbatchOpt(None, "bb", "burst_buffer"), - _SbatchOpt(None, "bbf", "burst_buffer_file"), - _SbatchOpt("b", "begin", "begin_time"), - _SbatchOpt("D", "chdir", "working_directory"), - _SbatchOpt(None, "cluster-constraint", "cluster_constraints"), - _SbatchOpt("M", "clusters", "clusters"), - _SbatchOpt(None, "comment","comment"), - _SbatchOpt("C", "constraint", "constraints"), - _SbatchOpt(None, "container", "container"), - _SbatchOpt(None, "contiguous", "requires_contiguous_nodes"), - _SbatchOpt("S", "core-spec", "cores_reserved_for_system"), - _SbatchOpt(None, "cores-per-socket", "cores_per_socket"), - _SbatchOpt(None, "cpu-freq", "cpu_frequency"), - _SbatchOpt(None, "cpus-per-gpu", "cpus_per_gpu"), - _SbatchOpt("c", "cpus-per-task", "cpus_per_task"), - _SbatchOpt(None, "deadline", "deadline"), - _SbatchOpt(None, "delay-boot", "delay_boot_time"), - _SbatchOpt("d", "dependency", "dependencies"), - _SbatchOpt("m", "distribution", "distribution"), - _SbatchOpt("e", "error", "standard_error"), - _SbatchOpt("x", "exclude", "excluded_nodes"), - _SbatchOpt(None, "exclusive", "resource_sharing", "no"), - _SbatchOpt(None, "export", "environment"), - _SbatchOpt(None, "export-file", None), - _SbatchOpt("B", "extra-node-info", None), - _SbatchOpt(None, "get-user-env", "get_user_environment"), - _SbatchOpt(None, "gid", "group_id"), - _SbatchOpt(None, "gpu-bind", "gpu_binding"), - _SbatchOpt(None, "gpu-freq", None), - _SbatchOpt("G", "gpus", "gpus"), - _SbatchOpt(None, "gpus-per-node", "gpus_per_node"), - _SbatchOpt(None, "gpus-per-socket", "gpus_per_socket"), - _SbatchOpt(None, "gpus-per-socket", "gpus_per_task"), - _SbatchOpt(None, "gres", "gres_per_node"), - _SbatchOpt(None, "gres-flags", "gres_binding"), - _SbatchOpt(None, "hint", None), - _SbatchOpt("H", "hold", "priority", 0), - _SbatchOpt(None, "ignore-pbs", None), - _SbatchOpt("i", "input", "standard_in"), - _SbatchOpt("J", "job-name", "name"), - _SbatchOpt(None, "kill-on-invalid-dep", "kill_on_invalid_dependency"), - _SbatchOpt("L", "licenses", "licenses"), - _SbatchOpt(None, "mail-type", "mail_types"), - _SbatchOpt(None, "mail-user", "mail_user"), - _SbatchOpt(None, "mcs-label", "mcs_label"), - _SbatchOpt(None, "mem", "memory_per_node"), - _SbatchOpt(None, "mem-bind", None), - _SbatchOpt(None, "mem-per-cpu", "memory_per_cpu"), - _SbatchOpt(None, "mem-per-gpu", "memory_per_gpu"), - _SbatchOpt(None, "mincpus", "min_cpus_per_node"), - _SbatchOpt(None, "network", "network"), - _SbatchOpt(None, "nice", "nice"), - _SbatchOpt("k", "no-kill", "kill_on_node_fail", False), - _SbatchOpt(None, "no-requeue", "is_requeueable", False), - _SbatchOpt("F", "nodefile", None), - _SbatchOpt("w", "nodelist", "required_nodes"), - _SbatchOpt("N", "nodes", "nodes"), - _SbatchOpt("n", "ntasks", "ntasks"), - _SbatchOpt(None, "ntasks-per-core", "ntasks_per_core"), - _SbatchOpt(None, "ntasks-per-gpu", "ntasks_per_gpu"), - _SbatchOpt(None, "ntasks-per-node", "ntasks_per_node"), - _SbatchOpt(None, "ntasks-per-socket", "ntasks_per_socket"), - _SbatchOpt(None, "open-mode", "log_files_open_mode"), - _SbatchOpt("o", "output", "standard_output"), - _SbatchOpt("O", "overcommit", "overcommit", True), - _SbatchOpt("s", "oversubscribe", "resource_sharing", "yes"), - _SbatchOpt("p", "partition", "partition"), - _SbatchOpt(None, "power", "power_options"), - _SbatchOpt(None, 
"prefer", None), - _SbatchOpt(None, "priority", "priority"), - _SbatchOpt(None, "profile", "profile_types"), - _SbatchOpt(None, "propagate", None), - _SbatchOpt("q", "qos", "qos"), - _SbatchOpt(None, "reboot", "requires_node_reboot", True), - _SbatchOpt(None, "requeue", "is_requeueable", True), - _SbatchOpt(None, "reservation", "reservations"), - _SbatchOpt(None, "signal", "signal"), - _SbatchOpt(None, "sockets-per-node", "sockets_per_node"), - _SbatchOpt(None, "spread-job", "spreads_over_nodes", True), - _SbatchOpt(None, "switches", "switches"), - _SbatchOpt(None, "thread-spec", "threads_reserved_for_system"), - _SbatchOpt(None, "threads-per-core", "threads_per_core"), - _SbatchOpt("t", "time", "time_limit"), - _SbatchOpt(None, "time-min", "time_limit_min"), - _SbatchOpt(None, "tmp", "temporary_disk_per_node"), - _SbatchOpt(None, "uid", "user_id"), - _SbatchOpt(None, "use-min-nodes", "use_min_nodes", True), - _SbatchOpt(None, "wait-all-nodes", "wait_all_nodes", True), - _SbatchOpt(None, "wckey", "wckey"), + SbatchOpt("A", "account", "account"), + SbatchOpt(None, "acctg-freq", "accounting_gather_frequency"), + SbatchOpt("a", "array", "array"), + SbatchOpt(None, "batch", "batch_constraints"), + SbatchOpt(None, "bb", "burst_buffer"), + SbatchOpt(None, "bbf", "burst_buffer_file"), + SbatchOpt("b", "begin", "begin_time"), + SbatchOpt("D", "chdir", "working_directory"), + SbatchOpt(None, "cluster-constraint", "cluster_constraints"), + SbatchOpt("M", "clusters", "clusters"), + SbatchOpt(None, "comment","comment"), + SbatchOpt("C", "constraint", "constraints"), + SbatchOpt(None, "container", "container"), + SbatchOpt(None, "contiguous", "requires_contiguous_nodes"), + SbatchOpt("S", "core-spec", "cores_reserved_for_system"), + SbatchOpt(None, "cores-per-socket", "cores_per_socket"), + SbatchOpt(None, "cpu-freq", "cpu_frequency"), + SbatchOpt(None, "cpus-per-gpu", "cpus_per_gpu"), + SbatchOpt("c", "cpus-per-task", "cpus_per_task"), + SbatchOpt(None, "deadline", "deadline"), + SbatchOpt(None, "delay-boot", "delay_boot_time"), + SbatchOpt("d", "dependency", "dependencies"), + SbatchOpt("m", "distribution", "distribution"), + SbatchOpt("e", "error", "standard_error"), + SbatchOpt("x", "exclude", "excluded_nodes"), + SbatchOpt(None, "exclusive", "resource_sharing", "no"), + SbatchOpt(None, "export", "environment"), + SbatchOpt(None, "export-file", None), + SbatchOpt("B", "extra-node-info", None), + SbatchOpt(None, "get-user-env", "get_user_environment"), + SbatchOpt(None, "gid", "group_id"), + SbatchOpt(None, "gpu-bind", "gpu_binding"), + SbatchOpt(None, "gpu-freq", None), + SbatchOpt("G", "gpus", "gpus"), + SbatchOpt(None, "gpus-per-node", "gpus_per_node"), + SbatchOpt(None, "gpus-per-socket", "gpus_per_socket"), + SbatchOpt(None, "gpus-per-socket", "gpus_per_task"), + SbatchOpt(None, "gres", "gres_per_node"), + SbatchOptGresFlags(None, "gres-flags"), + SbatchOpt(None, "hint", None), + SbatchOpt("H", "hold", "priority", 0), + SbatchOpt(None, "ignore-pbs", None), + SbatchOpt("i", "input", "standard_in"), + SbatchOpt("J", "job-name", "name"), + SbatchOpt(None, "kill-on-invalid-dep", "kill_on_invalid_dependency"), + SbatchOpt("L", "licenses", "licenses"), + SbatchOpt(None, "mail-type", "mail_types"), + SbatchOpt(None, "mail-user", "mail_user"), + SbatchOpt(None, "mcs-label", "mcs_label"), + SbatchOpt(None, "mem", "memory_per_node"), + SbatchOpt(None, "mem-bind", None), + SbatchOpt(None, "mem-per-cpu", "memory_per_cpu"), + SbatchOpt(None, "mem-per-gpu", "memory_per_gpu"), + SbatchOpt(None, "mincpus", 
"min_cpus_per_node"), + SbatchOpt(None, "network", "network"), + SbatchOpt(None, "nice", "nice"), + SbatchOpt("k", "no-kill", "kill_on_node_fail", False), + SbatchOpt(None, "no-requeue", "is_requeueable", False), + SbatchOpt("F", "nodefile", None), + SbatchOpt("w", "nodelist", "required_nodes"), + SbatchOpt("N", "nodes", "nodes"), + SbatchOpt("n", "ntasks", "ntasks"), + SbatchOpt(None, "ntasks-per-core", "ntasks_per_core"), + SbatchOpt(None, "ntasks-per-gpu", "ntasks_per_gpu"), + SbatchOpt(None, "ntasks-per-node", "ntasks_per_node"), + SbatchOpt(None, "ntasks-per-socket", "ntasks_per_socket"), + SbatchOpt(None, "open-mode", "log_files_open_mode"), + SbatchOpt("o", "output", "standard_output"), + SbatchOpt("O", "overcommit", "overcommit", True), + SbatchOpt("s", "oversubscribe", "resource_sharing", "yes"), + SbatchOpt("p", "partition", "partition"), + SbatchOpt(None, "power", "power_options"), + SbatchOpt(None, "prefer", None), + SbatchOpt(None, "priority", "priority"), + SbatchOpt(None, "profile", "profile_types"), + SbatchOpt(None, "propagate", None), + SbatchOpt("q", "qos", "qos"), + SbatchOpt(None, "reboot", "requires_node_reboot", True), + SbatchOpt(None, "requeue", "is_requeueable", True), + SbatchOpt(None, "reservation", "reservations"), + SbatchOpt(None, "signal", "signal"), + SbatchOpt(None, "sockets-per-node", "sockets_per_node"), + SbatchOpt(None, "spread-job", "spreads_over_nodes", True), + SbatchOpt(None, "switches", "switches"), + SbatchOpt(None, "thread-spec", "threads_reserved_for_system"), + SbatchOpt(None, "threads-per-core", "threads_per_core"), + SbatchOpt("t", "time", "time_limit"), + SbatchOpt(None, "time-min", "time_limit_min"), + SbatchOpt(None, "tmp", "temporary_disk_per_node"), + SbatchOpt(None, "uid", "user_id"), + SbatchOpt(None, "use-min-nodes", "use_min_nodes", True), + SbatchOpt(None, "wait-all-nodes", "wait_all_nodes", True), + SbatchOpt(None, "wckey", "wckey"), ] @@ -178,7 +203,7 @@ def _find_opt(opt): if opt == sbopt.short_opt or opt == sbopt.long_opt: return sbopt - return None + return SbatchOpt() def _parse_opts_from_batch_script(desc, script, overwrite): @@ -194,11 +219,4 @@ def _parse_opts_from_batch_script(desc, script, overwrite): if line.startswith(SBATCH_MAGIC): flag, val = _parse_line(line) opt = _find_opt(flag) - - if not opt or opt.our_attr_name is None: - # Not supported - continue - - if getattr(desc, opt.our_attr_name) is None or overwrite: - val = opt.attr_param if val is None else val - setattr(desc, opt.our_attr_name, val) + opt.set(val, desc, overwrite) diff --git a/pyslurm/core/job/step.pyx b/pyslurm/core/job/step.pyx index 1da489ee..812d4f1d 100644 --- a/pyslurm/core/job/step.pyx +++ b/pyslurm/core/job/step.pyx @@ -30,7 +30,7 @@ from pyslurm.settings import LOCAL_CLUSTER from pyslurm import xcollections from pyslurm.utils.helpers import ( signal_to_num, - instance_to_dict, + instance_to_dict, uid_to_name, humanize_step_id, dehumanize_step_id, @@ -260,7 +260,7 @@ cdef class JobStep: Implements the slurm_signal_job_step RPC. Args: - signal (Union[str, int]): + signal (Union[str, int]): Any valid signal which will be sent to the Job. Can be either a str like `SIGUSR1`, or simply an [int][]. @@ -294,7 +294,7 @@ cdef class JobStep: >>> pyslurm.JobStep(9999, 1).cancel() """ step_id = self.ptr.step_id.step_id - verify_rpc(slurm_kill_job_step(self.job_id, step_id, 9)) + verify_rpc(slurm_kill_job_step(self.job_id, step_id, 9, 0)) def modify(self, JobStep changes): """Modify a job step. 
@@ -312,7 +312,7 @@ cdef class JobStep: Examples: >>> import pyslurm - >>> + >>> >>> # Setting the new time-limit to 20 days >>> changes = pyslurm.JobStep(time_limit="20-00:00:00") >>> pyslurm.JobStep(9999, 1).modify(changes) @@ -399,7 +399,7 @@ cdef class JobStep: @property def cluster(self): return cstr.to_unicode(self.ptr.cluster) - + @property def srun_host(self): return cstr.to_unicode(self.ptr.srun_host) @@ -439,7 +439,7 @@ cdef class JobStep: @property def ntasks(self): return u32_parse(self.ptr.num_tasks) - + @property def distribution(self): return TaskDistribution.from_int(self.ptr.task_dist) diff --git a/pyslurm/core/job/submission.pxd b/pyslurm/core/job/submission.pxd index 4dc7f035..32aeb685 100644 --- a/pyslurm/core/job/submission.pxd +++ b/pyslurm/core/job/submission.pxd @@ -55,17 +55,17 @@ cdef class JobSubmitDescription: name (str): Name of the Job, same as -J/--job-name from sbatch. account (str): - Account of the job, same as -A/--account from sbatch. + Account of the job, same as -A/--account from sbatch. user_id (Union[str, int]): Run the job as a different User, same as --uid from sbatch. This requires root privileges. - You can both specify the name or numeric uid of the User. + You can both specify the name or numeric uid of the User. group_id (Union[str, int]): Run the job as a different Group, same as --gid from sbatch. This requires root privileges. - You can both specify the name or numeric gid of the User. + You can both specify the name or numeric gid of the User. priority (int): - Specific priority the Job will receive. + Specific priority the Job will receive. Same as --priority from sbatch. You can achieve the behaviour of sbatch's --hold option by specifying a priority of 0. @@ -183,7 +183,7 @@ cdef class JobSubmitDescription: An MCS Label for the Job. This is the same as --mcs-label from sbatch. memory_per_cpu (Union[str, int]): - Memory required per allocated CPU. + Memory required per allocated CPU. The default unit is in Mebibytes. You are also able to specify unit suffixes like K|M|G|T. @@ -237,7 +237,7 @@ cdef class JobSubmitDescription: Adjusted scheduling priority for the Job. This is the same as --nice from sbatch. log_files_open_mode (str): - Mode in which standard_output and standard_error log files should be opened. + Mode in which standard_output and standard_error log files should be opened. This is the same as --open-mode from sbatch. @@ -353,7 +353,7 @@ cdef class JobSubmitDescription: gpus (Union[dict, str, int]): GPUs for the Job to be allocated in total. This is the same as -G/--gpus from sbatch. - Specifying the type of the GPU is optional. + Specifying the type of the GPU is optional. For example, specifying the GPU counts as a dict: @@ -422,7 +422,7 @@ cdef class JobSubmitDescription: This is the same as --gres from sbatch. You should also use this option if you want to specify GPUs per node (--gpus-per-node). Specifying the type (by separating GRES name and type with a - semicolon) is optional. + semicolon) is optional. For example, specifying it as a dict: @@ -463,7 +463,7 @@ cdef class JobSubmitDescription: switches. This is the same as --switches from sbatch. - + For example, specifying it as a dict: switches = { "count": 5, "max_wait_time": "00:10:00" } @@ -512,13 +512,22 @@ cdef class JobSubmitDescription: This is the same as --use-min-nodes from sbatch. gres_binding (str): Generic resource task binding options. - This is the --gres-flags option from sbatch. + This is contained in the --gres-flags option from sbatch. 
Possible values are: * `enforce-binding` * `disable-binding` + gres_tasks_per_sharing (str): + Shared GRES Tasks + This is contained in the --gres-flags option from sbatch. + + + Possible values are: + + * `multiple` or `multiple-tasks-per-sharing` + * `one` or `one-task-per-sharing` temporary_disk_per_node (Union[str, int]): Amount of temporary disk space needed per node. @@ -630,6 +639,7 @@ cdef class JobSubmitDescription: spreads_over_nodes use_min_nodes gres_binding + gres_tasks_per_sharing temporary_disk_per_node get_user_environment min_cpus_per_node diff --git a/pyslurm/core/job/submission.pyx b/pyslurm/core/job/submission.pyx index 0c9e699c..cb22d049 100644 --- a/pyslurm/core/job/submission.pyx +++ b/pyslurm/core/job/submission.pyx @@ -43,7 +43,7 @@ from pyslurm.utils.ctime import ( ) from pyslurm.utils.helpers import ( humanize, - dehumanize, + dehumanize, signal_to_num, user_to_uid, group_to_gid, @@ -92,7 +92,7 @@ cdef class JobSubmitDescription: ... cpus_per_task=1, ... time_limit="10-00:00:00", ... script="/path/to/your/submit_script.sh") - >>> + >>> >>> job_id = desc.submit() >>> print(job_id) 99 @@ -117,7 +117,7 @@ cdef class JobSubmitDescription: attributes. Args: - overwrite (bool): + overwrite (bool): If set to `True`, the value from an option found in the environment will override the current value of the attribute in this instance. Default is `False` @@ -166,9 +166,9 @@ cdef class JobSubmitDescription: # Arguments directly specified upon object creation will # always have precedence. continue - - spec = attr.upper() - val = pyenviron.get(f"PYSLURM_JOBDESC_{spec)}") + + spec = attr.upper() + val = pyenviron.get(f"PYSLURM_JOBDESC_{spec)}") if (val is not None and (getattr(self, attr) is None or overwrite)): @@ -225,7 +225,7 @@ cdef class JobSubmitDescription: cstr.from_gres_dict(self.gpus_per_task, "gpu")) cstr.fmalloc(&ptr.tres_per_node, cstr.from_gres_dict(self.gres_per_node)) - cstr.fmalloc(&ptr.cpus_per_tres, + cstr.fmalloc(&ptr.cpus_per_tres, cstr.from_gres_dict(self.cpus_per_gpu, "gpu")) cstr.fmalloc(&ptr.admin_comment, self.admin_comment) cstr.fmalloc(&self.ptr.dependency, @@ -256,7 +256,7 @@ cdef class JobSubmitDescription: u64_set_bool_flag(&ptr.bitflags, self.spreads_over_nodes, slurm.SPREAD_JOB) u64_set_bool_flag(&ptr.bitflags, self.kill_on_invalid_dependency, - slurm.KILL_INV_DEP) + slurm.KILL_INV_DEP) u64_set_bool_flag(&ptr.bitflags, self.use_min_nodes, slurm.USE_MIN_NODES) ptr.contiguous = u16_bool(self.requires_contiguous_nodes) @@ -283,6 +283,7 @@ cdef class JobSubmitDescription: self._set_cpu_frequency() self._set_gpu_binding() self._set_gres_binding() + self._set_gres_tasks_per_sharing() self._set_min_cpus() # TODO @@ -330,7 +331,7 @@ cdef class JobSubmitDescription: and self.threads_reserved_for_system): raise ValueError("cores_reserved_for_system is mutually " " exclusive with threads_reserved_for_system.") - + def _set_core_spec(self): if self.cores_reserved_for_system: self.ptr.core_spec = u16(self.cores_reserved_for_system) @@ -351,13 +352,13 @@ cdef class JobSubmitDescription: self.ptr.cpu_freq_min = freq_min self.ptr.cpu_freq_max = freq_max self.ptr.cpu_freq_gov = freq_gov - + def _set_memory(self): if self.memory_per_cpu: - self.ptr.pn_min_memory = u64(dehumanize(self.memory_per_cpu)) + self.ptr.pn_min_memory = u64(dehumanize(self.memory_per_cpu)) self.ptr.pn_min_memory |= slurm.MEM_PER_CPU elif self.memory_per_node: - self.ptr.pn_min_memory = u64(dehumanize(self.memory_per_node)) + self.ptr.pn_min_memory = u64(dehumanize(self.memory_per_node)) 
elif self.memory_per_gpu: mem_gpu = u64(dehumanize(val)) cstr.fmalloc(&self.ptr.mem_per_tres, f"gres:gpu:{mem_gpu}") @@ -433,7 +434,7 @@ cdef class JobSubmitDescription: if not "=" in item: continue - var, val = item.split("=", 1) + var, val = item.split("=", 1) slurm_env_array_overwrite(&self.ptr.environment, var, str(val)) get_user_env = True @@ -446,7 +447,7 @@ cdef class JobSubmitDescription: var, str(val)) # Setup all User selected env vars. - for var, val in vals.items(): + for var, val in vals.items(): slurm_env_array_overwrite(&self.ptr.environment, var, str(val)) @@ -467,7 +468,7 @@ cdef class JobSubmitDescription: if isinstance(self.distribution, int): # Assume the user meant to specify the plane size only. - plane = u16(self.distribution) + plane = u16(self.distribution) elif isinstance(self.distribution, str): # Support sbatch style string input dist = TaskDistribution.from_str(self.distribution) @@ -492,7 +493,7 @@ cdef class JobSubmitDescription: if "verbose" in self.gpu_binding: binding = f"verbose,gpu:{binding}" - cstr.fmalloc(&self.ptr.tres_bind, binding) + cstr.fmalloc(&self.ptr.tres_bind, binding) def _set_min_cpus(self): if self.min_cpus_per_node: @@ -534,11 +535,23 @@ cdef class JobSubmitDescription: def _set_gres_binding(self): if not self.gres_binding: return None - elif self.gres_binding.casefold() == "enforce-binding": + + binding = self.gres_binding.casefold() + if binding == "enforce-binding": self.ptr.bitflags |= slurm.GRES_ENFORCE_BIND - elif self.gres_binding.casefold() == "disable-binding": + elif binding == "disable-binding": self.ptr.bitflags |= slurm.GRES_DISABLE_BIND + def _set_gres_tasks_per_sharing(self): + if not self.gres_tasks_per_sharing: + return None + + sharing = self.gres_tasks_per_sharing.casefold() + if sharing == "multiple" or sharing == "multiple-tasks-per-sharing": + self.ptr.bitflags |= slurm.GRES_MULT_TASKS_PER_SHARING + elif sharing == "one" or sharing == "one-task-per-sharing": + self.ptr.bitflags |= slurm.GRES_ONE_TASK_PER_SHARING + def _parse_dependencies(val): final = None @@ -565,7 +578,7 @@ def _parse_dependencies(val): if not isinstance(vals, list): vals = str(vals).split(",") - vals = [str(s) for s in vals] + vals = [str(s) for s in vals] final.append(f"{condition}:{':'.join(vals)}") final = delim.join(final) @@ -627,7 +640,7 @@ def _parse_switches_str_to_dict(switches_str): vals = str(switches_str.split("@")) if len(vals) > 1: out["max_wait_time"] = timestr_to_secs(vals[1]) - + out["count"] = u32(vals[0]) return out @@ -691,7 +704,7 @@ def _validate_cpu_freq(freq): def _validate_batch_script(script, args=None): if Path(script).is_file(): # First assume the caller is passing a path to a script and we try - # to load it. + # to load it. 
script = Path(script).read_text() else: if args: diff --git a/pyslurm/core/partition.pyx b/pyslurm/core/partition.pyx index ba0bf559..461acc8c 100644 --- a/pyslurm/core/partition.pyx +++ b/pyslurm/core/partition.pyx @@ -156,7 +156,7 @@ cdef class Partitions(MultiClusterMap): @property def total_nodes(self): return xcollections.sum_property(self, Partition.total_nodes) - + cdef class Partition: @@ -183,7 +183,7 @@ cdef class Partition: xfree(self.ptr) def __dealloc__(self): - self._dealloc_impl() + self._dealloc_impl() def __repr__(self): return f'pyslurm.{self.__class__.__name__}({self.name})' @@ -626,7 +626,7 @@ cdef class Partition: def is_user_exclusive(self, val): u16_set_bool_flag(&self.ptr.flags, val, slurm.PART_FLAG_EXCLUSIVE_USER, slurm.PART_FLAG_EXC_USER_CLR) - + @property def is_hidden(self): return u16_parse_bool_flag(self.ptr.flags, slurm.PART_FLAG_HIDDEN) @@ -741,9 +741,6 @@ def _select_type_int_to_list(stype): # plugin out = _select_type_int_to_cons_res(stype) - if stype & slurm.CR_OTHER_CONS_RES: - out.append("OTHER_CONS_RES") - if stype & slurm.CR_ONE_TASK_PER_CORE: out.append("ONE_TASK_PER_CORE") @@ -808,7 +805,7 @@ cdef _extract_job_default_item(typ, slurm.List job_defaults_list): job_defaults_t *default_item SlurmList job_def_list SlurmListItem job_def_item - + job_def_list = SlurmList.wrap(job_defaults_list, owned=False) for job_def_item in job_def_list: default_item = job_def_item.data @@ -828,7 +825,7 @@ cdef _concat_job_default_str(typ, val, char **job_defaults_str): current.update({typ : _val}) cstr.from_dict(job_defaults_str, current) - + def _get_memory(value, per_cpu): if value != slurm.NO_VAL64: diff --git a/pyslurm/pydefines/slurm_defines.pxi b/pyslurm/pydefines/slurm_defines.pxi index b741d382..bcc7d54a 100644 --- a/pyslurm/pydefines/slurm_defines.pxi +++ b/pyslurm/pydefines/slurm_defines.pxi @@ -140,7 +140,6 @@ CR_SOCKET = slurm.CR_SOCKET CR_CORE = slurm.CR_CORE CR_BOARD = slurm.CR_BOARD CR_MEMORY = slurm.CR_MEMORY -CR_OTHER_CONS_RES = slurm.CR_OTHER_CONS_RES CR_ONE_TASK_PER_CORE = slurm.CR_ONE_TASK_PER_CORE CR_PACK_NODES = slurm.CR_PACK_NODES CR_OTHER_CONS_TRES = slurm.CR_OTHER_CONS_TRES @@ -208,7 +207,6 @@ JOB_WAS_RUNNING = slurm.JOB_WAS_RUNNING RESET_ACCRUE_TIME = slurm.RESET_ACCRUE_TIME JOB_MEM_SET = slurm.JOB_MEM_SET -JOB_RESIZED = slurm.JOB_RESIZED USE_DEFAULT_ACCT = slurm.USE_DEFAULT_ACCT USE_DEFAULT_PART = slurm.USE_DEFAULT_PART USE_DEFAULT_QOS = slurm.USE_DEFAULT_QOS @@ -268,7 +266,6 @@ RESERVE_FLAG_PART_NODES = slurm.RESERVE_FLAG_PART_NODES RESERVE_FLAG_NO_PART_NODES = slurm.RESERVE_FLAG_NO_PART_NODES RESERVE_FLAG_OVERLAP = slurm.RESERVE_FLAG_OVERLAP RESERVE_FLAG_SPEC_NODES = slurm.RESERVE_FLAG_SPEC_NODES -RESERVE_FLAG_FIRST_CORES = slurm.RESERVE_FLAG_FIRST_CORES RESERVE_FLAG_TIME_FLOAT = slurm.RESERVE_FLAG_TIME_FLOAT RESERVE_FLAG_REPLACE = slurm.RESERVE_FLAG_REPLACE RESERVE_FLAG_ALL_NODES = slurm.RESERVE_FLAG_ALL_NODES diff --git a/pyslurm/pydefines/slurm_enums.pxi b/pyslurm/pydefines/slurm_enums.pxi index 38aab46c..eb292255 100644 --- a/pyslurm/pydefines/slurm_enums.pxi +++ b/pyslurm/pydefines/slurm_enums.pxi @@ -238,11 +238,9 @@ AUTH_PLUGIN_JWT = slurm.AUTH_PLUGIN_JWT # enum select_plugin_type -SELECT_PLUGIN_CONS_RES = slurm.SELECT_PLUGIN_CONS_RES SELECT_PLUGIN_LINEAR = slurm.SELECT_PLUGIN_LINEAR SELECT_PLUGIN_SERIAL = slurm.SELECT_PLUGIN_SERIAL SELECT_PLUGIN_CRAY_LINEAR = slurm.SELECT_PLUGIN_CRAY_LINEAR -SELECT_PLUGIN_CRAY_CONS_RES = slurm.SELECT_PLUGIN_CRAY_CONS_RES SELECT_PLUGIN_CONS_TRES = slurm.SELECT_PLUGIN_CONS_TRES 
SELECT_PLUGIN_CRAY_CONS_TRES = slurm.SELECT_PLUGIN_CRAY_CONS_TRES diff --git a/pyslurm/pydefines/slurmdb_enums.pxi b/pyslurm/pydefines/slurmdb_enums.pxi index 7b7afa2e..bcde0cb1 100644 --- a/pyslurm/pydefines/slurmdb_enums.pxi +++ b/pyslurm/pydefines/slurmdb_enums.pxi @@ -86,7 +86,6 @@ SLURMDB_REMOVE_ASSOC_USAGE = slurm.SLURMDB_REMOVE_ASSOC_USAGE SLURMDB_ADD_RES = slurm.SLURMDB_ADD_RES SLURMDB_REMOVE_RES = slurm.SLURMDB_REMOVE_RES SLURMDB_MODIFY_RES = slurm.SLURMDB_MODIFY_RES -SLURMDB_REMOVE_QOS_USAGE = slurm.SLURMDB_REMOVE_QOS_USAGE SLURMDB_ADD_TRES = slurm.SLURMDB_ADD_TRES SLURMDB_UPDATE_FEDS = slurm.SLURMDB_UPDATE_FEDS diff --git a/pyslurm/pyslurm.pyx b/pyslurm/pyslurm.pyx index 82a26ecc..e2363798 100644 --- a/pyslurm/pyslurm.pyx +++ b/pyslurm/pyslurm.pyx @@ -117,6 +117,63 @@ cdef inline SLURM_ID_HASH_STEP_ID(hash_id): cdef inline SLURM_ID_HASH_LEGACY(hash_id): return ((hash_id >> 32) * 10000000000 + (hash_id & 0x00000000FFFFFFFF)) + +# Helpers +cdef inline listOrNone(char* value, sep_char): + if value is NULL: + return [] + + if not sep_char: + return value.decode("UTF-8", "replace") + + if sep_char == '': + return value.decode("UTF-8", "replace") + + return value.decode("UTF_8", "replace").split(sep_char) + + +cdef inline stringOrNone(char* value, value2): + if value is NULL: + if value2 is '': + return None + return value2 + return value.decode("UTF-8", "replace") + + +cdef inline int16orNone(uint16_t value): + if value is NO_VAL16: + return None + else: + return value + + +cdef inline int32orNone(uint32_t value): + if value is NO_VAL: + return None + else: + return value + + +cdef inline int64orNone(uint64_t value): + if value is NO_VAL64: + return None + else: + return value + + +cdef inline int16orUnlimited(uint16_t value, return_type): + if value is INFINITE16: + if return_type is "int": + return None + else: + return "UNLIMITED" + else: + if return_type is "int": + return value + else: + return str(value) + + # # Defined job states # @@ -288,7 +345,7 @@ def get_controllers(): if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) control_machs = [] if slurm_ctl_conf_ptr is not NULL: @@ -296,7 +353,7 @@ def get_controllers(): if slurm_ctl_conf_ptr.control_machine is not NULL: length = slurm_ctl_conf_ptr.control_cnt for index in range(length): - primary = slurm.stringOrNone(slurm_ctl_conf_ptr.control_machine[index], '') + primary = stringOrNone(slurm_ctl_conf_ptr.control_machine[index], '') control_machs.append(primary) slurm.slurm_free_ctl_conf(slurm_ctl_conf_ptr) @@ -351,7 +408,7 @@ def slurm_load_slurmd_status(): int errCode = slurm.slurm_load_slurmd_status(&slurmd_status) if errCode == slurm.SLURM_SUCCESS: - hostname = slurm.stringOrNone(slurmd_status.hostname, '') + hostname = stringOrNone(slurmd_status.hostname, '') Status_dict['actual_boards'] = slurmd_status.actual_boards Status_dict['booted'] = slurmd_status.booted Status_dict['actual_cores'] = slurmd_status.actual_cores @@ -364,9 +421,9 @@ def slurm_load_slurmd_status(): Status_dict['last_slurmctld_msg'] = slurmd_status.last_slurmctld_msg Status_dict['pid'] = slurmd_status.pid Status_dict['slurmd_debug'] = slurmd_status.slurmd_debug - Status_dict['slurmd_logfile'] = slurm.stringOrNone(slurmd_status.slurmd_logfile, '') - Status_dict['step_list'] = slurm.stringOrNone(slurmd_status.step_list, '') - Status_dict['version'] = slurm.stringOrNone(slurmd_status.version, 
'') + Status_dict['slurmd_logfile'] = stringOrNone(slurmd_status.slurmd_logfile, '') + Status_dict['step_list'] = stringOrNone(slurmd_status.step_list, '') + Status_dict['version'] = stringOrNone(slurmd_status.version, '') Status[hostname] = Status_dict @@ -460,7 +517,7 @@ cdef class config: """Load the slurm control configuration information. Returns: - int: slurm error code + int: slurm error code """ cdef: slurm.slurm_conf_t *slurm_ctl_conf_ptr = NULL @@ -470,7 +527,7 @@ cdef class config: if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) self.__Config_ptr = slurm_ctl_conf_ptr return errCode @@ -543,89 +600,83 @@ cdef class config: self.__lastUpdate = self.__Config_ptr.last_update - Ctl_dict['accounting_storage_tres'] = slurm.stringOrNone(self.__Config_ptr.accounting_storage_tres, '') + Ctl_dict['accounting_storage_tres'] = stringOrNone(self.__Config_ptr.accounting_storage_tres, '') Ctl_dict['accounting_storage_enforce'] = self.__Config_ptr.accounting_storage_enforce - Ctl_dict['accounting_storage_backup_host'] = slurm.stringOrNone(self.__Config_ptr.accounting_storage_backup_host, '') - Ctl_dict['accounting_storage_ext_host'] = slurm.stringOrNone(self.__Config_ptr.accounting_storage_ext_host, '') - Ctl_dict['accounting_storage_host'] = slurm.stringOrNone(self.__Config_ptr.accounting_storage_host, '') - Ctl_dict['accounting_storage_pass'] = slurm.stringOrNone(self.__Config_ptr.accounting_storage_pass, '') + Ctl_dict['accounting_storage_backup_host'] = stringOrNone(self.__Config_ptr.accounting_storage_backup_host, '') + Ctl_dict['accounting_storage_ext_host'] = stringOrNone(self.__Config_ptr.accounting_storage_ext_host, '') + Ctl_dict['accounting_storage_host'] = stringOrNone(self.__Config_ptr.accounting_storage_host, '') + Ctl_dict['accounting_storage_pass'] = stringOrNone(self.__Config_ptr.accounting_storage_pass, '') Ctl_dict['accounting_storage_port'] = self.__Config_ptr.accounting_storage_port - Ctl_dict['accounting_storage_type'] = slurm.stringOrNone(self.__Config_ptr.accounting_storage_type, '') - Ctl_dict['accounting_storage_user'] = slurm.stringOrNone(self.__Config_ptr.accounting_storage_user, '') - Ctl_dict['acct_gather_energy_type'] = slurm.stringOrNone(self.__Config_ptr.acct_gather_energy_type, '') - Ctl_dict['acct_gather_profile_type'] = slurm.stringOrNone(self.__Config_ptr.acct_gather_profile_type, '') - Ctl_dict['acct_gather_interconnect_type'] = slurm.stringOrNone(self.__Config_ptr.acct_gather_interconnect_type, '') - Ctl_dict['acct_gather_filesystem_type'] = slurm.stringOrNone(self.__Config_ptr.acct_gather_filesystem_type, '') + Ctl_dict['accounting_storage_type'] = stringOrNone(self.__Config_ptr.accounting_storage_type, '') + Ctl_dict['accounting_storage_user'] = stringOrNone(self.__Config_ptr.accounting_storage_user, '') + Ctl_dict['acct_gather_energy_type'] = stringOrNone(self.__Config_ptr.acct_gather_energy_type, '') + Ctl_dict['acct_gather_profile_type'] = stringOrNone(self.__Config_ptr.acct_gather_profile_type, '') + Ctl_dict['acct_gather_interconnect_type'] = stringOrNone(self.__Config_ptr.acct_gather_interconnect_type, '') + Ctl_dict['acct_gather_filesystem_type'] = stringOrNone(self.__Config_ptr.acct_gather_filesystem_type, '') Ctl_dict['acct_gather_node_freq'] = self.__Config_ptr.acct_gather_node_freq - Ctl_dict['auth_alt_types'] = slurm.stringOrNone(self.__Config_ptr.authalttypes, '') - 
Ctl_dict['authinfo'] = slurm.stringOrNone(self.__Config_ptr.authinfo, '') - Ctl_dict['authtype'] = slurm.stringOrNone(self.__Config_ptr.authtype, '') + Ctl_dict['auth_alt_types'] = stringOrNone(self.__Config_ptr.authalttypes, '') + Ctl_dict['authinfo'] = stringOrNone(self.__Config_ptr.authinfo, '') + Ctl_dict['authtype'] = stringOrNone(self.__Config_ptr.authtype, '') Ctl_dict['batch_start_timeout'] = self.__Config_ptr.batch_start_timeout - Ctl_dict['bb_type'] = slurm.stringOrNone(self.__Config_ptr.bb_type, '') - Ctl_dict['bcast_exclude'] = slurm.stringOrNone(self.__Config_ptr.bcast_exclude, '') - Ctl_dict['bcast_parameters'] = slurm.stringOrNone(self.__Config_ptr.bcast_parameters, '') + Ctl_dict['bb_type'] = stringOrNone(self.__Config_ptr.bb_type, '') + Ctl_dict['bcast_exclude'] = stringOrNone(self.__Config_ptr.bcast_exclude, '') + Ctl_dict['bcast_parameters'] = stringOrNone(self.__Config_ptr.bcast_parameters, '') Ctl_dict['boot_time'] = self.__Config_ptr.boot_time - Ctl_dict['core_spec_plugin'] = slurm.stringOrNone(self.__Config_ptr.core_spec_plugin, '') - Ctl_dict['cli_filter_plugins'] = slurm.stringOrNone(self.__Config_ptr.cli_filter_plugins, '') - Ctl_dict['cluster_name'] = slurm.stringOrNone(self.__Config_ptr.cluster_name, '') - Ctl_dict['comm_params'] = slurm.stringOrNone(self.__Config_ptr.comm_params, '') + Ctl_dict['core_spec_plugin'] = stringOrNone(self.__Config_ptr.core_spec_plugin, '') + Ctl_dict['cli_filter_plugins'] = stringOrNone(self.__Config_ptr.cli_filter_plugins, '') + Ctl_dict['cluster_name'] = stringOrNone(self.__Config_ptr.cluster_name, '') + Ctl_dict['comm_params'] = stringOrNone(self.__Config_ptr.comm_params, '') Ctl_dict['complete_wait'] = self.__Config_ptr.complete_wait Ctl_dict['conf_flags'] = self.__Config_ptr.conf_flags - Ctl_dict['cpu_freq_def'] = slurm.int32orNone(self.__Config_ptr.cpu_freq_def) + Ctl_dict['cpu_freq_def'] = int32orNone(self.__Config_ptr.cpu_freq_def) Ctl_dict['cpu_freq_govs'] = self.__Config_ptr.cpu_freq_govs - Ctl_dict['cred_type'] = slurm.stringOrNone(self.__Config_ptr.cred_type, '') + Ctl_dict['cred_type'] = stringOrNone(self.__Config_ptr.cred_type, '') Ctl_dict['debug_flags'] = self.__Config_ptr.debug_flags Ctl_dict['def_mem_per_cpu'] = self.__Config_ptr.def_mem_per_cpu - Ctl_dict['dependency_params'] = slurm.stringOrNone(self.__Config_ptr.dependency_params, '') + Ctl_dict['dependency_params'] = stringOrNone(self.__Config_ptr.dependency_params, '') Ctl_dict['eio_timeout'] = self.__Config_ptr.eio_timeout Ctl_dict['enforce_part_limits'] = bool(self.__Config_ptr.enforce_part_limits) - Ctl_dict['epilog'] = slurm.stringOrNone(self.__Config_ptr.epilog, '') + Ctl_dict['epilog'] = stringOrNone(self.__Config_ptr.epilog, '') Ctl_dict['epilog_msg_time'] = self.__Config_ptr.epilog_msg_time - Ctl_dict['epilog_slurmctld'] = slurm.stringOrNone(self.__Config_ptr.epilog_slurmctld, '') - Ctl_dict['ext_sensors_type'] = slurm.stringOrNone(self.__Config_ptr.ext_sensors_type, '') - Ctl_dict['federation_parameters'] = slurm.stringOrNone(self.__Config_ptr.fed_params, '') + Ctl_dict['epilog_slurmctld'] = stringOrNone(self.__Config_ptr.epilog_slurmctld, '') + Ctl_dict['ext_sensors_type'] = stringOrNone(self.__Config_ptr.ext_sensors_type, '') + Ctl_dict['federation_parameters'] = stringOrNone(self.__Config_ptr.fed_params, '') Ctl_dict['first_job_id'] = self.__Config_ptr.first_job_id Ctl_dict['fs_dampening_factor'] = self.__Config_ptr.fs_dampening_factor Ctl_dict['get_env_timeout'] = self.__Config_ptr.get_env_timeout - Ctl_dict['gpu_freq_def'] = 
slurm.stringOrNone(self.__Config_ptr.gpu_freq_def, '') - Ctl_dict['gres_plugins'] = slurm.listOrNone(self.__Config_ptr.gres_plugins, ',') + Ctl_dict['gpu_freq_def'] = stringOrNone(self.__Config_ptr.gpu_freq_def, '') + Ctl_dict['gres_plugins'] = listOrNone(self.__Config_ptr.gres_plugins, ',') Ctl_dict['group_time'] = self.__Config_ptr.group_time Ctl_dict['group_update_force'] = self.__Config_ptr.group_force Ctl_dict['hash_val'] = self.__Config_ptr.hash_val Ctl_dict['health_check_interval'] = self.__Config_ptr.health_check_interval Ctl_dict['health_check_node_state'] = self.__Config_ptr.health_check_node_state - Ctl_dict['health_check_program'] = slurm.stringOrNone(self.__Config_ptr.health_check_program, '') + Ctl_dict['health_check_program'] = stringOrNone(self.__Config_ptr.health_check_program, '') Ctl_dict['inactive_limit'] = self.__Config_ptr.inactive_limit - Ctl_dict['job_acct_gather_freq'] = slurm.stringOrNone(self.__Config_ptr.job_acct_gather_freq, '') - Ctl_dict['job_acct_gather_type'] = slurm.stringOrNone(self.__Config_ptr.job_acct_gather_type, '') - Ctl_dict['job_acct_gather_params'] = slurm.stringOrNone(self.__Config_ptr.job_acct_gather_params, '') - Ctl_dict['job_comp_host'] = slurm.stringOrNone(self.__Config_ptr.job_comp_host, '') - Ctl_dict['job_comp_loc'] = slurm.stringOrNone(self.__Config_ptr.job_comp_loc, '') - Ctl_dict['job_comp_params'] = slurm.stringOrNone(self.__Config_ptr.job_comp_params, '') - Ctl_dict['job_comp_pass'] = slurm.stringOrNone(self.__Config_ptr.job_comp_pass, '') + Ctl_dict['job_acct_gather_freq'] = stringOrNone(self.__Config_ptr.job_acct_gather_freq, '') + Ctl_dict['job_acct_gather_type'] = stringOrNone(self.__Config_ptr.job_acct_gather_type, '') + Ctl_dict['job_acct_gather_params'] = stringOrNone(self.__Config_ptr.job_acct_gather_params, '') + Ctl_dict['job_comp_host'] = stringOrNone(self.__Config_ptr.job_comp_host, '') + Ctl_dict['job_comp_loc'] = stringOrNone(self.__Config_ptr.job_comp_loc, '') + Ctl_dict['job_comp_params'] = stringOrNone(self.__Config_ptr.job_comp_params, '') + Ctl_dict['job_comp_pass'] = stringOrNone(self.__Config_ptr.job_comp_pass, '') Ctl_dict['job_comp_port'] = self.__Config_ptr.job_comp_port - Ctl_dict['job_comp_type'] = slurm.stringOrNone(self.__Config_ptr.job_comp_type, '') - Ctl_dict['job_comp_user'] = slurm.stringOrNone(self.__Config_ptr.job_comp_user, '') - Ctl_dict['job_container_plugin'] = slurm.stringOrNone(self.__Config_ptr.job_container_plugin, '') - Ctl_dict['job_credential_private_key'] = slurm.stringOrNone( - self.__Config_ptr.job_credential_private_key, '' - ) - Ctl_dict['job_credential_public_certificate'] = slurm.stringOrNone( - self.__Config_ptr.job_credential_public_certificate, '' - ) + Ctl_dict['job_comp_type'] = stringOrNone(self.__Config_ptr.job_comp_type, '') + Ctl_dict['job_comp_user'] = stringOrNone(self.__Config_ptr.job_comp_user, '') + Ctl_dict['job_container_plugin'] = stringOrNone(self.__Config_ptr.job_container_plugin, '') # TODO: wrap with job_defaults_str() - #Ctl_dict['job_defaults_list'] = slurm.stringOrNone(self.__Config_ptr.job_defaults_list, ',') + #Ctl_dict['job_defaults_list'] = stringOrNone(self.__Config_ptr.job_defaults_list, ',') Ctl_dict['job_file_append'] = bool(self.__Config_ptr.job_file_append) Ctl_dict['job_requeue'] = bool(self.__Config_ptr.job_requeue) - Ctl_dict['job_submit_plugins'] = slurm.stringOrNone(self.__Config_ptr.job_submit_plugins, '') - Ctl_dict['keep_alive_time'] = slurm.int16orNone(self.__Config_ptr.keepalive_time) + Ctl_dict['job_submit_plugins'] = 
stringOrNone(self.__Config_ptr.job_submit_plugins, '') + Ctl_dict['keep_alive_time'] = int16orNone(self.__Config_ptr.keepalive_time) Ctl_dict['kill_on_bad_exit'] = bool(self.__Config_ptr.kill_on_bad_exit) Ctl_dict['kill_wait'] = self.__Config_ptr.kill_wait Ctl_dict['licenses'] = __get_licenses(self.__Config_ptr.licenses) Ctl_dict['log_fmt'] = self.__Config_ptr.log_fmt - Ctl_dict['mail_domain'] = slurm.stringOrNone(self.__Config_ptr.mail_domain, '') - Ctl_dict['mail_prog'] = slurm.stringOrNone(self.__Config_ptr.mail_prog, '') + Ctl_dict['mail_domain'] = stringOrNone(self.__Config_ptr.mail_domain, '') + Ctl_dict['mail_prog'] = stringOrNone(self.__Config_ptr.mail_prog, '') Ctl_dict['max_array_sz'] = self.__Config_ptr.max_array_sz Ctl_dict['max_dbd_msgs'] = self.__Config_ptr.max_dbd_msgs Ctl_dict['max_job_cnt'] = self.__Config_ptr.max_job_cnt @@ -634,104 +685,103 @@ cdef class config: Ctl_dict['max_step_cnt'] = self.__Config_ptr.max_step_cnt Ctl_dict['max_tasks_per_node'] = self.__Config_ptr.max_tasks_per_node Ctl_dict['min_job_age'] = self.__Config_ptr.min_job_age - Ctl_dict['mpi_default'] = slurm.stringOrNone(self.__Config_ptr.mpi_default, '') - Ctl_dict['mpi_params'] = slurm.stringOrNone(self.__Config_ptr.mpi_params, '') + Ctl_dict['mpi_default'] = stringOrNone(self.__Config_ptr.mpi_default, '') + Ctl_dict['mpi_params'] = stringOrNone(self.__Config_ptr.mpi_params, '') Ctl_dict['msg_timeout'] = self.__Config_ptr.msg_timeout Ctl_dict['next_job_id'] = self.__Config_ptr.next_job_id - Ctl_dict['node_prefix'] = slurm.stringOrNone(self.__Config_ptr.node_prefix, '') - Ctl_dict['over_time_limit'] = slurm.int16orNone(self.__Config_ptr.over_time_limit) - Ctl_dict['plugindir'] = slurm.stringOrNone(self.__Config_ptr.plugindir, '') - Ctl_dict['plugstack'] = slurm.stringOrNone(self.__Config_ptr.plugstack, '') - Ctl_dict['power_parameters'] = slurm.stringOrNone(self.__Config_ptr.power_parameters, '') - Ctl_dict['power_plugin'] = slurm.stringOrNone(self.__Config_ptr.power_plugin, '') - Ctl_dict['prep_params'] = slurm.stringOrNone(self.__Config_ptr.prep_params, '') - Ctl_dict['prep_plugins'] = slurm.stringOrNone(self.__Config_ptr.prep_plugins, '') + Ctl_dict['node_prefix'] = stringOrNone(self.__Config_ptr.node_prefix, '') + Ctl_dict['over_time_limit'] = int16orNone(self.__Config_ptr.over_time_limit) + Ctl_dict['plugindir'] = stringOrNone(self.__Config_ptr.plugindir, '') + Ctl_dict['plugstack'] = stringOrNone(self.__Config_ptr.plugstack, '') + Ctl_dict['power_parameters'] = stringOrNone(self.__Config_ptr.power_parameters, '') + Ctl_dict['power_plugin'] = stringOrNone(self.__Config_ptr.power_plugin, '') + Ctl_dict['prep_params'] = stringOrNone(self.__Config_ptr.prep_params, '') + Ctl_dict['prep_plugins'] = stringOrNone(self.__Config_ptr.prep_plugins, '') config_get_preempt_mode = get_preempt_mode(self.__Config_ptr.preempt_mode) - Ctl_dict['preempt_mode'] = slurm.stringOrNone(config_get_preempt_mode, '') + Ctl_dict['preempt_mode'] = stringOrNone(config_get_preempt_mode, '') - Ctl_dict['preempt_type'] = slurm.stringOrNone(self.__Config_ptr.preempt_type, '') + Ctl_dict['preempt_type'] = stringOrNone(self.__Config_ptr.preempt_type, '') if self.__Config_ptr.preempt_exempt_time == slurm.INFINITE: Ctl_dict['preempt_exempt_time'] = "NONE" else: secs2time_str(self.__Config_ptr.preempt_exempt_time) - Ctl_dict['preempt_exempt_time'] = slurm.stringOrNone(tmp_str, '') + Ctl_dict['preempt_exempt_time'] = stringOrNone(tmp_str, '') Ctl_dict['priority_decay_hl'] = self.__Config_ptr.priority_decay_hl 
Ctl_dict['priority_calc_period'] = self.__Config_ptr.priority_calc_period Ctl_dict['priority_favor_small'] = self.__Config_ptr.priority_favor_small Ctl_dict['priority_flags'] = self.__Config_ptr.priority_flags Ctl_dict['priority_max_age'] = self.__Config_ptr.priority_max_age - Ctl_dict['priority_params'] = slurm.stringOrNone(self.__Config_ptr.priority_params, '') - Ctl_dict['priority_site_factor_params'] = slurm.stringOrNone(self.__Config_ptr.site_factor_params, '') - Ctl_dict['priority_site_factor_plugin'] = slurm.stringOrNone(self.__Config_ptr.site_factor_plugin, '') + Ctl_dict['priority_params'] = stringOrNone(self.__Config_ptr.priority_params, '') + Ctl_dict['priority_site_factor_params'] = stringOrNone(self.__Config_ptr.site_factor_params, '') + Ctl_dict['priority_site_factor_plugin'] = stringOrNone(self.__Config_ptr.site_factor_plugin, '') Ctl_dict['priority_reset_period'] = self.__Config_ptr.priority_reset_period - Ctl_dict['priority_type'] = slurm.stringOrNone(self.__Config_ptr.priority_type, '') + Ctl_dict['priority_type'] = stringOrNone(self.__Config_ptr.priority_type, '') Ctl_dict['priority_weight_age'] = self.__Config_ptr.priority_weight_age Ctl_dict['priority_weight_assoc'] = self.__Config_ptr.priority_weight_assoc Ctl_dict['priority_weight_fs'] = self.__Config_ptr.priority_weight_fs Ctl_dict['priority_weight_js'] = self.__Config_ptr.priority_weight_js Ctl_dict['priority_weight_part'] = self.__Config_ptr.priority_weight_part Ctl_dict['priority_weight_qos'] = self.__Config_ptr.priority_weight_qos - Ctl_dict['proctrack_type'] = slurm.stringOrNone(self.__Config_ptr.proctrack_type, '') + Ctl_dict['proctrack_type'] = stringOrNone(self.__Config_ptr.proctrack_type, '') Ctl_dict['private_data'] = self.__Config_ptr.private_data Ctl_dict['private_data_list'] = get_private_data_list(self.__Config_ptr.private_data) - Ctl_dict['priority_weight_tres'] = slurm.stringOrNone(self.__Config_ptr.priority_weight_tres, '') - Ctl_dict['prolog'] = slurm.stringOrNone(self.__Config_ptr.prolog, '') - Ctl_dict['prolog_epilog_timeout'] = slurm.int16orNone(self.__Config_ptr.prolog_epilog_timeout) - Ctl_dict['prolog_slurmctld'] = slurm.stringOrNone(self.__Config_ptr.prolog_slurmctld, '') + Ctl_dict['priority_weight_tres'] = stringOrNone(self.__Config_ptr.priority_weight_tres, '') + Ctl_dict['prolog'] = stringOrNone(self.__Config_ptr.prolog, '') + Ctl_dict['prolog_epilog_timeout'] = int16orNone(self.__Config_ptr.prolog_epilog_timeout) + Ctl_dict['prolog_slurmctld'] = stringOrNone(self.__Config_ptr.prolog_slurmctld, '') Ctl_dict['propagate_prio_process'] = self.__Config_ptr.propagate_prio_process Ctl_dict['prolog_flags'] = self.__Config_ptr.prolog_flags - Ctl_dict['propagate_rlimits'] = slurm.stringOrNone(self.__Config_ptr.propagate_rlimits, '') - Ctl_dict['propagate_rlimits_except'] = slurm.stringOrNone(self.__Config_ptr.propagate_rlimits_except, '') - Ctl_dict['reboot_program'] = slurm.stringOrNone(self.__Config_ptr.reboot_program, '') + Ctl_dict['propagate_rlimits'] = stringOrNone(self.__Config_ptr.propagate_rlimits, '') + Ctl_dict['propagate_rlimits_except'] = stringOrNone(self.__Config_ptr.propagate_rlimits_except, '') + Ctl_dict['reboot_program'] = stringOrNone(self.__Config_ptr.reboot_program, '') Ctl_dict['reconfig_flags'] = self.__Config_ptr.reconfig_flags - Ctl_dict['resume_fail_program'] = slurm.stringOrNone(self.__Config_ptr.resume_fail_program, '') - Ctl_dict['requeue_exit'] = slurm.stringOrNone(self.__Config_ptr.requeue_exit, '') - Ctl_dict['requeue_exit_hold'] = 
slurm.stringOrNone(self.__Config_ptr.requeue_exit_hold, '') - Ctl_dict['resume_fail_program'] = slurm.stringOrNone(self.__Config_ptr.resume_fail_program, '') - Ctl_dict['resume_program'] = slurm.stringOrNone(self.__Config_ptr.resume_program, '') + Ctl_dict['resume_fail_program'] = stringOrNone(self.__Config_ptr.resume_fail_program, '') + Ctl_dict['requeue_exit'] = stringOrNone(self.__Config_ptr.requeue_exit, '') + Ctl_dict['requeue_exit_hold'] = stringOrNone(self.__Config_ptr.requeue_exit_hold, '') + Ctl_dict['resume_fail_program'] = stringOrNone(self.__Config_ptr.resume_fail_program, '') + Ctl_dict['resume_program'] = stringOrNone(self.__Config_ptr.resume_program, '') Ctl_dict['resume_rate'] = self.__Config_ptr.resume_rate Ctl_dict['resume_timeout'] = self.__Config_ptr.resume_timeout - Ctl_dict['resv_epilog'] = slurm.stringOrNone(self.__Config_ptr.resv_epilog, '') + Ctl_dict['resv_epilog'] = stringOrNone(self.__Config_ptr.resv_epilog, '') Ctl_dict['resv_over_run'] = self.__Config_ptr.resv_over_run - Ctl_dict['resv_prolog'] = slurm.stringOrNone(self.__Config_ptr.resv_prolog, '') + Ctl_dict['resv_prolog'] = stringOrNone(self.__Config_ptr.resv_prolog, '') Ctl_dict['ret2service'] = self.__Config_ptr.ret2service - Ctl_dict['route_plugin'] = slurm.stringOrNone(self.__Config_ptr.route_plugin, '') - Ctl_dict['sched_logfile'] = slurm.stringOrNone(self.__Config_ptr.sched_logfile, '') + Ctl_dict['sched_logfile'] = stringOrNone(self.__Config_ptr.sched_logfile, '') Ctl_dict['sched_log_level'] = self.__Config_ptr.sched_log_level - Ctl_dict['sched_params'] = slurm.stringOrNone(self.__Config_ptr.sched_params, '') + Ctl_dict['sched_params'] = stringOrNone(self.__Config_ptr.sched_params, '') Ctl_dict['sched_time_slice'] = self.__Config_ptr.sched_time_slice - Ctl_dict['schedtype'] = slurm.stringOrNone(self.__Config_ptr.schedtype, '') - Ctl_dict['scron_params'] = slurm.stringOrNone(self.__Config_ptr.scron_params, '') - Ctl_dict['select_type'] = slurm.stringOrNone(self.__Config_ptr.select_type, '') + Ctl_dict['schedtype'] = stringOrNone(self.__Config_ptr.schedtype, '') + Ctl_dict['scron_params'] = stringOrNone(self.__Config_ptr.scron_params, '') + Ctl_dict['select_type'] = stringOrNone(self.__Config_ptr.select_type, '') Ctl_dict['select_type_param'] = self.__Config_ptr.select_type_param - Ctl_dict['slurm_conf'] = slurm.stringOrNone(self.__Config_ptr.slurm_conf, '') + Ctl_dict['slurm_conf'] = stringOrNone(self.__Config_ptr.slurm_conf, '') Ctl_dict['slurm_user_id'] = self.__Config_ptr.slurm_user_id - Ctl_dict['slurm_user_name'] = slurm.stringOrNone(self.__Config_ptr.slurm_user_name, '') + Ctl_dict['slurm_user_name'] = stringOrNone(self.__Config_ptr.slurm_user_name, '') Ctl_dict['slurmd_user_id'] = self.__Config_ptr.slurmd_user_id - Ctl_dict['slurmd_user_name'] = slurm.stringOrNone(self.__Config_ptr.slurmd_user_name, '') - Ctl_dict['slurmctld_addr'] = slurm.stringOrNone(self.__Config_ptr.slurmctld_addr, '') + Ctl_dict['slurmd_user_name'] = stringOrNone(self.__Config_ptr.slurmd_user_name, '') + Ctl_dict['slurmctld_addr'] = stringOrNone(self.__Config_ptr.slurmctld_addr, '') Ctl_dict['slurmctld_debug'] = self.__Config_ptr.slurmctld_debug # TODO: slurmctld_host - Ctl_dict['slurmctld_logfile'] = slurm.stringOrNone(self.__Config_ptr.slurmctld_logfile, '') - Ctl_dict['slurmctld_pidfile'] = slurm.stringOrNone(self.__Config_ptr.slurmctld_pidfile, '') + Ctl_dict['slurmctld_logfile'] = stringOrNone(self.__Config_ptr.slurmctld_logfile, '') + Ctl_dict['slurmctld_pidfile'] = stringOrNone(self.__Config_ptr.slurmctld_pidfile, 
'') Ctl_dict['slurmctld_port'] = self.__Config_ptr.slurmctld_port Ctl_dict['slurmctld_port_count'] = self.__Config_ptr.slurmctld_port_count - Ctl_dict['slurmctld_primary_off_prog'] = slurm.stringOrNone(self.__Config_ptr.slurmctld_primary_off_prog, '') - Ctl_dict['slurmctld_primary_on_prog'] = slurm.stringOrNone(self.__Config_ptr.slurmctld_primary_on_prog, '') + Ctl_dict['slurmctld_primary_off_prog'] = stringOrNone(self.__Config_ptr.slurmctld_primary_off_prog, '') + Ctl_dict['slurmctld_primary_on_prog'] = stringOrNone(self.__Config_ptr.slurmctld_primary_on_prog, '') Ctl_dict['slurmctld_syslog_debug'] = self.__Config_ptr.slurmctld_syslog_debug Ctl_dict['slurmctld_timeout'] = self.__Config_ptr.slurmctld_timeout Ctl_dict['slurmd_debug'] = self.__Config_ptr.slurmd_debug - Ctl_dict['slurmd_logfile'] = slurm.stringOrNone(self.__Config_ptr.slurmd_logfile, '') - Ctl_dict['slurmd_parameters'] = slurm.stringOrNone(self.__Config_ptr.slurmd_params, '') - Ctl_dict['slurmd_pidfile'] = slurm.stringOrNone(self.__Config_ptr.slurmd_pidfile, '') + Ctl_dict['slurmd_logfile'] = stringOrNone(self.__Config_ptr.slurmd_logfile, '') + Ctl_dict['slurmd_parameters'] = stringOrNone(self.__Config_ptr.slurmd_params, '') + Ctl_dict['slurmd_pidfile'] = stringOrNone(self.__Config_ptr.slurmd_pidfile, '') Ctl_dict['slurmd_port'] = self.__Config_ptr.slurmd_port - Ctl_dict['slurmd_spooldir'] = slurm.stringOrNone(self.__Config_ptr.slurmd_spooldir, '') + Ctl_dict['slurmd_spooldir'] = stringOrNone(self.__Config_ptr.slurmd_spooldir, '') Ctl_dict['slurmd_syslog_debug'] = self.__Config_ptr.slurmd_syslog_debug Ctl_dict['slurmd_timeout'] = self.__Config_ptr.slurmd_timeout - Ctl_dict['srun_epilog'] = slurm.stringOrNone(self.__Config_ptr.srun_epilog, '') + Ctl_dict['srun_epilog'] = stringOrNone(self.__Config_ptr.srun_epilog, '') a = [0,0] if self.__Config_ptr.srun_port_range != NULL: @@ -739,31 +789,31 @@ cdef class config: a[1] = self.__Config_ptr.srun_port_range[1] Ctl_dict['srun_port_range'] = tuple(a) - Ctl_dict['srun_prolog'] = slurm.stringOrNone(self.__Config_ptr.srun_prolog, '') - Ctl_dict['state_save_location'] = slurm.stringOrNone(self.__Config_ptr.state_save_location, '') - Ctl_dict['suspend_exc_nodes'] = slurm.listOrNone(self.__Config_ptr.suspend_exc_nodes, ',') - Ctl_dict['suspend_exc_parts'] = slurm.listOrNone(self.__Config_ptr.suspend_exc_parts, ',') - Ctl_dict['suspend_program'] = slurm.stringOrNone(self.__Config_ptr.suspend_program, '') + Ctl_dict['srun_prolog'] = stringOrNone(self.__Config_ptr.srun_prolog, '') + Ctl_dict['state_save_location'] = stringOrNone(self.__Config_ptr.state_save_location, '') + Ctl_dict['suspend_exc_nodes'] = listOrNone(self.__Config_ptr.suspend_exc_nodes, ',') + Ctl_dict['suspend_exc_parts'] = listOrNone(self.__Config_ptr.suspend_exc_parts, ',') + Ctl_dict['suspend_program'] = stringOrNone(self.__Config_ptr.suspend_program, '') Ctl_dict['suspend_rate'] = self.__Config_ptr.suspend_rate Ctl_dict['suspend_time'] = self.__Config_ptr.suspend_time Ctl_dict['suspend_timeout'] = self.__Config_ptr.suspend_timeout - Ctl_dict['switch_type'] = slurm.stringOrNone(self.__Config_ptr.switch_type, '') - Ctl_dict['switch_param'] = slurm.stringOrNone(self.__Config_ptr.switch_param, '') - Ctl_dict['task_epilog'] = slurm.stringOrNone(self.__Config_ptr.task_epilog, '') - Ctl_dict['task_plugin'] = slurm.stringOrNone(self.__Config_ptr.task_plugin, '') + Ctl_dict['switch_type'] = stringOrNone(self.__Config_ptr.switch_type, '') + Ctl_dict['switch_param'] = stringOrNone(self.__Config_ptr.switch_param, '') + 
Ctl_dict['task_epilog'] = stringOrNone(self.__Config_ptr.task_epilog, '') + Ctl_dict['task_plugin'] = stringOrNone(self.__Config_ptr.task_plugin, '') Ctl_dict['task_plugin_param'] = self.__Config_ptr.task_plugin_param - Ctl_dict['task_prolog'] = slurm.stringOrNone(self.__Config_ptr.task_prolog, '') + Ctl_dict['task_prolog'] = stringOrNone(self.__Config_ptr.task_prolog, '') Ctl_dict['tcp_timeout'] = self.__Config_ptr.tcp_timeout - Ctl_dict['tmp_fs'] = slurm.stringOrNone(self.__Config_ptr.tmp_fs, '') - Ctl_dict['topology_param'] = slurm.stringOrNone(self.__Config_ptr.topology_param, '') - Ctl_dict['topology_plugin'] = slurm.stringOrNone(self.__Config_ptr.topology_plugin, '') + Ctl_dict['tmp_fs'] = stringOrNone(self.__Config_ptr.tmp_fs, '') + Ctl_dict['topology_param'] = stringOrNone(self.__Config_ptr.topology_param, '') + Ctl_dict['topology_plugin'] = stringOrNone(self.__Config_ptr.topology_plugin, '') Ctl_dict['tree_width'] = self.__Config_ptr.tree_width - Ctl_dict['unkillable_program'] = slurm.stringOrNone(self.__Config_ptr.unkillable_program, '') + Ctl_dict['unkillable_program'] = stringOrNone(self.__Config_ptr.unkillable_program, '') Ctl_dict['unkillable_timeout'] = self.__Config_ptr.unkillable_timeout - Ctl_dict['version'] = slurm.stringOrNone(self.__Config_ptr.version, '') + Ctl_dict['version'] = stringOrNone(self.__Config_ptr.version, '') Ctl_dict['vsize_factor'] = self.__Config_ptr.vsize_factor Ctl_dict['wait_time'] = self.__Config_ptr.wait_time - Ctl_dict['x11_params'] = slurm.stringOrNone(self.__Config_ptr.x11_params, '') + Ctl_dict['x11_params'] = stringOrNone(self.__Config_ptr.x11_params, '') # # Get key_pairs from Opaque data structure @@ -836,14 +886,14 @@ cdef class partition: all_partitions = [] for record in self._Partition_ptr.partition_array[:self._Partition_ptr.record_count]: - all_partitions.append(slurm.stringOrNone(record.name, '')) + all_partitions.append(stringOrNone(record.name, '')) slurm.slurm_free_partition_info_msg(self._Partition_ptr) self._Partition_ptr = NULL return all_partitions else: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) def find_id(self, partID): """Get partition information for a given partition. @@ -900,7 +950,7 @@ cdef class partition: self._Partition_ptr = NULL else: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) def delete(self, PartID): """Delete a give slurm partition. 
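Every wrapper touched in this diff reports failures the same way: on a non-zero return code it fetches the Slurm errno and raises `ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError)`. A minimal sketch of handling that pattern from the caller's side (the partition name here is illustrative):

```python
import pyslurm

try:
    pyslurm.partition().delete("debug_part")  # "debug_part" is a made-up name
except ValueError as err:
    # err.args carries (human-readable slurm_strerror message, slurm errno)
    message, errno = err.args
    print(f"partition delete failed: {message} (errno {errno})")
```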
@@ -924,7 +974,7 @@ cdef class partition: if errCode != slurm.SLURM_SUCCESS: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -950,33 +1000,33 @@ cdef class partition: for record in self._Partition_ptr.partition_array[:self._Partition_ptr.record_count]: Part_dict = {} - name = slurm.stringOrNone(record.name, '') + name = stringOrNone(record.name, '') if record.allow_accounts or not record.deny_accounts: if record.allow_accounts == NULL or \ record.allow_accounts[0] == "\0".encode("UTF-8"): Part_dict['allow_accounts'] = "ALL" else: - Part_dict['allow_accounts'] = slurm.listOrNone( + Part_dict['allow_accounts'] = listOrNone( record.allow_accounts, ',') Part_dict['deny_accounts'] = None else: Part_dict['allow_accounts'] = None - Part_dict['deny_accounts'] = slurm.listOrNone( + Part_dict['deny_accounts'] = listOrNone( record.deny_accounts, ',') if record.allow_alloc_nodes == NULL: Part_dict['allow_alloc_nodes'] = "ALL" else: - Part_dict['allow_alloc_nodes'] = slurm.listOrNone( + Part_dict['allow_alloc_nodes'] = listOrNone( record.allow_alloc_nodes, ',') if record.allow_groups == NULL or \ record.allow_groups[0] == "\0".encode("UTF-8"): Part_dict['allow_groups'] = "ALL" else: - Part_dict['allow_groups'] = slurm.listOrNone( + Part_dict['allow_groups'] = listOrNone( record.allow_groups, ',') if record.allow_qos or not record.deny_qos: @@ -984,19 +1034,19 @@ cdef class partition: record.allow_qos[0] == "\0".encode("UTF-8"): Part_dict['allow_qos'] = "ALL" else: - Part_dict['allow_qos'] = slurm.listOrNone( + Part_dict['allow_qos'] = listOrNone( record.allow_qos, ',') Part_dict['deny_qos'] = None else: Part_dict['allow_qos'] = None - Part_dict['deny_qos'] = slurm.listOrNone(record.allow_qos, ',') + Part_dict['deny_qos'] = listOrNone(record.allow_qos, ',') if record.alternate != NULL: - Part_dict['alternate'] = slurm.stringOrNone(record.alternate, '') + Part_dict['alternate'] = stringOrNone(record.alternate, '') else: Part_dict['alternate'] = None - Part_dict['billing_weights_str'] = slurm.stringOrNone( + Part_dict['billing_weights_str'] = stringOrNone( record.billing_weights_str, '') #TODO: cpu_bind @@ -1066,8 +1116,8 @@ cdef class partition: Part_dict['max_time_str'] = secs2time_str(record.max_time * 60) Part_dict['min_nodes'] = record.min_nodes - Part_dict['name'] = slurm.stringOrNone(record.name, '') - Part_dict['nodes'] = slurm.stringOrNone(record.nodes, '') + Part_dict['name'] = stringOrNone(record.name, '') + Part_dict['nodes'] = stringOrNone(record.nodes, '') if record.over_time_limit == slurm.NO_VAL16: Part_dict['over_time_limit'] = "NONE" @@ -1083,19 +1133,19 @@ cdef class partition: preempt_mode = record.preempt_mode if preempt_mode == slurm.NO_VAL16: - Part_dict['preempt_mode'] = slurm.stringOrNone( + Part_dict['preempt_mode'] = stringOrNone( slurm.slurm_preempt_mode_string(preempt_mode), '' ) Part_dict['priority_job_factor'] = record.priority_job_factor Part_dict['priority_tier'] = record.priority_tier - Part_dict['qos_char'] = slurm.stringOrNone(record.qos_char, '') + Part_dict['qos_char'] = stringOrNone(record.qos_char, '') Part_dict['resume_timeout'] = record.resume_timeout Part_dict['state'] = get_partition_state(record.state_up) Part_dict['suspend_time'] = record.suspend_time Part_dict['suspend_timout'] = record.suspend_timeout Part_dict['total_cpus'] = record.total_cpus Part_dict['total_nodes'] = record.total_nodes - 
Part_dict['tres_fmt_str'] = slurm.stringOrNone(record.tres_fmt_str, '') + Part_dict['tres_fmt_str'] = stringOrNone(record.tres_fmt_str, '') self._PartDict["%s" % name] = Part_dict slurm.slurm_free_partition_info_msg(self._Partition_ptr) @@ -1103,7 +1153,7 @@ cdef class partition: return self._PartDict else: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) def update(self, dict Partition_dict): @@ -1281,7 +1331,7 @@ def slurm_delete_partition(PartID): if errCode != slurm.SLURM_SUCCESS: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1296,7 +1346,7 @@ cpdef int slurm_ping(int Controller=0) except? -1: Args: Controller (int, optional): 0 for primary (Default=0), 1 for backup, 2 - for backup2, ... + for backup2, ... Returns: 0 for success or slurm error code @@ -1306,7 +1356,7 @@ cpdef int slurm_ping(int Controller=0) except? -1: if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1322,7 +1372,7 @@ cpdef int slurm_reconfigure() except? -1: if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1337,7 +1387,7 @@ cpdef int slurm_shutdown(uint16_t Options=0) except? -1: 0 - All slurm daemons (default) 1 - slurmctld generates a core file 2 - slurmctld is shutdown (no core file) - + Returns: int: 0 for success or slurm error code """ @@ -1346,7 +1396,7 @@ cpdef int slurm_shutdown(uint16_t Options=0) except? -1: if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1370,7 +1420,7 @@ cpdef int slurm_set_debug_level(uint32_t DebugLevel=0) except? -1: Args: DebugLevel (int, optional): The debug level. Possible values are from - 0 to 6. + 0 to 6. Returns: int: 0 for success, -1 for error and set slurm error number @@ -1380,7 +1430,7 @@ cpdef int slurm_set_debug_level(uint32_t DebugLevel=0) except? -1: if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1402,7 +1452,7 @@ cpdef int slurm_set_debugflags(uint32_t debug_flags_plus=0, if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1421,7 +1471,7 @@ cpdef int slurm_set_schedlog_level(uint32_t Enable=0) except? -1: if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1445,7 +1495,7 @@ cpdef int slurm_suspend(uint32_t JobID=0) except? 
-1: if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1464,7 +1514,7 @@ cpdef int slurm_resume(uint32_t JobID=0) except? -1: if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1483,7 +1533,7 @@ cpdef int slurm_requeue(uint32_t JobID=0, uint32_t State=0) except? -1: if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1502,7 +1552,7 @@ cpdef long slurm_get_rem_time(uint32_t JobID=0) except? -1: if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1522,7 +1572,7 @@ cpdef time_t slurm_get_end_time(uint32_t JobID=0) except? -1: if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return EndTime @@ -1557,7 +1607,7 @@ cpdef int slurm_signal_job(uint32_t JobID=0, uint16_t Signal=0) except? -1: if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1577,14 +1627,14 @@ cpdef int slurm_signal_job_step(uint32_t JobID=0, uint32_t JobStep=0, Signal (int, optional): Signal to send. Returns: - int: 0 for success or -1 for error and set the slurm errno. + int: 0 for success or -1 for error and set the slurm errno. """ cdef int apiError = 0 cdef int errCode = slurm.slurm_signal_job_step(JobID, JobStep, Signal) if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1606,7 +1656,7 @@ cpdef int slurm_kill_job(uint32_t JobID=0, uint16_t Signal=0, if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1624,11 +1674,11 @@ cpdef int slurm_kill_job_step(uint32_t JobID=0, uint32_t JobStep=0, int: 0 for success or -1 for error, and slurm errno is set. 
""" cdef int apiError = 0 - cdef int errCode = slurm.slurm_kill_job_step(JobID, JobStep, Signal) + cdef int errCode = slurm.slurm_kill_job_step(JobID, JobStep, Signal, 0) if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1651,7 +1701,7 @@ cpdef int slurm_kill_job2(const char *JobID='', uint16_t Signal=0, if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1671,7 +1721,7 @@ cpdef int slurm_complete_job(uint32_t JobID=0, uint32_t JobCode=0) except? -1: if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1691,7 +1741,7 @@ cpdef int slurm_notify_job(uint32_t JobID=0, char* Msg='') except? -1: if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1711,7 +1761,7 @@ cpdef int slurm_terminate_job_step(uint32_t JobID=0, uint32_t JobStep=0) except? if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -1779,7 +1829,7 @@ cdef class job: return all_jobs else: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) def find(self, name='', val=''): """Search for a property and associated value in the retrieved job data. @@ -1834,11 +1884,11 @@ cdef class job: if rc != slurm.SLURM_SUCCESS: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) def find_id(self, jobid): """Retrieve job ID data. - + This method accepts both string and integer formats of the jobid. This works for single jobs and job arrays. It uses the internal helper _load_single_job to do slurm_load_job. If the job corresponding @@ -1855,7 +1905,7 @@ cdef class job: def find_user(self, user): """Retrieve a user's job data. - + This method calls slurm_load_job_user to get all job_table records associated with a specific user. @@ -1884,7 +1934,7 @@ cdef class job: return self.get_job_ptr() else: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) def get(self): """Get all slurm jobs information. 
@@ -1894,7 +1944,7 @@ cdef class job: Returns: (dict): Data where key is the job name, each entry contains a - dictionary of job attributes + dictionary of job attributes """ cdef: int apiError @@ -1906,7 +1956,7 @@ cdef class job: return self.get_job_ptr() else: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) cdef dict get_job_ptr(self): """Convert all job arrays in buffer to dictionary. @@ -1934,21 +1984,21 @@ cdef class job: self._record = &self._job_ptr.job_array[i] Job_dict = {} - Job_dict['account'] = slurm.stringOrNone(self._record.account, '') + Job_dict['account'] = stringOrNone(self._record.account, '') slurm.slurm_make_time_str(&self._record.accrue_time, time_str, sizeof(time_str)) - Job_dict['accrue_time'] = slurm.stringOrNone(time_str, '') + Job_dict['accrue_time'] = stringOrNone(time_str, '') - Job_dict['admin_comment'] = slurm.stringOrNone(self._record.admin_comment, '') - Job_dict['alloc_node'] = slurm.stringOrNone(self._record.alloc_node, '') + Job_dict['admin_comment'] = stringOrNone(self._record.admin_comment, '') + Job_dict['alloc_node'] = stringOrNone(self._record.alloc_node, '') Job_dict['alloc_sid'] = self._record.alloc_sid if self._record.array_job_id: if self._record.array_task_str: Job_dict['array_job_id'] = self._record.array_job_id Job_dict['array_task_id'] = None - Job_dict['array_task_str'] = slurm.stringOrNone( + Job_dict['array_task_str'] = stringOrNone( self._record.array_task_str, '' ) else: @@ -1964,7 +2014,7 @@ cdef class job: if self._record.het_job_id: Job_dict['het_job_id'] = self._record.het_job_id - Job_dict['het_job_id_set'] = slurm.stringOrNone( + Job_dict['het_job_id_set'] = stringOrNone( self._record.het_job_id_set, '' ) Job_dict['het_job_offset'] = self._record.het_job_offset @@ -1980,8 +2030,8 @@ cdef class job: Job_dict['assoc_id'] = self._record.assoc_id Job_dict['batch_flag'] = self._record.batch_flag - Job_dict['batch_features'] = slurm.stringOrNone(self._record.batch_features, '') - Job_dict['batch_host'] = slurm.stringOrNone(self._record.batch_host, '') + Job_dict['batch_features'] = stringOrNone(self._record.batch_features, '') + Job_dict['batch_host'] = stringOrNone(self._record.batch_host, '') if self._record.billable_tres == NO_VAL_DOUBLE: Job_dict['billable_tres'] = None @@ -1990,32 +2040,32 @@ cdef class job: Job_dict['bitflags'] = self._record.bitflags Job_dict['boards_per_node'] = self._record.boards_per_node - Job_dict['burst_buffer'] = slurm.stringOrNone(self._record.burst_buffer, '') - Job_dict['burst_buffer_state'] = slurm.stringOrNone( + Job_dict['burst_buffer'] = stringOrNone(self._record.burst_buffer, '') + Job_dict['burst_buffer_state'] = stringOrNone( self._record.burst_buffer_state, '' ) if self._record.cluster_features: - Job_dict['cluster_features'] = slurm.stringOrNone( + Job_dict['cluster_features'] = stringOrNone( self._record.cluster_features, '' ) - Job_dict['command'] = slurm.stringOrNone(self._record.command, '') - Job_dict['comment'] = slurm.stringOrNone(self._record.comment, '') + Job_dict['command'] = stringOrNone(self._record.command, '') + Job_dict['comment'] = stringOrNone(self._record.comment, '') Job_dict['contiguous'] = bool(self._record.contiguous) - Job_dict['core_spec'] = slurm.int16orNone(self._record.core_spec) - Job_dict['cores_per_socket'] = slurm.int16orNone(self._record.cores_per_socket) + Job_dict['core_spec'] = 
int16orNone(self._record.core_spec) + Job_dict['cores_per_socket'] = int16orNone(self._record.cores_per_socket) if self._record.cpus_per_task == slurm.NO_VAL16: Job_dict['cpus_per_task'] = "N/A" else: Job_dict['cpus_per_task'] = self._record.cpus_per_task - Job_dict['cpus_per_tres'] = slurm.stringOrNone(self._record.cpus_per_tres, '') - Job_dict['cpu_freq_gov'] = slurm.int32orNone(self._record.cpu_freq_gov) - Job_dict['cpu_freq_max'] = slurm.int32orNone(self._record.cpu_freq_max) - Job_dict['cpu_freq_min'] = slurm.int32orNone(self._record.cpu_freq_min) - Job_dict['dependency'] = slurm.stringOrNone(self._record.dependency, '') + Job_dict['cpus_per_tres'] = stringOrNone(self._record.cpus_per_tres, '') + Job_dict['cpu_freq_gov'] = int32orNone(self._record.cpu_freq_gov) + Job_dict['cpu_freq_max'] = int32orNone(self._record.cpu_freq_max) + Job_dict['cpu_freq_min'] = int32orNone(self._record.cpu_freq_min) + Job_dict['dependency'] = stringOrNone(self._record.dependency, '') if WIFSIGNALED(self._record.derived_ec): term_sig = WTERMSIG(self._record.derived_ec) @@ -2026,7 +2076,7 @@ cdef class job: Job_dict['eligible_time'] = self._record.eligible_time Job_dict['end_time'] = self._record.end_time - Job_dict['exc_nodes'] = slurm.listOrNone(self._record.exc_nodes, ',') + Job_dict['exc_nodes'] = listOrNone(self._record.exc_nodes, ',') if WIFSIGNALED(self._record.exit_code): term_sig = WTERMSIG(self._record.exit_code) @@ -2035,16 +2085,16 @@ cdef class job: Job_dict['exit_code'] = str(exit_status) + ":" + str(term_sig) - Job_dict['features'] = slurm.listOrNone(self._record.features, ',') + Job_dict['features'] = listOrNone(self._record.features, ',') if self._record.fed_siblings_active or self._record.fed_siblings_viable: - Job_dict['fed_origin'] = slurm.stringOrNone( + Job_dict['fed_origin'] = stringOrNone( self._record.fed_origin_str, '' ) - Job_dict['fed_viable_siblings'] = slurm.stringOrNone( + Job_dict['fed_viable_siblings'] = stringOrNone( self._record.fed_siblings_viable_str, '' ) - Job_dict['fed_active_siblings'] = slurm.stringOrNone( + Job_dict['fed_active_siblings'] = stringOrNone( self._record.fed_siblings_active_str, '' ) @@ -2068,32 +2118,32 @@ cdef class job: # JOB RESOURCES HERE Job_dict['job_id'] = self._record.job_id - Job_dict['job_state'] = slurm.stringOrNone( + Job_dict['job_state'] = stringOrNone( slurm.slurm_job_state_string(self._record.job_state), '' ) slurm.slurm_make_time_str(&self._record.last_sched_eval, time_str, sizeof(time_str)) - Job_dict['last_sched_eval'] = slurm.stringOrNone(time_str, '') + Job_dict['last_sched_eval'] = stringOrNone(time_str, '') Job_dict['licenses'] = __get_licenses(self._record.licenses) Job_dict['max_cpus'] = self._record.max_cpus Job_dict['max_nodes'] = self._record.max_nodes - Job_dict['mem_per_tres'] = slurm.stringOrNone(self._record.mem_per_tres, '') - Job_dict['name'] = slurm.stringOrNone(self._record.name, '') - Job_dict['network'] = slurm.stringOrNone(self._record.network, '') - Job_dict['nodes'] = slurm.stringOrNone(self._record.nodes, '') + Job_dict['mem_per_tres'] = stringOrNone(self._record.mem_per_tres, '') + Job_dict['name'] = stringOrNone(self._record.name, '') + Job_dict['network'] = stringOrNone(self._record.network, '') + Job_dict['nodes'] = stringOrNone(self._record.nodes, '') Job_dict['nice'] = (self._record.nice) - NICE_OFFSET - Job_dict['ntasks_per_core'] = slurm.int16orUnlimited(self._record.ntasks_per_core, "int") - Job_dict['ntasks_per_core_str'] = slurm.int16orUnlimited(self._record.ntasks_per_core, "string") + 
Job_dict['ntasks_per_core'] = int16orUnlimited(self._record.ntasks_per_core, "int") + Job_dict['ntasks_per_core_str'] = int16orUnlimited(self._record.ntasks_per_core, "string") Job_dict['ntasks_per_node'] = self._record.ntasks_per_node - Job_dict['ntasks_per_socket'] = slurm.int16orUnlimited(self._record.ntasks_per_socket, "int") - Job_dict['ntasks_per_socket_str'] = slurm.int16orUnlimited(self._record.ntasks_per_socket, "string") + Job_dict['ntasks_per_socket'] = int16orUnlimited(self._record.ntasks_per_socket, "int") + Job_dict['ntasks_per_socket_str'] = int16orUnlimited(self._record.ntasks_per_socket, "string") Job_dict['ntasks_per_board'] = self._record.ntasks_per_board Job_dict['num_cpus'] = self._record.num_cpus Job_dict['num_nodes'] = self._record.num_nodes Job_dict['num_tasks'] = self._record.num_tasks - Job_dict['partition'] = slurm.stringOrNone(self._record.partition, '') + Job_dict['partition'] = stringOrNone(self._record.partition, '') if self._record.pn_min_memory & slurm.MEM_PER_CPU: self._record.pn_min_memory &= (~slurm.MEM_PER_CPU) @@ -2116,24 +2166,24 @@ cdef class job: slurm.slurm_make_time_str( &self._record.preemptable_time, time_str, sizeof(time_str) ) - Job_dict['preempt_eligible_time'] = slurm.stringOrNone(time_str, '') + Job_dict['preempt_eligible_time'] = stringOrNone(time_str, '') if self._record.preempt_time == 0: Job_dict['preempt_time'] = "None" else: slurm.slurm_make_time_str(&self._record.preempt_time, time_str, sizeof(time_str)) - Job_dict['preempt_time'] = slurm.stringOrNone(time_str, '') + Job_dict['preempt_time'] = stringOrNone(time_str, '') Job_dict['priority'] = self._record.priority Job_dict['profile'] = self._record.profile - Job_dict['qos'] = slurm.stringOrNone(self._record.qos, '') + Job_dict['qos'] = stringOrNone(self._record.qos, '') Job_dict['reboot'] = self._record.reboot - Job_dict['req_nodes'] = slurm.listOrNone(self._record.req_nodes, ',') + Job_dict['req_nodes'] = listOrNone(self._record.req_nodes, ',') Job_dict['req_switch'] = self._record.req_switch Job_dict['requeue'] = bool(self._record.requeue) Job_dict['resize_time'] = self._record.resize_time Job_dict['restart_cnt'] = self._record.restart_cnt - Job_dict['resv_name'] = slurm.stringOrNone(self._record.resv_name, '') + Job_dict['resv_name'] = stringOrNone(self._record.resv_name, '') if IS_JOB_PENDING(self._job_ptr.job_array[i]): run_time = 0 @@ -2152,8 +2202,8 @@ cdef class job: Job_dict['run_time'] = run_time Job_dict['run_time_str'] = secs2time_str(run_time) - Job_dict['sched_nodes'] = slurm.stringOrNone(self._record.sched_nodes, '') - Job_dict['selinux_context'] = slurm.stringOrNone(self._record.selinux_context, '') + Job_dict['sched_nodes'] = stringOrNone(self._record.sched_nodes, '') + Job_dict['selinux_context'] = stringOrNone(self._record.selinux_context, '') if self._record.shared == 0: Job_dict['shared'] = "0" @@ -2166,13 +2216,13 @@ cdef class job: Job_dict['show_flags'] = self._record.show_flags Job_dict['sockets_per_board'] = self._record.sockets_per_board - Job_dict['sockets_per_node'] = slurm.int16orNone(self._record.sockets_per_node) + Job_dict['sockets_per_node'] = int16orNone(self._record.sockets_per_node) Job_dict['start_time'] = self._record.start_time if self._record.state_desc: Job_dict['state_reason'] = self._record.state_desc.decode("UTF-8").replace(" ", "_") else: - Job_dict['state_reason'] = slurm.stringOrNone( + Job_dict['state_reason'] = stringOrNone( slurm.slurm_job_reason_string( self._record.state_reason ), '' @@ -2180,13 +2230,13 @@ cdef class job: if 
self._record.batch_flag: slurm.slurm_get_job_stderr(tmp_line, sizeof(tmp_line), self._record) - Job_dict['std_err'] = slurm.stringOrNone(tmp_line, '') + Job_dict['std_err'] = stringOrNone(tmp_line, '') slurm.slurm_get_job_stdin(tmp_line, sizeof(tmp_line), self._record) - Job_dict['std_in'] = slurm.stringOrNone(tmp_line, '') + Job_dict['std_in'] = stringOrNone(tmp_line, '') slurm.slurm_get_job_stdout(tmp_line, sizeof(tmp_line), self._record) - Job_dict['std_out'] = slurm.stringOrNone(tmp_line, '') + Job_dict['std_out'] = stringOrNone(tmp_line, '') else: Job_dict['std_err'] = None Job_dict['std_in'] = None @@ -2194,7 +2244,7 @@ cdef class job: Job_dict['submit_time'] = self._record.submit_time Job_dict['suspend_time'] = self._record.suspend_time - Job_dict['system_comment'] = slurm.stringOrNone(self._record.system_comment, '') + Job_dict['system_comment'] = stringOrNone(self._record.system_comment, '') if self._record.time_limit == slurm.NO_VAL: Job_dict['time_limit'] = "Partition_Limit" @@ -2208,26 +2258,26 @@ cdef class job: self._record.time_limit) Job_dict['time_min'] = self._record.time_min - Job_dict['threads_per_core'] = slurm.int16orNone(self._record.threads_per_core) - Job_dict['tres_alloc_str'] = slurm.stringOrNone(self._record.tres_alloc_str, '') - Job_dict['tres_bind'] = slurm.stringOrNone(self._record.tres_bind, '') - Job_dict['tres_freq'] = slurm.stringOrNone(self._record.tres_freq, '') - Job_dict['tres_per_job'] = slurm.stringOrNone(self._record.tres_per_job, '') - Job_dict['tres_per_node'] = slurm.stringOrNone(self._record.tres_per_node, '') - Job_dict['tres_per_socket'] = slurm.stringOrNone(self._record.tres_per_socket, '') - Job_dict['tres_per_task'] = slurm.stringOrNone(self._record.tres_per_task, '') - Job_dict['tres_req_str'] = slurm.stringOrNone(self._record.tres_req_str, '') + Job_dict['threads_per_core'] = int16orNone(self._record.threads_per_core) + Job_dict['tres_alloc_str'] = stringOrNone(self._record.tres_alloc_str, '') + Job_dict['tres_bind'] = stringOrNone(self._record.tres_bind, '') + Job_dict['tres_freq'] = stringOrNone(self._record.tres_freq, '') + Job_dict['tres_per_job'] = stringOrNone(self._record.tres_per_job, '') + Job_dict['tres_per_node'] = stringOrNone(self._record.tres_per_node, '') + Job_dict['tres_per_socket'] = stringOrNone(self._record.tres_per_socket, '') + Job_dict['tres_per_task'] = stringOrNone(self._record.tres_per_task, '') + Job_dict['tres_req_str'] = stringOrNone(self._record.tres_req_str, '') Job_dict['user_id'] = self._record.user_id Job_dict['wait4switch'] = self._record.wait4switch - Job_dict['wckey'] = slurm.stringOrNone(self._record.wckey, '') - Job_dict['work_dir'] = slurm.stringOrNone(self._record.work_dir, '') + Job_dict['wckey'] = stringOrNone(self._record.wckey, '') + Job_dict['work_dir'] = stringOrNone(self._record.work_dir, '') Job_dict['cpus_allocated'] = {} Job_dict['cpus_alloc_layout'] = {} if self._record.nodes is not NULL: hl = hostlist() - _nodes = slurm.stringOrNone(self._record.nodes, '') + _nodes = stringOrNone(self._record.nodes, '') hl.create(_nodes) host_list = hl.get_list() if host_list: @@ -2294,7 +2344,7 @@ cdef class job: try: error = slurm.slurm_job_cpus_allocated_str_on_node(cpus, cpus_len, job_resrcs_ptr, nodeName) if error == 0: - cpus_list = self.__unrange(slurm.stringOrNone(cpus, '')) + cpus_list = self.__unrange(stringOrNone(cpus, '')) finally: free(cpus) @@ -2349,11 +2399,11 @@ cdef class job: self._job_ptr = NULL else: apiError = slurm.slurm_get_errno() - raise 
ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) def slurm_job_batch_script(self, jobid): """Return the contents of the batch-script for a Job. - + The string returned also includes all the "\\n" characters (new-line). Args: @@ -2793,7 +2843,7 @@ cdef class job: names. Args: - job_opts (dict): Job information. + job_opts (dict): Job information. Returns: (int): The job id of the submitted job. @@ -3007,7 +3057,7 @@ def slurm_pid2jobid(uint32_t JobPID=0): if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode, JobID @@ -3132,7 +3182,7 @@ def slurm_seterrno(int Errno=0): def slurm_perror(char* Msg=''): """Print to standard error the supplied header. - + Header is followed by a colon, followed by a text description of the last Slurm error code generated. @@ -3192,13 +3242,13 @@ cdef class node: if rc == slurm.SLURM_SUCCESS: all_nodes = [] for record in self._Node_ptr.node_array[:self._Node_ptr.record_count]: - all_nodes.append(slurm.stringOrNone(record.name, '')) + all_nodes.append(stringOrNone(record.name, '')) slurm.slurm_free_node_info_msg(self._Node_ptr) self._Node_ptr = NULL return all_nodes else: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) def find_id(self, nodeID): """Get node information for a given node. @@ -3258,7 +3308,7 @@ cdef class node: if rc != slurm.SLURM_SUCCESS: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) if slurm.slurm_load_ctl_conf(NULL, &slurm_ctl_conf_ptr) != slurm.SLURM_SUCCESS: raise ValueError("Cannot load slurmctld conf file") @@ -3300,7 +3350,7 @@ cdef class node: total_used = record.cpus - Host_dict['arch'] = slurm.stringOrNone(record.arch, '') + Host_dict['arch'] = stringOrNone(record.arch, '') Host_dict['boards'] = record.boards Host_dict['boot_time'] = record.boot_time Host_dict['cores'] = record.cores @@ -3308,49 +3358,49 @@ cdef class node: Host_dict['cores_per_socket'] = record.cores # TODO: cpu_alloc, cpu_tot Host_dict['cpus'] = record.cpus - + # FIXME #if record.cpu_bind: # slurm.slurm_sprint_cpu_bind_type(tmp_str, record.cpu_bind) - # Host_dict['cpu_bind'] = slurm.stringOrNone(tmp_str, '') - - Host_dict['cpu_load'] = slurm.int32orNone(record.cpu_load) - Host_dict['cpu_spec_list'] = slurm.listOrNone(record.cpu_spec_list, '') - Host_dict['extra'] = slurm.stringOrNone(record.extra, '') - Host_dict['features'] = slurm.listOrNone(record.features, '') - Host_dict['features_active'] = slurm.listOrNone(record.features_act, '') - Host_dict['free_mem'] = slurm.int64orNone(record.free_mem) - Host_dict['gres'] = slurm.listOrNone(record.gres, ',') - Host_dict['gres_drain'] = slurm.listOrNone(record.gres_drain, '') + # Host_dict['cpu_bind'] = stringOrNone(tmp_str, '') + + Host_dict['cpu_load'] = int32orNone(record.cpu_load) + Host_dict['cpu_spec_list'] = listOrNone(record.cpu_spec_list, '') + Host_dict['extra'] = stringOrNone(record.extra, '') + Host_dict['features'] = listOrNone(record.features, '') + Host_dict['features_active'] = listOrNone(record.features_act, '') + 
Host_dict['free_mem'] = int64orNone(record.free_mem) + Host_dict['gres'] = listOrNone(record.gres, ',') + Host_dict['gres_drain'] = listOrNone(record.gres_drain, '') Host_dict['gres_used'] = self.parse_gres( - slurm.stringOrNone(record.gres_used, '') + stringOrNone(record.gres_used, '') ) Host_dict['last_busy'] = record.last_busy - Host_dict['mcs_label'] = slurm.stringOrNone(record.mcs_label, '') + Host_dict['mcs_label'] = stringOrNone(record.mcs_label, '') Host_dict['mem_spec_limit'] = record.mem_spec_limit - Host_dict['name'] = slurm.stringOrNone(record.name, '') + Host_dict['name'] = stringOrNone(record.name, '') # TODO: next_state - Host_dict['node_addr'] = slurm.stringOrNone(record.node_addr, '') - Host_dict['node_hostname'] = slurm.stringOrNone(record.node_hostname, '') - Host_dict['os'] = slurm.stringOrNone(record.os, '') + Host_dict['node_addr'] = stringOrNone(record.node_addr, '') + Host_dict['node_hostname'] = stringOrNone(record.node_hostname, '') + Host_dict['os'] = stringOrNone(record.os, '') if record.owner == slurm.NO_VAL: Host_dict['owner'] = None else: Host_dict['owner'] = record.owner - Host_dict['partitions'] = slurm.listOrNone(record.partitions, ',') + Host_dict['partitions'] = listOrNone(record.partitions, ',') Host_dict['real_memory'] = record.real_memory Host_dict['slurmd_start_time'] = record.slurmd_start_time Host_dict['sockets'] = record.sockets Host_dict['threads'] = record.threads Host_dict['tmp_disk'] = record.tmp_disk Host_dict['weight'] = record.weight - Host_dict['tres_fmt_str'] = slurm.stringOrNone(record.tres_fmt_str, '') - Host_dict['version'] = slurm.stringOrNone(record.version, '') + Host_dict['tres_fmt_str'] = stringOrNone(record.tres_fmt_str, '') + Host_dict['version'] = stringOrNone(record.version, '') - Host_dict['reason'] = slurm.stringOrNone(record.reason, '') + Host_dict['reason'] = stringOrNone(record.reason, '') if record.reason_time == 0: Host_dict['reason_time'] = None else: @@ -3425,11 +3475,11 @@ cdef class node: node_state |= NODE_STATE_MIXED Host_dict['state'] = ( - slurm.stringOrNone(slurm.slurm_node_state_string(node_state), '') + - slurm.stringOrNone(cloud_str, '') + - slurm.stringOrNone(comp_str, '') + - slurm.stringOrNone(drain_str, '') + - slurm.stringOrNone(power_str, '') + stringOrNone(slurm.slurm_node_state_string(node_state), '') + + stringOrNone(cloud_str, '') + + stringOrNone(comp_str, '') + + stringOrNone(drain_str, '') + + stringOrNone(power_str, '') ) slurm.slurm_get_select_nodeinfo(record.select_nodeinfo, @@ -3438,7 +3488,7 @@ cdef class node: Host_dict['alloc_mem'] = alloc_mem - b_name = slurm.stringOrNone(record.name, '') + b_name = stringOrNone(record.name, '') self._NodeDict[b_name] = Host_dict if nodeID: @@ -3459,7 +3509,7 @@ cdef class node: Args: node_dict (dict): A populated node dictionary, an empty one is created by create_node_dict - + Returns: (int): 0 for success or -1 for error, and the slurm error code is set appropriately. 
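The node-update path in these hunks works through a plain dictionary: `create_node_dict()` returns an empty template that `slurm_update_node()` (or `node().update()`) consumes. A sketch of draining a node this way; the field names are assumed to mirror Slurm's `update_node_msg_t`, and the state constant name is likewise an assumption:

```python
import pyslurm

node_dict = pyslurm.create_node_dict()
node_dict['node_names'] = 'node001'                 # field names assumed from update_node_msg_t
node_dict['node_state'] = pyslurm.NODE_STATE_DRAIN  # constant name is an assumption
node_dict['reason'] = 'scheduled maintenance'

rc = pyslurm.slurm_update_node(node_dict)           # 0 on success, raises ValueError otherwise
```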
@@ -3485,7 +3535,7 @@ cdef class node: self._Node_ptr = NULL else: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) def slurm_update_node(dict node_dict): @@ -3537,14 +3587,14 @@ def slurm_update_node(dict node_dict): if errCode != slurm.SLURM_SUCCESS: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode def create_node_dict(): """Return a an update_node dictionary - + This dictionary can be populated by the user and used for the update_node call. @@ -3685,32 +3735,32 @@ cdef class jobstep: else: Step_dict['step_id_str'] = "{0}.{1}".format(job_id, step_id) - Step_dict['cluster'] = slurm.stringOrNone(job_step_info_ptr.job_steps[i].cluster, '') - Step_dict['container'] = slurm.stringOrNone(job_step_info_ptr.job_steps[i].container, '') - Step_dict['cpus_per_tres'] = slurm.stringOrNone(job_step_info_ptr.job_steps[i].cpus_per_tres, '') + Step_dict['cluster'] = stringOrNone(job_step_info_ptr.job_steps[i].cluster, '') + Step_dict['container'] = stringOrNone(job_step_info_ptr.job_steps[i].container, '') + Step_dict['cpus_per_tres'] = stringOrNone(job_step_info_ptr.job_steps[i].cpus_per_tres, '') - Step_dict['dist'] = slurm.stringOrNone( + Step_dict['dist'] = stringOrNone( slurm.slurm_step_layout_type_name( job_step_info_ptr.job_steps[i].task_dist ), '' ) - Step_dict['mem_per_tres'] = slurm.stringOrNone(job_step_info_ptr.job_steps[i].mem_per_tres, '') - Step_dict['name'] = slurm.stringOrNone( job_step_info_ptr.job_steps[i].name, '') - Step_dict['network'] = slurm.stringOrNone( job_step_info_ptr.job_steps[i].network, '') - Step_dict['nodes'] = slurm.stringOrNone(job_step_info_ptr.job_steps[i].nodes, '') + Step_dict['mem_per_tres'] = stringOrNone(job_step_info_ptr.job_steps[i].mem_per_tres, '') + Step_dict['name'] = stringOrNone( job_step_info_ptr.job_steps[i].name, '') + Step_dict['network'] = stringOrNone( job_step_info_ptr.job_steps[i].network, '') + Step_dict['nodes'] = stringOrNone(job_step_info_ptr.job_steps[i].nodes, '') Step_dict['num_cpus'] = job_step_info_ptr.job_steps[i].num_cpus Step_dict['num_tasks'] = job_step_info_ptr.job_steps[i].num_tasks - Step_dict['partition'] = slurm.stringOrNone(job_step_info_ptr.job_steps[i].partition, '') - Step_dict['resv_ports'] = slurm.stringOrNone(job_step_info_ptr.job_steps[i].resv_ports, '') + Step_dict['partition'] = stringOrNone(job_step_info_ptr.job_steps[i].partition, '') + Step_dict['resv_ports'] = stringOrNone(job_step_info_ptr.job_steps[i].resv_ports, '') Step_dict['run_time'] = job_step_info_ptr.job_steps[i].run_time - Step_dict['srun_host'] = slurm.stringOrNone(job_step_info_ptr.job_steps[i].srun_host, '') + Step_dict['srun_host'] = stringOrNone(job_step_info_ptr.job_steps[i].srun_host, '') Step_dict['srun_pid'] = job_step_info_ptr.job_steps[i].srun_pid Step_dict['start_time'] = job_step_info_ptr.job_steps[i].start_time job_state = slurm.slurm_job_state_string(job_step_info_ptr.job_steps[i].state) - Step_dict['state'] = slurm.stringOrNone(job_state, '') - Step_dict['submit_line'] = slurm.stringOrNone(job_step_info_ptr.job_steps[i].submit_line, '') + Step_dict['state'] = stringOrNone(job_state, '') + Step_dict['submit_line'] = stringOrNone(job_step_info_ptr.job_steps[i].submit_line, '') if job_step_info_ptr.job_steps[i].time_limit == 
slurm.INFINITE: Step_dict['time_limit'] = "UNLIMITED" @@ -3719,31 +3769,31 @@ cdef class jobstep: Step_dict['time_limit'] = job_step_info_ptr.job_steps[i].time_limit Step_dict['time_limit_str'] = secs2time_str(job_step_info_ptr.job_steps[i].time_limit) - Step_dict['tres_alloc_str'] = slurm.stringOrNone( + Step_dict['tres_alloc_str'] = stringOrNone( job_step_info_ptr.job_steps[i].tres_alloc_str, '' ) - Step_dict['tres_bind'] = slurm.stringOrNone( + Step_dict['tres_bind'] = stringOrNone( job_step_info_ptr.job_steps[i].tres_bind, '' ) - Step_dict['tres_freq'] = slurm.stringOrNone( + Step_dict['tres_freq'] = stringOrNone( job_step_info_ptr.job_steps[i].tres_freq, '' ) - Step_dict['tres_per_step'] = slurm.stringOrNone( + Step_dict['tres_per_step'] = stringOrNone( job_step_info_ptr.job_steps[i].tres_per_step, '' ) - Step_dict['tres_per_node'] = slurm.stringOrNone( + Step_dict['tres_per_node'] = stringOrNone( job_step_info_ptr.job_steps[i].tres_per_node, '' ) - Step_dict['tres_per_socket'] = slurm.stringOrNone( + Step_dict['tres_per_socket'] = stringOrNone( job_step_info_ptr.job_steps[i].tres_per_socket, '' ) - Step_dict['tres_per_task'] = slurm.stringOrNone( + Step_dict['tres_per_task'] = stringOrNone( job_step_info_ptr.job_steps[i].tres_per_task, '' ) @@ -3759,7 +3809,7 @@ cdef class jobstep: """Get the slurm job step layout from a given job and step id. Args: - JobID (int): The job id. + JobID (int): The job id. StepID (int): The id of the job step. Returns: @@ -3782,18 +3832,18 @@ cdef class jobstep: Node_cnt = old_job_step_ptr.node_cnt - Layout['front_end'] = slurm.stringOrNone(old_job_step_ptr.front_end, '') + Layout['front_end'] = stringOrNone(old_job_step_ptr.front_end, '') Layout['node_cnt'] = Node_cnt - Layout['node_list'] = slurm.stringOrNone(old_job_step_ptr.node_list, '') + Layout['node_list'] = stringOrNone(old_job_step_ptr.node_list, '') Layout['plane_size'] = old_job_step_ptr.plane_size Layout['task_cnt'] = old_job_step_ptr.task_cnt Layout['task_dist'] = old_job_step_ptr.task_dist - Layout['task_dist'] = slurm.stringOrNone( + Layout['task_dist'] = stringOrNone( slurm.slurm_step_layout_type_name(old_job_step_ptr.task_dist), '' ) hl = hostlist() - node_list = slurm.stringOrNone(old_job_step_ptr.node_list, '') + node_list = stringOrNone(old_job_step_ptr.node_list, '') hl.create(node_list) Nodes = hl.get_list() hl.destroy() @@ -3802,7 +3852,7 @@ cdef class jobstep: Tids_list = [] for j in range(old_job_step_ptr.tasks[i]): Tids_list.append(old_job_step_ptr.tids[i][j]) - Node_list.append([slurm.stringOrNone(node, ''), Tids_list]) + Node_list.append([stringOrNone(node, ''), Tids_list]) Layout['tasks'] = Node_list @@ -3819,7 +3869,7 @@ cdef class jobstep: cdef class hostlist: """Wrapper for Slurm hostlist functions.""" - cdef slurm.hostlist_t hl + cdef slurm.hostlist_t *hl def __cinit__(self): self.hl = NULL @@ -3856,7 +3906,7 @@ cdef class hostlist: (list): The list of hostnames in case of success or None on error. 
""" cdef: - slurm.hostlist_t hlist = NULL + slurm.hostlist_t *hlist = NULL char *hostlist_s = NULL char *tmp_str = NULL list host_list = None @@ -3902,7 +3952,7 @@ cdef class hostlist: def ranged_string(self): if self.hl is not NULL: - return slurm.stringOrNone(slurm.slurm_hostlist_ranged_string_malloc(self.hl), '') + return stringOrNone(slurm.slurm_hostlist_ranged_string_xmalloc(self.hl), '') def find(self, hostname): if self.hl is not NULL: @@ -3911,7 +3961,7 @@ cdef class hostlist: def pop(self): if self.hl is not NULL: - return slurm.stringOrNone(slurm.slurm_hostlist_shift(self.hl), '') + return stringOrNone(slurm.slurm_hostlist_shift(self.hl), '') def shift(self): return self.pop() @@ -4036,11 +4086,11 @@ cdef class trigger: Trigger_dict['flags'] = record.flags Trigger_dict['trig_id'] = trigger_id Trigger_dict['res_type'] = record.res_type - Trigger_dict['res_id'] = slurm.stringOrNone(record.res_id, '') + Trigger_dict['res_id'] = stringOrNone(record.res_id, '') Trigger_dict['trig_type'] = record.trig_type Trigger_dict['offset'] = record.offset - 0x8000 Trigger_dict['user_id'] = record.user_id - Trigger_dict['program'] = slurm.stringOrNone(record.program, '') + Trigger_dict['program'] = stringOrNone(record.program, '') Triggers[trigger_id] = Trigger_dict @@ -4077,7 +4127,7 @@ cdef class trigger: errCode = slurm.slurm_clear_trigger(&trigger_clear) if errCode != slurm.SLURM_SUCCESS: - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(errCode), ''), errCode) + raise ValueError(stringOrNone(slurm.slurm_strerror(errCode), ''), errCode) return errCode @@ -4180,7 +4230,7 @@ cdef class reservation: self._lastUpdate = self._Res_ptr.last_update else: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -4210,25 +4260,25 @@ cdef class reservation: for record in self._Res_ptr.reservation_array[:self._Res_ptr.record_count]: - name = slurm.stringOrNone(record.name, '') + name = stringOrNone(record.name, '') Res_dict = {} - Res_dict['accounts'] = slurm.listOrNone(record.accounts, ',') - Res_dict['burst_buffer'] = slurm.listOrNone(record.burst_buffer, ',') + Res_dict['accounts'] = listOrNone(record.accounts, ',') + Res_dict['burst_buffer'] = listOrNone(record.burst_buffer, ',') Res_dict['core_cnt'] = record.core_cnt Res_dict['end_time'] = record.end_time - Res_dict['features'] = slurm.stringOrNone(record.features, '') + Res_dict['features'] = stringOrNone(record.features, '') flags = slurm.slurm_reservation_flags_string(&record) - Res_dict['flags'] = slurm.stringOrNone(flags, '') + Res_dict['flags'] = stringOrNone(flags, '') Res_dict['licenses'] = __get_licenses(record.licenses) Res_dict['node_cnt'] = record.node_cnt - Res_dict['node_list'] = slurm.stringOrNone(record.node_list, '') - Res_dict['partition'] = slurm.stringOrNone(record.partition, '') + Res_dict['node_list'] = stringOrNone(record.node_list, '') + Res_dict['partition'] = stringOrNone(record.partition, '') Res_dict['start_time'] = record.start_time - Res_dict['tres_str'] = slurm.listOrNone(record.tres_str, ',') - Res_dict['users'] = slurm.listOrNone(record.users, ',') + Res_dict['tres_str'] = listOrNone(record.tres_str, ',') + Res_dict['users'] = listOrNone(record.users, ',') Reservations[name] = Res_dict @@ -4318,15 +4368,10 @@ def slurm_create_reservation(dict reservation_dict={}): resv_msg.end_time = reservation_dict['end_time'] if reservation_dict.get('node_cnt'): - 
int_value = reservation_dict['node_cnt'] - resv_msg.node_cnt = xmalloc(sizeof(uint32_t) * 2) - resv_msg.node_cnt[0] = int_value - resv_msg.node_cnt[1] = 0 + resv_msg.node_cnt = reservation_dict['node_cnt'] if reservation_dict.get('core_cnt') and not reservation_dict.get('node_list'): - uint32_value = reservation_dict['core_cnt'][0] - resv_msg.core_cnt = xmalloc(sizeof(uint32_t)) - resv_msg.core_cnt[0] = uint32_value + resv_msg.core_cnt = reservation_dict['core_cnt'][0] if reservation_dict.get('node_list'): b_node_list = reservation_dict['node_list'].encode("UTF-8", "replace") @@ -4336,12 +4381,7 @@ def slurm_create_reservation(dict reservation_dict={}): hl.create(b_node_list) if len(reservation_dict['core_cnt']) != hl.count(): raise ValueError("core_cnt list must have the same # elements as the expanded hostlist") - resv_msg.core_cnt = xmalloc(sizeof(uint32_t) * hl.count()) - int_value = 0 - for cores in reservation_dict['core_cnt']: - uint32_value = cores - resv_msg.core_cnt[int_value] = uint32_value - int_value += 1 + resv_msg.core_cnt = len(reservation_dict['core_cnt']) if reservation_dict.get('users'): b_users = reservation_dict['users'].encode("UTF-8", "replace") @@ -4375,11 +4415,11 @@ def slurm_create_reservation(dict reservation_dict={}): resID = '' if resid is not NULL: - resID = slurm.stringOrNone(resid, '') + resID = stringOrNone(resid, '') free(resid) else: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return resID @@ -4422,15 +4462,10 @@ def slurm_update_reservation(dict reservation_dict={}): resv_msg.name = b_name if reservation_dict.get('node_cnt'): - int_value = reservation_dict['node_cnt'] - resv_msg.node_cnt = xmalloc(sizeof(uint32_t) * 2) - resv_msg.node_cnt[0] = int_value - resv_msg.node_cnt[1] = 0 + resv_msg.node_cnt = reservation_dict['node_cnt'] if reservation_dict.get('core_cnt') and not reservation_dict.get('node_list'): - uint32_value = reservation_dict['core_cnt'][0] - resv_msg.core_cnt = xmalloc(sizeof(uint32_t)) - resv_msg.core_cnt[0] = uint32_value + resv_msg.core_cnt = reservation_dict['core_cnt'][0] if reservation_dict.get('node_list'): b_node_list = reservation_dict['node_list'] @@ -4440,12 +4475,7 @@ def slurm_update_reservation(dict reservation_dict={}): hl.create(b_node_list) if len(reservation_dict['core_cnt']) != hl.count(): raise ValueError("core_cnt list must have the same # elements as the expanded hostlist") - resv_msg.core_cnt = xmalloc(sizeof(uint32_t) * hl.count()) - int_value = 0 - for cores in reservation_dict['core_cnt']: - uint32_value = cores - resv_msg.core_cnt[int_value] = uint32_value - int_value += 1 + resv_msg.core_cnt = len(reservation_dict['core_cnt']) if reservation_dict.get('users'): b_users = reservation_dict['users'].encode("UTF-8", "replace") @@ -4474,7 +4504,7 @@ def slurm_update_reservation(dict reservation_dict={}): errCode = slurm.slurm_update_reservation(&resv_msg) if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -4502,14 +4532,14 @@ def slurm_delete_reservation(ResID): if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return 
errCode def create_reservation_dict(): """Create an empty dict for use with the create_reservation method. - + Returns a dictionary that can be populated by the user and used for the update_reservation and create_reservation calls. @@ -4579,7 +4609,7 @@ cdef class topology: errCode = slurm.slurm_load_topo(&self._topo_info_ptr) if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -4605,12 +4635,12 @@ cdef class topology: Topo_dict = {} - name = slurm.stringOrNone(self._topo_info_ptr.topo_array[i].name, '') + name = stringOrNone(self._topo_info_ptr.topo_array[i].name, '') Topo_dict['name'] = name - Topo_dict['nodes'] = slurm.stringOrNone(self._topo_info_ptr.topo_array[i].nodes, '') + Topo_dict['nodes'] = stringOrNone(self._topo_info_ptr.topo_array[i].nodes, '') Topo_dict['level'] = self._topo_info_ptr.topo_array[i].level Topo_dict['link_speed'] = self._topo_info_ptr.topo_array[i].link_speed - Topo_dict['switches'] = slurm.stringOrNone(self._topo_info_ptr.topo_array[i].switches, '') + Topo_dict['switches'] = stringOrNone(self._topo_info_ptr.topo_array[i].switches, '') Topo[name] = Topo_dict @@ -4628,6 +4658,7 @@ cdef class topology: if self._topo_info_ptr is not NULL: slurm.slurm_print_topo_info_msg(slurm.stdout, self._topo_info_ptr, + NULL, self._ShowFlags) @@ -4749,7 +4780,7 @@ cdef class statistics: return self._StatsDict else: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) def reset(self): """Reset scheduling statistics @@ -4767,7 +4798,7 @@ return errCode else: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) cpdef __rpc_num2string(self, uint16_t opcode): cdef dict num2string @@ -5074,7 +5105,7 @@ cdef class front_end: if errCode != 0: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) return errCode @@ -5113,22 +5144,22 @@ if self._FrontEndNode_ptr is not NULL: for record in self._FrontEndNode_ptr.front_end_array[:self._FrontEndNode_ptr.record_count]: FE_dict = {} - name = slurm.stringOrNone(record.name, '') + name = stringOrNone(record.name, '') FE_dict['boot_time'] = record.boot_time - FE_dict['allow_groups'] = slurm.stringOrNone(record.allow_groups, '') - FE_dict['allow_users'] = slurm.stringOrNone(record.allow_users, '') - FE_dict['deny_groups'] = slurm.stringOrNone(record.deny_groups, '') - FE_dict['deny_users'] = slurm.stringOrNone(record.deny_users, '') + FE_dict['allow_groups'] = stringOrNone(record.allow_groups, '') + FE_dict['allow_users'] = stringOrNone(record.allow_users, '') + FE_dict['deny_groups'] = stringOrNone(record.deny_groups, '') + FE_dict['deny_users'] = stringOrNone(record.deny_users, '') fe_node_state = get_node_state(record.node_state) - FE_dict['node_state'] = slurm.stringOrNone(fe_node_state, '') + FE_dict['node_state'] = stringOrNone(fe_node_state, '') - FE_dict['reason'] = slurm.stringOrNone(record.reason, '') + FE_dict['reason'] = stringOrNone(record.reason, '') FE_dict['reason_time'] =
record.reason_time FE_dict['reason_uid'] = record.reason_uid FE_dict['slurmd_start_time'] = record.slurmd_start_time - FE_dict['version'] = slurm.stringOrNone(record.version, '') + FE_dict['version'] = stringOrNone(record.version, '') FENode[name] = FE_dict @@ -5173,7 +5204,7 @@ cdef class qos: if QOSList is NULL: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) else: self._QOSList = QOSList @@ -5221,46 +5252,46 @@ cdef class qos: for i in range(listNum): qos = slurm.slurm_list_next(iters) - name = slurm.stringOrNone(qos.name, '') + name = stringOrNone(qos.name, '') # QOS infos QOS_info = {} if name: - QOS_info['description'] = slurm.stringOrNone(qos.description, '') + QOS_info['description'] = stringOrNone(qos.description, '') QOS_info['flags'] = qos.flags QOS_info['grace_time'] = qos.grace_time QOS_info['grp_jobs'] = qos.grp_jobs QOS_info['grp_submit_jobs'] = qos.grp_submit_jobs - QOS_info['grp_tres'] = slurm.stringOrNone(qos.grp_tres, '') + QOS_info['grp_tres'] = stringOrNone(qos.grp_tres, '') # QOS_info['grp_tres_ctld'] - QOS_info['grp_tres_mins'] = slurm.stringOrNone(qos.grp_tres_mins, '') + QOS_info['grp_tres_mins'] = stringOrNone(qos.grp_tres_mins, '') # QOS_info['grp_tres_mins_ctld'] - QOS_info['grp_tres_run_mins'] = slurm.stringOrNone(qos.grp_tres_run_mins, '') + QOS_info['grp_tres_run_mins'] = stringOrNone(qos.grp_tres_run_mins, '') # QOS_info['grp_tres_run_mins_ctld'] QOS_info['grp_wall'] = qos.grp_wall QOS_info['max_jobs_pu'] = qos.max_jobs_pu QOS_info['max_submit_jobs_pu'] = qos.max_submit_jobs_pu - QOS_info['max_tres_mins_pj'] = slurm.stringOrNone(qos.max_tres_mins_pj, '') + QOS_info['max_tres_mins_pj'] = stringOrNone(qos.max_tres_mins_pj, '') # QOS_info['max_tres_min_pj_ctld'] - QOS_info['max_tres_pj'] = slurm.stringOrNone(qos.max_tres_pj, '') + QOS_info['max_tres_pj'] = stringOrNone(qos.max_tres_pj, '') # QOS_info['max_tres_min_pj_ctld'] - QOS_info['max_tres_pn'] = slurm.stringOrNone(qos.max_tres_pn, '') + QOS_info['max_tres_pn'] = stringOrNone(qos.max_tres_pn, '') # QOS_info['max_tres_min_pn_ctld'] - QOS_info['max_tres_pu'] = slurm.stringOrNone(qos.max_tres_pu, '') + QOS_info['max_tres_pu'] = stringOrNone(qos.max_tres_pu, '') # QOS_info['max_tres_min_pu_ctld'] - QOS_info['max_tres_run_mins_pu'] = slurm.stringOrNone( + QOS_info['max_tres_run_mins_pu'] = stringOrNone( qos.max_tres_run_mins_pu, '') QOS_info['max_wall_pj'] = qos.max_wall_pj - QOS_info['min_tres_pj'] = slurm.stringOrNone(qos.min_tres_pj, '') + QOS_info['min_tres_pj'] = stringOrNone(qos.min_tres_pj, '') # QOS_info['min_tres_pj_ctld'] QOS_info['name'] = name # QOS_info['*preempt_bitstr'] = # QOS_info['preempt_list'] = qos.preempt_list qos_preempt_mode = get_preempt_mode(qos.preempt_mode) - QOS_info['preempt_mode'] = slurm.stringOrNone(qos_preempt_mode, '') + QOS_info['preempt_mode'] = stringOrNone(qos_preempt_mode, '') QOS_info['priority'] = qos.priority QOS_info['usage_factor'] = qos.usage_factor @@ -5296,7 +5327,7 @@ cdef class slurmdb_jobs: def get(self, jobids=[], userids=[], starttime=0, endtime=0, flags = None, db_flags = None, clusters = []): """Get Slurmdb information about some jobs. 
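A quick usage sketch (the user id and time string are hypothetical; the time string follows the accepted formats listed below, and the result keys shown are the ones populated further down in this method):

    import pyslurm

    db_jobs = pyslurm.slurmdb_jobs()
    # may need a byte-string for times on older builds
    jobs = db_jobs.get(userids=[1000], starttime="midnight")
    for jobid, info in jobs.items():
        print(jobid, info["state_str"], info["elapsed"])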
- + Input formats for start and end times: * today or tomorrow * midnight, noon, teatime (4PM) @@ -5331,7 +5362,7 @@ cdef class slurmdb_jobs: slurm.List JOBSList slurm.ListIterator iters = NULL - + if clusters: self.job_cond.cluster_list = slurm.slurm_list_create(NULL) for _cluster in clusters: @@ -5393,30 +5424,30 @@ cdef class slurmdb_jobs: JOBS_info = {} if job is not NULL: jobid = job.jobid - JOBS_info['account'] = slurm.stringOrNone(job.account, '') + JOBS_info['account'] = stringOrNone(job.account, '') JOBS_info['alloc_nodes'] = job.alloc_nodes JOBS_info['array_job_id'] = job.array_job_id JOBS_info['array_max_tasks'] = job.array_max_tasks JOBS_info['array_task_id'] = job.array_task_id - JOBS_info['array_task_str'] = slurm.stringOrNone(job.array_task_str, '') + JOBS_info['array_task_str'] = stringOrNone(job.array_task_str, '') JOBS_info['associd'] = job.associd - JOBS_info['blockid'] = slurm.stringOrNone(job.blockid, '') - JOBS_info['cluster'] = slurm.stringOrNone(job.cluster, '') - JOBS_info['constraints'] = slurm.stringOrNone(job.constraints, '') - JOBS_info['container'] = slurm.stringOrNone(job.container, '') + JOBS_info['blockid'] = stringOrNone(job.blockid, '') + JOBS_info['cluster'] = stringOrNone(job.cluster, '') + JOBS_info['constraints'] = stringOrNone(job.constraints, '') + JOBS_info['container'] = stringOrNone(job.container, '') JOBS_info['derived_ec'] = job.derived_ec - JOBS_info['derived_es'] = slurm.stringOrNone(job.derived_es, '') + JOBS_info['derived_es'] = stringOrNone(job.derived_es, '') JOBS_info['elapsed'] = job.elapsed JOBS_info['eligible'] = job.eligible JOBS_info['end'] = job.end - JOBS_info['env'] = slurm.stringOrNone(job.env, '') + JOBS_info['env'] = stringOrNone(job.env, '') JOBS_info['exitcode'] = job.exitcode JOBS_info['gid'] = job.gid JOBS_info['jobid'] = job.jobid - JOBS_info['jobname'] = slurm.stringOrNone(job.jobname, '') + JOBS_info['jobname'] = stringOrNone(job.jobname, '') JOBS_info['lft'] = job.lft - JOBS_info['partition'] = slurm.stringOrNone(job.partition, '') - JOBS_info['nodes'] = slurm.stringOrNone(job.nodes, '') + JOBS_info['partition'] = stringOrNone(job.partition, '') + JOBS_info['nodes'] = stringOrNone(job.nodes, '') JOBS_info['priority'] = job.priority JOBS_info['qosid'] = job.qosid JOBS_info['req_cpus'] = job.req_cpus @@ -5430,13 +5461,13 @@ cdef class slurmdb_jobs: JOBS_info['requid'] = job.requid JOBS_info['resvid'] = job.resvid - JOBS_info['resv_name'] = slurm.stringOrNone(job.resv_name,'') - JOBS_info['script'] = slurm.stringOrNone(job.script,'') + JOBS_info['resv_name'] = stringOrNone(job.resv_name,'') + JOBS_info['script'] = stringOrNone(job.script,'') JOBS_info['show_full'] = job.show_full JOBS_info['start'] = job.start JOBS_info['state'] = job.state - JOBS_info['state_str'] = slurm.stringOrNone(slurm.slurm_job_state_string(job.state), '') - + JOBS_info['state_str'] = stringOrNone(slurm.slurm_job_state_string(job.state), '') + # TRES are reported as strings in the format `TRESID=value` where TRESID is one of: # TRES_CPU=1, TRES_MEM=2, TRES_ENERGY=3, TRES_NODE=4, TRES_BILLING=5, TRES_FS_DISK=6, TRES_VMEM=7, TRES_PAGES=8 # Example: '1=0,2=745472,3=0,6=1949,7=7966720,8=0' @@ -5453,25 +5484,25 @@ cdef class slurmdb_jobs: if step is not NULL: step_id = step.step_id.step_id - step_info['container'] = slurm.stringOrNone(step.container, '') + step_info['container'] = stringOrNone(step.container, '') step_info['elapsed'] = step.elapsed step_info['end'] = step.end step_info['exitcode'] = step.exitcode - # Don't add this unless you want to 
create an endless recursive structure + # Don't add this unless you want to create an endless recursive structure # step_info['job_ptr'] = JOBS_Info # job's record step_info['nnodes'] = step.nnodes - step_info['nodes'] = slurm.stringOrNone(step.nodes, '') + step_info['nodes'] = stringOrNone(step.nodes, '') step_info['ntasks'] = step.ntasks - step_info['pid_str'] = slurm.stringOrNone(step.pid_str, '') + step_info['pid_str'] = stringOrNone(step.pid_str, '') step_info['req_cpufreq_min'] = step.req_cpufreq_min step_info['req_cpufreq_max'] = step.req_cpufreq_max step_info['req_cpufreq_gov'] = step.req_cpufreq_gov step_info['requid'] = step.requid step_info['start'] = step.start step_info['state'] = step.state - step_info['state_str'] = slurm.stringOrNone(slurm.slurm_job_state_string(step.state), '') + step_info['state_str'] = stringOrNone(slurm.slurm_job_state_string(step.state), '') # TRES are reported as strings in the format `TRESID=value` where TRESID is one of: # TRES_CPU=1, TRES_MEM=2, TRES_ENERGY=3, TRES_NODE=4, TRES_BILLING=5, TRES_FS_DISK=6, TRES_VMEM=7, TRES_PAGES=8 @@ -5480,31 +5511,31 @@ cdef class slurmdb_jobs: stats = step_info['stats'] stats['act_cpufreq'] = step.stats.act_cpufreq stats['consumed_energy'] = step.stats.consumed_energy - stats['tres_usage_in_max'] = slurm.stringOrNone(step.stats.tres_usage_in_max, '') - stats['tres_usage_in_max_nodeid'] = slurm.stringOrNone(step.stats.tres_usage_in_max_nodeid, '') - stats['tres_usage_in_max_taskid'] = slurm.stringOrNone(step.stats.tres_usage_in_max_taskid, '') - stats['tres_usage_in_min'] = slurm.stringOrNone(step.stats.tres_usage_in_min, '') - stats['tres_usage_in_min_nodeid'] = slurm.stringOrNone(step.stats.tres_usage_in_min_nodeid, '') - stats['tres_usage_in_min_taskid'] = slurm.stringOrNone(step.stats.tres_usage_in_min_taskid, '') - stats['tres_usage_in_tot'] = slurm.stringOrNone(step.stats.tres_usage_in_tot, '') - stats['tres_usage_out_ave'] = slurm.stringOrNone(step.stats.tres_usage_out_ave, '') - stats['tres_usage_out_max'] = slurm.stringOrNone(step.stats.tres_usage_out_max, '') - stats['tres_usage_out_max_nodeid'] = slurm.stringOrNone(step.stats.tres_usage_out_max_nodeid, '') - stats['tres_usage_out_max_taskid'] = slurm.stringOrNone(step.stats.tres_usage_out_max_taskid, '') - stats['tres_usage_out_min'] = slurm.stringOrNone(step.stats.tres_usage_out_min, '') - stats['tres_usage_out_min_nodeid'] = slurm.stringOrNone(step.stats.tres_usage_out_min_nodeid, '') - stats['tres_usage_out_min_taskid'] = slurm.stringOrNone(step.stats.tres_usage_out_min_taskid, '') - stats['tres_usage_out_tot'] = slurm.stringOrNone(step.stats.tres_usage_out_tot, '') + stats['tres_usage_in_max'] = stringOrNone(step.stats.tres_usage_in_max, '') + stats['tres_usage_in_max_nodeid'] = stringOrNone(step.stats.tres_usage_in_max_nodeid, '') + stats['tres_usage_in_max_taskid'] = stringOrNone(step.stats.tres_usage_in_max_taskid, '') + stats['tres_usage_in_min'] = stringOrNone(step.stats.tres_usage_in_min, '') + stats['tres_usage_in_min_nodeid'] = stringOrNone(step.stats.tres_usage_in_min_nodeid, '') + stats['tres_usage_in_min_taskid'] = stringOrNone(step.stats.tres_usage_in_min_taskid, '') + stats['tres_usage_in_tot'] = stringOrNone(step.stats.tres_usage_in_tot, '') + stats['tres_usage_out_ave'] = stringOrNone(step.stats.tres_usage_out_ave, '') + stats['tres_usage_out_max'] = stringOrNone(step.stats.tres_usage_out_max, '') + stats['tres_usage_out_max_nodeid'] = stringOrNone(step.stats.tres_usage_out_max_nodeid, '') + stats['tres_usage_out_max_taskid'] = 
stringOrNone(step.stats.tres_usage_out_max_taskid, '') + stats['tres_usage_out_min'] = stringOrNone(step.stats.tres_usage_out_min, '') + stats['tres_usage_out_min_nodeid'] = stringOrNone(step.stats.tres_usage_out_min_nodeid, '') + stats['tres_usage_out_min_taskid'] = stringOrNone(step.stats.tres_usage_out_min_taskid, '') + stats['tres_usage_out_tot'] = stringOrNone(step.stats.tres_usage_out_tot, '') step_info['stepid'] = step_id - step_info['stepname'] = slurm.stringOrNone(step.stepname, '') - step_info['submit_line'] = slurm.stringOrNone(step.submit_line, '') + step_info['stepname'] = stringOrNone(step.stepname, '') + step_info['submit_line'] = stringOrNone(step.submit_line, '') step_info['suspended'] = step.suspended step_info['sys_cpu_sec'] = step.sys_cpu_sec step_info['sys_cpu_usec'] = step.sys_cpu_usec step_info['task_dist'] = step.task_dist step_info['tot_cpu_sec'] = step.tot_cpu_sec step_info['tot_cpu_usec'] = step.tot_cpu_usec - step_info['tres_alloc_str'] = slurm.stringOrNone(step.tres_alloc_str, '') + step_info['tres_alloc_str'] = stringOrNone(step.tres_alloc_str, '') step_info['user_cpu_sec'] = step.user_cpu_sec step_info['user_cpu_usec'] = step.user_cpu_usec @@ -5513,23 +5544,23 @@ cdef class slurmdb_jobs: slurm.slurm_list_iterator_destroy(stepsIter) JOBS_info['submit'] = job.submit - JOBS_info['submit_line'] = slurm.stringOrNone(job.submit_line,'') + JOBS_info['submit_line'] = stringOrNone(job.submit_line,'') JOBS_info['suspended'] = job.suspended JOBS_info['sys_cpu_sec'] = job.sys_cpu_sec JOBS_info['sys_cpu_usec'] = job.sys_cpu_usec JOBS_info['timelimit'] = job.timelimit JOBS_info['tot_cpu_sec'] = job.tot_cpu_sec JOBS_info['tot_cpu_usec'] = job.tot_cpu_usec - JOBS_info['tres_alloc_str'] = slurm.stringOrNone(job.tres_alloc_str,'') - JOBS_info['tres_req_str'] = slurm.stringOrNone(job.tres_req_str,'') + JOBS_info['tres_alloc_str'] = stringOrNone(job.tres_alloc_str,'') + JOBS_info['tres_req_str'] = stringOrNone(job.tres_req_str,'') JOBS_info['uid'] = job.uid - JOBS_info['used_gres'] = slurm.stringOrNone(job.used_gres, '') - JOBS_info['user'] = slurm.stringOrNone(job.user,'') + JOBS_info['used_gres'] = stringOrNone(job.used_gres, '') + JOBS_info['user'] = stringOrNone(job.user,'') JOBS_info['user_cpu_sec'] = job.user_cpu_sec JOBS_info['user_cpu_usec'] = job.user_cpu_usec - JOBS_info['wckey'] = slurm.stringOrNone(job.wckey, '') + JOBS_info['wckey'] = stringOrNone(job.wckey, '') JOBS_info['wckeyid'] = job.wckeyid - JOBS_info['work_dir'] = slurm.stringOrNone(job.work_dir, '') + JOBS_info['work_dir'] = stringOrNone(job.work_dir, '') J_dict[jobid] = JOBS_info slurm.slurm_list_iterator_destroy(iters) @@ -5603,12 +5634,12 @@ cdef class slurmdb_reservations: if reservation is not NULL: reservation_id = reservation.id - Reservation_rec_dict['name'] = slurm.stringOrNone(reservation.name, '') - Reservation_rec_dict['nodes'] = slurm.stringOrNone(reservation.nodes, '') - Reservation_rec_dict['node_index'] = slurm.stringOrNone(reservation.node_inx, '') - Reservation_rec_dict['associations'] = slurm.stringOrNone(reservation.assocs, '') - Reservation_rec_dict['cluster'] = slurm.stringOrNone(reservation.cluster, '') - Reservation_rec_dict['tres_str'] = slurm.stringOrNone(reservation.tres_str, '') + Reservation_rec_dict['name'] = stringOrNone(reservation.name, '') + Reservation_rec_dict['nodes'] = stringOrNone(reservation.nodes, '') + Reservation_rec_dict['node_index'] = stringOrNone(reservation.node_inx, '') + Reservation_rec_dict['associations'] = stringOrNone(reservation.assocs, '') + 
Reservation_rec_dict['cluster'] = stringOrNone(reservation.cluster, '') + Reservation_rec_dict['tres_str'] = stringOrNone(reservation.tres_str, '') Reservation_rec_dict['reservation_id'] = reservation.id Reservation_rec_dict['time_start'] = reservation.time_start Reservation_rec_dict['time_start_prev'] = reservation.time_start_prev @@ -5626,8 +5657,8 @@ cdef class slurmdb_reservations: if tres is not NULL: tmp_tres_dict = {} tres_id = tres.id - tmp_tres_dict['name'] = slurm.stringOrNone(tres.name,'') - tmp_tres_dict['type'] = slurm.stringOrNone(tres.type,'') + tmp_tres_dict['name'] = stringOrNone(tres.name,'') + tmp_tres_dict['type'] = stringOrNone(tres.type,'') tmp_tres_dict['rec_count'] = tres.rec_count tmp_tres_dict['count'] = tres.count tmp_tres_dict['tres_id'] = tres.id @@ -5711,11 +5742,11 @@ cdef class slurmdb_clusters: Cluster_rec_dict = {} if cluster is not NULL: - cluster_name = slurm.stringOrNone(cluster.name, '') + cluster_name = stringOrNone(cluster.name, '') Cluster_rec_dict['name'] = cluster_name - Cluster_rec_dict['nodes'] = slurm.stringOrNone(cluster.nodes, '') - Cluster_rec_dict['control_host'] = slurm.stringOrNone(cluster.control_host, '') - Cluster_rec_dict['tres'] = slurm.stringOrNone(cluster.tres_str, '') + Cluster_rec_dict['nodes'] = stringOrNone(cluster.nodes, '') + Cluster_rec_dict['control_host'] = stringOrNone(cluster.control_host, '') + Cluster_rec_dict['tres'] = stringOrNone(cluster.tres_str, '') Cluster_rec_dict['control_port'] = cluster.control_port Cluster_rec_dict['rpc_version'] = cluster.rpc_version Cluster_rec_dict['plugin_id_select'] = cluster.plugin_id_select @@ -5737,9 +5768,9 @@ cdef class slurmdb_clusters: acct_tres_id = acct_tres_rec.id if (acct_tres_rec.name is not NULL): - acct_tres_dict['name'] = slurm.stringOrNone(acct_tres_rec.name,'') + acct_tres_dict['name'] = stringOrNone(acct_tres_rec.name,'') if (acct_tres_rec.type is not NULL): - acct_tres_dict['type'] = slurm.stringOrNone(acct_tres_rec.type,'') + acct_tres_dict['type'] = stringOrNone(acct_tres_rec.type,'') acct_tres_dict['rec_count'] = acct_tres_rec.rec_count acct_tres_dict['count'] = acct_tres_rec.count @@ -5821,11 +5852,11 @@ cdef class slurmdb_events: if event is not NULL: event_id = event.period_start - event_rec_dict['cluster'] = slurm.stringOrNone(event.cluster, '') - event_rec_dict['cluster_nodes'] = slurm.stringOrNone(event.cluster_nodes, '') - event_rec_dict['node_name'] = slurm.stringOrNone(event.node_name, '') - event_rec_dict['reason'] = slurm.stringOrNone(event.reason, '') - event_rec_dict['tres_str'] = slurm.stringOrNone(event.tres_str, '') + event_rec_dict['cluster'] = stringOrNone(event.cluster, '') + event_rec_dict['cluster_nodes'] = stringOrNone(event.cluster_nodes, '') + event_rec_dict['node_name'] = stringOrNone(event.node_name, '') + event_rec_dict['reason'] = stringOrNone(event.reason, '') + event_rec_dict['tres_str'] = stringOrNone(event.tres_str, '') event_rec_dict['event_type'] = event.event_type event_rec_dict['time_start'] = event.period_start event_rec_dict['time_end'] = event.period_end @@ -5914,17 +5945,17 @@ cdef class slurmdb_reports: for i in range(slurm.slurm_list_count(slurmdb_report_cluster_list)): slurmdb_report_cluster = slurm.slurm_list_next(cluster_itr) - cluster_name = slurm.stringOrNone(slurmdb_report_cluster.name, '') + cluster_name = stringOrNone(slurmdb_report_cluster.name, '') Cluster_dict[cluster_name] = {} itr = slurm.slurm_list_iterator_create(slurmdb_report_cluster.assoc_list) for j in 
range(slurm.slurm_list_count(slurmdb_report_cluster.assoc_list)): slurmdb_report_assoc = slurm.slurm_list_next(itr) Assoc_dict = {} - Assoc_dict["account"] = slurm.stringOrNone(slurmdb_report_assoc.acct, '') - Assoc_dict["cluster"] = slurm.stringOrNone(slurmdb_report_assoc.cluster, '') - Assoc_dict["parent_account"] = slurm.stringOrNone(slurmdb_report_assoc.parent_acct, '') - Assoc_dict["user"] = slurm.stringOrNone(slurmdb_report_assoc.user, '') + Assoc_dict["account"] = stringOrNone(slurmdb_report_assoc.acct, '') + Assoc_dict["cluster"] = stringOrNone(slurmdb_report_assoc.cluster, '') + Assoc_dict["parent_account"] = stringOrNone(slurmdb_report_assoc.parent_acct, '') + Assoc_dict["user"] = stringOrNone(slurmdb_report_assoc.user, '') Assoc_dict["tres_list"] = [] tres_itr = slurm.slurm_list_iterator_create(slurmdb_report_assoc.tres_list) @@ -5935,8 +5966,8 @@ cdef class slurmdb_reports: Tres_dict["rec_count"] = tres.rec_count Tres_dict["count"] = tres.count Tres_dict["id"] = tres.id - Tres_dict["name"] = slurm.stringOrNone(tres.name, '') - Tres_dict["type"] = slurm.stringOrNone(tres.type, '') + Tres_dict["name"] = stringOrNone(tres.name, '') + Tres_dict["type"] = stringOrNone(tres.type, '') Assoc_dict["tres_list"].append(Tres_dict) Cluster_dict[cluster_name] = Assoc_dict @@ -5966,7 +5997,7 @@ def get_last_slurm_error(): if rc == 0: return (rc, 'Success') else: - return (rc, slurm.stringOrNone(slurm.slurm_strerror(rc), '')) + return (rc, stringOrNone(slurm.slurm_strerror(rc), '')) cdef inline dict __get_licenses(char *licenses): """Returns a dict of licenses from the slurm license string. @@ -5983,7 +6014,7 @@ cdef inline dict __get_licenses(char *licenses): cdef: dict licDict = {} int i = 0 - list alist = slurm.listOrNone(licenses, ',') + list alist = listOrNone(licenses, ',') int listLen = len(alist) if alist: @@ -6014,7 +6045,7 @@ def get_trigger_res_type(uint16_t inx): """Returns a string that represents the slurm trigger res type. Args: - ResType (int): Slurm trigger res state + ResType (int): Slurm trigger res state * TRIGGER_RES_TYPE_JOB 1 * TRIGGER_RES_TYPE_NODE 2 * TRIGGER_RES_TYPE_SLURMCTLD 3 @@ -6147,7 +6178,6 @@ cdef inline object __get_trigger_type(uint32_t TriggerType): # - RESERVE_FLAG_NO_PART_NODES 0x00002000 # - RESERVE_FLAG_OVERLAP 0x00004000 # - RESERVE_FLAG_SPEC_NODES 0x00008000 -# - RESERVE_FLAG_FIRST_CORES 0x00010000 # - RESERVE_FLAG_TIME_FLOAT 0x00020000 # - RESERVE_FLAG_REPLACE 0x00040000 # :returns: Reservation state string @@ -6539,7 +6569,7 @@ def get_job_state(inx): (str): Job state string """ try: - job_state = slurm.stringOrNone(slurm.slurm_job_state_string(inx), '') + job_state = stringOrNone(slurm.slurm_job_state_string(inx), '') return job_state except: pass @@ -6554,7 +6584,7 @@ def get_job_state_reason(inx): Returns: (str): Reason string """ - job_reason = slurm.stringOrNone(slurm.slurm_job_reason_string(inx), '') + job_reason = stringOrNone(slurm.slurm_job_reason_string(inx), '') return job_reason @@ -6628,7 +6658,7 @@ cdef class licenses: def ids(self): """Return the current license names from retrieved license data. - + This method calls slurm_load_licenses to retrieve license information from the controller. slurm_free_license_info_msg is used to free the license message buffer. 
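A rough sketch of how ids() relates to get() below (hypothetical cluster; the per-license keys are the ones get() populates from the controller response):

    import pyslurm

    lic = pyslurm.licenses()
    print(lic.ids())    # just the current license names
    for name, info in lic.get().items():
        print(name, info["total"], info["in_use"], info["available"])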
@@ -6656,7 +6686,7 @@ cdef class licenses: return all_licenses else: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) def get(self): """Get full license information from the slurm controller. @@ -6682,7 +6712,7 @@ cdef class licenses: for record in self._msg.lic_array[:self._msg.num_lic]: License_dict = {} - license_name = slurm.stringOrNone(record.name, '') + license_name = stringOrNone(record.name, '') License_dict["total"] = record.total License_dict["in_use"] = record.in_use License_dict["available"] = record.available @@ -6693,4 +6723,4 @@ cdef class licenses: return self._licDict else: apiError = slurm.slurm_get_errno() - raise ValueError(slurm.stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) + raise ValueError(stringOrNone(slurm.slurm_strerror(apiError), ''), apiError) diff --git a/pyslurm/slurm/__init__.pxd b/pyslurm/slurm/__init__.pxd index f29bfc00..52230259 100644 --- a/pyslurm/slurm/__init__.pxd +++ b/pyslurm/slurm/__init__.pxd @@ -1,3 +1,6 @@ +# cython: c_string_type=unicode, c_string_encoding=default +# cython: language_level=3 + from libcpp cimport bool from cpython.version cimport PY_MAJOR_VERSION @@ -87,3 +90,10 @@ include "extra.pxi" # Just keeping them around here for now. Ideally they shouldn't be # within this slurm c-api package, and should be defined somewhere else. include "helpers.pxi" + +# Additional Features added to the Headers after initial release of the new +# Major version +cdef uint8_t ENFORCE_BINDING_GRES +cdef uint8_t ONE_TASK_PER_SHARING_GRES +cdef uint64_t GRES_ONE_TASK_PER_SHARING +cdef uint64_t GRES_MULT_TASKS_PER_SHARING diff --git a/pyslurm/slurm/__init__.pyx b/pyslurm/slurm/__init__.pyx new file mode 100644 index 00000000..ddc12d33 --- /dev/null +++ b/pyslurm/slurm/__init__.pyx @@ -0,0 +1,7 @@ +# cython: c_string_type=unicode, c_string_encoding=default +# cython: language_level=3 + +ENFORCE_BINDING_GRES = 0x0040 +ONE_TASK_PER_SHARING_GRES = 0x0080 +GRES_ONE_TASK_PER_SHARING = 1 << 38 +GRES_MULT_TASKS_PER_SHARING = 1 << 39 diff --git a/pyslurm/slurm/extra.pxi b/pyslurm/slurm/extra.pxi index 3557b0b9..43fe8ef3 100644 --- a/pyslurm/slurm/extra.pxi +++ b/pyslurm/slurm/extra.pxi @@ -5,7 +5,7 @@ # For example: to communicate with the slurmctld directly in order # to retrieve the actual batch-script as a string. 
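#
# The pattern throughout this file is to re-declare such private
# symbols so Cython can call them. For instance, the hostlist helper
# declared near the end of this file looks like:
#
#   cdef extern char *slurm_hostlist_deranged_string_xmalloc(hostlist_t *hl)
#
# Anything mirrored this way must match the exact Slurm release being
# built against, which is why the reference links here are pinned to a
# specific commit hash.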
# -# https://github.com/SchedMD/slurm/blob/26abe9188ea8712ba1eab4a8eb6322851f06a108/src/common/slurm_persist_conn.h#L51 +# https://github.com/SchedMD/slurm/blob/2354049372e503af3217f94d65753abc440fa178/src/common/slurm_persist_conn.h#L54 ctypedef enum persist_conn_type_t: PERSIST_TYPE_NONE = 0 PERSIST_TYPE_DBD @@ -14,20 +14,23 @@ ctypedef enum persist_conn_type_t: PERSIST_TYPE_HA_DBD PERSIST_TYPE_ACCT_UPDATE -# https://github.com/SchedMD/slurm/blob/26abe9188ea8712ba1eab4a8eb6322851f06a108/src/common/slurm_persist_conn.h#L59 +# https://github.com/SchedMD/slurm/blob/2354049372e503af3217f94d65753abc440fa178/src/common/slurm_persist_conn.h#L63 ctypedef struct persist_msg_t: void *conn void *data uint32_t data_size uint16_t msg_type -ctypedef int (*_slurm_persist_conn_t_callback_proc) (void *arg, persist_msg_t *msg, buf_t **out_buffer, uint32_t *uid) +ctypedef int (*_slurm_persist_conn_t_callback_proc) (void *arg, persist_msg_t *msg, buf_t **out_buffer) ctypedef void (*_slurm_persist_conn_t_callback_fini)(void *arg) -# https://github.com/SchedMD/slurm/blob/26abe9188ea8712ba1eab4a8eb6322851f06a108/src/common/slurm_persist_conn.h#L66 +# https://github.com/SchedMD/slurm/blob/2354049372e503af3217f94d65753abc440fa178/src/common/slurm_persist_conn.h#L70 ctypedef struct slurm_persist_conn_t: void *auth_cred + uid_t auth_uid + gid_t auth_gid + bool auth_ids_set _slurm_persist_conn_t_callback_proc callback_proc _slurm_persist_conn_t_callback_fini callback_fini char *cluster_name @@ -46,24 +49,25 @@ ctypedef struct slurm_persist_conn_t: slurm_trigger_callbacks_t trigger_callbacks; uint16_t version -# https://github.com/SchedMD/slurm/blob/20e2b354168aeb0f76d67f80122d80925c2ef32b/src/common/pack.h#L68 +# https://github.com/SchedMD/slurm/blob/2354049372e503af3217f94d65753abc440fa178/src/common/pack.h#L68 ctypedef struct buf_t: uint32_t magic char *head uint32_t size uint32_t processed bool mmaped + bool shadow -# https://github.com/SchedMD/slurm/blob/20e2b354168aeb0f76d67f80122d80925c2ef32b/src/common/pack.h#L68 +# https://github.com/SchedMD/slurm/blob/2354049372e503af3217f94d65753abc440fa178/src/common/slurm_protocol_defs.h#L998 ctypedef struct return_code_msg_t: uint32_t return_code -# https://github.com/SchedMD/slurm/blob/fe82218def7b57f5ecda9222e80662ebbb6415f8/src/common/slurm_protocol_defs.h#L650 +# https://github.com/SchedMD/slurm/blob/2354049372e503af3217f94d65753abc440fa178/src/common/slurm_protocol_defs.h#L687 ctypedef struct job_id_msg_t: uint32_t job_id uint16_t show_flags -# https://github.com/SchedMD/slurm/blob/fe82218def7b57f5ecda9222e80662ebbb6415f8/src/common/slurm_protocol_defs.h#L216 +# https://github.com/SchedMD/slurm/blob/2354049372e503af3217f94d65753abc440fa178/src/common/slurm_protocol_defs.h#L229 # Only partially defined - not everything needed at the moment. 
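#
# As an illustration, a controller round-trip with these types might
# look like the following sketch (REQUEST_BATCH_SCRIPT, the msg_type
# and data fields, and the NULL cluster argument are assumptions here,
# and all error handling is omitted):
#
#   cdef slurm_msg_t req, resp
#   cdef job_id_msg_t payload
#   slurm_msg_t_init(&req)
#   slurm_msg_t_init(&resp)
#   payload.job_id = 1234                  # hypothetical job id
#   payload.show_flags = 0
#   req.msg_type = REQUEST_BATCH_SCRIPT    # answered by RESPONSE_BATCH_SCRIPT
#   req.data = &payload
#   slurm_send_recv_controller_msg(&req, &resp, NULL)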
ctypedef enum slurm_msg_type_t: REQUEST_SHARE_INFO = 2022 @@ -71,16 +75,18 @@ ctypedef enum slurm_msg_type_t: RESPONSE_BATCH_SCRIPT = 2052 RESPONSE_SLURM_RC = 8001 -# https://github.com/SchedMD/slurm/blob/fe82218def7b57f5ecda9222e80662ebbb6415f8/src/common/slurm_protocol_defs.h#L469 +# https://github.com/SchedMD/slurm/blob/2354049372e503af3217f94d65753abc440fa178/src/common/slurm_protocol_defs.h#L504 ctypedef struct forward_t: + slurm_node_alias_addrs_t alias_addrs uint16_t cnt uint16_t init char *nodelist uint32_t timeout uint16_t tree_width -# https://github.com/SchedMD/slurm/blob/fe82218def7b57f5ecda9222e80662ebbb6415f8/src/common/slurm_protocol_defs.h#L491 +# https://github.com/SchedMD/slurm/blob/2354049372e503af3217f94d65753abc440fa178/src/common/slurm_protocol_defs.h#L527 ctypedef struct forward_struct_t: + slurm_node_alias_addrs_t *alias_addrs char *buf int buf_len uint16_t fwd_cnt @@ -89,13 +95,14 @@ ctypedef struct forward_struct_t: List ret_list uint32_t timeout -# https://github.com/SchedMD/slurm/blob/fe82218def7b57f5ecda9222e80662ebbb6415f8/src/common/slurm_protocol_defs.h#L514 +# https://github.com/SchedMD/slurm/blob/2354049372e503af3217f94d65753abc440fa178/src/common/slurm_protocol_defs.h#L544 ctypedef struct slurm_msg_t: slurm_addr_t address void *auth_cred int auth_index uid_t auth_uid - bool auth_uid_set + gid_t auth_gid + bool auth_ids_set uid_t restrict_uid bool restrict_uid_set uint32_t body_offset @@ -116,10 +123,10 @@ ctypedef struct slurm_msg_t: # https://github.com/SchedMD/slurm/blob/fe82218def7b57f5ecda9222e80662ebbb6415f8/src/common/slurm_protocol_defs.c#L865 cdef extern void slurm_free_return_code_msg(return_code_msg_t *msg) -# https://github.com/SchedMD/slurm/blob/2d2e83674b59410a7ed8ab6fc8d8acfcfa8beaf9/src/common/slurm_protocol_api.c#L2401 +# https://github.com/SchedMD/slurm/blob/2354049372e503af3217f94d65753abc440fa178/src/common/slurm_protocol_api.h#L440 cdef extern int slurm_send_recv_controller_msg(slurm_msg_t *request_msg, slurm_msg_t *response_msg, - slurmdb_cluster_rec_t *working_cluster_rec) + slurmdb_cluster_rec_t *comm_cluster_rec) # https://github.com/SchedMD/slurm/blob/fe82218def7b57f5ecda9222e80662ebbb6415f8/src/common/slurm_protocol_defs.c#L168 cdef extern void slurm_msg_t_init(slurm_msg_t *msg) @@ -176,7 +183,7 @@ cdef extern slurm_conf_t slurm_conf cdef extern from "pyslurm/slurm/xmalloc.h" nogil: void xfree(void *__p) void *xmalloc(size_t __sz) - void *try_xmalloc(size_t __sz) + void *try_xmalloc(size_t __sz) cdef extern void slurm_xfree_ptr(void *) @@ -266,7 +273,7 @@ cdef extern from *: void bit_free(bitstr_t *_X) void FREE_NULL_BITMAP(bitstr_t *_X) -cdef extern char *slurm_hostlist_deranged_string_malloc(hostlist_t hl) +cdef extern char *slurm_hostlist_deranged_string_xmalloc(hostlist_t *hl) # # slurmdb functions diff --git a/pyslurm/slurm/helpers.pxi b/pyslurm/slurm/helpers.pxi index 26845c79..abff2594 100644 --- a/pyslurm/slurm/helpers.pxi +++ b/pyslurm/slurm/helpers.pxi @@ -1,71 +1,3 @@ -cdef inline FREE_NULL_LIST(List _X): - if _X: - slurm_list_destroy(_X) - - _X = NULL - - -cdef inline listOrNone(char* value, sep_char): - if value is NULL: - return [] - - if not sep_char: - return value.decode("UTF-8", "replace") - - if sep_char == '': - return value.decode("UTF-8", "replace") - - return value.decode("UTF_8", "replace").split(sep_char) - - -cdef inline stringOrNone(char* value, value2): - if value is NULL: - if value2 is '': - return None - return value2 - return value.decode("UTF-8", "replace") - - -cdef inline int16orNone(uint16_t 
value): - if value is NO_VAL16: - return None - else: - return value - - -cdef inline int32orNone(uint32_t value): - if value is NO_VAL: - return None - else: - return value - - -cdef inline int64orNone(uint64_t value): - if value is NO_VAL64: - return None - else: - return value - - -cdef inline int16orUnlimited(uint16_t value, return_type): - if value is INFINITE16: - if return_type is "int": - return None - else: - return "UNLIMITED" - else: - if return_type is "int": - return value - else: - return str(value) - - -cdef inline boolToString(int value): - if value == 0: - return 'False' - return 'True' - - # # Job States # diff --git a/pyslurm/slurm/slurm.h.pxi b/pyslurm/slurm/slurm.h.pxi index e7d89ad5..ded4fc24 100644 --- a/pyslurm/slurm/slurm.h.pxi +++ b/pyslurm/slurm/slurm.h.pxi @@ -9,7 +9,7 @@ # * C-Macros are listed with their appropriate uint type # * Any definitions that cannot be translated are not included in this file # -# Generated on 2023-05-06T18:02:46.408139 +# Generated on 2023-12-11T22:40:42.522638 # # The Original Copyright notice from slurm.h has been included # below: @@ -109,6 +109,9 @@ cdef extern from "slurm/slurm.h": uint16_t MAIL_JOB_STAGE_OUT uint16_t MAIL_ARRAY_TASKS uint16_t MAIL_INVALID_DEPEND + uint8_t PARSE_FLAGS_IGNORE_NEW + uint8_t PARSE_FLAGS_CHECK_PERMISSIONS + uint8_t PARSE_FLAGS_INCLUDE_ONLY uint8_t ARRAY_TASK_REQUEUED uint32_t NICE_OFFSET uint8_t PARTITION_SUBMIT @@ -192,12 +195,14 @@ cdef extern from "slurm/slurm.h": uint8_t CR_CORE uint8_t CR_BOARD uint8_t CR_MEMORY - uint8_t CR_OTHER_CONS_RES uint16_t CR_ONE_TASK_PER_CORE uint16_t CR_PACK_NODES + uint16_t LL_SHARED_GRES uint16_t CR_OTHER_CONS_TRES uint16_t CR_CORE_DEFAULT_DIST_BLOCK uint16_t CR_LLN + uint16_t MULTIPLE_SHARING_GRES_PJ + uint16_t CR_LINEAR uint64_t MEM_PER_CPU uint16_t SHARED_FORCE uint8_t PRIVATE_DATA_JOBS @@ -252,7 +257,6 @@ cdef extern from "slurm/slurm.h": uint32_t RESET_ACCRUE_TIME uint32_t CRON_JOB uint32_t JOB_MEM_SET - uint32_t JOB_RESIZED uint32_t USE_DEFAULT_ACCT uint32_t USE_DEFAULT_PART uint32_t USE_DEFAULT_QOS @@ -312,7 +316,6 @@ cdef extern from "slurm/slurm.h": uint16_t RESERVE_FLAG_NO_PART_NODES uint16_t RESERVE_FLAG_OVERLAP uint16_t RESERVE_FLAG_SPEC_NODES - uint32_t RESERVE_FLAG_FIRST_CORES uint32_t RESERVE_FLAG_TIME_FLOAT uint32_t RESERVE_FLAG_REPLACE uint32_t RESERVE_FLAG_ALL_NODES @@ -333,6 +336,8 @@ cdef extern from "slurm/slurm.h": uint64_t RESERVE_FLAG_SKIP uint64_t RESERVE_FLAG_HOURLY uint64_t RESERVE_FLAG_NO_HOURLY + uint64_t RESERVE_FLAG_GRES_REQ + uint64_t RESERVE_TRES_PER_NODE uint8_t DEBUG_FLAG_SELECT_TYPE uint8_t DEBUG_FLAG_STEPS uint8_t DEBUG_FLAG_TRIGGERS @@ -349,6 +354,7 @@ cdef extern from "slurm/slurm.h": uint16_t DEBUG_FLAG_GANG uint16_t DEBUG_FLAG_RESERVATION uint16_t DEBUG_FLAG_FRONT_END + uint32_t DEBUG_FLAG_SACK uint32_t DEBUG_FLAG_SWITCH uint32_t DEBUG_FLAG_ENERGY uint32_t DEBUG_FLAG_EXT_SENSORS @@ -428,6 +434,7 @@ cdef extern from "slurm/slurm.h": uint8_t LOG_FMT_SHORT uint8_t LOG_FMT_THREAD_ID uint8_t LOG_FMT_RFC3339 + uint16_t LOG_FMT_FORMAT_STDERR uint8_t STAT_COMMAND_RESET uint8_t STAT_COMMAND_GET uint8_t TRIGGER_FLAG_PERM @@ -464,6 +471,7 @@ cdef extern from "slurm/slurm.h": uint8_t ASSOC_MGR_INFO_FLAG_USERS uint8_t ASSOC_MGR_INFO_FLAG_QOS uint8_t KILL_JOB_BATCH + uint8_t KILL_ARRAY_TASK uint8_t KILL_STEPS_ONLY uint8_t KILL_FULL_JOB uint8_t KILL_FED_REQUEUE @@ -472,6 +480,7 @@ cdef extern from "slurm/slurm.h": uint8_t KILL_NO_SIBS uint16_t KILL_JOB_RESV uint16_t KILL_NO_CRON + uint16_t KILL_NO_SIG_FAIL uint16_t WARN_SENT uint8_t 
BB_FLAG_DISABLE_PERSISTENT uint8_t BB_FLAG_ENABLE_PERSISTENT @@ -503,8 +512,6 @@ cdef extern from "slurm/slurm.h": ctypedef slurmdb_cluster_rec slurmdb_cluster_rec_t - ctypedef slurm_job_credential slurm_cred_t - ctypedef switch_jobinfo switch_jobinfo_t ctypedef job_resources job_resources_t @@ -738,6 +745,8 @@ cdef extern from "slurm/slurm.h": WAIT_QOS_MAX_BILLING_PER_ACCT WAIT_QOS_MIN_BILLING WAIT_RESV_DELETED + WAIT_RESV_INVALID + FAIL_CONSTRAINTS cdef enum job_acct_types: JOB_START @@ -749,6 +758,7 @@ cdef extern from "slurm/slurm.h": AUTH_PLUGIN_NONE AUTH_PLUGIN_MUNGE AUTH_PLUGIN_JWT + AUTH_PLUGIN_SLURM cdef enum hash_plugin_type: HASH_PLUGIN_DEFAULT @@ -758,11 +768,9 @@ cdef extern from "slurm/slurm.h": HASH_PLUGIN_CNT cdef enum select_plugin_type: - SELECT_PLUGIN_CONS_RES SELECT_PLUGIN_LINEAR SELECT_PLUGIN_SERIAL SELECT_PLUGIN_CRAY_LINEAR - SELECT_PLUGIN_CRAY_CONS_RES SELECT_PLUGIN_CONS_TRES SELECT_PLUGIN_CRAY_CONS_TRES @@ -939,38 +947,40 @@ cdef extern from "slurm/slurm.h": SSF_INTERACTIVE SSF_MEM_ZERO SSF_OVERLAP_FORCE + SSF_NO_SIG_FAIL + SSF_EXT_LAUNCHER + + cdef enum topology_plugin_type: + TOPOLOGY_PLUGIN_DEFAULT + TOPOLOGY_PLUGIN_3DTORUS + TOPOLOGY_PLUGIN_TREE + TOPOLOGY_PLUGIN_BLOCK void slurm_init(const char* conf) void slurm_fini() - void slurm_client_init_plugins() - - void slurm_client_fini_plugins() - - ctypedef hostlist* hostlist_t + ctypedef hostlist hostlist_t - hostlist_t slurm_hostlist_create(const char* hostlist) + hostlist_t* slurm_hostlist_create(const char* hostlist) - int slurm_hostlist_count(hostlist_t hl) + int slurm_hostlist_count(hostlist_t* hl) - void slurm_hostlist_destroy(hostlist_t hl) + void slurm_hostlist_destroy(hostlist_t* hl) - int slurm_hostlist_find(hostlist_t hl, const char* hostname) + int slurm_hostlist_find(hostlist_t* hl, const char* hostname) - int slurm_hostlist_push(hostlist_t hl, const char* hosts) + int slurm_hostlist_push(hostlist_t* hl, const char* hosts) - int slurm_hostlist_push_host(hostlist_t hl, const char* host) + int slurm_hostlist_push_host(hostlist_t* hl, const char* host) - ssize_t slurm_hostlist_ranged_string(hostlist_t hl, size_t n, char* buf) + ssize_t slurm_hostlist_ranged_string(hostlist_t* hl, size_t n, char* buf) - char* slurm_hostlist_ranged_string_malloc(hostlist_t hl) + char* slurm_hostlist_ranged_string_xmalloc(hostlist_t* hl) - char* slurm_hostlist_ranged_string_xmalloc(hostlist_t hl) + char* slurm_hostlist_shift(hostlist_t* hl) - char* slurm_hostlist_shift(hostlist_t hl) - - void slurm_hostlist_uniq(hostlist_t hl) + void slurm_hostlist_uniq(hostlist_t* hl) ctypedef xlist* List @@ -1053,6 +1063,13 @@ cdef extern from "slurm/slurm.h": ctypedef power_mgmt_data power_mgmt_data_t + ctypedef struct slurm_node_alias_addrs_t: + time_t expiration + char* net_cred + slurm_addr_t* node_addrs + uint32_t node_cnt + char* node_list + cdef struct job_descriptor: char* account char* acctg_freq @@ -1096,6 +1113,7 @@ cdef extern from "slurm/slurm.h": uint64_t fed_siblings_viable uint32_t group_id uint32_t het_job_offset + void* id uint16_t immediate uint32_t job_id char* job_id_str @@ -1393,7 +1411,11 @@ cdef extern from "slurm/slurm.h": uint16_t plane_size cdef struct slurm_step_layout: + uint16_t* cpt_compact_array + uint32_t cpt_compact_cnt + uint32_t* cpt_compact_reps char* front_end + slurm_node_alias_addrs_t* alias_addrs uint32_t node_cnt char* node_list uint16_t plane_size @@ -1516,7 +1538,6 @@ cdef extern from "slurm/slurm.h": char** env char* container char* cwd - bool user_managed_io uint32_t msg_timeout uint16_t 
ntasks_per_board uint16_t ntasks_per_core @@ -1558,9 +1579,12 @@ cdef extern from "slurm/slurm.h": uint16_t max_cores uint16_t max_threads uint16_t cpus_per_task + uint16_t* cpt_compact_array + uint32_t cpt_compact_cnt + uint32_t* cpt_compact_reps uint16_t threads_per_core uint32_t task_dist - char* partition + uint16_t tree_width bool preserve_env char* mpi_plugin_name uint8_t open_mode @@ -1709,6 +1733,8 @@ cdef extern from "slurm/slurm.h": char* gres char* gres_drain char* gres_used + char* instance_id + char* instance_type time_t last_busy char* mcs_label uint64_t mem_spec_limit @@ -1781,6 +1807,7 @@ cdef extern from "slurm/slurm.h": cdef struct topo_info_response_msg: uint32_t record_count topo_info_t* topo_array + dynamic_plugin_data_t* topo_info ctypedef topo_info_response_msg topo_info_response_msg_t @@ -1962,16 +1989,17 @@ cdef extern from "slurm/slurm.h": char* accounts char* burst_buffer char* comment - uint32_t* core_cnt + uint32_t core_cnt uint32_t duration time_t end_time char* features uint64_t flags char* groups + void* job_ptr char* licenses uint32_t max_start_delay char* name - uint32_t* node_cnt + uint32_t node_cnt char* node_list char* partition uint32_t purge_comp_time @@ -2070,8 +2098,6 @@ cdef extern from "slurm/slurm.h": char* job_comp_type char* job_comp_user char* job_container_plugin - char* job_credential_private_key - char* job_credential_public_certificate list_t* job_defaults_list uint16_t job_file_append uint16_t job_requeue @@ -2153,7 +2179,6 @@ cdef extern from "slurm/slurm.h": uint16_t resv_over_run char* resv_prolog uint16_t ret2service - char* route_plugin char* sched_logfile uint16_t sched_log_level char* sched_params @@ -2252,6 +2277,8 @@ cdef extern from "slurm/slurm.h": char* features char* features_act char* gres + char* instance_id + char* instance_type char* node_addr char* node_hostname char* node_names @@ -2276,7 +2303,7 @@ cdef extern from "slurm/slurm.h": cdef struct job_sbcast_cred_msg: uint32_t job_id char* node_list - sbcast_cred_t* sbcast_cred + void* sbcast_cred ctypedef job_sbcast_cred_msg job_sbcast_cred_msg_t @@ -2309,6 +2336,8 @@ cdef extern from "slurm/slurm.h": uint32_t schedule_cycle_sum uint32_t schedule_cycle_counter uint32_t schedule_cycle_depth + uint32_t* schedule_exit + uint32_t schedule_exit_cnt uint32_t schedule_queue_len uint32_t jobs_submitted uint32_t jobs_started @@ -2325,6 +2354,8 @@ cdef extern from "slurm/slurm.h": uint64_t bf_cycle_sum uint32_t bf_cycle_last uint32_t bf_cycle_max + uint32_t* bf_exit + uint32_t bf_exit_cnt uint32_t bf_last_depth uint32_t bf_last_depth_try uint32_t bf_depth_sum @@ -2484,7 +2515,7 @@ cdef extern from "slurm/slurm.h": int slurm_kill_job(uint32_t job_id, uint16_t signal, uint16_t flags) - int slurm_kill_job_step(uint32_t job_id, uint32_t step_id, uint16_t signal) + int slurm_kill_job_step(uint32_t job_id, uint32_t step_id, uint16_t signal, uint16_t flags) int slurm_kill_job2(const char* job_id, uint16_t signal, uint16_t flags, const char* sibling) @@ -2630,6 +2661,8 @@ cdef extern from "slurm/slurm.h": int slurm_get_node_energy(char* host, uint16_t context_id, uint16_t delta, uint16_t* sensors_cnt, acct_gather_energy_t** energy) + int slurm_get_node_alias_addrs(char* node_list, slurm_node_alias_addrs_t** alias_addrs) + void slurm_free_node_info_msg(node_info_msg_t* node_buffer_ptr) void slurm_print_node_info_msg(FILE* out, node_info_msg_t* node_info_msg_ptr, int one_liner) @@ -2664,9 +2697,7 @@ cdef extern from "slurm/slurm.h": void slurm_free_topo_info_msg(topo_info_response_msg_t* 
msg) - void slurm_print_topo_info_msg(FILE* out, topo_info_response_msg_t* topo_info_msg_ptr, int one_liner) - - void slurm_print_topo_record(FILE* out, topo_info_t* topo_ptr, int one_liner) + void slurm_print_topo_info_msg(FILE* out, topo_info_response_msg_t* topo_info_msg_ptr, char* node_list, int one_liner) int slurm_get_select_nodeinfo(dynamic_plugin_data_t* nodeinfo, select_nodedata_type data_type, node_states state, void* data) diff --git a/pyslurm/slurm/slurm_errno.h.pxi b/pyslurm/slurm/slurm_errno.h.pxi index 3ed2d122..790fe213 100644 --- a/pyslurm/slurm/slurm_errno.h.pxi +++ b/pyslurm/slurm/slurm_errno.h.pxi @@ -9,7 +9,7 @@ # * C-Macros are listed with their appropriate uint type # * Any definitions that cannot be translated are not included in this file # -# Generated on 2023-05-06T18:02:46.304407 +# Generated on 2023-12-11T22:40:42.328758 # # The Original Copyright notice from slurm_errno.h has been included # below: @@ -119,6 +119,7 @@ cdef extern from "slurm/slurm_errno.h": ESLURM_DEPENDENCY ESLURM_BATCH_ONLY ESLURM_LICENSES_UNAVAILABLE + ESLURM_TAKEOVER_NO_HEARTBEAT ESLURM_JOB_HELD ESLURM_INVALID_CRED_TYPE_CHANGE ESLURM_INVALID_TASK_MEMORY @@ -140,6 +141,7 @@ cdef extern from "slurm/slurm_errno.h": ESLURM_PORTS_INVALID ESLURM_PROLOG_RUNNING ESLURM_NO_STEPS + ESLURM_MISSING_WORK_DIR ESLURM_INVALID_QOS ESLURM_QOS_PREEMPTION_LOOP ESLURM_NODE_NOT_AVAIL @@ -236,6 +238,16 @@ cdef extern from "slurm/slurm_errno.h": ESLURM_INVALID_HET_STEP_JOB ESLURM_JOB_TIMEOUT_KILLED ESLURM_JOB_NODE_FAIL_KILLED + ESLURM_EMPTY_LIST + ESLURM_GROUP_ID_INVALID + ESLURM_GROUP_ID_UNKNOWN + ESLURM_USER_ID_INVALID + ESLURM_USER_ID_UNKNOWN + ESLURM_INVALID_ASSOC + ESLURM_NODE_ALREADY_EXISTS + ESLURM_NODE_TABLE_FULL + ESLURM_INVALID_RELATIVE_QOS + ESLURM_INVALID_EXTRA ESPANK_ERROR ESPANK_BAD_ARG ESPANK_NOT_TASK @@ -288,6 +300,8 @@ cdef extern from "slurm/slurm_errno.h": ESLURM_DB_QUERY_TOO_WIDE ESLURM_DB_CONNECTION_INVALID ESLURM_NO_REMOVE_DEFAULT_ACCOUNT + ESLURM_BAD_SQL + ESLURM_NO_REMOVE_DEFAULT_QOS ESLURM_FED_CLUSTER_MAX_CNT ESLURM_FED_CLUSTER_MULTIPLE_ASSIGNMENT ESLURM_INVALID_CLUSTER_FEATURE @@ -300,6 +314,12 @@ cdef extern from "slurm/slurm_errno.h": ESLURM_PLUGIN_INVALID ESLURM_PLUGIN_INCOMPLETE ESLURM_PLUGIN_NOT_LOADED + ESLURM_PLUGIN_NOTFOUND + ESLURM_PLUGIN_ACCESS_ERROR + ESLURM_PLUGIN_DLOPEN_FAILED + ESLURM_PLUGIN_INIT_FAILED + ESLURM_PLUGIN_MISSING_NAME + ESLURM_PLUGIN_BAD_VERSION ESLURM_REST_INVALID_QUERY ESLURM_REST_FAIL_PARSING ESLURM_REST_INVALID_JOBS_DESC @@ -319,6 +339,7 @@ cdef extern from "slurm/slurm_errno.h": ESLURM_DATA_AMBIGUOUS_MODIFY ESLURM_DATA_AMBIGUOUS_QUERY ESLURM_DATA_PARSE_NOTHING + ESLURM_DATA_INVALID_PARSER ESLURM_CONTAINER_NOT_CONFIGURED ctypedef struct slurm_errtab_t: diff --git a/pyslurm/slurm/slurmdb.h.pxi b/pyslurm/slurm/slurmdb.h.pxi index d4c16e4e..2c7bb862 100644 --- a/pyslurm/slurm/slurmdb.h.pxi +++ b/pyslurm/slurm/slurmdb.h.pxi @@ -9,7 +9,7 @@ # * C-Macros are listed with their appropriate uint type # * Any definitions that cannot be translated are not included in this file # -# Generated on 2023-05-06T18:02:46.554956 +# Generated on 2023-12-11T22:40:42.798426 # # The Original Copyright notice from slurmdb.h has been included # below: @@ -63,6 +63,9 @@ cdef extern from "slurm/slurmdb.h": uint8_t QOS_FLAG_OVER_PART_QOS uint16_t QOS_FLAG_NO_DECAY uint16_t QOS_FLAG_USAGE_FACTOR_SAFE + uint16_t QOS_FLAG_RELATIVE + uint16_t QOS_FLAG_RELATIVE_SET + uint16_t QOS_FLAG_PART_QOS uint32_t SLURMDB_RES_FLAG_BASE uint32_t SLURMDB_RES_FLAG_NOTSET uint32_t SLURMDB_RES_FLAG_ADD @@ 
-120,6 +123,7 @@ cdef extern from "slurm/slurmdb.h": uint8_t SLURMDB_EVENT_COND_OPEN uint8_t DB_CONN_FLAG_CLUSTER_DEL uint8_t DB_CONN_FLAG_ROLLBACK + uint8_t DB_CONN_FLAG_FEDUPDATE cdef extern from "slurm/slurmdb.h": @@ -186,7 +190,7 @@ cdef extern from "slurm/slurmdb.h": SLURMDB_ADD_RES SLURMDB_REMOVE_RES SLURMDB_MODIFY_RES - SLURMDB_REMOVE_QOS_USAGE + SLURMDB_UPDATE_QOS_USAGE SLURMDB_ADD_TRES SLURMDB_UPDATE_FEDS @@ -353,6 +357,7 @@ cdef extern from "slurm/slurmdb.h": uint16_t is_def slurmdb_assoc_usage_t* leaf_usage uint32_t lft + char* lineage uint32_t max_jobs uint32_t max_jobs_accrue uint32_t max_submit_jobs @@ -380,6 +385,15 @@ cdef extern from "slurm/slurmdb.h": ctypedef slurmdb_assoc_rec slurmdb_assoc_rec_t + ctypedef struct slurmdb_add_assoc_cond_t: + list_t* acct_list + slurmdb_assoc_rec_t assoc + list_t* cluster_list + char* default_acct + list_t* partition_list + list_t* user_list + list_t* wckey_list + cdef struct slurmdb_assoc_usage: uint32_t accrue_cnt List children_list @@ -414,7 +428,6 @@ cdef extern from "slurm/slurmdb.h": List federation_list uint32_t flags List format_list - List plugin_id_select_list List rpc_version_list time_t usage_end time_t usage_start @@ -506,6 +519,25 @@ cdef extern from "slurm/slurmdb.h": uint32_t flags List cluster_list + ctypedef struct slurmdb_instance_cond_t: + List cluster_list + List extra_list + List format_list + List instance_id_list + List instance_type_list + char* node_list + time_t time_end + time_t time_start + + ctypedef struct slurmdb_instance_rec_t: + char* cluster + char* extra + char* instance_id + char* instance_type + char* node_name + time_t time_end + time_t time_start + ctypedef struct slurmdb_job_rec_t: char* account char* admin_comment @@ -537,6 +569,7 @@ cdef extern from "slurm/slurmdb.h": uint32_t jobid char* jobname uint32_t lft + char* lineage char* licenses char* mcs_label char* nodes @@ -637,6 +670,7 @@ cdef extern from "slurm/slurmdb.h": uint16_t preempt_mode uint32_t preempt_exempt_time uint32_t priority + uint64_t* relative_tres_cnt slurmdb_qos_usage_t* usage double usage_factor double usage_thres @@ -875,8 +909,7 @@ cdef extern from "slurm/slurmdb.h": char* acct uint32_t count List groups - uint32_t lft - uint32_t rgt + char* lineage List tres_list ctypedef struct slurmdb_report_cluster_grouping_t: @@ -916,6 +949,8 @@ cdef extern from "slurm/slurmdb.h": int slurmdb_accounts_add(void* db_conn, List acct_list) + char* slurmdb_accounts_add_cond(void* db_conn, slurmdb_add_assoc_cond_t* add_assoc, slurmdb_account_rec_t* acct) + List slurmdb_accounts_get(void* db_conn, slurmdb_account_cond_t* acct_cond) List slurmdb_accounts_modify(void* db_conn, slurmdb_account_cond_t* acct_cond, slurmdb_account_rec_t* acct) @@ -1000,6 +1035,8 @@ cdef extern from "slurm/slurmdb.h": List slurmdb_events_get(void* db_conn, slurmdb_event_cond_t* event_cond) + List slurmdb_instances_get(void* db_conn, slurmdb_instance_cond_t* instance_cond) + List slurmdb_problems_get(void* db_conn, slurmdb_assoc_cond_t* assoc_cond) List slurmdb_reservations_get(void* db_conn, slurmdb_reservation_cond_t* resv_cond) @@ -1020,6 +1057,8 @@ cdef extern from "slurm/slurmdb.h": void slurmdb_destroy_qos_usage(void* object) + void slurmdb_free_user_rec_members(slurmdb_user_rec_t* slurmdb_user) + void slurmdb_destroy_user_rec(void* object) void slurmdb_destroy_account_rec(void* object) @@ -1044,6 +1083,8 @@ cdef extern from "slurm/slurmdb.h": void slurmdb_destroy_event_rec(void* object) + void slurmdb_destroy_instance_rec(void* object) + void 
slurmdb_destroy_job_rec(void* object) void slurmdb_free_qos_rec_members(slurmdb_qos_rec_t* qos) @@ -1086,6 +1127,8 @@ cdef extern from "slurm/slurmdb.h": void slurmdb_destroy_event_cond(void* object) + void slurmdb_destroy_instance_cond(void* object) + void slurmdb_destroy_job_cond(void* object) void slurmdb_destroy_qos_cond(void* object) @@ -1100,6 +1143,10 @@ cdef extern from "slurm/slurmdb.h": void slurmdb_destroy_archive_cond(void* object) + void slurmdb_free_add_assoc_cond_members(slurmdb_add_assoc_cond_t* add_assoc) + + void slurmdb_destroy_add_assoc_cond(void* object) + void slurmdb_destroy_update_object(void* object) void slurmdb_destroy_used_limits(void* object) @@ -1134,12 +1181,16 @@ cdef extern from "slurm/slurmdb.h": void slurmdb_init_federation_rec(slurmdb_federation_rec_t* federation, bool free_it) + void slurmdb_init_instance_rec(slurmdb_instance_rec_t* instance) + void slurmdb_init_qos_rec(slurmdb_qos_rec_t* qos, bool free_it, uint32_t init_val) void slurmdb_init_res_rec(slurmdb_res_rec_t* res, bool free_it) void slurmdb_init_wckey_rec(slurmdb_wckey_rec_t* wckey, bool free_it) + void slurmdb_init_add_assoc_cond(slurmdb_add_assoc_cond_t* add_assoc, bool free_it) + void slurmdb_init_tres_cond(slurmdb_tres_cond_t* tres, bool free_it) void slurmdb_init_cluster_cond(slurmdb_cluster_cond_t* cluster, bool free_it) @@ -1148,7 +1199,7 @@ cdef extern from "slurm/slurmdb.h": void slurmdb_init_res_cond(slurmdb_res_cond_t* cluster, bool free_it) - List slurmdb_get_hierarchical_sorted_assoc_list(List assoc_list, bool use_lft) + List slurmdb_get_hierarchical_sorted_assoc_list(List assoc_list) List slurmdb_get_acct_hierarchical_rec_list(List assoc_list) @@ -1180,6 +1231,10 @@ cdef extern from "slurm/slurmdb.h": int slurmdb_users_add(void* db_conn, List user_list) + char* slurmdb_users_add_cond(void* db_conn, slurmdb_add_assoc_cond_t* add_assoc, slurmdb_user_rec_t* user) + + List slurmdb_users_add_conn(void* db_conn, slurmdb_user_rec_t* user, slurmdb_assoc_cond_t* assoc_cond, slurmdb_assoc_rec_t* assoc) + List slurmdb_users_get(void* db_conn, slurmdb_user_cond_t* user_cond) List slurmdb_users_modify(void* db_conn, slurmdb_user_cond_t* user_cond, slurmdb_user_rec_t* user) diff --git a/pyslurm/utils/helpers.pyx b/pyslurm/utils/helpers.pyx index 4d5f6d0c..577a1c9a 100644 --- a/pyslurm/utils/helpers.pyx +++ b/pyslurm/utils/helpers.pyx @@ -175,17 +175,17 @@ def nodelist_from_range_str(nodelist): cdef: char *nl = nodelist - slurm.hostlist_t hl + slurm.hostlist_t *hl char *hl_unranged = NULL hl = slurm.slurm_hostlist_create(nl) if not hl: return [] - hl_unranged = slurm.slurm_hostlist_deranged_string_malloc(hl) + hl_unranged = slurm.slurm_hostlist_deranged_string_xmalloc(hl) out = cstr.to_list(hl_unranged) - free(hl_unranged) + xfree(hl_unranged) slurm.slurm_hostlist_destroy(hl) return out @@ -206,17 +206,17 @@ def nodelist_to_range_str(nodelist): cdef: char *nl = nodelist - slurm.hostlist_t hl + slurm.hostlist_t *hl char *hl_ranged = NULL hl = slurm.slurm_hostlist_create(nl) if not hl: return None - hl_ranged = slurm.slurm_hostlist_ranged_string_malloc(hl) + hl_ranged = slurm.slurm_hostlist_ranged_string_xmalloc(hl) out = cstr.to_unicode(hl_ranged) - free(hl_ranged) + xfree(hl_ranged) slurm.slurm_hostlist_destroy(hl) return out diff --git a/setup.cfg b/setup.cfg index ba3ad0b6..755e3840 100644 --- a/setup.cfg +++ b/setup.cfg @@ -10,7 +10,7 @@ packager = Giovanni Torres doc_files = README.md examples/ build_requires = python3-devel >= 3.6 - slurm-devel >= 23.02.0 + slurm-devel >= 23.11.0 requires 
= slurm use_bzip2 = 1 diff --git a/tests/unit/test_job_submit.py b/tests/unit/test_job_submit.py index 5720f75f..c7fc78a2 100644 --- a/tests/unit/test_job_submit.py +++ b/tests/unit/test_job_submit.py @@ -350,6 +350,7 @@ def test_parsing_sbatch_options_from_script(): #SBATCH --exclusive #SBATCH --ntasks = 2 #SBATCH -c=3 # inline-comments should be ignored + #SBATCH --gres-flags=one-task-per-sharing,enforce-binding sleep 1000 """ @@ -364,8 +365,10 @@ def test_parsing_sbatch_options_from_script(): assert job.resource_sharing == "no" assert job.ntasks == 5 assert job.cpus_per_task == "3" + assert job.gres_tasks_per_sharing == "one-task-per-sharing" + assert job.gres_binding == "enforce-binding" - job = job_desc(ntasks=5) + job = job_desc(ntasks=5, gres_binding="disable-binding") job.script = path job.load_sbatch_options(overwrite=True) assert job.time_limit == "20" @@ -374,6 +377,8 @@ def test_parsing_sbatch_options_from_script(): assert job.resource_sharing == "no" assert job.ntasks == "2" assert job.cpus_per_task == "3" + assert job.gres_tasks_per_sharing == "one-task-per-sharing" + assert job.gres_binding == "enforce-binding" finally: os.remove(path) - +
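The new --gres-flags handling exercised in this test can also be driven directly from the API rather than parsed out of a batch script. A minimal sketch (hypothetical job values; assumes pyslurm.JobSubmitDescription accepts the same gres_binding and gres_tasks_per_sharing attributes that the job_desc helper above wraps):

    import pyslurm

    desc = pyslurm.JobSubmitDescription(
        name="gres-sharing-example",        # hypothetical name
        ntasks=2,
        gres_binding="enforce-binding",
        gres_tasks_per_sharing="one-task-per-sharing",
        script="/path/to/batch_script.sh",  # hypothetical path
    )
    job_id = desc.submit()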