
qmanager: send partial-ok with sched.hello #1321

Open · garlick wants to merge 7 commits into master from partial_ok

Conversation

@garlick (Member) commented Dec 17, 2024

This changes qmanager to invoke the sched.hello request with the flag that tells it to allow partial allocations through. schedutil then diminishes R in the hello responses by the free set (if present) before handing it to the hello callback in qmanager. We were wondering if fluxion would perhaps just handle this.
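
For illustration, here is roughly what "diminishing R by the free set" means. The ranks, cores, and free idset below are made up (and R is abridged), not taken from a real run:

R recorded for the job (4 nodes allocated):

  {"version": 1, "execution": {"R_lite": [{"rank": "0-3", "children": {"core": "0-3"}}]}}

free idset in the hello response (rank 0 still busy in housekeeping):

  "1-3"

R handed to qmanager's hello callback after schedutil subtracts the free ranks:

  {"version": 1, "execution": {"R_lite": [{"rank": "0", "children": {"core": "0-3"}}]}}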

flux-framework/flux-core#6445 must be merged before it can work with flux-core master, however, preliminary testing with that branch appears to show a problem. The sharness test added here fails when I reload the scheduler with one node in housekeeping, and fluxion says:

Dec 17 00:52:30.860604 UTC sched-fluxion-resource.err[0]: parse_R: json_unpack
Dec 17 00:52:30.860614 UTC sched-fluxion-resource.err[0]: run_update: parsing R: Invalid argument
Dec 17 00:52:30.860616 UTC sched-fluxion-resource.err[0]: update_request_cb: update failed (id=46825209856): Invalid argument
Dec 17 00:52:30.860730 UTC sched-fluxion-qmanager.err[0]: jobmanager_hello_cb: reconstruct (id=46825209856 queue=default): Invalid argument
Dec 17 00:52:30.860842 UTC sched-fluxion-qmanager.err[0]: error raising fatal exception on f2ELnSXR: job is inactive: No such file or directory

So there is a little more work to do here to run that down. Marking this as a WIP.

@grondo (Contributor) left a comment

You may have already done this, but I modified Fluxion's parse_R to print the json_unpack error and got:

Dec 17 16:40:58.375707 UTC sched-fluxion-resource.err[0]: parse_R: json_unpack: Expected integer, got real

@@ -560,7 +560,7 @@ static std::shared_ptr<qmanager_ctx_t> qmanager_new (flux_t *h)
     }
     if (!(ctx->schedutil =
               schedutil_create (ctx->h,
-                                SCHEDUTIL_FREE_NOLOOKUP,
+                                SCHEDUTIL_HELLO_PARTIAL_OK,
@grondo (Contributor) commented Dec 17, 2024

Suggestion: if this code is modified to use the SCHEDUTIL_HELLO_PARTIAL_OK macro only if it is defined, then flux-sched won't need to have a new dependency on an as-yet unreleased version of flux-core.
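
A minimal sketch of that suggestion, assuming the flag becomes a preprocessor define in <flux/schedutil.h> (per flux-framework/flux-core#6520):

#include <flux/schedutil.h>

static int qmanager_schedutil_flags (void)
{
    int flags = 0;
#ifdef SCHEDUTIL_HELLO_PARTIAL_OK
    /* newer flux-core: ask the job manager to hand back jobs with
     * partially released resources during the hello handshake
     */
    flags |= SCHEDUTIL_HELLO_PARTIAL_OK;
#endif
    return flags;  /* passed as the flags argument to schedutil_create () */
}

With this, flux-sched built against an older flux-core simply omits the flag and keeps the current behavior.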

@grondo (Contributor) commented Dec 17, 2024

RFC 20 allows optional microsecond precision in R starttime and expiration (and I'm guessing that's what Fluxion is tripping over here), so I think parse_R needs a simple fix to accept real/float for starttime and expiration.

(FYI - I did make that simple change and your new test passes)
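
A tiny standalone illustration of the jansson conversions involved (my own sketch, not code from the PR): the 'I' conversion rejects a JSON real, while 'F' accepts an integer or a real and converts it to double:

#include <jansson.h>
#include <stdio.h>

int main (void)
{
    /* an expiration with sub-second precision, as RFC 20 allows */
    json_t *o = json_pack ("{s:f}", "expiration", 1734465600.25);
    json_int_t i;
    double d;

    if (json_unpack (o, "{s:I}", "expiration", &i) < 0)
        printf ("'I' fails (jansson: Expected integer, got real)\n");
    if (json_unpack (o, "{s:F}", "expiration", &d) == 0)
        printf ("'F' succeeds: expiration=%.2f\n", d);
    json_decref (o);
    return 0;
}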

@grondo (Contributor) commented Dec 17, 2024

Changes for reference (probably needs a clang-format pass with the flux-sched project style):

diff --git a/resource/modules/resource_match.cpp b/resource/modules/resource_match.cpp
index f8554555..42d3fcae 100644
--- a/resource/modules/resource_match.cpp
+++ b/resource/modules/resource_match.cpp
@@ -1528,8 +1528,8 @@ static int parse_R (std::shared_ptr<resource_ctx_t> &ctx,
     int rc = 0;
     int version = 0;
     int saved_errno;
-    uint64_t st = 0;
-    uint64_t et = 0;
+    double tstart = 0.;
+    double expiration = 0.;
     json_t *o = NULL;
     json_t *graph = NULL;
     json_error_t error;
@@ -1541,23 +1541,24 @@ static int parse_R (std::shared_ptr<resource_ctx_t> &ctx,
         errno = EINVAL;
         goto out;
     }
-    if ((rc = json_unpack (o,
-                           "{s:i s:{s:I s:I} s?:o}",
-                           "version",
-                           &version,
-                           "execution",
-                           "starttime",
-                           &st,
-                           "expiration",
-                           &et,
-                           "scheduling",
-                           &graph))
-        < 0) {
+    if ((rc = json_unpack_ex (o,
+                              &error,
+                              0,
+                              "{s:i s:{s:F s:F} s?:o}",
+                              "version", &version,
+                              "execution",
+                               "starttime", &tstart,
+                               "expiration", &expiration,
+                              "scheduling", &graph)) < 0) {
         errno = EINVAL;
-        flux_log (ctx->h, LOG_ERR, "%s: json_unpack", __FUNCTION__);
+        flux_log (ctx->h,
+                  LOG_ERR,
+                  "%s: json_unpack: %s",
+                  __FUNCTION__,
+                  error.text);
         goto freemem_out;
     }
-    if (version != 1 || st < 0 || et < st) {
+    if (version != 1 || tstart < 0 || expiration < tstart) {
         rc = -1;
         errno = EPROTO;
         flux_log (ctx->h,
@@ -1565,8 +1566,8 @@ static int parse_R (std::shared_ptr<resource_ctx_t> &ctx,
                   "%s: version=%d, starttime=%jd, expiration=%jd",
                   __FUNCTION__,
                   version,
-                  static_cast<intmax_t> (st),
-                  static_cast<intmax_t> (et));
+                  static_cast<intmax_t> (tstart),
+                  static_cast<intmax_t> (expiration));
         goto freemem_out;
     }
     if (graph != NULL) {
@@ -1585,8 +1586,8 @@ static int parse_R (std::shared_ptr<resource_ctx_t> &ctx,
         format = "rv1exec";
     }
 
-    starttime = static_cast<int64_t> (st);
-    duration = et - st;
+    starttime = static_cast<int64_t> (tstart);
+    duration = static_cast<uint64_t> (expiration - tstart);
 
 freemem_out:
     saved_errno = errno;

@garlick (Member, Author) commented Dec 17, 2024

Nice! Thanks! I will add those changes to the PR.

@garlick (Member, Author) commented Dec 17, 2024

OK, pushed the following:

  • @grondo's fix for parse_R()
  • modification to previous raw RPC changes to avoid dependency on flux-core 0.70
  • now that flux-core has changed the schedutil flags to preprocessor defines, use the PARTIAL_OK one conditionally
  • make the test conditional also

I'm not sure if we need a test that covers partial release with JGF as well. I'll leave this as a WIP until that's figured out.

Problem: the raw RPC interfaces are changing to use a size_t
instead of int for payload size.

Earlier qmanager was changed to use size_t.  It didn't occur to
me at the time, but we can avoid using the raw RPC interface entirely
since all payloads are strings.  This allows fluxion to continue to
work with older versions.

Do that instead.
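
A hedged sketch of the idea in this commit message; the topic string and helper name are placeholders, not the actual qmanager code. Since the payload is a NUL-terminated JSON string, the string-based flux_rpc() works with both old and new flux-core, so the int-vs-size_t change in flux_rpc_raw() never comes into play:

#include <flux/core.h>

static flux_future_t *send_sched_request (flux_t *h, const char *payload)
{
    /* before (tied to the raw interface and its changing length type):
     *   f = flux_rpc_raw (h, "sched.example", payload, strlen (payload),
     *                     FLUX_NODEID_ANY, 0);
     */
    flux_future_t *f;

    if (!(f = flux_rpc (h, "sched.example", payload, FLUX_NODEID_ANY, 0)))
        flux_log_error (h, "flux_rpc sched.example");
    return f;
}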
@garlick (Member, Author) commented Dec 17, 2024

Another push:

  • fixed formatting
  • fixed race between qmanager hello handshake and tests

@garlick (Member, Author) commented Dec 18, 2024

Strange. The new test is failing on some builders. The test output indicates that the scheduler successfully set the partial-ok flag:

job-manager.debug[0]: scheduler: hello +partial-ok

but then later, the job manager behaves as though it's not set:

job-manager.err[0]: housekeeping: fv-az1945-857 (rank 0) from f2Mt8nf9 will be terminated because scheduler does not support restart with partially released resources

Bah, I've been staring at this for a while and think I'll step away and try again tonight or tomorrow.

@garlick (Member, Author) commented Dec 18, 2024

Reran the failing tests and they all work now!?!

@grondo (Contributor) commented Dec 18, 2024

Is it possible that the flux-core docker images hadn't yet updated?

@garlick (Member, Author) commented Dec 18, 2024

That must be it, but the +partial-ok message in the logs is printed by the job manager, added by flux-framework/flux-core#6445 (the new flux-core version). The only way I can think of that happening and then the test failing with the job manager message indicating that the scheduler doesn't support partial-ok is if, when the test reloads the fluxion modules, it's loading an older version that doesn't set the flag. But that seems impossible.

I'm somewhat inclined to just move on from this since it is working now, but it is disconcerting!

@grondo (Contributor) commented Dec 18, 2024

I was thinking it was a delay in the availability of the switch to preprocessor defines for schedutil flags (flux-framework/flux-core#6520), but since the +partial-ok message was present in the logs, I'm also mystified.

@garlick force-pushed the partial_ok branch 3 times, most recently from 07d200e to a91309c on December 18, 2024 at 21:15
@garlick (Member, Author) commented Dec 18, 2024

Just pushed a test with match-format=rv1, which fails.

The test does the following:

  • runs a 4 node job, arranging for 1 node to get stuck in housekeeping
  • confirms job completed, housekeeping launched, and one node stuck
  • fluxion thinks only 1 node is allocated (CORRECT)
  • reloads the scheduler with the node still stuck
  • fluxion thinks 4 nodes are allocated (WRONG)
  • kills the stuck node
  • fluxion thinks 0 nodes are allocated (CORRECT)

My guess is that the JGF reader is just reading the scheduling key and ignoring the ranks in the Rv1 object, but I have not confirmed this.

@garlick (Member, Author) commented Dec 18, 2024

@milroy if you have any hints on this one, much appreciated!

Here's the part of the test that's failing


expecting success: 
	remove_qmanager &&
	reload_resource match-format=rv1 &&
	load_qmanager_sync &&
	flux resource list &&
	FLUX_RESOURCE_LIST_RPC=sched.resource-status flux resource list

{
 "queues": {
  "default": {
   "policy": "fcfs",
   "queue_depth": 32,
   "max_queue_depth": 1000000,
   "queue_parameters": {},
   "policy_parameters": {},
   "action_counts": {
    "pending": 0,
    "running": 0,
    "reserved": 0,
    "rejected": 0,
    "complete": 0,
    "cancelled": 0,
    "reprioritized": 0
   },
   "pending_queues": {
    "pending": [],
    "pending_provisional": [],
    "blocked": []
   },
   "scheduled_queues": {
    "running": [],
    "rejected": [],
    "canceled": []
   }
  }
 }
}
     STATE NNODES NCORES NGPUS NODELIST
      free      4     16     0 system76-pc,system76-pc,system76-pc,system76-pc
 allocated      0      0     0 
      down      0      0     0 
     STATE NNODES NCORES NGPUS NODELIST
      free      4     16     0 system76-pc,system76-pc,system76-pc,system76-pc
 allocated      0      0     0 
      down      0      0     0 
ok 22 - reload fluxion modules with match-format=rv1

expecting success: 
	flux run -N4 true
	hk_wait_for_running 1 &&
	hk_wait_for_allocated_nnodes 1

ok 23 - run a job and wait for node to get stuck

expecting success: 
	test $(fluxion_allocated nnodes) -eq 1

ok 24 - fluxion shows 1 nodes allocated

expecting success: 
	remove_qmanager &&
	reload_resource match-format=rv1 &&
	load_qmanager_sync &&
	flux resource list &&
	FLUX_RESOURCE_LIST_RPC=sched.resource-status flux resource list

{
 "queues": {
  "default": {
   "policy": "fcfs",
   "queue_depth": 32,
   "max_queue_depth": 1000000,
   "queue_parameters": {},
   "policy_parameters": {},
   "action_counts": {
    "pending": 0,
    "running": 1,
    "reserved": 0,
    "rejected": 0,
    "complete": 0,
    "cancelled": 0,
    "reprioritized": 0
   },
   "pending_queues": {
    "pending": [],
    "pending_provisional": [],
    "blocked": []
   },
   "scheduled_queues": {
    "running": [
     "f2ZMoEBy"
    ],
    "rejected": [],
    "canceled": []
   }
  }
 }
}
     STATE NNODES NCORES NGPUS NODELIST
      free      3     12     0 system76-pc,system76-pc,system76-pc
 allocated      1      4     0 system76-pc
      down      0      0     0 
     STATE NNODES NCORES NGPUS NODELIST
      free      0      0     0 
 allocated      4     16     0 system76-pc,system76-pc,system76-pc,system76-pc
      down      0      0     0 
ok 25 - reload fluxion modules with match-format=rv1

expecting success: 
	test $(fluxion_allocated nnodes) -eq 1

not ok 26 - fluxion still shows 1 node allocated
#	
#		test $(fluxion_allocated nnodes) -eq 1
#	

expecting success: 
	flux housekeeping kill --all

Dec 18 22:55:51.858087 UTC job-manager.err[0]: housekeeping: system76-pc (rank 0) f2ZMoEBy: Terminated
ok 27 - kill housekeeping

expecting success: 
	hk_wait_for_running 0 &&
	test $(fluxion_allocated nnodes) -eq 0

ok 28 - fluxion shows 0 nodes allocated

expecting success: 
	remove_qmanager &&
	remove_resource &&
	flux module load sched-simple

ok 29 - unload fluxion modules

# failed 1 among 29 test(s)
1..29

@milroy (Member) commented Dec 19, 2024

@garlick the reason for the failure is that the free key isn't getting unpacked in jobmanager_hello_cb and a partial free isn't getting run on the extracted ranks. This change allows all tests to pass in the bookworm image based on your branch:

diff --git a/qmanager/modules/qmanager_callbacks.cpp b/qmanager/modules/qmanager_callbacks.cpp
index 56251f83..a2578b3c 100644
--- a/qmanager/modules/qmanager_callbacks.cpp
+++ b/qmanager/modules/qmanager_callbacks.cpp
@@ -150,15 +150,17 @@ int qmanager_cb_t::jobmanager_hello_cb (flux_t *h, const flux_msg_t *msg, const
     unsigned int prio;
     uint32_t uid;
     double ts;
+    const char *free_ranks = NULL;
     json_t *jobspec = NULL;
     flux_future_t *f = NULL;
+    std::string R_free;
 
     /* Don't expect jobspec to be set here as it is not currently defined
      * in RFC 27.  However, add it anyway in case the hello protocol
      * evolves to include it.  If it is not set, it must be looked up.
      */
     if (flux_msg_unpack (msg,
-                         "{s:I s:i s:i s:f s?o}",
+                         "{s:I s:i s:i s:f s?s s?o}",
                          "id",
                          &id,
                          "priority",
@@ -167,6 +169,8 @@ int qmanager_cb_t::jobmanager_hello_cb (flux_t *h, const flux_msg_t *msg, const
                          &uid,
                          "t_submit",
                          &ts,
+                         "free",
+                         &free_ranks,
                          "jobspec",
                          &jobspec)
         < 0) {
@@ -216,6 +220,22 @@ int qmanager_cb_t::jobmanager_hello_cb (flux_t *h, const flux_msg_t *msg, const
               "requeue success (queue=%s id=%jd)",
               queue_name.c_str (),
               static_cast<intmax_t> (id));
+    if (free_ranks) {
+        R_free = "{\"version\":1,\"execution\":{\"R_lite\":[{\"rank\":\"" + std::string (free_ranks) + "\"}]}}";
+        if (queue->remove (static_cast<void *> (h), id, false, R_free.c_str ()) < 0) {
+            flux_log_error (h,
+                            "%s: partial cancel (id=%jd queue=%s)",
+                            __FUNCTION__,
+                            static_cast<intmax_t> (id),
+                            queue_name.c_str ());
+            goto out;
+        }
+        flux_log (h,
+                  LOG_DEBUG,
+                  "partial cancel successful after requeue (queue=%s id=%jd)",
+                  queue_name.c_str (),
+                  static_cast<intmax_t> (id));
+    }
     rc = 0;
 out:
     flux_future_destroy (f);

@garlick (Member, Author) commented Dec 19, 2024

Progress! That does get the tests passing alright!

Any worries about the following:

  • The R object built here using the free ranks is technically invalid because it doesn't include the required children key (see RFC 20); a sketch of a children-complete version follows this comment
  • In the rv1_nosched case queue->remove() would be attempting to free ranks that aren't allocated, because they have already been subtracted from R passed in to the callback and then to queue->reconstruct()

I would have thought the correct solution would be to find where queue->reconstruct() is allocating resources for the rv1 (as opposed to rv1_nosched) case, and have that code free the intersection of the JGF and rv1 rank key.

However, I'm fine with this if you are!
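
For reference, a children-complete R_free could be assembled with jansson roughly as follows. The core idset "0-3" is a placeholder; a real fix would have to carry the children sets over from the job's original R. free_ranks is the idset string from the hello payload:

json_error_t err;
json_t *R_free = json_pack_ex (&err, 0,
                               "{s:i s:{s:[{s:s s:{s:s}}]}}",
                               "version", 1,
                               "execution",
                                 "R_lite",
                                   "rank", free_ranks,   /* e.g. "1-3" */
                                   "children",
                                     "core", "0-3");     /* placeholder */
char *R_free_str = R_free ? json_dumps (R_free, JSON_COMPACT) : NULL;
/* pass R_free_str to queue->remove (), then free () and json_decref () */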

@garlick (Member, Author) commented Dec 19, 2024

I just pushed that change, but I realized there may be a problem if any of the job's partially released resources had been reallocated to other jobs before the scheduler reload.

Then the partially released job would be attempting to allocate the same resources during hello, and fail.

Problem: RFC 20 allows optional microsecond precision in R starttime
and expiration, but the resource module's parse_R() function
assumes integer.

Update R parser to handle any JSON number for these fields.
Also log a more detailed error when parsing fails.

Co-authored-by: Mark A. Grondona <[email protected]>
@milroy (Member) commented Dec 19, 2024

The R object built here using the free ranks is technically invalid because it doesn't include the required children key (see RFC 20)

All the reader is doing in this case is extracting the ranks to be canceled from R (so R doesn't have to be valid). I could add the children key to the R_free string, or I could also modify the reader to accept a string with only a range of ranks. The latter would require more complex modifications.

  • In the rv1_nosched case queue->remove() would be attempting to free ranks that aren't allocated, because they have already been subtracted from R passed in to the callback and then to queue->reconstruct()

That could be a problem depending on how the partial cancel fails. I don't think it would be too hard to handle those failures, though.

I would have thought the correct solution would be to find where queue->reconstruct() is allocating resources for the rv1 (as opposed to rv1_nosched) case, and have that code free the intersection of the JGF and rv1 rank key.

I just pushed that change, but I realized there may be a problem if any of the job's partially released resources had been reallocated to other jobs before the scheduler reload.
Then the partially released job would be attempting to allocate the same resources during hello, and fail.

These two observations are closely related and represent a valid concern. There isn't currently a way to perform the intersection of the JGF and rv1 rank key, but I think adding support for the intersection would address these problems.
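
A rough sketch of what such an intersection could look like using flux-core's libidset; the function and variable names here are illustrative, not from the eventual implementation:

#include <flux/idset.h>

/* Return the set of ranks present in both the JGF and the rv1 R_lite
 * rank idsets, i.e. the ranks that are genuinely still allocated.
 */
static struct idset *rank_intersection (const struct idset *jgf_ranks,
                                        const struct idset *rv1_ranks)
{
    struct idset *result = idset_create (0, IDSET_FLAG_AUTOGROW);
    unsigned int id;

    if (!result)
        return NULL;
    id = idset_first (jgf_ranks);
    while (id != IDSET_INVALID_ID) {
        if (idset_test (rv1_ranks, id) && idset_set (result, id) < 0) {
            idset_destroy (result);
            return NULL;
        }
        id = idset_next (jgf_ranks, id);
    }
    return result;
}

(If the flux-core in use provides idset_intersect(), that call could replace the explicit loop.)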

@garlick (Member, Author) commented Dec 19, 2024

Just pushed an enhancement to the test that demonstrates the failure just described, where partially freed nodes have been reallocated to another job. I see this:

Dec 19 18:45:27.085776 UTC sched-fluxion-resource.err[0]: run: dfu_traverser_t::run (id=59877883904): update_vtx_plan: can't add span into core0.
Dec 19 18:45:27.085783 UTC sched-fluxion-resource.err[0]: run_update: run: Invalid argument
Dec 19 18:45:27.085786 UTC sched-fluxion-resource.err[0]: update_request_cb: update failed (id=59877883904): Invalid argument
Dec 19 18:45:27.085867 UTC sched-fluxion-qmanager.err[0]: jobmanager_hello_cb: reconstruct (id=59877883904 queue=default): Invalid argument
Dec 19 18:45:27.085955 UTC sched-fluxion-qmanager.err[0]: error raising fatal exception on f2aEComh: job is inactive: No such file or directory

and later when core tries to free resources it thinks are still allocated to the partially released job:

Dec 19 18:45:27.414696 UTC sched-fluxion-qmanager.err[0]: jobmanager_free_cb: can't find queue for job (id=59877883904): No such file or directory

@milroy (Member) commented Dec 20, 2024

@garlick I'll work on the JGF and rv1 rank intersection and send an update when it's ready.

On a related note, I don't seem to be able to fork your fork. I used to have a fork but must have deleted it and now GitHub thinks it still exists. Any ideas how I might make a PR to your fork?

@garlick (Member, Author) commented Dec 20, 2024

Thanks @milroy !

When working with someone else on a PR I usually add their fork as a remote, pull down their working branch, do some work, then push it to my own repo. Like

git remote add garlick https://github.com/garlick/flux-sched.git
git fetch garlick
git checkout partial_ok
git push origin partial_ok

(if origin is the remote for your fork)

Then you can just tell me to cherry pick from your copy of my branch when you have something for me to try.

@milroy (Member) commented Dec 21, 2024

When working with someone else on a PR I usually add their fork as a remote, pull down their working branch, do some work, then push it to my own repo. Like

@garlick thanks for the reminder, I'm pretty sure that's what I did last time and just forgot.

I've pushed my changes here: https://github.com/milroy/flux-sched/tree/partial_ok. Note that I included a revert commit for f27b290 because those changes shouldn't be included. You can just cherry pick the other commits and drop f27b290 from your branch. I also included a fixup for the testsuite fixup to correct the allocated node equality check.

milroy and others added 4 commits December 21, 2024 13:41
Problem: the JGF reader doesn't support excluding previously freed ranks
during update allocate. This leads to state divergence between sched
and core because Fluxion uses the initial, full JGF to reconstruct an
allocation state.

Add support for identifying and skipping vertices and edges whose ranks
correspond to those freed in previous partial cancellations. The JGF
reader unpacks the `free_ranks` key inserted by qmanager into the
`scheduling` key payload and uses that idset to exclude the corresponding
vertices and edges from the JGF while fetching and updating.
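
A sketch of the skip check described above; the names are illustrative and the real reader tracks ranks per vertex/edge internally:

#include <stdbool.h>
#include <stdint.h>
#include <flux/idset.h>

/* true if a JGF vertex/edge should be excluded from the update because its
 * rank appears in the freed idset decoded from `free_ranks` (e.g. "1-3")
 */
static bool skip_freed_rank (const struct idset *freed, int64_t rank)
{
    return freed != NULL && rank >= 0 && idset_test (freed, (unsigned int)rank);
}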
Problem: libschedutil has ignored the SCHEDUTIL_FREE_NOLOOKUP flag
(now the default) since flux-core-0.61.0.

Drop this flag.
Problem: when the scheduler is reloaded, housekeeping jobs with
partially allocated resources are canceled rather than being sent to
the scheduler for re-allocation during the hello handshake.

This is because there was no way for the job manager to inform the
scheduler of the free R subset.  However, RFC 27 now specifies that the
sched.hello request may contain a 'partial-ok' flag.  If set, hello
responses may include a 'free' key containing an RFC 22 idset, with
ids corresponding to ranks of R that are free.

Schedutil wraps this so that if schedutil_create() is called with the
SCHEDUTIL_HELLO_PARTIAL_OK flag, then
- the hello request includes partial-ok
- free ranks, if any, are subtracted from R in each response message
  before calling the scheduler's callback

Note that the scheduling key (JGF), if present, always contains the
original, full resource set.

Set SCHEDUTIL_HELLO_PARTIAL_OK flag if defined by flux-core.
Problem: the sched.hello RPC now includes a `free` key whose value is an
idset of previously partially released ranks. Currently Fluxion doesn't
unpack the key and handle the previously freed resources. The
rv1_nosched case doesn't need the freed ranks idset because core only
sends the R that is still allocated in R_lite. However, core doesn't
process JGF, so with match-format=rv1 the R contains a scheduling key
(JGF) representing the initial resource set. This leads to state
divergence between core and sched for the rv1 reader.

Add support for unpacking the `free` key and packing it into the JSON
`scheduling` payload for queue reconstruction and update allocate with
JGF.
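
A hedged sketch of the qmanager side of this: per the first commit message above, the freed ranks end up as a `free_ranks` key inside the scheduling (JGF) payload, though the exact packing shown here is an assumption:

/* attach the hello response's `free` idset to the scheduling payload so
 * the JGF reader can exclude those ranks during reconstruction
 */
json_t *sched = json_loads (scheduling_str, 0, NULL);
if (sched && free_ranks)
    json_object_set_new (sched, "free_ranks", json_string (free_ranks));
char *augmented = sched ? json_dumps (sched, JSON_COMPACT) : NULL;
/* hand `augmented` to queue->reconstruct (), then free () and json_decref () */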
@garlick (Member, Author) commented Dec 21, 2024

I did just that, plus reordered the commits in the PR to avoid breaking git bisect. I'll drop the WIP from the PR title since this seems complete now.

Thanks! Nice work @milroy!

@garlick changed the title from "WIP: qmanager: send partial-ok with sched.hello" to "qmanager: send partial-ok with sched.hello" on Dec 21, 2024
Problem: there is no test coverage for reloading the scheduler
with partially allocated jobs.

Add a test that runs if the scheduler was able to send the hello
request with the partial-ok flag.

Use the convenience scripts from sharness.d to load/reload the
scheduler modules, and load qmanager with synchronization so that
tests are not racing with the hello handshake that happens after
module loading completes.
codecov bot commented Dec 21, 2024

Codecov Report

Attention: Patch coverage is 70.00000% with 27 lines in your changes missing coverage. Please review.

Project coverage is 75.2%. Comparing base (3fb0b4e) to head (67ff8aa).

Files with missing lines                   Patch %   Missing lines
resource/readers/resource_reader_jgf.cpp    73.4%    13 ⚠️
qmanager/modules/qmanager.cpp               36.3%     7 ⚠️
qmanager/modules/qmanager_callbacks.cpp     73.9%     6 ⚠️
resource/modules/resource_match.cpp         85.7%     1 ⚠️
Additional details and impacted files
@@          Coverage Diff           @@
##           master   #1321   +/-   ##
======================================
  Coverage    75.2%   75.2%           
======================================
  Files         111     111           
  Lines       15986   16046   +60     
======================================
+ Hits        12029   12076   +47     
- Misses       3957    3970   +13     
Files with missing lines                   Coverage         Δ
resource/readers/resource_reader_jgf.hpp   100.0% <ø>      (ø)
resource/modules/resource_match.cpp         69.4% <85.7%>  (ø)
qmanager/modules/qmanager_callbacks.cpp     74.2% <73.9%>  (-0.2%) ⬇️
qmanager/modules/qmanager.cpp               70.3% <36.3%>  (+0.2%) ⬆️
resource/readers/resource_reader_jgf.cpp    71.0% <73.4%>  (+0.3%) ⬆️
