Server/scheduler: Current default code doesn't inhibit work supply to faulty hosts #3061
#3024 has a detailed explanation of what it does. Please read it.
Yes, I read it when it was first opened, and I read it again before opening this issue. As you say, it has a clear explanation of what it does: what is missing is a description of why it does it. The patch in #3024 is directed at the validator: detecting problems at the end of the process, and setting conditions that should prevent them from growing into larger problems. But my purpose in opening this issue was to suggest that the inhibition conditions set by the validator may not be respected by the work allocation routines.

It is not the fault of the validator that the host I reported on had 1663 tasks in progress (now 1723) when it should only have 200: those tasks will not even be looked at by the validator until the transitioner sees the deadline, the month after next (and maybe not even then: I haven't inspected whether it is the validator or the transitioner which sets "Timed out - no response").

This issue contains a single, very clear example of a problem which has plagued SETI for many years - there is even a message board thread dedicated to volunteers who find they are paired with faulty hosts and try to reach out to their owners to offer help. But as users, we don't have access to the server logs which might explain why the scheduler isn't respecting the inhibitors set by the validators. Only @davidpanderson and @SETIguy have that access. Unless practice matches theory - that is, unless the scheduler behaves as intended - your work in #3024 is null and void.
@RichardHaselgrove PR #3024 works as intended and goes part way to solving this issue. The original issue I posted is #3009, and I have just linked that to the PR. The situation that I wanted to detect is where there is a permanent and detectable issue with the host that causes the tasks to fail. This can now be done: max_jobs_per_day is set to 1 and the host does not receive any more tasks. I see, though, that there is still room for an optimization. I have tried to play with the configuration values, but each day the host will still pull at least NCPU tasks before max_jobs_per_day is set to 1. My recommendation would be that, no matter what …
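To make the behaviour described above concrete, here is a minimal, self-contained sketch. It is not the scheduler source: it simply models the symptom reported in the comment (a host whose max_jobs_per_day has been driven to 1 can still obtain roughly one task per processor after each daily reset) and the clamp being asked for. The per-CPU scaling rule and all names are assumptions for illustration only.

```cpp
// Illustrative model only -- not sched_send.cpp. It assumes the effective
// daily allowance scales with processor count, which is one reading of
// "the host will pull at least NCPU tasks before max_jobs_per_day is set to 1".
#include <algorithm>
#include <cstdio>

int effective_daily_allowance(int max_jobs_per_day, int ncpus, bool clamp_failing_host) {
    int allowance = max_jobs_per_day * ncpus;   // assumed per-CPU scaling
    if (clamp_failing_host) {
        // suggested behaviour: a host already marked as faulty gets no
        // more than max_jobs_per_day tasks, regardless of NCPU
        allowance = std::min(allowance, max_jobs_per_day);
    }
    return allowance;
}

int main() {
    const int ncpus = 8;
    const int max_jobs_per_day = 1;   // already driven to the floor by errors
    std::printf("current behaviour:   %d task(s) after the daily reset\n",
                effective_daily_allowance(max_jobs_per_day, ncpus, false));
    std::printf("suggested behaviour: %d task(s) after the daily reset\n",
                effective_daily_allowance(max_jobs_per_day, ncpus, true));
}
```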
Thanks - that's much clearer. That's why the "Fixes: " prompt is in the new PR template, and it would be helpful if everyone could use it.

For what it's worth, my assumption is that the example host I'm using here is suffering from severe communications problems: I think that the majority of supposedly 'in progress' tasks are what users refer to as 'ghost tasks'. These are tasks which are allocated by the server in response to a request for work, but where the allocation reply is never received by the remote host. Completed task reports are held by the client until an affirmative 'ack' is received from the server: there is no corresponding ack that a client has received the allocated new work. My understanding is that the …

Note that the 'Assigned' status could only be used when the user's BOINC client version is declared to be one which supplies an …
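For readers unfamiliar with 'ghost tasks', here is a small sketch of the asymmetry described above. It is purely illustrative (not BOINC client or server code): assignments are recorded server-side before the scheduler reply reaches the client and are never acknowledged by the client, whereas result reports are retained by the client until the server acks them, so only the former can be lost silently.

```cpp
// Illustrative sketch of the asymmetry described above, not BOINC code.
// The server records an assignment before its reply reaches the client,
// and there is no client-side ack for new work, so a lost reply leaves
// a "ghost" task: in progress on the server, unknown to the client.
#include <cstdio>
#include <set>

int main() {
    std::set<int> server_in_progress;   // tasks the server believes the host has
    std::set<int> client_tasks;         // tasks the client actually received

    for (int task_id = 1; task_id <= 3; task_id++) {
        server_in_progress.insert(task_id);     // assignment committed first
        bool reply_delivered = (task_id != 2);  // simulate one lost scheduler reply
        if (reply_delivered) client_tasks.insert(task_id);
        // no ack step here: the server never learns whether the reply arrived
    }

    // result reports go the other way and *are* acked, so they cannot be lost silently
    std::printf("server thinks in progress: %zu, client actually has: %zu\n",
                server_in_progress.size(), client_tasks.size());
}
```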
@davidpanderson - can you review the following? The limits on tasks per day are governed by the checks in the method 'daily_quota_exceeded'. That method is called in get_app_version_anonymous here and in get_app_version here. In get_app_version, the code path for homogeneous app version, and where an app version id has already been set for the task, results in the app version being returned without regard to the user having reached their daily limit for the job. See here. For projects that use homogeneous app version, this could result in a client being able to obtain considerably more work than it should be able to. I think a fix would be something like putting the following at line 522 (see the sketch after this comment).
I don't know if this would match what is being seen but it looks suspicious.
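A hedged sketch of the kind of check being discussed. It is deliberately self-contained and simplified: the real get_app_version() and daily_quota_exceeded() in sched_version.cpp have different signatures and operate on scheduler state, so the types and helpers below are illustrative assumptions, not the actual fix.

```cpp
// Self-contained illustration of the proposed guard: even when a workunit
// already has an app version pinned (homogeneous app version, or an
// app_version_id set earlier), the pinned version should not be handed out
// once the host's daily quota for it is exhausted. Types and signatures
// are simplified stand-ins, not the real sched_version.cpp definitions.
#include <cstdio>

struct AppVersion {
    int id;
    int jobs_today;
    int max_jobs_per_day;
};

// assumed quota check, standing in for daily_quota_exceeded()
bool daily_quota_exceeded(const AppVersion& av) {
    return av.jobs_today >= av.max_jobs_per_day;
}

// simplified stand-in for the early-return path in get_app_version()
AppVersion* get_pinned_app_version(AppVersion* pinned) {
    if (!pinned) return nullptr;
    if (daily_quota_exceeded(*pinned)) {
        // proposed fix: refuse the pinned version once the daily limit
        // is reached, instead of returning it unconditionally
        return nullptr;
    }
    return pinned;
}

int main() {
    AppVersion av{42, /*jobs_today=*/1, /*max_jobs_per_day=*/1};
    AppVersion* result = get_pinned_app_version(&av);
    std::printf("version handed out: %s\n", result ? "yes" : "no (quota exceeded)");
}
```

In the real scheduler the equivalent guard would presumably reuse the existing daily_quota_exceeded() call already made on the other code paths, so the homogeneous-app-version path behaves consistently with them.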
Describe the bug
SETI@Home users have drawn attention to SETI host 8625200.
The host was created on 28 Nov 2018, by a user who joined the project the same day.
At the time of writing, the results.php display for the host is showing 1663 tasks in progress and 313 error results. Those figures include 48 apparently successful downloads so far today (14 March 2019), and 16 'Error while downloading'. I can find no evidence that the host has ever completed even a single task from any of the 10 application versions it has attempted.
Expected behavior
Hosts which consistently fail to return valid work should have their maximum task quota gradually reduced to no more than one task per application per day.
Additionally, for projects which enforce a 'tasks in progress' limit, that limit should apply to all hosts - it would be 200 in this case.
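As an illustration of "gradually reduced", here is a toy backoff scheme of the kind the quota mechanism is expected to implement. The halve-on-error / grow-on-success rule and the ceiling value are assumptions for the example, not the scheduler's actual arithmetic.

```cpp
// Sketch of a gradually reducing per-app-version daily quota; the
// halving/doubling rule is a generic backoff used for illustration only.
#include <algorithm>
#include <cstdio>

int main() {
    const int quota_ceiling = 100;   // assumed project default per app version
    int max_jobs_per_day = quota_ceiling;

    // a run of errored results drives the quota towards 1
    for (int errors = 0; errors < 8; errors++) {
        max_jobs_per_day = std::max(1, max_jobs_per_day / 2);
    }
    std::printf("after a run of errors: %d job(s)/day\n", max_jobs_per_day);

    // a validated result lets the quota recover
    max_jobs_per_day = std::min(quota_ceiling, max_jobs_per_day * 2);
    std::printf("after a valid result:  %d job(s)/day\n", max_jobs_per_day);
}
```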
Additional context
SETI is a project with multiple application versions and generous deadlines. Tasks require second-instance validation. Some of the 'in progress' tasks date back to 23 Jan 2019, and won't time out until 17 Mar 2019. The extended retention of so many task and workunit records is detrimental both to the project database and to fellow users.
I don't know whether this is related to the "punitive validation" mechanism described in #3024, and I can't know because the only context given in opening that PR by @davidpanderson was "This is for @lfield". Without an issue number, or the text of the email, we're none the wiser.
What I'm describing here isn't a need for 'punishment': all I'm suggesting is that normal operation of the regular restrictive rules appears to be broken. Fixing it might be all that @lfield requires.