
Handle 503 reply code properly #2884

Closed
dragotin opened this issue Feb 24, 2015 · 6 comments
Labels: p2-high (Escalation, on top of current planning, release blocker), type:bug

Comments
@dragotin
Contributor

There are two scenarios where the server replies 503:

  1. Maintenance mode: the server is in maintenance mode, and access to the root folder returns error code 503. In that case the client needs to silently stop accessing the server entirely and calmly show the "not signed in" icon. After 30 seconds it should check whether the server is back. With 1.8.0 beta1 we show an error message instead.
  2. A PROPFIND on a specific folder can also return 503. That is the case if an external storage went away for whatever reason: the server detects that and returns 503 for that folder. The client has to IGNORE that directory, otherwise data loss can happen in the local repository.

We need to fix both cases; a rough sketch of the distinction follows below.
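
A minimal sketch of that distinction (illustrative C++ only; the DiscoveryAction enum and handlePropfindStatus are hypothetical names, not the client's actual code):

```cpp
#include <iostream>
#include <string>

// Hypothetical outcome of handling a PROPFIND reply during remote discovery.
enum class DiscoveryAction {
    AbortAndRetryLater,  // case 1: the whole server is in maintenance mode
    SkipDirectory,       // case 2: a single (external) storage is unavailable
    Proceed
};

// Sketch only: what to do with a 503, depending on where it occurred.
DiscoveryAction handlePropfindStatus(int httpStatus, const std::string& remotePath)
{
    if (httpStatus != 503)
        return DiscoveryAction::Proceed;

    if (remotePath == "/") {
        // Maintenance mode: stop syncing quietly and keep polling the server
        // (the issue asks for a retry roughly every 30 seconds).
        return DiscoveryAction::AbortAndRetryLater;
    }

    // A 503 on a subfolder means that storage is gone for now. The folder
    // must be ignored, NOT treated as deleted, or local data would be removed.
    return DiscoveryAction::SkipDirectory;
}

int main()
{
    std::cout << (handlePropfindStatus(503, "/") == DiscoveryAction::AbortAndRetryLater) << "\n";
    std::cout << (handlePropfindStatus(503, "/external") == DiscoveryAction::SkipDirectory) << "\n";
}
```

The key point is that only a root-level 503 should pause the whole account; a folder-level 503 must never be interpreted as a deletion.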

@dragotin dragotin added the type:bug and p2-high (Escalation, on top of current planning, release blocker) labels Feb 24, 2015
@dragotin dragotin added this to the 1.8 - UI Enhancements milestone Feb 24, 2015
@dragotin
Contributor Author

@ckamm I took the liberty of assigning this to you.

For case 1 (which can easily be tested by setting maintenance = true in the server's config.php), the problem is that in the Folder object and in FolderMan the variable _csyncUnavail is set to true when the server returns 503, but that flag is not handled properly: no reconnect retries happen.
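
A minimal sketch of the missing retry behavior, assuming a periodic probe of the server; ServerAvailabilityChecker and the probe callback are hypothetical, not the client's actual API:

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Hypothetical helper: once the server reported 503 (maintenance mode),
// keep probing it periodically instead of leaving the folder disabled forever.
class ServerAvailabilityChecker {
public:
    explicit ServerAvailabilityChecker(std::function<bool()> probeServer)
        : probeServer_(std::move(probeServer)) {}

    // Re-try every `interval` until the probe reports the server is back
    // (the issue suggests roughly 30 seconds between attempts).
    void waitUntilAvailable(std::chrono::seconds interval = std::chrono::seconds(30))
    {
        while (!probeServer_())
            std::this_thread::sleep_for(interval);
    }

private:
    std::function<bool()> probeServer_;
};
```

In the real client this would be driven by a timer in the event loop rather than a blocking wait, but the point is the same: a 503 should schedule a re-check instead of leaving _csyncUnavail set forever.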

@ckamm
Contributor

ckamm commented Feb 25, 2015

@dragotin Currently the behavior is that a 503 response makes the 'service unavailable' error appear in the UI and shows the account as disconnected (AccountState goes to NetworkError). You're saying we should hide the error and silently try again?

We could do that for a while, but I don't want to pretend to be connected for too long when we can't actually sync anything. The 503 response could have other causes and could be permanent. What is the problem we want to solve? Would retrying for a minute and then giving up be good enough?

I'll check out case 2 now.

@ckamm
Contributor

ckamm commented Feb 25, 2015

About case 2: the referenced commit fixes it. It would be really nice if we could distinguish this 'maintenance mode' and 'storage unavailable' 503 from regular 503 errors that could have other causes.
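
One way to tell these apart would be to inspect the DAV error body the server attaches to the 503 reply; the exact exception names below are assumptions, not verified server output — a rough sketch:

```cpp
#include <string>

// Hypothetical classification of a 503 reply. The substrings searched for are
// assumptions about what the server puts into its <s:exception>/<s:message>
// error body; real code would parse the XML instead of substring matching.
enum class Reason503 { MaintenanceMode, StorageNotAvailable, Other };

Reason503 classify503(const std::string& davErrorBody)
{
    if (davErrorBody.find("StorageNotAvailable") != std::string::npos)
        return Reason503::StorageNotAvailable;
    if (davErrorBody.find("ServiceUnavailable") != std::string::npos)
        return Reason503::MaintenanceMode;
    return Reason503::Other;
}
```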

@guruz
Contributor

guruz commented Feb 25, 2015

Refs #1923

ckamm added a commit to ckamm/owncloud-client that referenced this issue Feb 25, 2015
ckamm added a commit to ckamm/owncloud-client that referenced this issue Feb 25, 2015
ckamm added a commit to ckamm/owncloud-client that referenced this issue Feb 25, 2015
@ckamm ckamm added the ReadyToTest (QA, please validate the fix/enhancement) label Feb 25, 2015
@ckamm
Contributor

ckamm commented Feb 25, 2015

What was done:

  1. Receiving a 503 response due to server maintenance during connection validation disables sync scheduling and related activity, but keeps the green icon. Regular connection checking continues and restores the client to a 'connected' state once the 503 goes away.
  2. A 503 "storage not available" response during remote discovery makes the client ignore the whole directory.

What was not done: a server-maintenance 503 during remote discovery will still abort the sync with an error. (A simplified sketch of the new state handling follows below.)
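
A simplified sketch of the state handling described in point 1; the Account struct and its members are illustrative, not the client's actual AccountState API:

```cpp
#include <iostream>

// Illustrative only: mirrors the behavior described above, where a
// maintenance 503 pauses sync scheduling without flagging a network error.
enum class AccountUiState { Connected, MaintenanceMode, NetworkError };

struct Account {
    AccountUiState state = AccountUiState::Connected;
    bool syncSchedulingEnabled = true;

    // Called by the periodic connection validator.
    void onConnectionCheck(int httpStatus, bool isMaintenance503)
    {
        if (httpStatus == 503 && isMaintenance503) {
            // Keep the tray icon green, but stop scheduling syncs.
            state = AccountUiState::MaintenanceMode;
            syncSchedulingEnabled = false;
        } else if (httpStatus == 200) {
            // The regular check noticed the 503 went away: resume normal operation.
            state = AccountUiState::Connected;
            syncSchedulingEnabled = true;
        } else {
            state = AccountUiState::NetworkError;
            syncSchedulingEnabled = false;
        }
    }
};

int main()
{
    Account a;
    a.onConnectionCheck(503, true);   // maintenance: pause scheduling, no error state
    a.onConnectionCheck(200, false);  // server back: resume syncing
    std::cout << a.syncSchedulingEnabled << "\n"; // prints 1
}
```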

@luciamaestro

Tested case 1 in version 1.8.0-nightly20150302 (build 2068). It is fixed.
