
This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →


Better error handling #193

Closed
brain0 opened this issue Mar 20, 2018 · 5 comments


@brain0
Contributor

brain0 commented Mar 20, 2018

This issue is meant to be a tracking issue for better error handling in librespot. Right now, there are way too many calls to unwrap() on operations that can fail, resulting in application crashes.

This is particularly annoying when a connection loss results in a crash instead of a reconnect. It's also annoying when spurious server-side failures lead to crashes. We need to come up with a way to improve error handling globally in librespot.
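To illustrate the direction this could take: instead of calling unwrap() at the point of failure, the error can be propagated with `?` so a caller (e.g. the session loop) can decide to reconnect rather than panic. This is a minimal sketch with hypothetical names, not librespot's actual types:

```rust
// Hypothetical error type for illustration; librespot would define its own.
#[derive(Debug)]
enum SessionError {
    ConnectionLost,
}

// Stand-in for a fallible operation such as reading a packet from the
// server. Returning Result instead of panicking lets the caller recover.
fn read_packet(connected: bool) -> Result<Vec<u8>, SessionError> {
    if connected {
        Ok(vec![0x01])
    } else {
        Err(SessionError::ConnectionLost)
    }
}

// Instead of `read_packet(..).unwrap()`, the `?` operator bubbles the
// error up, so a connection loss never turns into a crash here.
fn handle(connected: bool) -> Result<usize, SessionError> {
    let packet = read_packet(connected)?;
    Ok(packet.len())
}

fn main() {
    assert_eq!(handle(true).unwrap(), 1);
    assert!(handle(false).is_err());
    println!("ok");
}
```

The point is that only the top-level caller, which knows whether reconnecting makes sense, should decide between retrying and aborting.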

@brain0
Contributor Author

brain0 commented Mar 20, 2018

Currently, the following issues are a consequence of the lack of proper error handling: #51 #134 #172 #183. I could probably add a few more by looking at my logs.

@sashahilton00
Member

sashahilton00 commented Mar 20, 2018

Indeed, there are a few key errors that are somewhat irritating:

  1. Reconnection is by far the biggest issue, as it's simply not implemented. Several open issues are tied to this.
  2. Protobuf errors. Protobuf has semi-integrated versioning: you add fields to the schema and recompile to get new bindings, while older bindings simply ignore the added fields. Yet whenever Spotify pushes a protobuf schema change, all previous librespot builds fail. We should ideally just warn about unrecognised protobuf fields and continue running. That would have prevented a raft of issues such as the recent one with kSupportsPlaylistV2, which realistically didn't break any logic in librespot. If anyone knows how to silence protobuf errors, or turn them into warnings, that would be appreciated.
  3. Spotify 5xx errors. I think it's safe to assume these are service-unavailable errors. If these occur, as in Random crash in result.rs #172, librespot should not crash, since chances are it's a brief outage or server restart. We should instead retry with backoff for ten minutes or so, informing the user via warn! at each attempt that Spotify appears to be offline, and only then panic: if the service is down for more than ten minutes, chances are something bigger is going wrong at Spotify.
  4. Unwraps. As touched on by @brain0, there are a bunch of unwraps that turn recoverable errors into crashes. We should start handling these properly so non-fatal errors don't bring down the program.
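The retry policy from point 3 could look something like the sketch below: exponential backoff capped at a maximum delay, giving up after roughly ten minutes. The names and concrete delays are illustrative assumptions, not an agreed design:

```rust
use std::cmp;
use std::time::Duration;

// Build the sequence of delays between reconnection attempts:
// start small, double each time, cap at 60s, and stop once the
// cumulative wait would exceed `max_total` (about ten minutes here).
fn backoff_schedule(max_total: Duration) -> Vec<Duration> {
    let mut delays = Vec::new();
    let mut delay = Duration::from_secs(5);
    let mut total = Duration::ZERO;
    while total + delay <= max_total {
        delays.push(delay);
        total += delay;
        // Double the delay, but cap it so later retries stay regular.
        delay = cmp::min(delay * 2, Duration::from_secs(60));
    }
    delays
}

fn main() {
    let schedule = backoff_schedule(Duration::from_secs(600));
    // Early retries come quickly; later ones settle at the 60s cap.
    assert_eq!(schedule.first(), Some(&Duration::from_secs(5)));
    assert!(schedule.iter().map(|d| d.as_secs()).sum::<u64>() <= 600);
    // At each attempt, librespot would warn!("Spotify appears to be
    // offline, retrying...") and, once the schedule is exhausted, panic.
    println!("{} retries within the ten-minute window", schedule.len());
}
```

Capping the delay keeps retries frequent enough to notice a quick server restart, while the overall ten-minute budget matches the threshold suggested above.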

@brain0
Contributor Author

brain0 commented Mar 20, 2018

@sashahilton00 Thanks for the summary.

Another reconnection story: I have librespot running on a headless machine. I run it in discovery mode and use Android via wifi to connect it to Spotify. It often happens that I listen to some music and then don't use it for several days. When I then try to connect again, it crashes instantly and only works after a restart. I need to capture a backtrace for this; I just installed a non-stripped build and can probably reproduce it tomorrow.

I agree on the protobuf thing. I have only dealt with protobuf in C++ and C#, and adding new fields was always non-fatal. If this isn't the case for the Rust implementation, that is a serious problem.

@brain0
Contributor Author

brain0 commented Mar 20, 2018

As for protobuf, the problem is here, where an error in an enum variant in a repeated field cannot be ignored. See also here.
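One way around the enum problem is to decode unknown discriminants into a catch-all variant instead of returning an error, so newer schema values are tolerated. This is a hedged sketch with made-up names, not the actual protobuf crate's or librespot's API:

```rust
// Illustrative enum standing in for a generated protobuf enum. The
// `Unknown` variant absorbs values added by a newer schema instead of
// failing the whole parse.
#[derive(Debug, PartialEq)]
enum Capability {
    SupportsPlaylistV2, // discriminant known at compile time
    Unknown(i32),       // anything a newer schema added later
}

// Tolerant decoding: an unrecognised wire value becomes Unknown(..)
// (a good place for a warn!()) rather than an Err that aborts parsing.
fn decode_capability(raw: i32) -> Capability {
    match raw {
        1 => Capability::SupportsPlaylistV2,
        other => Capability::Unknown(other),
    }
}

fn main() {
    assert_eq!(decode_capability(1), Capability::SupportsPlaylistV2);
    // A value from a newer schema no longer crashes the client:
    assert_eq!(decode_capability(99), Capability::Unknown(99));
    println!("ok");
}
```

Whether this can be done without patching the generated bindings depends on the protobuf crate in use, which is exactly the question raised above.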

@sashahilton00
Member

For the 5xx errors, shall I just add a check that panics librespot for now? Seems better than hanging indefinitely...

@librespot-org librespot-org locked and limited conversation to collaborators Feb 23, 2021

