
librespot crashing after x songs #215

Closed
kutmasterk opened this issue May 7, 2018 · 26 comments

Comments

@kutmasterk

kutmasterk commented May 7, 2018

For the past few days, librespot has been crashing after playing a number of songs from a playlist.

Everything was working before and nothing has changed on my side, so my guess is that Spotify must have changed something.

Steps to reproduce:

  • Start a playlist
  • If you don't want to wait, skip each song to its last part and let it change the track; continue like this for a couple of tracks (for me it is between 9 and 15 tracks) and it eventually crashes

Here is a verbose log from the crash:

DEBUG:librespot_playback::player: command=Load(SpotifyId(u128!(218675386531455298211524659961312007164)), true, 0)
INFO:librespot_playback::player: Loading track "I Am" with Spotify URI "spotify:track:50qLAF3XgfhtaCGDKT4Kks"
DEBUG:librespot_audio::fetch: Downloading file 3c1d6b8f30aaa1ba57612f5aeb4f4a498990562a
TRACE:librespot_audio::fetch: requesting chunk 0
ERROR:librespot_core::channel: channel error: 2 1
thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: ChannelError', libcore/result.rs:945:5
stack backtrace:
   0:   0x9b7333 - std::sys::unix::backtrace::tracing::imp::unwind_backtrace::h4ef6490c84cef3d0
                       at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1:   0x9b108f - std::sys_common::backtrace::_print::hea150ee4f33bb5de
                       at libstd/sys_common/backtrace.rs:71
   2:   0x9b5da3 - std::panicking::default_hook::{{closure}}::h6c4396ff5bd61d89
                       at libstd/sys_common/backtrace.rs:59
                       at libstd/panicking.rs:380
   3:   0x9b5987 - std::panicking::default_hook::h0f6cb16cf06a0198
                       at libstd/panicking.rs:396
   4:   0x9b6247 - std::panicking::rust_panic_with_hook::h2207195920ffa55f
                       at libstd/panicking.rs:576
   5:   0x9b60ab - std::panicking::begin_panic::h8d43a09cf6d6e5f1
                       at libstd/panicking.rs:537
   6:   0x9b602f - std::panicking::begin_panic_fmt::hd9afc94159ee9735
                       at libstd/panicking.rs:521
   7:   0x9b5fc7 - rust_begin_unwind
                       at libstd/panicking.rs:497
   8:   0xa03e57 - core::panicking::panic_fmt::h0bd854df201d1baf
                       at libcore/panicking.rs:71
   9:   0x56290f - core::result::unwrap_failed::h0e273f2ef825e9af
  10:   0x55d85f - librespot_playback::player::PlayerInternal::run::h296c59bd7682ffbf
  11:   0x4dd4bb - std::sys_common::backtrace::__rust_begin_short_backtrace::h0ef34e9728d818f7
  12:   0x4f0a03 - std::panicking::try::do_call::h55970923d54b3006
  13:   0x9c5673 - __rust_maybe_catch_panic
                       at libpanic_unwind/lib.rs:102
  14:   0x4f292b - <F as alloc::boxed::FnBox<A>>::call_box::h6c97a59281f0cee4
  15:   0x9bc2df - std::sys::unix::thread::Thread::new::thread_start::hb1d1727b173caaf8
                       at /checkout/src/liballoc/boxed.rs:798
                       at libstd/sys_common/thread.rs:24
                       at libstd/sys/unix/thread.rs:90
DEBUG:librespot_playback::player: drop Player[0]
DEBUG:librespot_core::session: Session[0] strong=2 weak=4

I tried this with the pipe backend and the ALSA backend on a Raspberry Pi 3. It crashes with both backends. I do not have PulseAudio installed.

I also tried this with the cache disabled; still crashing :(

EDIT: I use a self-compiled binary of the latest version, but I also checked the pre-compiled raspotify binary and some older versions of librespot from the last two months that I had in backup. They all show the same behavior.
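[Editor's note] As an illustrative aside, the panic in the log above comes from calling `.unwrap()` on a `Result` carrying a `ChannelError`. The sketch below is not librespot's actual code; all names are invented. It only shows how matching on the `Result` instead of unwrapping would let a player log the failure and skip the track rather than abort the whole process.

```rust
// Hypothetical stand-in types; librespot's real ChannelError lives in
// librespot_core::channel and carries the two numeric codes seen in the log.
#[derive(Debug)]
enum ChannelError {
    Closed,
}

// Stand-in for the chunk download that sometimes fails on the server side.
fn fetch_chunk(fail: bool) -> Result<Vec<u8>, ChannelError> {
    if fail {
        Err(ChannelError::Closed)
    } else {
        Ok(vec![0u8; 4])
    }
}

// Returns None instead of panicking when the channel errors out.
fn fetch_chunk_lenient(fail: bool) -> Option<Vec<u8>> {
    match fetch_chunk(fail) {
        Ok(data) => Some(data),
        Err(e) => {
            eprintln!("channel error, skipping track: {:?}", e);
            None
        }
    }
}

fn main() {
    assert!(fetch_chunk_lenient(false).is_some());
    assert!(fetch_chunk_lenient(true).is_none()); // no panic, just a skip
}
```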

@mrwsl

mrwsl commented May 8, 2018

I can confirm this behaviour as well. I haven't had the time to provide a proper log. I'm using a self-compiled librespot (latest version) on a Raspberry Pi 3, too.

@berrywhite96

Same error here; I tried "raspotify - that JUST WORKS" and the Spotify daemon. I just came from the Spotify Connect version built in Python, which is maybe 3 years old; that also stopped working.
I think Spotify is currently trying to shut out third-party tools like librespot...

@plietar
Contributor

plietar commented May 8, 2018

Does this happen with some tracks in particular, or do the same tracks sometimes work and sometimes not?

PS: Fixed it for you, but in the future please quote logs with triple backticks, not single

@berrywhite96

It doesn't really depend on the song; the more songs are played, the higher the chance of a crash. The Python Spotify Connect I mentioned worked for years; since Friday it crashes just like the newer Spotify projects.
Are there still people for whom one of these projects doesn't crash?!

@kutmasterk
Author

The same tracks sometimes work and sometimes don't. It really depends on the number of tracks played.

@kutmasterk
Author

kutmasterk commented May 9, 2018

So, as of today I cannot reproduce the crashes anymore. I just played 30 tracks without interruption.

The only useful thing I have found is this:
spotify/web-api#859 (comment)

As of this month, Spotify has enabled a load balancer on their Google-provided distribution network and now requires a Content-Length header to be set on requests. That is a bug report for the Spotify Web API, but I am not sure whether librespot even uses this API.
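[Editor's note] A speculative illustration of the linked report's claim: if the new CDN really rejects requests without a Content-Length header, a raw HTTP GET would need the header even with an empty body. The host and path below are made up, and this is not how librespot builds its requests.

```rust
// Build a minimal HTTP/1.1 GET request that carries an explicit
// Content-Length header, as the linked web-api report suggests the new
// load balancer requires. Purely illustrative.
fn build_get_request(host: &str, path: &str) -> String {
    format!(
        "GET {} HTTP/1.1\r\nHost: {}\r\nContent-Length: 0\r\n\r\n",
        path, host
    )
}

fn main() {
    let req = build_get_request("audio-fa.spotify.com", "/audio/example");
    assert!(req.contains("Content-Length: 0\r\n"));
    print!("{}", req);
}
```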

@berrywhite96

Can confirm this; I ran tracks on Spotify overnight and the program doesn't seem to crash. I called it with my credentials; maybe that could help.
But I've noticed that every track needs 5-10 seconds before it starts to play. My internet connection isn't the problem; do others have this problem?

@mrwsl

mrwsl commented May 9, 2018

@berrywhite96 With the Linux client I have the same problem. When connecting from my Android client, librespot plays the song almost instantly.

@berrywhite96

@herrwusel Interesting, maybe it's an internet provider thing?! I'll try later to run my Pi over a mobile network connection.

@cortegedusage

@berrywhite96 I too have the same 5 to 8 second delay when changing songs as of this week,
and also the occasional crash. I control it with the Android client.

@mrwsl

mrwsl commented May 9, 2018

@berrywhite96 Same network connection, just different clients.

@berrywhite96

berrywhite96 commented May 9, 2018

@herrwusel I tried it yesterday on two Pis, a Pi 3 and a Pi 2. The Pi 3 runs Raspbian Jessie and the Pi 2 runs Raspbian Stretch; both have the same problem. I can't really pin down the cause.

@michaelherger
Contributor

While I'm using my own librespot-based application, I've seen this crashing behaviour recently, too, and likely on a provider none of you is using :-).

Frequent crashes, and very, very slow playback start.

Spotify mentioned that they had switched CDN provider (from Akamai to Google?) in a bug report I commented on (spotify/web-api#859 (comment)). The new CDN seems to behave differently with respect to some aspects of its HTTP implementation. I'm wondering whether we are seeing some other side effects of that change here, too.

@kutmasterk
Author

I can confirm these delays too. If you start a song, album or playlist from the app (desktop Mac and Android here), there is a 10 second delay before the song starts playing on librespot.

Maybe we should open a new bug report specifically for this problem, as I am not sure it is connected to the crashes.

@berrywhite96

berrywhite96 commented May 9, 2018

Now I don't have these delays, but the crashes are back. This time it crashed on the fifth track:

INFO:librespot_playback::player: Loading track "Top Off" with Spotify URI "spotify:track:1W6wxOOYyJyyok8fqYSZ3m"
ERROR:librespot_core::channel: channel error: 2 1
thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: ChannelError', libcore/result.rs:945:5
stack backtrace:
   0:   0x9d9333 - std::sys::unix::backtrace::tracing::imp::unwind_backtrace::h4ef6490c84cef3d0
                        at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1:   0x9d308f - std::sys_common::backtrace::_print::hea150ee4f33bb5de
                        at libstd/sys_common/backtrace.rs:71

The "Loaded" message didn't come up, so it is something that happens while the track is loading.
I don't understand Rust, so maybe this information helps someone.

@plietar
Contributor

plietar commented May 9, 2018

The desktop client (and probably the mobile one too) downloads audio files over HTTP, whereas librespot downloads them using the same TCP socket used for everything else. It is possible Spotify is slowly shutting down the latter as they move to some new CDN.

Fixing this would require some more reverse engineering to figure out how the client learns the URL to use to download audio files.

My spotify-analyze scripts would be the first step for this. Additionally, the desktop client can be started in verbose mode, in which case it prints the URL to the logs. You should compare those logs with the packet dumps and see if any identifier matches.

Does anyone know whether speakers that shipped with Spotify support ~3-4 years ago still work? I originally used their firmware to figure out the protocol.

@WhiteHatTux

Regarding the delay: I think this is related to loading, and I am working on it over at issue #210. I'm not sure, as I've only just started using and understanding the library; I had presumed it was connected to my network.

@kingosticks
Contributor

I am not seeing any delays with the Linux desktop client. I couldn't work out how to start the client in verbose mode, but when using an HTTP/HTTPS proxy I only see traffic for the album art. Nothing else.

libspotify is still working, if that's any help.

However, if they were making changes, it's likely they'd move people over in stages, and maybe I just haven't been moved yet...

@berrywhite96

berrywhite96 commented May 10, 2018

@kingosticks At the moment there aren't any delays, so maybe there were just some slow servers over the last few days.

Besides the different error message, the crash looks like the ones before.
The last crash I had was with this error:

WARN:librespot_audio::fetch: error from channel
thread 'main' panicked at 'connection closed', libcore/option.rs:917:5
stack backtrace:
   0:   0x94f333 - std::sys::unix::backtrace::tracing::imp::unwind_backtrace::h4ef6490c84cef3d0
                       at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1:   0x94908f - std::sys_common::backtrace::_print::hea150ee4f33bb5de
                       at libstd/sys_common/backtrace.rs:71
   2:   0x94dda3 - std::panicking::default_hook::{{closure}}::h6c4396ff5bd61d89
                       at libstd/sys_common/backtrace.rs:59
                       at libstd/panicking.rs:380
   3:   0x94d987 - std::panicking::default_hook::h0f6cb16cf06a0198
                       at libstd/panicking.rs:396
   4:   0x94e247 - std::panicking::rust_panic_with_hook::h2207195920ffa55f
                       at libstd/panicking.rs:576
   5:   0x94e0ab - std::panicking::begin_panic::h8d43a09cf6d6e5f1
                       at libstd/panicking.rs:537
   6:   0x94e02f - std::panicking::begin_panic_fmt::hd9afc94159ee9735
                       at libstd/panicking.rs:521
   7:   0x94dfc7 - rust_begin_unwind
                       at libstd/panicking.rs:497
   8:   0x99be57 - core::panicking::panic_fmt::h0bd854df201d1baf
                       at libcore/panicking.rs:71
   9:   0x99bec3 - core::option::expect_failed::hfa0c8a51e07f7adc
                       at libcore/option.rs:917
  10:   0x57457f - <librespot_core::session::DispatchTask<S> as futures::future::Future>::poll::he38a6d169aa13ae2
  11:   0x5b1b87 - <futures::future::join::Join<A, B> as futures::future::Future>::poll::h013ba2d2fb03815f
  12:   0x609b63 - <futures::future::map::Map<A, F> as futures::future::Future>::poll::h4eadf278ffd914cb
  13:   0x61c7bb - <futures::future::map_err::MapErr<A, F> as futures::future::Future>::poll::he7a9f5fde36154eb
  14:   0x8793f3 - <futures::task_impl::Spawn<T>>::poll_future_notify::ha67b04466b554e99
  15:   0x877e97 - <std::thread::local::LocalKey<T>>::with::hbe3872cac982acb2
  16:   0x88035b - <tokio::executor::current_thread::scheduler::Scheduler<U>>::tick::h06483af9be3f9258
  17:   0x8808af - <scoped_tls::ScopedKey<T>>::set::h4d344e2912809838
  18:   0x8778a3 - <std::thread::local::LocalKey<T>>::with::h1dc1eb7f0f4c412e
  19:   0x8780cf - <std::thread::local::LocalKey<T>>::with::hd5ac94511321d7ed
  20:   0x872c53 - tokio_core::reactor::Core::poll::h821e8447dd7b3d43
  21:   0x46f23b - tokio_core::reactor::Core::run::h09af0e02db6cafcd
  22:   0x49323b - librespot::main::h2aac80d528d4c194
  23:   0x48080b - std::rt::lang_start::{{closure}}::hecf37fca32271fe5
  24:   0x94deff - std::panicking::try::do_call::h5f03c49f73b7300e
                       at libstd/rt.rs:59
                       at libstd/panicking.rs:479
  25:   0x95d673 - __rust_maybe_catch_panic
                       at libpanic_unwind/lib.rs:102
  26:   0x94e66f - std::rt::lang_start_internal::h907d77b40aa767dd
                       at libstd/panicking.rs:458
                       at libstd/panic.rs:358
                       at libstd/rt.rs:58
  27:   0x493e23 - main
  28: 0x76d07677 - __libc_start_main

@l3d00m

l3d00m commented May 11, 2018

@berrywhite96 I reported that crash in #217 too as I thought this was unrelated.

@berrywhite96

@l3d00m I think it's all the same bug; they all happen when the receiver tries to load the next track. Maybe it's unrelated, but I have no experience with the source code.

Since yesterday evening, the receiver has worked perfectly. I don't know why; maybe the problem is time-related?! Do we know of other servers to get the tracks from?

@plietar
Contributor

plietar commented May 11, 2018

The server (Access Point) used is printed on one of the first log lines. You could hardcode that value in apresolve.rs and see if it then always works.

We could add a flag to use a fixed AP instead of apresolve.
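[Editor's note] A hedged sketch of the "fixed AP" flag suggested above: if the user supplies an address, skip apresolve entirely. The fallback constant and all names here are invented for illustration; librespot's real resolver lives in apresolve.rs.

```rust
// Invented fallback address for the sketch; not a confirmed librespot value.
const FALLBACK_AP: &str = "ap.spotify.com:4070";

// Pick the Access Point: a user-supplied "host:port" wins, otherwise fall
// back (the real code would query the apresolve service here instead).
fn resolve_ap(fixed: Option<&str>) -> String {
    match fixed {
        Some(addr) => addr.to_string(), // e.g. a hypothetical `--ap` flag value
        None => FALLBACK_AP.to_string(),
    }
}

fn main() {
    assert_eq!(
        resolve_ap(Some("gew1-accesspoint.example.com:4070")),
        "gew1-accesspoint.example.com:4070"
    );
    assert_eq!(resolve_ap(None), FALLBACK_AP);
}
```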

@sashahilton00
Member

With regard to how the URL is generated, I had a very quick look using spotify-analyze, and a search for the downloaded file yields a couple of commands tagged with 0x9 from Spotify's servers, which are then sent to librespot's channel.rs. Those commands contain the full URL for downloading. I haven't investigated what triggers the server to send the URL, though.

There appear to be two endpoints from which one can download the file:
http://audio-fa.spotify.com/audio/a694d4d3e32927e1148...3206_BgCXZ+ig5P4x1xJet/GXFt7/MxNdQLaFI5rH1DPSb0M=
http://audio-akp.spotify.com.edgesuite.net/audio/a694d4d3e32927e1148...2071ec57b228c0256e50a7611f

@l3d00m

l3d00m commented May 17, 2018

This issue appears to be mostly resolved for me; librespot crashes are back to normal (once a day or so) rather than multiple times a day.

@kingosticks
Contributor

It may be worth noting that with libspotify, periods of random ChannelErrors were not unheard of and would correspond to something being temporarily broken on Spotify's end. I guess the takeaway here is that we don't want to crash, but I think there's already an issue tracking improvements to this sort of thing.
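[Editor's note] The "don't crash on transient errors" idea can be sketched as a bounded retry: attempt a fallible operation a few times before giving up, instead of panicking on the first failure. This is an illustrative pattern, not librespot's API; all names are invented.

```rust
// Retry a fallible closure up to `max_tries` times, returning the last
// error if every attempt fails. Generic over result and error types.
fn retry<T, E, F>(mut attempt: F, max_tries: u32) -> Result<T, E>
where
    F: FnMut() -> Result<T, E>,
{
    let mut tries = 0;
    loop {
        match attempt() {
            Ok(v) => return Ok(v),
            Err(e) => {
                tries += 1;
                if tries >= max_tries {
                    return Err(e); // exhausted: surface the error, don't panic
                }
                // a real implementation would likely sleep/back off here
            }
        }
    }
}

fn main() {
    // Fails twice, then succeeds: retry should return Ok.
    let mut calls = 0;
    let result = retry(
        || {
            calls += 1;
            if calls < 3 { Err("channel error") } else { Ok(calls) }
        },
        5,
    );
    assert_eq!(result, Ok(3));

    // Always fails: gives up after max_tries attempts.
    assert!(retry(|| Err::<(), _>("channel error"), 3).is_err());
}
```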

@roderickvd
Member

Channel errors fixed upstream by Spotify on their end.
Reconnection issues tracked in #609.
