spotifyd just stops itself when sending from another spotify account #246

Closed
flokli opened this issue May 12, 2019 · 5 comments

flokli commented May 12, 2019

I connected spotifyd to a premium account. It advertises itself on the network, and I'm able to play to it.

It's also visible to other Spotify users on the local network. However, when they try to connect to the spotifyd instance, spotifyd simply exits:

22:14:41 [TRACE] tokio_reactor: [<unknown>:368] event Readable | Writable Token(4194305)
22:14:41 [TRACE] tokio_reactor: [<unknown>:379] loop process - 1 events, 0.000s
22:14:41 [TRACE] mdns::fsm: [<unknown>:77] received packet from V4(192.168.1.57:64838)
22:14:41 [DEBUG] mdns::fsm: received question: IN _spotify-connect._tcp.local
22:14:41 [TRACE] mdns::fsm: [<unknown>:183] found interface Interface { name: "wlp3s0", addr: V4(Ifv4Addr { ip: 192.168.1.34, netmask: 255.255.255.0, broadcast: Some(192.168.1.255) }) }
22:14:41 [TRACE] mdns::fsm: [<unknown>:183] found interface Interface { name: "docker0", addr: V4(Ifv4Addr { ip: 172.17.0.1, netmask: 255.255.0.0, broadcast: Some(172.17.255.255) }) }
22:14:41 [TRACE] mdns::fsm: [<unknown>:183] found interface Interface { name: "virbr0", addr: V4(Ifv4Addr { ip: 192.168.122.1, netmask: 255.255.255.0, broadcast: Some(192.168.122.255) }) }
22:14:41 [TRACE] mdns::fsm: [<unknown>:183] found interface Interface { name: "vboxnet0", addr: V4(Ifv4Addr { ip: 192.168.56.1, netmask: 255.255.255.0, broadcast: Some(192.168.56.1) }) }
22:14:41 [TRACE] mdns::fsm: [<unknown>:183] found interface Interface { name: "wlp3s0", addr: V6(Ifv6Addr { ip: 2003:e5:3f13:3500:557:8a18:a88e:d6d8, netmask: ffff:ffff:ffff:ffff::, broadcast: None }) }
22:14:41 [TRACE] mdns::fsm: [<unknown>:183] found interface Interface { name: "wlp3s0", addr: V6(Ifv6Addr { ip: 2003:e5:3f13:3500:d653:442e:b223:33e2, netmask: ffff:ffff:ffff:ffff::, broadcast: None }) }
22:14:41 [TRACE] mdns::fsm: [<unknown>:247] sending packet to V4(192.168.1.57:64838)
22:14:41 [DEBUG] tokio_core::reactor: loop poll - 2.012583652s
22:14:41 [DEBUG] tokio_core::reactor: loop time - Instant { tv_sec: 96328, tv_nsec: 953730905 }
22:14:41 [DEBUG] tokio_core::reactor: loop process, 7.566µs
22:14:41 [TRACE] tokio_reactor: [<unknown>:368] event Writable Token(4194305)
22:14:41 [TRACE] tokio_reactor: [<unknown>:379] loop process - 1 events, 0.000s
22:14:41 [TRACE] tokio_reactor: [<unknown>:368] event Readable Token(0)
22:14:41 [TRACE] tokio_reactor: [<unknown>:379] loop process - 1 events, 0.000s
22:14:41 [DEBUG] tokio_core::reactor: loop poll - 4.921081ms
22:14:41 [DEBUG] tokio_core::reactor: loop time - Instant { tv_sec: 96328, tv_nsec: 958664455 }
22:14:41 [DEBUG] tokio_core::reactor: loop process, 48.353µs
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:193] Conn::read_head
22:14:41 [TRACE] mio::poll: [<unknown>:785] registering with poller
22:14:41 [TRACE] tokio_reactor: [<unknown>:368] event Readable | Writable Token(41943044)
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:654] flushed State { reading: Init, writing: Init, keep_alive: Busy, error: None }
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:337] wants_read_again? false
22:14:41 [DEBUG] tokio_core::reactor: loop poll - 62.411µs
22:14:41 [DEBUG] tokio_core::reactor: loop time - Instant { tv_sec: 96328, tv_nsec: 958798123 }
22:14:41 [DEBUG] tokio_core::reactor: loop process, 8.481µs
22:14:41 [TRACE] tokio_reactor: [<unknown>:379] loop process - 1 events, 0.000s
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:193] Conn::read_head
22:14:41 [DEBUG] hyper::proto::h1::io: read 221 bytes
22:14:41 [TRACE] hyper::proto::h1::role: [<unknown>:46] Request.parse([Header; 100], [u8; 221])
22:14:41 [TRACE] hyper::proto::h1::role: [<unknown>:50] Request.parse Complete(221)
22:14:41 [TRACE] hyper::header: [<unknown>:355] maybe_literal not found, copying "Keep-Alive"
22:14:41 [DEBUG] hyper::proto::h1::io: parsed 6 headers (221 bytes)
22:14:41 [DEBUG] hyper::proto::h1::conn: incoming body is content-length (0 bytes)
22:14:41 [TRACE] hyper::proto: [<unknown>:133] expecting_continue(version=Http11, header=None) = false
22:14:41 [TRACE] hyper::proto: [<unknown>:122] should_keep_alive(version=Http11, header=Some(Connection([KeepAlive]))) = true
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:317] read_keep_alive; is_mid_message=true
22:14:41 [TRACE] hyper::proto: [<unknown>:122] should_keep_alive(version=Http11, header=None) = true
22:14:41 [TRACE] hyper::proto::h1::role: [<unknown>:129] Server::encode has_body=true, method=Some(Get)
22:14:41 [TRACE] hyper::proto::h1::encode: [<unknown>:100] encoding chunked 443B
22:14:41 [DEBUG] hyper::proto::h1::io: flushed 539 bytes
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:460] maybe_notify; read_from_io blocked
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:654] flushed State { reading: Init, writing: Init, keep_alive: Idle, error: None }
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:337] wants_read_again? false
22:14:41 [DEBUG] tokio_core::reactor: loop poll - 204.609µs
22:14:41 [DEBUG] tokio_core::reactor: loop time - Instant { tv_sec: 96328, tv_nsec: 959015274 }
22:14:41 [DEBUG] tokio_core::reactor: loop process, 23.286µs
22:14:41 [TRACE] tokio_reactor: [<unknown>:368] event Readable | Writable | Hup Token(41943044)
22:14:41 [TRACE] tokio_reactor: [<unknown>:379] loop process - 1 events, 0.000s
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:193] Conn::read_head
22:14:41 [DEBUG] hyper::proto::h1::io: read 0 bytes
22:14:41 [TRACE] hyper::proto::h1::io: [<unknown>:127] parse eof
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:904] State::close_read()
22:14:41 [DEBUG] hyper::proto::h1::conn: read eof
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:317] read_keep_alive; is_mid_message=true
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:654] flushed State { reading: Closed, writing: Init, keep_alive: Disabled, error: None }
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:337] wants_read_again? false
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:662] shut down IO
22:14:41 [TRACE] mio::poll: [<unknown>:905] deregistering handle with poller
22:14:41 [DEBUG] tokio_reactor: dropping I/O source: 4
22:14:41 [DEBUG] tokio_core::reactor: loop poll - 36.90554ms
22:14:41 [DEBUG] tokio_core::reactor: loop time - Instant { tv_sec: 96328, tv_nsec: 995947584 }
22:14:41 [DEBUG] tokio_core::reactor: loop process, 6.716µs
22:14:41 [TRACE] tokio_reactor: [<unknown>:368] event Readable Token(0)
22:14:41 [TRACE] tokio_reactor: [<unknown>:379] loop process - 1 events, 0.000s
22:14:41 [DEBUG] tokio_core::reactor: loop poll - 477.137447ms
22:14:41 [DEBUG] tokio_core::reactor: loop time - Instant { tv_sec: 96329, tv_nsec: 473096769 }
22:14:41 [DEBUG] tokio_core::reactor: loop process, 13.525µs
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:193] Conn::read_head
22:14:41 [TRACE] mio::poll: [<unknown>:785] registering with poller
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:654] flushed State { reading: Init, writing: Init, keep_alive: Busy, error: None }
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:337] wants_read_again? false
22:14:41 [DEBUG] tokio_core::reactor: loop poll - 28.391µs
22:14:41 [DEBUG] tokio_core::reactor: loop time - Instant { tv_sec: 96329, tv_nsec: 473144138 }
22:14:41 [DEBUG] tokio_core::reactor: loop process, 8.325µs
22:14:41 [TRACE] tokio_reactor: [<unknown>:368] event Writable Token(46137348)
22:14:41 [TRACE] tokio_reactor: [<unknown>:379] loop process - 1 events, 0.000s
22:14:41 [TRACE] tokio_reactor: [<unknown>:368] event Readable | Writable Token(46137348)
22:14:41 [TRACE] tokio_reactor: [<unknown>:379] loop process - 1 events, 0.000s
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:193] Conn::read_head
22:14:41 [DEBUG] hyper::proto::h1::io: read 931 bytes
22:14:41 [TRACE] hyper::proto::h1::role: [<unknown>:46] Request.parse([Header; 100], [u8; 931])
22:14:41 [TRACE] hyper::proto::h1::role: [<unknown>:50] Request.parse Complete(228)
22:14:41 [TRACE] hyper::header: [<unknown>:355] maybe_literal not found, copying "Keep-Alive"
22:14:41 [DEBUG] hyper::proto::h1::io: parsed 7 headers (228 bytes)
22:14:41 [DEBUG] hyper::proto::h1::conn: incoming body is content-length (703 bytes)
22:14:41 [TRACE] hyper::proto: [<unknown>:133] expecting_continue(version=Http11, header=None) = false
22:14:41 [TRACE] hyper::proto: [<unknown>:122] should_keep_alive(version=Http11, header=Some(Connection([KeepAlive]))) = true
22:14:41 [DEBUG] librespot_connect::discovery: Post "/" {}
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:279] Conn::read_body
22:14:41 [TRACE] hyper::proto::h1::decode: [<unknown>:88] decode; state=Length(703)
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:654] flushed State { reading: Body(Length(0)), writing: Init, keep_alive: Busy, error: None }
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:337] wants_read_again? false
22:14:41 [DEBUG] tokio_core::reactor: loop poll - 1.407881ms
22:14:41 [DEBUG] tokio_core::reactor: loop time - Instant { tv_sec: 96329, tv_nsec: 474564754 }
22:14:41 [DEBUG] tokio_core::reactor: loop process, 6.379µs
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:279] Conn::read_body
22:14:41 [TRACE] hyper::proto::h1::decode: [<unknown>:88] decode; state=Length(0)
22:14:41 [DEBUG] hyper::proto::h1::conn: incoming body completed
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:317] read_keep_alive; is_mid_message=true
22:14:41 [TRACE] hyper::proto: [<unknown>:122] should_keep_alive(version=Http11, header=None) = true
22:14:41 [TRACE] hyper::proto::h1::role: [<unknown>:129] Server::encode has_body=true, method=Some(Post)
22:14:41 [TRACE] hyper::proto::h1::encode: [<unknown>:100] encoding chunked 57B
22:14:41 [DEBUG] hyper::proto::h1::io: flushed 152 bytes
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:460] maybe_notify; read_from_io blocked
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:654] flushed State { reading: Init, writing: Init, keep_alive: Idle, error: None }
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:337] wants_read_again? false
22:14:41 [DEBUG] tokio_core::reactor: loop poll - 6.083463ms
22:14:41 [DEBUG] tokio_core::reactor: loop time - Instant { tv_sec: 96329, tv_nsec: 480657722 }
22:14:41 [DEBUG] tokio_core::reactor: loop process, 6.005µs
22:14:41 [TRACE] hyper::client::pool: [<unknown>:125] park; waiting for idle connection: "http://apresolve.spotify.com"
22:14:41 [TRACE] hyper::client::connect: [<unknown>:118] Http::connect("http://apresolve.spotify.com/")
22:14:41 [DEBUG] hyper::client::dns: resolving host="apresolve.spotify.com", port=80
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:193] Conn::read_head
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:654] flushed State { reading: Init, writing: Init, keep_alive: Idle, error: None }
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:337] wants_read_again? false
22:14:41 [TRACE] tokio_io::framed_write: [<unknown>:188] flushing framed transport
22:14:41 [TRACE] tokio_io::framed_write: [<unknown>:191] writing; remaining=320
22:14:41 [TRACE] tokio_io::framed_write: [<unknown>:208] framed transport flushed
22:14:41 [DEBUG] tokio_core::reactor: loop poll - 86.463µs
22:14:41 [DEBUG] tokio_core::reactor: loop time - Instant { tv_sec: 96329, tv_nsec: 480932315 }
22:14:41 [DEBUG] tokio_core::reactor: loop process, 23.889µs
22:14:41 [DEBUG] tokio_core::reactor: loop poll - 1.469789ms
22:14:41 [DEBUG] tokio_core::reactor: loop time - Instant { tv_sec: 96329, tv_nsec: 482433329 }
22:14:41 [DEBUG] tokio_core::reactor: loop process, 12.646µs
22:14:41 [DEBUG] hyper::client::connect: connecting to 104.199.64.136:80
22:14:41 [TRACE] mio::poll: [<unknown>:785] registering with poller
22:14:41 [TRACE] tokio_reactor: [<unknown>:368] event Readable | Writable | Hup Token(46137348)
22:14:41 [TRACE] tokio_reactor: [<unknown>:379] loop process - 1 events, 0.000s
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:193] Conn::read_head
22:14:41 [DEBUG] hyper::proto::h1::io: read 0 bytes
22:14:41 [TRACE] hyper::proto::h1::io: [<unknown>:127] parse eof
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:904] State::close_read()
22:14:41 [DEBUG] hyper::proto::h1::conn: read eof
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:317] read_keep_alive; is_mid_message=true
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:654] flushed State { reading: Closed, writing: Init, keep_alive: Disabled, error: None }
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:337] wants_read_again? false
22:14:41 [TRACE] hyper::proto::h1::conn: [<unknown>:662] shut down IO
22:14:41 [TRACE] mio::poll: [<unknown>:905] deregistering handle with poller
22:14:41 [DEBUG] tokio_reactor: dropping I/O source: 4
22:14:41 [DEBUG] tokio_core::reactor: loop poll - 1.46971ms
22:14:41 [DEBUG] tokio_core::reactor: loop time - Instant { tv_sec: 96329, tv_nsec: 483990523 }
22:14:41 [DEBUG] tokio_core::reactor: loop process, 6.818µs
22:14:41 [TRACE] tokio_reactor: [<unknown>:368] event Readable | Writable Token(20971525)
22:14:41 [TRACE] tokio_reactor: [<unknown>:379] loop process - 1 events, 0.000s
22:14:41 [TRACE] tokio_reactor: [<unknown>:368] event Writable Token(50331654)
22:14:41 [TRACE] tokio_reactor: [<unknown>:379] loop process - 1 events, 0.000s
22:14:41 [TRACE] tokio_io::framed_read: [<unknown>:195] attempting to decode a frame
22:14:41 [TRACE] tokio_io::framed_read: [<unknown>:198] frame decoded from buffer
22:14:41 [TRACE] tokio_io::framed_read: [<unknown>:195] attempting to decode a frame
22:14:41 [TRACE] tokio_io::framed_write: [<unknown>:188] flushing framed transport
22:14:41 [TRACE] tokio_io::framed_write: [<unknown>:208] framed transport flushed
22:14:41 [DEBUG] tokio_core::reactor: loop poll - 33.388736ms
22:14:41 [DEBUG] tokio_core::reactor: loop time - Instant { tv_sec: 96329, tv_nsec: 517407401 }
22:14:41 [DEBUG] tokio_core::reactor: loop process, 7.341µs
22:14:41 [DEBUG] librespot_connect::spirc: drop Spirc[0]
22:14:41 [DEBUG] librespot_playback::player: Shutting down player thread ...
22:14:41 [DEBUG] librespot_playback::player: drop Player[0]
22:14:41 [DEBUG] librespot_core::session: drop Session[0]
22:14:41 [DEBUG] librespot::component: drop MercuryManager
22:14:41 [TRACE] tokio_threadpool::pool: [<unknown>:141] shutdown; state=pool::State { lifecycle: Running, num_futures: 0 }
22:14:41 [TRACE] tokio_threadpool::pool: [<unknown>:187]   -> transitioned to shutdown
22:14:41 [TRACE] tokio_threadpool::pool: [<unknown>:208]   -> shutting down workers
22:14:41 [DEBUG] tokio_reactor: dropping I/O source: 3
22:14:41 [TRACE] mio::poll: [<unknown>:905] deregistering handle with poller
22:14:41 [DEBUG] tokio_reactor: dropping I/O source: 1
22:14:41 [TRACE] mio::poll: [<unknown>:905] deregistering handle with poller
22:14:41 [DEBUG] tokio_reactor: dropping I/O source: 2
22:14:41 [TRACE] mio::poll: [<unknown>:905] deregistering handle with poller
22:14:41 [DEBUG] tokio_reactor: dropping I/O source: 0
22:14:41 [DEBUG] librespot_core::session: drop Dispatch
22:14:41 [TRACE] mio::poll: [<unknown>:905] deregistering handle with poller
22:14:41 [DEBUG] tokio_reactor: dropping I/O source: 5
22:14:41 [TRACE] mio::poll: [<unknown>:905] deregistering handle with poller
22:14:41 [DEBUG] tokio_reactor: dropping I/O source: 6
22:14:41 [TRACE] want: [<unknown>:261] signal: Closed

Is this behaviour known? Can we make spotifyd usable as a destination for multiple accounts, at least premium accounts?

I'd like to use it as a headless loudspeaker that's also usable by guests (who don't share the same Spotify credentials).

Maybe related: #181

MeganerdNL commented

This works if you don't fill in any credentials in the config file (just comment them out with #).
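
For illustration, a minimal sketch of what a credentials-free configuration might look like (field names as in the spotifyd README; the device name and cache path here are examples, not values from this thread):

[global]
# Credentials commented out so any account on the LAN can connect via Spotify Connect
#username = alice
#password = secret
device_name = spotifyd
cache_path = /var/tmp/spotifyd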


flokli commented May 22, 2019

But I guess in that case you are not able to control it anymore from another network, right?

Any way to get both working?


nilsjha commented Jun 16, 2019

I experienced this issue yesterday when a friend tried to play music from his account (on my phone). Both the username and password fields are present in my configuration file. I also tested playing music from his phone with my account; that works as expected.

I noticed that a credentials.json file containing my username is generated in the cache_path. Is it possible that this is causing the authentication to "lock up" against my account?

edit: both accounts are Spotify premium accounts, and work flawlessly when using Spotify Connect against other devices
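
To check whether that cached login is what gets replayed, one can inspect and clear the file; a sketch, assuming the /var/tmp/spotifyd cache path that appears later in this thread:

# Show which account the cached credentials belong to
cat /var/tmp/spotifyd/credentials.json
# Remove it; spotifyd should re-authenticate on the next start
rm /var/tmp/spotifyd/credentials.json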


Starkstromkonsument commented Jul 17, 2019

I can confirm this issue. My daemon is running on Ubuntu 18.04.2 LTS. Both accounts are premium.

As a quick-and-dirty workaround, I inserted the following line into my spotifyd.service file:

ExecStopPost=/bin/rm /var/tmp/spotifyd/credentials.json

and changed:

RestartSec=5

This way, the first attempt by a different user to connect still crashes the daemon, but the second attempt (after waiting 5 seconds for the daemon to restart) works.

Is there a chance of getting this to work without a crash?
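
Putting both pieces together, a sketch of what the combined override might look like (for example via systemctl edit spotifyd.service; Restart=always is an assumption here, since the unit must already restart the daemon for the 5-second retry to work, and the credentials path matches the cache_path used above):

[Service]
Restart=always
RestartSec=5
# Clear the cached login after every stop so the next connection attempt starts fresh
ExecStopPost=/bin/rm -f /var/tmp/spotifyd/credentials.json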


mainrs commented Sep 5, 2019

I can take a look at it. It's a duplicate of #181 as far as I can tell, so I'll close this for now.

mainrs closed this as completed on Sep 5, 2019
mainrs added the bug and duplicate labels on Sep 5, 2019