
Dealer: Rework context retrieval #1414

Open · wants to merge 32 commits into base: dev

Conversation

photovoltex
Member

@photovoltex photovoltex commented Dec 11, 2024

This resolves the issue that a context with multiple pages couldn't be completely shuffled. I had to rework most of the context-resolving part, because resolving the entire context at once wasn't an option: it blocked the entire main loop for roughly 1-30s depending on the context, which in turn blocked all processing of commands and events.

Each context is now requested independently so that commands can fit between requests :D
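The core idea is to resolve one context page per iteration and drain pending commands in between, instead of resolving everything up front. A minimal sketch of that interleaving (all types and names here are illustrative stand-ins, not the actual librespot API):

```rust
use std::collections::VecDeque;
use std::sync::mpsc;

// Hypothetical command type; the real librespot commands differ.
#[derive(Debug)]
enum Command {
    Pause,
}

// Resolve all pages one at a time, draining queued commands between
// page requests so they are not blocked for the whole resolution.
fn resolve_interleaved(
    mut pages: VecDeque<&'static str>,
    rx: &mpsc::Receiver<Command>,
) -> (Vec<&'static str>, Vec<String>) {
    let mut resolved = Vec::new();
    let mut handled = Vec::new();
    while let Some(page) = pages.pop_front() {
        // Commands queued while resolving are handled between requests.
        while let Ok(cmd) = rx.try_recv() {
            handled.push(format!("{cmd:?}"));
        }
        resolved.push(page); // stand-in for one network request
    }
    (resolved, handled)
}

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send(Command::Pause).unwrap();
    let pages = VecDeque::from(["page0", "page1", "page2"]);
    let (resolved, handled) = resolve_interleaved(pages, &rx);
    assert_eq!(resolved.len(), 3);
    assert_eq!(handled, ["Pause"]);
    println!("resolved {} pages, handled {:?}", resolved.len(), handled);
}
```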

additional fixes:

  • metadata of a context is applied again (seems to have broken at some point)
    • this influences the displayed context name, which was broken for most contexts
  • simplified some match blocks
  • adjusted playback_speed as previously mentioned here: Spirc: Replace Mercury with Dealer  #1356 (comment)
  • tried to reduce some state updates
    • this isn't done for all updates, because some have a valid reason to be instantaneous

I will leave it in draft for a few days, so I can still test it a bit more during the day. And maybe someone else even wants to try it^^

@photovoltex
Member Author

When finishing a context it does seem to reset the current track correctly, but it doesn't update the next tracks correctly.

probably wrong:

  • missing queue revision update
  • no reset of the context-index, or wrong fill_up_context

@photovoltex
Member Author

photovoltex commented Dec 12, 2024

Quickly seeking to the beginning (just pressing prev) and pausing can update the state incorrectly. The behavior doesn't seem to be consistent, though.

Log Snippet
[2024-12-12T09:54:10Z TRACE librespot_core::dealer] dealer request hm://connect-state/v1/player/command
[2024-12-12T09:54:10Z TRACE librespot_core::dealer::protocol] message was send with gzip encoding
[2024-12-12T09:54:10Z TRACE librespot_core::dealer::protocol] websocket request: Object {
        "command": Object {
            "endpoint": String("pause"),
            "logging_params": Object {
                "command_id": String("c718c865dfe6ca6e212f56cd54c27750"),
                "device_identifier": String("9c905a62c8859ea4c546520f668ea11d72f601c3"),
            },
            "options": Object {},
        },
        "message_id": Number(977946890),
        "sent_by_device_id": String("9c905a62c8859ea4c546520f668ea11d72f601c3"),
        "target_alias_id": Null,
    }
[2024-12-12T09:54:10Z DEBUG librespot_connect::spirc] handling: 'endpoint: pause' from 9c905a62c8859ea4c546520f668ea11d72f601c3
[2024-12-12T09:54:10Z DEBUG librespot_core::dealer::manager] replying to ws request: Success
[2024-12-12T09:54:10Z DEBUG librespot_playback::player] command=Pause
[2024-12-12T09:54:10Z TRACE librespot_playback::player] == Stopping sink ==
[2024-12-12T09:54:10Z DEBUG librespot_connect::state] updated connect play status playing: true, paused: true, buffering: true
[2024-12-12T09:54:10Z DEBUG librespot_connect::state] update position to 2:14 at 9:54:10.459
[2024-12-12T09:54:10Z DEBUG librespot_core::http_client] Requesting https://gew4-spclient.spotify.com:443/connect-state/v1/devices/7c28ab8a5c9512e4266ac7cb756312c82ee43d7e?product=0&country=DE&salt=3851948888
[2024-12-12T09:54:10Z TRACE librespot_core::dealer] dealer request hm://connect-state/v1/player/command
[2024-12-12T09:54:10Z TRACE librespot_core::dealer::protocol] message was send with gzip encoding
[2024-12-12T09:54:10Z TRACE librespot_core::dealer::protocol] websocket request: Object {
        "command": Object {
            "endpoint": String("seek_to"),
            "logging_params": Object {
                "command_id": String("fefb91696c47f00d05dd575ee3963b83"),
                "device_identifier": String("9c905a62c8859ea4c546520f668ea11d72f601c3"),
            },
            "position": Number(0),
            "value": Number(0),
        },
        "message_id": Number(977947441),
        "sent_by_device_id": String("9c905a62c8859ea4c546520f668ea11d72f601c3"),
        "target_alias_id": Null,
    }
[2024-12-12T09:54:10Z DEBUG librespot_connect::spirc] handling: 'endpoint: seek_to' from 9c905a62c8859ea4c546520f668ea11d72f601c3
[2024-12-12T09:54:10Z TRACE librespot_connect::spirc] seek to SeekToCommand { value: 0, position: 0, logging_params: LoggingParams { interaction_ids: None, device_identifier: Some("9c905a62c8859ea4c546520f668ea11d72f601c3"), command_initiated_time: None, page_instance_ids: None, command_id: Some("fefb91696c47f00d05dd575ee3963b83") } }
[2024-12-12T09:54:10Z DEBUG librespot_core::dealer::manager] replying to ws request: Success
[2024-12-12T09:54:10Z DEBUG librespot_playback::player] command=Seek(0)
[2024-12-12T09:54:10Z TRACE librespot_connect::spirc] ==> Paused
[2024-12-12T09:54:10Z TRACE librespot_connect::spirc] ==> Playing
[2024-12-12T09:54:11Z DEBUG librespot_connect::state] updated connect play status playing: true, paused: true, buffering: true
[2024-12-12T09:54:11Z DEBUG librespot_connect::state] update position to 2:14 at 9:54:11.078
[2024-12-12T09:54:11Z DEBUG librespot_core::http_client] Requesting https://gew4-spclient.spotify.com:443/connect-state/v1/devices/7c28ab8a5c9512e4266ac7cb756312c82ee43d7e?product=0&country=DE&salt=673088253

- rename `handle_context` to `handle_next_context`
- disconnect should only pause the playback
- find_next should not exceed queue length
@roderickvd roderickvd requested a review from Copilot December 15, 2024 20:28
@Copilot Copilot AI left a comment

Copilot reviewed 6 out of 12 changed files in this pull request and generated 2 comments.

Files not reviewed (6)
  • connect/src/state/options.rs: Evaluated as low risk
  • connect/src/model.rs: Evaluated as low risk
  • core/src/spclient.rs: Evaluated as low risk
  • connect/src/state/transfer.rs: Evaluated as low risk
  • connect/src/state/handle.rs: Evaluated as low risk
  • connect/src/state/tracks.rs: Evaluated as low risk
Comments suppressed due to low confidence (1)

connect/src/state/context.rs:267

  • Ensure that the behavior of returning Ok(None) when next_contexts is empty is covered by tests.
if next_contexts.is_empty() { return Ok(None); }

connect/src/state/context.rs Outdated Show resolved Hide resolved
core/src/dealer/protocol.rs Outdated Show resolved Hide resolved
@photovoltex
Member Author

photovoltex commented Dec 15, 2024

Currently the initial transfer is not as smooth as it was before these changes. I will fix that, and afterwards remove the draft state. It's probably a small fix.

- fix context_metadata and restriction incorrect reset
- do some state updates earlier
- add more logging
@photovoltex
Member Author

Hopefully the last points:

  • when starting playback while overwriting shuffling, it jumps to a random track, even though playback was started on a specific track
  • update_context, i.e. a playlist modification, doesn't seem to behave as expected with the rework
    • it should at most update the context, but currently it seems to clear the entire next_tracks without filling them again

@photovoltex
Member Author

Besides the incorrect tests, there seems to be a missing track update when starting playback with shuffle and a specified track. Other than that, the state feels quite stable again, so I will probably mark the PR as ready this evening.

@PocketMiner82

PocketMiner82 commented Dec 19, 2024

Hey @photovoltex. I tested the PR and it seems to work (at least what I tested) :D

Don't know if this belongs in this PR or if I should rather create an issue: I discovered that when playing a search result with autoplay, the same song is sometimes played again when skipping to the next song. Sometimes it is not played again directly, but after around 2-4 songs:
(screenshot)
after clicking on the skip button:
(screenshot)

Log after clicking on the search result:

[2024-12-19T11:09:19Z WARN  librespot_connect::state::context] couldn't load context info because: context is not available. type: Default
[2024-12-19T11:09:20Z INFO  librespot_playback::player] Loading <Never Gonna Give You Up> with Spotify URI <spotify:track:4PTG3Z6ehGkBFwjybzWkR8>
[2024-12-19T11:09:20Z INFO  librespot_playback::player] <Never Gonna Give You Up> (213573 ms) loaded

No log after clicking skip.


Btw: Congrats on becoming a maintainer :D

@PocketMiner82

PocketMiner82 commented Dec 19, 2024

Now, just as I finished writing that comment, I realized that it is no longer possible to select another song to play from the liked songs playlist if shuffle is enabled. xD

When selecting a track, it always jumps to the beginning of the playlist or to the beginning of the currently playing song...

It works when shuffle is off.

@photovoltex
Member Author

Now, just as I finished writing that comment, I realized that it is no longer possible to select another song to play from the liked songs playlist if shuffle is enabled. xD

When selecting a track, it always jumps to the beginning of the playlist or to the beginning of the currently playing song...

It works when shuffle is off.

see here #1414 (comment) hehe

Don't know if this belongs in this PR or if I should rather create an issue: I discovered that when playing a search result with autoplay, then sometimes the same song is played again when skipping to the next song. Sometimes it is not played directly again but after like 2-4 songs:

I also encountered that problem and didn't think much about it. But it seems to be a problem of the current solution/branch. I think I already know where the problem lies heh. So it shouldn't be much of a problem to solve.

As always thanks for testing this branch. Helps a lot :D

@PocketMiner82

PocketMiner82 commented Dec 19, 2024

see here #1414 (comment) hehe

Now I feel stupid, completely overlooked that somehow xD

@photovoltex
Member Author

Huh... the autoplay endpoint seems to always provide the initial track again. That seems to be the "normal" behavior. I wonder why that didn't happen before... Well, time to fix that.

@photovoltex
Member Author

Wait, I take that back... I have a track where it behaves like that and others where it doesn't... the hell

I suppose if it only affects autoplay, we can filter out the track from which we requested the autoplay (so for singles/tracks just the track itself). Weird behavior though... I wonder why it didn't seem to happen before. I checked the code that changed, but the relevant parts didn't change.

@PocketMiner82

Wait, I take that back... I have a track where it behaves like that and others where it doesn't... the hell

Yup. Now to make it more confusing, it isn't even always the first track after skipping, as I described above (example: Feliz Navidad, at least for me). Will test your recent changes tomorrow.

@photovoltex
Member Author

Wait, I take that back... I have a track where it behaves like that and others where it doesn't... the hell

Yup. Now to make it more confusing, it isn't even always the first track after skipping, as I described above (example: Feliz Navidad, at least for me). Will test your recent changes tomorrow.

I looked at "Feliz Navidad" and there is another "Feliz Navidad" in the autoplay context, but with a different uid and uri... so that's impossible to filter out... I will investigate it a bit more; it might be that we just need to adjust the autoplay request or so.

@photovoltex
Member Author

This should "fix" the weird behavior, as we now provide all recently played songs, which will also contain the current track, so it should no longer be included in the autoplay tracks. But that's for Spotify to decide in the end^^

@photovoltex photovoltex marked this pull request as ready for review December 20, 2024 20:42
@PocketMiner82

Wait, I take that back... I have a track where it behaves like that and others where it doesn't... the hell

Yup. Now to make it more confusing, it isn't even always the first track after skipping, as I described above (example: Feliz Navidad, at least for me). Will test your recent changes tomorrow.

I looked at "Feliz Navidad" and there is another "Feliz Navidad" in the autoplay context, but with a different uid and uri... so that's impossible to filter out... I will investigate it a bit more; it might be that we just need to adjust the autoplay request or so.

@photovoltex I tested it and the tracks are no longer duplicated. I was also quite sure that when I tried playing Feliz Navidad, the exact same track was duplicated; now it's a track with even a different name... so issue fixed, I guess :D

Selecting a song when playing shuffled also works now.

@roderickvd roderickvd requested a review from Copilot December 23, 2024 09:31
@Copilot Copilot AI left a comment

Copilot reviewed 6 out of 13 changed files in this pull request and generated 1 comment.

Files not reviewed (7)
  • connect/src/state/options.rs: Evaluated as low risk
  • connect/src/state/transfer.rs: Evaluated as low risk
  • core/src/spclient.rs: Evaluated as low risk
  • connect/src/state/handle.rs: Evaluated as low risk
  • examples/play_connect.rs: Evaluated as low risk
  • connect/src/lib.rs: Evaluated as low risk
  • connect/src/state/tracks.rs: Evaluated as low risk
Comments suppressed due to low confidence (5)

connect/src/context_resolver.rs:170

  • The debug message contains a minor grammar issue. It should be 'context was requested {} seconds ago'.
debug!("context was requested {}s ago, trying again to resolve the requested context", last_try.expect("checked by condition").as_secs());

connect/src/context_resolver.rs:232

  • The behavior of get_next_context when UpdateContext::Autoplay is used should be covered by tests to ensure that it handles unsupported contexts for podcasts correctly.
pub async fn get_next_context(&self, recent_track_uri: impl Fn() -> Vec<String>) -> Result<Context, Error> {

connect/src/context_resolver.rs:33

  • [nitpick] The name ResolveContext might be ambiguous. It could be renamed to ContextResolution to better reflect its purpose.
pub(super) struct ResolveContext {

core/src/dealer/protocol.rs:177

  • The word "send" should be corrected to "sent".
trace!("message was send with {encoding} encoding ");

connect/src/state.rs:358

  • The update_context_index method is called with new_index + 1, which might cause an off-by-one error. It should be called with new_index.
self.update_context_index(self.active_context, new_index + 1)?;

connect/src/context_resolver.rs Outdated Show resolved Hide resolved
Member

@roderickvd roderickvd left a comment


Thanks, some general feedback on the code.


fn find_next(&self) -> Option<(&ResolveContext, &str, usize)> {
let mut idx = 0;
loop {
Member

Why not change into a for loop? You can still break out of that.
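For context, the `loop` with a manual index can usually be rewritten mechanically, keeping the early exit. A small self-contained example of the same transformation (not the actual find_next code; the search predicate is made up):

```rust
// Index-based `loop` (the original pattern).
fn find_first_even_loop(items: &[u32]) -> Option<(usize, u32)> {
    let mut idx = 0;
    loop {
        if idx >= items.len() {
            break None; // exhausted without a match
        }
        if items[idx] % 2 == 0 {
            break Some((idx, items[idx]));
        }
        idx += 1;
    }
}

// Same logic as a `for` loop; early exit still works via `return`.
fn find_first_even_for(items: &[u32]) -> Option<(usize, u32)> {
    for (idx, &item) in items.iter().enumerate() {
        if item % 2 == 0 {
            return Some((idx, item));
        }
    }
    None
}

fn main() {
    let items = [1, 3, 4, 6];
    assert_eq!(find_first_even_loop(&items), Some((2, 4)));
    assert_eq!(find_first_even_loop(&items), find_first_even_for(&items));
    println!("both variants agree");
}
```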

Resolve::Uri(ref uri) => ConnectState::valid_resolve_uri(uri),
Resolve::Context(ref ctx) => ConnectState::get_context_uri_from_context(ctx),
}
.or(self.fallback.as_deref())
Member

Does this work correctly? I believe or is eagerly evaluated.

Member Author

As it doesn't evaluate any values, it shouldn't matter whether it's eager or not. Unless there is some benefit to it, I would just keep it as is.

Member

Just thinking that this will probably always evaluate to None because of the eager evaluation at compile time, not sure if that's the behavior you intend?

Member Author

But why would it always evaluate to None? I think eagerly evaluated just means it is always/early evaluated. As self.fallback is an Option<String>, it should only provide the value in place. I think the only difference would be the evaluation of as_deref(), but that is only used here to convert Option<String> into Option<&str>.

Just want to understand where you are coming from, as from my perspective it shouldn't make a difference.

Member

You are right.
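For anyone following along: the difference between the two combinators only matters when computing the fallback has a cost or side effect; with a plain field access like `self.fallback.as_deref()` both behave identically. A small illustration (the names here are made up for the example):

```rust
// Hypothetical expensive fallback; counts how often it is computed.
fn expensive_fallback(calls: &mut u32) -> Option<String> {
    *calls += 1;
    Some("fallback".to_string())
}

fn main() {
    let primary: Option<String> = Some("uri".to_string());
    let mut calls = 0;

    // `or` evaluates its argument eagerly: the fallback expression runs
    // even though `primary` is `Some`.
    let a = primary.clone().or(expensive_fallback(&mut calls));
    assert_eq!(a.as_deref(), Some("uri"));
    assert_eq!(calls, 1); // fallback was computed anyway

    // `or_else` is lazy: the closure only runs if the value is `None`.
    let b = primary.clone().or_else(|| expensive_fallback(&mut calls));
    assert_eq!(b.as_deref(), Some("uri"));
    assert_eq!(calls, 1); // unchanged

    // A plain field access like `self.fallback.as_deref()` has no side
    // effects, so using `or` there is harmless.
    println!("or computed the fallback {calls} time(s)");
}
```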


impl Eq for ResolveContext {}

impl Hash for ResolveContext {
Member

I guess deriving Hash on the structs doesn't work?

Member Author

Not quite easily, as Context doesn't implement Hash. I could implement Hash and Eq in the protocol crate, then I could just derive the trait. Both ways seem fine to me. Just tell me which you would prefer :)

Member

No problem. Certainly if you need to patch up protobuf generated code, I'd say don't bother.
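For reference, the manual route looks roughly like this; the important part is that Hash and Eq agree on the same fields. The types here are simplified stand-ins, not the generated protobuf Context or the real ResolveContext:

```rust
use std::hash::{Hash, Hasher};

// Stand-in for the protobuf-generated `Context`, which doesn't derive Hash.
struct Context {
    uri: String,
}

struct ResolveContext {
    context: Context,
    autoplay: bool,
}

impl PartialEq for ResolveContext {
    fn eq(&self, other: &Self) -> bool {
        self.context.uri == other.context.uri && self.autoplay == other.autoplay
    }
}
impl Eq for ResolveContext {}

// Manual impl: hash exactly the fields that define equality, so the
// `k1 == k2 implies hash(k1) == hash(k2)` contract holds.
impl Hash for ResolveContext {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.context.uri.hash(state);
        self.autoplay.hash(state);
    }
}

fn main() {
    use std::collections::HashSet;
    let mut set = HashSet::new();
    set.insert(ResolveContext {
        context: Context { uri: "spotify:playlist:x".into() },
        autoplay: false,
    });
    set.insert(ResolveContext {
        context: Context { uri: "spotify:playlist:x".into() },
        autoplay: false,
    });
    assert_eq!(set.len(), 1); // duplicates collapse via Eq + Hash
    println!("set holds {} entry", set.len());
}
```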


pub fn remove_used_and_invalid(&mut self) {
if let Some((_, _, remove)) = self.find_next() {
for _ in 0..remove {
Member

impl TryFrom<SkipTo> for PlayingTrack {
type Error = ();

fn try_from(value: SkipTo) -> Result<Self, ()> {
Member

Most idiomatic to return Self::Error.
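i.e. something like this sketch (the fields are simplified, not the actual SkipTo definition):

```rust
// Hypothetical, simplified versions of SkipTo / PlayingTrack.
struct SkipTo {
    track_index: Option<u32>,
}

enum PlayingTrack {
    Index(u32),
}

impl TryFrom<SkipTo> for PlayingTrack {
    type Error = ();

    // Returning `Self::Error` instead of repeating the concrete type keeps
    // the signature in sync if the associated error type changes later.
    fn try_from(value: SkipTo) -> Result<Self, Self::Error> {
        value.track_index.map(PlayingTrack::Index).ok_or(())
    }
}

fn main() {
    assert!(PlayingTrack::try_from(SkipTo { track_index: Some(3) }).is_ok());
    assert!(PlayingTrack::try_from(SkipTo { track_index: None }).is_err());
    println!("conversion behaves as expected");
}
```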

.first()
.and_then(|p| p.tracks.first().map(|t| &t.uri))
pub fn get_context_uri_from_context(context: &Context) -> Option<&str> {
match Self::valid_resolve_uri(&context.uri) {
Member

At your option (heh) you could also go for or_else.

let ctx = match self.get_context(&new_context) {
let player = self.player_mut();
player.context_metadata.clear();
player.restrictions.clear();
Member

How much can context_metadata and restrictions differ in size? Clearing keeps the memory allocated; does it make sense to free it (shrink it) or keep it around? I can imagine it makes sense to keep it, so just asking.

Member Author

Never thought about that before. Anyhow, the amount of metadata provided for a context can differ a lot. For example, an artist context has about two entries, while Discover Weekly has more than 10 entries. For restrictions it's usually one to six, but it could be more.

For restrictions it probably doesn't make much sense to free it, but for the metadata it could. Then again, for just ten entries it might not be worth freeing the instance.

Btw, by freeing do you mean dropping the current map and creating a new instance?

Member

Two options:

  1. Dropping then assigning a new object, like you say, or

  2. Calling shrink_to_fit.

Honestly, I don't know the performance difference between dropping+creating versus clearing+shrinking; probably both are just fine in this case.
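The difference between the two options can be observed directly on a HashMap (the sizes here are arbitrary, just to make the capacities visible):

```rust
use std::collections::HashMap;

fn main() {
    let mut metadata: HashMap<String, String> = HashMap::with_capacity(64);
    for i in 0..64 {
        metadata.insert(format!("key{i}"), format!("value{i}"));
    }

    // `clear` drops the entries but keeps the allocated buckets around,
    // so re-filling a similarly sized map avoids reallocation.
    metadata.clear();
    assert!(metadata.capacity() >= 64);

    // Option 2: `shrink_to_fit` releases the now-unused capacity.
    metadata.shrink_to_fit();
    assert!(metadata.capacity() < 64);

    // Option 1: dropping and assigning a fresh map starts from nothing.
    metadata = HashMap::new();
    assert_eq!(metadata.capacity(), 0);
    println!("final capacity: {}", metadata.capacity());
}
```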

- use drain instead of for with pop
- use for instead of loop
- use or_else instead of match
- use Self::Error instead of the value
- free memory for metadata and restrictions
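The "use drain instead of for with pop" change from the commit list above boils down to this kind of rewrite (the data is illustrative, not the actual contexts):

```rust
fn main() {
    // Original pattern: remove the first `remove` elements one by one.
    let mut contexts = vec!["a", "b", "c", "d"];
    let remove = 2;
    for _ in 0..remove {
        contexts.remove(0);
    }
    assert_eq!(contexts, ["c", "d"]);

    // `drain` expresses the same bulk removal in one call and can also
    // yield the removed items if they are still needed.
    let mut contexts = vec!["a", "b", "c", "d"];
    let removed: Vec<_> = contexts.drain(..remove).collect();
    assert_eq!(removed, ["a", "b"]);
    assert_eq!(contexts, ["c", "d"]);
    println!("removed {:?}, kept {:?}", removed, contexts);
}
```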
@photovoltex
Member Author

I added another fix around is_playing and is_paused, as is_playing was reported incorrectly because of the weird expectations of connect devices. I had the fix on a different branch, but it seems more fitting to resolve the issue in the current context.

@photovoltex
Member Author

photovoltex commented Dec 23, 2024

And another small fix that applies the options only as provided by the SpircLoadCommand and by the dealer.

Last change, as I will not add anything more.

# Conflicts:
#	connect/src/model.rs
#	connect/src/spirc.rs
#	connect/src/state.rs
#	connect/src/state/context.rs
#	connect/src/state/transfer.rs
#	protocol/src/lib.rs