
Refactor away Mutex around BDK #730

Closed
klochowicz opened this issue Jun 2, 2023 · 8 comments
@klochowicz
Contributor

It's a huge problem that we can block the whole app because another part is waiting for the Mutex to be unlocked.

Moving the BDK wallet inside an entity that communicates with other threads via channel can avoid this situation. This is essentially the "actor" pattern.
This is what we've done in ItchySats, and we never had a problem with BDK there.

note: There might not be a need to introduce a whole framework to deal with it; it might be enough to hand-craft it: https://ryhl.io/blog/actors-with-tokio/
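To make the hand-crafted approach concrete, here is a minimal stdlib-only sketch of the pattern (in the real codebase the actor would be a tokio task with tokio::sync::mpsc; `Wallet`, `Command`, and `spawn_wallet_actor` are illustrative names, not the real BDK API):

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative stand-in for the BDK wallet.
struct Wallet {
    balance: u64,
}

// Messages the actor understands; requests that need an answer
// carry their own reply channel.
enum Command {
    GetBalance { reply: mpsc::Sender<u64> },
    Sync,
}

fn spawn_wallet_actor(mut wallet: Wallet) -> mpsc::Sender<Command> {
    let (tx, rx) = mpsc::channel::<Command>();
    thread::spawn(move || {
        // The actor owns the wallet exclusively: no Mutex anywhere.
        for cmd in rx {
            match cmd {
                Command::GetBalance { reply } => {
                    let _ = reply.send(wallet.balance);
                }
                Command::Sync => {
                    // The long-running sync happens here; callers only
                    // enqueue messages and never hold a lock meanwhile.
                    wallet.balance += 1; // placeholder for a real sync
                }
            }
        }
    });
    tx
}

fn main() {
    let actor = spawn_wallet_actor(Wallet { balance: 100 });
    actor.send(Command::Sync).unwrap();

    let (reply_tx, reply_rx) = mpsc::channel();
    actor.send(Command::GetBalance { reply: reply_tx }).unwrap();
    println!("balance = {}", reply_rx.recv().unwrap()); // prints "balance = 101"
}
```

Because the channel is FIFO and only the actor thread touches the wallet, message handling replaces locking entirely.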

@klochowicz klochowicz self-assigned this Jun 2, 2023
@klochowicz klochowicz added the MVP label Jun 2, 2023
@bonomat
Contributor

bonomat commented Jun 2, 2023

Moving the BDK wallet inside an entity that communicates with other threads via channel can avoid this situation.

Can you elaborate on how this can help? I don't see it yet.

This is my understanding of what you are trying:

Create a wallet actor: actor.
actor has an internal task which syncs the internal wallet every X seconds, taking Y seconds each time.

Another service (or task) needs access to the wallet and sends a message to actor.
actor is blocked every X seconds for Y seconds and cannot process incoming messages.

This means service is blocked during Y.

Isn't this the same situation as we have right now?

@luckysori
Contributor

Isn't this the same situation as we have right now?

Not quite. We already figured out a way to do this in ItchySats, see Thomas' comment here.

It's only a workaround until we can upgrade to the latest bdk, which has already dealt with this AFAIK. At the very least, mutiny-node already uses that version of bdk in a way that appears not to block the entire wallet for seconds at a time on every sync.

@klochowicz
Contributor Author

my apologies, I missed this comment yesterday.

additional advantages:

  • prevents holding the lock for longer than needed (can happen if you just expose the lock)
  • some pieces of code could become 'fire and forget', if there's nothing they return and we don't need to wait for it to happen (we could put a command on the queue for processing and just move on), e.g. broadcast_transaction could stop blocking the LDK background processor.
  • if we wanted, we could add a cache inside the actor really easily - no change in the API whatsoever.
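The 'fire and forget' point above can be sketched with a command that carries no reply channel: the caller's send returns immediately, while the slow work happens inside the actor. A stdlib-only sketch under assumed names (`Command`, `spawn_broadcaster` are hypothetical, not the real API):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

// Hypothetical command; Broadcast carries no reply channel, so
// callers never wait for the result.
enum Command {
    Broadcast(String), // stand-in for a raw transaction
}

fn spawn_broadcaster() -> mpsc::Sender<Command> {
    let (tx, rx) = mpsc::channel::<Command>();
    thread::spawn(move || {
        while let Ok(Command::Broadcast(raw_tx)) = rx.recv() {
            // Slow network I/O happens inside the actor.
            thread::sleep(Duration::from_millis(50));
            println!("broadcast {raw_tx}");
        }
    });
    tx
}

fn main() {
    let tx = spawn_broadcaster();

    // The caller (e.g. the LDK background processor) only enqueues
    // the command; send returns well before the I/O completes.
    let start = Instant::now();
    tx.send(Command::Broadcast("deadbeef".into())).unwrap();
    assert!(start.elapsed() < Duration::from_millis(50));
    // (In a real app the actor lives for the process lifetime.)
}
```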

yes, apparently the 1.0-alpha version gets rid of this issue, but I would wait a bit until it matures before using it....

@github-actions

github-actions bot commented Aug 7, 2023

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

@github-actions github-actions bot added the Stale label Aug 7, 2023
@luckysori
Contributor

I think we can close this because we should just update to LDK v0.0.116 and see what design is most desirable.

@luckysori luckysori closed this as not planned Aug 7, 2023
@Restioson
Contributor

This is what we've done in ItchySats, and we never had a problem with BDK there.

Technically we had one issue I think - we need to ensure that CPU-bound tasks (e.g. wallet sync) aren't done on the tokio reactor. Not sure if it still applies, but it might be relevant.

@luckysori
Contributor

luckysori commented Aug 14, 2023

This is what we've done in ItchySats, and we never had a problem with BDK there.

Technically we had one issue I think - we need to ensure that CPU-bound tasks (e.g. wallet sync) aren't done on the tokio reactor. Not sure if it still applies, but it might be relevant.

Absolutely, great point! Although I think there should be ways to still follow the actor pattern with tokio and spawn the blocking work onto the blocking thread pool (or use block_in_place), right?

Funnily enough, I proposed a PR for this at some point and I was making the mistake you highlight here.
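The offloading idea can be sketched without tokio: the actor hands the expensive sync to a separate thread so its message loop stays responsive. With tokio, tokio::task::spawn_blocking (or block_in_place) would play that role; `Ping`/`Sync` and the timings here are illustrative only:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

enum Command {
    Ping { reply: mpsc::Sender<&'static str> },
    Sync,
}

fn spawn_actor() -> mpsc::Sender<Command> {
    let (tx, rx) = mpsc::channel::<Command>();
    thread::spawn(move || {
        for cmd in rx {
            match cmd {
                Command::Ping { reply } => {
                    let _ = reply.send("pong");
                }
                Command::Sync => {
                    // Offload the CPU-bound sync so the actor keeps
                    // answering Ping meanwhile. With tokio this would be
                    // tokio::task::spawn_blocking or block_in_place
                    // rather than a raw thread.
                    thread::spawn(|| {
                        thread::sleep(Duration::from_millis(200)); // fake sync
                    });
                }
            }
        }
    });
    tx
}

fn main() {
    let actor = spawn_actor();
    actor.send(Command::Sync).unwrap();

    let (reply_tx, reply_rx) = mpsc::channel();
    let start = Instant::now();
    actor.send(Command::Ping { reply: reply_tx }).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), "pong");
    // The ping was answered without waiting out the 200 ms sync.
    assert!(start.elapsed() < Duration::from_millis(200));
}
```

If the sync ran inline in the actor loop instead, the Ping would wait the full 200 ms - which is exactly the blocked-actor scenario bonomat described earlier in the thread.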

@Restioson
Contributor

Absolutely, great point! Although I think there should be ways to still follow the actor pattern with tokio and spawn the blocking work onto the blocking thread pool (or use block_in_place), right?

Yep, sounds good!
