Support multicast destination with "all confirm" semantics #33

Open
rocketraman opened this issue Nov 19, 2012 · 6 comments

@rocketraman
Contributor

Currently, only one destination in a multicast destination scenario needs to confirm a message before the message is deleted/marked for deletion.

It would be very useful to support a scenario in which all multicast destinations need to confirm the message before it is deleted. For example, if a message needs to be sent to multiple external systems, and one system is down, then the message would remain in the journal, for the destination actor that sends to that system only.
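The requested semantics can be made concrete with a small sketch. This is hypothetical bookkeeping, not eventsourced's actual API: one shared journal entry tracks which destinations have confirmed, the entry is deleted only when the set of unconfirmed destinations is empty, and redelivery targets only the destinations that have not yet confirmed.

```python
# Hypothetical sketch (not eventsourced's API): "all confirm" semantics for a
# multicast destination. The journal entry is shared by all destinations and
# is deleted only once every destination has confirmed; redelivery targets
# only the destinations that have not yet confirmed.

class MulticastJournal:
    def __init__(self, destinations):
        self.destinations = set(destinations)
        self.pending = {}  # message id -> set of still-unconfirmed destinations

    def write(self, msg_id):
        # One journal entry shared by all destinations.
        self.pending[msg_id] = set(self.destinations)

    def confirm(self, msg_id, destination):
        unconfirmed = self.pending.get(msg_id)
        if unconfirmed is None:
            return
        unconfirmed.discard(destination)
        if not unconfirmed:
            del self.pending[msg_id]  # all confirmed: safe to delete

    def redelivery_targets(self, msg_id):
        # Only unconfirmed destinations receive the message again.
        return set(self.pending.get(msg_id, ()))


journal = MulticastJournal(["db-writer", "remote-system"])
journal.write(1)
journal.confirm(1, "db-writer")        # remote-system is down
assert journal.redelivery_targets(1) == {"remote-system"}
journal.confirm(1, "remote-system")    # recovered and confirmed
assert journal.redelivery_targets(1) == set()
```

The names (`MulticastJournal`, `db-writer`, `remote-system`) are illustrative only; the point is the per-destination confirmation set attached to a single journal entry.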

@krasserm
Contributor

If you want a message to be redelivered only to the external system that was unavailable during the initial delivery, use a separate channel for each destination. For example (with cardinalities in parentheses)

processor(1) -> router(1) -> channel(n) -> destination(n)

ensures that delivery is done for each destination individually and independently. This can already be done with eventsourced without any further additions. In my previous post I was referring more to the following scenario.

processor(1) -> channel(1) -> router(1) -> destination(n)

In this case, if one or more destinations do not confirm, the message would be re-delivered (re-routed) to all destinations. This would, however, require an addition to the router (or the multicast processor).

As far as I understood, you'd like to have support for the first scenario. Would you also like to have support for the second scenario?
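The difference between the two topologies can be sketched as a tiny decision function (illustrative only, with assumed redelivery semantics): with a channel per destination, only failed destinations see a redelivery; with one channel in front of the router, a single missing confirmation re-routes the message to every destination.

```python
# Sketch contrasting the two topologies under partial confirmation.
# "channel_per_destination": processor -> router -> channel(n) -> destination(n)
# "channel_before_router":   processor -> channel(1) -> router -> destination(n)
# Assumed semantics: in the first topology each channel tracks its own
# confirmation; in the second, one unconfirmed delivery re-routes to all.

def redelivered_to(topology, destinations, confirmed):
    failed = [d for d in destinations if d not in confirmed]
    if topology == "channel_per_destination":
        return failed                                # only failed destinations
    if topology == "channel_before_router":
        return list(destinations) if failed else []  # all or nothing
    raise ValueError(f"unknown topology: {topology}")


dests = ["a", "b", "c"]
assert redelivered_to("channel_per_destination", dests, {"a", "b"}) == ["c"]
assert redelivered_to("channel_before_router", dests, {"a", "b"}) == ["a", "b", "c"]
```

This also shows why the second scenario needs an addition to the router or multicast processor: something has to remember which destinations already confirmed, or confirmed destinations will see duplicates.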

@rocketraman
Contributor Author

Currently I am interested in the first scenario. But as I understand it, if I create n separate channels, then the message will be written to disk n times, which is what I am trying to avoid. Is my understanding correct?

@ghost ghost assigned krasserm Nov 20, 2012
@krasserm
Contributor

OK, now it's clear to me what you want. However, the Multicast processor of the library serves a different purpose: it allows several event-sourced processors to 'share' the same entry in the journal.

Having the same optimization for n reliable channels (so that they 'share' the message to be delivered in the journal) does indeed require an addition to the library. You cannot use Multicast for that. I think it makes sense to implement that optimization.

In the meantime, I recommend using n reliable channels as proposed in my previous message. This involves more disk IO but won't require more disk space long-term, as messages written by reliable channels are deleted after delivery.
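The trade-off can be illustrated with a minimal model (assumed semantics, not eventsourced's implementation): each reliable channel persists its own copy before delivery, so n channels cost n writes per message, but each copy is deleted on confirmation, so steady-state disk usage stays bounded.

```python
# Rough illustration of the n-reliable-channels workaround: n writes of IO
# per message, but no long-term disk space, since each channel deletes its
# copy after the destination confirms. Assumed semantics, for illustration.

class ReliableChannel:
    def __init__(self):
        self.stored = []   # messages currently on disk for this channel
        self.writes = 0    # total write IO performed

    def deliver(self, msg):
        self.stored.append(msg)  # persisted before delivery is attempted
        self.writes += 1

    def confirm(self, msg):
        self.stored.remove(msg)  # deleted after successful delivery


channels = [ReliableChannel() for _ in range(3)]
for ch in channels:
    ch.deliver("event-1")
    ch.confirm("event-1")

assert sum(ch.writes for ch in channels) == 3   # n writes per message...
assert all(not ch.stored for ch in channels)    # ...but no space retained
```

For large events (as described later in this thread), the n-fold write IO is the real cost, which is what the proposed shared-journal-entry optimization would remove.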

@rocketraman
Contributor Author

My events can be quite large (they contain a bunch of binary data), which is why I was trying to avoid writing them to the journal twice.

I will do some performance testing with the dual channel approach you describe and see if it is ok.

@krasserm
Contributor

Maybe I should mention a further alternative: you could also use a reliable channel to a destination that sends the message to a message broker (such as RabbitMQ), with the broker then responsible for optimized storage on disk, distribution to multiple destinations, and dealing with destination failures. This was actually one of the primary use cases for implementing the ReliableChannel in eventsourced. Eventsourced is not meant to be a message broker. Nevertheless, I still think the discussed optimization should be implemented. WDYT?

Just out of interest, are you using only the reliable channel of eventsourced or also its event-sourcing features?

@rocketraman
Contributor Author

Currently, I'm only interested in the reliable channel part of eventsourced, as it does all the hard work for me in terms of journaling as well as integration with my Akka actors. I may very well need the pure event-sourcing part later.

Thanks for the suggestion of using a broker. At this time I want to avoid the hassle of another large component in my application for a relatively simple requirement. What I am really trying to achieve at this point is deferring the persistence of certain data that is quite large and takes a long time to transfer to my database (and therefore creates locks in the DB that slow down other operations as well). By journaling the data to disk locally, I can transfer the data to the DB asynchronously from the main application thread, while still maintaining the overall persistence guarantee.

So one actor will persist that data for long-term access and the other actor will take some other action on the data, in this case involving a remote system. Hence my need for a ReliableChannel that can feed two actors.

I definitely think it would be a good optimization -- it would be very useful in any use case where given data needs to be sent to multiple external systems.

@ghost ghost assigned krasserm May 13, 2013