This repository has been archived by the owner on Jun 1, 2021. It is now read-only.

Replication throughput improvements #196

Open
krasserm opened this issue Jan 18, 2016 · 2 comments

Comments

@krasserm
Contributor

Improve the current Replicator implementation to support concurrent ReplicationReads from the source event log while executing ReplicationWrites to the target event log. Also, batch sizes for reading can usually be much larger than batch sizes for writing (especially when using Cassandra). Therefore, an event batch read from the source should be split into n smaller batches for sequential writes to the target.
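A minimal sketch of the read/write batch split described above. The names (`BatchSplitter`, `splitIntoWriteBatches`, `writeBatchSize`) are illustrative, not Eventuate's actual API; the idea is just that one large read batch is partitioned into n smaller batches that would then be written to the target log sequentially:

```scala
// Hypothetical helper: partition a large batch of events read from the
// source log into smaller batches for sequential writes to the target log.
object BatchSplitter {
  def splitIntoWriteBatches[A](readBatch: Seq[A], writeBatchSize: Int): Seq[Seq[A]] = {
    require(writeBatchSize > 0, "write batch size must be positive")
    // grouped() yields chunks of at most writeBatchSize elements,
    // preserving event order across the resulting write batches
    readBatch.grouped(writeBatchSize).toSeq
  }
}
```

For example, a read batch of 10 events with a write batch size of 4 would produce three write batches of sizes 4, 4 and 2, which the replicator could then write to the target one after the other.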

@krasserm krasserm changed the title Replicator throughput improvements Replication throughput improvements Mar 14, 2016
@krasserm krasserm added prio and removed prio labels Aug 30, 2016
@magro
Contributor

magro commented Oct 5, 2016

Just to mention it here (something I only became aware of recently): as long as the replication builds upon Akka remoting, the messages for replicating events shouldn't become too big, because Akka remoting is not designed for sending large messages. Currently, Eventuate's replication messages could interfere with Akka remoting's control messages and, in consequence, cause disconnections.

In the long term it would probably be beneficial to switch to something different (Akka Streams?) or to see whether the new remoting (Artery), with its support for a separate subchannel for large messages, can be used.
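For reference, Artery exposes such a subchannel via the `large-message-destinations` setting, which routes messages to the listed actor paths over a dedicated lane so they don't delay control messages like heartbeats. A hedged configuration sketch (the actor path is a placeholder, not from Eventuate):

```hocon
akka.remote.artery {
  enabled = on
  # Messages to these destinations travel over a dedicated large-message
  # lane and do not block system/control messages (e.g. heartbeats).
  # The path below is illustrative only.
  large-message-destinations = [
    "/user/replicationEndpoint"
  ]
}
```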

@krasserm
Contributor Author

krasserm commented Oct 6, 2016

I agree. I have also followed the recent development of akka-remote based on Artery, and we should definitely evaluate it, in addition to akka-stream, as a potential alternative to the current implementation.
