[WIP] Add shards_cluster for PG support in OTP23 #49
Conversation
src/shards_cluster.erl (outdated)
```erlang
join(Group, Pid) ->
  %% HACK: Maybe implement apply_on_target?
  OwnerNode = node(Pid),
  if
```
Let's use `case` instead.
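For illustration, a minimal sketch of what the `case` version could look like, assuming the function checks whether the pid is local and otherwise forwards the call to the pid's owner node (the `rpc:call/4` forwarding is an assumption based on the `apply_on_target` hint, not necessarily what the PR does):

```erlang
%% Sketch only: assumes join/2 proxies to pg and forwards remote pids
%% to their owner node; the forwarding strategy is an assumption.
join(Group, Pid) ->
  case node(Pid) of
    Node when Node =:= node() ->
      %% Local pid: pg:join/2 expects the pid to live on the calling node.
      pg:join(Group, Pid);
    OwnerNode ->
      %% Remote pid: run the join on the node that owns the pid.
      rpc:call(OwnerNode, pg, join, [Group, Pid])
  end.
```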
Hi @cabol, so everything is implemented except tests, but there are some issues I need your help with.

The biggest issue is that when we call leave or join on …

Another issue that I'm having is …

P.S. I think the CI system may be malfunctioning!? I'm not quite sure! Thanks.
```erlang
@@ -143,7 +143,10 @@ t_join_leave_ops(Config) ->

    % leave node E from SET
    OkNodes2 = lists:usort([node() | lists:droplast(OkNodes1)]),
    OkNodes2 = shards:leave(?SET, [ENode]),
```
This line sometimes fails due to a `pg` propagation issue. I've just ignored the result for now!
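Since `pg` propagates membership changes asynchronously, an alternative to ignoring the result could be to poll until the expected value shows up. A rough sketch of such a test helper (`wait_until/3` is hypothetical, not part of the shards test suite):

```erlang
%% Hypothetical helper: retry a check until it passes or attempts run out,
%% to absorb pg's asynchronous membership propagation.
wait_until(_Check, 0, _SleepMs) ->
  error(pg_propagation_timeout);
wait_until(Check, Retries, SleepMs) ->
  case Check() of
    true  -> ok;
    false -> timer:sleep(SleepMs), wait_until(Check, Retries - 1, SleepMs)
  end.
```

The assertion on `OkNodes2` could then be wrapped in a check that retries for a bounded time instead of discarding the result.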
Hi @alisinabh, first of all, thanks a lot for the effort 😄 !!

Yeah agreed, the …

Overall, I was checking out the issues you mention and also the …

PD: I could work this weekend on this refactor, moving …
Thank you @cabol, I will still be happy to help on this. Please let me know if there is anything I can do. Thanks again.
- #47
- `pg2` usages to `shards_cluster` in other modules and tests.
- `pg2` proxy functions in `shards_cluster`.
- `pg` operations in `shards_cluster` (see the sketch after this list).
- `pg` operations to handle remote join/leave.
- `join/2` and `leave/2` on a list of pids.
- `delete/1` for `pg` using `leave/2` on all pids.
- Wait for completion of remote join and leave, as it makes the output of `shards:leave/join` invalid (because of concurrency). Not possible due to a propagation problem in the `pg` implementation.
- `shards_cluster`.
- `pg` is started before `shards`.
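To make the list above more concrete, here is a minimal sketch of what a `pg`-backed `shards_cluster` proxy could look like. It only covers the `pg` path (OTP 23+); the `pg2` fallback mentioned above is omitted, and all names and forwarding choices (`child_spec/0`, the `rpc:call/4` forwarding, the `delete/1` emulation) are assumptions rather than the actual code in this PR:

```erlang
-module(shards_cluster).

%% Sketch only: a thin proxy over pg (OTP >= 23), based on the
%% operations listed in this PR. Not the actual implementation.

-export([child_spec/0, join/2, leave/2, get_members/1, delete/1]).

%% pg is not started automatically (unlike the old pg2 server), so make
%% sure the default scope is running before shards uses it.
child_spec() ->
  #{id => pg, start => {pg, start_link, []}}.

%% join/2 and leave/2 accept a single pid or a list of pids; remote pids
%% are forwarded to their owner node, since pg only accepts local pids.
join(Group, Pids) when is_list(Pids) ->
  lists:foreach(fun(Pid) -> join(Group, Pid) end, Pids);
join(Group, Pid) ->
  case node(Pid) of
    Node when Node =:= node() -> pg:join(Group, Pid);
    OwnerNode                 -> rpc:call(OwnerNode, pg, join, [Group, Pid])
  end.

leave(Group, Pids) when is_list(Pids) ->
  lists:foreach(fun(Pid) -> leave(Group, Pid) end, Pids);
leave(Group, Pid) ->
  case node(Pid) of
    Node when Node =:= node() -> pg:leave(Group, Pid);
    OwnerNode                 -> rpc:call(OwnerNode, pg, leave, [Group, Pid])
  end.

get_members(Group) ->
  pg:get_members(Group).

%% pg has no equivalent of pg2:delete/1, so emulate it by leaving all
%% current members of the group.
delete(Group) ->
  leave(Group, pg:get_members(Group)),
  ok.
```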