
Fix shards_dist and shards specs to consider the case when rpc returns {badrpc, Reason} #43

Closed
cabol opened this issue Jan 9, 2019 · 3 comments

@cabol (Owner)

cabol commented Jan 9, 2019

Currently, the module shards_dist uses rpc to run distributed calls, but the specs do not consider the case when rpc returns {badrpc, Reason}.

I see two ways to handle this:

  1. Just fix the specs to cover this case (the specs have to be fixed not only in shards_dist but also in shards, since it is a wrapper on top of it). In this case we can create a type like:
-type rpc_res(R) :: R | {badrpc, Reason :: term()}.

And reuse it in the specs, like so:

-spec update_counter(
        Tab      :: atom(),
        Key      :: term(),
        UpdateOp :: term(),
        State    :: shards_state:state()
      ) -> rpc_res(integer() | [integer()]).
  2. Handle the {badrpc, Reason} internally and raise an exception with the corresponding error:
%% @private
rpc_call(Node, Module, Function, Args) ->
  handle_rpc_res(rpc:call(Node, Module, Function, Args)).

%% @private
handle_rpc_res({badrpc, {'EXIT', {Reason, _}}}) ->
  error(Reason);
handle_rpc_res(Res) ->
  Res.

And reuse the private function rpc_call/4 internally instead of calling rpc:call/4 directly.

Evaluate both alternatives and pick the best approach.
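A minimal sketch of how a caller could consume option 1's widened return type. The module and function names here (rpc_res_example, to_result/1) are hypothetical, for illustration only; they are not part of shards:

```erlang
-module(rpc_res_example).
-export([to_result/1]).

%% Converts an rpc_res(R) value into a conventional tagged tuple:
%% a successful value passes through as {ok, Result}, while a
%% {badrpc, Reason} failure becomes {error, {rpc_failed, Reason}}.
-spec to_result(R | {badrpc, term()}) ->
        {ok, R} | {error, {rpc_failed, term()}}.
to_result({badrpc, Reason}) ->
  {error, {rpc_failed, Reason}};
to_result(Result) ->
  {ok, Result}.
```

With this shape, callers can pattern match both outcomes explicitly, e.g. `{ok, Counter} = rpc_res_example:to_result(shards:update_counter(Tab, Key, Op, State))`.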

@pggalaviz
In my opinion, I'd rather receive the {:bad_rpc, error} tuple so I can pattern match against it and handle the different outcomes, such as the example I referenced in #42.
So I vote for solution 1.

@cabol (Owner, Author)

cabol commented Jan 9, 2019

Let's go for option one then!

@cabol (Owner, Author)

cabol commented Oct 28, 2020

I'll close this issue since it no longer applies: with shards v1, the distributed module shards_dist will live in a separate repository, so by the time there is a first implementation ready, we will keep this in mind.

@cabol cabol closed this as completed Oct 28, 2020