
Retrying message in case of failure. #263

Closed
jayrulez opened this issue Sep 10, 2017 · 12 comments
@jayrulez
Contributor

In the previous version of RawRabbit we could automatically publish a message again by calling context.RetryLater and passing a timespan.

Is there an equivalent feature in the 2.0 version or will I need to do this part myself?

Regards

@pardahlman
Owner

There is! See the acknowledgement tests. You can ack, nack, reject, and retry later by returning the corresponding Acknowledgement:

await subscriber.SubscribeAsync<BasicMessage>(async (received) =>
{
    return new Reject();
});
await subscriber.SubscribeAsync<BasicMessage>(async (received) =>
{
    return new Ack();
});
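(Later in this thread a Retry.In method is mentioned; assuming it follows the same Acknowledgement pattern as Ack and Reject, a delayed retry would look roughly like this. This is a sketch, not verified against the 2.0 API; CanProcess is a made-up placeholder.)

```csharp
// Sketch only: assumes Retry.In(TimeSpan) returns an Acknowledgement,
// like new Ack() / new Reject() above. CanProcess is hypothetical.
await subscriber.SubscribeAsync<BasicMessage>(async (received) =>
{
    if (!CanProcess(received))
    {
        // Re-deliver the message after five minutes instead of acking it.
        return Retry.In(TimeSpan.FromMinutes(5));
    }
    return new Ack();
});
```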

@jayrulez
Contributor Author

Great. Thank you.

I'm not seeing where I can access the retry information.

Say I want to retry the task a maximum of 5 times and then give up, how could I accomplish this?

@pardahlman
Owner

Hi, sorry for the late reply! At the moment, there is no support for doing this in the message handler. The reason for this is that the MessageContext is no longer a mandatory part of the messaging protocol and so there is no natural place for the retry information to exist.

It would be possible to create something like an enricher that is registered client-wide to declare a message-agnostic strategy of the form "retry X times, then do Y" (where Y can be something like publishing to an error exchange). Would that fit your solution?
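Such a client-wide policy might be registered roughly like this. To be clear, this is purely hypothetical: UseRetryPolicy, RetryTimes, and ThenForwardTo do not exist in RawRabbit and are invented names that only illustrate the "retry X times, then do Y" idea.

```csharp
// Hypothetical sketch of a client-wide retry policy registered as a plugin.
// None of these policy names exist in RawRabbit; they illustrate the idea only.
var client = RawRabbitFactory.CreateSingleton(new RawRabbitOptions
{
    Plugins = p => p.UseRetryPolicy(policy => policy
        .RetryTimes(5, delay: TimeSpan.FromSeconds(30)) // retry X times
        .ThenForwardTo("error_exchange"))               // then do Y
});
```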

@jayrulez
Contributor Author

I can see where that solution could be useful. However, it might not work for my use case. We are working with over 30 consumers right now (more in the future). The policies differ greatly among these workers, so a global configuration would not work.

The idea for the enricher would definitely be useful for other use cases though.

Regards

@pardahlman
Owner

Understood. I'm keeping this ticket open, but it is not trivial to just re-implement the behavior from 1.x, as the message context is no longer a key part of the solution. Feel free to chime in with ideas!

@marcingolenia

marcingolenia commented Nov 7, 2017

Hi! Did anyone figure something out? It would be useful to handle messages again with a delay and a retry limit, like in 1.x (based on the docs):

client.SubscribeAsync<BasicMessage>(async (message, context) =>
{
    if (context.RetryInfo.NumberOfRetries > 10)
    {
        throw new Exception($"Unable to handle message '{context.GlobalRequestId}'.");
    }
    if (CanNotBeProcessed())
    {
        context.RetryLater(TimeSpan.FromMinutes(5));
        return;
    }
});

I've found the Enrichers.Polly package, but it seems to be useful for handling topology exceptions, which are not related to exceptions happening during subscription. Any new hints?
At the moment I am thinking about a custom context using the Enrichers.MessageContext package, including the number of retries in it, and combining this with the Retry.In method.
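That workaround might look something like the sketch below. Hedged: the exact Enrichers.MessageContext registration, the two-argument handler signature, and the context base type are assumptions; only Retry.In is named in this thread.

```csharp
// Assumed workaround: carry the retry count in a custom message context
// and combine it with Retry.In. Everything besides Retry.In is an assumption.
public class RetryContext
{
    public int NumberOfRetries { get; set; }
}

await subscriber.SubscribeAsync<BasicMessage, RetryContext>(async (message, context) =>
{
    if (context.NumberOfRetries > 10)
    {
        throw new Exception("Giving up after 10 retries.");
    }
    if (CanNotBeProcessed())
    {
        // Note: the incremented count only survives if the republished
        // message carries the updated context along with it.
        context.NumberOfRetries++;
        return Retry.In(TimeSpan.FromMinutes(5));
    }
    return new Ack();
});
```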

@pardahlman
Owner

Hi @marcingolenia - thanks for reaching out 👍

There are still some parts missing in order to be able to support this fully. The approach taken in 1.x was to add custom headers to the message when requesting a retry and then use those headers to enrich the message context once the timespan was up.

There is some extra code to be written that doesn't feel like a "core" part of the client. At the moment, I'm leaning towards moving all retry classes and logic to a separate enricher. The way I see it, that package should not need to rely on the message context, as you can theoretically build your own execution pipes and work on the IPipeContext.

It will be addressed one way or the other, but ATM I'm struggling to find time for it.

@marcingolenia

marcingolenia commented Nov 10, 2017

Thanks for the feedback! We've just switched to the 1.x version. One question though (since it's hard to find the answer in the source code): let's say we have the following scenario (WithNoAck(false), durable queues and messages):

  1. We subscribed to message X.
  2. We published message X.
  3. We've just consumed the message, but something went wrong, so we call RetryLater(TimeSpan.FromSeconds(20)).
  4. If something goes wrong before RetryLater, the ack is not sent and the message goes back to the queue. Great!
  5. We hit the RetryLater call.
  6. RawRabbit does the magic.
  7. We have message X published on the dead-letter exchange with a TTL of 20000.

Now the question: what happens if something goes wrong in point 6? For example, if the server shuts down? Does RawRabbit first publish the message on the dead-letter exchange and then send the ack to the queue, so we are 100% sure the message won't get lost? Or is the ack sent before publishing on the dead-letter exchange?

Best Regards :)

pardahlman added a commit that referenced this issue Nov 19, 2017
@pardahlman
Owner

Hey @jayrulez - looking to introduce the enricher RawRabbit.Enrichers.RetryLater, which brings back the concept of RetryInformation; see this integration test for more information.
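(With that enricher, the earlier "retry at most 5 times, then give up" question could presumably be answered in the handler itself. A sketch; the handler signature and the RetryInformation property names are assumptions based on this thread, not the linked test.)

```csharp
// Sketch based on the RetryLater enricher description above. The
// two-argument handler signature and property names are assumptions.
await subscriber.SubscribeAsync<BasicMessage, RetryInformation>(async (message, retry) =>
{
    if (retry.NumberOfRetries >= 5)
    {
        // Give up after five attempts instead of retrying again.
        return new Nack(requeue: false);
    }
    return Retry.In(TimeSpan.FromSeconds(20));
});
```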

@pardahlman
Owner

Hi @marcingolenia - thanks for reaching out.

Most of the logic for retry later in 1.x is performed in the ContextEnhancer. The channel used to publish the message is retrieved from the channel factory, which has robust checks for retrieving an open channel on an open connection. Once the message has been published to the "retry later" exchange, the instance of the message currently consumed is acked. Hope this helps!

@jayrulez
Contributor Author

@pardahlman This is a very pleasant surprise. Thank you for the update. We will be making use of this asap.

@pardahlman
Owner

Released in rc2
