
[SQS] Multiple exceptions thrown during the day "String could not be parsed as XML" #1218

Closed
mynameisbogdan opened this issue Apr 15, 2022 · 6 comments

@mynameisbogdan

Hello,

Firstly, thank you for offering us this amazing client!

I have an issue where I occasionally get exceptions with the message String could not be parsed as XML. It's not the end of the world, but I hope to find a solution to prevent them from spamming the logs any further.

Trace

{
    "class": "Exception",
    "message": "String could not be parsed as XML",
    "code": 0,
    "file": "[/path/site/]vendor/async-aws/sqs/src/Result/ReceiveMessageResult.php:33",
    "trace": [
        "[/path/site/]vendor/async-aws/sqs/src/Result/ReceiveMessageResult.php:33",
        "[/path/site/]vendor/async-aws/core/src/Result.php:133",
        "[/path/site/]vendor/async-aws/sqs/src/Result/ReceiveMessageResult.php:26",
        "[/path/site/]vendor/symfony/amazon-sqs-messenger/Transport/Connection.php:235",
        "[/path/site/]vendor/symfony/amazon-sqs-messenger/Transport/Connection.php:222",
        "[/path/site/]vendor/symfony/amazon-sqs-messenger/Transport/Connection.php:194",
        "[/path/site/]vendor/symfony/amazon-sqs-messenger/Transport/Connection.php:181",
        "[/path/site/]vendor/symfony/amazon-sqs-messenger/Transport/AmazonSqsReceiver.php:44",
        "[/path/site/]vendor/symfony/messenger/Worker.php:104",
        "[/path/site/]vendor/symfony/messenger/Command/ConsumeMessagesCommand.php:225",
        "[/path/site/]vendor/symfony/console/Command/Command.php:298",
        "[/path/site/]vendor/symfony/console/Application.php:1033",
        "[/path/site/]vendor/symfony/framework-bundle/Console/Application.php:96",
        "[/path/site/]vendor/symfony/console/Application.php:299"
    ]
}

I see that RetryableHttpClient is already used with AwsRetryStrategy and a maxRetries of 3.
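If more retries would help, presumably the client could be wired manually with a higher limit. A minimal sketch on my side, assuming the SqsClient constructor accepts any HttpClientInterface, that async-aws only does its own RetryableHttpClient wrapping when no client is passed, and with the region as a placeholder:

```php
use AsyncAws\Core\HttpClient\AwsRetryStrategy;
use AsyncAws\Sqs\SqsClient;
use Symfony\Component\HttpClient\HttpClient;
use Symfony\Component\HttpClient\RetryableHttpClient;

// Wrap the base client ourselves so the retry count is under our control.
$httpClient = new RetryableHttpClient(
    HttpClient::create(),
    new AwsRetryStrategy(),
    5 // instead of the default maxRetries of 3
);

$sqs = new SqsClient(['region' => 'eu-west-1'], null, $httpClient);
```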

What can I do to further prevent this issue?

@jderusse
Member

That's weird. Do you have an example of the content sent by AWS when such an exception is thrown?

I believe AWS (or your local network) had a temporary outage and the content received by the client was unexpected (i.e. a 500 error).
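If you manage to catch it in the act, a small decorator around the HTTP client would let you log the raw payload. A rough sketch (the class name is made up, and buffering the body defeats the async behaviour, so use it for debugging only):

```php
use Psr\Log\LoggerInterface;
use Symfony\Contracts\HttpClient\HttpClientInterface;
use Symfony\Contracts\HttpClient\ResponseInterface;
use Symfony\Contracts\HttpClient\ResponseStreamInterface;

final class BodyLoggingHttpClient implements HttpClientInterface
{
    public function __construct(
        private HttpClientInterface $inner,
        private LoggerInterface $logger,
    ) {
    }

    public function request(string $method, string $url, array $options = []): ResponseInterface
    {
        $response = $this->inner->request($method, $url, $options);

        // getContent(false) buffers the body without throwing on 4xx/5xx,
        // so whatever AWS actually sent ends up in the logs.
        $this->logger->debug('AWS raw response', [
            'status' => $response->getStatusCode(),
            'body' => $response->getContent(false),
        ]);

        return $response;
    }

    public function stream($responses, ?float $timeout = null): ResponseStreamInterface
    {
        return $this->inner->stream($responses, $timeout);
    }

    public function withOptions(array $options): static
    {
        return new self($this->inner->withOptions($options), $this->logger);
    }
}
```

You could then pass an instance of it as the third argument of the SqsClient constructor, the same way as a custom retry client.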

@mynameisbogdan
Author

Hello @jderusse,

> That's weird. Do you have an example of the content sent by AWS when such an exception is thrown?

Sadly, apart from that trace I have no other data in the logs, and I can't reproduce it.

> I believe AWS (or your local network) had a temporary outage and the content received by the client was unexpected (i.e. a 500 error).

I believe the issue is with the local data center where it's hosted, even though I had zero issues when dispatching messages. But seeing that SqsClient already uses a retry strategy, maybe 3 retries aren't enough in my case?

I know the default poll_timeout is 0.1, so I bumped it to 0.2 a while back; now it's set to 0.5.
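For reference, here is roughly where that lives. A sketch using Symfony's PHP-style config (the transport name and env var are placeholders for my real ones):

```php
// config/packages/messenger.php
use Symfony\Component\DependencyInjection\Loader\Configurator\ContainerConfigurator;

return static function (ContainerConfigurator $container): void {
    $container->extension('framework', [
        'messenger' => [
            'transports' => [
                'async' => [
                    'dsn' => '%env(MESSENGER_TRANSPORT_DSN)%',
                    'options' => [
                        // default is 0.1; a longer poll means fewer requests
                        // and fewer chances to hit a flaky response
                        'poll_timeout' => 0.5,
                    ],
                ],
            ],
        ],
    ]);
};
```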

@jdelaune

We have also been hit by this recently. Unfortunately, we don't log the content of the message either.

@mynameisbogdan
Author

@jdelaune try bumping up poll_timeout. In the 12 days since I raised it from 0.2 to 0.5, I've received only one error so far, which is OK for me.

@jdelaune

We are still getting this error a lot (~47 times an hour), even with poll_timeout set to 0.5. We are polling from an EC2 instance, so it's a bit strange that we'd encounter enough network issues within AWS to cause this.

@armetiz

armetiz commented May 29, 2023

Hi there.

I'm using symfony/messenger with the SQS transport configuration.
I created an issue with a detailed description of the problem.

Note that symfony/messenger uses async-aws.

I thought the problem was the way symfony/messenger handles messages with the "polling" and "buffer" configuration, but it seems the problem comes from the async-aws library. I'm very happy to have made progress on this subject, which has been spamming us with supervisord notifications for over 3 months 😱.

I'm now a little short on ideas for solving this problem, but I'm available to test hypotheses.

Note: I'll try increasing the default poll_timeout.
