ASGIRef 3.4.1 + Channels 3.0.3 causes non-deterministic 500 errors serving static files #1722
Would you say this is an asgiref issue? It would seem that, in this case, the deadlock detection is doing the thing it's meant to do, which is protecting you from a thread-exhaustion deadlock (if that is the cause). What in asgiref do you think needs changing to fix it?
Whether it's Channels or ASGIRef, one of them needs a fix to be able to handle large numbers of fast requests, which is what's currently breaking our SPA served by Django using Channels 3.0.3 and a (currently pinned) ASGIRef 3.3.4, since the behaviour mentioned was introduced afterwards by django/asgiref@13d0b82. On a fresh "app-root" request we serve hundreds of small HTML/script files, as our architecture is component-based. If you feel this should be a Channels issue, I'm happy to replicate this ticket over there.
I'm happy to take this over to Channels. If we're hitting an issue here, then the new code is doing the right thing. I'll comment on this:
The static files handler is a development convenience. It's not at all intended for production; rather you should run a proper static files setup. Also, if you're still hitting this after that: if you serve any plain Django views via WSGI (gunicorn, say) and leave only the Consumers to be handled by ASGI, you'll not trigger this code path at all. (That's not a fix, but it will get you going in a robust way.)
Thanks for the feedback. Our static files in production are correctly served by S3. The issue did not trigger in production, but we saw it in the dev environment in a scenario directly analogous to the example provided. I understand that static file serving is a convenience and is not representative of how production works. I'm just afraid that this specific behaviour may be triggered in other scenarios, and investigating this already led to the ASGIRef 3.4.1 fix, so I still believe we are on to something.
Hi @rdmrocha — Yes, I think it's worth looking into, but I think the error is showing we have a potential issue in the static files handler, rather than an issue in asgiref… — If you're happy to keep digging, that's super. Thanks!
Btw, if anyone gets this error during Selenium tests, you can turn off static file serving. We use Whitenoise anyway, so disabling static file serving fixed it for us:

```python
class OurSpecialTestCases(ChannelsLiveServerTestCase):
    serve_static = False


class OurTestCase(OurSpecialTestCases):
    ...
```

(ps tyvm for Channels, it has made our lives much easier!)
We're consistently seeing this in development with runserver, often enough for it to be a major inconvenience since we upgraded our Channels dependencies:
I've been seeing it in development too, with around 0.5% to 1% of requests failing on asgiref 3.4.1 and channels 3.0.4. It's not a big deal when an image fails, but we use code splitting, so when it hits one of our JS files the entire app fails to load. When testing a page that makes around 20 static file requests at a time, we observed the exception being thrown as early as the 5th request. I have not seen the error in production, but there we serve all static files either through S3 or uWSGI.
My two cents here! I really appreciate everyone who has contributed to Django, ASGI and Channels, but this needs very serious investigation. We all know that static files must be served separately, and we do that in production, but that's not the point. Essentially the stack is no longer capable of handling the same amount of load as previous versions of Channels and asgiref, so whoever implemented these updates should go back and make sure we don't degrade Django/ASGI/Channels performance; otherwise all the great things introduced with the updates become pointless, even harmful, and can cause a lot of trouble for many. For example, I see this exact behaviour in a fairly standard Django admin, where the admin tries to load a few additional script and style files, but the dev server seems unable to handle it. It feels like it can't handle more than ~20 concurrent requests at a time to the /static/ endpoint.
Hi, what is the recommended course of action regarding this problem? Thanks.
We are seeing this in development too.
Right now I'm using the most up-to-date versions, with Python 3.9.9.
I hope I can be of help. I don't know if it's just me, but I only saw this happen with the /static folder and not with /media… If that's the case, it would mean that Django's default STATIC_URL serving may have a buggy difference compared with adding a custom static route in urls.py (the django.views.static.serve method works OK). To test this, you can change the templates to point the static references at a route served that way.
A simpler way to test this, without changing the templates, is to serve the static prefix itself through django.views.static.serve in urls.py (see the sketch below).
(I tried both ways, reloading many times, and the error 500 is gone.)
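A minimal sketch of that urls.py approach, as I read the suggestion above (not the commenter's exact code; it assumes the usual STATIC_URL = '/static/' and that STATIC_ROOT has been populated, e.g. by collectstatic):

```python
# urls.py: serve /static/ through django.views.static.serve in development,
# bypassing the staticfiles app's default dev handler.
from django.conf import settings
from django.urls import re_path
from django.views.static import serve

urlpatterns = [
    # ... your regular routes ...
]

if settings.DEBUG:
    urlpatterns += [
        re_path(r"^static/(?P<path>.*)$", serve, {"document_root": settings.STATIC_ROOT}),
    ]
```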
Before this, pinning asgiref below 3.4 would work. Recent releases expose systems to this issue.
@carltongibson, regarding this: hi, and thank you! I know this was from a few months back, but could you perhaps provide an example of what this would look like and where it'd be delegated (gunicorn command, ASGI application config, Django settings)? E.g. assuming the most basic setup with gunicorn plus uvicorn.
In my case I'm also using Whitenoise (not sure if that's a plus or not).
@rdmrocha, @ckcollab, @codekiln, @blayzen-w @Simanas @karatemir @ShaheedHaque @LinkkG Some questions about when you experience this:
My case:
same issue.
The underlying problem is channels' StaticFilesWrapper. Fortunately, there is a workaround that doesn't involve patching channels:

```python
# settings.py
if DEBUG:
    INSTALLED_APPS = [a for a in INSTALLED_APPS if a != 'django.contrib.staticfiles']
```

```python
# urls.py
if settings.DEBUG:
    from django.apps import apps

    if not apps.is_installed('django.contrib.staticfiles'):
        from django.contrib.staticfiles.urls import staticfiles_urlpatterns

        urlpatterns = staticfiles_urlpatterns() + urlpatterns
```

And you're done. Local tests indicate that the problems with concurrent requests for static files are gone and all works as intended.
Fair warning, I don't use channels, so I could be entirely barking up the wrong tree, but here I am anyway, pointing things out (and teaching grandmother to suck eggs, possibly) in case they are relevant in sparking further discussion of solutions. From what I can see, the static files handling in channels duplicates what Django's own handler now provides, i.e. Django's implementation looks like it could be used instead. Apologies if I'm just muddying things further or not proving useful...
@kezabelle Yes. Using Django's implementation and removing the Channels one is the way to go; Channels' handler predates Django having one. The proximate reason that's not been done yet was the need to maintain Django 2.2 support, but we can let that go now. (See also #1795 — time for a major version bump, again.) We need to keep … In the meantime (responding to various points above):
@carltongibson @andrewgodwin Thank you for being courteous and for your efforts on the project in general!
Does this imply there will be a change to channels itself, and that this issue will/may be overcome by events? As for the other points: I hope to see some examples materialize from this. Maybe it warrants a Q&A discussion to share those if this gets closed?
On Thu, 27 Jan 2022, 20:33 Tony Narlock, ***@***.***> wrote:

- Development is slow. It’s all volunteer. But it’s inexorable. Year-by-year it comes on. (Contributions welcome!)
- That applies double during the pandemic. My own bandwidth for Channels had been exceedingly constrained since (approx) last summer.

@carltongibson @andrewgodwin Thank you for being courteous and for your efforts on the project in general!

Hear, hear.

I imagine that I am like many others in that my live setup, which uses nginx to serve static files, shouldn't even see the issue.
As @fmgoncalves has mentioned, one way is to alter how the files are served, but I have found a slightly more reliable patch that I have implemented in my Django Channels setup, based on the information provided by their post. It seems that the change in thread_sensitive behaviour is what bites here. That being said, I felt it would be safe to monkey patch sync_to_async.
Then you just overwrite the existing instance of asgiref's sync_to_async method with our patched wrapper that enforces thread_sensitive=False.
This makes Channels, and all of Django, run in a thread-insensitive manner like it did before the asgiref update.
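A rough sketch of the kind of patch described above (my reconstruction, not the commenter's exact code). It has to run before Channels/Django import and apply sync_to_async (e.g. very early in settings or asgi.py), and it removes the thread-safety guarantees the flag exists to provide, so treat it as a development-only stop-gap:

```python
import functools

import asgiref.sync
from asgiref.sync import sync_to_async as _original_sync_to_async


def _insensitive_sync_to_async(func=None, *, thread_sensitive=True, executor=None):
    """Wrapper around asgiref's sync_to_async that always forces thread_sensitive=False."""
    if func is None:
        # Support the decorator-with-arguments form: @sync_to_async(thread_sensitive=...)
        return functools.partial(_insensitive_sync_to_async, executor=executor)
    return _original_sync_to_async(func, thread_sensitive=False, executor=executor)


# Overwrite the attribute that later lookups of asgiref.sync.sync_to_async resolve.
asgiref.sync.sync_to_async = _insensitive_sync_to_async
```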
Having run into this issue and dug into its root cause, I think I can provide some insight. As I understand it, the deadlock detection in asgiref works like this:
The issue here is that contexts may be re-used by daphne / twisted in the case of persistent connections. When a second HTTP request is sent on the same TCP connection, twisted re-uses the same context from the existing connection instead of creating a new one.
So in twisted, context variables are per connection, not per HTTP request. This subtle difference then causes a problem because the deadlock detection keeps its state in context variables.
So what I think is happening is this sequence of events:
If step 6 blocked instead of erroring, all would be fine, since the sync thread would have finished anyway. I don't think there's a deadlock here, and I don't think the deadlock detection code in asgiref is working properly.
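To illustrate the context re-use point with a standalone sketch (plain asyncio and contextvars, not asgiref or daphne code): two "requests" handled sequentially in the same task, as they would be on a kept-alive connection, share context variables, so state set while handling the first is still visible while handling the second.

```python
import asyncio
import contextvars

# Stand-in for the kind of per-context flag the deadlock detection relies on.
in_flight = contextvars.ContextVar("in_flight", default=False)


async def handle_request(n):
    print(f"request {n}: flag already set? {in_flight.get()}")
    in_flight.set(True)  # never reset, mimicking state left behind by request 1


async def persistent_connection():
    # Both requests run in the same task, and therefore in the same context,
    # which mirrors twisted re-using the connection's context.
    await handle_request(1)
    await handle_request(2)


asyncio.run(persistent_connection())
# request 1: flag already set? False
# request 2: flag already set? True
```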
@brownan and anyone else: would it be possible to reproduce the behaviour in a test using these steps? Would this test need to be written in asgiref, channels, or django itself?
@tony @brownan My experience is getting 500 errors from Daphne while running Firefox against a live-server test, on macOS. Some files work, some don't -- I guess Firefox opens a few connections to the live server to get static files as soon as possible. So I would try some command-line tool that opens multiple connections to a single server and tries to download something. Tools for testing HTTP, like ab ("Apache Bench"), can be parametrized. Perhaps there is a tool like that in Python that we could use to run in tests... I'll be examining this bug this week, I think, as I'd like it to be fixed.
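Something along these lines might do as a starting point (a standalone sketch using only the standard library; the host, path and counts are placeholders for a local dev server, and each worker re-uses its connection to mimic a browser's keep-alive behaviour, which the analysis above suggests matters):

```python
import concurrent.futures
import http.client

HOST, PORT = "127.0.0.1", 8000
PATH = "/static/js/app.js"   # any static asset served by the dev server
WORKERS = 20                 # parallel connections, roughly what a browser opens
REQUESTS_PER_CONN = 50       # sequential requests re-using each connection


def worker(_):
    failures = []
    conn = http.client.HTTPConnection(HOST, PORT)
    for _ in range(REQUESTS_PER_CONN):
        conn.request("GET", PATH)
        resp = conn.getresponse()
        resp.read()          # drain the body so the connection can be re-used
        if resp.status != 200:
            failures.append(resp.status)
    conn.close()
    return failures


with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as pool:
    all_failures = [s for statuses in pool.map(worker, range(WORKERS)) for s in statuses]

print(f"{len(all_failures)} failed responses: {sorted(set(all_failures))}")
```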
I haven't had any time lately to dig into this further, but keep me updated and I'll help out how I can.
Hi @brownan — nice investigation (#1722 (comment)). This will be resolved by moving to Django's static files handler, which doesn't exhibit the same issue. But if you think this is true...
... and you can reduce that to a minimal example (without Twisted and all that, if possible), a report to django/asgiref would be worthwhile!
Yeah, I realize that's a bold claim to make without a minimal test case 😀 I'll see if I can get the time to do that soon.
OK, I've started work on what will be v4.0. #1890 moves to using Django's static files handling. If anyone wants to give that a run, or follow along there, that would be helpful. Once I've made a bit more progress, I'll open a tracking issue for v4.0 as well.
I tried the main channels branch today on my project (just on my laptop, not in prod!) and it works as expected! Thanks!
Thanks for confirming @JulienPalard. I'm pulling together the releases now, so a few days for the final versions to be live.
4.0.0b1 fixed it for me as well. Can't wait for v4! Thanks!
So this was the very first scenario in which the "Single thread executor" error was found, and it led to me opening django/asgiref#275.
While trying to get a simple repro case for it, we figured out a way to trigger a related error in a very simple way, and that was fixed with https://github.com/django/asgiref/releases/tag/3.4.1.
But testing the new 3.4.1 version against our code-base still yielded the same 500 errors while serving static files (at least) in the dev environment.
I've updated https://github.com/rdmrocha/asgiref-thread-bug with this new repro case, loading a crapload of JS files (1500), but that number can be changed in the views.py file. It doesn't ALWAYS happen (so you might need a hard refresh or two), but when it does, you'll be greeted with something like this:
I believe this is still related to django/asgiref@13d0b82, as reverting to v3.3.4 via requirements.txt makes the error go away.
Looking at the offending code inside channels/http.py, it looks like this might be a thread-exhaustion issue, but this is pure speculation, since the handle method is decorated with sync_to_async:
This is forcing the send callable to become sync, and we're waiting on it like this: await self.handle(scope, async_to_sync(send), body_stream). If there are no more threads available, I speculate that they might end up deadlocked waiting for the unwrap of this async_to_sync-inside-sync_to_async call, eventually triggering the protection introduced in django/asgiref@13d0b82.
But take this last part with a grain of salt as this is pure speculation without diving into the code and debugging it.
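For reference, the pattern being described looks roughly like this (a paraphrase from memory of channels 3.x http.py, heavily trimmed; not an exact excerpt):

```python
from tempfile import SpooledTemporaryFile

from asgiref.sync import async_to_sync, sync_to_async


class AsgiHandler:
    async def __call__(self, scope, receive, send):
        body_stream = SpooledTemporaryFile(max_size=65536, mode="w+b")
        # ... body chunks pulled from `receive` would be written into body_stream here ...
        # handle() is synchronous, so the async `send` callable is wrapped back
        # into a sync one before being handed down.
        await self.handle(scope, async_to_sync(send), body_stream)

    @sync_to_async
    def handle(self, scope, send, body_stream):
        # The synchronous Django request/response cycle runs here and calls
        # send(...) to emit the response, re-entering the event loop through
        # async_to_sync: the sync/async round trip described above.
        ...
```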
Hope it helps