Out of memory Crash - 1.6.0-9.22789.0 #3840
https://wiki.multitheftauto.com/wiki/Famous_crash_offsets_and_their_meaning

Unfortunately we can't proceed with checking MTA versions to find which one introduced the issue, since the latest 22789 is now enforced. The last tested-stable for us was 22746. Also, we collected a bit more data on crashes over the last 90 days; it seems like 22780.0 was the latest 'good' one for us.

CRASH COUNT | VERSION
I remember this crash (game crashed on disconnect). Really annoying crash.
Did r22787 work, for instance? If I'm not confused, you guys still haven't found a version where the amount of crashes did not spike. Does your server use CEF?
Yeah, we haven't found the exact one, but the statistics suggest that 22780.0 was the last normal version.
Afaik the last forced minclientversion before r22789 was r22763, according to my logs. Do you have, for instance, data relating the number of users on r22780 to the number of crashes? That would be really useful. Recently some changes have been made in CEF (#2933), so maybe that's the issue. Maybe disabling GPU rendering could stop the crashes?
Can't find any API to disable it; isn't it a compilation flag?
You can disable it from the settings, and you can check whether the client has it enabled.
It's a client setting; it's entirely up to the user. There is no API to control this, as with most other client settings (the server has no authority over them).

As I mentioned in Discord, CEF GPU rendering was introduced in 22771 but was broken due to compositing being re-enabled, then fixed in 22789 (by disabling compositing, but still having GPU enabled by default). I doubt that disabling GPU in CEF will resolve your issue, but you can ask players to try it out (MTA settings > Web Browser > Enable GPU rendering).
If 22780 was good for your players then it's 99% not CEF, since that was (mostly) broken from 22771 to 22789, as mentioned above.
We have 3 users so far who got a lot of crashes and, since disabling GPU rendering, have had no more issues. So I assume that, because we are on the edge with models/textures, adding CEF to video memory takes all the remaining space...
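One way to sanity-check the video-memory theory is to poll dxGetStatus, which exposes MTA's view of video memory. A minimal sketch (hypothetical, not from the thread; the 30-second interval is arbitrary):

```lua
-- Hypothetical check: log MTA's video memory picture every 30 seconds
-- (dx status memory values are reported in MB).
setTimer(function()
    local status = dxGetStatus()
    outputConsole(("VRAM free for MTA: %d MB, used by textures: %d MB"):format(
        status.VideoMemoryFreeForMTA, status.VideoMemoryUsedByTextures))
end, 30000, 0)
```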
Wouldn't it be nice to have a client function to disable CEF GPU rendering, so certain servers can control whether their clients need it or not?
Not viable for two reasons:
In my opinion it's not up to a server to decide that a client can't use GPU in CEF, just because that server wants to push memory limits to breaking point.
After a week of research:

As a result, testers now get this crash not after 30 minutes but after ~1.5-2 hours (even with the CEF GPU setting turned off). I also released some patches to the resource pack, reducing the size of the textures by ~200 MB in total, just for test purposes, to the public. Counting items in the client-side element tree shows no growth over the session time, so we are not leaking elements, shaders or textures. And, to be honest, this is not the first time that CEF gets an update and we start getting crashes: #2446
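A periodic element census is one way to back up that claim; the sketch below is hypothetical (not from the thread) and assumes the standard getElementsByType / outputConsole / setTimer client functions:

```lua
-- Hypothetical sketch: log counts of a few element types every 5 minutes,
-- so growth over a session becomes visible. "shader" and "texture" are the
-- element types created by dxCreateShader / dxCreateTexture; adjust the
-- list to whatever your gamemode creates.
local watchedTypes = {"object", "ped", "vehicle", "shader", "texture"}

setTimer(function()
    for _, elementType in ipairs(watchedTypes) do
        outputConsole(elementType .. ": " .. #getElementsByType(elementType))
    end
end, 5 * 60 * 1000, 0) -- every 5 minutes, forever
```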
Could it be a table (or a series of tables) that is not being cleaned up properly? Global tables, not local ones, because those get picked up by the garbage collector. Memory leaks can also happen because you are not clearing some global variables properly, for example when elements get destroyed, players disconnect, or the data is no longer needed. You can check the memory usage of a resource in the performance browser. This is a little script that should raise your RAM usage (an exaggeration, but you could easily make this mistake):

```lua
-- Shallow-copies a table; non-table values are returned as-is.
function tableCopy(orig)
    local orig_type = type(orig)
    local copy
    if orig_type == 'table' then
        copy = {}
        for orig_key, orig_value in pairs(orig) do
            copy[orig_key] = orig_value
        end
    else -- number, string, boolean, etc.
        copy = orig
    end
    return copy
end

---------------------

allElements = {}
global = {}

function onStart()
    getChildren(root)
    utilizeRAM()
end
addEventHandler("onClientResourceStart", resourceRoot, onStart)

-- Recursively index every element in the tree by its type.
function getChildren(element)
    local children = getElementChildren(element)
    if #children == 0 then
        return
    end
    for key, element in ipairs(children) do
        local elementType = getElementType(element)
        if not allElements[elementType] then
            allElements[elementType] = {}
        end
        local i = #allElements[elementType] + 1
        allElements[elementType][i] = element
        getChildren(element)
    end
end

-- Keep 500,000 copies of the element table alive in a global table,
-- so the garbage collector can never free them.
function utilizeRAM()
    local iMax = 500000
    for i = 1, iMax do
        global[i] = tableCopy(allElements)
    end
end
```

So, if you could check the memory usage of your resources, that would be great. Realistically speaking, I doubt you have a massive memory leak in one of your resources, but that could be the case, so it would be nice to rule that out.
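As a quick supplement to the performance browser, the Lua heap of a running resource can be logged from the script itself. A minimal sketch, assuming only standard setTimer / outputConsole plus Lua's built-in collectgarbage; the one-minute interval is arbitrary:

```lua
-- collectgarbage("count") returns the Lua heap size in kilobytes;
-- log it every 60 seconds so growth over a session is easy to spot.
setTimer(function()
    outputConsole(("Lua heap: %.1f MB"):format(collectgarbage("count") / 1024))
end, 60000, 0)
```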
Could you guys please allow MTA downgrade?
Did you try nightly? |
Nightly 22789, a.k.a. the current force-update version, is the reason for this issue )
So, I think we found it: my mode depends on CEF; some UI menus are made there. When a user joins, I briefly create a CEF browser with GA init code to obtain a session, and once it's done I delete the browser. In both cases, whether I call requestBrowserDomains or create a CEF browser for the page-hit event, the CEF process gets loaded and attached to MTA, and the memory leaking starts... Whenever MTA has CEF attached, after 30-50 minutes of game my client will crash. So we made a test with one of my problematic clients: I removed everything related to CEF, and afterwards I just enabled the analytics plugin with only requestBrowserDomains(). Summing up with the previous research above, it should be clear now that the latest features introduced to CEF have a memory leak.
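For illustration, the join-time analytics flow described above would look roughly like the sketch below. The domain and URL are placeholders (the real ones are not in the thread), and it assumes requestBrowserDomains' optional completion callback alongside the standard createBrowser / loadBrowserURL API:

```lua
-- Placeholder URL; the actual analytics endpoint is not given in the thread.
local ANALYTICS_URL = "https://example.com/ga-init"

-- Whitelist the domain, then run the init page in a short-lived browser.
requestBrowserDomains({"example.com"}, false, function()
    local browser = createBrowser(256, 256, false) -- small off-screen remote browser
    addEventHandler("onClientBrowserCreated", browser, function()
        loadBrowserURL(source, ANALYTICS_URL)
    end)
    addEventHandler("onClientBrowserDocumentReady", browser, function()
        destroyElement(source) -- session obtained; drop the browser again
    end)
end)
```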
I don't think we changed anything in CEF recently outside of adding a setting for enabling GPU (which is just a command line option on CEF instantiation) and vendor updates. Did you test this with the GPU option enabled or disabled? If you tested it with GPU disabled and still have the issue, then it's nothing to do with those recent GPU changes (as I said, the MTA implementation side just passes a command line option to CEF itself on launch; it's a 3-line change where we don't allocate/change anything on our side, so not a memory leak in the MTA implementation). It could be related to a recent update in CEF whereby we are missing out on something from MTA's existing implementation; but it's nothing to do with the GPU changes if the issue still exists with GPU disabled in CEF.
Also if you have a way to replicate a crash in CEF then please provide a resource/script which we can use |
It is worth mentioning that not everyone experiences this crash; for example, on my machine I've never had this problem so far. This is the minimal code which, once started, caused my tester to crash after ~45 minutes of game on my 'memory-heavy' gamemode. GPU rendering was disabled in the latest tests with this user.
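The snippet itself did not survive here, but per the follow-up below ("it was enough to just call requestBrowserDomains()") it amounted to roughly this, with the domain being a placeholder:

```lua
-- Reconstructed minimal repro: merely whitelisting a domain attaches the
-- CEF subprocess to MTA; no browser element is ever created.
requestBrowserDomains({"example.com"}) -- "example.com" is a placeholder
```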
So you don't even need to create a browser to cause the crash, only requestBrowserDomains? Or is there code missing from above?
I also tried yesterday for some time to replicate a memory leak scenario in CEF, but I had no luck with that. Also, I'm sure there would be far more reports about this from other popular servers if there was a general memory leak issue in CEF. I'm not saying there isn't a memory leak issue in CEF, as it seems like you have identified CEF as a problem by eliminating its usage in tests, but we don't really have enough info yet. Ideally we need to be able to replicate the issue. It may be something quite particular in any of your scripts that utilize CEF; the more info we have, the better chance we have to track it down.

Can you provide more analysis of the memory usage in this leak? Is the memory increasing in the CEF subprocess(es), or in gta_sa.exe itself? Can you provide memory usage over time (at more than two time points) for a player who eventually crashed due to OOM, with detailed reporting from the performance browser (showing all available metrics relating to memory usage)? How many CEF browser instances are you creating at any given time?
The code above is complete; it was enough to just call requestBrowserDomains() (it attaches CEF) to cause a crash for the tester.

As I said, I also don't have this crash locally and have never been able to reproduce this kind of memory issue; I'm getting the crash only after multiple reconnects.

Normally not more than one, or 0. But requestBrowserDomains is executed for each connected player, so CEF is always there in the process.
So all of your UI is done in a single browser instance? Also, this mention of the instant crash is confusing things; I thought we were talking about a memory leak here? What you mentioned earlier, about playing for an hour with no memory issues and then enabling a CEF resource and getting an instant crash, sounds completely unrelated to memory leaks, unless you are saying that within the time since the resource started, memory usage spiraled out of control and caused that crash. Otherwise, let's try to be clearer about which issue is being referred to here. Until I get my hands on some proper analysis of the memory usage throughout the session of a player who experiences this, I'm afraid I have nothing to work with on the claims that this is a CEF-related issue.
Describe the bug
My players are experiencing a huge spike in crashes due to memory usage (low memory, memory access and so on).
My game mode is pretty memory-intensive because of a lot of custom models. We spent days looking for the problem, and so far we have only found that older MTA versions have almost no problems compared to the latest ones.
Crashes in last 30 days per version:

| Crashes | Version |
| ------- | ------- |
| 1381 | 1.6.0-9.22789.0 |
| 675 | 1.6.0-9.22780.0 |
| 611 | 1.6.0-9.22763.0 |
| 247 | 1.6.0-9.22746.0 |
| 23 | 1.6.0-9.22650.0 |
| 17 | 1.6.0-9.22684.0 |
| 10 | 1.6.0-9.22771.0 |
| 9 | 1.6.0-9.22751.0 |
We are now suggesting players roll back to the older 22746 / 22650 versions, and they report that there are no problems with those versions.
We had two crash scenarios. One is the "good" one: the player reconnects to the server 2-3 times, and on the 3rd-4th time he may get low memory and a crash; that has been ~99% of cases for us during the last 10 years )
The new one: the player starts MTA and joins the server for the first time, plays 15-20 minutes, or minimizes 2-3 times, and then gets low memory and a crash.
This is what happens primarily with 22789.0.
Moments before the player gets the crash, after the first low-memory warnings (textures/fonts not created), his memstat looks totally okay-ish.
The crashes are different, but mostly:
Going to update the ticket as soon as we identify the exact version where the problem first appeared.
Steps to reproduce
Version
Client: 1.6.0-9.22763.0 - 1.6.0-9.22789.0
Additional context
No response
Relevant log output
No response
Security Policy