Large File Silent Failure #298
From [email protected] on July 11, 2013 18:39:17 If you zip the file, will the issue tracker let you attach it? Otherwise, can you send me the file ([email protected]) and I can take a look? I've loaded large (150MB) files in the past and they worked, although they were very slow. This is something I've been working on, so any examples that cause extreme slowness would be appreciated.
From [email protected] on July 24, 2013 17:49:13 Labels: -Tool-All
Josh Bleecher Snyder reported an issue regarding a large trace output from Go. Essentially, https://dl.dropboxusercontent.com/u/4300994/Go/trace-1.5M.zip has ~1.5M events (~130MB) and crashes the Chrome tab with an "Aw Snap". I kept making it smaller, and the limit currently seems to be around 1M events. However, there is no significant slowness when showing 1M events (https://dl.dropboxusercontent.com/u/4300994/Go/trace-1M.zip), so I'm guessing some memory limit is being triggered.
After a bit more debugging and inspecting, I found that the 1.5M-event file works in Chrome Canary (45.0.2448.0).
We tweaked Canary recently to be better with memory, so that may be what it is. Maybe we should warn folks when they exceed 1.5M events? The viewer itself really isn't meant to cope with much more, but I don't see much harm in warning folks up front...
I also wrote a quick alternate trace-viewer to see how little memory could minimally be used, and during some tests I used up 2GB of memory while loading, so the crash doesn't actually seem to be caused by memory. I'm not sure how to see what was causing Chrome to fail with the "Aw Snap" message. Now I'm starting to think the bug may still be there, but the adjustments to memory made it less likely or hid it. The highest I was able to load was 1.07M events; based on that I would put the limit at 1M. But I'm concerned how that number translates to other trace files, i.e. will a trace recorded in Chrome hit the same limit, or break earlier or later? I guess if the message is non-intrusive and shown below the importing dialog, then there isn't a big drawback to it.
It seems it isn't as easy as putting a message below the importer delay. Some task needs to run just after creating the importers (at that point we know the number of events) and before importing (that's where the crash happens). I failed to find a nice way to handle a message asynchronously inside importing. The easiest solution was to use:
Just before the …
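The snippet itself wasn't preserved in this thread. Below is a minimal sketch of the approach being described — show the warning once the importers exist (and the event count is known), then yield to the event loop with `setTimeout(..., 0)` so the browser can paint the message before the heavy synchronous import begins. The function and property names here are illustrative, not the actual trace-viewer API.

```javascript
// Hedged sketch, not trace-viewer's real code: warn before a heavy import.
const EVENT_WARN_LIMIT = 1000000; // ~1M events, per the discussion above

function importWithWarning(importers, showWarning, runImport) {
  // At this point the importers are created, so the event count is known.
  const numEvents = importers.reduce((n, im) => n + im.eventCount, 0);
  if (numEvents > EVENT_WARN_LIMIT) {
    showWarning(numEvents + ' events: the viewer may become unresponsive');
  }
  // Defer the expensive import so the warning has a chance to render first.
  setTimeout(runImport, 0);
}
```

The key detail is that the import itself must not run in the same turn of the event loop as the warning, or the tab can lock up (or crash) before the message ever appears.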
Hmm, you're right. I'm genuinely not sure how to handle this gracefully. If …
We actually don't even have to use JSON.parse to handle this; we can write a simple parser that just counts the number of objects in the dataset. It should correlate very well with the total number of events. Even simpler would be to estimate based on the data size, or combine the two: if the data is below 50MB, assume it's fine; if it's above 50MB, count the number of objects; if the number of objects divided by 3 (or whatever the objects-to-events ratio is) is bigger than 1M, show a warning. In other words, as long as we can load the data into memory, we can show a warning.
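A rough sketch of that combined heuristic. The 50MB threshold, the divide-by-3 objects-to-events ratio, and the 1M event limit are taken from the comment above; the constants and function names themselves are illustrative, not an actual trace-viewer API.

```javascript
// Hedged sketch of the size-then-count heuristic described above.
const SIZE_THRESHOLD = 50 * 1024 * 1024; // below 50MB, assume the file is fine
const EVENT_LIMIT = 1000000;             // ~1M events is where crashes were seen
const OBJECTS_PER_EVENT = 3;             // assumed objects-to-events ratio

// Count '{' characters as a cheap proxy for the number of JSON objects,
// without paying the memory cost of a full JSON.parse.
function countObjects(jsonText) {
  let count = 0;
  for (let i = 0; i < jsonText.length; i++) {
    if (jsonText[i] === '{') count++;
  }
  return count;
}

function shouldWarn(jsonText) {
  if (jsonText.length < SIZE_THRESHOLD) return false;
  return countObjects(jsonText) / OBJECTS_PER_EVENT > EVENT_LIMIT;
}
```

A single linear scan over the string keeps the pre-check cheap relative to the import itself, which is the whole point of estimating rather than parsing.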
How do you count the number of objects, though? We start with a string that …
Hmm, so the importer takes gzipped input? I thought it got regular JSON and the XHR unpacked it. As long as the JSON is correct, then this (ignoring any bugs) should work: …
We accept both regular JSON and gzipped JSON. Chrome now always sends gzipped JSON to chrome://tracing. Why do we need to precheck the size? We can just do the import and, if the number of events is greater than 1.5 million, throw up a warning dialog saying things may be sluggish.
@dj2 The problem is that Chrome may "crash" (the "Aw Snap" screen) or throw a "this page is not responding" dialog during importing, so showing the warning after importing is too late. Although I think it should always be able to handle JSON.parse, we can estimate the number of events without properly parsing; this means we could show the warning immediately after loading the JSON data (and unzipping).
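One hedged way to estimate the event count right after loading, without a full parse: in Chrome's Trace Event Format, each event object carries a `"ph"` (phase) field, so scanning the raw text for that key approximates the event count directly, with no ratio guesswork. This is an illustrative sketch, not the viewer's actual check, and it assumes the trace uses the standard event format.

```javascript
// Hedged sketch: approximate the event count by scanning the raw
// (already unzipped) JSON text for the '"ph"' key that every event
// in the Trace Event Format carries. No JSON.parse required.
function approxEventCount(rawJson) {
  let count = 0;
  let idx = 0;
  while ((idx = rawJson.indexOf('"ph"', idx)) !== -1) {
    count++;
    idx += 4; // skip past this occurrence
  }
  return count;
}
```

Since the scan runs on the loaded string before any importer work starts, a warning based on it can be shown before the step where the tab actually crashes.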
Migrated to catapult-project/catapult#298 |
Hi all, I currently have a large trace file which is silently breaking the chrome trace viewer. Do you know if this issue has a fix? |
Chrome DevTools performance tab can also render traces. And https://ui.perfetto.dev/#!/viewer is the successor to trace-viewer and may have more luck rendering a megalarge (100MB+) trace. |
Thank you so much for answering: I didn't know about perfetto, it looks like a great tool.
I posted a related issue on perfetto's GitHub. |
From [email protected] on July 10, 2013 14:04:35
What steps will reproduce the problem?
1. Have a large (>=100MB) JSON trace file
2. Using either the svn source or a current Google Chrome build, load the tracing data
3. The data silently fails to load/render, and does not download

What is the expected output? What do you see instead? Tracing output and rendering are expected; instead, the file is ignored and not loaded.

What version of the product are you using? On what operating system? Mac OS X 10.8.4, Google Chrome Version 28.0.1500.71

Please provide any additional information below. I have a tool that filters the file down to about ~30MB; that file takes a very long time to render, often asking to kill the page. However, it will render, often using 10X the file size in RAM.
Original issue: http://code.google.com/p/trace-viewer/issues/detail?id=292