WordPress 5.0 beta1 - Block now causing Fatal Errors #11066
After more troubleshooting: this only occurs when running PHP 7.2 (specifically 7.2.8). When I run my local setup with 7.3.0 RC3 or 5.6, the problem disappears.
Hi @PeterBooker, Thanks for following up with additional detail. It seems this issue is specific to PHP 7.2, and not something that can be fixed in Gutenberg. Feel free to re-open if you identify a specific change needing to be made in Gutenberg.
After further testing, every version of PHP 7.2 that I checked (7.2.0 / 7.2.4 / 7.2.8 / 7.2.11) has the same issue. I made another custom block that just stored a load of random text (lorem ipsum) in an attribute, to see if it was something particular to my block, and that resulted in the same error. So it seems this occurs when a large amount of content is stored in block attributes.

It seems possible (likely, even) that others will try to use attributes in a similar way, and PHP 7.2 is the recommended PHP version for WordPress, so this could be a significant issue once blocks are more widely developed with.

Just to reiterate, this does not occur with WordPress 4.9.8 and Gutenberg 4.1.1 (or previous versions); it has been newly introduced in the WordPress 5.0 integration. So it feels like it could be something that is fixed in Gutenberg. If you still do not feel it is a Gutenberg/WordPress problem, that is fine; thank you for your time.
Can you provide specific steps to reproduce, please? Including the content you're using. Thanks. |
Some more information, as requested: I am using my own Docker-based local setup, which defaults to PHP 7.2. The memory limit shown on the Health Check plugin's PHP Information page is 256M (local) / 128M (master). Steps to reproduce involve my DotOrg plugin (detailed above) or this custom block, which is built from create-guten-block with lots of text hardcoded into an attribute. Just add the block to any post. If there is anything else, just let me know.
Can you provide the sample content you're using? Thanks. |
Ah, sorry. My test blocks contained the following code:
and the second...
I think there is something going on with that; I'm testing using PHP 7.1 and the latest from the 5.0 branch.
I can reproduce this with the parser in the Gutenberg plugin, too. @dmsnell: Do you have thoughts here?
It's something to do with the negative lookahead; the memory errors stop if I replace […]
Actually, it may be a solution. Given that the serialiser encodes both […]
cc @dmsnell |
* Parser: Optimize JSON-attribute parsing

Alternate approach to #11355. Fixes: #11066.

Parsing JSON attributes in the existing parsers can be slow when the attribute list is very long. This is due to the need to look ahead at each character and then backtrack as the parser looks for the closing of the attribute section and the block comment closer.

In this patch we're introducing an optimization that can eliminate a significant amount of memory usage and execution time when parsing long attribute sections, by recognizing that _when inside the JSON attribute area_ any character that _isn't_ a `}` can be consumed without any further investigation.

I've added a test to make sure we can parse 100,000 characters inside the JSON attributes. The default parser was fine in testing with more than a million, but the spec parser was running out of memory. I'd prefer to keep the tests running on all parsers and definitely let our spec parser define the acceptance criteria, so I left the limit low for now. It'd be great if we found a way to update php-pegjs so that it could generate the code differently; until then I vote for a weaker specification.

The default parser went from requiring hundreds of thousands of steps and taking seconds to parse the attack to taking something like 35 steps and a few milliseconds. There should be no functional changes to the parser outputs and no breaking changes to the project. This is a performance/operational optimization.
Describe the bug
I am running into a problem with WP 5.0 beta1 that I do not see in Gutenberg 4.1.0 or earlier. My custom block is a simple syntax highlighter, and everything works fine with small amounts of code, but once I add more than about 10 lines of code into the block, all saving fails, with the Chrome console showing:
If I try to refresh the post edit page, or re-navigate to it, the admin fatal errors with:
To Reproduce
Steps to reproduce the behavior:
Add the Kebo Code block and copy some code into it (over 10 lines).

Expected behavior
The previous and current versions of Gutenberg all work fine. I would expect WordPress 5.0 beta1 to work too.
Additional context
I am using a blank WordPress 5.0 beta1 install with no plugins other than the one mentioned above (required to reproduce the bug).
My custom block does not do anything particularly fancy, it uses CodeMirror's runMode function to perform syntax highlighting live as the block is changed. The resulting HTML is stored in an attribute, which may mean that there is far more stored in the attribute than usual.
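As a self-contained illustration of why this block's attributes get large (the callback shape below mimics a token-stream highlighter; the block name and class names are hypothetical, and CodeMirror itself is not used here):

```javascript
// Stand-in for a runMode-style highlighter callback: each (text, style)
// token pair becomes a styled span, and the concatenated HTML is what the
// block would store in its attribute.
function highlight(tokens) {
  let html = '';
  for (const [text, style] of tokens) {
    html += style ? `<span class="cm-${style}">${text}</span>` : text;
  }
  return html;
}

const html = highlight([['function', 'keyword'], [' ', null], ['foo', 'def']]);
// The serialized post then carries all of that markup inside the block
// comment's JSON, which is where the parsing cost shows up.
const serialized = `<!-- wp:kebo/code ${JSON.stringify({ content: html })} /-->`;
console.log(serialized);
```

Every line of highlighted code multiplies into several spans of markup, so even a modest snippet produces a much larger attribute payload than the raw text alone.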