
WordPress 5.0 beta1 - Block now causing Fatal Errors #11066

Closed
PeterBooker opened this issue Oct 25, 2018 · 15 comments · Fixed by #11369
Assignees: dmsnell
Labels: [Feature] Parsing · [Type] Bug · [Type] Performance

Comments

@PeterBooker

PeterBooker commented Oct 25, 2018

Describe the bug
I am running into a problem with WP 5.0 beta1 that I do not see in Gutenberg 4.1.0 or earlier. My custom block is a simple syntax highlighter and everything works fine with small amounts of code, but once I add more than, say, 10 lines of code to the block, all saving fails, with the Chrome console showing:

POST http://wp.local.com/wp-json/wp/v2/posts/22/autosaves 500 (Internal Server Error)
POST http://wp.local.com/wp-json/wp/v2/posts/22/autosaves 500 (Internal Server Error)
POST http://wp.local.com/wp-json/wp/v2/posts/22 500 (Internal Server Error)

If I try to refresh the post edit page, or navigate back to it, the admin fails with a fatal error:

Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 4096 bytes) in /var/www/html/wp-includes/class-wp-block-parser.php on line 385

To Reproduce
Steps to reproduce the behavior:

  1. Install WordPress 5.0 beta1
  2. Install the Kebo Code plugin - https://wordpress.org/plugins/kebo-code/
  3. Create a new post, add the Kebo Code block and copy some code into it (over 10 lines).
  4. Set the language and click off the block (lose focus).
  5. You should now see autosaves and publishing failing.
  6. If you try to refresh the post edit page, it should show an out-of-memory (OOM) error.

Expected behavior
The previous and current versions of Gutenberg all work fine. I would expect WordPress 5.0 beta1 to work too.

Desktop (please complete the following information):

  • OS: Windows 10
  • Browser: Chrome 69.0.3497.100 / Firefox 62.0.3

Additional context
I am using a blank WordPress 5.0 beta1 install with no plugins other than the one mentioned above (required to reproduce the bug).

My custom block does not do anything particularly fancy: it uses CodeMirror's runMode function to perform syntax highlighting live as the block is changed. The resulting HTML is stored in an attribute, which may mean that far more is stored in the attribute than usual.
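
For illustration, here is a rough sketch of what such a block might look like once serialized into post_content (the block name, attribute names and markup are assumptions for illustration, not the plugin's actual output). The JSON blob between the block name and the comment closer is what the server-side parser has to scan past:

<?php
// Hypothetical serialized block; "kebo/code" and the attribute names are
// stand-ins, not taken from the plugin source.
$highlighted = str_repeat( '<span class="cm-keyword">function</span> foo() {}' . "\n", 200 );
$attrs       = json_encode( array( 'language' => 'php', 'content' => $highlighted ) );

$post_content = "<!-- wp:kebo/code {$attrs} -->\n<pre class=\"kebo-code\"></pre>\n<!-- /wp:kebo/code -->";

echo strlen( $attrs ), " characters of JSON inside the block comment\n";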

@PeterBooker
Author

After more troubleshooting, this only occurs when running PHP 7.2 (specifically 7.2.8); when I run my local setup with 7.3.0 RC3 or 5.6, the problem disappears.

@danielbachhuber
Member

Hi @PeterBooker,

Thanks for following up with additional detail. It seems this issue is specific to PHP 7.2, and not something that can be fixed in Gutenberg. Feel free to re-open if you identify a specific change needing to be made in Gutenberg.

@PeterBooker
Author

PeterBooker commented Oct 30, 2018

After further testing, every version of PHP 7.2 that I checked (7.2.0 / 7.2.4 / 7.2.8 / 7.2.11) has the same issue. I made another custom block that just stored a load of random text (lorem ipsum) in an attribute, to see if it was something particular to my block, and that resulted in the same error.

So it seems like this occurs when a large amount of content is stored in block attributes. It seems possible (likely, even) that others will try to use attributes in a similar way, and PHP 7.2 is the recommended PHP version for WordPress, so this could become a significant issue as block development becomes more widespread.

Just to reiterate, this does not occur with WordPress 4.9.8 and Gutenberg 4.1.1 (or previous versions); it has been newly introduced in the WordPress 5.0 integration. So it feels like it could be something that can be fixed in Gutenberg.

If you still do not feel it is a Gutenberg/WordPress problem that is fine, thank you for your time.

@danielbachhuber
Member

So it seems like this occurs when a large amount of content is stored in block attributes. It seems possible (likely, even) that others will try to use attributes in a similar way, and PHP 7.2 is the recommended PHP version for WordPress, so this could become a significant issue as block development becomes more widespread.

Can you provide specific steps to reproduce, please? Including the content you're using. Thanks.

danielbachhuber added the [Status] Needs More Info and [Type] Performance labels Oct 30, 2018
@PeterBooker
Author

PeterBooker commented Oct 30, 2018

Some more information as requested:

I am using my own Docker-based local setup, which defaults to PHP 7.2. The memory limit from the Health Check plugin's PHP Information page is 256M (local), 128M (master).

Steps to reproduce involve my DotOrg plugin (detailed above) or using this custom block. It is built from create-guten-block with lots of text hardcoded into an attribute. Just add the block to any post.

If there is anything else just let me know.

@danielbachhuber
Member

Steps to reproduce involve my DotOrg plugin (detailed above)

Can you provide the sample content you're using? Thanks.

@danielbachhuber
Member

Can you provide the sample content you're using?

I was able to reproduce this with 112 lines of code. The initial save request failed, and then trying to reload the editor resulted in this:

[screenshot]

@PeterBooker
Author

Ah, sorry.

My test blocks contained the following code:

version: '3'
services:
  database:
    image: mariadb:${DB_VERSION:-10.3}
    volumes:
      - "./.docker/data/db:/var/lib/mysql"
      - "./.docker/config/db:/etc/mysql/conf.d"
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: ${DB_PASSWORD:-password}
      MYSQL_DATABASE: ${DB_DATABASE:-wordpress}
      MYSQL_USER: ${DB_USER:-wordpress}
      MYSQL_PASSWORD: ${DB_PASSWORD:-password}
    restart: always
    command:
      'mysqld --innodb-flush-method=fsync'
  memcached:
    image: memcached:alpine
    restart: always
    command: -m 128
  php:
    image: peterbooker/phpwp:${PHP_VERSION:-7.2}-fpm
    depends_on:
      - database
      - memcached
    working_dir: /var/www/html
    environment:
      MYSQL_DATABASE: ${DB_DATABASE:-wordpress}
      MYSQL_USER: ${DB_USER:-wordpress}
      MYSQL_PASSWORD: ${DB_PASSWORD:-password}
      WP_VERSION: ${WP_VERSION:-latest}
      WP_USER: ${WP_USER:-user}
      WP_PASSWORD: ${WP_PASSWORD:-password}
      WP_EMAIL: ${WP_EMAIL:[email protected]}
    volumes:
      - "./.docker/config/php/php.ini:/usr/local/etc/php/php.ini"
      - "./.docker/wordpress:/var/www/html"
      - "adminer:/var/www/adminer"
    extra_hosts:
      - "localhost:172.18.0.1"
    restart: always
  mailhog:
    image: mailhog/mailhog
    ports:
      - "1025:1025"
  caddy:
    image: abiosoft/caddy
    depends_on:
      - php
    volumes:
      - "./.docker/config/caddy/Caddyfile:/etc/Caddyfile"
      - "./.docker/wordpress:/var/www/html"
      - "adminer:/var/www/adminer"
    ports:
      - "80:80"
      - "443:443"
volumes:
  adminer: {}

and the second...

<?php
/**
 * Primary class file for the Kebo Code plugin.
 *
 * @package Kebo Code
 */

if ( ! class_exists( 'Kebo_Code' ) ) {
	/**
	 * Class Kebo_Code
	 */
	class Kebo_Code {
		/**
		 * Kebo_Code constructor.
		 *
		 * @return void
		 */
		public function __construct() {
			$this->init();
		}

		/**
		 * Plugin initiation.
		 *
		 * A helper function, called by `Kebo_Code::__construct()` to initiate actions, hooks and other features needed.
		 *
		 * @uses get_option()
		 * @uses version_compare()
		 * @uses add_action()
		 * @uses add_filter()
		 *
		 * @return void
		 */
		public function init() {
			$version = get_option( 'kebo_code_version' );
			if ( false === $version || version_compare( $version, KBCO_VERSION, '<' ) ) {
				$this->update();
			}

			add_action( 'plugins_loaded', array( $this, 'load_i18n' ) );
			add_action( 'wp_register_scripts', array( $this, 'register' ) );
		}

		/**
		 * Runs on plugin updates.
		 *
		 * Checks for version changes and runs any necessary update processing.
		 *
		 * @uses update_option()
		 *
		 * @return void
		 */
		public function update() {
			update_option( 'kebo_code_version', KBCO_VERSION );
		}

		/**
		 * Load translations.
		 *
		 * Loads the textdomain needed to get translations for our plugin.
		 *
		 * @uses load_plugin_textdomain()
		 *
		 * @return void
		 */
		public function load_i18n() {
			load_plugin_textdomain( 'kebo-code', false, KBCO_PATH . '/languages' );
		}

		/**
		 * Register assets.
		 *
		 * Register our CSS and JavaScript files.
		 *
		 * @uses wp_register_style()
		 *
		 * @return void
		 */
		public function register() {
			wp_register_style( 'kebo-code-editor', KBCO_URL . 'assets/css/editor.css', array(), KBCO_VERSION, 'all' );
			wp_register_style( 'kebo-code-themes', KBCO_URL . 'assets/css/themes.css', array(), KBCO_VERSION, 'all' );
		}

		/**
		 * Compatibility check on activation.
		 *
		 * @uses version_compare()
		 *
		 * @return void
		 */
		public static function activation() {

			if ( ! function_exists( 'register_block_type' ) ) {
				self::block_activation( __( 'Kebo Code requires the Gutenberg Plugin or WordPress 5.0 to function properly. Please install either before activating Kebo Code.', 'kebo-code' ) );
			}

			$min_php_version = '5.6';
			$min_wp_version  = '4.9.8';

			if ( version_compare( $GLOBALS['wp_version'], $min_wp_version, '<' ) ) {
				// translators: WordPress version number.
				self::block_activation( sprintf( __( 'Kebo Code requires WordPress %s or later to function properly. Please upgrade WordPress before activating Kebo Code.', 'kebo-code' ), $min_wp_version ) );
			}

			if ( version_compare( phpversion(), $min_php_version, '<' ) ) {
				// translators: PHP version number.
				self::block_activation( sprintf( __( 'Kebo Code requires PHP %s or later to function properly. Please upgrade PHP before activating Kebo Code.', 'kebo-code' ), $min_php_version ) );
			}

		}

		/**
		 * Blocks Activation.
		 *
		 * Displays an HTML message explaining why activation was blocked.
		 *
		 * @param {string} $message String containing the activation message.
		 *
		 * @return void
		 */
		public static function block_activation( $message = '' ) {
			deactivate_plugins( array( 'kebo-code/kebo-code.php' ) );
			wp_die( '<div class="error"><p>' . esc_html( $message ) . '</p></div>' );
		}
	}
}

@danielbachhuber
Member

When I downgrade to WordPress 4.9.8 and activate Gutenberg 4.1.0, I'm no longer able to reproduce the issue.

[screenshot]

@earnjam
Contributor

earnjam commented Oct 31, 2018

I think there is something going on with that preg_match() in WP_Block_Parser->next_token(). When I use Xdebug, it starts throwing exceptions for $matches[ 0 ] being undefined, which I think leads to an infinite loop. If I put in some breakpoints back above that, I start getting some false values for $has_match.

I'm testing using PHP 7.1 and the latest from the 5.0 branch.
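
In case it helps, here is a minimal, self-contained sketch of that failure mode (not the actual WP_Block_Parser code): when preg_match() fails internally it returns false rather than 0 and leaves $matches empty, so a caller that only checks for a truthy match never advances its offset:

<?php
// Minimal sketch of the failure mode, not the actual WP_Block_Parser code.
$subject = str_repeat( 'a', 10000 ) . 'b';
$pattern = '/(?:a+)+$/'; // deliberately pathological, to force a PCRE failure

$has_match = preg_match( $pattern, $subject, $matches );

var_dump( $has_match );                          // bool(false) on typical configs: an error, not a non-match
var_dump( isset( $matches[0] ) );                // bool(false): reading $matches[0] is undefined
var_dump( preg_last_error() !== PREG_NO_ERROR ); // bool(true): the engine gave up, it did not finish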

@pento
Member

pento commented Oct 31, 2018

I can reproduce this with the parser in the Gutenberg plugin, too.

That preg_match() in WP_Block_Parser::next_token() is failing with PREG_JIT_STACKLIMIT_ERROR.

@dmsnell: Do you have thoughts here?
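
For reference, a small self-contained helper (my own sketch, not WordPress code) that puts a name on the failure by mapping preg_last_error() to its PREG_*_ERROR constant:

<?php
// Maps preg_last_error() to the matching PREG_*_ERROR constant name.
function pcre_last_error_name() {
    $names = array();
    foreach ( get_defined_constants( true )['pcre'] as $name => $value ) {
        if ( '_ERROR' === substr( $name, -6 ) ) {
            $names[ $value ] = $name;
        }
    }
    $code = preg_last_error();
    return isset( $names[ $code ] ) ? $names[ $code ] : "UNKNOWN ({$code})";
}

// Deliberately pathological example; depending on the build this reports
// PREG_JIT_STACKLIMIT_ERROR or PREG_BACKTRACK_LIMIT_ERROR.
preg_match( '/(?:a+)+$/', str_repeat( 'a', 10000 ) . 'b' );
echo pcre_last_error_name(), "\n";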

@pento
Member

pento commented Oct 31, 2018

It's something to do with the negative lookahead: the memory errors stop if I replace (?:(?!}\s+-->).)+? with .+? in that regex. That's not an actual solution, but it at least confirms where the problem is.

@pento
Member

pento commented Oct 31, 2018

Actually, it may be a solution. Given that the serialiser encodes both -- and >, we can assume that the JSON blob will never contain the string "-->". If this is the case, we can just do a non-greedy slurp of everything until it encounters the end of the block comment.
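
A rough PHP approximation of that escaping (the canonical version is serializeAttributes in Gutenberg's blocks package; the exact replacement list below is my sketch of it): because -- and > become unicode escapes, the literal string "-->" can never appear inside the JSON blob, which is what makes the non-greedy slurp safe.

<?php
// Rough approximation (a sketch, not Gutenberg source) of the escaping the
// JS serialiser applies to block attributes before writing the comment.
function sketch_serialize_attributes( array $attributes ) {
    $encoded = json_encode( $attributes, JSON_UNESCAPED_SLASHES | JSON_UNESCAPED_UNICODE );

    // "--", "<", ">", "&" and escaped quotes are turned into unicode escapes
    // so the JSON can never terminate the surrounding HTML comment.
    return str_replace(
        array( '--', '<', '>', '&', '\\"' ),
        array( '\\u002d\\u002d', '\\u003c', '\\u003e', '\\u0026', '\\u0022' ),
        $encoded
    );
}

echo sketch_serialize_attributes( array( 'content' => '<!-- --> & "quotes"' ) ), "\n";
// {"content":"\u003c!\u002d\u002d \u002d\u002d\u003e \u0026 \u0022quotes\u0022"}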

pento added the [Type] Bug and [Feature] Parsing labels and removed the [Status] Needs More Info label Oct 31, 2018
pento added this to the WordPress 5.0 milestone Oct 31, 2018
@mcsf
Contributor

mcsf commented Oct 31, 2018

cc @dmsnell

@dmsnell
Member

dmsnell commented Oct 31, 2018

Non-greedy slurp sounds good to me. Thanks for the diagnosis @pento.

Check out #11320 too, which is related in the sense of running the same tests against all parsers.

I'll work on a fix if I can today; I'll review yours if you make one.

dmsnell self-assigned this Oct 31, 2018
dmsnell added commits that referenced this issue on Nov 1 and Nov 2, 2018
dmsnell added a commit that referenced this issue Nov 4, 2018
* Parser: Optimize JSON-attribute parsing

Alternate approach to #11355
Fixes: #11066

Parsing JSON-attributes in the existing parsers can be slow when the
attribute list is very long. This is due to the need to lookahead at
each character and then backtrack as the parser looks for the closing
of the attribute section and the block comment closer.

In this patch we're introducing an optimization that can eliminate
a significant amount of memory-usage and execution time when parsing
long attribute sections by recognizing that _when inside the JSON
attribute area_ any character that _isn't_ a `}` can be consumed
without any further investigation.

I've added a test to make sure we can parse 100,000 characters inside
the JSON attributes. The default parser was fine in testing with more
than a million but the spec parser was running out of memory. I'd
prefer to keep the tests running on all parsers and definitely let our
spec parser define the acceptance criteria so I left the limit low for
now. It'd be great if we found a way to update php-pegjs so that it
could generate the code differently. Until then I vote for a weaker
specification.

The default parser went from requiring hundreds of thousands of steps
and taking seconds to parse the attack to taking something like 35
steps and a few milliseconds.

There should be no functional changes to the parser outputs and no
breaking changes to the project. This is a performance/operational
optimization.
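
For illustration, a simplified contrast between the per-character lookahead and the run-based consumption described above (neither pattern is the literal WP_Block_Parser regex, and the block name is a stand-in):

<?php
// Simplified illustration only; these are not the actual parser patterns.
$comment = '<!-- wp:kebo/code {"content":"' . str_repeat( 'x', 100000 ) . '"} -->';

// Per-character style: every consumed character re-checks a lookahead.
$per_char = '/<!-- wp:[a-z\/]+ ((?:(?!}\s+-->).)*?})\s+-->/s';

// Run-based style: swallow whole runs of non-"}" characters in one step and
// only inspect "}" characters individually.
$run_based = '/<!-- wp:[a-z\/]+ ((?:[^}]+|}(?!\s+-->))*})\s+-->/s';

foreach ( array( 'per-char' => $per_char, 'run-based' => $run_based ) as $name => $pattern ) {
    $matched = preg_match( $pattern, $comment, $matches );
    printf(
        "%-9s matched=%s pcre_error=%d attrs_len=%d\n",
        $name,
        var_export( 1 === $matched, true ),
        preg_last_error(),
        isset( $matches[1] ) ? strlen( $matches[1] ) : 0
    );
}
// On affected PHP 7.2 builds the per-character pattern can fail with
// PREG_JIT_STACKLIMIT_ERROR; the run-based one matches in a handful of steps.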