Test configurations and rough average page load times:
live (nginx + php-fpm): 0.01s
live (nginx + php-apache docker): 0.05s
dev (nginx + php-fpm both docker): 0.04s
dev (php-apache docker): 0.04s
For the live configurations, mysql is on the host and PHP connects via a socket. For the dev configurations, mysql is in docker (with /var/lib/mysql in a data volume) and the PHP container connects with the usual networking scheme.
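The two connection schemes described above could be expressed roughly like this in a docker-compose file (a hypothetical sketch, not the project's actual configuration; service names, image tags, and paths are illustrative):

```yaml
services:
  php:
    image: php:7.2-apache
    volumes:
      # (a) "live"-style: share the host's MySQL socket directory so PHP
      # can connect via the unix socket instead of TCP
      - /var/run/mysqld:/var/run/mysqld
  db:
    # (b) "dev"-style: MySQL runs in its own container and PHP connects
    # over the Docker network by hostname ("db")
    image: mysql:5.7
    volumes:
      # keep the data directory in a named volume
      - dbdata:/var/lib/mysql
volumes:
  dbdata:
```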
For reference, the live configuration uses the "overlay2" storage driver, while dev uses "aufs".
Possible causes of the slowness:
Network latency communicating with mysql (unlikely since the socket connection is also slow)
Network latency and/or filesystem I/O with volume mounts (I guess... but why would they have the same performance degradation?)
Apache vs. Nginx+FPM (unlikely since both are slow)
Production php.ini vs. development (tested both, and only about a 10% change in speed, not a 500% change)
I did some profiling with Xdebug (in dev only; I haven't set it up on live) and noticed that the mysql calls were by far the greatest expense on every page. I assumed this confirmed my suspicion that the slowness was somehow related to the way we were connecting to mysql within docker.
However, when I came across this bug report (docker-library/php#493) in a last desperate round of google searches trying to understand the slowness, I realized that all the file includes did seem a bit expensive.
(Perhaps ~25% just for require_once on this page.)
I think any satisfying conclusion involves using Xdebug to profile the speedy live configuration. In the meantime, we can monitor the above bug report to see if anyone confirms that this is an issue with Docker itself.
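For reference, the profiling described above can be enabled with a few Xdebug 2 INI settings along these lines (a sketch, assuming Xdebug 2.x; the output directory is arbitrary, and the resulting cachegrind files can be inspected with a tool such as KCachegrind or qcachegrind):

```ini
; Load Xdebug and enable the profiler for every request
zend_extension=xdebug.so
xdebug.profiler_enable=1
; Directory where cachegrind.out.* files are written
xdebug.profiler_output_dir=/tmp/profiles
```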
hemberger added a commit to hemberger/smr that referenced this issue on Apr 7, 2018:

Closes smrealms#297.
At long last, the mystery is solved. The reason the live server was
~5x faster when it was using the host system's PHP (rather than Docker)
was because it had the Zend OPcache enabled. This PHP extension caches
the compiled PHP and allows it to be reused at runtime. Since we have
some fairly large libraries, loading and parsing each script was a
significant expense.
Simply installing the `opcache` PHP extension in the Dockerfile
dramatically improves performance.
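In the official `php` Docker images, the fix amounts to a couple of Dockerfile lines (a minimal sketch; the base image tag is assumed, and the tuning values are common illustrative defaults, not taken from the issue):

```dockerfile
FROM php:7.2-apache

# Compile and enable the bundled Zend OPcache extension, so compiled
# scripts are cached in shared memory and reused across requests
RUN docker-php-ext-install opcache

# Optional tuning (illustrative values)
RUN { \
      echo 'opcache.memory_consumption=128'; \
      echo 'opcache.max_accelerated_files=10000'; \
      echo 'opcache.validate_timestamps=1'; \
    } > /usr/local/etc/php/conf.d/opcache.ini
```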