
Server up in memory. #636

Closed
Stephan83c opened this issue Jun 12, 2018 · 11 comments

@Stephan83c

Hello,

My Pathfinder server is working well and I congratulate you.
But sometimes the server goes into swap, and sometimes MySQL stops working.
I noticed a lot of "php index.php /cron/deleteLogData" processes in the process list.

I would like to know if the problem comes from there.

Server:

  • Debian 9 (fresh install)
  • Intel(R) Atom(TM) CPU N2800 @ 1.86GHz
  • 1 GB RAM
  • 500 MB swap

Thanks for any answer.

@Stephan83c
Author

I checked: I killed all the "php index.php /cron/deleteLogData" processes and memory usage dropped back down.

@exodus4d
Owner

Hi @Stephan83c ,

php index.php /cron/deleteLogData is a cron job task that is triggered every minute. It looks for pilots that are "probably" no longer logged into EVE because their position/ship/docked station has not changed for a while (3 min). Pathfinder checks the current "online" status for these characters (max 10 characters/minute get checked) and, if they are "offline", deletes their current location from the DB. So it is a kind of "clean up" task.
So even if you kill that process, it will be re-created 60s afterwards :)
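For reference, the crontab entry driving a per-minute task like this typically looks something like the line below (a sketch only; the installation path and php binary location are assumptions, not taken from Pathfinder's docs):

```shell
# run the Pathfinder cleanup task every minute (paths are assumptions)
* * * * * cd /var/www/pathfinder && /usr/bin/php index.php "/cron/deleteLogData"
```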

I have some questions:

  1. How many people use your installation? Is there a great number of active players in your character_log table?
  2. How did you figure out that this specific task takes so much RAM? A Unix command would be nice; I can't figure out anything with $ top or $ htop :(

@exodus4d exodus4d changed the title from "Serveur up in memory." to "Server up in memory." Jun 15, 2018
@Stephan83c
Author

Stephan83c commented Jun 15, 2018

Hi

Thanks for the explanation.

  1. 8–10 people use Pathfinder every day, and the character_log table is working fine.
  2. I use a script that sums the memory used by all processes with the same name:

#!/bin/bash
# Sum the resident memory (RSS) of all processes matching a name.
if [ -z "$1" ]; then
    echo -n "Process name: "
    read -r process
else
    process=$1
fi
ps aux | grep "$process" | grep -v grep | awk '{ sum += $6 } END { printf("RAM size used: %s MB\n", sum / 1024) }'

and

./Script.sh deleteLogData
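As an aside, a variant based on pgrep avoids the classic "grep -v grep" self-match and unquoted-variable pitfalls of the approach above; this is only a sketch, with deleteLogData as the default pattern purely for illustration:

```shell
#!/bin/sh
# Sum the resident set size (RSS, in KB) of every process whose
# command line matches a pattern, without the "grep -v grep" dance.
pattern=${1:-deleteLogData}
total_kb=0
for pid in $(pgrep -f "$pattern"); do
    # the process may exit between pgrep and ps, hence the fallback to 0
    rss=$(ps -o rss= -p "$pid" 2>/dev/null)
    total_kb=$((total_kb + ${rss:-0}))
done
echo "RAM size used: $((total_kb / 1024)) MB"
```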

I rebooted the server yesterday, and currently, with nobody connected:

  • 6 processes with "ps aux | grep deleteLogData"
  • RAM size used: 269.164 MB with the script

If this goes on, I think I'll set up a cron job that kills the processes every night.

I hope this helps, thank you for your answer.
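Such a nightly kill job could be a single crontab line like the one below (illustrative only; the pkill pattern is an assumption, and killing the task on a schedule is a workaround rather than a fix):

```shell
# every night at 05:00, kill any leftover deleteLogData processes
0 5 * * * pkill -f "cron/deleteLogData"
```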

@verunightscout

verunightscout commented Jun 16, 2018

This is an interesting topic; I need to restart my private PF server once a week because of memory exhaustion. Currently deleteLogData uses 2 MB RAM; maybe after a couple of days this will change.
Grab the https://github.com/pixelb/ps_mem utility, it will give you a better picture of memory consumption on your server.

@exodus4d
Owner

exodus4d commented Jun 16, 2018

Very interesting! I'll have a look. I have some RAM issues on my public installation too, but could not track them down to a single script. Some more info:

Without checking this on my VPS, just looking at the code I have a "theory":

  • Within the updateLog() function (which is called for each character), multiple ESI API requests are necessary in order to get the information we need for that character. The number of ESI requests depends on the character's current "system"/"docked station"/"ship"/"structure" and the data you already have in the Universe DB (which works as a cache DB for static ESI data). In the worst case, we have ~6 API calls per character.
  • Depending on ESI's response time, it could happen that the script is triggered again after 60s, before all 10 characters × 6 ESI requests = 60 ESI requests are done and the script ends.
  • All PHP scripts triggered by a cron job run asynchronously over the CLI. So maybe we have a running script that is not finished yet before it is triggered again...

If this is the case, a quick fix is to lower the number of characters that are checked by the job:
https://github.com/exodus4d/pathfinder/blob/master/app/main/cron/characterupdate.php#L24
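If overlapping runs are indeed the cause, one generic safeguard (a sketch, independent of any Pathfinder code change; the lock-file path is an assumption) is to serialize the cron job with flock, so a new invocation exits immediately while the previous one is still running:

```shell
#!/bin/sh
# Serialize a cron task: if the previous run still holds the lock,
# skip this run instead of piling up processes.
lock=/tmp/pf_deletelogdata.lock

exec 9>"$lock"          # open (and create) the lock file on fd 9
if flock -n 9; then     # non-blocking: fail fast if already locked
    echo "lock acquired, safe to run the task"
    # the actual task, e.g. php index.php "/cron/deleteLogData", would go here
else
    echo "previous run still active, skipping"
fi
```

In a crontab this collapses to a one-liner such as `* * * * * flock -n /tmp/pf_deletelogdata.lock php index.php "/cron/deleteLogData"`.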

@Stephan83c
Author

I lowered it to 8, killed the processes, and I'm watching what happens.

@exodus4d
Owner

exodus4d commented Jun 16, 2018

@Stephan83c I did some deeper investigation into this. I could not reproduce the behavior you described on my server. The php index.php /cron/deleteLogData cron job appears every 60s in the process list and disappears ~20s afterwards; no errors so far, and ~20MB RAM usage.

I wondered about the number of parallel running /cron/deleteLogData processes on your installation. Why didn't max_execution_time kill the processes? I found out that PHP CLI scripts are not affected by the max_execution_time set in your php.ini config file.

"The default setting is 30. When running PHP from the command line the default setting is 0."
source: http://php.net/manual/en/info.configuration.php#ini.max-execution-time

So even if a max_execution_time is set, it is ignored. If there is, for example, no timeout set for the DB connection, the PHP script can run forever...

I'll update the cron jobs in the next release and set max_execution_time at script runtime in order to kill them after a while, e.g.:

ini_set('max_execution_time', 300);
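An OS-level alternative (not from the Pathfinder codebase; a sketch) is GNU coreutils' timeout, which enforces the limit regardless of any PHP settings. The demo below shows its behavior; in a crontab it would simply wrap the command, e.g. timeout 300 php index.php "/cron/deleteLogData":

```shell
#!/bin/sh
# timeout(1) kills the wrapped command after the given number of
# seconds and exits with status 124 when it had to kill it.
status=0
timeout 1 sleep 5 || status=$?
echo "exit status: $status"   # 124 indicates the command was killed
```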

@exodus4d exodus4d self-assigned this Jun 16, 2018
@exodus4d exodus4d added this to the v1.3.6 milestone Jun 16, 2018
@Quintinon

Quintinon commented Jun 16, 2018

I'm a bit out of my element here (my experience is with Microsoft SQL), but the query itself could be faster:
https://github.com/exodus4d/pathfinder/blob/master/app/main/cron/characterupdate.php#L53-L55

Instead of modifying the value in each row and comparing it to a static value, compare the row's literal value to a precomputed static value. This is a rough example of what I mean:

        $characterLogs = $characterLogModel->find([
            'updated > DATE_SUB(NOW(), INTERVAL :lifetime SECOND)',
            ':lifetime' => $logInactiveTime
        ], [
            'order' => 'updated asc',
            'limit' => self::CHARACTERS_UPDATE_LOGS_MAX
        ]);

Also, this Stack Overflow post seems to back up my understanding that MSSQL and MySQL work on roughly the same performance principles:
https://stackoverflow.com/questions/9830718/mysql-why-such-different-performance-with-timestamps-in-these-2-queries

You might also consider pulling the DATE_SUB out into the PHP code, in case MySQL recalculates it for every row (I don't know whether it does).
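That last point can be sketched outside PHP as well: compute the cutoff once in the caller and pass it to the query as a plain constant (illustrative only; GNU date is assumed, and the 180-second window simply mirrors the 3-minute inactivity period mentioned earlier in the thread):

```shell
#!/bin/sh
# Compute the "inactive since" cutoff once in the caller instead of
# letting MySQL evaluate DATE_SUB(NOW(), ...) during the query.
log_inactive_time=180   # seconds; mirrors the 3-minute window above
cutoff=$(date -u -d "-${log_inactive_time} seconds" '+%Y-%m-%d %H:%M:%S')
# the resulting WHERE fragment compares the column to a literal value
echo "updated > '${cutoff}'"
```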

@exodus4d
Owner

@Quintinon Thanks for feedback! :)
I'm pretty sure the query is not the problem; there are <100 rows in that table. The time-consuming part is the updateLog() function.

@Stephan83c
Author

@exodus4d
I get the same result for php.ini.
I'm waiting for the new version; for the moment I've set up a cron job at 05:00 AM to kill the processes.

Thank you for investigating.

For information, I currently have 4 processes with 1 person connected.

@verunightscout

Where do I put ini_set('max_execution_time', 300)?

@exodus4d exodus4d mentioned this issue Jul 20, 2018