Memory leak #533

Closed
ethanfowler opened this issue Dec 6, 2021 · 2 comments

@ethanfowler (Contributor)

Hi,

I have the standard install running on an Amazon EC2 t2.micro. Every couple of days it goes down, stops responding to anything, and I have to restart it. I added the CloudWatch agent to monitor memory utilisation; it looks like there's a leak that is bringing the instance to its knees.

[screenshot: CloudWatch memory utilisation graph climbing steadily until the instance stops responding]
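For reference, memory metrics are not collected by default, so the CloudWatch agent needs an explicit config. A minimal sketch, assuming the agent package is already installed and using its default config path:

sudo tee /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json > /dev/null <<'EOF'
{
  "metrics": {
    "metrics_collected": {
      "mem": { "measurement": ["mem_used_percent"] }
    }
  }
}
EOF

# Load the config and (re)start the agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 \
  -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s

With that in place, mem_used_percent shows up under the CWAgent namespace in the CloudWatch console.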

@ethanfowler (Contributor, Author)

There is definitely a memory leak that makes hosting on small (t2.micro) instances impossible without workarounds. My current workaround is a cron job that checks available memory and restarts the Docker containers when it runs low.

The steps below assume your host is Ubuntu 20.04 and that docker-compose.yml lives in /root; adapt as necessary.

Run sudo crontab -e and add the following line at the end of the file (it runs the check every five minutes):

*/5 * * * * /root/restart_if_memory_leak.sh

Create /root/restart_if_memory_leak.sh and populate it with:

#!/bin/bash

# Available memory in MB (the "available" column of `free -m`)
available_memory=$(free -m | awk '/Mem/ {print $7}')
echo "Free memory: ${available_memory}MB"

# Restart the containers if less than 200MB is available
if [[ $available_memory -lt 200 ]]; then
	echo "Restarting netmaker..."
	pushd /root
	docker-compose restart
	popd
	echo "Done"
fi
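Make the script executable (cron calls the path directly), and it's worth running it once by hand to confirm the output before relying on the schedule:

chmod +x /root/restart_if_memory_leak.sh
sudo /root/restart_if_memory_leak.sh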

I added this cron job to my instance after another outage on 12/08, and it seems to be keeping things alive:
[screenshot: memory utilisation levelling off after the cron job was added]

@afeiszli (Contributor)

Fixed in 0.9.2.
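For anyone still on an older release, upgrading is roughly a matter of bumping the image tag in docker-compose.yml and recreating the containers. A rough sketch, assuming the default gravitl/netmaker image and that docker-compose.yml lives in /root (check the 0.9.2 release notes for the exact tag and any config changes):

cd /root
# Point the server at the fixed release (tag name assumed)
sed -i 's|gravitl/netmaker:.*|gravitl/netmaker:v0.9.2|' docker-compose.yml
docker-compose pull
docker-compose up -d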
