
Free space #389

Closed
Fedora-Man opened this issue Dec 30, 2019 · 9 comments

Comments

@Fedora-Man

The disk is becoming full, so how can I free up space? Is there a housekeeping procedure that frees disk space periodically? Logs will accumulate and disk space is limited!

@Cyb3rWard0g
Owner

Hey @Fedora-Man, automatic free-space management is not set up in the project yet. Something like Curator should help. I will add that as a feature request.
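
For illustration, Curator works from a YAML action file run against the cluster, e.g. curator --config config.yml delete_logs.yml. A minimal sketch of such an action file follows; the logs- prefix and 30-day retention are placeholders, adjust them to your index naming:

actions:
  1:
    action: delete_indices
    description: "Delete indices older than 30 days, matched by index name."
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: logs-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 30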

@neu5ron neu5ron added this to the 7.x milestone Jan 3, 2020
@neu5ron
Collaborator

neu5ron commented Jan 3, 2020

@Fedora-Man there is probably a combination of, or at least one of, the following issues, which we are aware of and are either working on as we speak or documenting:

  1. Docker logs filling up disk space (a log-rotation sketch follows this list)
  2. Data (logs being sent) in Elasticsearch
  3. The monitoring data in the .monitoring indexes in Elasticsearch
    Added Curator to HELK build, added email alerts to Elastalert, exposed ES port to host #352
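
For item 1, one common mitigation (a sketch only; the size and file-count values are placeholders) is to enable rotation for Docker's json-file logging driver in /etc/docker/daemon.json:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}

After editing, restart the daemon (sudo systemctl restart docker); these options only take effect for containers created afterwards.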

However, I may be able to provide a temporary workaround (but can't promise anything).
Can you send me the output of the following from within the Kibana "Dev Tools" app/tab:
GET /_cat/nodes?h=h,diskAvail

Also, send the output of the following from the operating system:
docker inspect --format='{{.LogPath}}' $(docker ps -a -q) | sudo xargs -n 1 du -ah

@Fedora-Man
Author

I am having a problem accessing Kibana (already reported as issue #374).

The output of the command
docker inspect --format='{{.LogPath}}' $(docker ps -a -q) | sudo xargs -n 1 du -ah
is:

0	/var/lib/docker/containers/6eacab2ed0dac7713fc12f96ffa4c4ca5c0250c3769260149129ebf9d886c7b5/6eacab2ed0dac7713fc12f96ffa4c4ca5c0250c3769260149129ebf9d886c7b5-json.log
245M	/var/lib/docker/containers/5a7602b9130b78f34f73e2532beaac4b1da516469894d13b9013cc37c044407c/5a7602b9130b78f34f73e2532beaac4b1da516469894d13b9013cc37c044407c-json.log
8.8M	/var/lib/docker/containers/4334302e989c5fd518ec48e23b25f6f4cd249a8defbf6adb664f8947e34df384/4334302e989c5fd518ec48e23b25f6f4cd249a8defbf6adb664f8947e34df384-json.log
12K	/var/lib/docker/containers/85e427eb36999917dcc364e1e03c0ba41a8a54fde113705010893f30738a4620/85e427eb36999917dcc364e1e03c0ba41a8a54fde113705010893f30738a4620-json.log
11G	/var/lib/docker/containers/154f200f561dd7c875745cef4c87b33c14f182389efa35c7d6a9feda77945ffb/154f200f561dd7c875745cef4c87b33c14f182389efa35c7d6a9feda77945ffb-json.log
16K	/var/lib/docker/containers/9f39f3c3de9565abfc4f5bc269b4a0a85fa2d7dd71b2c2fa283a9d6734a7a44a/9f39f3c3de9565abfc4f5bc269b4a0a85fa2d7dd71b2c2fa283a9d6734a7a44a-json.log
46M	/var/lib/docker/containers/fcc0f3c6ff08b5d5979e21c0f9e26b48c18ce7a519386eccf389ac105d26efdd/fcc0f3c6ff08b5d5979e21c0f9e26b48c18ce7a519386eccf389ac105d26efdd-json.log
45G	/var/lib/docker/containers/5a629e97f982f94c6367b0f9693471b35feb2eba3596a7ee15b531851b3c655e/5a629e97f982f94c6367b0f9693471b35feb2eba3596a7ee15b531851b3c655e-json.log
61G	/var/lib/docker/containers/38091bd54ff36d8ce7adfba361820056ff5f1b220ed2b6a7044007eb2e860f78/38091bd54ff36d8ce7adfba361820056ff5f1b220ed2b6a7044007eb2e860f78-json.log

@neu5ron
Collaborator

neu5ron commented Jan 4, 2020

Can you send the output of df -h?
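
In the meantime, a stop-gap sketch (not a HELK-specific procedure): those runaway json-file logs can be truncated in place, which is safer than deleting the files while their containers are still running:

sudo truncate -s 0 /var/lib/docker/containers/<container-id>/<container-id>-json.log

This only reclaims space already consumed; without rotation limits the files will grow back.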

@Fedora-Man
Author

Here is the output of df -h:

df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  232G  169G   63G  73% /
devtmpfs                 7.8G     0  7.8G   0% /dev
tmpfs                    7.8G     0  7.8G   0% /dev/shm
tmpfs                    7.8G  746M  7.1G  10% /run
tmpfs                    7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda1               1014M  232M  783M  23% /boot
/dev/mapper/centos-home   10G  673M  9.4G   7% /home
tmpfs                    1.6G   12K  1.6G   1% /run/user/42
tmpfs                    1.6G     0  1.6G   0% /run/user/1000

@webhead404

Another option that hasn't been discussed is creating index lifecycle management policies in Elastic. https://www.elastic.co/guide/en/elasticsearch/reference/current/index-lifecycle-management.html
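
As a sketch (the policy name and retention are placeholders), a minimal policy that deletes an index 30 days after its creation can be created from the Kibana "Dev Tools" tab; without a rollover action, min_age is measured from index creation:

PUT _ilm/policy/helk-cleanup
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}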

@Fedora-Man
Author

How do I enable ILM in HELK?

@neu5ron
Collaborator

neu5ron commented Jan 15, 2020

Let me know if this helps:
https://www.elastic.co/guide/en/kibana/current/creating-index-lifecycle-policies.html

We don't have a perfect way to determine after what date, or below what amount of free space, logs should be deleted for people, but this is on our radar for future releases.
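
As a follow-on sketch, an existing policy can be attached to indices from "Dev Tools"; the helk-cleanup name and logs-* pattern are placeholders, check GET _cat/indices for your actual index names:

PUT logs-*/_settings
{
  "index.lifecycle.name": "helk-cleanup"
}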

@neu5ron
Collaborator

neu5ron commented Apr 17, 2020

Fixed in the upcoming release.

@neu5ron neu5ron closed this as completed Apr 17, 2020