
[leo_storage][data-compaction] Avoid write operations before leo_storage becomes unable to execute data-compaction #592

Closed
yosukehara opened this issue Jan 27, 2017 · 5 comments

Comments

@yosukehara
Member

yosukehara commented Jan 27, 2017

We already delivered the auto-compaction feature and recommend using it, but when it is disabled, a LeoFS storage node crashes when a disk-full situation occurs.

We need LeoFS storage nodes to reject write operations before a node reaches a state in which it can no longer run data-compaction, as below:

[Figure: data-compaction]

After WRITE operations are stopped, an admin needs to maintain the stopped leo_storage.

@mocchira
Member

WIP

@yosukehara
Member Author

yosukehara commented Sep 11, 2017

@mocchira We benchmarked v1.3.7-dev today and the result is not good (see Benchmark LeoFS v1.3.7). We need to reconsider this implementation, so I would like to propose the following:

Proposal

  • A single monitor process, leo_object_storage_diskspace_mon (tentative name), checks disk space every minute; the interval is configurable
  • When usage exceeds the threshold, leo_object_storage_diskspace_mon sets an is_able_to_write flag in leo_object_storage_server's State
  • leo_object_storage_server's put and store always check is_able_to_write and reject storing an object when the flag is false (a sketch follows this list)
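
For illustration, a minimal Erlang sketch of what such a monitor could look like, assuming the names from this proposal; set_is_able_to_write/1 is a hypothetical notification API on leo_object_storage_server, and the os_mon application is assumed to be running so that disksup is available:

```erlang
%%% Sketch of the proposed disk-space monitor (names from the proposal;
%%% the real wiring in LeoFS may differ).
-module(leo_object_storage_diskspace_mon).
-behaviour(gen_server).

-export([start_link/2]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

-record(state, {interval  :: pos_integer(),   %% check interval in ms
                threshold :: pos_integer()}). %% used-disk threshold in %

start_link(IntervalMs, ThresholdPercent) ->
    gen_server:start_link({local, ?MODULE}, ?MODULE,
                          [IntervalMs, ThresholdPercent], []).

init([IntervalMs, ThresholdPercent]) ->
    erlang:send_after(IntervalMs, self(), check),
    {ok, #state{interval = IntervalMs, threshold = ThresholdPercent}}.

handle_info(check, #state{interval = Interval,
                          threshold = Threshold} = State) ->
    %% disksup:get_disk_data/0 -> [{MountPoint, TotalKBytes, UsedPercent}]
    Used = [Percent || {_Mnt, _KBytes, Percent} <- disksup:get_disk_data()],
    IsAbleToWrite = (Used =:= []) orelse (lists:max(Used) < Threshold),
    %% Push the flag into each leo_object_storage_server's State
    %% (hypothetical API; see the proposal above).
    leo_object_storage_server:set_is_able_to_write(IsAbleToWrite),
    erlang:send_after(Interval, self(), check),
    {noreply, State};
handle_info(_Other, State) ->
    {noreply, State}.

handle_call(_Request, _From, State) -> {reply, ok, State}.
handle_cast(_Msg, State)            -> {noreply, State}.
terminate(_Reason, _State)          -> ok.
code_change(_OldVsn, State, _Extra) -> {ok, State}.
```

Pushing the flag once per check keeps the hot write path free of disk I/O and gen_server round-trips.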

@mocchira
Member

@yosukehara Confirmed. It seems the disksup gen_server process could become a bottleneck, since multiple leo_object_storage_server processes (default: 8) would call that gen_server every time, and each call blocks its caller exclusively (waiting for the other calls to finish). So your proposal looks reasonable.
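
To make the serialization point concrete, here is a fragment-level sketch (editor's illustration, not the actual LeoFS code) of the guarded write path: with the cached flag the server only pattern-matches a boolean in its own State, whereas a per-put disksup:get_disk_data/0 call would queue every writer behind the single disksup gen_server. The is_able_to_write field and do_put/2 helper are hypothetical:

```erlang
%% Fragment of a leo_object_storage_server-style gen_server (sketch).
-record(state, {is_able_to_write = true :: boolean()}).

handle_call({put, _Key, _Object}, _From,
            #state{is_able_to_write = false} = State) ->
    %% Disk usage exceeded the threshold: reject the write outright,
    %% without any call into disksup on the hot path.
    {reply, {error, unavailable}, State};
handle_call({put, Key, Object}, _From, State) ->
    Reply = do_put(Key, Object),  %% stand-in for the existing write routine
    {reply, Reply, State}.
```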

@yosukehara
Member Author

@mocchira

> Confirmed. It seems the disksup gen_server process could become a bottleneck, since multiple leo_object_storage_server processes (default: 8) would call that gen_server every time, and each call blocks its caller exclusively (waiting for the other calls to finish). So your proposal looks reasonable.

Thanks for your confirmation. I'm going to send a PR soon.

yosukehara added a commit to yosukehara/leo_object_storage that referenced this issue Sep 12, 2017
yosukehara added a commit to yosukehara/leo_object_storage that referenced this issue Sep 12, 2017
mocchira pushed a commit to leo-project/leo_object_storage that referenced this issue Sep 12, 2017
@yosukehara
Member Author

We've confirmed that this issue is fixed.
