This repository has been archived by the owner on Sep 22, 2020. It is now read-only.

cache: update LRU cache when putting same key but different value #424

Status: Open. Wants to merge 3 commits into master.
Conversation

@nak3 (Contributor) commented Dec 9, 2016

cache: update LRU cache when putting same key but different value

When the same key is put with a different value, the current code
doesn't replace the cached entry. This patch changes the code to find
the cache entry holding the old value and replace it when the same key
is put again.

if v == value {
return
} else {
lru.priority.Remove(lru.priority.Front())
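
To make the behavior under discussion concrete, here is a minimal, self-contained sketch of a Put that replaces a stale entry on a key collision. The types and names (kv, cache, newCache) are illustrative stand-ins, not the project's actual code, and the removal scans the list by key rather than relying on get()'s MoveToFront side effect.

```go
package main

import (
	"container/list"
	"fmt"
)

// kv mirrors the key/value pair stored in each list element.
type kv struct {
	key, value interface{}
}

// cache is a toy LRU: priority holds kv entries, most recent at the front.
type cache struct {
	kvs      map[interface{}]interface{}
	priority *list.List
}

func newCache() *cache {
	return &cache{kvs: map[interface{}]interface{}{}, priority: list.New()}
}

// Put replaces an existing entry's value instead of silently keeping the old one.
func (lru *cache) Put(key, value interface{}) {
	if v, ok := lru.kvs[key]; ok {
		if v == value {
			return // same key, same value: nothing to do
		}
		// Same key, different value: drop the stale list entry
		// before inserting the fresh one.
		for e := lru.priority.Front(); e != nil; e = e.Next() {
			if e.Value.(kv).key == key {
				lru.priority.Remove(e)
				break
			}
		}
	}
	lru.kvs[key] = value
	lru.priority.PushFront(kv{key, value})
}

func main() {
	c := newCache()
	c.Put("a", 1)
	c.Put("a", 2) // same key, new value: must replace, not duplicate
	fmt.Println(c.kvs["a"], c.priority.Len())
}
```

Note that the scan makes replacement O(n); the discussion below is about exactly this cost.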
Reviewer:

Hi, I'm not sure why this is necessary. Why remove the most recently used element?

nak3 (Contributor, author):

Hi, umm... when lru.get() (#35) is called and finds the element, lru.priority.MoveToFront(element) is supposed to be called there, so the element moves to the front. That's why I remove the front element.

Is this wrong?

Reviewer:

OK, I understand now. So it is not wrong, but you are relying on a side effect of the get() function. If at some future time get() does not do that, this code would remove a different element.

Instead, find the element and remove it. That will always work.

nak3 (Contributor, author) commented Dec 9, 2016:

Yes, I admit it relies on a side effect, as you mentioned. But Go's container/list doesn't have an efficient find function (a Next()/Prev() loop would be O(n), and I don't want to use that in the cache code here). Do you have any idea how to find the element to remove efficiently?
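
For context, the standard way LRU implementations sidestep this scan is to index list elements by key in the map, so the element to update or remove is found in O(1). This is a general technique, not the approach this PR takes; the names below (entry, lruCache, newLRU) are hypothetical.

```go
package main

import (
	"container/list"
	"fmt"
)

// entry is the payload stored in each list element.
type entry struct {
	key, value interface{}
}

// lruCache maps each key to its *list.Element, so no Next()/Prev()
// scan is ever needed to locate an entry.
type lruCache struct {
	elems    map[interface{}]*list.Element
	priority *list.List
}

func newLRU() *lruCache {
	return &lruCache{elems: map[interface{}]*list.Element{}, priority: list.New()}
}

// Put updates in place in O(1): the map hands us the exact element to touch.
func (c *lruCache) Put(key, value interface{}) {
	if e, ok := c.elems[key]; ok {
		e.Value = entry{key, value} // replace the value
		c.priority.MoveToFront(e)   // and refresh recency
		return
	}
	c.elems[key] = c.priority.PushFront(entry{key, value})
}

// Get returns the cached value and marks it most recently used.
func (c *lruCache) Get(key interface{}) (interface{}, bool) {
	e, ok := c.elems[key]
	if !ok {
		return nil, false
	}
	c.priority.MoveToFront(e)
	return e.Value.(entry).value, true
}

func main() {
	c := newLRU()
	c.Put("k", "old")
	c.Put("k", "new") // replaced in place, no scan
	v, _ := c.Get("k")
	fmt.Println(v, c.priority.Len())
}
```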

nak3 (Contributor, author):

Ah, finding the element with a Next() loop is O(n) in general, but in this case the element has just been moved to the front, so the loop will most likely finish on the first iteration. I will update the patch to find the element with a Next() loop.

@lpabon commented Dec 9, 2016

You probably want to force a removal of this element, then call Put() again instead.

@nak3 (Contributor, author) commented Dec 10, 2016

Thank you for your advice. I have updated the patch.

if old := lru.find(lru.priority.Front(), v); old != nil {
lru.priority.Remove(old)
} else {
clog.Errorf("read cache is corrupted. Please restart the process.")
Reviewer (Contributor):

clog.Fatalf

func (lru *cache) find(e *list.Element, v interface{}) *list.Element {
if e == nil {
return nil
} else if e.Value.(kv).value == v {
Reviewer (Contributor):

Change the else if to if; we already return at line 81.

if reflect.DeepEqual(v, value) {
return
} else {
// Actually find() is not necessary, but it makes sure to remove
Reviewer (Contributor):

Not sure if we want to be extra safe here.

nak3 (Contributor, author):

Umm... @lpabon, what do you think?

@xiang90 (Contributor) commented Dec 13, 2016

lgtm. defer to @lpabon. Can you take another look? Thanks.
