KeyError TTLCache #80

Closed

rafaelsimbiose opened this issue Dec 13, 2016 · 3 comments
Comments


rafaelsimbiose commented Dec 13, 2016

Hi, I get a KeyError when I assign a value to a cache created with TTLCache, for example:

I have this cache:

WHITE_LIST_CACHE = TTLCache(maxsize=10000,
                            ttl=10)

And the error occurs on this assignment:

WHITE_LIST_CACHE[project_id] = white_list

The traceback shows that __setitem__ calls self.expire, which then fails with a KeyError at del self.__data[key] inside the cache_delitem (Cache.__delitem__) call.

def __setitem__(self, key, value, cache_setitem=Cache.__setitem__):
    with self.__timer as time:
        self.expire(time)  # <-- expire() is called here
        cache_setitem(self, key, value)
    try:
        link = self.__getlink(key)
    except KeyError:
        self.__links[key] = link = _Link(key)

while curr is not root and curr.expire < time:
    cache_delitem(self, curr.key)  # <-- calls Cache.__delitem__
    del links[curr.key]
    next = curr.next
    curr.unlink()
    curr = next

def __delitem__(self, key):
    size = self.__size.pop(key)
    del self.__data[key]  # <-- the exception happens here
    self.__currsize -= size

def __contains__(self, key):
    return key in self.__data

Is this expected?

Thanks!
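For context, here is a minimal sketch of how unsynchronized writes from several threads to a shared TTLCache can produce exactly this KeyError. The thread count, key pattern, and TTL below are illustrative, and since the race is timing-dependent the error may not show up on every run:

import threading
from cachetools import TTLCache

# Shared cache with a very short TTL so expire() runs on almost every write.
cache = TTLCache(maxsize=100, ttl=0.001)

def writer(worker):
    for i in range(10000):
        try:
            # Unsynchronized write: two threads can race inside expire().
            cache[(worker, i % 50)] = i
        except KeyError as exc:
            print("KeyError during __setitem__:", exc)

threads = [threading.Thread(target=writer, args=(n,)) for n in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()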

tkem (Owner) commented Dec 13, 2016

@rafaelsimbiose: No, as far as I can tell from your report, this is not expected. Can you provide a test case to reproduce this? Is your code multi-threaded, and did you perhaps forget to lock access to WHITE_LIST_CACHE properly?
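As an illustration of the locking tkem is asking about, here is a sketch that wraps every access to the shared cache in a threading.Lock (the helper function names are made up):

import threading
from cachetools import TTLCache

WHITE_LIST_CACHE = TTLCache(maxsize=10000, ttl=10)
WHITE_LIST_CACHE_LOCK = threading.Lock()  # guards all access to the cache

def put_white_list(project_id, white_list):  # hypothetical helper
    with WHITE_LIST_CACHE_LOCK:
        WHITE_LIST_CACHE[project_id] = white_list

def get_white_list(project_id):  # hypothetical helper
    with WHITE_LIST_CACHE_LOCK:
        return WHITE_LIST_CACHE.get(project_id)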

rafaelsimbiose (Author)

@tkem I found the problem; it was indeed missing locking around threaded access to the cache. I'll close the issue. Thanks so much.

tkem (Owner) commented Dec 14, 2016

You're welcome! Maybe I should state this more prominently in the docs...
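One documented way to get this synchronization is to pass a lock to the cached decorator, which then serializes all cache reads and writes; a small sketch (the decorated function is illustrative):

import threading
from cachetools import TTLCache, cached

# The decorator acquires the lock around every cache lookup and update.
@cached(cache=TTLCache(maxsize=10000, ttl=10), lock=threading.RLock())
def load_white_list(project_id):  # hypothetical expensive lookup
    return {"project": project_id}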

@tkem tkem reopened this Dec 14, 2016
@tkem tkem closed this as completed Dec 14, 2016
mmcfarland added a commit to microsoft/planetary-computer-apis that referenced this issue Oct 24, 2022
cachetools cache is not thread safe and there were frequent exceptions
logged indicating that cache updates during async calls were failing
with key errors similar to those described in:

tkem/cachetools#80

Adding a lock per table instance synchronizes cache updates across threads.
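A rough sketch of what a lock per table instance can look like; the class and method names here are hypothetical, not the actual planetary-computer-apis code:

import threading
from cachetools import TTLCache

class TableConfigCache:
    """Hypothetical per-instance cache: each table gets its own lock."""

    def __init__(self, maxsize=128, ttl=600):
        self._cache = TTLCache(maxsize=maxsize, ttl=ttl)
        self._lock = threading.Lock()

    def get_or_fetch(self, key, fetch):
        # All reads and writes go through the instance's lock.
        with self._lock:
            try:
                return self._cache[key]
            except KeyError:
                value = self._cache[key] = fetch(key)
                return value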
mmcfarland added a commit to microsoft/planetary-computer-apis that referenced this issue Oct 25, 2022
* Temporarily use fork for starlette 0.21 release

The 0.21 release resolves a frequent error on our fastapi version.

See:
encode/starlette#1710
encode/starlette#1715

* Disable FTP as function app deploy option

Security controls

* Trace request attributes before invoking middleware

If an exception is raised in subsequent middlewares, added trace
attributes will still be logged to Azure. This allows us to find
requests that fail in the logs.

* Make config cache thread safe

cachetools cache is not thread safe and there were frequent exceptions
logged indicating that cache updates during async calls were failing
with key errors similar to those described in:

tkem/cachetools#80

Adding a lock per table instance synchronizes cache updates across threads.

* Lint

* Changelog