Ensure flexicache is LRU cached #679
Conversation
Good find @pydanny, thanks! Let's use the OrderedDict approach, since we value simplicity most highly. BTW looking at imports, I see that asyncio is imported in that function and not used, so perhaps replace it with the collections import?
Oh BTW don't forget
I switched the PR to use the OrderedDict approach. As for the use of … I ran nbdev_clean, but it still looks messy. 😞
Oh sorry you're right, I didn't see asyncio being used at the bottom there! OK in that case you should put it back where it was -- in fastcore.xtras we prefer to only import rarely used modules if/when they're actually used. You should also only import …
Why are all the tests now passing on GitHub? I was stumped by the failing test that had emerged, being unable to replicate it locally. Here is the failing command:

`nbdev_test --flags '' --n_workers 3 --pause 1.5 --file_re "[0-9][0-1].*"`

Now, having moved the imports to within the function, all the tests are passing on GitHub. Why is that the case?
I don't know - I didn't see the failing tests so can't really guess. But glad it's working now :)
I still can't replicate it locally. For now I'm going to consider it a temporary GitHub problem. |
Originating ticket: #678
Problem
The Python docs say that `dict.popitem()` is LIFO, meaning Last In, First Out. That is the opposite of the FIFO eviction order needed by an LRU (Least Recently Used) cache.
Reference: https://docs.python.org/3/library/stdtypes.html#dict.popitem
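The documented LIFO behaviour is easy to confirm (a quick illustration, not flexicache code):

```python
# dict.popitem() removes the *most recently inserted* item (LIFO),
# so using it for eviction would discard the newest entry, not the oldest.
d = {'a': 1, 'b': 2}
d['c'] = 3
assert d.popitem() == ('c', 3)  # last in, first out
assert list(d) == ['a', 'b']    # the older entries survive
```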
Solution
Since the cache is constantly being popped and reset, treat the order of the cache keys as an iterator and pick the first cache key. To do this we just call `next` once on `iter(cache)`.
Tests
As cache invalidation is hard, new tests are added to flexicache that document cache invalidation and resetting.
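The eviction step described under Solution can be sketched like this (a minimal illustration of the idea, assuming a plain insertion-ordered dict; the hypothetical `evict_oldest` helper is not fastcore's actual flexicache code):

```python
def evict_oldest(cache):
    "Delete the first-inserted (oldest) key; plain dicts keep insertion order in Python 3.7+."
    del cache[next(iter(cache))]

cache = {'a': 1, 'b': 2, 'c': 3}
cache['a'] = cache.pop('a')  # on a cache hit, re-insert to mark 'a' most recently used
evict_oldest(cache)          # evicts 'b', now the least recently used key
```

Because popping and re-inserting on each hit moves a key to the end of the dict, the key returned first by `iter(cache)` is always the least recently used one.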
Alternative approach: OrderedDict
`collections.OrderedDict` has a `popitem()` method with a `last` boolean that replicates our design, as shown in the code example below. I rejected this approach because it adds an extra import, and `cache.popitem(last=False)` is probably a very tiny bit slower than `del cache[next(iter(cache))]`. However, this approach is arguably clearer. In any case, I'm not opposed to swapping out the current solution for the one presented here.
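A sketch of the OrderedDict variant (illustrative only; the PR's actual change may differ):

```python
from collections import OrderedDict

cache = OrderedDict([('a', 1), ('b', 2), ('c', 3)])
cache.move_to_end('a')               # a cache hit marks 'a' most recently used
evicted = cache.popitem(last=False)  # last=False pops the *oldest* entry instead of the newest
# evicted == ('b', 2); 'c' and 'a' remain
```

`popitem(last=False)` makes the FIFO intent explicit, which is why this version reads more clearly even though it needs the extra `collections` import.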