How to Implement an LRU Cache
An LRU cache is typically implemented using a doubly linked list and a hash map.
The doubly linked list stores the pages, with the most recently used page at the head of the list. As pages are accessed, less recently used pages drift toward the end of the list, so the node at the tail is always the least recently used page.
The hash map (key: page number, value: the page's node in the list) gives O(1) access to any page in the cache.
When a page is accessed, there can be 2 cases:
1. Page is present in the cache – If the page is already present in the cache, we move the page to the start of the list.
2. Page is not present in the cache – If the page is not present in the cache, we add it to the list and to the hash map.
How to add a page to the list:
a. If the cache is not full, add the new page to the start of the list.
b. If the cache is full, remove the tail node of the linked list (and its hash map entry), then add the new page to the start of the list.
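Before hand-rolling the list, the two cases above can be tried out with Java's built-in LinkedHashMap, which already maintains entries in access order when its third constructor argument is true. This is a minimal sketch (the class name LruCache is my own choice, not from the text):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal LRU cache on top of LinkedHashMap. Passing accessOrder = true
// makes iteration order reflect last-access time, and overriding
// removeEldestEntry evicts the least recently used entry once the
// capacity is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict when over capacity
    }
}
```

For example, with capacity 2: after put(1), put(2), get(1), put(3), key 2 is the least recently used entry and gets evicted, leaving keys 1 and 3 in the cache.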
Basically you will have 2 main data structures:
private Map<Integer, Node> pageMap;
private DoublyLinkedList pageList;
pageMap holds the cached key-value data, while pageList is a doubly linked list that supports three operations: addEntryToList, moveEntryToHead, and removeTail. addEntryToList inserts a node at the head of the list for a brand-new key. moveEntryToHead is called when a key that is already cached is accessed again: since it was just used, its node is moved to the head. removeTail is called when the cache is full and the tail node must be removed to make space for a new one; whenever removeTail runs, the corresponding entry must also be cleared from pageMap. In short, pageMap stores the actual cached key-value data, and pageList keeps the nodes ordered by last-access time, with the least recently accessed node at the tail, where it is eventually evicted when there is a space crunch.
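Putting it all together, here is a sketch of the hand-rolled version using the method names from the description above (the Node class, sentinel head/tail nodes, and get/put signatures are my assumptions, not spelled out in the text):

```java
import java.util.HashMap;
import java.util.Map;

class LRUCache {
    // One node of the doubly linked list, holding a cached page.
    private static class Node {
        int key, value;
        Node prev, next;
        Node(int key, int value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<Integer, Node> pageMap = new HashMap<>();
    // Sentinel head/tail simplify the list operations: real nodes live
    // between them, with the most recently used node right after head.
    private final Node head = new Node(0, 0);
    private final Node tail = new Node(0, 0);

    LRUCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    public Integer get(int key) {
        Node node = pageMap.get(key);
        if (node == null) return null; // cache miss
        moveEntryToHead(node);         // mark as most recently used
        return node.value;
    }

    public void put(int key, int value) {
        Node node = pageMap.get(key);
        if (node != null) {            // key already cached: update and promote
            node.value = value;
            moveEntryToHead(node);
            return;
        }
        if (pageMap.size() == capacity) {
            removeTail();              // evict the least recently used page
        }
        Node fresh = new Node(key, value);
        pageMap.put(key, fresh);
        addEntryToList(fresh);
    }

    private void addEntryToList(Node node) {  // insert right after head
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }

    private void moveEntryToHead(Node node) { // unlink, then re-insert at head
        node.prev.next = node.next;
        node.next.prev = node.prev;
        addEntryToList(node);
    }

    private void removeTail() {               // drop the LRU node and its map entry
        Node lru = tail.prev;
        lru.prev.next = tail;
        tail.prev = lru.prev;
        pageMap.remove(lru.key);
    }
}
```

The sentinel nodes are a common design choice: because head and tail are never removed, none of the list operations need null checks for an empty list.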