August 16, 2024

Complete Tutorial on LRU Cache with Implementations


LRU Cache Implementation

The Least Recently Used (LRU) cache is a classic problem in system design and optimization. It comes up frequently both in coding interviews and in real-world systems where efficient cache management is critical.

Problem Statement

You need to design a data structure that can support the following operations efficiently:

  1. get(key): Returns the value of the key if it exists in the cache; otherwise, returns -1.
  2. put(key, value): Inserts or updates the value for the key. If the cache reaches its capacity, it should invalidate the least recently used item before inserting a new item.

LRU Cache Characteristics

  • The cache uses the Least Recently Used (LRU) eviction policy, meaning that when the cache reaches its capacity, it removes the least recently accessed element to make space for the new one.
  • The cache needs to operate in constant time (O(1)) for both the get and put operations.

Approaches to Solve the Problem

To implement an LRU cache efficiently, the combination of a doubly linked list and a hashmap is often used. Here’s why:

  1. Doubly Linked List: This is used to maintain the order of access, where the most recently accessed item is at the head and the least recently accessed item is at the tail.
  2. HashMap: This is used to provide fast access to the keys, allowing both get and put operations to be performed in constant time.
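As a minimal sketch of these two building blocks in Python (the article itself gives no code, so the class and variable names here are illustrative): each list node stores its own key alongside the value, so that when a node is evicted from the tail, its entry can also be deleted from the hashmap in O(1).

```python
class Node:
    """One entry in the doubly linked list: holds the key-value pair
    plus links to both neighbours, so any node can be unlinked in O(1)."""
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.prev = None   # neighbour toward the head (more recently used)
        self.next = None   # neighbour toward the tail (less recently used)

# The hashmap maps each key directly to its node in the list,
# so lookups and reordering never require traversing the list.
cache_map = {}             # key -> Node
```

Storing the key inside the node is the detail that makes eviction constant-time: without it, removing the tail node would leave no way to find the matching hashmap entry.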

How the Combination Works

When a get operation is performed:

  • If the key is found, move it to the head of the list (indicating it’s the most recently accessed).
  • If the key is not found, return -1.

When a put operation is performed:

  • If the key is already in the cache, update its value and move it to the head of the list.
  • If the key is not in the cache:
    • If the cache has reached its capacity, remove the tail (the least recently accessed item) and add the new key-value pair to the head.
    • Otherwise, simply add the new key-value pair to the head.
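The get and put flows described above can be combined into one implementation. The following is a sketch in Python (names and structure are my own; the article does not prescribe a particular implementation). It uses two sentinel nodes so that insertions and removals never need to special-case an empty list:

```python
class Node:
    """Doubly linked list node holding a key-value pair."""
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}              # key -> Node, for O(1) lookup
        self.head = Node(0, 0)     # sentinel on the most-recently-used side
        self.tail = Node(0, 0)     # sentinel on the least-recently-used side
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node):
        """Unlink a node from the list in O(1)."""
        node.prev.next = node.next
        node.next.prev = node.prev

    def _add_to_head(self, node):
        """Insert a node right after the head sentinel (most recent)."""
        node.next = self.head.next
        node.prev = self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)          # move to head: now most recently used
        self._add_to_head(node)
        return node.value

    def put(self, key, value):
        if key in self.map:
            self._remove(self.map[key])   # drop the stale node
        node = Node(key, value)
        self.map[key] = node
        self._add_to_head(node)
        if len(self.map) > self.capacity:
            lru = self.tail.prev          # least recently used real node
            self._remove(lru)
            del self.map[lru.key]
```

Both operations touch only a constant number of pointers and one hashmap entry, which is what gives the O(1) bound stated earlier.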

Example Walkthrough

Consider a cache with a capacity of 2:

  • put(1, 1): Cache becomes {1=1}.
  • put(2, 2): Cache becomes {2=2, 1=1}.
  • get(1): Returns 1 and moves 1 to the head, cache becomes {1=1, 2=2}.
  • put(3, 3): Cache reaches its capacity; evicts 2, cache becomes {3=3, 1=1}.
  • get(2): Returns -1 (not found).
  • put(4, 4): Cache evicts 1, cache becomes {4=4, 3=3}.
  • get(1): Returns -1 (not found).
  • get(3): Returns 3 and moves 3 to the head, cache becomes {3=3, 4=4}.
  • get(4): Returns 4 and moves 4 to the head, cache becomes {4=4, 3=3}.
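For comparison, the walkthrough above can also be reproduced with Python's standard-library collections.OrderedDict, which keeps insertion order and supports moving and popping entries from either end; this is a common shortcut rather than the hand-rolled list-plus-hashmap design:

```python
from collections import OrderedDict

class LRUCache:
    """LRU cache built on OrderedDict: move_to_end marks an entry as
    most recently used; popitem(last=False) evicts the least recent."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the LRU entry

cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
print(cache.get(1))   # 1
cache.put(3, 3)       # evicts key 2
print(cache.get(2))   # -1
cache.put(4, 4)       # evicts key 1
print(cache.get(1))   # -1
print(cache.get(3))   # 3
print(cache.get(4))   # 4
```

Internally, OrderedDict is itself backed by a doubly linked list over a hash table, so this version follows the same O(1) recipe the article describes.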

Applications

  • Operating Systems: LRU is used in memory management and virtual memory page replacement algorithms.
  • Web Browsers: Cache frequently accessed web pages and remove least accessed pages when the cache is full.
  • Database Systems: Caches frequently used queries or data to reduce access time.

Conclusion

The LRU cache problem is a fundamental design problem that teaches how to maintain and access data efficiently in constant time. Understanding this concept is crucial for optimizing systems that require efficient memory or data management.

For a detailed step-by-step guide, check out the full article: https://www.geeksforgeeks.org/lru-cache-implementation/.