This article, originally compiled by 一缘, walks through the principles of the LRU cache and how to implement one by hand. It is aimed at algorithm, backend, and frontend developers looking to level up.

Data Structures and Algorithms: Hand-Writing an LRU Cache

The LRU (Least Recently Used) cache is a frequent interview topic and a data structure widely used in high-performance systems.


1. How an LRU Cache Works

  • Least Recently Used: evict the entry that has gone unaccessed the longest (see the short standard-library demo below)
  • get and put must both run in O(1)
  • The classic implementation pairs a doubly linked list with a hash map
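The same eviction policy ships in Python's standard library as functools.lru_cache. The short demo below only illustrates the eviction order with capacity 2; it is a hedged standard-library example, separate from the hand-written cache built in the sections that follow.

from functools import lru_cache

@lru_cache(maxsize=2)   # keep results for the 2 most recently used arguments
def square(x):
    print(f"computing {x}")
    return x * x

square(1); square(2)    # computing 1, computing 2
square(1)               # cache hit: 1 becomes most recently used
square(3)               # computing 3; evicts 2, the least recently used
square(2)               # computing 2 again, because it was evicted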

2. Design Approach

  • A hash map stores the key-to-node mapping, giving O(1) lookup
  • A doubly linked list maintains access order: the head is the most recently used entry, the tail the least recently used
  • get moves the accessed node to the head; put inserts at the head and, once capacity is exceeded, removes from the tail (see the trace below)
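To make the head/tail convention concrete, here is how the list order evolves for the capacity-2 test sequence used later in this article (most recently used shown on the left):

put(1, 1)    ->  [1]
put(2, 2)    ->  [2, 1]
get(1) = 1   ->  [1, 2]
put(3, 3)    ->  [3, 1]    # 2 is evicted from the tail
get(2) = -1  ->  [3, 1]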

3. Hand-Written LRU Cache (Python Version)

# Doubly linked list node; the key is stored so the matching hash map
# entry can be deleted when the node is evicted from the tail.
class Node:
    def __init__(self, key, val):
        self.key = key
        self.val = val
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.cap = capacity
        self.map = {}
        # Sentinel head/tail nodes remove edge cases in _add/_remove
        self.head = Node(0, 0)
        self.tail = Node(0, 0)
        self.head.next = self.tail
        self.tail.prev = self.head

    # Look up a key and move its node to the head (most recently used)
    def get(self, key):
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)
        self._add(node)
        return node.val

    # Insert or update a key; evict the least recently used entry when over capacity
    def put(self, key, val):
        if key in self.map:
            self._remove(self.map[key])
        node = Node(key, val)
        self._add(node)
        self.map[key] = node
        if len(self.map) > self.cap:
            n = self.tail.prev  # least recently used node sits just before the tail sentinel
            self._remove(n)
            del self.map[n.key]

    # Unlink a node from the list in O(1)
    def _remove(self, node):
        p, n = node.prev, node.next
        p.next = n
        n.prev = p

    # Insert a node right after the head sentinel (most recently used position)
    def _add(self, node):
        node.next = self.head.next
        node.prev = self.head
        self.head.next.prev = node
        self.head.next = node

# Test
lru = LRUCache(2)
lru.put(1, 1)
lru.put(2, 2)
print(lru.get(1))  # 1
lru.put(3, 3)
print(lru.get(2))  # -1
lru.put(4, 4)
print(lru.get(1))  # -1
print(lru.get(3))  # 3
print(lru.get(4))  # 4

Output:

1
-1
-1
3
4

4. LRU Implementation in JavaScript

class Node {
  constructor(key, val) {
    this.key = key;
    this.val = val;
    this.prev = this.next = null;
  }
}
// Same structure as the Python version: a Map for O(1) lookup plus a doubly linked list for recency order
class LRUCache {
  constructor(cap) {
    this.cap = cap;
    this.map = new Map();
    this.head = new Node(0, 0);
    this.tail = new Node(0, 0);
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }
  get(key) {
    if (!this.map.has(key)) return -1;
    const node = this.map.get(key);
    this._remove(node);
    this._add(node);
    return node.val;
  }
  put(key, val) {
    if (this.map.has(key)) this._remove(this.map.get(key));
    const node = new Node(key, val);
    this._add(node);
    this.map.set(key, node);
    if (this.map.size > this.cap) {
      const n = this.tail.prev; // least recently used node
      this._remove(n);
      this.map.delete(n.key);
    }
  }
  _remove(node) { // unlink a node in O(1)
    node.prev.next = node.next;
    node.next.prev = node.prev;
  }
  _add(node) { // insert right after the head sentinel
    node.next = this.head.next;
    node.prev = this.head;
    this.head.next.prev = node;
    this.head.next = node;
  }
}
// Test
const lru = new LRUCache(2);
lru.put(1, 1);
lru.put(2, 2);
console.log(lru.get(1)); // 1
lru.put(3, 3);
console.log(lru.get(2)); // -1
lru.put(4, 4);
console.log(lru.get(1)); // -1
console.log(lru.get(3)); // 3
console.log(lru.get(4)); // 4

Output:

1
-1
-1
3
4

5. Performance Analysis and Best Practices

  • Both get and put run in O(1) time
  • Well suited to caching hot data in memory-sensitive scenarios; the hand-written versions above are not thread-safe, so high-concurrency use needs external synchronization
  • In production, prefer a battle-tested building block: OrderedDict (Python), a Map plus linked list in JavaScript (or a Map alone, since Map preserves insertion order), LinkedHashMap (Java), and so on (see the sketch below)
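As a point of comparison for the Python route, here is a hedged sketch of the same interface built on collections.OrderedDict; move_to_end and popitem(last=False) are the standard OrderedDict methods it relies on, and the class name OrderedDictLRU is only an illustrative choice, not a library API.

from collections import OrderedDict

class OrderedDictLRU:
    # Minimal LRU sketch on top of OrderedDict; not thread-safe.
    def __init__(self, capacity):
        self.cap = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, val):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = val
        if len(self.data) > self.cap:
            self.data.popitem(last=False)  # evict the least recently used entry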

Conclusion

The LRU cache is a staple of both interviews and high-performance systems. Understanding its principle and implementation will noticeably strengthen your algorithm and engineering skills. Feel free to leave a comment to discuss more hands-on algorithm topics!