Node-LRU-Addon: a native LRU cache with Node 4.0 support
Posted 9 years ago by denghongcai · 4865 views · last edited 8 years ago · from the 分享 (Share) category

Based on node-lru-native. Since the original author stopped updating it, I made some optimizations and republished it.

GitHub: node-lru-addon

This is an implementation of a simple in-memory cache for node.js, supporting LRU (least-recently-used) eviction and TTL expirations.

It was developed as an alternative to the (excellent) node-lru-cache library for use with hashes containing a very large number of items. V8 normally does a good job of optimizing the in-memory representation of objects, but it isn’t optimized for an object that holds a huge amount of data. When you add a very large number of properties (particularly with non-integer keys) to an object, performance begins to suffer.

Rather than rely on V8 to figure out what we’re trying to do, node-lru-native is a light wrapper around std::unordered_map from C++11. A std::list is used to track accesses so we can evict the least-recently-used item when necessary.
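To illustrate the idea, here is a minimal sketch of the same map-plus-list pattern in plain JavaScript (the names are made up for illustration; the addon itself does this in C++ with std::unordered_map and std::list):

function SketchLRU(maxElements) {
  this.max = maxElements;   // 0 means no maximum
  this.map = new Map();     // key -> list node, O(1) lookup
  this.head = null;         // most recently used
  this.tail = null;         // least recently used
}

SketchLRU.prototype._unlink = function (node) {
  if (node.prev) node.prev.next = node.next; else this.head = node.next;
  if (node.next) node.next.prev = node.prev; else this.tail = node.prev;
  node.prev = node.next = null;
};

SketchLRU.prototype._pushFront = function (node) {
  node.next = this.head;
  if (this.head) this.head.prev = node;
  this.head = node;
  if (!this.tail) this.tail = node;
};

SketchLRU.prototype.set = function (key, value) {
  var node = this.map.get(key);
  if (node) {
    node.value = value;
    this._unlink(node);
  } else {
    node = { key: key, value: value, prev: null, next: null };
    this.map.set(key, node);
    if (this.max && this.map.size > this.max) {
      var lru = this.tail;        // evict the least-recently-used entry
      this._unlink(lru);
      this.map.delete(lru.key);
    }
  }
  this._pushFront(node);          // the touched entry becomes most recently used
};

SketchLRU.prototype.get = function (key) {
  var node = this.map.get(key);
  if (!node) return undefined;
  this._unlink(node);
  this._pushFront(node);          // reading an entry also refreshes its recency
  return node.value;
};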

Based on the node-hashtable library by Issac Wagner.

Because the original code is no longer maintained, I have updated it and published it under a new name, lru-addon.

Usage

Install via npm:

$ npm install lru-addon

Then:

var LRUCache = require('lru-addon');
var cache = new LRUCache({ maxElements: 1000 });
cache.set('some-key', 42);
var value = cache.get('some-key');

If you’d like to tinker, you can build the extension using node-gyp:

$ npm install -g node-gyp
$ node-gyp configure
$ node-gyp build

Configuration

To configure the cache, you can pass a hash to the LRUCache constructor with the following options:

var cache = new LRUCache({

  // The maximum number of items to add to the cache before evicting the least-recently-used item.
  // Default: 0, meaning there is no maximum.
  maxElements: 10000,

  // The maximum age (in milliseconds) of an item.
  // The item will be removed if get() is called and the item is too old.
  // Default: 0, meaning items will never expire.
  maxAge: 60000,

  // The initial number of items for which space should be allocated.
  // The cache will resize dynamically if necessary.
  size: 1000,

  // The maximum load factor for buckets in the unordered_map.
  // Typically you won't need to change this.
  maxLoadFactor: 2.0

});
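As an illustration of the maxAge behaviour described above (the 100 ms value and the key are made up; note that expiry is only checked when get() is called):

var LRUCache = require('lru-addon');
var ttlCache = new LRUCache({ maxElements: 1000, maxAge: 100 });

ttlCache.set('token', 'abc');
console.log(ttlCache.get('token'));   // 'abc' while the entry is younger than maxAge

setTimeout(function () {
  // The entry is now older than 100 ms, so get() removes it and returns undefined.
  console.log(ttlCache.get('token'));
}, 200);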

API

set(key, value)

Adds the specified item to the cache with the specified key.

get(key)

Returns the item with the specified key, or undefined if no item exists with that key.

remove(key)

Removes the item with the specified key if it exists.

clear()

Removes all items from the cache.

size()

Returns the number of items in the cache.

stats()

Returns a hash containing internal information about the cache.
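Putting these calls together (the keys and values are made up; the eviction comment assumes the standard least-recently-used order described earlier):

var LRUCache = require('lru-addon');
var cache = new LRUCache({ maxElements: 3 });

cache.set('a', 1);
cache.set('b', 2);
cache.set('c', 3);

cache.get('a');              // touching 'a' makes it the most recently used
cache.set('d', 4);           // exceeds maxElements, so the LRU entry ('b') should be evicted

cache.remove('c');           // explicit removal
console.log(cache.size());   // 2 ('a' and 'd' remain)
console.log(cache.stats());  // internal information about the cache

cache.clear();
console.log(cache.size());   // 0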

Changelog

  • 1.0.3 – Support io.js 3.0
  • 1.0.0 – Published as a new package
  • 0.4.0 – Use nan for compatibility across Node versions
  • 0.3.0 – Changed memory allocation strategy, fixed issue where remove() would do a seek through the LRU list, code cleanup
  • 0.2.0 – Fixed issue where maxAge-based removal would result in a seek through the LRU list
  • 0.1.0 – Initial release
9 replies

@i5ting It's a C++ implementation. When there are many elements or they take up a lot of memory, it's faster than a JavaScript implementation, and it doesn't use V8 heap memory.

@denghongcai Have you done a benchmark?

An LRU implementation should probably use a linked list… The one @i5ting linked simulates a linked list in JS, while the OP's implementation is an O(n) approach, so with a lot of data it's not necessarily faster than the JS one.

Take a closer look at what it actually uses: mine is a std::list plus std::unordered_map approach, which is the standard way to implement an LRU cache.

I'll just add a benchmark to the README.

Sorry, I misread the code. Looking forward to the stress test.

You can check the benchmark in the repository.

My test results:

  1. LRU-Cache: 396ms
  2. LRU-Addon: 180ms
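(The numbers above will vary by machine. A rough comparison along these lines can be reproduced with a simple loop of set/get calls; this is only an illustrative sketch, not the benchmark script in the repository.)

var LRU = require('lru-cache');       // the pure-JS implementation
var LRUAddon = require('lru-addon');  // this native addon

function bench(name, cache) {
  var start = Date.now();
  for (var i = 0; i < 1000000; i++) {
    cache.set('key-' + (i % 100000), i);
    cache.get('key-' + ((i * 7) % 100000));
  }
  console.log(name + ': ' + (Date.now() - start) + 'ms');
}

bench('LRU-Cache', LRU({ max: 100000 }));
bench('LRU-Addon', new LRUAddon({ maxElements: 100000 }));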

Node 4.0 is now supported!
