# lru-cache

A cache object that deletes the least-recently-used items.

Specify a max number of the most recently used items that you
want to keep, and this cache will keep that many of the most
recently accessed items.

This is not primarily a TTL cache, and does not make strong TTL
guarantees. There is no preemptive pruning of expired items by
default, but you _may_ set a TTL on the cache or on a single
`set`. If you do so, it will treat expired items as missing, and
delete them when fetched. If you are more interested in TTL
caching than LRU caching, check out
[@isaacs/ttlcache](http://npm.im/@isaacs/ttlcache).

As of version 7, this is one of the most performant LRU
implementations available in JavaScript, and supports a wide
diversity of use cases. However, note that using some of the
features will necessarily impact performance, by causing the
cache to have to do more work. See the "Performance" section
below.

## Installation

```bash
npm install lru-cache --save
```

## Usage

```js
// hybrid module, either works
import { LRUCache } from 'lru-cache'
// or:
const { LRUCache } = require('lru-cache')
// or in minified form for web browsers:
import { LRUCache } from 'https://unpkg.com/lru-cache@9/dist/mjs/index.min.mjs'

// At least one of 'max', 'ttl', or 'maxSize' is required, to prevent
// unsafe unbounded storage.
//
// In most cases, it's best to specify a max for performance, so all
// the required memory allocation is done up-front.
//
// All the other options are optional; see the sections below for
// documentation on what each one does. Most of them can be
// overridden for specific items in get()/set().
const options = {
  max: 500,

  // for use with tracking overall storage size
  maxSize: 5000,
  sizeCalculation: (value, key) => {
    return 1
  },

  // for use when you need to clean up something when objects
  // are evicted from the cache
  dispose: (value, key, reason) => {
    freeFromMemoryOrWhatever(value)
  },

  // for use when you need to know that an item is being inserted
  // note that this does NOT allow you to prevent the insertion,
  // it just allows you to know about it.
  onInsert: (value, key, reason) => {
    logInsertionOrWhatever(key, value)
  },

  // how long to live in ms
  ttl: 1000 * 60 * 5,

  // return stale items before removing from cache?
  allowStale: false,

  updateAgeOnGet: false,
  updateAgeOnHas: false,

  // async method to use for cache.fetch(), for
  // stale-while-revalidate type of behavior
  fetchMethod: async (key, staleValue, { options, signal, context }) => {},
}

const cache = new LRUCache(options)

cache.set('key', 'value')
cache.get('key') // "value"

// non-string keys ARE fully supported,
// but note that it must be THE SAME object, not
// just a JSON-equivalent object.
const someObject = { a: 1 }
cache.set(someObject, 'a value')
// Object keys are not toString()-ed
cache.set('[object Object]', 'a different value')
assert.equal(cache.get(someObject), 'a value')
// A similar object with same keys/values won't work,
// because it's a different object identity
assert.equal(cache.get({ a: 1 }), undefined)

cache.clear() // empty the cache
```

If you put more stuff in the cache, then less recently used items
will fall out. That's what an LRU cache is.

For a full description of the API and all options, please see
[the LRUCache typedocs](https://isaacs.github.io/node-lru-cache/).

## Storage Bounds Safety

This implementation aims to be as flexible as possible, within
the limits of safe memory consumption and optimal performance.

At initial object creation, storage is allocated for `max` items.
If `max` is set to zero, then some performance is lost, and item
count is unbounded. Either `maxSize` or `ttl` _must_ be set if
`max` is not specified.

If `maxSize` is set, then this creates a safe limit on the
maximum storage consumed, but without the performance benefits of
pre-allocation. When `maxSize` is set, every item _must_ provide
a size, either via the `sizeCalculation` method provided to the
constructor, or via a `size` or `sizeCalculation` option provided
to `cache.set()`. The size of every item _must_ be a positive
integer.

If neither `max` nor `maxSize` are set, then `ttl` tracking must
be enabled. Note that, even when tracking item `ttl`, items are
_not_ preemptively deleted when they become stale, unless
`ttlAutopurge` is enabled. Instead, they are only purged the
next time the key is requested. Thus, if `ttlAutopurge`, `max`,
and `maxSize` are all not set, then the cache will potentially
grow unbounded.

In this case, a warning is printed to standard error. Future
versions may require the use of `ttlAutopurge` if `max` and
`maxSize` are not specified.

If you truly wish to use a cache that is bound _only_ by TTL
expiration, consider using a `Map` object, and calling
`setTimeout` to delete entries when they expire. It will perform
much better than an LRU cache.

Here is an implementation you may use, under the same
[license](./LICENSE) as this package:

```js
// a storage-unbounded ttl cache that is not an lru-cache
const cache = {
  data: new Map(),
  timers: new Map(),
  set: (k, v, ttl) => {
    if (cache.timers.has(k)) {
      clearTimeout(cache.timers.get(k))
    }
    cache.timers.set(
      k,
      setTimeout(() => cache.delete(k), ttl),
    )
    cache.data.set(k, v)
  },
  get: k => cache.data.get(k),
  has: k => cache.data.has(k),
  delete: k => {
    if (cache.timers.has(k)) {
      clearTimeout(cache.timers.get(k))
    }
    cache.timers.delete(k)
    return cache.data.delete(k)
  },
  clear: () => {
    cache.data.clear()
    for (const v of cache.timers.values()) {
      clearTimeout(v)
    }
    cache.timers.clear()
  },
}
```

If that isn't to your liking, check out
[@isaacs/ttlcache](http://npm.im/@isaacs/ttlcache).

## Storing Undefined Values

This cache never stores undefined values, as `undefined` is used
internally in a few places to indicate that a key is not in the
cache.

You may call `cache.set(key, undefined)`, but this is just
an alias for `cache.delete(key)`. Note that this has the effect
that `cache.has(key)` will return _false_ after setting it to
undefined.

```js
cache.set(myKey, undefined)
cache.has(myKey) // false!
```

If you need to track `undefined` values, and still note that the
key is in the cache, an easy workaround is to use a sigil object
of your own.

```js
import { LRUCache } from 'lru-cache'
const undefinedValue = Symbol('undefined')
const cache = new LRUCache(...)
const mySet = (key, value) =>
  cache.set(key, value === undefined ? undefinedValue : value)
const myGet = key => {
  const v = cache.get(key)
  return v === undefinedValue ? undefined : v
}
```

## Tracing and Observability

Most methods can accept a `status` option, which is an
[`LRUCache.Status`](https://isaacs.github.io/node-lru-cache/interfaces/LRUCache.LRUCache.Status.html)
object that will be decorated over the course of the operation
with indications of what was done and why.

Additionally, this library is instrumented using the
[`node:diagnostics_channel`](https://nodejs.org/api/diagnostics_channel.html)
module on Node and other platforms that support it. In order to
get diagnostics metrics, listen on the
`channel('lru-cache:metrics')`. To get Tracing Channel traces,
subscribe to the `tracingChannel('lru-cache')`. The
[`LRUCache.Status`](https://isaacs.github.io/node-lru-cache/interfaces/LRUCache.LRUCache.Status.html)
objects will be provided as the message context to those channel
listeners.

For example, you could do the following to get comprehensive
information about every LRUCache instance in your application:

```ts
import { tracingChannel, subscribe } from 'node:diagnostics_channel'

subscribe('lru-cache:metrics', (message, name) => {
  // name will always be 'lru-cache:metrics'
  // message will be the LRUCache.Status object for whatever
  // synchronous operation was performed.
  console.error('LRUCache Metrics', message)
})

tracingChannel('lru-cache').subscribe({
  start: status => {
    // a traced operation is starting
  },
  asyncStart: status => {
    // an async traced operation is starting
  },
  asyncEnd: status => {
    // an async traced operation is ending
  },
  error: status => {
    // a traced operation failed
  },
  end: status => {
    // a traced operation is complete
  },
})
```

The async `cache.fetch()` and `cache.forceFetch()` methods are
covered by the tracing channel. All the other operations are
covered by the `lru-cache:metrics` channel, because they are
strictly synchronous, and thus don't have an asynchronous
lifecycle to track.

Note that using `status` objects or `node:diagnostics_channel`
listeners _will_ impose a modest performance penalty. Creating
data objects is never free; do not believe anyone who tells you
otherwise. But the cost is as small as possible.

### Platform Compatibility Caveat

Not all platforms support the `node:diagnostics_channel` module.
Currently, this is only available in Node, Bun, and Deno, and
some edge computing platforms that provide a Node compatibility
layer.

To work around this, if you are loading in a non-Node
environment, the package.json exports will direct your module
loader to pull in a version that starts out with a dummy
implementation, then does a conditional dynamic `import` of the
`node:diagnostics_channel` module, and then swaps out those
dummy objects with the real thing if it succeeds. This means that
cache metrics and tracing channels started in the first load-time
tick of your application will _not_ be covered, except in
environments that load using the `require` import condition, or
both the `node` and `esm` import conditions together.

Top-level await _could_ be used to remove this caveat, but that
feature is dead on arrival, unfortunately. See
[#397](https://github.com/isaacs/node-lru-cache/issues/397) and
[#398](https://github.com/isaacs/node-lru-cache/issues/398) for
more details.

## Performance

As of April 2026, version 11 of this library is one of the most
performant LRU cache implementations in JavaScript.

Benchmarks can be extremely difficult to get right. In
particular, the performance of set/get/delete operations on
objects will vary _wildly_ depending on the type of key used. V8
is highly optimized for objects with keys that are short strings,
especially integer numeric strings. Thus, any benchmark which
tests _solely_ using numbers as keys will tend to find that an
object-based approach performs the best.

Note that coercing _anything_ to strings to use as object keys is
unsafe, unless you can be 100% certain that no other type of
value will be used. For example:

```js
const myCache = {}
const set = (k, v) => (myCache[k] = v)
const get = k => myCache[k]

set({}, 'please hang onto this for me')
set('[object Object]', 'oopsie')
```

Also beware of "Just So" stories regarding performance. Garbage
collection of large (especially: deep) object graphs can be
incredibly costly, with several "tipping points" where it
increases exponentially. As a result, putting that off until
later can make it much worse, and less predictable. If a library
performs well, but only in a scenario where the object graph is
kept shallow, then that won't help you if you are using large
objects as keys.

In general, when attempting to use a library to improve
performance (such as a cache like this one), it's best to choose
an option that will perform well in the sorts of scenarios where
you'll actually use it.

This library is optimized for repeated gets and minimizing
eviction time, since that is the expected need of an LRU. Set
operations are somewhat slower on average than a few other
options, in part because of that optimization. It is assumed
that you'll be caching some costly operation, ideally as rarely
as possible, so optimizing set over get would be unwise.

If performance matters to you:

1. If it's at all possible to use small integer values as keys,
   and you can guarantee that no other types of values will be
   used as keys, then do that, and use a cache such as
   [lru-fast](https://npmjs.com/package/lru-fast), or
   [mnemonist's
   LRUCache](https://yomguithereal.github.io/mnemonist/lru-cache),
   which uses an Object as its data store.

2. Failing that, if you can use short non-numeric strings (i.e.,
   less than 256 characters) as your keys, and you do not need
   any of the other features of this library, use [mnemonist's
   LRUCache](https://yomguithereal.github.io/mnemonist/lru-cache).

3. If the types of your keys will be anything else, especially
   long strings, strings that look like floats, objects, or some
   mix of types, or if you aren't sure, then this library will
   work well for you.

   If you do not need the features that this library provides
   (like asynchronous fetching, a variety of TTL staleness
   options, and so on), then [mnemonist's
   LRUMap](https://yomguithereal.github.io/mnemonist/lru-map) is
   also a very good option, and just slightly faster than this
   module (since it does considerably less).

4. Do not use a `dispose` function, size tracking, or especially
   ttl behavior or observability features, unless absolutely
   needed. These features are convenient, and necessary in some
   use cases, and every attempt has been made to make the
   performance impact minimal, but it isn't nothing.

## Testing

When writing tests that involve TTL-related functionality, note
that this module creates an internal reference to the global
`performance` or `Date` objects at import time. If you import it
statically at the top level, those references cannot be mocked or
overridden in your test environment.

To avoid this, dynamically import the package within your tests
so that the references are captured after your mocks are applied.
For example:

```ts
// ❌ Not recommended
import { LRUCache } from 'lru-cache'
// mocking timers, e.g. jest.useFakeTimers()

// ✅ Recommended for TTL tests
// mocking timers, e.g. jest.useFakeTimers()
const { LRUCache } = await import('lru-cache')
```

This ensures that your mocked timers or time sources are
respected when testing TTL behavior.

Additionally, you can pass in a `perf` option when creating your
LRUCache instance. This option accepts any object with a `now`
method that returns a number.

For example, this would be a very bare-bones time-mocking system
you could use in your tests, without any particular test
framework:

```ts
import { LRUCache } from 'lru-cache'

let myClockTime = 0

const cache = new LRUCache<string, string>({
  max: 10,
  ttl: 1000,
  perf: {
    now: () => myClockTime,
  },
})

// run tests, updating myClockTime as needed
```

## Breaking Changes in Version 7

This library changed to a different algorithm and internal data
structure in version 7, yielding significantly better
performance, albeit with some subtle changes as a result.

If you were relying on the internals of LRUCache in version 6 or
before, it probably will not work in version 7 and above.

## Breaking Changes in Version 8

- The `fetchContext` option was renamed to `context`, and may no
  longer be set on the cache instance itself.
- Rewritten in TypeScript, so pretty much all the types moved
  around a lot.
- The AbortController/AbortSignal polyfill was removed. For this
  reason, **Node version 16.14.0 or higher is now required**.
- Internal properties were moved to actual private class
  properties.
- Keys and values must not be `null` or `undefined`.
- Minified export available at `'lru-cache/min'`, for both CJS
  and MJS builds.

## Breaking Changes in Version 9

- Named export only, no default export.
- AbortController polyfill returned, albeit with a warning when
  used.

## Breaking Changes in Version 10

- `cache.fetch()` return type is now `Promise<V | undefined>`
  instead of `Promise<V | void>`. This is an irrelevant change
  practically speaking, but can require changes for TypeScript
  users.

For more info, see the [change log](CHANGELOG.md).