Alex Peck edited this page May 23, 2022 · 28 revisions

BitFaster.Caching is a high performance in-memory caching library for .NET.

Why BitFaster.Caching?

The most popular caching library for .NET is arguably MemoryCache (from Microsoft.Extensions.Caching.Memory), but it's a heavyweight option with limitations (see below). BitFaster.Caching provides bounded-size caches with a focus on performance to address the limitations of MemoryCache.

In particular, MemoryCache is a bad fit when the full set of possible cached values would consume excessive memory. In the worst case, a burst of requests can cause everything to be cached; this either wastes memory (expensive), causes thrashing (degraded performance), or, worse, runs out of memory (failure). By explicitly choosing how many items to cache, the developer stays in control of the cache budget and runaway memory usage is prevented.

Since BitFaster.Caching is generic, non-string keys can be used without being forced to allocate a new key string for each lookup. A cache provides a speedup when a lookup is faster than computing, querying, or fetching the value. With faster lookups that don't allocate, caching can achieve speedups in lower-level code (e.g. fast computations such as parsing or formatting JSON for small objects), in addition to RPC calls or disk reads. This enables caching to be plugged into several layers of a program without exploding memory consumption.
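As a minimal sketch of the two points above (a fixed capacity and a non-string key), here is how the library's ConcurrentLru class can be used; the capacity, key type, and value factory are illustrative choices, not recommendations:

```csharp
using System;
using BitFaster.Caching.Lru;

class Example
{
    static void Main()
    {
        // Capacity is chosen up front: the cache will never hold more than
        // 128 items, so the memory budget is explicit and bounded.
        var lru = new ConcurrentLru<int, string>(128);

        // The key is a plain int; no key string is allocated per lookup.
        // The value factory runs only on a cache miss.
        string value = lru.GetOrAdd(42, k => k.ToString());

        Console.WriteLine(value);
    }
}
```

On a second call to GetOrAdd with the same key, the factory delegate is not invoked and the cached value is returned directly.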

Why not use MemoryCache?

MemoryCache is perfectly serviceable, but it has some limitations:

  • Lookups require heap allocations when the key is not natively of type string.
  • It is not scan resistant: fetching all keys will load everything into memory.
  • It does not scale well with concurrent writes.
  • It contains performance counters that cannot be disabled.
  • It uses a heuristic to estimate memory used, and evicts items using a timer. The 'trim' process may remove useful items, and if the timer does not fire fast enough the resulting memory pressure can be problematic (e.g. thrashing, out of memory, increased GC pressure).

Contents

  1. ConcurrentLru
  2. ConcurrentTLru
  3. What are the 'fast' LRU classes?
  4. Meta-programming