moka/lib.rs
1#![warn(clippy::all)]
2#![warn(rust_2018_idioms)]
3// Temporary disable this lint as the MSRV (1.51) require an older lint name:
4// #![deny(rustdoc::broken_intra_doc_links)]
5#![cfg_attr(docsrs, feature(doc_cfg))]
6
7//! Moka is a fast, concurrent cache library for Rust. Moka is inspired by the
8//! [Caffeine][caffeine-git] library for Java.
9//!
10//! Moka provides in-memory concurrent cache implementations on top of hash maps.
11//! They support full concurrency of retrievals and a high expected concurrency for
12//! updates. They utilize a lock-free concurrent hash table as the central key-value
13//! storage.
14//!
15//! All cache implementations perform a best-effort bounding of the map using an
16//! entry replacement algorithm to determine which entries to evict when the capacity
17//! is exceeded.
18//!
19//! [caffeine-git]: https://github.com/ben-manes/caffeine
20//!
21//! # Features
22//!
23//! - Thread-safe, highly concurrent in-memory cache implementations:
24//! - Synchronous caches that can be shared across OS threads.
25//! - An asynchronous (futures aware) cache.
26//! - A cache can be bounded by one of the followings:
27//! - The maximum number of entries.
28//! - The total weighted size of entries. (Size aware eviction)
29//! - Maintains near optimal hit ratio by using an entry replacement algorithms
30//! inspired by Caffeine:
31//! - Admission to a cache is controlled by the Least Frequently Used (LFU)
32//! policy.
33//! - Eviction from a cache is controlled by the Least Recently Used (LRU)
34//! policy.
35//! - [More details and some benchmark results are available here][tiny-lfu].
36//! - Supports expiration policies:
37//! - Time to live.
38//! - Time to idle.
39//! - Per-entry variable expiration.
40//! - Supports eviction listener, a callback function that will be called when an
41//! entry is removed from the cache.
42//!
43//! [tiny-lfu]: https://github.com/moka-rs/moka/wiki#admission-and-eviction-policies
44//!
45//! ## Cache Policies
46//!
47//! When a cache is full, it has to select and evict existing entries to make some
48//! room. A cache policy is a strategy to determine which entry to evict.
49//!
50//! The choice of the cache policy may have a significant impact on the performance
51//! of the cache. Because the time for cache misses is usually much greater than the
52//! time for cache hits, the miss rate (number of misses per second) has a
53//! significant impact on the performance.
54//!
55//! Moka provides the following policies:
56//!
57//! - TinyLFU
58//! - LRU
59//!
60//! ### TinyLFU
61//!
62//! TinyLFU is the default policy of the cache, and will be suitable for most
63//! workloads.
64//!
65//! TinyLFU is a combination of the LRU eviction policy and the LFU admission policy.
66//! LRU stands for Least Recently Used, which is very popular in many cache systems.
67//! LFU stands for Least Frequently Used.
68//!
69//! ![The lifecycle of cached entries with TinyLFU][tiny-lfu-image]
70//!
71//! [tiny-lfu-image]:
72//! https://github.com/moka-rs/moka/wiki/images/benchmarks/moka-tiny-lfu.png
73//!
74//! With TinyLFU policy, the cache will admit a new entry based on its popularity. If
75//! the key of the entry is popular, it will be admitted to the cache. Otherwise, it
76//! will be rejected.
77//!
78//! The popularity of the key is estimated by the historic popularity estimator
79//! called LFU filter. It is a modified Count-Min Sketch, and it can estimate the
80//! frequency of keys with a very low memory footprint (thus the name “tiny”). Note
81//! that it tracks not only the keys currently in the cache, but all hit and missed
82//! keys.
83//!
84//! Once the entry is admitted to the cache, it will be evicted based on the LRU
85//! policy. It evicts the least recently used entry from the cache.
86//!
87//! TinyLFU will be suitable for most workloads, such as database, search, and
88//! analytics.
89//!
90//! ### LRU
91//!
92//! LRU stands for Least Recently Used.
93//!
94//! With LRU policy, the cache will evict the least recently used entry. It is a
95//! simple policy and has been used in many cache systems.
96//!
97//! LRU will be suitable for recency-biased workloads, such as job queues and event
98//! streams.
99//!
100//! # Examples
101//!
102//! See the following document:
103//!
104//! - Thread-safe, synchronous caches:
105//! - [`sync::Cache`][sync-cache-struct]
106//! - [`sync::SegmentedCache`][sync-seg-cache-struct]
107//! - An asynchronous (futures aware) cache:
108//! - [`future::Cache`][future-cache-struct] (Requires "future" feature)
109//!
110//! [future-cache-struct]: ./future/struct.Cache.html
111//! [sync-cache-struct]: ./sync/struct.Cache.html
112//! [sync-seg-cache-struct]: ./sync/struct.SegmentedCache.html
113//!
114//! **NOTE:** The following caches have been moved to a separate crate called
115//! "[mini-moka][mini-moka-crate]".
116//!
117//! - Non concurrent cache for single threaded applications:
118//! - `moka::unsync::Cache` → [`mini_moka::unsync::Cache`][unsync-cache-struct]
119//! - A simple, thread-safe, synchronous cache:
120//! - `moka::dash::Cache` → [`mini_moka::sync::Cache`][dash-cache-struct]
121//!
122//! [mini-moka-crate]: https://crates.io/crates/mini-moka
123//! [unsync-cache-struct]:
124//! https://docs.rs/mini-moka/latest/mini_moka/unsync/struct.Cache.html
125//! [dash-cache-struct]:
126//! https://docs.rs/mini-moka/latest/mini_moka/sync/struct.Cache.html
127//!
128//! # Minimum Supported Rust Versions
129//!
130//! This crate's minimum supported Rust versions (MSRV) are the followings:
131//!
132//! | Feature | MSRV |
133//! |:---------|:--------------------------:|
134//! | `future` | Rust 1.70.0 (June 3, 2022) |
135//! | `sync` | Rust 1.70.0 (June 3, 2022) |
136//!
137//! It will keep a rolling MSRV policy of at least 6 months. If the default features
138//! with a mandatory features (`future` or `sync`) are enabled, MSRV will be updated
139//! conservatively. When using other features, MSRV might be updated more frequently,
140//! up to the latest stable.
141//!
142//! In both cases, increasing MSRV is _not_ considered a semver-breaking change.
143//!
144//! # Implementation Details
145//!
146//! ## Concurrency
147//!
148//! The entry replacement algorithms are kept eventually consistent with the
149//! concurrent hash table. While updates to the cache are immediately applied to the
150//! hash table, recording of reads and writes may not be immediately reflected on the
151//! cache policy's data structures.
152//!
153//! These cache policy structures are guarded by a lock and operations are applied in
154//! batches to avoid lock contention.
155//!
156//! Recap:
157//!
158//! - The concurrent hash table in the cache is _strong consistent_:
159//! - It is a lock-free data structure and immediately applies updates.
160//! - It is guaranteed that the inserted entry will become visible immediately to
161//! all threads.
162//! - The cache policy's data structures are _eventually consistent_:
163//! - They are guarded by a lock and operations are applied in batches.
164//! - An example of eventual consistency: the `entry_count` method may return an
165//! outdated value.
166//!
167//! ### Bounded Channels
168//!
169//! In order to hold the recordings of reads and writes until they are applied to the
170//! cache policy's data structures, the cache uses two bounded channels, one for
171//! reads and the other for writes. Bounded means that a channel have a maximum
172//! number of elements that can be stored.
173//!
174//! These channels are drained when one of the following conditions is met:
175//!
176//! - The numbers of read or write recordings reach to the configured amounts.
177//! - It is currently hard-coded to 64.
178//! - Or, the certain time past from the last draining.
179//! - It is currently hard-coded to 300 milliseconds.
180//!
181//! Cache does not have a dedicated thread for draining. Instead, it is done by a
182//! user thread. When user code calls certain cache methods, such as `get`,
183//! `get_with`, `insert`, and `run_pending_tasks`, the cache checks if the above
184//! condition is met, and if so, it will start draining as a part of the method call
185//! and apply the recordings to the cache policy's data structures. See [the
186//! Maintenance Tasks section](#maintenance-tasks) for more details of applying the
187//! recordings.
188//!
189//! ### When a Bounded Channels is Full
190//!
191//! Under heavy concurrent operations from clients, draining may not be able to catch
192//! up and the bounded channels can become full. In this case, the cache will do one
193//! of the followings:
194//!
195//! - For the read channel, recordings of new reads will be discarded, so that
196//! retrievals will never be blocked. This behavior may have some impact to the hit
197//! rate of the cache.
198//! - For the write channel, updates from clients to the cache will be blocked until
199//! the draining task catches up.
200//!
201//! ## Maintenance Tasks
202//!
203//! When draining the read and write recordings from the channels, the cache will do
204//! the following maintenance tasks:
205//!
206//! 1. Determine whether to admit an entry to the cache or not, based on its
207//! popularity.
208//! - If not, the entry is removed from the internal concurrent hash table.
209//! 2. Apply the recording of cache reads and writes to the internal data structures
210//! for the cache policies, such as the LFU filter, LRU queues, and hierarchical
211//! timer wheels.
212//! - The hierarchical timer wheels are used for the per-entry expiration policy.
213//! 3. When cache's max capacity is exceeded, remove least recently used (LRU)
214//! entries.
215//! 4. Remove expired entries.
216//! 5. Find and remove the entries that have been invalidated by the `invalidate_all`
217//! or `invalidate_entries_if` methods.
218//! 6. Deliver removal notifications to the eviction listener. (Call the eviction
219//! listener closure with the information about the evicted entry)
220//!
221//! The following cache method calls may trigger the maintenance tasks:
222//!
223//! - All cache write methods: `insert`, `get_with`, `invalidate`, etc., except for
224//! `invalidate_all` and `invalidate_entries_if`.
225//! - Some of the cache read methods: `get`
226//! - `run_pending_tasks` method, which executes the pending maintenance tasks
227//! explicitly.
228//!
229//! Except `run_pending_tasks` method, the maintenance tasks are executed lazily
230//! when one of the conditions in the [Bounded Channels](#bounded-channels) section
231//! is met.
232
233#[cfg(not(any(feature = "sync", feature = "future")))]
234compile_error!(
235 "At least one of the crate features `sync` or `future` must be enabled for \
236 `moka` crate. Please update your dependencies in Cargo.toml"
237);
238
239#[cfg(feature = "future")]
240#[cfg_attr(docsrs, doc(cfg(feature = "future")))]
241pub mod future;
242
243#[cfg(feature = "sync")]
244#[cfg_attr(docsrs, doc(cfg(feature = "sync")))]
245pub mod sync;
246
247#[cfg(any(feature = "sync", feature = "future"))]
248#[cfg_attr(docsrs, doc(cfg(any(feature = "sync", feature = "future"))))]
249pub mod notification;
250
251#[cfg(any(feature = "sync", feature = "future"))]
252pub(crate) mod cht;
253
254#[cfg(any(feature = "sync", feature = "future"))]
255pub(crate) mod common;
256
257#[cfg(any(feature = "sync", feature = "future"))]
258#[cfg_attr(docsrs, doc(cfg(any(feature = "sync", feature = "future"))))]
259pub mod ops;
260
261#[cfg(any(feature = "sync", feature = "future"))]
262pub mod policy;
263
264#[cfg(any(feature = "sync", feature = "future"))]
265pub(crate) mod sync_base;
266
267#[cfg(any(feature = "sync", feature = "future"))]
268#[cfg_attr(docsrs, doc(cfg(any(feature = "sync", feature = "future"))))]
269pub use common::error::PredicateError;
270
271#[cfg(any(feature = "sync", feature = "future"))]
272#[cfg_attr(docsrs, doc(cfg(any(feature = "sync", feature = "future"))))]
273pub use common::entry::Entry;
274
275#[cfg(any(feature = "sync", feature = "future"))]
276#[cfg_attr(docsrs, doc(cfg(any(feature = "sync", feature = "future"))))]
277pub use policy::{Expiry, Policy};
278
279#[cfg(feature = "unstable-debug-counters")]
280#[cfg_attr(docsrs, doc(cfg(feature = "unstable-debug-counters")))]
281pub use common::concurrent::debug_counters::GlobalDebugCounters;
282
283#[cfg(test)]
284mod tests {
285 #[cfg(trybuild)]
286 #[test]
287 fn trybuild_default() {
288 let t = trybuild::TestCases::new();
289 t.compile_fail("tests/compile_tests/default/clone/*.rs");
290 }
291
292 #[cfg(all(trybuild, feature = "future"))]
293 #[test]
294 fn trybuild_future() {
295 let t = trybuild::TestCases::new();
296 t.compile_fail("tests/compile_tests/future/clone/*.rs");
297 }
298}
299
300#[cfg(all(doctest, feature = "sync"))]
301mod doctests {
302 // https://doc.rust-lang.org/rustdoc/write-documentation/documentation-tests.html#include-items-only-when-collecting-doctests
303 #[doc = include_str!("../README.md")]
304 struct ReadMeDoctests;
305}