#![warn(clippy::all)]
#![warn(rust_2018_idioms)]
// Temporarily disable this lint as the MSRV (1.51) requires an older lint name:
// #![deny(rustdoc::broken_intra_doc_links)]
#![cfg_attr(docsrs, feature(doc_cfg))]

//! Moka is a fast, concurrent cache library for Rust. Moka is inspired by the
//! [Caffeine][caffeine-git] library for Java.
//!
//! Moka provides in-memory concurrent cache implementations on top of hash maps.
//! They support full concurrency of retrievals and a high expected concurrency for
//! updates. They utilize a lock-free concurrent hash table as the central key-value
//! storage.
//!
//! All cache implementations perform a best-effort bounding of the map using an
//! entry replacement algorithm to determine which entries to evict when the
//! capacity is exceeded.
//!
//! [caffeine-git]: https://github.com/ben-manes/caffeine
//!
//! **NOTE**:
//! If you have any questions about Moka's APIs or internal design, you can ask the
//! AI chatbot at DeepWiki in natural language:
//! <https://deepwiki.com/moka-rs/moka>
//!
//! # Features
//!
//! - Thread-safe, highly concurrent in-memory cache implementations:
//!     - Synchronous caches that can be shared across OS threads.
//!     - An asynchronous (futures aware) cache.
//! - A cache can be bounded by one of the following:
//!     - The maximum number of entries.
//!     - The total weighted size of entries (size-aware eviction).
//! - Maintains a near-optimal hit ratio by using entry replacement algorithms
//!   inspired by Caffeine:
//!     - Admission to the cache is controlled by the Least Frequently Used (LFU)
//!       policy.
//!     - Eviction from the cache is controlled by the Least Recently Used (LRU)
//!       policy.
//!     - [More details and some benchmark results are available here][tiny-lfu].
//! - Supports expiration policies:
//!     - Time to live.
//!     - Time to idle.
//!     - Per-entry variable expiration.
//! - Supports an eviction listener, a callback function that will be called when
//!   an entry is removed from the cache.
//!
//! [tiny-lfu]: https://github.com/moka-rs/moka/wiki#admission-and-eviction-policies
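//!
//! The time-to-live and time-to-idle checks above can be sketched as follows.
//! This is a simplified illustration with a hypothetical `EntryMeta` type, not
//! Moka's actual per-entry bookkeeping:
//!
//! ```rust
//! use std::time::{Duration, Instant};
//!
//! // Hypothetical per-entry metadata, for illustration only.
//! struct EntryMeta {
//!     inserted_at: Instant,
//!     last_accessed: Instant,
//! }
//!
//! impl EntryMeta {
//!     fn is_expired(&self, ttl: Duration, tti: Duration, now: Instant) -> bool {
//!         // Time to live: expired `ttl` after insertion.
//!         // Time to idle: expired `tti` after the last access.
//!         now.duration_since(self.inserted_at) >= ttl
//!             || now.duration_since(self.last_accessed) >= tti
//!     }
//! }
//!
//! let now = Instant::now();
//! let meta = EntryMeta { inserted_at: now, last_accessed: now };
//! assert!(!meta.is_expired(Duration::from_secs(30), Duration::from_secs(10), now));
//! assert!(meta.is_expired(
//!     Duration::from_secs(30),
//!     Duration::from_secs(10),
//!     now + Duration::from_secs(10),
//! ));
//! ```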
//!
//! ## Cache Policies
//!
//! When a cache is full, it has to select and evict existing entries to make
//! room. A cache policy is a strategy for determining which entry to evict.
//!
//! The choice of the cache policy can have a significant impact on the
//! performance of the cache. Because the time for a cache miss is usually much
//! greater than the time for a cache hit, the miss rate (the fraction of lookups
//! that miss) has a significant impact on the performance.
//!
//! Moka provides the following policies:
//!
//! - TinyLFU
//! - LRU
//!
//! ### TinyLFU
//!
//! TinyLFU is the default policy of the cache, and is suitable for most
//! workloads.
//!
//! TinyLFU is a combination of the LRU eviction policy and the LFU admission
//! policy. LRU stands for Least Recently Used, which is very popular in many
//! cache systems. LFU stands for Least Frequently Used.
//!
//! ![The lifecycle of cached entries with TinyLFU][tiny-lfu-image]
//!
//! [tiny-lfu-image]:
//!     https://github.com/moka-rs/moka/wiki/images/benchmarks/moka-tiny-lfu.png
//!
//! With the TinyLFU policy, the cache admits a new entry based on its
//! popularity. If the key of the entry is popular, it will be admitted to the
//! cache. Otherwise, it will be rejected.
//!
//! The popularity of a key is estimated by a historic popularity estimator
//! called the LFU filter. It is a modified Count-Min Sketch, and it can estimate
//! the frequency of keys with a very low memory footprint (thus the name
//! “tiny”). Note that it tracks not only the keys currently in the cache, but
//! all keys that were hit or missed.
//!
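//! The frequency-estimation idea behind the LFU filter can be illustrated with
//! a minimal Count-Min Sketch. This is a simplified sketch, not Moka's actual
//! implementation (which, among other things, ages its counters periodically):
//!
//! ```rust
//! use std::collections::hash_map::DefaultHasher;
//! use std::hash::{Hash, Hasher};
//!
//! // A tiny Count-Min Sketch: `d` hash rows of `w` counters each.
//! // Estimates may over-count (hash collisions) but never under-count.
//! struct CountMin { rows: Vec<Vec<u32>> }
//!
//! impl CountMin {
//!     fn new(d: usize, w: usize) -> Self {
//!         Self { rows: vec![vec![0; w]; d] }
//!     }
//!     fn index<K: Hash>(&self, row: usize, key: &K) -> usize {
//!         let mut h = DefaultHasher::new();
//!         row.hash(&mut h); // use the row number as a per-row seed
//!         key.hash(&mut h);
//!         (h.finish() as usize) % self.rows[row].len()
//!     }
//!     fn record<K: Hash>(&mut self, key: &K) {
//!         for r in 0..self.rows.len() {
//!             let i = self.index(r, key);
//!             self.rows[r][i] += 1;
//!         }
//!     }
//!     fn estimate<K: Hash>(&self, key: &K) -> u32 {
//!         (0..self.rows.len())
//!             .map(|r| self.rows[r][self.index(r, key)])
//!             .min()
//!             .unwrap_or(0)
//!     }
//! }
//!
//! let mut sketch = CountMin::new(4, 64);
//! for _ in 0..5 { sketch.record(&"hot-key"); }
//! sketch.record(&"cold-key");
//! // The estimate is an upper bound on the true frequency.
//! assert!(sketch.estimate(&"hot-key") >= 5);
//! assert!(sketch.estimate(&"cold-key") >= 1);
//! ```
//!
//! An admission decision can then compare the candidate's estimated frequency
//! against that of the entry it would displace.
//!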
//! Once an entry is admitted to the cache, it is evicted based on the LRU
//! policy, which evicts the least recently used entry from the cache.
//!
//! TinyLFU is suitable for most workloads, such as database, search, and
//! analytics workloads.
//!
//! ### LRU
//!
//! LRU stands for Least Recently Used.
//!
//! With the LRU policy, the cache evicts the least recently used entry. It is a
//! simple policy and has been used in many cache systems.
//!
//! LRU is suitable for recency-biased workloads, such as job queues and event
//! streams.
//!
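//! The LRU scheme can be sketched with a map plus a recency queue. This is a
//! toy, single-threaded illustration; Moka's real caches use more elaborate
//! concurrent structures:
//!
//! ```rust
//! use std::collections::{HashMap, VecDeque};
//!
//! // A toy LRU cache: the front of `order` is the least recently used key.
//! struct LruCache {
//!     cap: usize,
//!     map: HashMap<String, String>,
//!     order: VecDeque<String>,
//! }
//!
//! impl LruCache {
//!     fn new(cap: usize) -> Self {
//!         Self { cap, map: HashMap::new(), order: VecDeque::new() }
//!     }
//!     fn touch(&mut self, key: &str) {
//!         // Move the key to the back (the most recently used position).
//!         self.order.retain(|k| k != key);
//!         self.order.push_back(key.to_string());
//!     }
//!     fn get(&mut self, key: &str) -> Option<String> {
//!         let value = self.map.get(key).cloned()?;
//!         self.touch(key);
//!         Some(value)
//!     }
//!     fn insert(&mut self, key: &str, value: &str) {
//!         if self.map.insert(key.to_string(), value.to_string()).is_none()
//!             && self.order.len() == self.cap
//!         {
//!             // At capacity: evict the least recently used entry.
//!             if let Some(lru) = self.order.pop_front() {
//!                 self.map.remove(&lru);
//!             }
//!         }
//!         self.touch(key);
//!     }
//! }
//!
//! let mut cache = LruCache::new(2);
//! cache.insert("a", "1");
//! cache.insert("b", "2");
//! let _ = cache.get("a");  // "a" is now the most recently used
//! cache.insert("c", "3");  // evicts "b", the least recently used
//! assert!(cache.get("b").is_none());
//! assert!(cache.get("a").is_some());
//! ```
//!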
//! # Examples
//!
//! See the following documents:
//!
//! - Thread-safe, synchronous caches:
//!     - [`sync::Cache`][sync-cache-struct]
//!     - [`sync::SegmentedCache`][sync-seg-cache-struct]
//! - An asynchronous (futures aware) cache:
//!     - [`future::Cache`][future-cache-struct] (requires the "future" feature)
//!
//! [future-cache-struct]: ./future/struct.Cache.html
//! [sync-cache-struct]: ./sync/struct.Cache.html
//! [sync-seg-cache-struct]: ./sync/struct.SegmentedCache.html
//!
//! **NOTE:** The following caches have been moved to a separate crate called
//! "[mini-moka][mini-moka-crate]".
//!
//! - A non-concurrent cache for single-threaded applications:
//!     - `moka::unsync::Cache` → [`mini_moka::unsync::Cache`][unsync-cache-struct]
//! - A simple, thread-safe, synchronous cache:
//!     - `moka::dash::Cache` → [`mini_moka::sync::Cache`][dash-cache-struct]
//!
//! [mini-moka-crate]: https://crates.io/crates/mini-moka
//! [unsync-cache-struct]:
//!     https://docs.rs/mini-moka/latest/mini_moka/unsync/struct.Cache.html
//! [dash-cache-struct]:
//!     https://docs.rs/mini-moka/latest/mini_moka/sync/struct.Cache.html
//!
//! # Minimum Supported Rust Versions
//!
//! This crate's minimum supported Rust versions (MSRV) are the following:
//!
//! | Feature  | MSRV                         |
//! |:---------|:----------------------------:|
//! | `future` | Rust 1.71.1 (August 3, 2023) |
//! | `sync`   | Rust 1.71.1 (August 3, 2023) |
//!
//! This crate keeps a rolling MSRV policy of at least six months. If the default
//! features with a mandatory feature (`future` or `sync`) are enabled, the MSRV
//! will be updated conservatively. When using other features, the MSRV might be
//! updated more frequently, up to the latest stable.
//!
//! In both cases, increasing the MSRV is _not_ considered a semver-breaking
//! change.
//!
//! # Implementation Details
//!
//! ## Concurrency
//!
//! The entry replacement algorithms are kept eventually consistent with the
//! concurrent hash table. While updates to the cache are immediately applied to
//! the hash table, recordings of reads and writes may not be immediately
//! reflected in the cache policy's data structures.
//!
//! These cache policy structures are guarded by a lock, and operations are
//! applied in batches to avoid lock contention.
//!
//! Recap:
//!
//! - The concurrent hash table in the cache is _strongly consistent_:
//!     - It is a lock-free data structure and immediately applies updates.
//!     - It is guaranteed that an inserted entry will become immediately visible
//!       to all threads.
//! - The cache policy's data structures are _eventually consistent_:
//!     - They are guarded by a lock, and operations are applied in batches.
//!     - An example of eventual consistency: the `entry_count` method may return
//!       an outdated value.
//!
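//! The two consistency levels can be modeled with a toy two-layer cache. This
//! is purely illustrative; the `ToyCache` type and its fields are hypothetical,
//! not Moka's internals:
//!
//! ```rust
//! use std::collections::HashMap;
//!
//! // The map is updated immediately, while the policy-side entry counter is
//! // only updated when buffered recordings are drained in a batch.
//! struct ToyCache {
//!     map: HashMap<String, String>, // "strongly consistent" storage
//!     write_buffer: Vec<String>,    // pending recordings, not yet applied
//!     policy_entry_count: u64,      // "eventually consistent" bookkeeping
//! }
//!
//! impl ToyCache {
//!     fn insert(&mut self, key: &str, value: &str) {
//!         self.map.insert(key.to_string(), value.to_string()); // visible at once
//!         self.write_buffer.push(key.to_string());             // applied later
//!     }
//!     fn run_pending_tasks(&mut self) {
//!         // Drain the buffered recordings in one batch.
//!         self.policy_entry_count += self.write_buffer.drain(..).len() as u64;
//!     }
//!     fn entry_count(&self) -> u64 {
//!         self.policy_entry_count // may lag behind `map.len()`
//!     }
//! }
//!
//! let mut cache = ToyCache {
//!     map: HashMap::new(),
//!     write_buffer: Vec::new(),
//!     policy_entry_count: 0,
//! };
//! cache.insert("a", "1");
//! assert_eq!(cache.entry_count(), 0); // not yet drained: outdated value
//! cache.run_pending_tasks();
//! assert_eq!(cache.entry_count(), 1); // now caught up
//! ```
//!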
//! ### Bounded Channels
//!
//! In order to hold the recordings of reads and writes until they are applied
//! to the cache policy's data structures, the cache uses two bounded channels,
//! one for reads and the other for writes. Bounded means that a channel has a
//! maximum number of elements it can hold.
//!
//! These channels are drained when one of the following conditions is met:
//!
//! - The number of read or write recordings reaches the configured amount.
//!     - It is currently hard-coded to 64.
//! - A certain amount of time has passed since the last draining.
//!     - It is currently hard-coded to 300 milliseconds.
//!
//! The cache does not have a dedicated thread for draining. Instead, draining
//! is done by a user thread. When user code calls certain cache methods, such
//! as `get`, `get_with`, `insert`, and `run_pending_tasks`, the cache checks
//! whether one of the above conditions is met, and if so, it starts draining as
//! a part of the method call and applies the recordings to the cache policy's
//! data structures. See [the Maintenance Tasks section](#maintenance-tasks) for
//! more details on applying the recordings.
//!
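//! The drain trigger can be sketched as a simple predicate. The 64-recording
//! and 300-millisecond thresholds come from the text above; the function itself
//! is illustrative, not Moka's actual code:
//!
//! ```rust
//! use std::time::{Duration, Instant};
//!
//! // Illustrative thresholds, as documented above.
//! const BATCH_SIZE: usize = 64;
//! const PERIOD: Duration = Duration::from_millis(300);
//!
//! // Returns true when the buffered recordings should be drained: either the
//! // channel holds enough recordings, or enough time has passed.
//! fn should_drain(pending: usize, last_drain: Instant, now: Instant) -> bool {
//!     pending >= BATCH_SIZE || now.duration_since(last_drain) >= PERIOD
//! }
//!
//! let start = Instant::now();
//! assert!(!should_drain(10, start, start + Duration::from_millis(100)));
//! assert!(should_drain(64, start, start + Duration::from_millis(100)));
//! assert!(should_drain(10, start, start + Duration::from_millis(300)));
//! ```
//!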
//! ### When a Bounded Channel is Full
//!
//! Under heavy concurrent operations from clients, draining may not be able to
//! catch up and the bounded channels can become full. In this case, the cache
//! does one of the following:
//!
//! - For the read channel, recordings of new reads are discarded, so that
//!   retrievals will never be blocked. This behavior may have some impact on
//!   the hit rate of the cache.
//! - For the write channel, updates from clients to the cache are blocked until
//!   the draining task catches up.
//!
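//! The two behaviors can be illustrated with the standard library's bounded
//! channel. The channel names and the tiny capacity are illustrative, not
//! Moka's actual channel implementation:
//!
//! ```rust
//! use std::sync::mpsc::{sync_channel, TrySendError};
//!
//! // A bounded channel with room for only 2 recordings, for illustration.
//! let (read_tx, _read_rx) = sync_channel::<&str>(2);
//! read_tx.try_send("read a").unwrap();
//! read_tx.try_send("read b").unwrap();
//!
//! // The channel is now full. For the read channel, a new recording is
//! // simply dropped so that the caller's `get` never blocks.
//! match read_tx.try_send("read c") {
//!     Err(TrySendError::Full(_)) => { /* discard the recording */ }
//!     other => other.unwrap(),
//! }
//!
//! // For the write channel, the cache would instead use a blocking `send`,
//! // making the caller wait until draining frees up capacity.
//! ```
//!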
//! ## Maintenance Tasks
//!
//! When draining the read and write recordings from the channels, the cache
//! performs the following maintenance tasks:
//!
//! 1. Determine whether to admit an entry to the cache or not, based on its
//!    popularity.
//!    - If not, the entry is removed from the internal concurrent hash table.
//! 2. Apply the recordings of cache reads and writes to the internal data
//!    structures for the cache policies, such as the LFU filter, LRU queues,
//!    and hierarchical timer wheels.
//!    - The hierarchical timer wheels are used for the per-entry expiration
//!      policy.
//! 3. When the cache's max capacity is exceeded, remove the least recently used
//!    (LRU) entries.
//! 4. Remove expired entries.
//! 5. Find and remove the entries that have been invalidated by the
//!    `invalidate_all` or `invalidate_entries_if` methods.
//! 6. Deliver removal notifications to the eviction listener. (Call the
//!    eviction listener closure with the information about the evicted entry.)
//!
//! The following cache method calls may trigger the maintenance tasks:
//!
//! - All cache write methods: `insert`, `get_with`, `invalidate`, etc., except
//!   for `invalidate_all` and `invalidate_entries_if`.
//! - Some of the cache read methods: `get`.
//! - The `run_pending_tasks` method, which executes the pending maintenance
//!   tasks explicitly.
//!
//! Except for the `run_pending_tasks` method, the maintenance tasks are
//! executed lazily when one of the conditions in the
//! [Bounded Channels](#bounded-channels) section is met.

#[cfg(not(any(feature = "sync", feature = "future")))]
compile_error!(
    "At least one of the crate features `sync` or `future` must be enabled for \
    `moka` crate. Please update your dependencies in Cargo.toml"
);

// Reexport(s)
pub use equivalent::Equivalent;

#[cfg(feature = "future")]
#[cfg_attr(docsrs, doc(cfg(feature = "future")))]
pub mod future;

#[cfg(feature = "sync")]
#[cfg_attr(docsrs, doc(cfg(feature = "sync")))]
pub mod sync;

#[cfg(any(feature = "sync", feature = "future"))]
#[cfg_attr(docsrs, doc(cfg(any(feature = "sync", feature = "future"))))]
pub mod notification;

#[cfg(any(feature = "sync", feature = "future"))]
pub(crate) mod cht;

#[cfg(any(feature = "sync", feature = "future"))]
pub(crate) mod common;

#[cfg(any(feature = "sync", feature = "future"))]
#[cfg_attr(docsrs, doc(cfg(any(feature = "sync", feature = "future"))))]
pub mod ops;

#[cfg(any(feature = "sync", feature = "future"))]
pub mod policy;

#[cfg(any(feature = "sync", feature = "future"))]
#[cfg_attr(docsrs, doc(cfg(any(feature = "sync", feature = "future"))))]
pub use common::error::PredicateError;

#[cfg(any(feature = "sync", feature = "future"))]
#[cfg_attr(docsrs, doc(cfg(any(feature = "sync", feature = "future"))))]
pub use common::entry::Entry;

#[cfg(any(feature = "sync", feature = "future"))]
#[cfg_attr(docsrs, doc(cfg(any(feature = "sync", feature = "future"))))]
pub use policy::{Expiry, Policy};

#[cfg(feature = "unstable-debug-counters")]
#[cfg_attr(docsrs, doc(cfg(feature = "unstable-debug-counters")))]
pub use common::concurrent::debug_counters::GlobalDebugCounters;

#[cfg(test)]
mod tests {
    #[cfg(trybuild)]
    #[test]
    fn trybuild_default() {
        let t = trybuild::TestCases::new();
        t.compile_fail("tests/compile_tests/default/clone/*.rs");
    }

    #[cfg(all(trybuild, feature = "future"))]
    #[test]
    fn trybuild_future() {
        let t = trybuild::TestCases::new();
        t.compile_fail("tests/compile_tests/future/clone/*.rs");
    }
}

#[cfg(all(doctest, feature = "sync"))]
mod doctests {
    // https://doc.rust-lang.org/rustdoc/write-documentation/documentation-tests.html#include-items-only-when-collecting-doctests
    #[doc = include_str!("../README.md")]
    struct ReadMeDoctests;
}