pub struct RefKeyEntrySelector<'a, K, Q, V, S>
where
    Q: ?Sized,
{ /* private fields */ }
Provides advanced methods to select or insert an entry of the cache.

Many methods here return an Entry, a snapshot of a single key-value pair in the cache, carrying additional information like is_fresh.

RefKeyEntrySelector is constructed from the entry_by_ref method on the cache.
Implementations§
impl<'a, K, Q, V, S> RefKeyEntrySelector<'a, K, Q, V, S>
pub fn and_compute_with<F>(self, f: F) -> CompResult<K, V>
Performs a compute operation on a cached entry by using the given closure f. A compute operation is either put, remove, or no-operation (nop).

The closure f should take the current entry of Option<Entry<K, V>> for the key, and return an ops::compute::Op<V> enum.
This method works as follows:

1. Apply the closure f to the current cached Entry, and get an ops::compute::Op<V>.
2. Execute the op on the cache:
   - Op::Put(V): Put the new value V to the cache.
   - Op::Remove: Remove the current cached entry.
   - Op::Nop: Do nothing.
3. Return an ops::compute::CompResult<K, V> as follows:
Op<V> | Entry<K, V> already exists? | CompResult<K, V> | Notes |
---|---|---|---|
Put(V) | no | Inserted(Entry<K, V>) | The new entry is returned. |
Put(V) | yes | ReplacedWith(Entry<K, V>) | The new entry is returned. |
Remove | no | StillNone(Arc<K>) | |
Remove | yes | Removed(Entry<K, V>) | The removed entry is returned. |
Nop | no | StillNone(Arc<K>) | |
Nop | yes | Unchanged(Entry<K, V>) | The existing entry is returned. |
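The §Example below exercises the Put and Remove rows. As a complementary, illustrative sketch (the key and value here are arbitrary, not from the crate's own examples), the two Nop rows look like this:

use moka::{
    sync::Cache,
    ops::compute::{CompResult, Op},
};

let cache: Cache<String, u64> = Cache::new(100);
let key = "key1".to_string();

// No entry exists yet and the closure does nothing, so `StillNone` is
// returned with the key.
let result = cache.entry_by_ref(&key).and_compute_with(|_| Op::Nop);
assert!(matches!(result, CompResult::StillNone(_)));

cache.insert(key.clone(), 7);

// The entry exists and the closure does nothing, so `Unchanged` is
// returned with the existing entry.
let result = cache.entry_by_ref(&key).and_compute_with(|_| Op::Nop);
let CompResult::Unchanged(entry) = result else {
    panic!("`Unchanged` should be returned: {result:?}");
};
assert_eq!(entry.into_value(), 7);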
§See Also
- If you want the closure to return Result<Op<V>, E> instead of Op<V>, and modify the entry only when it returns Ok, use the and_try_compute_with method.
- If you only want to update or insert, use the and_upsert_with method.
§Example
use moka::{
sync::Cache,
ops::compute::{CompResult, Op},
};
let cache: Cache<String, u64> = Cache::new(100);
let key = "key1".to_string();
/// Increment a cached `u64` counter. If the counter is greater than or
/// equal to 2, remove it.
fn increment_or_remove_counter(
cache: &Cache<String, u64>,
key: &str,
) -> CompResult<String, u64> {
cache
.entry_by_ref(key)
.and_compute_with(|maybe_entry| {
if let Some(entry) = maybe_entry {
let counter = entry.into_value();
if counter < 2 {
Op::Put(counter.saturating_add(1)) // Update
} else {
Op::Remove
}
} else {
Op::Put(1) // Insert
}
})
}
// This should insert a new counter value 1 to the cache, and return the
// value with the kind of the operation performed.
let result = increment_or_remove_counter(&cache, &key);
let CompResult::Inserted(entry) = result else {
panic!("`Inserted` should be returned: {result:?}");
};
assert_eq!(entry.into_value(), 1);
// This should increment the cached counter value by 1.
let result = increment_or_remove_counter(&cache, &key);
let CompResult::ReplacedWith(entry) = result else {
panic!("`ReplacedWith` should be returned: {result:?}");
};
assert_eq!(entry.into_value(), 2);
// This should remove the cached counter from the cache, and return the
// _removed_ value.
let result = increment_or_remove_counter(&cache, &key);
let CompResult::Removed(entry) = result else {
panic!("`Removed` should be returned: {result:?}");
};
assert_eq!(entry.into_value(), 2);
// The key should no longer exist.
assert!(!cache.contains_key(&key));
// This should start over; insert a new counter value 1 to the cache.
let result = increment_or_remove_counter(&cache, &key);
let CompResult::Inserted(entry) = result else {
panic!("`Inserted` should be returned: {result:?}");
};
assert_eq!(entry.into_value(), 1);
§Concurrent calls on the same key
This method guarantees that concurrent calls on the same key are executed
serially. That is, and_compute_with
calls on the same key never run
concurrently. The calls are serialized by the order of their invocation. It
uses a key-level lock to achieve this.
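As an illustration of this guarantee, the following sketch (the thread count and key are arbitrary) performs the same read-modify-write from several threads. Because calls on the same key are serialized, no increment is lost:

use moka::{sync::Cache, ops::compute::Op};
use std::thread;

let cache: Cache<String, u64> = Cache::new(100);
let key = "key1".to_string();

// Eight threads each increment the same counter once.
thread::scope(|s| {
    for _ in 0..8 {
        s.spawn(|| {
            let _ = cache.entry_by_ref(&key).and_compute_with(|maybe_entry| {
                let current = maybe_entry.map(|e| e.into_value()).unwrap_or(0);
                Op::Put(current + 1)
            });
        });
    }
});

// Every read-modify-write was applied; none overwrote another.
assert_eq!(cache.get(&key), Some(8));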
pub fn and_try_compute_with<F, E>(self, f: F) -> Result<CompResult<K, V>, E>
Performs a compute operation on a cached entry by using the given closure f. A compute operation is either put, remove, or no-operation (nop).

The closure f should take the current entry of Option<Entry<K, V>> for the key, and return a Result<ops::compute::Op<V>, E>.
This method works as follows:

1. Apply the closure f to the current cached Entry, and get a Result<ops::compute::Op<V>, E>.
2. If the closure returned Err(E), return it.
3. Otherwise, execute the op on the cache:
   - Ok(Op::Put(V)): Put the new value V to the cache.
   - Ok(Op::Remove): Remove the current cached entry.
   - Ok(Op::Nop): Do nothing.
4. Return an Ok(ops::compute::CompResult<K, V>) as follows:
Op<V> | Entry<K, V> already exists? | CompResult<K, V> | Notes |
---|---|---|---|
Put(V) | no | Inserted(Entry<K, V>) | The new entry is returned. |
Put(V) | yes | ReplacedWith(Entry<K, V>) | The new entry is returned. |
Remove | no | StillNone(Arc<K>) | |
Remove | yes | Removed(Entry<K, V>) | The removed entry is returned. |
Nop | no | StillNone(Arc<K>) | |
Nop | yes | Unchanged(Entry<K, V>) | The existing entry is returned. |
§Similar Methods
- If you want the closure to return Op<V> instead of Result<Op<V>, E>, use the and_compute_with method.
- If you only want to update or insert, use the and_upsert_with method.
§Example
See try_append_value_async.rs in the examples directory.
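For a quick, self-contained orientation, the sketch below applies the same idea to a per-key budget counter (the limit of 100 and the String error type are illustrative, not part of the linked example):

use moka::{
    sync::Cache,
    ops::compute::{CompResult, Op},
};

let cache: Cache<String, u64> = Cache::new(100);
let key = "key1".to_string();

// Try to charge 30 units against a per-key budget of 100. The entry is
// modified only when the closure returns `Ok(..)`.
let result: Result<CompResult<String, u64>, String> = cache
    .entry_by_ref(&key)
    .and_try_compute_with(|maybe_entry| {
        let spent = maybe_entry.map(|e| e.into_value()).unwrap_or(0);
        let new_total = spent + 30;
        if new_total > 100 {
            Err("budget exceeded".to_string())
        } else {
            Ok(Op::Put(new_total))
        }
    });

// The first charge succeeds and inserts a new entry holding 30.
let CompResult::Inserted(entry) = result.expect("charge should succeed") else {
    panic!("`Inserted` should be returned");
};
assert_eq!(entry.into_value(), 30);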
§Concurrent calls on the same key
This method guarantees that concurrent calls on the same key are executed
serially. That is, and_try_compute_with
calls on the same key never run
concurrently. The calls are serialized by the order of their invocation. It
uses a key-level lock to achieve this.
pub fn and_upsert_with<F>(self, f: F) -> Entry<K, V>
Performs an upsert of an Entry by using the given closure f. The word “upsert” here means “update” or “insert”.

The closure f should take the current entry of Option<Entry<K, V>> for the key, and return a new value V.
This method works as follows:

1. Apply the closure f to the current cached Entry, and get a new value V.
2. Upsert the new value to the cache.
3. Return the Entry having the upserted value.
§Similar Methods
- If you want to upsert optionally, that is, to upsert only when certain conditions are met, use the and_compute_with method.
- If you want to try to upsert, that is, to have the closure return a Result and upsert only when it returns Ok, use the and_try_compute_with method.
§Example
use moka::sync::Cache;
let cache: Cache<String, u64> = Cache::new(100);
let key = "key1".to_string();
let entry = cache
.entry_by_ref(&key)
.and_upsert_with(|maybe_entry| {
if let Some(entry) = maybe_entry {
entry.into_value().saturating_add(1) // Update
} else {
1 // Insert
}
});
// It was not an update.
assert!(!entry.is_old_value_replaced());
assert_eq!(entry.key(), &key);
assert_eq!(entry.into_value(), 1);
let entry = cache
.entry_by_ref(&key)
.and_upsert_with(|maybe_entry| {
if let Some(entry) = maybe_entry {
entry.into_value().saturating_add(1)
} else {
1
}
});
// It was an update.
assert!(entry.is_old_value_replaced());
assert_eq!(entry.key(), &key);
assert_eq!(entry.into_value(), 2);
§Concurrent calls on the same key
This method guarantees that concurrent calls on the same key are executed
serially. That is, and_upsert_with
calls on the same key never run
concurrently. The calls are serialized by the order of their invocation. It
uses a key-level lock to achieve this.
pub fn or_default(self) -> Entry<K, V>
where
    V: Default,
Returns the corresponding Entry for the reference of the key given when this entry selector was constructed. If the entry does not exist, inserts one by cloning the key and calling the default function of the value type V.
§Example
use moka::sync::Cache;
let cache: Cache<String, Option<u32>> = Cache::new(100);
let key = "key1".to_string();
let entry = cache.entry_by_ref(&key).or_default();
assert!(entry.is_fresh());
assert_eq!(entry.key(), &key);
assert_eq!(entry.into_value(), None);
let entry = cache.entry_by_ref(&key).or_default();
// Not fresh because the value was already in the cache.
assert!(!entry.is_fresh());
pub fn or_insert(self, default: V) -> Entry<K, V>
Returns the corresponding Entry for the reference of the key given when this entry selector was constructed. If the entry does not exist, inserts one by cloning the key and using the given default value for V.
§Example
use moka::sync::Cache;
let cache: Cache<String, u32> = Cache::new(100);
let key = "key1".to_string();
let entry = cache.entry_by_ref(&key).or_insert(3);
assert!(entry.is_fresh());
assert_eq!(entry.key(), &key);
assert_eq!(entry.into_value(), 3);
let entry = cache.entry_by_ref(&key).or_insert(6);
// Not fresh because the value was already in the cache.
assert!(!entry.is_fresh());
assert_eq!(entry.into_value(), 3);
pub fn or_insert_with(self, init: impl FnOnce() -> V) -> Entry<K, V>
Returns the corresponding Entry for the reference of the key given when this entry selector was constructed. If the entry does not exist, inserts one by cloning the key and evaluating the init closure for the value.
§Example
use moka::sync::Cache;
let cache: Cache<String, String> = Cache::new(100);
let key = "key1".to_string();
let entry = cache
.entry_by_ref(&key)
.or_insert_with(|| "value1".to_string());
assert!(entry.is_fresh());
assert_eq!(entry.key(), &key);
assert_eq!(entry.into_value(), "value1");
let entry = cache
.entry_by_ref(&key)
.or_insert_with(|| "value2".to_string());
// Not fresh because the value was already in the cache.
assert!(!entry.is_fresh());
assert_eq!(entry.into_value(), "value1");
§Concurrent calls on the same key
This method guarantees that concurrent calls on the same key for a not-yet-existing entry are coalesced into one evaluation of the init closure. Only one of the calls evaluates its closure (so the returned entry’s is_fresh method returns true), and the other calls wait for that closure to complete (and their is_fresh returns false).

For more details about the coalescing behavior, see Cache::get_with.
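As an illustration of the coalescing behavior, the following sketch (the thread count is arbitrary) has several threads race to initialize the same entry; the init closure runs exactly once:

use moka::sync::Cache;
use std::{
    sync::atomic::{AtomicUsize, Ordering},
    thread,
};

let cache: Cache<String, String> = Cache::new(100);
let key = "key1".to_string();
let init_calls = AtomicUsize::new(0);

// Four threads race to initialize the same entry.
thread::scope(|s| {
    for _ in 0..4 {
        s.spawn(|| {
            let entry = cache.entry_by_ref(&key).or_insert_with(|| {
                init_calls.fetch_add(1, Ordering::SeqCst);
                "value1".to_string()
            });
            // Every call observes the same value, fresh or not.
            assert_eq!(entry.into_value(), "value1");
        });
    }
});

// The concurrent `init` closures were coalesced into a single evaluation.
assert_eq!(init_calls.load(Ordering::SeqCst), 1);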
pub fn or_insert_with_if(
    self,
    init: impl FnOnce() -> V,
    replace_if: impl FnMut(&V) -> bool,
) -> Entry<K, V>
Works like or_insert_with, but takes an additional replace_if closure.

This method will evaluate the init closure and insert its output into the cache when:
- The key does not exist.
- Or, the replace_if closure returns true for the cached value.
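§Example
A minimal sketch of how replace_if interacts with an existing entry (the concrete types and values are illustrative):

use moka::sync::Cache;

let cache: Cache<String, u32> = Cache::new(100);
let key = "key1".to_string();

// The key does not exist yet, so `init` is evaluated and 3 is inserted.
let entry = cache
    .entry_by_ref(&key)
    .or_insert_with_if(|| 3, |_| false);
assert!(entry.is_fresh());
assert_eq!(entry.into_value(), 3);

// The entry exists and `replace_if` returns `false` for the cached value,
// so `init` is not evaluated and the cached 3 is kept.
let entry = cache
    .entry_by_ref(&key)
    .or_insert_with_if(|| 6, |v| *v > 5);
assert!(!entry.is_fresh());
assert_eq!(entry.into_value(), 3);

// The entry exists but `replace_if` returns `true` for the cached value,
// so `init` is evaluated and its output replaces the cached value.
let entry = cache
    .entry_by_ref(&key)
    .or_insert_with_if(|| 6, |v| *v == 3);
assert_eq!(entry.into_value(), 6);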
pub fn or_optionally_insert_with(
    self,
    init: impl FnOnce() -> Option<V>,
) -> Option<Entry<K, V>>
Returns the corresponding Entry for the reference of the key given when this entry selector was constructed. If the entry does not exist, clones the key and evaluates the init closure. If Some(value) was returned by the closure, inserts an entry with the value. If None was returned, this method does not insert an entry and returns None.
§Example
use moka::sync::Cache;
let cache: Cache<String, u32> = Cache::new(100);
let key = "key1".to_string();
let none_entry = cache
.entry_by_ref(&key)
.or_optionally_insert_with(|| None);
assert!(none_entry.is_none());
let some_entry = cache
.entry_by_ref(&key)
.or_optionally_insert_with(|| Some(3));
assert!(some_entry.is_some());
let entry = some_entry.unwrap();
assert!(entry.is_fresh());
assert_eq!(entry.key(), &key);
assert_eq!(entry.into_value(), 3);
let some_entry = cache
.entry_by_ref(&key)
.or_optionally_insert_with(|| Some(6));
let entry = some_entry.unwrap();
// Not fresh because the value was already in the cache.
assert!(!entry.is_fresh());
assert_eq!(entry.into_value(), 3);
§Concurrent calls on the same key
This method guarantees that concurrent calls on the same key for a not-yet-existing entry are coalesced into one evaluation of the init closure. Only one of the calls evaluates its closure (so the returned entry’s is_fresh method returns true), and the other calls wait for that closure to complete (and their is_fresh returns false).

For more details about the coalescing behavior, see Cache::optionally_get_with.
pub fn or_try_insert_with<F, E>(self, init: F) -> Result<Entry<K, V>, Arc<E>>
Returns the corresponding Entry for the reference of the key given when this entry selector was constructed. If the entry does not exist, clones the key and evaluates the init closure. If Ok(value) was returned from the closure, inserts an entry with the value. If Err(_) was returned, this method does not insert an entry and returns the Err wrapped by std::sync::Arc.
§Example
use moka::sync::Cache;
let cache: Cache<String, u32> = Cache::new(100);
let key = "key1".to_string();
let error_entry = cache
.entry_by_ref(&key)
.or_try_insert_with(|| Err("error"));
assert!(error_entry.is_err());
let ok_entry = cache
.entry_by_ref(&key)
.or_try_insert_with(|| Ok::<u32, &str>(3));
assert!(ok_entry.is_ok());
let entry = ok_entry.unwrap();
assert!(entry.is_fresh());
assert_eq!(entry.key(), &key);
assert_eq!(entry.into_value(), 3);
let ok_entry = cache
.entry_by_ref(&key)
.or_try_insert_with(|| Ok::<u32, &str>(6));
let entry = ok_entry.unwrap();
// Not fresh because the value was already in the cache.
assert!(!entry.is_fresh());
assert_eq!(entry.into_value(), 3);
§Concurrent calls on the same key
This method guarantees that concurrent calls on the same key for a not-yet-existing entry are coalesced into one evaluation of the init closure (as long as these closures return the same error type). Only one of the calls evaluates its closure (so the returned entry’s is_fresh method returns true), and the other calls wait for that closure to complete (and their is_fresh returns false).

For more details about the coalescing behavior, see Cache::try_get_with.