pub struct OwnedKeyEntrySelector<'a, K, V, S> { /* private fields */ }
§Implementations

impl<'a, K, V, S> OwnedKeyEntrySelector<'a, K, V, S>
pub async fn and_compute_with<F, Fut>(self, f: F) -> CompResult<K, V>
Performs a compute operation on a cached entry by using the given closure
f. A compute operation is either put, remove or no-operation (nop).
The closure f should take the current entry of Option<Entry<K, V>> for
the key, and return a Future that resolves to an ops::compute::Op<V>
enum.
This method works as follows:

1. Apply the closure f to the current cached Entry, and get a Future.
2. Resolve the Future, and get an ops::compute::Op<V>.
3. Execute the op on the cache:
   - Op::Put(V): Put the new value V to the cache.
   - Op::Remove: Remove the current cached entry.
   - Op::Nop: Do nothing.
4. Return an ops::compute::CompResult<K, V> as follows:
| Op<V> | Entry<K, V> already exists? | CompResult<K, V> | Notes |
|---|---|---|---|
| Put(V) | no | Inserted(Entry<K, V>) | The new entry is returned. |
| Put(V) | yes | ReplacedWith(Entry<K, V>) | The new entry is returned. |
| Remove | no | StillNone(Arc<K>) | |
| Remove | yes | Removed(Entry<K, V>) | The removed entry is returned. |
| Nop | no | StillNone(Arc<K>) | |
| Nop | yes | Unchanged(Entry<K, V>) | The existing entry is returned. |
§See Also

- If you want the Future to resolve to Result<Op<V>> instead of Op<V>, and modify the entry only when it resolves to Ok(V), use the and_try_compute_with method.
- If you only want to update or insert, use the and_upsert_with method.
§Example
```rust
// Cargo.toml
//
// [dependencies]
// moka = { version = "0.12.8", features = ["future"] }
// tokio = { version = "1", features = ["rt-multi-thread", "macros"] }

use moka::{
    future::Cache,
    ops::compute::{CompResult, Op},
};

#[tokio::main]
async fn main() {
    let cache: Cache<String, u64> = Cache::new(100);
    let key = "key1".to_string();

    /// Increment a cached `u64` counter. If the counter is greater than or
    /// equal to 2, remove it.
    async fn increment_or_remove_counter(
        cache: &Cache<String, u64>,
        key: &str,
    ) -> CompResult<String, u64> {
        cache
            .entry(key.to_string())
            .and_compute_with(|maybe_entry| {
                let op = if let Some(entry) = maybe_entry {
                    let counter = entry.into_value();
                    if counter < 2 {
                        Op::Put(counter.saturating_add(1)) // Update
                    } else {
                        Op::Remove
                    }
                } else {
                    Op::Put(1) // Insert
                };
                // Return a Future that is resolved to `op` immediately.
                std::future::ready(op)
            })
            .await
    }

    // This should insert a new counter value 1 into the cache, and return
    // the value with the kind of operation performed.
    let result = increment_or_remove_counter(&cache, &key).await;
    let CompResult::Inserted(entry) = result else {
        panic!("`Inserted` should be returned: {result:?}");
    };
    assert_eq!(entry.into_value(), 1);

    // This should increment the cached counter value by 1.
    let result = increment_or_remove_counter(&cache, &key).await;
    let CompResult::ReplacedWith(entry) = result else {
        panic!("`ReplacedWith` should be returned: {result:?}");
    };
    assert_eq!(entry.into_value(), 2);

    // This should remove the cached counter from the cache, and return the
    // _removed_ value.
    let result = increment_or_remove_counter(&cache, &key).await;
    let CompResult::Removed(entry) = result else {
        panic!("`Removed` should be returned: {result:?}");
    };
    assert_eq!(entry.into_value(), 2);

    // The key should no longer exist.
    assert!(!cache.contains_key(&key));

    // This should start over; insert a new counter value 1 into the cache.
    let result = increment_or_remove_counter(&cache, &key).await;
    let CompResult::Inserted(entry) = result else {
        panic!("`Inserted` should be returned: {result:?}");
    };
    assert_eq!(entry.into_value(), 1);
}
```

§Concurrent calls on the same key
This method guarantees that concurrent calls on the same key are executed
serially. That is, and_compute_with calls on the same key never run
concurrently. The calls are serialized by the order of their invocation. It
uses a key-level lock to achieve this.
pub async fn and_try_compute_with<F, Fut, E>(
    self,
    f: F,
) -> Result<CompResult<K, V>, E>
Performs a compute operation on a cached entry by using the given closure
f. A compute operation is either put, remove or no-operation (nop).
The closure f should take the current entry of Option<Entry<K, V>> for
the key, and return a Future that resolves to a
Result<ops::compute::Op<V>, E>.
This method works as follows:

1. Apply the closure f to the current cached Entry, and get a Future.
2. Resolve the Future, and get a Result<ops::compute::Op<V>, E>.
3. If resolved to Err(E), return it.
4. Otherwise, execute the op on the cache:
   - Ok(Op::Put(V)): Put the new value V to the cache.
   - Ok(Op::Remove): Remove the current cached entry.
   - Ok(Op::Nop): Do nothing.
5. Return an Ok(ops::compute::CompResult<K, V>) as follows:
| Op<V> | Entry<K, V> already exists? | CompResult<K, V> | Notes |
|---|---|---|---|
| Put(V) | no | Inserted(Entry<K, V>) | The new entry is returned. |
| Put(V) | yes | ReplacedWith(Entry<K, V>) | The new entry is returned. |
| Remove | no | StillNone(Arc<K>) | |
| Remove | yes | Removed(Entry<K, V>) | The removed entry is returned. |
| Nop | no | StillNone(Arc<K>) | |
| Nop | yes | Unchanged(Entry<K, V>) | The existing entry is returned. |
§See Also

- If you want the Future to resolve to Op<V> instead of Result<Op<V>>, use the and_compute_with method.
- If you only want to update or insert, use the and_upsert_with method.
§Example
See try_append_value_async.rs in the examples directory.
§Concurrent calls on the same key
This method guarantees that concurrent calls on the same key are executed
serially. That is, and_try_compute_with calls on the same key never run
concurrently. The calls are serialized by the order of their invocation. It
uses a key-level lock to achieve this.
pub async fn and_try_compute_if_nobody_else<F, Fut, E>(
    self,
    f: F,
) -> Result<CompResult<K, V>, E>
pub async fn and_upsert_with<F, Fut>(self, f: F) -> Entry<K, V>
Performs an upsert of an Entry by using the given closure f. The word
“upsert” here means “update” or “insert”.
The closure f should take the current entry of Option<Entry<K, V>> for
the key, and return a Future that resolves to a new value V.
This method works as follows:

1. Apply the closure f to the current cached Entry, and get a Future.
2. Resolve the Future, and get a new value V.
3. Upsert the new value to the cache.
4. Return the Entry having the upserted value.
§See Also

- If you want to upsert optionally, that is, only when certain conditions are met, use the and_compute_with method.
- If you want to try to upsert, that is, to make the Future resolve to Result<V> instead of V and upsert only when it resolves to Ok(V), use the and_try_compute_with method.
§Example
```rust
// Cargo.toml
//
// [dependencies]
// moka = { version = "0.12.8", features = ["future"] }
// tokio = { version = "1", features = ["rt-multi-thread", "macros"] }

use moka::future::Cache;

#[tokio::main]
async fn main() {
    let cache: Cache<String, u64> = Cache::new(100);
    let key = "key1".to_string();

    let entry = cache
        .entry(key.clone())
        .and_upsert_with(|maybe_entry| {
            let counter = if let Some(entry) = maybe_entry {
                entry.into_value().saturating_add(1) // Update
            } else {
                1 // Insert
            };
            // Return a Future that is resolved to `counter` immediately.
            std::future::ready(counter)
        })
        .await;
    // It was not an update.
    assert!(!entry.is_old_value_replaced());
    assert_eq!(entry.key(), &key);
    assert_eq!(entry.into_value(), 1);

    let entry = cache
        .entry(key.clone())
        .and_upsert_with(|maybe_entry| {
            let counter = if let Some(entry) = maybe_entry {
                entry.into_value().saturating_add(1)
            } else {
                1
            };
            std::future::ready(counter)
        })
        .await;
    // It was an update.
    assert!(entry.is_old_value_replaced());
    assert_eq!(entry.key(), &key);
    assert_eq!(entry.into_value(), 2);
}
```

§Concurrent calls on the same key
This method guarantees that concurrent calls on the same key are executed
serially. That is, and_upsert_with calls on the same key never run
concurrently. The calls are serialized by the order of their invocation. It
uses a key-level lock to achieve this.
pub async fn or_default(self) -> Entry<K, V>
where
    V: Default,
Returns the corresponding Entry for the key given when this entry
selector was constructed. If the entry does not exist, inserts one by calling
the default function of the value type V.
§Example
```rust
// Cargo.toml
//
// [dependencies]
// moka = { version = "0.12", features = ["future"] }
// tokio = { version = "1", features = ["rt-multi-thread", "macros"] }

use moka::future::Cache;

#[tokio::main]
async fn main() {
    let cache: Cache<String, Option<u32>> = Cache::new(100);
    let key = "key1".to_string();

    let entry = cache.entry(key.clone()).or_default().await;
    assert!(entry.is_fresh());
    assert_eq!(entry.key(), &key);
    assert_eq!(entry.into_value(), None);

    let entry = cache.entry(key).or_default().await;
    // Not fresh because the value was already in the cache.
    assert!(!entry.is_fresh());
}
```

pub async fn or_insert(self, default: V) -> Entry<K, V>
Returns the corresponding Entry for the key given when this entry
selector was constructed. If the entry does not exist, inserts one by using
the given default value for V.
§Example
```rust
// Cargo.toml
//
// [dependencies]
// moka = { version = "0.12", features = ["future"] }
// tokio = { version = "1", features = ["rt-multi-thread", "macros"] }

use moka::future::Cache;

#[tokio::main]
async fn main() {
    let cache: Cache<String, u32> = Cache::new(100);
    let key = "key1".to_string();

    let entry = cache.entry(key.clone()).or_insert(3).await;
    assert!(entry.is_fresh());
    assert_eq!(entry.key(), &key);
    assert_eq!(entry.into_value(), 3);

    let entry = cache.entry(key).or_insert(6).await;
    // Not fresh because the value was already in the cache.
    assert!(!entry.is_fresh());
    assert_eq!(entry.into_value(), 3);
}
```

pub async fn or_insert_with(self, init: impl Future<Output = V>) -> Entry<K, V>
Returns the corresponding Entry for the key given when this entry
selector was constructed. If the entry does not exist, resolves the init
future and inserts the output.
§Example
```rust
// Cargo.toml
//
// [dependencies]
// moka = { version = "0.12", features = ["future"] }
// tokio = { version = "1", features = ["rt-multi-thread", "macros"] }

use moka::future::Cache;

#[tokio::main]
async fn main() {
    let cache: Cache<String, String> = Cache::new(100);
    let key = "key1".to_string();

    let entry = cache
        .entry(key.clone())
        .or_insert_with(async { "value1".to_string() })
        .await;
    assert!(entry.is_fresh());
    assert_eq!(entry.key(), &key);
    assert_eq!(entry.into_value(), "value1");

    let entry = cache
        .entry(key)
        .or_insert_with(async { "value2".to_string() })
        .await;
    // Not fresh because the value was already in the cache.
    assert!(!entry.is_fresh());
    assert_eq!(entry.into_value(), "value1");
}
```

§Concurrent calls on the same key
This method guarantees that concurrent calls on the same not-existing entry
are coalesced into one evaluation of the init future. Only one of the calls
evaluates its future (thus returned entry’s is_fresh method returns
true), and other calls wait for that future to resolve (and their
is_fresh return false).
For more detail about the coalescing behavior, see
Cache::get_with.
pub async fn or_insert_with_if(
    self,
    init: impl Future<Output = V>,
    replace_if: impl FnMut(&V) -> bool + Send,
) -> Entry<K, V>
Works like or_insert_with, but takes an additional
replace_if closure.
This method will resolve the init future and insert the output to the
cache when:
- The key does not exist.
- Or, the replace_if closure returns true.
pub async fn or_optionally_insert_with(
    self,
    init: impl Future<Output = Option<V>>,
) -> Option<Entry<K, V>>
Returns the corresponding Entry for the key given when this entry
selector was constructed. If the entry does not exist, resolves the init
future, and inserts an entry if Some(value) was returned. If None was
returned from the future, this method does not insert an entry and returns
None.
§Example
```rust
// Cargo.toml
//
// [dependencies]
// moka = { version = "0.12", features = ["future"] }
// tokio = { version = "1", features = ["rt-multi-thread", "macros"] }

use moka::future::Cache;

#[tokio::main]
async fn main() {
    let cache: Cache<String, u32> = Cache::new(100);
    let key = "key1".to_string();

    let none_entry = cache
        .entry(key.clone())
        .or_optionally_insert_with(async { None })
        .await;
    assert!(none_entry.is_none());

    let some_entry = cache
        .entry(key.clone())
        .or_optionally_insert_with(async { Some(3) })
        .await;
    assert!(some_entry.is_some());
    let entry = some_entry.unwrap();
    assert!(entry.is_fresh());
    assert_eq!(entry.key(), &key);
    assert_eq!(entry.into_value(), 3);

    let some_entry = cache
        .entry(key)
        .or_optionally_insert_with(async { Some(6) })
        .await;
    let entry = some_entry.unwrap();
    // Not fresh because the value was already in the cache.
    assert!(!entry.is_fresh());
    assert_eq!(entry.into_value(), 3);
}
```

§Concurrent calls on the same key
This method guarantees that concurrent calls on the same not-existing entry
are coalesced into one evaluation of the init future. Only one of the calls
evaluates its future (thus returned entry’s is_fresh method returns
true), and other calls wait for that future to resolve (and their
is_fresh return false).
For more detail about the coalescing behavior, see
Cache::optionally_get_with.
pub async fn or_try_insert_with<F, E>(
    self,
    init: F,
) -> Result<Entry<K, V>, Arc<E>>
Returns the corresponding Entry for the key given when this entry
selector was constructed. If the entry does not exist, resolves the init
future, and inserts an entry if Ok(value) was returned. If Err(_) was
returned from the future, this method does not insert an entry and returns
the Err wrapped by std::sync::Arc.
§Example
```rust
// Cargo.toml
//
// [dependencies]
// moka = { version = "0.12", features = ["future"] }
// tokio = { version = "1", features = ["rt-multi-thread", "macros"] }

use moka::future::Cache;

#[tokio::main]
async fn main() {
    let cache: Cache<String, u32> = Cache::new(100);
    let key = "key1".to_string();

    let error_entry = cache
        .entry(key.clone())
        .or_try_insert_with(async { Err("error") })
        .await;
    assert!(error_entry.is_err());

    let ok_entry = cache
        .entry(key.clone())
        .or_try_insert_with(async { Ok::<u32, &str>(3) })
        .await;
    assert!(ok_entry.is_ok());
    let entry = ok_entry.unwrap();
    assert!(entry.is_fresh());
    assert_eq!(entry.key(), &key);
    assert_eq!(entry.into_value(), 3);

    let ok_entry = cache
        .entry(key)
        .or_try_insert_with(async { Ok::<u32, &str>(6) })
        .await;
    let entry = ok_entry.unwrap();
    // Not fresh because the value was already in the cache.
    assert!(!entry.is_fresh());
    assert_eq!(entry.into_value(), 3);
}
```

§Concurrent calls on the same key
This method guarantees that concurrent calls on the same not-existing entry
are coalesced into one evaluation of the init future (as long as these
futures return the same error type). Only one of the calls evaluates its
future (thus returned entry’s is_fresh method returns true), and other
calls wait for that future to resolve (and their is_fresh return false).
For more detail about the coalescing behavior, see
Cache::try_get_with.