Struct mz_persist_client::cfg::DynamicConfig
pub struct DynamicConfig {
batch_builder_max_outstanding_parts: AtomicUsize,
compaction_heuristic_min_inputs: AtomicUsize,
compaction_heuristic_min_parts: AtomicUsize,
compaction_heuristic_min_updates: AtomicUsize,
compaction_memory_bound_bytes: AtomicUsize,
gc_blob_delete_concurrency_limit: AtomicUsize,
state_versions_recent_live_diffs_limit: AtomicUsize,
usage_state_fetch_concurrency_limit: AtomicUsize,
}
Persist configurations that can be dynamically updated.

Persist is expected to react to each of these such that updating the value returned by a getter takes effect in persist (i.e., the result must not be cached). This should happen "as promptly as reasonably possible", where that is defined by the tradeoff of complexity vs. promptness. For example, we might use a consistent version of BLOB_TARGET_SIZE for the entirety of a single compaction call, but an update should never require a process restart to take effect.
These are hooked up to LaunchDarkly. Specifically, LaunchDarkly configs are serialized into a mz_dyncfg::ConfigUpdates. In environmentd, these are applied directly via mz_dyncfg::ConfigUpdates::apply to the PersistConfig in crate::cache::PersistClientCache. There is one PersistClientCache per process, and every PersistConfig shares the same Arc<DynamicConfig>, so this affects all DynamicConfig usage in the process. The ConfigUpdates are also sent over the compute and storage command streams, which apply them to all computed/storaged/clusterd processes as well.
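As a rough illustration of that contract (a minimal sketch, not the actual persist code), the snippet below reads a value through an AtomicUsize on every call, so a store made by whatever applies the ConfigUpdates is visible to the next reader sharing the same Arc. DynCfg and its blob_target_size field are hypothetical stand-ins.

use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical stand-in for DynamicConfig: every read goes through the
// atomic, so an update is visible to the very next caller, no restart needed.
struct DynCfg {
    blob_target_size: AtomicUsize,
}

impl DynCfg {
    fn blob_target_size(&self) -> usize {
        // Load fresh on every call; callers must not cache the result.
        self.blob_target_size.load(Ordering::SeqCst)
    }
}

fn main() {
    let cfg = Arc::new(DynCfg { blob_target_size: AtomicUsize::new(128 << 20) });
    // An updater (e.g. whatever applies LaunchDarkly-driven updates) stores a
    // new value...
    cfg.blob_target_size.store(64 << 20, Ordering::SeqCst);
    // ...and any reader holding a clone of the Arc observes it on its next read.
    assert_eq!(cfg.blob_target_size(), 64 << 20);
}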
Fields
batch_builder_max_outstanding_parts: AtomicUsize
compaction_heuristic_min_inputs: AtomicUsize
compaction_heuristic_min_parts: AtomicUsize
compaction_heuristic_min_updates: AtomicUsize
compaction_memory_bound_bytes: AtomicUsize
gc_blob_delete_concurrency_limit: AtomicUsize
state_versions_recent_live_diffs_limit: AtomicUsize
usage_state_fetch_concurrency_limit: AtomicUsize
Implementations
impl DynamicConfig
const LOAD_ORDERING: Ordering = Ordering::SeqCst
const STORE_ORDERING: Ordering = Ordering::SeqCst
pub fn batch_builder_max_outstanding_parts(&self) -> usize
The maximum number of parts (s3 blobs) that crate::batch::BatchBuilder will pipeline before back-pressuring crate::batch::BatchBuilder::add calls on previous ones finishing.
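The back-pressure semantics can be pictured with a small hypothetical sketch (this is not how BatchBuilder is implemented; tokio and the upload body are assumed for illustration): at most the configured number of part uploads are in flight, and the next add-style call waits for a permit.

use std::sync::Arc;
use tokio::sync::Semaphore;
use tokio::task::JoinSet;

// Illustrative only: cap in-flight part uploads at the configured limit.
async fn upload_parts(parts: Vec<Vec<u8>>, max_outstanding_parts: usize) {
    let permits = Arc::new(Semaphore::new(max_outstanding_parts));
    let mut uploads = JoinSet::new();
    for part in parts {
        // Back-pressure: once `max_outstanding_parts` uploads are in flight,
        // this await blocks until one of them finishes and drops its permit.
        let permit = Arc::clone(&permits).acquire_owned().await.unwrap();
        uploads.spawn(async move {
            // ... write `part` to blob storage here ...
            drop(part);
            drop(permit);
        });
    }
    while uploads.join_next().await.is_some() {}
}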
pub fn compaction_heuristic_min_inputs(&self) -> usize
In Compactor::compact_and_apply, we do the compaction (don’t skip it) if the number of inputs is at least this many. Compaction is performed if any of the heuristic criteria are met (they are OR’d).
pub fn compaction_heuristic_min_parts(&self) -> usize
In Compactor::compact_and_apply, we do the compaction (don’t skip it) if the number of batch parts is at least this many. Compaction is performed if any of the heuristic criteria are met (they are OR’d).
pub fn compaction_heuristic_min_updates(&self) -> usize
In Compactor::compact_and_apply, we do the compaction (don’t skip it) if the number of updates is at least this many. Compaction is performed if any of the heuristic criteria are met (they are OR’d).
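Since these three heuristics are OR'd, the decision reduces to a check like the following hypothetical helper (names are illustrative, not the Compactor's actual code):

// Compaction proceeds if *any* of the thresholds is met.
fn should_compact(
    num_inputs: usize,
    num_parts: usize,
    num_updates: usize,
    min_inputs: usize,   // compaction_heuristic_min_inputs
    min_parts: usize,    // compaction_heuristic_min_parts
    min_updates: usize,  // compaction_heuristic_min_updates
) -> bool {
    num_inputs >= min_inputs || num_parts >= min_parts || num_updates >= min_updates
}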
pub fn compaction_memory_bound_bytes(&self) -> usize
The upper bound on compaction’s memory consumption. The value must be at least 4*blob_target_size. Increasing this value beyond the minimum allows compaction to merge together more runs at once, providing greater consolidation of updates, at the cost of greater memory usage.
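A minimal sketch of the documented lower bound, with hypothetical names (not persist's actual validation code):

// The memory budget must be at least 4 * blob_target_size; beyond that,
// a larger budget lets compaction merge more runs per pass.
fn validate_compaction_memory_bound(memory_bound_bytes: usize, blob_target_size: usize) {
    assert!(
        memory_bound_bytes >= 4 * blob_target_size,
        "compaction_memory_bound_bytes must be at least 4 * blob_target_size"
    );
}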
pub fn gc_blob_delete_concurrency_limit(&self) -> usize
The maximum number of concurrent blob deletes during garbage collection.
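For illustration only (not the actual GC code; the futures crate and the delete body are assumed), a stream combinator gives the same "at most N concurrent deletes" shape:

use futures::stream::{self, StreamExt};

// Illustrative only: bound concurrent blob deletions during garbage collection.
async fn delete_blobs(keys: Vec<String>, gc_blob_delete_concurrency_limit: usize) {
    stream::iter(keys)
        .for_each_concurrent(gc_blob_delete_concurrency_limit, |key| async move {
            // ... issue the delete for `key` to blob storage here ...
            let _ = key;
        })
        .await;
}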
pub fn state_versions_recent_live_diffs_limit(&self) -> usize
The # of diffs to initially scan when fetching the latest consensus state, to determine which requests go down the fast vs slow path. Should be large enough to fetch all live diffs in the steady-state, and small enough to query Consensus at high volume. Steady-state usage should accommodate readers that require seqno-holds for reasonable amounts of time, which to start we say is 10s of minutes.
This value ought to be defined in terms of NEED_ROLLUP_THRESHOLD to approximate when we expect rollups to be written and therefore when old states will be truncated by GC.
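A plausible sketch of how such a limit typically splits the two paths (hypothetical, not the actual StateVersions logic): fetch up to the limit of recent diffs and check whether the scan was exhaustive.

// If the initial scan returned fewer diffs than the limit, it reached the
// oldest live diff and the fast path suffices; otherwise fall back to the
// slow path (fetch the latest rollup, then the diffs after it).
fn scan_was_exhaustive(num_recent_diffs_fetched: usize, recent_live_diffs_limit: usize) -> bool {
    num_recent_diffs_fetched < recent_live_diffs_limit
}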
pub fn usage_state_fetch_concurrency_limit(&self) -> usize
The maximum number of concurrent state fetches during usage computation.
pub fn set_compaction_memory_bound_bytes(&self, val: usize)
Trait Implementations
Auto Trait Implementations
impl !Freeze for DynamicConfig
impl RefUnwindSafe for DynamicConfig
impl Send for DynamicConfig
impl Sync for DynamicConfig
impl Unpin for DynamicConfig
impl UnwindSafe for DynamicConfig
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> FutureExt for T
fn with_context(self, otel_cx: Context) -> WithContext<Self>
fn with_current_context(self) -> WithContext<Self>
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wrap the input message T in a tonic::Request.
impl<T> Pointable for T
impl<P, R> ProtoType<R> for P where R: RustType<P>
fn into_rust(self) -> Result<R, TryFromProtoError>
See RustType::from_proto.
fn from_rust(rust: &R) -> P
See RustType::into_proto.
impl<'a, S, T> Semigroup<&'a S> for T where T: Semigroup<S>
fn plus_equals(&mut self, rhs: &&'a S)
The method of std::ops::AddAssign, for types that do not implement AddAssign.