Struct mz_persist_client::cfg::DynamicConfig
pub struct DynamicConfig {
batch_builder_max_outstanding_parts: AtomicUsize,
blob_target_size: AtomicUsize,
blob_cache_mem_limit_bytes: AtomicUsize,
compaction_heuristic_min_inputs: AtomicUsize,
compaction_heuristic_min_parts: AtomicUsize,
compaction_heuristic_min_updates: AtomicUsize,
compaction_memory_bound_bytes: AtomicUsize,
compaction_minimum_timeout: RwLock<Duration>,
gc_blob_delete_concurrency_limit: AtomicUsize,
state_versions_recent_live_diffs_limit: AtomicUsize,
usage_state_fetch_concurrency_limit: AtomicUsize,
consensus_connect_timeout: RwLock<Duration>,
consensus_tcp_user_timeout: RwLock<Duration>,
consensus_connection_pool_ttl: RwLock<Duration>,
consensus_connection_pool_ttl_stagger: RwLock<Duration>,
reader_lease_duration: RwLock<Duration>,
sink_minimum_batch_updates: AtomicUsize,
storage_sink_minimum_batch_updates: AtomicUsize,
stats_audit_percent: AtomicUsize,
stats_collection_enabled: AtomicBool,
stats_filter_enabled: AtomicBool,
stats_budget_bytes: AtomicUsize,
stats_untrimmable_columns: RwLock<UntrimmableColumns>,
pubsub_client_enabled: AtomicBool,
pubsub_push_diff_enabled: AtomicBool,
rollup_threshold: AtomicUsize,
feature_flags: BTreeMap<&'static str, AtomicBool>,
next_listen_batch_retryer: RwLock<RetryParameters>,
}
Persist configurations that can be dynamically updated.
Persist is expected to react to each of these such that updating the value returned by the function takes effect in persist (i.e., the value is not cached). This should happen “as promptly as reasonably possible”, where that’s defined by the tradeoffs of complexity vs. promptness. For example, we might use a consistent version of Self::blob_target_size for the entirety of a single compaction call. However, it should never require a process restart for an update of these to take effect.
These are hooked up to LaunchDarkly. Specifically, LaunchDarkly configs are serialized into a PersistParameters. In environmentd, these are applied directly via PersistParameters::apply to the PersistConfig in crate::cache::PersistClientCache. There is one PersistClientCache per process, and every PersistConfig shares the same Arc<DynamicConfig>, so this affects all DynamicConfig usage in the process. The PersistParameters is also sent via the compute and storage command streams, which then apply it to all computed/storaged/clusterd processes as well.
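As a sketch of what this means in practice, the following (hypothetical, not library code) reads the relevant limit at the point of use rather than caching it at startup; everything here except the getter shown on this page is made up for illustration.

```rust
use std::sync::Arc;

// Hypothetical usage sketch: re-read dynamic values where they are used so a
// LaunchDarkly-driven update (applied via PersistParameters) takes effect on
// the next pass, without a process restart.
fn gc_delete_blobs(cfg: &Arc<DynamicConfig>, keys_to_delete: &[String]) {
    // Read the limit on every GC pass instead of caching it once.
    let limit = cfg.gc_blob_delete_concurrency_limit().max(1);
    for chunk in keys_to_delete.chunks(limit) {
        // ... issue up to `limit` concurrent deletes for this chunk ...
        let _ = chunk;
    }
}
```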
Implementations
impl DynamicConfig
const LOAD_ORDERING: Ordering = Ordering::SeqCst
const STORE_ORDERING: Ordering = Ordering::SeqCst
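A minimal sketch of how these orderings are presumably used by the getters and setters below; the struct here is illustrative, not the real implementation.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Illustrative only: load with LOAD_ORDERING and store with STORE_ORDERING,
// both SeqCst, so a completed update is visible to the very next read.
struct ExampleDynamicConfig {
    blob_target_size: AtomicUsize,
}

impl ExampleDynamicConfig {
    const LOAD_ORDERING: Ordering = Ordering::SeqCst;
    const STORE_ORDERING: Ordering = Ordering::SeqCst;

    fn blob_target_size(&self) -> usize {
        self.blob_target_size.load(Self::LOAD_ORDERING)
    }

    fn set_blob_target_size(&self, val: usize) {
        self.blob_target_size.store(val, Self::STORE_ORDERING);
    }
}
```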
pub fn enabled(&self, flag: PersistFeatureFlag) -> bool
pub fn set_feature_flag(&self, flag: PersistFeatureFlag, to: bool)
pub fn batch_builder_max_outstanding_parts(&self) -> usize
The maximum number of parts (s3 blobs) that crate::batch::BatchBuilder will pipeline before back-pressuring crate::batch::BatchBuilder::add calls on previous ones finishing.
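For intuition, a hedged sketch of the back-pressure rule (the real pipelining lives in crate::batch::BatchBuilder; this helper is hypothetical):

```rust
// Sketch only: `add` should wait for an in-flight part upload to finish once
// the number of outstanding parts reaches the configured limit.
fn must_wait_before_buffering_more(cfg: &DynamicConfig, outstanding_parts: usize) -> bool {
    outstanding_parts >= cfg.batch_builder_max_outstanding_parts()
}
```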
pub fn blob_target_size(&self) -> usize
A target maximum size of blob payloads in bytes. If a logical “batch” is bigger than this, it will be broken up into smaller, independent pieces. This is best-effort, not a guarantee (though as of 2022-06-09, we happen to always respect it). This target size doesn’t apply for an individual update that exceeds it in size, but that scenario is almost certainly a mis-use of the system.
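Illustrative arithmetic only (not library code): roughly how many independent pieces a logical batch of a given size would be split into at the current target.

```rust
// Sketch: a batch larger than the target is broken into independent pieces of
// at most roughly `blob_target_size` bytes each (best-effort, per the docs).
fn approx_num_pieces(cfg: &DynamicConfig, batch_len_bytes: usize) -> usize {
    let target = cfg.blob_target_size().max(1);
    batch_len_bytes.div_ceil(target).max(1)
}
```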
pub fn blob_cache_mem_limit_bytes(&self) -> usize
Capacity of in-mem blob cache in bytes.
pub fn compaction_heuristic_min_inputs(&self) -> usize
In Compactor::compact_and_apply, we do the compaction (don’t skip it) if the number of inputs is at least this many. Compaction is performed if any of the heuristic criteria are met (they are OR’d).
pub fn compaction_heuristic_min_parts(&self) -> usize
In Compactor::compact_and_apply, we do the compaction (don’t skip it) if the number of batch parts is at least this many. Compaction is performed if any of the heuristic criteria are met (they are OR’d).
pub fn compaction_heuristic_min_updates(&self) -> usize
In Compactor::compact_and_apply, we do the compaction (don’t skip it) if the number of updates is at least this many. Compaction is performed if any of the heuristic criteria are met (they are OR’d).
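Since the three heuristics are OR'd, the gate can be sketched as follows (hypothetical helper; the real logic lives in Compactor::compact_and_apply):

```rust
// Hypothetical sketch of the OR'd compaction heuristics described above.
fn should_compact(cfg: &DynamicConfig, inputs: usize, parts: usize, updates: usize) -> bool {
    inputs >= cfg.compaction_heuristic_min_inputs()
        || parts >= cfg.compaction_heuristic_min_parts()
        || updates >= cfg.compaction_heuristic_min_updates()
}
```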
pub fn compaction_memory_bound_bytes(&self) -> usize
The upper bound on compaction’s memory consumption. The value must be at least 4*blob_target_size. Increasing this value beyond the minimum allows compaction to merge together more runs at once, providing greater consolidation of updates, at the cost of greater memory usage.
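As rough intuition (illustrative arithmetic only, not a real API), the ratio of the memory bound to blob_target_size approximates how many blob-sized buffers compaction can hold at once; the minimum of 4*blob_target_size corresponds to about four.

```rust
// Illustrative arithmetic only: approximate how many blob-sized buffers fit
// under the configured compaction memory bound.
fn approx_mergeable_runs(cfg: &DynamicConfig) -> usize {
    (cfg.compaction_memory_bound_bytes() / cfg.blob_target_size().max(1)).max(1)
}
```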
pub fn compaction_minimum_timeout(&self) -> Duration
In Compactor::compact_and_apply_background, the minimum amount of time to allow a compaction request to run before timing it out. A request may be given a timeout greater than this value depending on the inputs’ size.
pub fn consensus_connection_pool_ttl(&self) -> Duration
The minimum TTL of a connection to Postgres/CRDB before it is proactively terminated. Connections are routinely culled to balance load against the downstream database.
pub fn consensus_connection_pool_ttl_stagger(&self) -> Duration
The minimum time between TTLing connections to Postgres/CRDB. This delay is used to stagger reconnections to avoid stampedes and high tail latencies. This value should be much less than consensus_connection_pool_ttl so that reconnections are biased towards terminating the oldest connections first. A value of consensus_connection_pool_ttl / consensus_connection_pool_max_size is likely a good place to start so that all connections are rotated when the pool is fully used.
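A worked example of that suggested starting point (the pool-size parameter below is a stand-in name for this sketch; the actual pool-size knob is not part of this struct): with a 300s TTL and a pool of 50 connections, the stagger is 6s, so a fully used pool rotates all of its connections over one TTL.

```rust
use std::time::Duration;

// Illustration of the suggested starting point: stagger = ttl / pool_max_size.
// `pool_max_size` is a stand-in parameter for this sketch.
fn suggested_stagger(ttl: Duration, pool_max_size: u32) -> Duration {
    ttl / pool_max_size.max(1)
}

// e.g. suggested_stagger(Duration::from_secs(300), 50) == Duration::from_secs(6)
```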
pub fn consensus_connect_timeout(&self) -> Duration
The duration to wait for a Consensus Postgres/CRDB connection to be made before retrying.
pub fn consensus_tcp_user_timeout(&self) -> Duration
The TCP user timeout for a Consensus Postgres/CRDB connection. Specifies the amount of time that transmitted data may remain unacknowledged before the TCP connection is forcibly closed.
pub fn reader_lease_duration(&self) -> Duration
Length of time after a reader’s last operation after which the reader may be expired.
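A hedged sketch of the expiry rule implied by this lease (hypothetical helper, not the crate's actual expiry code):

```rust
use std::time::Instant;

// Sketch only: a reader is eligible for expiration once its last operation is
// older than the configured lease duration.
fn reader_lease_expired(cfg: &DynamicConfig, last_op: Instant, now: Instant) -> bool {
    now.duration_since(last_op) > cfg.reader_lease_duration()
}
```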
pub fn set_reader_lease_duration(&self, d: Duration)
Set the length of time after a reader’s last operation after which the reader may be expired.
pub fn gc_blob_delete_concurrency_limit(&self) -> usize
The maximum number of concurrent blob deletes during garbage collection.
pub fn state_versions_recent_live_diffs_limit(&self) -> usize
The # of diffs to initially scan when fetching the latest consensus state, to determine which requests go down the fast vs slow path. Should be large enough to fetch all live diffs in the steady-state, and small enough to query Consensus at high volume. Steady-state usage should accommodate readers that require seqno-holds for reasonable amounts of time, which to start we say is 10s of minutes.
This value ought to be defined in terms of NEED_ROLLUP_THRESHOLD to approximate when we expect rollups to be written and therefore when old states will be truncated by GC.
pub fn stats_audit_percent(&self) -> usize
Percent of filtered data to opt in to correctness auditing.
pub fn stats_collection_enabled(&self) -> bool
Computes and stores statistics about each batch part.
These can be used at read time to entirely skip fetching a part based on its statistics. See Self::stats_filter_enabled.
pub fn stats_filter_enabled(&self) -> bool
Uses previously computed statistics about batch parts to entirely skip fetching them at read time.
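The read-time behavior implied by these two flags, as a hypothetical sketch (the real filtering consults the stored per-part statistics):

```rust
// Hypothetical sketch: parts are only skipped when filtering is enabled and
// previously collected stats prove the part cannot match the query.
// `stats_prove_irrelevant` is None when no stats were written for the part.
fn should_fetch_part(cfg: &DynamicConfig, stats_prove_irrelevant: Option<bool>) -> bool {
    if !cfg.stats_filter_enabled() {
        return true;
    }
    !matches!(stats_prove_irrelevant, Some(true))
}
```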
pub fn stats_budget_bytes(&self) -> usize
The budget (in bytes) of how many stats to write down per batch part. When the budget is exceeded, stats will be trimmed away according to a variety of heuristics.
pub fn stats_untrimmable_columns(&self) -> UntrimmableColumns
The stats columns that will never be trimmed, even if they go over budget.
pub fn pubsub_client_enabled(&self) -> bool
Determines whether PubSub clients should connect to the PubSub server.
pub fn pubsub_push_diff_enabled(&self) -> bool
For connected clients, determines whether to push state diffs to the PubSub server. For the server, determines whether to broadcast state diffs to subscribed clients.
pub fn rollup_threshold(&self) -> usize
Determines how often to write rollups, assigning a maintenance task after rollup_threshold seqnos have passed since the last rollup.
Tuning note: in the absence of a long reader seqno hold, and with incremental GC, this threshold will determine about how many live diffs are held in Consensus. Lowering this value decreases the live diff count at the cost of more maintenance work + blob writes.
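A hedged sketch of when a rollup maintenance task would be requested under this threshold (hypothetical helper; seqnos represented here as u64):

```rust
// Sketch only: request a rollup once the current seqno has advanced
// `rollup_threshold()` past the seqno of the last written rollup.
fn needs_rollup(cfg: &DynamicConfig, current_seqno: u64, last_rollup_seqno: u64) -> bool {
    current_seqno.saturating_sub(last_rollup_seqno) >= cfg.rollup_threshold() as u64
}
```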
pub fn usage_state_fetch_concurrency_limit(&self) -> usize
The maximum number of concurrent state fetches during usage computation.
pub fn next_listen_batch_retry_params(&self) -> RetryParameters
Retry configuration for next_listen_batch.
pub fn set_compaction_memory_bound_bytes(&self, val: usize)
Trait Implementations
Auto Trait Implementations
impl RefUnwindSafe for DynamicConfig
impl Send for DynamicConfig
impl Sync for DynamicConfig
impl Unpin for DynamicConfig
impl UnwindSafe for DynamicConfig
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> FutureExt for T
fn with_context(self, otel_cx: Context) -> WithContext<Self>
fn with_current_context(self) -> WithContext<Self>
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wrap the input message T in a tonic::Request.
impl<P, R> ProtoType<R> for P where R: RustType<P>
fn into_rust(self) -> Result<R, TryFromProtoError>
See RustType::from_proto.
fn from_rust(rust: &R) -> P
See RustType::into_proto.