pub struct DynamicConfig {
    batch_builder_max_outstanding_parts: AtomicUsize,
    blob_target_size: AtomicUsize,
    blob_cache_mem_limit_bytes: AtomicUsize,
    compaction_heuristic_min_inputs: AtomicUsize,
    compaction_heuristic_min_parts: AtomicUsize,
    compaction_heuristic_min_updates: AtomicUsize,
    compaction_memory_bound_bytes: AtomicUsize,
    compaction_minimum_timeout: RwLock<Duration>,
    gc_blob_delete_concurrency_limit: AtomicUsize,
    state_versions_recent_live_diffs_limit: AtomicUsize,
    usage_state_fetch_concurrency_limit: AtomicUsize,
    consensus_connect_timeout: RwLock<Duration>,
    consensus_tcp_user_timeout: RwLock<Duration>,
    consensus_connection_pool_ttl: RwLock<Duration>,
    consensus_connection_pool_ttl_stagger: RwLock<Duration>,
    reader_lease_duration: RwLock<Duration>,
    sink_minimum_batch_updates: AtomicUsize,
    storage_sink_minimum_batch_updates: AtomicUsize,
    stats_audit_percent: AtomicUsize,
    stats_collection_enabled: AtomicBool,
    stats_filter_enabled: AtomicBool,
    stats_budget_bytes: AtomicUsize,
    stats_untrimmable_columns: RwLock<UntrimmableColumns>,
    pubsub_client_enabled: AtomicBool,
    pubsub_push_diff_enabled: AtomicBool,
    rollup_threshold: AtomicUsize,
    feature_flags: BTreeMap<&'static str, AtomicBool>,
    next_listen_batch_retryer: RwLock<RetryParameters>,
}

Persist configurations that can be dynamically updated.

Persist is expected to react to each of these values such that updating the value returned by a getter takes effect in persist (i.e., the value must not be cached). This should happen “as promptly as reasonably possible,” where “reasonably” is defined by the tradeoff between complexity and promptness. For example, a single compaction call might use a consistent view of Self::blob_target_size for its entire duration. However, an update should never require a process restart to take effect.

These are hooked up to LaunchDarkly. Specifically, LaunchDarkly configs are serialized into a PersistParameters. In environmentd, these are applied directly via PersistParameters::apply to the PersistConfig in crate::cache::PersistClientCache. There is one PersistClientCache per process, and every PersistConfig shares the same Arc<DynamicConfig>, so this affects all DynamicConfig usage in the process. PersistParameters is also sent via the compute and storage command streams, which apply it to all computed/storaged/clusterd processes as well.
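As a rough illustration of that update path (not persist’s actual API; the MiniDynamicConfig type and its methods below are hypothetical), a value shared behind an Arc and read with SeqCst ordering takes effect for every user in the process without a restart:

use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

// Hypothetical miniature of the pattern: plain values live in atomics,
// getters load with SeqCst, and a parameter update stores with SeqCst so
// every holder of the same Arc observes it promptly.
struct MiniDynamicConfig {
    blob_target_size: AtomicUsize,
}

impl MiniDynamicConfig {
    fn blob_target_size(&self) -> usize {
        self.blob_target_size.load(Ordering::SeqCst)
    }

    fn set_blob_target_size(&self, val: usize) {
        self.blob_target_size.store(val, Ordering::SeqCst);
    }
}

fn main() {
    let cfg = Arc::new(MiniDynamicConfig {
        blob_target_size: AtomicUsize::new(128 * 1024 * 1024),
    });
    // An update (e.g. derived from LaunchDarkly-sourced parameters) takes
    // effect for all users of the Arc without a process restart.
    cfg.set_blob_target_size(64 * 1024 * 1024);
    assert_eq!(cfg.blob_target_size(), 64 * 1024 * 1024);
}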


Implementations§

impl DynamicConfig

const LOAD_ORDERING: Ordering = Ordering::SeqCst

const STORE_ORDERING: Ordering = Ordering::SeqCst

pub fn enabled(&self, flag: PersistFeatureFlag) -> bool

pub fn set_feature_flag(&self, flag: PersistFeatureFlag, to: bool)

pub fn batch_builder_max_outstanding_parts(&self) -> usize

The maximum number of parts (s3 blobs) that crate::batch::BatchBuilder will pipeline before applying back-pressure to crate::batch::BatchBuilder::add calls until previous parts have finished.
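For intuition only, back-pressure on a bounded number of in-flight parts might look like the following synchronous sketch; the real batch builder is async, and the OutstandingParts type here is made up:

use std::sync::{Condvar, Mutex};

// Hypothetical sketch: block `acquire` (the analogue of `add`) while the
// number of in-flight part uploads is at the configured limit.
struct OutstandingParts {
    count: Mutex<usize>,
    cvar: Condvar,
}

impl OutstandingParts {
    fn acquire(&self, max_outstanding: usize) {
        let mut count = self.count.lock().unwrap();
        while *count >= max_outstanding {
            count = self.cvar.wait(count).unwrap();
        }
        *count += 1;
    }

    fn release(&self) {
        let mut count = self.count.lock().unwrap();
        *count -= 1;
        self.cvar.notify_one();
    }
}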

pub fn blob_target_size(&self) -> usize

A target maximum size of blob payloads in bytes. If a logical “batch” is bigger than this, it will be broken up into smaller, independent pieces. This is best-effort, not a guarantee (though as of 2022-06-09, we happen to always respect it). This target size doesn’t apply to an individual update that exceeds it in size, but that scenario is almost certainly a misuse of the system.
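As a sketch of that best-effort splitting (not persist’s actual batching code; the function and its types are hypothetical), a batch could be greedily packed into parts no larger than the target size, except for single oversized updates:

// Hypothetical: greedily pack encoded updates into parts of at most
// `target_size` bytes. An individual update larger than the target still
// becomes its own (oversized) part, matching the caveat above.
fn split_into_parts(updates: &[Vec<u8>], target_size: usize) -> Vec<Vec<Vec<u8>>> {
    let mut parts = Vec::new();
    let mut current: Vec<Vec<u8>> = Vec::new();
    let mut current_bytes = 0;
    for update in updates {
        if !current.is_empty() && current_bytes + update.len() > target_size {
            parts.push(std::mem::take(&mut current));
            current_bytes = 0;
        }
        current_bytes += update.len();
        current.push(update.clone());
    }
    if !current.is_empty() {
        parts.push(current);
    }
    parts
}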

pub fn blob_cache_mem_limit_bytes(&self) -> usize

Capacity of in-mem blob cache in bytes.

pub fn compaction_heuristic_min_inputs(&self) -> usize

In Compactor::compact_and_apply, we do the compaction (don’t skip it) if the number of inputs is at least this many. Compaction is performed if any of the heuristic criteria are met (they are OR’d).

pub fn compaction_heuristic_min_parts(&self) -> usize

In Compactor::compact_and_apply, we do the compaction (don’t skip it) if the number of batch parts is at least this many. Compaction is performed if any of the heuristic criteria are met (they are OR’d).

pub fn compaction_heuristic_min_updates(&self) -> usize

In Compactor::compact_and_apply, we do the compaction (don’t skip it) if the number of updates is at least this many. Compaction is performed if any of the heuristic criteria are met (they are OR’d).
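A minimal sketch of how the three OR’d heuristics above could gate compaction (the CompactionReq struct and should_compact function are hypothetical, not Compactor’s actual code):

// Hypothetical: compact if any of the thresholds is met.
struct CompactionReq {
    inputs: usize,
    parts: usize,
    updates: usize,
}

fn should_compact(
    req: &CompactionReq,
    min_inputs: usize,
    min_parts: usize,
    min_updates: usize,
) -> bool {
    req.inputs >= min_inputs || req.parts >= min_parts || req.updates >= min_updates
}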

pub fn compaction_memory_bound_bytes(&self) -> usize

The upper bound on compaction’s memory consumption. The value must be at least 4*blob_target_size. Increasing this value beyond the minimum allows compaction to merge together more runs at once, providing greater consolidation of updates, at the cost of greater memory usage.
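The documented constraint can be expressed as a simple check; this validation helper is illustrative only:

// Hypothetical: enforce compaction_memory_bound_bytes >= 4 * blob_target_size.
fn validate_compaction_memory_bound(
    memory_bound_bytes: usize,
    blob_target_size: usize,
) -> Result<(), String> {
    let min = blob_target_size.saturating_mul(4);
    if memory_bound_bytes < min {
        return Err(format!(
            "compaction_memory_bound_bytes ({memory_bound_bytes}) must be at least 4 * blob_target_size ({min})"
        ));
    }
    Ok(())
}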

pub fn compaction_minimum_timeout(&self) -> Duration

In Compactor::compact_and_apply_background, the minimum amount of time to allow a compaction request to run before timing it out. A request may be given a timeout greater than this value depending on the inputs’ size.

pub fn consensus_connection_pool_ttl(&self) -> Duration

The minimum TTL of a connection to Postgres/CRDB before it is proactively terminated. Connections are routinely culled to balance load against the downstream database.

pub fn consensus_connection_pool_ttl_stagger(&self) -> Duration

The minimum time between TTLing connections to Postgres/CRDB. This delay is used to stagger reconnections to avoid stampedes and high tail latencies. This value should be much less than consensus_connection_pool_ttl so that reconnections are biased towards terminating the oldest connections first. A value of consensus_connection_pool_ttl / consensus_connection_pool_max_size is likely a good place to start so that all connections are rotated when the pool is fully used.
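The suggested starting point works out to a simple division; this helper is a sketch, and consensus_connection_pool_max_size is taken from the guidance above rather than from this struct:

use std::time::Duration;

// Hypothetical: stagger TTL expirations so a fully used pool rotates all
// of its connections roughly once per TTL period.
fn suggested_ttl_stagger(ttl: Duration, pool_max_size: u32) -> Duration {
    ttl / pool_max_size.max(1)
}

// e.g. a 300s TTL with a pool of 30 connections suggests a 10s stagger.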

pub fn consensus_connect_timeout(&self) -> Duration

The duration to wait for a Consensus Postgres/CRDB connection to be made before retrying.

pub fn consensus_tcp_user_timeout(&self) -> Duration

The TCP user timeout for a Consensus Postgres/CRDB connection. Specifies the amount of time that transmitted data may remain unacknowledged before the TCP connection is forcibly closed.

pub fn reader_lease_duration(&self) -> Duration

The amount of time that must elapse after a reader’s last operation before the reader may be expired.

pub fn set_reader_lease_duration(&self, d: Duration)

Sets the amount of time that must elapse after a reader’s last operation before the reader may be expired.

pub fn gc_blob_delete_concurrency_limit(&self) -> usize

The maximum number of concurrent blob deletes during garbage collection.

pub fn state_versions_recent_live_diffs_limit(&self) -> usize

The number of diffs to initially scan when fetching the latest consensus state, used to determine which requests take the fast vs. slow path. It should be large enough to fetch all live diffs in the steady state, and small enough to query Consensus at high volume. Steady-state usage should accommodate readers that require seqno-holds for reasonable amounts of time, which to start we define as tens of minutes.

This value ought to be defined in terms of NEED_ROLLUP_THRESHOLD to approximate when we expect rollups to be written and therefore when old states will be truncated by GC.

pub fn stats_audit_percent(&self) -> usize

Percent of filtered data to opt in to correctness auditing.

pub fn stats_collection_enabled(&self) -> bool

Computes and stores statistics about each batch part.

These can be used at read time to entirely skip fetching a part based on its statistics. See Self::stats_filter_enabled.

pub fn stats_filter_enabled(&self) -> bool

Uses previously computed statistics about batch parts to entirely skip fetching them at read time.

See Self::stats_collection_enabled.
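For intuition, stats-based part skipping might look like the following sketch; the PartStats type, its fields, and the u64 key are all hypothetical simplifications, not persist’s real column statistics:

// Hypothetical: fetch a part only if its min/max key stats admit the
// filter key; without stats (or with filtering disabled) always fetch.
struct PartStats {
    key_min: u64,
    key_max: u64,
}

fn must_fetch_part(stats: Option<&PartStats>, filter_key: u64, filter_enabled: bool) -> bool {
    match (filter_enabled, stats) {
        (false, _) | (true, None) => true,
        (true, Some(s)) => filter_key >= s.key_min && filter_key <= s.key_max,
    }
}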

pub fn stats_budget_bytes(&self) -> usize

The budget (in bytes) of how many stats to write down per batch part. When the budget is exceeded, stats will be trimmed away according to a variety of heuristics.

pub fn stats_untrimmable_columns(&self) -> UntrimmableColumns

The stats columns that will never be trimmed, even if they go over budget.
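A rough sketch of budget-based trimming that respects untrimmable columns (representing per-column stats as encoded byte vectors is a stand-in here, not persist’s actual stats format):

// Hypothetical: drop trimmable column stats until the total encoded size
// fits the budget; columns named in `untrimmable` are never dropped.
fn trim_stats(cols: &mut Vec<(String, Vec<u8>)>, budget_bytes: usize, untrimmable: &[String]) {
    let total = |cols: &Vec<(String, Vec<u8>)>| cols.iter().map(|(_, s)| s.len()).sum::<usize>();
    while total(cols) > budget_bytes {
        match cols.iter().position(|(name, _)| !untrimmable.contains(name)) {
            Some(idx) => {
                cols.remove(idx);
            }
            None => break,
        }
    }
}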

pub fn pubsub_client_enabled(&self) -> bool

Determines whether PubSub clients should connect to the PubSub server.

pub fn pubsub_push_diff_enabled(&self) -> bool

For connected clients, determines whether to push state diffs to the PubSub server. For the server, determines whether to broadcast state diffs to subscribed clients.

pub fn rollup_threshold(&self) -> usize

Determines how often to write rollups, assigning a maintenance task after rollup_threshold seqnos have passed since the last rollup.

Tuning note: in the absence of a long reader seqno hold, and with incremental GC, this threshold will determine about how many live diffs are held in Consensus. Lowering this value decreases the live diff count at the cost of more maintenance work + blob writes.
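The threshold check itself is simple; this sketch treats seqnos as plain u64s, which is a simplification:

// Hypothetical: request a rollup once `rollup_threshold` seqnos have
// passed since the seqno of the most recent rollup.
fn needs_rollup(current_seqno: u64, latest_rollup_seqno: u64, rollup_threshold: u64) -> bool {
    current_seqno.saturating_sub(latest_rollup_seqno) >= rollup_threshold
}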

pub fn usage_state_fetch_concurrency_limit(&self) -> usize

The maximum number of concurrent state fetches during usage computation.

pub fn next_listen_batch_retry_params(&self) -> RetryParameters

Retry configuration for next_listen_batch.
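As a sketch of what such retry parameters typically control (the RetryBackoff type below is hypothetical and not persist’s RetryParameters), an exponential backoff with a clamp might look like:

use std::time::Duration;

// Hypothetical: exponential backoff, clamped to an upper bound, as is
// typical for a polling loop such as next_listen_batch.
struct RetryBackoff {
    initial_backoff: Duration,
    multiplier: u32,
    clamp: Duration,
}

impl RetryBackoff {
    fn backoff(&self, attempt: u32) -> Duration {
        let mut backoff = self.initial_backoff;
        for _ in 0..attempt {
            backoff = (backoff * self.multiplier).min(self.clamp);
        }
        backoff
    }
}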

pub fn set_compaction_memory_bound_bytes(&self, val: usize)

Trait Implementations§

impl Debug for DynamicConfig

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

Auto Trait Implementations§

Blanket Implementations§

impl<T> Any for T where T: 'static + ?Sized
impl<T> Borrow<T> for T where T: ?Sized
impl<T> BorrowMut<T> for T where T: ?Sized
impl<T> From<T> for T
impl<T> FutureExt for T
impl<T> Instrument for T
impl<T, U> Into<U> for T where U: From<T>
impl<T> IntoRequest<T> for T
impl<P, R> ProtoType<R> for P where R: RustType<P>
impl<T> Same<T> for T
impl<T, U> TryFrom<U> for T where U: Into<T> (type Error = Infallible)
impl<T, U> TryInto<U> for T where U: TryFrom<T> (type Error = <U as TryFrom<T>>::Error)
impl<V, T> VZip<V> for T where V: MultiLane<T>
impl<T> WithSubscriber for T