pub enum ComputeCommand<T = Timestamp> {
    CreateTimely {
        config: TimelyConfig,
        epoch: ClusterStartupEpoch,
    },
    CreateInstance(InstanceConfig),
    InitializationComplete,
    AllowWrites,
    UpdateConfiguration(ComputeParameters),
    CreateDataflow(DataflowDescription<FlatPlan<T>, CollectionMetadata, T>),
    Schedule(GlobalId),
    AllowCompaction {
        id: GlobalId,
        frontier: Antichain<T>,
    },
    Peek(Peek<T>),
    CancelPeek {
        uuid: Uuid,
    },
}
Compute protocol commands, sent by the compute controller to replicas.
Command sequences sent by the compute controller must be valid according to the Protocol Stages.
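As a rough illustration of these stages, the following self-contained sketch checks the ordering of a command sequence. CommandKind is a hypothetical stand-in for this enum's variants, not a type from this crate.

// Hypothetical stand-in for the ComputeCommand variants, used only to
// illustrate the ordering imposed by the protocol stages.
#[derive(Debug, PartialEq)]
enum CommandKind {
    CreateTimely,
    CreateInstance,
    InitializationComplete,
    Computation, // CreateDataflow, Schedule, Peek, AllowCompaction, ...
}

// A sequence is plausible only if it opens with the Creation Stage
// (CreateTimely, then CreateInstance) and announces the end of the
// Initialization Stage exactly once.
fn is_plausible_iteration(commands: &[CommandKind]) -> bool {
    let starts_with_creation = matches!(
        commands,
        [CommandKind::CreateTimely, CommandKind::CreateInstance, ..]
    );
    let completes_once = commands
        .iter()
        .filter(|c| **c == CommandKind::InitializationComplete)
        .count()
        == 1;
    starts_with_creation && completes_once
}

fn main() {
    let iteration = vec![
        CommandKind::CreateTimely,
        CommandKind::CreateInstance,
        CommandKind::Computation, // replayed dataflows during initialization
        CommandKind::InitializationComplete,
        CommandKind::Computation, // regular computation-stage traffic
    ];
    assert!(is_plausible_iteration(&iteration));
}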
Variants
CreateTimely
CreateTimely is the first command sent to a replica after a connection was established. It instructs the replica to initialize the timely dataflow runtime using the given config.
This command is special in that it is broadcast to all workers of a multi-worker replica. All subsequent commands, except UpdateConfiguration, are only sent to the first worker, which then distributes them to the other workers using a dataflow. This method of command distribution requires the timely dataflow runtime to be initialized, which is why the CreateTimely command exists.
The epoch value imposes an ordering on iterations of the compute protocol. When the compute controller connects to a replica, it must send an epoch that is greater than all epochs it sent to the same replica on previous connections. Multi-process replicas should use the epoch to ensure that their individual processes agree on which protocol iteration they are in.
Fields
config: TimelyConfig
TODO(database-issues#7533): Add documentation.
epoch: ClusterStartupEpoch
TODO(database-issues#7533): Add documentation.
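The epoch requirement can be sketched as follows. Epoch and ControllerSideReplica are simplified stand-ins for ClusterStartupEpoch and the controller's per-replica bookkeeping, not types from this crate; they only illustrate the "strictly greater on every new connection" rule.

// Simplified stand-in for ClusterStartupEpoch; the real type carries more structure.
#[derive(Clone, Copy, PartialEq, PartialOrd, Debug)]
struct Epoch(u64);

// Hypothetical controller-side bookkeeping for a single replica.
struct ControllerSideReplica {
    last_epoch: Option<Epoch>,
}

impl ControllerSideReplica {
    // Pick the epoch for a new connection: strictly greater than every epoch
    // previously sent to this replica.
    fn next_epoch(&mut self) -> Epoch {
        let next = Epoch(self.last_epoch.map_or(0, |e| e.0 + 1));
        self.last_epoch = Some(next);
        next
    }
}

fn main() {
    let mut replica = ControllerSideReplica { last_epoch: None };
    let first = replica.next_epoch();
    let second = replica.next_epoch();
    assert!(second > first); // later connections always carry a larger epoch
}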
CreateInstance(InstanceConfig)
CreateInstance must be sent after CreateTimely to complete the Creation Stage of the compute protocol. Unlike CreateTimely, it is only sent to the first worker of the replica, and then distributed through the timely runtime. CreateInstance instructs the replica to initialize its state to a point where it is ready to start maintaining dataflows.
Upon receiving a CreateInstance command, the replica must further initialize logging dataflows according to the given LoggingConfig.
InitializationComplete
InitializationComplete informs the replica about the end of the Initialization Stage.
Upon receiving this command, the replica should perform a reconciliation process, to ensure its dataflow state matches the state requested by the computation commands it received previously. The replica must now start sending responses to commands received previously, if it opted to defer them during the Initialization Stage.
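A minimal sketch of the deferral behavior described above, using hypothetical stand-in types rather than this crate's: responses buffered during the Initialization Stage are flushed once InitializationComplete arrives.

// Hypothetical stand-ins: a response and a replica that may defer responses
// during the Initialization Stage.
#[derive(Debug)]
struct Response(String);

struct Replica {
    initialization_complete: bool,
    deferred: Vec<Response>,
}

impl Replica {
    // During initialization, responses may be buffered instead of sent.
    fn produce(&mut self, response: Response, outbox: &mut Vec<Response>) {
        if self.initialization_complete {
            outbox.push(response);
        } else {
            self.deferred.push(response);
        }
    }

    // On InitializationComplete, reconcile state (elided here) and flush
    // everything that was deferred.
    fn initialization_complete(&mut self, outbox: &mut Vec<Response>) {
        self.initialization_complete = true;
        outbox.append(&mut self.deferred);
    }
}

fn main() {
    let mut replica = Replica { initialization_complete: false, deferred: Vec::new() };
    let mut outbox = Vec::new();
    replica.produce(Response("frontier update".into()), &mut outbox);
    assert!(outbox.is_empty()); // deferred during the Initialization Stage
    replica.initialization_complete(&mut outbox);
    assert_eq!(outbox.len(), 1); // flushed once initialization ends
}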
AllowWrites
AllowWrites informs the replica that it can transition out of the read-only computation stage and into the read-write computation stage. It is now allowed to effect changes to external systems (writes).
After initialization is complete, an instance starts out in the read-only computation stage. Only when receiving this command will it leave that stage and allow running operations to perform writes.
An instance that has once been told that it can go into read-write mode can never transition back to read-only mode. It is okay for a read-only controller to re-connect to an instance that is already in read-write mode: someone has already told the instance that it is okay to write, and there is no way in the protocol to transition an instance back to read-only mode.
NOTE: We don’t have a protocol in place that allows writes only after a certain, controller-determined timestamp. Such a protocol would allow tighter control and could allow the instance to avoid work. However, it is more work to put in place the logic for that, so we leave it as future work for now.
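A sketch of the one-way transition, assuming a simple hypothetical read_only flag on the instance state (the real state is more involved):

// Hypothetical read-only latch; not this crate's actual instance state.
struct InstanceState {
    read_only: bool,
}

impl InstanceState {
    fn new() -> Self {
        // After initialization, an instance starts out read-only.
        InstanceState { read_only: true }
    }

    // AllowWrites is a one-way door: no command sets `read_only` back to true.
    fn allow_writes(&mut self) {
        self.read_only = false;
    }

    fn may_write_to_external_systems(&self) -> bool {
        !self.read_only
    }
}

fn main() {
    let mut instance = InstanceState::new();
    assert!(!instance.may_write_to_external_systems());
    instance.allow_writes();
    // Re-sending AllowWrites (e.g. after a controller reconnect) is harmless.
    instance.allow_writes();
    assert!(instance.may_write_to_external_systems());
}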
UpdateConfiguration(ComputeParameters)
UpdateConfiguration instructs the replica to update its configuration, according to the given ComputeParameters.
This command is special in that, like CreateTimely, it is broadcast to all workers of the replica. However, unlike CreateTimely, it is ignored by all workers except the first one, which distributes the command to the other workers through the timely runtime. UpdateConfiguration commands are broadcast only to allow the intermediary parts of the networking fabric to observe them and learn of configuration updates.
Parameter updates transmitted through this command must be applied by the replica as soon as it receives the command, and they must be applied globally to all replica state, even dataflows and pending peeks that were created before the parameter update. This property allows the replica to hoist UpdateConfiguration commands during reconciliation.
Configuration parameters that should not be applied globally, but only to specific dataflows or peeks, should be added to the DataflowDescription or Peek types, rather than as ComputeParameters.
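The broadcast behavior can be sketched as follows; Params is a hypothetical stand-in for ComputeParameters, and the Vec stands in for the intra-replica command-distribution dataflow.

// Sketch only: every worker receives the broadcast command, but only worker 0
// acts on it and forwards it to its peers through the timely runtime.
#[derive(Clone, Debug)]
struct Params {
    max_result_size: u64,
}

fn on_update_configuration(worker_index: usize, params: Params, intra_replica_tx: &mut Vec<Params>) {
    if worker_index != 0 {
        // Workers other than the first ignore the broadcast copy; they will
        // receive the update again via the command-distribution dataflow.
        return;
    }
    // Worker 0 applies the update and re-distributes it (represented here by
    // pushing into a queue standing in for the timely dataflow channel).
    intra_replica_tx.push(params);
}

fn main() {
    let params = Params { max_result_size: 1 << 20 };
    let mut channel = Vec::new();
    for worker in 0..4 {
        on_update_configuration(worker, params.clone(), &mut channel);
    }
    assert_eq!(channel.len(), 1); // only worker 0 forwarded the update
}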
CreateDataflow(DataflowDescription<FlatPlan<T>, CollectionMetadata, T>)
CreateDataflow instructs the replica to create a dataflow according to the given DataflowDescription.
The DataflowDescription must have the following properties:
- Dataflow imports are valid:
  - Imported storage collections specified in source_imports exist and are readable by the compute replica.
  - Imported indexes specified in index_imports have been created on the replica previously, by previous CreateDataflow commands.
- Dataflow imports are readable at the specified as_of. In other words: the sinces of imported collections are not beyond the dataflow as_of.
- Dataflow exports have unique IDs, i.e., the IDs of exports from dataflows a replica is instructed to create do not repeat (within a single protocol iteration).
- The dataflow objects defined in objects_to_build are topologically ordered according to the dependency relation.
A dataflow description that violates any of the above properties can cause the replica to exhibit undefined behavior, such as panicking or production of incorrect results. A replica should prefer panicking over producing incorrect results.
After receiving a CreateDataflow command, if the created dataflow exports indexes or storage sinks, the replica must produce Frontiers responses that report the advancement of the frontiers of these compute collections.
After receiving a CreateDataflow command, if the created dataflow exports subscribes, the replica must produce SubscribeResponses that report the progress and results of the subscribes.
The replica may create the dataflow in a suspended state and defer starting the computation until it receives a corresponding Schedule command. Thus, to ensure dataflow execution, the compute controller should eventually send a Schedule command for each sent CreateDataflow command.
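A controller-side sketch of this pairing, with u64 identifiers standing in for GlobalId and all actual dataflow construction elided:

use std::collections::HashSet;

type Id = u64; // stand-in for GlobalId

// Hypothetical controller bookkeeping: every created dataflow export should
// eventually be scheduled, and only created exports may be scheduled.
#[derive(Default)]
struct ControllerBookkeeping {
    created: HashSet<Id>,
    scheduled: HashSet<Id>,
}

impl ControllerBookkeeping {
    fn create_dataflow(&mut self, export: Id) {
        self.created.insert(export);
    }

    fn schedule(&mut self, export: Id) {
        // Only collections created by a previous CreateDataflow may be scheduled.
        assert!(self.created.contains(&export));
        self.scheduled.insert(export);
    }

    // Dataflows created but never scheduled may stay suspended forever.
    fn unscheduled(&self) -> impl Iterator<Item = &Id> {
        self.created.difference(&self.scheduled)
    }
}

fn main() {
    let mut ctl = ControllerBookkeeping::default();
    ctl.create_dataflow(7);
    assert_eq!(ctl.unscheduled().count(), 1);
    ctl.schedule(7);
    assert_eq!(ctl.unscheduled().count(), 0);
}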
Schedule(GlobalId)
Schedule allows the replica to start computation for a compute collection.
It is invalid to send a Schedule command that references a collection that was not created by a corresponding CreateDataflow command before. Doing so may cause the replica to exhibit undefined behavior.
It is also invalid to send a Schedule command that references a collection that has, through an AllowCompaction command, been allowed to compact to the empty frontier before.
AllowCompaction
AllowCompaction informs the replica about the relaxation of external read capabilities on a compute collection exported by one of the replica’s dataflows.
The command names a collection and provides a frontier after which accumulations must be correct. The replica gains the liberty of compacting the corresponding maintained trace up through that frontier.
It is invalid to send an AllowCompaction command that references a compute collection that was not created by a corresponding CreateDataflow command before. Doing so may cause the replica to exhibit undefined behavior.
The AllowCompaction command only informs about external read requirements, not internal ones. The replica is responsible for ensuring that internal requirements are fulfilled at all times, so local dataflow inputs are not compacted beyond times at which they are still being read from.
The read frontiers transmitted through AllowCompactions may be beyond the corresponding collections’ current upper frontiers. This signals that external readers are not interested in times up to the specified new read frontiers. Consequently, an empty read frontier signals that external readers are not interested in updates from the corresponding collection ever again, so the collection is not required anymore.
Sending an AllowCompaction command with the empty frontier is the canonical way to drop compute collections.
A replica that receives an AllowCompaction command with the empty frontier must eventually respond with Frontiers responses reporting empty frontiers for the same collection. (#16271)
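A replica-side sketch of this behavior, with Vec<u64> standing in for Antichain<T> (an empty vector playing the role of the empty frontier); these are illustrative stand-ins, not this crate's types.

use std::collections::HashMap;

type Id = u64;               // stand-in for GlobalId
type Frontier = Vec<u64>;    // stand-in for Antichain<T>; empty == the empty frontier

// Hypothetical per-collection bookkeeping of external read frontiers.
#[derive(Default)]
struct ReplicaCollections {
    read_frontiers: HashMap<Id, Frontier>,
}

impl ReplicaCollections {
    fn allow_compaction(&mut self, id: Id, frontier: Frontier) {
        if frontier.is_empty() {
            // The empty frontier is the canonical way to drop a collection:
            // nobody will ever read it again. The replica must still eventually
            // report empty frontiers for `id` in a Frontiers response (elided).
            self.read_frontiers.remove(&id);
        } else {
            // Otherwise, record the relaxed external read requirement; the
            // maintained trace may now compact up through `frontier`.
            self.read_frontiers.insert(id, frontier);
        }
    }
}

fn main() {
    let mut replica = ReplicaCollections::default();
    replica.allow_compaction(7, vec![100]);
    assert!(replica.read_frontiers.contains_key(&7));
    replica.allow_compaction(7, vec![]); // drop the collection
    assert!(!replica.read_frontiers.contains_key(&7));
}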
Peek(Peek<T>)
Peek instructs the replica to perform a peek on a collection: either an index or a Persist-backed collection.
The Peek description must have the following properties:
- If targeting an index, it has previously been created by a corresponding CreateDataflow command. (If targeting a persist collection, that collection should exist.)
- The Peek::uuid is unique, i.e., the UUIDs of peeks a replica gets instructed to perform do not repeat (within a single protocol iteration).
A Peek description that violates any of the above properties can cause the replica to exhibit undefined behavior.
Specifying a Peek::timestamp that is less than the target index’s since frontier does not provoke undefined behavior. Instead, the replica must produce a PeekResponse::Error in response.
After receiving a Peek command, the replica must eventually produce a single PeekResponse.
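A sketch of the uniqueness and single-response requirements, with a plain u128 standing in for Uuid and the response payloads elided; the types are hypothetical stand-ins, not this crate's.

use std::collections::HashMap;

type Uuid = u128; // stand-in for the real Uuid type

#[derive(Debug, PartialEq)]
enum PeekOutcome {
    Fulfilled,
    Error,
    Canceled,
}

// Hypothetical replica-side bookkeeping of peeks that still await a response.
#[derive(Default)]
struct PendingPeeks {
    pending: HashMap<Uuid, ()>,
}

impl PendingPeeks {
    fn start(&mut self, uuid: Uuid) {
        // Peek UUIDs must not repeat within a protocol iteration.
        assert!(self.pending.insert(uuid, ()).is_none());
    }

    // A peek resolves at most once; later attempts find nothing to resolve.
    fn resolve(&mut self, uuid: Uuid, outcome: PeekOutcome) -> Option<PeekOutcome> {
        self.pending.remove(&uuid).map(|()| outcome)
    }
}

fn main() {
    let mut peeks = PendingPeeks::default();
    peeks.start(42);
    assert_eq!(peeks.resolve(42, PeekOutcome::Fulfilled), Some(PeekOutcome::Fulfilled));
    // A second response for the same peek must not be produced.
    assert_eq!(peeks.resolve(42, PeekOutcome::Canceled), None);
}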
CancelPeek
CancelPeek instructs the replica to cancel the identified pending peek.
It is invalid to send a CancelPeek command that references a peek that was not created by a corresponding Peek command before. Doing so may cause the replica to exhibit undefined behavior.
If a replica cancels a peek in response to a CancelPeek command, it must respond with a PeekResponse::Canceled. The replica may also decide to fulfill the peek instead and return a different PeekResponse, or it may already have returned a response to the specified peek. In these cases it must not return another PeekResponse.
Fields
uuid: Uuid
The identifier of the peek request to cancel.
This value must match a Peek::uuid value transmitted in a previous Peek command.
Trait Implementations
impl Arbitrary for ComputeCommand<Timestamp>
type Strategy = Union<BoxedStrategy<ComputeCommand>>
The Strategy used to generate values of type Self.
type Parameters = ()
The type of parameters that arbitrary_with accepts for configuration of the generated Strategy. Parameters must implement Default.
fn arbitrary_with(_: Self::Parameters) -> Self::Strategy
impl<T: Clone> Clone for ComputeCommand<T>
fn clone(&self) -> ComputeCommand<T>
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl<T: Debug> Debug for ComputeCommand<T>
impl<'de, T> Deserialize<'de> for ComputeCommand<T> where T: Deserialize<'de>
fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
impl<T: Send> GenericClient<ComputeCommand<T>, ComputeResponse<T>> for Box<dyn ComputeClient<T>>
fn recv<'life0, 'async_trait>(&'life0 mut self) -> Pin<Box<dyn Future<Output = Result<Option<ComputeResponse<T>>, Error>> + Send + 'async_trait>> where Self: 'async_trait, 'life0: 'async_trait
Cancel safety
This method is cancel safe. If recv is used as the event in a tokio::select! statement and some other branch completes first, it is guaranteed that no messages were received by this client.
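A usage sketch of this guarantee, assuming a tokio runtime is available; DummyClient is a hypothetical stand-in with a recv method shaped like the one above, not this crate's client.

use tokio::time::{interval, Duration};

// Stand-in client whose recv returns the next buffered message, if any.
struct DummyClient {
    inbox: Vec<&'static str>,
}

impl DummyClient {
    async fn recv(&mut self) -> Result<Option<&'static str>, String> {
        Ok(self.inbox.pop())
    }
}

#[tokio::main]
async fn main() {
    let mut client = DummyClient { inbox: vec!["response"] };
    let mut ticker = interval(Duration::from_millis(10));

    // Because recv is cancel safe, losing the race to the timer branch does
    // not lose a message: the next loop iteration can call recv again.
    for _ in 0..2 {
        tokio::select! {
            response = client.recv() => {
                println!("got {:?}", response);
            }
            _ = ticker.tick() => {
                println!("doing periodic work instead");
            }
        }
    }
}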
impl<T> GenericClient<ComputeCommand<T>, ComputeResponse<T>> for SequentialHydration<T> where T: ComputeControllerTimestamp
fn recv<'life0, 'async_trait>(&'life0 mut self) -> Pin<Box<dyn Future<Output = Result<Option<ComputeResponse<T>>, Error>> + Send + 'async_trait>> where Self: 'async_trait, 'life0: 'async_trait
Cancel safety
This method is cancel safe. If recv is used as the event in a tokio::select! statement and some other branch completes first, it is guaranteed that no messages were received by this client.
impl<T: PartialEq> PartialEq for ComputeCommand<T>
impl<T> Partitionable<ComputeCommand<T>, ComputeResponse<T>> for (ComputeCommand<T>, ComputeResponse<T>) where T: ComputeControllerTimestamp
type PartitionedState = PartitionedComputeState<T>
fn new(parts: usize) -> PartitionedComputeState<T>
Construct a PartitionedState for the command–response pair.
impl<T> PartitionedState<ComputeCommand<T>, ComputeResponse<T>> for PartitionedComputeState<T> where T: ComputeControllerTimestamp
fn split_command(&mut self, command: ComputeCommand<T>) -> Vec<Option<ComputeCommand<T>>>
fn absorb_response(&mut self, shard_id: usize, message: ComputeResponse<T>) -> Option<Result<ComputeResponse<T>, Error>>
impl RustType<ProtoComputeCommand> for ComputeCommand<Timestamp>
fn into_proto(&self) -> ProtoComputeCommand
Convert a Self into a Proto value.
fn from_proto(proto: ProtoComputeCommand) -> Result<Self, TryFromProtoError>
fn into_proto_owned(self) -> Proto
A variant of Self::into_proto that types can optionally implement; otherwise, the default implementation delegates to Self::into_proto.
impl<T> Serialize for ComputeCommand<T> where T: Serialize
impl TryIntoTimelyConfig for ComputeCommand
fn try_into_timely_config(self) -> Result<(TimelyConfig, ClusterStartupEpoch), Self>
Attempt to unpack self into a (TimelyConfig, ClusterStartupEpoch). Otherwise, fail and return self back.
impl<T> StructuralPartialEq for ComputeCommand<T>
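The unpack-or-return-self pattern can be sketched with simplified stand-ins for ComputeCommand, TimelyConfig, and ClusterStartupEpoch:

// Stand-in types only; the real method operates on ComputeCommand.
#[derive(Debug)]
enum Command {
    CreateTimely { config: String, epoch: u64 },
    Other,
}

fn try_into_timely_config(cmd: Command) -> Result<(String, u64), Command> {
    match cmd {
        Command::CreateTimely { config, epoch } => Ok((config, epoch)),
        other => Err(other), // not a CreateTimely: hand the command back
    }
}

fn main() {
    let cmd = Command::CreateTimely { config: "timely config".into(), epoch: 1 };
    assert!(try_into_timely_config(cmd).is_ok());
    assert!(try_into_timely_config(Command::Other).is_err());
}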
Auto Trait Implementations
impl<T = Timestamp> !Freeze for ComputeCommand<T>
impl<T> RefUnwindSafe for ComputeCommand<T> where T: RefUnwindSafe
impl<T> Send for ComputeCommand<T> where T: Send
impl<T> Sync for ComputeCommand<T> where T: Sync
impl<T> Unpin for ComputeCommand<T> where T: Unpin
impl<T> UnwindSafe for ComputeCommand<T> where T: UnwindSafe + RefUnwindSafe
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T where T: Clone
unsafe fn clone_to_uninit(&self, dst: *mut T)
impl<T> FmtForward for T
fn fmt_binary(self) -> FmtBinary<Self> where Self: Binary
Causes self to use its Binary implementation when Debug-formatted.
fn fmt_display(self) -> FmtDisplay<Self> where Self: Display
Causes self to use its Display implementation when Debug-formatted.
fn fmt_lower_exp(self) -> FmtLowerExp<Self> where Self: LowerExp
Causes self to use its LowerExp implementation when Debug-formatted.
fn fmt_lower_hex(self) -> FmtLowerHex<Self> where Self: LowerHex
Causes self to use its LowerHex implementation when Debug-formatted.
fn fmt_octal(self) -> FmtOctal<Self> where Self: Octal
Causes self to use its Octal implementation when Debug-formatted.
fn fmt_pointer(self) -> FmtPointer<Self> where Self: Pointer
Causes self to use its Pointer implementation when Debug-formatted.
fn fmt_upper_exp(self) -> FmtUpperExp<Self> where Self: UpperExp
Causes self to use its UpperExp implementation when Debug-formatted.
fn fmt_upper_hex(self) -> FmtUpperHex<Self> where Self: UpperHex
Causes self to use its UpperHex implementation when Debug-formatted.
impl<T> FutureExt for T
fn with_context(self, otel_cx: Context) -> WithContext<Self>
fn with_current_context(self) -> WithContext<Self>
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wrap the input message T in a tonic::Request.
impl<T, U> OverrideFrom<Option<&T>> for U where U: OverrideFrom<T>
impl<T> Pipe for T where T: ?Sized
fn pipe<R>(self, func: impl FnOnce(Self) -> R) -> R where Self: Sized
fn pipe_ref<'a, R>(&'a self, func: impl FnOnce(&'a Self) -> R) -> R where R: 'a
Borrows self and passes that borrow into the pipe function.
fn pipe_ref_mut<'a, R>(&'a mut self, func: impl FnOnce(&'a mut Self) -> R) -> R where R: 'a
Mutably borrows self and passes that borrow into the pipe function.
fn pipe_borrow<'a, B, R>(&'a self, func: impl FnOnce(&'a B) -> R) -> R
fn pipe_borrow_mut<'a, B, R>(&'a mut self, func: impl FnOnce(&'a mut B) -> R) -> R
fn pipe_as_ref<'a, U, R>(&'a self, func: impl FnOnce(&'a U) -> R) -> R
Borrows self, then passes self.as_ref() into the pipe function.
fn pipe_as_mut<'a, U, R>(&'a mut self, func: impl FnOnce(&'a mut U) -> R) -> R
Mutably borrows self, then passes self.as_mut() into the pipe function.
fn pipe_deref<'a, T, R>(&'a self, func: impl FnOnce(&'a T) -> R) -> R
Borrows self, then passes self.deref() into the pipe function.
impl<T> Pointable for T
impl<T> ProgressEventTimestamp for T
impl<P, R> ProtoType<R> for P where R: RustType<P>
fn into_rust(self) -> Result<R, TryFromProtoError>
See RustType::from_proto.
fn from_rust(rust: &R) -> P
See RustType::into_proto.
impl<'a, S, T> Semigroup<&'a S> for T where T: Semigroup<S>
fn plus_equals(&mut self, rhs: &&'a S)
The method of std::ops::AddAssign, for types that do not implement AddAssign.
impl<T> Tap for T
fn tap_borrow<B>(self, func: impl FnOnce(&B)) -> Self
Immutable access to the Borrow<B> of a value.
fn tap_borrow_mut<B>(self, func: impl FnOnce(&mut B)) -> Self
Mutable access to the BorrowMut<B> of a value.
fn tap_ref<R>(self, func: impl FnOnce(&R)) -> Self
Immutable access to the AsRef<R> view of a value.
fn tap_ref_mut<R>(self, func: impl FnOnce(&mut R)) -> Self
Mutable access to the AsMut<R> view of a value.
fn tap_deref<T>(self, func: impl FnOnce(&T)) -> Self
Immutable access to the Deref::Target of a value.
fn tap_deref_mut<T>(self, func: impl FnOnce(&mut T)) -> Self
Mutable access to the Deref::Target of a value.
fn tap_dbg(self, func: impl FnOnce(&Self)) -> Self
Calls .tap() only in debug builds, and is erased in release builds.
fn tap_mut_dbg(self, func: impl FnOnce(&mut Self)) -> Self
Calls .tap_mut() only in debug builds, and is erased in release builds.
fn tap_borrow_dbg<B>(self, func: impl FnOnce(&B)) -> Self
Calls .tap_borrow() only in debug builds, and is erased in release builds.
fn tap_borrow_mut_dbg<B>(self, func: impl FnOnce(&mut B)) -> Self
Calls .tap_borrow_mut() only in debug builds, and is erased in release builds.
fn tap_ref_dbg<R>(self, func: impl FnOnce(&R)) -> Self
Calls .tap_ref() only in debug builds, and is erased in release builds.
fn tap_ref_mut_dbg<R>(self, func: impl FnOnce(&mut R)) -> Self
Calls .tap_ref_mut() only in debug builds, and is erased in release builds.
fn tap_deref_dbg<T>(self, func: impl FnOnce(&T)) -> Self
Calls .tap_deref() only in debug builds, and is erased in release builds.