// Copyright Materialize, Inc. and contributors. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License in the LICENSE file at the
// root of this repository, or online at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

//! ## Notation
//!
//! Collections are represented with capital letters (T, S, R), collection traces as bold letters
//! (𝐓, 𝐒, 𝐑), and difference traces as δ𝐓.
//!
//! Indexing a collection trace 𝐓 to obtain its version at `t` is written as 𝐓(t). Indexing a
//! collection to obtain the multiplicity of a record `x` is written as T\[x\]. These can be combined
//! to obtain the multiplicity of a record `x` at some version `t` as 𝐓(t)\[x\].
//!
//! ## Overview
//!
//! Reclocking transforms a source collection `S` that evolves with some timestamp `FromTime` into
//! a collection `T` that evolves with some other timestamp `IntoTime`. The reclocked collection T
//! contains all updates `u ∈ S` that are not beyond some `FromTime` frontier R(t). The collection
//! `R` is called the remap collection.
//!
//! More formally, for some arbitrary time `t` of `IntoTime` and some arbitrary record `x`, the
//! reclocked collection `T(t)[x]` is defined to be the `sum{δ𝐒(s)[x]: !(𝐑(t) ⪯ s)}`. Since this
//! holds for any record we can write the definition of Reclock(𝐒, 𝐑) as:
//!
//! > Reclock(𝐒, 𝐑) ≜ 𝐓: ∀ t ∈ IntoTime : 𝐓(t) = sum{δ𝐒(s): !(𝐑(t) ⪯ s)}
//!
//! In order for the reclocked collection `T` to have a sensible definition of progress we require
//! that `t1 ≤ t2 ⇒ 𝐑(t1) ⪯ 𝐑(t2)`, where the first `≤` is the partial order of `IntoTime` and the
//! second one the partial order of `FromTime` antichains.
//!
//! ## Total order simplification
//!
//! In order to simplify the implementation we will require that `IntoTime` is a total order. This
//! limitation can be lifted in the future but further elaboration on the mechanics of reclocking
//! is required to ensure a correct implementation.
//!
//! ## The difference trace
//!
//! By the definition of difference traces we have:
//!
//! ```text
//!     δ𝐓(t) = 𝐓(t) - sum{δ𝐓(s): s < t}
//! ```
//!
//! Due to the total order assumption we only need to consider two cases.
//!
//! **Case 1:** `t` is the minimum timestamp
//!
//! In this case `sum{δ𝐓(s): s < t}` is a sum over the empty set and so we obtain:
//!
//! ```text
//!     δ𝐓(min) = 𝐓(min) = sum{δ𝐒(s): !(𝐑(min) ⪯ s)}
//! ```
//!
//! **Case 2:** `t` is a timestamp with a predecessor `prev`
//!
//! In this case `sum{δ𝐓(s): s < t}` is equal to `𝐓(prev)` because:
//!
//! ```text
//!     sum{δ𝐓(s): s < t} = sum{δ𝐓(s): s ≤ prev} + sum{δ𝐓(s): prev < s < t}
//!                       = 𝐓(prev) + ∅
//!                       = 𝐓(prev)
//! ```
//!
//! And therefore the difference trace of 𝐓 is:
//!
//! ```text
//!     δ𝐓(t) = 𝐓(t) - 𝐓(prev)
//!           = sum{δ𝐒(s): !(𝐑(t) ⪯ s)} - sum{δ𝐒(s): !(𝐑(prev) ⪯ s)}
//!           = sum{δ𝐒(s): (𝐑(prev) ⪯ s) ∧ !(𝐑(t) ⪯ s)}
//! ```
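//!
//! Continuing the illustration above, the updates that appear exactly at time 2000 are those at
//! offsets s with 𝐑(1000) = {3} ⪯ s and !(𝐑(2000) = {5} ⪯ s), i.e. offsets 3 and 4:
//!
//! ```text
//!     δ𝐓(2000) = sum{δ𝐒(s): 3 ≤ s < 5} = δ𝐒(3) + δ𝐒(4)
//! ```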
//!
//! ## Unique mapping property
//!
//! Given the definition above we can derive the fact that for any source difference δ𝐒(s) there is
//! at most one target timestamp t that it must be reclocked to. This property can be exploited by
//! the implementation of the operator as it can safely discard source updates once a matching
//! δ𝐓(t) has been found, making it "stateless" with respect to the source trace. A formal proof of
//! this property is [provided below](#unique-mapping-property-proof).
//!
//! ## Operational description
//!
//! The operator follows a run-to-completion model where on each scheduling it completes all
//! outstanding work that can be completed.
//!
//! ### Unique mapping property proof
//!
//! This section contains the formal proof of the unique mapping property. The proof follows the
//! structured proof notation created by Leslie Lamport. Readers unfamiliar with structured proofs
//! can read about them here <https://lamport.azurewebsites.net/pubs/proof.pdf>.
//!
//! #### Statement
//!
//! AtMostOne(X, φ(x)) ≜ ∀ x1, x2 ∈ X : φ(x1) ∧ φ(x2) ⇒ x1 = x2
//!
//! * **THEOREM** UniqueMapping ≜
//!     * **ASSUME**
//!         * **NEW** (FromTime, ⪯) ∈ PartiallyOrderedTimestamps
//!         * **NEW** (IntoTime, ≤) ∈ TotallyOrderedTimestamps
//!         * **NEW** 𝐒 ∈ SetOfCollectionTraces(FromTime)
//!         * **NEW** 𝐑 ∈ SetOfCollectionTraces(IntoTime)
//!         * ∀ t ∈ IntoTime: 𝐑(t) ∈ SetOfAntichains(FromTime)
//!         * ∀ t1, t2 ∈ IntoTime: t1 ≤ t2 ⇒ 𝐑(t1) ⪯ 𝐑(t2)
//!         * **NEW** 𝐓 = Reclock(𝐒, 𝐑)
//!     * **PROVE** ∀ s ∈ FromTime : AtMostOne(IntoTime, δ𝐒(s) ∈ δ𝐓(x))
//!
//! #### Proof
//!
//! 1. **SUFFICES ASSUME** ∃ s ∈ FromTime: ¬AtMostOne(IntoTime, δ𝐒(s) ∈ δ𝐓(x))
//!     * **PROVE FALSE**
//!     * _By proof by contradiction._
//! 2. **PICK** s ∈ FromTime : ¬AtMostOne(IntoTime, δ𝐒(s) ∈ δ𝐓(x))
//!    * _Proof: Such a time exists by <1>1._
//! 3. ∃ t1, t2 ∈ IntoTime : t1 ≠ t2 ∧ δ𝐒(s) ∈ δ𝐓(t1) ∧ δ𝐒(s) ∈ δ𝐓(t2)
//!     1. ¬(∀ x1, x2 ∈ IntoTime : (δ𝐒(s) ∈ δ𝐓(x1)) ∧ (δ𝐒(s) ∈ δ𝐓(x2)) ⇒ x1 = x2)
//!         * _Proof: By <1>2 and definition of AtMostOne._
//!     2. Q.E.D
//!         * _Proof: By <2>1, quantifier negation rules, and theorem of propositional logic ¬(P ⇒ Q) ≡ P ∧ ¬Q._
//! 4. **PICK** t1, t2 ∈ IntoTime : t1 < t2 ∧ δ𝐒(s) ∈ δ𝐓(t1) ∧ δ𝐒(s) ∈ δ𝐓(t2)
//!    * _Proof: By <1>3 and the totality of ≤. Assume t1 < t2 without loss of generality._
//! 5. ¬(𝐑(t1) ⪯ s)
//!     1. **CASE** t1 = min(IntoTime)
//!         1. δ𝐓(t1) = sum{δ𝐒(s): !(𝐑(t1) ⪯ s)}
//!             * _Proof: By definition of δ𝐓(min)._
//!         2. δ𝐒(s) ∈ δ𝐓(t1)
//!             * _Proof: By <1>4._
//!         3. Q.E.D
//!             * _Proof: By <3>1 and <3>2._
//!     2. **CASE** t1 > min(IntoTime)
//!         1. **PICK** t1_prev = Predecessor(t1)
//!             * _Proof: Predecessor exists because the set {t: t < t1} is non-empty since it must contain at least min(IntoTime)._
//!         2. δ𝐓(t1) = sum{δ𝐒(s): (𝐑(t1_prev) ⪯ s) ∧ !(𝐑(t1) ⪯ s)}
//!             * _Proof: By definition of δ𝐓(t)._
//!         3. δ𝐒(s) ∈ δ𝐓(t1)
//!             * _Proof: By <1>4._
//!         4. Q.E.D
//!             * _Proof: By <3>2 and <3>3._
//!     3. Q.E.D
//!         * _Proof: From cases <2>1 and <2>2, which are exhaustive._
//! 6. **PICK** t2_prev ∈ IntoTime : t2_prev = Predecessor(t2)
//!    * _Proof: Predecessor exists because by <1>4 the set {t: t < t2} is non-empty since it must contain at least t1._
//! 7. t1 ≤ t2_prev
//!    * _Proof: t1 ∈ {t: t < t2} and t2_prev is the maximum element of the set._
//! 8. 𝐑(t2_prev) ⪯ s
//!     1. t2 > min(IntoTime)
//!         * _Proof: By <1>4, since min(IntoTime) ≤ t1 < t2._
//!     2. δ𝐓(t2) = sum{δ𝐒(s): (𝐑(t2_prev) ⪯ s) ∧ !(𝐑(t2) ⪯ s)}
//!         * _Proof: By <2>1, <1>6, and the definition of δ𝐓(t)._
//!     3. δ𝐒(s) ∈ δ𝐓(t2)
//!         * _Proof: By <1>4._
//!     4. Q.E.D
//!         * _Proof: By <2>2 and <2>3._
//! 9. 𝐑(t1) ⪯ 𝐑(t2_prev)
//!     * _Proof: By <1>7 and the hypothesis on 𝐑._
//! 10. 𝐑(t1) ⪯ s
//!     * _Proof: By <1>8 and <1>9._
//! 11. Q.E.D
//!     * _Proof: By <1>5 and <1>10, which contradict each other._

use std::cmp::{Ordering, Reverse};
use std::collections::VecDeque;
use std::collections::binary_heap::{BinaryHeap, PeekMut};
use std::iter::FromIterator;

use differential_dataflow::difference::Semigroup;
use differential_dataflow::lattice::Lattice;
use differential_dataflow::{AsCollection, ExchangeData, VecCollection, consolidation};
use mz_ore::Overflowing;
use mz_ore::collections::CollectionExt;
use timely::communication::{Pull, Push};
use timely::dataflow::Scope;
use timely::dataflow::channels::pact::Pipeline;
use timely::dataflow::operators::CapabilitySet;
use timely::dataflow::operators::capture::Event;
use timely::dataflow::operators::generic::OutputBuilder;
use timely::dataflow::operators::generic::builder_rc::OperatorBuilder;
use timely::order::{PartialOrder, TotalOrder};
use timely::progress::frontier::{AntichainRef, MutableAntichain};
use timely::progress::{Antichain, Timestamp};

/// Constructs an operator that reclocks a `source` collection varying with some time `FromTime`
/// into the corresponding `reclocked` collection varying over some time `IntoTime` using the
/// provided `remap` collection.
///
/// In order for the operator to read the `source` collection a `Pusher` is returned which can be
/// used with timely's capture facilities to connect a collection from a foreign scope to this
/// operator.
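///
/// A minimal wiring sketch, mirroring the test harness at the bottom of this file (names and
/// scope structure are illustrative, and `PusherCapture` comes from `crate::capture`): the remap
/// collection and the reclocked output live in an `IntoTime` scope, while the returned pusher is
/// fed from a `FromTime` scope via `capture_into`.
///
/// ```ignore
/// let (bindings, data_pusher, reclocked) = scope.scoped::<IntoTime, _, _>("IntoScope", |scope| {
///     let (bindings, remap_collection) = scope.new_collection();
///     let (data_pusher, reclocked) = reclock(&remap_collection, as_of);
///     (bindings, data_pusher, reclocked.inner.capture())
/// });
/// scope.scoped::<FromTime, _, _>("FromScope", |scope| {
///     let ((data_handle, data_cap), data) = scope.new_unordered_input();
///     data.capture_into(PusherCapture(data_pusher));
/// });
/// ```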
pub fn reclock<G, D, FromTime, IntoTime, R>(
    remap_collection: &VecCollection<G, FromTime, Overflowing<i64>>,
    as_of: Antichain<G::Timestamp>,
) -> (
    Box<dyn Push<Event<FromTime, Vec<(D, FromTime, R)>>>>,
    VecCollection<G, D, R>,
)
where
    G: Scope<Timestamp = IntoTime>,
    D: ExchangeData,
    FromTime: Timestamp,
    IntoTime: Timestamp + Lattice + TotalOrder,
    R: Semigroup + 'static,
{
    let mut scope = remap_collection.scope();
    let mut builder = OperatorBuilder::new("Reclock".into(), scope.clone());
    // Here we create a channel that can be used to send data from a foreign scope into this
    // operator. The channel is associated with this operator's address so that it is activated
    // every time events are available for consumption. This mechanism is similar to Timely's input
    // handles where data can be introduced into a timely scope from an exogenous source.
    let info = builder.operator_info();
    let channel_id = scope.new_identifier();
    let (pusher, mut events) =
        scope.pipeline::<Event<FromTime, Vec<(D, FromTime, R)>>>(channel_id, info.address);

    let mut remap_input = builder.new_input(&remap_collection.inner, Pipeline);
    let (output, reclocked) = builder.new_output();
    let mut output = OutputBuilder::from(output);

    builder.build(move |caps| {
        let mut capset = CapabilitySet::from_elem(caps.into_element());
        capset.downgrade(&as_of.borrow());

        // Remap updates received at times `into_time` greater than or equal to `remap_input`'s
        // input frontier. As the input frontier advances, we drop elements out of this priority
        // queue and mint new associations.
        let mut pending_remap: BinaryHeap<Reverse<(IntoTime, FromTime, i64)>> = BinaryHeap::new();
        // A trace of `remap_input` that accumulates correctly for all times that are beyond
        // `remap_since` and not beyond `remap_upper`. The updates in `remap_trace` are maintained
        // in time order. An actual DD trace could be used here at the expense of a more
        // complicated API to traverse it. This is left for future work if the naive trace
        // maintenance implemented in this operator becomes problematic.
        let mut remap_upper = Antichain::from_elem(IntoTime::minimum());
        let mut remap_since = as_of.clone();
        let mut remap_trace = Vec::new();

        // A stash of source updates for which we don't know the corresponding binding yet.
        let mut deferred_source_updates: Vec<ChainBatch<_, _, _>> = Vec::new();
        // The frontier of the `events` input.
        let mut source_frontier = MutableAntichain::new_bottom(FromTime::minimum());

        let mut binding_buffer = Vec::new();

        // Accumulation buffer for `remap_input` updates.
        use timely::progress::ChangeBatch;
        let mut remap_accum_buffer: ChangeBatch<(IntoTime, FromTime)> = ChangeBatch::new();

        // The operator drains `remap_input` and organizes new bindings that are not beyond
        // `remap_input`'s frontier into the time-ordered `remap_trace`.
        //
        // All received data events can either be reclocked to a time included in the
        // `remap_trace`, or deferred until new associations are minted. Each data event that
        // happens at some `FromTime` is mapped to the first `IntoTime` whose associated antichain
        // is not less or equal to the input `FromTime`.
        //
        // As progress events are received from the `events` input, we can advance our
        // held capability to track the least `IntoTime` a newly received `FromTime` could possibly
        // map to and also compact the maintained `remap_trace` to that time.
        move |frontiers| {
            let Some(cap) = capset.get(0).cloned() else {
                return;
            };
            let mut output = output.activate();
            let mut session = output.session(&cap);

            // STEP 1. Accept new bindings into `pending_remap`.
            // Advance all `into` times by `as_of`, and consolidate all updates at that frontier.
            while let Some((_, data)) = remap_input.next() {
                for (from, mut into, diff) in data.drain(..) {
                    into.advance_by(as_of.borrow());
                    remap_accum_buffer.update((into, from), diff.into_inner());
                }
            }
            // Drain consolidated bindings into the `pending_remap` heap.
            // Only do this once the `remap_input` frontier has passed `as_of`.
            // For as long as the input frontier is less-equal `as_of`, we have no finalized times.
            if !PartialOrder::less_equal(&frontiers[0].frontier(), &as_of.borrow()) {
                for ((into, from), diff) in remap_accum_buffer.drain() {
                    pending_remap.push(Reverse((into, from, diff)));
                }
            }

            // STEP 2. Extract bindings not beyond `remap_upper` and commit them into
            //         `remap_trace`.
            let prev_remap_upper =
                std::mem::replace(&mut remap_upper, frontiers[0].frontier().to_owned());
            while let Some(update) = pending_remap.peek_mut() {
                if !remap_upper.less_equal(&update.0.0) {
                    let Reverse((into, from, diff)) = PeekMut::pop(update);
                    remap_trace.push((from, into, diff));
                } else {
                    break;
                }
            }

            // STEP 3. Receive new data updates.
            //         The `events` input describes arbitrary progress and data over `FromTime`,
            //         which must be translated to `IntoTime`. Each `FromTime` is mapped to the
            //         first `IntoTime` whose associated `[FromTime]` frontier is not less or
            //         equal to the input `FromTime`. Received events that are not yet associated
            //         with an `IntoTime` are collected and formed into a "chain batch": a
            //         sequence of chains that results from sorting the updates by `FromTime` and
            //         then segmenting the sequence at elements where the partial order on
            //         `FromTime` is violated.
            let mut stash = Vec::new();
            // Consolidate progress updates before applying them to `source_frontier`, to avoid
            // quadratic behavior in overload scenarios.
            let mut change_batch = ChangeBatch::<FromTime, 2>::default();
            while let Some(event) = events.pull() {
                match event {
                    Event::Progress(changes) => {
                        change_batch.extend(changes.drain(..));
                    }
                    Event::Messages(_, data) => stash.append(data),
                }
            }
            source_frontier.update_iter(change_batch.drain());
            stash.sort_unstable_by(|(_, t1, _): &(D, FromTime, R), (_, t2, _)| t1.cmp(t2));
            let mut new_source_updates = ChainBatch::from_iter(stash);

            // STEP 4: Reclock new and deferred updates.
            //         We are now ready to step through the remap bindings in time order and
            //         perform the following actions:
            //         4.1. Match `new_source_updates` against the entirety of bindings contained
            //              in the trace.
            //         4.2. Match `deferred_source_updates` against the bindings that were just
            //              added to the trace.
            //         4.3. Reclock `source_frontier` to calculate the new since frontier of the
            //              remap trace.
            //
            //         The steps above only make sense to perform if there are any times for which
            //         we can correctly accumulate the remap trace, which is what we check here.
            if remap_since.iter().all(|t| !remap_upper.less_equal(t)) {
                let mut cur_binding = MutableAntichain::new();

                let mut remap = remap_trace.iter().peekable();
                let mut reclocked_source_frontier = remap_upper.clone();

                // We go over all the times at which we might need to output data. These times are
                // restricted to the times at which there exists an update in `remap_trace`, plus
                // the minimum timestamp for the case where `remap_trace` is completely empty, in
                // which case the minimum timestamp maps to the empty `FromTime` frontier and
                // therefore all data events map to that minimum timestamp.
                //
                // The approach taken here takes time proportional to the number of elements in
                // `remap_trace`. During development an alternative approach was considered where
                // the updates in `remap_trace` are instead fully materialized into an ordered list
                // of antichains into which every data update can be binary searched. There are
                // two concerns with this alternative approach that led to preferring this one:
                // 1. Materializing very wide antichains with small differences between them
                //    needs memory proportional to the number of bindings times the width of the
                //    antichain.
                // 2. It locks in the requirement of a totally ordered target timestamp since only
                //    in that case can one binary search a binding.
                // The linear scan is expected to be fine due to the run-to-completion nature of
                // the operator since its cost is amortized among the number of outstanding
                // updates.
                let mut min_time = IntoTime::minimum();
                min_time.advance_by(remap_since.borrow());
                let mut prev_cur_time = None;
                let mut interesting_times = std::iter::once(&min_time)
                    .chain(remap_trace.iter().map(|(_, t, _)| t))
                    .filter(|&v| {
                        let prev = prev_cur_time.replace(v);
                        prev != prev_cur_time
                    });
                let mut frontier_reclocked = false;
                while !(new_source_updates.is_empty()
                    && deferred_source_updates.is_empty()
                    && frontier_reclocked)
                    && let Some(cur_time) = interesting_times.next()
                {
                    // 4.0. Load updates of `cur_time` from the trace into `cur_binding` to
                    //      construct the `[FromTime]` frontier that `cur_time` maps to.
                    while let Some((t_from, _, diff)) = remap.next_if(|(_, t, _)| t == cur_time) {
                        binding_buffer.push((t_from.clone(), *diff));
                    }
                    cur_binding.update_iter(binding_buffer.drain(..));
                    let cur_binding = cur_binding.frontier();

                    // 4.1. Extract updates from `new_source_updates`.
                    for (data, _, diff) in new_source_updates.extract(cur_binding) {
                        session.give((data, cur_time.clone(), diff));
                    }

                    // 4.2. Extract updates from `deferred_source_updates`.
                    //      The deferred updates contain all updates that could not be reclocked
                    //      with the bindings up to `prev_remap_upper`. For this reason we only
                    //      need to reconsider these updates when we start looking at new
                    //      bindings, i.e. bindings that are beyond `prev_remap_upper`.
                    if prev_remap_upper.less_equal(cur_time) {
                        deferred_source_updates.retain_mut(|batch| {
                            for (data, _, diff) in batch.extract(cur_binding) {
                                session.give((data, cur_time.clone(), diff));
                            }
                            // Retain non-empty batches.
                            !batch.is_empty()
                        })
                    }

                    // 4.3. Reclock `source_frontier`.
                    //      If any FromTime in the source frontier could possibly be reclocked to
                    //      this binding then we must maintain our capability to emit data at that
                    //      time and not compact past it. Since we iterate over this loop in time
                    //      order and IntoTime is a total order we only need to perform this step
                    //      once. Once a `cur_time` is inserted into `reclocked_source_frontier`
                    //      no more changes can be made to the frontier by inserting times later
                    //      in the loop.
                    if !frontier_reclocked
                        && source_frontier
                            .frontier()
                            .iter()
                            .any(|t| !cur_binding.less_equal(t))
                    {
                        reclocked_source_frontier.insert(cur_time.clone());
                        frontier_reclocked = true;
                    }
                }

                // STEP 5. Downgrade capability and compact remap trace.
                capset.downgrade(&reclocked_source_frontier.borrow());
                remap_since = reclocked_source_frontier;
                for (_, t, _) in remap_trace.iter_mut() {
                    t.advance_by(remap_since.borrow());
                }
                consolidation::consolidate_updates(&mut remap_trace);
                remap_trace
                    .sort_unstable_by(|(_, t1, _): &(_, IntoTime, _), (_, t2, _)| t1.cmp(t2));

                // If using less than a quarter of the capacity, shrink the container. To avoid
                // having to resize the container on a subsequent push, shrink to 2x the length,
                // which is what push would grow it to.
                if remap_trace.len() < remap_trace.capacity() / 4 {
                    remap_trace.shrink_to(remap_trace.len() * 2);
                }
            }

            // STEP 6. Tidy up deferred updates.
            //         Deferred updates are kept as a list of chain batches where each batch
            //         contains at least twice the updates of the batch that follows it. This
            //         organization leads to a logarithmic number of batches with respect to the
            //         number of outstanding updates, as illustrated below.
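            //
            //         For example (assuming merges never consolidate updates away), batch sizes
            //         [16, 4, 1] absorb a new batch of size 1 as follows: [16, 4, 1, 1] merges
            //         the last two batches into [16, 4, 2], which merges again into [16, 6],
            //         where the last batch is now smaller than half of its predecessor and
            //         merging stops.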
            deferred_source_updates.sort_unstable_by_key(|b| Reverse(b.len()));
            if !new_source_updates.is_empty() {
                deferred_source_updates.push(new_source_updates);
            }
            let dsu = &mut deferred_source_updates;
            while dsu.len() > 1 && (dsu[dsu.len() - 1].len() >= dsu[dsu.len() - 2].len() / 2) {
                let a = dsu.pop().unwrap();
                let b = dsu.pop().unwrap();
                dsu.push(a.merge_with(b));
            }

            // If using less than a quarter of the capacity, shrink the container. To avoid
            // having to resize the container on a subsequent push, shrink to 2x the length,
            // which is what push would grow it to.
            if deferred_source_updates.len() < deferred_source_updates.capacity() / 4 {
                deferred_source_updates.shrink_to(deferred_source_updates.len() * 2);
            }
        }
    });

    (Box::new(pusher), reclocked.as_collection())
}

/// A batch of differential updates that vary over some partial order. This type maintains the data
/// as a set of chains that allows for efficient extraction of batches given a frontier.
#[derive(Debug, PartialEq)]
struct ChainBatch<D, T, R> {
    /// A list of chains (sets of mutually comparable times) sorted by the partial order.
    chains: Vec<VecDeque<(D, T, R)>>,
}

impl<D, T: Timestamp, R> ChainBatch<D, T, R> {
    /// Extracts all updates with time not greater or equal to any time in `upper`.
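    ///
    /// As a sketch: with a single chain of updates at times `[1, 2, 3]` and `upper = {3}`, the
    /// updates at times 1 and 2 are extracted while the update at time 3 remains in the batch.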
    fn extract<'a>(
        &'a mut self,
        upper: AntichainRef<'a, T>,
    ) -> impl Iterator<Item = (D, T, R)> + 'a {
        self.chains.retain(|chain| !chain.is_empty());
        self.chains.iter_mut().flat_map(move |chain| {
            // A chain is a sorted list of mutually comparable elements so we keep extracting
            // elements that are not beyond upper.
            std::iter::from_fn(move || {
                let (_, into, _) = chain.front()?;
                if !upper.less_equal(into) {
                    chain.pop_front()
                } else {
                    None
                }
            })
        })
    }

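    /// Merges two chain batches into one, consolidating the diffs of updates that occur with
    /// identical data and time, and dropping updates whose consolidated diff is zero.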
    fn merge_with(
        mut self: ChainBatch<D, T, R>,
        mut other: ChainBatch<D, T, R>,
    ) -> ChainBatch<D, T, R>
    where
        D: ExchangeData,
        T: Timestamp,
        R: Semigroup,
    {
        let mut updates1 = self.chains.drain(..).flatten().peekable();
        let mut updates2 = other.chains.drain(..).flatten().peekable();

        let merged = std::iter::from_fn(|| {
            match (updates1.peek(), updates2.peek()) {
                (Some((d1, t1, _)), Some((d2, t2, _))) => {
                    match (t1, d1).cmp(&(t2, d2)) {
                        Ordering::Less => updates1.next(),
                        Ordering::Greater => updates2.next(),
                        // If the same (d, t) pair is found, consolidate their diffs.
                        Ordering::Equal => {
                            let (d1, t1, mut r1) = updates1.next().unwrap();
                            while let Some((_, _, r)) =
                                updates1.next_if(|(d, t, _)| (d, t) == (&d1, &t1))
                            {
                                r1.plus_equals(&r);
                            }
                            while let Some((_, _, r)) =
                                updates2.next_if(|(d, t, _)| (d, t) == (&d1, &t1))
                            {
                                r1.plus_equals(&r);
                            }
                            Some((d1, t1, r1))
                        }
                    }
                }
                (Some(_), None) => updates1.next(),
                (None, Some(_)) => updates2.next(),
                (None, None) => None,
            }
        });

        ChainBatch::from_iter(merged.filter(|(_, _, r)| !r.is_zero()))
    }

    /// Returns the number of updates in the batch.
    fn len(&self) -> usize {
        self.chains.iter().map(|chain| chain.len()).sum()
    }

    /// Returns true if the batch contains no updates.
    fn is_empty(&self) -> bool {
        self.len() == 0
    }
}

impl<D, T: Timestamp, R> FromIterator<(D, T, R)> for ChainBatch<D, T, R> {
    /// Computes the chain decomposition of updates according to the partial order `T`.
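    ///
    /// For example (a sketch, using `Partitioned`-style (partition, offset) times where values
    /// from different partitions are incomparable), updates sorted at times
    /// `[(0, 1), (0, 2), (1, 1), (1, 2)]` decompose into the two chains `[(0, 1), (0, 2)]` and
    /// `[(1, 1), (1, 2)]`, because `(0, 2)` is not less or equal to `(1, 1)`.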
    fn from_iter<I: IntoIterator<Item = (D, T, R)>>(updates: I) -> Self {
        let mut chains = vec![];
        let mut updates = updates.into_iter();
        if let Some((d, t, r)) = updates.next() {
            let mut chain = VecDeque::new();
            chain.push_back((d, t, r));
            for (d, t, r) in updates {
                let prev_t = &chain[chain.len() - 1].1;
                if !PartialOrder::less_equal(prev_t, &t) {
                    chains.push(chain);
                    chain = VecDeque::new();
                }
                chain.push_back((d, t, r));
            }
            chains.push(chain);
        }
        Self { chains }
    }
}

#[cfg(test)]
mod test {
    use std::sync::atomic::AtomicUsize;
    use std::sync::mpsc::{Receiver, TryRecvError};

    use differential_dataflow::consolidation;
    use differential_dataflow::input::{Input, InputSession};
    use serde::{Deserialize, Serialize};
    use timely::communication::allocator::Thread;
    use timely::dataflow::operators::capture::{Event, Extract};
    use timely::dataflow::operators::unordered_input::UnorderedHandle;
    use timely::dataflow::operators::{ActivateCapability, Capture, UnorderedInput};
    use timely::progress::PathSummary;
    use timely::progress::timestamp::Refines;
    use timely::worker::Worker;

    use crate::capture::PusherCapture;
    use crate::order::Partitioned;

    use super::*;

    type Diff = Overflowing<i64>;
    type FromTime = Partitioned<u64, u64>;
    type IntoTime = u64;
    type BindingHandle<FromTime> = InputSession<IntoTime, FromTime, Diff>;
    type DataHandle<D, FromTime> = (
        UnorderedHandle<FromTime, (D, FromTime, Diff)>,
        ActivateCapability<FromTime>,
    );
    type ReclockedStream<D> = Receiver<Event<IntoTime, Vec<(D, IntoTime, Diff)>>>;

    /// A helper function that sets up a dataflow program to test the reclocking operator. Each
    /// test provides a test logic closure which accepts four arguments:
    ///
    /// * A reference to the worker that allows the test to step the computation
    /// * A [`BindingHandle`] that allows the test to manipulate the remap bindings
    /// * A [`DataHandle`] that allows the test to submit the data to be reclocked
    /// * A [`ReclockedStream`] that allows observing the result of the reclocking process
    ///
    /// Note that the `DataHandle` contains a capability that should be dropped or downgraded
    /// before calling [`step`] in order for data at that time to be processed.
    fn harness<FromTime, D, F, R>(as_of: Antichain<IntoTime>, test_logic: F) -> R
    where
        FromTime: Timestamp + Refines<()>,
        D: ExchangeData,
        F: FnOnce(
                &mut Worker<Thread>,
                BindingHandle<FromTime>,
                DataHandle<D, FromTime>,
                ReclockedStream<D>,
            ) -> R
            + Send
            + Sync
            + 'static,
        R: Send + 'static,
    {
        timely::execute_directly(move |worker| {
            let (bindings, data, data_cap, reclocked) = worker.dataflow::<(), _, _>(|scope| {
                let (bindings, data_pusher, reclocked) =
                    scope.scoped::<IntoTime, _, _>("IntoScope", move |scope| {
                        let (binding_handle, binding_collection) = scope.new_collection();
                        let (data_pusher, reclocked_collection) =
                            reclock(&binding_collection, as_of);
                        let reclocked_capture = reclocked_collection.inner.capture();
                        (binding_handle, data_pusher, reclocked_capture)
                    });

                let (data, data_cap) = scope.scoped::<FromTime, _, _>("FromScope", move |scope| {
                    let ((handle, cap), data) = scope.new_unordered_input();
                    data.capture_into(PusherCapture(data_pusher));
                    (handle, cap)
                });

                (bindings, data, data_cap, reclocked)
            });

            test_logic(worker, bindings, (data, data_cap), reclocked)
        })
    }

    /// Steps the worker four times, which is the number of steps required for both data and
    /// frontier updates to propagate across the two scopes and into the probing channels.
    fn step(worker: &mut Worker<Thread>) {
        for _ in 0..4 {
            worker.step();
        }
    }

    #[mz_ore::test]
    fn basic_reclocking() {
        let as_of = Antichain::from_elem(IntoTime::minimum());
        harness::<FromTime, _, _, _>(
            as_of,
            |worker, bindings, (mut data, data_cap), reclocked| {
                // Reclock everything at the minimum IntoTime
                bindings.close();
                data.activate()
                    .session(&data_cap)
                    .give(('a', Partitioned::minimum(), Diff::ONE));
                drop(data_cap);
                step(worker);
                let extracted = reclocked.extract();
                let expected = vec![(0, vec![('a', 0, Diff::ONE)])];
                assert_eq!(extracted, expected);
            },
        )
    }

    /// Generates a `Partitioned<u64, u64>` antichain where each of the provided partitions is at
    /// its specified offset and the gaps in between are filled with range timestamps at offset
    /// zero.
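    ///
    /// For example, `partitioned_frontier([(1, 10)])` produces a frontier with the range
    /// timestamp covering partitions `[0, 0]` at offset 0, the singleton timestamp for
    /// partition 1 at offset 10, and the range timestamp covering partitions `[2, u64::MAX]`
    /// at offset 0.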
    fn partitioned_frontier<I>(items: I) -> Antichain<Partitioned<u64, u64>>
    where
        I: IntoIterator<Item = (u64, u64)>,
    {
        let mut frontier = Antichain::new();
        let mut prev = 0;
        for (pid, offset) in items {
            if prev < pid {
                frontier.insert(Partitioned::new_range(prev, pid - 1, 0));
            }
            frontier.insert(Partitioned::new_singleton(pid, offset));
            prev = pid + 1
        }
        frontier.insert(Partitioned::new_range(prev, u64::MAX, 0));
        frontier
    }

    #[mz_ore::test]
    fn test_basic_usage() {
        let as_of = Antichain::from_elem(IntoTime::minimum());
        harness(
            as_of,
            |worker, mut bindings, (mut data, data_cap), reclocked| {
                // Reclock offsets 1 and 3 to timestamp 1000
                bindings.update_at(Partitioned::minimum(), 0, Diff::ONE);
                bindings.update_at(Partitioned::minimum(), 1000, Diff::MINUS_ONE);
                for time in partitioned_frontier([(0, 4)]) {
                    bindings.update_at(time, 1000, Diff::ONE);
                }
                bindings.advance_to(1001);
                bindings.flush();
                data.activate().session(&data_cap).give_iterator(
                    vec![
                        (1, Partitioned::new_singleton(0, 1), Diff::ONE),
                        (1, Partitioned::new_singleton(0, 1), Diff::ONE),
                        (3, Partitioned::new_singleton(0, 3), Diff::ONE),
                    ]
                    .into_iter(),
                );

                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Messages(
                        0u64,
                        vec![
                            (1, 1000, Diff::ONE),
                            (1, 1000, Diff::ONE),
                            (3, 1000, Diff::ONE)
                        ]
                    ))
                );
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(0, -1), (1000, 1)]))
                );

                // Reclock more messages for offset 3 to the same timestamp
                data.activate().session(&data_cap).give_iterator(
                    vec![
                        (3, Partitioned::new_singleton(0, 3), Diff::ONE),
                        (3, Partitioned::new_singleton(0, 3), Diff::ONE),
                    ]
                    .into_iter(),
                );
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Messages(
                        1000u64,
                        vec![(3, 1000, Diff::ONE), (3, 1000, Diff::ONE)]
                    ))
                );

                // Drop the capability which should advance the reclocked frontier to 1001.
                drop(data_cap);
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(1000, -1), (1001, 1)]))
                );
            },
        );
    }

    #[mz_ore::test]
    fn test_reclock_frontier() {
        let as_of = Antichain::from_elem(IntoTime::minimum());
        harness::<_, (), _, _>(
            as_of,
            |worker, mut bindings, (_data, data_cap), reclocked| {
                // Initialize the bindings such that the minimum IntoTime contains the minimum
                // FromTime frontier.
                bindings.update_at(Partitioned::minimum(), 0, Diff::ONE);
                bindings.advance_to(1);
                bindings.flush();
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(0, -1), (1, 1)]))
                );

                // Mint a couple of bindings for multiple partitions
                bindings.update_at(Partitioned::minimum(), 1000, Diff::MINUS_ONE);
                for time in partitioned_frontier([(1, 10)]) {
                    bindings.update_at(time.clone(), 1000, Diff::ONE);
                    bindings.update_at(time, 2000, Diff::MINUS_ONE);
                }
                for time in partitioned_frontier([(1, 10), (2, 10)]) {
                    bindings.update_at(time, 2000, Diff::ONE);
                }
                bindings.advance_to(2001);
                bindings.flush();

                // The initial frontier should now map to the minimum between the two partitions
                step(worker);
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(1, -1), (1000, 1)]))
                );

                // Downgrade the data frontier such that only one of the partitions is advanced
                let mut part1_cap = data_cap.delayed(&Partitioned::new_singleton(1, 9));
                let mut part2_cap = data_cap.delayed(&Partitioned::new_singleton(2, 0));
                let _rest_cap = data_cap.delayed(&Partitioned::new_range(3, u64::MAX, 0));
                drop(data_cap);
                step(worker);
                assert_eq!(reclocked.try_recv(), Err(TryRecvError::Empty));

                // Downgrade the data frontier past the first binding
                part1_cap.downgrade(&Partitioned::new_singleton(1, 10));
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(1000, -1), (2000, 1)]))
                );

                // Downgrade the data frontier past the second binding
                part2_cap.downgrade(&Partitioned::new_singleton(2, 10));
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(2000, -1), (2001, 1)]))
                );

                // Advance the binding frontier and confirm that we get to the next timestamp
                bindings.advance_to(3001);
                bindings.flush();
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(2001, -1), (3001, 1)]))
                );
            },
        );
    }

    #[mz_ore::test]
    fn test_reclock() {
        let as_of = Antichain::from_elem(IntoTime::minimum());
        harness(
            as_of,
            |worker, mut bindings, (mut data, data_cap), reclocked| {
                // Initialize the bindings such that the minimum IntoTime contains the minimum
                // FromTime frontier.
                bindings.update_at(Partitioned::minimum(), 0, Diff::ONE);

                // Set up more precise capabilities for the rest of the test
                let mut part0_cap = data_cap.delayed(&Partitioned::new_singleton(0, 0));
                let rest_cap = data_cap.delayed(&Partitioned::new_range(1, u64::MAX, 0));
                drop(data_cap);

                // Reclock offsets 1 and 2 to timestamp 1000
                data.activate().session(&part0_cap).give_iterator(
                    vec![
                        (1, Partitioned::new_singleton(0, 1), Diff::ONE),
                        (2, Partitioned::new_singleton(0, 2), Diff::ONE),
                    ]
                    .into_iter(),
                );

                part0_cap.downgrade(&Partitioned::new_singleton(0, 3));
                bindings.update_at(Partitioned::minimum(), 1000, Diff::MINUS_ONE);
                bindings.update_at(part0_cap.time().clone(), 1000, Diff::ONE);
                bindings.update_at(rest_cap.time().clone(), 1000, Diff::ONE);
                bindings.advance_to(1001);
                bindings.flush();
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Messages(
                        0,
                        vec![(1, 1000, Diff::ONE), (2, 1000, Diff::ONE)]
                    ))
                );
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(0, -1), (1000, 1)]))
                );
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(1000, -1), (1001, 1)]))
                );

                // Reclock offsets 3 and 4 to timestamp 2000
                data.activate().session(&part0_cap).give_iterator(
                    vec![
                        (3, Partitioned::new_singleton(0, 3), Diff::ONE),
                        (3, Partitioned::new_singleton(0, 3), Diff::ONE),
                        (4, Partitioned::new_singleton(0, 4), Diff::ONE),
                    ]
                    .into_iter(),
                );
                bindings.update_at(part0_cap.time().clone(), 2000, Diff::MINUS_ONE);
                part0_cap.downgrade(&Partitioned::new_singleton(0, 5));
                bindings.update_at(part0_cap.time().clone(), 2000, Diff::ONE);
                bindings.advance_to(2001);
                bindings.flush();
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Messages(
                        1001,
                        vec![
                            (3, 2000, Diff::ONE),
                            (3, 2000, Diff::ONE),
                            (4, 2000, Diff::ONE)
                        ]
                    ))
                );
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(1001, -1), (2000, 1)]))
                );
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(2000, -1), (2001, 1)]))
                );
            },
        );
    }

    #[mz_ore::test]
    fn test_reclock_gh16318() {
        let as_of = Antichain::from_elem(IntoTime::minimum());
        harness(
            as_of,
            |worker, mut bindings, (mut data, data_cap), reclocked| {
                // Initialize the bindings such that the minimum IntoTime contains the minimum
                // FromTime frontier.
                bindings.update_at(Partitioned::minimum(), 0, Diff::ONE);
                // First mint bindings for partition 0 at timestamp 1000
                bindings.update_at(Partitioned::minimum(), 1000, Diff::MINUS_ONE);
                for time in partitioned_frontier([(0, 50)]) {
                    bindings.update_at(time, 1000, Diff::ONE);
                }
                // Then only for partition 1 at timestamp 2000
                for time in partitioned_frontier([(0, 50)]) {
                    bindings.update_at(time, 2000, Diff::MINUS_ONE);
                }
                for time in partitioned_frontier([(0, 50), (1, 50)]) {
                    bindings.update_at(time, 2000, Diff::ONE);
                }
                // Then again only for partition 0 at timestamp 3000
                for time in partitioned_frontier([(0, 50), (1, 50)]) {
                    bindings.update_at(time, 3000, Diff::MINUS_ONE);
                }
                for time in partitioned_frontier([(0, 100), (1, 50)]) {
                    bindings.update_at(time, 3000, Diff::ONE);
                }
                bindings.advance_to(3001);
                bindings.flush();

                // Reclocking (0, 50) must ignore the updates on the FromTime frontier that
                // happened at timestamp 2000, since those are completely unrelated
                data.activate().session(&data_cap).give((
                    50,
                    Partitioned::new_singleton(0, 50),
                    Diff::ONE,
                ));
                drop(data_cap);
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Messages(0, vec![(50, 3000, Diff::ONE),]))
                );
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(0, -1), (1000, 1)]))
                );
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(1000, -1), (3001, 1)]))
                );
            },
        );
    }

    /// Test that compact(reclock(remap, source)) == reclock(compact(remap), source)
    #[mz_ore::test]
    fn test_compaction() {
        let mut remap = vec![];
        remap.push((Partitioned::minimum(), 0, Diff::ONE));
        // Reclock offsets 1 and 2 to timestamp 1000
        remap.push((Partitioned::minimum(), 1000, Diff::MINUS_ONE));
        for time in partitioned_frontier([(0, 3)]) {
            remap.push((time, 1000, Diff::ONE));
        }
        // Reclock offsets 3 and 4 to timestamp 2000
        for time in partitioned_frontier([(0, 3)]) {
            remap.push((time, 2000, Diff::MINUS_ONE));
        }
        for time in partitioned_frontier([(0, 5)]) {
            remap.push((time, 2000, Diff::ONE));
        }

        let source_updates = vec![
            (1, Partitioned::new_singleton(0, 1), Diff::ONE),
            (2, Partitioned::new_singleton(0, 2), Diff::ONE),
            (3, Partitioned::new_singleton(0, 3), Diff::ONE),
            (4, Partitioned::new_singleton(0, 4), Diff::ONE),
        ];

        let since = Antichain::from_elem(1500);

        // Compute reclock(remap, source)
        let as_of = Antichain::from_elem(IntoTime::minimum());
        let remap1 = remap.clone();
        let source_updates1 = source_updates.clone();
        let reclock_remap = harness(
            as_of,
            move |worker, mut bindings, (mut data, data_cap), reclocked| {
                for (from_ts, into_ts, diff) in remap1 {
                    bindings.update_at(from_ts, into_ts, diff);
                }
                bindings.close();
                data.activate()
                    .session(&data_cap)
                    .give_iterator(source_updates1.iter().cloned());
                drop(data_cap);
                step(worker);
                reclocked.extract()
            },
        );
        // Compute compact(reclock(remap, source))
        let mut compact_reclock_remap = reclock_remap;
        for (t, updates) in compact_reclock_remap.iter_mut() {
            t.advance_by(since.borrow());
            for (_, t, _) in updates.iter_mut() {
                t.advance_by(since.borrow());
            }
        }

        // Compute compact(remap)
        let mut compact_remap = remap;
        for (_, t, _) in compact_remap.iter_mut() {
            t.advance_by(since.borrow());
        }
        consolidation::consolidate_updates(&mut compact_remap);
        // Compute reclock(compact(remap), source)
        let reclock_compact_remap = harness(
            since,
            move |worker, mut bindings, (mut data, data_cap), reclocked| {
                for (from_ts, into_ts, diff) in compact_remap {
                    bindings.update_at(from_ts, into_ts, diff);
                }
                bindings.close();
                data.activate()
                    .session(&data_cap)
                    .give_iterator(source_updates.iter().cloned());
                drop(data_cap);
                step(worker);
                reclocked.extract()
            },
        );

        let expected = vec![(
            1500,
            vec![
                (1, 1500, Diff::ONE),
                (2, 1500, Diff::ONE),
                (3, 2000, Diff::ONE),
                (4, 2000, Diff::ONE),
            ],
        )];
        assert_eq!(expected, reclock_compact_remap);
        assert_eq!(expected, compact_reclock_remap);
    }

    #[mz_ore::test]
    fn test_chainbatch_merge() {
        let a = ChainBatch::from_iter([('a', 0, 1)]);
        let b = ChainBatch::from_iter([('a', 0, -1), ('a', 1, 1)]);
        assert_eq!(a.merge_with(b), ChainBatch::from_iter([('a', 1, 1)]));
    }

    #[mz_ore::test]
    #[cfg_attr(miri, ignore)] // too slow
    fn test_binding_consolidation() {
        use std::sync::atomic::Ordering;

        #[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)]
        struct Time(u64);

        // A counter of the number of active Time instances
        static INSTANCES: AtomicUsize = AtomicUsize::new(0);

        impl Time {
            fn new(time: u64) -> Self {
                INSTANCES.fetch_add(1, Ordering::Relaxed);
                Self(time)
            }
        }

        impl Clone for Time {
            fn clone(&self) -> Self {
                INSTANCES.fetch_add(1, Ordering::Relaxed);
                Self(self.0)
            }
        }

        impl Drop for Time {
            fn drop(&mut self) {
                INSTANCES.fetch_sub(1, Ordering::Relaxed);
            }
        }

        impl Timestamp for Time {
            type Summary = ();

            fn minimum() -> Self {
                Time::new(0)
            }
        }

        impl PathSummary<Time> for () {
            fn results_in(&self, src: &Time) -> Option<Time> {
                Some(src.clone())
            }

            fn followed_by(&self, _other: &()) -> Option<Self> {
                Some(())
            }
        }

        impl Refines<()> for Time {
            fn to_inner(_: ()) -> Self {
                Self::minimum()
            }
            fn to_outer(self) -> () {}
            fn summarize(_path: ()) {}
        }

        impl PartialOrder for Time {
            fn less_equal(&self, other: &Self) -> bool {
                self.0.less_equal(&other.0)
            }
        }

        let as_of = 1000;

        // Test that supplying a single big batch of unconsolidated bindings gets
        // consolidated after a single worker step.
        harness::<Time, u64, _, _>(
            Antichain::from_elem(as_of),
            move |worker, mut bindings, _, _| {
                step(worker);
                let instances_before = INSTANCES.load(Ordering::Relaxed);
                for ts in 0..as_of {
                    if ts > 0 {
                        bindings.update_at(Time::new(ts - 1), ts, Diff::MINUS_ONE);
                    }
                    bindings.update_at(Time::new(ts), ts, Diff::ONE);
                }
                bindings.advance_to(as_of);
                bindings.flush();
                step(worker);
                let instances_after = INSTANCES.load(Ordering::Relaxed);
                // The extra instances live in a ChangeBatch which considers compaction when more
                // than 32 elements are inside.
                assert!(instances_after - instances_before < 32);
            },
        );

        // Test that a slow feed of uncompacted bindings over multiple steps never leads to an
        // excessive number of bindings held in memory.
        harness::<Time, u64, _, _>(
            Antichain::from_elem(as_of),
            move |worker, mut bindings, _, _| {
                step(worker);
                let instances_before = INSTANCES.load(Ordering::Relaxed);
                for ts in 0..as_of {
                    if ts > 0 {
                        bindings.update_at(Time::new(ts - 1), ts, Diff::MINUS_ONE);
                    }
                    bindings.update_at(Time::new(ts), ts, Diff::ONE);
                    bindings.advance_to(ts + 1);
                    bindings.flush();
                    step(worker);
                    let instances_now = INSTANCES.load(Ordering::Relaxed);
                    // The extra instances live in a ChangeBatch which considers compaction when
                    // more than 32 elements are inside.
                    assert!(instances_now - instances_before < 32);
                }
            },
        );
    }

    #[cfg(feature = "count-allocations")]
    #[mz_ore::test]
    #[cfg_attr(miri, ignore)] // too slow
    fn test_shrinking() {
        let as_of = 1000_u64;

        // This workflow accumulates updates in remap_trace, advances the source frontier, and
        // validates that memory was reclaimed. To avoid errant test failures due to
        // optimizations, this only validates that memory is reclaimed, not how much.
        harness::<FromTime, u64, _, _>(
            Antichain::from_elem(0),
            move |worker, mut bindings, (_data, mut data_cap), _| {
                let info1 = allocation_counter::measure(|| {
                    step(worker);
                    for ts in 0..as_of {
                        if ts > 0 {
                            bindings.update_at(
                                Partitioned::new_singleton(0, ts - 1),
                                ts,
                                Diff::MINUS_ONE,
                            );
                        }
                        bindings.update_at(Partitioned::new_singleton(0, ts), ts, Diff::ONE);
                        bindings.advance_to(ts + 1);
                        bindings.flush();
                        step(worker);
                    }
                });
                println!("info = {info1:?}");

                let info2 = allocation_counter::measure(|| {
                    data_cap.downgrade(&Partitioned::new_singleton(0, as_of));
                    step(worker);
                });
                println!("info = {info2:?}");
                assert!(info2.bytes_current < 0);
            },
        );
    }
}