mz_timely_util/reclock.rs

// Copyright Materialize, Inc. and contributors. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License in the LICENSE file at the
// root of this repository, or online at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

//! ## Notation
//!
//! Collections are represented with capital letters (T, S, R), collection traces as bold letters
//! (𝐓, 𝐒, 𝐑), and difference traces as δ𝐓.
//!
//! Indexing a collection trace 𝐓 to obtain its version at `t` is written as 𝐓(t). Indexing a
//! collection to obtain the multiplicity of a record `x` is written as T\[x\]. These can be combined
//! to obtain the multiplicity of a record `x` at some version `t` as 𝐓(t)\[x\].
//!
//! ## Overview
//!
//! Reclocking transforms a source collection `S` that evolves with some timestamp `FromTime` into
//! a collection `T` that evolves with some other timestamp `IntoTime`. The reclocked collection T
//! contains all updates `u ∈ S` that are not beyond some `FromTime` frontier R(t). The collection
//! `R` is called the remap collection.
//!
//! More formally, for some arbitrary time `t` of `IntoTime` and some arbitrary record `x`, the
//! reclocked collection `T(t)[x]` is defined to be the `sum{δ𝐒(s)[x]: !(𝐑(t) βͺ― s)}`. Since this
//! holds for any record we can write the definition of Reclock(𝐒, 𝐑) as:
//!
//! > Reclock(𝐒, 𝐑) β‰œ 𝐓: βˆ€ t ∈ IntoTime : 𝐓(t) = sum{δ𝐒(s): !(𝐑(t) βͺ― s)}
//!
//! In order for the reclocked collection `T` to have a sensible definition of progress we require
//! that `t1 ≀ t2 β‡’ 𝐑(t1) βͺ― 𝐑(t2)`, where the first `≀` is the partial order of `IntoTime` and the
//! second one the partial order of `FromTime` antichains.
//!
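//! As a concrete (hypothetical) illustration, let both `FromTime` and `IntoTime` be `u64` and
//! let 𝐑(0) = {0}, 𝐑(1) = {3}, 𝐑(2) = {5}, with source updates δ𝐒(1) = {(a, +1)},
//! δ𝐒(2) = {(b, +1)}, and δ𝐒(4) = {(c, +1)}. Then:
//!
//! ```text
//!     𝐓(0) = βˆ…                        (every source time is beyond the frontier {0})
//!     𝐓(1) = {a: +1, b: +1}           (source times 1 and 2 are not beyond {3})
//!     𝐓(2) = {a: +1, b: +1, c: +1}    (source times 1, 2 and 4 are not beyond {5})
//! ```
//!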
//! ## Total order simplification
//!
//! In order to simplify the implementation we will require that `IntoTime` is a total order. This
//! limitation can be lifted in the future but further elaboration on the mechanics of reclocking
//! is required to ensure a correct implementation.
//!
//! ## The difference trace
//!
//! By the definition of difference traces we have:
//!
//! ```text
//!     δ𝐓(t) = 𝐓(t) - sum{δ𝐓(s): s < t}
//! ```
//!
//! Due to the total order assumption we only need to consider two cases.
//!
//! **Case 1:** `t` is the minimum timestamp
//!
//! In this case `sum{δ𝐓(s): s < t}` is the empty set and so we obtain:
//!
//! ```text
//!     δ𝐓(min) = 𝐓(min) = sum{δ𝐒(s): !(𝐑(min) βͺ― s)}
//! ```
//!
//! **Case 2:** `t` is a timestamp with a predecessor `prev`
//!
//! In this case `sum{δ𝐓(s): s < t}` is equal to `𝐓(prev)` because:
//!
//! ```text
//!     sum{δ𝐓(s): s < t} = sum{δ𝐓(s): s ≀ prev} + sum{δ𝐓(s): prev < s < t}
//!                       = 𝐓(prev) + βˆ…
//!                       = 𝐓(prev)
//! ```
//!
//! And therefore the difference trace of 𝐓 is:
//!
//! ```text
//!     δ𝐓(t) = 𝐓(t) - 𝐓(prev)
//!           = sum{δ𝐒(s): !(𝐑(t) βͺ― s)} - sum{δ𝐒(s): !(𝐑(prev) βͺ― s)}
//!           = sum{δ𝐒(s): (𝐑(prev) βͺ― s) ∧ !(𝐑(t) βͺ― s)}
//! ```
//!
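//! Continuing the hypothetical example from the overview, the difference trace evaluates to
//! δ𝐓(0) = βˆ…, δ𝐓(1) = {a: +1, b: +1} (source times 1 and 2 are beyond 𝐑(0) = {0} but not
//! beyond 𝐑(1) = {3}), and δ𝐓(2) = {c: +1} (source time 4 is beyond {3} but not beyond {5}).
//!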
//! ## Unique mapping property
//!
//! Given the definition above we can derive the fact that for any source difference δ𝐒(s) there is
//! at most one target timestamp t that it must be reclocked to. This property can be exploited by
//! the implementation of the operator as it can safely discard source updates once a matching
//! δ𝐓(t) has been found, making it "stateless" with respect to the source trace. A formal proof of
//! this property is [provided below](#unique-mapping-property-proof).
//!
//! ## Operational description
//!
//! The operator follows a run-to-completion model where on each scheduling it completes all
//! outstanding work that can be completed.
//!
//! ### Unique mapping property proof
//!
//! This section contains the formal proof of the unique mapping property. The proof follows the
//! structured proof notation created by Leslie Lamport. Readers unfamiliar with structured proofs
//! can read about them here <https://lamport.azurewebsites.net/pubs/proof.pdf>.
//!
//! #### Statement
//!
//! AtMostOne(X, Ο†(x)) β‰œ βˆ€ x1, x2 ∈ X : Ο†(x1) ∧ Ο†(x2) β‡’ x1 = x2
//!
//! * **THEOREM** UniqueMapping β‰œ
//!     * **ASSUME**
//!         * **NEW** (FromTime, βͺ―) ∈ PartiallyOrderedTimestamps
//!         * **NEW** (IntoTime, ≀) ∈ TotallyOrderedTimestamps
//!         * **NEW** 𝐒 ∈ SetOfCollectionTraces(FromTime)
//!         * **NEW** 𝐑 ∈ SetOfCollectionTraces(IntoTime)
//!         * βˆ€ t ∈ IntoTime: 𝐑(t) ∈ SetOfAntichains(FromTime)
//!         * βˆ€ t1, t2 ∈ IntoTime: t1 ≀ t2 β‡’ 𝐑(t1) βͺ― 𝐑(t2)
//!         * **NEW** 𝐓 = Reclock(𝐒, 𝐑)
//!     * **PROVE**  βˆ€ s ∈ FromTime : AtMostOne(IntoTime, δ𝐒(s) ∈ δ𝐓(x))
//!
//! #### Proof
//!
//! 1. **SUFFICES ASSUME** βˆƒ s ∈ FromTime: Β¬AtMostOne(IntoTime, δ𝐒(s) ∈ δ𝐓(x))
//!     * **PROVE FALSE**
//!     * _By proof by contradiction._
//! 2. **PICK** s ∈ FromTime : Β¬AtMostOne(IntoTime, δ𝐒(s) ∈ δ𝐓(x))
//!     * _Proof: Such a time exists by <1>1._
//! 3. βˆƒ t1, t2 ∈ IntoTime : t1 β‰  t2 ∧ δ𝐒(s) ∈ δ𝐓(t1) ∧ δ𝐒(s) ∈ δ𝐓(t2)
//!     1. Β¬(βˆ€ x1, x2 ∈ IntoTime : (δ𝐒(s) ∈ δ𝐓(x1)) ∧ (δ𝐒(s) ∈ δ𝐓(x2)) β‡’ x1 = x2)
//!         * _Proof: By <1>2 and the definition of AtMostOne._
//!     2. Q.E.D
//!         * _Proof: By <2>1, quantifier negation rules, and the propositional tautology Β¬(P β‡’ Q) ≑ P ∧ Β¬Q._
//! 4. **PICK** t1, t2 ∈ IntoTime : t1 < t2 ∧ δ𝐒(s) ∈ δ𝐓(t1) ∧ δ𝐒(s) ∈ δ𝐓(t2)
//!     * _Proof: By <1>3. Assume t1 < t2 without loss of generality._
//! 5. Β¬(𝐑(t1) βͺ― s)
//!     1. **CASE** t1 = min(IntoTime)
//!         1. δ𝐓(t1) = sum{δ𝐒(s): !(𝐑(t1) βͺ― s)}
//!             * _Proof: By the definition of δ𝐓(min)._
//!         2. δ𝐒(s) ∈ δ𝐓(t1)
//!             * _Proof: By <1>4._
//!         3. Q.E.D
//!             * _Proof: By <3>1 and <3>2._
//!     2. **CASE** t1 > min(IntoTime)
//!         1. **PICK** t1_prev = Predecessor(t1)
//!             * _Proof: The predecessor exists because the set {t: t < t1} is non-empty, since it contains at least min(IntoTime)._
//!         2. δ𝐓(t1) = sum{δ𝐒(s): (𝐑(t1_prev) βͺ― s) ∧ !(𝐑(t1) βͺ― s)}
//!             * _Proof: By the definition of δ𝐓(t)._
//!         3. δ𝐒(s) ∈ δ𝐓(t1)
//!             * _Proof: By <1>4._
//!         4. Q.E.D
//!             * _Proof: By <3>2 and <3>3._
//!     3. Q.E.D
//!         * _Proof: From cases <2>1 and <2>2, which are exhaustive._
//! 6. **PICK** t2_prev ∈ IntoTime : t2_prev = Predecessor(t2)
//!     * _Proof: The predecessor exists because by <1>4 the set {t: t < t2} is non-empty, since it contains at least t1._
//! 7. t1 ≀ t2_prev
//!     * _Proof: t1 ∈ {t: t < t2} and t2_prev is the maximum element of that set._
//! 8. 𝐑(t2_prev) βͺ― s
//!     1. t2 > min(IntoTime)
//!         * _Proof: By <1>4, since min(IntoTime) ≀ t1 < t2._
//!     2. δ𝐓(t2) = sum{δ𝐒(s): (𝐑(t2_prev) βͺ― s) ∧ !(𝐑(t2) βͺ― s)}
//!         * _Proof: By <2>1 and the definition of δ𝐓(t)._
//!     3. δ𝐒(s) ∈ δ𝐓(t2)
//!         * _Proof: By <1>4._
//!     4. Q.E.D
//!         * _Proof: By <2>2 and <2>3._
//! 9. 𝐑(t1) βͺ― 𝐑(t2_prev)
//!     * _Proof: By <1>7 and the hypothesis on 𝐑._
//! 10. 𝐑(t1) βͺ― s
//!     * _Proof: By <1>8 and <1>9._
//! 11. Q.E.D
//!     * _Proof: By <1>5 and <1>10._

use std::cmp::{Ordering, Reverse};
use std::collections::VecDeque;
use std::collections::binary_heap::{BinaryHeap, PeekMut};
use std::iter::FromIterator;

use differential_dataflow::difference::Semigroup;
use differential_dataflow::lattice::Lattice;
use differential_dataflow::{AsCollection, Collection, ExchangeData, consolidation};
use mz_ore::Overflowing;
use mz_ore::collections::CollectionExt;
use timely::communication::{Pull, Push};
use timely::dataflow::Scope;
use timely::dataflow::channels::pact::Pipeline;
use timely::dataflow::operators::CapabilitySet;
use timely::dataflow::operators::capture::Event;
use timely::dataflow::operators::generic::builder_rc::OperatorBuilder;
use timely::order::{PartialOrder, TotalOrder};
use timely::progress::frontier::{AntichainRef, MutableAntichain};
use timely::progress::{Antichain, Timestamp};

/// Constructs an operator that reclocks a `source` collection varying with some time `FromTime`
/// into the corresponding `reclocked` collection varying over some time `IntoTime` using the
/// provided `remap` collection.
///
/// In order for the operator to read the `source` collection, a `Pusher` is returned which can be
/// used with timely's capture facilities to connect a collection from a foreign scope to this
/// operator.
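///
/// A minimal usage sketch (the names and the surrounding scopes are assumptions, not part of
/// this API; `PusherCapture` is this crate's adapter for capturing into a pusher):
///
/// ```text
///     let (pusher, reclocked) = reclock(&remap_collection, as_of);
///     // In a scope timestamped with `FromTime`:
///     source_stream.capture_into(PusherCapture(pusher));
/// ```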
pub fn reclock<G, D, FromTime, IntoTime, R>(
    remap_collection: &Collection<G, FromTime, Overflowing<i64>>,
    as_of: Antichain<G::Timestamp>,
) -> (
    Box<dyn Push<Event<FromTime, Vec<(D, FromTime, R)>>>>,
    Collection<G, D, R>,
)
where
    G: Scope<Timestamp = IntoTime>,
    D: ExchangeData,
    FromTime: Timestamp,
    IntoTime: Timestamp + Lattice + TotalOrder,
    R: Semigroup + 'static,
{
    let mut scope = remap_collection.scope();
    let mut builder = OperatorBuilder::new("Reclock".into(), scope.clone());
    // Here we create a channel that can be used to send data from a foreign scope into this
    // operator. The channel is associated with this operator's address so that it is activated
    // every time events are available for consumption. This mechanism is similar to Timely's input
    // handles where data can be introduced into a timely scope from an exogenous source.
    let info = builder.operator_info();
    let channel_id = scope.new_identifier();
    let (pusher, mut events) =
        scope.pipeline::<Event<FromTime, Vec<(D, FromTime, R)>>>(channel_id, info.address);

    let mut remap_input = builder.new_input(&remap_collection.inner, Pipeline);
    let (mut output, reclocked) = builder.new_output();

    builder.build(move |caps| {
        let mut capset = CapabilitySet::from_elem(caps.into_element());
        capset.downgrade(&as_of.borrow());

        // Remap updates received at times `into_time` greater than or equal to `remap_input`'s
        // input frontier. As the input frontier advances, we drop elements out of this priority
        // queue and mint new associations.
        let mut pending_remap: BinaryHeap<Reverse<(IntoTime, FromTime, i64)>> = BinaryHeap::new();
        // A trace of `remap_input` that accumulates correctly for all times that are beyond
        // `remap_since` and not beyond `remap_upper`. The updates in `remap_trace` are maintained
        // in time order. An actual DD trace could be used here at the expense of a more
        // complicated API to traverse it. This is left for future work if the naive trace
        // maintenance implemented in this operator becomes problematic.
        let mut remap_upper = Antichain::from_elem(IntoTime::minimum());
        let mut remap_since = as_of.clone();
        let mut remap_trace = Vec::new();

        // A stash of source updates for which we don't know the corresponding binding yet.
        let mut deferred_source_updates: Vec<ChainBatch<_, _, _>> = Vec::new();
        // The frontier of the `events` input.
        let mut source_frontier = MutableAntichain::new_bottom(FromTime::minimum());

        let mut binding_buffer = Vec::new();

        // Accumulation buffer for `remap_input` updates.
        use timely::progress::ChangeBatch;
        let mut remap_accum_buffer: ChangeBatch<(IntoTime, FromTime)> = ChangeBatch::new();

        // The operator drains `remap_input` and organizes new bindings that are not beyond
        // `remap_input`'s frontier into the time-ordered `remap_trace`.
        //
        // All received data events can either be reclocked to a time included in the
        // `remap_trace`, or deferred until new associations are minted. Each data event that
        // happens at some `FromTime` is mapped to the first `IntoTime` whose associated antichain
        // is not less or equal to the input `FromTime`.
        //
        // As progress events are received from the `events` input, we can advance our
        // held capability to track the least `IntoTime` a newly received `FromTime` could possibly
        // map to and also compact the maintained `remap_trace` to that time.
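        //
        // For example (hypothetical integer times, matching the module-level docs): with the
        // bindings 𝐑(0) = {0}, 𝐑(1) = {3}, and 𝐑(2) = {5} in the trace, a data event at
        // `FromTime` 4 is beyond 𝐑(1) but not beyond 𝐑(2), so it is reclocked to `IntoTime` 2.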
        move |frontiers| {
            let Some(cap) = capset.get(0) else {
                return;
            };
            let mut output = output.activate();
            let mut session = output.session(cap);

            // STEP 1. Accept new bindings into `pending_remap`.
            // Advance all `into` times by `as_of`, and consolidate all updates at that frontier.
            while let Some((_, data)) = remap_input.next() {
                for (from, mut into, diff) in data.drain(..) {
                    into.advance_by(as_of.borrow());
                    remap_accum_buffer.update((into, from), diff.into_inner());
                }
            }
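            // (Hypothetical illustration of the advancement above: with `as_of = {5}`, a binding
            // received at `IntoTime` 3 is advanced to 5, so all bindings not beyond the as-of
            // accumulate at the first time the operator may output.)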
            // Drain consolidated bindings into the `pending_remap` heap.
            // Only do this once the `remap_input` frontier has passed `as_of`.
            // For as long as the input frontier is less-equal `as_of`, we have no finalized times.
            if !PartialOrder::less_equal(&frontiers[0].frontier(), &as_of.borrow()) {
                for ((into, from), diff) in remap_accum_buffer.drain() {
                    pending_remap.push(Reverse((into, from, diff)));
                }
            }

            // STEP 2. Extract bindings not beyond `remap_upper` and commit them into `remap_trace`.
            let prev_remap_upper =
                std::mem::replace(&mut remap_upper, frontiers[0].frontier().to_owned());
            while let Some(update) = pending_remap.peek_mut() {
                if !remap_upper.less_equal(&update.0.0) {
                    let Reverse((into, from, diff)) = PeekMut::pop(update);
                    remap_trace.push((from, into, diff));
                } else {
                    break;
                }
            }

            // STEP 3. Receive new data updates
            //         The `events` input describes arbitrary progress and data over `FromTime`,
            //         which must be translated to `IntoTime`. Each `FromTime` is mapped to the
            //         first `IntoTime` associated with a `[FromTime]` frontier that is not less
            //         or equal to the input `FromTime`. Received events that are not yet
            //         associated to an `IntoTime` are collected, and formed into a "chain batch":
            //         a sequence of chains that results from sorting the updates by `FromTime`,
            //         and then segmenting the sequence at elements where the partial order on
            //         `FromTime` is violated.
            let mut stash = Vec::new();
            // Consolidate progress updates before applying them to `source_frontier`, to avoid
            // quadratic behavior in overload scenarios.
            let mut change_batch = ChangeBatch::<FromTime, 2>::default();
            while let Some(event) = events.pull() {
                match event {
                    Event::Progress(changes) => {
                        change_batch.extend(changes.drain(..));
                    }
                    Event::Messages(_, data) => stash.append(data),
                }
            }
            source_frontier.update_iter(change_batch.drain());
            stash.sort_unstable_by(|(_, t1, _): &(D, FromTime, R), (_, t2, _)| t1.cmp(t2));
            let mut new_source_updates = ChainBatch::from_iter(stash);

            // STEP 4. Reclock new and deferred updates
            //         We are now ready to step through the remap bindings in time order and
            //         perform the following actions:
            //         4.1. Match `new_source_updates` against the entirety of bindings contained
            //              in the trace.
            //         4.2. Match `deferred_source_updates` against the bindings that were just
            //              added to the trace.
            //         4.3. Reclock `source_frontier` to calculate the new since frontier of the
            //              remap trace.
            //
            //         The steps above only make sense to perform if there are any times at which
            //         we can correctly accumulate the remap trace, which is what we check here.
            if remap_since.iter().all(|t| !remap_upper.less_equal(t)) {
                let mut cur_binding = MutableAntichain::new();

                let mut remap = remap_trace.iter().peekable();
                let mut reclocked_source_frontier = remap_upper.clone();

                // We go over all the times at which we might need to output data. These times are
                // restricted to the times at which there exists an update in `remap_trace`, plus
                // the minimum timestamp for the case where `remap_trace` is completely empty, in
                // which case the minimum timestamp maps to the empty `FromTime` frontier and
                // therefore all data events map to that minimum timestamp.
                //
                // The approach taken here will take time proportional to the number of elements in
                // `remap_trace`. During development an alternative approach was considered where
                // the updates in `remap_trace` are instead fully materialized into an ordered list
                // of antichains into which every data update can be binary searched. There are
                // two concerns with this alternative approach that led to preferring this one:
                // 1. Materializing very wide antichains with small differences between them
                //    needs memory proportional to the number of bindings times the width of the
                //    antichain.
                // 2. It locks in the requirement of a totally ordered target timestamp since only
                //    in that case can one binary search a binding.
                // The linear scan is expected to be fine due to the run-to-completion nature of
                // the operator since its cost is amortized among the number of outstanding
                // updates.
                let mut min_time = IntoTime::minimum();
                min_time.advance_by(remap_since.borrow());
                let mut prev_cur_time = None;
                let mut interesting_times = std::iter::once(&min_time)
                    .chain(remap_trace.iter().map(|(_, t, _)| t))
                    .filter(|&v| {
                        let prev = prev_cur_time.replace(v);
                        prev != prev_cur_time
                    });
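                // (For instance, with `min_time` 0 and trace times [0, 1, 1, 2], the filter
                // above deduplicates consecutive repeats and yields the times [0, 1, 2].)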
                let mut frontier_reclocked = false;
                while !(new_source_updates.is_empty()
                    && deferred_source_updates.is_empty()
                    && frontier_reclocked)
                    && let Some(cur_time) = interesting_times.next()
                {
                    // 4.0. Load updates of `cur_time` from the trace into `cur_binding` to
                    //      construct the `[FromTime]` frontier that `cur_time` maps to.
                    while let Some((t_from, _, diff)) = remap.next_if(|(_, t, _)| t == cur_time) {
                        binding_buffer.push((t_from.clone(), *diff));
                    }
                    cur_binding.update_iter(binding_buffer.drain(..));
                    let cur_binding = cur_binding.frontier();

                    // 4.1. Extract updates from `new_source_updates`.
                    for (data, _, diff) in new_source_updates.extract(cur_binding) {
                        session.give((data, cur_time.clone(), diff));
                    }

                    // 4.2. Extract updates from `deferred_source_updates`.
                    //      The deferred updates contain all updates that could not be reclocked
                    //      with the bindings up to `prev_remap_upper`. For this reason we only
                    //      need to reconsider these updates when we start looking at new
                    //      bindings, i.e. bindings that are beyond `prev_remap_upper`.
                    if prev_remap_upper.less_equal(cur_time) {
                        deferred_source_updates.retain_mut(|batch| {
                            for (data, _, diff) in batch.extract(cur_binding) {
                                session.give((data, cur_time.clone(), diff));
                            }
                            // Retain non-empty batches
                            !batch.is_empty()
                        })
                    }

                    // 4.3. Reclock `source_frontier`.
                    //      If any `FromTime` in the source frontier could possibly be reclocked to
                    //      this binding then we must maintain our capability to emit data at that
                    //      time and not compact past it. Since we iterate over this loop in time
                    //      order and `IntoTime` is a total order we only need to perform this step
                    //      once. Once a `cur_time` is inserted into `reclocked_source_frontier` no
                    //      more changes can be made to the frontier by inserting times later in
                    //      the loop.
                    if !frontier_reclocked
                        && source_frontier
                            .frontier()
                            .iter()
                            .any(|t| !cur_binding.less_equal(t))
                    {
                        reclocked_source_frontier.insert(cur_time.clone());
                        frontier_reclocked = true;
                    }
                }

                // STEP 5. Downgrade capability and compact remap trace
                capset.downgrade(&reclocked_source_frontier.borrow());
                remap_since = reclocked_source_frontier;
                for (_, t, _) in remap_trace.iter_mut() {
                    t.advance_by(remap_since.borrow());
                }
                consolidation::consolidate_updates(&mut remap_trace);
                remap_trace
                    .sort_unstable_by(|(_, t1, _): &(_, IntoTime, _), (_, t2, _)| t1.cmp(t2));

                // If using less than a quarter of the capacity, shrink the container. To avoid
                // having to resize the container on a subsequent push, shrink to 2x the length,
                // which is what push would grow it to.
                if remap_trace.len() < remap_trace.capacity() / 4 {
                    remap_trace.shrink_to(remap_trace.len() * 2);
                }
            }

            // STEP 6. Tidy up deferred updates
            //         Deferred updates are represented as a list of chain batches where each batch
            //         contains at least twice the updates of the batch that follows it. This
            //         organization leads to a logarithmic number of batches with respect to the
            //         outstanding number of updates.
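            //
            //         For example (hypothetical sizes): with 31 outstanding updates the batch
            //         lengths could be [16, 8, 4, 2, 1]; the merge loop below restores this
            //         shape whenever a newly pushed batch violates it.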
            deferred_source_updates.sort_unstable_by_key(|b| Reverse(b.len()));
            if !new_source_updates.is_empty() {
                deferred_source_updates.push(new_source_updates);
            }
            let dsu = &mut deferred_source_updates;
            while dsu.len() > 1 && (dsu[dsu.len() - 1].len() >= dsu[dsu.len() - 2].len() / 2) {
                let a = dsu.pop().unwrap();
                let b = dsu.pop().unwrap();
                dsu.push(a.merge_with(b));
            }

            // If using less than a quarter of the capacity, shrink the container. To avoid having
            // to resize the container on a subsequent push, shrink to 2x the length, which is
            // what push would grow it to.
            if deferred_source_updates.len() < deferred_source_updates.capacity() / 4 {
                deferred_source_updates.shrink_to(deferred_source_updates.len() * 2);
            }
        }
    });

    (Box::new(pusher), reclocked.as_collection())
}

/// A batch of differential updates that vary over some partial order. This type maintains the data
/// as a set of chains that allows for efficient extraction of batches given a frontier.
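///
/// For example (hypothetical updates), with `T` being pairs under the product partial order,
/// the sorted updates `[(a, (0, 0), 1), (b, (0, 1), 1), (c, (1, 0), 1)]` decompose into the
/// chains `[(a, (0, 0), 1), (b, (0, 1), 1)]` and `[(c, (1, 0), 1)]`, since `(0, 1) βͺ― (1, 0)`
/// does not hold.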
#[derive(Debug, PartialEq)]
struct ChainBatch<D, T, R> {
    /// A list of chains (sets of mutually comparable times) sorted by the partial order.
    chains: Vec<VecDeque<(D, T, R)>>,
}

impl<D, T: Timestamp, R> ChainBatch<D, T, R> {
    /// Extracts all updates with time not greater or equal to any time in `upper`.
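    ///
    /// For example (hypothetical updates over integer times), extracting from the chain
    /// `[(a, 0, 1), (b, 1, 1), (c, 2, 1)]` with `upper = {2}` yields `(a, 0, 1)` and
    /// `(b, 1, 1)`, and leaves `(c, 2, 1)` in the batch.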
    fn extract<'a>(
        &'a mut self,
        upper: AntichainRef<'a, T>,
    ) -> impl Iterator<Item = (D, T, R)> + 'a {
        self.chains.retain(|chain| !chain.is_empty());
        self.chains.iter_mut().flat_map(move |chain| {
            // A chain is a sorted list of mutually comparable elements so we keep extracting
            // elements that are not beyond upper.
            std::iter::from_fn(move || {
                let (_, into, _) = chain.front()?;
                if !upper.less_equal(into) {
                    chain.pop_front()
                } else {
                    None
                }
            })
        })
    }

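    /// Merges two chain batches into one, consolidating updates of equal `(data, time)` pairs
    /// and dropping updates whose diffs consolidate to zero. For example (as also exercised by
    /// `test_chainbatch_merge` below), merging `[('a', 0, 1)]` with `[('a', 0, -1), ('a', 1, 1)]`
    /// yields `[('a', 1, 1)]`.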
    fn merge_with(
        mut self: ChainBatch<D, T, R>,
        mut other: ChainBatch<D, T, R>,
    ) -> ChainBatch<D, T, R>
    where
        D: ExchangeData,
        T: Timestamp,
        R: Semigroup,
    {
        let mut updates1 = self.chains.drain(..).flatten().peekable();
        let mut updates2 = other.chains.drain(..).flatten().peekable();

        let merged = std::iter::from_fn(|| {
            match (updates1.peek(), updates2.peek()) {
                (Some((d1, t1, _)), Some((d2, t2, _))) => {
                    match (t1, d1).cmp(&(t2, d2)) {
                        Ordering::Less => updates1.next(),
                        Ordering::Greater => updates2.next(),
                        // If the same (d, t) pair is found, consolidate their diffs
                        Ordering::Equal => {
                            let (d1, t1, mut r1) = updates1.next().unwrap();
                            while let Some((_, _, r)) =
                                updates1.next_if(|(d, t, _)| (d, t) == (&d1, &t1))
                            {
                                r1.plus_equals(&r);
                            }
                            while let Some((_, _, r)) =
                                updates2.next_if(|(d, t, _)| (d, t) == (&d1, &t1))
                            {
                                r1.plus_equals(&r);
                            }
                            Some((d1, t1, r1))
                        }
                    }
                }
                (Some(_), None) => updates1.next(),
                (None, Some(_)) => updates2.next(),
                (None, None) => None,
            }
        });

        ChainBatch::from_iter(merged.filter(|(_, _, r)| !r.is_zero()))
    }

    /// Returns the number of updates in the batch.
    fn len(&self) -> usize {
        self.chains.iter().map(|chain| chain.len()).sum()
    }

    /// Returns true if the batch contains no updates.
    fn is_empty(&self) -> bool {
        self.len() == 0
    }
}

impl<D, T: Timestamp, R> FromIterator<(D, T, R)> for ChainBatch<D, T, R> {
    /// Computes the chain decomposition of updates according to the partial order `T`.
    fn from_iter<I: IntoIterator<Item = (D, T, R)>>(updates: I) -> Self {
        let mut chains = vec![];
        let mut updates = updates.into_iter();
        if let Some((d, t, r)) = updates.next() {
            let mut chain = VecDeque::new();
            chain.push_back((d, t, r));
            for (d, t, r) in updates {
                let prev_t = &chain[chain.len() - 1].1;
                if !PartialOrder::less_equal(prev_t, &t) {
                    chains.push(chain);
                    chain = VecDeque::new();
                }
                chain.push_back((d, t, r));
            }
            chains.push(chain);
        }
        Self { chains }
    }
}

#[cfg(test)]
mod test {
    use std::sync::atomic::AtomicUsize;
    use std::sync::mpsc::{Receiver, TryRecvError};

    use differential_dataflow::consolidation;
    use differential_dataflow::input::{Input, InputSession};
    use serde::{Deserialize, Serialize};
    use timely::communication::allocator::Thread;
    use timely::dataflow::operators::capture::{Event, Extract};
    use timely::dataflow::operators::unordered_input::UnorderedHandle;
    use timely::dataflow::operators::{ActivateCapability, Capture, UnorderedInput};
    use timely::progress::PathSummary;
    use timely::progress::timestamp::Refines;
    use timely::worker::Worker;

    use crate::capture::PusherCapture;
    use crate::order::Partitioned;

    use super::*;

    type Diff = Overflowing<i64>;
    type FromTime = Partitioned<u64, u64>;
    type IntoTime = u64;
    type BindingHandle<FromTime> = InputSession<IntoTime, FromTime, Diff>;
    type DataHandle<D, FromTime> = (
        UnorderedHandle<FromTime, (D, FromTime, Diff)>,
        ActivateCapability<FromTime>,
    );
    type ReclockedStream<D> = Receiver<Event<IntoTime, Vec<(D, IntoTime, Diff)>>>;

    /// A helper function that sets up a dataflow program to test the reclocking operator. Each
    /// test provides a test logic closure which accepts four arguments:
    ///
    /// * A reference to the worker that allows the test to step the computation
    /// * A `BindingHandle` that allows the test to manipulate the remap bindings
    /// * A `DataHandle` that allows the test to submit the data to be reclocked
    /// * A `ReclockedStream` that allows observing the result of the reclocking process
    fn harness<FromTime, D, F, R>(as_of: Antichain<IntoTime>, test_logic: F) -> R
    where
        FromTime: Timestamp + Refines<()>,
        D: ExchangeData,
        F: FnOnce(
                &mut Worker<Thread>,
                BindingHandle<FromTime>,
                DataHandle<D, FromTime>,
                ReclockedStream<D>,
            ) -> R
            + Send
            + Sync
            + 'static,
        R: Send + 'static,
    {
        timely::execute_directly(move |worker| {
            let (bindings, data, data_cap, reclocked) = worker.dataflow::<(), _, _>(|scope| {
                let (bindings, data_pusher, reclocked) =
                    scope.scoped::<IntoTime, _, _>("IntoScope", move |scope| {
                        let (binding_handle, binding_collection) = scope.new_collection();
                        let (data_pusher, reclocked_collection) =
                            reclock(&binding_collection, as_of);
                        let reclocked_capture = reclocked_collection.inner.capture();
                        (binding_handle, data_pusher, reclocked_capture)
                    });

                let (data, data_cap) = scope.scoped::<FromTime, _, _>("FromScope", move |scope| {
                    let ((handle, cap), data) = scope.new_unordered_input();
                    data.capture_into(PusherCapture(data_pusher));
                    (handle, cap)
                });

                (bindings, data, data_cap, reclocked)
            });

            test_logic(worker, bindings, (data, data_cap), reclocked)
        })
    }

    /// Steps the worker four times, which is the number of steps required for both data and
    /// frontier updates to propagate across the two scopes and into the probing channels.
    fn step(worker: &mut Worker<Thread>) {
        for _ in 0..4 {
            worker.step();
        }
    }

    #[mz_ore::test]
    fn basic_reclocking() {
        let as_of = Antichain::from_elem(IntoTime::minimum());
        harness::<FromTime, _, _, _>(
            as_of,
            |worker, bindings, (mut data, data_cap), reclocked| {
                // Reclock everything at the minimum IntoTime
                bindings.close();
                data.session(data_cap)
                    .give(('a', Partitioned::minimum(), Diff::ONE));
                step(worker);
                let extracted = reclocked.extract();
                let expected = vec![(0, vec![('a', 0, Diff::ONE)])];
                assert_eq!(extracted, expected);
            },
        )
    }

    /// Generates a `Partitioned<u64, u64>` antichain where each of the provided partitions is at
    /// its specified offset and the gaps in between are filled with range timestamps at offset
    /// zero.
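    ///
    /// For example, `partitioned_frontier([(1, 10)])` produces the frontier containing the
    /// range timestamp for partitions `[0, 0]` at offset 0, the singleton timestamp for
    /// partition 1 at offset 10, and the range timestamp for partitions `[2, u64::MAX]` at
    /// offset 0.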
    fn partitioned_frontier<I>(items: I) -> Antichain<Partitioned<u64, u64>>
    where
        I: IntoIterator<Item = (u64, u64)>,
    {
        let mut frontier = Antichain::new();
        let mut prev = 0;
        for (pid, offset) in items {
            if prev < pid {
                frontier.insert(Partitioned::new_range(prev, pid - 1, 0));
            }
            frontier.insert(Partitioned::new_singleton(pid, offset));
            prev = pid + 1
        }
        frontier.insert(Partitioned::new_range(prev, u64::MAX, 0));
        frontier
    }

    #[mz_ore::test]
    fn test_basic_usage() {
        let as_of = Antichain::from_elem(IntoTime::minimum());
        harness(
            as_of,
            |worker, mut bindings, (mut data, data_cap), reclocked| {
                // Reclock offsets 1 and 3 to timestamp 1000
                bindings.update_at(Partitioned::minimum(), 0, Diff::ONE);
                bindings.update_at(Partitioned::minimum(), 1000, Diff::MINUS_ONE);
                for time in partitioned_frontier([(0, 4)]) {
                    bindings.update_at(time, 1000, Diff::ONE);
                }
                bindings.advance_to(1001);
                bindings.flush();
                data.session(data_cap.clone()).give_iterator(
                    vec![
                        (1, Partitioned::new_singleton(0, 1), Diff::ONE),
                        (1, Partitioned::new_singleton(0, 1), Diff::ONE),
                        (3, Partitioned::new_singleton(0, 3), Diff::ONE),
                    ]
                    .into_iter(),
                );

                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Messages(
                        0u64,
                        vec![
                            (1, 1000, Diff::ONE),
                            (1, 1000, Diff::ONE),
                            (3, 1000, Diff::ONE)
                        ]
                    ))
                );
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(0, -1), (1000, 1)]))
                );

                // Reclock more messages for offset 3 to the same timestamp
                data.session(data_cap.clone()).give_iterator(
                    vec![
                        (3, Partitioned::new_singleton(0, 3), Diff::ONE),
                        (3, Partitioned::new_singleton(0, 3), Diff::ONE),
                    ]
                    .into_iter(),
                );
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Messages(
                        1000u64,
                        vec![(3, 1000, Diff::ONE), (3, 1000, Diff::ONE)]
                    ))
                );

                // Drop the capability, which should advance the reclocked frontier to 1001.
                drop(data_cap);
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(1000, -1), (1001, 1)]))
                );
            },
        );
    }

    #[mz_ore::test]
    fn test_reclock_frontier() {
        let as_of = Antichain::from_elem(IntoTime::minimum());
        harness::<_, (), _, _>(
            as_of,
            |worker, mut bindings, (_data, data_cap), reclocked| {
                // Initialize the bindings such that the minimum IntoTime contains the minimum FromTime
                // frontier.
                bindings.update_at(Partitioned::minimum(), 0, Diff::ONE);
                bindings.advance_to(1);
                bindings.flush();
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(0, -1), (1, 1)]))
                );

                // Mint a couple of bindings for multiple partitions
                bindings.update_at(Partitioned::minimum(), 1000, Diff::MINUS_ONE);
                for time in partitioned_frontier([(1, 10)]) {
                    bindings.update_at(time.clone(), 1000, Diff::ONE);
                    bindings.update_at(time, 2000, Diff::MINUS_ONE);
                }
                for time in partitioned_frontier([(1, 10), (2, 10)]) {
                    bindings.update_at(time, 2000, Diff::ONE);
                }
                bindings.advance_to(2001);
                bindings.flush();

                // The initial frontier should now map to the minimum between the two partitions
                step(worker);
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(1, -1), (1000, 1)]))
                );

                // Downgrade the data frontier such that only one of the partitions is advanced
                let mut part1_cap = data_cap.delayed(&Partitioned::new_singleton(1, 9));
                let mut part2_cap = data_cap.delayed(&Partitioned::new_singleton(2, 0));
                let _rest_cap = data_cap.delayed(&Partitioned::new_range(3, u64::MAX, 0));
                drop(data_cap);
                step(worker);
                assert_eq!(reclocked.try_recv(), Err(TryRecvError::Empty));

                // Downgrade the data frontier past the first binding
                part1_cap.downgrade(&Partitioned::new_singleton(1, 10));
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(1000, -1), (2000, 1)]))
                );

                // Downgrade the data frontier past the second binding
                part2_cap.downgrade(&Partitioned::new_singleton(2, 10));
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(2000, -1), (2001, 1)]))
                );

                // Advance the binding frontier and confirm that we get to the next timestamp
                bindings.advance_to(3001);
                bindings.flush();
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(2001, -1), (3001, 1)]))
                );
            },
        );
    }

    #[mz_ore::test]
    fn test_reclock() {
        let as_of = Antichain::from_elem(IntoTime::minimum());
        harness(
            as_of,
            |worker, mut bindings, (mut data, data_cap), reclocked| {
                // Initialize the bindings such that the minimum IntoTime contains the minimum FromTime
                // frontier.
                bindings.update_at(Partitioned::minimum(), 0, Diff::ONE);

                // Set up more precise capabilities for the rest of the test
                let mut part0_cap = data_cap.delayed(&Partitioned::new_singleton(0, 0));
                let rest_cap = data_cap.delayed(&Partitioned::new_range(1, u64::MAX, 0));
                drop(data_cap);

                // Reclock offsets 1 and 2 to timestamp 1000
                data.session(part0_cap.clone()).give_iterator(
                    vec![
                        (1, Partitioned::new_singleton(0, 1), Diff::ONE),
                        (2, Partitioned::new_singleton(0, 2), Diff::ONE),
                    ]
                    .into_iter(),
                );

                part0_cap.downgrade(&Partitioned::new_singleton(0, 3));
                bindings.update_at(Partitioned::minimum(), 1000, Diff::MINUS_ONE);
                bindings.update_at(part0_cap.time().clone(), 1000, Diff::ONE);
                bindings.update_at(rest_cap.time().clone(), 1000, Diff::ONE);
                bindings.advance_to(1001);
                bindings.flush();
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Messages(
                        0,
                        vec![(1, 1000, Diff::ONE), (2, 1000, Diff::ONE)]
                    ))
                );
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(0, -1), (1000, 1)]))
                );
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(1000, -1), (1001, 1)]))
                );

                // Reclock offsets 3 and 4 to timestamp 2000
                data.session(part0_cap.clone()).give_iterator(
                    vec![
                        (3, Partitioned::new_singleton(0, 3), Diff::ONE),
                        (3, Partitioned::new_singleton(0, 3), Diff::ONE),
                        (4, Partitioned::new_singleton(0, 4), Diff::ONE),
                    ]
                    .into_iter(),
                );
                bindings.update_at(part0_cap.time().clone(), 2000, Diff::MINUS_ONE);
                part0_cap.downgrade(&Partitioned::new_singleton(0, 5));
                bindings.update_at(part0_cap.time().clone(), 2000, Diff::ONE);
                bindings.advance_to(2001);
                bindings.flush();
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Messages(
                        1001,
                        vec![
                            (3, 2000, Diff::ONE),
                            (3, 2000, Diff::ONE),
                            (4, 2000, Diff::ONE)
                        ]
                    ))
                );
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(1001, -1), (2000, 1)]))
                );
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(2000, -1), (2001, 1)]))
                );
            },
        );
    }

    #[mz_ore::test]
    fn test_reclock_gh16318() {
        let as_of = Antichain::from_elem(IntoTime::minimum());
        harness(
            as_of,
            |worker, mut bindings, (mut data, data_cap), reclocked| {
                // Initialize the bindings such that the minimum IntoTime contains the minimum FromTime
                // frontier.
                bindings.update_at(Partitioned::minimum(), 0, Diff::ONE);
                // First mint bindings for 0 at timestamp 1000
                bindings.update_at(Partitioned::minimum(), 1000, Diff::MINUS_ONE);
                for time in partitioned_frontier([(0, 50)]) {
                    bindings.update_at(time, 1000, Diff::ONE);
                }
                // Then only for 1 at timestamp 2000
                for time in partitioned_frontier([(0, 50)]) {
                    bindings.update_at(time, 2000, Diff::MINUS_ONE);
                }
                for time in partitioned_frontier([(0, 50), (1, 50)]) {
                    bindings.update_at(time, 2000, Diff::ONE);
                }
                // Then again only for 0 at timestamp 3000
                for time in partitioned_frontier([(0, 50), (1, 50)]) {
                    bindings.update_at(time, 3000, Diff::MINUS_ONE);
                }
                for time in partitioned_frontier([(0, 100), (1, 50)]) {
                    bindings.update_at(time, 3000, Diff::ONE);
                }
                bindings.advance_to(3001);
                bindings.flush();

                // Reclocking (0, 50) must ignore the updates on the FromTime frontier that
                // happened at timestamp 2000 since those are completely unrelated
                data.session(data_cap)
                    .give((50, Partitioned::new_singleton(0, 50), Diff::ONE));
                step(worker);
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Messages(0, vec![(50, 3000, Diff::ONE),]))
                );
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(0, -1), (1000, 1)]))
                );
                assert_eq!(
                    reclocked.try_recv(),
                    Ok(Event::Progress(vec![(1000, -1), (3001, 1)]))
                );
            },
        );
    }

    /// Test that compact(reclock(remap, source)) == reclock(compact(remap), source)
    #[mz_ore::test]
    fn test_compaction() {
        let mut remap = vec![];
        remap.push((Partitioned::minimum(), 0, Diff::ONE));
        // Reclock offsets 1 and 2 to timestamp 1000
        remap.push((Partitioned::minimum(), 1000, Diff::MINUS_ONE));
        for time in partitioned_frontier([(0, 3)]) {
            remap.push((time, 1000, Diff::ONE));
        }
        // Reclock offsets 3 and 4 to timestamp 2000
        for time in partitioned_frontier([(0, 3)]) {
            remap.push((time, 2000, Diff::MINUS_ONE));
        }
        for time in partitioned_frontier([(0, 5)]) {
            remap.push((time, 2000, Diff::ONE));
        }

        let source_updates = vec![
            (1, Partitioned::new_singleton(0, 1), Diff::ONE),
            (2, Partitioned::new_singleton(0, 2), Diff::ONE),
            (3, Partitioned::new_singleton(0, 3), Diff::ONE),
            (4, Partitioned::new_singleton(0, 4), Diff::ONE),
        ];

        let since = Antichain::from_elem(1500);

        // Compute reclock(remap, source)
        let as_of = Antichain::from_elem(IntoTime::minimum());
        let remap1 = remap.clone();
        let source_updates1 = source_updates.clone();
        let reclock_remap = harness(
            as_of,
            move |worker, mut bindings, (mut data, data_cap), reclocked| {
                for (from_ts, into_ts, diff) in remap1 {
                    bindings.update_at(from_ts, into_ts, diff);
                }
                bindings.close();
                data.session(data_cap)
                    .give_iterator(source_updates1.iter().cloned());
                step(worker);
                reclocked.extract()
            },
        );
        // Compute compact(reclock(remap, source))
        let mut compact_reclock_remap = reclock_remap;
        for (t, updates) in compact_reclock_remap.iter_mut() {
            t.advance_by(since.borrow());
            for (_, t, _) in updates.iter_mut() {
                t.advance_by(since.borrow());
            }
        }

        // Compute compact(remap)
        let mut compact_remap = remap;
        for (_, t, _) in compact_remap.iter_mut() {
            t.advance_by(since.borrow());
        }
        consolidation::consolidate_updates(&mut compact_remap);
        // Compute reclock(compact(remap), source)
        let reclock_compact_remap = harness(
            since,
            move |worker, mut bindings, (mut data, data_cap), reclocked| {
                for (from_ts, into_ts, diff) in compact_remap {
                    bindings.update_at(from_ts, into_ts, diff);
                }
                bindings.close();
                data.session(data_cap)
                    .give_iterator(source_updates.iter().cloned());
                step(worker);
                reclocked.extract()
            },
        );

        let expected = vec![(
            1500,
            vec![
                (1, 1500, Diff::ONE),
                (2, 1500, Diff::ONE),
                (3, 2000, Diff::ONE),
                (4, 2000, Diff::ONE),
            ],
        )];
        assert_eq!(expected, reclock_compact_remap);
        assert_eq!(expected, compact_reclock_remap);
    }

    #[mz_ore::test]
    fn test_chainbatch_merge() {
        let a = ChainBatch::from_iter([('a', 0, 1)]);
        let b = ChainBatch::from_iter([('a', 0, -1), ('a', 1, 1)]);
        assert_eq!(a.merge_with(b), ChainBatch::from_iter([('a', 1, 1)]));
    }

    #[mz_ore::test]
    #[cfg_attr(miri, ignore)] // too slow
    fn test_binding_consolidation() {
        use std::sync::atomic::Ordering;

        #[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)]
        struct Time(u64);

        // A counter of the number of active Time instances
        static INSTANCES: AtomicUsize = AtomicUsize::new(0);

        impl Time {
            fn new(time: u64) -> Self {
                INSTANCES.fetch_add(1, Ordering::Relaxed);
                Self(time)
            }
        }

        impl Clone for Time {
            fn clone(&self) -> Self {
                INSTANCES.fetch_add(1, Ordering::Relaxed);
                Self(self.0)
            }
        }

        impl Drop for Time {
            fn drop(&mut self) {
                INSTANCES.fetch_sub(1, Ordering::Relaxed);
            }
        }

        impl Timestamp for Time {
            type Summary = ();

            fn minimum() -> Self {
                Time::new(0)
            }
        }

        impl PathSummary<Time> for () {
            fn results_in(&self, src: &Time) -> Option<Time> {
                Some(src.clone())
            }

            fn followed_by(&self, _other: &()) -> Option<Self> {
                Some(())
            }
        }

        impl Refines<()> for Time {
            fn to_inner(_: ()) -> Self {
                Self::minimum()
            }
            fn to_outer(self) -> () {}
            fn summarize(_path: ()) {}
        }

        impl PartialOrder for Time {
            fn less_equal(&self, other: &Self) -> bool {
                self.0.less_equal(&other.0)
            }
        }

        let as_of = 1000;

        // Test that supplying a single big batch of unconsolidated bindings gets
        // consolidated after a single worker step.
        harness::<Time, u64, _, _>(
            Antichain::from_elem(as_of),
            move |worker, mut bindings, _, _| {
                step(worker);
                let instances_before = INSTANCES.load(Ordering::Relaxed);
                for ts in 0..as_of {
                    if ts > 0 {
                        bindings.update_at(Time::new(ts - 1), ts, Diff::MINUS_ONE);
                    }
                    bindings.update_at(Time::new(ts), ts, Diff::ONE);
                }
                bindings.advance_to(as_of);
                bindings.flush();
                step(worker);
                let instances_after = INSTANCES.load(Ordering::Relaxed);
                // The extra instances live in a ChangeBatch which considers compaction when more
                // than 32 elements are inside.
                assert!(instances_after - instances_before < 32);
            },
        );

        // Test that a slow feed of uncompacted bindings over multiple steps never leads to an
        // excessive number of bindings held in memory.
        harness::<Time, u64, _, _>(
            Antichain::from_elem(as_of),
            move |worker, mut bindings, _, _| {
                step(worker);
                let instances_before = INSTANCES.load(Ordering::Relaxed);
                for ts in 0..as_of {
                    if ts > 0 {
                        bindings.update_at(Time::new(ts - 1), ts, Diff::MINUS_ONE);
                    }
                    bindings.update_at(Time::new(ts), ts, Diff::ONE);
                    bindings.advance_to(ts + 1);
                    bindings.flush();
                    step(worker);
                    let instances_now = INSTANCES.load(Ordering::Relaxed);
                    // The extra instances live in a ChangeBatch which considers compaction when
                    // more than 32 elements are inside.
                    assert!(instances_now - instances_before < 32);
                }
            },
        );
    }

    #[cfg(feature = "count-allocations")]
    #[mz_ore::test]
    #[cfg_attr(miri, ignore)] // too slow
    fn test_shrinking() {
        let as_of = 1000_u64;

        // This workflow accumulates updates in remap_trace, advances the source frontier, and
        // validates that memory was reclaimed. To avoid errant test failures due to
        // optimizations, this only validates that memory is reclaimed, not how much.
        harness::<FromTime, u64, _, _>(
            Antichain::from_elem(0),
            move |worker, mut bindings, (_data, mut data_cap), _| {
                let info1 = allocation_counter::measure(|| {
                    step(worker);
                    for ts in 0..as_of {
                        if ts > 0 {
                            bindings.update_at(
                                Partitioned::new_singleton(0, ts - 1),
                                ts,
                                Diff::MINUS_ONE,
                            );
                        }
                        bindings.update_at(Partitioned::new_singleton(0, ts), ts, Diff::ONE);
                        bindings.advance_to(ts + 1);
                        bindings.flush();
                        step(worker);
                    }
                });
                println!("info = {info1:?}");

                let info2 = allocation_counter::measure(|| {
                    data_cap.downgrade(&Partitioned::new_singleton(0, as_of));
                    step(worker);
                });
                println!("info = {info2:?}");
                assert!(info2.bytes_current < 0);
            },
        );
    }
}