differential_dataflow/trace/implementations/spine_fueled.rs

//! An append-only collection of update batches.
//!
//! The `Spine` is a general-purpose trace implementation based on collection and merging
//! immutable batches of updates. It is generic with respect to the batch type, and can be
//! instantiated for any implementor of `trace::Batch`.
//!
//! ## Design
//!
//! This spine is represented as a list of layers, where each element in the list is either
//!
//!   1. MergeState::Vacant  empty
//!   2. MergeState::Single  a single batch
//!   3. MergeState::Double  a pair of batches
//!
//! Each "batch" has the option to be `None`, indicating a non-batch that nonetheless acts
//! as a number of updates proportionate to the level at which it exists (for bookkeeping).
//!
//! Each of the batches at layer i contains at most 2^i elements. The sequence of batches
//! should have the upper bound of one match the lower bound of the next. Batches may be
//! logically empty, with matching upper and lower bounds, as a bookkeeping mechanism.
//!
//! Each batch at layer i is treated as if it contains exactly 2^i elements, even though it
//! may actually contain fewer elements. This allows us to decouple the physical representation
//! from logical amounts of effort invested in each batch. It allows us to begin compaction and
//! to reduce the number of updates, without compromising our ability to continue to move
//! updates along the spine. We are explicitly making the trade-off that while some batches
//! might compact at lower levels, we want to treat them as if they contained their full set of
//! updates for accounting reasons (to apply work to higher levels).
//!
//! We maintain the invariant that for any in-progress merge at level k there should be fewer
//! than 2^k records at levels lower than k. That is, even if we were to apply an unbounded
//! amount of effort to those records, we would not have enough records to prompt a merge into
//! the in-progress merge. Ideally, we maintain the extended invariant that for any in-progress
//! merge at level k, the remaining effort required (number of records minus applied effort) is
//! less than the number of records that would need to be added to reach 2^k records in layers
//! below.
//!
//! ## Mathematics
//!
//! When a merge is initiated, there should be a non-negative *deficit* of updates before the layers
//! below could plausibly produce a new batch for the currently merging layer. We must determine a
//! factor of proportionality, so that newly arrived updates provide at least that amount of "fuel"
//! towards the merging layer, so that the merge completes before lower levels invade.
//!
//! ### Deficit:
//!
//! A new merge is initiated only in response to the completion of a prior merge, or the introduction
//! of new records from outside. The latter case is special, and will maintain our invariant trivially,
//! so we will focus on the former case.
50//!
51//! When a merge at level k completes, assuming we have maintained our invariant then there should be
52//! fewer than 2^k records at lower levels. The newly created merge at level k+1 will require up to
53//! 2^k+2 units of work, and should not expect a new batch until strictly more than 2^k records are
54//! added. This means that a factor of proportionality of four should be sufficient to ensure that
55//! the merge completes before a new merge is initiated.
56//!
57//! When new records get introduced, we will need to roll up any batches at lower levels, which we
58//! treat as the introduction of records. Each of these virtual records introduced should either be
59//! accounted for the fuel it should contribute, as it results in the promotion of batches closer to
60//! in-progress merges.
//!
//! ### Fuel sharing
//!
//! We like the idea of applying fuel preferentially to merges at *lower* levels, under the idea that
//! they are easier to complete, and we benefit from fewer total merges in progress. This does delay
//! the completion of merges at higher levels, and may not obviously be a total win. If we choose to
//! do this, we should make sure that we correctly account for completed merges at low layers: they
//! should still extract fuel from new updates even though they have completed, at least until they
//! have paid back any "debt" to higher layers by continuing to provide fuel as updates arrive.


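The "factor of four" argument above can be checked with a standalone sketch (hypothetical, not part of this module): a merge at level k+1 needs at most 2^(k+2) units of work, while a new batch arrives only after strictly more than 2^k records accumulate below.

```rust
// Hypothetical check of the deficit arithmetic from the module docs.
fn main() {
    for k in 0..30u32 {
        // The merge at level k+1 combines two batches accounted as
        // 2^(k+1) records each, so up to 2^(k+2) units of work.
        let work_required = 1u64 << (k + 2);
        // A new batch reaches level k+1 only after strictly more than
        // 2^k records arrive at lower levels.
        let records_before_next_batch = 1u64 << k;
        // Four units of fuel per introduced record cover the merge.
        assert!(4 * records_before_next_batch >= work_required);
    }
    println!("four units of fuel per record suffice");
}
```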
use crate::logging::Logger;
use crate::trace::{Batch, BatchReader, Trace, TraceReader, ExertionLogic};
use crate::trace::cursor::CursorList;
use crate::trace::Merger;

use ::timely::dataflow::operators::generic::OperatorInfo;
use ::timely::progress::{Antichain, frontier::AntichainRef};
use ::timely::order::PartialOrder;

/// An append-only collection of update tuples.
///
/// A spine maintains a small number of immutable collections of update tuples, merging the collections when
/// two have similar sizes. In this way, it allows the addition of more tuples, which may then be merged with
/// other immutable collections.
pub struct Spine<B: Batch> {
    operator: OperatorInfo,
    logger: Option<Logger>,
    logical_frontier: Antichain<B::Time>,   // Times after which the trace must accumulate correctly.
    physical_frontier: Antichain<B::Time>,  // Times after which the trace must be able to subset its inputs.
    merging: Vec<MergeState<B>>,            // Several possibly shared collections of updates.
    pending: Vec<B>,                        // Batches at times in advance of `frontier`.
    upper: Antichain<B::Time>,
    effort: usize,
    activator: Option<timely::scheduling::activate::Activator>,
    /// Parameters to `exert_logic`, containing tuples of `(index, count, length)`.
    exert_logic_param: Vec<(usize, usize, usize)>,
    /// Logic to indicate whether and how many records we should introduce in the absence of actual updates.
    exert_logic: Option<ExertionLogic>,
}

use crate::trace::WithLayout;
impl<B: Batch> WithLayout for Spine<B> {
    type Layout = B::Layout;
}

impl<B: Batch+Clone+'static> TraceReader for Spine<B> {

    type Batch = B;
    type Storage = Vec<B>;
    type Cursor = CursorList<<B as BatchReader>::Cursor>;

    fn cursor_through(&mut self, upper: AntichainRef<Self::Time>) -> Option<(Self::Cursor, Self::Storage)> {

        // If `upper` is the minimum frontier, we can return an empty cursor.
        // This can happen with operators that are written to expect the ability to acquire cursors
        // for their prior frontiers, and which start at `[T::minimum()]`, such as `Reduce`, sadly.
        if upper.less_equal(&<Self::Time as timely::progress::Timestamp>::minimum()) {
            let cursors = Vec::new();
            let storage = Vec::new();
            return Some((CursorList::new(cursors, &storage), storage));
        }

        // The supplied `upper` should have the property that for each of our
        // batch `lower` and `upper` frontiers, the supplied upper is comparable
        // to the frontier; it should not be incomparable, because the frontiers
        // that we created form a total order. If it is, there is a bug.
        //
        // We should acquire a cursor including all batches whose upper is less
        // or equal to the supplied upper, excluding all batches whose lower is
        // greater or equal to the supplied upper, and if a batch straddles the
        // supplied upper it had better be empty.

        // We shouldn't grab a cursor into a closed trace, right?
        assert!(self.logical_frontier.borrow().len() > 0);

        // Check that `upper` is greater or equal to `self.physical_frontier`.
        // Otherwise, the cut could be in `self.merging` and it is user error anyhow.
        // assert!(upper.iter().all(|t1| self.physical_frontier.iter().any(|t2| t2.less_equal(t1))));
        assert!(PartialOrder::less_equal(&self.physical_frontier.borrow(), &upper));

        let mut cursors = Vec::new();
        let mut storage = Vec::new();

        for merge_state in self.merging.iter().rev() {
            match merge_state {
                MergeState::Double(variant) => {
                    match variant {
                        MergeVariant::InProgress(batch1, batch2, _) => {
                            if !batch1.is_empty() {
                                cursors.push(batch1.cursor());
                                storage.push(batch1.clone());
                            }
                            if !batch2.is_empty() {
                                cursors.push(batch2.cursor());
                                storage.push(batch2.clone());
                            }
                        },
                        MergeVariant::Complete(Some((batch, _))) => {
                            if !batch.is_empty() {
                                cursors.push(batch.cursor());
                                storage.push(batch.clone());
                            }
                        }
                        MergeVariant::Complete(None) => { },
                    }
                },
                MergeState::Single(Some(batch)) => {
                    if !batch.is_empty() {
                        cursors.push(batch.cursor());
                        storage.push(batch.clone());
                    }
                },
                MergeState::Single(None) => { },
                MergeState::Vacant => { },
            }
        }

        for batch in self.pending.iter() {

            if !batch.is_empty() {

                // For a non-empty `batch`, it is a catastrophic error if `upper`
                // requires some-but-not-all of the updates in the batch. We can
                // determine this from `upper` and the lower and upper bounds of
                // the batch itself.
                //
                // TODO: It is not clear if this is the 100% correct logic, due
                // to the possible non-total-orderedness of the frontiers.

                let include_lower = PartialOrder::less_equal(&batch.lower().borrow(), &upper);
                let include_upper = PartialOrder::less_equal(&batch.upper().borrow(), &upper);

                if include_lower != include_upper && upper != batch.lower().borrow() {
                    panic!("`cursor_through`: `upper` straddles batch");
                }

                // include pending batches
                if include_upper {
                    cursors.push(batch.cursor());
                    storage.push(batch.clone());
                }
            }
        }

        Some((CursorList::new(cursors, &storage), storage))
    }
    #[inline]
    fn set_logical_compaction(&mut self, frontier: AntichainRef<B::Time>) {
        self.logical_frontier.clear();
        self.logical_frontier.extend(frontier.iter().cloned());
    }
    #[inline]
    fn get_logical_compaction(&mut self) -> AntichainRef<'_, B::Time> { self.logical_frontier.borrow() }
    #[inline]
    fn set_physical_compaction(&mut self, frontier: AntichainRef<'_, B::Time>) {
        // We should never request to rewind the frontier.
        debug_assert!(PartialOrder::less_equal(&self.physical_frontier.borrow(), &frontier), "FAIL\tthrough frontier !<= new frontier {:?} {:?}\n", self.physical_frontier, frontier);
        self.physical_frontier.clear();
        self.physical_frontier.extend(frontier.iter().cloned());
        self.consider_merges();
    }
    #[inline]
    fn get_physical_compaction(&mut self) -> AntichainRef<'_, B::Time> { self.physical_frontier.borrow() }

    #[inline]
    fn map_batches<F: FnMut(&Self::Batch)>(&self, mut f: F) {
        for batch in self.merging.iter().rev() {
            match batch {
                MergeState::Double(MergeVariant::InProgress(batch1, batch2, _)) => { f(batch1); f(batch2); },
                MergeState::Double(MergeVariant::Complete(Some((batch, _)))) => { f(batch) },
                MergeState::Single(Some(batch)) => { f(batch) },
                _ => { },
            }
        }
        for batch in self.pending.iter() {
            f(batch);
        }
    }
}
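As a hypothetical illustration of the straddle check performed in `cursor_through` above, with plain integers standing in for the totally ordered frontiers:

```rust
// A sketch (with invented integer bounds) of the straddle check: a cut
// that admits a batch's lower bound but not its upper bound lands
// strictly inside the batch.
fn main() {
    let (lower, upper) = (4u64, 8u64); // a batch covering times [4, 8)
    let cut = 6u64;                    // the requested `upper` frontier
    let include_lower = lower <= cut;  // true: the cut is past `lower`
    let include_upper = upper <= cut;  // false: the cut is before `upper`
    // Mismatched inclusion means the cut straddles the batch, which
    // `cursor_through` treats as a panic for a non-empty batch.
    assert!(include_lower != include_upper);
    println!("cut {} straddles [{}, {})", cut, lower, upper);
}
```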

// A trace implementation for any key type that can be borrowed from or converted into `Key`.
// TODO: Almost all this implementation seems to be generic with respect to the trace and batch types.
impl<B: Batch+Clone+'static> Trace for Spine<B> {
    fn new(
        info: ::timely::dataflow::operators::generic::OperatorInfo,
        logging: Option<crate::logging::Logger>,
        activator: Option<timely::scheduling::activate::Activator>,
    ) -> Self {
        Self::with_effort(1, info, logging, activator)
    }

    /// Apply some amount of effort to trace maintenance.
    ///
    /// Whether and how much effort to apply is determined by `self.exert_logic`, a closure the user can set.
    fn exert(&mut self) {
        // If there is work to be done, ...
        self.tidy_layers();
        // Determine whether we should apply effort independent of updates.
        if let Some(effort) = self.exert_effort() {

            // If any merges exist, we can directly call `apply_fuel`.
            if self.merging.iter().any(|b| b.is_double()) {
                self.apply_fuel(&mut (effort as isize));
            }
            // Otherwise, we'll need to introduce fake updates to move merges along.
            else {
                // Introduce an empty batch with roughly `effort` many virtual updates.
                let level = effort.next_power_of_two().trailing_zeros() as usize;
                self.introduce_batch(None, level);
            }
            // We were not in reduced form, so let's check again in the future.
            if let Some(activator) = &self.activator {
                activator.activate();
            }
        }
    }

    fn set_exert_logic(&mut self, logic: ExertionLogic) {
        self.exert_logic = Some(logic);
    }

    // Ideally, this method acts as insertion of `batch`, even if we are not yet able to begin
    // merging the batch. This means it is a good time to perform amortized work proportional
    // to the size of batch.
    fn insert(&mut self, batch: Self::Batch) {

        // Log the introduction of a batch.
        self.logger.as_ref().map(|l| l.log(crate::logging::BatchEvent {
            operator: self.operator.global_id,
            length: batch.len()
        }));

        assert!(batch.lower() != batch.upper());
        assert_eq!(batch.lower(), &self.upper);

        self.upper.clone_from(batch.upper());

        // TODO: Consolidate or discard empty batches.
        self.pending.push(batch);
        self.consider_merges();
    }

    /// Completes the trace with a final empty batch.
    fn close(&mut self) {
        if !self.upper.borrow().is_empty() {
            self.insert(B::empty(self.upper.clone(), Antichain::new()));
        }
    }
}

// Drop implementation allows us to log batch drops, to zero out maintained totals.
impl<B: Batch> Drop for Spine<B> {
    fn drop(&mut self) {
        self.drop_batches();
    }
}


impl<B: Batch> Spine<B> {
    /// Drops and logs batches. Used in `set_logical_compaction` and drop.
    fn drop_batches(&mut self) {
        if let Some(logger) = &self.logger {
            for batch in self.merging.drain(..) {
                match batch {
                    MergeState::Single(Some(batch)) => {
                        logger.log(crate::logging::DropEvent {
                            operator: self.operator.global_id,
                            length: batch.len(),
                        });
                    },
                    MergeState::Double(MergeVariant::InProgress(batch1, batch2, _)) => {
                        logger.log(crate::logging::DropEvent {
                            operator: self.operator.global_id,
                            length: batch1.len(),
                        });
                        logger.log(crate::logging::DropEvent {
                            operator: self.operator.global_id,
                            length: batch2.len(),
                        });
                    },
                    MergeState::Double(MergeVariant::Complete(Some((batch, _)))) => {
                        logger.log(crate::logging::DropEvent {
                            operator: self.operator.global_id,
                            length: batch.len(),
                        });
                    }
                    _ => { },
                }
            }
            for batch in self.pending.drain(..) {
                logger.log(crate::logging::DropEvent {
                    operator: self.operator.global_id,
                    length: batch.len(),
                });
            }
        }
    }
}

impl<B: Batch> Spine<B> {
    /// Determine the amount of effort we should exert in the absence of updates.
    ///
    /// This method prepares an iterator over batches, including the level, count, and length of each layer.
    /// It supplies this to `self.exert_logic`, which produces the amount of exertion to apply.
    fn exert_effort(&mut self) -> Option<usize> {
        self.exert_logic.as_ref().and_then(|exert_logic| {
            self.exert_logic_param.clear();
            self.exert_logic_param.extend(self.merging.iter().enumerate().rev().map(|(index, batch)| {
                match batch {
                    MergeState::Vacant => (index, 0, 0),
                    MergeState::Single(_) => (index, 1, batch.len()),
                    MergeState::Double(_) => (index, 2, batch.len()),
                }
            }));

            (exert_logic)(&self.exert_logic_param[..])
        })
    }

    /// Describes the merge progress of layers in the trace.
    ///
    /// Intended for diagnostics rather than public consumption.
    #[allow(dead_code)]
    fn describe(&self) -> Vec<(usize, usize)> {
        self.merging
            .iter()
            .map(|b| match b {
                MergeState::Vacant => (0, 0),
                x @ MergeState::Single(_) => (1, x.len()),
                x @ MergeState::Double(_) => (2, x.len()),
            })
            .collect()
    }

    /// Allocates a fueled `Spine` with a specified effort multiplier.
    ///
    /// This trace will merge batches progressively, with each inserted batch applying a multiple
    /// of the batch's length in effort to each merge. The `effort` parameter is that multiplier.
    /// This value should be at least one for the merging to happen; a value of zero is not helpful.
    pub fn with_effort(
        mut effort: usize,
        operator: OperatorInfo,
        logger: Option<crate::logging::Logger>,
        activator: Option<timely::scheduling::activate::Activator>,
    ) -> Self {

        // Zero effort is .. not smart.
        if effort == 0 { effort = 1; }

        Spine {
            operator,
            logger,
            logical_frontier: Antichain::from_elem(<B::Time as timely::progress::Timestamp>::minimum()),
            physical_frontier: Antichain::from_elem(<B::Time as timely::progress::Timestamp>::minimum()),
            merging: Vec::new(),
            pending: Vec::new(),
            upper: Antichain::from_elem(<B::Time as timely::progress::Timestamp>::minimum()),
            effort,
            activator,
            exert_logic_param: Vec::default(),
            exert_logic: None,
        }
    }

    /// Migrate data from `self.pending` into `self.merging`.
    ///
    /// This method reflects on the bookmarks held by others that may prevent merging, and in the
    /// case that new batches can be introduced to the pile of mergeable batches, it does so.
    #[inline(never)]
    fn consider_merges(&mut self) {

        // TODO: Consider merging pending batches before introducing them.
        // TODO: We could use a `VecDeque` here to draw from the front and append to the back.
        while !self.pending.is_empty() && PartialOrder::less_equal(self.pending[0].upper(), &self.physical_frontier)
            //   self.physical_frontier.iter().all(|t1| self.pending[0].upper().iter().any(|t2| t2.less_equal(t1)))
        {
            // The batch may be taken by the optimized insertion below;
            // otherwise it is inserted normally at the end of the method.
            let mut batch = Some(self.pending.remove(0));

            // If `batch` and the most recently inserted batch are both empty, we can just fuse them.
            // We can also replace a structurally empty batch with this empty batch, preserving the
            // apparent record count but now with non-trivial lower and upper bounds.
            if batch.as_ref().unwrap().len() == 0 {
                if let Some(position) = self.merging.iter().position(|m| !m.is_vacant()) {
                    if self.merging[position].is_single() && self.merging[position].len() == 0 {
                        self.insert_at(batch.take(), position);
                        let merged = self.complete_at(position);
                        self.merging[position] = MergeState::Single(merged);
                    }
                }
            }

            // Normal insertion for the batch.
            if let Some(batch) = batch {
                let index = batch.len().next_power_of_two();
                self.introduce_batch(Some(batch), index.trailing_zeros() as usize);
            }
        }

        // Having performed all of our work, if more work remains we reschedule ourselves.
        if self.exert_effort().is_some() {
            if let Some(activator) = &self.activator {
                activator.activate();
            }
        }
    }
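The layer chosen for a freshly introduced batch can be sanity-checked in isolation (a hypothetical sketch, not part of the module): a batch of `len` records lands at the smallest layer i with len <= 2^i.

```rust
// Sketch of the level computation used when introducing a batch.
fn main() {
    for len in [1usize, 2, 3, 1000, 1 << 20] {
        let level = len.next_power_of_two().trailing_zeros() as usize;
        assert!(len <= 1 << level);                    // fits at `level`
        assert!(level == 0 || len > 1 << (level - 1)); // but not below it
    }
    println!("level selection consistent");
}
```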

    /// Introduces a batch at an indicated level.
    ///
    /// The level indication is often related to the size of the batch, but
    /// it can also be used to artificially fuel the computation by supplying
    /// empty batches at non-trivial indices, to move merges along.
    pub fn introduce_batch(&mut self, batch: Option<B>, batch_index: usize) {

        // Step 0.  Determine an amount of fuel to use for the computation.
        //
        //          Fuel is used to drive maintenance of the data structure,
        //          and in particular is used to make progress through merges
        //          that are in progress. The amount of fuel to use should be
        //          proportional to the number of records introduced, so that
        //          we are guaranteed to complete all merges before they are
        //          required as arguments to merges again.
        //
        //          The fuel use policy is negotiable, in that we might aim
        //          to use relatively less when we can, so that we return
        //          control promptly, or we might account more work to larger
        //          batches. Not clear to me which is best, or if there
        //          should be a configuration knob controlling this.

        // The amount of fuel to use is proportional to 2^batch_index, scaled
        // by a factor of self.effort which determines how eager we are in
        // performing maintenance work. We need to ensure that each merge in
        // progress receives fuel for each introduced batch, and so multiply
        // by that as well.
        if batch_index > 32 { println!("Large batch index: {}", batch_index); }

        // We believe that eight units of fuel is sufficient for each introduced
        // record, accounted as four for each record, and a potential four more
        // for each virtual record associated with promoting existing smaller
        // batches. We could try and make this be less, or be scaled to merges
        // based on their deficit at time of instantiation. For now, we remain
        // conservative.
        let mut fuel = 8 << batch_index;
        // Scale up by the effort parameter, which is calibrated to one as the
        // minimum amount of effort.
        fuel *= self.effort;
        // Convert to an `isize` so we can observe any fuel shortfall.
        let mut fuel = fuel as isize;

        // Step 1.  Apply fuel to each in-progress merge.
        //
        //          Before we can introduce new updates, we must apply any
        //          fuel to in-progress merges, as this fuel is what ensures
        //          that the merges will be complete by the time we insert
        //          the updates.
        self.apply_fuel(&mut fuel);

        // Step 2.  We must ensure that the invariant that adjacent layers do
        //          not contain two batches will be satisfied when we insert
        //          the batch. We do so by forcibly completing all merges at
        //          layers lower than and including `batch_index`, so that the
        //          new batch is inserted into an empty layer.
        //
        //          We could relax this to "strictly less than `batch_index`"
        //          if the layer above has only a single batch in it, which
        //          seems not implausible if it has been the focus of effort.
        //
        //          This should be interpreted as the introduction of some
        //          volume of fake updates, and we will need to fuel merges
        //          by a proportional amount to ensure that they are not
        //          surprised later on. The number of fake updates should
        //          correspond to the deficit for the layer, which perhaps
        //          we should track explicitly.
        self.roll_up(batch_index);

        // Step 3. This insertion should be into an empty layer. It is a
        //         logical error otherwise, as we may be violating our
        //         invariant, from which all wonderment derives.
        self.insert_at(batch, batch_index);

        // Step 4. Tidy the largest layers.
        //
        //         It is important that we not tidy only smaller layers,
        //         as their ascension is what ensures the merging and
        //         eventual compaction of the largest layers.
        self.tidy_layers();
    }
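Step 0's fuel arithmetic can be traced on its own (a hypothetical sketch; the `effort` and `batch_index` values are invented stand-ins):

```rust
// Sketch of the fuel computation in `introduce_batch`: eight units per
// accounted record at `batch_index`, scaled by the effort multiplier.
fn main() {
    let effort = 1usize;        // the minimum calibrated effort
    let batch_index = 10usize;  // a batch accounted as 2^10 records
    let fuel = (8usize << batch_index) * effort;
    assert_eq!(fuel, 8 * 1024);
    // Signed, so an overspent merge step can leave a visible deficit.
    let fuel = fuel as isize;
    assert!(fuel > 0);
    println!("fuel = {}", fuel);
}
```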

    /// Ensures that an insertion at layer `index` will succeed.
    ///
    /// This method is subject to the constraint that all existing batches
    /// should occur at higher levels, which requires it to "roll up" batches
    /// present at lower levels before the method is called. In doing this,
    /// we should not introduce more virtual records than 2^index, as that
    /// is the amount of excess fuel we have budgeted for completing merges.
    fn roll_up(&mut self, index: usize) {

        // Ensure entries sufficient for `index`.
        while self.merging.len() <= index {
            self.merging.push(MergeState::Vacant);
        }

        // We only need to roll up if there are non-vacant layers.
        if self.merging[.. index].iter().any(|m| !m.is_vacant()) {

            // Collect and merge all batches at layers up to but not including `index`.
            let mut merged = None;
            for i in 0 .. index {
                self.insert_at(merged, i);
                merged = self.complete_at(i);
            }

            // The merged results should be introduced at level `index`, which should
            // be ready to absorb them (possibly creating a new merge at the time).
            self.insert_at(merged, index);

            // If the insertion results in a merge, we should complete it to ensure
            // the upcoming insertion at `index` does not panic.
            if self.merging[index].is_double() {
                let merged = self.complete_at(index);
                self.insert_at(merged, index + 1);
            }
        }
    }

    /// Applies an amount of fuel to merges in progress.
    ///
    /// The supplied `fuel` is for each in-progress merge, and if we want to spend
    /// the fuel non-uniformly (e.g. prioritizing merges at low layers) we could do
    /// so in order to maintain fewer batches on average (at the risk of completing
    /// merges of large batches later, but in truth probably not much later).
    pub fn apply_fuel(&mut self, fuel: &mut isize) {
        // For the moment our strategy is to apply fuel independently to each merge
        // in progress, rather than prioritizing small merges. This sounds like a
        // great idea, but we need better accounting in place to ensure that merges
        // that borrow against later layers but then complete still "acquire" fuel
        // to pay back their debts.
        for index in 0 .. self.merging.len() {
            // Give each level independent fuel, for now.
            let mut fuel = *fuel;
            // Pass along various logging stuffs, in case we need to report success.
            self.merging[index].work(&mut fuel);
            // `fuel` could have a deficit at this point, meaning we over-spent when
            // we took a merge step. We could ignore this, or maintain the deficit
            // and account future fuel against it before spending again. It isn't
            // clear why that would be especially helpful to do; we might want to
            // avoid overspends at multiple layers in the same invocation (to limit
            // latencies), but there is probably a rich policy space here.

            // If a merge completes, we can immediately merge it into the next
            // level, which is "guaranteed" to be complete at this point, by our
            // fueling discipline.
            if self.merging[index].is_complete() {
                let complete = self.complete_at(index);
                self.insert_at(complete, index+1);
            }
        }
    }
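The per-level fuel discipline above can be illustrated with a toy budget (all numbers invented): each level gets its own copy of the budget, and a merge step may overspend it.

```rust
// Sketch of per-level fuel: the signed type lets an overspent merge
// step leave a visible deficit rather than saturating at zero.
fn main() {
    let budget: isize = 100;
    let step_costs = [40isize, 80]; // costs of successive merge steps
    let mut fuel = budget;
    for cost in step_costs {
        if fuel <= 0 { break; }     // stop once the budget is exhausted
        fuel -= cost;
    }
    assert_eq!(fuel, -20);          // the final step overspent by 20
    println!("remaining fuel: {}", fuel);
}
```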
621
622    /// Inserts a batch at a specific location.
623    ///
624    /// This is a non-public internal method that can panic if we try and insert into a
625    /// layer which already contains two batches (and is still in the process of merging).
626    fn insert_at(&mut self, batch: Option<B>, index: usize) {
627        // Ensure the spine is large enough.
628        while self.merging.len() <= index {
629            self.merging.push(MergeState::Vacant);
630        }

        // Insert the batch at the location.
        match self.merging[index].take() {
            MergeState::Vacant => {
                self.merging[index] = MergeState::Single(batch);
            }
            MergeState::Single(old) => {
                // Log the initiation of a merge.
                self.logger.as_ref().map(|l| l.log(
                    crate::logging::MergeEvent {
                        operator: self.operator.global_id,
                        scale: index,
                        length1: old.as_ref().map(|b| b.len()).unwrap_or(0),
                        length2: batch.as_ref().map(|b| b.len()).unwrap_or(0),
                        complete: None,
                    }
                ));
                let compaction_frontier = self.logical_frontier.borrow();
                self.merging[index] = MergeState::begin_merge(old, batch, compaction_frontier);
            }
            MergeState::Double(_) => {
                panic!("Attempted to insert batch into incomplete merge!")
            }
        };
    }
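The layer transitions `insert_at` performs can be modeled in isolation: Vacant becomes Single on the first insert, Single becomes Double (a merge begins) on the second, and a third insert is a logic error. A simplified model with hypothetical types, not the crate's:

```rust
/// Simplified stand-in for a layer's state, with batches as plain strings.
#[derive(Debug, PartialEq)]
enum ToyLayer {
    Vacant,
    Single(&'static str),
    Double(&'static str, &'static str),
}

/// Mirrors the match in `insert_at`: fill a vacancy, start a merge,
/// or panic if a merge is already in flight.
fn toy_insert(layer: ToyLayer, batch: &'static str) -> ToyLayer {
    match layer {
        ToyLayer::Vacant => ToyLayer::Single(batch),
        ToyLayer::Single(old) => ToyLayer::Double(old, batch),
        ToyLayer::Double(_, _) => panic!("Attempted to insert batch into incomplete merge!"),
    }
}

fn main() {
    let layer = toy_insert(ToyLayer::Vacant, "a");
    assert_eq!(layer, ToyLayer::Single("a"));
    let layer = toy_insert(layer, "b");
    assert_eq!(layer, ToyLayer::Double("a", "b"));
}
```

The fueling discipline exists precisely so the Double arm is unreachable: any merge at a level is driven to completion before another batch can arrive there.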

    /// Completes and extracts whatever is at layer `index`.
    fn complete_at(&mut self, index: usize) -> Option<B> {
        if let Some((merged, inputs)) = self.merging[index].complete() {
            if let Some((input1, input2)) = inputs {
                // Log the completion of a merge from existing parts.
                self.logger.as_ref().map(|l| l.log(
                    crate::logging::MergeEvent {
                        operator: self.operator.global_id,
                        scale: index,
                        length1: input1.len(),
                        length2: input2.len(),
                        complete: Some(merged.len()),
                    }
                ));
            }
            Some(merged)
        }
        else {
            None
        }
    }

    /// Attempts to draw down large layers to size-appropriate layers.
    fn tidy_layers(&mut self) {

        // If the largest layer is complete (not merging), we can attempt
        // to draw it down to the next layer. This is permitted if we can
        // maintain our invariant that below each merge there are at most
        // half the records that would be required to invade the merge.
        if !self.merging.is_empty() {
            let mut length = self.merging.len();
            if self.merging[length-1].is_single() {

                // To move a batch down, we require that it contain few
                // enough records that the lower level is appropriate,
                // and that moving the batch would not create a merge
                // violating our invariant.

                let appropriate_level = self.merging[length-1].len().next_power_of_two().trailing_zeros() as usize;

                // Continue only as far as is appropriate.
                while appropriate_level < length-1 {

                    match self.merging[length-2].take() {
                        // Vacant or structurally empty batches can be absorbed.
                        MergeState::Vacant | MergeState::Single(None) => {
                            self.merging.remove(length-2);
                            length = self.merging.len();
                        }
                        // Single batches may initiate a merge, if sizes are
                        // within bounds, but terminate the loop either way.
                        MergeState::Single(Some(batch)) => {

                            // Determine the number of records that might lead
                            // to a merge. Importantly, this is not the number
                            // of actual records, but the sum of upper bounds
                            // based on indices.
                            let mut smaller = 0;
                            for (index, batch) in self.merging[..(length-2)].iter().enumerate() {
                                match batch {
                                    MergeState::Vacant => { },
                                    MergeState::Single(_) => { smaller += 1 << index; },
                                    MergeState::Double(_) => { smaller += 2 << index; },
                                }
                            }

                            if smaller <= (1 << length) / 8 {
                                self.merging.remove(length-2);
                                self.insert_at(Some(batch), length-2);
                            }
                            else {
                                self.merging[length-2] = MergeState::Single(Some(batch));
                            }
                            return;
                        }
                        // If a merge is in progress there is nothing to do.
                        MergeState::Double(state) => {
                            self.merging[length-2] = MergeState::Double(state);
                            return;
                        }
                    }
                }
            }
        }
    }
}
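The index arithmetic `tidy_layers` relies on can be checked in isolation: a batch of `len` records belongs at the smallest level `i` with `len <= 2^i`, computed as `len.next_power_of_two().trailing_zeros()`, and each occupied level contributes its upper bound (not its actual record count) to the accounting. A sketch with hypothetical helper names:

```rust
/// Smallest level i such that len <= 2^i, as used to place batches.
fn appropriate_level(len: usize) -> usize {
    len.next_power_of_two().trailing_zeros() as usize
}

/// Sum of per-level upper bounds: a Single at level i counts as 2^i records,
/// a Double as 2 * 2^i, matching the `smaller` accounting in `tidy_layers`.
fn logical_records(singles: &[usize], doubles: &[usize]) -> usize {
    singles.iter().map(|&i| 1usize << i).sum::<usize>()
        + doubles.iter().map(|&i| 2usize << i).sum::<usize>()
}

fn main() {
    assert_eq!(appropriate_level(1), 0);     // 1 <= 2^0
    assert_eq!(appropriate_level(2), 1);     // 2 <= 2^1
    assert_eq!(appropriate_level(3), 2);     // 3 <= 2^2
    assert_eq!(appropriate_level(1000), 10); // 1000 <= 2^10 = 1024
    // A Single at level 3 and a Double at level 2 account for 8 + 8 = 16 records.
    assert_eq!(logical_records(&[3], &[2]), 16);
}
```

Counting upper bounds rather than actual lengths is what lets a well-compacted batch still "pay down" merges at higher levels.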


/// Describes the state of a layer.
///
/// A layer can be empty, contain a single batch, or contain a pair of batches
/// that are in the process of merging into a batch for the next layer.
enum MergeState<B: Batch> {
    /// An empty layer, containing no updates.
    Vacant,
    /// A layer containing a single batch.
    ///
    /// The `None` variant is used to represent a structurally empty batch present
    /// to ensure the progress of maintenance work.
    Single(Option<B>),
    /// A layer containing two batches, in the process of merging.
    Double(MergeVariant<B>),
}

impl<B: Batch<Time: Eq>> MergeState<B> {

    /// The number of actual updates contained in the level.
    fn len(&self) -> usize {
        match self {
            MergeState::Single(Some(b)) => b.len(),
            MergeState::Double(MergeVariant::InProgress(b1,b2,_)) => b1.len() + b2.len(),
            MergeState::Double(MergeVariant::Complete(Some((b, _)))) => b.len(),
            _ => 0,
        }
    }

    /// True only for the MergeState::Vacant variant.
    fn is_vacant(&self) -> bool {
        if let MergeState::Vacant = self { true } else { false }
    }

    /// True only for the MergeState::Single variant.
    fn is_single(&self) -> bool {
        if let MergeState::Single(_) = self { true } else { false }
    }

    /// True only for the MergeState::Double variant.
    fn is_double(&self) -> bool {
        if let MergeState::Double(_) = self { true } else { false }
    }

    /// Immediately complete any merge.
    ///
    /// The result is either a batch, if there is a non-trivial batch to return,
    /// or `None` if there is no meaningful batch to return. This does not distinguish
    /// between Vacant entries and structurally empty batches, which should be done
    /// with the `is_complete()` method.
    ///
    /// The result optionally includes the input batches of a completed merge.
    fn complete(&mut self) -> Option<(B, Option<(B, B)>)> {
        match std::mem::replace(self, MergeState::Vacant) {
            MergeState::Vacant => None,
            MergeState::Single(batch) => batch.map(|b| (b, None)),
            MergeState::Double(variant) => variant.complete(),
        }
    }

    /// True iff the layer is a complete merge, ready for extraction.
    fn is_complete(&mut self) -> bool {
        if let MergeState::Double(MergeVariant::Complete(_)) = self {
            true
        }
        else {
            false
        }
    }

    /// Performs a bounded amount of work towards a merge.
    ///
    /// If the merge completes, the result is staged in the layer as a
    /// completed merge; it is the obligation of the caller to extract
    /// it (e.g. via `complete()`) and correctly install the result.
    fn work(&mut self, fuel: &mut isize) {
        // We only perform work for merges in progress.
        if let MergeState::Double(layer) = self {
            layer.work(fuel)
        }
    }

    /// Extract the merge state, typically temporarily.
    fn take(&mut self) -> Self {
        std::mem::replace(self, MergeState::Vacant)
    }

    /// Initiates the merge of an "old" batch with a "new" batch.
    ///
    /// The upper frontier of the old batch should match the lower
    /// frontier of the new batch, with the resulting batch describing
    /// their composed interval, from the lower frontier of the old
    /// batch to the upper frontier of the new batch.
    ///
    /// Either batch may be `None`, which corresponds to a structurally
    /// empty batch whose upper and lower frontiers are equal. This
    /// option exists purely for bookkeeping purposes, and no computation
    /// is performed to merge the two batches.
    fn begin_merge(batch1: Option<B>, batch2: Option<B>, compaction_frontier: AntichainRef<B::Time>) -> MergeState<B> {
        let variant = match (batch1, batch2) {
            (Some(batch1), Some(batch2)) => {
                assert!(batch1.upper() == batch2.lower());
                let begin_merge = <B as Batch>::begin_merge(&batch1, &batch2, compaction_frontier);
                MergeVariant::InProgress(batch1, batch2, begin_merge)
            }
            (None, Some(x)) => MergeVariant::Complete(Some((x, None))),
            (Some(x), None) => MergeVariant::Complete(Some((x, None))),
            (None, None) => MergeVariant::Complete(None),
        };

        MergeState::Double(variant)
    }
}

enum MergeVariant<B: Batch> {
    /// Describes an actual in-progress merge between two non-trivial batches.
    InProgress(B, B, <B as Batch>::Merger),
    /// A merge that requires no further work. May or may not represent a non-trivial batch.
    Complete(Option<(B, Option<(B, B)>)>),
}

impl<B: Batch> MergeVariant<B> {

    /// Completes and extracts the batch, unless structurally empty.
    ///
    /// The result is either `None`, for structurally empty batches,
    /// or a batch and optionally input batches from which it derived.
    fn complete(mut self) -> Option<(B, Option<(B, B)>)> {
        let mut fuel = isize::MAX;
        self.work(&mut fuel);
        if let MergeVariant::Complete(batch) = self { batch }
        else { panic!("Failed to complete a merge!"); }
    }

    /// Applies some amount of work, potentially completing the merge.
    ///
    /// In case the work completes, the source batches are retained in the
    /// `Complete` variant alongside the merged result, so that the caller
    /// can manage the released resources.
    fn work(&mut self, fuel: &mut isize) {
        let variant = std::mem::replace(self, MergeVariant::Complete(None));
        if let MergeVariant::InProgress(b1,b2,mut merge) = variant {
            merge.work(&b1,&b2,fuel);
            if *fuel > 0 {
                *self = MergeVariant::Complete(Some((merge.done(), Some((b1,b2)))));
            }
            else {
                *self = MergeVariant::InProgress(b1,b2,merge);
            }
        }
        else {
            *self = variant;
        }
    }
}