mz_txn_wal/lib.rs

// Copyright Materialize, Inc. and contributors. All rights reserved.
//
// Use of this software is governed by the Business Source License
// included in the LICENSE file.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0.

//! Atomic multi-shard [persist] writes.
//!
//! [persist]: mz_persist_client
//!
//! This crate presents an abstraction on top of persist shards, allowing
//! efficient atomic multi-shard writes. This is accomplished through an
//! additional _txn_ shard that coordinates writes to a (potentially large)
//! number of _data_ shards. Data shards may be added to and removed from the
//! set at any time.
//!
//! **WARNING!** While a data shard is registered to the txn set, writing to it
//! directly (i.e. using a [WriteHandle] instead of the [TxnsHandle]) will lead
//! to incorrectness, undefined behavior, and (potentially sticky) panics.
//!
//! [WriteHandle]: mz_persist_client::write::WriteHandle
//! [TxnsHandle]: crate::txns::TxnsHandle
//!
//! Benefits of these txns:
//! - _Key idea_: A transactional write costs in proportion to the total size of
//!   data written, and the number of data shards involved (plus one for the
//!   txns shard).
//! - _Key idea_: As time progresses, the upper of every data shard is logically
//!   (but not physically) advanced en masse with a single write to the txns
//!   shard. (Data writes may also be bundled into this, if desired.)
//! - Transactions are write-only, but read-then-write transactions can be built
//!   on top by using read and write timestamps selected to have no possible
//!   writes in between (e.g. `write_ts/commit_ts = read_ts + 1`).
//! - Transactions of any size are supported in bounded memory. This is done
//!   through the usual persist mechanism of spilling to S3. These spilled
//!   batches are efficiently re-timestamped when a commit must be retried at a
//!   higher timestamp.
//! - The data shards may be read independently of each other.
//! - The persist "maintenance" work generated on behalf of the committed txn is
//!   (usually, see below) assigned to the txn committer.
//! - It is possible to implement any of snapshot, serializable, or
//!   strict-serializable isolation on top of this interface via write and read
//!   timestamp selections (see [#Isolation](#isolation) below for details).
//! - It is possible to serialize and communicate an uncommitted [Txn] between
//!   processes and also to merge uncommitted [Txn]s, if necessary (e.g.
//!   consolidating all monitoring collections, statement logging, etc into the
//!   periodic timestamp advancement). This is not initially implemented, but
//!   could be.
//!
//! [Txn]: crate::txn_write::Txn
//!
//! Restrictions:
//! - Data shards must all use the same codecs for `K, V, T, D`. However, each
//!   data shard may have independent `K` and `V` schemas. The txns shard
//!   inherits the `T` codec from the data shards (and uses its own `K, V, D`
//!   ones).
//! - All txn writes are linearized through the txns shard, so there is some
//!   limit to horizontal and geographical scale out.
//! - Performance has been tuned for _throughput_ and _un-contended latency_.
//!   Latency on contended workloads will likely be quite bad. At a high level,
//!   if N txns are run concurrently, 1 will commit and N-1 will have to
//!   (usually cheaply) retry. (However, note that it is also possible to
//!   combine and commit multiple txns at the same timestamp, as mentioned
//!   above, which gives us some amount of knobs for doing something different
//!   here.)
//!
//! # Intuition and Jargon
//!
//! - The _txns shard_ is the source of truth for what has (and has not)
//!   committed to a set of _data shards_.
//! - Each data shard must be _registered_ at some `register_ts` before being
//!   used in transactions. Registration is for bookkeeping only; there is no
//!   particular meaning to the timestamp other than it being a lower bound on
//!   when txns using this data shard can commit. Registration only needs to be
//!   run once-ever per data shard, but it is idempotent, so can also be run
//!   at-least-once.
//! - A txn is broken into three phases:
//!   - (Elided: A pre-txn phase where MZ might perform reads for
//!     read-then-write txns or might buffer writes.)
//!   - _commit_: The txn is committed by writing lightweight pointers to
//!     (potentially large) batches of data as updates in the txns shard with a
//!     timestamp of `commit_ts`. Feel free to think of this as a WAL. This
//!     makes the txn durable (thus "definite") and also advances the _logical
//!     upper_ of every data shard registered at a timestamp before commit_ts,
//!     including those not involved in the txn. However, at this point, it is
//!     not yet possible to read at the commit ts.
//!   - _apply_: We could serve reads of data shards from the information in the
//!     txns shard, but instead we choose to serve them from the physical data
//!     shard itself so that we may reuse existing persist infrastructure (e.g.
//!     multi-worker persist-source). This means we must take the batch pointers
//!     written to the txns shard and, in commit_ts order, "denormalize" them
//!     into each data shard with `compare_and_append`. We call this process
//!     applying the txn. Feel free to think of this as applying the WAL.
//!
//!     (Note that this means each data shard's _physical upper_ reflects the
//!     last committed txn touching that shard, and so the _logical upper_ may
//!     be greater than this. See [TxnsCache] for more details.)
//!   - _tidy_: After a committed txn has been applied, the updates for that txn
//!     are retracted from the txns shard. (To handle races, both application
//!     and retraction are written to be idempotent.) This prevents the txns
//!     shard from growing unboundedly and also means that, at any given time,
//!     the txns shard contains the set of txns that need to be applied (as well
//!     as the set of registered data shards).
//!
//! [TxnsCache]: crate::txn_cache::TxnsCache
//!
//! # Usage
//!
//! ```
//! # use std::sync::Arc;
//! # use mz_ore::metrics::MetricsRegistry;
//! # use mz_persist_client::{Diagnostics, PersistClient, ShardId};
//! # use mz_txn_wal::metrics::Metrics;
//! # use mz_txn_wal::operator::DataSubscribe;
//! # use mz_txn_wal::txns::TxnsHandle;
//! # use mz_persist_types::codec_impls::{StringSchema, UnitSchema};
//! # use timely::progress::Antichain;
//! #
//! # tokio::runtime::Runtime::new().unwrap().block_on(async {
//! # let client = PersistClient::new_for_tests().await;
//! # let dyncfgs = mz_txn_wal::all_dyncfgs(client.dyncfgs().clone());
//! # let metrics = Arc::new(Metrics::new(&MetricsRegistry::new()));
//! # mz_ore::test::init_logging();
//! // Open a txn shard, initializing it if necessary.
//! let txns_id = ShardId::new();
//! let mut txns = TxnsHandle::<String, (), u64, i64>::open(
//!     0u64, client.clone(), dyncfgs, metrics, txns_id
//! ).await;
//!
//! // Register data shards to the txn set.
//! let (d0, d1) = (ShardId::new(), ShardId::new());
//! # let d0_write = client.open_writer(
//! #    d0, StringSchema.into(), UnitSchema.into(), Diagnostics::for_tests()
//! # ).await.unwrap();
//! # let d1_write = client.open_writer(
//! #    d1, StringSchema.into(), UnitSchema.into(), Diagnostics::for_tests()
//! # ).await.unwrap();
//! txns.register(1u64, [d0_write]).await.expect("not previously initialized");
//! txns.register(2u64, [d1_write]).await.expect("not previously initialized");
//!
//! // Commit a txn. This is durable if/when the `commit_at` succeeds, but reads
//! // at the commit ts will _block_ until after the txn is applied. Users are
//! // free to pass up the commit ack (e.g. to pgwire) to get a bit of latency
//! // back. NB: It is expected that the txn committer will run the apply step,
//! // but in the event of a crash, neither correctness nor liveness depend on
//! // it.
//! let mut txn = txns.begin();
//! txn.write(&d0, "0".into(), (), 1).await;
//! txn.write(&d1, "1".into(), (), -1).await;
//! let tidy = txn.commit_at(&mut txns, 3).await.expect("ts 3 available")
//!     // Make it available to reads by applying it.
//!     .apply(&mut txns).await;
//!
//! // Commit a contended txn at a higher timestamp. Note that the upper of `d1`
//! // is also advanced by this. At the same time clean up after our last commit
//! // (the tidy).
//! let mut txn = txns.begin();
//! txn.write(&d0, "2".into(), (), 1).await;
//! txn.tidy(tidy);
//! txn.commit_at(&mut txns, 3).await.expect_err("ts 3 not available");
//! let _tidy = txn.commit_at(&mut txns, 4).await.expect("ts 4 available")
//!     .apply(&mut txns).await;
//!
//! // Read data shard(s) at some `read_ts`.
//! let mut subscribe = DataSubscribe::new("example", client, txns_id, d1, 4, Antichain::new(), true);
//! while subscribe.progress() <= 4 {
//!     subscribe.step();
//! #   tokio::task::yield_now().await;
//! }
//! let updates = subscribe.output();
//! # })
//! ```
//!
//! # Isolation
//!
//! This section is about "read-then-write" txns where all reads are performed
//! before any writes (read-only and write-only are trivial specializations of
//! this). All reads are performed at some `read_ts` and then all writes are
//! performed at `write_ts` (aka the `commit_ts`).
//!
//! - To implement snapshot isolation using the above, select any `read_ts <
//!   write_ts`. The `write_ts` can advance as necessary when retrying on
//!   conflicts.
//! - To implement serializable isolation using the above, select `write_ts =
//!   read_ts + 1`. If the `write_ts` must be pushed as a result of a conflict,
//!   then the `read_ts` must be similarly advanced (see the sketch after this
//!   list). Note that if you happen to have a system for efficiently computing
//!   changes to data as inputs change (hmmm), it may be better to reason about
//!   `(read_ts, new_read_ts]` than to recompute the reads from scratch.
//! - To implement strict serializable (serializable + linearizable) isolation,
//!   do the same as serializable, but with the additional constraints on
//!   write_ts required by linearizability (handwave).
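//!
//! As a minimal sketch of the serializable retry loop described above: the
//! `timestamp_oracle` and `read_inputs` names are hypothetical stand-ins for
//! however the caller selects timestamps and performs reads; the rest is the
//! [TxnsHandle]/[Txn] API from the example above.
//!
//! ```ignore
//! let mut read_ts = timestamp_oracle.read_ts();
//! loop {
//!     // Read at `read_ts` and compute the writes this txn wants to make.
//!     let writes = read_inputs(read_ts).await;
//!
//!     let mut txn = txns.begin();
//!     for (data_id, key, val, diff) in writes {
//!         txn.write(&data_id, key, val, diff).await;
//!     }
//!     // Serializable: commit at exactly `read_ts + 1` so that no other write
//!     // can land between the reads and the writes.
//!     match txn.commit_at(&mut txns, read_ts + 1).await {
//!         Ok(apply) => {
//!             // Make the writes readable at the commit ts. The returned tidy
//!             // work could be folded into a later txn, as in the example
//!             // above.
//!             let _tidy = apply.apply(&mut txns).await;
//!             break;
//!         }
//!         // Lost the race: another txn committed at or past our chosen
//!         // write_ts. Advance the read_ts (in practice, via whatever
//!         // timestamp oracle the caller uses) and retry from the reads.
//!         Err(current_upper) => read_ts = current_upper,
//!     }
//! }
//! ```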
//!
//! # Implementation
//!
//! For details of the implementation of writes, see [TxnsHandle].
//!
//! For details of the implementation of reads, see [TxnsCache].

#![warn(missing_docs, missing_debug_implementations)]

use std::fmt::Debug;
use std::fmt::Write;

use differential_dataflow::Hashable;
use differential_dataflow::difference::Semigroup;
use differential_dataflow::lattice::Lattice;
use mz_dyncfg::ConfigSet;
use mz_ore::instrument;
use mz_persist_client::ShardId;
use mz_persist_client::critical::SinceHandle;
use mz_persist_client::error::UpperMismatch;
use mz_persist_client::write::WriteHandle;
use mz_persist_types::codec_impls::{ShardIdSchema, VecU8Schema};
use mz_persist_types::stats::PartStats;
use mz_persist_types::txn::{TxnsCodec, TxnsEntry};
use mz_persist_types::{Codec, Codec64, Opaque, StepForward};
use timely::order::TotalOrder;
use timely::progress::{Antichain, Timestamp};
use tracing::{debug, error};

use crate::proto::ProtoIdBatch;
use crate::txns::DataWriteApply;

pub mod metrics;
pub mod operator;
pub mod txn_cache;
pub mod txn_read;
pub mod txn_write;
pub mod txns;

mod proto {
    use bytes::Bytes;
    use mz_persist_client::batch::ProtoBatch;
    use prost::Message;
    use uuid::Uuid;

    include!(concat!(env!("OUT_DIR"), "/mz_txn_wal.proto.rs"));

    impl ProtoIdBatch {
        pub(crate) fn new(batch: ProtoBatch) -> ProtoIdBatch {
            ProtoIdBatch {
                batch_id: Bytes::copy_from_slice(Uuid::new_v4().as_bytes()),
                batch: Some(batch),
            }
        }

        /// Recovers the ProtoBatch from an encoded batch.
        ///
        /// This might be an encoded ProtoIdBatch (new path) or a ProtoBatch
        /// (legacy path). Some proto shenanigans are done to sniff out which.
        pub(crate) fn parse(buf: &[u8]) -> ProtoBatch {
            let b = ProtoIdBatch::decode(buf).expect("valid ProtoIdBatch");
            // First try the new format.
            if let Some(batch) = b.batch {
                return batch;
            }
            // Fall back to the legacy format.
            ProtoBatch::decode(buf).expect("valid (legacy) ProtoBatch")
        }
    }
}

/// Adds the full set of all txn-wal `Config`s.
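///
/// In typical usage this is composed with the `ConfigSet` already carried by
/// the persist client, as in the crate-level example (sketch only):
///
/// ```ignore
/// let dyncfgs = mz_txn_wal::all_dyncfgs(client.dyncfgs().clone());
/// ```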
pub fn all_dyncfgs(configs: ConfigSet) -> ConfigSet {
    configs
        .add(&crate::operator::DATA_SHARD_RETRYER_CLAMP)
        .add(&crate::operator::DATA_SHARD_RETRYER_INITIAL_BACKOFF)
        .add(&crate::operator::DATA_SHARD_RETRYER_MULTIPLIER)
        .add(&crate::txns::APPLY_ENSURE_SCHEMA_MATCH)
}

/// A reasonable default implementation of [TxnsCodec].
///
/// This uses the "native" Codecs for `ShardId` and `Vec<u8>`, with the latter
/// empty for [TxnsEntry::Register] and non-empty for [TxnsEntry::Append].
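///
/// As an illustrative sketch of the encoding described above, with `[1u8; 8]`
/// standing in for the `Codec64`-encoded `T`:
///
/// ```ignore
/// use mz_persist_client::ShardId;
/// use mz_persist_types::txn::{TxnsCodec, TxnsEntry};
/// use mz_txn_wal::TxnsCodecDefault;
///
/// let data_id = ShardId::new();
/// let ts = [1u8; 8];
/// // An append stores the batch bytes with the timestamp tacked onto the end.
/// let (key, val) = TxnsCodecDefault::encode(TxnsEntry::Append(data_id, ts, vec![42]));
/// assert_eq!(key, data_id);
/// // Decode splits the timestamp back off the end of the value; an empty
/// // remainder would instead decode as a `TxnsEntry::Register`.
/// match TxnsCodecDefault::decode(key, val) {
///     TxnsEntry::Append(d, t, batch) => assert_eq!((d, t, batch), (data_id, ts, vec![42])),
///     _ => unreachable!(),
/// }
/// ```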
#[derive(Debug)]
pub struct TxnsCodecDefault;

impl TxnsCodec for TxnsCodecDefault {
    type Key = ShardId;
    type Val = Vec<u8>;
    fn schemas() -> (<Self::Key as Codec>::Schema, <Self::Val as Codec>::Schema) {
        (ShardIdSchema, VecU8Schema)
    }
    fn encode(e: TxnsEntry) -> (Self::Key, Self::Val) {
        match e {
            TxnsEntry::Register(data_id, ts) => (data_id, ts.to_vec()),
            TxnsEntry::Append(data_id, ts, batch) => {
                // Put the ts at the end to let decode truncate it off.
                (data_id, batch.into_iter().chain(ts).collect())
            }
        }
    }
    fn decode(key: Self::Key, mut val: Self::Val) -> TxnsEntry {
        let mut ts = [0u8; 8];
        let ts_idx = val.len().checked_sub(8).expect("ts encoded at end of val");
        ts.copy_from_slice(&val[ts_idx..]);
        val.truncate(ts_idx);
        if val.is_empty() {
            TxnsEntry::Register(key, ts)
        } else {
            TxnsEntry::Append(key, ts, val)
        }
    }
    fn should_fetch_part(data_id: &ShardId, stats: &PartStats) -> Option<bool> {
        let stats = stats
            .key
            .col("")?
            .try_as_string()
            .map_err(|err| error!("unexpected stats type: {}", err))
            .ok()?;
        let data_id_str = data_id.to_string();
        Some(stats.lower <= data_id_str && stats.upper >= data_id_str)
    }
}

/// Helper for common logging for compare_and_append-ing a small amount of data.
#[instrument(level = "debug", fields(shard=%txns_or_data_write.shard_id(), ts=?new_upper))]
pub(crate) async fn small_caa<S, F, K, V, T, D>(
    name: F,
    txns_or_data_write: &mut WriteHandle<K, V, T, D>,
    updates: &[((&K, &V), &T, D)],
    upper: T,
    new_upper: T,
) -> Result<(), T>
where
    S: AsRef<str>,
    F: Fn() -> S,
    K: Debug + Codec,
    V: Debug + Codec,
    T: Timestamp + Lattice + TotalOrder + Codec64 + Sync,
    D: Debug + Semigroup + Ord + Codec64 + Send + Sync,
{
    fn debug_sep<'a, T: Debug + 'a>(sep: &str, xs: impl IntoIterator<Item = &'a T>) -> String {
        xs.into_iter().fold(String::new(), |mut output, x| {
            let _ = write!(output, "{}{:?}", sep, x);
            output
        })
    }
    debug!(
        "CaA {} [{:?},{:?}){}",
        name().as_ref(),
        upper,
        new_upper,
        // This is a "small" CaA so we can inline the data in this debug log.
        debug_sep("\n  ", updates)
    );
    let res = txns_or_data_write
        .compare_and_append(
            updates,
            Antichain::from_elem(upper.clone()),
            Antichain::from_elem(new_upper.clone()),
        )
        .await
        .expect("usage was valid");
    match res {
        Ok(()) => {
            debug!(
                "CaA {} [{:?},{:?}) success",
                name().as_ref(),
                upper,
                new_upper
            );
            Ok(())
        }
        Err(UpperMismatch { current, .. }) => {
            let current = current
                .into_option()
                .expect("txns shard should not be closed");
            debug!(
                "CaA {} [{:?},{:?}) mismatch actual={:?}",
                name().as_ref(),
                upper,
                new_upper,
                current,
            );
            Err(current)
        }
    }
}

/// Ensures that the upper of the shard is past init_ts by writing an empty
/// batch, retrying as necessary.
///
/// This method is idempotent.
pub(crate) async fn empty_caa<S, F, K, V, T, D>(
    name: F,
    txns_or_data_write: &mut WriteHandle<K, V, T, D>,
    init_ts: T,
) where
    S: AsRef<str>,
    F: Fn() -> S,
    K: Debug + Codec,
    V: Debug + Codec,
    T: Timestamp + Lattice + TotalOrder + StepForward + Codec64 + Sync,
    D: Debug + Semigroup + Ord + Codec64 + Send + Sync,
{
    let name = name();
    let empty: &[((&K, &V), &T, D)] = &[];
    let Some(mut upper) = txns_or_data_write.shared_upper().into_option() else {
        // Shard is closed, which means the upper must be past init_ts.
        return;
    };
    loop {
        if init_ts < upper {
            return;
        }
        let res = small_caa(
            || name.as_ref(),
            txns_or_data_write,
            empty,
            upper,
            init_ts.step_forward(),
        )
        .await;
        match res {
            Ok(()) => return,
            Err(current) => {
                upper = current;
            }
        }
    }
}

/// Ensures that a committed batch has been applied into a physical data shard,
/// making it available for reads.
///
/// This process is definite work on top of definite input, so the
/// implementation assumes that if the upper of the shard passes commit_ts then
/// the work must have already been done by someone else. (Think how our compute
/// replicas race to compute some MATERIALIZED VIEW, but they're all guaranteed
/// to get the same answer.)
#[instrument(level = "debug", fields(shard=%data_write.shard_id(), ts=?commit_ts))]
async fn apply_caa<K, V, T, D>(
    data_write: &mut DataWriteApply<K, V, T, D>,
    batch_raws: &Vec<&[u8]>,
    commit_ts: T,
) where
    K: Debug + Codec,
    V: Debug + Codec,
    T: Timestamp + Lattice + TotalOrder + StepForward + Codec64 + Sync,
    D: Semigroup + Ord + Codec64 + Send + Sync,
{
    let mut batches = batch_raws
        .into_iter()
        .map(|batch| ProtoIdBatch::parse(batch))
        .map(|batch| data_write.batch_from_transmittable_batch(batch))
        .collect::<Vec<_>>();
    let Some(mut upper) = data_write.shared_upper().into_option() else {
        // Shard is closed, which means the upper must be past commit_ts.
        // Mark the batches as consumed, so we don't get warnings in the logs.
        for batch in batches {
            batch.into_hollow_batch();
        }
        return;
    };
    loop {
        if commit_ts < upper {
            debug!(
                "CaA data {:.9} apply t={:?} already done",
                data_write.shard_id().to_string(),
                commit_ts
            );
            // Mark the batches as consumed, so we don't get warnings in the logs.
            for batch in batches {
                batch.into_hollow_batch();
            }
            return;
        }

        // Make sure we're using the same schema to CaA these batches as what
        // they were written with.
        data_write.maybe_replace_with_batch_schema(&batches).await;

        debug!(
            "CaA data {:.9} apply b={:?} t={:?} [{:?},{:?})",
            data_write.shard_id().to_string(),
            batch_raws
                .iter()
                .map(|batch_raw| batch_raw.hashed())
                .collect::<Vec<_>>(),
            commit_ts,
            upper,
            commit_ts.step_forward(),
        );
        let mut batches = batches.iter_mut().collect::<Vec<_>>();
        let res = data_write
            .compare_and_append_batch(
                batches.as_mut_slice(),
                Antichain::from_elem(upper.clone()),
                Antichain::from_elem(commit_ts.step_forward()),
            )
            .await
            .expect("usage was valid");
        match res {
            Ok(()) => {
                debug!(
                    "CaA data {:.9} apply t={:?} [{:?},{:?}) success",
                    data_write.shard_id().to_string(),
                    commit_ts,
                    upper,
                    commit_ts.step_forward(),
                );
                return;
            }
            Err(UpperMismatch { current, .. }) => {
                let current = current.into_option().expect("data should not be closed");
                debug!(
                    "CaA data {:.9} apply t={:?} [{:?},{:?}) mismatch actual={:?}",
                    data_write.shard_id().to_string(),
                    commit_ts,
                    upper,
                    commit_ts.step_forward(),
                    current,
                );
                upper = current;
                continue;
            }
        }
    }
}

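/// Compare-and-downgrades ("CaDS") the since of the txns shard's critical
/// handle to `new_since_ts`, halting the process if another user of the
/// handle has fenced us out by swapping the opaque token.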
#[instrument(level = "debug", fields(shard=%txns_since.shard_id(), ts=?new_since_ts))]
pub(crate) async fn cads<T, O, C>(
    txns_since: &mut SinceHandle<C::Key, C::Val, T, i64, O>,
    new_since_ts: T,
) where
    T: Timestamp + Lattice + TotalOrder + StepForward + Codec64 + Sync,
    O: Opaque + Debug + Codec64,
    C: TxnsCodec,
{
    // Fast-path, don't bother trying to CaDS if we're already past that
    // since.
    if !txns_since.since().less_than(&new_since_ts) {
        return;
    }
    let token = txns_since.opaque().clone();
    let res = txns_since
        .compare_and_downgrade_since(&token, (&token, &Antichain::from_elem(new_since_ts)))
        .await;
    match res {
        Ok(_) => {}
        Err(actual) => {
            mz_ore::halt!("fenced by another process @ {actual:?}. ours = {token:?}")
        }
    }
}

#[cfg(test)]
mod tests {
    use std::collections::{BTreeMap, BTreeSet};
    use std::sync::Arc;
    use std::sync::Mutex;

    use crossbeam_channel::{Receiver, Sender, TryRecvError};
    use differential_dataflow::consolidation::consolidate_updates;
    use mz_persist_client::read::ReadHandle;
    use mz_persist_client::{Diagnostics, PersistClient, ShardId};
    use mz_persist_types::codec_impls::{StringSchema, UnitSchema};
    use prost::Message;

    use crate::operator::DataSubscribe;
    use crate::txn_cache::TxnsCache;
    use crate::txn_write::{Txn, TxnApply};
    use crate::txns::{Tidy, TxnsHandle};

    use super::*;

    impl<K, V, T, D, O, C> TxnsHandle<K, V, T, D, O, C>
    where
        K: Debug + Codec + Clone,
        V: Debug + Codec + Clone,
        T: Timestamp + Lattice + TotalOrder + StepForward + Codec64 + Sync,
        D: Debug + Semigroup + Ord + Codec64 + Send + Sync + Clone,
        O: Opaque + Debug + Codec64,
        C: TxnsCodec,
    {
        /// Returns a new, empty test transaction that can involve the data shards
        /// registered with this handle.
        pub(crate) fn begin_test(&self) -> TestTxn<K, V, T, D> {
            TestTxn::new()
        }
    }

    /// A [`Txn`] wrapper that exposes extra functionality for tests.
    #[derive(Debug)]
    pub struct TestTxn<K, V, T, D> {
        txn: Txn<K, V, T, D>,
        /// A copy of every write to use in tests.
        writes: BTreeMap<ShardId, Vec<(K, V, D)>>,
    }

    impl<K, V, T, D> TestTxn<K, V, T, D>
    where
        K: Debug + Codec + Clone,
        V: Debug + Codec + Clone,
        T: Timestamp + Lattice + TotalOrder + StepForward + Codec64 + Sync,
        D: Debug + Semigroup + Ord + Codec64 + Send + Sync + Clone,
    {
        pub(crate) fn new() -> Self {
            Self {
                txn: Txn::new(),
                writes: BTreeMap::default(),
            }
        }

        pub(crate) async fn write(&mut self, data_id: &ShardId, key: K, val: V, diff: D) {
            self.writes
                .entry(*data_id)
                .or_default()
                .push((key.clone(), val.clone(), diff.clone()));
            self.txn.write(data_id, key, val, diff).await
        }

        pub(crate) async fn commit_at<O, C>(
            &mut self,
            handle: &mut TxnsHandle<K, V, T, D, O, C>,
            commit_ts: T,
        ) -> Result<TxnApply<T>, T>
        where
            O: Opaque + Debug + Codec64,
            C: TxnsCodec,
        {
            self.txn.commit_at(handle, commit_ts).await
        }

        pub(crate) fn merge(&mut self, other: Self) {
            for (data_id, writes) in other.writes {
                self.writes.entry(data_id).or_default().extend(writes);
            }
            self.txn.merge(other.txn)
        }

        pub(crate) fn tidy(&mut self, tidy: Tidy) {
            self.txn.tidy(tidy)
        }

        #[allow(dead_code)]
        fn take_tidy(&mut self) -> Tidy {
            self.txn.take_tidy()
        }
    }

    /// A test helper for collecting committed writes and later comparing them
    /// to reads for correctness.
    #[derive(Debug, Clone)]
    pub struct CommitLog {
        client: PersistClient,
        txns_id: ShardId,
        writes: Arc<Mutex<Vec<(ShardId, String, u64, i64)>>>,
        tx: Sender<(ShardId, String, u64, i64)>,
        rx: Receiver<(ShardId, String, u64, i64)>,
    }

    impl CommitLog {
        pub fn new(client: PersistClient, txns_id: ShardId) -> Self {
            let (tx, rx) = crossbeam_channel::unbounded();
            CommitLog {
                client,
                txns_id,
                writes: Arc::new(Mutex::new(Vec::new())),
                tx,
                rx,
            }
        }

        pub fn record(&self, update: (ShardId, String, u64, i64)) {
            let () = self.tx.send(update).unwrap();
        }

        pub fn record_txn(&self, commit_ts: u64, txn: &TestTxn<String, (), u64, i64>) {
            for (data_id, writes) in txn.writes.iter() {
                for (k, (), d) in writes.iter() {
                    self.record((*data_id, k.clone(), commit_ts, *d));
                }
            }
        }

        #[track_caller]
        pub fn assert_eq(
            &self,
            data_id: ShardId,
            as_of: u64,
            until: u64,
            actual: impl IntoIterator<Item = (String, u64, i64)>,
        ) {
            // First read everything off the channel.
            let mut expected = {
                let mut writes = self.writes.lock().unwrap();
                loop {
                    match self.rx.try_recv() {
                        Ok(x) => writes.push(x),
                        Err(TryRecvError::Empty) | Err(TryRecvError::Disconnected) => break,
                    }
                }
                writes
                    .iter()
                    .flat_map(|(id, key, ts, diff)| {
                        if id != &data_id {
                            return None;
                        }
                        let mut ts = *ts;
                        if ts < as_of {
                            ts = as_of;
                        }
                        if until <= ts {
                            None
                        } else {
                            Some((key.clone(), ts, *diff))
                        }
                    })
                    .collect()
            };
            consolidate_updates(&mut expected);
            let mut actual = actual.into_iter().filter(|(_, t, _)| t < &until).collect();
            consolidate_updates(&mut actual);
            // NB: Extra spaces after actual are so it lines up with expected.
            tracing::debug!(
                "{:.9} as_of={} until={} actual  ={:?}",
                data_id,
                as_of,
                until,
                actual
            );
            tracing::debug!(
                "{:.9} as_of={} until={} expected={:?}",
                data_id,
                as_of,
                until,
                expected
            );
            assert_eq!(actual, expected)
        }

        #[allow(ungated_async_fn_track_caller)]
        #[track_caller]
        pub async fn assert_snapshot(&self, data_id: ShardId, as_of: u64) {
            let mut cache: TxnsCache<u64, TxnsCodecDefault> =
                TxnsCache::open(&self.client, self.txns_id, Some(data_id)).await;
            let _ = cache.update_gt(&as_of).await;
            let snapshot = cache.data_snapshot(data_id, as_of);
            let mut data_read = self
                .client
                .open_leased_reader(
                    data_id,
                    Arc::new(StringSchema),
                    Arc::new(UnitSchema),
                    Diagnostics::from_purpose("assert snapshot"),
                    true,
                )
                .await
                .expect("reader creation shouldn't panic");
            let snapshot = snapshot
                .snapshot_and_fetch(&mut data_read)
                .await
                .expect("snapshot shouldn't panic");
            data_read.expire().await;
            let snapshot: Vec<_> = snapshot
                .into_iter()
                .map(|((k, v), t, d)| {
                    let (k, ()) = (k.unwrap(), v.unwrap());
                    (k, t, d)
                })
                .collect();

            // Check that a subscribe would produce the same result.
            let subscribe = self.subscribe(data_id, as_of, as_of + 1).await;
            assert_eq!(
                snapshot.iter().collect::<BTreeSet<_>>(),
                subscribe.output().into_iter().collect::<BTreeSet<_>>()
            );

            // Check that the result is correct.
            self.assert_eq(data_id, as_of, as_of + 1, snapshot);
        }

        #[allow(ungated_async_fn_track_caller)]
        #[track_caller]
        pub async fn assert_subscribe(&self, data_id: ShardId, as_of: u64, until: u64) {
            let data_subscribe = self.subscribe(data_id, as_of, until).await;
            self.assert_eq(data_id, as_of, until, data_subscribe.output().clone());
        }

        #[allow(ungated_async_fn_track_caller)]
        #[track_caller]
        pub async fn subscribe(&self, data_id: ShardId, as_of: u64, until: u64) -> DataSubscribe {
            let mut data_subscribe = DataSubscribe::new(
                "test",
                self.client.clone(),
                self.txns_id,
                data_id,
                as_of,
                Antichain::new(),
                true,
            );
            data_subscribe.step_past(until - 1).await;
            data_subscribe
        }
    }

    pub(crate) async fn writer(
        client: &PersistClient,
        data_id: ShardId,
    ) -> WriteHandle<String, (), u64, i64> {
        client
            .open_writer(
                data_id,
                Arc::new(StringSchema),
                Arc::new(UnitSchema),
                Diagnostics::for_tests(),
            )
            .await
            .expect("codecs should not change")
    }

    pub(crate) async fn reader(
        client: &PersistClient,
        data_id: ShardId,
    ) -> ReadHandle<String, (), u64, i64> {
        client
            .open_leased_reader(
                data_id,
                Arc::new(StringSchema),
                Arc::new(UnitSchema),
                Diagnostics::for_tests(),
                true,
            )
            .await
            .expect("codecs should not change")
    }

    pub(crate) async fn write_directly(
        ts: u64,
        data_write: &mut WriteHandle<String, (), u64, i64>,
        keys: &[&str],
        log: &CommitLog,
    ) {
        let data_id = data_write.shard_id();
        let keys = keys.iter().map(|x| (*x).to_owned()).collect::<Vec<_>>();
        let updates = keys.iter().map(|k| ((k, &()), &ts, 1)).collect::<Vec<_>>();
        let mut current = data_write.shared_upper().into_option().unwrap();
        loop {
            let res = crate::small_caa(
                || format!("data {:.9} directly", data_id),
                data_write,
                &updates,
                current,
                ts + 1,
            )
            .await;
            match res {
                Ok(()) => {
                    for ((k, ()), t, d) in updates {
                        log.record((data_id, k.to_owned(), *t, d));
                    }
                    return;
                }
                Err(new_current) => current = new_current,
            }
        }
    }

    #[mz_ore::test(tokio::test)]
    #[cfg_attr(miri, ignore)] // unsupported operation: returning ready events from epoll_wait is not yet implemented
    async fn commit_log() {
        let (d0, d1) = (ShardId::new(), ShardId::new());
        let log0 = CommitLog::new(PersistClient::new_for_tests().await, ShardId::new());

        // Send before cloning into another handle.
        log0.record((d0, "0".into(), 0, 1));

        // Send after cloning into another handle. Also push duplicate (which
        // gets consolidated).
        let log1 = log0.clone();
        log0.record((d0, "2".into(), 2, 1));
        log1.record((d0, "2".into(), 2, 1));

        // Send retraction.
        log0.record((d0, "3".into(), 3, 1));
        log1.record((d0, "3".into(), 4, -1));

        // Send out of order.
        log0.record((d0, "1".into(), 1, 1));

        // Send to a different shard.
        log1.record((d1, "5".into(), 5, 1));

        // Assert_eq with no advancement or truncation.
        log0.assert_eq(
            d0,
            0,
            6,
            vec![
                ("0".into(), 0, 1),
                ("1".into(), 1, 1),
                ("2".into(), 2, 2),
                ("3".into(), 3, 1),
                ("3".into(), 4, -1),
            ],
        );
        log0.assert_eq(d1, 0, 6, vec![("5".into(), 5, 1)]);

        // Assert_eq with advancement.
        log0.assert_eq(
            d0,
            4,
            6,
            vec![("0".into(), 4, 1), ("1".into(), 4, 1), ("2".into(), 4, 2)],
        );

        // Assert_eq with truncation.
        log0.assert_eq(
            d0,
            0,
            3,
            vec![("0".into(), 0, 1), ("1".into(), 1, 1), ("2".into(), 2, 2)],
        );
    }

    #[mz_ore::test(tokio::test)]
    #[cfg_attr(miri, ignore)] // too slow
    async fn unique_batch_serialization() {
        let client = PersistClient::new_for_tests().await;
        let mut write = writer(&client, ShardId::new()).await;
        let data = [(("foo".to_owned(), ()), 0, 1)];
        let batch = write
            .batch(&data, Antichain::from_elem(0), Antichain::from_elem(1))
            .await
            .unwrap();

        // Pretend we somehow got two batches that happen to have the same
        // serialization.
        let b0_raw = batch.into_transmittable_batch();
        let b1_raw = b0_raw.clone();
        assert_eq!(b0_raw.encode_to_vec(), b1_raw.encode_to_vec());

        // They don't if we wrap them in ProtoIdBatch.
        let b0 = ProtoIdBatch::new(b0_raw.clone());
        let b1 = ProtoIdBatch::new(b1_raw);
        assert!(b0.encode_to_vec() != b1.encode_to_vec());

        // The transmittable batch roundtrips.
        let roundtrip = ProtoIdBatch::parse(&b0.encode_to_vec());
        assert_eq!(roundtrip, b0_raw);

        // We've started running things in all of staging, so we've got to be
        // able to read the previous serialization (ProtoBatch directly) back.
        let roundtrip = ProtoIdBatch::parse(&b0_raw.encode_to_vec());
        assert_eq!(roundtrip, b0_raw);
    }
}