mz_storage/source/mysql/snapshot.rs

// Copyright Materialize, Inc. and contributors. All rights reserved.
//
// Use of this software is governed by the Business Source License
// included in the LICENSE file.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0.

//! Renders the table snapshot side of the [`MySqlSourceConnection`] dataflow.
//!
//! # Snapshot reading
//!
//! Depending on the `resume_upper` of each entry in `source_outputs`, this dataflow decides which
//! tables to snapshot and performs a simple `SELECT * FROM table` on them in order to get a
//! snapshot. There are a few subtle points about this operation, described below.
//!
//! It is crucial for correctness that we always perform the snapshot of all tables at a specific
//! point in time. This must be true even in the presence of restarts or partially committed
//! snapshots. The consistent point that the snapshot must happen at is discovered and durably
//! recorded during planning of the source and is exposed to this ingestion dataflow via the
//! `initial_gtid_set` field in `MySqlSourceDetails`.
//!
//! Unfortunately MySQL does not provide an API to perform a transaction at a specific point in
//! time. Instead, MySQL lets us perform a snapshot of a table and tells us at which point in time
//! the snapshot was taken. Using this information we can take a snapshot at an arbitrary point in
//! time and then "rewind" it to the desired `initial_gtid_set`. These two phases are described in
//! the following sections.
//!
//! ## Producing a snapshot at a known point in time.
//!
//! Ideally we would like to start a transaction and ask MySQL to tell us the point in time this
//! transaction is running at. As far as we know there is no such API, so we achieve this using
//! table locks instead.
//!
//! The full set of tables that are meant to be snapshotted is partitioned among the workers. Each
//! worker initiates a connection to the server and acquires a table lock on all the tables that
//! have been assigned to it. By doing so we establish a moment in time where we know no writes are
//! happening to the tables we are interested in. After the locks are taken each worker reads the
//! current upper frontier (`snapshot_upper`) using the `@@gtid_executed` system variable. This
//! frontier establishes an upper bound on any possible write to the tables of interest until the
//! lock is released.
//!
//! Each worker now starts a transaction via a new connection with 'REPEATABLE READ' and
//! 'CONSISTENT SNAPSHOT' semantics. Due to linearizability we know that this transaction's view of
//! the database must be at some time `t_snapshot` such that `snapshot_upper <= t_snapshot`. We
//! don't actually know the exact value of `t_snapshot` and it might be strictly greater than
//! `snapshot_upper`. However, because this transaction will only be used to read the locked tables
//! and we know that `snapshot_upper` is an upper bound on all the writes that have happened to
//! them, we can safely pretend that the transaction's `t_snapshot` is *equal* to `snapshot_upper`.
//! We have therefore succeeded in starting a transaction at a known point in time!
//!
//! At this point it is safe for each worker to unlock the tables, since the transaction has
//! established a point in time, and close the initial connection. Each worker can then read the
//! snapshot of the tables it is responsible for and publish it downstream.
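//!
//! A minimal sketch of the per-worker handshake described above, assuming `lock_conn` and `conn`
//! are two separate connections to the same server (error handling and configuration elided; the
//! real statements are issued further down in this module):
//!
//! ```ignore
//! // Freeze writes to the assigned tables.
//! lock_conn.query_drop("LOCK TABLES `db`.`t1` READ, `db`.`t2` READ").await?;
//! // Record the frontier of everything committed so far; no write to the locked
//! // tables can land beyond this point until the locks are released.
//! let snapshot_upper: String = lock_conn
//!     .query_first("SELECT @@global.gtid_executed")
//!     .await?
//!     .expect("gtid_executed is always available");
//! // On the second connection, pin a consistent snapshot. Its view is at some
//! // t_snapshot >= snapshot_upper, which for the locked tables is
//! // indistinguishable from being exactly at snapshot_upper.
//! let mut tx_opts = TxOpts::default();
//! tx_opts
//!     .with_isolation_level(IsolationLevel::RepeatableRead)
//!     .with_consistent_snapshot(true)
//!     .with_readonly(true);
//! let tx = conn.start_transaction(tx_opts).await?;
//! // The transaction now holds the point in time, so the locks can go.
//! lock_conn.query_drop("UNLOCK TABLES").await?;
//! ```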
//!
//! TODO: Other software products hold the table lock for the duration of the snapshot, and some do
//! not. We should figure out why and if we need to hold the lock longer. This may be because of a
//! difference in how REPEATABLE READ works in some MySQL-compatible systems (e.g. Aurora MySQL).
//!
//! ## Rewinding the snapshot to a specific point in time.
//!
//! Having obtained a snapshot of a table at some `snapshot_upper` we are now tasked with
//! transforming this snapshot into one at `initial_gtid_set`. In other words we have produced a
//! snapshot containing all updates that happened at `t: !(snapshot_upper <= t)` but what we
//! actually want is a snapshot containing all updates that happened at `t: !(initial_gtid <= t)`.
//!
//! If we assume that `initial_gtid_set <= snapshot_upper`, which is a fair assumption since the
//! former is obtained before the latter, then we can observe that the snapshot we produced
//! contains all updates at `t: !(initial_gtid <= t)` (i.e. the snapshot we want) and some
//! additional unwanted updates at `t: initial_gtid <= t && !(snapshot_upper <= t)`. We happen to
//! know exactly what those additional unwanted updates are, because they will be obtained by
//! reading the replication stream in the replication operator, and so all we need to do to
//! "rewind" our `snapshot_upper` snapshot to `initial_gtid` is to ask the replication operator to
//! "undo" any updates that fall in the undesirable region.
//!
//! This is exactly what `RewindRequest` is about. It informs the replication operator that a
//! particular table has been snapshotted at `snapshot_upper` and asks for all the updates
//! discovered during replication that happen at `t: initial_gtid <= t && !(snapshot_upper <= t)`
//! to be cancelled. In Differential Dataflow this is as simple as flipping the sign of the diff
//! field.
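//!
//! As a toy illustration of the cancellation (not the replication operator's actual code), an
//! update read from the replication stream at time `t` is negated when it falls in the unwanted
//! region, assuming hypothetical `row`/`diff` bindings and an `emit` helper:
//!
//! ```ignore
//! let in_unwanted_region =
//!     initial_gtid_set.less_equal(&t) && !rewind.snapshot_upper.less_equal(&t);
//! if in_unwanted_region {
//!     // Flip the sign of the diff and emit it at the minimum timestamp so it
//!     // cancels the copy of this update already present in the snapshot.
//!     emit((row, GtidPartition::minimum(), -diff));
//! }
//! ```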
//!
//! The snapshot reader emits updates at the minimum timestamp (by convention) to allow the
//! updates to be potentially negated by the replication operator, which will emit negated
//! updates at the minimum timestamp (by convention) when it encounters rows from a table that
//! occur before the GTID frontier in the Rewind Request for that table.

use std::collections::{BTreeMap, BTreeSet};
use std::rc::Rc;
use std::sync::Arc;

use differential_dataflow::AsCollection;
use futures::TryStreamExt;
use itertools::Itertools;
use mysql_async::prelude::Queryable;
use mysql_async::{IsolationLevel, Row as MySqlRow, TxOpts};
use mz_mysql_util::{
    ER_NO_SUCH_TABLE, MySqlError, pack_mysql_row, query_sys_var, quote_identifier,
};
use mz_ore::cast::CastFrom;
use mz_ore::future::InTask;
use mz_ore::iter::IteratorExt;
use mz_ore::metrics::MetricsFutureExt;
use mz_repr::{Diff, Row};
use mz_storage_types::errors::DataflowError;
use mz_storage_types::sources::MySqlSourceConnection;
use mz_storage_types::sources::mysql::{GtidPartition, gtid_set_frontier};
use mz_timely_util::antichain::AntichainExt;
use mz_timely_util::builder_async::{OperatorBuilder as AsyncOperatorBuilder, PressOnDropButton};
use mz_timely_util::containers::stack::AccountedStackBuilder;
use timely::container::CapacityContainerBuilder;
use timely::dataflow::operators::core::Map;
use timely::dataflow::operators::{CapabilitySet, Concat};
use timely::dataflow::{Scope, Stream};
use timely::progress::Timestamp;
use tracing::{error, trace};

use crate::metrics::source::mysql::MySqlSnapshotMetrics;
use crate::source::RawSourceCreationConfig;
use crate::source::types::{SignaledFuture, SourceMessage, StackedCollection};
use crate::statistics::SourceStatistics;

use super::schemas::verify_schemas;
use super::{
    DefiniteError, MySqlTableName, ReplicationError, RewindRequest, SourceOutputInfo,
    TransientError, return_definite_error, validate_mysql_repl_settings,
};

/// Renders the snapshot dataflow. See the module documentation for more information.
pub(crate) fn render<G: Scope<Timestamp = GtidPartition>>(
    scope: G,
    config: RawSourceCreationConfig,
    connection: MySqlSourceConnection,
    source_outputs: Vec<SourceOutputInfo>,
    metrics: MySqlSnapshotMetrics,
) -> (
    StackedCollection<G, (usize, Result<SourceMessage, DataflowError>)>,
    Stream<G, RewindRequest>,
    Stream<G, ReplicationError>,
    PressOnDropButton,
) {
    let mut builder =
        AsyncOperatorBuilder::new(format!("MySqlSnapshotReader({})", config.id), scope.clone());

    let (raw_handle, raw_data) = builder.new_output::<AccountedStackBuilder<_>>();
    let (rewinds_handle, rewinds) = builder.new_output::<CapacityContainerBuilder<_>>();
    // Captures DefiniteErrors that affect the entire source, including all outputs
    let (definite_error_handle, definite_errors) =
        builder.new_output::<CapacityContainerBuilder<_>>();

    // A global view of all outputs that will be snapshotted by all workers.
    let mut all_outputs = vec![];
    // A map containing only the table infos that this worker should snapshot.
    let mut reader_snapshot_table_info = BTreeMap::new();
    // Maps MySQL table name to export `SourceStatistics`. The same info exists in
    // `reader_snapshot_table_info`, but this avoids having to iterate + map each time the
    // statistics are needed.
    let mut export_statistics = BTreeMap::new();
    for output in source_outputs.into_iter() {
        // Determine which outputs need to be snapshotted and which already have been.
        if *output.resume_upper != [GtidPartition::minimum()] {
            // Already has been snapshotted.
            continue;
        }
        all_outputs.push(output.output_index);
        if config.responsible_for(&output.table_name) {
            let export_stats = config
                .statistics
                .get(&output.export_id)
                .expect("statistics have been initialized")
                .clone();
            export_statistics
                .entry(output.table_name.clone())
                .or_insert_with(Vec::new)
                .push(export_stats);

            reader_snapshot_table_info
                .entry(output.table_name.clone())
                .or_insert_with(Vec::new)
                .push(output);
        }
    }

    let (button, transient_errors): (_, Stream<G, Rc<TransientError>>) =
        builder.build_fallible(move |caps| {
            let busy_signal = Arc::clone(&config.busy_signal);
            Box::pin(SignaledFuture::new(busy_signal, async move {
                let [data_cap_set, rewind_cap_set, definite_error_cap_set]: &mut [_; 3] =
                    caps.try_into().unwrap();

                let id = config.id;
                let worker_id = config.worker_id;

                if !all_outputs.is_empty() {
                    // A worker *must* emit a count even if it is not responsible for
                    // snapshotting a table, as statistic summarization will return null if any
                    // worker hasn't set a value. This will also reset snapshot stats for any
                    // exports that are not being snapshotted.
                    for statistics in config.statistics.values() {
                        statistics.set_snapshot_records_known(0);
                        statistics.set_snapshot_records_staged(0);
                    }
                }

                // If this worker has no tables to snapshot then there is nothing to do.
                if reader_snapshot_table_info.is_empty() {
                    trace!(%id, "timely-{worker_id} initializing table reader \
                                 with no tables to snapshot, exiting");
                    return Ok(());
                } else {
                    trace!(%id, "timely-{worker_id} initializing table reader \
                                 with {} tables to snapshot",
                           reader_snapshot_table_info.len());
                }

                let connection_config = connection
                    .connection
                    .config(
                        &config.config.connection_context.secrets_reader,
                        &config.config,
                        InTask::Yes,
                    )
                    .await?;
                let task_name = format!("timely-{worker_id} MySQL snapshotter");

                let lock_clauses = reader_snapshot_table_info
                    .keys()
                    .map(|t| format!("{} READ", t))
                    .collect::<Vec<String>>()
                    .join(", ");
                let mut lock_conn = connection_config
                    .connect(
                        &task_name,
                        &config.config.connection_context.ssh_tunnel_manager,
                    )
                    .await?;
                if let Some(timeout) = config
                    .config
                    .parameters
                    .mysql_source_timeouts
                    .snapshot_lock_wait_timeout
                {
                    lock_conn
                        .query_drop(format!(
                            "SET @@session.lock_wait_timeout = {}",
                            timeout.as_secs()
                        ))
                        .await?;
                }

                trace!(%id, "timely-{worker_id} acquiring table locks: {lock_clauses}");
                match lock_conn
                    .query_drop(format!("LOCK TABLES {lock_clauses}"))
                    .await
                {
                    // Handle the case where a table we are snapshotting has been dropped or renamed.
                    Err(mysql_async::Error::Server(mysql_async::ServerError {
                        code,
                        message,
                        ..
                    })) if code == ER_NO_SUCH_TABLE => {
                        trace!(%id, "timely-{worker_id} received unknown table error from \
                                     lock query");
                        let err = DefiniteError::TableDropped(message);
                        return Ok(return_definite_error(
                            err,
                            &all_outputs,
                            &raw_handle,
                            data_cap_set,
                            &definite_error_handle,
                            definite_error_cap_set,
                        )
                        .await);
                    }
                    e => e?,
                };

                // Record the frontier of future GTIDs based on the executed GTID set at the
                // start of the snapshot.
                let snapshot_gtid_set =
                    query_sys_var(&mut lock_conn, "global.gtid_executed").await?;
                let snapshot_gtid_frontier = match gtid_set_frontier(&snapshot_gtid_set) {
                    Ok(frontier) => frontier,
                    Err(err) => {
                        let err = DefiniteError::UnsupportedGtidState(err.to_string());
                        // If we received a GTID set with non-consecutive intervals this breaks all
                        // our assumptions, so there is nothing else we can do.
                        return Ok(return_definite_error(
                            err,
                            &all_outputs,
                            &raw_handle,
                            data_cap_set,
                            &definite_error_handle,
                            definite_error_cap_set,
                        )
                        .await);
                    }
                };

                // TODO(roshan): Insert metric for how long it took to acquire the locks
                trace!(%id, "timely-{worker_id} acquired table locks at: {}",
                       snapshot_gtid_frontier.pretty());

                let mut conn = connection_config
                    .connect(
                        &task_name,
                        &config.config.connection_context.ssh_tunnel_manager,
                    )
                    .await?;

                // Verify the MySQL system settings are correct for consistent row-based
                // replication using GTIDs.
                match validate_mysql_repl_settings(&mut conn).await {
                    Err(err @ MySqlError::InvalidSystemSetting { .. }) => {
                        return Ok(return_definite_error(
                            DefiniteError::ServerConfigurationError(err.to_string()),
                            &all_outputs,
                            &raw_handle,
                            data_cap_set,
                            &definite_error_handle,
                            definite_error_cap_set,
                        )
                        .await);
                    }
                    Err(err) => Err(err)?,
                    Ok(()) => (),
                };

                trace!(%id, "timely-{worker_id} starting transaction with \
                             consistent snapshot at: {}", snapshot_gtid_frontier.pretty());

                // Start a transaction with REPEATABLE READ and 'CONSISTENT SNAPSHOT' semantics
                // so we can read a consistent snapshot of the table at the specific GTID we read.
                let mut tx_opts = TxOpts::default();
                tx_opts
                    .with_isolation_level(IsolationLevel::RepeatableRead)
                    .with_consistent_snapshot(true)
                    .with_readonly(true);
                let mut tx = conn.start_transaction(tx_opts).await?;
                // Set the session time zone to UTC so that we can read TIMESTAMP columns as UTC.
                // From https://dev.mysql.com/doc/refman/8.0/en/datetime.html: "MySQL converts TIMESTAMP values
                // from the current time zone to UTC for storage, and back from UTC to the current time zone
                // for retrieval. (This does not occur for other types such as DATETIME.)"
                tx.query_drop("set @@session.time_zone = '+00:00'").await?;

                // Configure the query execution time based on the corresponding param. We want
                // to be able to override the server value here in case it's set too low,
                // relative to the size of the data we need to copy.
                if let Some(timeout) = config
                    .config
                    .parameters
                    .mysql_source_timeouts
                    .snapshot_max_execution_time
                {
                    tx.query_drop(format!(
                        "SET @@session.max_execution_time = {}",
                        timeout.as_millis()
                    ))
                    .await?;
                }

                // We have started our transaction so we can unlock the tables.
                lock_conn.query_drop("UNLOCK TABLES").await?;
                lock_conn.disconnect().await?;

                trace!(%id, "timely-{worker_id} started transaction");

                // Verify the schemas of the tables we are snapshotting.
                let errored_outputs =
                    verify_schemas(&mut tx, reader_snapshot_table_info.iter().collect()).await?;
                let mut removed_outputs = BTreeSet::new();
                for (output, err) in errored_outputs {
                    // Publish the error for this table and stop ingesting it.
                    raw_handle
                        .give_fueled(
                            &data_cap_set[0],
                            (
                                (output.output_index, Err(err.clone().into())),
                                GtidPartition::minimum(),
                                Diff::ONE,
                            ),
                        )
                        .await;
                    trace!(%id, "timely-{worker_id} stopping snapshot of output {output:?} \
                                due to schema mismatch");
                    removed_outputs.insert(output.output_index);
                }
                for (_, outputs) in reader_snapshot_table_info.iter_mut() {
                    outputs.retain(|output| !removed_outputs.contains(&output.output_index));
                }
                reader_snapshot_table_info.retain(|_, outputs| !outputs.is_empty());

                let snapshot_total = fetch_snapshot_size(
                    &mut tx,
                    reader_snapshot_table_info
                        .iter()
                        .map(|(name, outputs)| {
                            (
                                name.clone(),
                                outputs.len(),
                                export_statistics.get(name).unwrap(),
                            )
                        })
                        .collect(),
                    metrics,
                )
                .await?;

                // This worker has nothing else to do
                if reader_snapshot_table_info.is_empty() {
                    return Ok(());
                }

                // Read the snapshot data from the tables.
                let mut final_row = Row::default();

                let mut snapshot_staged_total = 0;
                for (table, outputs) in &reader_snapshot_table_info {
                    let mut snapshot_staged = 0;
                    let query = build_snapshot_query(outputs);
                    trace!(%id, "timely-{worker_id} reading snapshot query='{}'", query);
                    let mut results = tx.exec_stream(query, ()).await?;
                    while let Some(row) = results.try_next().await? {
                        let row: MySqlRow = row;
                        snapshot_staged += 1;
                        for (output, row_val) in outputs.iter().repeat_clone(row) {
                            let event = match pack_mysql_row(&mut final_row, row_val, &output.desc)
                            {
                                Ok(row) => Ok(SourceMessage {
                                    key: Row::default(),
                                    value: row,
                                    metadata: Row::default(),
                                }),
                                // Produce a DefiniteError in the stream for any rows that fail to decode.
                                Err(err @ MySqlError::ValueDecodeError { .. }) => {
                                    Err(DataflowError::from(DefiniteError::ValueDecodeError(
                                        err.to_string(),
                                    )))
                                }
                                Err(err) => Err(err)?,
                            };
                            raw_handle
                                .give_fueled(
                                    &data_cap_set[0],
                                    (
                                        (output.output_index, event),
                                        GtidPartition::minimum(),
                                        Diff::ONE,
                                    ),
                                )
                                .await;
                        }
                        // This overcounting maintains existing behavior but will be removed once
                        // readers no longer rely on the value.
                        snapshot_staged_total += u64::cast_from(outputs.len());
                        if snapshot_staged_total % 1000 == 0 {
                            for statistics in export_statistics.get(table).unwrap() {
                                statistics.set_snapshot_records_staged(snapshot_staged);
                            }
                        }
                    }
                    for statistics in export_statistics.get(table).unwrap() {
                        statistics.set_snapshot_records_staged(snapshot_staged);
                    }
                    trace!(%id, "timely-{worker_id} snapshotted {} records from \
                                 table '{table}'", snapshot_staged * u64::cast_from(outputs.len()));
                }

                // We are done with the snapshot so now we will emit rewind requests. It is
                // important that this happens after the snapshot has finished because this is what
                // unblocks the replication operator and we want this to happen serially. It might
                // seem like a good idea to read the replication stream concurrently with the
                // snapshot but it actually leads to a lot of data being staged for the future,
                // which needlessly consumes memory in the cluster.
                for (table, outputs) in reader_snapshot_table_info {
                    for output in outputs {
                        trace!(%id, "timely-{worker_id} producing rewind request for {table} \
                                     output {}", output.output_index);
                        let req = RewindRequest {
                            output_index: output.output_index,
                            snapshot_upper: snapshot_gtid_frontier.clone(),
                        };
                        rewinds_handle.give(&rewind_cap_set[0], req);
                    }
                }
                *rewind_cap_set = CapabilitySet::new();

                // TODO (maz): Should we remove this to match Postgres?
                if snapshot_staged_total < snapshot_total {
                    error!(%id, "timely-{worker_id} snapshot size {snapshot_total} is somehow \
                                 bigger than records staged {snapshot_staged_total}");
                }

                Ok(())
            }))
        });

    // TODO: Split row decoding into a separate operator that can be distributed across all workers

    let errors = definite_errors.concat(&transient_errors.map(ReplicationError::from));

    (
        raw_data.as_collection(),
        rewinds,
        errors,
        button.press_on_drop(),
    )
}

/// Fetches the size of the snapshot on this worker and emits the appropriate metrics and
/// statistics for each table.
async fn fetch_snapshot_size<Q>(
    conn: &mut Q,
    tables: Vec<(MySqlTableName, usize, &Vec<SourceStatistics>)>,
    metrics: MySqlSnapshotMetrics,
) -> Result<u64, anyhow::Error>
where
    Q: Queryable,
{
    let mut total = 0;
    for (table, num_outputs, export_statistics) in tables {
        let stats = collect_table_statistics(conn, &table).await?;
        metrics.record_table_count_latency(table.1, table.0, stats.count_latency);
        for export_stat in export_statistics {
            export_stat.set_snapshot_records_known(stats.count);
            export_stat.set_snapshot_records_staged(0);
        }
        total += stats.count * u64::cast_from(num_outputs);
    }
    Ok(total)
}

/// Builds the SQL query to be used for creating the snapshot using the first entry in `outputs`.
///
/// Expects `outputs` to contain entries for a single table, and to have at least one entry.
/// Expects each `MySqlTableDesc` entry to contain all columns described in
/// `information_schema.columns`.
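///
/// A hypothetical example of the resulting query shape (see the unit test at the bottom of this
/// module for a concrete case):
///
/// ```ignore
/// let query = build_snapshot_query(&outputs);
/// // e.g. "SELECT `c1`, `c2`, `c3` FROM `myschema`.`mytable`"
/// ```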
#[must_use]
fn build_snapshot_query(outputs: &[SourceOutputInfo]) -> String {
    let info = outputs.first().expect("MySQL table info");
    for output in &outputs[1..] {
        // The columns are decoded solely based on position, so we just need to ensure that
        // all columns are accounted for.
        assert!(
            info.desc.columns.len() == output.desc.columns.len(),
            "Mismatch in table descriptions for {}",
            info.table_name
        );
    }
    let columns = info
        .desc
        .columns
        .iter()
        .map(|col| quote_identifier(&col.name))
        .join(", ");
    format!("SELECT {} FROM {}", columns, info.table_name)
}

#[derive(Default)]
struct TableStatistics {
    count_latency: f64,
    count: u64,
}

async fn collect_table_statistics<Q>(
    conn: &mut Q,
    table: &MySqlTableName,
) -> Result<TableStatistics, anyhow::Error>
where
    Q: Queryable,
{
    let mut stats = TableStatistics::default();

    let count_row: Option<u64> = conn
        .query_first(format!("SELECT COUNT(*) FROM {}", table))
        .wall_time()
        .set_at(&mut stats.count_latency)
        .await?;
    stats.count = count_row.ok_or_else(|| anyhow::anyhow!("failed to COUNT(*) {table}"))?;

    Ok(stats)
}

#[cfg(test)]
mod tests {
    use super::*;
    use mz_mysql_util::{MySqlColumnDesc, MySqlTableDesc};
    use timely::progress::Antichain;

    #[mz_ore::test]
    fn snapshot_query_duplicate_table() {
        let schema_name = "myschema".to_string();
        let table_name = "mytable".to_string();
        let table = MySqlTableName(schema_name.clone(), table_name.clone());
        let columns = ["c1", "c2", "c3"]
            .iter()
            .map(|col| MySqlColumnDesc {
                name: col.to_string(),
                column_type: None,
                meta: None,
            })
            .collect::<Vec<_>>();
        let desc = MySqlTableDesc {
            schema_name: schema_name.clone(),
            name: table_name.clone(),
            columns,
            keys: BTreeSet::default(),
        };
        let info = SourceOutputInfo {
            output_index: 1, // ignored
            table_name: table.clone(),
            desc,
            text_columns: vec![],
            exclude_columns: vec![],
            initial_gtid_set: Antichain::default(),
            resume_upper: Antichain::default(),
            export_id: mz_repr::GlobalId::User(1),
        };
        let query = build_snapshot_query(&[info.clone(), info]);
        assert_eq!(
            format!(
                "SELECT `c1`, `c2`, `c3` FROM `{}`.`{}`",
                &schema_name, &table_name
            ),
            query
        );
    }
}