Struct mz_adapter::catalog::Catalog
pub struct Catalog {
state: CatalogState,
plans: CatalogPlans,
expr_cache_handle: Option<ExpressionCacheHandle>,
storage: Arc<Mutex<Box<dyn DurableCatalogState>>>,
transient_revision: u64,
}
A Catalog keeps track of the SQL objects known to the planner.
For each object, it keeps track of both forward and reverse dependencies: that is, which objects each object depends upon and which objects depend upon it. It enforces the SQL rules around dropping: an object cannot be dropped until all of the objects that depend upon it are dropped. It also enforces uniqueness of names.
SQL mandates a hierarchy of exactly three layers. A catalog contains databases, databases contain schemas, and schemas contain catalog items, like sources, sinks, views, and indexes.
To the outside world, databases, schemas, and items are all identified by name. Items can be referred to by their FullItemName, which fully and unambiguously specifies the item, or a PartialItemName, which can omit the database name and/or the schema name. Partial names can be converted into full names via a complicated resolution process documented by the CatalogState::resolve method.

The catalog also maintains special “ambient schemas”: virtual schemas, implicitly present in all databases, that house various system views. The big examples of ambient schemas are pg_catalog and mz_catalog.
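To make the full/partial naming scheme concrete, here is a minimal, self-contained sketch of partial-name resolution. It uses simplified stand-in structs rather than the real mz_sql types, and the real CatalogState::resolve handles considerably more (ambient schemas, temporary schemas, connection-specific state):

```rust
// Simplified stand-ins for the real FullItemName / PartialItemName types.
#[derive(Debug, Clone, PartialEq)]
struct FullItemName {
    database: String,
    schema: String,
    item: String,
}

struct PartialItemName {
    database: Option<String>,
    schema: Option<String>,
    item: String,
}

/// Fill in missing name components from the current database and the first
/// schema on the search path that contains a matching item.
fn resolve(
    partial: &PartialItemName,
    current_database: &str,
    search_path: &[&str],
    exists: impl Fn(&FullItemName) -> bool,
) -> Option<FullItemName> {
    let database = partial
        .database
        .clone()
        .unwrap_or_else(|| current_database.to_string());
    let schemas: Vec<String> = match &partial.schema {
        Some(schema) => vec![schema.clone()],
        None => search_path.iter().map(|s| s.to_string()).collect(),
    };
    schemas
        .into_iter()
        .map(|schema| FullItemName {
            database: database.clone(),
            schema,
            item: partial.item.clone(),
        })
        .find(|candidate| exists(candidate))
}
```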
Fields

state: CatalogState
plans: CatalogPlans
expr_cache_handle: Option<ExpressionCacheHandle>
storage: Arc<Mutex<Box<dyn DurableCatalogState>>>
transient_revision: u64
Implementations

impl Catalog
pub fn render_notices(
    &self,
    df_meta: DataflowMetainfo<RawOptimizerNotice>,
    notice_ids: Vec<GlobalId>,
    item_id: Option<GlobalId>,
) -> DataflowMetainfo<Arc<OptimizerNotice>>

Transform the DataflowMetainfo by rendering an OptimizerNotice for each RawOptimizerNotice.
impl Catalog
pub async fn initialize_state<'a>(
    config: StateConfig,
    storage: &'a mut Box<dyn DurableCatalogState>,
) -> Result<InitializeStateResult, AdapterError>

Initializes a CatalogState. Separate from Catalog::open to avoid depending on state external to a mz_catalog::durable::DurableCatalogState (for example: no mz_secrets::SecretsReader).
pub fn open(
    config: Config<'_>,
) -> BoxFuture<'static, Result<OpenCatalogResult, AdapterError>>

Opens or creates a catalog that stores data at path.
Returns the catalog, metadata about builtin objects that have changed schemas since last restart, a list of updates to builtin tables that describe the initial state of the catalog, and the version of the catalog before any migrations were performed.
BOXED FUTURE: As of Nov 2023 the returned Future from this function was 17KB. This would get stored on the stack which is bad for runtime performance, and blow up our stack usage. Because of that we purposefully move this Future onto the heap (i.e. Box it).
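The boxing described above is a general Rust technique rather than anything specific to the catalog. A hedged, minimal illustration (not the actual open body) looks like this:

```rust
use futures::future::{BoxFuture, FutureExt};

// Build a potentially large async state machine and immediately move it onto
// the heap, so callers hold only a pointer-sized handle on their stack.
fn open_sketch(config: String) -> BoxFuture<'static, Result<(), String>> {
    async move {
        // ... large amount of async initialization work using `config` ...
        let _ = config;
        Ok(())
    }
    .boxed()
}
```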
async fn initialize_storage_controller_state(
    &mut self,
    storage_controller: &mut dyn StorageController<Timestamp = Timestamp>,
    storage_collections_to_drop: BTreeSet<CatalogItemId>,
) -> Result<(), CatalogError>

Initializes the storage_controller to understand all shards that self expects to exist.
Note that this must be done before creating/rendering collections because the storage controller might not be aware of new system collections created between versions.
pub async fn initialize_controller(
    &mut self,
    config: ControllerConfig,
    envd_epoch: NonZeroI64,
    read_only: bool,
    storage_collections_to_drop: BTreeSet<CatalogItemId>,
) -> Result<Controller<Timestamp>, CatalogError>

mz_controller::Controller depends on durable catalog state to boot, so make it available and initialize the controller.
fn generate_builtin_migration_metadata(
    state: &CatalogState,
    txn: &mut Transaction<'_>,
    migrated_ids: Vec<CatalogItemId>,
    id_fingerprint_map: BTreeMap<CatalogItemId, String>,
) -> Result<BuiltinMigrationMetadata, Error>

The objects in the catalog form one or more DAGs (directed acyclic graphs) via object dependencies. To migrate a builtin object we must drop that object along with all of its descendants, and then recreate that object along with all of its descendants using new CatalogItemIds. To achieve this we perform a DFS (depth-first search) on the catalog items, starting with the nodes that correspond to builtin objects whose schemas have changed.

Objects need to be dropped starting from the leaves of the DAG going up towards the roots, and they need to be recreated starting at the roots of the DAG and going towards the leaves. A simplified sketch of this traversal follows.
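The sketch below assumes plain u64 ids and an explicit map from each object to its direct dependents; the real code walks CatalogState and CatalogEntrys instead.

```rust
use std::collections::{BTreeMap, BTreeSet};

// Post-order DFS: each object is emitted only after everything that depends on
// it, so `out` is a valid drop order (leaves first). Recreating in the reverse
// order rebuilds roots before their dependents.
fn drop_order(roots: &[u64], dependants: &BTreeMap<u64, Vec<u64>>) -> Vec<u64> {
    fn visit(
        id: u64,
        dependants: &BTreeMap<u64, Vec<u64>>,
        visited: &mut BTreeSet<u64>,
        out: &mut Vec<u64>,
    ) {
        if !visited.insert(id) {
            return;
        }
        for &dependant in dependants.get(&id).into_iter().flatten() {
            visit(dependant, dependants, visited, out);
        }
        out.push(id);
    }

    let mut visited = BTreeSet::new();
    let mut out = Vec::new();
    for &root in roots {
        visit(root, dependants, &mut visited, &mut out);
    }
    out // drop in this order; recreate in reverse
}
```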
fn topological_sort<'a, 'b>( state: &'a CatalogState, id: CatalogItemId, visited_set: &'b mut BTreeSet<CatalogItemId>, ) -> Vec<&'a CatalogEntry>
async fn apply_builtin_migration( state: &mut CatalogState, txn: &mut Transaction<'_>, migration_metadata: &mut BuiltinMigrationMetadata, ) -> Result<Vec<BuiltinTableUpdate<&'static BuiltinTable>>, Error>
impl Catalog
pub fn as_optimizer_catalog(self: Arc<Self>) -> Arc<dyn OptimizerCatalog>
impl Catalog
fn should_audit_log_item(item: &CatalogItem) -> bool
fn temporary_ids(
    &self,
    ops: &[Op],
    temporary_drops: BTreeSet<(&ConnectionId, String)>,
) -> Result<BTreeSet<CatalogItemId>, Error>

Gets the CatalogItemIds of temporary items to be created and checks for name collisions within a connection id.
pub async fn transact( &mut self, storage_controller: Option<&mut dyn StorageController<Timestamp = Timestamp>>, oracle_write_ts: Timestamp, session: Option<&ConnMeta>, ops: Vec<Op>, ) -> Result<TransactionResult, AdapterError>
async fn transact_inner(
    storage_controller: Option<&mut dyn StorageController<Timestamp = Timestamp>>,
    oracle_write_ts: Timestamp,
    session: Option<&ConnMeta>,
    ops: Vec<Op>,
    temporary_ids: BTreeSet<CatalogItemId>,
    builtin_table_updates: &mut Vec<BuiltinTableUpdate>,
    audit_events: &mut Vec<VersionedEvent>,
    tx: &mut Transaction<'_>,
    state: &mut CatalogState,
) -> Result<(), AdapterError>

Performs the transaction described by ops.

Panics

- If ops contains Op::TransactionDryRun and it is not the final element.
- If the only element of ops is Op::TransactionDryRun.
async fn transact_op(
    oracle_write_ts: Timestamp,
    session: Option<&ConnMeta>,
    op: Op,
    temporary_ids: &BTreeSet<CatalogItemId>,
    audit_events: &mut Vec<VersionedEvent>,
    tx: &mut Transaction<'_>,
    state: &CatalogState,
    storage_collections_to_create: &mut BTreeSet<GlobalId>,
    storage_collections_to_drop: &mut BTreeSet<GlobalId>,
) -> Result<(Option<BuiltinTableUpdate>, Vec<(TemporaryItem, StateDiff)>), AdapterError>

Performs the transaction operation described by op. This function prepares the changes in tx, but does not update state. state will be updated when applying the durable changes.

Optionally returns a builtin table update for any builtin table updates that cannot be derived from the durable catalog state, and temporary item diffs. These are all unusual edge cases that ideally will not exist in the future.
fn log_update(state: &CatalogState, id: &CatalogItemId)
fn update_privilege_owners(
    privileges: &mut PrivilegeMap,
    old_owner: RoleId,
    new_owner: RoleId,
)
Update privileges to reflect the new owner. Based off of PostgreSQL’s implementation: https://github.com/postgres/postgres/blob/43a33ef54e503b61f269d088f2623ba3b9484ad7/src/backend/utils/adt/acl.c#L1078-L1177
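A hedged sketch of that re-owning step, using simplified stand-in types instead of PrivilegeMap (assumptions: an ACL entry carries a grantee, a grantor, and a privilege bitset, and entries that collide after re-attribution are merged):

```rust
use std::collections::BTreeMap;

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct RoleId(u64);

#[derive(Debug, Clone)]
struct AclItem {
    grantee: RoleId,
    grantor: RoleId,
    privileges: u64, // bitset of privilege flags
}

// Re-attribute grants made by or to the old owner to the new owner, then merge
// any entries that now share the same (grantee, grantor) pair.
fn update_privilege_owners(acl: &mut Vec<AclItem>, old_owner: RoleId, new_owner: RoleId) {
    let mut merged: BTreeMap<(RoleId, RoleId), u64> = BTreeMap::new();
    for mut item in acl.drain(..) {
        if item.grantor == old_owner {
            item.grantor = new_owner;
        }
        if item.grantee == old_owner {
            item.grantee = new_owner;
        }
        *merged.entry((item.grantee, item.grantor)).or_insert(0) |= item.privileges;
    }
    acl.extend(
        merged
            .into_iter()
            .map(|((grantee, grantor), privileges)| AclItem { grantee, grantor, privileges }),
    );
}
```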
impl Catalog
pub fn set_optimized_plan(
    &mut self,
    id: GlobalId,
    plan: DataflowDescription<OptimizedMirRelationExpr>,
)

Set the optimized plan for the item identified by id.
pub fn set_physical_plan(
    &mut self,
    id: GlobalId,
    plan: DataflowDescription<Plan>,
)

Set the physical plan for the item identified by id.
pub fn try_get_optimized_plan(
    &self,
    id: &GlobalId,
) -> Option<&DataflowDescription<OptimizedMirRelationExpr>>

Try to get the optimized plan for the item identified by id.
pub fn try_get_physical_plan(
    &self,
    id: &GlobalId,
) -> Option<&DataflowDescription<Plan>>

Try to get the physical plan for the item identified by id.
pub fn set_dataflow_metainfo(
    &mut self,
    id: GlobalId,
    metainfo: DataflowMetainfo<Arc<OptimizerNotice>>,
)

Set the DataflowMetainfo for the item identified by id.
pub fn try_get_dataflow_metainfo(
    &self,
    id: &GlobalId,
) -> Option<&DataflowMetainfo<Arc<OptimizerNotice>>>

Try to get the DataflowMetainfo for the item identified by id.
pub fn drop_plans_and_metainfos(
    &mut self,
    drop_ids: &BTreeSet<GlobalId>,
) -> BTreeSet<Arc<OptimizerNotice>>

Drop all optimized and physical plans and DataflowMetainfos for the items identified by drop_ids.

Ignore requests for non-existing plans or DataflowMetainfos.

Return a set containing all dropped notices. Note that if for some reason we end up with two identical notices being dropped by the same call, the result will contain only one instance of that notice.
pub fn source_read_policies(
    &self,
    id: CatalogItemId,
) -> Vec<(CatalogItemId, ReadPolicy<Timestamp>)>

For the source identified by id, return the read policies for it and for the additional ids that propagate from it. Specifically, if id is a source, it and all of its source exports will be added to the result.
pub(crate) fn invalidate_for_index(
    &self,
    ons: impl Iterator<Item = GlobalId>,
) -> BTreeSet<GlobalId>

Return a set of GlobalIds for items that need to have their cache entries invalidated as a result of creating new indexes on the items in ons.

When creating and inserting a new index, we need to invalidate some entries that may optimize to new expressions. When creating index i on object o, we need to invalidate the following objects (see the sketch after this list):

- o itself.
- All compute objects that depend directly on o.
- All compute objects that would directly depend on o, if all views were inlined.
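A toy illustration of that invalidation rule, assuming u64 ids, a map from each object to its direct dependants, and a set marking which objects are views (the real method works over the catalog's own dependency information):

```rust
use std::collections::{BTreeMap, BTreeSet};

// Collect `on`, its direct dependants, and anything reachable from `on`
// through chains of views, i.e. the objects that would depend directly on
// `on` if every view were inlined.
fn invalidate_for_index(
    on: u64,
    dependants: &BTreeMap<u64, Vec<u64>>,
    is_view: &BTreeSet<u64>,
) -> BTreeSet<u64> {
    let mut result = BTreeSet::from([on]);
    let mut frontier = vec![on];
    while let Some(id) = frontier.pop() {
        for &dependant in dependants.get(&id).into_iter().flatten() {
            if result.insert(dependant) && is_view.contains(&dependant) {
                // Views are treated as transparent: keep walking so that
                // objects built on top of them are invalidated too.
                frontier.push(dependant);
            }
        }
    }
    result
}
```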
impl Catalog
pub fn transient_revision(&self) -> u64

Returns the catalog's transient revision, which starts at 1 and is incremented on every change. This is not persisted to disk and will restart on every load.
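A hypothetical usage sketch (the helper and cache type below are made up for illustration): because the revision increments on every change, it works as a cheap token for detecting that a derived value is stale.

```rust
// Hypothetical cache keyed on the catalog's transient revision.
struct CachedSummary {
    built_at_revision: u64,
    summary: String,
}

fn summary<'a>(catalog: &Catalog, cache: &'a mut Option<CachedSummary>) -> &'a str {
    let revision = catalog.transient_revision();
    let stale = cache.as_ref().map(|c| c.built_at_revision) != Some(revision);
    if stale {
        *cache = Some(CachedSummary {
            built_at_revision: revision,
            summary: format!("catalog at transient revision {revision}"),
        });
    }
    &cache.as_ref().unwrap().summary
}
```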
pub async fn with_debug<F, Fut, T>(f: F) -> T

Creates a debug catalog from the current COCKROACH_URL with parameters set appropriately for debug contexts, like in tests.

WARNING! This function can arbitrarily fail because it does not make any effort to adjust the catalog's contents' structure or semantics to the currently running version, i.e. it does not apply any migrations.

This function must not be called in production contexts. Use Catalog::open with appropriately set configuration parameters instead.
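A hedged test-style sketch of how this might be used, assuming (as the generic parameters suggest) that the closure receives an owned debug Catalog and returns a future whose output becomes the result:

```rust
// Not from the Materialize test suite; illustrative only.
async fn count_entries_in_fresh_debug_catalog() -> usize {
    Catalog::with_debug(|catalog| async move {
        // Inspect the freshly opened debug catalog.
        catalog.entries().count()
    })
    .await
}
```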
pub async fn open_debug_catalog(
    persist_client: PersistClient,
    organization_id: Uuid,
) -> Result<Catalog, Error>

Opens a debug catalog.

See Catalog::with_debug.
pub async fn open_debug_read_only_catalog(
    persist_client: PersistClient,
    organization_id: Uuid,
) -> Result<Catalog, Error>

Opens a read-only, debug, persist-backed catalog defined by persist_client and organization_id.

See Catalog::with_debug.
pub async fn open_debug_read_only_persist_catalog_config(
    persist_client: PersistClient,
    now: NowFn,
    environment_id: EnvironmentId,
    system_parameter_defaults: BTreeMap<String, String>,
    version: Version,
) -> Result<Catalog, Error>

Opens a read-only, debug, persist-backed catalog defined by persist_client and environment_id.

See Catalog::with_debug.
async fn open_debug_catalog_inner( persist_client: PersistClient, storage: Box<dyn DurableCatalogState>, now: NowFn, environment_id: Option<EnvironmentId>, system_parameter_defaults: BTreeMap<String, String>, ) -> Result<Catalog, Error>
pub fn for_session<'a>(&'a self, session: &'a Session) -> ConnCatalog<'a>
pub fn for_sessionless_user(&self, role_id: RoleId) -> ConnCatalog<'_>
pub fn for_system_session(&self) -> ConnCatalog<'_>
async fn storage<'a>(&'a self) -> MutexGuard<'a, Box<dyn DurableCatalogState>>
pub async fn allocate_user_id(&self) -> Result<(CatalogItemId, GlobalId), Error>
pub async fn get_next_user_item_id(&self) -> Result<u64, Error>
Get the next user item ID without allocating it.
pub async fn get_next_system_item_id(&self) -> Result<u64, Error>
Get the next system item ID without allocating it.
pub async fn allocate_user_cluster_id(&self) -> Result<ClusterId, Error>
pub async fn get_next_system_replica_id(&self) -> Result<u64, Error>
Get the next system replica id without allocating it.
pub async fn get_next_user_replica_id(&self) -> Result<u64, Error>
Get the next user replica id without allocating it.
pub fn resolve_database( &self, database_name: &str, ) -> Result<&Database, SqlCatalogError>
pub fn resolve_schema( &self, current_database: Option<&DatabaseId>, database_name: Option<&str>, schema_name: &str, conn_id: &ConnectionId, ) -> Result<&Schema, SqlCatalogError>
pub fn resolve_schema_in_database( &self, database_spec: &ResolvedDatabaseSpecifier, schema_name: &str, conn_id: &ConnectionId, ) -> Result<&Schema, SqlCatalogError>
pub fn resolve_replica_in_cluster( &self, cluster_id: &ClusterId, replica_name: &str, ) -> Result<&ClusterReplica, SqlCatalogError>
pub fn resolve_system_schema(&self, name: &'static str) -> SchemaId
pub fn resolve_search_path( &self, session: &Session, ) -> Vec<(ResolvedDatabaseSpecifier, SchemaSpecifier)>
pub fn resolve_entry(
    &self,
    current_database: Option<&DatabaseId>,
    search_path: &Vec<(ResolvedDatabaseSpecifier, SchemaSpecifier)>,
    name: &PartialItemName,
    conn_id: &ConnectionId,
) -> Result<&CatalogEntry, SqlCatalogError>

Resolves name to a non-function CatalogEntry.
pub fn resolve_builtin_table(
    &self,
    builtin: &'static BuiltinTable,
) -> CatalogItemId

Resolves a BuiltinTable.
pub fn resolve_builtin_log(&self, builtin: &'static BuiltinLog) -> CatalogItemId

Resolves a BuiltinLog.
pub fn resolve_builtin_storage_collection(
    &self,
    builtin: &'static BuiltinSource,
) -> CatalogItemId

Resolves a BuiltinSource.
pub fn resolve_function(
    &self,
    current_database: Option<&DatabaseId>,
    search_path: &Vec<(ResolvedDatabaseSpecifier, SchemaSpecifier)>,
    name: &PartialItemName,
    conn_id: &ConnectionId,
) -> Result<&CatalogEntry, SqlCatalogError>

Resolves name to a function CatalogEntry.
pub fn resolve_type(
    &self,
    current_database: Option<&DatabaseId>,
    search_path: &Vec<(ResolvedDatabaseSpecifier, SchemaSpecifier)>,
    name: &PartialItemName,
    conn_id: &ConnectionId,
) -> Result<&CatalogEntry, SqlCatalogError>

Resolves name to a type CatalogEntry.
pub fn resolve_cluster(&self, name: &str) -> Result<&Cluster, SqlCatalogError>
pub fn resolve_builtin_cluster(&self, cluster: &BuiltinCluster) -> &Cluster
pub fn get_mz_catalog_server_cluster_id(&self) -> &ClusterId
pub fn resolve_target_cluster(
    &self,
    target_cluster: TargetCluster,
    session: &Session,
) -> Result<&Cluster, AdapterError>

Resolves a Cluster for a TargetCluster.
pub fn active_cluster( &self, session: &Session, ) -> Result<&Cluster, AdapterError>
pub fn state(&self) -> &CatalogState
pub fn resolve_full_name( &self, name: &QualifiedItemName, conn_id: Option<&ConnectionId>, ) -> FullItemName
pub fn try_get_entry(&self, id: &CatalogItemId) -> Option<&CatalogEntry>
pub fn try_get_entry_by_global_id(&self, id: &GlobalId) -> Option<&CatalogEntry>
pub fn get_entry(&self, id: &CatalogItemId) -> &CatalogEntry
pub fn get_entry_by_global_id(&self, id: &GlobalId) -> &CatalogEntry
pub fn get_global_ids( &self, id: &CatalogItemId, ) -> impl Iterator<Item = GlobalId> + '_
pub fn resolve_item_id(&self, id: &GlobalId) -> CatalogItemId
pub fn try_resolve_item_id(&self, id: &GlobalId) -> Option<CatalogItemId>
pub fn get_schema( &self, database_spec: &ResolvedDatabaseSpecifier, schema_spec: &SchemaSpecifier, conn_id: &ConnectionId, ) -> &Schema
pub fn get_mz_catalog_schema_id(&self) -> SchemaId
pub fn get_pg_catalog_schema_id(&self) -> SchemaId
pub fn get_information_schema_id(&self) -> SchemaId
pub fn get_mz_internal_schema_id(&self) -> SchemaId
pub fn get_mz_introspection_schema_id(&self) -> SchemaId
pub fn get_mz_unsafe_schema_id(&self) -> SchemaId
pub fn system_schema_ids(&self) -> impl Iterator<Item = SchemaId> + '_
pub fn get_database(&self, id: &DatabaseId) -> &Database
pub fn try_get_role(&self, id: &RoleId) -> Option<&Role>
pub fn get_role(&self, id: &RoleId) -> &Role
pub fn try_get_role_by_name(&self, role_name: &str) -> Option<&Role>
pub fn create_temporary_schema(
    &mut self,
    conn_id: &ConnectionId,
    owner_id: RoleId,
) -> Result<(), Error>

Creates a new schema in the Catalog for temporary items indicated by the TEMPORARY or TEMP keywords.
fn item_exists_in_temp_schemas( &self, conn_id: &ConnectionId, item_name: &str, ) -> bool
pub fn drop_temporary_schema(
    &mut self,
    conn_id: &ConnectionId,
) -> Result<(), Error>
Drops the schema for the connection if it exists. Returns an error if it exists and has items. Returns Ok if conn_id’s temp schema does not exist.
pub(crate) fn object_dependents( &self, object_ids: &Vec<ObjectId>, conn_id: &ConnectionId, ) -> Vec<ObjectId>
fn full_name_detail(name: &FullItemName) -> FullNameV1
pub fn find_available_cluster_name(&self, name: &str) -> String
pub fn get_role_allowed_cluster_sizes( &self, role_id: &Option<RoleId>, ) -> Vec<String>
pub fn concretize_replica_location( &self, location: ReplicaLocation, allowed_sizes: &Vec<String>, allowed_availability_zones: Option<&[String]>, ) -> Result<ReplicaLocation, Error>
pub(crate) fn ensure_valid_replica_size( &self, allowed_sizes: &[String], size: &String, ) -> Result<(), Error>
pub fn cluster_replica_sizes(&self) -> &ClusterReplicaSizeMap
pub fn get_privileges(
    &self,
    id: &SystemObjectId,
    conn_id: &ConnectionId,
) -> Option<&PrivilegeMap>
Returns the privileges of an object by its ID.
pub async fn confirm_leadership(&self) -> Result<(), AdapterError>
pub fn introspection_dependencies(
    &self,
    id: CatalogItemId,
) -> Vec<CatalogItemId>
Return the ids of all log sources the given object depends on.
pub fn dump(&self) -> Result<CatalogDump, Error>
Serializes the catalog’s in-memory state.
There are no guarantees about the format of the serialized state, except that the serialized state for two identical catalogs will compare identically.
pub fn check_consistency(&self) -> Result<(), Value>

Checks the Catalog's internal consistency.
Returns a JSON object describing the inconsistencies, if there are any.
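A small hypothetical helper showing how the returned JSON value can be surfaced, e.g. in tests (assert_catalog_consistent is not part of the crate):

```rust
// Hypothetical debug-only assertion built on `check_consistency`.
fn assert_catalog_consistent(catalog: &Catalog) {
    if let Err(inconsistencies) = catalog.check_consistency() {
        panic!("catalog inconsistency detected: {inconsistencies}");
    }
}
```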
pub fn config(&self) -> &CatalogConfig
pub fn entries(&self) -> impl Iterator<Item = &CatalogEntry>
pub fn user_connections(&self) -> impl Iterator<Item = &CatalogEntry>
pub fn user_tables(&self) -> impl Iterator<Item = &CatalogEntry>
pub fn user_sources(&self) -> impl Iterator<Item = &CatalogEntry>
pub fn user_sinks(&self) -> impl Iterator<Item = &CatalogEntry>
pub fn user_materialized_views(&self) -> impl Iterator<Item = &CatalogEntry>
pub fn user_secrets(&self) -> impl Iterator<Item = &CatalogEntry>
pub fn get_network_policy( &self, network_policy_id: NetworkPolicyId, ) -> &NetworkPolicy
pub fn get_network_policy_by_name(&self, name: &str) -> Option<&NetworkPolicy>
pub fn clusters(&self) -> impl Iterator<Item = &Cluster>
pub fn get_cluster(&self, cluster_id: ClusterId) -> &Cluster
pub fn try_get_cluster(&self, cluster_id: ClusterId) -> Option<&Cluster>
pub fn user_clusters(&self) -> impl Iterator<Item = &Cluster>
pub fn get_cluster_replica( &self, cluster_id: ClusterId, replica_id: ReplicaId, ) -> &ClusterReplica
pub fn try_get_cluster_replica( &self, cluster_id: ClusterId, replica_id: ReplicaId, ) -> Option<&ClusterReplica>
pub fn user_cluster_replicas(&self) -> impl Iterator<Item = &ClusterReplica>
pub fn databases(&self) -> impl Iterator<Item = &Database>
pub fn user_roles(&self) -> impl Iterator<Item = &Role>
pub fn user_continual_tasks(&self) -> impl Iterator<Item = &CatalogEntry>
pub fn system_privileges(&self) -> &PrivilegeMap
pub fn default_privileges( &self, ) -> impl Iterator<Item = (&DefaultPrivilegeObject, impl Iterator<Item = &DefaultPrivilegeAclItem>)>
pub fn pack_item_update( &self, id: CatalogItemId, diff: Diff, ) -> Vec<BuiltinTableUpdate>
pub fn pack_storage_usage_update( &self, event: VersionedStorageUsage, diff: Diff, ) -> BuiltinTableUpdate
pub fn system_config(&self) -> &SystemVars
pub fn ensure_not_reserved_role(&self, role_id: &RoleId) -> Result<(), Error>
pub fn ensure_grantable_role(&self, role_id: &RoleId) -> Result<(), Error>
pub fn ensure_not_system_role(&self, role_id: &RoleId) -> Result<(), Error>
pub fn ensure_not_predefined_role(&self, role_id: &RoleId) -> Result<(), Error>
pub fn ensure_not_reserved_network_policy( &self, network_policy_id: &NetworkPolicyId, ) -> Result<(), Error>
pub fn ensure_not_reserved_object( &self, object_id: &ObjectId, conn_id: &ConnectionId, ) -> Result<(), Error>
pub(crate) fn deserialize_plan_with_enable_for_item_parsing(
    &mut self,
    create_sql: &str,
    force_if_exists_skip: bool,
) -> Result<(Plan, ResolvedIds), AdapterError>
pub(crate) fn update_expression_cache<'a, 'b>( &'a self, new_local_expressions: Vec<(GlobalId, LocalExpressions)>, new_global_expressions: Vec<(GlobalId, GlobalExpressions)>, ) -> BoxFuture<'b, ()>
Trait Implementations

impl OptimizerCatalog for Catalog
fn get_entry(&self, id: &GlobalId) -> &CatalogEntry
fn get_entry_by_item_id(&self, id: &CatalogItemId) -> &CatalogEntry
fn resolve_full_name( &self, name: &QualifiedItemName, conn_id: Option<&ConnectionId>, ) -> FullItemName
Auto Trait Implementations
impl Freeze for Catalog
impl !RefUnwindSafe for Catalog
impl Send for Catalog
impl Sync for Catalog
impl Unpin for Catalog
impl !UnwindSafe for Catalog
Blanket Implementations

impl<T> BorrowMut<T> for T where T: ?Sized

fn borrow_mut(&mut self) -> &mut T

impl<T> CloneToUninit for T where T: Clone

default unsafe fn clone_to_uninit(&self, dst: *mut T) (nightly-only experimental API: clone_to_uninit)

impl<T> FmtForward for T

fn fmt_binary(self) -> FmtBinary<Self> where Self: Binary
Causes self to use its Binary implementation when Debug-formatted.

fn fmt_display(self) -> FmtDisplay<Self> where Self: Display
Causes self to use its Display implementation when Debug-formatted.

fn fmt_lower_exp(self) -> FmtLowerExp<Self> where Self: LowerExp
Causes self to use its LowerExp implementation when Debug-formatted.

fn fmt_lower_hex(self) -> FmtLowerHex<Self> where Self: LowerHex
Causes self to use its LowerHex implementation when Debug-formatted.

fn fmt_octal(self) -> FmtOctal<Self> where Self: Octal
Causes self to use its Octal implementation when Debug-formatted.

fn fmt_pointer(self) -> FmtPointer<Self> where Self: Pointer
Causes self to use its Pointer implementation when Debug-formatted.

fn fmt_upper_exp(self) -> FmtUpperExp<Self> where Self: UpperExp
Causes self to use its UpperExp implementation when Debug-formatted.

fn fmt_upper_hex(self) -> FmtUpperHex<Self> where Self: UpperHex
Causes self to use its UpperHex implementation when Debug-formatted.

impl<T> FutureExt for T

fn with_context(self, otel_cx: Context) -> WithContext<Self>

fn with_current_context(self) -> WithContext<Self>

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

fn in_current_span(self) -> Instrumented<Self>

impl<T> IntoRequest<T> for T

fn into_request(self) -> Request<T>
Wraps T in a tonic::Request.

impl<T, U> OverrideFrom<Option<&T>> for U where U: OverrideFrom<T>

impl<T> Pipe for T where T: ?Sized

fn pipe<R>(self, func: impl FnOnce(Self) -> R) -> R where Self: Sized

fn pipe_ref<'a, R>(&'a self, func: impl FnOnce(&'a Self) -> R) -> R where R: 'a
Borrows self and passes that borrow into the pipe function.

fn pipe_ref_mut<'a, R>(&'a mut self, func: impl FnOnce(&'a mut Self) -> R) -> R where R: 'a
Mutably borrows self and passes that borrow into the pipe function.

fn pipe_borrow<'a, B, R>(&'a self, func: impl FnOnce(&'a B) -> R) -> R

fn pipe_borrow_mut<'a, B, R>(&'a mut self, func: impl FnOnce(&'a mut B) -> R) -> R

fn pipe_as_ref<'a, U, R>(&'a self, func: impl FnOnce(&'a U) -> R) -> R
Borrows self, then passes self.as_ref() into the pipe function.

fn pipe_as_mut<'a, U, R>(&'a mut self, func: impl FnOnce(&'a mut U) -> R) -> R
Mutably borrows self, then passes self.as_mut() into the pipe function.

fn pipe_deref<'a, T, R>(&'a self, func: impl FnOnce(&'a T) -> R) -> R
Borrows self, then passes self.deref() into the pipe function.

impl<T> Pointable for T

impl<T> ProgressEventTimestamp for T

impl<P, R> ProtoType<R> for P where R: RustType<P>

fn into_rust(self) -> Result<R, TryFromProtoError>
See RustType::from_proto.

fn from_rust(rust: &R) -> P
See RustType::into_proto.

impl<'a, S, T> Semigroup<&'a S> for T where T: Semigroup<S>

fn plus_equals(&mut self, rhs: &&'a S)
The method of std::ops::AddAssign, for types that do not implement AddAssign.

impl<T> Tap for T

fn tap_borrow<B>(self, func: impl FnOnce(&B)) -> Self
Immutable access to the Borrow<B> of a value.

fn tap_borrow_mut<B>(self, func: impl FnOnce(&mut B)) -> Self
Mutable access to the BorrowMut<B> of a value.

fn tap_ref<R>(self, func: impl FnOnce(&R)) -> Self
Immutable access to the AsRef<R> view of a value.

fn tap_ref_mut<R>(self, func: impl FnOnce(&mut R)) -> Self
Mutable access to the AsMut<R> view of a value.

fn tap_deref<T>(self, func: impl FnOnce(&T)) -> Self
Immutable access to the Deref::Target of a value.

fn tap_deref_mut<T>(self, func: impl FnOnce(&mut T)) -> Self
Mutable access to the Deref::Target of a value.

fn tap_dbg(self, func: impl FnOnce(&Self)) -> Self
Calls .tap() only in debug builds, and is erased in release builds.

fn tap_mut_dbg(self, func: impl FnOnce(&mut Self)) -> Self
Calls .tap_mut() only in debug builds, and is erased in release builds.

fn tap_borrow_dbg<B>(self, func: impl FnOnce(&B)) -> Self
Calls .tap_borrow() only in debug builds, and is erased in release builds.

fn tap_borrow_mut_dbg<B>(self, func: impl FnOnce(&mut B)) -> Self
Calls .tap_borrow_mut() only in debug builds, and is erased in release builds.

fn tap_ref_dbg<R>(self, func: impl FnOnce(&R)) -> Self
Calls .tap_ref() only in debug builds, and is erased in release builds.

fn tap_ref_mut_dbg<R>(self, func: impl FnOnce(&mut R)) -> Self
Calls .tap_ref_mut() only in debug builds, and is erased in release builds.

fn tap_deref_dbg<T>(self, func: impl FnOnce(&T)) -> Self
Calls .tap_deref() only in debug builds, and is erased in release builds.