pub struct ArrowReaderOptions { /* private fields */ }
Options that control how metadata is read for a parquet file.

See ArrowReaderBuilder for how to configure how the column data is then read from the file, including projection and filter pushdown.
Implementations

impl ArrowReaderOptions

pub fn new() -> Self
Create a new ArrowReaderOptions with the default settings.
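For example, a minimal sketch of building non-default options by chaining the setters described below:

use parquet::arrow::arrow_reader::ArrowReaderOptions;

// Each setter consumes and returns Self, so calls can be chained.
let options = ArrowReaderOptions::new()
    .with_skip_arrow_metadata(true)
    .with_page_index(true);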
pub fn with_skip_arrow_metadata(self, skip_arrow_metadata: bool) -> Self
Skip decoding the embedded arrow metadata (defaults to false).
Parquet files generated by some writers may contain embedded arrow schema and metadata. This may not be correct or compatible with your system; for an example, see ARROW-16184.
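As a minimal sketch of applying this option (modeled on the with_schema example below; the file contents are illustrative), write a small file and read it back while ignoring the embedded arrow metadata:

use std::sync::Arc;
use tempfile::tempfile;
use arrow_array::{ArrayRef, Int32Array, RecordBatch};
use parquet::arrow::arrow_reader::{ArrowReaderOptions, ParquetRecordBatchReaderBuilder};
use parquet::arrow::ArrowWriter;

// Write a small file; ArrowWriter embeds the arrow schema in the
// parquet key-value metadata by default.
let file = tempfile().unwrap();
let batch = RecordBatch::try_from_iter(vec![
    ("col_1", Arc::new(Int32Array::from(vec![1, 2, 3])) as ArrayRef),
]).unwrap();
let mut writer = ArrowWriter::try_new(file.try_clone().unwrap(), batch.schema(), None).unwrap();
writer.write(&batch).unwrap();
writer.close().unwrap();

// Read it back, ignoring the embedded arrow metadata; the arrow schema
// is instead derived from the parquet schema alone.
let options = ArrowReaderOptions::new().with_skip_arrow_metadata(true);
let builder = ParquetRecordBatchReaderBuilder::try_new_with_options(
    file.try_clone().unwrap(),
    options,
).unwrap();
let mut reader = builder.build().unwrap();
let _batch = reader.next().unwrap().unwrap();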
pub fn with_schema(self, schema: SchemaRef) -> Self
Provide a schema hint to use when reading the Parquet file.
If provided, this schema takes precedence over any arrow schema embedded in the metadata (see the arrow documentation for more details).
If the provided schema is not compatible with the parquet file's schema, an error will be returned when constructing the builder.
This option is only required if you want to explicitly control the conversion of Parquet types to Arrow types, such as casting a column to a different type: for example, reading an Int64 column from a Parquet file as a TimestampMicrosecondArray in the Arrow schema.
Notes
The provided schema must have the same number of columns as the parquet schema and the column names must be the same.
Example
use std::sync::Arc;
use tempfile::tempfile;
use arrow_array::{ArrayRef, Int32Array, RecordBatch};
use arrow_schema::{DataType, Field, Schema, TimeUnit};
use parquet::arrow::arrow_reader::{ArrowReaderOptions, ParquetRecordBatchReaderBuilder};
use parquet::arrow::ArrowWriter;
// Write data - schema is inferred from the data to be Int32
let file = tempfile().unwrap();
let batch = RecordBatch::try_from_iter(vec![
("col_1", Arc::new(Int32Array::from(vec![1, 2, 3])) as ArrayRef),
]).unwrap();
let mut writer = ArrowWriter::try_new(file.try_clone().unwrap(), batch.schema(), None).unwrap();
writer.write(&batch).unwrap();
writer.close().unwrap();
// Read the file back.
// Supply a schema that interprets the Int32 column as a Timestamp.
let supplied_schema = Arc::new(Schema::new(vec![
Field::new("col_1", DataType::Timestamp(TimeUnit::Nanosecond, None), false)
]));
let options = ArrowReaderOptions::new().with_schema(supplied_schema.clone());
let mut builder = ParquetRecordBatchReaderBuilder::try_new_with_options(
file.try_clone().unwrap(),
options
).expect("supplied schema is not compatible with the parquet file schema");
// Create the reader and read the data using the supplied schema.
let mut reader = builder.build().unwrap();
let _batch = reader.next().unwrap().unwrap();
pub fn with_page_index(self, page_index: bool) -> Self
Enable reading PageIndex, if present (defaults to false).
Some query engines can use the PageIndex to push down predicates to the parquet scan, potentially eliminating unnecessary IO. If this is enabled, ParquetMetaData::column_index and ParquetMetaData::offset_index will be populated if the corresponding information is present in the file.
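For example, a minimal sketch of requesting the page index when constructing a reader; whether the indexes are actually populated depends on what the file contains:

use std::sync::Arc;
use tempfile::tempfile;
use arrow_array::{ArrayRef, Int32Array, RecordBatch};
use parquet::arrow::arrow_reader::{ArrowReaderOptions, ParquetRecordBatchReaderBuilder};
use parquet::arrow::ArrowWriter;

// Write a small illustrative file.
let file = tempfile().unwrap();
let batch = RecordBatch::try_from_iter(vec![
    ("col_1", Arc::new(Int32Array::from(vec![1, 2, 3])) as ArrayRef),
]).unwrap();
let mut writer = ArrowWriter::try_new(file.try_clone().unwrap(), batch.schema(), None).unwrap();
writer.write(&batch).unwrap();
writer.close().unwrap();

// Ask for the page index to be read, if the file has one.
let options = ArrowReaderOptions::new().with_page_index(true);
let builder = ParquetRecordBatchReaderBuilder::try_new_with_options(
    file.try_clone().unwrap(),
    options,
).unwrap();
// If the file carries the indexes, ParquetMetaData::column_index and
// ParquetMetaData::offset_index are populated on builder.metadata().
let _reader = builder.build().unwrap();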
pub fn page_index(&self) -> bool
Retrieve the currently set page index behavior.

This can be set via with_page_index.
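For example, a minimal sketch pairing the getter with its setter:

use parquet::arrow::arrow_reader::ArrowReaderOptions;

let options = ArrowReaderOptions::new();
assert!(!options.page_index()); // defaults to false
let options = options.with_page_index(true);
assert!(options.page_index());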
Trait Implementations
impl Clone for ArrowReaderOptions

fn clone(&self) -> ArrowReaderOptions
fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.