High-level API for reading/writing Arrow RecordBatches and Arrays to/from Parquet files.
Apache Arrow is a cross-language development platform for in-memory data.
Example of writing an Arrow RecordBatch to a Parquet file
```rust
use std::sync::Arc;

use arrow_array::{ArrayRef, Int32Array, RecordBatch};
use parquet::arrow::ArrowWriter;
use parquet::basic::Compression;
use parquet::file::properties::WriterProperties;
use tempfile::tempfile;

let ids = Int32Array::from(vec![1, 2, 3, 4]);
let vals = Int32Array::from(vec![5, 6, 7, 8]);
let batch = RecordBatch::try_from_iter(vec![
    ("id", Arc::new(ids) as ArrayRef),
    ("val", Arc::new(vals) as ArrayRef),
]).unwrap();

let file = tempfile().unwrap();

// WriterProperties can be used to set Parquet file options
let props = WriterProperties::builder()
    .set_compression(Compression::SNAPPY)
    .build();

let mut writer = ArrowWriter::try_new(file, batch.schema(), Some(props)).unwrap();
writer.write(&batch).expect("Writing batch");

// writer must be closed to write footer
writer.close().unwrap();
```
Example of reading a Parquet file into an Arrow RecordBatch
```rust
use std::fs::File;
use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;

let file = File::open("data.parquet").unwrap();
let builder = ParquetRecordBatchReaderBuilder::try_new(file).unwrap();
println!("Converted arrow schema is: {}", builder.schema());

let mut reader = builder.build().unwrap();
let record_batch = reader.next().unwrap().unwrap();
println!("Read {} records.", record_batch.num_rows());
```
Re-exports
pub use self::arrow_writer::ArrowWriter;
Modules
- arrow_reader: Contains a reader that reads Parquet data into Arrow RecordBatches
- arrow_writer: Contains a writer that writes Arrow data to Parquet
Structs
- FieldLevels: Schema information necessary to decode a Parquet file as Arrow Fields
- ProjectionMask: Identifies a set of columns within a potentially nested schema to project (see the sketch below)
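For instance, a ProjectionMask can be passed to ParquetRecordBatchReaderBuilder so that only a subset of columns is decoded. The following is a minimal sketch, not part of the crate docs; the file name "data.parquet" and the leaf index 0 are illustrative assumptions:

```rust
use std::fs::File;

use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;
use parquet::arrow::ProjectionMask;

// Illustrative input file; assumed to contain at least one leaf column.
let file = File::open("data.parquet").unwrap();
let builder = ParquetRecordBatchReaderBuilder::try_new(file).unwrap();

// Select only the first leaf column (index 0) of the Parquet schema.
let mask = ProjectionMask::leaves(builder.parquet_schema(), [0]);
let reader = builder.with_projection(mask).build().unwrap();

for batch in reader {
    let batch = batch.unwrap();
    println!("read batch with {} columns", batch.num_columns());
}
```

ProjectionMask::leaves selects by leaf column index within the Parquet schema; ProjectionMask::roots instead selects whole top-level fields, which is often more convenient for nested schemas.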
Constants
- ARROW_SCHEMA_META_KEY: Schema metadata key used to store a serialized Arrow IPC schema
- PARQUET_FIELD_ID_META_KEY: The value of this metadata key, if present on Field::metadata, will be used to populate BasicTypeInfo::id (see the sketch below)
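To illustrate the field-id key, here is a hedged sketch of attaching PARQUET_FIELD_ID_META_KEY to a field's metadata before writing, so the corresponding Parquet column is annotated with that id. The field name "id" and the id value 1 are arbitrary assumptions:

```rust
use std::collections::HashMap;
use std::sync::Arc;

use arrow_schema::{DataType, Field, Schema};
use parquet::arrow::PARQUET_FIELD_ID_META_KEY;

// Assumption for illustration: column "id" should carry Parquet field id 1.
let field = Field::new("id", DataType::Int32, false).with_metadata(HashMap::from([(
    PARQUET_FIELD_ID_META_KEY.to_string(),
    "1".to_string(),
)]));
let schema = Arc::new(Schema::new(vec![field]));
println!("{schema:#?}");
```

A schema built this way can then be handed to ArrowWriter exactly as in the writing example above.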
Functions
- arrow_to_parquet_schema: Convert an Arrow schema to a Parquet schema
- parquet_to_arrow_field_levels: Convert a Parquet SchemaDescriptor to FieldLevels
- parquet_to_arrow_schema: Convert a Parquet schema to an Arrow schema, including optional metadata (see the sketch below)
- parquet_to_arrow_schema_by_columns: Convert a Parquet schema to an Arrow schema, including optional metadata, only preserving some leaf columns
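As a sketch of how the conversion functions fit together, the following reads a file's Parquet metadata and converts its schema to an Arrow schema, passing through any key-value metadata (which may embed a serialized Arrow IPC schema). The file name is an assumption:

```rust
use std::fs::File;

use parquet::arrow::parquet_to_arrow_schema;
use parquet::file::reader::{FileReader, SerializedFileReader};

// Illustrative input file.
let file = File::open("data.parquet").unwrap();
let reader = SerializedFileReader::new(file).unwrap();
let metadata = reader.metadata().file_metadata();

// Convert the Parquet schema to an Arrow schema, forwarding the file's
// key-value metadata so an embedded Arrow IPC schema is honored if present.
let arrow_schema =
    parquet_to_arrow_schema(metadata.schema_descr(), metadata.key_value_metadata()).unwrap();
println!("{arrow_schema:#?}");
```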