| Package | Description |
|---|---|
| org.apache.parquet.filter2.compat | |
| org.apache.parquet.hadoop | Provides classes to store and use Parquet files in Hadoop, for example in a MapReduce job. |
| org.apache.parquet.hadoop.metadata | |
Methods in org.apache.parquet.filter2.compat that return types with arguments of type BlockMetaData:

| Modifier and Type | Method and Description |
|---|---|
| `static List<BlockMetaData>` | `RowGroupFilter.filterRowGroups(FilterCompat.Filter filter, List<BlockMetaData> blocks, MessageType schema)` |
| `static List<BlockMetaData>` | `RowGroupFilter.filterRowGroups(List<RowGroupFilter.FilterLevel> levels, FilterCompat.Filter filter, List<BlockMetaData> blocks, ParquetFileReader reader)` |
| `List<BlockMetaData>` | `RowGroupFilter.visit(FilterCompat.FilterPredicateCompat filterPredicateCompat)` |
| `List<BlockMetaData>` | `RowGroupFilter.visit(FilterCompat.NoOpFilter noOpFilter)` |
| `List<BlockMetaData>` | `RowGroupFilter.visit(FilterCompat.UnboundRecordFilterCompat unboundRecordFilterCompat)` |
Method parameters in org.apache.parquet.filter2.compat with type arguments of type BlockMetaData:

| Modifier and Type | Method and Description |
|---|---|
| `static List<BlockMetaData>` | `RowGroupFilter.filterRowGroups(FilterCompat.Filter filter, List<BlockMetaData> blocks, MessageType schema)` |
| `static List<BlockMetaData>` | `RowGroupFilter.filterRowGroups(List<RowGroupFilter.FilterLevel> levels, FilterCompat.Filter filter, List<BlockMetaData> blocks, ParquetFileReader reader)` |
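The first `filterRowGroups` overload prunes row groups before any data pages are read. A minimal sketch, assuming a Parquet 1.x classpath; the one-column schema and the single empty `BlockMetaData` below are constructed purely for illustration (a real caller would take them from a file footer):

```java
import java.util.Collections;
import java.util.List;

import org.apache.parquet.filter2.compat.FilterCompat;
import org.apache.parquet.filter2.compat.RowGroupFilter;
import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

public class RowGroupFilterSketch {
    /** Drop row groups that the given filter can prove contain no matching records. */
    static List<BlockMetaData> prune(FilterCompat.Filter filter,
                                     List<BlockMetaData> blocks,
                                     MessageType schema) {
        return RowGroupFilter.filterRowGroups(filter, blocks, schema);
    }

    public static void main(String[] args) {
        MessageType schema = MessageTypeParser.parseMessageType(
                "message doc { required int64 id; }");
        List<BlockMetaData> blocks = Collections.singletonList(new BlockMetaData());
        // FilterCompat.NOOP never eliminates anything, so every block survives.
        List<BlockMetaData> kept = prune(FilterCompat.NOOP, blocks, schema);
        System.out.println(kept.size()); // prints 1
    }
}
```

With a real predicate (built via `FilterApi` and wrapped with `FilterCompat.get(...)`), row groups whose column statistics rule out any match are dropped from the returned list.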
Methods in org.apache.parquet.hadoop that return types with arguments of type BlockMetaData:

| Modifier and Type | Method and Description |
|---|---|
| `List<BlockMetaData>` | `ParquetInputSplit.getBlocks()` Deprecated. The file footer is no longer read before creating input splits. |
| `List<BlockMetaData>` | `ParquetFileReader.getRowGroups()` |
Methods in org.apache.parquet.hadoop with parameters of type BlockMetaData:

| Modifier and Type | Method and Description |
|---|---|
| `void` | `ParquetFileWriter.appendRowGroup(org.apache.hadoop.fs.FSDataInputStream from, BlockMetaData rowGroup, boolean dropColumns)` |
| `void` | `ParquetFileWriter.appendRowGroup(SeekableInputStream from, BlockMetaData rowGroup, boolean dropColumns)` |
| `org.apache.parquet.hadoop.DictionaryPageReader` | `ParquetFileReader.getDictionaryReader(BlockMetaData block)` |
Method parameters in org.apache.parquet.hadoop with type arguments of type BlockMetaData:

| Modifier and Type | Method and Description |
|---|---|
| `void` | `ParquetFileWriter.appendRowGroups(org.apache.hadoop.fs.FSDataInputStream file, List<BlockMetaData> rowGroups, boolean dropColumns)` |
| `void` | `ParquetFileWriter.appendRowGroups(SeekableInputStream file, List<BlockMetaData> rowGroups, boolean dropColumns)` |
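The `appendRowGroups` overloads copy serialized row groups from an existing file into the file being written without decoding them, which makes them useful for fast file concatenation. A hedged sketch under that assumption; the merge helper and its paths are hypothetical, and `dropColumns=false` requires the source schema to match the writer's schema exactly:

```java
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.format.converter.ParquetMetadataConverter;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.ParquetFileWriter;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;
import org.apache.parquet.schema.MessageType;

public class AppendRowGroupsSketch {
    /** Copy every row group of {@code in} into a new file {@code out} byte-for-byte. */
    static void merge(Configuration conf, Path in, Path out) throws Exception {
        // Read only the footer; row-group bytes are streamed later.
        ParquetMetadata footer =
                ParquetFileReader.readFooter(conf, in, ParquetMetadataConverter.NO_FILTER);
        MessageType schema = footer.getFileMetaData().getSchema();

        ParquetFileWriter writer = new ParquetFileWriter(conf, schema, out);
        writer.start();
        try (FSDataInputStream from = in.getFileSystem(conf).open(in)) {
            // dropColumns=false: the source file's schema must match the writer's schema.
            writer.appendRowGroups(from, footer.getBlocks(), false);
        }
        Map<String, String> keyValueMetaData =
                footer.getFileMetaData().getKeyValueMetaData();
        writer.end(keyValueMetaData);
    }
}
```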
Constructor parameters in org.apache.parquet.hadoop with type arguments of type BlockMetaData:

| Constructor and Description |
|---|
| `ParquetFileReader(org.apache.hadoop.conf.Configuration configuration, FileMetaData fileMetaData, org.apache.hadoop.fs.Path filePath, List<BlockMetaData> blocks, List<ColumnDescriptor> columns)` Deprecated. |
| `ParquetFileReader(org.apache.hadoop.conf.Configuration configuration, org.apache.hadoop.fs.Path filePath, List<BlockMetaData> blocks, List<ColumnDescriptor> columns)` Deprecated. Use `ParquetFileReader(Configuration configuration, FileMetaData fileMetaData, Path filePath, List<BlockMetaData> blocks, List<ColumnDescriptor> columns)` instead. |
| `ParquetInputSplit(org.apache.hadoop.fs.Path path, long start, long length, String[] hosts, List<BlockMetaData> blocks, String requestedSchema, String fileSchema, Map<String,String> extraMetadata, Map<String,String> readSupportMetadata)` Deprecated. |
Methods in org.apache.parquet.hadoop.metadata that return types with arguments of type BlockMetaData:

| Modifier and Type | Method and Description |
|---|---|
| `List<BlockMetaData>` | `ParquetMetadata.getBlocks()` |

Constructor parameters in org.apache.parquet.hadoop.metadata with type arguments of type BlockMetaData:

| Constructor and Description |
|---|
| `ParquetMetadata(FileMetaData fileMetaData, List<BlockMetaData> blocks)` |
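The `ParquetMetadata` constructor and `getBlocks()` form the in-memory representation of a file footer. A minimal sketch, assuming a Parquet 1.x classpath; the schema, created-by string, and row-group sizes below are made up for illustration:

```java
import java.util.Collections;
import java.util.HashMap;

import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.hadoop.metadata.FileMetaData;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

public class FooterSketch {
    public static void main(String[] args) {
        // Hypothetical one-column schema, for illustration only.
        MessageType schema = MessageTypeParser.parseMessageType(
                "message doc { required binary name; }");
        FileMetaData fileMetaData =
                new FileMetaData(schema, new HashMap<>(), "example-writer");

        // One row group with made-up sizes.
        BlockMetaData rowGroup = new BlockMetaData();
        rowGroup.setRowCount(1000);
        rowGroup.setTotalByteSize(4096);

        ParquetMetadata footer =
                new ParquetMetadata(fileMetaData, Collections.singletonList(rowGroup));
        System.out.println(footer.getBlocks().size());                // prints 1
        System.out.println(footer.getBlocks().get(0).getRowCount());  // prints 1000
    }
}
```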
Copyright © 2018 The Apache Software Foundation. All rights reserved.