public abstract class CommonExecTableSourceScan extends ExecNodeBase<org.apache.flink.table.data.RowData> implements MultipleTransformationTranslator<org.apache.flink.table.data.RowData>
ExecNode to read data from an external source defined by a ScanTableSource.

Field Summary

| Modifier and Type | Field and Description |
|---|---|
| static String | FIELD_NAME_SCAN_TABLE_SOURCE |
| static String | SOURCE_TRANSFORMATION |

Fields inherited from class ExecNodeBase: FIELD_NAME_CONFIGURATION, FIELD_NAME_DESCRIPTION, FIELD_NAME_ID, FIELD_NAME_INPUT_PROPERTIES, FIELD_NAME_OUTPUT_TYPE, FIELD_NAME_TYPE

Constructor Summary

| Modifier | Constructor and Description |
|---|---|
| protected | CommonExecTableSourceScan(int id, ExecNodeContext context, org.apache.flink.configuration.ReadableConfig persistedConfig, DynamicTableSourceSpec tableSourceSpec, org.apache.flink.table.types.logical.LogicalType outputType, String description) |

Method Summary
| Modifier and Type | Method and Description |
|---|---|
| protected abstract org.apache.flink.api.dag.Transformation<org.apache.flink.table.data.RowData> | createInputFormatTransformation(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment env, org.apache.flink.api.common.io.InputFormat<org.apache.flink.table.data.RowData,?> inputFormat, org.apache.flink.table.runtime.typeutils.InternalTypeInfo<org.apache.flink.table.data.RowData> outputTypeInfo, String operatorName)<br>Creates a Transformation based on the given InputFormat. |
| protected org.apache.flink.api.dag.Transformation<org.apache.flink.table.data.RowData> | createSourceFunctionTransformation(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment env, org.apache.flink.streaming.api.functions.source.SourceFunction<org.apache.flink.table.data.RowData> function, boolean isBounded, String operatorName, org.apache.flink.api.common.typeinfo.TypeInformation<org.apache.flink.table.data.RowData> outputTypeInfo)<br>Adopted from StreamExecutionEnvironment.addSource(SourceFunction, String, TypeInformation) but with a custom Boundedness. |
| String | getSimplifiedName() |
| DynamicTableSourceSpec | getTableSourceSpec() |
| protected org.apache.flink.api.dag.Transformation<org.apache.flink.table.data.RowData> | translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner, ExecNodeConfig config)<br>Internal method that translates this node into a Flink operator. |
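Because `createInputFormatTransformation` is abstract while the shared translation logic lives in this common base class, the batch and streaming planners differ only in how an `InputFormat`-backed source becomes a transformation. That template-method shape can be sketched in plain Java; the classes and string "transformations" below are simplified stand-ins for illustration, not Flink's actual API:

```java
import java.util.List;

// Hypothetical, simplified stand-in for CommonExecTableSourceScan:
// the base class drives translation, subclasses decide how an
// InputFormat-backed source becomes a transformation.
abstract class TableSourceScanSketch {
    // Common work shared by batch and streaming lives in the base class.
    final String translate(String inputFormatName) {
        return createInputFormatTransformation(inputFormatName, "TableSourceScan");
    }

    // Batch and streaming runtimes build different transformations,
    // so this step is abstract (mirroring the real class).
    protected abstract String createInputFormatTransformation(String inputFormat, String operatorName);
}

class BatchScanSketch extends TableSourceScanSketch {
    @Override
    protected String createInputFormatTransformation(String inputFormat, String operatorName) {
        // Batch sources are bounded: they read a finite input and finish.
        return operatorName + "[bounded, format=" + inputFormat + "]";
    }
}

class StreamScanSketch extends TableSourceScanSketch {
    @Override
    protected String createInputFormatTransformation(String inputFormat, String operatorName) {
        // Streaming sources are continuous: they may run indefinitely.
        return operatorName + "[continuous, format=" + inputFormat + "]";
    }
}

public class ScanSketchDemo {
    public static void main(String[] args) {
        for (TableSourceScanSketch scan : List.of(new BatchScanSketch(), new StreamScanSketch())) {
            System.out.println(scan.translate("CsvInputFormat"));
        }
    }
}
```

In the real planner, the batch and streaming scan nodes each supply their own implementation; this sketch only mirrors that division of responsibility, not the actual transformation construction.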
Methods inherited from class ExecNodeBase: accept, createFormattedTransformationDescription, createFormattedTransformationName, createTransformationDescription, createTransformationMeta, createTransformationMeta, createTransformationName, createTransformationUid, getContextFromAnnotation, getDescription, getId, getInputEdges, getInputProperties, getOutputType, getPersistedConfig, inputsContainSingleton, replaceInputEdge, setCompiled, setInputEdges, translateToPlan

Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface ExecNodeTranslator: translateToPlan

Field Detail

public static final String SOURCE_TRANSFORMATION

public static final String FIELD_NAME_SCAN_TABLE_SOURCE

Constructor Detail
protected CommonExecTableSourceScan(int id,
ExecNodeContext context,
org.apache.flink.configuration.ReadableConfig persistedConfig,
DynamicTableSourceSpec tableSourceSpec,
org.apache.flink.table.types.logical.LogicalType outputType,
String description)
Method Detail

public String getSimplifiedName()
Overrides: getSimplifiedName in class ExecNodeBase<org.apache.flink.table.data.RowData>

public DynamicTableSourceSpec getTableSourceSpec()
protected org.apache.flink.api.dag.Transformation<org.apache.flink.table.data.RowData> translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner, ExecNodeConfig config)
Internal method, translates this node into a Flink operator.
Specified by: translateToPlanInternal in class ExecNodeBase<org.apache.flink.table.data.RowData>
Parameters:
planner - The planner.
config - per-ExecNode configuration that contains the merged configuration from various layers, which all nodes implementing this method should use instead of retrieving configuration from the planner. For more details, check ExecNodeConfig.

protected org.apache.flink.api.dag.Transformation<org.apache.flink.table.data.RowData> createSourceFunctionTransformation(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment env,
org.apache.flink.streaming.api.functions.source.SourceFunction<org.apache.flink.table.data.RowData> function,
boolean isBounded,
String operatorName,
org.apache.flink.api.common.typeinfo.TypeInformation<org.apache.flink.table.data.RowData> outputTypeInfo)
Adopted from StreamExecutionEnvironment.addSource(SourceFunction, String, TypeInformation) but with a custom Boundedness.

protected abstract org.apache.flink.api.dag.Transformation<org.apache.flink.table.data.RowData> createInputFormatTransformation(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment env,
org.apache.flink.api.common.io.InputFormat<org.apache.flink.table.data.RowData,?> inputFormat,
org.apache.flink.table.runtime.typeutils.InternalTypeInfo<org.apache.flink.table.data.RowData> outputTypeInfo,
String operatorName)
Creates a Transformation based on the given InputFormat. The implementation is different for streaming mode and batch mode.

Copyright © 2014–2022 The Apache Software Foundation. All rights reserved.
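The `config` parameter of `translateToPlanInternal` is documented above as carrying configuration merged from several layers, which nodes should consult instead of reading from the planner. A minimal sketch of that precedence idea, assuming nothing about Flink's actual ExecNodeConfig internals (the Map-based model and layer names here are hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of layered configuration merging; the real
// ExecNodeConfig is richer, this only illustrates the precedence idea.
public class LayeredConfigSketch {
    // Merge layers in order: later layers (more specific, e.g. per-node
    // settings) override earlier ones (more general, e.g. planner-wide
    // defaults).
    static Map<String, String> merge(List<Map<String, String>> layers) {
        Map<String, String> merged = new LinkedHashMap<>();
        for (Map<String, String> layer : layers) {
            merged.putAll(layer); // later puts overwrite earlier keys
        }
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> plannerDefaults = Map.of("parallelism", "4", "name", "scan");
        Map<String, String> perNodeOverrides = Map.of("parallelism", "1");
        Map<String, String> merged = merge(List.of(plannerDefaults, perNodeOverrides));
        System.out.println(merged.get("parallelism")); // per-node layer wins
        System.out.println(merged.get("name"));        // falls back to defaults
    }
}
```

Keeping the merge order explicit means the more specific layer always wins, which matches the documented intent that implementing nodes read the already-merged `config` rather than re-deriving settings from the planner.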