...
```
Function syntax:    ${macroFunction(macro)}
Shorthand notation: ${macro}

Example usage:
  ${secure(accessKey)}        - get the access key from the secure store
  ${functionTime(timeFormat)} - apply the time function on the timeFormat provided and use the value

The default (shorthand) usage substitutes arguments using the following precedence:
  Custom Action Workflow-Token > Runtime Arguments > Stored Preferences

Examples:
  ipConfig:               ${hostname}:${port}
  JDBC connection string: jdbc:${jdbc-plugin}://${hostname}:${sql-port}/${db-name}

Using the expanded syntax allows additional logic to be applied to the macro arguments
through a macro function. Escaping can be supported using the \ (backslash) character
(e.g. \${hostname} will not be substituted).
```
The shorthand notation supports retrieval precedence to limit the exposure of underlying workflow tokens and runtime arguments to pipeline operators. The "functionTime" macro function uses the logical start time of a run to perform the substitution; it is an example of a macro function that is not just a key-value lookup but performs extra logic before a value is returned. For now, the implementation will only support substitution from runtime arguments. Once the secure store API is available, it will also support the secure store. In the future, we can consider allowing developers to create custom macro functions (similar to functionTime(...)).
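As an illustration of a macro function that applies logic rather than a plain key-value lookup, a functionTime(...) style evaluation could format the run's logical start time with the supplied pattern. This is only a sketch; the class and method names are made up for illustration:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

/**
 * Illustrative sketch (names are hypothetical): evaluates a functionTime(...)
 * style macro by formatting the logical start time of the run with the
 * supplied time format, e.g. evaluate("yyyy-MM-dd") -> "2016-04-21".
 */
class TimeFunctionMacroEvaluator {
  private final long logicalStartTime;

  TimeFunctionMacroEvaluator(long logicalStartTime) {
    this.logicalStartTime = logicalStartTime;
  }

  String evaluate(String timeFormat) {
    return new SimpleDateFormat(timeFormat).format(new Date(logicalStartTime));
  }
}
```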
Notes:
- For now, we will not support recursive expansion of macros. That is, if a macro such as ${address} expands to ${hostname}:${port}, then ${hostname} and ${port} will not be evaluated. We will document this limitation initially.
- We can expand the functionality later to recursively expand macros.
- In the case of a macro ${key} expanding to ${key}, we can guard against infinite expansion with a maximum recursion depth (see the sketch below).
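If recursive expansion is added later, a maximum depth would keep a self-referencing macro from looping forever. A minimal sketch, assuming substitution from a plain key-value map and an illustrative depth limit of 10:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Illustrative sketch only (recursive expansion is not in the initial scope):
 * expands ${...} macros recursively, with a maximum depth so that a macro such
 * as ${key} -> ${key} terminates instead of looping forever.
 */
class RecursiveMacroExpander {
  private static final Pattern MACRO = Pattern.compile("\\$\\{([^}]+)}");
  private static final int MAX_DEPTH = 10; // assumed limit, for illustration

  private final Map<String, String> lookup;

  RecursiveMacroExpander(Map<String, String> lookup) {
    this.lookup = lookup;
  }

  String expand(String value) {
    return expand(value, 0);
  }

  private String expand(String value, int depth) {
    if (depth >= MAX_DEPTH) {
      throw new IllegalStateException("Exceeded maximum macro expansion depth of " + MAX_DEPTH);
    }
    Matcher matcher = MACRO.matcher(value);
    StringBuffer result = new StringBuffer();
    while (matcher.find()) {
      String key = matcher.group(1);
      String replacement;
      if (lookup.containsKey(key)) {
        // Recurse so that ${address} -> ${hostname}:${port} is fully resolved;
        // a self-referencing macro (${key} -> ${key}) hits MAX_DEPTH and fails fast.
        replacement = expand(lookup.get(key), depth + 1);
      } else {
        replacement = matcher.group(0); // leave unresolved macros as-is
      }
      matcher.appendReplacement(result, Matcher.quoteReplacement(replacement));
    }
    matcher.appendTail(result);
    return result.toString();
  }
}
```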
```json
"stages": [
  {
    "name": "Database",
    "plugin": {
      "name": "Database",
      "type": "batchsource",
      "properties": {
        ...
        "user": "${username}",
        "password": "${secure(sql-password)}",
        "jdbcPluginName": "jdbc",
        "jdbcPluginType": "${jdbc-type}",
        "connectionString": "jdbc:${jdbc-type}://${hostname}:${port}/${db-name}",
        "importQuery": "select * from ${table-name};"
      }
    }
  },
  {
    "name": "Table",
    "plugin": {
      "name": "Table",
      "type": "batchsink",
      "properties": {
        "schema": "{\"type\":\"record\",\"name\":\"etlSchemaBody\",\"fields\":[{\"name\":\"name\",\"type\":\"string\"},{\"name\":\"age\",\"type\":\"int\"},{\"name\":\"emp_id\",\"type\":\"long\"}]}",
        "name": "${table-name}",
        "schema.row.field": "name"
      }
    }
  }
]
```
...
ETL API - BatchContext Changes
Given that a dataset could be created at configure time if none of its fields are provided as macros, plugin developers need a way to check at runtime whether the dataset already exists. We can do this by altering the runtime context object passed to the prepareRun method. Since that object extends BatchContext, which extends DatasetContext, we can add a new method to BatchContext that checks for the existence of a dataset.
This method will return whether or not a dataset with the provided datasetName already exists; it can be used in prepareRun:
If the dataset to write to is macro-substituted (i.e. a macro is used in the config), we have to defer dataset creation to prepareRun rather than doing it at configure time. Deferring dataset creation to prepareRun requires adding a new method to BatchContext.
```java
@Beta
public interface BatchContext extends DatasetContext, TransformContext {

  /**
   * Creates the dataset identified by datasetName with the given type and properties.
   */
  void createDataset(String datasetName, String typeName, DatasetProperties properties);

  /**
   * Returns true if a dataset with the given datasetName exists.
   */
  boolean datasetExists(String datasetName);

  ...
}
```
```java
@Override
public void prepareRun(BatchSinkContext context) {
  if (!context.datasetExists(config.getName())) {
    context.createDataset(config.getName(), ...);
  }
  // ...
}
```
Notes
Currently, if the stream given to a stream source or the table given to a table source doesn't exist, we create a new stream/table. We want to continue to allow table creation (since we want to create external datasets for sources) but disallow stream creation, so we are adding only createDataset to BatchContext.
However, certain fields are used to determine the schema in the plugin; these cannot be macro-substituted, because schema validation is essential at configure time, so we want to disallow macro usage for them.
Custom Action Setting Config Values:
One use case of this feature is to allow custom actions that run before a plugin to set macros. Custom actions can use workflow tokens to set values for field names. Since workflow tokens are merged with runtime arguments and exposed in the macro property lookup, macro substitution has access to tokens set from a custom action; workflow tokens have higher priority than runtime arguments and preferences.
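For example, a custom action placed before the pipeline phase could put a value into the workflow token so that it is picked up by macro lookup. This is a rough sketch; the exact action base class and context methods may differ from the final API:

```java
import co.cask.cdap.api.workflow.AbstractWorkflowAction;
import co.cask.cdap.api.workflow.WorkflowToken;

/**
 * Rough sketch (not the final API): a custom action that sets a workflow token
 * before the pipeline phase runs, so the value is available for macro lookup.
 */
public class SetTableNameAction extends AbstractWorkflowAction {

  @Override
  public void run() {
    WorkflowToken token = getContext().getToken();
    // "table-name" is then resolvable as ${table-name} with workflow-token precedence.
    token.put("table-name", "employees_" + System.currentTimeMillis());
  }
}
```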
Scoping (Low priority):
Scoping is currently low priority and can be done manually. In our example config, which goes from a JDBC source to a Table sink, there is a common macro "${table-name}"; if the user wants to provide a different value for table-name in the Table sink, they can do this manually:
Syntax | Macro | Evaluates To |
---|---|---|
${table-name} | table-name | employees |
${TableSink:stage.name.table-name} | TableSink:stage.name.table-name | employee_sql |
This is more of the user creating unique argument keys as opposed to scoping.
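A sketch of what such a manual convention could look like on the lookup side, assuming a hypothetical helper that prefers a stage-prefixed key and falls back to the plain key:

```java
import java.util.Map;

/**
 * Hypothetical helper illustrating manual "scoping": the user creates unique,
 * stage-prefixed argument keys, and the lookup falls back to the unscoped key.
 */
final class ScopedLookup {

  private ScopedLookup() { }

  /**
   * Looks up "<stageName>.<key>" first, then falls back to "<key>".
   * e.g. with stage "TableSink:stage.name" and key "table-name", the scoped key
   * is "TableSink:stage.name.table-name".
   */
  static String getValue(Map<String, String> args, String stageName, String key) {
    String scopedKey = stageName + "." + key;
    if (args.containsKey(scopedKey)) {
      return args.get(scopedKey);
    }
    return args.get(key);
  }
}
```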
Documentation/Changes
Regardless of where the substitution occurs, the guidelines for creating Hydrator plugins would have to change. For existing plugins, any validation of properties that are macro-substitutable in configurePipeline must be moved to prepareRun (see the Reference section for the specific plugins). We also must document the convention for nulling/defaulting macroable properties at configure time.
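A possible shape for that convention, using a hypothetical containsMacro check (illustrative names, not a final API): validate at configure time only when the raw property holds no macro, and always validate in prepareRun after substitution.

```java
/**
 * Sketch of the documented convention (names are illustrative): skip
 * configure-time validation for a property that may contain a macro and
 * validate it in prepareRun instead, after substitution has happened.
 */
public class ExampleSinkConfig {
  private final String tableName;

  public ExampleSinkConfig(String tableName) {
    this.tableName = tableName;
  }

  /** Called from configurePipeline: only validate when no macro is present. */
  public void validateAtConfigureTime() {
    if (!containsMacro(tableName)) {
      validateTableName(tableName);
    }
  }

  /** Called from prepareRun: the macro has been substituted, so always validate. */
  public void validateAtRuntime(String substitutedTableName) {
    validateTableName(substitutedTableName);
  }

  /** Hypothetical check for an unsubstituted ${...} macro in the raw property value. */
  private boolean containsMacro(String value) {
    return value != null && value.contains("${");
  }

  private void validateTableName(String name) {
    if (name == null || name.isEmpty()) {
      throw new IllegalArgumentException("table name must be non-empty");
    }
  }
}
```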
Implementation Details
```java
interface MacroContext {

  /**
   * Given the macro key, return the substituted value.
   */
  String getValue(String macroKey);
}
```
Based on the macro type, one of the MacroContext implementations below will be used to get the value for the macro.

```java
class DefaultMacroContext implements MacroContext {
  Map<String, String> runtimeArguments;

  @Override
  public String getValue(String macroKey) {
    return runtimeArguments.get(macroKey);
  }
}

class SecureMacroContext implements MacroContext {
  SecureStore secureStore;

  @Override
  public String getValue(String macroKey) {
    return secureStore.get(macroKey);
  }
}

class RuntimeFunctionMacro implements MacroContext {
  long logicalStartTime;
  Function<String, String> timezoneFunction;

  @Override
  public String getValue(String arguments) {
    return timezoneFunction.apply(arguments);
  }
}
```
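For illustration, a dispatcher could choose between these contexts by checking whether the macro body uses function syntax (e.g. secure(...)) or is a plain key. This is a hypothetical sketch, not the planned parser:

```java
import java.util.Map;

/**
 * Hypothetical dispatch sketch: picks a MacroContext based on whether the macro
 * body uses a function syntax such as secure(...) or functionTime(...),
 * falling back to the plain key lookup otherwise.
 */
class MacroDispatcher {
  private final Map<String, MacroContext> functionContexts; // e.g. "secure" -> SecureMacroContext
  private final MacroContext defaultContext;                // runtime-argument lookup

  MacroDispatcher(Map<String, MacroContext> functionContexts, MacroContext defaultContext) {
    this.functionContexts = functionContexts;
    this.defaultContext = defaultContext;
  }

  /** Substitutes a single macro body, e.g. "secure(sql-password)" or "hostname". */
  String substitute(String macroBody) {
    int open = macroBody.indexOf('(');
    if (open > 0 && macroBody.endsWith(")")) {
      String function = macroBody.substring(0, open);
      String argument = macroBody.substring(open + 1, macroBody.length() - 1);
      MacroContext context = functionContexts.get(function);
      if (context != null) {
        return context.getValue(argument);
      }
    }
    return defaultContext.getValue(macroBody);
  }
}
```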
JIRA:
CDAP-5642 (Cask Community Issue Tracker)
Reference
Many plugins have properties that are used in constructing or validating a schema at configure time. These fields need to have macros disabled to allow this. The following plugins and fields would be affected:
Plugin | Fields | Use | Conflict |
---|---|---|---|
BatchCassandraSource | schema | Parsed for correctness to create the schema. | Parsing a macro or schema with a nested macro would fail. |
CopybookSource | copybookContents | Copybook contents are converted to an InputStream and used to get external records, which are in turn used to add fields to the schema. | Schema would add macro literal as a field. |
DedupAggregator | uniqueFields, filterOperation | Both fields are used to validate the input schema created. | Macro literals do not exist as fields in schema and will throw IllegalArgumentException. |
DistinctAggregator | fields | Specifies the fields used to construct the output schema. | Will add macro literals as schema fields.* |
GroupByAggregator | groupByFields, aggregates | Gets fields from the input schema and adds aggregates to the output fields list. | Macro literals do not exist in the input schema and are not valid fields for an output schema.
RowDenormalizerAggregator | keyField, nameField, valueField | Gets schemas by field names from the input schema. | Macro literals do not exist as fields in the input schema. |
KVTableSink | keyField, valueField | Validates the presence and type of these fields in the input schema. | Macro literals will not exist in the input schema.
SnapshotFileBatchAvroSink | schema | Parses schema to add file properties. | Macro literals may disallow schema parsing or incorrect schema creation. |
SnapshotFileBatchParquetSink | schema | Parses schema to add file properties. | Macro literals may disallow schema parsing or incorrect schema creation. |
TableSink | schema, rowField | Validates output and input schemas if properties specified. | Macro literals will lead to failed validation of schema and row field. |
TimePartitionedFileSetDatasetAvroSink | schema | Parses schema to add file properties. | Parsing macro literals in schema would fail. |
TimePartitionedFileSetDatasetParquetSink | schema | Parses schema to add file properties. | Parsing macro literals in schema would fail. |
SnapshotFileBatchAvroSource | schema | Parses schema property to set output schema. | Macro literals can lead to invalid schema parsing or creation. |
SnapshotFileBatchParquetSource | schema | Parses schema property to set output schema. | Macro literals can lead to invalid schema parsing or creation. |
StreamBatchSource | schema, name, format | Stream is added and created through name and schema is parsed to set output schema. | Macro literals will lead to bad parsing of properties. |
TableSource | schema | Schema parsed to set output schema. | Macro literals will lead to failed or incorrect schema creation. |
TimePartitionedFileSetDatasetAvroSource | schema | Schema parsed to set output schema. | Macro literals will lead to failed or incorrect schema creation. |
TimePartitionedFileSetDatasetParquetSource | schema | Schema parsed to set output schema. | Macro literals will lead to failed or incorrect schema creation. |
JavaScriptTransform | schema, script, lookup | Schema format is used to set the output schema. JavaScript and lookup properties are also parsed for correctness. | Macro literals can cause parsing to fail for schema creation, JavaScript compilation, or lookup parsing. |
LogParserTransform | inputName | Gets field from input schema through inputName property. | With a macro literal, the field will not exist in the input schema. |
ProjectionTransform | fieldsToKeep, fieldsToDrop, fieldsToConvert, fieldsToRename | Properties are used to create output schema. | Macro literals will lead to a failed or incorrect output schema being created.
PythonEvaluator | schema | Schema parsed for correctness and set as output schema. | Macro literal can lead to failed or bad schema creation. |
ValidatorTransform | validators, validationScript | Validator property used to set validator plugins. Script property is also parsed for correctness. | Macro literals can lead to failed parsing or to the wrong validator plugins being set. Scripts cannot be validated without validators.
ElasticsearchSource | schema | Schema parsed for correctness and set as output schema. | Macro literals can lead to failed or incorrect schema parsing/creation. |
HBaseSink | rowField, schema | Parsed to validate the output and input schemas and set the output schema. | Macro literals can lead to failed or incorrect schema parsing/creation.
HBaseSource | schema | Parsed for correctness to set output schema. | Macro literals can lead to failed or incorrect schema parsing/creation. |
HiveBatchSource | schema | Parsed for correctness to set output schema. | Macro literals can lead to failed or incorrect schema parsing/creation.
MongoDBBatchSource | schema | Parsed for correctness and validated to set output schema. | Macro literals can lead to failed or incorrect schema parsing/creation. |
NaiveBayesClassifier | predictionField | Configures and sets fields of the output schema; also checked for existence in the input schema. | The output schema would be created incorrectly with the macro literal as the prediction field, and the input schema check behavior is undefined.
Compressor | compressor, schema | Parsed for correctness and used to set output schema. | Macro literals can lead to failed or incorrect schema parsing/creation. |
CSVFormatter | schema | Parsed for correctness and used to set output schema. | Macro literals can lead to failed or incorrect schema parsing/creation. |
CSVParser | field | Validated against input schema to check existence of field. | Macro literals may not exist as fields in the input schema. |
Decoder | decode, schema | Decode property is parsed and validated then used to validate the input schema. Schema parsed to set output schema. | Macro literals can lead to failed or incorrect schema parsing/creation or incorrect validation of input schema. |
Decompressor | decompressor, schema | Decompressor property is parsed and validated then used to validate the input schema. Schema parsed to set output schema. | Macro literals can lead to failed or incorrect schema parsing/creation or incorrect validation of input schema. |
Encoder | encode, schema | Encode property is parsed and validated then used to validate the input schema. Schema parsed to set output schema. | Macro literals can lead to failed or incorrect schema parsing/creation or incorrect validation of input schema. |
JSONFormatter | schema | Parsed for correctness and used to set output schema. | Macro literals can lead to failed or incorrect schema parsing/creation. |
JSONParser | field, schema | Validates if field property is present in input schema. Parses schema property to set output schema. | Macro literal may not exist in input schema and may lead to failed parsing or creation of output schema. |
StreamFormatter | schema | Parsed for correctness and used to set output schema. | Macro literals can lead to failed or incorrect schema parsing/creation. |
* May need verification
Other plugins have fields that are validated or processed at configure time but do not affect the schema. In these cases, the validation can be moved to the prepareRun method. The following plugins and fields would be affected:
Plugin | Fields | Use | Justification |
---|---|---|---|
StreamBatchSource | duration, delay | Parsed and validated for proper formatting. | The parsing/validation is not related to the schema's creation. |
TimePartitionedFileSetSource | duration, delay | Parsed and validated for proper formatting. | The parsing/validation is not related to the schema's or dataset's creation. |
ReferenceBatchSink | referenceName | Verifies reference name meets dataset ID constraints. | As dataset names can be macros, this supports the primary use case. |
ReferenceBatchSource | referenceName | Verifies that reference name meets dataset ID constraints. | As dataset names can be macros, this supports the primary use case. |
FileBatchSource | timeTable | Creates dataset from time table property. | This is a primary use case for macros. |
TimePartitionedFileSetSource | name, basePath | Name and basePath are used to create the dataset. | This is a primary use case for macros. |
BatchWritableSink | name, type | Creates dataset from properties. | This is a primary use case for macros. |
SnapshotFileBatchSink | name | Creates dataset from name field. | This is a primary use case for macros. |
BatchReadableSource | name, type | Dataset is created from name and type properties. | This is a primary use case for macros. |
SnapshotFileBatchSource | all properties* | Creates dataset from properties. | This is a primary use case for macros. |
TimePartitionedFileSetSink | all properties* | Creates dataset from properties. | This is a primary use case for macros. |
DBSource | importQuery, boundingQuery, splitBy, numSplits | Validates connection settings and parses the queries for formatting. | The parsing/validation does not lead to the creation of any schema or dataset.
HDFSSink | timeSuffix | Parsed to validate proper formatting of time suffix. | The parsing/validation does not lead to the creation of any schema or dataset. |
KafkaProducer | async | Parsed to check proper formatting of boolean. | The parsing/validation does not lead to the creation of any schema or dataset. |
NaiveBayesClassifier | fieldToClassify | Checked if input schema field is of type String. | The validation does not lead to the creation or alteration of any schema. |
NaiveBayesTrainer | fieldToClassify, predictionField | Checked if input schema fields are of type String and Double respectively. | The validation does not lead to the creation or alteration of any schema. |
CloneRecord | copies | Validated against being 0 or over the max number of copies. | The validation does not lead to the creation of any schema or dataset. |
CSVFormatter | format | Validated for proper formatting. | The validation does not lead to the creation of any schema or dataset. |
CSVParser | format | Validated for proper formatting. | The validation does not lead to the creation of any schema or dataset. |
Hasher | hash | Checked against valid hash formats. | The check does not lead to the validation or alteration of any schema. |
JSONParser | mapping | Mappings extracted and placed into a map with their expressions. | The extraction does not affect any schema creation or validation. |
StreamFormatter | format | Checked against valid stream formats. | The check does not lead to the validation or alteration of any schema. |
ValueMapper | mapping, defaults | Parsed after configuration is initialized and validated. | The check does not lead to the validation or alteration of any schema. |
* May need verification
...