...
- The Hydrator app (SparkClientContext, MapReduceContext) will have access to the secure store manager to substitute values from the key store (see the sketch below).
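A minimal sketch of how ${secure(...)} references could be resolved against the secure store. The SecureLookup interface and SecureMacroResolver class are hypothetical stand-ins for the secure store manager API, not existing CDAP classes:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class SecureMacroResolver {
  // Matches ${secure(key-name)} and captures the key name.
  private static final Pattern SECURE_MACRO = Pattern.compile("\\$\\{secure\\(([^)]+)\\)\\}");

  // Hypothetical lookup abstraction over the secure store manager.
  public interface SecureLookup {
    String get(String key);
  }

  // Replaces every ${secure(key)} occurrence in a property value with
  // the value fetched from the secure store.
  public static String resolveSecureMacros(String value, SecureLookup store) {
    Matcher m = SECURE_MACRO.matcher(value);
    StringBuffer result = new StringBuffer();
    while (m.find()) {
      String secureValue = store.get(m.group(1));
      m.appendReplacement(result, Matcher.quoteReplacement(secureValue));
    }
    m.appendTail(result);
    return result.toString();
  }
}
```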
Previous Details/Design Notes:
App Level Substitution:
One possibility is for substitution to be implemented at the app level. This would be ideal if we want to keep the concept of macros Hydrator-specific. If substitution were to occur at the app level, then the user would dictate which fields are macro-substitutable through the plugin configuration UI. In order to allow non-string properties to be substitutable, the user must provide a default value along with the macro through the UI. For example, if a user enters the "port" property as ${port}, the UI will provide a way for the user to enter a default port value. Creating a DB batch source would yield the following configuration JSON:
```
"plugin": {
"name": "Database",
"type": "batchsource",
"properties": {
"user": "${username}",
"password": "${secure(sql-password)}",
"jdbcPluginName": "jdbc",
...
"importQuery": "select * from ${table-name};"
...
"macroDefaults": "{
\"user\": \"admin\",
\"password\": \"pw1234\",
\"importQuery\": \"select * from test;\"
}"
}
}
```
In this case, the app understands from the configuration which fields are macros and the default values to use for those fields at configure time.
This would require a new method in PluginContext that accepts key-value pairs to substitute into plugin properties.
```java
import java.util.Map;

@Beta
public interface PluginContext {
// existing methods
PluginProperties getPluginProperties(String pluginId);
<T> Class<T> loadPluginClass(String pluginId);
<T> T newPluginInstance(String pluginId) throws InstantiationException;
/**
* Creates a new instance of a plugin. The instance returned will have the {@link PluginConfig} set up with
* {@link PluginProperties} provided at the time when the
* {@link PluginConfigurer#usePlugin(String, String, String, PluginProperties)} was called during the
* program configuration time. In addition, the pluginProperties parameter can be used to override the existing
* plugin properties in the config, so that the plugin instance is created with the substituted properties.
*
* @param pluginId the unique identifier provided when declaring plugin usage in the program.
* @param <T> the class type of the plugin
* @param pluginProperties the properties to override existing plugin properties before instance creation.
* @return A new instance of the plugin being specified by the arguments
*
* @throws InstantiationException if a new instance cannot be created
* @throws IllegalArgumentException if pluginId is not found
* @throws UnsupportedOperationException if the program does not support plugins
*/
<T> T newPluginInstance(String pluginId, Map<String, String> pluginProperties) throws InstantiationException;
}
```
Configure time:
The app can call this new method with the macroDefaults values, so that plugin instance creation will use the macro default values for those config fields.
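A minimal method sketch of the configure-time flow, assuming the macroDefaults property is the JSON string shown above and using Gson to parse it; the helper name, pluginId handling, and package name of PluginContext are illustrative assumptions:

```java
import java.util.Map;
import co.cask.cdap.api.plugin.PluginContext;  // package name assumed
import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;

// Hypothetical configure-time helper: reads the macroDefaults JSON out of
// the plugin's stored properties and creates the plugin instance with the
// defaults substituted in via the proposed newPluginInstance overload.
static <T> T newInstanceWithMacroDefaults(PluginContext context, String pluginId)
    throws InstantiationException {
  String json = context.getPluginProperties(pluginId).getProperties().get("macroDefaults");
  Map<String, String> macroDefaults = new Gson().fromJson(
      json, new TypeToken<Map<String, String>>() { }.getType());
  return context.newPluginInstance(pluginId, macroDefaults);
}
```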
Run time:
The app performs substitution for the properties containing macros, using the values from the runtime arguments (or the workflow token), and calls the method with the field names and substituted values.
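A minimal sketch of run-time substitution, assuming macros have the ${name} form and the runtime arguments are available as a Map; the class and method names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class MacroSubstitutor {
  // Matches ${macro-name} and captures the macro name.
  private static final Pattern MACRO = Pattern.compile("\\$\\{([^}]+)\\}");

  // Replaces every ${...} macro in each plugin property with the matching
  // runtime argument; macros with no matching argument are left as-is.
  public static Map<String, String> substitute(Map<String, String> properties,
                                               Map<String, String> runtimeArgs) {
    Map<String, String> substituted = new HashMap<>();
    for (Map.Entry<String, String> entry : properties.entrySet()) {
      Matcher m = MACRO.matcher(entry.getValue());
      StringBuffer value = new StringBuffer();
      while (m.find()) {
        String replacement = runtimeArgs.getOrDefault(m.group(1), m.group(0));
        m.appendReplacement(value, Matcher.quoteReplacement(replacement));
      }
      m.appendTail(value);
      substituted.put(entry.getKey(), value.toString());
    }
    return substituted;
  }
}
```

The app would then call context.newPluginInstance(pluginId, MacroSubstitutor.substitute(properties, runtimeArgs)) to create the plugin with the substituted values.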
Scoping:
If macro substitution is performed at the DataPipeline app level, it will be possible to scope macros at the stage-name level if the user desires.
In our example config of a JDBC source to a Table sink, there is a common macro, ${table-name}; if the user wants to provide a different name for table-name in the Table sink, they can use scoping (see the example and sketch below).
```
Example for Scoping:
Provided runtime arguments:
Key : table-name, value : employees
Key : TableSink:table-name, value : employee_sql
table-name is the macro name that is used in both the DBSource stage and the TableSink stage.
If the user wants to provide a special value for the macro "table-name" to be used in TableSink, they prefix the stage name to the macro name, separated by the delimiter (a colon).
```
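A minimal sketch of scoped macro resolution under the convention above: a stage-qualified key ("stageName:macroName") takes precedence over the plain macro name. The helper name is illustrative:

```java
import java.util.Map;

// Hypothetical lookup for a scoped macro value: prefer the
// stage-qualified key ("TableSink:table-name") and fall back
// to the plain macro name ("table-name").
static String resolveScoped(String stageName, String macroName,
                            Map<String, String> runtimeArgs) {
  String scopedKey = stageName + ":" + macroName;
  if (runtimeArgs.containsKey(scopedKey)) {
    return runtimeArgs.get(scopedKey);
  }
  return runtimeArgs.get(macroName);
}
```

With the example arguments above, resolveScoped("TableSink", "table-name", args) returns "employee_sql", while the DBSource stage falls back to "employees".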
Reference:
Changes to Existing Plugins
...