Goals
Ability to include custom actions in Hydrator pipelines.
Checklist
- User stories documented (Shankar/Sagar)
- User stories reviewed (Nitin)
- Design documented (Shankar/Sagar)
- Design reviewed (Terence/Andreas)
- Feature merged ()
- Examples and guides ()
- Integration tests ()
- Documentation for feature ()
- Blog post
Use Cases
- The source data may be present locally on one of the machines in the cluster, say 'M'. Before the Hydrator pipeline starts, that data needs to be moved to HDFS so that the pipeline can process it. Once the pipeline execution is complete, the data on 'M' is no longer required and can be deleted.
- Consider a Hydrator pipeline which performs some transformations on web access data that might contain malformed records. Only if the number of malformed records is below a certain threshold should the products and users datasets be copied to HDFS for further processing by the pipeline. Once the data is in HDFS, it can be deleted from the source it was copied from.
- Consider a scenario where a data scientist wants to perform a join between customer profiles stored in SQL Server and customer browsing history stored on HDFS. The customer profiles are generated on the fly using a SQL stored procedure. The data scientist wants to delay the generation of the customer profiles as much as possible so that he gets a more accurate view during analysis, and he wants updated customer profiles each time he performs the join. He can use a data pipeline which triggers the generation of the customer profiles asynchronously and then continues to process the customer browsing history data. Once the browsing data is processed, the pipeline should check whether the customer profiles have been generated; if not, it should wait until they are. At that point the pipeline continues and executes the join on the browsing history and customer profiles.
User stories
As a creator of a data pipeline, I want the ability to:
- copy data from one of the machines in the cluster to HDFS for processing.
- delete the data from the machine once it is copied to HDFS.
- execute a (Shell/Python/Perl/R) script located on a remote machine during the execution of the pipeline.
- archive data in HDFS generated by older pipeline runs.
- execute a SQL Server stored procedure on a remote SQL Server during the execution of the pipeline.
- wait during the pipeline execution until the required data is generated.
- extend actions, so that custom action plugins can be added.
Scope for 3.5:
- Support a data pipeline that contains only a DAG of actions to be executed.
- Support running actions before the Source stage or after the Sink stage of the DataPipelineApp.
- The following are a few actions we might want to support:
  - SSH Action: to be able to execute Perl/Python/R/Shell/Revo R scripts located on a remote machine
  - FileCopy Action: to be able to copy files from FTP/SFTP to a Unix machine, from a Unix machine to the HDFS cluster, or from FTP to HDFS
  - FileMove Action: to be able to move files from a source location (HDFS/Unix machine/SFTP) to a destination location (HDFS/Unix machine/SFTP)
  - Impala/Hive Action: to be able to run Hive scripts on a CDH or HDP cluster
  - SQL Action: to be able to run SQL stored procedures located on a remote SQL Server, and to copy Hive data to a SQL Server table
  - Email Action: to be able to send emails
Action | Description | Properties required | JSON Example
---|---|---|---
FileCopy/FileMove Action | To be able to copy files from FTP/SFTP to a Unix machine, from a Unix machine to the HDFS cluster, and from FTP to HDFS. To be able to move files from a source location A (HDFS/Unix machine/SFTP) to a destination location B (HDFS/Unix machine/SFTP). | | See the first example below.
SSH Script Action | To be able to execute Perl/Python/R/Shell/Revo R/Hive Query scripts located on a remote machine. | | See the second example below.
SQL Action | To be able to run SQL stored procedures located on a remote SQL Server; copy Hive data to a SQL Server table. | |
Email Action | To be able to send emails. | | See the third example below.

FileCopy/FileMove Action example:
```
{
  ...
  "config": {
    "connections": [ ... ],
    "stages": [
      {
        "name": "Copy-File",
        "plugin": {
          "name": "File-Manipulation",
          "type": "action",
          "artifact": {
            "name": "core-action-plugins",
            "version": "1.4.0-SNAPSHOT",
            "scope": "SYSTEM"
          },
          "properties": {
            "type": "SCP",
            "source": { }
          }
        }
      },
      ...
    ]
  }
}
```

SSH Script Action example:
```
{
  ...
  "config": {
    "connections": [ ... ],
    "stages": [
      {
        "name": "Run-Script",
        "plugin": {
          "name": "SSHShell",
          "type": "action",
          "artifact": {
            "name": "core-action-plugins",
            "version": "1.4.0-SNAPSHOT",
            "scope": "SYSTEM"
          },
          "properties": {
            "host": "scripthost.com",
            "script-path": "/tmp/myscript",
            "arguments": [
              { "name": "timeout", "value": 10 },
              { "name": "user", "value": "some_user" }
            ]
          }
        }
      },
      ...
    ]
  }
}
```

Email Action example:
```
{
  ...
  "config": {
    "connections": [ ... ],
    "stages": [
      {
        "name": "Email-Bob",
        "plugin": {
          "name": "Email",
          "type": "action",
          "artifact": {
            "name": "core-action-plugins",
            "version": "1.4.0-SNAPSHOT",
            "scope": "SYSTEM"
          },
          "properties": {
            "protocol": "smtp"
          }
        }
      },
      ...
    ]
  }
}
```
Action Java API:
Action interface to be implemented by action plugins:
```
/**
 * Represents custom action to be executed in the pipeline.
 */
public interface Action extends PipelineConfigurable, StageLifecycle<ActionContext> {

  /**
   * Implement this method to execute the code as a part of action run.
   *
   * @throws Exception when there is failure in method execution
   */
  void run() throws Exception;
}
```
The ActionContext interface will be available to action plugins:
```
/**
 * Represents the context available to the action plugin during runtime.
 */
public interface ActionContext extends DatasetContext, ServiceDiscoverer, PluginContext {

  /**
   * Returns the logical start time of the batch job which triggers this instance of an action.
   * The logical start time is the time when the triggering batch job is supposed to start if it
   * is started by the scheduler; otherwise it is the current time when the action runs.
   *
   * @return time in milliseconds since epoch time (00:00:00 January 1, 1970 UTC)
   */
  long getLogicalStartTime();

  /**
   * @return runtime arguments of the batch job
   */
  Map<String, String> getRuntimeArguments();

  /**
   * @return the state of the action. Useful in the {@link Action#destroy()} method,
   * to perform any activity based on the action state.
   * (Open question: this method could be moved to PluginContext so that it is available to all plugins.)
   */
  ActionState getState();
}
```
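To make the proposed API concrete, here is a minimal sketch of what a custom action plugin could look like against the Action and ActionContext interfaces above. The class name, the `path.to.delete` runtime argument, and the assumption that StageLifecycle<ActionContext> supplies the context through an initialize(ActionContext) method are illustrative only; plugin annotations, configuration handling, and CDAP imports are omitted.

```
// Hypothetical example only: implements the Action/ActionContext contract sketched above.
// Imports of the CDAP API types (Action, ActionContext, PipelineConfigurer) are omitted.
public class FileDeleteAction implements Action {

  private ActionContext context;

  @Override
  public void configurePipeline(PipelineConfigurer pipelineConfigurer) {
    // Nothing to register at configure time for this simple action.
  }

  @Override
  public void initialize(ActionContext context) throws Exception {
    // Assumes the lifecycle hands the runtime context to the plugin before run() is called.
    this.context = context;
  }

  @Override
  public void run() throws Exception {
    // Read the path to delete from the runtime arguments of the batch job
    // ("path.to.delete" is an illustrative argument name, not part of the proposal).
    String path = context.getRuntimeArguments().get("path.to.delete");
    if (path != null) {
      java.nio.file.Files.deleteIfExists(java.nio.file.Paths.get(path));
    }
  }

  @Override
  public void destroy() {
    // context.getState() could be inspected here to react to success or failure of the action.
  }
}
```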
- How will the ActionConfig look?
Configurations for the action plugins:
Consider the following sample pipeline, which is composed entirely of action nodes.
```
(Script)----->(CopyToHDFS)------->(Hive)------->(SQL Exporter)
                                    |
                                    |
                                    ------>(HDFSArchive)
```
Where:
1. The Script action is responsible for executing a (Shell, Perl, Python, etc.) script located on the specified machine. This action prepares the input data, which arrives in multiple formats (JSON, binary, CSV, etc.), into the single format (flattened records) expected by the next stages.
2. The CopyToHDFS action is responsible for copying the flattened files generated in the previous stage to the specified HDFS directory.
3. The Hive action is responsible for executing the Hive script which populates the Hive tables based on the business logic contained in the script.
4. The SQLExporter action exports the Hive table to the relational database.
5. In parallel, the HDFS files generated during step 2 are archived by the HDFSArchive action.
Possible configurations for the pipeline:
```
{
  "artifact": {
    "name": "cdap-data-pipeline",
    "version": "3.5.0-SNAPSHOT",
    "scope": "SYSTEM"
  },
  "name": "MyActionPipeline",
  "config": {
    "connections": [
      { "from": "Script", "to": "CopyToHDFS" },
      { "from": "CopyToHDFS", "to": "Hive" },
      { "from": "Hive", "to": "SQLExporter" },
      { "from": "Hive", "to": "HDFSArchive" }
    ],
    "stages": [
      {
        "name": "Script",
        "plugin": {
          "name": "SSH",
          "type": "action",
          "artifact": { "name": "core-action-plugins", "version": "1.4.0-SNAPSHOT", "scope": "SYSTEM" },
          "properties": {
            "host": "scripthost.com",
            "scriptFileName": "/tmp/myfile",
            "command": "/bin/bash",
            "arguments": [
              { "name": "timeout", "value": 10 },
              { "name": "user", "value": "some_user" }
            ]
          }
        }
      },
      {
        "name": "CopyToHDFS",
        "plugin": {
          "name": "FileCopy",
          "type": "action",
          "artifact": { "name": "core-action-plugins", "version": "1.4.0-SNAPSHOT", "scope": "SYSTEM" },
          "properties": {
            "sourceHost": "source.host.com",
            "sourceLocation": "/tmp/inputDir",
            "wildcard": "*.txt",
            "targetLocation": "hdfs://hdfs.cluster.com/tmp/output"
          }
        }
      },
      {
        "name": "Hive",
        "plugin": {
          "name": "HIVE",
          "type": "action",
          "artifact": { "name": "core-action-plugins", "version": "1.4.0-SNAPSHOT", "scope": "SYSTEM" },
          "properties": {
            "hiveScriptURL": "URL of the hive script",
            "blocking": "true",
            "arguments": [
              { "name": "timeout", "value": 10 },
              { "name": "user", "value": "some_user" }
            ]
          }
        }
      },
      {
        "name": "SQLExporter",
        "plugin": {
          "name": "SQL",
          "type": "action",
          "artifact": { "name": "core-action-plugins", "version": "1.4.0-SNAPSHOT", "scope": "SYSTEM" },
          "properties": {
            "connection": "Connection configurations",
            "blocking": "true",
            "sqlFileName": "/home/user/sql_exporter.sql",
            "arguments": [
              { "name": "timeout", "value": 10 },
              { "name": "user", "value": "some_user" }
            ]
          }
        }
      },
      {
        "name": "HDFSArchive",
        "plugin": {
          "name": "FileMove",
          "type": "action",
          "artifact": { "name": "core-action-plugins", "version": "1.4.0-SNAPSHOT", "scope": "SYSTEM" },
          "properties": {
            "sourceLocation": "hdfs://hdfs.cluster.com/data/output",
            "targetLocation": "hdfs://hdfs.cluster.com/data/archive/output",
            "wildcard": "*.txt"
          }
        }
      }
    ]
  }
}
```
Based on the connection information specified in the application configuration above, DataPipelineApp will configure a Workflow which has one custom action corresponding to each plugin of type `action` in the config.
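As a rough illustration of that mapping (not the code DataPipelineApp actually generates), the DAG above could correspond to a Workflow along these lines, assuming the CDAP AbstractWorkflow API with addAction/fork/also/join and hypothetical custom action classes standing in for the configured plugins:

```
// Conceptual sketch only: the real Workflow is generated by DataPipelineApp from the config.
// ScriptAction, CopyToHDFSAction, HiveAction, SQLExporterAction, and HDFSArchiveAction are
// hypothetical custom action classes corresponding to the stages in the JSON above.
public class MyActionPipelineWorkflow extends AbstractWorkflow {

  @Override
  protected void configure() {
    setName("MyActionPipelineWorkflow");
    addAction(new ScriptAction());       // Script
    addAction(new CopyToHDFSAction());   // CopyToHDFS
    addAction(new HiveAction());         // Hive
    // Hive fans out to SQLExporter and HDFSArchive, which run in parallel.
    fork()
      .addAction(new SQLExporterAction())
    .also()
      .addAction(new HDFSArchiveAction())
    .join();
  }
}
```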
Open Questions:
1. There are multiple SSH libraries available for Java. We need to figure out which one best suits our use cases - for example, the ability to collect logs emitted by the script.
2. A custom action executes on the Workflow driver, which runs as a YARN container. To SSH to a remote host that is not in the cluster, the container requires access to the private key file. One way is to store this private key file on HDFS, where all containers running in the cluster can access it.
Potential SSH Java Libraries
 | SSHJ | JSCH | Ganymed SSH2
---|---|---|---
Advantages | Clear API with plenty of examples in the docs. Fewer lines of code are required to do the same thing compared to JSch. | Sticks very close to the SSH specs, so could prove easier to learn for devs who are very familiar with them. It also gives devs more control over low-level SSH protocols, which sshj and trilead do not as much. | Simplicity at its finest. The community claims that getting simple SSH actions working is significantly easier/faster than when attempting to use JSch.
Drawbacks | Same as Ganymed SSH2. | Documentation is nonexistent and it is not necessarily the most intuitive, most likely leading to a high learning curve. That being said, there are many examples on the web. | It is not possible to have low-level access to the SSH protocol, because it is meant to provide abstraction to the developer.
Info Links | Examples: https://github.com/hierynomus/sshj/tree/master/examples/src/main/java/net/schmizz/sshj/examples | Argument against using JSch: https://techtavern.wordpress.com/2008/09/30/about-jsch-open-source-project/ Example: http://www.jcraft.com/jsch/examples/ | Examples: http://www.programcreek.com/
Can we pull logs from the remote machines? | Yes: https://github.com/hierynomus/sshj/blob/master/examples/src/main/java/net/schmizz/sshj/examples/RudimentaryPTY.java | Yes: http://stackoverflow.com/questions/6902386/how-to-read-jsch-command-output | Yes, use the StreamGobbler class.
Library Code Source | https://github.com/hierynomus/sshj | http://www.jcraft.com/jsch/ | https://code.google.com/archive/p/ganymed-ssh-2/
License | Apache License | BSD License | BSD License
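For reference, below is a minimal sketch of the pattern the "Can we pull logs" row refers to, using JSch: run a script over SSH and stream its stdout back. The host, user, key path, and command are placeholders, and error handling and host-key verification are simplified for brevity.

```
import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

import java.io.BufferedReader;
import java.io.InputStreamReader;

// Hypothetical standalone example: execute a remote script and collect its output (the "logs").
public class RemoteScriptRunner {

  public static void main(String[] args) throws Exception {
    JSch jsch = new JSch();
    jsch.addIdentity("/path/to/private_key");                  // placeholder key path
    Session session = jsch.getSession("some_user", "scripthost.com", 22);
    session.setConfig("StrictHostKeyChecking", "no");          // for illustration only
    session.connect();

    ChannelExec channel = (ChannelExec) session.openChannel("exec");
    channel.setCommand("/bin/bash /tmp/myscript");             // placeholder command
    channel.setErrStream(System.err);

    try (BufferedReader reader =
             new BufferedReader(new InputStreamReader(channel.getInputStream()))) {
      channel.connect();
      String line;
      while ((line = reader.readLine()) != null) {
        System.out.println(line);                              // forward the script's output
      }
    } finally {
      channel.disconnect();
      session.disconnect();
    }
  }
}
```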
Platform support required