Hydrator Backend Application
The goal is to develop a back-end app that encapsulates business logic and acts as an intermediary between CDAP-UI and the CDAP backend. The back-end app simplifies developing new features in CDAP-UI by translating business-level requests/actions into the appropriate CDAP backend requests/actions and returning only the relevant information to the UI. This lets CDAP-UI focus on UI concerns rather than on business logic. Ideally this back-end app will remove the need for "view in CDAP", since the UI will be able to get all relevant information from the back-end app.
Checklist
- User stories documented (Shankar)
- User stories reviewed (Nitin)
- Design documented (Shankar)
- Design reviewed (Terence/Andreas/Albert)
- Feature merged (Shankar)
- UI updated (Ajai/Edwin)
- Documentation for feature (Shankar)
Use-cases
Case #1
- User adds a database plugin to the pipeline, clicks on the database plugin to fill in the configuration
- User provides JDBC string, table name or SELECT query, username, password.
- User then clicks on the button to populate the schema
- The UI makes a backend call to the Hydrator app to retrieve the associated schema, based on either the table name or the SELECT query.
- User then has the choice to include the schema as the output schema of the database plugin.
- The information of the schema associated with the database plugin is stored as spec in the exported pipeline.
Case #2
- User adds a database plugin to the pipeline, clicks on the database plugin to fill in the configuration
- User provides JDBC string (include database and other configurations), username and password
- To select a table, the user clicks the button to list the available tables.
- UI makes the backend call to retrieve the list of tables and show it to the user
- User then selects the table which automatically populates the schema as the output schema of the database plugin.
Case #3
- Shankar is using the Hydrator Studio instance to build a pipeline, he is building a batch pipeline for processing data from the Stream
- Albert is also using the same instance of Hydrator Studio to build his pipeline, he is building a real-time pipeline for processing data from Twitter
- Both Albert and Shankar have complex pipelines to build and they want to ensure that their work is not lost, so they are periodically saving it as draft
- When both of them save drafts independently of each other, each one's draft is visible to the other.
User Stories
There are Hydrator-specific functionalities that could leverage CDAP's features.
- Drafts
- User wants to add a new draft or save the pipeline they are working on as a draft
- User can update an existing draft of a pipeline as a new version; previous versions of the pipeline are saved (up to 20 versions)
- User can go back to a previous version of a draft, or retrieve any specific version of a draft
- User wants to retrieve the latest version of a draft for a pipeline
- User wants to view all available pipeline drafts across all users
- User wants the ability to write a pipeline draft
- User has access to only those pipelines that are available in the namespace the user is in.
- Plugin Output Schema
- User using DB-Source wants to enter connection-string, table name and automatically populate table schema information.
- User using TeraData-Source wants to enter connection-string, table name and automatically populate table schema information.
- List Field values
- User provides connection-string, user-name and password and expects list of available tables returned in DB-Source.
- User provides connection-string, user-name and password and expects list of available tables returned in TeraData-Source.
Proposed REST APIs
HTTP Request Type | Endpoint | Request Body | Response Status | Response Body
----------------- | -------- | ------------ | --------------- | -------------
POST | /extensions/hydrator/drafts/{draft-name} | { "config": {...}, "message": "..." } | 200 OK: draft created and saved successfully; 409 CONFLICT: draft-name already exists; 500: error while creating the draft |
PUT | /extensions/hydrator/drafts/{draft-name} | { "config": {...}, "message": "..." } | 200 OK: draft updated successfully; 404 NOT FOUND: draft doesn't exist, cannot be updated; 400 BAD REQUEST: only 20 versions can be stored, delete an old version before storing; 500: error while updating the draft |
GET | /extensions/hydrator/drafts/{draft-name}/versions/ | | 200: all versions of the draft identified by draft-name; 404: draft not found; 500: error while getting the draft | [ { "message": "...", "config": { "source": {...}, "transforms": [...], "sinks": [...], "connections": [...] } }, ... ]
GET | /extensions/hydrator/drafts/{draft-name}/versions/{version-number} (-1 -> latest version) | | 200: the version of the draft identified by draft-name and version-number; 404: draft not found; 500: error while getting the draft | { "message": "...", "config": { "source": {...}, "transforms": [...], "sinks": [...], "connections": [...] } }
GET | /extensions/hydrator/drafts/ | | 200: list of all saved drafts; 500: error | [ "streamToTPFS", "DBToHBase", ... ]
DELETE | /extensions/hydrator/drafts/ | | 200: successfully deleted all drafts; 500: error while deleting |
DELETE | /extensions/hydrator/drafts/{draft-name} | | 200: successfully deleted the specified draft; 404: draft does not exist; 500: error while deleting |
DELETE | /extensions/hydrator/drafts/{draft-name}/versions/{version-number} | | 200: successfully deleted the version of the draft; 404: draft with that version does not exist; 500: error while deleting |
POST | /extensions/hydrator/plugins/{plugin-name}/schema | { "artifact": { "name": "...", "version": "...", "scope": "..." }, "jdbcConnectionString": "...", "jdbcPluginName": "...", "tableName": "..." } | 200: output schema determined from the plugin and plugin properties; 404: unrecognized plugin-name; 500: error | { "field1": Integer, "field2": String, ..., "fieldN": Double }
POST | /extensions/hydrator/plugins/{plugin-name}/list (query param: target, e.g. target=table) | { "artifact": { "name": "...", "version": "...", "scope": "..." }, "connectionString": "...", "username": "...", "password": "..." } | For the specified plugin, use the provided connection information to get the list of available values for the target field; 200: list of values (e.g. list of tables in a database); 500: error while retrieving | [ "tableA", "tableB", ..., "tableN" ]
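To make the plugin endpoints concrete, here is a rough sketch of how a client could call the schema endpoint above using plain Java. The host/port, the `database` plugin name, and all request values are placeholders, not part of the proposal.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SchemaEndpointClientSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder host/port and plugin name; the path follows the table above.
    URL url = new URL("http://localhost:11015/extensions/hydrator/plugins/database/schema");
    String body = "{ \"artifact\": { \"name\": \"core-plugins\", \"version\": \"1.3.0\", \"scope\": \"SYSTEM\" },"
        + " \"jdbcConnectionString\": \"jdbc:mysql://db-host:3306/mydb\","
        + " \"jdbcPluginName\": \"mysql\", \"tableName\": \"purchases\" }";

    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream os = conn.getOutputStream()) {
      os.write(body.getBytes(StandardCharsets.UTF_8));
    }
    // On success the endpoint returns 200 with a field-name -> type map (see Response Body above).
    System.out.println("Response code: " + conn.getResponseCode());
  }
}
```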
Design
Option #1
Description
The hydrator app needs to be able to read from and write to a dataset in order to store and retrieve drafts and other business-logic state. We can implement a Hydrator CDAP application with a service that exposes REST endpoints serving the required hydrator functionality. Enabling Hydrator in a namespace will deploy this Hydrator app and start the service; the Hydrator UI would wait for this service to be available before coming up. The back-end business-logic actions that directly need to use the CDAP service endpoints can be made generic.
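As a rough illustration of this option, the sketch below shows how such an app might be structured with CDAP's AbstractApplication and AbstractHttpServiceHandler APIs. The app, service, handler, and dataset names are hypothetical, only a single PUT endpoint is shown, and versioning and error handling are omitted.

```java
import co.cask.cdap.api.annotation.UseDataSet;
import co.cask.cdap.api.app.AbstractApplication;
import co.cask.cdap.api.dataset.lib.KeyValueTable;
import co.cask.cdap.api.service.http.AbstractHttpServiceHandler;
import co.cask.cdap.api.service.http.HttpServiceRequest;
import co.cask.cdap.api.service.http.HttpServiceResponder;

import java.nio.ByteBuffer;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

// Hypothetical Hydrator back-end app: one service plus one dataset for drafts.
public class HydratorApp extends AbstractApplication {
  @Override
  public void configure() {
    setName("HydratorApp");
    setDescription("Back-end service that serves Hydrator business-logic requests");
    createDataset("drafts", KeyValueTable.class);
    addService("HydratorService", new DraftsHandler());
  }

  // Handler sketch for the draft endpoints proposed above (only PUT shown).
  public static class DraftsHandler extends AbstractHttpServiceHandler {
    @UseDataSet("drafts")
    private KeyValueTable drafts;

    @PUT
    @Path("drafts/{draft-name}")
    public void saveDraft(HttpServiceRequest request, HttpServiceResponder responder,
                          @PathParam("draft-name") String draftName) {
      // Read the raw request body (the draft JSON) and store it keyed by draft name.
      ByteBuffer content = request.getContent();
      byte[] body = new byte[content.remaining()];
      content.get(body);
      drafts.write(draftName, body);
      responder.sendStatus(200);
    }
  }
}
```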
Pros
- Everything (drafts, etc.) is stored in the same namespace, with proper cleanup when the namespace is deleted.
Cons
- Every namespace that has Hydrator enabled gets an extra app to support it; running this service uses 2 containers per namespace. We can add an option to enable/disable Hydrator when it is not being used in a namespace. It might also feel odd as a user app, since the user didn't write or create this app.
Option #2
Description
We would still use a Hydrator CDAP app, but we create an "extensions" namespace and deploy the "hydrator" app only in that namespace; this single app would serve hydrator requests for all namespaces.
Pros
- Fewer resources used: only 2 containers in total rather than 2 containers per namespace, and only one dataset.
- Only one app serving Hydrator across namespaces rather than one app per namespace; less clutter.
- New extensions could be added to the same namespace to support other use cases in the future.
Cons
- Using a single dataset to store all drafts across namespaces may be less secure.
- User won't be able to create a new namespace called "Extensions", as it will be reserved.
Open Questions
- How to delete the drafts when the namespace is deleted?
- When to stop this service?
- Availability of the service?
- Security
- If we decide to add more capabilities to the hydrator back-end app (e.g., pipeline validation/deployment), can the hydrator service, in a secure environment, discover the appropriate CDAP service and call the appropriate endpoints?
Option #3 (based on discussion with Terence)
No new user-level apps are deployed. The preference store is used to store user drafts of hydrator apps.
'configurePipeline' can be changed to return partial results: it can return a PluginSpecification with possible values for missing information in the plugin config; the PluginSpecification will be serialized into the ApplicationSpecification and returned to the user.
Example:
Hydrator makes a call to the preference store to save a namespaced draft. To delete drafts, the preference store's delete endpoint is called for those drafts. If the user deletes the namespace manually from the CDAP CLI, the preference store drops everything in that namespace, including the drafts.
The plugin configure stage will accept an incomplete config and will create a PluginSpecification with possible values for the incomplete config fields.
Example: a user is using a DBSource plugin and provides connectionString, userName and password. The UI hits the /validate endpoint with this config; DBSource's configurePipeline is called, inspects the config, notices that the required field 'tableName' is missing, connects to the database, gets the list of table names, writes this list into the PluginSpecification and returns failure.
The user notices the failure, reads the specification to get the list of tables, selects the table of interest and makes the same call again. DBSource's configurePipeline now notices that the schema and the 'import' field are missing, so it populates the schema information in the spec and returns failure.
The user fills in the 'import' and 'count' queries, adjusts the schema appropriately and makes the same call; all the necessary fields are now present and valid, so the DBSource plugin returns success for this stage and the user proceeds to the next stage.
REST API:
HTTP Request Type | Endpoint | Request Body | Response Status | Response Body
----------------- | -------- | ------------ | --------------- | -------------
POST | /namespaces/{namespace-id}/drafts/{draft-id}/ | | 200 OK: draft created and saved successfully; 409 CONFLICT: draft already exists; 500: error while creating the draft |
PUT | /namespaces/{namespace-id}/drafts/{draft-id}/ | | 200 OK: draft updated successfully; 404 NOT FOUND: draft doesn't exist, cannot be updated; 500: error while updating the draft |
GET | /namespaces/{namespace-id}/drafts/{draft-id}/ | | 200: all versions of the draft identified by draft-id; 404: draft not found; 500: error while getting the draft |
GET | /namespaces/{namespace-id}/drafts/{draft-id}/versions/{version-number} (-1 -> latest version) | | 200: the version of the draft identified by draft-id and version-number; 404: draft with that version not found; 500: error while getting the draft |
GET | /namespaces/{namespace-id}/drafts/ | | 200: names of all saved drafts; 500: error | [ ... ]
DELETE | /namespaces/{namespace-id}/drafts/ | | 200: successfully deleted all drafts; 500: error while deleting |
DELETE | /namespaces/{namespace-id}/drafts/{draft-id} | | 200: successfully deleted the specified draft; 404: draft does not exist; 500: error while deleting |
The ConsoleSettingsHttpHandler currently makes use of the ConfigStore. It is, however, not namespaced and has a few other issues; it can be fixed and improved to store these configs.
ConsoleSettingsHttpHandler -> ConfigStore
Along with pipeline drafts, the ConsoleSettingsHttpHandler currently also stores the following information:
1) pre-configured plugin templates.
Endpoints:
GET namespaces/{namespace-id}/plugin-templates/{plugin-template-id}/
POST namespaces/{namespace-id}/plugin-templates/{plugin-template-id}/ -d '@plugin-template.json' -> create a new plugin template
PUT namespaces/{namespace-id}/plugin-templates/{plugin-template-id}/ -d '@plugin-template.json' -> update existing plugin template
2) default versions of plugins (user preferences)
Endpoints:
PUT: namespaces/{namespace-id}/defaults -d '@default.json' -> create/update defaults; this includes the user's plugin version preferences, etc.
GET: namespaces/{namespace-id}/defaults -> get the defaults configured by the user.
ConfigStore existing methods:
```java
void create(String namespace, String type, Config config) throws ConfigExistsException;
void createOrUpdate(String namespace, String type, Config config);
void delete(String namespace, String type, String id) throws ConfigNotFoundException;
List<Config> list(String namespace, String type);
Config get(String namespace, String type, String id) throws ConfigNotFoundException;
void update(String namespace, String type, Config config) throws ConfigNotFoundException;
```
ConfigStore new methods:
```java
// get a specific version of a draft
Config get(String namespace, String type, String id, int version) throws ConfigNotFoundException;

// get all the versions of the draft
List<Config> getAllVersions(String namespace, String type, String id) throws ConfigNotFoundException;

// type -> drafts; delete all drafts (all configs of the given type) in the namespace
void delete(String namespace, String type);
```
Existing Config class:
```java
public final class Config {
  // draft-id
  private final String id;
  // config -> JSON config and other properties, for example: timestamp -> currentTime
  private final Map<String, String> properties;
}
```
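To show how the draft REST API could sit on top of these methods, here is a hypothetical helper. The class and method names are illustrative only; 'drafts' as the config type and the 20-version limit come from the requirements above, and a getId() accessor on Config is assumed.

```java
import java.util.List;

// Illustrative sketch, not part of the design: versioned drafts layered on ConfigStore.
public class DraftStore {
  private static final String TYPE = "drafts";   // config type used for drafts
  private static final int MAX_VERSIONS = 20;    // limit from the PUT endpoint above

  private final ConfigStore configStore;

  public DraftStore(ConfigStore configStore) {
    this.configStore = configStore;
  }

  // Save a new version of a draft, enforcing the 20-version limit (maps to PUT .../drafts/{draft-id}).
  public void saveVersion(String namespace, Config draft) {
    try {
      List<Config> versions = configStore.getAllVersions(namespace, TYPE, draft.getId());
      if (versions.size() >= MAX_VERSIONS) {
        throw new IllegalStateException(
            "Only " + MAX_VERSIONS + " versions can be stored; delete an old version before storing.");
      }
    } catch (ConfigNotFoundException e) {
      // First version of this draft: nothing to check.
    }
    configStore.createOrUpdate(namespace, TYPE, draft);
  }

  // Get one version of a draft; version -1 means the latest, matching the REST API above.
  public Config getVersion(String namespace, String draftId, int version) throws ConfigNotFoundException {
    return configStore.get(namespace, TYPE, draftId, version);
  }

  // Delete all drafts in a namespace (maps to DELETE .../drafts/).
  public void deleteAll(String namespace) {
    configStore.delete(namespace, TYPE);
  }
}
```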
Questions:
1) ConfigStore stores the configs in "config.store.table". Currently the table properties do not enable versioning, but drafts would need versioning; would this affect the "preferences" stored by the PreferenceStore? This would also need a CDAP upgrade step to update the properties of the existing dataset.
REST API for configure suggestions (AppFabric):
Request-Method: POST
Request-Endpoint: /namespaces/{namespace-id}/apps/{app-id}/configure
Request-Body: config JSON
```json
{
  "artifact": {
    "name": "cdap-etl-batch",
    "scope": "SYSTEM",
    "version": "3.4.0-SNAPSHOT"
  },
  "name": "pipeline",
  "config": {
    "source": {
      "name": "Stream",
      "plugin": {
        "name": "StreamSource",
        "artifact": {
          "name": "core-plugins",
          "version": "1.3.0-SNAPSHOT",
          "scope": "SYSTEM"
        },
        "properties": {
          "format": "syslog",
          "name": "test",
          "duration": "1d"
        }
      }
    },
    "sinks": [{...}],
    "transform": [{...}, {...}]
  }
}
```
Response-Body: config JSON
```json
{
  "artifact": {
    "name": "cdap-etl-batch",
    "scope": "SYSTEM",
    "version": "3.4.0-SNAPSHOT"
  },
  "name": "pipeline",
  "config": {
    "source": {
      "name": "Stream",
      "plugin": {
        "name": "StreamSource",
        "artifact": {
          "name": "core-plugins",
          "version": "1.3.0-SNAPSHOT",
          "scope": "SYSTEM"
        },
        "properties": {
          "format": "syslog",
          "name": "test",
          "duration": "1d",
          "suggestions": [{
            "schema": [
              {
                "ts": "long",
                "headers": "Map<String, String>",
                "program": "string",
                "message": "string",
                "pid": "string"
              }
            ]
          }],
          "isComplete": "false"
        }
      }
    },
    "sinks": [{...}],
    "transform": [{...}, {...}]
  }
}
```
Plugin API Change
```java
@Beta
public interface PipelineConfigurable {
  // change in return type
  ConfigResponse configurePipeline(PipelineConfigurer pipelineConfigurer) throws IllegalArgumentException;
}

public class ConfigResponse extends Config {
  // list of suggestions for fields
  List<Suggestion> suggestions;
  // if there was any exception while executing configure
  @Nullable
  String exception;
  // is the stage configuration complete?
  @DefaultValue("false")
  boolean isComplete;
}

public class Suggestion {
  String fieldName;
  // list of possible values for the fieldName
  List<String> fieldValues;
}

@Beta
public interface ApplicationContext<T extends Config> {
  // existing
  T getConfig();
  // application will set a config response
  void setResponseConfig(T response);
  // get the response config
  T getResponseConfig();
}
```
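As a sketch of how a plugin might use these proposed types, following the DBSource walkthrough in Option #3: the config fields, the listTables() helper, and direct field access plus a no-arg constructor on ConfigResponse and Suggestion are assumptions made for illustration.

```java
import java.util.Collections;
import java.util.List;

// Illustrative only: a DBSource-like plugin returning suggestions for a missing tableName.
public class DBSourceSketch implements PipelineConfigurable {
  // Hypothetical plugin configuration; a real plugin would get this from the plugin framework.
  private final String connectionString;
  private final String user;
  private final String password;
  private final String tableName;   // may be null when the user has not picked a table yet

  public DBSourceSketch(String connectionString, String user, String password, String tableName) {
    this.connectionString = connectionString;
    this.user = user;
    this.password = password;
    this.tableName = tableName;
  }

  @Override
  public ConfigResponse configurePipeline(PipelineConfigurer pipelineConfigurer) {
    ConfigResponse response = new ConfigResponse();
    if (tableName == null) {
      // Required field missing: return possible values instead of failing outright.
      Suggestion suggestion = new Suggestion();
      suggestion.fieldName = "tableName";
      suggestion.fieldValues = listTables(connectionString, user, password);
      response.suggestions = Collections.singletonList(suggestion);
      response.isComplete = false;
      return response;
    }
    // All required fields are present: the stage configuration is complete.
    response.isComplete = true;
    return response;
  }

  // Placeholder: a real plugin would query the database metadata here (e.g. via JDBC).
  private List<String> listTables(String connectionString, String user, String password) {
    return Collections.emptyList();
  }
}
```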
Questions
1) Though it makes sense for the config response to live in the ApplicationContext along with the input config, this would allow CDAP programs to set a config that other programs can read, so we have to consider the implications of that.
User Stories (3.5.0)
- For the hydrator use case, the back-end app should be able to support the hydrator-related functionalities listed below:
- query the plugins available for a given artifact and list them in the UI
- obtain the output schema of a plugin, given its input configuration information
- deploy a pipeline and start/stop it
- query the status of a pipeline run and the current execution status when there are multiple stages
- get the next scheduled run, and query metrics and logs for pipeline runs
- create and save pipeline drafts
- get the input/output streams/datasets of a pipeline run and list them in the UI
- explore the data in streams/datasets used in the pipeline, if they are explorable
- add new metadata about a pipeline and retrieve metadata by pipeline run, etc.
- delete a hydrator pipeline
- The back-end app's functionality should be limited to Hydrator; it should not become a general proxy for CDAP.
Having these abilities removes the logic in CDAP-UI that makes the corresponding CDAP REST calls; this encapsulation simplifies the UI's interaction with the back-end and also helps debug potential issues faster. In the future we could have more apps similar to the hydrator app, so our back-end app should define and implement generic cases that can be shared across these apps, and it should allow extensibility to support adding new features.
Generic Endpoints
HTTP Request Type | Endpoint | Request Body | Description | Response Body
----------------- | -------- | ------------ | ----------- | -------------
GET | /extensions/{back-end}/status | | 200 OK: platform service is available; 404: service unavailable |
GET | /extensions/{back-end}/program/{program-name}/runs | | 200 OK: runs of the program | [ "4as432-are425-..", "4az422-are425-..", ... ]
POST | /extensions/{back-end}/program/{program-name}/action | | 200: start/stop/status of the program |
POST | /extensions/{back-end}/program/{program-name}/metrics/query (query params: startTime, endTime, scope) | config: time range, tags | 200: return metrics |
GET | /extensions/{back-end}/program/{program-name}/logs/{log-level} (query params: startTime, endTime) | | 200: return logs for the time range |
GET | /extensions/{back-end}/program/{program-name}/schedule | | 200: get the next scheduled run time | { "timestamp": "1455832171" }
GET | /extensions/{back-end}/program/{program-name}/datasets | | 200: all input/output datasets used by the program | [ "purchases", "history", ... ]
POST | /extensions/{back-end}/program/{program-name}/datasets/{dataset-name}/explore/{action} | | Perform the action {preview, download, next} for explore on the dataset; 200: explore result |
POST | /extensions/{back-end}/program/{program-name}/metadata | { "key": "...", "value": "..." } | Store the supplied metadata JSON for this program; 200 OK |
GET | /extensions/{back-end}/program/{program-name}/metadata | | Get the metadata added for this program; 200: metadata result | { "key": "...", "value": "..." }
DELETE | /extensions/{back-end}/program/{program-name}/metadata | | 200: successfully deleted the metadata added for the program |