
Hydrator Backend Application

Develop a back-end app that encapsulates business logic and acts as an intermediary between the CDAP UI and the CDAP backend. The back-end app simplifies developing new features in the CDAP UI, since it encapsulates the logic that translates business-logic requests/actions into the appropriate CDAP backend requests/actions and returns only the relevant information to the UI. This lets the CDAP UI focus on UI concerns rather than business logic. Ideally, this back-end app will remove the need for "view in CDAP", since the UI will be able to get all the relevant information from the back-end app.

Checklist

  • User stories documented (Shankar)
  • User stories reviewed (Nitin)
  • Design documented (Shankar)
  • Design reviewed (Terence/Andreas/Albert)
  • Feature merged (Shankar)
  • UI updated (Ajai/Edwin)
  • Documentation for feature (Shankar)

Use-cases

Case #1

  • User adds a database plugin to the pipeline and clicks on it to fill in the configuration
  • User provides the JDBC string, table name or SELECT query, username, and password
  • User then clicks the button to populate the schema
  • The UI makes a backend call to the Hydrator app to retrieve the associated schema, based on whether it comes from the table name or the SELECT query
  • User then has the choice of including this schema as the output schema of the database plugin
  • The schema associated with the database plugin is stored as part of the spec in the exported pipeline

Case #2

  • User adds a database plugin to the pipeline and clicks on it to fill in the configuration
  • User provides the JDBC string (including the database and other configurations), username, and password
  • To select a table, the user clicks the button to list the tables
  • The UI makes a backend call to retrieve the list of tables and shows it to the user
  • User then selects a table, which automatically populates its schema as the output schema of the database plugin

Case #3

  • Shankar is using a Hydrator Studio instance to build a batch pipeline that processes data from a Stream
  • Albert is using the same Hydrator Studio instance to build a real-time pipeline that processes data from Twitter
  • Both have complex pipelines to build and want to ensure that their work is not lost, so they periodically save their pipelines as drafts
  • When they save drafts independently of each other, the drafts from each are visible to the other

User Stories

There are Hydrator-specific functionalities that could leverage CDAP's features.

  • Drafts
    • User wants to add a new draft or save the pipeline they are working on as a draft
    • User can update an existing draft of a pipeline as a new version; previous versions are saved (up to 20 versions)
    • User can go back to a previous version of a draft, or to any version of the draft
    • User wants to retrieve the latest version of a draft for a pipeline
    • User wants to view all available pipeline drafts across all users
    • User wants the ability to write a pipeline draft
    • User has access only to those pipelines available in the namespace the user is in
  • Plugin Output Schema
    • User using DB-Source wants to enter the connection string and table name and automatically populate the table schema information
    • User using TeraData-Source wants to enter the connection string and table name and automatically populate the table schema information
  • List Field Values
    • User provides the connection string, username, and password and expects the list of available tables to be returned in DB-Source
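The draft-versioning rules above (versions kept up to a cap of 20, with old versions deleted explicitly) can be sketched as a minimal in-memory model. The class and method names are illustrative assumptions, not CDAP code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of the per-draft versioning rule: at most 20 versions
// are retained; when the cap is reached, an old version must be deleted
// before a new one can be saved (mirroring the proposed 400 BAD REQUEST).
public class DraftVersions {
    static final int MAX_VERSIONS = 20;
    private final Deque<String> versions = new ArrayDeque<>();

    // Save a new version; returns false when the cap is reached.
    public boolean save(String config) {
        if (versions.size() >= MAX_VERSIONS) {
            return false; // caller must delete an old version first
        }
        versions.addLast(config);
        return true;
    }

    // Delete the oldest retained version; returns false if none exist.
    public boolean deleteOldest() {
        return versions.pollFirst() != null;
    }

    // The latest version is the most recently saved one (null if none).
    public String latest() {
        return versions.peekLast();
    }

    public int count() {
        return versions.size();
    }
}
```

A caller would retry the save after deleting the oldest version, which is the behavior the PUT endpoint's 400 response asks of the UI.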

Proposed REST APIs

Each entry below lists the HTTP request type, endpoint, request body (if any), response statuses, and response body (if any).
POST /extensions/hydrator/drafts/{draft-name}

Request body:
{
  "config": { ... },
  "message": "..."
}

Responses:
  200 OK: draft created and saved successfully
  409 CONFLICT: draft-name already exists
  500: error while creating the draft

PUT /extensions/hydrator/drafts/{draft-name}

Request body:
{
  "config": { ... },
  "message": "..."
}

Responses:
  200 OK: draft updated successfully
  404 NOT FOUND: draft doesn't exist, so it cannot be updated
  400 BAD REQUEST: only 20 versions can be stored; delete an old version before storing
  500: error while updating the draft

GET /extensions/hydrator/drafts/{draft-name}/versions/

Responses:
  200 OK: returns all versions of the draft identified by draft-name
  404: draft not found
  500: error while getting the draft

Response body:
[
  {
    "message": "...",
    "config": {
      "source": { ... },
      "transforms": [ ... ],
      "sinks": [ ... ],
      "connections": [ ... ]
    }
  },
  ...
]

GET /extensions/hydrator/drafts/{draft-name}/versions/{version-number}

A version-number of -1 refers to the latest version.

Responses:
  200 OK: returns the version of the draft identified by draft-name and version-number
  404: draft not found
  500: error while getting the draft

Response body:
{
  "message": "...",
  "config": {
    "source": { ... },
    "transforms": [ ... ],
    "sinks": [ ... ],
    "connections": [ ... ]
  }
}

GET /extensions/hydrator/drafts/

Responses:
  200 OK: returns the list of all saved drafts
  500: error

Response body:
[
  "streamToTPFS",
  "DBToHBase",
  ...
]

DELETE /extensions/hydrator/drafts/

Responses:
  200 OK: successfully deleted all drafts
  500: error while deleting

DELETE /extensions/hydrator/drafts/{draft-name}

Responses:
  200 OK: successfully deleted the specified draft
  404: draft does not exist
  500: error while deleting

DELETE /extensions/hydrator/drafts/{draft-name}/versions/{version-number}

Responses:
  200 OK: successfully deleted the specified version of the draft
  404: draft with that version does not exist
  500: error while deleting

POST /extensions/hydrator/plugins/{plugin-name}/schema

Request body:
{
  "artifact": {
    "name": "...",
    "version": "...",
    "scope": "..."
  },
  "jdbcConnectionString": "...",
  "jdbcPluginName": "...",
  "tableName": "..."
}

Responses:
  200 OK: based on the plugin and plugin properties, determine the output schema and return it
  404: unrecognized plugin-name
  500: error

Response body:
{
  "field1": "Integer",
  "field2": "String",
  ...
  "fieldN": "Double"
}
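As a sketch of how such a schema could be derived, a DB source might map the JDBC column types reported by the driver (java.sql.Types) to the simple type names in the response body above. The class and the exact mapping below are illustrative assumptions, not CDAP's actual implementation:

```java
import java.sql.Types;

// Illustrative mapping from JDBC SQL type codes to simple schema type names.
public class SchemaTypeMapper {

    public static String toSchemaType(int sqlType) {
        switch (sqlType) {
            case Types.INTEGER:
            case Types.SMALLINT:
            case Types.TINYINT:
                return "Integer";
            case Types.BIGINT:
                return "Long";
            case Types.FLOAT:
            case Types.REAL:
                return "Float";
            case Types.DOUBLE:
            case Types.NUMERIC:
            case Types.DECIMAL:
                return "Double";
            case Types.BOOLEAN:
            case Types.BIT:
                return "Boolean";
            case Types.VARCHAR:
            case Types.CHAR:
            case Types.LONGVARCHAR:
                return "String";
            default:
                // Fall back for types without an obvious simple equivalent.
                return "Bytes";
        }
    }
}
```

In practice the endpoint would obtain the type codes from ResultSetMetaData after connecting with the supplied JDBC string and table name or query.
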

POST /extensions/hydrator/plugins/{plugin-name}/list

Query param: target (example: target=table)

Request body (example):
{
  "artifact": {
    "name": "...",
    "version": "...",
    "scope": "..."
  },
  "connectionString": "...",
  "username": "...",
  "password": "..."
}

For the specified plugin, based on the provided connection information, get the list of available values for the target field and return it.

Responses:
  200 OK: list of available values for the target field (for example, the list of tables in a database)
  500: error while retrieving

Response body:
[
  "tableA",
  "tableB",
  ...
  "tableN"
]

Design

Option #1

Description

The Hydrator app needs to read and write a dataset to store and retrieve drafts and other business-logic information. We can implement a Hydrator CDAP application with a service exposing REST endpoints that serve the required Hydrator functionality. Enabling Hydrator in a namespace will deploy this Hydrator app and start the service; the Hydrator UI would ping this service and wait for it to be available before coming up. Back-end business-logic actions that directly need the CDAP service endpoints can be made generic.

  • Pros

    • Everything (drafts, etc.) is stored in the same namespace, with proper cleanup when the namespace is deleted.
  • Cons

    • Every namespace gets an extra app to support Hydrator when it is enabled, and running this service takes 2 containers per namespace. We could add an option to enable/disable Hydrator for namespaces that don't use it. It may also feel odd to present this as a user app, since the user didn't write or create it.

 


Option #2

Description

We still use a Hydrator CDAP app, but we create an "Extensions" namespace and deploy the "hydrator" app only in that namespace; this one app serves Hydrator requests for all namespaces.

It uses a single dataset to store the drafts, with namespaced row keys; when a namespace is deleted, the rows belonging to that namespace are deleted from the dataset.
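The namespaced row-key scheme can be sketched as follows. The "<namespace>:<draft-name>" key format and the class below are illustrative assumptions, with a TreeMap standing in for the ordered dataset:

```java
import java.util.TreeMap;

// Sketch of Option #2's single-dataset layout: row keys are prefixed with the
// namespace, so deleting a namespace removes every row with that prefix.
public class NamespacedDraftTable {
    // TreeMap stands in for the ordered dataset; key = "<namespace>:<draft-name>".
    private final TreeMap<String, String> rows = new TreeMap<>();

    static String rowKey(String namespace, String draftName) {
        return namespace + ":" + draftName;
    }

    public void put(String namespace, String draftName, String config) {
        rows.put(rowKey(namespace, draftName), config);
    }

    public String get(String namespace, String draftName) {
        return rows.get(rowKey(namespace, draftName));
    }

    // Namespace deletion: clear the key range covered by the namespace prefix.
    public void deleteNamespace(String namespace) {
        rows.subMap(namespace + ":", namespace + ":\uffff").clear();
    }

    public int size() {
        return rows.size();
    }
}
```

The range delete relies on the keys being ordered, which matches how a prefix scan-and-delete would work against the real dataset.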
  • Pros

    • Fewer resources used: only 2 containers in total rather than 2 containers per namespace, and only one dataset.
    • Only one app for using Hydrator across namespaces rather than an app per namespace; less clutter.
    • New extensions could be added to the same namespace to support other use cases in the future.
  • Cons

    • Using a single dataset to store all drafts across namespaces is arguably less secure.
    • Users won't be able to create a new namespace called "Extensions", as it will be reserved.

Open Questions

  • How do we delete the drafts when the namespace is deleted?
  • When should this service be stopped?
  • What is the availability story for the service?
  • Security
    • If we decide to add more capability to the Hydrator back-end app (e.g., pipeline validation/deploy), then in a secure environment, can the hydrator-service discover the appropriate CDAP service and call the appropriate endpoints?

Option #3 (based on discussion with Terence)

No new user-level apps are deployed. The preference store is used to store user drafts of Hydrator apps.

'configurePipeline' can be changed to return partial results: it can return a PluginSpecification with possible values for the missing information in the plugin config; the PluginSpecification is then serialized into the ApplicationSpecification and returned to the user.

Example:

  1. Hydrator makes a call to the preference store to save a namespaced draft; to delete drafts, the preference store's delete endpoint is called for those drafts. If the user deletes the namespace manually from the CDAP CLI, the preference store drops everything in that namespace, including the drafts.

  2. The plugin configure stage will accept an incomplete config and will create a PluginSpecification with possible values for the incomplete config.

    1. Example: a user is using a DBSource plugin and provides connectionString, userName, and password. The UI hits the /validate endpoint with this config; DBSource's configurePlugin is called, inspects the config, notices that the required field 'tableName' is missing, connects to the database, gets the list of table names, writes this list into the PluginSpecification, and returns failure.

    2. The user sees the failure, reads the specification to get the list of tables, selects the table of interest, and makes the same call again. DBSource's configurePlugin now notices that the schema and the 'import' field are missing; it populates the schema information in the spec and returns failure.

    3. The user fills in the 'import' and 'count' queries, adjusts the schema appropriately, and makes the same call. All the necessary fields are now present and valid, so the DBSource plugin succeeds at this stage and the user proceeds to the next stage.
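This iterative configure flow can be sketched as a simplified in-memory simulation. The class names, field names, and the hint structure are illustrative assumptions, not CDAP's actual PluginSpecification API:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simulates a configure stage that accepts an incomplete config and, on
// failure, returns hints (e.g. the list of tables) for the missing field.
public class ConfigureStage {

    public static class Result {
        public final boolean success;
        public final Map<String, List<String>> hints; // missing field -> possible values
        Result(boolean success, Map<String, List<String>> hints) {
            this.success = success;
            this.hints = hints;
        }
    }

    // Stand-in for "connect to the database and list the tables".
    private static List<String> listTables() {
        return Arrays.asList("purchases", "history");
    }

    public static Result configure(Map<String, String> config) {
        Map<String, List<String>> hints = new HashMap<>();
        if (!config.containsKey("tableName")) {
            // Missing table: fail, but tell the caller what tables exist.
            hints.put("tableName", listTables());
            return new Result(false, hints);
        }
        if (!config.containsKey("importQuery")) {
            // Missing import query: fail with a suggested default.
            hints.put("importQuery",
                Arrays.asList("SELECT * FROM " + config.get("tableName")));
            return new Result(false, hints);
        }
        return new Result(true, hints); // all required fields present
    }
}
```

Each failed call narrows the missing fields, which is how the UI can drive the multi-step dialog described in the example above.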


REST API DraftsHttpHandler:

Each entry below lists the HTTP request type, endpoint, request body (if any), response statuses, and response body (if any).

POST /namespaces/{namespace-id}/drafts/{draft-id}/

Request body:
{
  "config": { ... }
}

Responses:
  200 OK: draft created and saved successfully
  409 CONFLICT: draft-id already exists
  500: error while creating the draft

PUT /namespaces/{namespace-id}/drafts/{draft-id}/

Request body:
{
  "config": { ... }
}

Responses:
  200 OK: draft updated successfully
  404 NOT FOUND: draft doesn't exist, so it cannot be updated
  500: error while updating the draft

GET /namespaces/{namespace-id}/drafts/{draft-id}/

Responses:
  200 OK: returns all versions of the draft identified by draft-id
  404: draft not found
  500: error while getting the draft

Response body:
[
  {
    "timestamp": "...",
    "config": {
      "source": { ... },
      "transforms": [ ... ],
      "sinks": [ ... ],
      "connections": [ ... ]
    }
  },
  ...
]

GET /namespaces/{namespace-id}/drafts/{draft-id}/versions/{version-number}

A version-number of -1 refers to the latest version.

Responses:
  200 OK: returns the version of the draft identified by draft-id and version-number
  404: draft with that version not found
  500: error while getting the draft

Response body:
{
  "timestamp": "...",
  "config": {
    "source": { ... },
    "transforms": [ ... ],
    "sinks": [ ... ],
    "connections": [ ... ]
  }
}

GET /namespaces/{namespace-id}/drafts/

Responses:
  200 OK: returns the names of all saved drafts
  500: error

Response body:
[
  "streamToTPFS",
  "DBToHBase",
  ...
]

DELETE /namespaces/{namespace-id}/drafts/

Responses:
  200 OK: successfully deleted all drafts
  500: error while deleting

DELETE /namespaces/{namespace-id}/drafts/{draft-id}

Responses:
  200 OK: successfully deleted the specified draft
  404: draft does not exist
  500: error while deleting

The DraftsHttpHandler can make use of the ConfigStore, taking an approach similar to the one in the PreferenceHttpHandler.

DraftsHttpHandler->DraftStore->ConfigStore.

 

ConfigStore Existing methods  :

void create(String namespace, String type, Config config) throws ConfigExistsException;

void createOrUpdate(String namespace, String type, Config config);

void delete(String namespace, String type, String id) throws ConfigNotFoundException;

List<Config> list(String namespace, String type);

Config get(String namespace, String type, String id) throws ConfigNotFoundException; 

void update(String namespace, String type, Config config) throws ConfigNotFoundException;

ConfigStore new methods:

Config get(String namespace, String type, String id, int version) throws ConfigNotFoundException; // get a version of a draft
List<Config> getAllVersions(String namespace, String type, String id) throws ConfigNotFoundException; // get all the versions of the draft
void delete(String namespace, String type); // type -> drafts; delete all drafts in the namespace

 

Existing Config class: 

 

public final class Config {
  private final String id; // draft-id
  private final Map<String, String> properties; // config -> JSON config, plus other properties, e.g. timestamp -> current time
}
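A minimal in-memory model of the DraftStore layering and the proposed versioned gets might look like the following. This is illustrative only: strings stand in for Config objects, and a map stands in for config.store.table:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative in-memory model of a versioned DraftStore: each (namespace, id)
// maps to a list of config versions, mirroring the proposed ConfigStore
// methods get(namespace, type, id, version) and getAllVersions(...).
public class DraftStore {
    private final Map<String, List<String>> store = new HashMap<>();

    private static String key(String namespace, String id) {
        return namespace + ":" + id;
    }

    // Each save appends a new version of the draft config.
    public void createOrUpdate(String namespace, String id, String config) {
        store.computeIfAbsent(key(namespace, id), k -> new ArrayList<>()).add(config);
    }

    // version >= 0 selects that version; -1 selects the latest.
    // Returns null where the real store would throw ConfigNotFoundException.
    public String get(String namespace, String id, int version) {
        List<String> versions = store.get(key(namespace, id));
        if (versions == null || versions.isEmpty()) {
            return null;
        }
        return version == -1 ? versions.get(versions.size() - 1) : versions.get(version);
    }

    public List<String> getAllVersions(String namespace, String id) {
        return store.get(key(namespace, id));
    }
}
```

The real DraftStore would delegate these operations to the ConfigStore rather than hold the map itself; the sketch only shows the versioning semantics in question.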


Questions:

1) The ConfigStore stores configs in "config.store.table". The table properties currently don't include versioning, and drafts would need versioning. Would adding it affect the "preferences" stored by the PreferenceStore? Would this also need a CDAP upgrade step to update the properties of the existing dataset?

 

User Stories (3.5.0)

For the Hydrator use case, the backend app should be able to support the Hydrator-related functionalities listed below:

  1. Query the plugins available for a given artifact and list them in the UI.
  2. Obtain the output schema of a plugin, given its input configuration.
  3. Deploy a pipeline and start/stop it.
  4. Query the status of a pipeline run, including the current stage of execution when there are multiple stages.
  5. Get the next scheduled run; query metrics and logs for pipeline runs.
  6. Create and save pipeline drafts.
  7. Get the input/output streams/datasets of a pipeline run and list them in the UI.
  8. Explore the data of the streams/datasets used in the pipeline, if they are explorable.
  9. Add new metadata about a pipeline and retrieve metadata by pipeline run, etc.
  10. Delete a Hydrator pipeline.
  11. The backend app's functionality should be limited to Hydrator; it should not act as a proxy for CDAP.

Having these abilities will remove the logic in the CDAP UI that makes the corresponding CDAP REST calls directly; this encapsulation simplifies the UI's interaction with the back-end and helps debug potential issues faster. In the future we could have more apps similar to the Hydrator app, so the back-end app should define and implement generic cases that can be reused across such apps, and should allow extensibility to support new features.

Generic Endpoints

Each entry below lists the HTTP request type, endpoint, request body (if any), description, and response body (if any).
GET /extensions/{back-end}/status

Responses:
  200 OK: platform service is available
  404: service unavailable

GET /extensions/{back-end}/program/{program-name}/runs

Responses:
  200 OK: runs of the program

Response body:
[
  "4as432-are425-..",
  "4az422-are425-..",
  ...
]

POST /extensions/{back-end}/program/{program-name}/action

Responses:
  200 OK: start/stop/status of the program

POST /extensions/{back-end}/program/{program-name}/metrics/query

Query params: startTime, endTime, scope

Request body: config with the time range and tags.

Responses:
  200 OK: return metrics

GET /extensions/{back-end}/program/{program-name}/logs/{log-level}

Query params: startTime, endTime

Responses:
  200 OK: return logs for the time range

GET /extensions/{back-end}/program/{program-name}/schedule

Responses:
  200 OK: get the next scheduled run time

Response body:
{
  "timestamp": "1455832171"
}

GET /extensions/{back-end}/program/{program-name}/datasets

Responses:
  200 OK: get all the input/output datasets used in the program

Response body:
[
  "purchases",
  "history",
  ...
]

POST /extensions/{back-end}/program/{program-name}/datasets/{dataset-name}/explore/{action}

Perform the explore action {preview, download, next} on the dataset.

Responses:
  200 OK: explore result

POST /extensions/{back-end}/program/{program-name}/metadata

Request body:
{
  "key": "...",
  "value": "..."
}

Store the metadata supplied in JSON for this program.

Responses:
  200 OK

GET /extensions/{back-end}/program/{program-name}/metadata

Get the metadata added for this program.

Responses:
  200 OK: metadata result

Response body:
{
  "key": "...",
  "value": "..."
}

DELETE /extensions/{back-end}/program/{program-name}/metadata

Responses:
  200 OK: successfully deleted the metadata added for the program