

Checklist

  •  User Stories Documented
  •  User Stories Reviewed
  •  Design Reviewed
  •  APIs reviewed
  •  Release priorities assigned
  •  Test cases reviewed
  •  Blog post

Introduction 

CDAP pipelines can run in various environments: native (Hadoop/sandbox) or remote (GCP/EMR/Azure). Various CDAP plugins are capable of running only in specific environments. For example, transactional plugins like CDAPTable are not capable of running remotely and can only be used natively. Furthermore, various plugins are capable of running only on certain versions of the underlying processing or storage platform. For example, a plugin might only be capable of running on Spark 2 and not on Spark 1.

Plugins can also be either capable or incapable of various business rules. For example, a plugin might be PII capable. A CDAP administrator might want to make only PII capable plugins available to pipeline developers to ensure compliance.

It is a bad user experience when a user builds a pipeline only for it to fail because it was incapable of running in the environment or of satisfying a business rule. To improve the user experience, CDAP should support filtering of the plugins available to pipeline developers depending on their capability.


Note: Compatibility and capability are used interchangeably in this document. For the purpose of this document they have the same meaning and can be interpreted as/replaced with one another.

Goals

There are three goals which need to be achieved to improve the user experience around capability of plugins:

  • A plugin developer should be able to easily and effectively specify the capability of the plugin being developed.

  • CDAP platform should be able to capture and provide capability information of plugins.

  • If an incapable plugin runs, it should fail early and with an appropriate message.

User Stories 

  1. As a CDAP plugin developer, I should be able to specify capability of my plugin.

  2. As a CDAP administrator, I want to enforce that plugins have certain capabilities to run in my CDAP instance.

  3. As a CDAP pipeline developer and/or CDAP administrator, if a pipeline containing an incapable plugin runs I would like it to fail early and with an appropriate error message.

Scenarios

  • Scenario 1: Specifying Capability

    • Scenario 1.1

Alice is a CDAP plugin developer who is developing a CDAP Dataset plugin (transactional). Her plugin is supported only in a transactional environment. She would like to specify this in her plugin so that pipeline developers don't use her plugin in other modes.

    • Scenario 1.2

Alice is also developing an Action plugin which stores some state information in a CDAP Dataset. Since her action plugin uses a CDAP Dataset it can only run in the native environment. She would like to specify this in her plugin so that pipeline developers don't use her plugin in other environments.

    • Scenario 1.3

Alice is a CDAP plugin developer who is developing a Spark ML transform which uses libraries available only in Spark 2, and she would like to specify that her plugin is only capable of running on Spark 2.

    • Scenario 1.4

Alice is a CDAP plugin developer who is developing a PII capable plugin and she would like to specify that her plugin is PII capable, so that when she deploys it in a CDAP instance which only allows PII capable plugins, her plugin can be run and used by pipeline developers.

  • Scenario 2: Plugin Filtering

    • Scenario 2.1

Bob is a data analyst who is evaluating CDAP. He is running CDAP in a particular environment and he sees a lot of plugins which do not seem capable of running in his environment. He would like to be able to filter plugins on capability to see only the plugins which are capable of running in his environment.

    • Scenario 2.2

Eve is a CDAP administrator who is setting up a CDAP instance in the cloud. She would like to enforce that only plugins which are capable of running in a remote environment are available to pipeline developers for use.

    • Scenario 2.3

Eve is a CDAP administrator who is trying to set up a CDAP environment in production for data processing. Eve's organization has strict compliance requirements and she wants to allow only plugins which meet certain compliance rules to be used by the data analysts in her organization. Furthermore, she does not want any data analyst to be able to override her settings and run non-compliant plugins.


  • Scenario 3: Failing Early and Gracefully

    • Scenario 3.1

Bob is trying to develop a pipeline to process some data which is stored in a CDAP Table. He builds a pipeline with the appropriate plugins and configuration and the pipeline fails at runtime with a lot of cryptic error messages in the logs. Bob rechecks his plugin configurations and tries to debug the issue but he is not able to run the pipeline successfully. Disappointed with the platform, Bob reaches out to the CDAP support group for help. After some back and forth Bob gets to know that he was running the plugin remotely and that to run this pipeline he needs to set the correct compute profile at runtime. It makes sense to him, but he wonders: if only the error messages in the logs had pointed this out, he could easily have corrected it himself, saving the time spent with support.

    • Scenario 3.2

Bob imports a pipeline which was sent to him by another pipeline developer and tries to run it. The pipeline fails for him but works perfectly fine for his colleague. Bob tries to debug the issue by looking into the logs but is again greeted by cryptic error messages. He reaches out to CDAP support and is told that he is running his pipeline in an incorrect mode. He gets really furious as to why the CDAP logs do not show any information for such a common problem.


Design

API

A plugin developer will be responsible for specifying the capabilities of the plugin. Plugin developers can use an annotation provided by the platform to specify this, just like they specify the Name or Description of the plugin.

To support this, the following annotation will be added:

Code Block
languagejava
titleCapability
/**
 * Annotates different environments, versions and other capabilities which a plugin is capable of.
 */
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface Capability {
  String[] value();
}
Code Block
languagejava
titleCapabilities
/**
 * Defines the different capability options for CDAP programs/plugins to specify
 */
public final class Capabilities {
    public static final String NATIVE = "native"; // defines capability of running natively

    public static final String REMOTE = "remote"; // defines capability of running remotely
}

This will allow the plugin developer to specify various capabilities exposed by the platform in the following way:

Code Block
languagejava
titleMockSource
@Plugin(type = BatchSource.PLUGIN_TYPE)
@Name("Mock")
@Capability({Capabilities.NATIVE, "spark2"})
public class MockSource extends BatchSource<byte[], Row, StructuredRecord> {
  ...
}

Plugins can also be annotated with custom values to specify capability of business rules. For example, a plugin developer can specify that the plugin is PII capable by annotating it with

Code Block
languagejava
@Capability({Capabilities.NATIVE, "spark2", "PII"})

Capability values will be case insensitive. If no system capability option is specified then the default system capability will be used: the plugin will be considered capable of all system defined capabilities but not of any user defined capability options such as PII. Custom capabilities need to be specifically defined by the plugin developer, and their presence or absence does not override the default system defined capabilities. See Plugin Changes.
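As an illustrative sketch of the default rule described above (the class and method names are assumptions, not actual CDAP code), the effective capabilities of a plugin could be computed as:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch of the default-capability rule: a plugin which declares no
// capabilities is treated as having every system defined capability
// (native, remote, spark1, spark2) but none of the custom ones such as "pii".
public class DefaultCapabilitySketch {

  private static final Set<String> SYSTEM_CAPABILITIES = Collections.unmodifiableSet(
      new LinkedHashSet<>(Arrays.asList("native", "remote", "spark1", "spark2")));

  /**
   * Returns the effective capabilities of a plugin: the declared set when any
   * capability was specified, otherwise all system defined capabilities.
   */
  public static Set<String> effectiveCapabilities(Set<String> declared) {
    return declared.isEmpty() ? SYSTEM_CAPABILITIES : declared;
  }
}
```

Note that a plugin which declares even one capability opts out of the defaults entirely, matching the rule that custom capabilities must be specified explicitly.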

Allowing users to specify custom capability values opens up many issues with standardization of such values. For example, if two plugin developers are developing different plugins they may choose to annotate their plugins with different names for the same business rule. This might lead to confusion and many filter options representing the same business rule. The CDAP Metadata system currently suffers from the same problem, where two different tags, say 'sensitive' and 'confidential', might be used in a similar context. One way to achieve standardization of the capability options which plugin developers can use might be to have the CDAP platform specifically define the allowed capability options. For simplicity, this release will not support any mechanism of standardization; it will be the responsibility of the plugin developers to use a consistent taxonomy among each other.

Platform

Processing

Currently, when a plugin is deployed in CDAP an inspection is performed to collect various information about the plugin. In this step the capability information will also be collected.

The capability information will be processed in the artifact inspection stage by the existing ArtifactInspector class. Plugins will be inspected for the @Capability annotation and, if found, all the capability information will be collected. The capability information will be stored in PluginClass, which is a field member of ArtifactClasses.


Code Block
languagejava
titlePluginClass
/**
 * Contains information about a plugin class.
 */
@Beta
public class PluginClass {

  private final String type;
  private final String name;
  private final String description;
  private final String className;
  private final String configFieldName;
  private final Map<String, PluginPropertyField> properties;
  private final Set<String> endpoints;
  private final Set<String> capability; // all the capability of this plugin
}
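As a sketch of the inspection step described above (the class and helper names here are illustrative, not the actual ArtifactInspector code), the capability values could be collected via reflection on the annotated plugin class:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch of how the inspection stage could collect capability
// information from the @Capability annotation defined earlier.
public class CapabilityInspectorSketch {

  // Same shape as the @Capability annotation defined above.
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.TYPE)
  public @interface Capability {
    String[] value();
  }

  // Example plugin class annotated with system and custom capabilities.
  @Capability({"native", "spark2", "PII"})
  public static class MockSource { }

  /**
   * Collects the capabilities declared on a plugin class, normalizing to lower
   * case since capability values are case insensitive. Returns an empty set
   * when the plugin declares no capabilities.
   */
  public static Set<String> inspectCapabilities(Class<?> pluginClass) {
    Capability annotation = pluginClass.getAnnotation(Capability.class);
    if (annotation == null) {
      return Collections.emptySet();
    }
    Set<String> capabilities = new LinkedHashSet<>();
    for (String value : annotation.value()) {
      capabilities.add(value.toLowerCase());
    }
    return capabilities;
  }
}
```

The collected set is what would then be stored in the capability field of PluginClass shown above.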


Storage

Capability information will be stored at the plugin level, as one artifact can have n plugins and each of them will have its own capability information.

Approach 1: Artifact Store (Selected)

Plugin capability information will be stored in PluginClass, which is contained in ArtifactMeta. Hence, the capability information of a plugin can easily be stored in the ArtifactStore as part of ArtifactMeta itself. This allows all the plugin information to be stored in one store.

Approach 2: Metadata (Considered)

The capability information can also be stored as system metadata of the plugin by the ArtifactSystemMetadataWriter. Since Plugin is not an EntityId in CDAP, the Metadata system's ability to store metadata for custom entities can be used, where the plugin will be a custom entity under Artifact.

The custom entity hierarchy will be as follows:

Code Block
namespace=<namespace-name> | artifact=<artifact-name> | version=<artifact-version> | plugin=<plugin-name>

Note: | and = are just used as separators here for readability. In the actual serialized form byte-length encoding is used.

The capability information will be stored as a metadata property where the key will be 'capability' and the value a list of unique comma separated strings representing the capabilities.


Code Block
capability = remote, native, pii

Note: = and , are the standard key-value and individual value separators in Metadata storage.

Comparison

The table below shows the comparison between the two approaches.


Approach 1: Artifact Store

Pros

  • Simplicity: All the plugin information is stored in one store.

  • Single Lookup: To serve the get calls only one store lookup is needed.

Cons

  • Extensibility: In the future, if it is needed to support filtering of plugins on the fly, this approach will require much more work.

  • Standardization: If capability is stored separately, piggybacking on metadata standardization will not be possible.

Approach 2: Metadata

Pros

  • Extensibility: Storing the capability information as metadata allows tagging and filtering of plugins on the fly. In Approach 1, if a plugin developer wants to change the plugin capability information the developer will have to make code changes and then rebuild and redeploy the plugin artifact.

  • Standardization: As mentioned earlier, standardization of capability options is required for a good user experience. The metadata system suffers from a similar problem. In the past, preferred tags were used to solve it, but they are now deprecated and an alternative is being discussed. Once standardization of metadata is solved it will implicitly solve standardization of capability options too.

Cons

  • Multiple Lookups: Getting capability information will require lookups of multiple tables and also multiple transactions.

  • User Experience: Getting capability information will require additional table lookups; when this operation is done for a lot of plugins the overall time to serve all the plugins with their capability information will be high, which might lead to slow loading of the plugins page in UI/clients.

Filtering

CDAP will support filtering of plugins at two levels. One will be for administrators to enforce strict environment and business rules; this will be done through a configuration property in cdap-site.xml. The other will be for data analysts, to help them see plugins which are capable of running in different environments and under different rules.

Admin Level Filtering

A new property will be added in cdap-site.xml which will specify certain requirements which a plugin needs to meet to be displayed/enabled. This configuration will be used by CDAP administrators to enforce strict environment and business level rules when they want to display/enable only certain plugins. Examples of this are Scenario 2.2 and Scenario 2.3.

Approach 1 (Selected)

Code Block
<property>
	<name>plugin.required.capabilities</name>
	<value>mode_cloud<<value>remote</value>
	<description>
		Comma separated list of capabilities values which are required by default. If a system level capability category is undefined no capability is required in that category and all plugins for that category will be displayed/enabled.
If this property is undefined or is empty no capability is required in any of the system defined categories.     </description>
</property>

Capabilities specified in this configuration will be considered mandatory and only the plugins which have these capabilities will be displayed. The plugins which do not have one of the capabilities specified here will be filtered out. Please see the Filtering Examples section for examples of different use cases.
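The filtering rule above amounts to a subset check against the configured values. A minimal sketch (illustrative class and method names, not actual CDAP code):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch of admin-level filtering driven by the plugin.required.capabilities
// configuration property.
public class PluginFilterSketch {

  /**
   * A plugin is displayed/enabled only if it has every capability listed in
   * plugin.required.capabilities. An empty requirement set disables filtering.
   */
  public static boolean isDisplayed(Set<String> pluginCapabilities, Set<String> requiredCapabilities) {
    return pluginCapabilities.containsAll(requiredCapabilities);
  }

  /** Parses the comma separated, case insensitive configuration value. */
  public static Set<String> parseRequiredCapabilities(String confValue) {
    if (confValue == null || confValue.trim().isEmpty()) {
      return new LinkedHashSet<>();
    }
    return Arrays.stream(confValue.split(","))
        .map(String::trim)
        .map(String::toLowerCase)
        .collect(Collectors.toCollection(LinkedHashSet::new));
  }
}
```

For example, with the configuration value "remote", a plugin annotated with only native capabilities is filtered out while one carrying the remote capability stays visible, which is the behavior walked through in the Filtering Examples section.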

Approach 2 (Considered)

Logical expressions can be used to specify the required capabilities. This approach gives the administrator much more flexibility in specifying the required capability. Consider a case where an administrator wants to enforce the following requirement:

  1. Either remote or native is required (a case while running in a sandbox environment)
  2. For remote the required capability is spark2 and for native the required capability is spark1

Approach 1 does not offer enough flexibility to address this use case. But with logical expressions an administrator can specify this requirement in the following way:


Code Block
(remote && spark2) || (native && spark1)

Although this approach gives the administrator much more flexibility to specify advanced capability requirements, it is not user friendly and expects the administrator to form correct logical expressions. As of now, there are no known use cases for such complex required capabilities and hence Approach 1, which is more user friendly, is favored. In the future, if there is a need for such capability, the handling of the provided configuration can be changed to support it. This will not require any upgrade step, just an update of the configuration value.

Comparison

Approach 1

Pros

  • Simple and user friendly
  • Lower probability of specifying an incorrect conf value

Cons

  • Can only support simple capability requirements

Approach 2

Pros

  • Allows user to specify complex capability requirements

Cons

  • Expects administrator to understand and form correct logical expressions
  • Higher probability of specifying an incorrect conf value


Any changes to the requirements will require a CDAP restart, which is acceptable since such changes are expected to happen very infrequently.

If a pipeline was created before a capability was required, i.e. the capability was added as a requirement in the above configuration after the pipeline deployment, then that pipeline will start failing with an appropriate error message. (User story 3)

Provisioner Checks

In some cases filtering based on capabilities does not ensure that a pipeline, once run, will not fail due to incapability. Consider the case of CDAP running in a sandbox environment: plugins which are capable of either native or remote execution are displayed to the user, and it is possible that a pipeline which was developed to run in the native environment is launched in the cloud because of the compute profile (set at the namespace level). In this case the pipeline will fail after provisioning is done. This is not ideal for the following reasons:

  1. The failure happens too late in the process and the user has to wait just to see a pipeline fail which would never have worked.
  2. Provisioning is a costly operation.

This issue will also be addressed by performing a check when a pipeline is run: the platform will check whether all the plugins are capable of running on the provisioned instance through an interface exposed by the Provisioner.

Code Block
languagejava
public interface Provisioner {

  /**
   * Returns a Set of capabilities which are explicitly defined to be supported by the provisioner,
   * or an empty set if the provisioner does not define any explicit capabilities
   */
  Set<String> getCapabilities();
  ...
  ...
}

All provisioners which support some specific capability will expose it using the above API. For example, the native provisioner will return ["tephratx"] and the Amazon AWS provisioner will return ["aws"].

As mentioned above, plugins define their requirements when they explicitly need some requirement to be met to run successfully. When a pipeline/program containing a plugin is run, a check will be performed to ensure all the plugin requirements are met by the capabilities of the provisioner.
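The pre-provisioning check described above can be sketched as another subset test (an illustrative helper; the names are assumptions, not platform code):

```java
import java.util.Set;

// Sketch of the check performed before provisioning: every requirement of
// every plugin in the program must be met by the capabilities exposed by the
// selected provisioner's getCapabilities().
public class ProvisionerCheckSketch {

  /**
   * Returns true if the provisioner's capabilities cover all program
   * requirements; otherwise the run should be failed immediately with an
   * incapability error, before any cluster is provisioned.
   */
  public static boolean canRun(Set<String> programRequirements, Set<String> provisionerCapabilities) {
    return provisionerCapabilities.containsAll(programRequirements);
  }
}
```

Failing this check before provisioning avoids both problems listed earlier: the user is notified immediately, and no costly cluster is created for a run that could never succeed.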

To support this, plugin information will need to be stored with the ProgramSpecification. Currently plugin information is stored in the ApplicationSpecification and not in the ProgramSpecification. The ProgramSpecification will be modified to store the plugin requirement information too.

This can be generalized to store not just plugin information for programs but any set of requirements for a program in the ProgramSpecification. In the future this will allow supporting a feature where a developer can specify custom requirements while configuring/implementing a program. For example, if a service or a worker uses a CDAP Dataset (using Tephra transactions), it can be annotated to represent this requirement, or the requirement can be specified during program configuration the same way name, description etc. are. (This is out of scope of 5.1)

For now requirements will be populated from the plugin requirements if a plugin is present in the program.

Code Block
languagejava
public interface ProgramSpecification {
  /**
   * @return the requirements for this program or an empty set if the program does not have any specific requirements defined
   */
  Set<String> getRequirements();
}



Pipeline Developer Filtering (Beyond 5.1)

The second level of filtering capability is provided to the pipeline developer. A pipeline developer can further filter the available plugins to see only the plugins which have a certain capability.

Approach 1 (Selected)

Currently, available plugins are retrieved by calling:

Code Block
GET /namespaces/{namespace-id}/artifacts/{artifact-name}/versions/{artifact-version}/extensions/{plugin-type}

This call returns a summary of plugins for the provided plugin-type. The result will now only include plugins which meet the required capabilities defined by the plugin.required.capabilities configuration, and the response will now contain the capability options of each plugin. This is the second level of filtering and hence it filters the plugins from the first set (the plugins which are enabled by the plugin.required.capabilities configuration, not all the plugins which exist in the system).



Code Block
[
 {
   "name": "Plugin1",
   "type": "dummy",
   "description": "This is plugin1",
   "className": "co.cask.cdap.internal.app.runtime.artifact.plugin.Plugin1",
   "artifact": {
     "name": "plugins",
     "version": "1.0.0",
     "scope": "USER"
   },
   "capabilities": [
     "remote",
     "spark2"
   ]
 },
 {
   "name": "Plugin2",
   "type": "dummy",
   "description": "This is plugin2",
   "className": "co.cask.cdap.internal.app.runtime.artifact.plugin.Plugin2",
   "artifact": {
     "name": "plugins",
     "version": "2.0.0",
     "scope": "USER"
   },
   "capabilities": [
     "remote",
     "spark1"
   ]
 }
]


(Edwin Elia: Please provide feedback for the below UI based design decision)

The client/UI will be responsible for processing the capability list of all the plugins and, if needed, rendering a view which shows all the unique capability values to allow further filtering.

Pipeline developers will be able to further filter the available plugins and see the plugins which have a certain capability. The capability option will be passed as a query parameter to the above call.


Code Block
GET /namespaces/{namespace-id}/artifacts/{artifact-name}/versions/{artifact-version}/extensions/{plugin-type}?capabilities=spark2


This will further filter the plugins from the above list to display only the plugins which have the spark2 capability.

Code Block
[
 {
   "name": "Plugin1",
   "type": "dummy",
   "description": "This is plugin1",
   "className": "co.cask.cdap.internal.app.runtime.artifact.plugin.Plugin1",
   "artifact": {
     "name": "plugins",
     "version": "1.0.0",
     "scope": "USER"
   },
   "capabilities": [
     "remote",
     "spark2"
   ]
 }
]


Including the capability information of plugins in the response is beneficial as it allows the UI/client to subdivide or label individual plugins based on their capabilities.

Note:


  1. For simplicity, in 5.1 the second level of filtering will initially be supported on only one condition, i.e. a user can only pass one capability option as a query parameter. The current design supports providing multiple capability options as query parameters but there is no known use case for it.

  2. The values provided in plugin.required.capabilities take precedence over the filtering parameters specified by the pipeline developer as query parameters. The query filter is applied only to plugins which already meet plugin.required.capabilities; if no such plugin has the queried capability, the call will return an empty result even if plugins with that capability exist in the system.
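The interaction between the two filtering levels can be sketched as follows (an illustrative helper with assumed names, not actual CDAP code):

```java
import java.util.Set;

// Sketch of the two filtering levels: the developer's query parameter can
// only narrow the set already permitted by plugin.required.capabilities.
public class TwoLevelFilterSketch {

  /**
   * Returns true if the plugin passes both the admin-enforced requirements
   * and the pipeline developer's query filter.
   */
  public static boolean isVisible(Set<String> pluginCapabilities,
                                  Set<String> requiredCapabilities,
                                  Set<String> queryCapabilities) {
    // first level: admin-enforced requirements from plugin.required.capabilities
    if (!pluginCapabilities.containsAll(requiredCapabilities)) {
      return false;
    }
    // second level: the developer's query filter narrows the remaining set
    return pluginCapabilities.containsAll(queryCapabilities);
  }
}
```

Because the first check always runs, a query for a capability that no admin-permitted plugin carries yields an empty result, which is the precedence behavior described in note 2 above.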


Approach 2 (Considered)

As mentioned before, available plugins are currently rendered by calling:


Code Block
GET /namespaces/{namespace-id}/artifacts/{artifact-name}/versions/{artifact-version}/extensions/{plugin-type}

This call returns a summary of Plugins for the provided plugin-type.

An additional REST API will be added which provides all the capability options known in the system.


Code Block
GET /namespaces/{namespace-id}/capabilities


returns

Code Block
[
 "remote",
 "spark2",
 "spark1",
 "native"
]

The values from this list of capability options can be passed as query parameters:

Code Block
GET /namespaces/{namespace-id}/artifacts/{artifact-name}/versions/{artifact-version}/extensions/{plugin-type}?capabilities=spark2

This will further filter the plugins from the above list to display only the plugins which have the spark2 capability.

Code Block
[
 {
   "name": "Plugin1",
   "type": "dummy",
   "description": "This is plugin1",
   "className": "co.cask.cdap.internal.app.runtime.artifact.plugin.Plugin1",
   "artifact": {
     "name": "plugins",
     "version": "1.0.0",
     "scope": "USER"
   },
   "capabilities": [
     "remote",
     "spark2"
   ]
 }
]


Comparison


Approach

Pros

Cons

Approach 1

Pros

  • Does not require the UI to make an additional call to get a list of all capability options

Cons

  • Extra work on the client/UI side to parse all the capability options to process/render them

Approach 2

Pros

  • Allows a client to query for all the capability options present in the system

  • Easier for CDAP UI to render a selector widget to select/deselect capability options

Cons

  • CDAP UI will have to make an additional call to fetch this information before fetching plugins


Filtering Examples

To understand how the configuration and filtering will work in the real world, let us consider the few use cases known so far and see how they are addressed by the above design.

Cloud

CDAP is running in a cloud environment and the administrator only wants plugins capable of running in the cloud to be displayed.

plugin.required.capabilities: remote

  • CDAP Table Source, annotated @Capability({Capabilities.NATIVE, "spark2", "PII"}): Filtered out

  • BigTable Source, annotated @Capability({Capabilities.REMOTE, "PII"}): Visible

  • AWS S3 Source, annotated @Capability({Capabilities.REMOTE, Capabilities.NATIVE}): Visible

In-Prem Hadoop

CDAP is running in a Hadoop environment and the administrator only wants to allow plugins capable of running in Hadoop, where outside cloud or server connections are restricted.

plugin.required.capabilities: native

  • CDAP Table Source, annotated @Capability({Capabilities.NATIVE, "spark2", "PII"}): Visible

  • BigTable Source, annotated @Capability({Capabilities.REMOTE, "PII"}): Filtered out

  • AWS S3 Source, annotated @Capability({Capabilities.REMOTE, Capabilities.NATIVE}): Visible

In-Prem Hadoop

CDAP is running in a Hadoop environment and the administrator only wants to allow plugins capable of running in both Hadoop and the cloud.

plugin.required.capabilities: native, remote

  • CDAP Table Source, annotated @Capability({Capabilities.NATIVE, "PII"}): Filtered out

  • BigTable Source, annotated @Capability({Capabilities.REMOTE, "PII"}): Filtered out

  • AWS S3 Source, annotated @Capability({Capabilities.REMOTE, Capabilities.NATIVE}): Visible

Sandbox

CDAP is running in sandbox and the administrator wants to allow plugins which are capable of running natively or in the cloud.

plugin.required.capabilities: (empty)

  • CDAP Table Source, annotated @Capability({Capabilities.NATIVE, "spark2", "PII"}): Visible

  • BigTable Source, annotated @Capability({Capabilities.REMOTE, "PII"}): Visible

  • AWS S3 Source, annotated @Capability({Capabilities.REMOTE, Capabilities.NATIVE, "emr"}): Visible

Note: When plugin.required.capabilities is empty, the instance does not define any capability to be required for any category and hence all plugins will be shown. However, if a plugin is used with an incompatible profile then the pipeline will fail before provisioning. Consider a case where CDAPTableSource is being used in a pipeline and the profile is set to the EMR provisioner. The EMR provisioner will check that all the plugins declare the "emr" capability. Since the requirement will not be met, the pipeline will be marked as failed immediately due to incapability. If the pipeline had only "emr" capable plugins like AWSS3Source, the requirement would be met, the cluster would be provisioned and the pipeline would run.

Sandbox: Spark 2

CDAP is running in sandbox and the administrator wants to allow plugins which are capable of running with Spark 2.

plugin.required.capabilities: spark2

  • CDAP Table Source, annotated @Capability({Capabilities.NATIVE, "spark2", "PII"}): Visible

  • BigTable Source, annotated @Capability({Capabilities.REMOTE, "PII"}): Filtered out

  • AWS S3 Source, annotated @Capability({Capabilities.REMOTE, Capabilities.NATIVE}): Filtered out

Note: Here plugin.required.capabilities does not specify any requirement for mode, which means the system does not define any mode capability to be required and hence plugins supporting any mode are candidates for being displayed. It does define a Spark requirement, so only plugins which are capable of Spark 2 will be shown.

Sandbox: Compliance Required

CDAP is running in sandbox and the administrator wants to allow plugins which are capable of running natively or in the cloud, but also wants to satisfy a compliance need, so plugins must have the PII capability.

plugin.required.capabilities: PII

  • CDAP Table Source, annotated @Capability({Capabilities.NATIVE, "spark2", "PII"}): Visible

  • BigTable Source, annotated @Capability({Capabilities.REMOTE, "PII"}): Visible

  • AWS S3 Source, annotated @Capability({Capabilities.REMOTE, Capabilities.NATIVE}): Filtered out

Cloud: Compliance Required

CDAP is running in the cloud and the administrator wants to allow only plugins which are capable of running in the cloud and are PII capable.

plugin.required.capabilities: remote, PII

  • CDAP Table Source, annotated @Capability({Capabilities.NATIVE, "spark2", "PII"}): Filtered out

  • BigTable Source, annotated @Capability({Capabilities.REMOTE, "PII"}): Visible

  • AWS S3 Source, annotated @Capability({Capabilities.REMOTE, Capabilities.NATIVE}): Filtered out

Preview

A pipeline preview runs in the native (local machine) JVM. Hence, any plugin which is capable of running in the native environment is capable of preview. If a plugin which is not capable of running natively is run in a preview, the preview operation will fail with an incapability error as soon as the preview is started, before the actual preview process runs.

Failing Early and Gracefully

Filtering incapable plugins does not guarantee that a pipeline containing an incapable plugin will never be run in the system. This might happen for various reasons; for example, a pipeline was created before a capability was required, i.e. the capability was added as a requirement in the plugin.required.capabilities configuration after the pipeline was deployed.

Whenever an incapable plugin is encountered, the pipeline should fail early and gracefully, ideally as early as possible. A pipeline can fail at different stages:

  1. Deploy time: Since plugins which are incapable of running will be filtered out, a user will not be able to create a pipeline using them. If a user imports a pipeline JSON which contains an incapable plugin, then the deployment will fail with an Artifact/Plugin not found exception. Note: The exception here will not be an incapability exception, since that would expose the existence of the plugin in the system, which is not ideal.
  2. Run time (before run): In addition to filtering plugins, checks will also be performed before provisioning. These will fail the pipeline if any of the plugins is not capable of running in the provisioned environment, for example running a plugin in AWS which is not capable of running in AWS and can only run in Azure.

When a pipeline fails due to incapability, the failure will be surfaced to the user through logs and an error message in the UI.
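A rough sketch of the run-time (before run) check, with illustrative names only (IncapabilityException and checkCapabilities are assumptions, not actual CDAP APIs): each stage's capabilities are verified against the target environment before provisioning, so the pipeline fails before any resources are created.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of a pre-run capability check; not actual CDAP code.
public class PreRunCheck {

  /** Thrown when a stage cannot run in the target environment. */
  static class IncapabilityException extends RuntimeException {
    IncapabilityException(String message) {
      super(message);
    }
  }

  /**
   * Fails fast, before provisioning, if any stage is missing a capability
   * required by the target environment.
   */
  static void checkCapabilities(Map<String, Set<String>> stageCapabilities,
                                Set<String> requiredCapabilities) {
    for (Map.Entry<String, Set<String>> stage : stageCapabilities.entrySet()) {
      if (!stage.getValue().containsAll(requiredCapabilities)) {
        throw new IncapabilityException(
            "Stage '" + stage.getKey() + "' is not capable of running with: " + requiredCapabilities);
      }
    }
  }

  public static void main(String[] args) {
    Map<String, Set<String>> stages = new LinkedHashMap<>();
    stages.put("BigTable Source", new HashSet<>(Arrays.asList("mode_remote", "PII")));
    // Passes silently: the stage declares every required capability.
    checkCapabilities(stages, new HashSet<>(Arrays.asList("mode_remote")));
    System.out.println("all stages capable");
  }
}
```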

Plugin changes

The above plugin design tries to minimize the required plugin changes. If a plugin does not specify any capability option, then the default capability will be used, according to which the plugin will be considered capable of all system-defined capabilities but not of any user-defined capability options such as PII. Custom capabilities need to be specifically defined by the plugin developer. The system-defined capabilities with which a plugin will be considered capable, in the absence of any specified capability, are the following:

  1. Native
  2. Remote
  3. Spark 1
  4. Spark 2

No plugin change is required after upgrading CDAP to 5.1, as plugins from previous versions will be considered capable of all system-defined capabilities as mentioned above.
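As a minimal sketch of the default rule (the constants and helper name are assumptions for illustration): an unannotated plugin is treated as capable of every system-defined capability, but never of a custom capability such as PII.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of the default-capability rule for unannotated plugins.
public class DefaultCapabilities {

  // System-defined capabilities assumed for plugins without a @Capability annotation.
  static final Set<String> SYSTEM_DEFINED = Collections.unmodifiableSet(
      new HashSet<>(Arrays.asList("mode_native", "mode_remote", "spark_1", "spark_2")));

  /** Returns the declared capabilities, or all system-defined ones if none were declared. */
  static Set<String> effectiveCapabilities(Set<String> declared) {
    return declared.isEmpty() ? SYSTEM_DEFINED : declared;
  }

  public static void main(String[] args) {
    // An unannotated plugin gets every system-defined capability, but not "PII".
    Set<String> caps = effectiveCapabilities(Collections.emptySet());
    System.out.println(caps.contains("mode_native")); // true
    System.out.println(caps.contains("PII"));         // false
  }
}
```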

However, all plugins which require transactions or are known to be compatible with only one version of Spark will need to be annotated so that the default capability is not inferred for them. The table below lists all the plugins which need to be specifically annotated and the required annotation:


| Plugin Name | Capability Annotation |
| --- | --- |
| CDAPTableDataset (Source & Sink) | @Capability({Capabilities.NATIVE}) |
| KVTableSource (Source & Sink) | @Capability({Capabilities.NATIVE}) |
| AvroSnapshotDataset (Source & Sink) | @Capability({Capabilities.NATIVE}) |
| ParquetSnapshotDataset (Source & Sink) | @Capability({Capabilities.NATIVE}) |
| AvroTimePartitionedDataset (Source & Sink) | @Capability({Capabilities.NATIVE}) |
| ParquetTimePartitionedDataset (Source & Sink) | @Capability({Capabilities.NATIVE}) |
| KafkaStreamingSource (kafka-plugins-0.8) | @Capability({"spark1_2.10"}) |
| KafkaStreamingSource (kafka-plugins-0.10) | @Capability({"spark2_2.11"}) |
| WIP | WIP |



API changes

New Programmatic APIs

Capability Annotation

Code Block
languagejava
/**
 * Annotates the different environments and versions in which the element is supported.
 */
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface Capability {
  String[] value();
}
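Since the annotation has runtime retention, the platform can read a plugin's declared capabilities via reflection. The self-contained sketch below bundles a local copy of the annotation with a hypothetical plugin class (MyTableSource is illustrative only) to show how that works.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Arrays;

// Self-contained sketch: a local copy of the Capability annotation plus an
// annotated plugin class, showing how capabilities can be read at runtime.
public class CapabilityDemo {

  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.TYPE)
  public @interface Capability {
    String[] value();
  }

  // Hypothetical plugin declaring its capabilities.
  @Capability({"mode_native", "spark2", "PII"})
  static class MyTableSource { }

  public static void main(String[] args) {
    // RUNTIME retention makes the declared values visible through reflection.
    Capability capability = MyTableSource.class.getAnnotation(Capability.class);
    System.out.println(Arrays.asList(capability.value())); // [mode_native, spark2, PII]
  }
}
```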


Deprecated Programmatic APIs

None

Updated Programmatic APIs

None

New REST APIs

None

Deprecated REST APIs

None

Updated REST APIs

Path: v3/namespaces/{namespace-id}/artifacts/{artifact-name}/versions/{artifact-version}/extensions/{plugin-type}?capabilities=mode_cloud
Method: GET
Description: Returns the plugin information, including its capabilities
Response Code: no change
Response:

Code Block
[
 {
   "name": "Plugin1",
   "type": "dummy",
   "description": "This is plugin1",
   "className": "co.cask.cdap.internal.app.runtime.artifact.plugin.Plugin1",
   "artifact": {
     "name": "plugins",
     "version": "1.0.0",
     "scope": "USER"
   },
   "capabilities": [
     "mode_cloud",
     "spark_2"
   ]
 }
]






CLI Impact or Changes

  • list artifact plugins <artifact-name> <artifact-version> <plugin-type> [<scope>] will be modified to take another parameter for filtering by capability, just like the REST API.

UI Impact or Changes

Security Impact 

None.

Test Scenarios

| Test ID | Test Description | Expected Results |
| --- | --- | --- |
| 1 | Deploying a plugin which does not have a Capability annotation | Plugin must be deployed and should be considered capable of all system-defined capabilities |
| 2 | Deploying a plugin which has a Capability annotation | Plugin must be deployed and should be capable of only the options defined in the annotation |
| 3 | Deploying a plugin with two Capability annotations (which might or might not have the same options) | Plugin must be deployed and should be capable of the union of capabilities defined in the annotations |
| 4 | Redeploying a plugin with an updated Capability annotation | Plugin must be redeployed and its capability information should be updated |
| 5 | Missing or empty plugin.required.capabilities in cdap-site.xml | All plugins in the system should be displayed |
| 6 | plugin.required.capabilities = mode_cloud | Only plugins with cloud capability should be displayed |
| 7 | plugin.required.capabilities = mode_native | Only plugins with native capability should be displayed |
| 8 | plugin.required.capabilities = spark_1 | Plugins which are capable of running in any mode and with Spark 1 should be displayed |
| 9 | plugin.required.capabilities = mode_native, mode_cloud | Plugins which are capable of both cloud and native mode should be displayed |
| 10 | plugin.required.capabilities = mode_native, spark_1 | Plugins which are capable of running in native mode and with Spark 1 should be displayed |
| 11 | plugin.required.capabilities = mode_native, pii | Plugins which are capable of running in native mode and are PII capable should be displayed |
| 12 | plugin.required.capabilities = mode_cloud, and the following call is made: GET /namespaces/{namespace-id}/artifacts/{artifact-name}/versions/{artifact-version}/extensions/{plugin-type}?capability=mode_native | Should return an empty result, as the system has only enabled cloud-capable plugins |
| 13 | plugin.required.capabilities = <empty>, and the following call is made: GET /namespaces/{namespace-id}/artifacts/{artifact-name}/versions/{artifact-version}/extensions/{plugin-type}?capability=mode_native | Should only return plugins which are capable of running in native mode |
| 14 | plugin.required.capabilities = mode_cloud, mode_native, and the following call is made: GET /namespaces/{namespace-id}/artifacts/{artifact-name}/versions/{artifact-version}/extensions/{plugin-type}?capability=mode_native | Should show plugins which are capable of both native and cloud mode |

Note: The second level of filtering is applied on top of the first layer's result set. When plugin.required.capabilities = mode_cloud, mode_native, only plugins which are capable of both modes are enabled, and the second level of filtering is applied to this set. Plugins which are only capable of native mode will not be displayed in the result of this REST call, since they were not enabled in the first place due to the requirement setting of plugin.required.capabilities. (Scenario 2.3)
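The two filtering layers in the note above can be sketched as follows (illustrative names only, not the actual CDAP implementation): the system-level plugin.required.capabilities filter is applied first, and the REST capability query parameter then filters that result set.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative sketch of the two filtering layers; not actual CDAP code.
public class TwoLevelFilter {

  /** Keeps only the plugins whose capabilities contain every required capability. */
  static Map<String, Set<String>> filter(Map<String, Set<String>> plugins, Set<String> required) {
    return plugins.entrySet().stream()
        .filter(e -> e.getValue().containsAll(required))
        .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,
                                  (a, b) -> a, LinkedHashMap::new));
  }

  public static void main(String[] args) {
    Map<String, Set<String>> plugins = new LinkedHashMap<>();
    plugins.put("NativeOnly", new HashSet<>(Arrays.asList("mode_native")));
    plugins.put("Both", new HashSet<>(Arrays.asList("mode_native", "mode_cloud")));

    // First layer: system setting plugin.required.capabilities = mode_cloud, mode_native.
    Map<String, Set<String>> enabled =
        filter(plugins, new HashSet<>(Arrays.asList("mode_cloud", "mode_native")));

    // Second layer: REST query parameter ?capability=mode_native, applied to the enabled set.
    Map<String, Set<String>> result =
        filter(enabled, new HashSet<>(Arrays.asList("mode_native")));

    // "NativeOnly" never appears: it was already removed by the first layer.
    System.out.println(result.keySet()); // [Both]
  }
}
```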

Releases

  • Release 5.1

Related Work

Jira: CDAP-14002 (Cask Community Issue Tracker)

Future work

Additional Filtering

Dynamic Filtering

  • Support tagging and filtering of plugin on the fly

Standardization

  • Support for standardization of plugin capabilities