Improving Plugins User Experience Across Different Modes


Checklist

  • User Stories Documented
  • User Stories Reviewed
  • Design Reviewed
  • APIs reviewed
  • Release priorities assigned
  • Test cases reviewed
  • Blog post

Introduction 

CDAP pipelines can run in various environments: native (Hadoop/sandbox) and remote (GCP/EMR/Azure). Many CDAP plugins are only capable of running in specific environments. For example, a transactional plugin like CDAPTable is not capable of running remotely and can only be used natively. Furthermore, some plugins are only capable of running on certain versions of the underlying processing or storage platform. For example, a plugin might only be capable of running on Spark 2 and not on Spark 1.

Plugins can also be capable or incapable of satisfying various business rules; for example, a plugin might be PII capable. A CDAP administrator might want to make only PII capable plugins available to pipeline developers to ensure compliance.

It is a bad user experience when a user builds a pipeline only for it to fail because it was incapable of running in the environment or of satisfying a business rule. To improve the user experience, CDAP should support filtering the plugins available to pipeline developers depending on their capabilities.


Note: Compatibility and capability are used interchangeably in this document. For the purpose of this document they have the same meaning and can be substituted for one another.

Goals

There are three goals which need to be achieved to improve the user experience around plugin capabilities:

  • A plugin developer should be able to easily and effectively specify the capability of the plugin being developed.

  • CDAP platform should be able to capture and provide capability information of plugins.

  • If an incapable plugin runs, it should fail early and with an appropriate error message.

User Stories 

  1. As a CDAP plugin developer, I should be able to specify the capabilities of my plugin.

  2. As a CDAP administrator, I want to enforce that plugins have certain capabilities to run in my CDAP instance.

  3. As a CDAP pipeline developer and/or CDAP administrator, if a pipeline containing an incapable plugin runs, I would like it to fail early and with an appropriate error message.

Scenarios

  • Scenario 1: Specifying Capability

    • Scenario 1.1

Alice is a CDAP plugin developer who is developing a CDAP Dataset plugin (transactional). Her plugin is supported only in a transactional environment. She would like to specify this in her plugin so that pipeline developers don't use her plugin in other modes.

    • Scenario 1.2

Alice is also developing an Action plugin which stores some state information in a CDAP Dataset. Since her action plugin uses a CDAP Dataset, it can only run in the native environment. She would like to specify this in her plugin so that pipeline developers don't use her plugin in other environments.

    • Scenario 1.3

Alice is a CDAP plugin developer who is developing a Spark ML transform which uses libraries available only in Spark 2, and she would like to specify that her plugin is only capable of running on Spark 2.

    • Scenario 1.4

Alice is a CDAP plugin developer who is developing a PII capable plugin. She would like to specify that her plugin is PII capable, so that when she deploys it in a CDAP instance which only allows PII capable plugins, her plugin can be run and used by pipeline developers.

  • Scenario 2: Plugin Filtering

    • Scenario 2.1

Bob is a data analyst who is evaluating CDAP. He is running CDAP in a particular environment and sees a lot of plugins which do not seem capable of running in his environment. He would like to be able to filter plugins on capability, to see only the plugins which are compatible with his environment.

    • Scenario 2.2

Eve is a CDAP administrator who is setting up a CDAP instance in the cloud. She would like to enforce that only plugins which are capable of running in a remote environment are available to pipeline developers.

    • Scenario 2.3

Eve is a CDAP administrator who is setting up a CDAP environment in production for data processing. Eve's organization has strict compliance requirements, and she wants to allow only plugins which meet certain compliance rules to be used by the data analysts in her organization. Furthermore, she does not want any data analyst to be able to override her settings and run non-compliant plugins.


  • Scenario 3: Failing Early and Gracefully

    • Scenario 3.1

Bob is trying to develop a pipeline to process some data which is stored in a CDAP Table. He builds a pipeline with the appropriate plugin and configuration, and the pipeline fails at runtime with a lot of cryptic error messages in the logs. Bob rechecks his plugin configuration and tries to debug the issue, but he is not able to run the pipeline successfully. Disappointed with the platform, Bob reaches out to the CDAP support group for help. After some back and forth, Bob learns that he was running the plugin remotely, and that to run this pipeline he needs to set the correct compute profile at runtime. It makes sense to him, but he wonders: if only the error messages in the logs had pointed this out, he could easily have corrected it himself, saving the time spent in support.

    • Scenario 3.2

Bob imports a pipeline which was sent to him by another pipeline developer and tries to run it. The pipeline fails for him but works perfectly fine for his colleague. Bob tries to debug the issue by looking into the logs, but he is again greeted by cryptic error messages. He reaches out to CDAP support and is told that he is running his pipeline in an incorrect mode. He gets really furious as to why the CDAP logs do not show any information for such a common problem.


Design

API

A plugin developer will be responsible for specifying the capabilities of the plugin. Plugin developers can use an annotation provided by the platform to specify this, just like they specify the Name or Description of the plugin.

To support this, the following annotation and constants class will be added:

Capability
/**
 * Annotates the different environments, versions, and other capabilities which a plugin supports.
 */
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface Capability {
  String[] value();
}
Capabilities
/**
 * Defines the different capability options for CDAP programs/plugins to specify
 */
public final class Capabilities {
  public static final String NATIVE = "native"; // defines the capability of running natively
  public static final String REMOTE = "remote"; // defines the capability of running remotely
}

This will allow the plugin developer to specify various capabilities exposed by the platform in the following way:

MockSource
@Plugin(type = BatchSource.PLUGIN_TYPE)
@Name("Mock")
@Capability({Capabilities.NATIVE})
public class MockSource extends BatchSource<byte[], Row, StructuredRecord> {
  ....
  ...
}

Plugins can also be annotated with custom values to specify compliance with business rules. For example, a plugin developer can specify that a plugin is PII capable by annotating it with:

@Capability({Capabilities.NATIVE, "PII"})

Capability values will be case insensitive. If no system capability option is specified, the default system capability will be used: the plugin will be considered capable of all system-defined capabilities, but not of any user-defined capability options such as PII. Custom capabilities need to be specifically defined by the plugin developer, and their presence or absence does not override the default system-defined capabilities. See Plugin Changes.

Allowing users to specify custom capability values opens up issues with the standardization of such values. For example, two plugin developers developing different plugins may choose to annotate their plugins with different names for the same business rule. This might lead to confusion, and we might end up with many filter options representing the same business rule. The CDAP Metadata system currently suffers from the same problem, where two different tags, say 'sensitive' and 'confidential', might be used in a similar context. One way to standardize the capability options which plugin developers can use would be for the CDAP platform to explicitly define the allowed capability options. For simplicity, this release will not support any mechanism of standardization; it will be the responsibility of plugin developers to use a consistent taxonomy among each other.
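To illustrate how the annotation could be read at deploy time, here is a minimal, hypothetical sketch; the class names and helper method are assumptions for illustration and not the actual ArtifactInspector code. It mirrors the annotation shape defined above and applies the case-insensitive normalization described earlier:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Arrays;
import java.util.Collections;
import java.util.Locale;
import java.util.Set;
import java.util.stream.Collectors;

public class CapabilityInspection {

  // Mirror of the Capability annotation defined above (assumed shape).
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.TYPE)
  public @interface Capability {
    String[] value();
  }

  // Hypothetical plugin class carrying one system and one custom capability.
  @Capability({"native", "PII"})
  public static class ExamplePlugin { }

  // Capability values are case insensitive, so normalize to lower case.
  public static Set<String> capabilitiesOf(Class<?> pluginClass) {
    Capability capability = pluginClass.getAnnotation(Capability.class);
    if (capability == null) {
      // No annotation: the platform falls back to the default system capabilities.
      return Collections.emptySet();
    }
    return Arrays.stream(capability.value())
        .map(v -> v.toLowerCase(Locale.ROOT))
        .collect(Collectors.toSet());
  }
}
```

For ExamplePlugin above, this yields the normalized set containing "native" and "pii".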

Platform

Processing

Currently, when a plugin is deployed in CDAP, an inspection is performed to collect various information about the plugin. Capability information will be collected in this step.

The capability information will be processed in the artifact inspection stage by the existing ArtifactInspector class. Plugins will be inspected for the @Capability annotation, and if it is found, all capability information will be collected. The capability information will be stored in PluginClass, which is a field member of ArtifactClasses.


PluginClass
/**
 * Contains information about a plugin class.
 */
@Beta
public class PluginClass {

  private final String type;
  private final String name;
  private final String description;
  private final String className;
  private final String configFieldName;
  private final Map<String, PluginPropertyField> properties;
  private final Set<String> endpoints;
  private final Set<String> capabilities; // all the capabilities of this plugin
}


Storage

Capability information will be stored at the plugin level, since one artifact can contain any number of plugins and each of them will have its own capability information.

Approach 1: Artifact Store (Selected)

Plugin capability information will be stored in PluginClass, which is contained in ArtifactMeta. Hence, the capability information of a plugin can easily be stored in the ArtifactStore as part of ArtifactMeta itself. This allows us to store all the plugin information in one store.

Approach 2: Metadata (Considered)

The capability information can also be stored as system metadata of the plugin by the ArtifactSystemMetadataWriter. Since a Plugin is not an EntityId in CDAP, the Metadata system's ability to store metadata for custom entities can be used, where the Plugin will be a custom entity under Artifact.

The custom entity hierarchy will be as follows:

namespace=<namespace-name> | artifact=<artifact-name> | version=<artifact-version> | plugin=<plugin-name>

Note: | and = are just used as a separator here for readability. In actual serialized form byte-length encoding is used.

The capability information will be stored as a metadata property where the key is 'capability' and the value is a comma-separated list of unique strings representing the capabilities.


capability = remote, native, pii

Note: = and , are our standard key-value and individual value separators in Metadata storage.

Comparison

The table below compares the two approaches.

Approach 1: Artifact Store

Pros:

  • Simplicity: All the plugin information is stored in one store.

  • Single Lookup: Only one store lookup is needed to serve the get calls.

Cons:

  • Extensibility: If filtering of plugins on the fly needs to be supported in the future, this approach will require much more work. Also, if a plugin developer wants to change the capability information of a plugin, the developer will have to change code and then rebuild and redeploy the plugin artifact.

  • Standardization: If capability is stored separately from metadata, piggybacking on metadata standardization to standardize capability values will not be possible.

Approach 2: Metadata

Pros:

  • Extensibility: Storing the capability information as metadata makes it possible to support tagging and filtering of plugins on the fly.

  • Standardization: As mentioned earlier, standardization of capability options is needed for a good user experience, and the metadata system suffers from a similar problem. In the past, preferred tags were used to solve this problem, but they are now deprecated and other alternatives are being discussed. Once standardization of metadata is solved, it will solve capability standardization too.

Cons:

  • Multiple Lookups: Getting capability information will require lookups of multiple tables and also multiple transactions.

  • User Experience: Since getting capability information requires additional table lookups, performing this operation on a lot of plugins makes the overall time to serve all the plugins with their capability information high, which might lead to slow loading of the plugins page in the UI/clients.

Filtering

CDAP will support filtering of plugins at two levels. One is for administrators, to enforce strict environment and business rules; this will be done through a configuration property in cdap-site.xml. The other is for data analysts, to help them see the plugins which are compatible with different environments and rules.

Admin Level Filtering

A new property will be added to cdap-site.xml which specifies the capabilities a plugin needs to have in order to be displayed/enabled. This configuration will be used by CDAP administrators to enforce strict environment and business rules when they want to display/enable only certain plugins. Examples of this are Scenario 2.2 and Scenario 2.3.

Approach 1 (Selected)

<property>
  <name>plugin.required.capabilities</name>
  <value>remote</value>
  <description>
    Comma-separated list of capability values which are required by default. If no
    system level capability is defined, no capability is required and all plugins
    will be displayed/enabled.
  </description>
</property>

Capabilities specified in this configuration will be considered mandatory, and only the plugins which have all of these capabilities will be displayed. Plugins which do not have one of the capabilities specified here will be filtered out. Please see the Filtering Examples section for examples of different use cases.
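The mandatory-capability rule above can be sketched as follows. This is a hypothetical helper, not actual CDAP code; it assumes a plugin's capabilities are available as a Set<String> (for example, from PluginClass) and the required capabilities come from plugin.required.capabilities:

```java
import java.util.Set;

public class PluginFilter {

  // A plugin is displayed/enabled only if it has every capability listed
  // in plugin.required.capabilities.
  public static boolean isDisplayed(Set<String> pluginCapabilities, Set<String> requiredCapabilities) {
    // An empty/undefined requirement means no capability is required:
    // all plugins are displayed/enabled.
    if (requiredCapabilities.isEmpty()) {
      return true;
    }
    return pluginCapabilities.containsAll(requiredCapabilities);
  }
}
```

For example, with plugin.required.capabilities = remote, a plugin annotated with only the native capability would be filtered out.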

Approach 2 (Considered)

Logical expressions can be used to specify the required capabilities. This approach gives the administrator much more flexibility in specifying the required capabilities. Consider a case where an administrator wants to enforce the following requirements:

  1. Either remote or native is required (a case that arises while running in the sandbox environment)
  2. For remote the required capability is spark2, and for native the required capability is spark1

Approach 1 does not offer enough flexibility to address this use case. But with logical expressions an administrator can specify this requirement in the following way:


(remote && spark2) || (native && spark1)

Although this approach gives the administrator much more flexibility to specify advanced capability requirements, it is not user friendly and expects the administrator to form correct logical expressions. As of now, there are no known use cases for such complex capability requirements, and hence Approach 1, which is more user friendly, is favored. In the future, if there is a need for this, the handling of the provided configuration can be changed. This will not require any upgrade step, just an update of the configuration value.

Comparison

Approach 1

Pros:
  • Simple and user friendly
  • Lower probability of specifying an incorrect configuration value

Cons:
  • Can only support simple capability requirements

Approach 2

Pros:
  • Allows the administrator to specify complex capability requirements

Cons:
  • Expects the administrator to understand and form correct logical expressions
  • Higher probability of specifying an incorrect configuration value


Any change to the requirements will require a CDAP restart, which is acceptable since such changes happen very infrequently.

If a pipeline was created before a capability was required, i.e. the capability was added as a requirement in the above configuration after the pipeline was deployed, then that pipeline will start failing with an appropriate error message. (User Story 3)

Provisioner Checks

In some cases, filtering based on capabilities does not ensure that a pipeline, if run, will not fail due to incapability. Consider the case of CDAP running in the sandbox environment: plugins which are compatible with either native or remote are displayed to the user, and it is possible that a pipeline which was developed to run in the native environment is launched in the cloud because of the compute profile (set at the namespace level). In this case the pipeline will fail to run after provisioning is done. This is not ideal for the following reasons:

  1. The failure happens too late in the process: the user has to wait just to see a pipeline fail which would never have worked.
  2. Provisioning is a costly operation.

This issue will be addressed by performing a check when a pipeline is run. The platform will check whether all the plugins are capable of running on the provisioned instance through an interface exposed by the Provisioner:

public interface Provisioner {

  /**
   * @return a set of capabilities which are explicitly defined to be supported by the provisioner,
   *         or an empty set if the provisioner does not define any explicit capabilities
   */
  Set<String> getCapabilities();
  ...
  ...
}

All provisioners which support some specific capability will expose it using the above API. For example, the Native provisioner will return ["tephratx"], and the Amazon AWS provisioner will return ["aws"].

As mentioned above, plugins define their requirements if they explicitly need some requirement to be met in order to run successfully. When a pipeline/program containing a plugin is run, a check will be performed to ensure that all the plugin requirements are met by the capabilities of the provisioner.

To support this, plugin information will now need to be stored with the ProgramSpecification. Currently, plugin information is stored in the ApplicationSpecification and not in the ProgramSpecifications. ProgramSpecifications will be modified to also store the plugin requirement information.

This can be generalized to store not just plugin information for programs but any set of requirements for a program in ProgramSpecification. In the future this will allow supporting a feature where a developer can specify custom requirements while configuring/implementing a program. For example, if a service or a worker uses a CDAP Dataset (using Tephra transactions), it can be annotated to represent this requirement, or the requirement can be specified during program configuration just as name, description, etc. are specified. (This is out of scope for 5.1.)

For now, the requirements will be populated from the plugin requirements if a plugin is present in the program.

public interface ProgramSpecification {
  /**
   * @return the requirements for this program, or an empty set if the program does not
   *         have any specific requirements defined
   */
  Set<String> getRequirements();
}
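Putting the two interfaces together, the pre-provisioning check could look like the following sketch. This is a hypothetical helper, not the actual platform code; it assumes the requirements come from ProgramSpecification.getRequirements() and the capabilities from Provisioner.getCapabilities():

```java
import java.util.HashSet;
import java.util.Set;

public class ProvisionerCheck {

  // Every program requirement must be covered by the provisioner's capabilities;
  // otherwise the run is failed early, before any costly provisioning happens.
  public static void checkRequirements(Set<String> provisionerCapabilities,
                                       Set<String> programRequirements) {
    Set<String> missing = new HashSet<>(programRequirements);
    missing.removeAll(provisionerCapabilities);
    if (!missing.isEmpty()) {
      // Surfaced to the user as an explicit, early failure message.
      throw new IllegalStateException("Plugin requirements not met by provisioner: " + missing);
    }
  }
}
```

For example, a program requiring "tephratx" launched on a provisioner that only reports ["aws"] would fail immediately with a message naming the missing requirement.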



Pipeline Developer Filtering (Beyond 5.1)

The second level of filtering is provided to pipeline developers. A pipeline developer can further filter the available plugins to see only the plugins which have a given capability.

Approach 1 (Selected)

Currently, the available plugins are retrieved by calling:

GET /namespaces/{namespace-id}/artifacts/{artifact-name}/versions/{artifact-version}/extensions/{plugin-type}

This call returns a summary of the plugins for the provided plugin-type. The result will now only include plugins which meet the required capabilities defined by the plugin.required.capabilities configuration, and the response will now contain the capability options of each plugin. Since this is the second level of filtering, it filters the plugins from the first set (the plugins which are enabled by the plugin.required.capabilities configuration), not all the plugins which exist in the system.



[
 {
   "name": "Plugin1",
   "type": "dummy",
   "description": "This is plugin1",
   "className": "co.cask.cdap.internal.app.runtime.artifact.plugin.Plugin1",
   "artifact": {
     "name": "plugins",
     "version": "1.0.0",
     "scope": "USER"
   },
   "capabilities": [
     "remote",
     "spark2"
   ]
 },
 {
   "name": "Plugin2",
   "type": "dummy",
   "description": "This is plugin2",
   "className": "co.cask.cdap.internal.app.runtime.artifact.plugin.Plugin2",
   "artifact": {
     "name": "plugins",
     "version": "2.0.0",
     "scope": "USER"
   },
   "capabilities": [
     "remote",
     "spark1"
   ]
 }
]


(Edwin Elia: Please provide feedback for the below UI based design decision)

The client/UI will be responsible for processing the capability lists of all the plugins and, if needed, rendering a view which shows all the unique capability values to allow further filtering.
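The client-side processing described above can be sketched as follows; this is a hypothetical helper, assuming the client has already parsed the capability lists out of the plugin summaries in the response:

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class PluginCapabilityView {

  // Collect the unique capability values across all plugin summaries,
  // sorted for stable rendering of the filter options in the UI.
  public static Set<String> uniqueCapabilities(List<Set<String>> pluginCapabilities) {
    Set<String> unique = new TreeSet<>();
    for (Set<String> capabilities : pluginCapabilities) {
      unique.addAll(capabilities);
    }
    return unique;
  }
}
```

For the two-plugin response above, this would yield the filter options remote, spark1, and spark2.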

Pipeline developers will be able to further filter the available plugins and see the plugins which have a certain capability. The capability option will be passed as a query parameter to the above call.


GET /namespaces/{namespace-id}/artifacts/{artifact-name}/versions/{artifact-version}/extensions/{plugin-type}?capabilities=spark2


This will further filter the plugins from the above list to display only the plugins which have the spark2 capability.

[
 {
   "name": "Plugin1",
   "type": "dummy",
   "description": "This is plugin1",
   "className": "co.cask.cdap.internal.app.runtime.artifact.plugin.Plugin1",
   "artifact": {
     "name": "plugins",
     "version": "1.0.0",
     "scope": "USER"
   },
   "capabilities": [
     "remote",
     "spark2"
   ]
 }
]


Including the capability information of each plugin in the response is beneficial, as it will allow the UI/client to subdivide or label individual plugins based on their capabilities.

Note:


  1. Initially, for simplicity, the second level of filtering will only be supported on one query parameter.

  2. The values provided in plugin.required.capabilities take precedence over the filtering parameters specified by the pipeline developer as query parameters. If a query parameter specifies a plugin capability option which is not in plugin.required.capabilities, then that call will return an empty result, even if there are compatible plugins known to the system.
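The composition of the two filtering levels can be sketched as follows; this is a hypothetical helper with assumed names, illustrating that a plugin filtered out by plugin.required.capabilities can never be resurfaced by a query parameter:

```java
import java.util.Set;

public class QueryFilter {

  // A plugin appears in the filtered response only if it first passes the
  // admin-level required capabilities, and then also has the capability
  // requested via the query parameter.
  public static boolean matchesQuery(Set<String> pluginCapabilities,
                                     Set<String> requiredCapabilities,
                                     String queryCapability) {
    return pluginCapabilities.containsAll(requiredCapabilities)
        && pluginCapabilities.contains(queryCapability);
  }
}
```

With required capability remote, a plugin with {native, spark2} never matches a spark2 query, while a plugin with {remote, spark2} does.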


Approach 2 (Considered)

As mentioned before, the available plugins are currently retrieved by calling:


GET /namespaces/{namespace-id}/artifacts/{artifact-name}/versions/{artifact-version}/extensions/{plugin-type}

This call returns a summary of Plugins for the provided plugin-type.

An additional REST API will be added which provides all the capability options known to the system.


GET /namespaces/{namespace-id}/capabilities


returns

[
 "remote",
 "spark2",
 "spark1",
 "native"
]

The values from this list of capabilities can be passed as a query parameter:

GET /namespaces/{namespace-id}/artifacts/{artifact-name}/versions/{artifact-version}/extensions/{plugin-type}?capabilities=spark2

This will further filter out the plugins from the above list to display only the plugins which have spark2 capability.

[
 {
   "name": "Plugin1",
   "type": "dummy",
   "description": "This is plugin1",
   "className": "co.cask.cdap.internal.app.runtime.artifact.plugin.Plugin1",
   "artifact": {
     "name": "plugins",
     "version": "1.0.0",
     "scope": "USER"
   },
   "capabilities": [
     "remote",
     "spark2"
   ]
 }
]


Comparison


Approach 1

Pros:

  • Does not require the UI to make an additional call to get a list of all capability options.

Cons:

  • Extra work on the client/UI side to parse all the capability options in order to process/render them.

Approach 2

Pros:

  • Allows a client to query for all the capability options present in the system.

  • Easier for the CDAP UI to render a selector widget to select/deselect capability options.

Cons:

  • The CDAP UI will have to make an additional call to fetch this information before fetching plugins.


Filtering Examples

To understand how the configuration and filtering will work in the real world, let us consider a few use cases known so far and see how they can be addressed by the above design.

Cloud

CDAP is running in a cloud environment and the administrator only wants to allow plugins capable of running in the cloud to be displayed.


plugin.required.capabilities = remote

Plugin             Capability Annotation                                     Result
CDAP Table Source  @Capability({Capabilities.NATIVE, "spark2", "PII"})       Filtered out
BigTable Source    @Capability({Capabilities.REMOTE, "PII"})                 Visible
AWS S3 Source      @Capability({Capabilities.REMOTE, Capabilities.NATIVE})   Visible

On-Prem Hadoop

CDAP is running in a Hadoop environment and the administrator only wants to allow plugins capable of running in Hadoop, since connections to outside clouds or servers are restricted.

plugin.required.capabilities = native

Plugin             Capability Annotation                                     Result
CDAP Table Source  @Capability({Capabilities.NATIVE, "spark2", "PII"})       Visible
BigTable Source    @Capability({Capabilities.REMOTE, "PII"})                 Filtered out
AWS S3 Source      @Capability({Capabilities.REMOTE, Capabilities.NATIVE})   Visible

On-Prem Hadoop

CDAP is running in a Hadoop environment and the administrator only wants to allow plugins capable of running in both Hadoop and the cloud.

plugin.required.capabilities = native, remote

Plugin             Capability Annotation                                     Result
CDAP Table Source  @Capability({Capabilities.NATIVE, "PII"})                 Filtered out
BigTable Source    @Capability({Capabilities.REMOTE, "PII"})                 Filtered out
AWS S3 Source      @Capability({Capabilities.REMOTE, Capabilities.NATIVE})   Visible

Sandbox

CDAP is running in sandbox and the administrator wants to allow plugins which are capable of running natively or in the cloud.

plugin.required.capabilities = (empty)

Plugin             Capability Annotation                                            Result
CDAP Table Source  @Capability({Capabilities.NATIVE, "spark2", "PII"})              Visible
BigTable Source    @Capability({Capabilities.REMOTE, "PII"})                        Visible
AWS S3 Source      @Capability({Capabilities.REMOTE, Capabilities.NATIVE, "emr"})   Visible

Note: When plugin.required.capabilities is empty, the instance does not define any required capability and hence all plugins will be shown. However, if a plugin is used with an incompatible profile, the pipeline will fail before provisioning. Consider a case where CDAPTableSource is being used in a pipeline and the profile is set to the EMR provisioner. The EMR provisioner will check that all the plugins declare the "emr" capability. Since the requirement will not be met, the pipeline will be marked as failed immediately due to incompatibility. If the pipeline instead contained only "emr"-compatible plugins, like AWSS3Source, the requirement would be met, the cluster would be provisioned, and the pipeline would run.

Sandbox: Spark 2

CDAP is running in sandbox and the administrator wants to allow plugins which are capable of running with Spark 2.

plugin.required.capabilities = spark2

Plugin             Capability Annotation                                     Result
CDAP Table Source  @Capability({Capabilities.NATIVE, "spark2", "PII"})       Visible
BigTable Source    @Capability({Capabilities.REMOTE, "PII"})                 Filtered out
AWS S3 Source      @Capability({Capabilities.REMOTE, Capabilities.NATIVE})   Filtered out

Note: Here plugin.required.capabilities does not specify any requirement for the mode, which means the system does not require any mode capability, and hence plugins which support any mode are candidates for display. It does, however, define a Spark requirement, so only plugins which are compatible with Spark 2 will be shown.

Sandbox: Compliance Required

CDAP is running in sandbox and the administrator wants to allow plugins which are capable of running in native or cloud mode, but also wants to satisfy a compliance need, and hence plugins must have the PII capability.

plugin.required.capabilities = PII

Plugin             Capability Annotation                                     Result
CDAP Table Source  @Capability({Capabilities.NATIVE, "spark2", "PII"})       Visible
BigTable Source    @Capability({Capabilities.REMOTE, "PII"})                 Visible
AWS S3 Source      @Capability({Capabilities.REMOTE, Capabilities.NATIVE})   Filtered out

Cloud: Compliance Required

CDAP is running in the cloud and the administrator wants to allow only plugins which are capable of running in the cloud and are PII compliant.

plugin.required.capabilities = remote, PII

Plugin             Capability Annotation                                     Result
CDAP Table Source  @Capability({Capabilities.NATIVE, "spark2", "PII"})       Filtered out
BigTable Source    @Capability({Capabilities.REMOTE, "PII"})                 Visible
AWS S3 Source      @Capability({Capabilities.REMOTE, Capabilities.NATIVE})   Filtered out

Preview

A pipeline preview runs in the native (local machine) JVM. Hence, any plugin which is capable of running in the native environment is capable of preview. If a plugin which is not capable of running natively is used in a preview, the preview operation will fail due to incapability as soon as the preview is started, before the actual preview process runs.

Failing Early and Gracefully

Filtering incapable plugins does not guarantee that a pipeline containing an incapable plugin will never be run in the system. This might happen for various reasons; for example, a pipeline was created before a capability was required, i.e. the capability was added as a requirement in the plugin.required.capabilities configuration after the pipeline was deployed.

Whenever an incapable plugin is encountered, the pipeline should fail early and gracefully. A pipeline can fail at different stages:

  1. Deploy time: Since plugins which are incapable of running will be filtered out, a user will not be able to create a pipeline using them. If a user imports a pipeline JSON which contains an incapable plugin, the deployment will fail with an Artifact/Plugin not found exception. Note: The exception here will not be an incapability exception, since that would expose the existence of the plugin in the system, which is not ideal.
  2. Run time (before run): In addition to filtering plugins, checks will also be performed before provisioning. These will fail the pipeline if any of the plugins is not capable of running in the provisioned environment, for example, running on AWS a plugin which is not capable of running on AWS and can only run on Azure.

When a pipeline fails due to incapability, the failure will be surfaced to the user through the logs and also as an error message in the UI.

Plugin changes

The above design tries to minimize the required plugin changes. If a plugin does not specify any capability option, the default capability will be used, according to which the plugin will be considered capable of all system-defined capabilities but not of any user-defined capability options such as PII. Custom capabilities need to be specifically defined by the plugin developer. The system-defined capabilities with which a plugin will be considered capable in the absence of any specified capability are the following:

  1. Native
  2. Remote
  3. Spark 1
  4. Spark 2

No plugin change is required after upgrading CDAP to 5.1, as plugins from previous versions will be considered capable of all system-defined capabilities as mentioned above.
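The default-capability rule described above can be sketched as follows; the class and method names are assumptions for illustration, and the literal capability strings are assumed to match the system-defined list above:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class DefaultCapabilities {

  // The system-defined capabilities listed above.
  private static final Set<String> SYSTEM_DEFINED =
      new HashSet<>(Arrays.asList("native", "remote", "spark1", "spark2"));

  // A plugin that declares no capabilities defaults to all system-defined
  // capabilities but no custom ones (e.g. "pii"). A plugin with an explicit
  // annotation is capable of exactly what it declares.
  public static Set<String> effectiveCapabilities(Set<String> declared) {
    return declared.isEmpty() ? new HashSet<>(SYSTEM_DEFINED) : declared;
  }
}
```

This is why pre-5.1 plugins need no change, while transactional or Spark-version-specific plugins must be annotated so the default is not inferred for them.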

However, plugins which require transactions or are known to be compatible with only one version of Spark will need to be annotated so that the default capability is not inferred for them. The table below lists all the plugins which will need to be specifically annotated, along with the required annotation:


Plugin Name                                     Capability Annotation
CDAPTableDataset (Source & Sink)                @Capability({Capabilities.NATIVE})
KVTableSource (Source & Sink)                   @Capability({Capabilities.NATIVE})
AvroSnapshotDataset (Source & Sink)             @Capability({Capabilities.NATIVE})
ParquetSnapshotDataset (Source & Sink)          @Capability({Capabilities.NATIVE})
AvroTimePartitionedDataset (Source & Sink)      @Capability({Capabilities.NATIVE})
ParquetTimePartitionedDataset (Source & Sink)   @Capability({Capabilities.NATIVE})
KafkaStreamingSource (kafka-plugins-0.8)        @Capability({"spark1_2.10"})
KafkaStreamingSource (kafka-plugins-0.10)       @Capability({"spark2_2.11"})
WIP                                             WIP



API changes

New Programmatic APIs

Capability Annotation

/**
 * Annotates the different environments and versions in which the annotated element is supported
 */
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface Capability {
  String[] value();
}


Deprecated Programmatic APIs

None

Updated Programmatic APIs

None

New REST APIs

None

Deprecated REST APIs

None

Updated REST APIs

Path: v3/namespaces/{namespace-id}/artifacts/{artifact-name}/versions/{artifact-version}/extensions/{plugin-type}?capabilities=mode_cloud
Method: GET
Description: Returns the plugin information, including its capabilities
Response Code: no change
Response:

[
 {
   "name": "Plugin1",
   "type": "dummy",
   "description": "This is plugin1",
   "className": "co.cask.cdap.internal.app.runtime.artifact.plugin.Plugin1",
   "artifact": {
     "name": "plugins",
     "version": "1.0.0",
     "scope": "USER"
   },
   "capabilities": [
     "mode_cloud",
     "spark_2"
   ]
 }
]






CLI Impact or Changes

  • list artifact plugins <artifact-name> <artifact-version> <plugin-type> [<scope>] will be modified to take an additional capability-filter parameter, just like the REST API.

UI Impact or Changes

Security Impact 

None.

Test Scenarios

Test ID | Test Description | Expected Results
1  | Deploying a plugin which does not have a Capability annotation | The plugin must be deployed and should be considered capable of all system-defined capabilities
2  | Deploying a plugin which has a Capability annotation | The plugin must be deployed and should be capable of only the options defined in the annotation
3  | Deploying a plugin with two Capability annotations (which may or may not have the same options) | The plugin must be deployed and should be capable of the union of the capabilities defined in the annotations
4  | Redeploying a plugin with an updated Capability annotation | The plugin must be redeployed and its capability information should be updated
5  | Missing or empty plugin.required.capabilities in cdap-site.xml | All plugins in the system should be displayed
6  | plugin.required.capabilities = mode_cloud | Only plugins with cloud capability should be displayed
7  | plugin.required.capabilities = mode_native | Only plugins with native capability should be displayed
8  | plugin.required.capabilities = spark_1 | Plugins which are capable of running in any mode and of running with Spark 1 should be displayed
9  | plugin.required.capabilities = mode_native, mode_cloud | Plugins which are capable of both cloud and native mode should be displayed
10 | plugin.required.capabilities = mode_native, spark_1 | Plugins which are capable of running in native mode and with Spark 1 should be displayed
11 | plugin.required.capabilities = mode_native, pii | Plugins which are capable of running in native mode and are PII compatible should be displayed
12 | plugin.required.capabilities = mode_cloud, and the following call is made: GET /namespaces/{namespace-id}/artifacts/{artifact-name}/versions/{artifact-version}/extensions/{plugin-type}?capability=mode_native | Should return an empty result, as the system has only enabled cloud-compatible plugins
13 | plugin.required.capabilities = <empty>, and the following call is made: GET /namespaces/{namespace-id}/artifacts/{artifact-name}/versions/{artifact-version}/extensions/{plugin-type}?capability=mode_native | Should return only plugins which are capable of running in native mode
14 | plugin.required.capabilities = mode_cloud, mode_native, and the following call is made: GET /namespaces/{namespace-id}/artifacts/{artifact-name}/versions/{artifact-version}/extensions/{plugin-type}?capability=mode_native | Should show plugins which are capable of both native and cloud mode

Note: The second level of filtering is applied on top of the first-level result set. When plugin.required.capabilities = mode_cloud, mode_native, only plugins supported in both modes are enabled, and the second level of filtering is applied on that set. Plugins which are capable of only native mode will not appear in the result of this REST call, since they were never enabled in the first place due to the plugin.required.capabilities requirement setting. (Scenario 2.3)
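The two levels of filtering described in the note can be sketched as follows. `PluginInfo`, `PluginFilter`, and the `visible` method are illustrative names for this document only, not actual CDAP classes:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Minimal stand-in for the plugin metadata returned by the artifact repository.
class PluginInfo {
  final String name;
  final Set<String> capabilities;

  PluginInfo(String name, String... capabilities) {
    this.name = name;
    this.capabilities = new HashSet<>(Arrays.asList(capabilities));
  }
}

class PluginFilter {
  /**
   * First level: only plugins satisfying all of plugin.required.capabilities are enabled.
   * Second level: the REST capability filter is applied on that already-enabled set.
   */
  static List<PluginInfo> visible(List<PluginInfo> all, Set<String> required,
                                  Set<String> queryFilter) {
    return all.stream()
        .filter(p -> p.capabilities.containsAll(required))     // instance-wide requirement
        .filter(p -> p.capabilities.containsAll(queryFilter))  // per-request filter
        .collect(Collectors.toList());
  }
}
```

Under this sketch, with required = {mode_cloud} and a query filter of {mode_native} (Scenario 12), a native-only plugin is dropped at the first level and never reaches the second, matching the behavior described in the note.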

Releases

  • Release 5.1

Related Work


Future work

Additional Filtering

Dynamic Filtering

  • Support tagging and filtering of plugins on the fly

Standardization

  • Support for standardization of plugin capabilities