Goal

In CDAP 4.0, the main theme for Datasets is improving/establishing proper and semantically sound dataset management. That includes the management of dataset types (code), and the management of dataset instances (actual data) throughout their life cycle. The current dataset framework has various shortcomings that need to be addressed. This document will discuss each area of improvement, list end-to-end use cases and requirements, and finally address the design to implement the requirements.

Checklist

  •  User stories documented (Andreas)
  •  User stories reviewed (Nitin)
  •  User stories reviewed (Todd)
  •  Requirements documented (Andreas)
  •  Requirements Reviewed
  •  Mockups Built
  •  Design Built
  •  Design Accepted


Discussion

Dataset Type Management

Currently, the major areas of concern are:

  • Injection of dataset code: The dataset framework allows deploying the code for a dataset type. However, that only applies to Explore: For use in applications, we require that the application includes the code for the dataset type (unless it is provided by the system). There is no way to ensure that multiple applications sharing a dataset all use a compatible version of the code. 
  • Artifact management for dataset code is completely different from how it is done for application and plugin artifacts. We should unify that to create predictability about runtime class loading. 
  • Versioning of dataset code: Similarly, when updating the code for a dataset type, it again only applies to Explore. For apps using that type, every app needs to be recompiled and repackaged with the new dataset code, and then redeployed. For a deployed app, there is no insight into what version of the code it is using. Also, if the owner of a dataset changes its format (or schema, etc.), he has no way to enforce that all apps use a version of the code that supports the new format. 
  • Dataset types are only identified by their name. That is, two apps can have entirely different code (and semantics) for the same dataset type. If these two apps share a dataset of that type, data corruption is inevitable. 
  • Only one version of the code can exist in the system at the same time. It is therefore not possible to deploy a new version of the code without immediately affecting all datasets of that type. Ideally, one could deploy multiple versions of the code which coexist; and the dataset instances can be upgraded/migrated one by one over time. 
  • APIs to define a dataset type are complex: one must implement a Dataset class, a DatasetAdmin, and a DatasetDefinition (a simplified sketch follows below).
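
For illustration, here is a schematic sketch of the three pieces a dataset type author has to provide today. The interface names mirror the CDAP ones, but the signatures below are simplified stand-ins (the real interfaces carry more parameters and checked exceptions); the sketch only shows why this is more ceremony than most developers want for a simple type.

    import java.util.Map;

    // Simplified stand-ins for the framework interfaces (not the real CDAP signatures):
    interface Dataset extends AutoCloseable { }                 // 1. the data access API
    interface DatasetAdmin extends AutoCloseable {              // 2. physical administration
      boolean exists();
      void create();
      void drop();
      void truncate();
    }
    interface DatasetDefinition<D extends Dataset, A extends DatasetAdmin> {  // 3. configuration + factory
      Map<String, String> configure(String instanceName, Map<String, String> properties);
      A getAdmin(Map<String, String> spec);
      D getDataset(Map<String, String> spec);
    }

    // Even a trivial custom type must implement all three:
    class MyTable implements Dataset {
      public String get(String key) { return "value-for-" + key; }
      @Override public void close() { }
    }
    class MyTableAdmin implements DatasetAdmin {
      @Override public boolean exists() { return true; }
      @Override public void create() { /* create underlying storage */ }
      @Override public void drop() { /* delete underlying storage */ }
      @Override public void truncate() { /* delete all data */ }
      @Override public void close() { }
    }
    class MyTableDefinition implements DatasetDefinition<MyTable, MyTableAdmin> {
      @Override public Map<String, String> configure(String name, Map<String, String> props) { return props; }
      @Override public MyTableAdmin getAdmin(Map<String, String> spec) { return new MyTableAdmin(); }
      @Override public MyTable getDataset(Map<String, String> spec) { return new MyTable(); }
    }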

Dataset Instance Management

The two areas of concern here are configuration and management of datasets over their lifetime.

Dataset Configuration

A dataset instance is configured by passing a set of properties (that is, string-to-string pairs) to the configure() method of the dataset type (a brief example follows the list below). However:

  • Common properties such as schema are not standardized across dataset types
  • There is no way (other than reading documentation) to find out which properties a dataset type accepts. For a wizard-driven UI we would need a programmatic API to list all configurations. For plugins and apps, we already have a good way to declare the accepted configuration as part of the implementation; datasets should have something similar.
  • Reconfiguration of a dataset can be problematic. Sometimes the change of a property is not compatible with existing data in a dataset (for example, changing the schema). There is no easy way to find out which properties can be changed. 
  • Also, a reconfiguration may require a data migration or other long-running process to implement the change. The current dataset framework has no APIs to implement that. 
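
As a concrete, illustrative example of the current model (assuming the CDAP Java application APIs: AbstractApplication, DatasetProperties, Table): configuration is just a bag of strings, and the property names used below ("schema", "ttl") are examples rather than a standardized vocabulary, which is exactly the problem described above.

    import co.cask.cdap.api.app.AbstractApplication;
    import co.cask.cdap.api.dataset.DatasetProperties;
    import co.cask.cdap.api.dataset.table.Table;

    public class CustomerApp extends AbstractApplication {
      @Override
      public void configure() {
        // The platform passes these opaque string properties to the type's configure();
        // nothing tells us which keys the type understands or which values are legal.
        createDataset("customers", Table.class, DatasetProperties.builder()
            .add("schema", "{\"type\":\"record\",\"name\":\"customer\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"}]}")
            .add("ttl", "86400")
            .build());
      }
    }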

Dataset Management over its Lifetime

The dataset framework defines five administrative APIs: create(), exists(), drop(), truncate(), and update() (and a sixth, upgrade(), which is broken). However, many dataset types have specific administrative procedures that are not common across types. For example, an HBase table may require compaction, which is not supported by other dataset types. We need a way to implement such actions as part of the dataset administration interface. 

  • In the simple case, the app should only need to define the Dataset API itself (similar to the current AbstractDataset)
  • If a dataset type requires special administrative operations (say, "rebalance"), then this operation can be performed from the app itself, as well as through REST/CLI/UI. 

Also, the current implementation of dataset admin execution is not transactional: if it fails, it may leave behind partial artifacts of data creation. For example, if a composite dataset embeds two datasets, creation of the first succeeds, but the second fails, then the first one remains as a leftover in the physical storage—without any clue in CDAP meta data about its existence. The same applies to dropping and reconfiguring datasets. 

Explore Integration

This is related to configuration but goes beyond it. To begin with, the configuration of how a dataset is made explorable is separate from the rest of the dataset configuration, and every dataset may use a different set of properties. For example, a Table requires a schema and a rowkey property to make it explorable, whereas a file set requires a format and an exploreSchema. As a consequence, enabling explore is implemented in the platform (explore service) code, which has special treatment for all known types of explorable datasets. Instead, it would make more sense to delegate the generation of Hive DDL commands to the dataset type code: each dataset type implementation knows exactly how to create a corresponding Hive table. At the same time, we should standardize on a set of explore properties that are used across all dataset types, for example, the schema. 

It should also be possible to enable or disable Explore for a dataset at any time during its lifecycle. That is not always a simple creation of a Hive table. For example, for a partitioned file set, this involves adding all the partitions that the dataset already has, and that can require a long running process. Again, this is better implemented by the dataset type itself than by the platform, and we need APIs that allow custom dataset types to provide an implementation.  
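
One way to make this concrete: the platform could ask the dataset type itself for the DDL it needs. The interface below is a purely hypothetical sketch (it does not exist in CDAP today) of what such a delegation point might look like.

    // Hypothetical, illustrative only: a dataset type generates its own Hive DDL instead of
    // the explore service special-casing every known type.
    public interface ExploreSupport {
      // DDL to run when explore is enabled for an instance; for types such as partitioned
      // file sets this may be followed by a long-running step that registers existing partitions.
      String getCreateTableStatement(String databaseName, String tableName);

      // DDL to run when explore is disabled or the instance is dropped.
      String getDropTableStatement(String databaseName, String tableName);
    }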

Scenarios

Scenario 1: Dataset Type Used Only by a Single Application

This can almost be viewed as a private utility class of that application, except that the dataset may be explorable, and the dataset type's code and configuration may evolve over time along with the application. This is also the simplest and most common use case, and we want to make it super-easy as follows:

  • Dataset Type code is part of the application
  • Upon deployment of the app, the dataset type is also deployed, and the dataset(s) of this type can be created as part of the same deployment step. 
  • When the app is redeployed, the dataset type is updated to the latest version of the code, and so are the datasets of this type. 
  • The developer/devops never needs to worry explicitly about versioning of the dataset or manually upgrading a dataset. 
  • Explore works seamlessly: It always picks up the latest version of the dataset code. 
  • If there are multiple versions of the application artifact (see Application Versioning Design), each application uses the version of the dataset type defined by its version of the artifact. 

Scenario 2: Dataset Type Shared by Multiple Applications, no Data Sharing

This case is very similar to scenario 1; however, we need to solve the problem of distributing the code of the dataset type: in scenario 1, we would simply include it in the application code, but now this code is shared between multiple apps. Including the code in each app would mean code duplication and, over time, divergence. If that is desired (which is possible), then it is wiser to simply use different type names in each app, and we have multiple instances of scenario 1. However, in most cases it will be desirable to share one implementation of the dataset code across all apps. There are two major alternatives:

  1. The dataset type is implemented as a separate library that is available as a maven dependency to both apps:
    • Both apps include this dataset type in their jar
    • Every time one of the two apps is deployed, the dataset type is updated to that version of the code. 
    • The problem with this is that one application may use an older version of the dataset code than the one currently deployed. In that case: 
      • The update of the dataset type overrides the type's code with an outdated version. 
      • Because this code is used by Explore, queries for datasets created with a newer version of the code may not work any more. 
    • However, for ease of use, it should be possible for the developer(s) to deploy either app at any time without impacting other apps using the same dataset type. 
    • This is similar to the case of scenario 1, where multiple versions of the same dataset type coexist in different versions of the app artifact. 

  2. The dataset type has an interface and an implementation:
    • The interface is available to developers as a maven dependency, whereas the implementation is deployed as a separate artifact in the dataset framework. 
    • In order to compile and package their apps, developers only need the interface. 
    • At runtime, CDAP injects the implementation of the dataset type into the programs. 
    • This means that the dataset type is not bundled with the apps any longer, and the deployment of an app has no effect on the code of a dataset type.
    • However, it means increased complexity for app and dataset developers: both the interface in maven and the dataset module in CDAP must be kept in sync.
    • Note that this approach allows for separation of roles and skills in a larger organization: dataset types can be developed and deployed independently from applications. 

This scenario suggests that we need some kind of versioning for dataset types (and with that, dataset instances are then bound to a specific version of the type).

Scenario 3: A Dataset is Maintained by a Single Organization and Shared with Many Applications

For example, a CustomerDirectory dataset is maintained by organization X in an enterprise. This dataset is used by many applications to look up customers. This dataset has a custom type with various methods to maintain its data; however, most of the applications only need one API: CustomerInfo getCustomer(String id). The resulting interface/implementation split is sketched after the list below.

  • Applications that use this dataset need to include a dependency customer-api-1.0 in their pom in order to compile and package. (See the discussion of scenario 2 for why this should be a maven dependency). 
  • The actual dataset type implements the CustomerDirectory API, say using a class TableBasedCustomerDirectory in artifact customer-table-1.3.1.
  • At runtime, when the app calls getDataset(), CDAP determines that the dataset instance has that type and version, and loads the class from that artifact. 
  • The actual dataset type has more methods in its API, including one that allows adding new customers. Therefore, the app that maintains this dataset includes the implementing artifact in its pom file. 
  • The implementation can be updated without changing the API. In this case, X deploys a new artifact customer-table-1.3.2 and upgrades the dataset to this version. The maintaining app must now pick up the new artifact the next time it runs. (Whether this requires recompiling/packaging the app is up for detailed design). No change is needed for the other applications that use this dataset, because CDAP always injects the correct version of the dataset type.
  • The implementation can be updated with an interface change, for example, adding a new field to the CustomerInfo. To make this update seamless, a new artifact customer-table-1.4.0 is deployed, and both the dataset and the maintaining app are upgraded to this version. Then a new version of the API, customer-api-1.1, is deployed, and apps may now upgrade to this version. If they don’t, then they will not see the new field, but that is fine for existing apps because their code does not use this field. Note that this requires that CustomerInfo be an interface (consisting mainly of getters) that has an implementation in the customer-table artifact. Similarly, a new method could be added to the interface and applications that do not use this new interface will not require recompile and redeploy.
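
The following sketch illustrates the interface/implementation split in this scenario. The artifact and class names follow the scenario text; the method signatures are illustrative assumptions, not an existing API.

    // Published in customer-api (all consuming apps compile only against this):
    interface CustomerInfo {
      String getId();
      String getName();
      // a field added in customer-api-1.1 would appear here as an additional getter
    }
    interface CustomerDirectory {
      CustomerInfo getCustomer(String id);
    }

    // Shipped in customer-table-1.3.x and injected by CDAP at runtime; only the
    // maintaining application depends on this artifact directly:
    class TableBasedCustomerDirectory implements CustomerDirectory {
      @Override
      public CustomerInfo getCustomer(String id) {
        // look up the row for 'id' in the underlying table and convert it to a CustomerInfo
        return null;
      }

      // maintenance API that is not part of the published interface
      public void addCustomer(CustomerInfo customer) { /* write to the underlying table */ }
    }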

This scenario is one of the most complex, but the complexity is limited to the app that maintains the dataset as a service for others, who only need to know the published interface. This scenario also poses some important questions:

  • What is the deployment mechanism for the two artifacts (customer-api and customer-table)?
  • How does CDAP know that customer-table implements customer-api? Does it have to know?
  • How can X migrate the dataset to a new data format without having control over the apps that consume it? Even after upgrading the dataset to a new version, X does not know when all apps have picked that up, because they may have long-running programs such as a flow or service that need to be restarted for picking up the new version.

Scenario 4: A Dataset is Created and Maintained by a Hydrator Pipeline

This is very similar to Scenario 3, but the dataset type and the dataset instance are defined by a Hydrator plugin. The plugin may embed the code for the dataset type in its own code, or it may depend on a dataset type artifact that was deployed separately. In either case, the dataset is subsequently available to Hydrator pipelines, applications and Explore, and it can be maintained using REST, CLI, or UI. 

The important distinction here is that the user does not write code (although somebody wrote the code for the plugin). The user should be able to deploy the dataset (and its type) without knowing about the mechanics of dataset (type) management.

Scenario 5: A Dataset is Created through the App Store or Marketplace

Again, this is very similar to Scenario 4, except that this time the user does not even interact with CDAP or Hydrator. He gets a dataset (and a pipeline that feeds it) from the Marketplace with the click of a button, and the dataset is available to Explore, and also to other pipelines and apps. 

Other Scenarios

It is virtually impossible to list all possible scenarios, but it is important to realize that any combination of the above scenarios must work seamlessly. For example, a dataset may be maintained by multiple apps, and still shared with many others. Or a dataset may be created through a Hydrator pipeline but shared with many other pipelines or apps. That also means that the simplest of use cases (Scenario 1) must be interoperable with the most complex one (Scenario 3). Also, any time there is a conflict between different apps, pipelines, plugins, or app store artifacts that attempt to create the same dataset, but with different types, or with a version conflict, etc., this conflict must be detected by CDAP and reported back to the user in a clear and easy-to-read way.

User Stories

This collection of stories represents the vision that we have for dataset management. It is a living document and will be maintained over time. In each release, we need to determine and prioritize which of these stories are in scope. 

[DTM] Dataset Type Management

  1. As an app developer, I want to include the code of a dataset type in my app artifact, and create a dataset of that type when deploying the app.
  2. As an app developer, I want to deploy a new version of a dataset type as part of deploying a new version of the app that includes it, and I expect that all dataset instances of that type that were created as part of the app deployment start using the new code. 
  3. As an app developer, I want to deploy a new version of a dataset type as part of an app artifact, without affecting other datasets of this type.
  4. As an app developer, I want to explore a dataset instance of a type that was deployed as part of an app.
  5. As an app developer, I expect that deploying an artifact without creating an app will not create any dataset types or instances (that is, this only happens when creating an app).
  6. As an app developer, I want to share a dataset type across multiple applications that include the dataset type's code in their artifacts.
  7. As an app developer, when deploying a new version of an app that includes a shared dataset type, I expect that all dataset instances created by this app start using the new code, but all dataset instances created by other apps remain unchanged.
  8. As an app developer, I want to deploy a new version of an app that includes an older version of a dataset type deployed by another app, and I expect that the dataset instances created by this app use the dataset type code included in this app.
  9. As an app developer, when I deploy a new version of an app that includes a different version of a dataset type deployed by another app, and this app shares a dataset instance of this type with the other app, the deployment will fail with a version conflict error. (Because otherwise I might "downgrade" the instance to an older version, making it incompatible with the other app.) 
    Note: This use case needs discussion. What is proper behavior? How can we prevent data corruption due to an unintentional "downgrade" without restricting ease of use too much?

  10. As an app developer, I want to share a dataset type that I had previously deployed as part of an app.
  11. As a dataset developer, I want to deploy a dataset type independent from any app, and allow apps to create and use dataset instances of that type.
  12. As a dataset developer, I want to have the option of forcing applications to have the dataset code injected at runtime (that gives me control over what version of the code apps use).
  13. As a dataset developer, I need an archetype that helps me package my dataset type properly.
  14. As a dataset developer, I want to separate the interface from the implementation of a dataset type.
  15. As an app developer, I want to only depend on the interface of a dataset type in my app, and have the system inject the implementation at runtime. 
  16. As an app developer, I want to write unit tests for an app that depends on the interface of a dataset type. (This means I need an extra dependency with test scope in my pom.xml.)
  17. As a dataset developer, I want to assign explicit versions to the code of a dataset type.
  18. As a dataset developer, I want to deploy a new version of a dataset type without affecting the dataset instances of that type.
  19. As an app developer, I want to create a dataset instance with a specific version of a dataset type. 
  20. As a dataset developer, I want to explore a dataset instance created from a dataset type that was deployed by itself. 
  21. As a dataset developer, I want to delete outdated versions of a dataset type. I expect this to fail if there are any dataset instances with that version of the type. 
  22. As a dataset developer, I want to list all dataset instances that use a dataset type, or a specific version of a type.
  23. As a data scientist or app developer, I want to be able to create a dataset instance of an existing dataset type without writing code.
  24. As a data scientist or app developer, I want to be able to upgrade a dataset instance to a new version of its code.
  25. As a hydrator user, I want to create a pipeline that reads or writes an existing dataset instance.
  26. As a hydrator user, I want to create a pipeline that reads or writes a new dataset instance, and I want to create that dataset instance as part of pipeline creation. 
  27. As a hydrator user, I want to specify an explicit version of the dataset types of the dataset instances created by my pipeline, and I expect pipeline creation to fail (similar to app creation) if that results in incompatible upgrade of an existing dataset instance that is shared with other apps or pipelines.
  28. As a hydrator user, I want to explore the datasets created by my pipeline.
  29. As a hydrator user, I expect all dataset instances created by apps to be available as sinks and sources for pipelines (if there is a corresponding plugin).
  30. As an app developer, I expect all dataset instances created by Hydrator pipelines to be accessible to the app.
  31. As a plugin developer, I want to include the code for a dataset type in the plugin artifact. When a pipeline using this plugin is created, a dataset instance of that type is created, and it is explorable and available to apps.
  32. As a plugin developer, I want to use a custom dataset type (that was deployed independently or as part of an app) inside the plugin. 
  33. As a plugin developer, I want to upgrade the code of a dataset type used by a dataset instance created by that plugin, when I deploy a new version of the plugin and update the pipeline to use that version.
  34. As a pipeline developer, I want to upgrade a dataset instance to a newer version of the code after the pipeline was created.  

  35. As a dataset developer, I want to have the option of implementing an "upgrade step" for when a dataset instance is upgraded to a new version of the dataset type.
  36. As a dataset developer, I want to have a way to reject an upgrade of a dataset instance to a newer version of its type, if the upgrade is not compatible. 
  37. As a dataset developer, I want to have the option of implementing a migration procedure that can be run after an upgrade of a dataset instance to a new version of its type. This can be a long-running (background) process.
  38. As a developer, I want to take a dataset "offline" so that I can perform a long-running maintenance or migration procedure.
  39. As a dataset developer, I want to implement custom administrative operations (such as "compaction" or "rebalance") that are not common to all dataset types.
  40. As an app developer, I want to perform custom administrative operations on dataset instances from my app, the CLI, REST, or the UI. 

[DIC] Dataset Instance Configuration

Note: "As a user" refers to app developers, data scientists, dev-ops, Hydrator users, or pipeline developers.

  1. As a user, when creating a dataset instance, I want to find out what properties are supported by the dataset type, what values are allowed, and what the defaults are. 
  2. As a user, I want to specify the schema of a dataset in a uniform way across all dataset types.
  3. As a user, I want to specify schema as a JSON string (verbose, Avro-style).
  4. As a user, I want to specify schema as a SQL schema string (brief, Hive-style).
  5. As a user, I want to configure time-to-live (TTL) in a uniform way across all dataset types. 
  6. As a user, I want to see the properties that were used to configure a dataset instance.
  7. As a user, I want to find out what properties of a dataset can be updated.  
  8. As a user, I want to update the properties of a dataset instance. I expect this to fail if the new properties are not compatible, with a meaningful error message.
  9. As a user, I want to update a single property of a dataset instance, without knowing all other properties. For example, set the TTL without having to know the schema. 
  10. As a user, I want to remove a single property of a dataset instance, without knowing all other properties. For example, remove the TTL without having to know the schema. 
  11. As a user, I want to trigger a migration process for a dataset if updating its properties requires that.
  12. As a user, I expect that if reconfiguration of a dataset fails, then no changes have taken effect. In other words, all steps required to reconfigure a dataset must be done as a single atomic action.
  13. As an app developer, I expect that application creation fails if any of its datasets cannot be created.
  14. As an app developer, I expect that application redeployment fails if any of its datasets cannot be reconfigured (if the new app spec specifies different configuration). 
  15. As an app developer, when creating a dataset as part of app deployment, I want to tolerate existing datasets if their properties are different but compatible. For example, I can configure the dataset schema, but leave the existing TTL of a table untouched.
  16. As a pipeline designer, I want to use an existing dataset as a sink or source. If the schema (or any other property) of the dataset is incompatible with what the pipeline requires, I expect that pipeline creation fails with a meaningful error message. 

[EI] Explore Integration

  1. As a user, I want to specify as part of dataset configuration whether it is explorable.
  2. As a user, I do not want to specify the explore schema (and format) as separate properties if they can be derived from other standard dataset properties.
  3. As a user, I want to specify the explore schema separately (for example, only include a subset of the fields of a table, or name fields differently).
  4. As a user, I expect that dataset creation fails if the dataset cannot be enabled for explore.
  5. As a user, I expect that dataset reconfiguration fails if the corresponding update of the explore table fails.
  6. As a user, I expect that a dataset operation fails if it fails to make its required changes to explore.
  7. As a user, I expect that an update of explore never leads to silent loss of data (or data available for explore). If, for example, partitions would be dropped from the explore table, I want to have the option to either cancel the update, or to be notified of the drop and have a tool to bring explore in sync with the data. 
  8. As a user, I want to enable explore for a dataset that was not configured for explore initially.
  9. As a user, I want to disable explore for a dataset that was configured for explore initially.

Work Breakdown

[DTM] Dataset Type Management

1. Replacing the Dataset Type Manager implementation based on the Artifact Repo

The first and necessary part of the work is to unify the current dataset module/type management with the existing artifact repository. This can be done in a way that does not make versioning explicit (since the current dataset framework has no versioning, we could switch over without introducing it). The current requirement is that all dataset code must be included in the program artifact. That is, dataset type code is not injected by the platform, but always loaded from the program class loader. We can mimic that by using a specific version string - say "embedded" - that means loading from the program class loader. The work to do that breaks down as follows:

  • Deploying a dataset type (or module) is implemented as deployment of an artifact with version "embedded"
    • what does this mean for configuration, recording of dependencies? 
  • Version "embedded" is treated like a snapshot version, that is, it can be redeployed any time. For now this is the only version we use. 
  • Creating a dataset instance tags that instance with version "embedded"
  • New implementation of dataset framework (for explore only) that loads the code from the artifact repo. 
  • In programs, since the only version is "embedded", dataset code is still loaded using the program class loader.
  • No introduction of new or versioned APIs

This implements user stories DTM 1-9, but no new user stories that were not implemented by the existing framework. Instead, it lays the foundation for those stories.

2. Introducing explicit versioning for dataset types

Next, we can implement user stories DTM 10-34 that require explicit dataset type versions, along with the ability to deploy a dataset type outside of an app.

  • Maven archetype for dataset artifacts
  • Explicit versioning of dataset types. This will be the same as the artifact version.
    • Coexistence of multiple versions of the same dataset type
    • This comes (almost) for free after the migration to the artifact repository
  • Explicit dependency of a dataset instance on a specific version of its type
    • When creating a dataset, an explicit version of the type can be given
    • Otherwise the latest version will be used
    • The dataset meta data (spec) will contain this version
  • Injection of dataset code at runtime, from the artifact of that version.
  • No noticeable performance degradation (some degradation is expected due to code injection at first instantiation of a type).
  • Explicit upgrade of a dataset instance to a new version of its type.
  • For dataset types deployed as part of app deployment, we will keep using "embedded" as the version.
  • Hydrator Plugins can also contain dataset types. The dataset will be loaded from the plugin artifact at runtime.
  • New versioned REST and CLI methods for versioned type and instance management
  • At this point, we may remove the REST endpoints for deploying dataset modules (that can now be done through the artifact repo APIs) 

This still keeps the dataset APIs unchanged. 

3. Introducing new Dataset APIs

At this point, we need to decide whether we want to keep the existing APIs and enhance them, or whether we want to come up with a new set of APIs. Some considerations:

  • Deploying a dataset module currently invokes Java code (the module's register() method). This is used to declare dependencies in a programmatic way. 
  • All other artifacts, however, declare their dependencies through a configuration file included in the artifact. 
  • For a true unification of the artifact management, we should probably change datasets to follow that pattern. 
  • Up for discussion.

New APIs to be added:

  • A new dataset admin method to upgrade to a new version of the type
    • This method can reject the upgrade
  • New dataset admin APIs for performing custom actions
  • Ability to take a dataset instance offline for a migration procedure

This addresses user stories DTM 35-40. A possible shape of these APIs is sketched below. 
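
A purely hypothetical sketch of what these additions might look like; none of these methods exist today, the names are illustrative, and the final design may differ.

    import java.util.Map;

    interface ExtendedDatasetAdmin {
      // Invoked when the instance is moved to a new version of its type; an implementation
      // may reject an incompatible upgrade by throwing.
      void upgrade(String fromTypeVersion, String toTypeVersion) throws IncompatibleUpgradeException;

      // Type-specific operations such as "compact" or "rebalance", callable from programs
      // as well as through REST/CLI/UI.
      void executeAction(String action, Map<String, String> arguments) throws Exception;

      // Take the instance offline and bring it back online around a long-running migration.
      void goOffline();
      void goOnline();
    }

    class IncompatibleUpgradeException extends Exception {
      IncompatibleUpgradeException(String message) { super(message); }
    }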

[DIC] Dataset Instance Configuration

None of the following depend on the migration to the artifact repo. However, if we implement them in the current Dataset APIs, we may have to redo them if/after we switch to a new set of APIs (DTM work item 3).

  1. New dataset API to retrieve the properties accepted by a type (a possible shape is sketched after this list)
    1. what the accepted values are
    2. whether they are mutable
    3. whether they are required
    4. what the default value is
  2. Schema as a standardized system property
    1. Validation of schema
    2. Specify schema in Avro or SQL style
    3. All system datasets to use new schema property
  3. New API to update or remove a single property of a dataset
  4. Ability to "merge" dataset properties into an existing instance without changing its existing properties, failing if that is not possible
  5. Dataset Management Operations are atomic
    1. Always leave behind a consistent state
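
A hypothetical sketch for item 1 above: a way for a dataset type to describe the properties it accepts, so that a wizard-style UI or the CLI can render and validate them. All names are illustrative, not an existing CDAP API.

    import java.util.List;

    class PropertyDescriptor {
      String name;                 // e.g. "schema" or "ttl"
      String description;
      boolean required;
      boolean mutable;             // whether it may be changed after the instance is created
      String defaultValue;         // null if there is no default
      List<String> allowedValues;  // empty if free-form
    }

    interface ConfigurableDatasetType {
      List<PropertyDescriptor> getSupportedProperties();
    }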

[EI] Explore Integration

None of the following depend on the migration to the artifact repo. However, if we implement them in the current Dataset APIs, we may have to redo them if/after we switch to a new set of APIs (DTM work item 3).

  1. Simplification of explore configuration
    1. Whether explore is enabled is an explicit property
    2. All other explore properties derived from dataset properties if possible 
  2. Explore failure also fails the DTM operation that called it
  3. Ability to communicate warnings to the user for successful explore operations
  4. Enable/Disable explore as dataset management operations 

Proposed Scope for 4.0

  1. Minimal work to remove artifact management from DatasetTypeManager
    1. Remove the (experimental) REST API to deploy a dataset module by itself
    2. For dataset types/modules deployed from an app, remove the generation of an artifact. Instead, record the app artifact that it was created from
    3. Same as the previous item, but for dataset types included in plugins
    4. For apps, load dataset types from program class loader. For explore, load from the artifact recorded for the type
    5. May require some changes in artifact repository
  2. Simplify configuration of datasets
    1. Schema and format as system properties with validation
    2. TTL as a system property
  3. New API for a dataset type to declare what configuration it accepts (needed for Resource Center)
    1. Properties (instance configuration)
    2. Arguments (runtime configuration)
  4. Make dataset lifecycle methods (create, update, drop) consistent
    1. In case of failure, do not leave partial/inconsistent state behind
    2. Do not silently ignore explore failures: they must fail the entire operation
  5. Simplify configuration of explore properties CDAP-2790 
    1. Derive all explore properties from schema+format when possible. 
    2. Allow configuring the detailed explore properties (as today) for power users.
  6. Improved control over transactions for programs CDAP-7319
    1. Configure transaction timeout as a runtime argument / preference at namespace, app, program level CDAP-6103
    2. Programmatic APIs for programs that allow executing a transaction with custom timeout CDAP-7193, CDAP-7320, CDAP-7322 (a possible shape is sketched below)
    3. Add a way to access datasets (and call non-transactional methods) CDAP-7323
    4. Fix the transactional behavior of WorkerContext.execute() CDAP-6837
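
For the programmatic transaction APIs above, a hypothetical sketch of how a program (for example, from a Worker's run() method) might execute a transaction with a custom timeout. The exact API is part of the detailed design (CDAP-7193 and related); this only illustrates the intent, reusing the existing TxRunnable callback style, and the timeout overload shown here is an assumption.

    getContext().execute(120 /* seconds */, new TxRunnable() {
      @Override
      public void run(DatasetContext context) throws Exception {
        Table customers = context.getDataset("customers");
        // perform a batch of updates that would not fit into the default transaction timeout
      }
    });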