...
...
Goal
In CDAP 4.0, the main theme for Datasets is establishing proper, semantically sound dataset management. That includes the management of dataset types (the code) and of dataset instances (the actual data) throughout their life cycle. The current dataset framework has various shortcomings that need to be addressed. This document discusses each area of improvement, lists end-to-end use cases and requirements, and finally addresses the design to implement those requirements.
...
- User stories documented (Andreas)
- User stories reviewed (Nitin)
- User stories reviewed (Todd)
- Requirements documented (Andreas)
- Requirements Reviewed
- Mockups Built
- Design Built
- Design Accepted
...
This collection of stories represents the vision that we have for dataset management. It is a living document and will be maintained over time. In each release, we need to determine and prioritize which of these stories are in scope.
...
- As an app developer, I want to include the code of a dataset type in my app artifact, and create a dataset of that type when deploying the app.
- As an app developer, I want to deploy a new version of a dataset type as part of deploying a new version of the app that includes it, and I expect that all dataset instances of that type that were created as part of the app deployment start using the new code.
- As an app developer, I want to deploy a new version of a dataset type as part of an app artifact, without affecting other datasets of this type.
- As an app developer, I want to explore a dataset instance of a type that was deployed as part of an app.
- As an app developer, I expect that deploying an artifact without creating an app will not create any dataset types or instances (that is, this only happens when creating an app).
- As an app developer, I want to share a dataset type across multiple applications that include the dataset type's code in their artifacts.
- As an app developer, when deploying a new version of an app that includes a shared dataset type, I expect that all dataset instances created by this app start using the new code, but all dataset instances created by other apps remain unchanged.
- As an app developer, I want to deploy a new version of an app that includes an older version of a dataset type deployed by another app, and I expect that the dataset instances created by this app use the dataset type code included in this app.
- As an app developer, when I deploy a new version of an app that includes a different version of a dataset type deployed by another app, and this app shares a dataset instance of this type with the other app, the deployment will fail with a version conflict error. (Because otherwise I might "downgrade" the instance to an older version, making it incompatible with the other app).
Note: This use case needs discussion. What is the proper behavior? How can we prevent data corruption due to an unintentional "downgrade" without restricting ease of use too much?
- As an app developer, I want to share a dataset type that I had previously deployed as part of an app.
- As a dataset developer, I want to deploy a dataset type independent from any app, and allow apps to create and use dataset instances of that type.
- As a dataset developer, I want to have the option of forcing applications to have the dataset code injected at runtime (that gives me control over what version of the code apps use).
- As a dataset developer, I need an archetype that helps me package my dataset type properly.
- As a dataset developer, I want to separate the interface from the implementation of a dataset type.
- As an app developer, I want to only depend on the interface of a dataset type in my app, and have the system inject the implementation at runtime.
- As an app developer, I want to write unit tests for an app that depends on the interface of a dataset type. (This means I need an extra dependency with test scope in my pom.xml)
- As a dataset developer, I want to assign explicit versions to the code of a dataset type.
- As a dataset developer, I want to deploy a new version of a dataset type without affecting the dataset instances of that type.
- As an app developer, I want to create a dataset instance with a specific version of a dataset type.
- As a dataset developer, I want to explore a dataset instance created from a dataset type that was deployed by itself.
- As a dataset developer, I want to delete outdated versions of a dataset type. I expect this to fail if there are any dataset instances with that version of the type.
- As a dataset developer, I want to list all dataset instances that use a dataset type, or a specific version of a type.
- As a data scientist or app developer, I want to be able to create a dataset instance of an existing dataset type without writing code.
- As a data scientist or app developer, I want to be able to upgrade a dataset instance to a new version of its code.
- As a Hydrator user, I want to create a pipeline that reads or writes an existing dataset instance.
- As a Hydrator user, I want to create a pipeline that reads or writes a new dataset instance, and I want to create that dataset instance as part of pipeline creation.
- As a Hydrator user, I want to specify an explicit version of the dataset types of the dataset instances created by my pipeline, and I expect pipeline creation to fail (similar to app creation) if that results in an incompatible upgrade of an existing dataset instance that is shared with other apps or pipelines.
- As a Hydrator user, I want to explore the datasets created by my pipeline.
- As a Hydrator user, I expect all dataset instances created by apps to be available as sinks and sources for pipelines (if there is a corresponding plugin).
- As an app developer, I expect all dataset instances created by Hydrator pipelines to be accessible to the app.
- As a plugin developer, I want to include the code for a dataset type in the plugin artifact. When a pipeline using this plugin is created, a dataset instance of that type is created, and it is explorable and available to apps.
- As a plugin developer, I want to use a custom dataset type (that was deployed independently or as part of an app) inside the plugin.
- As a plugin developer, I want to upgrade the code of a dataset type used by a dataset instance created by that plugin, when I deploy a new version of the plugin and update the pipeline to use that version.
- As a pipeline developer, I want to upgrade a dataset instance to a newer version of the code after the pipeline was created.
- As a dataset developer, I want to have the option of implementing an "upgrade step" for when a dataset instance is upgraded to a new version of the dataset type.
- As a dataset developer, I want to have a way to reject an upgrade of a dataset instance to a newer version of its type, if the upgrade is not compatible.
- As a dataset developer, I want to have the option of implementing a migration procedure that can be run after an upgrade of a dataset instance to a new version of its type. This can be a long-running (background) process.
- As a developer, I want to take a dataset "offline" so that I can perform a long-running maintenance or migration procedure.
- As a dataset developer, I want to implement custom administrative operations (such as "compaction" or "rebalance") that are not common to all dataset types (see the sketch after this list).
- As an app developer, I want to perform custom administrative operations on dataset instances from my app, the CLI, REST, or the UI.
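The stories about upgrade steps, migration, and custom administrative operations suggest maintenance hooks on the dataset type. A minimal sketch of such hooks in Java follows; all names (DatasetMaintenance, UpgradeContext, IncompatibleUpgradeException) are hypothetical illustrations, not existing CDAP APIs:

    import java.util.Map;

    // Hypothetical sketch only: none of these names are existing CDAP APIs.
    public interface DatasetMaintenance {

      // Illustrative context carrying the old and new type versions of an upgrade.
      interface UpgradeContext {
        String oldTypeVersion();
        String newTypeVersion();
      }

      // Thrown by a dataset type to reject an incompatible upgrade.
      class IncompatibleUpgradeException extends Exception {
        public IncompatibleUpgradeException(String message) {
          super(message);
        }
      }

      // Upgrade step, invoked when an instance is upgraded to a new version of its type.
      void upgrade(UpgradeContext context) throws IncompatibleUpgradeException;

      // Optional long-running migration, run while the instance is taken "offline".
      void migrate(UpgradeContext context) throws Exception;

      // Custom administrative operations such as "compaction" or "rebalance".
      void performAdminOperation(String operation, Map<String, String> arguments);
    }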
[DIC] Dataset Instance Configuration
[Note: "As a user" refers to app developers, data scientists, dev-ops, or Hydrator users, pipeline developers]
- As a user, when creating a dataset instance, I want to find out what properties are supported by the dataset type, what values are allowed, and what the defaults are.
- As a user, I want to specify the schema of a dataset in a uniform way across all dataset types.
- As a user, I want to specify schema as a JSON string (verbose, Avro-style).
- As a user, I want to specify schema as a SQL schema string (brief, Hive-style). (Examples of both follow this list.)
- As a user, I want to configure time-to-live (TTL) in a uniform way across all dataset types.
- As a user, I want to see the properties that were used to configure a dataset instance.
- As a user, I want to find out what properties of a dataset can be updated.
- As a user, I want to update the properties of a dataset instance. I expect this to fail if the new properties are not compatible, with a meaningful error message.
- As a user, I want to update a single property of a dataset instance, without knowing all other properties. For example, set the TTL without having to know the schema.
- As a user, I want to remove a single property of a dataset instance, without knowing all other properties. For example, remove the TTL without having to know the schema.
- As a user, I want to trigger a migration process for a dataset if updating its properties requires that.
- As a user, I expect that if reconfiguration of a dataset fails, then no changes have taken effect. In other words, all steps required to reconfigure a dataset must be done as a single atomic action.
- As an app developer, I expect that application creation fails if any of its datasets cannot be created.
- As an app developer, I expect that application redeployment fails if any of its datasets cannot be reconfigured (if the new app spec specifies different configuration).
- As an app developer, when creating a dataset as part of app deployment, I want to tolerate existing datasets if their properties are different but compatible. For example, I can configure the dataset schema, but leave the existing TTL of a table untouched.
- As a pipeline designer, I want to use an existing dataset as a sink or source. If the schema (or any other property) of the dataset is incompatible with what the pipeline requires, I expect that pipeline creation fails with a meaningful error message.
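For illustration, the same two-column schema might be written in each of the two styles mentioned above (the record and field names are made up):

    Avro-style JSON (verbose):
      {"type": "record", "name": "purchase", "fields": [
        {"name": "customer", "type": "string"},
        {"name": "price", "type": "double"}]}

    Hive-style SQL (brief):
      customer string, price double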
...
- Deploying a dataset type (or module) is implemented as deployment of an artifact with version "embedded"
- What does this mean for configuration and for the recording of dependencies?
- Version "embedded" is treated like a snapshot version, that is, it can be redeployed any time. For now this is the only version we use.
- Creating a dataset instance tags that instance with version "embedded"
- New implementation of the dataset framework (actually, of the dataset instantiator, for explore only) that loads the code from the artifact repo.
- In programs, since the only version is "embedded", dataset code is still loaded using the program class loader (sketched below).
- No introduction of new or versioned APIs
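A sketch of that class-loading rule in Java; ArtifactRepo and DatasetTypeMeta are illustrative stand-ins here, not actual CDAP classes:

    // Hypothetical sketch of the class-loading rule described above.
    public final class DatasetCodeLoading {

      interface ArtifactRepo {
        ClassLoader classLoaderFor(String artifactId);
      }

      interface DatasetTypeMeta {
        String artifactId();  // the artifact recorded for this type
      }

      static ClassLoader loaderFor(DatasetTypeMeta type, boolean inProgram,
                                   ClassLoader programClassLoader, ArtifactRepo repo) {
        if (inProgram) {
          // The only version is "embedded", so the type's code is already
          // on the program class path.
          return programClassLoader;
        }
        // Explore loads the code from the artifact recorded for the type.
        return repo.classLoaderFor(type.artifactId());
      }
    }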
...
- Simplification of explore configuration
- Whether explore is enabled is an explicit property
- All other explore properties are derived from dataset properties when possible
- An explore failure also fails the DTM operation that called it (sketched below)
- Ability to communicate warnings to the user for successful explore operations
- Enable/Disable explore as dataset management operations
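The intended failure behavior could look like the following sketch, where Admin, ExploreService, and DatasetOps are assumptions for illustration:

    // Hypothetical sketch: an explore failure is no longer swallowed; it rolls
    // back and fails the dataset-management operation that triggered it.
    public final class DatasetOps {

      interface Admin {
        void create(String instanceName) throws Exception;
        void drop(String instanceName) throws Exception;
      }

      interface ExploreService {
        void enableExplore(String instanceName) throws Exception;
      }

      private final Admin admin;
      private final ExploreService explore;

      DatasetOps(Admin admin, ExploreService explore) {
        this.admin = admin;
        this.explore = explore;
      }

      void createDataset(String name) throws Exception {
        admin.create(name);
        try {
          explore.enableExplore(name);
        } catch (Exception e) {
          admin.drop(name);  // leave no partial state behind
          throw new Exception("Enabling explore for '" + name + "' failed", e);
        }
      }
    }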
Proposed Scope for 4.0
- Minimal work to remove artifact management from DatasetTypeManager
- Remove the (experimental) REST API to deploy a dataset module by itself
- For dataset types/modules deployed from an app, remove the generation of an artifact. Instead record the app artifact that it was created from
- Same as the previous point, for dataset types included in plugins
- For apps, load dataset types from the program class loader. For explore, load from the artifact recorded for the type
- May require some changes in artifact repository
- Simplify configuration of datasets
- Schema and format as system properties with validation
- TTL as a system property
- New API for a dataset type to declare what configuration it accepts (needed for Resource Center)
- Properties (instance configuration)
- Arguments (runtime configuration)
- Make dataset lifecycle methods (create, update, drop) consistent
- In case of failure, do not leave partial/inconsistent state behind
- Do not silently ignore explore failures: they must fail the entire operation
- Simplify configuration of explore properties CDAP-2790
- Derive all explore properties from schema+format when possible.
- Allow configuring the detailed explore properties (as today) for power users.
- Improved control over transactions for programs CDAP-7319
- Configure transaction timeout as a runtime argument / preference at namespace, app, program level CDAP-6103
- Programmatic APIs for programs that allow executing a transaction with a custom timeout (sketched after this list) CDAP-7193, CDAP-7320, CDAP-7322
- Add a way to access datasets (and call non-transactional methods) CDAP-7323
- Fix the transactional behavior of WorkerContext.execute() CDAP-6837
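As a hedged sketch, a worker using the proposed per-transaction timeout might look like this; the exact signatures are assumptions based on the JIRAs above, not a final API:

    // Sketch only; assumes CDAP-style APIs (AbstractWorker, DatasetContext,
    // Table) and an execute(timeoutSeconds, runnable) method per CDAP-7193.
    import co.cask.cdap.api.data.DatasetContext;
    import co.cask.cdap.api.dataset.table.Put;
    import co.cask.cdap.api.dataset.table.Table;
    import co.cask.cdap.api.worker.AbstractWorker;

    public class EventWriterWorker extends AbstractWorker {

      @Override
      public void run() {
        // Run this block in its own transaction with a 120-second timeout
        // instead of the system-wide default.
        getContext().execute(120, (DatasetContext context) -> {
          Table events = context.getDataset("events");
          events.put(new Put("event-1").add("body", "..."));
        });
      }
    }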