Overview
This document captures the design of enhancements to data discovery in 4.0. Its main goal is to serve the Listing Center Home Page of CDAP 4.0.
Checklist
- User stories documented (Bhooshan)
- User stories reviewed (Nitin)
- User stories reviewed (Todd)
- Requirements documented (Bhooshan)
- Requirements Reviewed (Nitin/Todd)
- Design Documented (Bhooshan)
- Design Reviewed (Andreas/Terence/Poorna)
- Implementation
- Documentation
Requirements
The main requirements influencing these enhancements are:
- Support configurable sorting for search results. Preferably, both `sortBy` and `sortOrder` should be supported. In addition, it would be nice to support multiple combinations of `sortBy` and `sortOrder`.
- Support pagination for search results. The API should accept `offset` (defines the start position in the search results) and `limit` (defines the number of results to show at a time) parameters.
- Search queries should be able to filter results by one or more entity types.
- Metadata for every search result should include (**needs confirmation**):
  - name
  - description
  - creation time
  - version
  - entity type
  - owner
  - status - composed of statistics, current state, etc. of the entity
- Potential requirement: ability to annotate (if not filter) an entity by scope (`SYSTEM` vs `USER`)
User Stories
- As a CDAP user, I should be able to search all entities (artifacts, applications, programs, datasets, streams, views) sorted by name and/or creation time
- As a CDAP user, I should be able to paginate search results by specifying a page size. In addition, I should be able to specify the offset from where to return search results.
- As a CDAP user, I should be able to filter search results by a given entity type
Design
Alternatives
The CDAP search backend today is implemented using an `IndexedTable`. Implementing sorting and pagination on top of this implementation may be difficult and may introduce performance bottlenecks, due to multiple potential HBase scans. Also, an index would have to be stored per `sortBy` and `sortOrder` combination. An alternative is to fetch the results for the provided search query and sort them in-memory afterward, but in a big data scenario this option is not viable.
The eventual goal of CDAP is to move from the current `IndexedTable`-backed search to an external search engine. The major motivations for that are to facilitate richer search queries and full-text search. Some initial investigation of alternatives for search is at External Search and Indexing Engine Investigation. A summary of the two most viable alternatives - Apache Solr and Elasticsearch - can be found at these links:
[1] http://solr-vs-elasticsearch.com/
[2] https://thinkbiganalytics.com/solr-vs-elastic-search/
Most research indicates feature parity between the two options, although Elasticsearch seems to have better REST API and JSON support. However, given that Apache Solr is more favored in Hadoop-land (it is supported by more distributions, is the only search engine that Cloudera supports, and has support in Slider to run on YARN), it makes more sense as the first candidate for the search backend. The search backend, however, can be made pluggable (as an extension loaded in its own classloader via an SPI), so it could be swapped out for Elasticsearch in the future if users wish.
4.0 Requirements in Apache Solr
Sorting
Sorting (including multiple sort orderings) is supported in Apache Solr using the `sort` parameter.
Pagination
Pagination is supported as a combination of the `start` and `rows` parameters.
Filtering
Filtering is supported using the `fq` parameter.
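Assuming the search service simply forwards these options to Solr, all three requirements map onto one select query string. A sketch of that mapping (the collection name `cdap` and the field names `name`, `created_time`, and `entity_type` are illustrative assumptions, not the final index schema):

```python
from urllib.parse import urlencode

def build_solr_query(q, sort=None, offset=0, limit=10, entity_types=None):
    """Build a Solr select query string: sorting via `sort`,
    pagination via `start`/`rows`, filtering via `fq`."""
    params = [("q", q), ("start", offset), ("rows", limit)]
    if sort:
        # e.g. [("name", "asc"), ("created_time", "desc")]
        params.append(("sort", ",".join(f"{f} {o}" for f, o in sort)))
    if entity_types:
        # restrict results to one or more entity types
        params.append(("fq", "entity_type:(%s)" % " OR ".join(entity_types)))
    return "/solr/cdap/select?" + urlencode(params)

query = build_solr_query(
    "purchase*",
    sort=[("name", "asc"), ("created_time", "desc")],
    offset=50, limit=2,
    entity_types=["application", "datasetinstance"],
)
```

This keeps the CDAP-facing parameters (`sortBy`/`sortOrder`, `offset`, `limit`, entity-type filters) decoupled from their Solr spellings, so the mapping lives in one place.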
Running Apache Solr
Distributed Mode
Solr can be run either as a separate Twill runnable, using logic like https://github.com/lucidworks/yarn-proto/blob/master/src/main/java/org/apache/solr/cloud/yarn/SolrMaster.java, or housed inside the `DatasetOpExecutorTwillRunnable`. This decision depends on some prototyping. Solr will be configured to use HDFS for persistence.
Standalone Mode
Solr supports a standalone mode, which starts up a separate Solr process. However, we will prefer to use `EmbeddedSolrServer`, in the same process as standalone CDAP.
InMemory Mode
CDAP will use `EmbeddedSolrServer` in in-memory mode.
Data Flow
As in 3.5, there would be a call to update the index every time the metadata of an entity is updated. Unlike in 3.5, though, this call would be an HTTP call to the Search Service (running Solr in 4.0).
Note: Since this call is now an HTTP call,
- should it be asynchronous?
- it will happen outside of the transaction that updates the Metadata Dataset.
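One way to address both notes is to hand the index update to a background executor after the metadata transaction commits, so metadata writes never block on the search service. A minimal sketch with a stubbed-out HTTP call (the callback name and endpoint are assumptions, not an existing CDAP API):

```python
from concurrent.futures import ThreadPoolExecutor

# Background executor so the metadata write path never blocks on the
# Search Service; the update is fired only after the transaction commits.
index_executor = ThreadPoolExecutor(max_workers=1)

def post_index_update(entity_id, metadata):
    """Stub for the HTTP POST to the Search Service; a real implementation
    would call the (hypothetical) index-update endpoint over HTTP."""
    return ("indexed", entity_id)

def on_metadata_committed(entity_id, metadata):
    """Hypothetical post-commit hook: runs outside the Metadata Dataset
    transaction and returns a future for the asynchronous index update."""
    return index_executor.submit(post_index_update, entity_id, metadata)

future = on_metadata_committed(
    "application:default.PurchaseHistory", {"tags": ["Purchase"]})
result = future.result()
```

The trade-off is that an update can be lost if the process dies between commit and POST, which is exactly the gap the index-sync utility below is meant to close.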
Index Sync
Since the persistence stores for metadata and the search index will be different, we will need a utility to keep them in sync. This could be a service/thread that runs periodically (preferred), or a tool that is invoked manually.
Upgrade
There should be a way to upgrade existing indexes to be stored in the new Search backend. The index sync tool should be developed in a way that it can be run via the Upgrade Tool to update existing metadata in the new search backend.
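Whether run periodically or from the Upgrade Tool, the sync utility can share one reconciliation routine: diff the metadata store against the search index and re-index whatever is missing or stale. A sketch over in-memory stand-ins for the two stores (the id and version representations are assumptions):

```python
def reconcile(metadata_store, search_index):
    """Return entity ids to (re)index and ids to delete from the index.
    Both arguments map entity id -> version (or timestamp) of last update."""
    to_index = [e for e, v in metadata_store.items()
                if search_index.get(e) != v]   # missing or stale in the index
    to_delete = [e for e in search_index
                 if e not in metadata_store]   # entity no longer exists
    return to_index, to_delete

store = {"app:Purchase": 5, "ds:history": 2}   # source of truth
index = {"app:Purchase": 4, "ds:stale": 1}     # lagging search index
to_index, to_delete = reconcile(store, index)
```

For the upgrade case, the same routine run against an empty index degenerates to "index everything", which is exactly the bulk-migration behavior the Upgrade Tool needs.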
Indexes (TBD)
- What is the schema of data to be indexed in the new search backend?
REST API changes
The following changes would be made in the metadata search RESTful API:
- a `sort` parameter that specifies the sort query. It contains a comma-separated list of sort fields and sort orders, e.g. `sort=name%20asc,created_time%20desc`
- an `offset` parameter that specifies the offset into the search results. Defaults to 0.
- a `size` parameter that specifies the number of results to return, starting at the `offset`. Defaults to `Integer.MAX_VALUE`.
The response would contain two fields:
- `results` - contains the set of search results matching the search query
- `total` - the total number of matched entities. This can be used to calculate the number of pages.
TODO: Given the format of the entityId object in the search response, figure out if sorting can be applied on the entity name.
```
$ curl "http://localhost:11015/v3/namespaces/default/metadata/search?offset=50&size=2"
{
  "total": 142,
  "results": [
    {
      "entityId": {
        "id": {
          "applicationId": "PurchaseHistory",
          "namespace": { "id": "default" }
        },
        "type": "application"
      },
      "metadata": {
        "SYSTEM": {
          "properties": {
            "Flow:PurchaseFlow": "PurchaseFlow",
            "MapReduce:PurchaseHistoryBuilder": "PurchaseHistoryBuilder"
          },
          "tags": [ "Purchase", "PurchaseHistory" ]
        }
      }
    },
    {
      "entityId": {
        "id": {
          "instanceId": "history",
          "namespace": { "id": "default" }
        },
        "type": "datasetinstance"
      },
      "metadata": {
        "SYSTEM": {
          "properties": {
            "type": "co.cask.cdap.examples.purchase.PurchaseHistoryStore"
          },
          "tags": [ "history", "explore", "batch" ]
        }
      }
    }
  ]
}
```
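The `total` field lets a client derive its page count from the page size it chose; for the response above (a sketch, using ceiling division):

```python
def page_count(total, size):
    """Number of pages needed to show `total` results, `size` per page."""
    return (total + size - 1) // size  # ceiling division

def page_offset(page, size):
    """`offset` parameter for a 0-indexed page number."""
    return page * size

pages = page_count(142, 2)   # 142 matches, 2 per page
offset = page_offset(25, 2)  # the offset that requests page 25
```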
Status of an Entity
Along with showing the metadata of an entity (name, description, tags, properties, etc.), one of the requirements for the home page is to also show a brief 'status' for every entity, which is a summary of statistics and metrics. For each entity type, status should surface:
- Artifact: # apps, # extensions, # plugins
- Application: total # programs, # running, # stopped
- Program
- Dataset: Read Rate, Write Rate, # apps using it
- Stream: Read Rate, Write Rate, # apps connected to it, # stream views created
- Stream View: Read Rate, Write Rate, # apps connected to it
This information will not be surfaced from the metadata system. The UI will potentially have to make separate calls to:
1. Metrics APIs, for the Read Rate and Write Rate
2. the Usage Registry, for apps using datasets, streams, and stream views
3. App Fabric APIs, for the remaining information from App Fabric
For 2 and 3, an alternative could be to provide a UI-only (undocumented) batch endpoint.