
Requirements

      • CDAP exposes an API for developers to build their own plugins for parsing data in a Stream.
      • Developers should be able to build their own parsers, using the CDAP-provided API, for parsing events in a stream.
      • Developers/Operations should then be able to deploy the implemented parser into a directory, along with a configuration.
      • Users should specify, at minimum, a name and description for the plugin in the configuration.
      • Users should be able to list the available plugins using the REST API / CLI.
      • Users should be able to view, using the REST API / CLI, the pre-defined schema of a plugin, if the plugin defines one.
      • Users should be able to list the views associated with a Stream using the REST API / CLI / UI.
      • Users should be able to apply a plugin to a Stream and create a view.
      • The user-specified view name should be registered in a catalog, allowing one to query (via SQL) using the view name.
      • Users should be able to apply different plugins to the same Stream, creating different views.
      • Users should be able to change the plugin associated with a view.
      • CDAP should provide a text wrangler plugin that allows one to create rules for parsing mostly-text files.

Overview

      1. Pluggable stream record formats (the format in which data is read from a stream, which is different from the format in which files are written to a stream)
        1. Expose cdap-spi module that contains StreamEventRecordFormat abstract class
        2. Each StreamEventRecordFormat will be associated with a simple name (e.g. grok, clf, avro)
        3. "system" record formats will come from within the CDAP codebase (grok, clf, avro)
        4. "user" record formats will be loaded from jars in a certain directory containing SPI jars
          1. In a later revision, this may be namespaced and/or managed via an HTTP API
      2. Stream views
        1. A stream view is an explorable view (Hive table) of a stream, with a particular record format
        2. A stream may have multiple views
        3. Upon creating a stream, the stream will have a default view
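The name-to-format lookup described above can be sketched as follows. This is only an illustration: CDAP's actual SPI is Java, and the class and registry names here are invented for the sketch.

```python
# Illustrative sketch (not CDAP's actual Java SPI): a registry that maps a
# simple name (e.g. "csv", "grok", "clf", "avro") to a record-format class,
# mirroring the "system" vs. "user" format loading described above.

class StreamEventRecordFormat:
    """Base class: turns a raw stream event body into a structured record."""
    name = None

    def read(self, body):
        raise NotImplementedError

class CsvFormat(StreamEventRecordFormat):
    name = "csv"

    def read(self, body):
        return body.split(",")

# "system" formats ship inside the CDAP codebase; grok, clf, avro would go here.
SYSTEM_FORMATS = {cls.name: cls for cls in [CsvFormat]}
# "user" formats would be populated by scanning a directory of SPI jars.
USER_FORMATS = {}

def lookup_format(name):
    cls = SYSTEM_FORMATS.get(name) or USER_FORMATS.get(name)
    if cls is None:
        raise ValueError("unknown record format: %s" % name)
    return cls()
```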

Stream View HTTP API

Changes to existing APIs

Path | Request | Response | Notes
PUT /v3/namespaces/<namespace>/streams/<stream> | | | Instead of creating a Hive table with a default record format, this will create a "default" view with a default record format.
DELETE /v3/namespaces/<namespace>/streams/<stream> | | | This will delete all views associated with the stream.
POST /v3/namespaces/<namespace>/streams/properties | | | The "format" field will be considered deprecated. If a format is given, this modifies the default view for backwards compatibility. Open question: how should the user be notified that the "format" field is deprecated?
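The backwards-compatibility rule for the deprecated "format" field might behave roughly like this sketch; the views dict and warnings list are illustrative stand-ins for CDAP's internal state, not its real API.

```python
# Hedged sketch of the backwards-compatibility rule above: if a stream
# properties request still carries the deprecated "format" field, apply it
# to the stream's "default" view and warn the caller.

def apply_stream_properties(views, properties, warnings):
    # "views" maps view name -> view config for one stream (illustrative).
    fmt = properties.get("format")
    if fmt is not None:
        warnings.append('"format" is deprecated; use the stream view API instead')
        views.setdefault("default", {})["format"] = fmt
    return views
```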

New APIs

PUT /v3/namespaces/<namespace>/views/stream/<view>
Request:
{
  "stream": "stream1",
  "format": <same as before>
}
Creates or modifies a view.

GET /v3/namespaces/<namespace>/views/stream/<view>
Response:
{"id": "someView", "stream": "stream1", "format": ...}
Gets the details of an individual view.

GET /v3/namespaces/<namespace>/views/stream
Lists all views.

DELETE /v3/namespaces/<namespace>/views/stream/<view>
Deletes a view.

GET /v3/namespaces/<namespace>/streams/<stream>/views
Response:
[
  {"id": "someView", "stream": "stream1", "format": ...},
  {"id": "otherView", "stream": "stream2", "format": ...}
]
Lists all views for a stream.
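As a rough illustration, the paths in the table above could be built like this. The helper names are hypothetical, and the plural "streams" segment in the per-stream listing path is assumed to match the other stream endpoints.

```python
# Illustrative helpers that build the view-API paths listed above; no HTTP
# client is included, and the endpoint shapes simply restate the table.

BASE = "/v3/namespaces/{ns}"

def view_path(ns, view=None):
    # PUT/GET/DELETE a single view, or GET the list of all views.
    path = BASE.format(ns=ns) + "/views/stream"
    return path + "/" + view if view else path

def stream_views_path(ns, stream):
    # GET all views for one stream ("streams" segment assumed plural,
    # matching the other stream endpoints).
    return BASE.format(ns=ns) + "/streams/" + stream + "/views"
```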

Notes

      • If Explore is disabled, should all stream view APIs also be disabled? (open question)
      • Existing Hive queries must not be affected by the deletion or modification of any stream views they may be using

 

Sample CLI Flow

    1. User wants to create a stream "stream1" that contains CSV data and read it via Explore through two views, "view1" and "view2".
      1. create stream stream1
      2. send stream stream1 "a,b,c"
        send stream stream1 "d,e,f" 
      3. execute "select * from stream_stream1" // this is the default table, will be deprecated and later removed

        body
        a,b,c
        d,e,f
      4. create stream-view stream1 view1 format csv "ticker string, num_traded int, price double"
      5. execute "select * from view_view1"

        ticker  num_traded  price
        a       b           c
        d       e           f
      6. create stream-view stream1 view2 format csv "drop=num_traded"

      7. execute "select * from view_view2"

        ticker  price
        a       c
        d       f
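The two views in the flow above effectively compute the following. This is a minimal sketch, assuming a hypothetical "drop" setting that removes one column from the CSV schema; it is not the CDAP implementation.

```python
# Sketch of what view1 (full CSV schema) and view2 (drop=num_traded) compute
# over the events sent in the flow above. "drop" is a hypothetical setting.

def read_view(events, columns, drop=None):
    keep = [c for c in columns if c != drop]
    rows = []
    for body in events:
        values = dict(zip(columns, body.split(",")))
        rows.append(tuple(values[c] for c in keep))
    return keep, rows

events = ["a,b,c", "d,e,f"]
cols = ["ticker", "num_traded", "price"]
# view1: read_view(events, cols)                      -> all three columns
# view2: read_view(events, cols, drop="num_traded")   -> ticker and price only
```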