Preview Mode

Goals

Checklist

  • User stories documented (Albert/Vinisha)
  • User stories reviewed (Nitin)
  • Design documented (Albert/Vinisha)
  • Design reviewed (Terence/Andreas)
  • Feature merged ()
  • Examples and guides ()
  • Integration tests () 
  • Documentation for feature ()
  • Blog post

Terminology


Pipeline Developer - Somebody who creates pipelines through the UI or REST API. May be a business analyst, researcher, plugin developer, etc.

Stage - a node in the pipeline. Corresponds to a plugin instance.

Phase - one or more stages in the pipeline. Corresponds to a node in the physical workflow.

Validation - static checks for plugin configuration

Debug - actually running plugin code on example input

Use Cases

  1. A pipeline developer is configuring a JavaScript transform and wants to make sure the logic is correct.  The developer fills in all the configuration settings, provides some input, and wants to see the corresponding output, or any errors that happened.
  2. A pipeline developer has configured several stages and connected them together. The developer wants to see the input and output (or errors) at each stage.
  3. A pipeline developer is using the database plugin and does not know whether the given JDBC connection string is correct. The syntax might be wrong, or the host, port, or credentials might be wrong, and the developer wants to check these things before publishing and running the pipeline.

User Stories

  1. As a pipeline developer, I want to be able to see what a pipeline stage would output given the config for the stage and input to the stage
  2. As a pipeline developer, I want to be able to debug a single stage by providing config and input to see the output
  3. As a pipeline developer, I want to be able to see a message describing the cause of any errors encountered while debugging 
  4. As a pipeline developer, I want to be able to validate specific fields of a plugin, so that I can debug without publishing the pipeline
  5. As a pipeline developer, I want to be able to manually provide the input records for my pipeline while debugging

Approach (WIP)

  1. Preview application deployment:

    Application deployment steps, with the changes to the persistent store and the behavior of the regular application vs. the proposed preview application:

    REST Endpoint
      Regular application: PUT "/apps/{app-id}"
      Preview application: PUT "/apps/{app-id}/preview"

    Application id used
      Regular application: As specified by app-id.
      Preview application: Generate a new application id with the name app_id_preview. Note that a user application may already have _preview as a suffix; we will still append _preview to it.

    Enforce that the current principal has write access to the namespace
      Preview application: Should be similar to the regular app, since it will be deployed in the same namespace as the regular app.

    Application class name
      Regular application: co.cask.cdap.datapipeline.DataPipelineApp
      Preview application: co.cask.cdap.datapipeline.DataPipelineApp

    Config json
      Preview application: the preview configuration described below.

    Preview configuration will have three additional configuration parameters -

    {
      "name" : "mypipeline_preview",
      "previewMode" : "ON",
      "previewRecords": "10"
      "previewStageName": "" 
      ...
    }

    The previewStageName configuration can be the name of the pipeline, in which case the pipeline created so far will be executed, or it can be the name of a stage in the pipeline, in which case only that single stage will be executed.
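
    For example (illustrative only; the values below reuse the sample configuration above, and the exact accepted values are still to be decided):

    {
      "name" : "mypipeline_preview",
      "previewMode" : "ON",
      "previewRecords" : "10",
      "previewStageName" : "mypipeline"      // run the pipeline created so far
    }

    {
      "name" : "mypipeline_preview",
      "previewMode" : "ON",
      "previewRecords" : "10",
      "previewStageName" : "MyCSVParser"     // run only this single stage
    }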

    LocalArtifactLoaderStage - responsible for calling the Application.configure() method.
      Regular application: DataPipelineApp.configure() gets called to generate the pipeline specifications. Before the specifications are generated, the configurations are validated by the BatchPipelineSpecGenerator class:
        1. Each stage has a valid configuration for name and plugin type.
        2. The pipeline has at least one stage.
        3. All stages have unique names.
        4. The from and to of every connection refer to actual stage names.
        5. Sources do not have incoming connections and sinks do not have outgoing connections.
      (Where does this validation happen when the pipeline contains only a source or only a sink?)
      Preview application (proposed): For preview mode we might want to add a BatchPreviewPipelineSpecGenerator class which overrides the default validation behavior. What should be validated:
        1. There should be at least one source.
        2. All connections should be valid.
        3. It is OK if the pipeline does not contain a sink.
      The ideal way to use preview is to first drag in a source and preview it, then add a transform to the pipeline and preview again. If there are issues in the transform, run the preview on the transform stage only.
    ApplicationVerificationStage - responsible for verifying the application specs.
      Preview application: Behavior should be similar to the regular application.
    DeployDatasetModulesStage
      Regular application: Nothing to do here.
      Preview application: Nothing to do here.
    CreateDatasetInstanceStage - responsible for instantiating the datasets used by the application.
      Regular application: Datasets are created.
      Preview application: We will still create the datasets specified in the application configuration, and we will also have to create the preview dataset. (TODO: determine who creates datasets; for example, the Table dataset creates one.) It is advisable that users do not use a production table as the sink in preview mode. We will also create one special fileset dataset responsible for storing the preview data. Whenever the pipeline is run in preview mode, each record will be appended to this fileset dataset before being transferred to the next stage. (TODO: the exact details of how the data will be stored in the fileset need to be figured out.) The reason for selecting a fileset dataset is that, since it will be written from the MapReduce program, writes to a Table dataset would happen in the same transaction as the MapReduce program; if the MapReduce program fails for some reason, the records in the preview dataset would not be visible.
    CreateStreamStage - responsible for creating the streams used in the application.
      Regular application: Streams used by the application are created.
      Preview application: Behavior would be similar to the regular application.
    DeletedProgramHandlerStage - seems like this stage just stops the currently running program. There is special handling for FLOW, but that would not be applicable in the pipeline case.
      Regular application: Nothing to do here.
      Preview application: Nothing to do here.
    ProgramGenerationStage - receives the ApplicationDeployable instance as input and emits ApplicationWithPrograms as output. ApplicationWithPrograms contains ProgramDescriptors, which hold the program ids.
      Preview application: Behavior should be similar to the regular application.
    ApplicationRegistrationStage - responsible for storing the application specification in the metadata table and updating the usage registry for the program. (TODO: not sure why we need to update the registry, especially when programs use datasets dynamically.)
      Regular application: Application specifications are stored in the app meta table, and the usage registry dataset gets updated with the program's dataset usage.
      Preview application: Behavior should be similar to the regular application.
    CreateScheduleStage
      Regular application: The schedule store dataset gets updated from the schedules in the application.
      Preview application: Should be skipped.
    SystemMetadataWriterStage
      Regular application: Program metadata (program id and program specification) is written to the dataset.
      Preview application: Should be skipped, but we need to check that the metadata does get registered when the application is actually published.
  2. Preview application runtime:

    Application runtime stages and the persistent stores they read from or write to (the regular vs. preview application behavior is still to be filled in):

    REST endpoint to start program - Preferences are read from the preference store; the application specification is fetched from the store to get the ProgramDescriptor.
    Program lifecycle changes - The run record for the program gets updated in the app metadata.
    Workflow execution - The WorkflowToken gets stored in the metadata store. Workflow node states get stored in the metadata store. Local datasets may get created by the Workflow runtime.
    Workflow statistics - The workflow statistics dataset gets updated with the workflow runtime information.
    Lineage - Lineage information for the dataset accesses is written to the lineage dataset.
    Logging context - Logs are appended to Kafka (cluster) or a file (SDK).
    Metrics - Metrics data is persisted in LevelDB or HBase.
  3. Applications also write to the user datasets.

Meeting notes from Friday, June 3 2016

  1. In preview mode we write the records to the preview dataset. If the mapper dies for some reason, the records already written to the preview dataset will not get cleaned up.
  2. We should make it clear to the user that in preview mode it is not advisable to write to the actual production sink.
  3. When would the preview datasets be cleaned up? If we decide to clean them up after publishing the pipeline, what if the user decides not to publish the pipeline?
  4. What if multiple users are doing preview on the SDK? In order to isolate writes to the preview dataset for the same pipeline from multiple users, we could use some sort of session id. However, ideally a single SDK should not be shared by multiple users.
  5. Think about how the platform will expose a way to view preview data.
  6. Creating streams and datasets in preview?
  7. For logs we will need a different appender.
  8. A pipeline preview may write to the actual user datasets used in the pipeline as well as to the preview datasets. Should lineage information be written for both the actual datasets and the preview datasets?
  9. Is it possible to use a separate in-memory injector for the preview process, so that preview data is stored in memory and does not affect the actual metadata? Is it possible to write the data to a different directory while in preview mode? In that case, how will it be able to access the datasets stored in the directory configured by "local.data.dir"?
  10. How does the application communicate back to the UI, for example through something like a preview collector interface?
  11. REST endpoints to start the preview and to collect the preview data.
  12. The design should be easy to extend to cluster mode and to add more features such as schema propagation, where we will simply call pipeline.configure() and return the data.

Action Items:

  1. Try using a separate injector for the preview mode.
  2. Come up with the REST endpoints and a mechanism for the preview and for preview data collection.

REST Endpoints:

  1. To start the preview for an application: 
    Request Method and Endpoint 

    POST /v3/namespaces/{namespace-id}/preview
    where namespace-id is the name of the namespace
    The response will contain a CDAP-generated unique preview-id, which can be used later to get the preview data.
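
    A possible response shape is sketched below; the field name "previewId" and the value are assumptions, since only the presence of a CDAP-generated unique preview-id is specified.

    {
        "previewId": "preview-7b6a0f3e"
    }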

    The request body will contain the application configuration along with a few additional configs in the preview section.

    {
        "artifact":{ 
          "name":"cdap-data-pipeline",
          "version":"3.5.0-SNAPSHOT",
          "scope":"SYSTEM"
        },
        "name":"MyPipeline",   
        "config":{
            "connections":[ 
             { 
                "from":"FTP",
                "to":"CSVParser"
             },
             { 
                "from":"CSVParser",
                "to":"Table"
             }
            ],
            "stages":[ 
             { 
                "name":"FTP",
                "plugin":{ 
                   "name":"FTP",
                   "type":"batchsource",
                   "label":"FTP",
                   "artifact":{ 
                      "name":"core-plugins",
                      "version":"1.4.0-SNAPSHOT",
                      "scope":"SYSTEM"
                   },
                   "properties":{ 
                      "referenceName":"myfile",
                      "path":"/tmp/myfile"
                   }
                },
                "outputSchema":"{\"fields\":[{\"name\":\"offset\",\"type\":\"long\"},{\"name\":\"body\",\"type\":\"string\"}]}"
             },
             { 
                "name":"MyCSVParser",
                "plugin":{ 
                   "name":"CSVParser",
                   "type":"transform",
                   "label":"CSVParser",
                   "artifact":{ 
                      "name":"transform-plugins",
                      "version":"1.4.0-SNAPSHOT",
                      "scope":"SYSTEM"
                   },
                   "properties":{ 
                      "format":"DEFAULT",
                      "schema":"{\"type\":\"record\",\"name\":\"etlSchemaBody\",\"fields\":[{\"name\":\"id\",\"type\":\"int\"},{\"name\":\"name\",\"type\":\"string\"}]}",
                      "field":"body"
                   }
                },
                "outputSchema":"{\"type\":\"record\",\"name\":\"etlSchemaBody\",\"fields\":[{\"name\":\"id\",\"type\":\"int\"},{\"name\":\"name\",\"type\":\"string\"}]}"
             },
             { 
                "name":"MyTable",
                "plugin":{ 
                   "name":"Table",
                   "type":"batchsink",
                   "label":"Table",
                   "artifact":{ 
                      "name":"core-plugins",
                      "version":"1.4.0-SNAPSHOT",
                      "scope":"SYSTEM"
                   },
                   "properties":{ 
                      "schema":"{\"type\":\"record\",\"name\":\"etlSchemaBody\",\"fields\":[{\"name\":\"id\",\"type\":\"int\"},{\"name\":\"name\",\"type\":\"string\"}]}",
                      "name":"mytable",
                      "schema.row.field":"id"
                   }
                },
                "outputSchema":"{\"type\":\"record\",\"name\":\"etlSchemaBody\",\"fields\":[{\"name\":\"id\",\"type\":\"int\"},{\"name\":\"name\",\"type\":\"string\"}]}",
                "inputSchema":[ 
                   { 
                      "name":"id",
                      "type":"int",
                      "nullable":false
                   },
                   { 
                      "name":"name",
                      "type":"string",
                      "nullable":false
                   }
                ]
             }
          ],
           "preview": {
                   "startStages": ["MyCSVParser"],
    			   "endStages": ["MyTable"],
                   "useSinks": ["MyTable"],
                   "outputs": {
    								"FTP": {
    										"data":	[
                                     			{"offset": 1, "body": "100,bob"},
                                     			{"offset": 2, "body": "200,rob"},
                                     			{"offset": 3, "body": "300,tom"}
                                  			],
    										"schema": {
    													"type" : "record",
    													"fields": [
    																{"name":"offset","type":"long"},
    															  	{"name":"body","type":"string"}
    															 ]
    												  }
    								}
    						}	
    			}
     
        }
    }

    a.  Simple pipeline

    Consider a simple pipeline represented by the following connections.
    
    (FTP)-------->(CSV Parser)-------->(Table)
     
    CASE 1: To preview the entire pipeline:
     
    "preview": {
                   "startStages": ["FTP"],
    			   "endStages": ["Table"],
                   "useSinks": ["Table"],
    			   "numRecords": 10,
    			   "programName": "SmartWorkflow",  // The program to be previewed
    			   "programType": "workflow"		// The program type to be previewed	
    			}
     
    CASE 2: To preview a section of the pipeline: (CSV Parser)-------->(Table)
     
    "preview": {
                   "startStages": ["CSVParser"],
    			   "endStages": ["Table"],
                   "useSinks": ["Table"],
                   "outputs": {
    								"FTP": {
    										"data":	[
                                     			{"offset": 1, "body": "100,bob"},
                                     			{"offset": 2, "body": "200,rob"},
                                     			{"offset": 3, "body": "300,tom"}
                                  			],
    										"schema": {
    													"type" : "record",
    													"fields": [
    																{"name":"offset","type":"long"},
    															  	{"name":"body","type":"string"}
    															 ]
    												  }
    								}
    						}	
    			}
     
    CASE 3: To preview only a single stage (CSV Parser) in the pipeline:
     
    "preview": {
                   "startStages": ["CSV Parser"],
    			   "endStages": ["CSV Parser"],
                   "outputs": {
    								"FTP": {
    										"data":	[
                                     			{"offset": 1, "body": "100,bob"},
                                     			{"offset": 2, "body": "200,rob"},
                                     			{"offset": 3, "body": "300,tom"}
                                  			],
    										"schema": {
    													"type" : "record",
    													"fields": [
    																{"name":"offset","type":"long"},
    															  	{"name":"body","type":"string"}
    															 ]
    												  }
    								}
    						}	
    			}
     
    CASE 4: To verify if records are read correctly from FTP:
    "preview": {
                   "startStages": ["FTP"],
    			   "endStages": ["FTP"],
    			   "numOfRecords": 10
    			}
     
    CASE 5: To verify the data is getting written to Table properly:
    "preview": {
                   "startStages": ["Table"],
    			   "endStages": ["Table"],
                   "useSinks": ["Table"],
                   "outputs": {
    								"CSV Parser": {
    										"data":	[
                                     			{"id": 1, "name": "bob"},
                                     			{"id": 2, "name": "rob"},
                                     			{"id": 3, "name": "tom"}
                                  			],
    										"schema": {
    													"type" : "record",
    													"fields": [
    																{"name":"id","type":"long"},
    															  	{"name":"name","type":"string"}
    															 ]
    												  }
    								}
    						}	
    			}

    b. Fork in the pipeline (multiple sinks)

    Consider the following pipeline:
     
    (S3 Source)--------->(Log Parser)--------->(Group By Aggregator)--------->(Python Evaluator)--------->(Aggregated Result)
         |
         --------->(Javascript Transform)--------->(Raw Data)
     
    CASE 1: To preview the entire pipeline
    "preview": {
                   "startStages": ["S3 Source"],
    			   "endStages": ["Aggregated Result", "Raw Data"],
                   "useSinks": ["Aggregated Result", "Raw Data"], // useSinks seem redundant as endStages is there which can control till what point the pipeline need to run
    			   "numOfRecords": 10
    			}
     
    CASE 2: To mock the source
    "preview": {
                   "startStages": ["Log Parser", "Javascript Transform"],
    			   "endStages": ["Aggregated Result", "Raw Data"],
                   "useSinks": ["Aggregated Result", "Raw Data"],
                   "outputs": {
    								"S3 Source": {
    										"data":	[
    												"127.0.0.1 - frank [10/Oct/2000:13:55:36 -0800] GET /apache_pb.gif HTTP/1.0 200 2326",
    												"127.0.0.1 - bob [10/Oct/2000:14:55:36 -0710] GET /apache_pb.gif HTTP/1.0 200 2326",
    												"127.0.0.1 - tom [10/Oct/2000:23:55:36 -0920] GET /apache_pb.gif HTTP/1.0 200 2326"
                                  			],
    										"schema": {
    													"type" : "record",
    													"fields": [
    															  	{"name":"log_line","type":"string"}
    															 ]
    												  }
    								}
    						}	
    			}
     
     
    CASE 3: To preview the section of the pipeline (Log Parser)--------->(Group By Aggregator)--------->(Python Evaluator)
    "preview": {
                   "startStages": ["Log Parser"],
    			   "endStages": ["Aggregated Result"],
                   "useSinks": ["Aggregated Result"],
                   "outputs": {
    								"S3 Source": {
    										"data":	[
    												"127.0.0.1 - frank [10/Oct/2000:13:55:36 -0800] GET /apache_pb.gif HTTP/1.0 200 2326",
    												"127.0.0.1 - bob [10/Oct/2000:14:55:36 -0710] GET /apache_pb.gif HTTP/1.0 200 2326",
    												"127.0.0.1 - tom [10/Oct/2000:23:55:36 -0920] GET /apache_pb.gif HTTP/1.0 200 2326"
                                  			],
    										"schema": {
    													"type" : "record",
    													"fields": [
    															  	{"name":"log_line","type":"string"}
    															 ]
    												  }
    								}
    						}	
    			}
    
    CASE 4: To preview the single stage Python Evaluator
    "preview": {
                   "startStages": ["Python Evaluator"],
    			   "endStages": ["Python Evaluator"],
                   "outputs": {
    								"Group By Aggregator": {
    									"data": [
    												{"ip":"127.0.0.1", "counts":3},
    												{"ip":"127.0.0.2", "counts":4},
    												{"ip":"127.0.0.3", "counts":5},
    												{"ip":"127.0.0.4", "counts":6},
    						 					],
    									"schema": {
    													"type" : "record",
    													"fields": [
    															  	{"name":"ip","type":"string"},
    															  	{"name":"counts","type":"long"}
    															 ]
    									}
    								}
    						}	
    			}
    
    

    c. Join in the pipeline (multiple sources)

    Consider the following pipeline:
    (Database)--------->(Python Evaluator)--------->|
                                                    |------------>(Join)-------->(Projection)------->(HBase Sink)
    (FTP)--------->(CSV Parser)-------------------->|
     
     
    CASE 1: To preview the entire pipeline
     
    "preview": {
                   "startStages": ["Database", "FTP"],
    			   "endStages": ["HBase Sink"],
    			   "useSinks": ["HBase Sink"],	
    			   "numOfRecords": 10
    			}
    
    
    CASE 2: To mock both sources
    "preview": {
                   "startStages": ["Python Evaluator", "CSV Parser"],
    			   "endStages": ["HBase Sink"],
    			   "useSinks": ["HBase Sink"],	
                   "outputs": {
    								"Database": {
    									"data": [
    												{"name":"tom", "counts":3},
    												{"name":"bob", "counts":4},
    												{"name":"rob", "counts":5},
    												{"name":"milo", "counts":6}
    						 					],
    									"schema": {
    													"type" : "record",
    													"fields": [
    															  	{"name":"name","type":"string"},
    															  	{"name":"counts","type":"long"}
    															 ]
    									}
    								},
     
    								"FTP": {
    									"data": [
    												{"offset":1, "body":"tom,100"},
    												{"offset":2, "body":"bob,200"},
    												{"offset":3, "body":"rob,300"},
    												{"offset":4, "body":"milo,400"}
    						 					],
    									"schema": {
    													"fields": [
    															  	{"name":"name","type":"string"},
    															  	{"name":"offset","type":"long"}
    															 ]
    									}
    								}
    						}	
    			}
     
     
    CASE 3: To preview only the Join transform
    "preview": {
               "startStages": ["Join"],
               "endStages": ["Join"],
                   "outputs": {
    								"Python Evaluator": {
    									"data": [
    												{"name":"tom", "counts":3},
    												{"name":"bob", "counts":4},
    												{"name":"rob", "counts":5},
    												{"name":"milo", "counts":6}
    						 					],
    									"schema": {
    													"type" : "record",
    													"fields": [
    															  	{"name":"name","type":"string"},
    															  	{"name":"counts","type":"long"}
    															 ]
    									}
    								},
    
    								"CSV Parser": {
    									"data": [
    												{"offset":1, "body":"tom,100"},
    												{"offset":2, "body":"bob,200"},
    												{"offset":3, "body":"rob,300"},
    												{"offset":4, "body":"milo,400"}
    						 					],
    									"schema": {
    													"fields": [
    															  	{"name":"name","type":"string"},
    															  	{"name":"offset","type":"long"}
    															 ]
    									}
    								}
    						}	
    			}
    
    

    d. Preview for a single stage (TBD)

    Consider a pipeline containing only one transform, which has no connections yet:
     
    (Javascript Transform)
    
    The preview configuration can be provided as:
     
    "preview": {
                   "startStages": ["Javascript Transform"],
    			   "endStages": ["Javascript Transform"],
                   "outputs": {
    								"MockSource": {
    
    									"data": [
    												{"name":"tom", "id":3},
    												{"name":"bob", "id":4},
    												{"name":"rob", "id":5},
    												{"name":"milo", "id":6}
    						 					],
    									"schema": {
    													"type" : "record",
    													"fields": [
    															  	{"name":"name","type":"string"},
    															  	{"name":"id","type":"long"}
    															 ]
    									}
    								}
    						}	
    			}
    
  2. How to specify the input data: The user can specify the input data for the preview either by inserting the data directly in table format in the UI or by uploading a file containing the records.
    When the data is inserted in table format, the UI will convert it into appropriate JSON records.
    When the user decides to upload a file, they can upload a JSON file conforming to the schema of the next stage. Ideally we should allow uploading a CSV file as well; however, how to interpret the data will be plugin dependent. For example, consider a list of CSV records: for the CSVParser plugin, the entire record is treated as the body field, whereas for the Table sink we have to split the record on commas to create the multiple fields specified by the next stage's input schema, as sketched below.
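
    A minimal illustration, reusing the field names from the sample pipeline above (the exact conversion rules are an open design point):

    // Uploaded CSV line:
    //   100,bob

    // Interpreted as input to the CSVParser plugin: the whole line becomes the body field.
    {"offset": 1, "body": "100,bob"}

    // Interpreted as input to the Table sink: the line is split on commas to match the sink's input schema.
    {"id": 100, "name": "bob"}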

  3. Once the preview is started, a unique preview-id will be generated for it. The runtime information (<preview-id, STATUS>) for the preview will be generated and stored (in memory or on disk).

  4. Once the preview execution is complete, its runtime information will be updated with the status of the preview (COMPLETED or FAILED).
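
    A minimal sketch of such a runtime entry, assuming a JSON representation (the field names are assumptions; only the preview-id and the status values are specified above):

    {
        "preview-id": "preview-7b6a0f3e",
        "status": "COMPLETED"
    }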

  5. To get the status of the preview
    Request Method and Endpoint

    GET /v3/namespaces/{namespace-id}/previews/{preview-id}/status
    where namespace-id is the name of the namespace
    	  preview-id is the id of the preview for which status is to be requested

    The response body will contain the JSON-encoded preview status and an optional message if the preview failed.

    1. If preview is RUNNING
    {
    	"status": "RUNNING" 
    }
    
    2. If preview is COMPLETED
    {
    	"status": "COMPLETED" 
    }
    
    3. If preview application deployment FAILED
    {
    	"status": "DEPLOY_FAILED",
    	"failureMessage": "Exception message explaining the failure"
    }
     
    4. If preview application FAILED during execution of the stages
    {
    	"status": "RUNTIME_FAILED",
        "failureMessage": "Failure message"
        /*
        "stages": {
    		[
    			"stage_1": {
    				"numOfInputRecords": 10,
    				"numOfOutputRecords": 10
    			},
    			"stage_2": {
    				"numOfInputRecords": 10,
    				"numOfOutputRecords": 7
    			},
    			"stage_3": {
    				"numOfInputRecords": 7,
    				"numOfOutputRecords": 4,
    				"errorMessage": "Failure reason for the stage"
    			}
    		]
    	} */
    }
  6. To get the preview data for a stage:
    Request Method and Endpoint

    GET /v3/namespaces/{namespace-id}/previews/{preview-id}/stages/{stage-id}
    where namespace-id is the name of the namespace
    	  preview-id is the id of the preview for which data is to be requested
          stage-id is the unique name used to identify the emitter

    The response body will contain the JSON-encoded input and output data for the emitter, as well as the input and output schemas.

    {
        "inputData": [
            {"first_name": "rob", "zipcode": 95131},
            {"first_name": "bob", "zipcode": 95054},
            {"first_name": "tom", "zipcode": 94306}
        ],
        "outputData": [
            {"name": "rob", "zipcode": 95131, "age": 21},
            {"name": "bob", "zipcode": 95054, "age": 22},
            {"name": "tom", "zipcode": 94306, "age": 23}
        ],
        "inputSchema": {
            "type": "record",
            "name": "etlSchemaBody",
            "fields": [
                {"name": "first_name", "type": "string"},
                {"name": "zipcode", "type": "int"}
            ]
        },
        "outputSchema": {
            "type": "record",
            "name": "etlSchemaBody",
            "fields": [
                {"name": "name", "type": "string"},
                {"name": "zipcode", "type": "int"},
                {"name": "age", "type": "int"}
            ]
        },
        "errorRecordSchema": {
            "type": "record",
            "name": "schemaBody",
            "fields": [
                {"name": "errCode", "type": "int"},
                {"name": "errMsg", "type": "string"},
                {"name": "invalidRecord", "type": "string"}
            ]
        },
        "errorRecords": [
            {
                "errCode": 12,
                "errMsg": "Invalid record",
                "invalidRecord": "{\"offset\":3,\"body\":\"This error record is not comma separated!\"}"
            }
        ],

        // Per-stage logs may not make sense; will need to think more about this.
        "logs": {
            "stage level logs"
        }
    }
  7. To get the logs/metrics for the preview:
    Request Method and Endpoint

    GET /v3/namespaces/{namespace-id}/previews/{preview-id}/logs
    where namespace-id is the name of the namespace
    	  preview-id is the id of the preview for which data is to be requested
    The logs endpoint returns the entire preview logs.
    Sample response for the logs endpoint; note that it is similar to the regular app.
    [  
       {  
          "log":"This is sample log - 0",
          "offset":"0.1466470037680"
       },
       {  
          "log":"This is sample log - 1",
          "offset":"1.1466470037680"
       },
       {  
          "log":"This is sample log - 2",
          "offset":"2.1466470037680"
       },
       {  
          "log":"This is sample log - 3",
          "offset":"3.1466470037680"
       },
       {  
          "log":"This is sample log - 4",
          "offset":"4.1466470037680"
       }
    ]
     
    GET /v3/namespaces/{namespace-id}/previews/{preview-id}/metrics
    where namespace-id is the name of the namespace
    	  preview-id is the id of the preview for which data is to be requested
     
    Sample response for the metrics. Note that it is similar to the regular app.
    {
        "endTime": 1466469538,
        "resolution": "2147483647s",
        "series": [
            {
                "data": [
                    {
                        "time": 0,
                        "value": 4
                    }
                ],
                "grouping": {},
                "metricName": "user.Projection.records.out"
            },
            {
                "data": [
                    {
                        "time": 0,
                        "value": 4
                    }
                ],
                "grouping": {},
                "metricName": "user.Projection.records.in"
            },
            {
                "data": [
                    {
                        "time": 0,
                        "value": 4
                    }
                ],
                "grouping": {},
                "metricName": "user.Stream.records.out"
            },
            {
                "data": [
                    {
                        "time": 0,
                        "value": 4
                    }
                ],
                "grouping": {},
                "metricName": "user.JavaScript.records.in"
            },
            {
                "data": [
                    {
                        "time": 0,
                        "value": 4
                    }
                ],
                "grouping": {},
                "metricName": "user.Stream.records.in"
            }
        ],
        "startTime": 0
    }

    Response would be similar to the regular app.


Open Questions:

  1. How to make it easy for the user to upload the CSV file?
  2. Lookup data is a user dataset. Should we allow mocking of the lookup dataset as well?