This document summarizes some basic use-cases of CDAP and the basic testing to be done from the UI before creating a PR.
Functional Tests:
Use-Case - 1: How a Purchase is tracked and processed
This use-case walks through the dev section of the UI to test how a purchase history app is supposed to be used.
Testing Flow:
- Deploy the Purchase history app
- Go to the app's detailed view
- Go to the Purchase Flow
- Start the Flow
- Inject events into the flow's stream from the UI - the count of events on the stream flowlet should update (a command-line alternative is sketched after this list)
- See if the events flow through all the flowlets and reach the collector.
- Stop the flow
- Go to the Datasets tab - it should show the app's datasets
- Go to History - it should show the run we just started
- Go to the Purchases dataset - the status page should show storage of a few bytes, since we just injected some stream events
- Go to the Explore tab and execute the default select * query - the results (the events we injected) should show in the bottom section
GIF explaining the above steps: TestingFlow.gif
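If you prefer the command line for the injection and explore steps, here is a minimal sketch. It assumes a local standalone CDAP (router on port 10000), the default namespace, and that the Purchase app's stream is named purchaseStream - check the app's detail view for the actual name; the event format shown is the one the Purchase example expects:

    # Inject one purchase event via the stream REST API
    # (assumed stream name: purchaseStream; assumed local router at port 10000)
    curl -X POST http://localhost:10000/v3/namespaces/default/streams/purchaseStream \
      -d 'Alice bought 3 apples for $30'

    # Run the default explore query from the CDAP CLI; explore tables are named
    # dataset_<dataset-name>, so the purchases dataset should surface as dataset_purchases
    cdap-cli.sh "execute 'SELECT * FROM dataset_purchases LIMIT 5'"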
Testing Workflow/MapReduce:
- Go to PurchaseHistoryWorkflow
- Start the Workflow - this should pick up the events injected into the flow's stream (a REST alternative is sketched after this list)
- The MapReduce program should run fine - while running, it has a green border; once completed, it is shaded green, indicating success
- Click on the MapReduce program to go to the program and check its status - it should show the status as completed, and switching between mappers and reducers should show proper metrics
- Hit back - it should return to the workflow run view
- Go to the History dataset - same as before, the status page should show storage of a few bytes
- Exploring the dataset should show the history of purchases made by the user (Explore tab, execute a query on the dataset)
GIF explaining the above steps: TestingWorkflowMR.gif
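As a cross-check, the Workflow can also be started through the program lifecycle REST API; a minimal sketch, assuming a local instance, the default namespace, and that the Purchase example app is deployed under the name PurchaseHistory:

    # Start the workflow over REST (assumed app name: PurchaseHistory)
    curl -X POST http://localhost:10000/v3/namespaces/default/apps/PurchaseHistory/workflows/PurchaseHistoryWorkflow/start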
Testing Service:
Use-Case-1:
- Go to PurchaseHistoryService and Start it.
- Make a request to the /history/{customer} endpoint - the customer is the same customer that we referred to in our stream injection (a curl sketch follows below)
- Should show the list of purchases the user has made.
GIF explaining the above steps: TestingWorkflowMR.gif
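A minimal curl sketch for this request, with the same assumptions as before (local instance, default namespace, app name PurchaseHistory, and a customer named Alice from the earlier stream injection):

    # Ask the service for Alice's purchase history
    curl http://localhost:10000/v3/namespaces/default/apps/PurchaseHistory/services/PurchaseHistoryService/methods/history/Alice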
Use-Case-2:
- Go to UserProfileService and Start it.
- Make a POST call to the /user/{id} endpoint with the following JSON (a curl sketch follows after these steps):
    {
      "id": "Alice",
      "firstName": "Alice",
      "lastName": "Bernard",
      "categories": ["fruits"]
    }
- Go to the flow and inject events in the name of Alice
- Go to PurchaseHistoryWorkflow, start it, and wait till it completes successfully
- Go to PurchaseHistoryService again and make the same GET request as we did above, /history/{customer} - here the customer would be Alice
- We should be able to see the user profile along with the purchase history information in the response
GIF explaining the above steps: TestingService.gif
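A minimal curl sketch for the POST call above, under the same assumptions (local instance, default namespace, app name PurchaseHistory):

    # Create the user profile for Alice via the service endpoint
    curl -X POST http://localhost:10000/v3/namespaces/default/apps/PurchaseHistory/services/UserProfileService/methods/user/Alice \
      -d '{"id":"Alice","firstName":"Alice","lastName":"Bernard","categories":["fruits"]}'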
Testing Spark:
- Deploy SparkPageRank app
- Inject data by running /cdap/cdap-examples/SparkPageRank/bin/inject-data.sh (the script runs for a couple of minutes)
- Start RanksService and TotalPagesPR
- Go to SparkPageRankProgram
- Click Start (a REST alternative is sketched after this list)
- You should see the metrics getting updated on the page
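The Spark program can likewise be started over the lifecycle REST API; a minimal sketch, assuming a local instance and the default namespace:

    # Start the Spark program of the SparkPageRank app
    curl -X POST http://localhost:10000/v3/namespaces/default/apps/SparkPageRank/spark/SparkPageRankProgram/start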
TODO: Add GIF
The above tests make sure that Apps, Flows, MapReduce, Services, Workflows, Datasets, Streams, and the Explorer work fine for the base use-case.
Objective: Essentially, what we are testing is this -
We have a flow through which we can inject events; the flow writes them to a dataset; a Workflow/MapReduce reads from that dataset, processes it, and writes to another dataset; a Service helps us view the data (we could do the same thing with the Explorer, too). Here the purchases dataset stores all purchases made by the user, and the history dataset stores the history of purchases made by the user.
Use-Case - 2: How an Adapter Works
Testing Adapter Creation:
- Click "All Adapters" in Home page and Click the "+" sign to Create an Adapter
- Choose the adapter type and enter a name for the adapter.
- Setup Source - Stream source
  - Give the stream name
  - Set Process Time Window to 1m
  - Set Format to Text
  - Set Schema to:
    - body (type string)
- Setup Sink - TPFSAvro sink
  - Give the dataset name
  - Set Schema to:
    - ts (type long)
    - body (type string)
- Setup a Transform - Projection transform
  - Fields to Drop:
    - headers
- Schedule it for every 5 mins (if it's an ETLBatch adapter)
- Publish the adapter (a rough sketch of what the UI submits follows below)
This base case should work; if it does not, something is wrong, and the UI should say what the error is.
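For reference, publishing conceptually boils down to submitting a configuration like the sketch below. This is an illustrative approximation only - the REST path, adapter name, and field names are assumptions about the adapter API, not the exact payload the UI sends:

    # Illustrative only: roughly what publishing the adapter submits
    # (path, adapter name, and field names are assumptions, not the exact API)
    curl -X PUT http://localhost:10000/v3/namespaces/default/adapters/myAdapter -d '{
      "template": "ETLBatch",
      "config": {
        "schedule": "*/5 * * * *",
        "source": { "name": "Stream", "properties": { "name": "myStream", "duration": "1m", "format": "text" } },
        "transforms": [ { "name": "Projection", "properties": { "drop": "headers" } } ],
        "sink": { "name": "TPFSAvro", "properties": { "name": "mySinkDataset" } }
      }
    }'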
- Once the adapter is created, start it from the adapter list view (/ns/default/adapters)
- Send some events to the stream you created
- Wait for 5 minutes - every 5 minutes, the dataset associated with the adapter should be filled with the data we injected through our stream
- Explore the dataset sink - you should see the events you sent to the stream
This is the basic use-case of an adapter.
GIFs explaining the above steps: AdapterTest1.gif, TestingAdapter2.gif
Objective: The objective of this test is to see whether an adapter can convert a stream of a given format (text in this test; CSV in general) into a TPFSAvro dataset that we can use anywhere internally.
TODO: We need a basic test case for metrics.
Once the above-mentioned steps work, push the code to two different clusters, a secure and a non-secure cluster (beamer software install cluster_id cdap-ui - it should take about 5 minutes to beam the code to a cluster).
Once the cluster is up and running, we should provide the cluster URL and a GIF of our test. This helps the reviewer trust that the feature/bug fix works, so they can start reviewing the code.
Behavioral Tests:
This is more of an open-ended section, where it is up to the user/developer to test their UI changes extensively. This area needs more thought and automated tests.