
Overview


Data related to Cask is scattered across a number of different sources. The goal of this application is to collect and aggregate that data to provide unified access and to generate useful statistics.

Sources

  • Salesforce
  • Google Analytics
  • Social media analytics (YouTube, LinkedIn, Twitter)
  • GitHub
  • Meltwater
  • Pro Ranking (SEO)
  • GitHub webhooks
  • AWS S3 access logs

Motivation

To generate and display aggregates and trends in one central location, and to render a front end that helps the marketing team.

Requirements

  • The system automatically fetches the latest data from the respective APIs and retains historical data

  • The system should notify stakeholders in case of failures

  • The system should be extensible to add more sources

  • Retrieval is optimized and should not incur additional cost, meaning the same data should not be pulled multiple times.

  • Data should be processed without any data loss.

  • The statistics should be aggregated at different time intervals (a timestamp-bucketing sketch follows this list):

    • Hourly

    • Daily

    • Weekly

    • Monthly

    • Every 3 months

    • Every year

  • The system should be able to process and catch up in case of major outages.

  • The system should have the ability to visualize metrics in the form of dashboard widgets: line, bar, pie, scatter, etc.

  • The system should have the ability to configure notifications based on constraints specified for metrics:

    • External API call failure

    • High or low mark reached

    • Weekly or daily digest

  • The system is highly available and the reports are accessible 24x7x365

  • The system should render charts as well as provide raw data to feed into external applications like Tableau
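
As an illustration of these intervals, the sketch below truncates an epoch-millis timestamp to the start of each aggregation bucket using java.time. The helper class is hypothetical and assumes UTC with weeks starting on Monday.

    import java.time.DayOfWeek;
    import java.time.Instant;
    import java.time.ZoneOffset;
    import java.time.ZonedDateTime;
    import java.time.temporal.ChronoUnit;
    import java.time.temporal.TemporalAdjusters;

    public final class TimeBuckets {

      public static long hourly(long tsMillis) {
        return utc(tsMillis).truncatedTo(ChronoUnit.HOURS).toInstant().toEpochMilli();
      }

      public static long daily(long tsMillis) {
        return utc(tsMillis).truncatedTo(ChronoUnit.DAYS).toInstant().toEpochMilli();
      }

      public static long weekly(long tsMillis) {
        return utc(tsMillis).truncatedTo(ChronoUnit.DAYS)
            .with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY))
            .toInstant().toEpochMilli();
      }

      public static long monthly(long tsMillis) {
        return utc(tsMillis).truncatedTo(ChronoUnit.DAYS).withDayOfMonth(1)
            .toInstant().toEpochMilli();
      }

      // "Every 3 months": snaps to the first day of the current quarter.
      public static long quarterly(long tsMillis) {
        ZonedDateTime monthStart = utc(tsMillis).truncatedTo(ChronoUnit.DAYS).withDayOfMonth(1);
        int quarterStartMonth = ((monthStart.getMonthValue() - 1) / 3) * 3 + 1;
        return monthStart.withMonth(quarterStartMonth).toInstant().toEpochMilli();
      }

      public static long yearly(long tsMillis) {
        return utc(tsMillis).truncatedTo(ChronoUnit.DAYS).withDayOfYear(1)
            .toInstant().toEpochMilli();
      }

      private static ZonedDateTime utc(long tsMillis) {
        return Instant.ofEpochMilli(tsMillis).atZone(ZoneOffset.UTC);
      }
    }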

Assumptions

  • All the sources have developer APIs that support data retrieval

  • The generated information does not need different access levels for different roles

Design


  • Partitioning of TimePartitionedFileset (TPFS): each data source will live in its own TPFS instance

  • Naming convention per source:

    • TPFS Name: “SourceNameTPFS”

    • Format: Avro record with fields - ts, attributes

    • Cube Name: “SourceNameCube”

  • Example

    • GitHub: “GithubTPFS”

      • Format: Avro record with fields - ts, repo, stars, forks, watchers, pulls

      • Cube Name: “GithubCube”
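
To make the record layout concrete, below is a minimal sketch of the GitHub record schema built with Avro's SchemaBuilder. The field types (a long timestamp, int counters) are assumptions; the design above only names the fields.

    import org.apache.avro.Schema;
    import org.apache.avro.SchemaBuilder;

    public class GithubRecordSchema {
      // Sketch of the "GithubTPFS" record described above. Types are assumed:
      // ts as epoch millis, counters as ints.
      public static final Schema SCHEMA = SchemaBuilder
          .record("GithubStats")
          .fields()
          .requiredLong("ts")
          .requiredString("repo")
          .requiredInt("stars")
          .requiredInt("forks")
          .requiredInt("watchers")
          .requiredInt("pulls")
          .endRecord();
    }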

API Calls

  • Use a Workflow custom action to make periodic RESTful calls to the source APIs
  • A Spark job can read the fetched data from the filesystem and update the cube
  • To allow different scheduling for different calls, each call will have its own workflow
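
A minimal sketch of one such call, using GitHub's public repository endpoint as the example source. Authentication, rate-limit handling, and the surrounding workflow custom action are omitted, and the class and User-Agent names are hypothetical.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class GithubFetchAction {
      // Fetches repository stats as a JSON payload; in the real workflow this
      // payload would be parsed and written into the source's TPFS partition
      // for the Spark job to pick up.
      public static String fetchRepoStats(String ownerAndRepo) throws Exception {
        URL url = new URL("https://api.github.com/repos/" + ownerAndRepo);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try {
          conn.setRequestMethod("GET");
          // GitHub's API rejects requests without a User-Agent header.
          conn.setRequestProperty("User-Agent", "cask-stats-collector");
          conn.setRequestProperty("Accept", "application/vnd.github.v3+json");
          StringBuilder body = new StringBuilder();
          try (BufferedReader in =
                   new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
              body.append(line);
            }
          }
          return body.toString();
        } finally {
          conn.disconnect();
        }
      }
    }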


UI

  • The UI could be deployed on a thin Coopr node
  • The probable stack for the UI is jQuery embedded in a Bootstrap dashboard
  • Chart.js and C3.js would be used to render charts
  • The UI should allow refining all metrics to the different time granularities (hourly, daily, weekly, monthly, every three months, every year)

  • Visualize metrics in the form of dashboard widgets (line, bar, pie, scatter, ...)

  • The dashboard and backend should support overlaying week-over-week, month-over-month, or year-over-year views for any metric

  • The backend should allow raw querying of the data through SQL commands (a sketch follows)
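
A sketch of that raw SQL access, assuming the backend exposes a standard JDBC endpoint; the connection URL and table name below are hypothetical placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class RawQueryExample {
      public static void main(String[] args) throws Exception {
        // Hypothetical JDBC URL and table name; the real values depend on how
        // the backend exposes SQL over the TPFS datasets.
        try (Connection conn = DriverManager.getConnection("jdbc:cdap://localhost:10000");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT repo, MAX(stars) AS stars FROM github_tpfs GROUP BY repo")) {
          while (rs.next()) {
            System.out.println(rs.getString("repo") + "\t" + rs.getLong("stars"));
          }
        }
      }
    }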


Reports

  • Generate daily and weekly digest reports

  • Export data to PDF/Excel, available for download in the UI
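
For the Excel half of the export, a minimal sketch using Apache POI; the library choice and class name are assumptions, since the design does not name a library. The PDF export would use a separate library and is not shown.

    import java.io.FileOutputStream;
    import org.apache.poi.ss.usermodel.Row;
    import org.apache.poi.ss.usermodel.Sheet;
    import org.apache.poi.ss.usermodel.Workbook;
    import org.apache.poi.xssf.usermodel.XSSFWorkbook;

    public class DigestExport {
      // Writes a two-column digest (metric name, value) to an .xlsx file.
      public static void writeDigest(String path, String[] metrics, long[] values)
          throws Exception {
        try (Workbook wb = new XSSFWorkbook();
             FileOutputStream out = new FileOutputStream(path)) {
          Sheet sheet = wb.createSheet("digest");
          for (int i = 0; i < metrics.length; i++) {
            Row row = sheet.createRow(i);
            row.createCell(0).setCellValue(metrics[i]);
            row.createCell(1).setCellValue(values[i]);
          }
          wb.write(out);
        }
      }
    }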


Alerts

  • Allow users to specify threshold values for metrics that trigger alerts by email

    • High-mark/low-mark alerts to users via email and SMS (tentative)

    • API call failure alerts to admins and developers
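
A sketch of the high/low watermark check; the names and the notification hook are hypothetical, since the design only states that crossing a threshold triggers email (and tentatively SMS).

    public class ThresholdCheck {

      public enum Alert { NONE, HIGH_MARK, LOW_MARK }

      // Evaluates a metric value against the user-configured thresholds and
      // reports which alert, if any, should fire.
      public static Alert evaluate(double value, double highMark, double lowMark) {
        if (value >= highMark) {
          return Alert.HIGH_MARK;
        }
        if (value <= lowMark) {
          return Alert.LOW_MARK;
        }
        return Alert.NONE;
      }

      public static void main(String[] args) {
        // Example: alert when a daily metric falls outside [10, 500].
        Alert alert = evaluate(620, 500, 10);
        if (alert != Alert.NONE) {
          // In the real system this would enqueue an email (and tentatively an
          // SMS) to the configured recipients.
          System.out.println("Would send alert: " + alert);
        }
      }
    }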
