Introduction


An n-gram is a sequence of n tokens (typically words) for some integer n.

The NGramTransform plugin transforms input features into n-grams.
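
For illustration, here is a minimal sketch of n-gram generation over a token array (plain Java, not the plugin code itself; the class and helper names are hypothetical):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class NGramSketch {
      // Build space-delimited n-grams by sliding a window of size n over the tokens.
      static List<String> nGrams(List<String> tokens, int n) {
        List<String> grams = new ArrayList<>();
        // With fewer than n tokens, the loop never runs and the result is empty.
        for (int i = 0; i + n <= tokens.size(); i++) {
          grams.add(String.join(" ", tokens.subList(i, i + n)));
        }
        return grams;
      }

      public static void main(String[] args) {
        System.out.println(nGrams(Arrays.asList("hi", "i", "heard", "about", "spark"), 2));
        // prints: [hi i, i heard, heard about, about spark]
      }
    }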

Use-Case

  • Transform input features (tokens in array form) into n-grams, using a parameter for the number of terms in each n-gram.

    A bio data scientist wants to study the sequence of nucleotides, using an input stream of DNA sequencing data, to identify the bonds.
    The input stream contains the DNA sequence, e.g. AGCTTCGA. The output contains the bigram sequence AG, GC, CT, TT, TC, CG, GA.

    Input source: 

    DNASequence
    AGCTTCGA

    Mandatory inputs from user (NGramTransform):

    • Field to be used to transform input features into n-grams: "DNASequence"
    • Number of terms in each n-gram: "2"
    • Transformed output field for the sequence of n-grams, where each n-gram is represented by a space-delimited string of n consecutive words: "bigram"
    • Tokenization unit used to tokenize the input string before n-grams can be created: "Character"

    Output: 

    DNASequence    bigram
    AGCTTCGA       [AG, GC, CT, TT, TC, CG, GA]
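
A minimal sketch of the character-level tokenization and bigram step in this use-case, in plain Java (illustrative only, not the plugin implementation; character bigrams are concatenated exactly as in the output shown above):

    import java.util.ArrayList;
    import java.util.List;

    public class DnaBigramSketch {
      public static void main(String[] args) {
        String sequence = "AGCTTCGA";
        int n = 2;
        // Tokenization unit "Character": every character of the input string becomes one token.
        List<String> tokens = new ArrayList<>();
        for (char c : sequence.toCharArray()) {
          tokens.add(String.valueOf(c));
        }
        // Slide a window of size n over the character tokens and concatenate each window.
        List<String> bigrams = new ArrayList<>();
        for (int i = 0; i + n <= tokens.size(); i++) {
          bigrams.add(String.join("", tokens.subList(i, i + n)));
        }
        System.out.println(bigrams); // prints: [AG, GC, CT, TT, TC, CG, GA]
      }
    }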

 


User Stories

 

  • As a Hydrator user, I want to transform input feature data from a column in the source schema into an output schema that has a single column containing the n-gram data.
  • As a Hydrator user, I want a configuration option for specifying the column name from the input schema on which the transformation has to be performed.
  • As a Hydrator user, I want a configuration option to specify the number of terms to be used when transforming input features into n-grams.
  • As a Hydrator user, I want a configuration option to specify the output column name in which the n-grams will be emitted.
  • As a Hydrator user, I want to specify the tokenization unit used to tokenize the input before it is converted to n-grams.

Conditions

  • The source field to be transformed can only be of type string array.
  • The user can transform only a single field from the source schema.
  • Output schema will have a single column of type string array.
  • If the input sequence contains fewer than n strings, no output is produced.


End-to-End Example Pipeline:

Stream -> NGramTransform -> TPFSAvro

 

Input source:

 

topic    sentence                  tokens
java     hi i heard about spark    [hi, i, heard, about, spark]
HDFS     hdfs is a file system     [hdfs, is, a file, system]
Spark    spark is an engine        [spark, is, an, engine]


NGramTransform:

Mandatory inputs from user:

    • Field to be used to transform input features into n-grams: "tokens"
    • Number of terms in each n-gram: "2"
    • Transformed field for the sequence of n-grams: "ngrams"
    • Tokenization unit: "words"

TPFSAvro Output

topic    sentence                  ngrams
java     hi i heard about spark    [hi i, i heard, heard about, about spark]
HDFS     hdfs is a file system     [hdfs is, is a, a file, file system]
Spark    spark is an engine        [spark is, is an, an engine]
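
Since the plugin is a SparkCompute stage, one possible way to produce the ngrams field is Spark MLlib's NGram feature transformer. The design does not mandate this; the snippet below is only a sketch of that option, using the column names from the example above:

    import java.util.Arrays;
    import java.util.List;

    import org.apache.spark.ml.feature.NGram;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.RowFactory;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructType;

    public class NGramSparkSketch {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("ngram-sketch").master("local[*]").getOrCreate();

        // Mirror the input records above: a topic and an array of tokens.
        StructType schema = new StructType()
            .add("topic", DataTypes.StringType)
            .add("tokens", DataTypes.createArrayType(DataTypes.StringType));
        List<Row> rows = Arrays.asList(
            RowFactory.create("java", new String[]{"hi", "i", "heard", "about", "spark"}),
            RowFactory.create("HDFS", new String[]{"hdfs", "is", "a file", "system"}),
            RowFactory.create("Spark", new String[]{"spark", "is", "an", "engine"}));
        Dataset<Row> input = spark.createDataFrame(rows, schema);

        // n = 2, input field "tokens", output field "ngrams", matching the configuration above.
        NGram ngram = new NGram().setN(2).setInputCol("tokens").setOutputCol("ngrams");
        ngram.transform(input).show(false);

        spark.stop();
      }
    }

Consistent with the condition above, Spark's NGram emits an empty array when a record has fewer than n tokens.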

 

Design

This is a sparkcompute type of plugin and is meant to work with Spark only.

Properties:

  • **fieldToBeTransformed:** Field to be used to transform input features into n-grams.
  • **numberOfTerms:** Number of terms in each n-gram.
  • **outputField:** Transformed field for the sequence of n-grams.
  • **tokenizationUnit:** Unit into which the input string will be tokenized.
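
As a sketch, these properties could map to a CDAP plugin config class along the following lines (class name and exact layout are assumptions, not the merged implementation):

    import co.cask.cdap.api.annotation.Description;
    import co.cask.cdap.api.plugin.PluginConfig;

    // Hypothetical config class mirroring the properties listed above;
    // field names match the JSON property names below.
    public class NGramTransformConfig extends PluginConfig {

      @Description("Field to be used to transform input features into n-grams.")
      private String fieldToBeTransformed;

      @Description("Number of terms in each n-gram.")
      private Integer numberOfTerms;

      @Description("Transformed field for the sequence of n-grams.")
      private String outputField;

      @Description("Unit into which the input string will be tokenized.")
      private String tokenizationUnit;
    }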

Input JSON:

         {
           "name": "NGramTransform",
           "type": "sparkcompute",
           "properties": {
             "fieldToBeTransformed": "tokens",
             "numberOfTerms": "2",
             "tokenizationUnit": "word",
             "outputField": "ngrams"
           }
         }



Checklist

  •  User stories documented 
  •  User stories reviewed 
  •  Design documented 
  •  Design reviewed 
  •  Feature merged 
  •  Examples and guides 
  •  Integration tests 
  •  Documentation for feature 
  •  Short video demonstrating the feature