...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
General Overview of Google NLP
Google Natural Language API comprises five different services:
- Syntax Analysis
- Sentiment Analysis
- Entity Analysis
- Entity Sentiment Analysis
- Text Classification
Syntax Analysis
For a given text, Google’s syntax analysis will return a breakdown of all words with a rich set of linguistic information for each token. The information can be divided into two parts:
- Part of speech. This part contains information about the morphology of each token. For each word, a fine-grained analysis is returned containing its type (noun, verb, etc.), gender, grammatical case, tense, grammatical mood, grammatical voice, and much more.
Example sentence: “A computer once beat me at chess, but it was no match for me at kickboxing.”
'A' tag: DET
'computer' tag: NOUN number: SINGULAR
'once' tag: ADV
'beat' tag: VERB mood: INDICATIVE tense: PAST
'me' tag: PRON case: ACCUSATIVE number: SINGULAR person: FIRST
'at' tag: ADP
'chess' tag: NOUN number: SINGULAR
',' tag: PUNCT
'but' tag: CONJ
'it' tag: PRON case: NOMINATIVE gender: NEUTER number: SINGULAR person: THIRD
'was' tag: VERB mood: INDICATIVE number: SINGULAR person: THIRD tense: PAST
'no' tag: DET
'match' tag: NOUN number: SINGULAR
'for' tag: ADP
'kick' tag: NOUN number: SINGULAR
'boxing' tag: NOUN number: SINGULAR
'.' tag: PUNCT
- Dependency trees. The second part of the return is called a dependency tree, which describes the syntactic structure of each sentence.
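For reference, a minimal sketch of fetching this token breakdown programmatically, assuming the google-cloud-language v1 Java client (the class name SyntaxExample is just for illustration and not part of this design):
No Format |
---|
import com.google.cloud.language.v1.AnalyzeSyntaxResponse;
import com.google.cloud.language.v1.Document;
import com.google.cloud.language.v1.Document.Type;
import com.google.cloud.language.v1.EncodingType;
import com.google.cloud.language.v1.LanguageServiceClient;
import com.google.cloud.language.v1.Token;

public class SyntaxExample {
  public static void main(String[] args) throws Exception {
    String text = "A computer once beat me at chess, but it was no match for me at kickboxing.";
    try (LanguageServiceClient language = LanguageServiceClient.create()) {
      Document doc = Document.newBuilder().setContent(text).setType(Type.PLAIN_TEXT).build();
      AnalyzeSyntaxResponse response = language.analyzeSyntax(doc, EncodingType.UTF16);
      // Print each token with its part-of-speech tag and its dependency edge.
      for (Token token : response.getTokensList()) {
        System.out.printf("%s\ttag: %s\thead: %d\tlabel: %s%n",
            token.getText().getContent(),
            token.getPartOfSpeech().getTag(),
            token.getDependencyEdge().getHeadTokenIndex(),
            token.getDependencyEdge().getLabel());
      }
    }
  }
} |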
Sentiment Analysis
Google’s sentiment analysis provides the prevailing emotional opinion within a provided text. The API returns two values:
- The “score” describes the emotional leaning of the text, from -1 (negative) to +1 (positive), with 0 being neutral.
- The “magnitude” measures the strength of the emotion.
Input Sentence | Sentiment Results | Interpretation |
---|---|---|
The train to London leaves at four o'clock | Score: 0.0 Magnitude: 0.0 | A completely neutral statement, which doesn't contain any emotion at all. |
This blog post is good. | Score: 0.7 Magnitude: 0.7 | A positive sentiment, but not expressed very strongly. |
This blog post is good. It was very helpful. The author is amazing. | Score: 0.7 Magnitude: 2.3 | The same sentiment, but expressed much stronger. |
This blog post is very good. This author is a horrible writer usually, but here he got lucky. | Score: 0.0 Magnitude: 1.6 | The magnitude shows us that there are emotions expressed in this text, but the sentiment shows that they are mixed and not clearly positive or negative. |
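A minimal sketch of reading the document-level score and magnitude with the Java client, assuming google-cloud-language v1 (the class name is illustrative only):
No Format |
---|
import com.google.cloud.language.v1.Document;
import com.google.cloud.language.v1.Document.Type;
import com.google.cloud.language.v1.LanguageServiceClient;
import com.google.cloud.language.v1.Sentiment;

public class SentimentExample {
  public static void main(String[] args) throws Exception {
    try (LanguageServiceClient language = LanguageServiceClient.create()) {
      Document doc = Document.newBuilder()
          .setContent("This blog post is good. It was very helpful. The author is amazing.")
          .setType(Type.PLAIN_TEXT)
          .build();
      // Document-level sentiment: score (-1..+1) and magnitude (overall strength).
      Sentiment sentiment = language.analyzeSentiment(doc).getDocumentSentiment();
      System.out.printf("score: %.1f, magnitude: %.1f%n",
          sentiment.getScore(), sentiment.getMagnitude());
    }
  }
} |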
Entity Analysis
Entity Analysis is the process of detecting known entities like public figures or landmarks from a given text. Entity detection is very helpful for all kinds of classification and topic modeling tasks.
For each entity, a salience score is calculated; it indicates the importance or centrality of that entity to the document text as a whole.
Example: “Robert De Niro spoke to Martin Scorsese in Hollywood on Christmas Eve in December 2011.”
Detected Entity | Additional Information |
---|---|
Robert De Niro | type : PERSON salience : 0.5869118 wikipedia_url : https://en.wikipedia.org/wiki/Robert_De_Niro |
Hollywood | type : LOCATION salience : 0.17918482 wikipedia_url : https://en.wikipedia.org/wiki/Hollywood |
Martin Scorsese | type : LOCATION salience : 0.17712952 wikipedia_url : https://en.wikipedia.org/wiki/Martin_Scorsese |
Christmas Eve | type : PERSON salience : 0.056773853 wikipedia_url : https://en.wikipedia.org/wiki/Christmas |
December 2011 | type : DATE Year: 2011 Month: 12 salience : 0.0 wikipedia_url : - |
2011 | type : NUMBER salience : 0.0 wikipedia_url : - |
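A minimal sketch of picking the most salient entity with the Java client, assuming google-cloud-language v1; this mirrors user scenario Example 2 further below:
No Format |
---|
import com.google.cloud.language.v1.AnalyzeEntitiesRequest;
import com.google.cloud.language.v1.Document;
import com.google.cloud.language.v1.Document.Type;
import com.google.cloud.language.v1.EncodingType;
import com.google.cloud.language.v1.Entity;
import com.google.cloud.language.v1.LanguageServiceClient;
import java.util.Comparator;

public class EntityExample {
  public static void main(String[] args) throws Exception {
    String text = "Robert De Niro spoke to Martin Scorsese in Hollywood on Christmas Eve in December 2011.";
    try (LanguageServiceClient language = LanguageServiceClient.create()) {
      AnalyzeEntitiesRequest request = AnalyzeEntitiesRequest.newBuilder()
          .setDocument(Document.newBuilder().setContent(text).setType(Type.PLAIN_TEXT).build())
          .setEncodingType(EncodingType.UTF16)
          .build();
      // The entity with the highest salience is the most central one in the document.
      Entity mostSalient = language.analyzeEntities(request).getEntitiesList().stream()
          .max(Comparator.comparing(Entity::getSalience))
          .orElseThrow(IllegalStateException::new);
      System.out.println(mostSalient.getName() + " " + mostSalient.getType() + " " + mostSalient.getSalience());
    }
  }
} |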
Entity Sentiment Analysis
If there are models for entity detection and sentiment analysis, it’s only natural to go a step further and combine them to detect the prevailing emotions towards the different entities in a text.
Example: “The author is a horrible writer. The reader is very intelligent on the other hand.”
Entity | Sentiment |
---|---|
author | Salience: 0.8773350715637207 Sentiment: magnitude: 1.899999976158142 score: -0.8999999761581421 |
reader | Salience: 0.08653714507818222 Sentiment: magnitude: 0.8999999761581421 score: 0.8999999761581421 |
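A minimal sketch of reading per-entity sentiment with the Java client, assuming google-cloud-language v1 (class name illustrative only):
No Format |
---|
import com.google.cloud.language.v1.AnalyzeEntitySentimentRequest;
import com.google.cloud.language.v1.AnalyzeEntitySentimentResponse;
import com.google.cloud.language.v1.Document;
import com.google.cloud.language.v1.Document.Type;
import com.google.cloud.language.v1.EncodingType;
import com.google.cloud.language.v1.Entity;
import com.google.cloud.language.v1.LanguageServiceClient;

public class EntitySentimentExample {
  public static void main(String[] args) throws Exception {
    String text = "The author is a horrible writer. The reader is very intelligent on the other hand.";
    try (LanguageServiceClient language = LanguageServiceClient.create()) {
      AnalyzeEntitySentimentRequest request = AnalyzeEntitySentimentRequest.newBuilder()
          .setDocument(Document.newBuilder().setContent(text).setType(Type.PLAIN_TEXT).build())
          .setEncodingType(EncodingType.UTF16)
          .build();
      AnalyzeEntitySentimentResponse response = language.analyzeEntitySentiment(request);
      // Each entity carries its own salience and its own sentiment (score + magnitude).
      for (Entity entity : response.getEntitiesList()) {
        System.out.printf("%s salience=%.3f score=%.1f magnitude=%.1f%n",
            entity.getName(), entity.getSalience(),
            entity.getSentiment().getScore(), entity.getSentiment().getMagnitude());
      }
    }
  }
} |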
Text Classification
Classifies the input document into a large set of categories. The categories are structured hierarchically; e.g. the category “Hobbies & Leisure” has several sub-categories, one of which is “Hobbies & Leisure/Outdoors”, which itself has sub-categories like “Hobbies & Leisure/Outdoors/Fishing.”
Example: “The D3500’s large 24.2 MP DX-format sensor captures richly detailed photos and Full HD movies—even when you shoot in low light. Combined with the rendering power of your NIKKOR lens, you can start creating artistic portraits with smooth background blur. With ease.”
Category | Confidence |
---|---|
Arts & Entertainment/Visual Art & Design/Photographic & Digital Arts | 0.95 |
Hobbies & Leisure | 0.94 |
Computers & Electronics/Consumer Electronics/Camera & Photo Equipment | 0.85 |
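A minimal sketch of calling classifyText with the Java client, assuming google-cloud-language v1. Note that classification requires a sufficiently long input text:
No Format |
---|
import com.google.cloud.language.v1.ClassificationCategory;
import com.google.cloud.language.v1.ClassifyTextRequest;
import com.google.cloud.language.v1.ClassifyTextResponse;
import com.google.cloud.language.v1.Document;
import com.google.cloud.language.v1.Document.Type;
import com.google.cloud.language.v1.LanguageServiceClient;

public class ClassifyExample {
  public static void main(String[] args) throws Exception {
    String text = "The D3500's large 24.2 MP DX-format sensor captures richly detailed photos and Full HD movies, even when you shoot in low light.";
    try (LanguageServiceClient language = LanguageServiceClient.create()) {
      ClassifyTextRequest request = ClassifyTextRequest.newBuilder()
          .setDocument(Document.newBuilder().setContent(text).setType(Type.PLAIN_TEXT).build())
          .build();
      ClassifyTextResponse response = language.classifyText(request);
      // Each category comes with a hierarchical name and a confidence value.
      for (ClassificationCategory category : response.getCategoriesList()) {
        System.out.println(category.getName() + " " + category.getConfidence());
      }
    }
  }
} |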
Annotate text
A convenience method that provides all the features that analyzeSentiment, analyzeEntities, and analyzeSyntax provide in one call.
https://cloud.google.com/natural-language/docs/reference/rest/v1/documents/annotateText
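A minimal sketch of the corresponding annotateText call in the Java client, assuming google-cloud-language v1; the Features message selects which of the analyses to run:
No Format |
---|
import com.google.cloud.language.v1.AnnotateTextRequest;
import com.google.cloud.language.v1.AnnotateTextRequest.Features;
import com.google.cloud.language.v1.AnnotateTextResponse;
import com.google.cloud.language.v1.Document;
import com.google.cloud.language.v1.Document.Type;
import com.google.cloud.language.v1.EncodingType;
import com.google.cloud.language.v1.LanguageServiceClient;

public class AnnotateExample {
  public static void main(String[] args) throws Exception {
    try (LanguageServiceClient language = LanguageServiceClient.create()) {
      AnnotateTextRequest request = AnnotateTextRequest.newBuilder()
          .setDocument(Document.newBuilder()
              .setContent("Enjoy your vacation!")
              .setType(Type.PLAIN_TEXT)
              .build())
          // Run syntax, entity, and document-sentiment analysis in a single call.
          .setFeatures(Features.newBuilder()
              .setExtractSyntax(true)
              .setExtractEntities(true)
              .setExtractDocumentSentiment(true)
              .build())
          .setEncodingType(EncodingType.UTF16)
          .build();
      AnnotateTextResponse response = language.annotateText(request);
      System.out.println(response.getTokensCount() + " tokens, "
          + response.getEntitiesCount() + " entities, sentiment score "
          + response.getDocumentSentiment().getScore());
    }
  }
} |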
Directives Syntax
Directives are named the same way as the commands in the gcloud CLI (https://cloud.google.com/sdk/gcloud/reference/ml/language/), but with an nlp prefix (please let me know if the prefix is needed):
...
language: can be set as part of the API call (e.g. 'en', 'ja'). If not set, the NLP service will try to detect the language automatically.
Authentication file
Example of an authentication file (a.k.a. service account key JSON):
...
When the file parameter is not provided, the path will be taken from environment variables, which is how it will work in the GCP case (the most common one).
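A minimal sketch of both paths, assuming the google-cloud-language and google-auth libraries; the helper names fromKeyFile/fromEnvironment are hypothetical. When no key file is given, LanguageServiceClient.create() falls back to Application Default Credentials (e.g. the GOOGLE_APPLICATION_CREDENTIALS environment variable or the GCP runtime identity):
No Format |
---|
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.language.v1.LanguageServiceClient;
import com.google.cloud.language.v1.LanguageServiceSettings;
import java.io.FileInputStream;

public class ClientFactory {
  /** Builds a client from an explicit service account key file. */
  public static LanguageServiceClient fromKeyFile(String keyFilePath) throws Exception {
    GoogleCredentials credentials;
    try (FileInputStream stream = new FileInputStream(keyFilePath)) {
      credentials = GoogleCredentials.fromStream(stream);
    }
    LanguageServiceSettings settings = LanguageServiceSettings.newBuilder()
        .setCredentialsProvider(FixedCredentialsProvider.create(credentials))
        .build();
    return LanguageServiceClient.create(settings);
  }

  /** Falls back to Application Default Credentials when no key file is provided. */
  public static LanguageServiceClient fromEnvironment() throws Exception {
    return LanguageServiceClient.create();
  }
} |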
Return value of directives
Directives will return a JSON string. The returned data can be very complex, and it would be quite troublesome for the user to express it as a field consisting of multiple layers of nested CDAP schema maps, arrays, and even records, which would be specific and different for every NLP command.
...
https://cloud.google.com/natural-language/docs/analyzing-entities#language-entities-string-gcloud
Implementation
- There is a Google NLP Java API available. Example of usage: https://cloud.google.com/natural-language/docs/analyzing-syntax#language-syntax-string-java
- The responses are returned as protobuf messages, which can conveniently be transformed to JSON (see the sketch after this list).
- The code will be located in its own repository called nlp-plugins as opposed to being a part of https://github.com/data-integrations/wrangler/tree/develop/wrangler-core/src/main/java/io/cdap/directives/nlp
- Regarding integration tests: I am thinking of testing against a live NLP instance instead of mocking it. Since NLP responses can change over time, we should only check very general things rather than specific details.
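A minimal sketch of the protobuf-to-JSON step, assuming google-cloud-language v1 and protobuf-java-util; the method name analyzeSyntaxAsJson is a hypothetical helper, not part of the final design:
No Format |
---|
import com.google.cloud.language.v1.AnalyzeSyntaxResponse;
import com.google.cloud.language.v1.Document;
import com.google.cloud.language.v1.Document.Type;
import com.google.cloud.language.v1.EncodingType;
import com.google.cloud.language.v1.LanguageServiceClient;
import com.google.protobuf.util.JsonFormat;

public class ProtoToJson {
  /** Runs syntax analysis and returns the raw response as a JSON string (what the directive would store). */
  public static String analyzeSyntaxAsJson(LanguageServiceClient language, String text) throws Exception {
    Document doc = Document.newBuilder().setContent(text).setType(Type.PLAIN_TEXT).build();
    AnalyzeSyntaxResponse response = language.analyzeSyntax(doc, EncodingType.UTF16);
    // JsonFormat (protobuf-java-util) converts any protobuf message to its canonical JSON form.
    return JsonFormat.printer().print(response);
  }
} |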
Examples
Example 1. Syntax analysis
No Format |
---|
nlp-analyze-syntax body result service_account_key.json |
...
No Format |
---|
{ "sentences": [ { "text": { "content": "Google, headquartered in Mountain View, unveiled the new Android phone at the Consumer Electronic Show.", "beginOffset": 0 } }, { "text": { "content": "Sundar Pichai said in his keynote that users love their new Android phones.", "beginOffset": 105 } } ], "tokens": [ { "text": { "content": "Google", "beginOffset": 0 }, "partOfSpeech": { "tag": "NOUN", "aspect": "ASPECT_UNKNOWN", "case": "CASE_UNKNOWN", "form": "FORM_UNKNOWN", "gender": "GENDER_UNKNOWN", "mood": "MOOD_UNKNOWN", "number": "SINGULAR", "person": "PERSON_UNKNOWN", "proper": "PROPER", "reciprocity": "RECIPROCITY_UNKNOWN", "tense": "TENSE_UNKNOWN", "voice": "VOICE_UNKNOWN" }, "dependencyEdge": { "headTokenIndex": 7, "label": "NSUBJ" }, "lemma": "Google" }, ... { "text": { "content": ".", "beginOffset": 179 }, "partOfSpeech": { "tag": "PUNCT", "aspect": "ASPECT_UNKNOWN", "case": "CASE_UNKNOWN", "form": "FORM_UNKNOWN", "gender": "GENDER_UNKNOWN", "mood": "MOOD_UNKNOWN", "number": "NUMBER_UNKNOWN", "person": "PERSON_UNKNOWN", "proper": "PROPER_UNKNOWN", "reciprocity": "RECIPROCITY_UNKNOWN", "tense": "TENSE_UNKNOWN", "voice": "VOICE_UNKNOWN" }, "dependencyEdge": { "headTokenIndex": 20, "label": "P" }, "lemma": "." } ], "language": "en" } |
Example 2. Sentiment analysis
No Format |
---|
nlp-analyze-sentiment body result service_account_key.json |
...
No Format |
---|
{ "documentSentiment": { "magnitude": 0.8, "score": 0.8 }, "language": "en", "sentences": [ { "text": { "content": "Enjoy your vacation!", "beginOffset": 0 }, "sentiment": { "magnitude": 0.8, "score": 0.8 } } ] } |
Example 3. Entity analysis
No Format |
---|
nlp-analyze-entities body result service_account_key.json |
...
No Format |
---|
{ "entities": [ { "name": "Trump", "type": "PERSON", "metadata": { "mid": "/m/0cqt90", "wikipedia_url": "https://en.wikipedia.org/wiki/Donald_Trump" }, "salience": 0.7936003, "mentions": [ { "text": { "content": "Trump", "beginOffset": 10 }, "type": "PROPER" }, { "text": { "content": "President", "beginOffset": 0 }, "type": "COMMON" } ] }, { "name": "White House", "type": "LOCATION", "metadata": { "mid": "/m/081sq", "wikipedia_url": "https://en.wikipedia.org/wiki/White_House" }, "salience": 0.09172433, "mentions": [ { "text": { "content": "White House", "beginOffset": 36 }, "type": "PROPER" } ] }, { "name": "Pennsylvania Ave NW", "type": "LOCATION", "metadata": { "mid": "/g/1tgb87cq" }, "salience": 0.085507184, "mentions": [ { "text": { "content": "Pennsylvania Ave NW", "beginOffset": 65 }, "type": "PROPER" } ] }, { "name": "Washington, DC", "type": "LOCATION", "metadata": { "mid": "/m/0rh6k", "wikipedia_url": "https://en.wikipedia.org/wiki/Washington,_D.C." }, "salience": 0.029168168, "mentions": [ { "text": { "content": "Washington, DC", "beginOffset": 86 }, "type": "PROPER" } ] } { "name": "1600 Pennsylvania Ave NW, Washington, DC", "type": "ADDRESS", "metadata": { "country": "US", "sublocality": "Fort Lesley J. McNair", "locality": "Washington", "street_name": "Pennsylvania Avenue Northwest", "broad_region": "District of Columbia", "narrow_region": "District of Columbia", "street_number": "1600" }, "salience": 0, "mentions": [ { "text": { "content": "1600 Pennsylvania Ave NW, Washington, DC", "beginOffset": 60 }, "type": "TYPE_UNKNOWN" } ] } } ... ], "language": "en" } |
Example 4. Entity sentiment analysis
No Format |
---|
nlp-analyze-entity-sentiment body result service_account_key.json |
...
The result is a string column populated with a JSON that contains all the data combined from all of the above JSONs.
Example of user scenarios (getting information from JSON)
We can put these scenarios into 3 categories.
1) The result of the scenario is a single value.
A lot of this can be done via json-path. Note that json-path supports conditional gets and nested queries (multiple json-paths inside each other), which makes it pretty flexible.
2) The result of the scenario is a list.
Json-path can return lists, so a lot of these scenarios can be implemented as well (see the json-path sketch after this list).
3) The result of the scenario is a map.
I don't think this is viable to do via Wrangler, since it would probably require loops. The user will have to pass the data to something like a Python transform and write custom handling code to do whatever they want.
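A minimal sketch of categories 1 and 2, assuming the Jayway json-path library (which, to my understanding, is what the Wrangler json-path directive uses under the hood):
No Format |
---|
import com.jayway.jsonpath.JsonPath;
import java.util.List;
import java.util.Map;

public class JsonPathExamples {
  public static void main(String[] args) {
    String json = "{ \"tokens\": ["
        + "{ \"text\": { \"content\": \"Google\" }, \"partOfSpeech\": { \"tag\": \"NOUN\" } },"
        + "{ \"text\": { \"content\": \"unveiled\" }, \"partOfSpeech\": { \"tag\": \"VERB\" } } ] }";

    // Category 1: a single value.
    String firstTag = JsonPath.read(json, "$.tokens[0].partOfSpeech.tag");
    System.out.println(firstTag); // NOUN

    // Category 2: a list (conditional get, same shape as the noun filter in Example 1 below).
    List<String> nouns = JsonPath.read(json, "$.tokens[?(@.partOfSpeech.tag == 'NOUN')].text.content");
    System.out.println(nouns); // [Google]

    // Category 3: a map is returned as-is; iterating over it is left to a custom transform.
    Map<String, Object> firstToken = JsonPath.read(json, "$.tokens[0]");
    System.out.println(firstToken.keySet()); // [text, partOfSpeech]
  }
} |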
Example 1. Take all the nouns from the sentence.
Example sentence: "Google, headquartered in Mountain View, unveiled the new Android phone at the Consumer Electronic Show. Sundar Pichai said in his keynote that users love their new Android phones."
Wrangler directives:
No Format |
---|
nlp-analyze-syntax body result service_account_key.json
json-path result nounsList "$.tokens[?(@.partOfSpeech.tag == 'NOUN')].text.content" |
...
Result is ['Google', 'Mountain View', 'Android', ...]
Example 2. Get the most important entity (has maximum salience)
Example sentence: "President Trump will speak from the White House, located at 1600 Pennsylvania Ave NW, Washington, DC, on October 7."
No Format |
---|
nlp-analyze-entity-sentiment body result service_account_key.json
json-path result positive_entities "$.entities[?(@.salience == $['.entities.salience.max()'])].name" |
Json: the same entity JSON as in Example 3 above.
Will return 'Trump'.
Flattened version of JSON for transform
We need to reduce the nesting in the JSONs in order to transform them into CDAP records correctly. Here is how it's done:
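As a rough illustration of the kind of un-nesting involved, here is a minimal Gson-based sketch that flattens a single token from the analyzeSyntax JSON into the single-level record shown in section 1 below. It is illustrative only, not the proposed implementation, and the class/method names are hypothetical:
No Format |
---|
import com.google.gson.JsonObject;

public class SyntaxFlattener {
  /** Flattens one token object from the analyzeSyntax JSON into a single-level record. */
  public static JsonObject flattenToken(JsonObject token) {
    JsonObject flat = new JsonObject();
    // text.* fields move to the top level.
    JsonObject text = token.getAsJsonObject("text");
    flat.add("content", text.get("content"));
    flat.add("beginOffset", text.get("beginOffset"));
    // partOfSpeech.* fields keep their own names.
    JsonObject pos = token.getAsJsonObject("partOfSpeech");
    for (String key : new String[] {"tag", "aspect", "case", "form", "gender", "mood",
        "number", "person", "proper", "reciprocity", "tense", "voice"}) {
      flat.add(key, pos.get(key));
    }
    // dependencyEdge.* fields get the parent name as a prefix to stay unambiguous.
    JsonObject edge = token.getAsJsonObject("dependencyEdge");
    flat.add("dependencyEdgeHeadTokenIndex", edge.get("headTokenIndex"));
    flat.add("dependencyEdgeLabel", edge.get("label"));
    flat.add("lemma", token.get("lemma"));
    return flat;
  }
} |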
1. Syntax analysis
No Format |
---|
{
"sentences": [
{
"text": {
"content": "Google, headquartered in Mountain View, unveiled the new Android phone at the Consumer Electronic Show.",
"beginOffset": 0
}
},
{
"text": {
"content": "Sundar Pichai said in his keynote that users love their new Android phones.",
"beginOffset": 105
}
}
],
"tokens": [
{
"text": {
"content": "Google",
"beginOffset": 0
},
"partOfSpeech": {
"tag": "NOUN",
"aspect": "ASPECT_UNKNOWN",
"case": "CASE_UNKNOWN",
"form": "FORM_UNKNOWN",
"gender": "GENDER_UNKNOWN",
"mood": "MOOD_UNKNOWN",
"number": "SINGULAR",
"person": "PERSON_UNKNOWN",
"proper": "PROPER",
"reciprocity": "RECIPROCITY_UNKNOWN",
"tense": "TENSE_UNKNOWN",
"voice": "VOICE_UNKNOWN"
},
"dependencyEdge": {
"headTokenIndex": 7,
"label": "NSUBJ"
},
"lemma": "Google"
}
],
"language": "en"
} |
Flattened:
No Format |
---|
{ "sentences": [ { # record. "text": { "content": "Google, headquartered in Mountain View, unveiled the new Android phone at the Consumer Electronic Show.", at the Consumer Electronic Show.", "beginOffset": 0 }, { "content": "Sundar Pichai said in his keynote that users love their new Android phones.", "beginOffset": 0105 } } ], },"tokens": [ { # record. "textcontent": { "Google", "contentbeginOffset": "Sundar0 Pichai said in his keynote that users love their new Android phones.", "tag": "NOUN", "beginOffsetapect": 105"ASPECT_UNKNOWN", } }"case": "CASE_UNKNOWN", ], "tokensspeechForm": [ {"FORM_UNKNOWN", "textgender": {"GENDER_UNKNOWN", "contentmood": "GoogleMOOD_UNKNOWN", "number": "beginOffsetSINGULAR":, 0 }"person": "PERSON_UNKNOWN", "partOfSpeechproper": { "PROPER", "tagreciprocity": "NOUNRECIPROCITY_UNKNOWN", "aspecttense": "ASPECTTENSE_UNKNOWN", "casevoice": "CASEVOICE_UNKNOWN", "formdependencyEdgeHeadTokenIndex": "FORM_UNKNOWN"7, "genderdependencyEdgeLabel": "GENDER_UNKNOWN", NSUBJ" "moodlemma": "MOOD_UNKNOWN",Google" } ], "numberlanguage": "SINGULAR",en" } |
2. Sentiment analysis
No Format |
---|
{ "documentSentiment": { "personmagnitude": 0.8, "PERSON_UNKNOWN", "score": 0.8 }, "properlanguage": "PROPERen", "sentences": [ { "reciprocitytext": "RECIPROCITY_UNKNOWN",{ "tensecontent": "TENSE_UNKNOWNEnjoy your vacation!", "voicebeginOffset": "VOICE_UNKNOWN"0 }, "dependencyEdgesentiment": { "headTokenIndexmagnitude": 70.8, "labelscore": "NSUBJ"0.8 }, } "lemma": "Google" ] } |
Flattened:
No Format |
---|
{ }"magnitude": 0.8, "score": ... { 0.8 "textlanguage": { "en", "contentsentences": ".",[ { # record "beginOffset": 179 "content": "Enjoy your }vacation!", "partOfSpeechbeginOffset": { 0 "tagmagnitude": "PUNCT", 0.8, "aspectscore": "ASPECT_UNKNOWN", 0.8 } ] } |
3. Entity analysis
No Format |
---|
{ "caseentities": "CASE_UNKNOWN", [ { "formname": "FORM_UNKNOWN"1600 Pennsylvania Ave NW, Washington, DC", "gendertype": "GENDER_UNKNOWNADDRESS", "moodmetadata": "MOOD_UNKNOWN",{ "numbercountry": "NUMBER_UNKNOWNUS", "personsublocality": "PERSON_UNKNOWNFort Lesley J. McNair", "properlocality": "PROPER_UNKNOWNWashington", "reciprocitystreet_name": "RECIPROCITY_UNKNOWNPennsylvania Avenue Northwest", "tensebroad_region": "TENSE_UNKNOWNDistrict of Columbia", "voicenarrow_region": "VOICE_UNKNOWN"District of Columbia", }, "street_number": "1600" "dependencyEdge": { }, "headTokenIndexsalience": 200, "labelmentions": "P"[ }, "lemma": "."{ } ], "languagetext": { "en" } |
Flattened:
No Format |
---|
{ "entities": [ { # record "wikipedia_urlname": "https://en.wikipedia.org/wiki/Donald_Trump" }1600 Pennsylvania Ave NW, Washington, DC", "saliencetype": 0.7936003"ADDRESS", "mentionsmetadata": { # [as a map<string,string> since fields are dynamic. Do not {flatten. Looks better "textcountry": {"US", "contentsublocality": "Trump",Fort Lesley J. McNair", "locality": "beginOffsetWashington":, 10 "street_name": "Pennsylvania Avenue }Northwest", "typebroad_region": "PROPER"District of Columbia", }, "narrow_region": "District of Columbia", { "textstreet_number": {"1600" }, "contentsalience": "President"0, "beginOffset"mentions": 0[ { # },record "typecontent": "COMMON"1600 Pennsylvania Ave NW, Washington, DC", } ]"beginOffset": 60 }, { "nametype": "White House","TYPE_UNKNOWN" } "type": "LOCATION", ] "metadata": { } } "mid": "/m/081sq", ... ], "wikipedia_urllanguage": "https://en.wikipedia.org/wiki/White_House" " } |
4. Entity sentiment analysis
No Format |
---|
{ "entities":[ }, { "salience": 0.09172433, "mentions": [ { { "text": { "sentiment":{ "content": "White House", "beginOffsetmagnitude":0.9, 36 }, "typescore":0.9 "PROPER" } }, ] }, { "nametext":{ "Pennsylvania Ave NW", "type": "LOCATION", "metadatabeginOffset":7, { "mid": "/g/1tgb87cq" }, "content":"R&B music" "salience": 0.085507184, "mentions": [ }, { "texttype": {"COMMON" } "content": "Pennsylvania Ave NW", ], "beginOffsetmetadata": 65{ }, "typename":"R&B music"PROPER", } "salience":0.5597628, ] "sentiment":{ }, { "namemagnitude": "Washington, DC",0.9, "typescore": "LOCATION",0.9 "metadata": {}, "midtype": "/m/0rh6k", "WORK_OF_ART" } ], "wikipedia_urllanguage": "https://en.wikipedia.org/wiki/Washington,_D.C." "en" } |
Flattened:
No Format |
---|
{ "entities":[ }, { # record "salience": 0.029168168, "mentions":[ [ { # record "textmagnitude":0.9, { "contentscore": "Washington, DC",0.9 "beginOffset": 86 7, }, "typecontent": "PROPER"R&B music" } ]"type":"COMMON" } { } "name": "1600 Pennsylvania Ave NW, Washington, DC", ], "type": "ADDRESS", "metadata":{ {# as a map<string,string> since fields are dynamic. Do "country": "US", not flatten. Looks better "sublocality": "Fort Lesley J. McNair"}, "localityname":"R&B music"Washington", "street_namesalience": "Pennsylvania Avenue Northwest",0.5597628, "broad_regionmagnitude": "District of Columbia",0.9, "narrow_regionscore":0.9, "District of Columbia", "street_numbertype": "1600WORK_OF_ART" }, ], "saliencelanguage": 0, "en" } |
6. Classify content:
No Format |
---|
{ "mentionscategories": [ { "textconfidence": { 0.61, "contentname": "1600/Computers Pennsylvania Ave NW, Washington, DC",& Electronics" }, "beginOffset": 60{ }"confidence":0.53, "typename": "TYPE_UNKNOWN" "/Internet & Telecom/Mobile & Wireless" } }, ] { } }"confidence":0.53, ... ], "languagename": "en" } |
...
/News"
}
]
} |
Does not change; this JSON is already flat.