...

  • WorkflowToken interface changes

    Code Block
    /**
     * Interface to represent the data that is transferred from one node to the
     * next node in the {@link Workflow}.
     */
    public interface WorkflowToken {

      /**
       * Put the specified key-value entry in the {@link WorkflowToken}.
       * The token may store additional information about the context in which
       * this key is being set, for example, the unique name of the workflow node.
       * @param key   the key representing the entry
       * @param value the value for the key
       */
      void setValue(String key, String value);
    
      /**
       * The same key can be added to the WorkflowToken by multiple nodes.
       * This method returns the {@link List} of entries mapping the unique node
       * names to the values that those nodes added for the specified key. The list
       * maintains the order in which the values were inserted in the WorkflowToken
       * for a specific key. In case of a fork in the Workflow, copies of the
       * WorkflowToken are made and passed along each branch. At the join, all
       * copies of the WorkflowToken are merged together. While merging, the values
       * from the branch that completed first are added first to the WorkflowToken.
       *
       * Example: Consider that the following values were added to the Workflow
       * for the key "myKey". Numbers associated with the values represent
       * unique node names -
       *
       *                3   4
       *            |-->D-->E--|
       * A-->B-->C-->           >-->H-->I
       * 0   1   2  |-->F-->G--|    7   8
       *                5   6
       *
       * Assume that the branch containing node 5 finishes the execution first.
       *
       * Now the method invocation getValues("myKey") will return the list
       * in which the node names will be ordered as 0-1-2-5-6-3-4-7-8.
       *
       * @param key the key to be searched
       * @return the list of entries from node name to the value that node
       * added for the input key
       */
      List<NodeValueEntry> getValues(String key);
    
      /**
       * Get the value set for the specified key by specified node.
       * @param key the key to be searched
       * @param nodeName the name of the node
       * @return the value set for the key by nodeName
       */
      @Nullable
      String getValue(String key, String nodeName);
    
      /**
       * Get the most recent value for the specified key.
       * @param key the key to be searched
       * @return the value for the key
       */
      @Nullable
      String getValue(String key);
    
      /**
       * This method is deprecated as of release 3.1. Instead, to get the
       * MapReduce counters from the WorkflowToken, use the flatten key prefixed
       * by 'mr.counters.'.
       *
       * Example:
       * 1. To get the most recent value of the counter with group name
       * 'org.apache.hadoop.mapreduce.TaskCounter' and counter name 'MAP_INPUT_RECORDS'
       *
       *   String flattenCounterKey = "mr.counters.org.apache.hadoop.mapreduce.TaskCounter.MAP_INPUT_RECORDS";
       *   workflowToken.getValue(flattenCounterKey);
       *
       * 2. To get the value of the counter with group name 'org.apache.hadoop.mapreduce.TaskCounter'
       * and counter name 'MAP_INPUT_RECORDS' as set by the MapReduce program with unique name 'PurchaseHistoryBuilder'
       *
       *   String flattenCounterKey = "mr.counters.org.apache.hadoop.mapreduce.TaskCounter.MAP_INPUT_RECORDS";
       *   workflowToken.getValue(flattenCounterKey, "PurchaseHistoryBuilder");
       *
       * Get the Hadoop counters from the previous MapReduce program in the Workflow.
       * The method returns null if the counters are not set.
       * @return the Hadoop MapReduce counters set by the previous MapReduce program
       */
      @Deprecated
      @Nullable
      Map<String, Map<String, Long>> getMapReduceCounters();

      /**
       * Return true if the {@link WorkflowToken} contains the specified key.
       * @param key the key to be tested for the presence in the {@link WorkflowToken}
       * @return the result of the test
       */
      boolean containsKey(String key);
    }
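    The fork/join ordering documented on getValues can be sketched with a small, self-contained example. The NodeValue class and the merge logic below are illustrative stand-ins for the proposed NodeValueEntry and the token merge, not the actual implementation.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch (not the CDAP implementation) of the merge order that
// getValues("myKey") documents for the fork/join diagram above.
public class TokenMergeSketch {

  // Stand-in for the proposed NodeValueEntry: a (node name, value) pair.
  static final class NodeValue {
    final String node;
    final String value;
    NodeValue(String node, String value) { this.node = node; this.value = value; }
  }

  // Returns the node-name order of the merged token for the key "myKey".
  static String mergeOrder() {
    // Values added before the fork by nodes A(0), B(1), C(2)
    List<NodeValue> beforeFork = new ArrayList<>(Arrays.asList(
        new NodeValue("0", "A"), new NodeValue("1", "B"), new NodeValue("2", "C")));

    // Each branch receives its own copy of the token
    List<NodeValue> branchDE = new ArrayList<>(beforeFork);
    branchDE.add(new NodeValue("3", "D"));
    branchDE.add(new NodeValue("4", "E"));

    List<NodeValue> branchFG = new ArrayList<>(beforeFork);
    branchFG.add(new NodeValue("5", "F"));
    branchFG.add(new NodeValue("6", "G"));

    // At the join, the branch that completed first (F-G) is merged first:
    // only the entries added on each branch are appended to the shared prefix.
    List<NodeValue> merged = new ArrayList<>(beforeFork);
    merged.addAll(branchFG.subList(beforeFork.size(), branchFG.size()));
    merged.addAll(branchDE.subList(beforeFork.size(), branchDE.size()));

    // Nodes H(7) and I(8) execute after the join
    merged.add(new NodeValue("7", "H"));
    merged.add(new NodeValue("8", "I"));

    StringBuilder order = new StringBuilder();
    for (NodeValue nv : merged) {
      if (order.length() > 0) {
        order.append('-');
      }
      order.append(nv.node);
    }
    return order.toString();
  }

  public static void main(String[] args) {
    System.out.println(mergeOrder());  // prints 0-1-2-5-6-3-4-7-8
  }
}
```

    This reproduces the ordering from the javadoc: the branch containing node 5 finished first, so its entries precede those of the 3-4 branch.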
  • WorkflowConfigurer interface changes

    We generate unique numeric node ids for each node when the application is deployed. However, while writing the Workflow, users will not know the node id associated with each node. Since the WorkflowToken stores the MapReduce counters and other information at the node level, users should be able to get the value of a particular key from the token as set by a particular program in the Workflow.

    If the program is used only once in a Workflow, then the user can use its name to query for the token information. However, we allow the same program to occur multiple times in a Workflow. In that case, the program name will not be sufficient to access the token.

    The WorkflowConfigurer API can be updated to allow a user to set a unique name for a program if it occurs multiple times in a Workflow, and to use that unique name to retrieve the token.

    Code Block
    /**
     * Add MapReduce program to the {@link Workflow}.
     * @param uniqueName    the unique name for the MapReduce program which will be used
     *                      to identify a particular occurrence of the program in the Workflow
     * @param mapReduceName the name of the MapReduce program
     */
    void addMapReduce(String uniqueName, String mapReduceName);


    WorkflowToken can also be updated from a predicate on the condition node. In the presence of multiple condition nodes in a Workflow, we will need the ability to specify unique names for the conditions as well, so that token values from specific condition nodes can be fetched.

    Code Block
    /**
     * Adds a condition with the unique name to the {@link Workflow}.
     * @param conditionName the unique name to be assigned to the condition
     * @param condition     the {@link Predicate} to be evaluated for the condition
     * @return the configurer for the condition
     */
    WorkflowConditionConfigurer<? extends WorkflowConfigurer> condition(String conditionName, Predicate<WorkflowContext> condition);
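    To sketch why the unique name matters, the stub below (illustrative classes and program names only, not the real WorkflowConfigurer) registers the same MapReduce program twice under distinct unique names and rejects a duplicate name:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative stub (not the real CDAP WorkflowConfigurer) showing the same
// MapReduce program added twice under distinct unique names.
public class UniqueNameSketch {

  static final class WorkflowSpec {
    // unique node name -> program name
    final Map<String, String> nodes = new LinkedHashMap<>();

    void addMapReduce(String uniqueName, String mapReduceName) {
      if (nodes.put(uniqueName, mapReduceName) != null) {
        // two occurrences must not share a unique name
        throw new IllegalArgumentException("Duplicate node name: " + uniqueName);
      }
    }
  }

  public static void main(String[] args) {
    WorkflowSpec spec = new WorkflowSpec();
    // same program name, two occurrences, two unique names
    spec.addMapReduce("PurchaseEventParserOne", "PurchaseEventParser");
    spec.addMapReduce("PurchaseEventParserTwo", "PurchaseEventParser");
    System.out.println(spec.nodes);
  }
}
```

    Each occurrence then has its own node name, so token values can later be queried per occurrence rather than per program.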
  • Provide ability to set and get information in the WorkflowToken

    1. MapReduce program: Users should be able to access and modify WorkflowToken from "beforeSubmit" and "onFinish" methods of the MapReduce program. Since these methods get the MapReduceContext, we will need to update the MapReduceContext interface to get the WorkflowToken.

    Code Block
    /**
     * If {@link MapReduce} program is executed as a part of the {@link Workflow} 
     * then get the {@link WorkflowToken} associated with the current run, otherwise return null.  
     * @return the {@link WorkflowToken} if available
     */
    @Nullable
    WorkflowToken getWorkflowToken();


    Consider the following code sample to update the WorkflowToken in the MapReduce program:

    Code Block
    @Override
    public void beforeSubmit(MapReduceContext context) throws Exception {
      ...
      WorkflowToken workflowToken = context.getWorkflowToken();
      if (workflowToken != null) {
        // Put the action type in the WorkflowToken
        workflowToken.setValue("action", "MAPREDUCE");
        // Put the start time for the action
        workflowToken.setValue("startTime", String.valueOf(System.currentTimeMillis()));
      }
      ...
    }

    @Override
    public void onFinish(boolean succeeded, MapReduceContext context) throws Exception {
      ...
      // if the job is successful, put the hadoop counters in the WorkflowToken
      WorkflowToken workflowToken = context.getWorkflowToken();
      if (workflowToken != null) {
        // Put the end time for the action
        workflowToken.setValue("endTime", String.valueOf(System.currentTimeMillis()));
        // Put counters in the WorkflowToken
        workflowToken.setValue("counters", getHadoopCounters(context.getHadoopJob()));
      }
      ...
    }

    // Method returns the hadoop counters in JSON format
    private static String getHadoopCounters(Job job) throws Exception {
      Map<String, Map<String, Long>> mapReduceCounters = Maps.newHashMap();
      Counters counters = job.getCounters();
      for (CounterGroup group : counters) {
        mapReduceCounters.put(group.getName(), new HashMap<String, Long>());
        for (Counter counter : group) {
          mapReduceCounters.get(group.getName()).put(counter.getName(), counter.getValue());
        }
      }
      return new Gson().toJson(mapReduceCounters);
    }
    
    

    2. Spark program: Users should be able to access and modify WorkflowToken from "beforeSubmit" and "onFinish" methods of the Spark program. Since these methods get the SparkContext, we will need to update the SparkContext interface to get the WorkflowToken.

     

    Code Block
    /**
     * If {@link Spark} program is executed as a part of the {@link Workflow}
     * then get the {@link WorkflowToken} associated with the current run, otherwise return null.
     * @return the {@link WorkflowToken} if available
     */
    @Nullable
    WorkflowToken getWorkflowToken();

    3. Custom Workflow action: Since custom workflow actions already receive WorkflowContext, no changes are anticipated in the interface.

    Following is the sample code to get values from the WorkflowToken in custom action:

    Code Block
    @Override
    public void run() {
      ...
      WorkflowToken token = getContext().getToken();
      // set the type of the action of the current node	
      token.setValue("action", "CUSTOM_ACTION");
     
      // Assuming that every node in the Workflow adds the key "action" with the value as action type in the WorkflowToken
      List<NodeValueEntry> nodeValues = token.getValues("action");
     
      // To get the number of nodes executed by the Workflow - nodeValues.size();
     
      // Simply iterate over the nodeValues to get the order in which the nodes were executed.
     
      // To get the start time of the MapReduce program with unique name "PurchaseHistoryBuilder"
      String startTime = token.getValue("startTime", "PurchaseHistoryBuilder");
     
      // To get the MapReduce counters set by MapReduce program with unique name "PurchaseHistoryBuilder"
      Type mapReduceCounterType = new TypeToken<Map<String, Map<String, Long>>>() {}.getType();
      Map<String, Map<String, Long>> counters = new Gson().fromJson(token.getValue("counters", "PurchaseHistoryBuilder"),
                                                                    mapReduceCounterType);  
      ...
    }
  • WorkflowToken in presence of Fork and Join
    When a fork is encountered in the Workflow, we make a copy of the WorkflowToken for each branch and pass it along. At the join, we create a new WorkflowToken that is a merge of the WorkflowTokens from each branch of the fork. Since the information in the token is stored at the node level, there will not be any conflicts during the merge process.

  • Persisting the WorkflowToken
    The RunRecord for the Workflow will contain the WorkflowToken as a property. This token will be updated before the execution of each action in the Workflow. We will also add a version field to the RunRecord itself, which will help in the upgrade process.
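    A minimal sketch of the RunRecord shape this implies; the class and field names here are hypothetical, not the actual CDAP classes:

```java
// Hypothetical sketch of the extended RunRecord described above; names are
// illustrative, not the actual CDAP classes.
public class RunRecordSketch {

  static final class RunRecord {
    final int version;        // new version field to help the upgrade process
    final String runId;
    String workflowTokenJson; // WorkflowToken persisted as a record property

    RunRecord(int version, String runId) {
      this.version = version;
      this.runId = runId;
    }

    // called before each action in the Workflow executes
    void updateToken(String tokenJson) {
      this.workflowTokenJson = tokenJson;
    }
  }

  public static void main(String[] args) {
    RunRecord record = new RunRecord(2, "run123");
    record.updateToken("{\"action\":\"MAPREDUCE\"}");
    System.out.println(record.version + " " + record.workflowTokenJson);
  }
}
```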

  • RESTful endpoints to access the value of the WorkflowToken that was received by an individual node in the Workflow
    We will expose a RESTful endpoint to retrieve the token values that were set by a particular node, as identified by its unique name.

     

    Code Block
    /apps/{app-id}/workflows/{workflow-name}/runs/{run-id}/nodes/{unique-node-name}/token
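    A client-side sketch of how the endpoint path could be assembled; the identifier values below are illustrative examples, not part of the proposal:

```java
// Illustrative helper for building the proposed token endpoint path; the
// identifiers used in main() are example values only.
public class TokenEndpointSketch {

  static String tokenPath(String appId, String workflowName, String runId, String nodeName) {
    return String.format("/apps/%s/workflows/%s/runs/%s/nodes/%s/token",
                         appId, workflowName, runId, nodeName);
  }

  public static void main(String[] args) {
    System.out.println(tokenPath("PurchaseApp", "PurchaseHistoryWorkflow",
                                 "run123", "PurchaseHistoryBuilder"));
  }
}
```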

     

     

...