
The user will need to extend and implement a DynamicPartitioner, which is responsible for defining the PartitionKey to use for each record.
For each partition key that this DynamicPartitioner returns, a partition will be created in the output PartitionedFileSet dataset.

 

Code Block
languagejava
public static final class CustomDynamicPartitioner extends DynamicPartitioner<byte[], Long> {
  private long logicalStartTime;

  @Override
  public void initialize(long logicalStartTime) {
    this.logicalStartTime = logicalStartTime;
  }

  @Override
  public PartitionKey getPartitionKey(byte[] key, Long value) {
    return PartitionKey.builder().addLongField("time", logicalStartTime).addLongField("other", value).build();
  }
}


In the beforeSubmit() method of the user's MapReduce job, set the PartitionedFileSet dataset as output, with the user's DynamicPartitioner:

Code Block
languagejava
public void beforeSubmit(MapReduceContext context) throws Exception {
  // define input and other setup
  // ...

  Map<String, String> outputArgs = new HashMap<>();
  // as an alternative to setting a single PartitionKey, set a DynamicPartitioner class
  PartitionedFileSetArguments.setDynamicPartitioner(outputArgs, CustomDynamicPartitioner.class);
  context.addOutput("outputLines", outputArgs);
}
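
For reference, the "outputLines" PartitionedFileSet written to above must be created with a partitioning whose fields match those built by the DynamicPartitioner. A minimal sketch of the dataset creation in the application's configure() method follows; the choice of output format here is an assumption, not part of the design:

Code Block
languagejava
// sketch: create the PartitionedFileSet with partitioning fields matching the DynamicPartitioner above
createDataset("outputLines", PartitionedFileSet.class, PartitionedFileSetProperties.builder()
  // must match the "time" and "other" fields added in getPartitionKey()
  .setPartitioning(Partitioning.builder().addLongField("time").addLongField("other").build())
  // any FileSet-supported output format works; TextOutputFormat is just an example
  .setOutputFormat(TextOutputFormat.class)
  .build());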

 
Alternatives:
1) To give the user more flexibility, an alternative to passing the logicalStartTime into the DynamicPartitioner is to pass in CDAP's MapReduceTaskContext or Hadoop's TaskAttemptContext. Passing in the latter would require moving the DynamicPartitioner interface into a new cdap-api-hadoop module.
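
As a sketch of the first alternative: the partitioner would read the logical start time (and anything else it needs) from the context instead of receiving it directly. The signature below is hypothetical, not part of the current design:

Code Block
languagejava
// hypothetical variant: initialize receives CDAP's MapReduceTaskContext (alternative 1)
public static final class CustomDynamicPartitioner extends DynamicPartitioner<byte[], Long> {
  private long logicalStartTime;

  @Override
  public void initialize(MapReduceTaskContext<byte[], Long> mapReduceTaskContext) {
    // the context also exposes runtime arguments, datasets, etc.
    this.logicalStartTime = mapReduceTaskContext.getLogicalStartTime();
  }

  @Override
  public PartitionKey getPartitionKey(byte[] key, Long value) {
    return PartitionKey.builder().addLongField("time", logicalStartTime).addLongField("other", value).build();
  }
}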

High-level Implementation Design

Currently, when using a PartitionedFileSet dataset as an output for MapReduce, a single partition must be set as the output partition: the job writes to a single output directory and registers a single partition within the PartitionedFileSet's metadata table.
Dynamic Partitioning will allow users to specify a PartitionKey based on the records being processed by the MapReduce job.


If this output PartitionKey is missing, we will look for a specified DynamicPartitioner, which maps each record to a PartitionKey.
We will also have to implement our own DynamicPartitioningOutputFormat, which is responsible for writing to multiple paths, depending on the PartitionKey (sketched after the DynamicPartitioner definition below).
We will also need to implement an OutputCommitter, which is responsible for creating a partition for each of the partition keys written to.

Code Block
languagejava
titleDynamicPartitioner.java
/**
 * Responsible for dynamically determining a {@link PartitionKey}.
 * For each K, V pair, first the getPartitionKey(K, V) method is called to determine a PartitionKey.
 * Then, the transformKey and transformValue methods are called to allow transforming the actual key and values written.
 *
 * @param <K> Type of key
 * @param <V> Type of value
 */
public abstract class DynamicPartitioner<K, V> {

  /**
   *  Initializes a DynamicPartitioner.
   *  <p>
   *    This method will be called only once per {@link DynamicPartitioner} instance. It is the first method call
   *    on that instance.
   *  </p>
   *  @param logicalStartTime see {@link co.cask.cdap.api.mapreduce.MapReduceContext#getLogicalStartTime}
   */
  public void initialize(long logicalStartTime) {
    // do nothing by default
  }

  /**
   *  Destroys a DynamicPartitioner.
   *  <p>
   *    This method will be called only once per {@link DynamicPartitioner} instance. It is the last method call
   *    on that instance.
   *  </p>
   */
  public void destroy() {
    // do nothing by default
  }

  /**
   * Determine the PartitionKey for the key-value pair to be written to.
   *
   * @param key the key to be written
   * @param value the value to be written
   * @return the {@link PartitionKey} for the key-value pair to be written to.
   */
  public abstract PartitionKey getPartitionKey(K key, V value);

  /**
   * Determine the key to be written.
   *
   * @param key the key to be written
   * @param value the value to be written
   * @return a transformed key to be written.
   */
  public K transformKey(K key, V value) {
    return key;
  }

  /**
   * Determine the value to be written.
   *
   * @param key the key to be written
   * @param value the value to be written
   * @return a transformed value to be written.
   */
  public V transformValue(K key, V value) {
    return value;
  }
}
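
To make the output format's responsibility concrete, here is a rough sketch of how a DynamicPartitioningOutputFormat could keep one RecordWriter per PartitionKey. The helper methods (instantiatePartitioner, getLogicalStartTime, newWriterForPath, toRelativePath) are hypothetical and the path scheme is an assumption, not a final design:

Code Block
languagejava
titleDynamicPartitioningOutputFormat.java (sketch)
public class DynamicPartitioningOutputFormat<K, V> extends FileOutputFormat<K, V> {

  @Override
  public RecordWriter<K, V> getRecordWriter(final TaskAttemptContext job) throws IOException, InterruptedException {
    // instantiate and initialize the user's DynamicPartitioner (helpers are hypothetical)
    final DynamicPartitioner<K, V> partitioner = instantiatePartitioner(job.getConfiguration());
    partitioner.initialize(getLogicalStartTime(job.getConfiguration()));

    return new RecordWriter<K, V>() {
      // one underlying writer per PartitionKey seen by this task
      private final Map<PartitionKey, RecordWriter<K, V>> writers = new HashMap<>();

      @Override
      public void write(K key, V value) throws IOException, InterruptedException {
        PartitionKey partitionKey = partitioner.getPartitionKey(key, value);
        RecordWriter<K, V> writer = writers.get(partitionKey);
        if (writer == null) {
          // open a writer rooted at a relative path derived from the key, e.g. "<time>/<other>"
          writer = newWriterForPath(job, toRelativePath(partitionKey));
          writers.put(partitionKey, writer);
        }
        writer.write(partitioner.transformKey(key, value), partitioner.transformValue(key, value));
      }

      @Override
      public void close(TaskAttemptContext context) throws IOException, InterruptedException {
        for (RecordWriter<K, V> writer : writers.values()) {
          writer.close(context);
        }
        partitioner.destroy();
      }
    };
  }
}

The matching OutputCommitter would then, on successful job completion, register one partition in the PartitionedFileSet's metadata table for each relative path that was written to.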

 
