API (purely programmatic)
Users will need to implement a DynamicPartitioner, which is responsible for defining the PartitionKey to use for each record.
For each partition key that the DynamicPartitioner returns, a partition will be created in the output PartitionedFileSet dataset.
```java
public static final class CustomDynamicPartitioner implements DynamicPartitioner<byte[], Long> {
  @Override
  public PartitionKey getPartitionKey(byte[] key, Long value, long logicalStartTime) {
    return PartitionKey.builder()
      .addLongField("time", logicalStartTime)
      .addLongField("other", value)
      .build();
  }
}
```
In the beforeSubmit() method of the MapReduce job, set the PartitionedFileSet dataset, along with the DynamicPartitioner, as output:
```java
public void beforeSubmit(MapReduceContext context) throws Exception {
  // define input and other setup
  // ...

  Map<String, String> outputArgs = new HashMap<>();
  // as an alternative to setting a static PartitionKey, set a DynamicPartitioner class
  PartitionedFileSetArguments.setDynamicPartitioner(outputArgs, CustomDynamicPartitioner.class);
  context.addOutput("outputLines", outputArgs);
}
```
High-level Implementation Design
Currently, when a PartitionedFileSet dataset is used as the output of a MapReduce job, a single partition must be set as the output partition.
As a result, the MapReduce job writes to a single output directory and registers a single partition in the PartitionedFileSet's metadata table.
Dynamic Partitioning will allow users to derive a PartitionKey from each record processed by the MapReduce job.
If no output PartitionKey is set, we will look for a specified DynamicPartitioner, which maps each record to a PartitionKey.
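The output-setup decision above can be sketched as follows. This is a minimal illustration, not the actual implementation: the argument-key names (`output.partition.key`, `output.dynamic.partitioner`) are hypothetical stand-ins for whatever PartitionedFileSetArguments uses internally.

```java
import java.util.Map;

// Sketch of the fallback logic: prefer a static output PartitionKey,
// otherwise fall back to a configured DynamicPartitioner class.
public class OutputSetupSketch {

  static final String KEY_ARG = "output.partition.key";               // hypothetical argument key
  static final String PARTITIONER_ARG = "output.dynamic.partitioner"; // hypothetical argument key

  // Returns which output mode the runtime arguments select.
  static String chooseOutputMode(Map<String, String> outputArgs) {
    if (outputArgs.containsKey(KEY_ARG)) {
      // A single static PartitionKey was set: write one partition, as today.
      return "static";
    }
    if (outputArgs.containsKey(PARTITIONER_ARG)) {
      // No static key, but a DynamicPartitioner was set:
      // derive a PartitionKey per record at write time.
      return "dynamic";
    }
    throw new IllegalArgumentException(
        "Either an output PartitionKey or a DynamicPartitioner must be set");
  }
}
```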
We will also have to implement our own DynamicPartitioningOutputFormat, which is responsible for writing to multiple paths, depending on the PartitionKey.
We will also need to implement an OutputCommitter, which is responsible for creating a partition for each of the partition keys written to.
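The core routing idea behind such an output format can be sketched without Hadoop: keep one writer per partition key, created lazily on first use, and route each record to the writer chosen for its key. The `PartitionWriter` class and the `/output/` path prefix below are simplified stand-ins for a Hadoop RecordWriter and the real partition paths; the OutputCommitter would then register one partition per key in `partitionKeysWritten()`.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Minimal sketch of the per-key routing a DynamicPartitioningOutputFormat
// would perform; Hadoop classes are replaced with simple stand-ins.
public class DynamicRoutingSketch {

  // Stand-in for a RecordWriter bound to one partition's output path.
  static final class PartitionWriter {
    final String path;
    final List<String> lines = new ArrayList<>();
    PartitionWriter(String path) { this.path = path; }
    void write(String line) { lines.add(line); }
  }

  // One writer per partition key, created lazily on first use.
  private final Map<String, PartitionWriter> writers = new HashMap<>();

  PartitionWriter writerFor(String partitionKey) {
    return writers.computeIfAbsent(partitionKey,
        key -> new PartitionWriter("/output/" + key));
  }

  // Route a record to the writer for its partition key.
  void write(String partitionKey, String record) {
    writerFor(partitionKey).write(record);
  }

  // The OutputCommitter would create one partition per key returned here.
  Set<String> partitionKeysWritten() {
    return writers.keySet();
  }
}
```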