...

  1. (3.4) A developer should be able to create pipelines that contain aggregations (GROUP BY -> count/sum/unique), as illustrated in the sketch after this list
  2. (3.5) A developer should be able to control the order in which parts of a pipeline run; for example, one source -> sink branch running before another source -> sink branch
  3. (3.5) A developer should be able to use a Spark ML job as a pipeline stage
  4. (3.4) A developer should be able to rerun failed pipeline runs without reconfiguring the pipeline
  5. (3.4) A developer should be able to de-duplicate records in a pipeline
  6. (3.5) A developer should be able to join multiple branches of a pipeline
  7. (3.5) A developer should be able to use an Explore action as a pipeline stage
  8. (3.5) A developer should be able to create pipelines that contain Spark Streaming jobs
  9. (3.5) A developer should be able to create pipelines that run based on various conditions, including input data availability and Kafka events
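
A minimal sketch of the aggregation story above (GROUP BY -> count/sum/unique), with the de-duplication story folded in as a single step. It is written directly against Spark SQL rather than the pipeline framework's plugin API; the purchases data, its column names, and the local SparkSession are illustrative assumptions, not part of the roadmap itself.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.{count, countDistinct, sum}

object AggregationStageSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("aggregation-stage-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical input: one record per purchase event.
    val purchases: DataFrame = Seq(
      ("alice", "books", 12.50),
      ("alice", "games", 40.00),
      ("alice", "books", 12.50), // duplicate record
      ("bob",   "books",  7.25)
    ).toDF("user", "category", "price")

    // De-duplication (story 5): drop exact duplicate records.
    val deduped = purchases.dropDuplicates()

    // Aggregation (story 1): GROUP BY user -> record count,
    // sum of a numeric field, and count of unique values in another field.
    val aggregated = deduped
      .groupBy($"user")
      .agg(
        count("*").as("numPurchases"),
        sum($"price").as("totalSpent"),
        countDistinct($"category").as("uniqueCategories")
      )

    aggregated.show()
    spark.stop()
  }
}
```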

...