...

  1. Navigate to the pipeline detail page.

  2. In the Configure menu, click on Resources.

  3. Enter the desired amount under Executor.


  4. In the same Configure menu, click on Compute config.

  5. Click customize on the desired compute profile.

  6. Ensure that the worker memory is a multiple of the executor memory. For example, if executor memory is 4096 MB, worker memory should be 4, 8, 12, etc. GB, and worker cores should be scaled accordingly. Worker memory does not strictly need to be an exact multiple, but if it is not, cluster capacity is more likely to be wasted, because the leftover memory on each worker is too small to hold another executor.
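The sizing rule in step 6 can be sketched with a little arithmetic. This is a simplified model that ignores YARN/Dataproc memory overheads; the function names are illustrative, not part of any CDAP API:

```python
# Simplified check of how executor memory packs into worker memory
# (ignores cluster memory overheads; names are illustrative only).

def executors_per_worker(worker_mem_gb: int, executor_mem_mb: int) -> int:
    """Number of executors of the given size that fit on one worker."""
    return (worker_mem_gb * 1024) // executor_mem_mb

def wasted_mb(worker_mem_gb: int, executor_mem_mb: int) -> int:
    """Worker memory stranded after packing whole executors."""
    fitted = executors_per_worker(worker_mem_gb, executor_mem_mb)
    return worker_mem_gb * 1024 - fitted * executor_mem_mb

# 8 GB worker with 4096 MB executors: 2 executors fit, nothing is wasted.
print(executors_per_worker(8, 4096), wasted_mb(8, 4096))    # 2 0
# 10 GB worker: still only 2 executors fit, and 2048 MB is stranded.
print(executors_per_worker(10, 4096), wasted_mb(10, 4096))  # 2 2048
```

This is why 4, 8, or 12 GB workers pair cleanly with 4096 MB executors, while a 10 GB worker strands 2 GB per machine.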

...

  1. Navigate to the pipeline detail page.

  2. In the Configure menu, click on Engine config.

  3. Enter 'spark.cdap.pipeline.autocache.enable' as the key, and 'false' as the value.
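The key/value pair from step 3 is an ordinary string-to-string property. As a reference, it can also be represented programmatically, for example when supplying runtime arguments to a pipeline start request; the payload shape below is a hedged sketch, not a documented CDAP API call:

```python
import json

# The same engine-config property as in step 3, expressed as a flat
# string-to-string map (the surrounding payload shape is an assumption;
# the key and value themselves come from the steps above).
runtime_args = {"spark.cdap.pipeline.autocache.enable": "false"}

payload = json.dumps(runtime_args)
print(payload)  # {"spark.cdap.pipeline.autocache.enable": "false"}
```

Note that the value is the string `'false'`, not a boolean, matching what the Engine config UI expects.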

...
