This page describes the configuration settings that control how much memory a pipeline uses. Memory management is especially important in Spark pipelines that contain aggregations or joins.

Before you begin

Deploy a pipeline that uses the Spark engine.

Setting Executor Memory

Spark pipelines consist of a driver and multiple executors. Executors do most of the work and are usually the components that require more memory.

  1. Navigate to the pipeline detail page.

  2. In the Configure menu, click on Resources.

  3. Enter the desired amount of memory under Executor.

  4. In the same Configure menu, click on Compute config.

  5. Click Customize on the desired compute profile.

  6. Ensure that the worker memory is a multiple of the executor memory. For example, if executor memory is 4096 MB, worker memory should be 4, 8, 12, etc. GB. Scale the worker cores accordingly as well. Worker memory does not strictly need to be an exact multiple, but if it is not, cluster capacity is more likely to be wasted, as the sketch below illustrates.
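
To see why an exact multiple packs best, here is a minimal sketch (plain Python with illustrative numbers, not part of CDAP) that computes how many executors fit on a worker and how much memory is left over. Real clusters also reserve some per-node and per-executor overhead, so treat this as the rough idea rather than an exact capacity model.

    # Illustrative sketch (not CDAP code): how executor memory packs into worker memory.
    def packing(worker_gb: int, executor_gb: int) -> None:
        fits = worker_gb // executor_gb             # whole executors that fit
        wasted = worker_gb - fits * executor_gb     # capacity left unused
        print(f"{worker_gb} GB worker / {executor_gb} GB executors: "
              f"{fits} fit, {wasted} GB wasted")

    packing(8, 4)    # exact multiple: 2 executors, 0 GB wasted
    packing(10, 4)   # not a multiple: still only 2 executors, 2 GB wasted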

Turning off Auto-Caching

By default, pipelines cache intermediate data in order to prevent Spark from recomputing it. Caching requires a substantial amount of memory, so pipelines that process a large amount of data will often need to turn it off.
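
As a rough illustration of what auto-caching does at the Spark level (standalone PySpark, not CDAP pipeline code; the data and names are made up), caching pins an intermediate result in executor memory so downstream actions reuse it instead of recomputing it:

    # Illustrative PySpark sketch of what auto-caching does conceptually.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("autocache-sketch").getOrCreate()

    df = spark.range(10_000_000)
    buckets = df.groupBy((df.id % 100).alias("bucket")).count()

    buckets.cache()       # pins the intermediate result in executor memory
    buckets.count()       # materializes the cache
    buckets.show(5)       # reuses the cached result, no recomputation

    buckets.unpersist()   # releasing the cache frees executor memory
    spark.stop()

Turning auto-caching off trades memory for recomputation: Spark may recompute parts of the pipeline, but executors no longer hold the intermediate data.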

  1. Navigate to the pipeline detail page.

  2. In the Configure menu, click on Engine config.

  3. Enter 'spark.cdap.pipeline.autocache.enable' as the key, and 'false' as the value.
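If you prefer to script this, the CDAP Preferences REST API can set the same key as a runtime preference. The sketch below makes several assumptions (host, port, namespace, and application name are hypothetical placeholders, and whether a runtime preference overrides the pipeline's engine config can depend on your CDAP version, so verify on a test run):

    # Hypothetical sketch: set the key via the CDAP Preferences REST API.
    # Host/port, namespace, and app name below are placeholders.
    import requests

    cdap = "http://localhost:11015"   # default CDAP router address (verify for your install)
    url = f"{cdap}/v3/namespaces/default/apps/MyPipeline/preferences"

    resp = requests.put(url, json={"spark.cdap.pipeline.autocache.enable": "false"})
    resp.raise_for_status()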
