
By default, Data Fusion has read and write access to BigQuery, GCS, Pub/Sub, Spanner, and Bigtable in the project where the Data Fusion instance is created. To access other GCP resources, or any of the above resources in a different project, follow the instructions below.

Before you begin

Create a Data Fusion instance

Granting access to GCP resources

Data Fusion uses service accounts to access GCP resources in wrangler, in preview, and for pipelines running on Dataproc. The service account used for running services in the tenant project, such as preview and wrangler, has the following format: service-<customer-project-number>@gcp-sa-datafusion.iam.gserviceaccount.com. This service account is created automatically when the Cloud Data Fusion API is enabled on the project. Actual pipeline execution on the Dataproc cluster uses the Compute Engine default service account. Any additional GCP resource that Data Fusion needs to access must grant appropriate permissions to both of these service accounts.
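
For quick reference, here is a minimal Python sketch that derives both service account emails from a project number. The project number shown is a placeholder, and the Compute Engine default account follows the standard <project-number>-compute@developer.gserviceaccount.com format that GCP generates.

    # Minimal sketch: derive the two service account emails that Data Fusion
    # uses, given a project number (shown on the GCP Console home page).
    # The project number below is a placeholder.

    project_number = "123456789"  # placeholder: replace with your project number

    # Tenant-project service account used by preview and wrangler.
    datafusion_sa = f"service-{project_number}@gcp-sa-datafusion.iam.gserviceaccount.com"

    # Compute Engine default service account used by pipelines on Dataproc.
    compute_sa = f"{project_number}-compute@developer.gserviceaccount.com"

    print(datafusion_sa)
    print(compute_sa)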

For example, to add access to Datastore from preview and wrangler, follow the steps below (a scripted alternative appears after the list):

  1. In the GCP Console, open the IAM & Admin page.

  2. In the left bar, click IAM.

  3. Edit roles for service-<customer-project-number>@gcp-sa-datafusion.iam.gserviceaccount.com.

  4. On the Edit permissions page, add the Cloud Datastore Owner role and click Save.

  5. Perform similar steps (i.e., add the same role) for the Compute Engine default service account to allow the pipeline to access Datastore during its execution on Dataproc.
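
The console steps above can also be scripted. The following Python sketch uses the Cloud Resource Manager v1 API (via the google-api-python-client library) to append the same role binding for both service accounts. The project ID, project number, and the grant_role helper name are illustrative, and the script assumes Application Default Credentials are already configured.

    # Scripted alternative to the console steps: grant a role to both
    # Data Fusion service accounts via the Cloud Resource Manager API.
    # Requires: pip install google-api-python-client, plus Application
    # Default Credentials with permission to set IAM policy.

    from googleapiclient import discovery

    PROJECT_ID = "my-project"      # placeholder: your project ID
    PROJECT_NUMBER = "123456789"   # placeholder: your project number

    MEMBERS = [
        f"serviceAccount:service-{PROJECT_NUMBER}@gcp-sa-datafusion.iam.gserviceaccount.com",
        f"serviceAccount:{PROJECT_NUMBER}-compute@developer.gserviceaccount.com",
    ]

    def grant_role(role: str) -> None:
        """Append a binding for `role` covering both service accounts."""
        crm = discovery.build("cloudresourcemanager", "v1")
        policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()
        policy.setdefault("bindings", []).append({"role": role, "members": MEMBERS})
        crm.projects().setIamPolicy(
            resource=PROJECT_ID, body={"policy": policy}
        ).execute()

    grant_role("roles/datastore.owner")  # Cloud Datastore Owner, as in step 4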

To provide access to BigQuery, follow the same steps to add the BigQuery Admin and BigQuery Data Owner roles for both the Data Fusion service account and the Compute Engine default service account.
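
Assuming the grant_role sketch above, the equivalent BigQuery grants would look like this; roles/bigquery.admin and roles/bigquery.dataOwner are the predefined role IDs for the roles named in this section.

    # Grant the BigQuery roles to both service accounts, reusing the
    # grant_role helper sketched above.
    grant_role("roles/bigquery.admin")      # BigQuery Admin
    grant_role("roles/bigquery.dataOwner")  # BigQuery Data Owner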
