This article explains, step by step, how to run pipelines against an existing Dataproc cluster. This feature ("Execution environment selection") is available only in the Enterprise edition of Cloud Data Fusion.
Prerequisites:
An existing Dataproc cluster, which you can set up by following this guide (see the example command after this list).
A Cloud Data Fusion instance and a data pipeline. Learn how to create a new instance by following this guide.
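If you do not yet have a cluster, a minimal sketch of creating one with the gcloud CLI is shown below; the cluster name, region, machine types, and worker count are placeholder assumptions, so adjust them for your environment.
# Sketch only: cluster name, region, machine types, and worker count are assumptions.
gcloud dataproc clusters create example-cluster \
    --region=us-central1 \
    --master-machine-type=n1-standard-4 \
    --worker-machine-type=n1-standard-4 \
    --num-workers=2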
Instructions
Set up SSH on the Dataproc cluster
Navigate to the Dataproc console on Google Cloud Platform. Go to the “Cluster details” page by clicking on your Dataproc cluster's name.
Under “VM Instances”, click the “SSH” button to connect to the master Dataproc VM.
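As an alternative to the console's SSH button, you can connect from a local terminal with gcloud; the instance name and zone below are placeholders (Dataproc names the master VM <cluster-name>-m by default).
# Placeholders: instance name and zone; the Dataproc master VM is usually named <cluster-name>-m.
gcloud compute ssh example-cluster-m --zone=us-central1-a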
To create a new SSH key pair, use the command:
ssh-keygen -m PEM -t rsa -b 4096 -f ~/.ssh/[KEY_FILENAME] -C [USERNAME]
This will create two key files:
~/.ssh/[KEY_FILENAME] (Private Key)
~/.ssh/[KEY_FILENAME].pub (Public Key)
To view them in an easily copyable format, use the commands:
cat ~/.ssh/[KEY_FILENAME].pub
cat ~/.ssh/[KEY_FILENAME]
Navigate to the Compute Engine VM instance details page for the master node, then go to Metadata > SSH Keys. Click Edit and add the full public key you copied in step [1.e.i]. Make sure to remove any newlines introduced by pasting.
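If you prefer the command line, a hedged alternative to editing the metadata in the UI is sketched below; the instance name and zone are placeholders, and note that this overwrites any existing instance-level ssh-keys value, so merge keys manually if others are already present.
# Placeholders: instance name and zone. Overwrites the instance-level ssh-keys value if one exists.
gcloud compute instances add-metadata example-cluster-m \
    --zone=us-central1-a \
    --metadata ssh-keys="[USERNAME]:$(cat ~/.ssh/[KEY_FILENAME].pub)"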
Create a customized system compute profile for your Data Fusion instance
Navigate to your Data Fusion instance console by clicking on “View Instance”.
Click on “System Admin” in the top right corner.
Under the “Configuration” tab, expand “System Compute Profiles”. Click on “Create New Profile”, and choose “Remote Hadoop Provisioner” on the next page.
Fill out the general information for the profile.
Host: The SSH host IP of the master node, which you can find on the “VM instance details” page under Compute Engine (see the lookup sketch after this step). If the instance is private, use the master's internal IP rather than the external one.
User: The username you specified when creating the keys in step [1.c.i].
SSH private key: Copy the SSH private key created in step [1.e.ii] and paste it into the “SSH Private Key” field.
Include the beginning and ending lines in your copy:
-----BEGIN RSA PRIVATE KEY-----
-----END RSA PRIVATE KEY-----
Make sure your key is an RSA private key, not an OPENSSH key (if it is OPENSSH, make sure you used the command in step [1.c.i], including the -m PEM option).
Click “Create” to create the profile.
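For the Host field above, one way to look up the master node's internal and external IPs from a terminal is sketched below; the instance name and zone are assumptions (Dataproc names the master VM <cluster-name>-m by default).
# Placeholders: instance name and zone.
gcloud compute instances describe example-cluster-m --zone=us-central1-a \
    --format="value(networkInterfaces[0].networkIP)"              # internal IP
gcloud compute instances describe example-cluster-m --zone=us-central1-a \
    --format="value(networkInterfaces[0].accessConfigs[0].natIP)" # external IP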
Configure your Data Fusion pipeline to use the customized profile
Click on the pipeline.
Click on Configure -> Compute config and choose your newly created profile.
Start the pipeline, which will now run against your existing Dataproc cluster!
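If you prefer to start runs outside the UI, a rough sketch of using the CDAP REST API exposed by the Data Fusion instance is shown below; the instance name, region, namespace, and pipeline name are all assumptions, and the workflow name DataPipelineWorkflow applies to standard batch data pipelines.
# Sketch only: instance name, region, namespace, and pipeline name are assumptions.
export CDAP_ENDPOINT=$(gcloud beta data-fusion instances describe example-instance \
    --location=us-central1 --format="value(apiEndpoint)")
curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "${CDAP_ENDPOINT}/v3/namespaces/default/apps/example-pipeline/workflows/DataPipelineWorkflow/start"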
Troubleshoot
If the pipeline fails with a connection timeout, check whether the SSH key and the firewall rules are configured correctly. See step 1 for the SSH setup, and here for firewall rules.
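As a quick manual check (the key path, username, and master IP below are placeholders), confirm that a plain SSH login works and that a firewall rule allows TCP port 22 from the network your Data Fusion instance uses.
# Placeholders: key path, username, master IP.
ssh -i ~/.ssh/[KEY_FILENAME] [USERNAME]@[MASTER_IP]   # should log in without a password prompt
gcloud compute firewall-rules list                    # confirm a rule allows TCP port 22 from the Data Fusion network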
If you get an ‘invalid privatekey’ error while running the pipeline, check whether the first line of your private key is:
-----BEGIN OPENSSH PRIVATE KEY-----
If so, try generating a key pair with:
ssh-keygen -m PEM -t rsa -b 4096
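To confirm which format your key is in (the path below is a placeholder), check its first line; a PEM RSA key begins with -----BEGIN RSA PRIVATE KEY-----.
head -n 1 ~/.ssh/[KEY_FILENAME]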
If connecting to the VM via SSH from the command line with the private key works, but the same setup results in an “Auth failed” exception from JSch, verify that OS Login is not enabled. From the Compute Engine UI, click “Metadata” in the menu on the left, then click on the “Metadata” tab. Delete the OS Login key (“enable-oslogin”) or set it to “FALSE”.
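The same setting can also be changed from the command line; this assumes the metadata key enable-oslogin controls OS Login, and the instance name and zone are placeholders.
# Assumption: OS Login is controlled by the enable-oslogin metadata key; instance name and zone are placeholders.
gcloud compute instances add-metadata example-cluster-m \
    --zone=us-central1-a \
    --metadata enable-oslogin=FALSE
# Project-wide equivalent:
gcloud compute project-info add-metadata --metadata enable-oslogin=FALSE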