Note: See the public version of this document. If you update this page, please also update the public page.
This article explains step by step how to run pipelines against existing Dataproc clusters. This feature is available only on the Enterprise edition of Cloud Data Fusion ("Execution environment selection").
...
1. SSH Setup on Dataproc Cluster
   a. Navigate to the Dataproc console on Google Cloud Platform. Go to "Cluster details" by clicking on your Dataproc cluster name.
   b. Under "VM Instances", click on the "SSH" button to connect to the master Dataproc VM.
   c. To create a new SSH key, follow the steps here, format the public key file to enforce an expiration time, and add the newly created SSH public key at the project or instance level.
      i. Use command:
         ssh-keygen -m PEM -t rsa -b 4096 -f ~/.ssh/[KEY_FILENAME] -C [USERNAME]
         instead of the one in the doc link to generate an SSH key that is compatible with CDF. Remember to leave the passphrase empty, i.e. when prompted for one, hit Enter.
   d. This will create 2 key files:
      ~/.ssh/[KEY_FILENAME] (Private Key)
      ~/.ssh/[KEY_FILENAME].pub (Public Key)
   e. To view these in an easily copiable format, use commands:
      i. cat [KEY_FILENAME].pub
      ii. cat [KEY_FILENAME]
   f. Navigate to the GCE VM instance detail page. Click Metadata > SSH Keys. Edit and add the full public key from the copy in step [1.e.i]. Make sure to delete any newlines that may have been pasted in.
   g. Check the GCE VM instance detail page to confirm that your public SSH key appears in the "SSH Keys" section. If not, edit the page and add your username and public key.
   h. If SSH is set up successfully, you should be able to see the SSH key you just added in the Metadata section of your Compute Engine console, as well as in the authorized_keys file on your Dataproc VM.
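If you prefer to script this instead of using the console UI, the same key setup can be done with gcloud. The following is a minimal sketch, not from the doc linked above: [MASTER_NODE] and [ZONE] are placeholders for your master VM name and its zone, and note that add-metadata replaces the existing ssh-keys value, so merge in any keys that are already there first.

   # Generate a PEM-format RSA key pair (hit Enter to leave the passphrase empty)
   ssh-keygen -m PEM -t rsa -b 4096 -f ~/.ssh/[KEY_FILENAME] -C [USERNAME]

   # GCE expects each ssh-keys metadata entry in the form "USERNAME:PUBLIC_KEY"
   echo "[USERNAME]:$(cat ~/.ssh/[KEY_FILENAME].pub)" > /tmp/ssh-keys.txt

   # Attach the key at the instance level (this overwrites the current ssh-keys
   # entry, so append any existing keys to /tmp/ssh-keys.txt before running)
   gcloud compute instances add-metadata [MASTER_NODE] --zone [ZONE] \
       --metadata-from-file ssh-keys=/tmp/ssh-keys.txt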
...
2. Create a customized system compute profile for your Data Fusion instance
   a. Navigate to your Data Fusion instance console by clicking on "View Instance".
   b. Click on "System Admin" in the top right corner.
   c. Under the "Configuration" tab, expand "System Compute Profiles". Click on "Create New Profile", and choose "Remote Hadoop Provisioner" on the next page.
   d. Fill out the general information for the profile.
      i. Host: You can find the SSH host IP of the master node on its "VM instance details" page under Compute Engine. If the instance is private, use the master's internal IP rather than the external one.
      ii. User: This is the username you specified when creating the keys in step [1.c.i].
      iii. SSH private key: Copy the SSH private key created in step [1.e.ii] and paste it into the "SSH Private Key" field. Include the beginning and ending comments in your copy:
           -----BEGIN RSA PRIVATE KEY-----
           -----END RSA PRIVATE KEY-----
           Make sure your key is an RSA private key, not an OPENSSH key (if it is OPENSSH, make sure you used the command in step [1.c.i] and included PEM).
   e. Click Create to create the profile.
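Before pasting, it can save a debugging round trip to confirm the key really is in PEM RSA format. A quick check from the machine where you generated it, assuming the file name from step [1.c.i]:

   # Must print "-----BEGIN RSA PRIVATE KEY-----"; if it prints
   # "-----BEGIN OPENSSH PRIVATE KEY-----" instead, regenerate with -m PEM
   head -n 1 ~/.ssh/[KEY_FILENAME]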
3. Configure your Data Fusion pipeline to use the customized profile
   a. Click on the pipeline.
   b. Click on Configure > Compute config and choose your newly created profile.
   c. Start the pipeline, which will now run against your existing Dataproc cluster!
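If you automate deployments, the same selection can be made without the UI, since CDAP exposes preferences and program start over its REST API. A hedged sketch, assuming a batch pipeline (whose program is the DataPipelineWorkflow), a system-scoped profile named [PROFILE_NAME], and a CDAP endpoint stored in ${CDAP_ENDPOINT}; all bracketed names are placeholders:

   # Point the pipeline at the profile (system profiles are referenced as "SYSTEM:<name>")
   curl -X PUT "${CDAP_ENDPOINT}/v3/namespaces/default/apps/[PIPELINE_NAME]/preferences" \
       -H "Authorization: Bearer $(gcloud auth print-access-token)" \
       -H "Content-Type: application/json" \
       -d '{"system.profile.name": "SYSTEM:[PROFILE_NAME]"}'

   # Start the pipeline
   curl -X POST "${CDAP_ENDPOINT}/v3/namespaces/default/apps/[PIPELINE_NAME]/workflows/DataPipelineWorkflow/start" \
       -H "Authorization: Bearer $(gcloud auth print-access-token)"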
...
If the pipeline fails on connection timeout, check whether the SSH key and the firewall rules are configured correctly. See step 1 for the SSH setup, and here for firewall rules.
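As a concrete example, if the cluster's network simply lacks an ingress rule for SSH, something along these lines opens it. This is a sketch, not taken from the firewall doc linked above: [RULE_NAME] and [NETWORK] are placeholders, and [SOURCE_RANGE] should be narrowed to the range your Data Fusion instance connects from rather than 0.0.0.0/0.

   gcloud compute firewall-rules create [RULE_NAME] \
       --network [NETWORK] --direction INGRESS \
       --allow tcp:22 --source-ranges [SOURCE_RANGE]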
If you get an 'invalid privatekey' error while running the pipeline, check whether the first line of your private key is '-----BEGIN OPENSSH PRIVATE KEY-----'. If so, try generating a key pair with:
ssh-keygen -m PEM -t rsa -b 4096
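Alternatively, you can convert the existing key in place rather than generating a new pair; the public half (and anything already pasted into VM metadata) stays valid. A sketch, assuming the key has an empty passphrase:

   # -p rewrites the private key file in place; -m PEM forces PEM output;
   # -N "" keeps the passphrase empty
   ssh-keygen -p -m PEM -N "" -f ~/.ssh/[KEY_FILENAME]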
If connecting to the VM via SSH from the command line with the private key works, but the same setup results in an "Auth failed" exception from JSch, verify that OS Login is not enabled. From the Compute Engine UI, click "Metadata" in the menu on the left, and then click on the "Metadata" tab. Delete the "osLogin" key or set it to "FALSE".
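The equivalent from the command line, as a sketch: on current consoles the metadata key is usually spelled enable-oslogin, and it can be set at the project level as well as on the instance, so check both. [MASTER_NODE] and [ZONE] are placeholders.

   # Disable OS Login on the master VM so JSch key-based auth is used
   gcloud compute instances add-metadata [MASTER_NODE] --zone [ZONE] \
       --metadata enable-oslogin=FALSE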