The Salesforce batch sink is responsible for using the Salesforce API to upsert Salesforce objects. The sink should handle large batches of data (~10 GB) and all object types: contacts, campaigns, opportunities, leads, and custom objects. Users should be able to upload a subset of fields.

...

User configurations (label, description, default, widget, early validations), grouped by section:

Section: Authentication

  • Username: Salesforce username. Widget: Text Box. Early validation: try to log in to the Bulk API with the given credentials.
  • Password: Salesforce password. Widget: Password.
  • Consumer Key: Consumer Key from the connected app. Widget: Text Box.
  • Consumer Secret: Consumer Secret from the connected app. Widget: Password.
  • Login Url: For Salesforce sandbox runs the login URL is different, so the user needs this option. Default: https://login.salesforce.com/services/oauth2/token. Widget: Text Box.

Section: Advanced

  • SObject: Name of the Salesforce sObject, e.g. Contact, Campaign, Opportunity. Widget: Text Box. Early validation: check that an sObject with the given name exists in the Bulk API.
  • Maximum bytes per batch: If the size of the batch data is larger than the given number of bytes, split the batch. Default: 10,000,000 [2]. Widget: Text Box. Early validation: fail if the value is more than 10,000,000 [3].
  • Maximum records per batch: If there are more than the given number of records, split the batch. Default: 10,000 [2]. Widget: Text Box. Early validation: fail if the value is more than 10,000 [4].
  • Error handling: The Bulk API returns success results per row, so this option is necessary [1] (unlike for source plugins). Possible values: "Skip on error" ignores any reports about records not inserted and simply prints an error log; "Stop on error" fails the pipeline if any record failed on insertion. Default: Skip on error. Widget: Select.
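
For the credential early validation, a minimal sketch is shown below. It assumes the OAuth username-password flow against the configured Login Url (which matches the default token endpoint above); the class and method names are illustrative, and in the real plugin the returned access token and instance URL would be used to build the Bulk API connection.

Code Block
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

/**
 * Early validation sketch: try to obtain an OAuth token from the configured
 * login URL using the username-password flow. A non-200 response means the
 * credentials (or consumer key/secret) are wrong, so the pipeline fails at
 * deployment time rather than at runtime.
 */
public class SalesforceCredentialsValidator {

  public static void validate(String loginUrl, String consumerKey, String consumerSecret,
                              String username, String password) throws IOException {
    String body = "grant_type=password"
        + "&client_id=" + URLEncoder.encode(consumerKey, "UTF-8")
        + "&client_secret=" + URLEncoder.encode(consumerSecret, "UTF-8")
        + "&username=" + URLEncoder.encode(username, "UTF-8")
        + "&password=" + URLEncoder.encode(password, "UTF-8");

    HttpURLConnection conn = (HttpURLConnection) new URL(loginUrl).openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
    try (OutputStream out = conn.getOutputStream()) {
      out.write(body.getBytes(StandardCharsets.UTF_8));
    }

    if (conn.getResponseCode() != 200) {
      throw new IllegalArgumentException(
          "Cannot log in to Salesforce, HTTP status: " + conn.getResponseCode());
    }
    // On success the JSON response contains access_token and instance_url,
    // which are later used to build the Bulk API connection.
  }
}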

...

STEP 3. Split data into batches

Here we create multiple strings with CSV data. Each of them contains a multitude of records, and each string represents the data for a separate batch.

Splitting data into batches is done considering these 3 factors:

  1. User configurations "Maximum bytes per batch" and "Maximum records per batch" must be obeyed (a sketch of the resulting policy follows this list).
  2. Bulk API limitations must be obeyed. Here's the list of them:
    1. Batches for data loads can consist of a single CSV that is no larger than 10 MB.
    2. A batch can contain a maximum of 10,000 records.
    3. A batch can contain a maximum of 10,000,000 characters for all the data in a batch.
    4. A field can contain a maximum of 32,000 characters.
    5. A record can contain a maximum of 5,000 fields.
    6. A record can contain a maximum of 400,000 characters for all its fields.
    7. A batch must contain some content or an error occurs.
    How we handle these:

    Limits 1, 2, 3 - controlled by splitting the data correctly.
    Limit 5 - checked during schema validation.
    Limits 4, 6, 7 - if the data which comes into the sink exceeds these, simply let the batch fail. Nothing we can do about these.

  3. How many records come to a specific mapper. For more details see <TODO>.
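
A minimal sketch of the batching policy described above is shown below. The BatchPolicy class and its method names are illustrative; it only captures the decision of when a new batch must be started, based on the user configurations and on hard Bulk API limits 1 and 2 from the list.

Code Block
/**
 * Batching policy sketch. Limits 1-3 from the list above are enforced here;
 * the user-configured maximums can only tighten the hard Bulk API limits.
 */
public class BatchPolicy {

  // Hard Bulk API limits (CSV batch size and records per batch).
  private static final long MAX_BYTES_HARD_LIMIT = 10_000_000L;
  private static final int MAX_RECORDS_HARD_LIMIT = 10_000;

  private final long maxBytesPerBatch;   // user configuration "Maximum bytes per batch"
  private final int maxRecordsPerBatch;  // user configuration "Maximum records per batch"

  public BatchPolicy(long maxBytesPerBatch, int maxRecordsPerBatch) {
    this.maxBytesPerBatch = Math.min(maxBytesPerBatch, MAX_BYTES_HARD_LIMIT);
    this.maxRecordsPerBatch = Math.min(maxRecordsPerBatch, MAX_RECORDS_HARD_LIMIT);
  }

  /**
   * Returns true if appending a CSV row of the given size to the current batch
   * would violate the limits, i.e. the current batch must be submitted first.
   */
  public boolean needsNewBatch(long currentBatchBytes, int currentBatchRecords, long nextRowBytes) {
    return currentBatchRecords > 0
        && (currentBatchBytes + nextRowBytes > maxBytesPerBatch
            || currentBatchRecords + 1 > maxRecordsPerBatch);
  }
}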

STEP 4. Add batches to the job

Pass a ByteArrayInputStream of the CSV string to the Bulk API and ask it to create a batch using it.
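
A sketch of this step, assuming the Salesforce WSC client (com.sforce.async.BulkConnection); the helper class is illustrative:

Code Block
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

import com.sforce.async.AsyncApiException;
import com.sforce.async.BatchInfo;
import com.sforce.async.BulkConnection;
import com.sforce.async.JobInfo;

/** Submit one CSV string as a batch of an already created Bulk API job. */
public class BatchSubmitter {

  static BatchInfo submitBatch(BulkConnection bulkConnection, JobInfo job, String csv)
      throws AsyncApiException {
    // The CSV string built by the RecordWriter is streamed directly to the
    // Bulk API; no temporary files are written.
    ByteArrayInputStream input = new ByteArrayInputStream(csv.getBytes(StandardCharsets.UTF_8));
    return bulkConnection.createBatchFromStream(job, input);
  }
}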

STEP 5. Close Job

Ask Bulk API to close the job. This means that no more batches will be expected by Salesforce.
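
A sketch of closing the job through the same WSC client (helper name is illustrative):

Code Block
import com.sforce.async.AsyncApiException;
import com.sforce.async.BulkConnection;
import com.sforce.async.JobInfo;

/** Close the Bulk API job so Salesforce knows no more batches will be added. */
public class JobCloser {
  static JobInfo closeJob(BulkConnection bulkConnection, JobInfo job) throws AsyncApiException {
    // Batches that were already submitted keep processing after the job is closed.
    return bulkConnection.closeJob(job.getId());
  }
}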

...

Code Block
Id,Success,Created,Error
fa4t2fggee,true,true,
rqewetrter,true,true,
,false,false,Field 'Name' is required and cannot be empty for sObject 'Contact'
gre3jvd245,true,true,

Records which have success=false are considered erroneous. Erroneous records are processed according to the user configuration "Error handling". For more information see the section "User configurations".
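
A sketch of checking these per-record results is shown below. It follows the pattern from Salesforce's Bulk API sample code (CSVReader over getBatchResultStream); the class name and the stopOnError flag, which corresponds to the "Error handling" configuration, are illustrative.

Code Block
import java.io.IOException;
import java.util.List;

import com.sforce.async.AsyncApiException;
import com.sforce.async.BatchInfo;
import com.sforce.async.BulkConnection;
import com.sforce.async.CSVReader;
import com.sforce.async.JobInfo;

/** Check per-record results of a completed batch and apply the error handling strategy. */
public class BatchResultChecker {

  static void checkResults(BulkConnection bulkConnection, JobInfo job, BatchInfo batch,
                           boolean stopOnError) throws AsyncApiException, IOException {
    CSVReader reader = new CSVReader(bulkConnection.getBatchResultStream(job.getId(), batch.getId()));
    List<String> header = reader.nextRecord();
    int successColumn = header.indexOf("Success");
    int errorColumn = header.indexOf("Error");

    List<String> row;
    while ((row = reader.nextRecord()) != null) {
      boolean success = Boolean.parseBoolean(row.get(successColumn));
      if (!success) {
        String message = "Record failed to insert: " + row.get(errorColumn);
        if (stopOnError) {
          // "Stop on error": fail the pipeline on the first erroneous record.
          throw new RuntimeException(message);
        }
        // "Skip on error": just log the problem and continue.
        System.err.println(message);
      }
    }
  }
}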

...

RecordWriter#constructor

Establish a connection to the Salesforce Bulk API.

RecordWriter#write

Append a record to a CSV StringBuilder. If, according to our batching policy, the record needs to go into a new batch, we submit the current CSV string to the Salesforce job as a separate batch. After that a new string is created and the record is appended to it.

For information on how we calculate batches please see Splitting data into batches.
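
A sketch of this write flow is shown below. It reuses the hypothetical BatchPolicy from the batching section and leaves the actual batch submission abstract; character counts are used as an approximation of the byte limit.

Code Block
/** Sketch of the per-record write path of the RecordWriter (helper names are illustrative). */
public abstract class CsvBatchWriter {

  private final BatchPolicy policy;   // hypothetical helper from the batching section
  private StringBuilder csv = new StringBuilder();
  private int recordsInBatch = 0;

  protected CsvBatchWriter(BatchPolicy policy) {
    this.policy = policy;
  }

  public void write(String csvRow) throws Exception {
    // Character counts are used here as an approximation of the byte limit.
    if (policy.needsNewBatch(csv.length(), recordsInBatch, csvRow.length())) {
      submitBatch(csv.toString());     // e.g. BulkConnection#createBatchFromStream
      csv = new StringBuilder();       // start a new CSV string for the next batch
      recordsInBatch = 0;
    }
    csv.append(csvRow).append('\n');
    recordsInBatch++;
  }

  /** Submits the accumulated CSV as one Bulk API batch; implemented by the real writer. */
  protected abstract void submitBatch(String csvData) throws Exception;
}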

RecordWriter#close

  1. Submit the current CSV StringBuilder as a batch to the Salesforce job.
  2. Wait for completion of EVERY batch which was submitted by the current mapper (see the sketch after this list).
  3. Check the results for every record in EVERY batch (submitted by the current mapper) and act on them according to the error handling strategy configured by the user.
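
A sketch of waiting for a single batch to finish (step 2) is shown below; the helper name and poll interval are illustrative.

Code Block
import com.sforce.async.AsyncApiException;
import com.sforce.async.BatchInfo;
import com.sforce.async.BatchStateEnum;
import com.sforce.async.BulkConnection;
import com.sforce.async.JobInfo;

/** Wait until a submitted batch reaches a terminal state. */
public class BatchAwaiter {

  static BatchInfo awaitCompletion(BulkConnection bulkConnection, JobInfo job, BatchInfo batch)
      throws AsyncApiException, InterruptedException {
    while (true) {
      BatchInfo info = bulkConnection.getBatchInfo(job.getId(), batch.getId());
      if (info.getState() == BatchStateEnum.Completed || info.getState() == BatchStateEnum.Failed) {
        return info;
      }
      Thread.sleep(10_000L);   // poll every 10 seconds
    }
  }
}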

Other points


...

Some points on behavior if the schema contains fields which are not present, or are not editable (creatable), in the target sObject.

Let's consider a case where a user wants to copy Contacts from one Salesforce instance to another. This is done by simply connecting the Salesforce Source and Sink in an ETL pipeline.

...

Validating schema

An sObject contains a lot of fields which cannot be inserted (non-creatable fields), like Id, IsDeleted, LastModifiedDate, and many other fields which are often auto-generated.

These fields are different for every sObject. The good news is that we can query the Salesforce SOAP API and check whether each field of an sObject is creatable.

Based on the above, we do an early validation and check whether the schema contains fields which are not present or not creatable in the target sObject.
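
A sketch of this early validation is shown below, using describeSObject from the SOAP Partner API (Field#isCreateable); the class and method names are illustrative.

Code Block
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.sforce.soap.partner.DescribeSObjectResult;
import com.sforce.soap.partner.Field;
import com.sforce.soap.partner.PartnerConnection;
import com.sforce.ws.ConnectionException;

/** Early schema validation sketch: reject schema fields that are missing or non-creatable. */
public class SchemaValidator {

  static void validateSchemaFields(PartnerConnection connection, String sObjectName,
                                   List<String> schemaFieldNames) throws ConnectionException {
    DescribeSObjectResult describeResult = connection.describeSObject(sObjectName);
    Map<String, Boolean> creatableByField = new HashMap<>();
    for (Field field : describeResult.getFields()) {
      creatableByField.put(field.getName(), field.isCreateable());
    }

    for (String fieldName : schemaFieldNames) {
      Boolean creatable = creatableByField.get(fieldName);
      if (creatable == null) {
        throw new IllegalArgumentException(
            "Field '" + fieldName + "' does not exist in sObject '" + sObjectName + "'");
      }
      if (!creatable) {
        throw new IllegalArgumentException(
            "Field '" + fieldName + "' is not creatable in sObject '" + sObjectName + "'");
      }
    }
  }
}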

Converting fields

We will have to convert logical types like date, datetime, and time from long values to the string formats accepted by Salesforce. Other types won't require conversion.
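
A sketch of these conversions is shown below. It assumes the usual Avro/CDAP representations (date as days since epoch, time and timestamp as microseconds); the class name is illustrative.

Code Block
import java.time.Instant;
import java.time.LocalDate;
import java.time.LocalTime;
import java.time.format.DateTimeFormatter;

/** Sketch of converting logical type values into strings accepted by the Bulk API CSV. */
public class LogicalTypeConverter {

  // Assumes date = days since epoch, time/timestamp = microseconds (Avro/CDAP logical types).
  static String convertDate(int daysSinceEpoch) {
    return LocalDate.ofEpochDay(daysSinceEpoch).toString();                   // e.g. 2019-03-21
  }

  static String convertTimeMicros(long microsOfDay) {
    return LocalTime.ofNanoOfDay(microsOfDay * 1000).toString();              // e.g. 13:45:30
  }

  static String convertTimestampMicros(long microsSinceEpoch) {
    Instant instant = Instant.ofEpochSecond(microsSinceEpoch / 1_000_000,
        (microsSinceEpoch % 1_000_000) * 1000);
    return DateTimeFormatter.ISO_INSTANT.format(instant);                     // e.g. 2019-03-21T13:45:30Z
  }
}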

...