# troubleshooting
r
I’m having trouble getting the salesforce data to load. I have other taps like the gitlab tap working fine, using the same target as salesforce. The last few lines of the log when running the pipeline are:
```
tap-salesforce  | INFO Completed sync for Account
tap-salesforce  | INFO METRIC: {"type": "counter", "metric": "record_count", "value": 4354, "tags": {"endpoint": "Contact"}}
tap-salesforce  | INFO METRIC: {"type": "counter", "metric": "record_count", "value": 4302, "tags": {"endpoint": "Lead"}}
tap-salesforce  | INFO METRIC: {"type": "counter", "metric": "record_count", "value": 4709, "tags": {"endpoint": "Contact"}}
tap-salesforce  | INFO METRIC: {"type": "counter", "metric": "record_count", "value": 4652, "tags": {"endpoint": "Lead"}}
tap-salesforce  | INFO METRIC: {"type": "counter", "metric": "record_count", "value": 4707, "tags": {"endpoint": "Contact"}}
tap-salesforce  | INFO METRIC: {"type": "counter", "metric": "record_count", "value": 4573, "tags": {"endpoint": "Lead"}}
tap-salesforce  | INFO METRIC: {"type": "counter", "metric": "record_count", "value": 5077, "tags": {"endpoint": "Contact"}}
tap-salesforce  | INFO METRIC: {"type": "counter", "metric": "record_count", "value": 4854, "tags": {"endpoint": "Lead"}}
target-postgres | INFO Stream Contact (contact) with max_version 1620254664902 targetting 1620254664902
target-postgres | INFO Root table name Contact
target-postgres | INFO Writing batch with 20723 records for `Contact` with `key_properties`: `['Id']`
meltano         | Loading failed (-9): INFO Writing batch with 20723 records for `Contact` with `key_properties`: `['Id']`
meltano         | ELT could not be completed: Loader failed
```
I’m using the meltano variant of tap-salesforce. In my `meltano.yml` file it looks like this:
```yaml
- name: tap-salesforce
  variant: meltano
  pip_url: git+https://gitlab.com/meltano/tap-salesforce.git
  load_schema: salesforce
  select:
  - Account.*
  - Contact.*
  - Contract.*
  - Lead.*
  - Opportunity.*
  - OpportunityHistory.*
  - User.*
```
d
@robert_kern Is it possible your system/container is running out of memory and the target process is getting killed because of it? The `-9` implies it was killed by SIGKILL.
(Note to self: create an issue about having `meltano elt` explicitly state this if the exit code of the process was `-9`.)
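The negative exit code convention mentioned above can be demonstrated with Python’s `subprocess`, which reports a child killed by signal N as return code `-N` — the same `-9` the Meltano log shows when the kernel’s OOM killer sends SIGKILL. A minimal sketch on a POSIX system (`sleep` stands in for the loader process):

```python
import signal
import subprocess

# Start a long-running child process, then kill it with SIGKILL,
# the same signal the OOM killer delivers.
proc = subprocess.Popen(["sleep", "60"])
proc.send_signal(signal.SIGKILL)
proc.wait()

# Death by signal is reported as a negative return code:
# -9 == -signal.SIGKILL, matching "Loading failed (-9)" in the log.
print(proc.returncode)  # → -9
```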
Consider tweaking [`max_batch_rows`](https://meltano.com/plugins/loaders/postgres.html#max-batch-rows) and/or [`max_buffer_size`](https://meltano.com/plugins/loaders/postgres.html#max-buffer-size) to get the batch or buffer size down, or just increasing the available memory.
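In `meltano.yml` those settings would go under the loader’s `config` block; a sketch (the values here are illustrative starting points, not recommendations):

```yaml
- name: target-postgres
  config:
    max_batch_rows: 5000       # fewer rows held in memory per batch
    max_buffer_size: 52428800  # bytes buffered before a forced flush (illustrative)
```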
r
Hmm interesting. The container wasn’t killed but it was using 1.3Gi of the available 1.4Gi. I’ll try tweaking the max batch rows and buffer size first
```
target-postgres | INFO Stream Lead (lead) with max_version 1620256953947 targetting 1620256953947
target-postgres | INFO Root table name Lead
target-postgres | INFO Writing batch with 18641 records for `Lead` with `key_properties`: `['Id']`
meltano         | Loading failed (-9): INFO Writing batch with 18641 records for `Lead` with `key_properties`: `['Id']`
meltano         | ELT could not be completed: Loader failed
```
Same thing but will tweak those settings further and also increase memory
I went crazy and reduced `TARGET_POSTGRES_MAX_BATCH_ROWS` down to `1000` and it does appear to be working so far. I’ll see how it goes and might look at increasing that value later. Thanks!
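The environment-variable form used here follows Meltano’s `<PLUGIN>_<SETTING>` naming convention for overriding plugin settings. A sketch of setting it for a single run (the `meltano elt` invocation is illustrative):

```shell
# Override the loader setting for this shell session only;
# the variable name maps to target-postgres's max_batch_rows setting.
export TARGET_POSTGRES_MAX_BATCH_ROWS=1000

# then rerun the pipeline, e.g.:
#   meltano elt tap-salesforce target-postgres
```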
c
I think I gave my SF container 2 GB for the hourly incremental and 16 GB for a full refresh