# troubleshooting
p
Hi everyone. I'm starting to lose my hair over this one. I'm receiving a `Loader failed` error with the Python exceptions shown in the attached file. I'm running a `meltano run tap-hubspot-contacts target-s3` command with the following config:
version: 1

plugins:
  extractors:
    - name: tap-hubspot
      variant: potloc
...
    - name: tap-hubspot-contacts
      inherit_from: tap-hubspot
      select:
        - contacts.*
        - "!contacts.associations.*"
...
  loaders:
    - name: target-s3
      variant: crowemi
      config:
        format.format_type: parquet
environments:
  - name: localhost
  - name: prod
Does anybody have any idea what might cause this?
j
I don't know what that error means, but looking at the `target-s3` source there seem to be plenty more config values that are needed, e.g. `cloud_provider` etc. Source: https://github.com/crowemi/target-s3/blob/main/target_s3/target.py
p
Yeah, those are provided with env vars; I have them filled in based on the environment. I have multiple jobs with the same target and the same settings, and those run just fine, but this one doesn't, for some reason unknown to me.
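Roughly like this on the Fargate side (just illustrative, assuming Meltano's usual `<PLUGIN_NAME>_<SETTING_NAME>` env var convention; the values are placeholders):

```bash
# Sketch only: Meltano maps env vars of the form <PLUGIN_NAME>_<SETTING_NAME>
# (uppercased, '-' and '.' become '_') onto plugin settings, so the ECS task
# definition can inject them instead of hard-coding them in meltano.yml.
export TARGET_S3_CLOUD_PROVIDER=aws             # setting mentioned above; value is a placeholder
export TARGET_S3_FORMAT_FORMAT_TYPE=parquet     # same setting as the flattened key in the config
# ...plus whatever bucket/credential settings the crowemi variant expects

meltano --environment prod run tap-hubspot-contacts target-s3
```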
j
I usually troubleshoot by replacing the loader with something simple, e.g. `target-jsonl`. That way at least I know the extractor is working properly.
Other than that, I'd need to try to reproduce this to understand what it is complaining about 😞
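Something along these lines (just a sketch; assumes `target-jsonl` gets added to the project first):

```bash
# Add a throwaway JSONL loader and run the same extractor against it.
# If this succeeds, the problem is on the target-s3 / S3 side.
meltano add loader target-jsonl
meltano run tap-hubspot-contacts target-jsonl
```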
p
I should've added that it works locally inside a Docker container, but this is run from an AWS Fargate task. It should be almost identical, but something must be different. I've spent days on this already but no luck 😞
u
Are the Python versions the same between the two runtime environments? Idk what your ECS setup looks like, but if you can run it using invoke you should get much better error messages. You could try replacing `meltano run tap-hubspot target-s3` with something like `meltano invoke tap-hubspot > output.json && cat output.json | meltano invoke target-s3`. There are several issues open related to making the `run` logs better. I'd suspect that there's an obvious error message telling you what configuration is messed up, but it's being suppressed right now.
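Adapted to the plugin names in this thread, it could look roughly like this (sketch only; the output file name is arbitrary):

```bash
# Compare interpreter versions between the local container and the Fargate image.
python --version
meltano --version

# Run tap and target separately so each one's stderr is visible on its own.
meltano invoke tap-hubspot-contacts > contacts.singer.jsonl
cat contacts.singer.jsonl | meltano invoke target-s3
```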
l
could be a connectivity or permissions issue between fargate and your destination? i would try to rule that out somehow. maybe you can start a one-off task with the same fargate config and see if it can just do a plain ol’ read/write to s3
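e.g. something like this from inside the task (just a sketch, bucket name is a placeholder):

```bash
# Sanity-check the task role's S3 access outside of Meltano entirely.
aws sts get-caller-identity
echo "hello from fargate" > /tmp/probe.txt
aws s3 cp /tmp/probe.txt s3://YOUR-BUCKET/meltano-probe/probe.txt   # placeholder bucket
aws s3 ls s3://YOUR-BUCKET/meltano-probe/
```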
p
That shouldn't be an issue: I have multiple parallel jobs with the same config, just with different selects, and only this one fails. I'll try running the invoke command and see if I get any better errors.