joanna
11/16/2023, 2:01 PM
version: 1
default_environment: prod
project_id: XXX
environments:
- name: dev
- name: staging
- name: prod
plugins:
  extractors:
  - name: tap-rest-api-msdk
    variant: widen
    pip_url: tap-rest-api-msdk
    config:
      api_url: https://company.restapi.com/api/v1
      streams:
      - name: elab_exp
        path: /exp
        headers:
          Authorization: ${XXX}
        primary_keys:
        - expID
        records_path: $.data[*]
        params:
          projectID: 00
      - name: elab_sec
        path: /exp/AAAAAA/sec
        headers:
          Authorization: ${XXX}
        primary_keys:
        - sectID
        records_path: $.data[*]
        params:
          projectID: 00
Hi, I'm trying to use an environment variable in my meltano.yml. I exported my variable XXX to the environment. However, when I run Meltano locally, I get an error saying it was not recognized. Could you please tell me what I am doing wrong? According to the documentation this should work:
https://docs.meltano.com/guide/configuration/#overriding-discoverable-plugin-properties
*Your meltano.yml project file*, under the plugin's config key.
• Inside values, environment variables can be referenced as $VAR (as a single word) or ${VAR} (inside a word).
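(For reference, a minimal sketch of the documented pattern; the token value, the Bearer prefix, and the invoke command shown here are placeholders, not details from the thread.)

# Export the variable in the same shell that will run Meltano:
export XXX='Bearer my-api-token'

# meltano.yml excerpt: reference it as $XXX (whole value) or ${XXX} (inside a word)
    headers:
      Authorization: $XXX
      # or, embedded in a longer value:
      # Authorization: Bearer ${XXX}

# Run Meltano from that same shell so the variable is inherited:
meltano invoke tap-rest-api-msdk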
pat_nadolny
11/16/2023, 3:06 PM
TAP_REST_API_MSDK_STREAMS='[{"name":"elab_exp", "...
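(A sketch, not quoted from pat, of what that full override might look like when built from the streams config above; the Authorization value is a placeholder.)

export TAP_REST_API_MSDK_STREAMS='[
  {"name": "elab_exp", "path": "/exp",
   "headers": {"Authorization": "Bearer my-api-token"},
   "primary_keys": ["expID"], "records_path": "$.data[*]",
   "params": {"projectID": "00"}},
  {"name": "elab_sec", "path": "/exp/AAAAAA/sec",
   "headers": {"Authorization": "Bearer my-api-token"},
   "primary_keys": ["sectID"], "records_path": "$.data[*]",
   "params": {"projectID": "00"}}
]'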
joanna
11/17/2023, 8:29 AM
How would you solve the issue that I need to fetch IDs from a parent endpoint and query another endpoint based on those values?

pat_nadolny
11/20/2023, 3:01 PM
There's a variety of things that I've heard people do for situations like this, depending on how many IDs you need. Some people will write a tap that reads in a CSV file that contains all its IDs and run that in a chain like
meltano run tap-postgres target-csv tap-<custom_using_csv> target-snowflake
The first tap/target pair preps the input file for the second. You could also request those IDs and pass them into each stream during stream initialization, or in the stream class's init method if each stream makes a different call. Another, more experimental, thing that I've done in https://hub.meltano.com/mappers/map-gpt-embeddings is to build what I called a "tap mapper" that has both tap and mapper SDK functionality in it, so you can run tap-postgres custom-tap-mapper target-snowflake and skip the whole input CSV step; in my case I make one or more calls to OpenAI's API for each input record.
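(A minimal sketch of the second approach pat mentions, fetching the parent IDs once and passing them into the child stream at init time, using the Meltano Singer SDK. The tap/stream names, the /exp endpoints, and the auth_token setting are assumptions modeled on the meltano.yml above, not a working tap.)

import requests
from singer_sdk import Tap, typing as th
from singer_sdk.streams import RESTStream


class ElabSectionsStream(RESTStream):
    """Child records: one request per experiment ID handed in by the tap."""

    name = "elab_sec"
    url_base = "https://company.restapi.com/api/v1"
    path = "/exp/{expID}/sec"  # {expID} is filled in from each partition context
    primary_keys = ["sectID"]
    records_jsonpath = "$.data[*]"
    schema = th.PropertiesList(
        th.Property("sectID", th.IntegerType),
        th.Property("expID", th.StringType),
    ).to_dict()

    def __init__(self, tap, exp_ids):
        super().__init__(tap=tap)
        self._exp_ids = exp_ids

    @property
    def http_headers(self):
        headers = super().http_headers
        headers["Authorization"] = self.config["auth_token"]
        return headers

    @property
    def partitions(self):
        # One context per parent ID; the SDK requests the endpoint once per context.
        # IDs are passed as strings so they can be substituted into the URL path.
        return [{"expID": str(exp_id)} for exp_id in self._exp_ids]


class TapElab(Tap):
    name = "tap-elab"
    config_jsonschema = th.PropertiesList(
        th.Property("auth_token", th.StringType, required=True),
    ).to_dict()

    def discover_streams(self):
        # Fetch the parent IDs up front, then hand them to the child stream.
        resp = requests.get(
            "https://company.restapi.com/api/v1/exp",
            headers={"Authorization": self.config["auth_token"]},
            timeout=30,
        )
        resp.raise_for_status()
        exp_ids = [row["expID"] for row in resp.json()["data"]]
        return [ElabSectionsStream(tap=self, exp_ids=exp_ids)]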