# troubleshooting
s
Hi team! I’m working on a tap-postgres (transferwise variant) => target-duckdb extraction. I’m not doing any transformations yet, so those are the only two steps. The source database has some tables with custom column types. In my meltano.yml, I’ve managed to configure `plugins.extractors.tap-postgres.schema.stream-name.XX` to correctly handle the columns XX. I also had to alter `plugins.extractors.tap-postgres.metadata.stream-name.XX` to make sure those columns get selected. Now if I dump the catalog I see my updated information instead of empty column properties (I was seeing things like `{"id": {}}` and now I have the proper `{"id": {"type": ["integer"], …}}`). The problem is that the target does not seem to be getting this additional schema information. I added a few lines of debug logging to the target-duckdb plugin and am seeing empty schema entries for the columns I configured above. See below:
```
{'type': 'SCHEMA', 'stream': 'public-order_item', 'schema': {'type': 'object', 'properties': {'id': {}, 'item_id': {}, 'order_id': {}, 'item_option_id': {}, 'quantity': {}, 'price': {}, 'sales_tax_total': {'type': ['null', 'number']}, ...}
```
So, when the `CREATE TABLE` statement is issued, columns like `id`, `item_id`, etc. are excluded from the target table. Is there a way I can make sure that the same Singer schema tap-postgres is using (the one I configured in meltano.yml) is sent on to target-duckdb?
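One way to check whether the tap itself is emitting the configured types (before anything reaches the target) is to run the extractor alone with `meltano invoke` and inspect the SCHEMA messages it prints to stdout. A minimal sketch, assuming it runs from the Meltano project root and using the `public-order_item` stream name from the dump above:

```python
# Minimal sketch: run the tap by itself and print the SCHEMA message for one
# stream, to see whether the configured column types survive past the catalog.
import json
import subprocess

proc = subprocess.Popen(
    ["meltano", "invoke", "tap-postgres"],
    stdout=subprocess.PIPE,
    text=True,
)
for line in proc.stdout:
    try:
        msg = json.loads(line)
    except json.JSONDecodeError:
        continue  # skip any non-Singer output
    if msg.get("type") == "SCHEMA" and msg.get("stream") == "public-order_item":
        print(json.dumps(msg["schema"]["properties"], indent=2))
        proc.terminate()  # we only need the schema, not the record stream
        break
```

If the properties are already empty here, the types are being lost on the tap/catalog side; if they show up correctly, something between the tap and target-duckdb is dropping them.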
a
One similar issue I was facing with a customized schema is that sometimes, once the table has been created in your target, the schema change is not picked up. Deleting the table from the target and then letting Meltano create it again (with the new schema) helped me.
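If it helps, that drop-and-recreate step might look something like this (a minimal sketch using the duckdb Python API; the database path and table name below are placeholders, not taken from this thread):

```python
# Minimal sketch: drop the stale table so the next pipeline run recreates it
# from the updated SCHEMA message. Path and table name are hypothetical.
import duckdb

con = duckdb.connect("output/warehouse.duckdb")  # placeholder database path
con.execute("DROP TABLE IF EXISTS order_item")   # placeholder table name
con.close()
```

After that, re-running the pipeline (e.g. `meltano run tap-postgres target-duckdb`) should issue a fresh `CREATE TABLE`.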
s
Thanks for the suggestion! Unfortunately that didn’t work for me 😞 Even when I delete the table in the target, Meltano does not seem to give the target the updated schema, so the table isn’t created correctly.