# troubleshooting
Hello all, I want to use the postgres-snowflake extractor/loader pipeline for a really big Postgres DB (several tables with 25+ million rows). While testing locally, I've noticed that some tables are always stuck on full table replication: every sync after the initial historical sync still performs a full table replication, which takes hours, even though I have logical replication specified in my `meltano.yaml` file. Is there a reason for this behavior, and if so, how can I fix it?
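For reference, per-stream replication settings live under the extractor's `metadata` key in `meltano.yaml`. A minimal sketch (the stream names `public-orders`/`public-events` and the column `updated_at` are placeholders, not from this thread):

```yaml
plugins:
  extractors:
    - name: tap-postgres
      metadata:
        # stream names follow the tap's "<schema>-<table>" convention
        public-orders:
          replication-method: LOG_BASED    # logical replication via the WAL
        public-events:
          replication-method: INCREMENTAL
          replication-key: updated_at      # placeholder bookmark column
```

If a stream has no `replication-method` (or no usable bookmark), the tap falls back to `FULL_TABLE`, which would explain the behavior described above.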
Things to check:
1. A replication key needs to be configured for state to work on a table (stream in tap lingo).
2. `database_uri` needs to be set for state to be stored.
3. Using `meltano run ...`, the key in the Meltano job table is generated for you; with legacy `meltano elt` you have to specify the state key.

To debug, I would:
1. `meltano run tap-postgres target-snowflake`
2. `ctrl+c` almost immediately
3. Check the job table (see `database_uri`) to confirm the state was written correctly.
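The debug steps above can be sketched as a shell session using Meltano's `state` subcommands (the state ID shown follows Meltano's `{environment}:{tap}-to-{target}` naming convention and is illustrative, not taken from this thread):

```shell
# Start a sync, then interrupt it (ctrl+c) after a few seconds
meltano run tap-postgres target-snowflake

# List the state IDs Meltano has stored in the system database
meltano state list

# Inspect the stored Singer state for one pipeline; a LOG_BASED stream
# should show a WAL/lsn bookmark, not an empty payload
meltano state get dev:tap-postgres-to-target-snowflake
```

If `meltano state list` shows nothing after an interrupted run, state is not being persisted, which points back to the `database_uri` / replication-key checks above.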