# troubleshooting
Good Morning All, I'm working on a replication job using pipelinewise-tap-postgres and loading with target-snowflake. One of the tables I'm replicating has ~45 million records, and the initial run took 7+ hours. Any suggestions or ideas on how I can improve performance?
`meltano.yml` file can be found in the 🧵:

```yaml
default_environment: dev
environments:
- name: dev
- name: staging
- name: prod
plugins:
  extractors:
  - config:
      capabilities:
      - discover
      - catalog
      - state
      dbname: inventory
      default_replication_method: LOG_BASED
      host: **
      logical_poll_total_seconds: 600
      password: ***
      port: **
      ssl: **
      user: **
    name: tap-postgres
    pip_url: pipelinewise-tap-postgres
    select:
    - public-table1.*
    - public-table2.*
    - public-table3.*
    - public-table4.*
    - public-table5.*
    - public-table6.*
    - public-large_table_7.*
    - second_schema-table1.*
    - second_schema-table2.*
    variant: transferwise
  loaders:
  - config:
      account: 
      add_metadata_columns: 'true'
      dbname: **
      default_target_schema: **
      file_format: **
      password: ***
      role: **
      schema_mapping:
        second_schema:
          target_schema: target_second_schema
        public:
          target_schema: target
      user: ***
      warehouse: ***
    name: target-snowflake
    pip_url: pipelinewise-target-snowflake
    variant: transferwise
project_id: 
version: 1
```
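One knob worth experimenting with on large initial loads (a suggestion on my part, not something confirmed in this thread): pipelinewise-target-snowflake exposes batching/parallelism settings such as `batch_size_rows` and `parallelism`, and a larger batch size can reduce the number of COPY round-trips into Snowflake. A hedged sketch of what the loader config might look like with those options added; check the option names and defaults against the version of the target you have installed:

```yaml
  loaders:
  - name: target-snowflake
    variant: transferwise
    pip_url: pipelinewise-target-snowflake
    config:
      # Assumed tuning options from pipelinewise-target-snowflake;
      # verify against the installed version's documentation.
      batch_size_rows: 500000   # rows buffered before each COPY into Snowflake
      parallelism: 4            # parallel file uploads to the stage
```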
Hey @bryan_rose ! I did a quick search on fastsync and noticed you've mentioned it a bunch of times. Would you happen to have any links to videos or discussions around using fastsync?