# troubleshooting
Kuanysh Zhaksylyk:
Hi all. I have run into the following problem with full table syncs from MySQL 5.7 to PostgreSQL 13.
```
2025-01-21T23:33:33.547257Z [info     ] time=2025-01-22 04:33:33 name=tap_mysql level=CRITICAL message='NoneType' object has no attribute 'settimeout' cmd_type=elb consumer=False job_name=dev:tap-mysql-to-target-postgres name=tap-mysql producer=True run_id=4d781c3f-de02-4767-89f1-f42cea0d75ed stdio=stderr string_id=tap-mysql
2025-01-21T23:33:33.547579Z [info     ] Traceback (most recent call last): cmd_type=elb consumer=False job_name=dev:tap-mysql-to-target-postgres name=tap-mysql producer=True run_id=4d781c3f-de02-4767-89f1-f42cea0d75ed stdio=stderr string_id=tap-mysql
2025-01-21T23:33:33.547877Z [info     ]   File "/srv/meltano/my-meltano-project/.meltano/extractors/tap-mysql/venv/lib/python3.10/site-packages/tap_mysql/sync_strategies/full_table.py", line 167, in sync_table cmd_type=elb consumer=False job_name=dev:tap-mysql-to-target-postgres name=tap-mysql producer=True run_id=4d781c3f-de02-4767-89f1-f42cea0d75ed stdio=stderr string_id=tap-mysql
2025-01-21T23:33:33.548226Z [info     ]     common.sync_query(cur,     cmd_type=elb consumer=False job_name=dev:tap-mysql-to-target-postgres name=tap-mysql producer=True run_id=4d781c3f-de02-4767-89f1-f42cea0d75ed stdio=stderr string_id=tap-mysql
```
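A likely reading of that message (my assumption based on pymysql internals, not confirmed in this thread): when the MySQL server drops the connection mid-sync, pymysql force-closes and sets its internal socket to `None`; the next row read then calls `self._sock.settimeout(...)` and raises exactly this `AttributeError`. A minimal sketch that provokes the same message by simulating the dropped connection (credentials are placeholders, and `_sock` is a private pymysql attribute poked here only for the demo):

```python
import pymysql
import pymysql.cursors

# Placeholder credentials; SSCursor streams rows the way the tap's
# full table sync does.
conn = pymysql.connect(host="127.0.0.1", user="root", password="***", database="test")
cur = conn.cursor(pymysql.cursors.SSCursor)
cur.execute("SELECT 1")
conn._sock = None  # simulate pymysql force-closing after the server drops the link
cur.fetchall()     # AttributeError: 'NoneType' object has no attribute 'settimeout'
```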
My tap and target configs:
```yaml
plugins:
  extractors:
  - name: tap-mysql
    variant: transferwise
    pip_url: git+https://github.com/edgarrmondragon/pipelinewise-tap-mysql.git@patch-1
    config:
      database: ***
      engine: mysql
      session_sqls:
      - SET @@session.time_zone='+0:00'
      - SET @@session.wait_timeout=86400
      - SET @@session.net_read_timeout=86400
      - SET @@session.innodb_lock_wait_timeout=3600
    select:
    - schema-table.*
    metadata:
      '*':
        replication-method: LOG_BASED

  loaders:
  - name: target-postgres
    variant: meltanolabs
    pip_url: meltanolabs-target-postgres
    config:
      batch_size_rows: 50000
      hard_delete: true
      load_method: upsert
      use_copy: true
      validate_records: true
      sanitize_null_text_characters: true
```
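A note on the timeouts above (my assumption, not confirmed in the thread): `session_sqls` raises `wait_timeout` and `net_read_timeout`, but when a streaming full table sync stalls because the target is slow to consume rows, it is usually the server's `net_write_timeout` that drops the connection. A small sketch to confirm what values the session actually gets, with placeholder credentials:

```python
import pymysql

# Placeholder credentials; apply the same session statements and print
# the effective values the server reports back.
conn = pymysql.connect(host="127.0.0.1", user="***", password="***")
with conn.cursor() as cur:
    cur.execute("SET @@session.wait_timeout=86400")
    cur.execute("SET @@session.net_read_timeout=86400")
    cur.execute("SET @@session.net_write_timeout=86400")  # assumption: not in the config above
    cur.execute(
        "SELECT @@session.wait_timeout, @@session.net_read_timeout, @@session.net_write_timeout"
    )
    print(cur.fetchone())
```

If the printed values are lower than what was set, the server is capping them, which would be worth ruling out before looking elsewhere.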
The error occurs at the same point each time. I have never encountered such errors before: loading a table with 160 million rows used to succeed. Now the errors are more frequent and there is no way to load the table completely. The server Meltano runs on has sufficient resources and uses only 2 cores. Can any processes or queries in Postgres interrupt the Meltano process?
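One way to answer that against a live cluster (a sketch on my part, with a placeholder DSN) is to watch `pg_stat_activity` while the load runs and see whether the loader's session is ever stuck waiting on a lock held by another process:

```python
import psycopg2

# Placeholder DSN; lists sessions currently blocked on locks.
conn = psycopg2.connect("host=*** dbname=*** user=*** password=***")
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT pid, usename, state, wait_event_type, wait_event, left(query, 80)
        FROM pg_stat_activity
        WHERE wait_event_type = 'Lock'
        """
    )
    for row in cur.fetchall():
        print(row)
```

If rows show up here during the failure window, something on the Postgres side really is stalling the loader; if not, the interruption is more likely on the MySQL or network side.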
Jacob Ukokobili:
@Kuanysh Zhaksylyk have you been able to resolve this? I'm facing a similar issue at the moment. Thanks
Kuanysh Zhaksylyk:
@Jacob Ukokobili Hello! Unfortunately I haven't had time to look into the problem yet, so I haven't tried to solve it. Do you have any news? In my opinion it is somehow related to high load on the target, since I have Airflow, Bento, Debezium, and other tools running in parallel, and the IOPS load is very high.
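If that theory is right, the MySQL server should show evidence of dropping clients under load. A small check (placeholder credentials; a suggestion on my part, not something confirmed in this thread):

```python
import pymysql

# Placeholder credentials; Aborted_clients counts connections the
# server closed abnormally, e.g. after a timeout.
conn = pymysql.connect(host="127.0.0.1", user="***", password="***")
with conn.cursor() as cur:
    cur.execute("SHOW GLOBAL STATUS LIKE 'Aborted_clients'")
    print(cur.fetchone())
    cur.execute("SHOW GLOBAL VARIABLES LIKE 'net_write_timeout'")
    print(cur.fetchone())
```

A rising `Aborted_clients` count during these runs would back up the high-load explanation.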