# singer-targets
p
For the BigQuery target: I have a table of 7.5m rows. At c. 5.8m rows I get the following:

```
2024-09-04T07:18:36.770220Z [info ] 2024-09-04 07:18:36,769 | INFO | target-bigquery | Target 'target-bigquery' completed reading 5804784 lines of input (5804781 records, 0 batch manifests, 2 state messages). cmd_type=elb consumer=True job_name=dev:tap-mssql-to-target-bigquery name=target-bigquery producer=False run_id=cf74ffbb-27ec-4781-b38d-5dcb3ae0dccb stdio=stderr string_id=target-bigquery
2024-09-04T07:18:41.685106Z [info ] 2024-09-04 07:18:41,684 | WARNING | urllib3.connectionpool | Connection pool is full, discarding connection: bigquery.googleapis.com. Connection pool size: 10 cmd_type=elb consumer=True job_name=dev:tap-mssql-to-target-bigquery name=target-bigquery producer=False run_id=cf74ffbb-27ec-4781-b38d-5dcb3ae0dccb stdio=stderr string_id=target-bigquery
```

I am running the target in batch_job mode. Without having to spend hours on a deep dive, does anyone know where the issue may lie? After 5.8m lines, I doubt the issue is the pool size (surely it would fail much sooner?).
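(Editor's note, not part of the thread: a minimal sketch of what that urllib3 warning means, assuming urllib3's standard connection-pool semantics. The hostname and `maxsize` value are illustrative only.)

```python
# Sketch of what "Connection pool is full, discarding connection" means;
# this is NOT code from target-bigquery. urllib3 keeps at most `maxsize`
# idle connections per host for reuse. If more connections than that are
# opened concurrently, the extras are closed rather than returned to the
# pool -- a performance warning (reconnect cost), not a dropped request.
import urllib3

# A per-host pool with a larger cap than the default of 10; a client that
# fans out many concurrent uploads would stop logging the warning once
# maxsize covers its concurrency.
pool = urllib3.HTTPSConnectionPool("bigquery.googleapis.com", maxsize=32)
print(pool.pool.maxsize)  # the underlying queue's capacity: 32
```

This supports the poster's intuition: the warning alone does not abort a load, so a failure at 5.8m rows likely has another cause.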
v
Can you share your meltano.yml and the configs you're using? I'm guessing you're hitting https://github.com/z3z1ma/target-bigquery/blob/81046a4c77d2ea6dd2409a3ac171b949625fe0b4/target_bigquery/target.py#L550. It sounds like you have `fail_false` set to False, so the warnings come through and then your target exits because it thinks it's done? Or is there another exception or more information in your log file? The warnings are expected, and that target is designed to keep trying even when those kinds of things happen, so there should be more information somewhere.
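(Editor's note, not part of the thread: a hedged sketch of where such a setting would live in meltano.yml. The option name `fail_fast` is an assumption based on the setting being discussed, since the message above writes `fail_false`; check the target's README for the exact name. `method: batch_job` mirrors the mode the poster said they were using.)

```yaml
# meltano.yml (fragment) -- illustrative only, option names unverified
plugins:
  loaders:
    - name: target-bigquery
      config:
        method: batch_job   # the load method mentioned above
        fail_fast: true     # assumed option name: surface errors instead of exiting quietly
```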
p
Regrettably the information is fairly scarce. But I think we found a workaround. Testing some heavy lifting now. Thank you for the reply.
v
Can you share any more information for the next person?