# announcements

colossal-cricket-61413

02/16/2021, 11:32 PM
Hey all! I had a quick follow-up question regarding Redshift. I am receiving an error with target-redshift:
target-redshift | CRITICAL cursor already closed
target-redshift | psycopg2.OperationalError: SSL SYSCALL error: EOF detected

ripe-musician-59933

02/17/2021, 12:20 AM
@colossal-cricket-61413 Someone else ran into this error with target-postgres: https://meltano.slack.com/archives/C013EKWA2Q1/p1610123961021700?thread_ts=1609901332.013400&cid=C013EKWA2Q1
Could Redshift be dropping the connection?

colossal-cricket-61413

02/17/2021, 12:23 AM
I guess it could be possible. Would a resolution be something as simple as adding a handler to re-establish the connection if it is dropped? Do you have any suggestions?
target-redshift | CRITICAL cursor already closed
target-redshift | Traceback (most recent call last):
target-redshift | File "/project/.meltano/loaders/target-redshift/venv/lib/python3.6/site-packages/target_postgres/postgres.py", line 300, in write_batch
target-redshift | {'version': target_table_version})
target-redshift | File "/project/.meltano/loaders/target-redshift/venv/lib/python3.6/site-packages/target_postgres/sql_base.py", line 840, in write_batch_helper
target-redshift | metadata)
target-redshift | File "/project/.meltano/loaders/target-redshift/venv/lib/python3.6/site-packages/target_postgres/postgres.py", line 613, in write_table_batch
target-redshift | csv_rows)
target-redshift | File "/project/.meltano/loaders/target-redshift/venv/lib/python3.6/site-packages/target_redshift/redshift.py", line 170, in persist_csv_rows
target-redshift | cur.execute(copy_sql)
target-redshift | File "/project/.meltano/loaders/target-redshift/venv/lib/python3.6/site-packages/target_postgres/postgres.py", line 65, in execute
target-redshift | return super(_MillisLoggingCursor, self).execute(query, vars)
target-redshift | File "/project/.meltano/loaders/target-redshift/venv/lib/python3.6/site-packages/psycopg2/extras.py", line 461, in execute
target-redshift | return super(LoggingCursor, self).execute(query, vars)
target-redshift | psycopg2.OperationalError: SSL SYSCALL error: EOF detected
target-redshift |
target-redshift |
target-redshift | During handling of the above exception, another exception occurred:
target-redshift |
target-redshift | Traceback (most recent call last):
target-redshift | File "/project/.meltano/loaders/target-redshift/venv/lib/python3.6/site-packages/target_redshift/__init__.py", line 49, in main
target-redshift | target_tools.main(redshift_target)
target-redshift | File "/project/.meltano/loaders/target-redshift/venv/lib/python3.6/site-packages/target_postgres/target_tools.py", line 28, in main
target-redshift | stream_to_target(input_stream, target, config=config)
target-redshift | File "/project/.meltano/loaders/target-redshift/venv/lib/python3.6/site-packages/target_postgres/target_tools.py", line 77, in stream_to_target
target-redshift | raise e
target-redshift | File "/project/.meltano/loaders/target-redshift/venv/lib/python3.6/site-packages/target_postgres/target_tools.py", line 67, in stream_to_target
target-redshift | state_tracker.flush_streams()
target-redshift | File "/project/.meltano/loaders/target-redshift/venv/lib/python3.6/site-packages/target_postgres/stream_tracker.py", line 47, in flush_streams
target-redshift | self._write_batch_and_update_watermarks(stream)
target-redshift | File "/project/.meltano/loaders/target-redshift/venv/lib/python3.6/site-packages/target_postgres/stream_tracker.py", line 67, in _write_batch_and_update_watermarks
target-redshift | self.target.write_batch(stream_buffer)
target-redshift | File "/project/.meltano/loaders/target-redshift/venv/lib/python3.6/site-packages/target_redshift/redshift.py", line 72, in write_batch
target-redshift | return PostgresTarget.write_batch(self, nullable_stream_buffer)
target-redshift | File "/project/.meltano/loaders/target-redshift/venv/lib/python3.6/site-packages/target_postgres/postgres.py", line 306, in write_batch
target-redshift | cur.execute('ROLLBACK;')
target-redshift | File "/project/.meltano/loaders/target-redshift/venv/lib/python3.6/site-packages/target_postgres/postgres.py", line 65, in execute
target-redshift | return super(_MillisLoggingCursor, self).execute(query, vars)
target-redshift | File "/vegamour/.meltano/loaders/target-redshift/venv/lib/python3.6/site-packages/psycopg2/extras.py", line 461, in execute
target-redshift | return super(LoggingCursor, self).execute(query, vars)
target-redshift | psycopg2.InterfaceError: cursor already closed
This is the main body of the error, and the last line of the log is: "BrokenPipeError"

ripe-musician-59933

02/17/2021, 12:28 AM
I guess it could be possible. Would a resolution be something as simple as adding a handler to re-establish the connection if it is dropped?
That's what I'm thinking, yeah
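A minimal sketch of the kind of reconnect handler being discussed, assuming a plain psycopg2 connection. This is illustrative only: the connection parameters are placeholders and execute_with_reconnect is a hypothetical helper, not part of target-redshift.
```python
# Illustrative sketch only -- not target-redshift's actual code.
# Shows the "re-establish the connection if it is dropped" idea.
import psycopg2


def connect():
    return psycopg2.connect(
        host="example-cluster.redshift.amazonaws.com",  # placeholder
        dbname="analytics",                             # placeholder
        user="loader",                                  # placeholder
        password="...",                                 # placeholder
        # libpq TCP keepalives can also help keep a long-idle
        # connection from being silently dropped mid-run.
        keepalives=1,
        keepalives_idle=60,
        keepalives_interval=30,
        keepalives_count=5,
    )


def execute_with_reconnect(conn, sql, retries=1):
    """Run sql, reopening the connection once if the server hung up."""
    for attempt in range(retries + 1):
        try:
            with conn.cursor() as cur:
                cur.execute(sql)
            conn.commit()
            return conn
        except (psycopg2.OperationalError, psycopg2.InterfaceError):
            if attempt == retries:
                raise
            conn = connect()  # connection was dropped; start a fresh one
```
One caveat: blindly re-running a batch after reconnecting can double-load rows unless the target de-duplicates, so this is only a sketch of the idea.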

colossal-cricket-61413

02/17/2021, 12:28 AM
This error came up with the AdWords tap loading into a Redshift target. It occurred on both a historical run and a month-to-date run (only 16 days long).

ripe-musician-59933

02/17/2021, 12:29 AM
Let's try something crazy: does the error go away if you increase the ELT buffer size (https://meltano.com/docs/settings.html#elt-buffer-size) to something definitely big enough to hold all extracted data in memory at once?

colossal-cricket-61413

02/17/2021, 12:29 AM
I'll try that first! I will let you know the result.

ripe-musician-59933

02/17/2021, 12:30 AM
You can run meltano elt or meltano schedule run again, this time passing in MELTANO_ELT_BUFFER_SIZE=52428800 on docker run using the -e option:
docker run -v ... -w ... -e MELTANO_ELT_BUFFER_SIZE=52428800 meltano/meltano elt ...
That's 50MB, but you probably have enough RAM to bump it to 500MB or beyond
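In case it helps anyone tuning this later: the value is a plain byte count, so the figures in this thread work out as below (mib is just a throwaway helper).
```python
# MELTANO_ELT_BUFFER_SIZE takes a byte count; quick arithmetic for the
# values mentioned in this thread.
def mib(n):
    return n * 1024 * 1024

print(mib(50))   # 52428800  -> the 50MB value above
print(mib(500))  # 524288000 -> "bump it to 500MB or beyond"
```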

colossal-cricket-61413

02/17/2021, 12:33 AM
Do I need to add any additional statements after "elt"?

ripe-musician-59933

02/17/2021, 12:33 AM
Whatever you're currently using to run your pipeline 🙂

colossal-cricket-61413

02/17/2021, 12:33 AM
I will run first with 52428800
Sounds good!

ripe-musician-59933

02/17/2021, 12:33 AM
That may be meltano/meltano schedule run ...

colossal-cricket-61413

02/17/2021, 12:37 AM
So far so good! I will post an update to this thread upon completion if that fixed the issue! (or if it runs into the error again haha)
👍 1
Unfortunately the error came up again. It happened right after the first "Writing table batch with XXXXXX rows for '('XXXXXXXXX',)'..." message: target-redshift | psycopg2.InterfaceError: connection already closed

ripe-musician-59933

02/17/2021, 12:56 AM
Ah, then the batch size that the target is using may be too large, resulting in the connection already being closed by the time the batch is flushed and written to the database. I suggest seeing if the target has a batch size setting you can configure.
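Roughly the idea, in case it helps: the datamill-co target-postgres/target-redshift family documents batch-related settings along the lines of max_batch_rows (worth confirming in the target's README). The sketch below uses hypothetical stand-in functions, not the target's real API.
```python
# Hypothetical illustration of why a smaller batch helps: flushing in
# smaller chunks means each COPY runs sooner, so the connection spends
# less time idle and is less likely to be dropped before the write.
from itertools import islice


def chunked(rows, size):
    it = iter(rows)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk


def flush_in_chunks(rows, write_table_batch, max_batch_rows=50_000):
    # write_table_batch stands in for the target's real batch writer
    for chunk in chunked(rows, max_batch_rows):
        write_table_batch(chunk)
```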