bryan_wise
01/13/2021, 12:22 AM
pipe closed by peer or os.write(pipe, data) raised exception.
less than 10% of the way through the initial load (185K out of 4.7M). I saw others have run into this type of error when the tap is faster than the loader (I think). Would upping my warehouse size in Snowflake help? Would changing the buffer size help?
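One knob worth trying for the pipe error, as a hedged sketch: Meltano has an elt.buffer_size setting (in bytes) for the buffer between the tap and the target. The MELTANO_ELT_BUFFER_SIZE environment variable mapping and the target-snowflake loader name below are assumptions to check against your own project and Meltano version.

# Assumption: elt.buffer_size maps to the MELTANO_ELT_BUFFER_SIZE environment
# variable and takes a value in bytes; verify against your Meltano version.
export MELTANO_ELT_BUFFER_SIZE=104857600    # ~100 MiB instead of the default
meltano elt tap-nlp-data target-snowflake   # loader name assumed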
bryan_wise
01/13/2021, 5:37 AM
root@106a7c474d38:/project# meltano invoke tap-nlp-data --discover
time=2021-01-12 23:32:59 name=tap_mysql level=CRITICAL message=255
Traceback (most recent call last):
  File "/project/.meltano/extractors/tap-nlp-data/venv/bin/tap-mysql", line 8, in <module>
    sys.exit(main())
  File "/project/.meltano/extractors/tap-nlp-data/venv/lib/python3.6/site-packages/tap_mysql/__init__.py", line 404, in main
    raise exc
  File "/project/.meltano/extractors/tap-nlp-data/venv/lib/python3.6/site-packages/tap_mysql/__init__.py", line 401, in main
    main_impl()
  File "/project/.meltano/extractors/tap-nlp-data/venv/lib/python3.6/site-packages/tap_mysql/__init__.py", line 384, in main_impl
    log_server_params(mysql_conn)
  File "/project/.meltano/extractors/tap-nlp-data/venv/lib/python3.6/site-packages/tap_mysql/__init__.py", line 350, in log_server_params
    with connect_with_backoff(mysql_conn) as open_conn:
  File "/project/.meltano/extractors/tap-nlp-data/venv/lib/python3.6/site-packages/backoff/_sync.py", line 94, in retry
    ret = target(*args, **kwargs)
  File "/project/.meltano/extractors/tap-nlp-data/venv/lib/python3.6/site-packages/tap_mysql/connection.py", line 28, in connect_with_backoff
    connection.connect()
  File "/project/.meltano/extractors/tap-nlp-data/venv/lib/python3.6/site-packages/pymysql/connections.py", line 931, in connect
    self._get_server_information()
  File "/project/.meltano/extractors/tap-nlp-data/venv/lib/python3.6/site-packages/pymysql/connections.py", line 1269, in _get_server_information
    self.server_charset = charset_by_id(lang).name
  File "/project/.meltano/extractors/tap-nlp-data/venv/lib/python3.6/site-packages/pymysql/charset.py", line 38, in by_id
    return self._by_id[id]
KeyError: 255
This is fixed in more recent versions of pymysql. The fix was implemented here: https://github.com/PyMySQL/PyMySQL/commit/4b7e9c98c0441449352d732f6a2453e4c868505c
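A hedged sketch for confirming and working around that pymysql charset bug in the tap's own virtualenv (the venv path is taken from the traceback above; upgrading pymysql in place may conflict with whatever version tap-mysql pins, so treat it as a diagnostic workaround rather than a proper fix):

# Reproduce the failing lookup: collation id 255 (utf8mb4 on MySQL 8) is
# missing from the charset table in older pymysql releases.
/project/.meltano/extractors/tap-nlp-data/venv/bin/python \
  -c "from pymysql.charset import charset_by_id; print(charset_by_id(255).name)"

# Workaround (assumption: the tap works with a newer pymysql): upgrade it
# inside the plugin's virtualenv, then retry discovery.
/project/.meltano/extractors/tap-nlp-data/venv/bin/pip install --upgrade pymysql
meltano invoke tap-nlp-data --discover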
douwe_maan
01/13/2021, 4:25 PM

douwe_maan
01/13/2021, 4:27 PM
> After I got the connection working, I ran into "pipe closed by peer or os.write(pipe, data) raised exception." less than 10% of the way through the initial load (185K out of 4.7M). I saw others have run into this type of error when the tap is faster than the loader (I think). Would upping my warehouse size in Snowflake help? Would changing the buffer size help?
Are you seeing this on the latest version of Meltano? v1.64.0 (https://meltano.slack.com/archives/CP8K1MXAN/p1610064894064300) was supposed to fix this issue (https://gitlab.com/meltano/meltano/-/issues/2478), so if you're still seeing it in a newer version, please file an issue with full log output and stack traces so I can continue debugging.
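For reference, a quick way to check the running version before filing that issue (the upgrade path shown assumes a pip-based install inside the container):

meltano --version
# If it reports something older than v1.64.0 (assumption: pip-based install):
pip install --upgrade meltano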
bryan_wise
01/13/2021, 4:29 PM