What about sending over large CLOB data? Can Meltano/Singer handle this?
I have some cases of large documents stored in CLOB columns in a database, where individual fields can be many MB; I saw 40 MB in one case, but I do not know the upper limit. How does this translate to Meltano and Singer limitations? Will I hit any hard-coded limits?
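To quantify the question on my side, this is the kind of check I could run against the source first (a minimal sketch with python-oracledb; the connection details, table name `docs`, and column name `body` are placeholders):

```python
import oracledb

# Placeholder credentials/DSN; DBMS_LOB.GETLENGTH returns the CLOB length in characters.
conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb1")
with conn.cursor() as cur:
    cur.execute("SELECT MAX(DBMS_LOB.GETLENGTH(body)) FROM docs")
    max_chars, = cur.fetchone()
    print(f"Largest CLOB in docs.body: {max_chars} characters")
```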
As I understand it, all data is sent as JSON. I don't think the JSON specification itself imposes a size limit, but I imagine there may be practical issues with how a particular Python implementation handles large values, including buffering. If you know of best practices or practical limitations, please share them.
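To make that concrete, here is my rough understanding of what a single record with a ~40 MB CLOB would look like on the wire (a sketch only; the stream and field names are made up):

```python
import json
import sys

# Roughly my worst case: a single ~40 MB text value.
big_text = "x" * (40 * 1024 * 1024)

# A Singer RECORD message is just one line of JSON on stdout,
# so the whole value is serialized and held in memory at once;
# the ceiling seems practical (memory, line buffering), not spec-level.
record = {
    "type": "RECORD",
    "stream": "docs",
    "record": {"id": 1, "body": big_text},
}
sys.stdout.write(json.dumps(record) + "\n")
```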
Scenario: I am copying from tap-oracle to target-oracle. I see that target-oracle does not handle CLOBs correctly: it translates them to VARCHAR. I also saw that the client is hard-coded to the thin client. I am thinking of following the advice from @Edgar Ramírez (Arch.dev) and forking the target to use the thick client rather than the thin one, hoping that resolves the issue, but I assume I might hit various other issues down the road. Maybe the translation from CLOB to VARCHAR is not a mistake but a workaround for some other problem? Before going too far and hitting a wall, I want to check what is waiting for me there. A rough sketch of the change I have in mind is below.
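This is the kind of change I am imagining, shown at the raw python-oracledb level rather than in the target's actual code (connection details, table, and column names are placeholders, and I have not verified that thick mode is actually required for CLOBs):

```python
import oracledb

# Switch python-oracledb into thick mode; this requires Oracle Client
# libraries to be installed and is the part of the fork Edgar suggested.
oracledb.init_oracle_client()

conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb1")

# When inserting, declare the bind variable as a CLOB so that values
# larger than the VARCHAR2 limit are accepted.
big_text = "x" * (40 * 1024 * 1024)
with conn.cursor() as cur:
    cur.setinputsizes(oracledb.DB_TYPE_CLOB)
    cur.execute("INSERT INTO docs (body) VALUES (:1)", [big_text])
conn.commit()
```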
Any feedback is more than welcome.