daniel_wiesmann
04/23/2021, 8:27 AM

taylor
04/23/2021, 12:20 PM

jon_brasher
04/24/2021, 9:43 PM

visch
04/25/2021, 3:45 PM
An scp or aws command, etc., that you need to run to finish the job. Loader seems like the right place to take care of that, but it's almost like you'd want a library they all share to do the "sftp, ftp, s3, azure blob, etc." load, and each target-csv, target-json (any file-related target) would use that library to push the file at the end of the job.
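A minimal sketch of what that shared "push the file" library could look like, using pyfilesystem2 (which comes up later in this thread); the helper name push_file and the example destinations are hypothetical:

```
# Hypothetical shared helper a file target could call at the end of a
# job. pyfilesystem2 resolves the destination URL, so the same code
# handles ftp://, s3://, etc., depending on which extras are installed
# (fs-s3fs for S3, fs.sshfs for SFTP, ...).
import os
import fs
from fs.copy import copy_file

def push_file(local_path: str, remote_url: str) -> None:
    """Copy a local file to any pyfilesystem-addressable destination."""
    dirname, filename = os.path.split(os.path.abspath(local_path))
    with fs.open_fs(dirname) as src, fs.open_fs(remote_url) as dst:
        copy_file(src, filename, dst, filename)

# e.g. at the end of a target-csv run:
# push_file("out/orders.csv", "s3://my-bucket")
# push_file("out/orders.csv", "ftp://user:pass@host/exports")
```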
visch
04/25/2021, 3:47 PM
There's also bcp from MSSQL that you might want to call. But then you're getting into the territory of just wanting to run arbitrary commands after the loader is finished. Plugins could do that as well 🤷
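A hedged sketch of that "arbitrary command after the loader" idea; the config key after_load_command and the bcp arguments are made up for illustration:

```
# Hypothetical post-load hook: run an arbitrary shell command once the
# loader finishes (e.g. bcp to bulk-copy the written file into MSSQL).
import shlex
import subprocess

def run_after_load(config: dict) -> None:
    command = config.get("after_load_command")  # invented config key
    if not command:
        return
    # Fail the pipeline if the follow-up command fails.
    subprocess.run(shlex.split(command), check=True)

# e.g. config = {"after_load_command":
#     "bcp mydb.dbo.orders in out/orders.csv -c -t, -S myserver -T"}
```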
visch
06/29/2021, 8:58 PM

visch
06/29/2021, 8:58 PM

visch
06/29/2021, 9:02 PM

edgar_ramirez_mondragon
06/29/2021, 9:22 PM

visch
06/29/2021, 10:43 PM

visch
06/29/2021, 10:45 PM

visch
06/29/2021, 10:51 PM

visch
06/29/2021, 10:54 PM
pullsftpdata | tap-dbase | target-snowflake
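A small sketch of wiring that pipeline up from Python instead of the shell; pullsftpdata is hypothetical, and tap-dbase / target-snowflake are just the names from the message above:

```
# Chain the three processes the way the shell pipeline does:
#   pullsftpdata | tap-dbase | target-snowflake
import subprocess

pull = subprocess.Popen(["pullsftpdata"], stdout=subprocess.PIPE)
tap = subprocess.Popen(["tap-dbase"], stdin=pull.stdout,
                       stdout=subprocess.PIPE)
target = subprocess.Popen(["target-snowflake"], stdin=tap.stdout)

# Close our copies of the pipes so upstream processes get SIGPIPE
# if a downstream one exits early.
pull.stdout.close()
tap.stdout.close()
target.wait()
```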
visch
06/29/2021, 10:56 PM

visch
06/29/2021, 10:58 PM

edgar_ramirez_mondragon
06/29/2021, 11:51 PM
> For a tap, it's almost like any file-format tap would want to use some module / abstraction layer that sits on top of pyfilesystem
That's what I was going for with that approach: making the filesystem pluggable via some pip extras, with credentials baked into the file path like here for S3.

> Second way is: keep the tap/target stupid and put something in front of the tap/target that will give the tap/target a file on the local file system (or in memory)
An orchestrator like Airflow helps a lot with that approach, but you end up with bespoke dependencies for your file taps.

> pullsftpdata | tap-dbase | target-snowflake
Composable ftw! (and a relevant Meltano issue). I've actually been playing around with a "merge" transform for multi-tenant sources here: https://github.com/edgarrmondragon/singer-playground/tree/main/merge_streams
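A sketch of that first approach (a pluggable filesystem with credentials baked into the path), assuming pyfilesystem2 with the fs-s3fs extra installed; the bucket name and keys are placeholders:

```
# The tap only ever sees a URL; pyfilesystem2 resolves it to S3, FTP,
# or local disk depending on which extras are installed.
import fs

# Credentials baked into the file path, as described above for S3:
source = fs.open_fs("s3://ACCESS_KEY:SECRET_KEY@my-bucket")

# The same code works unchanged for "ftp://user:pass@host/dir" or a
# plain local directory like "/tmp/input".
for path in source.listdir("/"):
    with source.open(path) as handle:
        first_line = handle.readline()
```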
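And a rough guess at what a "merge" transform for multi-tenant sources could do: rewrite Singer stream names so records from multiple tenants land in one stream. This is not the linked repo's actual code, just an illustration:

```
# Read Singer messages on stdin and strip a tenant prefix from the
# stream name, so e.g. "tenant_a-users" and "tenant_b-users" both
# merge into a single "users" stream at the target.
import json
import sys

def merge(lines, tenant: str):
    for line in lines:
        message = json.loads(line)
        if message.get("type") in {"SCHEMA", "RECORD"}:
            message["stream"] = message["stream"].removeprefix(f"{tenant}-")
        yield json.dumps(message)

for out in merge(sys.stdin, tenant="tenant_a"):
    print(out)
```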
visch
06/30/2021, 1:11 AM

visch
06/30/2021, 1:11 AM

visch
06/30/2021, 1:12 AM

visch
06/30/2021, 3:48 AM

visch
06/30/2021, 5:39 AM