Hello community. I'm writing an ops script that has to pull data from RDS (MySQL), but only from certain databases (schemas). For every selected database, as the GoodData instructions and our own needs dictate, I need to:

- create a folder in an existing S3 bucket named after the database (schema);
- inside it, create another folder for the date (extracted from the .csv file naming in the original database);
- dump the files into that date folder, using the naming convention customer_yyyyMMDDhhmmss_full with .gz.parquet formatting;
- skip certain tables in each schema so they are not included in the Parquet files.

My understanding is that I can achieve all of this with the tap-mysql (singer-io repo) and target-s3-csv (pipelinewise repo) modules by adding both and editing the target's upload_file method to do my magic.
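My current idea for the overridden upload_file is roughly the following (a minimal sketch: the SKIP_TABLES set, the filename regex, and the plain boto3 call are my own placeholders for illustration, not pipelinewise's actual internals):

```python
import os
import re

import boto3

# Tables to leave out of the upload entirely (hypothetical names).
SKIP_TABLES = {"migrations", "audit_log"}

# Expected file naming: <table>_yyyyMMddHHmmss_full.gz.parquet
FILENAME_RE = re.compile(r"^(?P<table>\w+)_(?P<ts>\d{14})_full\.gz\.parquet$")

def upload_file(local_path: str, bucket: str, schema: str) -> None:
    """Upload one extracted file to s3://<bucket>/<schema>/<yyyy-MM-dd>/<file>."""
    filename = os.path.basename(local_path)
    match = FILENAME_RE.match(filename)
    if not match:
        raise ValueError(f"unexpected file name: {filename}")
    if match["table"] in SKIP_TABLES:
        return  # skipped table: never lands in the bucket
    ts = match["ts"]  # yyyyMMddHHmmss taken from the file name
    date_folder = f"{ts[:4]}-{ts[4:6]}-{ts[6:8]}"  # the per-date "folder"
    key = f"{schema}/{date_folder}/{filename}"
    boto3.client("s3").upload_file(local_path, bucket, key)
```

Binding it all together would be the fancy meltano.yml config, which looks somewhat like this: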