# getting-started
Also, I'm trying to use LOG_BASED replication. However, when it runs, the new parquet file is pushed to S3, but the previous ones are never deleted or updated, leading to lots of duplication when querying an external table pointing at the S3 prefix. What can I do? I need to be able to run Meltano, pull any missing data from the source table, and delete/update any old parquet files... essentially keeping the destination in sync with the source using CDC.
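For reference, the dedup step being asked about is a compaction: across all batches, keep only the newest row per primary key. A minimal sketch of that logic in plain Python, assuming rows carry an `id` key and a `_sdc_extracted_at` metadata column (both are assumptions about the schema; real parquet files on S3 would additionally need pyarrow/boto3 to read and rewrite):

```python
def compact(rows, key="id", version_col="_sdc_extracted_at"):
    """Collapse append-only CDC batches into one snapshot:
    for each primary key, keep only the most recently extracted row."""
    latest = {}
    # Process rows oldest-first so later rows overwrite earlier ones.
    for row in sorted(rows, key=lambda r: r[version_col]):
        latest[row[key]] = row
    return list(latest.values())


# Two "batches" standing in for successive parquet files in the S3 prefix.
batch_1 = [
    {"id": 1, "name": "a", "_sdc_extracted_at": 1},
    {"id": 2, "name": "b", "_sdc_extracted_at": 1},
]
batch_2 = [
    {"id": 2, "name": "b-updated", "_sdc_extracted_at": 2},  # update to id 2
    {"id": 3, "name": "c", "_sdc_extracted_at": 2},          # new row
]

snapshot = compact(batch_1 + batch_2)
```

After compaction, `snapshot` holds three rows, with id 2 reflecting the update rather than appearing twice; rewriting the S3 prefix with this snapshot (and dropping the old files) is what removes the duplicates from the external table.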