I'm running Meltano inside Dagster. Loading a CSV file with 99k rows into Iceberg is taking a long time. I'm using `tap-csv` and a custom loader, `target-iceberg`. Is there any way to enable parallel processing to speed up the integration? I read in a thread that loaders handling large volumes of data usually slow down the pipeline, and it was suggested to change the batch size. I don't know how to change the batch size in my custom target. Currently, `max_size` is 10000. Will it help if I increase it?
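
For context, this is roughly what I had in mind. It's a minimal sketch, assuming the custom target is built on the Meltano Singer SDK, where a sink's `max_size` defaults to 10000 and controls how many records are buffered before a batch is flushed. The class name `IcebergSink` and the config key `batch_size_rows` are placeholders, not necessarily what my target actually uses:

```python
# Sketch: make the batch size configurable instead of hardcoding 10000.
# Assumes a Singer SDK-based target; names here are illustrative.
from singer_sdk.sinks import BatchSink


class IcebergSink(BatchSink):
    """Sink that flushes buffered records to Iceberg in batches."""

    @property
    def max_size(self) -> int:
        # Read the batch size from the target's config so it can be tuned
        # from meltano.yml without code changes; fall back to a larger
        # default than the SDK's 10000.
        return self.config.get("batch_size_rows", 50000)

    def process_batch(self, context: dict) -> None:
        # Write the buffered records (context["records"]) to Iceberg here.
        ...
```

If that's the right approach, I assume I could then tune it per environment by setting `batch_size_rows` in the loader's `config` block in `meltano.yml` rather than editing code. I've also seen that recent Singer SDK versions expose a built-in `batch_size_rows` setting, so maybe overriding `max_size` isn't even necessary on a new enough SDK version?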