Hi everyone, I'm not sure where I should post this.
I'm starting to implement a data engineering project for my company. I'm considering Meltano for my extract/load pipelines, but I'm very new to it.
After completing the quickstart tutorials, I started with a very simple case: extract data from a MySQL database on a remote server and load it into Postgres (localhost).
For this, I'm using tap-mysql and target-postgres. However, the job seems (very) slow: it takes more than 6 minutes for 100 MB (roughly 500k rows, 40 columns). By comparison, the same query executes in 0.004 seconds and fetches in about 3 seconds in MySQL Workbench. I also tried loading into JSON only (target-jsonl), and even that takes more than 3 minutes.
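For reference, my meltano.yml looks roughly like this (hosts, credentials, and database names below are placeholders, and I'm otherwise on default settings):

```yaml
plugins:
  extractors:
    - name: tap-mysql
      config:
        host: remote-mysql.example.com  # placeholder for the remote server
        port: 3306
        user: meltano_user
        database: app_db
  loaders:
    - name: target-postgres
      config:
        host: localhost
        port: 5432
        user: meltano
        database: warehouse
        default_target_schema: public
```

and I run the job with `meltano run tap-mysql target-postgres`.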
Moreover, when I launch the job (with either loader), the application that sits on top of the MySQL database crashes and stays down for the duration of the job. This doesn't happen when I run the same query on the full table in MySQL Workbench.
Is there anything I'm missing? For example, batch-size parameters or anything else that could explain this behaviour?
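The only batching-related knob I've come across so far is something like the snippet below on the loader side, but I haven't confirmed it's the right setting or that the name applies to my variant, so treat it as a guess:

```yaml
  loaders:
    - name: target-postgres
      config:
        batch_size_rows: 10000  # guessed name/value; may differ by plugin variant
```

Would tuning something like this (or a setting on the tap side) be the expected fix, or is the slowdown inherent to how Singer taps stream records?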