Hello, first of all, thank you for Meltano; it is really helping me migrate a more than 10-year-old script mess into a proper data integration pipeline. I have already built two container images, and they work well.
I have some architectural questions regarding orchestration, scheduling, and observability. What is the best practice when I have multiple pipelines? Do I have to use an externally deployed Airflow to manage the schedules, logging, and all the operational work, or is there a simpler approach? I need to monitor and log my pipelines: if something blocks a pipeline or any error occurs, I have to fix it, but first the error has to reach our monitoring systems (currently Zabbix plus Graylog for logs, though we can also consume Prometheus metrics, for example from Airflow).
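To make the "simpler approach" concrete, here is a minimal sketch of what I have in mind: a small container entrypoint that runs the pipeline, logs to stdout (so a Docker log driver can ship it to Graylog), and exits non-zero on failure so whatever triggers the container can alert on it. The plugin names and the wrapper itself are placeholders, not anything Meltano ships:

```python
import logging
import subprocess
import sys

# Plain stdout logging, so the container's log driver can forward it (e.g. GELF to Graylog).
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("pipeline-wrapper")

# Placeholder extractor/loader names -- each pipeline image would use its own.
PIPELINE = ["meltano", "run", "tap-example", "target-example"]


def main() -> int:
    log.info("starting pipeline: %s", " ".join(PIPELINE))
    result = subprocess.run(PIPELINE)
    if result.returncode != 0:
        # A non-zero exit code makes the failure visible to whatever scheduled the run
        # (cron, a systemd timer, a Zabbix check, ...) without a full orchestrator.
        log.error("pipeline failed with exit code %d", result.returncode)
        return result.returncode
    log.info("pipeline finished successfully")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Is something in this direction reasonable, or would I end up rebuilding half of what Airflow already gives me?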
Anyway, I think the local Airflow integration/project is not for me, because in the long run I will have to run 4-5 pipelines (4-5 Docker images) and monitor all of them.