# best-practices
z
Hello, first of all, thank you for Meltano, it really helps me migrate a more than 10-year-old script mess to a data integration pipeline. I have completed the build of two container images, and they work well. I have some architectural questions regarding orchestration, scheduling, and observability. What is the best practice when I have more pipelines? Do I have to use one externally deployed Airflow to manage the schedules, logging, and all of the operational stuff, or is there a simpler approach? I have to monitor and log my pipelines, because if something blocks a pipeline or any error occurs, I have to fix it, but first I have to get the error into our monitoring systems (currently Zabbix + Graylog for logs, but we can also consume Prometheus metrics from Airflow, for example). Anyway, I think the local Airflow integration/project is not for me, because in the long run I have to run 4-5 pipelines (4-5 Docker images) and I have to monitor all of them.
Just replying to myself: the concept was wrong 🙂 I can define many pipelines in a single meltano.yaml and then handle everything in one repository with Airflow and all of the good stuff integrated (see the sketch below). Never mind 🙂
👍 1
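For reference, a minimal sketch of what several scheduled pipelines can look like in one meltano.yaml. The plugin names (tap-postgres, tap-csv, target-snowflake), schedule names, and intervals are hypothetical placeholders, not from this thread:

```yaml
# meltano.yaml -- one project, several scheduled pipelines.
# Plugin and schedule names below are illustrative placeholders.
version: 1
plugins:
  extractors:
  - name: tap-postgres
  - name: tap-csv
  loaders:
  - name: target-snowflake
schedules:
# Each entry defines its own pipeline with its own interval.
- name: orders-daily
  extractor: tap-postgres
  loader: target-snowflake
  transform: skip
  interval: '@daily'
- name: legacy-files-hourly
  extractor: tap-csv
  loader: target-snowflake
  transform: skip
  interval: '@hourly'
```

With schedules defined this way, Meltano's Airflow integration can generate a DAG per schedule, so a single Airflow deployment (and its logs and metrics) covers every pipeline in the project.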
e
I'm glad to read Meltano is helping you migrate away from legacy integrations! Let me know if I can help clarify anything about containerization, monitoring, etc.
🙏 1