Here's what I'm doing currently. It's not with MWAA, but it may help. The Airflow scheduler and web server were installed with meltano install. The actual meltano.yml file is rendered by HashiCorp Vault, so I can update the Vault secret and have a new file rendered, and the DAG generator automatically picks up new schedules from the file. There are no loaders or extractors in this meltano.yml, only the orchestrator and the schedules. A liveness probe can check dates on files to determine whether the web server or scheduler needs a kick.

For the worker pod template, I created an init container with an env var set from the dag_id label, which is populated with the DAG ID (the schedule name prefixed with meltano_) by Airflow (actually by the DAG generator). The init container simply saves the dag_id to a file. I do this because Vault Agent can't use env vars, but it can use the contents of a file. So I also run Vault Agent on the worker pod, and it pulls secrets from a path specific to the DAG ID (i.e. the job).

I also changed the command the worker image runs to a shell script that pulls from a git repo named after the job. That repo's meltano.yml has the extractor, loader, orchestrator, and schedule for that one job. The script runs meltano install, creates links to the shared EFS filesystem for logs and output, then runs meltano invoke $* (rough sketch below).
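Something like this, where the repo URL, EFS mount path, dag_id file location, and link targets are all placeholders rather than the exact values I use:

```sh
#!/bin/sh
set -eu

# The init container wrote the DAG ID (taken from the pod's dag_id label
# via the downward API) to a file so Vault Agent can pull per-job secrets.
# The file path here is a placeholder.
DAG_ID="$(cat /vault/config/dag_id)"

# The job/schedule name is the DAG ID with the meltano_ prefix stripped.
JOB_NAME="${DAG_ID#meltano_}"

# Each job has its own repo whose meltano.yml contains only the extractor,
# loader, orchestrator, and schedule for that one job. The URL is made up.
git clone "https://git.example.com/meltano-jobs/${JOB_NAME}.git" /project
cd /project

meltano install

# Point logs and output at the shared EFS mount (placeholder paths).
mkdir -p .meltano "/mnt/efs/${JOB_NAME}/logs" "/mnt/efs/${JOB_NAME}/output"
rm -rf .meltano/logs output
ln -s "/mnt/efs/${JOB_NAME}/logs" .meltano/logs
ln -s "/mnt/efs/${JOB_NAME}/output" output

# Hand the arguments Airflow passed to the worker over to Meltano
# ("$@" here just preserves quoting; the prose above uses $*).
exec meltano invoke "$@"
```

The liveness probe idea is just a file-age check along these lines (the log directory and threshold are assumptions, not necessarily what I run):

```sh
#!/bin/sh
# Fail the probe if nothing under the scheduler's log directory has been
# written in the last 5 minutes, so Kubernetes restarts the pod.
LOG_DIR="${AIRFLOW_HOME:-/opt/airflow}/logs/scheduler"
test -n "$(find "$LOG_DIR" -type f -mmin -5 2>/dev/null | head -n 1)"
```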
Like I said, all I need to do is produce actual DAGs that Airflow (MWAA) will call to run the job on a pod (there are AWS docs for doing this), instead of using the Meltano DAG generator, and everything should work. You might be able to use some of this, so I hope it helps.