We do it the way I think @Sven Balnojan suggests:
• Docker images are hosted in our own GCP Artifact/Container Registry
• Developers run a script (run-meltano-docker.sh) to start the container:
◦ The latest version of the Meltano Docker image is pulled from the registry
◦ GCP credentials are mounted into the container
◦ The git repo with the Meltano files is mounted at /usr/app/, so the dev can keep working in VSCode on their machine and run tests through Docker
▪︎ (a static copy of the Meltano files is stored for reference in /usr/local/meltano-image-static)
◦ A few environment variables are set
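A wrapper along these lines could look like the sketch below. The registry path, mount points, and variable names are assumptions for illustration, not our actual values; it assembles the `docker run` command the bullets describe:

```shell
#!/usr/bin/env bash
# Sketch of a run-meltano-docker.sh wrapper. Image path, mounts and
# variable names here are hypothetical, not the team's actual values.
set -euo pipefail

# Pinned image in our own Artifact Registry (hypothetical path).
IMAGE="${MELTANO_IMAGE:-europe-west1-docker.pkg.dev/my-project/images/meltano:v2.17-python3.9}"

# Assemble the docker run invocation:
#   - mount the dev's local gcloud credentials read-only
#   - mount the git repo (current directory) at /usr/app so edits made in
#     VSCode on the host are immediately visible inside the container
#   - pass through an environment variable or two
CMD=(docker run --rm -it
  -v "$HOME/.config/gcloud:/root/.config/gcloud:ro"
  -v "$PWD:/usr/app"
  -w /usr/app
  -e "MELTANO_ENVIRONMENT=${MELTANO_ENVIRONMENT:-dev}"
  "$IMAGE")

# The real script would first `docker pull "$IMAGE"` and then run
# "${CMD[@]}"; the sketch only prints the assembled command so it can
# be inspected without Docker installed.
printf '%s\n' "${CMD[*]}"
```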
• And in Dockerfile-meltano:
◦ We always pin explicit versions, so that we don't accidentally pull in a new version that might break things
▪︎ FROM meltano/meltano:v2.17-python3.9
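A minimal Dockerfile-meltano following this idea might look like the sketch below. Only the pinned base image comes from the message; the COPY step and paths are assumptions:

```dockerfile
# Pin an exact upstream version so a new release can never slip in unnoticed.
FROM meltano/meltano:v2.17-python3.9

# Hypothetical: bake a static reference copy of the Meltano files into the image.
COPY . /usr/local/meltano-image-static

# The live project is bind-mounted here at run time by the wrapper script.
WORKDIR /usr/app
```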
This way, we ensure that everyone uses the exact same version and execution environment, and upgrades are centralized: a new image is pushed to our artifact registry and all devs pick it up the next time they run the script.
Works like a charm 😛
This very same architecture is used for dbt and Terraform/Terragrunt.