Not sure I can dictate best practices, but I can tell you at a high level what we do 😅
We use Airflow (Composer) to orchestrate our pipelines. In short, a data-source-specific DAG would (rough sketch after the list):
• Run the Meltano ELT job
• Run the resulting downstream dbt models
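To give a rough idea of the shape, here's a minimal sketch of one of those DAGs. It assumes the Meltano and dbt CLIs are available on the Composer workers and just shells out to them; the tap/target/selector names and paths are placeholders, not our real config.

```python
# Minimal sketch of a data-source-specific DAG, assuming the Meltano and dbt
# CLIs are installed on the Composer workers. Tap/target/selector names and
# paths below are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="salesforce_elt",          # hypothetical data source
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Step 1: land the raw data with Meltano
    meltano_elt = BashOperator(
        task_id="meltano_elt",
        bash_command="cd /path/to/meltano_project && meltano run tap-salesforce target-bigquery",
    )

    # Step 2: build the downstream dbt models for this source
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /path/to/dbt_project && dbt run --select source:salesforce+",
    )

    meltano_elt >> dbt_run
```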
For auditing, we export all query logs to BQ. Users only have read access, so any DDL has to go through git/CI/CD.
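Once the logs land in a dataset, auditing is just regular queries against them. A hedged example below, assuming a Cloud Logging sink writes the data_access audit logs into a dataset called `audit_logs` (project, dataset, and exact table name depend on how your sink is configured):

```python
# Hedged example of querying the exported audit logs, assuming a Cloud Logging
# sink writes data_access logs into an `audit_logs` dataset. Project/dataset/
# table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")

query = """
    SELECT
      protopayload_auditlog.authenticationInfo.principalEmail AS principal_email,
      COUNT(*) AS query_count
    FROM `my-analytics-project.audit_logs.cloudaudit_googleapis_com_data_access`
    WHERE DATE(timestamp) = CURRENT_DATE()
    GROUP BY principal_email
    ORDER BY query_count DESC
"""

# Who ran the most queries today
for row in client.query(query).result():
    print(row.principal_email, row.query_count)
```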
Pretty much all GCP infra is managed via Terraform (including table schemas). We also use Data Catalog's policy tags to limit access to certain columns.