I can see how this benefits your use-case, but it still feels like an antipattern that leaves you open to a pipeline failure.
What I'm struggling to get on board with is that you have two separate sources feeding into the same schema. Granted, my opinion is colored by my own experience and shouldn't necessarily be generalized as a point against your approach.
It might happen that the schema for one source gets updated while the other does not, or, as happened to me, the schema for one data source gets rolled back.
In my case (an internal legacy ETL system not built on Meltano), there are four supposedly identical databases that first get unioned before further transformation. This union is done mostly via dbt macros, but importantly, the fields are statically set.
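To illustrate what I mean by "statically set" (this is a made-up sketch, not our actual macro; the relation names and columns are hypothetical), the union looks roughly like this:

```sql
{# Hypothetical dbt macro: columns are hard-coded rather than
   introspected from each source, so if one of the four databases
   drifts or gets rolled back, the model fails loudly instead of
   silently producing a mismatched union. #}
{% macro union_identical_dbs(relations) %}
    {% for rel in relations %}
        select
            id,           -- statically listed columns
            customer_id,
            amount,
            updated_at
        from {{ rel }}
        {% if not loop.last %}union all{% endif %}
    {% endfor %}
{% endmacro %}
```

The static column list is the safety net: schema drift in any one source surfaces as a compile/run error at the union step rather than propagating downstream.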