# best-practices
a
Another question on best practice from me, sorry! It's linked to my last question, though. I have around 90 tables that I'm replicating from MSSQL to Snowflake, and currently they all run without errors. However, I've had to add some mappings to deal with illegal bytes in some NVARCHAR fields (due to the upstream environment). I don't know whether these illegal bytes may appear in other fields in the future, so I want to make the process a bit more robust. My plan was to keep all 90 tables in the same job, but I'm wondering what best practice is here. I'm happy to see errors and then fix them, but I want as many tables as possible to be replicated each time the job runs (an error currently stops the job dead, so no further tables are replicated).
A) Is there a way to allow a table replication to fail, log that failure, and then move on to the next one?
B) Should I separate the tables out into separate jobs, maybe one job for those that I know won't generate an error in the future, and then a job each for those that I think are at risk?
Many thanks, Andy.
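For context, the kind of cleanup such a mapping performs can be sketched as a small Python helper. This is only an illustration of the general approach (the function name and behaviour are assumptions, not the exact mapping in use): characters that can't round-trip through UTF-8, such as unpaired UTF-16 surrogates coming out of NVARCHAR columns, get replaced so the row stays loadable.

```python
def scrub_illegal_chars(value):
    """Replace characters that cannot round-trip through UTF-8.

    Hypothetical helper: NVARCHAR columns can carry unpaired UTF-16
    surrogates that the warehouse rejects; replacing them keeps the
    row loadable instead of failing the whole stream.
    """
    if value is None:
        return None
    # Characters that cannot be encoded (e.g. lone surrogates) become '?'.
    return value.encode("utf-8", errors="replace").decode("utf-8")


# Example: a lone high surrogate is replaced instead of raising an error.
print(scrub_illegal_chars("ok\ud800text"))  # -> "ok?text"
```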
e
Option B is what currently works. I would like to make both taps and targets continue after a stream fails, but I'm not yet sure what the best approach for that is.
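A minimal sketch of option B, combined with the "log and continue" behaviour asked about in option A, assuming a Meltano-style CLI where each group of tables is wrapped in its own named job. The job names here are hypothetical; the point is that each group runs in its own process, so one failure is logged and the remaining groups still replicate.

```python
import logging
import subprocess

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("replication")

# Hypothetical job names: one job for the stable tables, plus a separate
# job for each table known to be at risk of illegal-byte errors.
JOBS = [
    "replicate-stable-tables",
    "replicate-risky-table-a",
    "replicate-risky-table-b",
]

failures = []
for job in JOBS:
    # Each job is a separate process, so a failing table group
    # cannot stop the groups that follow it.
    result = subprocess.run(["meltano", "run", job])
    if result.returncode != 0:
        log.error("Job %s failed with exit code %s", job, result.returncode)
        failures.append(job)
    else:
        log.info("Job %s completed", job)

if failures:
    raise SystemExit(f"{len(failures)} job(s) failed: {', '.join(failures)}")
```

Run from a scheduler (cron, Airflow, etc.), this gives a single non-zero exit when anything failed, while still letting every other group finish.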
a
Thanks @Edgar Ramírez (Arch.dev) - I'll take that approach