# getting-started
m
Hi again, I'm trying to understand the concept here when moving data from postgres to postgres with LOG_BASED replication. Let's say that after a few scheduled runs succeeded, data on the target is destroyed, but we know it's still available on the source. What do we do? The next log_based run will not bring back the older records that were destroyed, because the tap doesn't know about the target's state, right?
t
Correct. If you want the data brought back to the target you'll need to do something to make the tap aware it needs to re-send the data. In practice I think the only option is to re-initialize the table, i.e. drop it in the target and remove Meltano's state information for that table so the tap treats it as new and re-sends all the data.
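For context, here is a sketch of what that stored state typically looks like for a log-based postgres pipeline. This is an assumption about the shape, not a guarantee: the exact keys depend on the tap variant (this follows the pipelinewise-style tap-postgres convention), and the stream name `public-orders` and the values are hypothetical.

```json
{
  "singer_state": {
    "bookmarks": {
      "public-orders": {
        "last_replication_method": "LOG_BASED",
        "lsn": 108101980,
        "version": 1660000000000
      }
    }
  }
}
```

The `lsn` bookmark is why the tap only ever reads forward in the WAL: as long as that entry exists, it never revisits older records. Removing the table's entry (together with dropping the table in the target) is what makes the next run start over for that stream.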
m
Thanks, how do I remove its state information?
t
Unfortunately, the only way I know of is to run `meltano state get <state-id>` to dump the state data to a file, manually edit that file to remove the state for the table(s) in question, then run `meltano state set <state-id>` to load the edited state back. Then drop the table(s) in question in the target DB manually. After that, the next Meltano run will treat the table as new (because it is) and do a full_table load instead of log_based.
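Putting the steps together, here is a minimal sketch of the full re-initialization. It assumes an environment named `dev`, a `tap-postgres` → `target-postgres` pipeline, a destroyed table `public.orders`, and a `$TARGET_DSN` placeholder for the target connection string; the state ID format and the `--force` / `--input-file` options may differ across Meltano versions, so check `meltano state --help` for your install.

```sh
# Find the state ID for this pipeline
# (usually <environment>:<tap>-to-<target>)
meltano state list

# Dump the current state to a file
meltano state get dev:tap-postgres-to-target-postgres > state.json

# Manually edit state.json: delete the bookmark for the destroyed
# table, e.g. the "public-orders" entry under "bookmarks"

# Load the edited state back (--force skips the confirmation prompt)
meltano state set --force dev:tap-postgres-to-target-postgres --input-file state.json

# Drop the destroyed table in the target DB so it can be recreated
psql "$TARGET_DSN" -c 'DROP TABLE IF EXISTS public.orders;'

# The next run does a one-off full_table load for public.orders,
# then resumes log_based replication from the new bookmark
meltano run tap-postgres target-postgres
```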