# singer-tap-development
d
Hey everyone, I have an incremental sorted stream that makes requests like `orders?start_modified_at=2021-06-01T12:00:00&page=1`.
I set the start date parameter in `get_url_params()` using `self.get_starting_timestamp(context)`. I expected `self.get_starting_timestamp(context)` to be constant and return the initial starting state, but since the stream is sorted, it emits incremental state and `self.get_starting_timestamp(context)` keeps progressing. The pagination is then thrown off, since I am changing `start_modified_at` at the same time as the page number.
• Is this expected behavior for `self.get_starting_timestamp(context)` when the stream is sorted?
• Is there another way to get the initial starting state?
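To make the interaction concrete, here is a minimal self-contained sketch. This is not the real Singer SDK, just a toy model of the behavior described above: on a sorted stream the replication-key state advances as each record is emitted, so a `get_url_params()` that reads the live state produces a different `start_modified_at` for every page.

```python
class ToyIncrementalStream:
    """Toy model of a sorted incremental stream (not the Singer SDK)."""

    def __init__(self, starting_timestamp):
        self.state = {"replication_key_value": starting_timestamp}

    def get_starting_timestamp(self, context=None):
        # Returns the *current* state value, which moves forward
        # mid-run on a sorted stream.
        return self.state["replication_key_value"]

    def process_record(self, record):
        # Sorted stream: each record bumps the bookmark immediately.
        self.state["replication_key_value"] = record["modified_at"]

    def get_url_params(self, context, page):
        # Problem: reading the live state makes start_modified_at
        # drift while the page number is also advancing.
        return {
            "start_modified_at": self.get_starting_timestamp(context),
            "page": page,
        }


stream = ToyIncrementalStream("2021-06-01T12:00:00")
params_page_1 = stream.get_url_params(None, 1)
stream.process_record({"modified_at": "2021-06-02T09:30:00"})
params_page_2 = stream.get_url_params(None, 2)
# start_modified_at has moved between page 1 and page 2,
# which throws the pagination off.
```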
v
interesting! I may be a bit naïve here so forgive me, but I would think in your case the state shouldn't update until all pages have been read? I'd guess there's some way to tell the SDK not to update your state
d
Yea, on one hand, I want the state to update so if the job is interrupted, it can resume from that updated `start_modified_at` on the next run. But I don't want `self.get_starting_timestamp(context)` / `start_modified_at` to update mid-run on each call.
v
Got it, so every record that you're reading in here has a `modified_at`
Thought about this, not exactly sure. The question I'm pondering is what you want to have happen when the target fails. I don't understand enough about this stuff quite yet, still getting there
d
If it fails, I want it to follow this behavior (which it is correctly doing).
I might be reading the documentation incorrectly here. I was expecting the next `meltano elt` run to pick up from where the incremental state was interrupted. This feature might be meant to recover and resume within the same run.
a
> Is this expected behavior for `self.get_starting_timestamp(context)` when the stream is sorted?
No, this sounds like a bug to me. I think get_starting_timestamp() should be cached. I'm opening an issue...
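Until caching lands in the SDK, one workaround is to snapshot the starting timestamp once, on first use, and reuse that value for every page. The sketch below is a hypothetical stand-in, not SDK API: `live_state_reader` plays the role of `get_starting_timestamp()`, whose return value may advance as records are emitted, and in a real tap the cached property would live on your stream subclass.

```python
from functools import cached_property


class CachedStartStream:
    """Sketch of the caching workaround (hypothetical, not SDK API)."""

    def __init__(self, live_state_reader):
        # `live_state_reader` stands in for get_starting_timestamp(),
        # whose return value advances mid-run on a sorted stream.
        self._read_live_state = live_state_reader

    @cached_property
    def initial_start(self):
        # First access snapshots the value; later accesses reuse it,
        # so advancing state can't shift start_modified_at mid-run.
        return self._read_live_state()

    def get_url_params(self, context, page):
        return {"start_modified_at": self.initial_start, "page": page}


state = {"bookmark": "2021-06-01T12:00:00"}
stream = CachedStartStream(lambda: state["bookmark"])
p1 = stream.get_url_params(None, 1)
state["bookmark"] = "2021-06-02T09:30:00"  # state advances mid-run
p2 = stream.get_url_params(None, 2)
# Both pages use the original start_modified_at, while the state
# itself still advances for resume-on-next-run.
```

The state dict still moves forward, so an interrupted job can resume from the updated bookmark on its next run, while all pages within a single run share one stable `start_modified_at`.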