# docker
@chris_kings-lynne - Anecdotally, I started with 2GB and bumped mine to 4GB. Not all of the containers and pipelines needed 4GB, but we had one larger one which did need more than 2GB.
Side note - in theory, the piping process should be low-memory, since records are read, written, and then dropped from memory. I haven't been able to prove or mitigate it yet, but I believe a large part of the memory pressure comes from backpressure, when streams can read much faster than they are able to write. I think I saw an issue a while back about this, but I don't know what came of it.
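To illustrate the backpressure idea (this is a generic sketch using a bounded queue, not Meltano's actual plumbing): if the buffer between reader and writer is capped, a fast reader simply blocks once the buffer fills, so memory stays bounded. Without a cap, records pile up in memory as fast as the reader can produce them.

```python
import queue
import threading

# Bounded buffer: at most 100 records ever held in memory.
# A fast producer blocks on put() until the slow consumer drains it.
buf = queue.Queue(maxsize=100)

def reader():
    for i in range(1000):    # fast producer (think: tap output)
        buf.put(i)           # blocks when the buffer is full
    buf.put(None)            # sentinel marking end of stream

def writer(out):
    # slow consumer (think: target writing to a warehouse)
    while (rec := buf.get()) is not None:
        out.append(rec)

results = []
t = threading.Thread(target=reader)
t.start()
writer(results)
t.join()
print(len(results))  # all 1000 records delivered, never more than 100 buffered
```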
Theoretically https://meltano.com/docs/settings.html#elt-buffer-size should limit the amount of backpressure, but I haven't dug into it much.
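If I'm reading that settings page right, the knob is `elt.buffer_size` (bytes, default 10 MiB), and in `meltano.yml` it would look something like this (the 100 MiB value is just an example, not a recommendation):

```yaml
# meltano.yml - raise the ELT buffer from the 10 MiB default to 100 MiB.
# A larger buffer lets a fast tap run further ahead of a slow target,
# at the cost of more memory held in the pipe between them.
elt:
  buffer_size: 104857600  # bytes
```

It should also be settable per-run via the `MELTANO_ELT_BUFFER_SIZE` environment variable.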
That’s the one I was thinking of! Thanks, @visch!
Hey guys. @aaronsteers I too started with 2GB and immediately hit out-of-memory errors pulling 100k rows off our biggest Salesforce tables. 4GB made the problem go away. It's pretty much only an issue for a full refresh, though.
I am curious about the Salesforce tap now, though. I'm not convinced it handles the Bulk API max limit of 100k rows properly - it should do multiple fetches until it gets everything. I'll keep digging.
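The "multiple fetches" behavior I'd expect looks roughly like this - a generic pagination loop, not tap-salesforce's actual code. `fetch_page` here is a hypothetical stand-in for a Bulk API result request; the point is just to keep requesting from the last offset until a short page signals the result set is exhausted:

```python
def fetch_page(offset, limit, data):
    """Fake API call: returns up to `limit` rows starting at `offset`.
    Stand-in for a real Bulk API result fetch with a 100k-row cap."""
    return data[offset:offset + limit]

def fetch_all(data, limit=3):
    """Page through the full result set instead of stopping at one request."""
    rows, offset = [], 0
    while True:
        page = fetch_page(offset, limit, data)
        rows.extend(page)
        if len(page) < limit:   # short (or empty) page => no more results
            break
        offset += len(page)
    return rows

records = list(range(10))
print(len(fetch_all(records)))  # 10 - every row retrieved, across 4 requests
```

A tap that stops after the first `fetch_page` call would silently truncate any table bigger than the cap, which matches the symptom described above.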