Problem:
While using decryptStream to stream-decrypt large S3 objects, I noticed a memory increase roughly equal to the size of the object.
I think the issue is that _decryptStream returns a duplexify wrapper and internally runs a pipeline:
const stream = new Duplexify(parseHeaderStream, decipherStream)
pipeline(
  parseHeaderStream,
  verifyStream,
  decipherStream,
  new PassThrough(),
  (err: Error) => {
    if (err) stream.emit('error', err)
  }
)
The caller reads from decipherStream via the duplexify wrapper. The pipeline also pipes decipherStream's output into the PassThrough, but nothing ever reads from that PassThrough, so its internal buffer appears to grow without bound. (Presumably the duplexify reader keeps decipherStream flowing, so the PassThrough's backpressure never gets a chance to pause the pipe.)
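To see the buffering behavior in isolation, here is a plain-Node sketch, independent of the SDK, that writes into a PassThrough nobody reads. Every byte written stays held in one of its internal buffers:

```typescript
import { PassThrough } from 'stream'

// A PassThrough that nobody reads: once its readable buffer reaches
// the high-water mark it stops acknowledging writes, and further
// chunks pile up in its writable buffer instead.
const unread = new PassThrough()
const chunk = Buffer.alloc(64 * 1024) // 64 KiB per write

for (let i = 0; i < 16; i++) {
  // write() starts returning false, but a source that is kept flowing
  // by another consumer (as the duplexify reader does here) keeps
  // writing regardless.
  unread.write(chunk)
}

// Every byte written is still retained in an internal buffer.
const retained = unread.readableLength + unread.writableLength
console.log(`retained: ${retained} bytes`) // >= 1 MiB, never drained
```

This mirrors the pipeline's situation: the writer never stops, the reader never arrives, and the buffered bytes scale with the object size.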
Solution:
Replacing the PassThrough with a no-op Writable that discards chunks fixes the memory growth while still absorbing the destroy() call:
const drain = new Writable({
  write(_chunk, _encoding, callback) {
    callback()
  },
})
pipeline(
  parseHeaderStream,
  verifyStream,
  decipherStream,
  drain,
  (err: Error) => {
    if (err) stream.emit('error', err)
  }
)
I tested this change locally and the memory usage stopped increasing while streaming.
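For contrast, the same standalone experiment with a no-op Writable (again plain Node streams, not the SDK code) shows why the fix works: each chunk is acknowledged as soon as it arrives, so nothing accumulates:

```typescript
import { Writable } from 'stream'

// A no-op Writable invokes the callback immediately, so each written
// chunk is released as soon as it is handed to write().
const drain = new Writable({
  write(_chunk, _encoding, callback) {
    callback() // discard the chunk right away
  },
})

const chunk = Buffer.alloc(64 * 1024)
for (let i = 0; i < 16; i++) drain.write(chunk)

// Nothing is retained: the writable buffer is empty after the writes.
console.log(`buffered: ${drain.writableLength} bytes`) // 0
```

The chunks pass through the same write path as before, but since the callback fires synchronously the internal buffer never grows, regardless of how much is written.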