Move reading of multipart response into try body #19062
anoadragon453 merged 3 commits into develop

Conversation
Otherwise the exception will be raised outside of the error handling code. Fixes #19061. Missed in #17365.
```python
            response, output_stream, boundary, expected_size + 1
        )
        deferred.addTimeout(self.default_timeout_seconds, self.reactor)
        multipart_response = await make_deferred_yieldable(deferred)
```
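For illustration, here is a minimal, self-contained sketch of the pattern this PR applies, with stand-in names (`read_body` and the `RequestSendFailed` class below are hypothetical placeholders, not the actual Synapse code): the await is moved inside the `try` so a timeout is translated into the error type callers expect.

```python
# Hedged sketch of the fix; `read_body` and `RequestSendFailed` are
# stand-ins for the real Synapse helpers.
from twisted.internet import defer, reactor


class RequestSendFailed(Exception):
    """Stand-in for synapse.api.errors.RequestSendFailed."""


async def download(read_body, output_stream, timeout=60):
    try:
        deferred = read_body(output_stream)    # sets up the read
        deferred.addTimeout(timeout, reactor)  # arms the timeout
        # Before this PR the await sat outside the try block, so a
        # defer.TimeoutError escaped the error handling below.
        result = await deferred
    except defer.TimeoutError as e:
        # Translate into the failure type calling code expects.
        raise RequestSendFailed(e) from e
    return result
```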
Should this be inside the `async with self.remote_download_linearizer.queue(ip_address):`?
Maybe. We do it in `get_file`, which is probably what `federation_get_file` was based off of:
`synapse/http/matrixfederationclient.py`, lines 1578 to 1582 at `da6c0ca`
And although this isn't explained anywhere, I guess the point of `self.remote_download_linearizer` is to ensure we only download 6 pieces of remote media at a time, presumably to guard against resource exhaustion (just an assumption). So it makes sense that the download part is under the lock.
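As a rough illustration of what such a keyed, concurrency-limited linearizer does: at most N coroutines holding the same key (e.g. an IP address) run at once. This is an asyncio analogue only; Synapse's real `Linearizer` lives in `synapse.util.async_helpers` and is Twisted-based, and `MiniLinearizer` is a hypothetical name.

```python
# Sketch of a per-key linearizer with a max_count — not Synapse's
# implementation, just the concurrency-limiting idea.
import asyncio
from collections import defaultdict
from contextlib import asynccontextmanager


class MiniLinearizer:
    def __init__(self, max_count: int = 6):
        # One semaphore per key: at most `max_count` holders of the
        # same key proceed concurrently.
        self._semaphores = defaultdict(lambda: asyncio.Semaphore(max_count))

    @asynccontextmanager
    async def queue(self, key):
        async with self._semaphores[key]:
            yield


remote_download_linearizer = MiniLinearizer(max_count=6)


async def download_from(ip_address: str) -> None:
    # Everything inside this block counts against the per-IP limit,
    # which is why the actual wait for the body belongs here.
    async with remote_download_linearizer.queue(ip_address):
        ...  # perform the download
```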
I would argue yes. This code:
```python
# add a byte of headroom to max size as `_MultipartParserProtocol.dataReceived` errs at >=
deferred = read_multipart_response(
    response, output_stream, boundary, expected_size + 1
)
deferred.addTimeout(self.default_timeout_seconds, self.reactor)
```

merely sets up the download, whereas:

```python
multipart_response = await make_deferred_yieldable(deferred)
```

is where we actually wait for the download to complete. So I would expect the lineariser to cover the latter.
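Concretely, the suggested scoping would look something like this — a sketch rearranging the snippet above, assuming the lock exists to bound concurrent downloads:

```python
async with self.remote_download_linearizer.queue(ip_address):
    # add a byte of headroom to max size as `_MultipartParserProtocol.dataReceived` errs at >=
    deferred = read_multipart_response(
        response, output_stream, boundary, expected_size + 1
    )
    deferred.addTimeout(self.default_timeout_seconds, self.reactor)
    # The bytes actually flow while we wait here, so this await must be
    # under the lock for the concurrency limit to be meaningful.
    multipart_response = await make_deferred_yieldable(deferred)
```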
I can't speak to why we limit the number of downloads. Potentially to limit the number of open file descriptors? But 6 seems quite low for only that reason. 🤷
```diff
@@ -1758,6 +1719,7 @@ async def federation_get_file(
             response, output_stream, boundary, expected_size + 1
```
Do we have tests for this kind of timeout? (just wondering)
It looks like this was previously functional and the only problem was the unexpected logs/error handling, so a high-level integration/end-to-end test wouldn't really have prevented this.
It looks like we already have other tests in `tests/http/test_matrixfederationclient.py` that ensure we raise a `RequestSendFailed` in certain scenarios. We could add one of those 🤷
Thanks for the pointer to the test file. I've written up a test (that fails on develop) in 1f20fe7.
The test checks that a `RequestSendFailed` wrapping a `defer.TimeoutError` is raised, whereas before the fix a raw `defer.TimeoutError` was raised, which calling code wouldn't expect.
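For flavour, a self-contained sketch of asserting that wrapping with a fake clock — not the actual test added in 1f20fe7, and `RequestSendFailed` here is a stand-in class:

```python
from twisted.internet import defer
from twisted.internet.task import Clock


class RequestSendFailed(Exception):
    """Stand-in for synapse.api.errors.RequestSendFailed."""


def test_timeout_is_wrapped_sketch():
    clock = Clock()
    d = defer.Deferred()
    d.addTimeout(60, clock)  # arm the timeout against a fake clock

    def translate(failure):
        # Stand-in for the client's error handling after this PR:
        # convert the raw timeout into the expected failure type.
        failure.trap(defer.TimeoutError)
        raise RequestSendFailed(failure.value)

    caught = []
    d.addErrback(translate)
    d.addErrback(caught.append)

    clock.advance(61)  # fire the timeout
    assert isinstance(caught[0].value, RequestSendFailed)
```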