Faster redis replication handling (#19138)
Conversation
**MadLittleMods** left a comment:
Functionality-wise, I think this looks good 👍
Could use an extra dose of context explaining the "why" and how it's meant to work.
```python
        logger.exception("Failed to handle command %s", cmd)
    finally:
        self._processing_streams.discard(stream_name)

async def _unsafe_process(self, item: _StreamCommandQueueItem) -> None:
```
Is this still unsafe?
If so, we should keep the docstring note from before.
If not, we should rename this function.
Feels like this still needs a bit more clarity on why it's unsafe.
Here is what it was changed to:
`synapse/replication/tcp/handler.py`, lines 359 to 364 in 6790312:
"Unsafe because this should be called ..." (and it's the caller's responsibility)
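A minimal sketch of the caller/callee contract the reviewer is asking to keep documented. The names mirror the quoted diff (`_processing_streams`, `_unsafe_process`), but the surrounding class and details are assumptions, not the actual Synapse code:

```python
class StreamCommandHandler:
    """Hypothetical sketch: a safe entry point that takes a per-stream
    guard before delegating to an "unsafe" helper."""

    def __init__(self) -> None:
        self._processing_streams: set[str] = set()

    async def process(self, stream_name: str, item: object) -> object:
        # Safe entry point: takes the per-stream guard before delegating.
        if stream_name in self._processing_streams:
            return None  # another task is already processing this stream
        self._processing_streams.add(stream_name)
        try:
            return await self._unsafe_process(stream_name, item)
        finally:
            self._processing_streams.discard(stream_name)

    async def _unsafe_process(self, stream_name: str, item: object) -> object:
        """Process a queued command.

        Unsafe: the caller must have added `stream_name` to
        `self._processing_streams` first (and must discard it afterwards),
        so that only one task processes a given stream at a time.
        """
        return ("processed", stream_name, item)
```

The docstring keeps the "unsafe" contract explicit, which is what the review is asking for: either the invariant is documented at the call boundary, or the prefix should go.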
Fixes logcontext leaks introduced in #19138.
```python
# If there is already a background process then we signal it to wake
# up and exit. We do not want multiple background processes running
# at a time.
self._wakeup_event.set()
return
```
This doesn't seem accurate. It will wake up the other background process, which will loop around to process more data, while this call exits so that multiple background processes aren't running at once.
Spawning a background process comes with a bunch of overhead, so let's try to reduce the number of background processes we need to spawn when handling inbound federation traffic.
Currently, we seem to spawn roughly one per command. Instead, let's keep the background process alive for a bit, waiting for a new command to come in.
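The pattern described above can be sketched with plain `asyncio`. This is not the Synapse implementation (which uses its own `run_as_background_process` machinery and logcontexts); the class, names, and the idle timeout are all assumptions, purely to illustrate "one long-lived consumer woken by an event" versus "one task per command":

```python
import asyncio


class CommandQueue:
    """Hypothetical sketch: a single consumer task drains a queue and
    lingers briefly for new work instead of spawning one task per command."""

    IDLE_TIMEOUT = 0.05  # how long to linger waiting for more work (assumption)

    def __init__(self) -> None:
        self._queue: list[str] = []
        self._wakeup = asyncio.Event()
        self._consumer_running = False
        self.processed: list[str] = []

    def enqueue(self, cmd: str) -> None:
        self._queue.append(cmd)
        if self._consumer_running:
            # A consumer already exists: just wake it so it loops again,
            # rather than spawning a second background task.
            self._wakeup.set()
            return
        self._consumer_running = True
        asyncio.get_running_loop().create_task(self._consume())

    async def _consume(self) -> None:
        try:
            while True:
                while self._queue:
                    cmd = self._queue.pop(0)
                    self.processed.append(cmd)
                # Queue drained: linger for a bit in case more work arrives,
                # avoiding the overhead of spawning a fresh background task
                # for every command.
                self._wakeup.clear()
                try:
                    await asyncio.wait_for(self._wakeup.wait(), self.IDLE_TIMEOUT)
                except asyncio.TimeoutError:
                    if not self._queue:
                        return  # idle; exit and let enqueue() respawn later
        finally:
            self._consumer_running = False
```

Because everything runs on one event loop, `enqueue()` can only interleave with `_consume()` at `await` points, so the running flag and wakeup event are enough; a threaded version would need real locking.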