Add buf_read_size setting for WSGI configurable request body buffering #3586

Open: ixunio wants to merge 3 commits into benoitc:master
Conversation
This was referenced Apr 11, 2026
benoitc reviewed Apr 20, 2026

benoitc (Owner) requested changes Apr 20, 2026:

> Thanks for the PR! See my comments below; the PR looks good otherwise. If you can add an integration test, that would be useful.
benoitc (Owner):

> @ixunio did you have the time to look at the review? Feel free to ask if you have any questions.
- Introduced `buf_read_size` setting to control the buffer size used when reading request data from the socket.
- Updated the `Body` class to use `buf_read_size` for read operations.
- Added validation for `buf_read_size` to ensure it is a positive integer.
- Enhanced tests to cover the new setting and its validation.
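The validation described above boils down to coercing the configured value to an integer and rejecting anything non-positive. A minimal sketch of such a check (the function name is hypothetical, not Gunicorn's actual implementation):

```python
def validate_buf_read_size(value):
    """Hypothetical validator: coerce to int, reject non-positive sizes."""
    try:
        value = int(value)
    except (TypeError, ValueError):
        raise ValueError("buf_read_size must be an integer")
    if value <= 0:
        raise ValueError("buf_read_size must be a positive integer")
    return value
```

Accepting strings as well as ints matters because settings may arrive from the command line or a config file as text.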
ixunio (Author):

> Thanks for your comments, I will look into it tomorrow!
Co-authored-by: Copilot <copilot@github.com>
ixunio (Author):

> @benoitc might as well rename the PR to reflect the new naming.
This PR introduces a `wsgi-input-block-size` setting to control the buffer size used when reading request data from the socket. Cf. issue #2596 and the stalled PR #3068.
Experiment

Benchmarks ran Flask 3.1 and Gunicorn 25.3 on Python 3.12, across different environments/setups.

Code running: `flask_app.py`

Command:

```
gunicorn --bind 0.0.0.0:18100 --workers 1 --worker-class sync --wsgi-input-block-size xxx flask_app:app
```

Results
Relative throughput change versus a `1kB` wsgi-input-block-size, shown as a positive percentage when bandwidth increases. Columns are block sizes; rows are request body sizes.

| Body size | 2kB | 4kB | 8kB | 16kB | 32kB | 64kB | 128kB | 256kB | 512kB | 1MB |
|-----------|-----|-----|-----|------|------|------|-------|-------|-------|-----|
| 1kB | +24% | +13% | +7% | +7% | +5% | +5% | +22% | +16% | +11% | +14% |
| 10kB | +28% | +28% | +34% | +36% | +35% | +34% | +38% | +50% | +44% | +41% |
| 50kB | +50% | +95% | +60% | +149% | +83% | +124% | +89% | +99% | +131% | +96% |
| 100kB | +34% | +40% | +64% | +59% | +16% | +6% | +80% | +41% | +57% | +101% |
| 200kB | +29% | +55% | +75% | +82% | +50% | +60% | +67% | +72% | +105% | +59% |
| 500kB | -1% | +5% | +2% | -2% | +4% | +4% | -2% | -5% | +10% | -4% |
| 1MB | -2% | -1% | -6% | +1% | -0% | -3% | +3% | +12% | -1% | -19% |
| 2MB | +15% | +12% | +17% | +30% | +4% | +14% | +21% | +22% | -3% | -2% |
| 5MB | +13% | +23% | +23% | +17% | +30% | +20% | +17% | +12% | +10% | -15% |
| 10MB | +13% | +25% | +27% | +25% | +26% | +22% | +19% | +20% | +11% | -14% |
| 50MB | +17% | +37% | +42% | +40% | +45% | +47% | +42% | +22% | +43% | +12% |
| 100MB | +24% | +38% | +47% | +59% | +53% | +55% | +45% | +40% | +48% | +22% |
| 250MB | +24% | +43% | +59% | +58% | +61% | +55% | +52% | +50% | +29% | +21% |
| 500MB | +34% | +58% | +57% | +70% | +67% | +61% | +55% | +59% | +46% | +25% |
| 1000MB | +37% | +54% | +61% | +70% | +68% | +67% | +55% | +60% | +64% | +30% |

Findings
Note: these apply to WSGI sync workers. This parameter has no effect on ASGI configurations.
- Throughput generally improves with a larger `wsgi-input-block-size`.
- Performance drops around the 1MB body size, regardless of the `wsgi-input-block-size` configuration. This behavior is consistent across different environments.

Any insights into why we might be hitting a bottleneck specifically at the 1MB threshold @benoitc?
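The mechanism being benchmarked above can be illustrated with a sketch (not the PR's actual code): an app consuming `wsgi.input` in fixed-size blocks, where a larger block size means fewer `read()` calls, and thus fewer socket operations, per request.

```python
import io

def consume_body(environ, block_size=8192):
    """Hypothetical sketch: drain a WSGI request body in fixed-size blocks.

    A larger block_size reduces the number of read() calls (and the
    per-call overhead) needed to consume a body of a given size.
    """
    stream = environ["wsgi.input"]
    total = 0
    while True:
        chunk = stream.read(block_size)
        if not chunk:  # empty bytes signals end of body
            break
        total += len(chunk)
    return total

# Usage with an in-memory stand-in for the socket-backed input stream:
env = {"wsgi.input": io.BytesIO(b"x" * 100_000)}
consume_body(env, block_size=64 * 1024)
```

For a 100 kB body, a 64 kB block needs two reads where a 1 kB block needs a hundred, which is consistent with the throughput gains the table shows for larger block sizes.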