What type of defect/bug is this?
Unexpected behaviour (obvious or verified by project member)
How can the issue be reproduced?
Hi,
We’re currently running FreeRADIUS 3.2.8 as a proxy to a relatively large number of distinct home servers, and we’ve been doing some scale testing around this setup. During these tests, we observed what appears to be fairly rapid growth in the number of proxy sockets. Within about 20 minutes of traffic starting, the server creates thousands of proxy sockets and eventually reaches the file descriptor limit.
From the logs (excerpted):
Thu Feb 12 07:07:15 2026 : Info: ... adding new socket proxy address * port 50169
Thu Feb 12 07:07:15 2026 : Info: ... adding new socket proxy address * port 53668
Thu Feb 12 07:07:15 2026 : Info: ... adding new socket proxy address * port 55857
Thu Feb 12 07:09:26 2026 : Error: Failed adding event handler for socket proxy address * port 44699: Too many readers
Thu Feb 12 07:09:26 2026 : Info: ... adding new socket proxy address * port 46011
Thu Feb 12 07:09:26 2026 : Error: Failed adding event handler for socket proxy address * port 46011: Too many readers
Thu Feb 12 07:09:26 2026 : Info: ... adding new socket proxy address * port 37866
Thu Feb 12 07:09:26 2026 : Error: Failed adding event handler for socket proxy address * port 37866: Too many readers
Thu Feb 12 07:09:26 2026 : Info: ... adding new socket proxy address * port 42246
Thu Feb 12 07:09:26 2026 : Error: Failed adding event handler for socket proxy address * port 42246: Too many readers
Thu Feb 12 07:09:26 2026 : Info: ... adding new socket proxy address * port 35673
Thu Feb 12 07:09:26 2026 : Error: Failed adding event handler for socket proxy address * port 35673: Too many readers
Thu Feb 12 07:09:26 2026 : Info: ... adding new socket proxy address * port 52457
Thu Feb 12 07:09:26 2026 : Error: Failed adding event handler for socket proxy address * port 52457: Too many readers
From reading the code, it seems this behavior may be related to insert_into_proxy_hash(). My understanding is that when fr_packet_list_id_alloc() cannot find a free ID, a new socket is created via proxy_new_listener(). That socket is then added using the home server’s specific address and port (sock->other_ipaddr, sock->other_port), which makes it destination-specific (dst_any = 0).
If this understanding is correct, it would mean that once the default socket’s 256 IDs are exhausted for a given home server, a dedicated socket is created for that specific destination. With a large number of distinct home servers, this could result in a growing number of destination-specific sockets, since they cannot be shared across destinations. These sockets are also never released.
In contrast, the default proxy socket created by create_default_proxy_listener() appears to be added with a zeroed destination (dst_any = 1, dst_port = 0), allowing it to act as a wildcard that can serve all destinations.
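For clarity, here is roughly how I read the two registration paths. This is a paraphrase of my understanding, not a verbatim excerpt of the source:

/*
 *	Dedicated socket path (insert_into_proxy_hash() -> proxy_new_listener()):
 *	the socket is registered with the home server's real address and port,
 *	so the packet list treats it as dst_any = 0 and it can never be reused
 *	for a different destination.
 */
fr_packet_list_socket_add(proxy_list, this->fd, sock->proto,
                          &sock->other_ipaddr, sock->other_port, this);

/*
 *	Default socket path (create_default_proxy_listener()):
 *	the destination address is zeroed and the port is 0, so the packet
 *	list treats it as dst_any = 1 and any home server can allocate IDs
 *	on it.
 */
fr_packet_list_socket_add(proxy_list, this->fd, sock->proto,
                          &sock->other_ipaddr /* zeroed */, 0, this);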
Based on that, I wanted to ask whether it would make sense for dynamically-created UDP proxy sockets to behave similarly — i.e., be wildcard rather than destination-specific. For example:
	if (sock->proto == IPPROTO_UDP) {
		/*
		 *	Register the dynamically-created socket with a wildcard
		 *	destination, mirroring create_default_proxy_listener(),
		 *	so that it can be reused for any home server.
		 */
		memset(&sock->other_ipaddr, 0, sizeof(sock->other_ipaddr));
		sock->other_ipaddr.af = request->home_server->ipaddr.af;
		sock->other_port = 0;
	}
My thinking is that this would allow the additional sockets to be shared across destinations. The tradeoff, as I understand it, is that the 256-ID space per socket would then be shared across all destinations, but this already seems to be the case for the default proxy socket today. Any potential ID collision across destinations would therefore not be fundamentally different from what already exists.
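To put rough numbers on the tradeoff (hypothetical figures, and assuming my reading of the per-socket 256-ID allocation is right), a quick back-of-the-envelope comparison:

#include <stdio.h>

int main(void)
{
	/* Hypothetical figures, only to illustrate the scaling difference. */
	long ids_per_socket = 256;	/* RADIUS ID space per proxy socket */
	long home_servers   = 4000;	/* distinct destinations (assumed) */
	long outstanding    = 20000;	/* total in-flight proxied requests (assumed) */

	/*
	 *	Current behaviour, as I read it: once the default wildcard
	 *	socket runs out of IDs, each destination that still needs one
	 *	gets its own dedicated socket, so the fd count can approach
	 *	the number of home servers.
	 */
	long dedicated_worst_case = home_servers;

	/*
	 *	Proposed behaviour: extra sockets are wildcard too, so the
	 *	count scales with the total number of outstanding requests
	 *	rather than with the number of distinct destinations.
	 */
	long wildcard_sockets = (outstanding + ids_per_socket - 1) / ids_per_socket;

	printf("dedicated sockets (worst case): %ld\n", dedicated_worst_case);
	printf("wildcard sockets needed:        %ld\n", wildcard_sockets);
	return 0;
}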
I may be misunderstanding some of the design assumptions here, so I’d appreciate confirmation on whether this interpretation is correct, and whether this kind of change would be considered safe or appropriate.
Thanks very much for your time and guidance,
Cecil
Log output from the FreeRADIUS daemon
Thu Feb 12 07:09:23 2026 : Info: ... adding new socket proxy address * port 41908
Thu Feb 12 07:09:23 2026 : Info: ... adding new socket proxy address * port 35026
Thu Feb 12 07:09:23 2026 : Info: ... adding new socket proxy address * port 49318
Thu Feb 12 07:09:23 2026 : Info: ... adding new socket proxy address * port 35228
Thu Feb 12 07:09:23 2026 : Info: ... adding new socket proxy address * port 33523
Thu Feb 12 07:09:23 2026 : Info: ... adding new socket proxy address * port 60747
Thu Feb 12 07:09:23 2026 : Info: ... adding new socket proxy address * port 33138
Thu Feb 12 07:09:23 2026 : Error: Failed adding event handler for socket proxy address * port 33138: Too many readers
Thu Feb 12 07:09:23 2026 : Info: ... adding new socket proxy address * port 55326
Thu Feb 12 07:09:23 2026 : Error: Failed adding event handler for socket proxy address * port 55326: Too many readers
Thu Feb 12 07:09:24 2026 : Info: ... adding new socket proxy address * port 36000
Thu Feb 12 07:09:24 2026 : Error: Failed adding event handler for socket proxy address * port 36000: Too many readers
Thu Feb 12 07:09:24 2026 : Info: ... adding new socket proxy address * port 51650
Thu Feb 12 07:09:24 2026 : Error: Failed adding event handler for socket proxy address * port 51650: Too many readers
Thu Feb 12 07:09:24 2026 : Info: ... adding new socket proxy address * port 52325
Thu Feb 12 07:09:24 2026 : Error: Failed adding event handler for socket proxy address * port 52325: Too many readers
Thu Feb 12 07:09:24 2026 : Info: ... adding new socket proxy address * port 50916
Thu Feb 12 07:09:24 2026 : Error: Failed adding event handler for socket proxy address * port 50916: Too many readers
Thu Feb 12 07:09:24 2026 : Info: ... adding new socket proxy address * port 44989
Thu Feb 12 07:09:24 2026 : Error: Failed adding event handler for socket proxy address * port 44989: Too many readers
Thu Feb 12 07:09:24 2026 : Info: ... adding new socket proxy address * port 36856
Thu Feb 12 07:09:24 2026 : Error: Failed adding event handler for socket proxy address * port 36856: Too many readers
Relevant log output from client utilities
No response
Backtrace from LLDB or GDB