docs/docs/using/plugins/plugins.md (1 addition, 1 deletion)
@@ -40,7 +40,7 @@ Plugins for improving system reliability, performance, and resource management.
 |--------|------|-------------|
 |[Circuit Breaker](https://github.com/IBM/mcp-context-forge/tree/main/plugins/circuit_breaker)| Native | Trips a per-tool breaker on high error rates or consecutive failures and blocks during cooldown |
 |[Watchdog](https://github.com/IBM/mcp-context-forge/tree/main/plugins/watchdog)| Native | Enforces maximum runtime for tools, with warn or block actions on threshold violations |
-|[Rate Limiter](https://github.com/IBM/mcp-context-forge/tree/main/plugins/rate_limiter)| Native | Fixed-window in-memory rate limiting by user, tenant, or tool |
+|[Rate Limiter](https://github.com/IBM/mcp-context-forge/tree/main/plugins/rate_limiter)| Native | Per-user, tenant, and tool rate limiting with selectable algorithms (fixed_window, sliding_window, token_bucket) and memory or Redis backends |
 |[Cached Tool Result](https://github.com/IBM/mcp-context-forge/tree/main/plugins/cached_tool_result)| Native | Caches idempotent tool results in-memory with configurable TTL and key fields |
 |[Response Cache by Prompt](https://github.com/IBM/mcp-context-forge/tree/main/plugins/response_cache_by_prompt)| Native | Advisory response cache using cosine similarity over prompt/input fields with configurable threshold |
 |[Retry with Backoff](https://github.com/IBM/mcp-context-forge/tree/main/plugins/retry_with_backoff)| Native | Annotates retry/backoff policy in metadata with exponential backoff on specific HTTP status codes |
plugins/rate_limiter/README.md (64 additions, 4 deletions)
@@ -3,7 +3,7 @@
 > Author: Mihai Criveti
 > Version: 0.1.0

-Enforces fixed-window rate limits per user, tenant, and tool across `tool_pre_invoke` and `prompt_pre_fetch` hooks. Supports an in-process memory backend (single-instance) and a Redis backend (shared across all gateway instances).
+Enforces rate limits per user, tenant, and tool across `tool_pre_invoke` and `prompt_pre_fetch` hooks. Supports pluggable counting algorithms (fixed window, sliding window, token bucket), an in-process memory backend (single-instance), and a Redis backend (shared across all gateway instances).

 ## Hooks
@@ -32,6 +32,9 @@ If any configured dimension is exceeded, the plugin returns a violation with HTT
 | `redis_key_prefix` | string | `"rl"` | Prefix for all Redis keys |
@@ -69,18 +73,44 @@ Every request (allowed or blocked) includes:
 | `X-RateLimit-Reset` | Unix timestamp when the current window resets |
 | `Retry-After` | Seconds until the window resets (blocked requests only) |
+
+## Algorithms
+
+Three counting algorithms are available, selected via the `algorithm` config field.
+
+| Algorithm | Config value | Best for | Trade-off |
+|---|---|---|---|
+| Fixed window | `fixed_window` | General use, lowest overhead | Up to 2× the limit at window boundaries |
+| Sliding window | `sliding_window` | Smooth enforcement, no boundary burst | Higher memory: stores one timestamp per request per key |
+| Token bucket | `token_bucket` | Bursty workloads — allows short spikes up to capacity | Slightly higher Redis overhead: stores `{tokens, last_refill}` hash per key |
+
+### Fixed window (default)
+
+Counts requests in a fixed time slot (e.g. "minute 14:03") and resets at the slot boundary. Simple and fast. The 2× burst at a boundary (N requests at the end of slot T, N requests at the start of T+1) is a known trade-off; use `by_user` with headroom if this matters.
+### Sliding window
+
+Stores a timestamp for every request in the current window. At each check, expired timestamps are discarded and the remaining count is compared against the limit. Prevents boundary bursts entirely. Memory usage grows with request volume — roughly one float per request per active key.
+### Token bucket
+
+Each identity (user, tenant, tool) has a bucket that holds up to `count` tokens. Tokens refill at a steady rate of `count/window`. A request consumes one token. Bursts up to the bucket capacity are allowed; a sustained rate above `count/window` is rejected. Useful for APIs where short spikes are acceptable but sustained overload is not.
+**Redis support:** `token_bucket` with `backend: redis` is fully supported. The plugin stores `{tokens, last_refill}` in a Redis hash per key and uses an atomic Lua script to refill and consume tokens in a single round-trip — the same pattern as the other two algorithms. This means `token_bucket` enforces a true cluster-wide limit in multi-instance deployments.
+
 ## Backends

 ### Memory backend (default)

 - Counters are stored in a process-local dict (`_store`)
 - An `asyncio.Lock` serialises all counter reads and writes — safe under concurrent asyncio tasks
-- A background sweep task evicts expired windows every 0.5s — memory is bounded to active windows only
+- A background sweep task evicts expired windows every 0.5s — for `fixed_window` and `token_bucket`, expired entries are removed promptly; for `sliding_window`, keys with fully stale timestamps are evicted by the sweep
 - **Limitation:** state is not shared across processes or hosts. In a multi-instance deployment (e.g. 3 gateway instances behind nginx), each instance tracks its own counter — the effective limit is `N × configured_limit`

 ### Redis backend

-- Counters are stored in Redis using an atomic Lua `INCR`+`EXPIRE` script — a single Redis call per check with no race condition
+- `fixed_window`: atomic Lua `INCR`+`EXPIRE` — one Redis round-trip per check, no race condition
+- `sliding_window`: atomic Lua `ZADD`+`ZREMRANGEBYSCORE`+`ZCARD`+`EXPIRE` — one round-trip, no race condition
+- `token_bucket`: atomic Lua script — reads the `{tokens, last_refill}` hash, refills proportionally, consumes 1 token, writes back — one round-trip, no race condition
 - All gateway instances share the same counter — the configured limit is the true cluster-wide limit
 - Requires `redis_url` to be set
 - If `redis_fallback: true` (default) and Redis is unavailable, the plugin falls back to the in-process `MemoryBackend` automatically — requests are never blocked due to Redis downtime

@@ -111,6 +141,34 @@ config:
     search: "10/m"
 ```
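The examples use a `count/period` shorthand for rate strings (`"10/m"`, `"30/m"`, `"300/m"`). A hedged sketch of how such strings could be parsed — `parse_rate` is a hypothetical helper assuming `s`/`m`/`h` period suffixes; the plugin's real grammar may differ:

```python
def parse_rate(rate: str) -> tuple:
    """Parse 'count/period' (e.g. '30/m') into (count, window_seconds).

    Hypothetical helper: assumes period suffixes s, m, h only.
    """
    count_part, _, period = rate.partition("/")
    seconds = {"s": 1, "m": 60, "h": 3600}[period.strip().lower()]
    return int(count_part), seconds

print(parse_rate("30/m"))  # (30, 60)
```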
+### Sliding window (no boundary bursts)
+
+```yaml
+config:
+  algorithm: "sliding_window"
+  by_user: "30/m"
+  by_tenant: "300/m"
+```
+
+### Token bucket — memory backend (default)
+
+```yaml
+config:
+  algorithm: "token_bucket"
+  by_user: "30/m"  # bucket holds 30 tokens, refills at 30/min
+```
+
+### Token bucket — Redis backend (multi-instance)
+
+```yaml
+config:
+  algorithm: "token_bucket"
+  backend: "redis"
+  redis_url: "redis://redis:6379/0"
+  redis_fallback: true
+  by_user: "30/m"
+```
+
 ### Permissive mode (observe without blocking)

 ```yaml
@@ -126,7 +184,9 @@ In `permissive` mode the plugin records violations and emits `X-RateLimit-*` headers
 | Limitation | Severity | Status |
 |---|---|---|
 | Memory backend not shared across processes | HIGH | Use Redis backend for multi-instance deployments |
-| Fixed window allows up to 2× limit at window boundary | LOW | Deferred — use `by_user` with headroom as a workaround |
+| Fixed window allows up to 2× limit at window boundary | LOW | Use `sliding_window` algorithm, or use `by_user` with headroom |
+| `by_tool` matching is case-sensitive | LOW | Fixed — tool names are now normalised with `.strip().lower()` at init |
+| Whitespace-only user identity bypasses anonymous bucket | LOW | Documented gap; strip identities before passing to hooks |
 | No per-server limits (`server_id` dimension missing) | LOW | Not implemented |
 | No config hot-reload — rate string changes require restart | LOW | Not implemented |
 | Memory backend not safe under threaded workers (gunicorn `--threads`) | LOW | asyncio.Lock is loop-safe; use async workers (`-k uvicorn`) |