fix(security): add rate limiting and perPage cap on public API endpoints#1125
JasonOA888 wants to merge 6 commits into f:main
Conversation
The public `GET /api/prompts` endpoint had no upper bound on the `perPage` query parameter. An attacker could request `/api/prompts?perPage=999999` to dump the entire database into memory, causing OOM crashes. Additionally, there was no rate limiting on public API access, leaving self-hosted instances vulnerable to scrapers and DB connection exhaustion.

Changes:
- Cap `perPage` to a maximum of 100 (`Math.min(perPage, MAX_PER_PAGE)`)
- Validate the `page` parameter (must be >= 1)
- Add an in-memory sliding-window rate limiter (60 req/min per IP)
- Return 429 with a `Retry-After` header when the rate limit is exceeded
- Zero external dependencies (no Redis required)

Fixes f#1061
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough
Unauthenticated GET `/api/prompts` now applies IP-derived rate limiting.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant API as /api/prompts (Route)
    participant Limiter as publicApiLimiter
    participant DB as PromptsDB
    Client->>API: GET /api/prompts?page=&perPage=
    API->>API: getClientIp(request)
    API->>Limiter: check(clientId)
    alt allowed
        API->>DB: query prompts (with clamped page/perPage)
        DB-->>API: results
        API-->>Client: 200 JSON (prompts)
    else denied
        Limiter-->>API: {allowed: false, retryAfterSeconds}
        API-->>Client: 429 JSON + Retry-After
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 3
🧹 Nitpick comments (1)
src/lib/rate-limit.ts (1)
37-43: Cleanup runs synchronously during request handling. The periodic cleanup iterates over all store entries on the request path. For high-traffic scenarios with many unique IPs, this could introduce latency spikes every 5 minutes. Consider offloading cleanup to `setInterval` at module load instead.

♻️ Alternative: background cleanup with setInterval
```diff
-let lastCleanup = Date.now();
+// Start background cleanup interval
+setInterval(() => {
+  const now = Date.now();
+  for (const [k, v] of store) {
+    if (v.resetAt <= now) store.delete(k);
+  }
+}, CLEANUP_INTERVAL_MS);

 export function checkRateLimit(
   key: string,
   opts: { windowMs?: number; maxRequests?: number } = {}
 ): { allowed: true } | { allowed: false; retryAfterMs: number } {
   const now = Date.now();
   const windowMs = opts.windowMs ?? DEFAULT_WINDOW_MS;
   const maxRequests = opts.maxRequests ?? DEFAULT_MAX_REQUESTS;

-  // Periodic cleanup
-  if (now - lastCleanup > CLEANUP_INTERVAL_MS) {
-    for (const [k, v] of store) {
-      if (v.resetAt <= now) store.delete(k);
-    }
-    lastCleanup = now;
-  }
-
   const entry = store.get(key);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/rate-limit.ts` around lines 37 - 43, The periodic cleanup currently performed inline during request handling should be moved to a background task: remove the synchronous block that checks now - lastCleanup > CLEANUP_INTERVAL_MS inside the request path and instead set up a module-level setInterval (using CLEANUP_INTERVAL_MS) that iterates the store Map and deletes entries where v.resetAt <= Date.now(); keep or remove lastCleanup as appropriate (you can drop it if only used for this check). Reference the existing symbols store, v.resetAt, CLEANUP_INTERVAL_MS and lastCleanup (to remove) and ensure the request-path code no longer performs the cleanup loop so requests won't block on large store iterations.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/rate-limit.ts`:
- Around line 66-76: The getClientIp function currently returns the literal
"unknown" which groups all unidentified requests into one rate-limit bucket;
update getClientIp to check additional common gateway headers (e.g.
"cf-connecting-ip", "true-client-ip", "fastly-client-ip" or provider-specific
headers) before falling back, and when no IP header is found emit a
warning/debug log including available request headers to aid debugging; as a
safer fallback, return a more specific per-request identifier (for example a
short hash or concatenation of stable headers like "user-agent" +
"accept-language") instead of the static "unknown" so different clients don't
collide in the same rate-limit bucket.
- Around line 2-5: The comment incorrectly calls the algorithm "sliding-window"
while the implementation uses a fixed-window reset (it checks resetAt <= now).
Update the module comment to describe a fixed-window counter per IP (or
implement a true sliding-window by storing per-IP request timestamps and
counting those within now - windowMs); reference the resetAt variable and the
counter logic in the rate limiter implementation when making the change so the
documentation matches the behavior.
- Around line 1-60: The refactor removed the exported limiters
(mcpGeneralLimiter, mcpToolCallLimiter, mcpWriteToolLimiter, mcpAiToolLimiter)
that other code expects; restore compatibility by reintroducing those four
exports as objects with a check() method that delegates to the new
checkRateLimit(key, opts). Implement each limiter (mcpGeneralLimiter,
mcpToolCallLimiter, mcpWriteToolLimiter, mcpAiToolLimiter) to call
checkRateLimit with the appropriate fixed windowMs/maxRequests you want for each
limiter and use a per-IP key (e.g., `${key}:mcpGeneral`) so existing callers in
src/pages/api/mcp.ts and tests can keep calling limiter.check() without changes.
Ensure the exported symbols are named exactly as before.
---
Nitpick comments:
In `@src/lib/rate-limit.ts`:
- Around line 37-43: The periodic cleanup currently performed inline during
request handling should be moved to a background task: remove the synchronous
block that checks now - lastCleanup > CLEANUP_INTERVAL_MS inside the request
path and instead set up a module-level setInterval (using CLEANUP_INTERVAL_MS)
that iterates the store Map and deletes entries where v.resetAt <= Date.now();
keep or remove lastCleanup as appropriate (you can drop it if only used for this
check). Reference the existing symbols store, v.resetAt, CLEANUP_INTERVAL_MS and
lastCleanup (to remove) and ensure the request-path code no longer performs the
cleanup loop so requests won't block on large store iterations.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7d0533df-4954-4377-8dfb-dfbd4e107784
📒 Files selected for processing (2)
src/app/api/prompts/route.ts
src/lib/rate-limit.ts
Removed (`src/lib/rate-limit.ts`, old sliding-window `RateLimiter` class and MCP limiter exports):

```typescript
/**
 * In-memory sliding window rate limiter.
 *
 * Each limiter instance tracks request timestamps per identifier (IP or API key)
 * and rejects requests that exceed the configured window/max.
 *
 * NOTE: This is per-process. In a multi-instance deployment, consider a
 * Redis-backed implementation instead.
 */

interface RateLimitEntry {
  timestamps: number[];
}

interface RateLimiterOptions {
  /** Maximum number of requests allowed within the window. */
  max: number;
  /** Time window in seconds. */
  windowSeconds: number;
}

export class RateLimiter {
  private store = new Map<string, RateLimitEntry>();
  private readonly max: number;
  private readonly windowMs: number;
  // Cleanup stale entries every 60 s to prevent memory leaks
  private cleanupInterval: ReturnType<typeof setInterval>;

  constructor(opts: RateLimiterOptions) {
    this.max = opts.max;
    this.windowMs = opts.windowSeconds * 1000;
    this.cleanupInterval = setInterval(() => this.cleanup(), 60_000);
    // Allow the process to exit without waiting for the interval
    if (this.cleanupInterval.unref) {
      this.cleanupInterval.unref();
    }
  }

  /**
   * Check whether the given identifier is allowed to make a request.
   * Returns `{ allowed: true, remaining }` or `{ allowed: false, retryAfterSeconds }`.
   */
  check(identifier: string): { allowed: true; remaining: number } | { allowed: false; retryAfterSeconds: number } {
    const now = Date.now();
    const windowStart = now - this.windowMs;

    let entry = this.store.get(identifier);
    if (!entry) {
      entry = { timestamps: [] };
      this.store.set(identifier, entry);
    }

    // Drop timestamps outside the current window
    entry.timestamps = entry.timestamps.filter((t) => t > windowStart);

    if (entry.timestamps.length >= this.max) {
      // Earliest timestamp that will leave the window
      const oldest = entry.timestamps[0];
      const retryAfterMs = oldest + this.windowMs - now;
      return { allowed: false, retryAfterSeconds: Math.ceil(retryAfterMs / 1000) };
    }

    entry.timestamps.push(now);
    return { allowed: true, remaining: this.max - entry.timestamps.length };
  }

  private cleanup() {
    const now = Date.now();
    const windowStart = now - this.windowMs;
    for (const [key, entry] of this.store) {
      entry.timestamps = entry.timestamps.filter((t) => t > windowStart);
      if (entry.timestamps.length === 0) {
        this.store.delete(key);
      }
    }
  }
}

// ---------------------------------------------------------------------------
// Pre-configured limiters for MCP tool calls
// ---------------------------------------------------------------------------

/** General MCP POST requests – 20 req / min per identifier */
export const mcpGeneralLimiter = new RateLimiter({ max: 20, windowSeconds: 60 });

/** tool calls (tools/call) – 10 req / min per identifier */
export const mcpToolCallLimiter = new RateLimiter({ max: 10, windowSeconds: 60 });

/** Write-mutation tools – 5 req / min per identifier */
export const mcpWriteToolLimiter = new RateLimiter({ max: 5, windowSeconds: 60 });
```

Added (`src/lib/rate-limit.ts`, new fixed-window `checkRateLimit`):

```typescript
/**
 * Lightweight in-memory rate limiter for public API endpoints.
 *
 * Uses a sliding-window counter per IP address. No external dependencies
 * (Redis, etc.) required — suitable for single-instance self-hosted deploys.
 *
 * For multi-instance production deployments, replace the store with an
 * Upstash Redis or similar backend.
 */

type RateLimitEntry = { count: number; resetAt: number };

const store = new Map<string, RateLimitEntry>();

/** Default: 60 requests per 60-second window. */
const DEFAULT_WINDOW_MS = 60_000;
const DEFAULT_MAX_REQUESTS = 60;

/** Clean up stale entries every 5 minutes to prevent memory leaks. */
const CLEANUP_INTERVAL_MS = 5 * 60_000;
let lastCleanup = Date.now();

/**
 * Check whether the given `key` (typically an IP) has exceeded the rate limit.
 *
 * Returns `{ allowed: true }` if the request is permitted, or
 * `{ allowed: false, retryAfterMs }` if the limit has been reached.
 */
export function checkRateLimit(
  key: string,
  opts: { windowMs?: number; maxRequests?: number } = {}
): { allowed: true } | { allowed: false; retryAfterMs: number } {
  const now = Date.now();
  const windowMs = opts.windowMs ?? DEFAULT_WINDOW_MS;
  const maxRequests = opts.maxRequests ?? DEFAULT_MAX_REQUESTS;

  // Periodic cleanup
  if (now - lastCleanup > CLEANUP_INTERVAL_MS) {
    for (const [k, v] of store) {
      if (v.resetAt <= now) store.delete(k);
    }
    lastCleanup = now;
  }

  const entry = store.get(key);

  if (!entry || entry.resetAt <= now) {
    // New window
    store.set(key, { count: 1, resetAt: now + windowMs });
    return { allowed: true };
  }

  entry.count++;

  if (entry.count > maxRequests) {
    return { allowed: false, retryAfterMs: entry.resetAt - now };
  }

  return { allowed: true };
}
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Verify that the removed exports are still being imported elsewhere in the codebase
echo "=== Searching for imports of removed limiter exports ==="
rg -n "mcpGeneralLimiter|mcpToolCallLimiter|mcpWriteToolLimiter|mcpAiToolLimiter" --type ts
echo ""
echo "=== Checking mcp.ts import statement ==="
rg -n "from.*rate-limit" src/pages/api/mcp.ts
echo ""
echo "=== Checking test mocks ==="
rg -n "mcpGeneralLimiter|mcpToolCallLimiter" --glob "*test*"
```

Repository: f/prompts.chat
Length of output: 1464
Critical: Removed exports break the MCP endpoint.
The refactor removes mcpGeneralLimiter, mcpToolCallLimiter, mcpWriteToolLimiter, and mcpAiToolLimiter exports that are actively imported in src/pages/api/mcp.ts (lines 16-19) and called with .check() at lines 1458, 1473, 1484, and 1493. Tests in src/__tests__/api/mcp-handler.test.ts also mock these exports (lines 15-18).
Either restore these exports or update all consuming code in src/pages/api/mcp.ts and the tests to use the new checkRateLimit function.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/rate-limit.ts` around lines 1 - 60, The refactor removed the exported
limiters (mcpGeneralLimiter, mcpToolCallLimiter, mcpWriteToolLimiter,
mcpAiToolLimiter) that other code expects; restore compatibility by
reintroducing those four exports as objects with a check() method that delegates
to the new checkRateLimit(key, opts). Implement each limiter (mcpGeneralLimiter,
mcpToolCallLimiter, mcpWriteToolLimiter, mcpAiToolLimiter) to call
checkRateLimit with the appropriate fixed windowMs/maxRequests you want for each
limiter and use a per-IP key (e.g., `${key}:mcpGeneral`) so existing callers in
src/pages/api/mcp.ts and tests can keep calling limiter.check() without changes.
Ensure the exported symbols are named exactly as before.
…ove IP detection
- Restore mcpGeneralLimiter, mcpToolCallLimiter, mcpWriteToolLimiter, mcpAiToolLimiter as RateLimiter class instances with check() method
- Fix comment: describe as fixed-window (not sliding-window)
- Move cleanup to background setInterval instead of inline in request path
- Add CDN header checks: cf-connecting-ip, true-client-ip, fastly-client-ip
- Fallback to UA+Accept-Language fingerprint instead of static 'unknown'
- Log warning with available headers when no IP can be determined

…ters, and improved IP detection
- Restore original sliding-window RateLimiter class with check() method
- Restore all 4 MCP limiters (mcpGeneral, mcpToolCall, mcpWrite, mcpAi)
- Add publicApiLimiter for GET /api/prompts rate limiting
- Add CDN header checks in getClientIp (Cloudflare, Fastly, True-Client-IP)
- Fallback to UA+Accept-Language fingerprint instead of static 'unknown'
- Log warning when no IP headers found
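The UA+Accept-Language fingerprint fallback described in these commits might look like the following sketch; the `fp:` prefix, the hash choice, and the 16-character truncation are assumptions, and a plain `Map` stands in for `request.headers`:

```typescript
import { createHash } from "node:crypto";

// Derive a coarse per-client identifier from stable headers so unidentified
// clients do not all collapse into one "unknown" rate-limit bucket.
function fallbackClientId(headers: Map<string, string>): string {
  const ua = headers.get("user-agent") ?? "";
  const lang = headers.get("accept-language") ?? "";
  if (!ua && !lang) return "unknown"; // nothing stable to fingerprint
  const digest = createHash("sha256").update(`${ua}|${lang}`).digest("hex");
  return `fp:${digest.slice(0, 16)}`;
}
```

Hashing keeps raw header values out of the limiter's keys (and any logs of them), while the `fp:` prefix lets callers tell a fingerprint apart from a real IP.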
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/app/api/prompts/route.ts`:
- Around line 323-328: The perPage parsing uses parseInt(... ) || 24 before
clamping so an explicit 0 becomes falsy and gets replaced by the default;
instead, read the raw string from searchParams, call parseInt once (e.g. let raw
= searchParams.get("perPage"); let n = parseInt(raw || "", 10)), then if
isNaN(n) set n = 24, otherwise clamp n = Math.min(MAX_PER_PAGE, Math.max(1, n));
assign that to perPage (reference MAX_PER_PAGE, perPage, page, and the
parseInt/searchParams usage to locate the code).
In `@src/lib/rate-limit.ts`:
- Around line 155-160: The fallback console.warn in the rate-limiting code
currently logs full header entries via
JSON.stringify(Object.fromEntries(request.headers.entries())), which can leak
sensitive values; change the warning in the fallback path to log only header
names or a redacted allowlist instead. Locate the console.warn call (the
fallback returning "unknown") and replace the payload with either
Array.from(request.headers.keys()) or a filtered map of headers that includes
only allowed names (e.g., ["x-forwarded-for","x-real-ip","user-agent"]) with
values redacted; ensure no header values (cookies, authorization, etc.) are ever
included in logs.
- Around line 117-139: The getClientIp function currently trusts
client-controlled forwarded headers; require an explicit config flag (e.g.,
process.env.TRUST_PROXY or a boolean passed into getClientIp) before reading any
X-Forwarded-For / cf/true-client-ip/fastly headers and otherwise fall back to a
safe server-provided source (eg. connection remoteAddr or only trusted
Cloudflare header when running behind Cloudflare and an env flag indicates
that); update getClientIp to check that flag (reference getClientIp and the
header names cf-connecting-ip, true-client-ip, fastly-client-ip,
x-forwarded-for, x-real-ip) and only parse forwarded headers when the flag is
true. Also stop logging all request headers via
JSON.stringify(Object.fromEntries(request.headers.entries())); replace that
logging with a redaction step that filters/removes sensitive headers
(Authorization, Cookie/Set-Cookie, X-API-Key, X-Api-Key and similar) before
stringifying, or omit header logging entirely if no safe subset exists.
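The `perPage` off-by-falsy behavior called out above, side by side with the NaN-only fix (the `MAX_PER_PAGE` of 100 and the default of 24 are taken from the review comment; the function names are illustrative):

```typescript
const MAX_PER_PAGE = 100;

function buggyPerPage(raw: string | null): number {
  // `0` is falsy, so an explicit perPage=0 silently becomes 24.
  return Math.min(MAX_PER_PAGE, parseInt(raw ?? "", 10) || 24);
}

function fixedPerPage(raw: string | null): number {
  const n = parseInt(raw ?? "", 10);
  if (Number.isNaN(n)) return 24; // default only when unparsable
  return Math.min(MAX_PER_PAGE, Math.max(1, n)); // clamp, don't replace
}
```

The fix defaults only on a genuinely missing or unparsable value and clamps everything else into [1, 100].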
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 50401beb-c277-44a6-beaa-72dc32a96e75
📒 Files selected for processing (2)
src/app/api/prompts/route.ts
src/lib/rate-limit.ts
♻️ Duplicate comments (2)
src/lib/rate-limit.ts (2)
154-157: ⚠️ Potential issue | 🟠 Major: Do not log raw request headers in fallback warnings.
Line 156 serializes all header values, which can leak `authorization`, `cookie`, and other sensitive identifiers into logs.

🔧 Safer logging
```diff
   console.warn(
-    "[rate-limit] No client IP or fingerprint headers found. Headers:",
-    JSON.stringify(Object.fromEntries(request.headers.entries()))
+    "[rate-limit] No client IP or fingerprint headers found",
+    { headerNames: Array.from(request.headers.keys()) }
   );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/rate-limit.ts` around lines 154 - 157, The fallback warning currently logs raw header values via JSON.stringify(Object.fromEntries(request.headers.entries())), which may expose sensitive values; change the console.warn in this rate-limit fallback to log only header names (or a redacted summary) instead of full values — for example, replace the Object.fromEntries(...) usage with an array of header keys (e.g., [...request.headers.keys()]) or a mapped object that lists keys and masks known sensitive headers like "authorization", "cookie", "set-cookie", and "x-api-key"; update the console.warn call (the line using console.warn and request.headers.entries()) to output only the safe header names or a redacted summary.
116-138: ⚠️ Potential issue | 🟠 Major: Gate forwarded-header trust to prevent rate-limit bypass.
At Line 130 and related header branches, client-controlled forwarded headers are accepted unconditionally. On deployments without a trusted proxy boundary, attackers can rotate these values and evade per-IP limiting.
🔧 Suggested hardening
```diff
 export function getClientIp(request: Request): string {
+  const TRUST_PROXY = process.env.TRUST_PROXY === "true";
+
+  if (TRUST_PROXY) {
   // Cloudflare
   const cfIp = request.headers.get("cf-connecting-ip");
   if (cfIp?.trim()) return cfIp.trim();

   // True-Client-IP (Cloudflare Enterprise / Akamai)
   const trueClientIp = request.headers.get("true-client-ip");
   if (trueClientIp?.trim()) return trueClientIp.trim();

   // Fastly
   const fastlyIp = request.headers.get("fastly-client-ip");
   if (fastlyIp?.trim()) return fastlyIp.trim();

   // X-Forwarded-For (may contain multiple IPs; first is the original client)
   const xff = request.headers.get("x-forwarded-for");
   if (xff) {
     const first = xff.split(",")[0].trim();
     if (first) return first;
   }

   // X-Real-IP (nginx)
   const realIp = request.headers.get("x-real-ip");
   if (realIp?.trim()) return realIp.trim();
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/rate-limit.ts` around lines 116 - 138, getClientIp currently accepts client-controlled forwarded headers unconditionally; add a trust boundary so forwarded headers (true-client-ip, fastly-client-ip, x-forwarded-for, x-real-ip) are only used when the request originates from a configured trusted proxy. Implement a small helper (e.g., isFromTrustedProxy(request, trustedProxies) or validateRemoteAddr(remoteAddr, trustedProxyCidrs)) and a config (trustedProxies or TRUSTED_PROXY_CIDRS) and change getClientIp to first determine the immediate remote IP and call that helper; only if it returns true should you read and trust the forwarded headers (true-client-ip, fastly-client-ip, x-forwarded-for, x-real-ip); otherwise fall back to the remote IP and/or a single authoritative header you control. Ensure the helper and config are referenced from getClientIp so the gating is clear.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/lib/rate-limit.ts`:
- Around line 154-157: The fallback warning currently logs raw header values via
JSON.stringify(Object.fromEntries(request.headers.entries())), which may expose
sensitive values; change the console.warn in this rate-limit fallback to log
only header names (or a redacted summary) instead of full values — for example,
replace the Object.fromEntries(...) usage with an array of header keys (e.g.,
[...request.headers.keys()]) or a mapped object that lists keys and masks known
sensitive headers like "authorization", "cookie", "set-cookie", and "x-api-key";
update the console.warn call (the line using console.warn and
request.headers.entries()) to output only the safe header names or a redacted
summary.
- Around line 116-138: getClientIp currently accepts client-controlled forwarded
headers unconditionally; add a trust boundary so forwarded headers
(true-client-ip, fastly-client-ip, x-forwarded-for, x-real-ip) are only used
when the request originates from a configured trusted proxy. Implement a small
helper (e.g., isFromTrustedProxy(request, trustedProxies) or
validateRemoteAddr(remoteAddr, trustedProxyCidrs)) and a config (trustedProxies
or TRUSTED_PROXY_CIDRS) and change getClientIp to first determine the immediate
remote IP and call that helper; only if it returns true should you read and
trust the forwarded headers (true-client-ip, fastly-client-ip, x-forwarded-for,
x-real-ip); otherwise fall back to the remote IP and/or a single authoritative
header you control. Ensure the helper and config are referenced from getClientIp
so the gating is clear.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: fd7a1716-b34e-4957-be85-0ca015b27149
📒 Files selected for processing (1)
src/lib/rate-limit.ts
…ted logging
- Fix perPage/page parsing: use parseInt with radix 10, only default on NaN
(parseInt('0') || 24 incorrectly treated 0 as falsy)
- Add TRUST_PROXY env flag: forwarded headers only read when set
(prevents IP spoofing when not behind a known reverse proxy)
- Redact console.warn: only log header NAMES, never values
(avoids leaking Authorization, Cookie, etc.)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/app/api/prompts/route.ts`:
- Around line 313-314: The code currently calls
publicApiLimiter.check(getClientIp(request)) which uses client-controllable
inputs; change it to use an ingress-authenticated client identifier (for example
a verified IP set by your ingress middleware like request.locals.ingressClientIp
or a proxy-populated header you trust) when calling publicApiLimiter.check; if
that trusted identifier is not present, pass a coarse shared fallback key (e.g.
"shared-public-bucket") to fail closed rather than using the spoofable
getClientIp fingerprint. Update the call site (publicApiLimiter.check) to use
the trusted value (or the shared fallback) and ensure any related rate-limit
logic in rate-limit.ts can accept this shared fallback key.
- Around line 324-329: The page parameter is not capped, allowing huge offsets
downstream (used to compute skip); update validation to reject oversized
page/offset windows by adding a Zod schema for query params (e.g., validate
searchParams with a schema that enforces perPage <= MAX_PER_PAGE and page <=
MAX_PAGE or maxOffset/window), replace the current parseInt logic for
rawPage/rawPerPage with parsing via that Zod schema, and compute skip from the
validated values (used around the skip calculation) so requests with page or
perPage beyond the allowed window are rejected early.
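The comment asks for a Zod schema; below is a dependency-free sketch of the equivalent checks so the example runs standalone (the bounds `MAX_PER_PAGE = 100` and `MAX_OFFSET = 10,000` come from this PR; the function name and error strings are illustrative):

```typescript
const MAX_PER_PAGE = 100;
const MAX_OFFSET = 10_000;

type Query = { page: number; perPage: number };

function parseQuery(params: URLSearchParams): Query | { error: string } {
  // Default only when the parameter is absent; otherwise validate strictly.
  const page = params.has("page") ? Number(params.get("page")) : 1;
  const perPage = params.has("perPage") ? Number(params.get("perPage")) : 24;
  if (!Number.isInteger(page) || page < 1) return { error: "invalid page" };
  if (!Number.isInteger(perPage) || perPage < 1 || perPage > MAX_PER_PAGE)
    return { error: "invalid perPage" };
  if ((page - 1) * perPage > MAX_OFFSET)
    return { error: "requested offset exceeds the allowed window" };
  return { page, perPage };
}
```

A real Zod version would express the same bounds with `z.coerce.number().int()` plus a `refine` on the computed offset; the reject-early behavior is identical.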
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: bc148702-f2b3-4f38-bb2c-642d633099f7
📒 Files selected for processing (2)
src/app/api/prompts/route.ts
src/lib/rate-limit.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/lib/rate-limit.ts
- When getClientIp returns a fingerprint ('fp:...') or 'unknown', use a
shared 'shared-public-bucket' key instead of per-client keys. This
prevents scrapers from rotating User-Agent/Accept-Language to bypass
rate limits.
- Add MAX_PAGE cap (10,000) to prevent huge offset/skip values that
could cause slow DB queries.
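The key-selection rule that commit describes can be sketched as follows (the `fp:` prefix convention and the function name are assumptions; the `shared-public-bucket` literal comes from the review discussion):

```typescript
const SHARED_PUBLIC_BUCKET = "shared-public-bucket";

// Fingerprint-derived or unknown identifiers collapse into one coarse shared
// bucket, so rotating User-Agent/Accept-Language cannot mint fresh buckets.
function rateLimitKey(clientId: string): string {
  if (clientId === "unknown" || clientId.startsWith("fp:")) {
    return SHARED_PUBLIC_BUCKET;
  }
  return clientId; // a real, ingress-verified IP
}
```

This fails closed: unverifiable clients share one budget rather than each getting their own.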
Replace MAX_PAGE with MAX_OFFSET check: reject requests where (page-1)*perPage > 10,000. Returns 400 with clear error message. This is more precise than capping page alone since the actual DB burden is determined by the offset, not the page number.
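A sketch of that offset rule (the error message wording and function name are illustrative; the 10,000 cap and the 400 status come from the commit above):

```typescript
const MAX_OFFSET = 10_000;

// The DB cost is driven by skip = (page - 1) * perPage, so cap the offset
// rather than the page number alone.
function checkOffset(page: number, perPage: number):
  | { ok: true; skip: number }
  | { ok: false; status: 400; error: string } {
  const skip = (page - 1) * perPage;
  if (skip > MAX_OFFSET) {
    return { ok: false, status: 400, error: "Requested offset exceeds 10,000" };
  }
  return { ok: true, skip };
}
```

Note that page 101 at perPage 100 (offset exactly 10,000) still passes; only offsets beyond the cap are rejected.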
Fixes #1061
Problem
The public `GET /api/prompts` endpoint had no upper bound on the `perPage` query parameter. An attacker could request `/api/prompts?perPage=999999` to dump the entire database into memory, causing OOM crashes and 500 downtime.

Additionally, there was no rate limiting on public API access, leaving self-hosted instances vulnerable to scrapers and DB connection exhaustion.
Changes
- `Math.min(perPage, 100)`: max 100 results per page
- `Math.max(1, page)`: prevents negative/zero pages
- `Retry-After` header returned when the rate limit is exceeded
- In-memory `Map` with periodic cleanup

The rate limiter is in `src/lib/rate-limit.ts` and can be swapped for an Upstash Redis backend for multi-instance deployments.

Summary by CodeRabbit
New Features
Improvements