useCachedPromise causes OOM on fresh cache with large datasets #67
Description
Summary
I've been hitting "JS heap out of memory" crashes in my extension when loading ~1.6 MB of JSON data (~4,100 objects). After some digging, I found that useCachedPromise uses about 4x more memory than a direct fetch + useState approach when populating an empty cache.
The tricky part is this only happens on first launch or after clearing the cache—once data is cached, everything works fine. This makes it a frustrating experience for new users whose extension just crashes on first open.
Why This Matters
- User-facing impact: Extensions crash on first launch or after a cache clear, with no recovery path for users other than waiting and retrying.
- Affects multiple extensions: Similar OOM errors have been reported in Todoist (#10127, #13491, #16487), Anki (#14150), Brew (#25354), and others. Many were closed without resolution, suggesting the root cause hasn't been addressed.
- Counterintuitive behavior: Developers following the documented patterns have no indication that useCachedPromise has different memory characteristics than manual fetch + Cache API usage.
- Workaround exists: Direct fetch + useState + the Cache API handles the same data without issues, suggesting this is an implementation detail in useCachedPromise rather than a fundamental limitation.
Diagnostic Methodology
To isolate the root cause, I built instrumented test commands that measure process.memoryUsage() at each stage of data loading and rendering. This allowed me to pinpoint exactly where memory consumption diverges.
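The instrumentation boils down to a small helper. This is a minimal sketch of my own logging code (not part of @raycast/utils): it records heapUsed at a named stage and reports the growth since the previous stage.

```typescript
// Sketch of the instrumentation used by the test commands: logs heapUsed at a
// named stage and returns the growth (in MB) since the previous call.
function makeHeapLogger(): (stage: string) => number {
  let last = process.memoryUsage().heapUsed;
  return (stage: string): number => {
    const now = process.memoryUsage().heapUsed;
    const growthMb = (now - last) / (1024 * 1024);
    console.log(
      `${stage}: heapUsed ${(now / (1024 * 1024)).toFixed(2)} MB (+${growthMb.toFixed(2)} MB)`
    );
    last = now;
    return growthMb;
  };
}
```

Calling the returned function at mount, after fetch, and after render produces the measurements below.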
Test Environment
- @raycast/api: 1.104.3
- @raycast/utils: 2.2.2
- Dataset: https://models.dev/api.json (1.63 MB, ~4,100 objects)
Measurements
Test 1: Direct fetch + useState + List rendering (no useCachedPromise)
Mount (before fetch): heapUsed: 12.32 MB
After fetch: heapUsed: 22.39 MB
After render (4111 items): heapUsed: 69.78 MB
────────────────────────────────────────────
Heap growth: 57.45 MB
Result: SUCCESS
Test 2: useCachedPromise + List rendering (fresh cache)
Mount (before data): heapUsed: 14.84 MB
Loading state: heapUsed: 14.86 MB
Data loaded: heapUsed: 82.11 MB ← Before any rendering
────────────────────────────────────────────
Result: CRASH (Worker terminated - JS heap out of memory)
Comparison
| Stage | Direct Fetch | useCachedPromise | Difference |
|---|---|---|---|
| After data load (before render) | 22 MB | 82 MB | +60 MB |
| After full render | 70 MB | N/A (crashed) | — |
useCachedPromise uses ~60 MB more memory than direct fetch for identical data.
Likely Cause
When cache is empty, useCachedPromise appears to hold multiple copies of the data simultaneously:
- Raw fetch response
- Parsed JSON objects (application data)
- Re-serialized JSON for cache persistence
- Internal state buffers (possibly for keepPreviousData)
This creates a memory spike during cache population that doesn't occur with manual caching, where we control when serialization happens.
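The re-serialization copy is easy to demonstrate in isolation. This standalone sketch (my assumption about the mechanism, not the hook's actual internals) builds a large object array as a stand-in for the parsed API response, then serializes it while the objects are still referenced, so both copies are alive at peak:

```typescript
// Stand-in for the parsed models array; the shape mirrors the Model interface
// from the reproduction but is otherwise synthetic.
interface FakeModel {
  id: string;
  name: string;
  providerId: string;
  providerName: string;
}

function measureSerializationSpike(itemCount: number): {
  serializedMb: number;
  heapGrowthMb: number;
} {
  const items: FakeModel[] = Array.from({ length: itemCount }, (_, i) => ({
    id: `model-${i}`,
    name: `Model ${i}`,
    providerId: `provider-${i % 40}`,
    providerName: `Provider ${i % 40}`,
  }));
  const before = process.memoryUsage().heapUsed;
  // Second full copy of the data, as one string, alive alongside `items`.
  const serialized = JSON.stringify(items);
  const after = process.memoryUsage().heapUsed;
  return {
    serializedMb: serialized.length / (1024 * 1024),
    heapGrowthMb: (after - before) / (1024 * 1024),
  };
}
```

With manual caching the developer chooses when this second copy exists; inside the hook it appears to coincide with the raw response and internal state buffers, compounding the spike.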
Reproduction
Create a new Raycast extension and add these two commands to compare behavior.
Shared data fetching (src/lib/api.ts)
const API_URL = "https://models.dev/api.json";
export interface Model {
id: string;
name: string;
providerId: string;
providerName: string;
}
export interface ModelsData {
models: Model[];
}
export async function fetchModelsData(): Promise<ModelsData> {
const response = await fetch(API_URL);
const raw = await response.json();
// Transform nested structure to flat array
const models: Model[] = [];
for (const [providerId, provider] of Object.entries(raw) as [string, any][]) {
for (const [modelId, model] of Object.entries(provider.models) as [string, any][]) {
models.push({
id: modelId,
name: model.name,
providerId,
providerName: provider.name,
});
}
}
return { models };
}Command 1: Using useCachedPromise (CRASHES on fresh cache)
// src/test-cached-promise.tsx
import { List } from "@raycast/api";
import { useCachedPromise } from "@raycast/utils";
import { useEffect, useRef } from "react";
import { fetchModelsData } from "./lib/api";
function formatBytes(bytes: number): string {
return `${(bytes / (1024 * 1024)).toFixed(2)} MB`;
}
export default function TestCachedPromise() {
const logged = useRef(false);
const { data, isLoading } = useCachedPromise(fetchModelsData, [], {
keepPreviousData: true,
});
useEffect(() => {
if (data && !logged.current) {
logged.current = true;
const mem = process.memoryUsage();
console.log(`[useCachedPromise] Data loaded - heapUsed: ${formatBytes(mem.heapUsed)}`);
}
}, [data]);
return (
<List isLoading={isLoading}>
{data?.models.map((model) => (
<List.Item
key={`${model.providerId}-${model.id}`}
title={model.name}
subtitle={model.providerName}
/>
))}
</List>
);
}

Command 2: Using direct fetch (WORKS)
// src/test-direct-fetch.tsx
import { List } from "@raycast/api";
import { useState, useEffect, useRef } from "react";
import { fetchModelsData, ModelsData } from "./lib/api";
function formatBytes(bytes: number): string {
return `${(bytes / (1024 * 1024)).toFixed(2)} MB`;
}
export default function TestDirectFetch() {
const logged = useRef(false);
const [data, setData] = useState<ModelsData | null>(null);
const [isLoading, setIsLoading] = useState(true);
useEffect(() => {
fetchModelsData().then((result) => {
setData(result);
setIsLoading(false);
});
}, []);
useEffect(() => {
if (data && !logged.current) {
logged.current = true;
const mem = process.memoryUsage();
console.log(`[directFetch] Data loaded - heapUsed: ${formatBytes(mem.heapUsed)}`);
}
}, [data]);
return (
<List isLoading={isLoading}>
{data?.models.map((model) => (
<List.Item
key={`${model.providerId}-${model.id}`}
title={model.name}
subtitle={model.providerName}
/>
))}
</List>
);
}

Steps to reproduce
- Create a new extension with the above files
- Register both commands in package.json
- Run npm run dev
- Clear the extension cache: rm -rf ~/Library/Application\ Support/com.raycast.macos/extensions/*/your-extension-name
- Restart Raycast
- Open "Test Direct Fetch" → works, logs ~22 MB heap
- Clear the cache again, restart Raycast
- Open "Test Cached Promise" → crashes with OOM
Workaround
Using direct fetch + Raycast's Cache API works without issues:
import { Cache } from "@raycast/api";
import { useState, useEffect } from "react";
const cache = new Cache();
const CACHE_KEY = "models-data";
export function useModelsData() {
const [data, setData] = useState<ModelsData | null>(() => {
const cached = cache.get(CACHE_KEY);
return cached ? JSON.parse(cached) : null;
});
const [isLoading, setIsLoading] = useState(!data);
useEffect(() => {
if (data) {
setIsLoading(false);
return;
}
fetchModelsData()
.then((result) => {
setData(result);
setIsLoading(false);
// Cache write happens after state update
cache.set(CACHE_KEY, JSON.stringify(result));
})
.catch(console.error);
}, []);
return { data, isLoading };
}Suggested Improvements
1. Defer cache write to next tick
The simplest fix would be to schedule the cache write after the state update completes, allowing intermediate data to be garbage collected first:
// Current behavior (conceptual):
const data = await fn();
cache.set(key, JSON.stringify(data)); // Serialization happens immediately
setState(data); // Both copies in memory
// Suggested behavior:
const data = await fn();
setState(data);
queueMicrotask(() => {
cache.set(key, JSON.stringify(data)); // Serialization deferred
});

This would reduce peak memory by allowing the raw response and parsing buffers to be collected before serialization occurs.
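The deferral can be expressed as a standalone helper. The names here are mine, not @raycast/utils API; it simply delays the stringify + write until the current task settles, so intermediate buffers from fetching and parsing become collectable first:

```typescript
// Hypothetical helper sketching the suggested fix: defer serialization and the
// cache write to a microtask instead of doing them inline with the state update.
function deferredCacheWrite<T>(
  write: (serialized: string) => void,
  value: T
): Promise<void> {
  return new Promise((resolve) => {
    queueMicrotask(() => {
      // Serialization happens only after the synchronous update path finishes.
      write(JSON.stringify(value));
      resolve();
    });
  });
}
```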
2. Documentation update
Add a note to the useCachedPromise docs under a "Memory Considerations" section:
Large datasets: For datasets larger than ~500 KB, useCachedPromise may cause memory spikes during initial cache population. Consider:
- Using useStreamJSON for large JSON arrays that can be streamed
- Using usePromise with manual Cache writes for datasets requiring complex transformations
This would help developers make informed choices before hitting OOM errors in production.
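The streaming suggestion rests on a simple principle that can be sketched without any Raycast API: processing a large array in fixed-size slices bounds peak working memory by the chunk size rather than the full dataset. This generator is my own illustration, not a @raycast/utils helper:

```typescript
// Yields fixed-size slices of a large array so downstream work (rendering,
// transforming, serializing) touches only one chunk at a time.
function* inChunks<T>(items: readonly T[], chunkSize: number): Generator<T[], void, void> {
  for (let i = 0; i < items.length; i += chunkSize) {
    yield items.slice(i, i + chunkSize);
  }
}
```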
Thank you for all the hard work on these utils.
I'm happy to provide additional diagnostics or test any proposed fixes.