feat(vanilla): make ops callback opt-in with unstable_enableOp#1189
Conversation
Size Change: +211 B (+1.44%) Total Size: 14.9 kB
This reverts commit 63054a5.
This will be a breaking change for valtio-yjs and valtio-y.
Valtio PR #1189 makes the `ops` parameter in subscribe callbacks opt-in. This change adds forward compatibility by calling `unstable_enableOp(true)` when available, while maintaining backwards compatibility with older valtio versions through a runtime check.

See: pmndrs/valtio#1189

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
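That runtime check can be sketched roughly as follows. This is a standalone illustration, not valtio-yjs's actual code: `enableOpsIfAvailable` is a hypothetical helper name, and the stub objects stand in for `import * as mod from 'valtio/vanilla'` on older vs. newer valtio versions.

```javascript
// Hypothetical helper: call unstable_enableOp(true) only when the running
// valtio version actually exports it (older versions do not).
function enableOpsIfAvailable(mod) {
  if (typeof mod.unstable_enableOp === 'function') {
    mod.unstable_enableOp(true);
    return true;
  }
  return false;
}

// Stub standing in for an older valtio module without the export.
const olderValtio = {};
console.log(enableOpsIfAvailable(olderValtio)); // false: nothing to enable

// Stub standing in for a newer valtio module that has the export.
const newerValtio = { unstable_enableOp: (enabled) => enabled };
console.log(enableOpsIfAvailable(newerValtio)); // true: opt-in applied
```

The `typeof … === 'function'` guard is what lets a single library release support both sides of the breaking change.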
Ouch! That breaking change punched me in the face. Is it possible to add this to the documentation? I'm using valtio to manage reactive state on a server and send state patches (just like in Colyseus.js).
Sorry about that. Would you please open a PR to improve the docs? Good to know that you use the feature. I'll be thinking about how it could be supported in v3 in a somewhat better way.
Sure, on my way. Maybe I could do this on my own using https://www.npmjs.com/package/deep-object-diff, but currently it's pretty easy with valtio. Here is my code if you want to see my use case. The idea is to reduce network bandwidth and processing, especially on low-end devices. My library has some connection to IoT microcontrollers with pretty low memory and CPU, where deserializing JSON has a cost. And I don't want to embed a custom protocol without JSON, as I don't need stratospheric performance. But if I can just push state patches like this, it's fine.

```js
import { proxy, snapshot, subscribe, unstable_enableOp } from "valtio/vanilla";
import { subscribeKey } from "valtio/vanilla/utils";
import { set } from "lodash-es";

// https://github.com/pmndrs/valtio/releases/tag/v2.3.0
unstable_enableOp(true);

// ... Somewhere below in a function
subscribe(state, (ops) => {
  const opsRes = ops.reduce((acc, [, path, value]) => {
    set(acc, path, value);
    return acc;
  }, {});
  publishTopicMessage(server, getRoomTopic(roomID), {
    c: ProtocolCode.ROOM_STATE_PATCH,
    r: roomID,
    s: opsRes,
  });
});
```
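The `reduce` in that callback collapses a batch of ops into a single nested patch object. A standalone sketch of just that step, with a minimal `setAtPath` in place of lodash's `set` and a hand-written sample `ops` array shaped like valtio's `['set', path, value, prevValue]` tuples:

```javascript
// Minimal stand-in for lodash's `set`: writes `value` at a key path,
// creating intermediate objects/arrays as needed.
function setAtPath(obj, path, value) {
  let node = obj;
  for (let i = 0; i < path.length - 1; i++) {
    const key = path[i];
    if (typeof node[key] !== 'object' || node[key] === null) {
      // Create an array when the next key is a numeric index, else an object.
      node[key] = typeof path[i + 1] === 'number' ? [] : {};
    }
    node = node[key];
  }
  node[path[path.length - 1]] = value;
  return obj;
}

// Hand-written sample ops for illustration.
const ops = [
  ['set', ['count'], 5, 4],
  ['set', ['nested', 'b', 'c', 0], 2, 1],
];

// Same reduce as in the subscribe callback above.
const patch = ops.reduce((acc, [, path, value]) => setAtPath(acc, path, value), {});
console.log(JSON.stringify(patch));
// {"count":5,"nested":{"b":{"c":[2]}}}
```

Only the touched paths end up in the patch, which is what keeps the payload small compared to shipping a full snapshot over the wire.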
@ScreamZ No rush, but could you try some experiments to see if performance changes between these? My intuition is that if we don't have a performance benefit, we should consider dropping it.
```js
import { proxy, subscribe, snapshot, unstable_enableOp } from 'valtio/vanilla';
import { diff } from 'deep-object-diff';

const RUNS = 5;
const ITERATIONS = 10000;

// Configurable complexity
const LIST_SIZE = 1000; // Increased to make diffing more noticeable

const createInitialState = () => ({
  count: 0,
  text: 'hello world',
  nested: {
    a: 1,
    b: {
      c: [1, 2, 3],
      d: 'deep string',
    },
  },
  list: Array.from({ length: LIST_SIZE }, (_, i) => ({ id: i, value: `item-${i}` })),
});

async function benchmarkOps() {
  unstable_enableOp(true);
  const state = proxy(createInitialState());
  // We attach a listener that consumes ops.
  // Using sync=true to ensure we measure the cost of generating/dispatching each op immediately.
  const unsub = subscribe(
    state,
    (ops) => {
      // Access ops to prevent dead code elimination (though unlikely with JIT).
      if (ops.length > 0) {
        void ops[0];
      }
    },
    true,
  );
  const start = performance.now();
  for (let i = 0; i < ITERATIONS; i++) {
    state.count++;
    state.nested.b.c[0]++;
    // Occasional mutation in list
    if (i % 10 === 0) {
      const idx = i % LIST_SIZE;
      state.list[idx].value = `updated-${i}`;
    }
  }
  const end = performance.now();
  unsub();
  return end - start;
}

async function benchmarkDeepDiff() {
  unstable_enableOp(false);
  const state = proxy(createInitialState());
  let prevSnap = snapshot(state);
  const unsub = subscribe(
    state,
    () => {
      const nextSnap = snapshot(state);
      diff(prevSnap, nextSnap);
      prevSnap = nextSnap;
    },
    true,
  );
  const start = performance.now();
  for (let i = 0; i < ITERATIONS; i++) {
    state.count++;
    state.nested.b.c[0]++;
    if (i % 10 === 0) {
      const idx = i % LIST_SIZE;
      state.list[idx].value = `updated-${i}`;
    }
  }
  const end = performance.now();
  unsub();
  return end - start;
}

async function runSuite() {
  console.log(`\nStarting Benchmark: ${ITERATIONS} iterations, ${LIST_SIZE} list items.`);
  console.log('Comparing:');
  console.log('1. Valtio with unstable_enableOp(true) -> receive Ops');
  console.log('2. Valtio with unstable_enableOp(false) -> compute deep-object-diff');
  console.log('-'.repeat(50));
  let totalOpsTime = 0;
  let totalDiffTime = 0;
  for (let run = 1; run <= RUNS; run++) {
    // Run Ops
    const tOps = await benchmarkOps();
    totalOpsTime += tOps;
    // Run Diff
    const tDiff = await benchmarkDeepDiff();
    totalDiffTime += tDiff;
    console.log(`Run ${run}: Ops=${tOps.toFixed(2)}ms, Diff=${tDiff.toFixed(2)}ms`);
    // Small pause between runs for GC
    await new Promise((r) => setTimeout(r, 100));
  }
  const avgOps = totalOpsTime / RUNS;
  const avgDiff = totalDiffTime / RUNS;
  console.log('-'.repeat(50));
  console.log(`Average Ops Time: ${avgOps.toFixed(2)}ms`);
  console.log(`Average Diff Time: ${avgDiff.toFixed(2)}ms`);
  const ratio = avgDiff / avgOps;
  console.log(`\nConclusion: Ops approach is ~${ratio.toFixed(1)}x faster.`);
}

runSuite();
```

Using Bun.
Cool. Probably
close #1188

This is a breaking change for an unstable feature. If a user or a library uses `unstable_ops` in a callback, they have to enable it in advance with `unstable_enableOp(true)`.
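The opt-in semantics can be modeled in isolation. This is a simplified sketch of the behavior described above, not valtio's actual source: a module-level flag gates whether subscribers receive an ops payload at all.

```javascript
// Simplified model of the opt-in flag (not valtio's real implementation).
let opsEnabled = false;
const unstable_enableOp = (enabled) => {
  opsEnabled = enabled;
};

// A notifier that only forwards ops when the flag has been turned on.
function notify(listener, ops) {
  listener(opsEnabled ? ops : undefined);
}

let received;
notify((ops) => { received = ops; }, [['set', ['count'], 1, 0]]);
console.log(received); // undefined: ops not enabled yet

unstable_enableOp(true); // the advance opt-in users and libraries now need
notify((ops) => { received = ops; }, [['set', ['count'], 1, 0]]);
console.log(received.length); // 1: ops delivered after opting in
```

Code that reads the ops argument but never calls `unstable_enableOp(true)` silently gets nothing, which is why the change is breaking for downstream libraries.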