
feat(vanilla): make ops callback opt-in with unstable_enableOp #1189

Merged — dai-shi merged 11 commits into main from feat/new-subscribe-option on Jan 1, 2026

Conversation

@dai-shi
Member

@dai-shi dai-shi commented Dec 21, 2025

close #1188

This is a breaking change for an unstable feature.
If a user or a library uses unstable_ops in a callback, they have to enable it in advance with unstable_enableOp(true).

@vercel

vercel Bot commented Dec 21, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: valtio — Ready (Preview, Comment) — Updated (UTC): Dec 22, 2025 5:02am

@codesandbox-ci

codesandbox-ci Bot commented Dec 21, 2025

This pull request is automatically built and testable in CodeSandbox.

To see build info of the built libraries, click here or the icon next to each commit SHA.

@pkg-pr-new

pkg-pr-new Bot commented Dec 21, 2025


npm i https://pkg.pr.new/valtio@1189

commit: d0b22c0

@github-actions

github-actions Bot commented Dec 21, 2025

Size Change: +211 B (+1.44%)

Total Size: 14.9 kB

Filename Size Change
./dist/esm/vanilla.mjs 2.44 kB +93 B (+3.97%)
./dist/esm/vanilla/utils.mjs 3.7 kB +13 B (+0.35%)
./dist/vanilla.js 2.47 kB +94 B (+3.95%)
./dist/vanilla/utils.js 3.74 kB +11 B (+0.29%)
Unchanged files
Filename Size
./dist/esm/index.mjs 63 B
./dist/esm/react.mjs 701 B
./dist/esm/react/utils.mjs 257 B
./dist/esm/utils.mjs 68 B
./dist/index.js 243 B
./dist/react.js 695 B
./dist/react/utils.js 278 B
./dist/utils.js 248 B


@dai-shi dai-shi changed the title feat: new subscribe option feat: make ops callback opt-in with unstable_enableOp Dec 22, 2025
@dai-shi dai-shi marked this pull request as ready for review December 22, 2025 01:25
@dai-shi dai-shi changed the title feat: make ops callback opt-in with unstable_enableOp feat(vanilla): make ops callback opt-in with unstable_enableOp Dec 22, 2025
@dai-shi
Member Author

dai-shi commented Dec 23, 2025

This will be a breaking change for valtio-yjs and valtio-y.
cc @agcty

agcty added a commit to valtiojs/valtio-y that referenced this pull request Dec 28, 2025
Valtio PR #1189 makes the `ops` parameter in subscribe callbacks opt-in.
This change adds forward compatibility by calling `unstable_enableOp(true)`
when available, while maintaining backwards compatibility with older
valtio versions through a runtime check.

See: pmndrs/valtio#1189

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
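
The runtime check described in that commit message could be sketched as a small dependency-free shim. This is only an illustration: `enableOpsIfAvailable` is a hypothetical helper name, and the shape of the passed-in module object is an assumption.

```javascript
// Sketch of the forward-compatibility check described in the commit message.
// `enableOpsIfAvailable` is a hypothetical helper name; it enables ops only
// when the loaded valtio build actually exposes unstable_enableOp.
function enableOpsIfAvailable(valtioModule) {
  if (typeof valtioModule.unstable_enableOp === 'function') {
    valtioModule.unstable_enableOp(true);
    return true; // newer valtio: ops must be explicitly opted into
  }
  return false; // older valtio: ops are delivered without opting in
}
```

A library would call something like this once at startup, so the same code runs against both old and new valtio versions.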
@dai-shi dai-shi merged commit 9ad5228 into main Jan 1, 2026
34 checks passed
@dai-shi dai-shi deleted the feat/new-subscribe-option branch January 1, 2026 08:17
@ScreamZ
Contributor

ScreamZ commented Jan 25, 2026

Ouch! That breaking change punched me in the face. Is it possible to add this to the documentation?

I'm using valtio to manage reactive state on the server and send state patches (just like Colyseus.js does)

@dai-shi
Member Author

dai-shi commented Jan 25, 2026

Sorry about that. Would you please open a PR to improve docs?

Good to know that you use the feature. I'll be thinking about how it could be supported in v3 in a somewhat better way.

@ScreamZ
Contributor

ScreamZ commented Jan 26, 2026

> Sorry about that. Would you please open a PR to improve docs?
>
> Good to know that you use the feature. I'll be thinking about how it could be supported in v3 in a somewhat better way.

Sure, on my way.

Maybe I could do this on my own using https://www.npmjs.com/package/deep-object-diff, but currently it's pretty easy with valtio. Here is my code if you want to see my use case.

The idea is to reduce network bandwidth and processing, especially on low-end devices. My library connects to IoT microcontrollers with pretty low memory and CPU, where deserializing JSON has a cost. And I don't want to embed a custom non-JSON protocol, as I don't need stratospheric performance.

But if I can just push state patches like I do, this is fine.

import { proxy, snapshot, subscribe, unstable_enableOp } from "valtio/vanilla";
import { subscribeKey } from "valtio/vanilla/utils";
import { set } from "lodash-es";

// https://github.com/pmndrs/valtio/releases/tag/v2.3.0
unstable_enableOp(true);

// ... Somewhere below in a function
subscribe(state, (ops) => {
  const opsRes = ops.reduce((acc, [, path, value]) => {
    set(acc, path, value);
    return acc;
  }, {});

  publishTopicMessage(server, getRoomTopic(roomID), {
    c: ProtocolCode.ROOM_STATE_PATCH,
    r: roomID,
    s: opsRes,
  });
});
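
For readers who want to avoid the lodash dependency, the `ops.reduce` above can be sketched as a standalone helper. `opsToPatch` is a hypothetical name; it assumes each op carries an array path (as valtio's ops tuples do) and, unlike lodash's `set`, only handles plain-object paths.

```javascript
// Build a nested patch object from valtio-style ops ([op, path, value, ...]).
// Hypothetical helper; a minimal stand-in for the lodash `set` reduce above.
function opsToPatch(ops) {
  const patch = {};
  for (const [, path, value] of ops) {
    let node = patch;
    // walk/create intermediate objects for all but the last path segment
    for (let i = 0; i < path.length - 1; i++) {
      node = node[path[i]] ??= {};
    }
    node[path[path.length - 1]] = value;
  }
  return patch;
}
```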

@dai-shi
Member Author

dai-shi commented Jan 28, 2026

@ScreamZ
Thanks for your feedback.

No rush, but could you try some experiments to see if performance changes between these?

  • unstable_enableOp(true) + using ops
  • unstable_enableOp(false) + using deep-object-diff

My intuition is that if there's no performance benefit, we should consider dropping ops completely. Though, we also need to consider valtio-yjs and valtio-y, which are using ops now.

@ScreamZ
Contributor

ScreamZ commented Jan 30, 2026

> @ScreamZ Thanks for your feedback.
>
> No rush, but could you try some experiments to see if performance changes between these?
>
> * unstable_enableOp(true) + using `ops`
> * unstable_enableOp(false) + using `deep-object-diff`
>
> My intuition is that if there's no performance benefit, we should consider dropping ops completely. Though, we also need to consider valtio-yjs and valtio-y, which are using ops now.

Starting Benchmark: 10000 iterations, 1000 list items.
Comparing:
1. Valtio with unstable_enableOp(true) -> receive Ops
2. Valtio with unstable_enableOp(false) -> compute deep-object-diff
--------------------------------------------------
Run 1: Ops=18.50ms, Diff=4003.71ms
Run 2: Ops=13.65ms, Diff=3973.15ms
Run 3: Ops=12.19ms, Diff=4002.36ms
Run 4: Ops=14.14ms, Diff=4063.51ms
Run 5: Ops=10.77ms, Diff=4199.96ms
--------------------------------------------------
Average Ops Time: 13.85ms
Average Diff Time: 4048.54ms

Conclusion: Ops approach is ~292.3x faster.
import { proxy, subscribe, snapshot, unstable_enableOp } from 'valtio/vanilla';
import { diff } from 'deep-object-diff';


const RUNS = 5;
const ITERATIONS = 10000;

// Configurable complexity
const LIST_SIZE = 1000; // Increased to make diffing more noticeable

const createInitialState = () => ({
  count: 0,
  text: 'hello world',
  nested: {
    a: 1,
    b: {
      c: [1, 2, 3],
      d: 'deep string',
    },
  },
  list: Array.from({ length: LIST_SIZE }, (_, i) => ({ id: i, value: `item-${i}` })),
});


async function benchmarkOps() {
  unstable_enableOp(true);
  const state = proxy(createInitialState());
  
  // We attach a listener that consumes ops
  // Using sync=true to ensure we measure the cost of generating/dispatching each op immediately
  const unsub = subscribe(state, (ops) => {
     // access ops to prevent dead code elimination (though unlikely with JIT)
     if (ops.length > 0) {
         void ops[0]; 
     }
  }, true);

  const start = performance.now();
  
  for (let i = 0; i < ITERATIONS; i++) {
    state.count++;
    state.nested.b.c[0]++;
    // Random mutation in list
    if (i % 10 === 0) {
        const idx = i % LIST_SIZE;
        state.list[idx].value = `updated-${i}`;
    }
  }
  
  const end = performance.now();
  unsub();
  return end - start;
}

async function benchmarkDeepDiff() {
  unstable_enableOp(false);
  const state = proxy(createInitialState());
  let prevSnap = snapshot(state);
  
  const unsub = subscribe(state, () => {
    const nextSnap = snapshot(state);
    diff(prevSnap, nextSnap);
    prevSnap = nextSnap;
  }, true);

  const start = performance.now();
  
  for (let i = 0; i < ITERATIONS; i++) {
    state.count++;
    state.nested.b.c[0]++;
    if (i % 10 === 0) {
        const idx = i % LIST_SIZE;
        state.list[idx].value = `updated-${i}`;
    }
  }
  
  const end = performance.now();
  unsub();
  return end - start;
}

async function runSuite() {
  console.log(`\nStarting Benchmark: ${ITERATIONS} iterations, ${LIST_SIZE} list items.`);
  console.log('Comparing:');
  console.log('1. Valtio with unstable_enableOp(true) -> receive Ops');
  console.log('2. Valtio with unstable_enableOp(false) -> compute deep-object-diff');
  console.log('-'.repeat(50));

  let totalOpsTime = 0;
  let totalDiffTime = 0;

  for (let run = 1; run <= RUNS; run++) {
    // Run Ops
    const tOps = await benchmarkOps();
    totalOpsTime += tOps;
    
    // Run Diff
    const tDiff = await benchmarkDeepDiff();
    totalDiffTime += tDiff;

    console.log(`Run ${run}: Ops=${tOps.toFixed(2)}ms, Diff=${tDiff.toFixed(2)}ms`);
    
    // Small pause between runs for GC
    await new Promise(r => setTimeout(r, 100));
  }

  const avgOps = totalOpsTime / RUNS;
  const avgDiff = totalDiffTime / RUNS;

  console.log('-'.repeat(50));
  console.log(`Average Ops Time: ${avgOps.toFixed(2)}ms`);
  console.log(`Average Diff Time: ${avgDiff.toFixed(2)}ms`);
  
  const ratio = avgDiff / avgOps;
  console.log(`\nConclusion: Ops approach is ~${ratio.toFixed(1)}x faster.`);
}

runSuite();

Using Bun.

@dai-shi
Copy link
Copy Markdown
Member Author

dai-shi commented Jan 30, 2026

Cool. Probably snapshot is slow. Yeah, I'll take that into account when considering the next version. Thanks!

