Implement projectm-eval mutex callbacks#985

Closed
struktured wants to merge 1 commit into master from fix/eval-mutex

Conversation

@struktured
Contributor

Summary

The projectm_eval_memory_host_lock/unlock_mutex() callbacks in EvalLibMutex.cpp were empty stubs ({}), providing no synchronization when multiple preset expressions access shared memory blocks (megabuf/gmegabuf) concurrently — e.g., during a soft-cut transition where both the active and transitioning presets evaluate simultaneously.

The projectm-eval library already calls these callbacks around shared memory access in MemoryBuffer.c and documents them as host-provided. This commit implements them with a std::mutex.
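The PR itself is not shown here, but based on the description, the change likely amounts to a file-scope `std::mutex` behind the two C callbacks. A minimal sketch of that approach (the callback names come from the projectm-eval API mentioned above; the mutex variable name is illustrative):

```cpp
// Sketch of implementing the projectm-eval host mutex callbacks with a
// std::mutex. The callback names are from the projectm-eval API; the
// mutex name is hypothetical.
#include <mutex>

namespace {
// Guards concurrent access to the eval library's shared memory blocks
// (megabuf/gmegabuf).
std::mutex evalMemoryMutex;
}

extern "C" void projectm_eval_memory_host_lock_mutex()
{
    evalMemoryMutex.lock();
}

extern "C" void projectm_eval_memory_host_unlock_mutex()
{
    evalMemoryMutex.unlock();
}
```

Since the eval library brackets every shared-memory access with a lock/unlock pair, a plain (non-recursive) `std::mutex` suffices as long as the callbacks are never nested.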

Changes

  • 1 file: src/libprojectM/MilkdropPreset/EvalLibMutex.cpp (+13, -2)

Test plan

  • Clean build on Linux (GCC, RelWithDebInfo)
  • Runtime test with Qt+PipeWire frontend, concurrent preset transitions

The projectm_eval_memory_host_lock/unlock_mutex() callbacks were no-ops,
providing no protection when multiple preset expressions access shared
memory blocks from different threads. Add a real std::mutex so the eval
library's built-in synchronization actually works.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@kblaschke
Member

There's no synchronization needed, as no code ever calls the expressions from separate threads. All calls are serialized, and each preset has its own megabuf/gmegabuf, so presets don't interfere with each other in the first place (in contrast to Milkdrop, where all presets share the same megabufs).

Adding a mutex would only reduce performance due to locking overhead on every megabuf access, while fixing no real issue.

@kblaschke kblaschke closed this Mar 30, 2026