# The Incompleteness of Observation

### Why Physics' Biggest Contradiction Might Not Be a Contradiction at All

### An Overview of the Observational Incompleteness Framework

**Author:** Alex Maybaum
**Date:** April 2026
**Status:** DRAFT PRE-PRINT
**Classification:** Theoretical Physics / Foundations / Expository

---

## The Worst Prediction in Physics

Physics has two spectacularly successful theories. Quantum mechanics describes the behavior of atoms, particles, and light. General relativity describes gravity, space, and time. Each has been confirmed to extraordinary precision. They have never disagreed with any experiment.

They disagree with each other.

Ask quantum mechanics how much energy empty space contains and it gives you a staggering number: roughly $10^{113}$ joules per cubic meter. Ask general relativity the same question — read the answer off the expansion rate of the universe — and you get about $6 \times 10^{-10}$ joules per cubic meter. The ratio is $10^{122}$. For scale, the number of atoms in the observable universe is about $10^{80}$. This is not a close call.

For decades, the assumption has been that something is deeply broken — that one or both calculations contain an error, and that finding the mistake will point the way to a unified theory of everything.

This paper argues the opposite. Neither calculation is wrong. They disagree because they *must*. In fact, that massive $10^{122}$ discrepancy isn't a failure at all. It is the strongest piece of evidence we have that quantum mechanics is not the fundamental bedrock of reality, but an emergent description forced on us by our limited vantage point.

The argument is built from a chain of mathematical proofs, each feeding into the next. This document explains what the paper claims, walks through the logic of every major proof, and shows how they connect.

---

## The Starting Point: Observation Exists

Every mathematical proof starts from assumptions, and this framework has exactly one. It doesn't mention quantum mechanics. It doesn't mention general relativity. It is: *Observation occurs.* An observer records distinguishable outcomes of interactions with a system not wholly under the observer's control. This is Descartes' cogito — "I think, therefore I am" — made mathematically precise. It is the one empirical fact that cannot be doubted: if you are reading this sentence, observation is occurring.

The paper formalizes this as a definition:

**Definition.** An *observation* is a triple $(S, \varphi, V)$: a total system $S$, a deterministic dynamics $\varphi: S \to S$, and an observer $V \subsetneq S$ — a proper subsystem with finitely many distinguishable internal states, coupled to the complement $H = S \setminus V$ through $\varphi$.

This single sentence contains no physics. It is weaker than classical mechanics (no continuity, no Hamiltonian, no Lagrangian). A shuffled deck of cards satisfies it. A finite cellular automaton satisfies it. Any finite computation satisfies it.

The paper shows that three structural lemmas follow from this definition alone:

**Lemma 1** (Finiteness). *The observer has finitely many distinguishable internal states, so the visible configuration space $\mathcal{C}_V$ is finite, with a discreteness scale $\epsilon$ providing a finite minimal cell volume.*

There is a smallest meaningful size $\epsilon$ — the observer cannot resolve anything finer. This means the configuration space is finite, not continuous.
This matters because finite systems have a property infinite systems lack: they must eventually return to their starting state (Poincaré recurrence). Initially, $\epsilon$ is left unspecified. Once the cosmological horizon is included, self-consistency forces $\epsilon = 2\,l_p$ (twice the Planck length).

**Lemma 2** (Causal partition). *An observer is a proper subsystem $V \subsetneq S$. The complement $H = S \setminus V$ is the hidden sector.*

The total phase space splits into two pieces:

$$\Gamma = \Gamma_V \times \Gamma_H$$

$\Gamma_V$ is the visible sector (what the observer can access); $\Gamma_H$ is the hidden sector (what they cannot). The total Hamiltonian splits correspondingly:

$$H_{\text{tot}} = H_V + H_H + H_{\text{int}}$$

$H_V$ governs the visible sector alone, $H_H$ governs the hidden sector alone, and $H_{\text{int}}$ couples them — it is how the two sectors talk to each other. Without $H_{\text{int}}$, the two sectors would evolve independently and the observer would never feel the hidden sector's influence.

**Lemma 3** (Unique measure). *The counting measure on $S$ — assigning equal weight to each state — is the unique measure invariant under $\varphi$.*

The observer uses standard Kolmogorov probability theory. No exotic probability theories, no negative probabilities, no quantum probability — just ordinary probability. This is what makes the result surprising: we put in classical probability and get out quantum mechanics.

That's it. The claim is that quantum mechanics — the Schrödinger equation, the Born rule, superposition, entanglement, Bell inequality violations — follows from this definition alone, given the right conditions on the hidden sector:

**C1: Non-zero coupling ($H_{\text{int}} \neq 0$).** The visible and hidden sectors interact. Information flows between them. Without this, the observer's room is perfectly isolated — nothing interesting happens.

**C2: Slow bath ($\tau_S \ll \tau_B$).** The hidden sector evolves much more slowly than the visible sector: $\tau_S$ is the timescale of visible-sector processes, $\tau_B$ the timescale of hidden-sector processes. This is the *opposite* of the usual assumption in physics. Normally the environment is assumed to be fast and chaotic — a "heat bath" that quickly forgets everything. Here the environment is slow and has a long memory. This is what makes the dynamics non-Markovian.

**C3: Sufficient capacity ($N_H \gg N_V$).** The hidden sector has many more degrees of freedom than the visible sector. There is enough "room" to store information about the visible sector's history without running out of space.

The definition sets the stage. The conditions determine what kind of show plays on it. The next section explains why the cosmological horizon satisfies all three.

---

## The Observer's Blind Spot

Light travels at a finite speed, and the universe has a finite age. Put those together and every observer has a horizon — a boundary beyond which no signal has had time to arrive. Everything beyond that boundary is ordinary physics: fields, particles, radiation. But it is structurally inaccessible — not because our telescopes aren't good enough, but because the geometry of spacetime forbids access. No technology that obeys the speed of light can reach past it.

This means every observer in the universe is in the same epistemic situation: there are degrees of freedom — a vast number of them — that influence what you measure but that you can never track. When you write down the laws of physics for the things you *can* see, you are forced to average over everything you can't. You have to "trace out" the hidden sector.
Here's what that looks like concretely. The total system — visible plus hidden — is deterministic. If you knew the complete state, you could predict the future exactly. But you don't know the hidden part. You know the visible state is $x$, but there are many possible hidden states compatible with $x$, and each one sends $x$ to a different visible future. Hidden state $h_1$ might send the particle left; hidden state $h_2$ might send it right. Since you can't tell which hidden state you're in, the best you can do is assign probabilities: average over all the possible hidden states, weighted by how likely each one is. The result is a set of *transition probabilities* — the chance that visible state $x$ at time $t_1$ becomes visible state $y$ at time $t_2$. You've gone from a deterministic system you can't fully see to a probabilistic one you can. That's a stochastic process, and it's the only description available to any observer who can't access the hidden sector.

The standard expectation is that this should produce something boring — classical, memoryless noise. And it would, if the hidden sector were fast and forgettable, like air molecules bouncing off a grain of pollen: each kick independent of the last. Physicists call this *Markovian* behavior. But the hidden sector beyond the cosmological horizon is not like that. It differs in three specific ways, and the paper proves that their conjunction changes everything.

**It's coupled.** The horizon is not a static wall. Stress-energy conservation enforces continuous dynamical correlations across it. Matter crosses the horizon, and the horizon area adjusts in response to interior energy density. Information flows in both directions. (Condition C1.)

**It's slow.** The hidden sector's correlation time is set by the Hubble timescale — roughly $10^{17}$ seconds, the age of the universe. Any laboratory experiment operates on timescales of $10^{-15}$ seconds or shorter. The ratio is $10^{-32}$. The hidden sector cannot "reset" between your measurements. Every correlation it picks up from one experiment is still there when the next one begins. This is the *opposite* of the standard Markovian regime, where the environment decorrelates fast. Here, it never decorrelates at all. (Condition C2.)

**It's vast.** The hidden sector has roughly $10^{122}$ independent degrees of freedom — the Bekenstein-Hawking entropy of the cosmological horizon. No experiment you could ever perform would appreciably disturb its state. Its memory never saturates. (Condition C3.)

A fast environment with vast capacity would wash out correlations (Markovian noise). A slow environment with limited capacity would eventually fill up and stop recording. Only an environment that is simultaneously coupled, slow, and vast sustains the kind of persistent, non-decomposable correlations that the paper calls *P-indivisibility* — a technical term meaning the system's transition probabilities at different times cannot be broken into independent steps.

---

## Partition-Relativity

This is the first real proof in the paper, and it's beautifully simple.

**What it proves:** The emergent description (what the observer sees) depends *only* on the partition — on which degrees of freedom are visible and which are hidden. Nothing else.

**The formula:**

$$T_{ij}(t_2, t_1) = \int_{\Gamma_H} \delta_{x_j}[\pi_V(\phi_{t_2-t_1}(x_i, h))] \, d\mu(h)$$

Unpacking each symbol:

- **$T_{ij}$**: The probability of transitioning from visible state $x_i$ to visible state $x_j$ in the time interval from $t_1$ to $t_2$. This is what the observer measures.
- **$(x_i, h)$**: The complete state — visible part $x_i$, hidden part $h$.
- **$\phi_{t_2-t_1}$**: The deterministic evolution. It takes the complete state at time $t_1$ and returns the complete state at time $t_2$. Uniquely determined by the definition.
- **$\pi_V$**: Projection onto the visible sector. It takes a complete state $(x, h)$ and returns just $x$.
- **$\delta_{x_j}[\cdot]$**: The Kronecker delta. Equals 1 if the visible part ended up at $x_j$, 0 otherwise.
- **$d\mu(h)$**: Integration over all possible hidden states, weighted by the Liouville measure.

**In plain English:** For each possible hidden state $h$, check whether starting at $(x_i, h)$ and evolving forward lands the visible part on $x_j$. Count up all the hidden states where this happens, weighted by how likely each hidden state is. The result is the probability of the transition $x_i \to x_j$.

**The proof:** The formula has exactly three inputs: (1) the dynamics $\phi_t$ — fixed by the definition; (2) the partition $(\Gamma_V, \Gamma_H)$ and projection $\pi_V$ — fixed by Lemma 2; and (3) the measure $\mu$ — fixed by Lemma 3 (the Liouville measure is the unique choice). Since inputs 1 and 3 are determined by the definition, the only free input is the partition. Therefore everything about the emergent description depends only on the partition. QED.

**Why the Liouville measure is unique:** The observer needs a "prior" — a way to weight the hidden states. The Liouville measure is the unique measure on phase space that is absolutely continuous (no point masses) and invariant under Hamiltonian flow. Any smooth initial distribution evolves toward it. Singular measures are excluded by Lemma 3's requirement of standard probability theory. The observer has no choice.
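In the finite setting of Lemma 1, the integral reduces to a counting average over hidden states (the same expression reappears as the discrete transition matrix in the P-indivisibility section below). Here is a minimal sketch of that average, illustrative code rather than the paper's; the helper name `transition_matrix` is ours:

```python
import numpy as np
from itertools import product

def transition_matrix(phi, visible, hidden):
    """Emergent one-step transition matrix T[i, j] = P(x_i -> x_j),
    obtained by averaging the deterministic map `phi` over a uniformly
    distributed hidden state (the counting measure of Lemma 3).

    phi     : dict mapping (x, h) -> (x', h'), a bijection on visible x hidden
    visible : list of visible states x_i
    hidden  : list of hidden states h
    """
    index = {x: i for i, x in enumerate(visible)}
    T = np.zeros((len(visible), len(visible)))
    for x, h in product(visible, hidden):
        x_next, _ = phi[(x, h)]          # evolve, then project onto the visible sector
        T[index[x], index[x_next]] += 1.0 / len(hidden)
    return T

# Tiny demo: 2 visible x 2 hidden states, where the hidden bit decides whether the visible bit flips.
demo_phi = {(0, 0): (0, 0), (0, 1): (1, 1), (1, 0): (1, 0), (1, 1): (0, 1)}
print(transition_matrix(demo_phi, [0, 1], [0, 1]))   # [[0.5, 0.5], [0.5, 0.5]]
```

Feeding in the coin-and-die permutation defined later reproduces the matrix $T(1,0)$ computed by hand in that section.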
---

## Emergent Stochasticity and the Slow-Bath Regime

The total system is deterministic. If you knew both $x$ and $h$, you'd know the future with certainty. But the observer knows only $x$. Different hidden states $h$ send the same visible state $x$ to different futures.

Example: the visible state is "Heads." The hidden state could be any die value 1–6. If the die is 1 or 2, the dynamics flip the coin to Tails. If the die is 3–6, the coin stays at Heads. The observer doesn't know the die, so they see: P(Heads → Tails) = 2/6 = 1/3. The randomness is epistemic (from ignorance), not ontological (from fundamental indeterminacy).

In a normal "heat bath" scenario, the environment is fast and chaotic. It scrambles any information you write into it before you can read it back. This produces Markovian (memoryless) dynamics — each step is independent of previous steps.

C2 inverts this. The hidden sector is slow. When the visible sector interacts with it (writing information through $H_{\text{int}}$), the information stays there. At the next interaction, the hidden sector reads back what was written before. The observer sees history-dependent transition probabilities — what happens next depends on what happened before. This is non-Markovian dynamics. It's the key ingredient that separates quantum mechanics from classical stochastic processes.

---

## The P-Indivisibility Theorem

**What it claims.** If a deterministic system is split into a visible and a hidden sector, and these sectors are genuinely coupled, then the visible sector's behavior *cannot* be a simple memoryless random process. It must exhibit P-indivisibility — a specific kind of built-in memory.

**What "P-indivisible" means.** A stochastic process is "P-divisible" if you can always find a valid transition matrix connecting any two time points. Mathematically: for any times $t_1 < t_2 < t_3$, there exists a stochastic matrix $\Lambda$ such that

$$T(t_3, t_1) = \Lambda(t_3, t_2) \cdot T(t_2, t_1)$$

where $\Lambda$ has non-negative entries and rows summing to 1. "P-indivisible" means this fails — the "intermediate propagator" would need negative entries, which means it is not a valid probability matrix.

Breuer, Laine, and Piilo proved that P-indivisibility is equivalent to "information backflow" — the system's distinguishability can *increase* over time. In a classical Markov process, you can only lose information (mixing). In a P-indivisible process, information comes back. This is exactly what quantum systems do — interference, revivals, and non-classical correlations all involve information returning from where it was stored.

**The setup.** We work on finite sets (Lemma 1). The visible sector has states $\mathcal{C}_V = \{x_1, x_2, \ldots\}$ with $|\mathcal{C}_V| \geq 2$. The hidden sector has states $\mathcal{C}_H = \{h_1, h_2, \ldots\}$. The total dynamics is a bijection $\varphi$ on $\mathcal{C}_V \times \mathcal{C}_H$. The transition matrix is:

$$T_{ij} = \frac{|\{h \in \mathcal{C}_H : \pi_V(\varphi(x_i, h)) = x_j\}|}{|\mathcal{C}_H|}$$

**The key tool — total variation distance:**

$$d(p, q) = \frac{1}{2}\sum_k |p_k - q_k|$$

This measures how distinguishable two probability distributions are. If $d = 1$, they are perfectly distinguishable; if $d = 0$, they are identical. For P-divisible processes, $d$ can only decrease or stay constant.

**Step 1 — Recurrence.** $\varphi$ is a bijection on a finite set. Keep applying $\varphi$ and you must eventually return to where you started — there are only finitely many states to visit. Formally: there exists $N$ such that $\varphi^N = \mathrm{id}$. So $T^{(N)} = I$, and:

$$d(\delta_i T^{(N)}, \delta_j T^{(N)}) = d(\delta_i, \delta_j) = 1$$

After $N$ steps, states that started distinguishable are still perfectly distinguishable.

**Step 2 — Strict contraction.** $T$ is not a permutation matrix (this follows from C1 — the coupling mixes things). So there exist states $i, j, l$ where both $T_{il} > 0$ and $T_{jl} > 0$. The total variation distance after one step:

$$d(\delta_i T, \delta_j T) = \frac{1}{2}\sum_k |T_{ik} - T_{jk}| < 1$$

The inequality is strict because the distributions overlap. Distinguishability has decreased.

**Step 3 — The punchline.** At $t = 1$: $d < 1$ (distinguishability decreased). At $t = N$: $d = 1$ (distinguishability restored). The distinguishability went down and then came back up — non-monotonic behavior. A P-divisible process can only have non-increasing distinguishability. Therefore the process is P-indivisible. QED.

The proof uses almost nothing — just that the dynamics is a bijection on a finite set (the definition and Lemma 1) and that the coupling is non-trivial (C1). It is purely combinatorial.

---

## The Accessible-Timescale Lemma

The recurrence proof shows P-indivisibility exists as a mathematical property. But the recurrence time is absurdly long — for the cosmological case, it is $e^{10^{122}}$ years. Nobody will ever observe it. The accessible-timescale lemma shows that information backflow happens on *laboratory timescales*, independently of recurrence.

**The mechanism:** At each interaction (timescale $\tau_S$), the coupling $H_{\text{int}}$ transfers some information from the visible sector to the hidden sector. Call the amount $I_0$. Between interactions, the hidden sector's correlations decay at a rate set by its spectral gap $\Delta \sim 1/\tau_B$. The decay per visible-sector step is:

$$e^{-\Delta \tau_S} \approx 1 - \frac{\tau_S}{\tau_B}$$

When $\tau_S \ll \tau_B$ (C2), this is very close to 1 — almost no decay. The hidden sector remembers almost perfectly between steps. After $k$ steps, the cumulative decay is:

$$e^{-k\Delta\tau_S} \approx 1 - \frac{k\tau_S}{\tau_B}$$

As long as $k\,\tau_S \ll \tau_B$, the hidden sector retains $\sim k$ bits of visible-sector history. The mutual information satisfies:

$$I(H_t ; X_{<t} \mid X_t) \geq I_0\left(1 - \frac{k\tau_S}{\tau_B}\right)$$

For the cosmological case: $\tau_S \sim 10^{-15}$ s, $\tau_B \sim 10^{17}$ s. Even after $k = 10^{20}$ steps, $k\,\tau_S/\tau_B \sim 10^{-12}$ — negligible. The hidden sector remembers everything.

**The role of C3:** The hidden sector's memory capacity is $\log_2(|\mathcal{C}_H|)$ bits. If $k$ bits of history are written but the capacity is only $m < k$ bits, old data gets overwritten. C3 ensures the capacity is large enough that the memory never saturates on observable timescales.
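A quick numerical check of the orders of magnitude quoted above (a sketch using only the numbers stated in the text):

```python
import math

tau_S, tau_B, k = 1e-15, 1e17, 1e20   # lab timescale (s), Hubble timescale (s), number of steps

loss = k * tau_S / tau_B              # fractional memory loss accumulated over k steps
print(f"k*tau_S/tau_B = {loss:.1e}")                  # ~1e-12
print(f"retained fraction = {math.exp(-loss):.12f}")  # ~0.999999999999
```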
---

## The Coin-and-Die Model

The paper builds a concrete toy model to make the mechanism tangible.

**Setup:**

- Visible: $x \in \{0, 1\}$ (a coin: 0 = Heads, 1 = Tails)
- Hidden: $h \in \{1, 2, 3, 4, 5, 6\}$ (a die)
- Total: 12 states

**The permutation $\sigma$:**

| Input state | Output state | What happens |
|---|---|---|
| (0, 1) | (1, 1) | Coin flips, die stays |
| (1, 1) | (0, 1) | Coin flips, die stays |
| (0, 2) | (1, 2) | Coin flips, die stays |
| (1, 2) | (0, 2) | Coin flips, die stays |
| (0, 3) | (0, 4) | Coin stays, die changes |
| (0, 4) | (0, 3) | Coin stays, die changes |
| (0, 5) | (0, 6) | Coin stays, die changes |
| (0, 6) | (0, 5) | Coin stays, die changes |
| (1, 3) | (1, 4) | Coin stays, die changes |
| (1, 4) | (1, 3) | Coin stays, die changes |
| (1, 5) | (1, 6) | Coin stays, die changes |
| (1, 6) | (1, 5) | Coin stays, die changes |

Every swap is a transposition ($a \leftrightarrow b$), so $\sigma^2 = \mathrm{id}$ (apply twice and everything returns).

**Checking the conditions:** C1 (coupling): die values 1 and 2 flip the coin ✓. C2 (slow bath): $\sigma^2 = \mathrm{id}$ means the recurrence time is 2 steps, giving $\tau_S/\tau_B = 1/2$ ✓. C3 (sufficient capacity): 6 hidden states vs. 2 visible states ✓.

**Computing T(1,0).** Start at $x = 0$ (Heads). All 6 die values are equally likely.

- $h = 1$: $\sigma(0,1) = (1,1)$ → Tails
- $h = 2$: $\sigma(0,2) = (1,2)$ → Tails
- $h = 3$: $\sigma(0,3) = (0,4)$ → Heads
- $h = 4$: $\sigma(0,4) = (0,3)$ → Heads
- $h = 5$: $\sigma(0,5) = (0,6)$ → Heads
- $h = 6$: $\sigma(0,6) = (0,5)$ → Heads

So P(0 → 0) = 4/6 = 2/3 and P(0 → 1) = 2/6 = 1/3. By the same logic for $x = 1$:

$$T(1,0) = \begin{pmatrix} 2/3 & 1/3 \\ 1/3 & 2/3 \end{pmatrix}$$

**Distinguishability at t = 1:**

$$d(\delta_0 T, \delta_1 T) = \frac{1}{2}\left(|2/3 - 1/3| + |1/3 - 2/3|\right) = 1/3$$

We started at $d = 1$. Now $d = 1/3$. Distinguishability decreased.

**What Markov would predict at t = 2:** Apply the same transition matrix again:

$$T(1,0)^2 = \begin{pmatrix} 5/9 & 4/9 \\ 4/9 & 5/9 \end{pmatrix}$$

Distinguishability would drop to $d = 1/9$. More mixing.

**What actually happens at t = 2:** $\sigma^2 = \mathrm{id}$. Every state returns to its starting point.

$$T(2,0) = I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$

Distinguishability is back to $d = 1$. Complete un-mixing. Impossible for a Markov process.

**The smoking gun — negative entries.** If there were a valid stochastic matrix $\Lambda(2,1)$ connecting steps 1 and 2:

$$\Lambda(2,1) = T(2,0) \cdot [T(1,0)]^{-1} = I \cdot \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix} = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}$$

The entries $-1$ are negative. No valid stochastic matrix exists. **This is P-indivisibility.**
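The whole calculation fits in a few lines of code. The sketch below is illustrative, not the paper's code: it builds $\sigma$ from the table, computes $T(1,0)$ and $T(2,0)$ by averaging over the die, and checks that the would-be intermediate propagator has negative entries.

```python
import numpy as np
from itertools import product

# The permutation sigma from the table: die values 1-2 flip the coin,
# die values 3-6 leave the coin alone and swap the die within {3,4} or {5,6}.
visible, hidden = [0, 1], [1, 2, 3, 4, 5, 6]
swap = {3: 4, 4: 3, 5: 6, 6: 5}
sigma = {(x, h): ((1 - x, h) if h in (1, 2) else (x, swap[h]))
         for x, h in product(visible, hidden)}

def T(steps):
    """Transition matrix after `steps` applications of sigma,
    averaging over a uniformly distributed die (counting measure)."""
    out = np.zeros((2, 2))
    for x, h in product(visible, hidden):
        s = (x, h)
        for _ in range(steps):
            s = sigma[s]
        out[x, s[0]] += 1 / len(hidden)
    return out

T1, T2 = T(1), T(2)
print(T1)                       # [[2/3, 1/3], [1/3, 2/3]]
print(T2)                       # identity: complete un-mixing at t = 2

# Would-be intermediate propagator Lambda(2,1) = T(2,0) @ T(1,0)^{-1}
print(T2 @ np.linalg.inv(T1))   # [[2, -1], [-1, 2]]: negative entries, so no valid
                                # stochastic matrix connects t = 1 to t = 2 -> P-indivisible
```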
**The mechanism in detail.** The die works as a memory register. At step 1, if the coin was at 0 and the die was at 1, the coin flips to 1 but the die stays at 1. The die value 1 now encodes the information "the coin was at 0 and I flipped it." At step 2, $\sigma$ sees (1, 1) and flips it back to (0, 1). The die read its own memory and reversed the flip.

C1 (coupling) allows writing to the memory. C2 (slow bath) ensures it isn't erased between reads. C3 (sufficient capacity) ensures there's enough room. Together, they produce the information backflow that makes the process P-indivisible — and therefore, by the stochastic-quantum correspondence, equivalent to quantum mechanics.

---

### Why conditions C2 and C3 matter physically

The P-indivisibility theorem needs only coupling (C1) and finiteness. So why does the paper insist on slow memory (C2) and vast capacity (C3)? Because P-indivisibility without C2 and C3 might only show up at absurd timescales or might self-destruct. C2 ensures the memory persists on timescales accessible to actual experiments, not just at cosmic recurrence times. C3 ensures the hidden sector never runs out of room to store information — if it saturates, later imprints overwrite earlier ones, and the process becomes effectively memoryless. Together, C2 and C3 guarantee that P-indivisibility is strong, persistent, and observationally relevant.

---

## The Stochastic-Quantum Correspondence

This is the key link. Section 2 proved that the embedded observer's dynamics are P-indivisible. Section 3 shows this is mathematically equivalent to quantum mechanics.

**The core statement.** Any P-indivisible stochastic process on a finite configuration space of size $n$ can be embedded into a unitarily evolving quantum system. Specifically, there exists a Hilbert space $\mathcal{H}$ (dimension $\leq n^3$) and a unitary operator $U(t)$ such that:

$$T_{ij}(t) = |U_{ij}(t)|^2$$

This is the Born rule. The left side is the transition probability computed by averaging over hidden states (the classical formula from partition-relativity). The right side is the quantum mechanical probability — the squared modulus of a matrix element of the unitary evolution operator. The equivalence is not approximate. It is not an analogy. It is a mathematical identity.

**Two independent routes to the same conclusion.** The primary route uses Barandes' stochastic-quantum correspondence (2023–2025): P-indivisibility means transition probabilities can't be factored through intermediate times — try it and you get "negative probabilities." In quantum mechanics, this is *exactly what happens*: probability amplitudes combine to produce interference patterns that don't factorize classically. What Barandes proved is that these are the same mathematical object, written in different notation.

The secondary route, given in Appendix A, uses Stinespring's dilation theorem (1955): a deterministic bijection on a finite product space defines a permutation unitary; tracing out the hidden sector with the Liouville measure produces a completely positive quantum channel whose diagonal elements recover the classical transition probabilities exactly. This second route requires only textbook results. Either route alone suffices; together they ensure the bridge rests on no single recent result.
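The Stinespring route can be checked directly on the coin-and-die model. The following sketch (illustrative, not the paper's code) promotes $\sigma$ to a $12 \times 12$ permutation unitary on $\mathbb{C}^2 \otimes \mathbb{C}^6$, evolves the state $|x\rangle\langle x| \otimes \tfrac{1}{6}I$, traces out the die, and confirms that the diagonal of the reduced state reproduces the classical transition probabilities; the helper name `visible_populations` is ours.

```python
import numpy as np
from itertools import product

visible, hidden = [0, 1], [1, 2, 3, 4, 5, 6]
swap = {3: 4, 4: 3, 5: 6, 6: 5}
sigma = {(x, h): ((1 - x, h) if h in (1, 2) else (x, swap[h]))   # same sigma as above
         for x, h in product(visible, hidden)}

basis = {s: k for k, s in enumerate(product(visible, hidden))}   # |x> ⊗ |h> ordering
dim_v, dim_h = len(visible), len(hidden)

# Permutation unitary U_sigma: |x, h> -> |sigma(x, h)>
U = np.zeros((dim_v * dim_h, dim_v * dim_h))
for s, k in basis.items():
    U[basis[sigma[s]], k] = 1.0

def visible_populations(x0, steps):
    """Evolve |x0><x0| ⊗ (1/6) I under U_sigma, trace out the die,
    and return the diagonal of the reduced visible-sector state."""
    rho = np.zeros((dim_v * dim_h, dim_v * dim_h))
    for h in hidden:
        rho[basis[(x0, h)], basis[(x0, h)]] = 1.0 / dim_h        # uniform (Liouville) weight on the die
    for _ in range(steps):
        rho = U @ rho @ U.T                                      # unitary evolution (U is real, orthogonal)
    rho_v = rho.reshape(dim_v, dim_h, dim_v, dim_h).trace(axis1=1, axis2=3)  # partial trace over the die
    return np.diag(rho_v)

print(visible_populations(0, 1))   # [2/3, 1/3], matching row 0 of T(1,0)
print(visible_populations(0, 2))   # [1, 0],     matching T(2,0) = I
```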
**Where the quantum features come from:**

- **The Schrödinger equation** arises because $U(t)$ is differentiable. Any smooth family of unitary matrices can be written as $U(t) = \exp(-iHt/\hbar)$ for some Hermitian matrix $H$.
- **The Born rule** $T_{ij} = |U_{ij}|^2$ is not an additional postulate — it's the definition of how the stochastic process maps onto the unitary one.
- **The action scale $\hbar$** enters when converting from the dimensionless unitary to a dimensionful Hamiltonian: $\hat{H} = i\hbar\, (\partial U/\partial t)\, U^\dagger$. The value of $\hbar$ cannot be determined from the dimensionless transition data alone — it requires additional physical input from the partition geometry.
- **Bell inequality violations.** Since the transition matrices for composite systems don't factorize, entangled systems naturally produce correlations that violate Bell inequalities, up to exactly Tsirelson's bound.

---

## The Phase-Locking Lemma

A potential objection: the relation $T_{ij} = |U_{ij}|^2$ throws away phase information. Different unitaries could give the same transition probabilities. Does this make the quantum description ambiguous? The phase-locking lemma shows: no.

**Setup:** The transition probability at time $t$ is:

$$T_{ij}(t) = \left|\sum_k V_{ik} \, e^{-iE_k t} \, V_{jk}^*\right|^2$$

where $V_{ik} = \langle i|k\rangle$ are the overlaps between the configuration basis and the energy eigenbasis, and $E_k$ are the energy eigenvalues. Expanding the square:

$$T_{ij}(t) = \sum_{k,l} V_{ik}\, V_{jk}^*\, V_{jl}\, V_{il}^*\; e^{-i(E_k - E_l)t}$$

**The Fourier trick:** This is a sum of oscillating terms at frequencies $\omega_{kl} = E_k - E_l$. If all these frequencies are distinct (condition G2: non-degenerate energy gaps), you can extract each coefficient by Fourier transform:

$$a_{ij}^{kl} = V_{ik}\, V_{jk}^*\, V_{jl}\, V_{il}^*$$

**Extracting the moduli:** Setting $i = j$ gives $a_{ii}^{kl} = |V_{ik}|^2 |V_{il}|^2$. If none of the overlaps are zero (condition G3), all moduli $|V_{ik}|$ are determined.

**Extracting the phases:** Write $V_{ik} = |V_{ik}|\, e^{i\varphi_{ik}}$. The argument of the Fourier coefficient gives:

$$\arg(a_{ij}^{kl}) = (\varphi_{ik} - \varphi_{il}) - (\varphi_{jk} - \varphi_{jl})$$

The only transformation preserving all such double differences is $\varphi_{ik} \to \varphi_{ik} + \alpha_i + \beta_k$ — just relabeling (choosing a different phase convention for the basis states). Once you fix these conventions, all remaining phases are uniquely determined.

**Bottom line:** Continuous-time transition probability data uniquely determines the Hamiltonian up to physically irrelevant relabeling.

---

## Bell Inequality Violations

This is the question everyone asks: isn't this ruled out by Bell's theorem?

**What Bell's theorem actually requires.** Bell's theorem proves that no hidden-variable theory can reproduce quantum correlations if it satisfies three conditions simultaneously:

1. **Locality:** The outcome at detector A doesn't depend on the setting at detector B.
2. **Measurement independence:** The experimenters' choices are independent of the hidden variables.
3. **Factorizability:** $P(a,b \mid x,y,\lambda) = P(a \mid x,\lambda) \cdot P(b \mid y,\lambda)$

The framework satisfies conditions 1 and 2. It violates condition 3.

**Why factorizability fails.** Factorizability requires that, conditioned on the hidden variable $\lambda$, the outcomes at the two detectors are independent — that $\lambda$ carries all the relevant information as a snapshot at a single moment. P-indivisible processes don't work this way. The transition probabilities for a joint system can't be factored:

$$T_{QR} \neq T_Q \otimes T_R$$

Two subsystems that interacted during preparation carry a joint transition matrix that doesn't decompose into a product. This non-factorizability IS entanglement.

**The Jarrett decomposition.** Factorizability splits into parameter independence (the outcome at A doesn't depend on the *setting* at B — preserved ✓) and outcome independence (the outcome at A doesn't depend on the *outcome* at B — violated ✗). Parameter independence prevents faster-than-light signaling. Fine's theorem shows that violating outcome independence while preserving parameter independence is exactly the class of theories consistent with quantum correlations.

**The maximum violation.** Barandes, Hasan, and Kagan prove that the maximum CHSH violation from P-indivisible processes is exactly Tsirelson's bound, $2\sqrt{2}$ — the quantum maximum.
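A standard textbook check of that bound (a sketch, not taken from the paper): for singlet correlations $E(a,b) = -\cos(a-b)$ and the usual optimal settings, the CHSH combination reaches $2\sqrt{2}$, whereas any factorizable model is capped at 2.

```python
import numpy as np

def E(a, b):
    """Singlet-state correlation for spin measurements along
    coplanar directions at angles a and b."""
    return -np.cos(a - b)

# Standard optimal CHSH settings (radians)
a, a2 = 0.0, np.pi / 2             # Alice's two settings
b, b2 = np.pi / 4, 3 * np.pi / 4   # Bob's two settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S), 2 * np.sqrt(2))      # both = 2.828..., the Tsirelson bound
```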
---

## The Characterization Theorem

It's not enough to show that embedded observation *produces* QM (sufficiency). The paper shows QM *requires* embedded observation under C1–C3 (necessity). The full logical chain:

- Barandes proved: QM ⟺ P-indivisibility
- Section 2.3 proved: C1–C3 ⟹ P-indivisibility (sufficiency)
- Section 3.3 proves: P-indivisibility ⟹ C1–C3 (necessity)
- Combined: **QM ⟺ P-indivisibility ⟺ embedded observation under C1–C3**

**Necessity of C1 (coupling).** If $T$ is a permutation (no coupling), then $T^k$ is also a permutation for all $k$. The intermediate propagator $\Lambda(k_2, k_1) = T^{k_2 - k_1}$ is always a valid stochastic matrix, so the process is P-divisible. Contrapositive: P-indivisibility requires non-trivial coupling.

**Necessity of C2 (slow bath).** Between coupling events (separated by $\tau_S$), the hidden sector evolves under its own Hamiltonian. The convergence to equilibrium is:

$$\| e^{\mathcal{L}_H \tau_S} \mu_H(\cdot \mid x_i) - \mu_{\text{eq}} \|_{\text{TV}} \leq C \, e^{-\Delta \tau_S}$$

In the fast-bath regime ($\Delta \tau_S \gg 1$), this is exponentially small. The hidden sector forgets everything between interactions. Each transition is computed against the same equilibrium distribution, so $T^{(k)} = T^k$ — a Markov chain, hence P-divisible. Contrapositive: P-indivisibility requires $\tau_S \ll \tau_B$.

**Necessity of C3 (sufficient capacity).** The non-Markovian mutual information is bounded by the hidden sector's size:

$$I(X_{>t} ; X_{<t} \mid X_t) \leq \log_2 m$$

where $m = |\mathcal{C}_H|$. Proof: the total system is deterministic, so $X_{>t}$ is a function of $(X_t, H_t)$; conditioned on $(X_t, H_t)$, the future is independent of the past. By the data processing inequality:

$$I(X_{>t} ; X_{<t} \mid X_t) \leq I(H_t ; X_{<t} \mid X_t) \leq \log_2 m$$

If the hidden sector's capacity is too small, the memory available for history-dependence vanishes and the process is effectively Markovian. Contrapositive: persistent P-indivisibility requires sufficient capacity.

---

**Galaxy rotation curves.** Declining outer rotation curves have been reported in high-redshift disk galaxies at $> 3\sigma$ significance relative to local spirals. The cleanest local test is even more striking: Jiao et al. (*A&A* 678, A208, 2023) detect for the first time a Keplerian decline in the Milky Way's own rotation curve from $\sim 19$ to $\sim 26.5$ kpc using Gaia DR3 kinematics, with the flat rotation curve hypothesis rejected at $3\sigma$. Our own galaxy is now the strongest single piece of evidence for the $H(z)$-dependent crossover prediction at $z = 0$ — the framework predicts the crossover at $r_M \sim 17$ kpc for the Milky Way's baryonic mass, essentially where Jiao et al. observe the transition.

The baryonic Tully-Fisher relation also evolves: $v_{\text{flat}} \propto H(z)^{1/4}$, predicting 32% higher velocities at $z = 2$ at fixed baryonic mass. McGaugh et al. (2024) report no evolution in the *stellar* mass TF to $z \sim 2.5$ — but this is actually *predicted* by the framework, because gas fractions at high $z$ are large ($f_{\text{gas}} \sim 50$–$70\%$) and the gas mass omitted from $M_*$ almost exactly compensates the dynamical shift (the cancellation gas fractions — 44% at $z = 1$, 67% at $z = 2$ — match observations). The definitive test is the *baryonic* TF at $z > 1$ with reliable ALMA gas masses. Particle dark matter (NFW halos) predicts flat rotation curves at all redshifts — the observed decline is unexpected in ΛCDM but natural in the OI framework.
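The quoted 32% figure can be reproduced with one line of arithmetic (a sketch; the $\Omega_m$ and $\Omega_\Lambda$ values are standard flat-ΛCDM inputs assumed here, not stated in the text):

```python
import numpy as np

# v_flat ∝ H(z)^(1/4): predicted velocity ratio between z = 2 and z = 0
# for a flat LCDM expansion history with assumed Planck-like parameters.
Om, OL = 0.315, 0.685
H_over_H0 = lambda z: np.sqrt(Om * (1 + z)**3 + OL)

print(f"v_flat(z=2) / v_flat(z=0) = {H_over_H0(2.0)**0.25:.2f}")   # 1.32, i.e. ~32% higher
```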
Direct dark matter searches continue to return null results (the LZ Collaboration's 417-day analysis presented in December 2025 is the most sensitive WIMP search ever conducted, finding nothing), consistent with the framework's prediction that no particle dark matter exists.

**Cluster scales and the Bullet Cluster.** Galaxy clusters — the hardest test for any MOND-like theory — are addressed by the interpolation between the Newtonian and deep-MOND regimes. The simple interpolation function $g_{\text{total}} = g_B \cdot \nu(g_B/a_0)$ with $\nu(y) = (1 + \sqrt{1+4/y})/2$ matches the Coma cluster to $< 1\%$ in velocity (1260 vs 1270 km/s) and reduces the standard MOND mass shortfall from a factor of $\sim 2$ to $\sim 1.0$–$1.5$ for other rich clusters — with the residual attributable to undetected warm-hot intergalactic medium (WHIM). This interpolation is indistinguishable from the deep-MOND limit at galaxy scales (differences $< 0.07$ dex, well within the observed RAR scatter).

The Bullet Cluster — where gravitational lensing peaks at the galaxy positions rather than at the dominant X-ray gas — is explained by the non-local character of entropy displacement: the boundary entropy relaxation time is $\sim H^{-1} \approx 14$ Gyr, while the collision crossing time is $\sim 0.15$ Gyr. The dark gravity is frozen at the pre-collision configuration (centered on the galaxies, which defined the potential wells for gigayears), not tracking the recently displaced gas. This reproduces the observed lensing morphology and makes a testable prediction: very old post-collision systems should show gradual relaxation of the dark gravity toward the gas distribution.

The same thermodynamic averaging explains why the entropy displacement reproduces the CMB acoustic peak pattern: oscillating perturbations have zero net entropy displacement per cycle (the Clausius relation involves *net* heat transfer), so only the growing mode is tracked — providing non-oscillating potential wells identical to CDM in the linear regime.

**Neutrino predictions and the JUNO + DESI confirmation.** The framework predicts Majorana neutrinos with normal mass ordering and a hierarchical spectrum, with $\Sigma m_\nu$ near the oscillation minimum of 0.059 eV. The DESI DR2 + CMB analysis (Elbers et al., March 2025) reports $\Sigma m_\nu < 0.0642$ eV (95% CL), prefers normal ordering, and bounds the lightest neutrino mass at $m_l < 0.023$ eV — every directly comparable measurement matches the OI prediction. The same analysis finds a $3\sigma$ tension with the lower oscillation limit assuming $\Lambda$CDM, interpreted as "a hint of new physics not necessarily related to neutrinos" — exactly the structural mismatch that the OI dark energy resolves. JUNO first results (November 2025) deliver world-leading precision on $\sin^2\theta_{12} = 0.3092 \pm 0.0087$, matching the OI prediction $1/3 - 1/(4\pi^2) = 0.3080$ at $0.14\sigma$, while confirming the persistent $1.5\sigma$ solar/reactor tension. The neutrino sector and the dark energy sector are coupled in the framework — both follow from the same lattice structure — so the joint observational support is not independent confirmation of separate effects but a single empirical pattern consistent with the framework's central mechanism.
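The quoted $0.14\sigma$ agreement is simple arithmetic on the numbers above (a sketch using only values stated in the text):

```python
import math

prediction = 1/3 - 1/(4 * math.pi**2)     # OI prediction for sin^2(theta_12)
juno, sigma = 0.3092, 0.0087              # JUNO first results, as quoted above

print(f"prediction = {prediction:.4f}")                          # 0.3080
print(f"tension    = {abs(juno - prediction)/sigma:.2f} sigma")  # 0.14
```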
**Gauge coupling prediction.** The framework extends the derivation chain to the gauge coupling strengths. The fermion-induced coupling gives $1/\alpha_0 = 23.25$ at the Planck scale — a universal value determined by the lattice structure ($N_f = 6$ flavors, $T(R) = 1/2$), not by the specific bijection $\varphi$, verified analytically via the one-loop staggered vacuum-polarization lattice integral. A universal, $C_2$-independent threshold $\delta_0 = 10.02$ is fixed by the U(1) row and is consistent with the first-principles two-loop VP computation ($8.0 \pm 2$) plus natural three-loop corrections. Combined with a geometric-series resummation of the $C_2$-dependent gauge self-energy and Standard Model renormalization-group running, this reproduces all three SM gauge couplings at $M_Z$: $1/\alpha_1 = 59.00$, $1/\alpha_2 = 29.57$, $1/\alpha_3 = 8.47$ — matching the observed values to $< 0.1\%$.

**Independent corroboration of the trace-out mechanism.** The framework's central technical move — that integrating out hidden degrees of freedom from a deterministic substrate produces well-defined non-Markovian visible dynamics, which under the Barandes correspondence becomes quantum mechanics — is now established at theorem level in the open-systems literature. Brandner (*Phys. Rev. Lett.* **134**, 037101 and companion *Phys. Rev. E* **111**, 014137, January 2025) proves that for autonomous linear evolution equations, integrating out inaccessible degrees of freedom yields well-defined non-Markovian visible dynamics in a controlled weak-memory regime, with explicit error bounds and a convergent perturbation scheme — derived independently for general open systems, with no stake in OI being right. Direct experimental demonstrations exist in controlled physical systems: Mehl et al. (*Phys. Rev. Lett.* **108**, 220601, 2012) showed that hidden slow degrees of freedom in a colloidal system produce non-Markovian visible dynamics violating naive Markovian fluctuation theorems; Gröblacher et al. (*Nature Communications* **6**, 7606, 2015) observed non-Markovian Brownian motion in a macroscopic micromechanical oscillator. On the quantum side, Kim (April 2025) shows that monitored quantum systems are formally quantum hidden Markov models with a rigorous correspondence to classical hidden Markov models — the same formal structure as the stochastic-quantum bridge, developed independently in the monitored-quantum-systems literature. The framework's central technical move is therefore not speculative; it is the standard way physicists now think about open systems with hidden structure, and the mathematical apparatus has been published in the standard literature within the past 12 months.

No competing framework produces all of these from a single definition. The parallel with the cosmological constant dissolution is exact: the $10^{122}$ discrepancy is the *information compression ratio* of the trace-out, and the ~95% dark sector is the *gravitational occlusion fraction*. Together, they account for the two largest anomalies in modern cosmology as two aspects of a single phenomenon: the cost of observing the universe from within.

---

## Philosophical Lineage

The paper is a physics paper, but its core claims — that observers face irreducible limits, that two irreconcilable descriptions can both be correct, that incompleteness is a structural feature rather than a deficiency — sit at the intersection of some of the oldest debates in philosophy. A systematic mapping against the major traditions reveals a striking pattern: broad support for most of the framework, and near-universal resistance to one specific thesis.
### The seven claims

The framework rests on seven implicit philosophical commitments:

1. **Embedded observers face irreducible limits.** No observer inside a system can access the complete state.
2. **QM and GR are both correct** within their domains.
3. **The hidden sector is permanently inaccessible** — not due to technological limitations, but structural ones.
4. **The underlying reality is local and definite.** Indeterminacy belongs to the observer's description, not to the world.
5. **Incompleteness is structural, not deficient.** The limitation arises because the observer is made of the same elements as the universe it is trying to describe — a physical form of self-reference. This is analogous to Gödel's incompleteness theorem, not to ignorance that better instruments could cure.
6. **The description is observer-relative.** Different partitions yield different emergent physics.
7. **The two descriptions are irreconcilable** — not because one is wrong, but because they are complementary projections of a single reality that no embedded observer can access directly.

Claims 1, 5, 6, and 7 enjoy broad philosophical support across nearly every tradition examined. Claim 4 — that the underlying reality is definite — is the paper's most philosophically isolated thesis.

### The Gödel connection

The analogy between this framework and Gödel's incompleteness theorem is not merely metaphorical. Gödel proved that a formal system rich enough to encode arithmetic cannot prove all true statements about itself from within — the limitation arises because the system is self-referential, capable of constructing sentences that refer to its own provability. The observer in this framework faces a structurally parallel situation: the reason a cosmological horizon exists at all is that the observer is a physical subsystem made of the same fields, obeying the same speed-of-light constraint, as the universe it is trying to describe. An observer not made of the universe's own elements would not face a horizon and would not be forced into a quantum description. The incompleteness is a consequence of self-inclusion.

The connection can be made precise through Wolpert's limits of inference [19], which the paper cites. Wolpert proved, using diagonal self-referential arguments directly descended from Gödel's, that any inference device embedded in the universe it is trying to predict faces fundamental limits — not because of noise or finite resources, but because complete self-prediction is logically impossible. The present framework provides the concrete physical mechanism by which Wolpert's logical limitation manifests: the causal parti