Phase 38.2 — DifferentialPrivacyEngine: ε-DP Mechanisms & Privacy Budget Accounting #773

@web3guru888

Description

Phase 38.2 — DifferentialPrivacyEngine

Overview

The DifferentialPrivacyEngine provides rigorous privacy guarantees for federated learning through ε-differential privacy mechanisms. It implements Gaussian and Laplace noise injection, privacy budget accounting via Rényi differential privacy (RDP) and the moments accountant, and advanced composition theorems for multi-round training.

References:

  • Dwork & Roth (2014): DP foundations
  • Abadi et al. (2016): DP-SGD & moments accountant
  • Mironov (2017): Rényi DP
  • Balle et al. (2020): privacy amplification by subsampling

Architecture

┌───────────────────────────────────────────────────┐
│            DifferentialPrivacyEngine               │
│  ┌─────────────────────────────────────────────┐  │
│  │          NoiseMechanism (ABC)                │  │
│  │  ┌───────────┐  ┌───────────┐  ┌─────────┐ │  │
│  │  │ Gaussian  │  │ Laplace   │  │Discrete │ │  │
│  │  │ Mechanism │  │ Mechanism │  │Gaussian │ │  │
│  │  │ σ²=2ln   │  │ b = Δf/ε  │  │         │ │  │
│  │  │ (1.25/δ) │  │           │  │         │ │  │
│  │  │ · Δf²/ε² │  │           │  │         │ │  │
│  │  └───────────┘  └───────────┘  └─────────┘ │  │
│  └─────────────────────────────────────────────┘  │
│  ┌─────────────────────────────────────────────┐  │
│  │          PrivacyAccountant                   │  │
│  │  ┌─────────────┐  ┌──────────────────────┐  │  │
│  │  │  Moments    │  │   Rényi DP           │  │  │
│  │  │  Accountant │  │   Accountant          │  │  │
│  │  │  (Abadi+16) │  │   (Mironov 2017)     │  │  │
│  │  └─────────────┘  └──────────────────────┘  │  │
│  │  ┌──────────────────────────────────────┐   │  │
│  │  │   Composition Theorems               │   │  │
│  │  │   Basic / Advanced / Optimal         │   │  │
│  │  └──────────────────────────────────────┘   │  │
│  └─────────────────────────────────────────────┘  │
│  ┌─────────────────────────────────────────────┐  │
│  │          GradientClipper                     │  │
│  │  per-sample clipping, adaptive clipping      │  │
│  └─────────────────────────────────────────────┘  │
└───────────────────────────────────────────────────┘
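The noise calibrations shown in the mechanism boxes can be sketched as small helpers (hypothetical names; the Gaussian calibration is the classic one, valid for ε < 1):

```python
import math

def gaussian_sigma(sensitivity: float, epsilon: float, delta: float) -> float:
    # Classic Gaussian mechanism: sigma = Δf · sqrt(2 ln(1.25/δ)) / ε,
    # which gives (ε, δ)-DP for ε < 1 (Dwork & Roth 2014).
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

def laplace_scale(sensitivity: float, epsilon: float) -> float:
    # Laplace mechanism: scale b = Δf / ε gives pure ε-DP.
    return sensitivity / epsilon
```

Note that lowering ε or δ increases the required noise, as expected.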

Data Models

from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple
from enum import Enum
import numpy as np

class NoiseMechanism(Enum):
    GAUSSIAN = "gaussian"
    LAPLACE = "laplace"
    DISCRETE_GAUSSIAN = "discrete_gaussian"

class AccountingMethod(Enum):
    MOMENTS = "moments"
    RENYI = "renyi"
    BASIC_COMPOSITION = "basic"
    ADVANCED_COMPOSITION = "advanced"

@dataclass(frozen=True)
class PrivacyBudget:
    """Privacy budget specification."""
    epsilon: float                        # total privacy budget
    delta: float = 1e-5                   # failure probability
    consumed_epsilon: float = 0.0
    consumed_delta: float = 0.0
    rounds_consumed: int = 0

@dataclass(frozen=True)
class DPConfig:
    """Differential privacy configuration."""
    mechanism: NoiseMechanism = NoiseMechanism.GAUSSIAN
    accounting: AccountingMethod = AccountingMethod.RENYI
    target_epsilon: float = 8.0
    target_delta: float = 1e-5
    max_grad_norm: float = 1.0            # L2 clipping bound
    noise_multiplier: float = 1.1         # σ / max_grad_norm
    sampling_rate: float = 0.01           # for privacy amplification
    renyi_orders: Tuple[float, ...] = (2, 5, 10, 20, 50, 100)

@dataclass(frozen=True)
class PrivacyReport:
    """Report of privacy expenditure after operations."""
    epsilon_spent: float
    delta_spent: float
    noise_scale: float
    num_compositions: int
    budget_remaining: float
    estimated_rounds_left: int
    accounting_method: AccountingMethod
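As an illustration of how PrivacyReport.estimated_rounds_left could be derived from the budget fields (a hypothetical helper, assuming a fixed per-round ε cost):

```python
import math

def rounds_remaining(total_epsilon: float, consumed_epsilon: float,
                     per_round_epsilon: float) -> int:
    # Number of additional rounds that still fit in the ε budget,
    # assuming each round costs a fixed per_round_epsilon.
    if per_round_epsilon <= 0:
        raise ValueError("per-round epsilon must be positive")
    return max(0, math.floor((total_epsilon - consumed_epsilon) / per_round_epsilon))
```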

Protocol

from typing import Protocol, runtime_checkable

@runtime_checkable
class DifferentialPrivacyProtocol(Protocol):
    def add_noise(self, gradients: Dict[str, np.ndarray], config: DPConfig) -> Dict[str, np.ndarray]: ...
    def clip_gradients(self, gradients: Dict[str, np.ndarray], max_norm: float) -> Tuple[Dict[str, np.ndarray], float]: ...
    def compute_privacy_spent(self, config: DPConfig, num_steps: int) -> PrivacyReport: ...
    def get_noise_multiplier(self, target_epsilon: float, target_delta: float, sampling_rate: float, num_steps: int) -> float: ...
    def check_budget(self, budget: PrivacyBudget) -> bool: ...
    def renyi_divergence(self, alpha: float, sigma: float) -> float: ...
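A minimal sketch of clip_gradients matching the protocol signature, using DP-SGD-style global L2 clipping (the 1e-12 guard against division by zero is an implementation choice, not part of the spec):

```python
from typing import Dict, Tuple
import numpy as np

def clip_gradients(gradients: Dict[str, np.ndarray],
                   max_norm: float) -> Tuple[Dict[str, np.ndarray], float]:
    # Compute the global L2 norm across all tensors, then scale every
    # tensor uniformly if the norm exceeds max_norm; also return the
    # pre-clip norm so callers can log or adapt the clipping bound.
    total_norm = float(np.sqrt(sum(np.sum(g ** 2) for g in gradients.values())))
    scale = min(1.0, max_norm / (total_norm + 1e-12))
    return {k: g * scale for k, g in gradients.items()}, total_norm
```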

Acceptance Criteria

  • Gaussian mechanism with calibrated noise σ = Δf · √(2 ln(1.25/δ)) / ε
  • Laplace mechanism with scale b = Δf / ε
  • Per-sample gradient clipping with configurable L2 norm bound
  • Rényi DP accountant with multiple α orders for tight bounds
  • Moments accountant matching Abadi et al. (2016) analysis
  • Privacy amplification by subsampling (Poisson and uniform)
  • Budget tracking across federated rounds with early stopping when exhausted
  • Unit tests verifying (ε,δ)-guarantees via statistical hypothesis tests
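The RDP accounting criterion can be illustrated for the plain (unsubsampled) Gaussian mechanism, whose order-α RDP is α/(2σ²) per step (Mironov 2017) and composes additively over steps; subsampling amplification would tighten this further and is omitted from this sketch:

```python
import math

def renyi_epsilon(sigma: float, steps: int, delta: float,
                  orders=(2, 5, 10, 20, 50, 100)) -> float:
    # Accumulate RDP over steps at each order, convert back to
    # (ε, δ)-DP via ε = rdp + log(1/δ)/(α − 1), and keep the
    # tightest bound across the candidate orders.
    best = float("inf")
    for alpha in orders:
        rdp = steps * alpha / (2.0 * sigma ** 2)
        eps = rdp + math.log(1.0 / delta) / (alpha - 1.0)
        best = min(best, eps)
    return best
```

With σ = 1.1 and δ = 1e-5, the tightest of these orders is α = 5 for a single step; more steps or smaller σ yield a larger ε, as expected.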
