Phase 34.3 — SynapticPlasticityEngine
Module: asi.neuromorphic.synaptic_plasticity
Depends on: Phase 34.1 SpikingNeuronModel, Phase 34.2 EventDrivenProcessor
Theme: Dynamic synaptic weight adaptation mechanisms for learning in spiking neural networks
Overview
The SynapticPlasticityEngine implements biologically-inspired learning rules that govern how synaptic connections strengthen or weaken in response to neural activity. This module provides multiple plasticity mechanisms — from classical spike-timing-dependent plasticity (STDP) to homeostatic scaling and structural rewiring — enabling SNNs to learn temporal patterns, maintain stability, and adapt connectivity topology.
Motivation
Synaptic plasticity is the fundamental mechanism underlying learning and memory in biological neural systems. Unlike gradient-based backpropagation, plasticity rules operate locally using only pre- and post-synaptic activity, making them naturally suited to neuromorphic hardware and event-driven computation. This module provides:
- Hebbian learning — "neurons that fire together wire together" via STDP
- Stability regulation — homeostatic mechanisms preventing runaway excitation/inhibition
- Credit assignment — reward-modulated STDP for reinforcement learning in SNNs
- Network topology adaptation — structural plasticity for pruning/growing connections
Frozen Dataclasses
```python
from __future__ import annotations

from dataclasses import dataclass
from enum import Enum, auto


class PlasticityRule(Enum):
    ASYMMETRIC_STDP = auto()   # Classical Bi & Poo (1998)
    SYMMETRIC_STDP = auto()    # Non-causal symmetric window
    TRIPLEX_STDP = auto()      # Pfister & Gerstner (2006) triplet
    REWARD_MODULATED = auto()  # Frémaux & Gerstner (2016) R-STDP
    BCM = auto()               # Bienenstock-Cooper-Munro (1982)
    HOMEOSTATIC = auto()       # Turrigiano (2008) synaptic scaling


@dataclass(frozen=True)
class PlasticityConfig:
    """Global plasticity configuration."""

    rule: PlasticityRule
    learning_rate: float                # η — base learning rate
    a_plus: float = 0.01                # LTP amplitude
    a_minus: float = 0.012              # LTD amplitude (slightly larger for stability)
    tau_plus: float = 20.0              # LTP time constant (ms)
    tau_minus: float = 20.0             # LTD time constant (ms)
    w_min: float = 0.0                  # Minimum synaptic weight
    w_max: float = 1.0                  # Maximum synaptic weight
    eligibility_decay: float = 0.95     # Eligibility trace decay factor
    homeostatic_tau: float = 1000.0     # Homeostatic time constant (ms)
    target_rate: float = 5.0            # Target firing rate (Hz)
    structural_threshold: float = 0.01  # Pruning threshold for structural plasticity
    triplet_tau_x: float = 101.0        # Triplet pre-trace time constant (ms)
    triplet_tau_y: float = 125.0        # Triplet post-trace time constant (ms)


@dataclass(frozen=True)
class Synapse:
    """Immutable synapse state."""

    pre_id: int                    # Pre-synaptic neuron ID
    post_id: int                   # Post-synaptic neuron ID
    weight: float                  # Current synaptic weight
    delay: float                   # Axonal delay (ms)
    eligibility: float = 0.0       # Eligibility trace for R-STDP
    pre_trace: float = 0.0         # Pre-synaptic activity trace
    post_trace: float = 0.0        # Post-synaptic activity trace
    age: int = 0                   # Synapse age (update steps)
    last_update_time: float = 0.0  # Last modification timestamp (ms)


@dataclass(frozen=True)
class STDPWindow:
    """STDP timing window parameters."""

    delta_t: float                # Post − pre spike time difference (ms)
    delta_w: float                # Resulting weight change
    pre_trace_value: float        # Pre-synaptic trace at evaluation
    post_trace_value: float       # Post-synaptic trace at evaluation
    rule_applied: PlasticityRule  # Which rule produced this change


@dataclass(frozen=True)
class PlasticityTrace:
    """Running trace for eligibility / activity tracking."""

    neuron_id: int
    trace_value: float      # Current exponentially-decayed trace
    last_spike_time: float  # Time of last spike (ms)
    spike_count: int        # Total spike count in window
    running_rate: float     # Estimated firing rate (Hz)


@dataclass(frozen=True)
class HomeostaticState:
    """Per-neuron homeostatic regulation state."""

    neuron_id: int
    scaling_factor: float          # Multiplicative scaling factor
    current_rate: float            # Measured firing rate (Hz)
    target_rate: float             # Desired firing rate (Hz)
    rate_error: float              # (current − target) / target
    intrinsic_excitability: float  # Intrinsic excitability modifier
    last_adjustment_time: float    # Last homeostatic update (ms)


@dataclass(frozen=True)
class StructuralChange:
    """Record of a structural plasticity event (synapse creation/deletion)."""

    event_type: str    # "prune" | "grow"
    pre_id: int
    post_id: int
    old_weight: float  # Weight before event (0.0 for grow)
    new_weight: float  # Weight after event (0.0 for prune)
    timestamp: float   # Event time (ms)
    reason: str        # Human-readable reason
```
Protocol
```python
from typing import Protocol, Sequence

import numpy as np
from numpy.typing import NDArray


class SynapticPlasticityProtocol(Protocol):
    """Interface for all synaptic plasticity mechanisms."""

    def apply_stdp(
        self,
        pre_times: Sequence[float],
        post_times: Sequence[float],
        synapse: Synapse,
    ) -> Synapse:
        """Apply spike-timing-dependent plasticity.

        Computes weight change from temporal correlations between
        pre- and post-synaptic spike times using the configured
        STDP kernel (asymmetric, symmetric, or triplet).

        Args:
            pre_times: Pre-synaptic spike timestamps (ms).
            post_times: Post-synaptic spike timestamps (ms).
            synapse: Current synapse state.

        Returns:
            Updated Synapse with new weight and traces.
        """
        ...

    def apply_short_term(
        self,
        synapse: Synapse,
        spike_history: Sequence[float],
    ) -> Synapse:
        """Apply short-term plasticity (facilitation / depression).

        Models use-dependent transient changes in synaptic efficacy
        via Tsodyks-Markram dynamics.

        Args:
            synapse: Current synapse state.
            spike_history: Recent pre-synaptic spike times (ms).

        Returns:
            Updated Synapse with short-term weight modulation.
        """
        ...

    def apply_homeostatic(
        self,
        neuron_activity: Sequence[PlasticityTrace],
        target_rate: float,
    ) -> float:
        """Compute homeostatic scaling factor.

        Implements synaptic scaling (Turrigiano 2008) to maintain
        network stability by adjusting all incoming weights toward
        a target firing rate.

        Args:
            neuron_activity: Activity traces for the neuron.
            target_rate: Desired firing rate (Hz).

        Returns:
            Multiplicative scaling factor for incoming weights.
        """
        ...

    def apply_reward_modulated(
        self,
        synapse: Synapse,
        eligibility_trace: float,
        reward_signal: float,
    ) -> Synapse:
        """Apply reward-modulated STDP (R-STDP).

        Combines eligibility traces (from STDP) with a delayed
        reward signal for three-factor learning rules.

        Args:
            synapse: Synapse whose weight is being modulated.
            eligibility_trace: Accumulated eligibility from STDP.
            reward_signal: Scalar reward/punishment signal.

        Returns:
            Updated Synapse with reward-modulated weight change.
        """
        ...

    def prune_or_grow(
        self,
        connectivity_matrix: NDArray[np.float64],
        activity: Sequence[PlasticityTrace],
    ) -> NDArray[np.float64]:
        """Apply structural plasticity to network topology.

        Prunes weak/inactive synapses and grows new connections
        based on activity-dependent rules and distance metrics.

        Args:
            connectivity_matrix: N×N weight matrix.
            activity: Per-neuron activity traces.

        Returns:
            Updated connectivity matrix with structural changes.
        """
        ...
```
Concrete Implementations
| Class | Purpose | Key Algorithm |
|---|---|---|
| `AsymmetricSTDP` | Classical causal STDP | Δw = A⁺·exp(−Δt/τ⁺) if Δt>0; −A⁻·exp(Δt/τ⁻) if Δt<0 |
| `SymmetricSTDP` | Non-causal symmetric window | Δw = A·exp(−\|Δt\|/τ) for coincidence detection |
| `TriplexSTDP` | Triplet rule (Pfister & Gerstner 2006) | Adds second-order pre/post trace interactions |
| `RewardModulatedSTDP` | Three-factor R-STDP (Frémaux & Gerstner 2016) | Δw = η · e(t) · r(t); e(t) = STDP kernel convolved with eligibility decay |
| `HomeostaticScaler` | Synaptic scaling (Turrigiano 2008) | s = 1 + α·(r_target − r_actual)/r_target; w_new = s·w |
| `StructuralPlasticityManager` | Synaptogenesis & pruning | Prune if w < θ_prune for age > T_min; grow if post-neuron below target connectivity |
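The `HomeostaticScaler` formula can be sketched as a small standalone function; the `alpha` gain and list-based weights are illustrative, not part of the spec:

```python
def homeostatic_scale(weights, r_actual, r_target, alpha=0.1):
    """Multiplicative synaptic scaling toward a target firing rate.

    s = 1 + α·(r_target − r_actual)/r_target, applied uniformly to all
    incoming weights. Because every weight is scaled by the same factor,
    relative weight ratios (and thus learned selectivity) are preserved.
    """
    s = 1.0 + alpha * (r_target - r_actual) / r_target
    return [w * s for w in weights]

# Neuron firing below target (2.5 Hz vs 5 Hz) → weights scale up
scaled = homeostatic_scale([0.2, 0.4], r_actual=2.5, r_target=5.0)
```

The uniform multiplicative form is what distinguishes synaptic scaling from Hebbian rules: it regulates total drive without overwriting the weight *pattern* that STDP produced.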
Key Algorithms
1. Exponential STDP Kernel
```
For each pair (t_pre, t_post):
    Δt = t_post - t_pre
    if Δt > 0:                    # LTP (pre before post)
        Δw += A⁺ · exp(-Δt / τ⁺)
    elif Δt < 0:                  # LTD (post before pre)
        Δw -= A⁻ · exp(Δt / τ⁻)
w_new = clip(w + η · Δw, w_min, w_max)
```
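A minimal runnable sketch of the kernel above; parameter names follow `PlasticityConfig`, and all-to-all pairing is an assumption (the spec does not fix a pairing scheme):

```python
import math

def stdp_delta_w(pre_times, post_times, a_plus=0.01, a_minus=0.012,
                 tau_plus=20.0, tau_minus=20.0):
    """Accumulate Δw over all pre/post spike pairs (all-to-all pairing)."""
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            if dt > 0:    # pre before post → LTP
                dw += a_plus * math.exp(-dt / tau_plus)
            elif dt < 0:  # post before pre → LTD
                dw -= a_minus * math.exp(dt / tau_minus)
    return dw

def apply_stdp_update(w, dw, eta=1.0, w_min=0.0, w_max=1.0):
    """Apply the scaled weight change, clipped to [w_min, w_max]."""
    return min(max(w + eta * dw, w_min), w_max)

# Causal pair (pre at 10 ms, post at 15 ms) potentiates;
# the reversed order depresses.
dw_ltp = stdp_delta_w([10.0], [15.0])
dw_ltd = stdp_delta_w([15.0], [10.0])
```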
2. Eligibility Trace Accumulation (R-STDP)
```
On each pre-post spike pair:
    e(t) += STDP(Δt)          # Standard STDP kernel
Between spikes:
    e(t) *= exp(-dt / τ_e)    # Exponential decay
On reward signal r(t):
    Δw = η · e(t) · r(t)      # Three-factor rule
```
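The three steps above can be sketched as a small trace object; `tau_e` and `eta` values are illustrative defaults, not spec constants:

```python
import math

class EligibilityTrace:
    """Minimal eligibility trace for R-STDP."""

    def __init__(self, tau_e=200.0):
        self.tau_e = tau_e
        self.e = 0.0

    def accumulate(self, stdp_dw):
        """A pre/post pair tags the synapse with its STDP contribution."""
        self.e += stdp_dw

    def decay(self, dt):
        """Exponential decay between events (dt in ms)."""
        self.e *= math.exp(-dt / self.tau_e)

    def reward(self, r, eta=0.1):
        """Three-factor update: Δw = η · e(t) · r(t)."""
        return eta * self.e * r

trace = EligibilityTrace()
trace.accumulate(0.008)   # causal pair tags the synapse
trace.decay(100.0)        # 100 ms elapse before the reward arrives
dw = trace.reward(1.0)    # positive reward converts the tag into LTP
```

The key property is that the sign of Δw depends on *both* the timing (via the tag) and the reward, which is what allows delayed credit assignment.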
3. BCM Theory Implementation
```
θ_BCM = E[y²]                 # Sliding modification threshold
Δw = η · x · y · (y - θ_BCM)  # LTP when y > θ_BCM, LTD when y < θ_BCM
```
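A runnable sketch of the BCM rule; the discrete-time low-pass update of θ (with time constant `tau_theta`) is a common choice but an assumption here:

```python
def bcm_update_threshold(theta, y, tau_theta=100.0):
    """Sliding threshold: running average of y² via a discrete low-pass filter."""
    return theta + (y * y - theta) / tau_theta

def bcm_delta_w(x, y, theta, eta=0.01):
    """BCM rule: Δw changes sign at θ, so no separate LTP/LTD branches are needed."""
    return eta * x * y * (y - theta)

theta = 4.0                           # e.g. current estimate of E[y²]
ltd = bcm_delta_w(1.0, 2.0, theta)    # y below threshold → negative Δw
ltp = bcm_delta_w(1.0, 5.0, theta)    # y above threshold → positive Δw
theta = bcm_update_threshold(theta, 5.0)  # strong activity raises θ
```

Because θ tracks E[y²], sustained high activity raises the threshold and makes further potentiation harder, which is BCM's built-in stability mechanism.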
4. Intrinsic Excitability Adjustment
```
r_error = (r_current - r_target) / r_target
excitability *= (1 - α_ie · r_error)
excitability = clip(excitability, 0.5, 2.0)
```
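The adjustment above in runnable form; the `alpha_ie` gain is illustrative, and the [0.5, 2.0] clip range is taken directly from the pseudocode:

```python
def adjust_excitability(excitability, r_current, r_target, alpha_ie=0.01):
    """Nudge intrinsic excitability against the rate error, clipped to [0.5, 2.0]."""
    r_error = (r_current - r_target) / r_target
    excitability *= 1.0 - alpha_ie * r_error
    return min(max(excitability, 0.5), 2.0)

# Firing too fast (10 Hz vs 5 Hz target) → excitability decreases;
# firing too slowly → it increases.
fast = adjust_excitability(1.0, 10.0, 5.0)
slow = adjust_excitability(1.0, 2.5, 5.0)
```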
5. Synaptogenesis Threshold
```
For each neuron i:
    connectivity_i = count(w[i,:] > 0) / N
    if connectivity_i < target_connectivity:
        candidates = neurons with highest activity correlation
        grow synapse with w_init = w_min + ε

For each synapse (i,j):
    if w[i,j] < θ_prune and age[i,j] > T_maturation:
        prune synapse → set w[i,j] = 0
```
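The two passes above can be sketched over an N×N matrix. Correlation-based candidate selection is elided here; growing onto a random unconnected partner is an assumption used to keep the sketch self-contained, as are the default thresholds:

```python
import numpy as np

def prune_and_grow(w, age, theta_prune=0.01, t_maturation=100,
                   target_connectivity=0.3, w_init=0.02, rng=None):
    """One structural-plasticity pass over an N×N weight matrix."""
    if rng is None:
        rng = np.random.default_rng(0)
    w = w.copy()
    # Prune: only synapses that are both weak AND mature
    mature_weak = (w > 0) & (w < theta_prune) & (age > t_maturation)
    w[mature_weak] = 0.0
    # Grow: rows below target out-connectivity gain one random synapse
    n = w.shape[0]
    for i in range(n):
        if (w[i] > 0).sum() / n < target_connectivity:
            candidates = np.flatnonzero((w[i] == 0) & (np.arange(n) != i))
            if candidates.size:
                w[i, rng.choice(candidates)] = w_init
    return w
```

The maturation check is what prevents thrashing: a newly grown synapse starts near `θ_prune` and must be given time to potentiate before it becomes eligible for pruning.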
Metrics
| Metric | Target | Description |
|---|---|---|
| `weight_convergence_time` | ≤1000 presentations | Steps until weight distribution stabilizes (KL divergence < 0.01) |
| `pattern_selectivity` | ≥0.8 | Ratio of potentiated synapses for target pattern vs. noise |
| `homeostatic_stability` | ±5% of target rate | Firing rate maintained within tolerance after perturbation |
| `reward_correlation` | ≥0.7 | Pearson correlation between weight changes and reward signal |
| `structural_sparsity` | 60–80% | Fraction of zero-weight connections after structural plasticity |
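As one concrete example, `structural_sparsity` falls out directly from the weight matrix; excluding the diagonal (no self-synapses) is an assumption of this sketch:

```python
import numpy as np

def structural_sparsity(w):
    """Fraction of zero-weight off-diagonal entries in the connectivity matrix."""
    n = w.shape[0]
    off_diag = ~np.eye(n, dtype=bool)
    return float((w[off_diag] == 0).mean())
```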
Test Targets
- Line coverage: ≥95%
- STDP convergence: Weight distribution stabilizes within 1000 pattern presentations for all STDP variants
- Homeostatic regulation: Neuron firing rate returns to within 5% of target within 500ms after 2× rate perturbation
- Reward-modulated: Positive reward → LTP for correlated pairs; negative → LTD (Spearman ρ > 0.7)
- Structural plasticity: Network achieves target sparsity (60–80%) while maintaining pattern recall accuracy ≥ 90%
- Trace correctness: Eligibility traces decay to < 1% within 5τ_e
- Weight bounds: All weights remain in [w_min, w_max] under all plasticity rules (fuzz 10⁶ updates)
- Determinism: Identical inputs → identical outputs across 100 trials (given same RNG seed)
References
- Bi, G.-Q. & Poo, M.-M. (1998). Synaptic modifications in cultured hippocampal neurons. J. Neurosci., 18(24), 10464–10472.
- Turrigiano, G.G. (2008). The self-tuning neuron: synaptic scaling of excitatory synapses. Cell, 135(3), 422–435.
- Frémaux, N. & Gerstner, W. (2016). Neuromodulated spike-timing-dependent plasticity, and theory of three-factor learning rules. Front. Neural Circuits, 9, 85.
- Pfister, J.-P. & Gerstner, W. (2006). Triplets of spikes in a model of spike timing-dependent plasticity. J. Neurosci., 26(38), 9673–9682.
- Bienenstock, E.L., Cooper, L.N. & Munro, P.W. (1982). Theory for the development of neuron selectivity. J. Neurosci., 2(1), 32–48.
- Tsodyks, M.V. & Markram, H. (1997). The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. PNAS, 94(2), 719–723.
- Holtmaat, A. & Svoboda, K. (2009). Experience-dependent structural synaptic plasticity in the mammalian brain. Nat. Rev. Neurosci., 10(9), 647–658.
File Layout
```
src/asi/neuromorphic/synaptic_plasticity/
├── __init__.py
├── _config.py            # PlasticityConfig, enums
├── _types.py             # Synapse, STDPWindow, PlasticityTrace, HomeostaticState, StructuralChange
├── _protocol.py          # SynapticPlasticityProtocol
├── _stdp.py              # AsymmetricSTDP, SymmetricSTDP, TriplexSTDP
├── _reward_modulated.py  # RewardModulatedSTDP
├── _homeostatic.py       # HomeostaticScaler
├── _structural.py        # StructuralPlasticityManager
├── _engine.py            # SynapticPlasticityEngine (facade)
└── _metrics.py           # PlasticityMetrics collector

tests/neuromorphic/synaptic_plasticity/
├── test_asymmetric_stdp.py
├── test_symmetric_stdp.py
├── test_triplex_stdp.py
├── test_reward_modulated.py
├── test_homeostatic.py
├── test_structural.py
├── test_engine.py
├── test_convergence.py   # Long-running learning convergence tests
└── conftest.py           # Shared fixtures
```
Acceptance Criteria
- Frozen dataclasses with `__post_init__` validation
- `SynapticPlasticityProtocol` defined; all 6 concrete classes implement it