Phase 34.4 — NeuromorphicEncoder
Component: asi.neuromorphic.encoder
Depends on: Phase 34.1 SpikingNeuronModel, Phase 34.2 EventDrivenProcessor
Predecessor: Phase 34.3 SynapticPlasticityEngine
Overview
The NeuromorphicEncoder bridges conventional computing and neuromorphic hardware by converting continuous-valued data (images, signals, features) into spike trains and vice versa. It implements a comprehensive suite of neural coding schemes — from simple Poisson rate coding to sophisticated population and temporal codes — and provides a principled ANN-to-SNN conversion pipeline. Faithful encoding is the critical bottleneck for neuromorphic deployment: lossy or inefficient encoding degrades downstream SNN accuracy regardless of network architecture.
Frozen Dataclasses
```python
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Literal, Sequence

import numpy as np

CodingScheme = Literal[
    "rate_poisson", "rate_burst",
    "temporal_ttfs", "temporal_rank_order", "temporal_phase",
    "population_gaussian", "population_triangular",
]

DecodingMethod = Literal[
    "spike_count", "spike_rate", "time_to_first_spike",
    "membrane_potential", "population_vector", "bayesian_mle",
]


@dataclass(frozen=True)
class EncoderConfig:
    """Top-level configuration for the NeuromorphicEncoder."""

    coding_scheme: CodingScheme = "rate_poisson"
    time_window_ms: float = 100.0        # simulation window
    dt_ms: float = 0.1                   # time resolution
    max_firing_rate_hz: float = 1000.0   # saturation frequency
    min_firing_rate_hz: float = 0.0      # baseline frequency
    refractory_period_ms: float = 2.0    # absolute refractory period
    population_size: int = 10            # neurons per input dimension
    jitter_ms: float = 0.0               # temporal jitter (noise)
    normalize_input: bool = True         # normalize to [0, 1]
    seed: int | None = None


@dataclass(frozen=True)
class SpikeTrain:
    """Spike train representation for one or more neurons."""

    spike_times: np.ndarray   # shape (num_neurons, max_spikes), padded with NaN
    neuron_ids: np.ndarray    # shape (num_neurons,)
    duration_ms: float
    dt_ms: float
    num_spikes: np.ndarray    # shape (num_neurons,), actual spike count per neuron
    binary_matrix: np.ndarray | None = None  # shape (num_neurons, num_timesteps), optional dense repr


@dataclass(frozen=True)
class PopulationCode:
    """Population-coded representation of a scalar or vector value."""

    spike_trains: tuple[SpikeTrain, ...]  # one SpikeTrain per input dimension
    tuning_centers: np.ndarray            # shape (num_neurons,), preferred stimuli
    tuning_widths: np.ndarray             # shape (num_neurons,), receptive field σ
    input_range: tuple[float, float]
    num_neurons_per_dim: int


@dataclass(frozen=True)
class TemporalCode:
    """Temporal coding metadata layered on a SpikeTrain."""

    spike_train: SpikeTrain
    scheme: Literal["ttfs", "rank_order", "phase"]
    reference_time_ms: float = 0.0           # phase/rank reference
    phase_frequency_hz: float | None = None  # for phase coding
    rank_order: np.ndarray | None = None     # permutation (descending importance)


@dataclass(frozen=True)
class ConversionResult:
    """Result of ANN-to-SNN conversion."""

    snn_weights: dict[str, np.ndarray]       # layer_name -> weight matrix
    threshold_balancing: dict[str, float]    # layer_name -> v_thresh
    normalization_factors: dict[str, float]  # layer_name -> scale factor
    accuracy_original: float                 # ANN top-1 accuracy
    accuracy_converted: float                # SNN top-1 accuracy (simulated)
    accuracy_loss_pct: float                 # relative accuracy drop %
    num_timesteps_simulated: int
    conversion_time_s: float


@dataclass(frozen=True)
class DecodingResult:
    """Result of decoding spike trains back to continuous values."""

    values: np.ndarray      # reconstructed signal
    confidence: np.ndarray  # per-sample confidence
    method: DecodingMethod
    reconstruction_mse: float | None = None
    reconstruction_r2: float | None = None


@dataclass(frozen=True)
class EncodingMetrics:
    """Benchmarking metrics for an encoding scheme."""

    information_rate_bits_per_s: float
    reconstruction_accuracy: float   # R² or accuracy, scheme-dependent
    coding_efficiency_bits_per_spike: float
    mean_firing_rate_hz: float
    latency_to_first_spike_ms: float
    sparsity: float                  # fraction of silent time-bins
    energy_proxy_nJ: float           # estimated energy (spike count × E_spike)
    encode_wall_time_ms: float
    decode_wall_time_ms: float
```
Protocol
```python
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class NeuromorphicEncoderProtocol(Protocol):
    """Converts data ↔ spike trains using various neural coding schemes."""

    def encode_rate(self, data: np.ndarray, duration_ms: float) -> SpikeTrain:
        """Encode continuous data as a rate-coded spike train (Poisson process).

        Args:
            data: Input array, shape (num_channels,) or (batch, num_channels).
            duration_ms: Duration of the encoding window.

        Returns:
            SpikeTrain with firing rates proportional to input magnitude.
        """
        ...

    def encode_temporal(
        self, data: np.ndarray, scheme: Literal["ttfs", "rank_order", "phase"]
    ) -> TemporalCode:
        """Encode data using a temporal coding scheme.

        Args:
            data: Input array, shape (num_channels,) or (batch, num_channels).
            scheme: Temporal coding strategy — time-to-first-spike,
                rank-order, or phase coding.

        Returns:
            TemporalCode wrapping the encoded SpikeTrain with scheme metadata.
        """
        ...

    def encode_population(
        self, data: np.ndarray, num_neurons: int
    ) -> PopulationCode:
        """Encode scalar/vector data via population coding (Gaussian tuning curves).

        Args:
            data: Input values, shape (num_dims,) or (batch, num_dims).
            num_neurons: Number of neurons per input dimension.

        Returns:
            PopulationCode with tuning centers, widths, and per-dim spike trains.
        """
        ...

    def convert_ann_to_snn(
        self,
        ann_model: Any,
        calibration_data: np.ndarray,
        num_timesteps: int = 256,
    ) -> ConversionResult:
        """Convert a trained ANN to an equivalent SNN via threshold balancing.

        Uses the Diehl et al. (2015) weight-normalization + threshold-balancing
        pipeline, extended with Petersen et al. (2021) layer-wise optimisation.

        Args:
            ann_model: Trained ANN (dict of weight arrays or framework model).
            calibration_data: Representative input batch for activation profiling.
            num_timesteps: Number of SNN simulation steps for accuracy evaluation.

        Returns:
            ConversionResult with SNN weights, thresholds, and accuracy comparison.
        """
        ...

    def decode_spikes(
        self, spike_train: SpikeTrain, method: DecodingMethod
    ) -> DecodingResult:
        """Decode a spike train back to continuous-valued output.

        Args:
            spike_train: Input spike train to decode.
            method: Decoding strategy — spike_count, spike_rate,
                time_to_first_spike, membrane_potential, population_vector,
                or bayesian_mle.

        Returns:
            DecodingResult with reconstructed values and quality metrics.
        """
        ...

    def benchmark_encoding(
        self, data: np.ndarray, coding_scheme: CodingScheme
    ) -> EncodingMetrics:
        """Benchmark a coding scheme on the given data.

        Measures information rate, reconstruction accuracy, efficiency,
        latency, sparsity, and estimated energy.

        Args:
            data: Test dataset, shape (num_samples, num_channels).
            coding_scheme: The neural coding scheme to benchmark.

        Returns:
            EncodingMetrics with comprehensive performance measurements.
        """
        ...
```
Concrete Implementations
| Class | Role | Key Algorithm |
| --- | --- | --- |
| `PoissonRateEncoder` | Inhomogeneous Poisson process spike generation | spike_times ~ Poisson(λ(t)·dt) where λ = f_max · x_norm; refractory filter |
| `TimeToFirstSpikeEncoder` | TTFS temporal coding | t_spike = T_max · (1 − x_norm); higher values fire earlier (latency ∝ 1/intensity) |
| `RankOrderEncoder` | Thorpe & Gautrais rank-order coding | Sort channels by magnitude → emit spikes in rank order with fixed ISI; decode via modulation factor m^rank |
| `PhaseCodingEncoder` | Phase-of-firing encoding | Value encoded as phase offset relative to a reference oscillation: φ = 2π · x_norm; spike at t = φ / (2π·f_ref) per cycle |
| `PopulationEncoder` | Gaussian tuning curve population code | r_j = f_max · exp(−(x − μ_j)² / (2σ_j²)); evenly spaced centers across input range |
| `ANNtoSNNConverter` | Diehl et al. + Petersen et al. conversion | 1. Profile per-layer max activations on calibration data. 2. Weight normalization: W_l' = W_l · (λ_{l−1} / λ_l). 3. Threshold balancing: v_thresh_l = λ_l / λ_{l−1}. 4. Bias correction for batch-norm folding. 5. Simulate with integrate-and-fire neurons. |
| `SpikeDecoder` | Multi-method spike → value decoding | spike_count: x̂ = count / (T · f_max). spike_rate: sliding-window ISI. ttfs: x̂ = 1 − t_first / T_max. membrane_potential: read final V_mem. population_vector: x̂ = Σ(r_j · μ_j) / Σ(r_j). bayesian_mle: x̂ = argmax_x P(spikes \| x). |
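The TTFS mapping in the table is exactly invertible on noiseless spike times, which makes it a good first round-trip test. A sketch (function names are illustrative, not part of the spec):

```python
import numpy as np


def encode_ttfs(x: np.ndarray, t_max_ms: float = 100.0) -> np.ndarray:
    """Time-to-first-spike: t_spike = T_max * (1 - x_norm).

    Larger values fire earlier; x is clipped to the normalised range [0, 1].
    """
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return t_max_ms * (1.0 - x)


def decode_ttfs(t_spike_ms: np.ndarray, t_max_ms: float = 100.0) -> np.ndarray:
    """Inverse mapping: x_hat = 1 - t_first / T_max."""
    return 1.0 - np.asarray(t_spike_ms, dtype=float) / t_max_ms


x = np.array([0.0, 0.25, 1.0])
t_spike = encode_ttfs(x)          # [100.0, 75.0, 0.0] ms
x_hat = decode_ttfs(t_spike)      # recovers x exactly (no jitter)
```

With `jitter_ms > 0` in `EncoderConfig` the round-trip error grows linearly with jitter, which is why the reconstruction targets below apply to rate and population codes rather than TTFS.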
Key Algorithms in Detail
1. Poisson Rate Encoding
```
Input: x ∈ ℝ^n (normalised to [0, 1])

For each channel i:
    λ_i = f_min + (f_max - f_min) * x_i          # instantaneous rate
    For t = 0 to T_window step dt:
        if t > t_last_spike + t_refract:
            if uniform(0, 1) < λ_i * dt / 1000:  # dt in ms, λ in Hz
                emit spike at time t
                t_last_spike = t
```
2. Temporal Contrast Encoding
```
Input: time-series x(t) ∈ ℝ^n

For each channel i:
    Δx_i(t) = x_i(t) - x_i(t - dt)
    if Δx_i(t) > θ_up:    emit ON spike
    if Δx_i(t) < -θ_down: emit OFF spike

# Mimics DVS (Dynamic Vision Sensor) event generation
```
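The ON/OFF rule reduces to thresholded signal differences. A sketch assuming a dense (channels × samples) time series (function name and tuple return are illustrative):

```python
import numpy as np


def encode_temporal_contrast(
    x: np.ndarray,             # shape (num_channels, num_samples)
    theta_up: float = 0.1,
    theta_down: float = 0.1,
) -> tuple[np.ndarray, np.ndarray]:
    """Return (on_spikes, off_spikes), each shape (num_channels, num_samples - 1)."""
    dx = np.diff(x, axis=1)         # Δx_i(t) = x_i(t) - x_i(t - dt)
    on_spikes = dx > theta_up       # ON event: signal rose past threshold
    off_spikes = dx < -theta_down   # OFF event: signal fell past threshold
    return on_spikes, off_spikes
```

Because events fire only on change, a constant signal produces no spikes at all, which is the source of the sparsity advantage DVS-style encodings have over rate codes.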
3. Gaussian Receptive Fields (Population Coding)
```
Input: scalar x, range [x_min, x_max], N neurons

Centers: μ_j = x_min + j * (x_max - x_min) / (N - 1),  j = 0..N-1
Widths:  σ_j = (x_max - x_min) / (N - 1) / β           # β ≈ 1.5
Rates:   r_j = f_max * exp(-(x - μ_j)² / (2σ_j²))
Spikes:  generate Poisson(r_j) for each neuron j
```
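The tuning-curve construction above, sketched directly in NumPy (function name is illustrative; β = 1.5 and a shared width σ as in the pseudocode):

```python
import numpy as np


def population_rates(
    x: float,
    x_min: float = 0.0,
    x_max: float = 1.0,
    num_neurons: int = 10,
    f_max_hz: float = 1000.0,
    beta: float = 1.5,
) -> tuple[np.ndarray, np.ndarray]:
    """Return (centers, rates) for Gaussian tuning curves over [x_min, x_max]."""
    centers = np.linspace(x_min, x_max, num_neurons)     # μ_j, evenly spaced
    sigma = (x_max - x_min) / (num_neurons - 1) / beta   # shared width σ
    rates = f_max_hz * np.exp(-((x - centers) ** 2) / (2.0 * sigma**2))
    return centers, rates
```

The resulting rate vector is then fed to the Poisson generator from algorithm 1, one independent process per neuron. Smaller β widens the receptive fields, trading resolution for robustness to spike-count noise.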
4. ANN-to-SNN Conversion (Threshold Balancing)
```
Input: Trained ANN with L layers, calibration set D

Phase 1 — Activation profiling:
    For each layer l, record max activation: λ_l = max_{x∈D}(a_l(x))

Phase 2 — Weight normalisation:
    For l = 1 to L:
        W_l' = W_l * (λ_{l-1} / λ_l)
        b_l' = b_l / λ_l

Phase 3 — Threshold balancing:
    v_thresh_l = λ_l / λ_{l-1}      (ensures balanced firing rates)

Phase 4 — Simulate SNN:
    For each timestep t = 1..T:
        For l = 1..L:
            V_l(t) = V_l(t-1) + W_l' @ S_{l-1}(t)
            S_l(t) = (V_l(t) ≥ v_thresh_l)   # spike if threshold exceeded
            V_l(t) = V_l(t) * (1 - S_l(t))   # reset on spike

Output: argmax spike_count over output neurons
```
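Phases 1–3 can be sketched for a plain ReLU MLP held as lists of weight and bias arrays (the function name and the list-of-arrays model format are assumptions; a full converter would also handle conv and batch-norm layers):

```python
import numpy as np


def normalize_for_snn(
    weights: list[np.ndarray],   # W_l, shape (out_l, in_l), for L layers
    biases: list[np.ndarray],    # b_l, shape (out_l,)
    calibration: np.ndarray,     # shape (num_samples, input_dim)
) -> tuple[list[np.ndarray], list[np.ndarray], list[float]]:
    """Phases 1-3: profile activations, rescale weights, derive thresholds."""
    # Phase 1 — max ReLU activation per layer on calibration data (λ_l).
    lambdas = [float(np.max(calibration))]   # λ_0: input scale
    a = calibration
    for W, b in zip(weights, biases):
        a = np.maximum(a @ W.T + b, 0.0)     # ReLU activations
        lambdas.append(float(np.max(a)))

    w_norm, b_norm, thresholds = [], [], []
    for l, (W, b) in enumerate(zip(weights, biases), start=1):
        # Phase 2 — weight normalisation: W_l' = W_l * (λ_{l-1} / λ_l).
        w_norm.append(W * (lambdas[l - 1] / lambdas[l]))
        b_norm.append(b / lambdas[l])
        # Phase 3 — threshold balancing: v_thresh_l = λ_l / λ_{l-1}.
        thresholds.append(lambdas[l] / lambdas[l - 1])
    return w_norm, b_norm, thresholds
```

Outliers in the calibration set inflate λ_l and suppress firing rates; Petersen et al. style refinements replace the max with a high percentile for this reason.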
5. Population Vector Decoding
```
Input: Spike trains {S_j} for N neurons with tuning centers {μ_j}

spike_count_j = |{t : S_j(t) = 1}|
x̂ = Σ_j (spike_count_j * μ_j) / Σ_j (spike_count_j)
Confidence = Σ_j spike_count_j / (N * T * f_max)
```
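The decoder above in NumPy form (function name is illustrative; the confidence term follows the normalisation shown, and the zero-spike case is returned as NaN since the estimate is undefined):

```python
import numpy as np


def decode_population_vector(
    spike_counts: np.ndarray,   # shape (N,), spikes per neuron in the window
    centers: np.ndarray,        # shape (N,), tuning centers μ_j
    duration_s: float = 0.1,
    f_max_hz: float = 1000.0,
) -> tuple[float, float]:
    """Return (x_hat, confidence) via the population-vector average."""
    total = spike_counts.sum()
    if total == 0:
        return float("nan"), 0.0   # no spikes: estimate undefined
    x_hat = float((spike_counts * centers).sum() / total)
    confidence = float(total / (len(centers) * duration_s * f_max_hz))
    return x_hat, confidence
```

Because the estimate is a spike-count-weighted mean of the centers, it interpolates between neurons and can resolve values finer than the center spacing.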
Encoding Metrics
| Metric | Formula / Description | Target |
| --- | --- | --- |
| `information_rate_bits_per_s` | Mutual information I(X; S) / T_window | Maximise |
| `reconstruction_accuracy` | R² between original and decoded signal | > 0.90 |
| `coding_efficiency_bits_per_spike` | I(X; S) / total_spike_count | Maximise |
| `mean_firing_rate_hz` | total_spikes / (N × T_window) | Log for comparison |
| `latency_to_first_spike_ms` | Time from stimulus onset to first spike | Minimise for TTFS |
| `sparsity` | Fraction of (neuron, time_bin) pairs with no spike | Higher = more efficient |
| `energy_proxy_nJ` | spike_count × E_spike (E_spike ≈ 0.9 nJ on Loihi) | Minimise |
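Several of these metrics fall directly out of a dense spike matrix. A sketch (helper name and the dense-matrix input are assumptions; E_spike ≈ 0.9 nJ per the table):

```python
import numpy as np


def spike_matrix_metrics(
    spikes: np.ndarray,        # dense (num_neurons, num_bins) binary matrix
    dt_ms: float = 0.1,
    e_spike_nj: float = 0.9,   # per-spike energy estimate, ~0.9 nJ on Loihi
) -> dict[str, float]:
    """Compute rate, sparsity, energy, and latency from one spike matrix."""
    num_neurons, num_bins = spikes.shape
    duration_s = num_bins * dt_ms / 1000.0
    total_spikes = int(spikes.sum())
    events = np.argwhere(spikes)   # (spike_index, [neuron, bin]) pairs
    return {
        "mean_firing_rate_hz": total_spikes / (num_neurons * duration_s),
        "sparsity": 1.0 - total_spikes / spikes.size,
        "energy_proxy_nJ": total_spikes * e_spike_nj,
        "latency_to_first_spike_ms": (
            float(events[:, 1].min() * dt_ms) if len(events) else float("inf")
        ),
    }
```

The information-theoretic metrics (I(X; S) and bits-per-spike) need the stimulus distribution as well and cannot be computed from the spike matrix alone.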
Test Targets
- Line coverage: ≥ 95 %
- ANN-to-SNN accuracy loss: < 2 % relative on calibration benchmarks (MNIST-scale)
- Encoding round-trip: > 90 % reconstruction R² for rate and population coding
- Temporal coding order preservation: 100 % rank consistency for rank-order encoding
- Performance: Encode 1000-dim input in < 10 ms on CPU
- Type safety: Full `mypy --strict` compliance, zero errors
Layer-Type Conversion Support
| ANN Layer | SNN Equivalent | Conversion Strategy |
| --- | --- | --- |
| Dense / Linear | LIF neuron layer | Direct weight transfer + threshold balancing |
| Conv2D | Spiking Conv2D | Weight-norm per filter; spatial threshold map |
| BatchNorm | Folded into weights | W' = W · γ/√(σ² + ε), b' = (b − μ) · γ/√(σ² + ε) + β |
| ReLU | LIF threshold | Implicit via v_thresh; ReLU ↔ IF neuron equivalence |
| MaxPool | Spike-based max pool | Select neuron with earliest spike per window |
| Attention | Spiking attention | Rate-coded Q·Kᵀ with softmax approximation via WTA |
| Dropout | Not converted | Removed (training-only regularization) |
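The BatchNorm folding row can be checked numerically. A sketch assuming per-output-channel statistics on a dense layer (function name is illustrative):

```python
import numpy as np


def fold_batchnorm(
    W: np.ndarray,       # (out, in) dense weights
    b: np.ndarray,       # (out,) biases
    gamma: np.ndarray,   # (out,) BN scale γ
    beta: np.ndarray,    # (out,) BN shift β
    mean: np.ndarray,    # (out,) running mean μ
    var: np.ndarray,     # (out,) running variance σ²
    eps: float = 1e-5,
) -> tuple[np.ndarray, np.ndarray]:
    """Fold y = γ·(Wx + b - μ)/√(σ² + ε) + β into a single affine layer."""
    scale = gamma / np.sqrt(var + eps)    # per-output-channel scale
    W_folded = W * scale[:, None]         # W' = W · γ/√(σ² + ε)
    b_folded = (b - mean) * scale + beta  # b' = (b - μ)·γ/√(σ² + ε) + β
    return W_folded, b_folded


# Sanity check: folded layer matches BN applied after the original layer.
W, b = np.array([[2.0]]), np.array([1.0])
g, bt, mu, v = np.array([3.0]), np.array([0.5]), np.array([1.0]), np.array([4.0])
Wf, bf = fold_batchnorm(W, b, g, bt, mu, v, eps=0.0)
x = np.array([2.0])
direct = g * (W @ x + b - mu) / np.sqrt(v) + bt
folded = Wf @ x + bf
```

Folding must happen before activation profiling (Phase 1), so that λ_l is measured on the same affine map the SNN will actually run.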
Integration Points
- SpikingNeuronModel (34.1): SpikeTrain output feeds LIFNeuron.receive_spikes(); shared SpikeTrain dataclass
- EventDrivenProcessor (34.2): Encoded spike events routed via EventQueue; event timestamps match encoder dt_ms
- SynapticPlasticityEngine (34.3): Spike timing from encoder drives STDP pre-synaptic trace computation
- NeuromorphicOrchestrator (34.5): Orchestrator calls encode_rate() / encode_population() for input preprocessing and decode_spikes() for output readout
References
- Auge, D., et al. (2021). A Survey of Encoding Techniques for Signal Processing in Spiking Neural Networks. Neural Processing Letters.
- Diehl, P. U., et al. (2015). Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. IJCNN.
- Thorpe, S. & Gautrais, J. (1998). Rank order coding. Computational Neuroscience: Trends in Research.
- Petersen, P., et al. (2021). Optimal Conversion of Conventional Artificial Neural Networks to Spiking Neural Networks. ICLR.
- Bohte, S. M., et al. (2002). Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing.
- Guo, Y., et al. (2021). Neural Coding in Spiking Neural Networks: A Comparative Study for Robust Neuromorphic Systems. Frontiers in Neuroscience.
- Sengupta, A., et al. (2019). Going Deeper in Spiking Neural Networks: VGG and Residual Architectures. Frontiers in Neuroscience.