Differentiable neurophysical audio engine.
Neural networks predict physics, not audio samples.
KAN + GPU FDTD. Train on any sound. Synthesize anything.
Current neural audio synthesis generates waveforms directly — fast but physically meaningless. Physical modeling is parametric but not differentiable — you can't train it on audio. NeiroSynth bridges this gap.
Exact gradients through the wave equation via the Adjoint State Method. A reverse-time finite-difference time-domain (FDTD) pass computes ∂L/∂s in O(N) memory: no autograd graph, no backprop-through-time explosion.
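The adjoint structure is easiest to see on a toy problem. The sketch below (illustrative only, not NeiroSynth code) applies the same two-sweep pattern to a scalar recurrence `x[t+1] = a*x[t] + s` with loss `L = Σ x[t]²`: one forward pass stores the states (the O(N) memory), one reverse-time pass accumulates `dL/ds` exactly, with no autograd graph. All names are ours.

```rust
// Forward sweep: run the recurrence and keep the state history.
fn forward(a: f64, s: f64, x0: f64, t_max: usize) -> Vec<f64> {
    let mut xs = vec![x0];
    for _ in 0..t_max {
        let x = *xs.last().unwrap();
        xs.push(a * x + s);
    }
    xs
}

// Loss over all states after t = 0.
fn loss(xs: &[f64]) -> f64 {
    xs[1..].iter().map(|x| x * x).sum()
}

// Reverse-time sweep: lambda[t] = dL/dx[t], propagated backwards.
// Each step also collects the direct dependence of x[t] on s
// (dx[t]/ds = 1), so grad accumulates the exact dL/ds.
fn adjoint_grad(a: f64, xs: &[f64]) -> f64 {
    let t_max = xs.len() - 1;
    let mut lambda = 2.0 * xs[t_max]; // dL/dx[T]
    let mut grad = 0.0;
    for t in (1..=t_max).rev() {
        grad += lambda; // contribution via dx[t]/ds = 1
        if t > 1 {
            lambda = 2.0 * xs[t - 1] + a * lambda;
        }
    }
    grad
}
```

The same bookkeeping, lifted to the discretized 2D wave equation, is what the reverse-time FDTD pass does: the adjoint field plays the role of `lambda`, and the gradient with respect to each map cell accumulates during the backward sweep.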
Kolmogorov-Arnold Networks with Gaussian RBF basis predict 25 physical parameters. Interpretable nonlinear mappings replace black-box MLPs. Every output has physical meaning.
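A minimal sketch of what one KAN edge looks like with a Gaussian RBF basis: a learnable 1-D function as a weighted sum of Gaussians on a fixed grid. The struct and field names here are illustrative, not NeiroSynth's actual API; the point is that the learned curve on each edge can be plotted and read directly, which is where the interpretability comes from.

```rust
// One KAN edge: phi(x) = sum_k w_k * exp(-((x - c_k) / h)^2),
// with fixed centers c_k on [-1, 1] and learned weights w_k.
struct RbfEdge {
    centers: Vec<f64>, // fixed RBF centers
    width: f64,        // shared bandwidth h
    weights: Vec<f64>, // learned coefficients
}

impl RbfEdge {
    fn new(n: usize) -> Self {
        let centers: Vec<f64> = (0..n)
            .map(|i| -1.0 + 2.0 * i as f64 / (n - 1) as f64)
            .collect();
        RbfEdge {
            centers,
            width: 2.0 / (n - 1) as f64, // spacing-matched bandwidth
            weights: vec![0.0; n],
        }
    }

    // Evaluate the learned 1-D function at x.
    fn eval(&self, x: f64) -> f64 {
        self.centers
            .iter()
            .zip(&self.weights)
            .map(|(c, w)| w * (-((x - c) / self.width).powi(2)).exp())
            .sum()
    }
}
```

A full KAN layer sums such edges over its inputs; training adjusts only the `weights`, so every learned mapping stays a smooth, inspectable curve rather than a black-box matrix.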
12 WGSL compute shaders run the full 2D wave simulation on Metal/Vulkan. Zero CPU-GPU readback in the training loop. 0.01 ms per dispatch on Apple M1.
All GPU buffers allocated at init. Per-frame: shader dispatch + ping-pong swap. No allocations, no GC pauses, no Python overhead. Pure metal performance.
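A CPU stand-in for that per-frame loop, to make the allocation story concrete: all state is allocated once, then each "frame" is a kernel pass plus an index swap. In the real engine the two buffers live on the GPU and the pass is a WGSL compute dispatch; this sketch, with our own names, only shows the ping-pong pattern.

```rust
// Two pre-allocated buffers; `cur` marks the one holding current state.
struct PingPong {
    buf: [Vec<f32>; 2],
    cur: usize,
}

impl PingPong {
    // All allocation happens here, once, at init.
    fn new(n: usize) -> Self {
        PingPong { buf: [vec![0.0; n], vec![0.0; n]], cur: 0 }
    }

    // One frame: read the current buffer, write the other, swap roles.
    // No allocation on this path.
    fn step(&mut self, kernel: impl Fn(&[f32], &mut [f32])) {
        let (a, b) = self.buf.split_at_mut(1);
        let (src, dst) = if self.cur == 0 {
            (&a[0], &mut b[0])
        } else {
            (&b[0], &mut a[0])
        };
        kernel(src, dst);
        self.cur ^= 1; // ping-pong: toggle which buffer is "current"
    }
}
```

On the GPU the swap is just rebinding which buffer is read and which is written, so the steady-state frame cost is exactly one dispatch.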
From MIDI input to physical audio output — every stage is differentiable, every parameter has physical meaning.
A 64×64 grid of learned material properties defines how sound propagates through a virtual body. Feed it a recording of a guitar — it learns the physical body that produces that sound. Swap maps between instruments to cross-synthesize.
C = cΔt/Δx (Courant number, CFL-stable: C² < 0.5)
s[i,j] = learned stiffness
d[i,j] = damping coefficient
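The legend above corresponds to a standard leapfrog FDTD update; a minimal CPU version is sketched below, with per-cell stiffness `s` scaling the Courant term and per-cell damping `d` bleeding energy each step. This is the textbook scheme those symbols describe, not the actual WGSL kernel, and the function names are ours.

```rust
const N: usize = 8; // tiny grid for illustration; the engine uses 64x64

// 2-D stability condition from the legend: C^2 < 0.5,
// i.e. c * dt / dx < 1 / sqrt(2).
fn cfl_ok(c2: f64) -> bool {
    c2 < 0.5
}

// One leapfrog step of u_tt = c^2 * s * laplacian(u), damped by d.
fn fdtd_step(
    u_prev: &[[f64; N]; N],
    u: &[[f64; N]; N],
    s: &[[f64; N]; N],
    d: &[[f64; N]; N],
    c2: f64, // squared Courant number C^2
) -> [[f64; N]; N] {
    let mut next = [[0.0; N]; N];
    for i in 1..N - 1 {
        for j in 1..N - 1 {
            // 5-point discrete Laplacian.
            let lap = u[i + 1][j] + u[i - 1][j] + u[i][j + 1] + u[i][j - 1]
                - 4.0 * u[i][j];
            let v = 2.0 * u[i][j] - u_prev[i][j] + c2 * s[i][j] * lap;
            next[i][j] = v * (1.0 - d[i][j]); // per-cell damping
        }
    }
    next // boundary cells stay clamped at zero (Dirichlet walls)
}
```

Because `s` and `d` enter the update multiplicatively per cell, the gradient of any audio loss flows back to each map entry, which is what makes the material maps learnable.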
From gradient-free robustness to exact adjoint gradients — choose the convergence profile that matches your target.
SPSA with M=4 Rademacher perturbations, two-sided estimates: 8 FDTD passes per step. Robust gradient-free optimization through non-differentiable GPU kernels.
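A sketch of that estimator, under the stated setup: M averaged two-sided evaluations with Rademacher (±1) perturbations, so one gradient estimate costs 2·M loss evaluations (FDTD passes in the engine). The function name and signature are ours; the loss closure stands in for the simulator.

```rust
// SPSA gradient estimate: average M two-sided finite differences
// along random Rademacher directions. Since delta_k is +/-1,
// dividing by delta_k equals multiplying by it.
fn spsa_grad(
    theta: &[f64],
    loss: impl Fn(&[f64]) -> f64,
    eps: f64,
    m: usize,
    rng: &mut impl FnMut() -> bool, // source of random sign bits
) -> Vec<f64> {
    let n = theta.len();
    let mut g = vec![0.0; n];
    for _ in 0..m {
        // Rademacher perturbation delta in {-1, +1}^n.
        let delta: Vec<f64> = (0..n).map(|_| if rng() { 1.0 } else { -1.0 }).collect();
        let plus: Vec<f64> = theta.iter().zip(&delta).map(|(t, d)| t + eps * d).collect();
        let minus: Vec<f64> = theta.iter().zip(&delta).map(|(t, d)| t - eps * d).collect();
        // One directional derivative estimate costs 2 loss evaluations.
        let diff = (loss(&plus) - loss(&minus)) / (2.0 * eps);
        for k in 0..n {
            g[k] += diff * delta[k] / m as f64;
        }
    }
    g
}
```

Only the loss value is needed, so this works unchanged through non-differentiable GPU kernels: each `loss` call is simply a full forward FDTD run.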
Exact gradients via reverse-time FDTD. O(N) memory for state history. Cosine LR schedule 1e-4 → 5e-6.
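The quoted cosine schedule is the standard shape; a sketch with the listed endpoints (1e-4 down to 5e-6), function name ours:

```rust
// Cosine learning-rate decay: lr_max at step 0, lr_min at the last step.
fn cosine_lr(step: usize, total: usize, lr_max: f64, lr_min: f64) -> f64 {
    let t = step as f64 / total as f64; // training progress in [0, 1]
    lr_min + 0.5 * (lr_max - lr_min) * (1.0 + (std::f64::consts::PI * t).cos())
}
```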
SPSA → Adjoint transition. Explore globally with SPSA, then refine with exact gradients. Best of both worlds.
32×32 mesh for the first 40% of training → bilinear upsample → 64×64 for the remaining 60%. Coarse-to-fine refinement for faster convergence.
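The upsample step in that schedule is plain bilinear interpolation of the coarse parameter mesh onto the finer grid. A sketch (our names; the engine goes 32×32 → 64×64, sizes here are arbitrary):

```rust
// Bilinear upsample of a 2-D parameter map from (h1 x w1) to (h2 x w2).
fn bilinear_upsample(src: &[Vec<f64>], w2: usize, h2: usize) -> Vec<Vec<f64>> {
    let h1 = src.len();
    let w1 = src[0].len();
    let mut out = vec![vec![0.0; w2]; h2];
    for y in 0..h2 {
        for x in 0..w2 {
            // Map each destination cell to continuous source coordinates.
            let fy = y as f64 * (h1 - 1) as f64 / (h2 - 1) as f64;
            let fx = x as f64 * (w1 - 1) as f64 / (w2 - 1) as f64;
            let (y0, x0) = (fy.floor() as usize, fx.floor() as usize);
            let (y1, x1) = ((y0 + 1).min(h1 - 1), (x0 + 1).min(w1 - 1));
            let (dy, dx) = (fy - y0 as f64, fx - x0 as f64);
            // Blend the four surrounding source cells.
            out[y][x] = src[y0][x0] * (1.0 - dy) * (1.0 - dx)
                + src[y0][x1] * (1.0 - dy) * dx
                + src[y1][x0] * dy * (1.0 - dx)
                + src[y1][x1] * dy * dx;
        }
    }
    out
}
```

Because the interpolation is linear, coarse-stage gradients transfer cleanly: the upsampled map starts the fine stage exactly where the coarse stage left off.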
Same algorithms. Different universe of performance.
| Metric | NeiroSynth (Rust/WGPU) | PyTorch Equivalent |
|---|---|---|
| FDTD step (64×64) | 0.01 ms | ~0.3 ms |
| Training iter (SPSA, M=4) | ~300 ms | ~2-4 s |
| Memory (adjoint, 4096 steps) | ~64 MB | ~500 MB+ |
| Binary size | ~8 MB | ~2 GB |
| Dependencies | 7 crates | 50+ packages |
| Cold start | <1 s | 5-10 s |
Every sample below was synthesized by NeiroSynth — trained on real recordings, then rendered through the GPU FDTD mesh in real time.
25,000 lines. 12 shaders. Zero Python. Zero frameworks. One developer. Ukraine.