Researchers from the Embedded Machine Learning CD-Lab at ICT introduce a technique for synthesizing electric circuits that can be used for analog neural network inference. With the proposed technique, audio classification can be performed at the speed of electric current.
Link to preprint:
https://jantsch.se/AxelJantsch/papers/2025/MatthiasBittner-AnalogSSM.pdf
Link to conference/workshop:
https://sites.google.com/view/dl-meets-nh-25
At the forthcoming third international workshop on "Deep Learning meets Neuromorphic Hardware", co-located with ECMLPKDD’25, researchers from TU Wien’s Christian Doppler Laboratory for Embedded Machine Learning will present AnalogSSM, a State Space Model (SSM)-based neural network layer that can be converted into an electric circuit for analog neural network inference.
Neural networks based on SSMs have shown strong performance on long-sequence modeling tasks such as raw audio classification. So far, however, their continuous-time parameter representation has not been exploited for analog neural network computing. We propose to synthesize electric circuits that replicate the input-output behavior of
the continuous-time transfer function defined by the learned SSM parameters. AnalogSSM is modeled as a diagonal, real-valued SSM architecture that can be converted into a purely analog electric circuit consisting of adder/subtractor, first-order low-pass, and rectifier operational-amplifier stages. Targeting hotword detection on the Google Speech Commands dataset, we evaluate three model configurations ranging from 0.15k to 1.3k parameters. Averaged over ten individual hotwords, the discrete PyTorch models reach accuracies of 84.5%–90.8%. The synthesized electric circuits are simulated and evaluated with LTspice. On average, we observe an accuracy drop of 2.9 pp for the continuous-time analog circuits, which consist of only 70–238 operational amplifiers.
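To illustrate the core idea, the following is a minimal sketch (not the authors' implementation) of a diagonal, real-valued continuous-time SSM of the kind described above: x'(t) = A x(t) + B u(t) with A = diag(a_1, …, a_N) and a_i < 0, followed by a rectified readout. Each state channel then acts as a first-order low-pass filter with cutoff |a_i|, which is exactly the transfer function an op-amp low-pass stage realizes in the analog circuit; the ReLU corresponds to the rectifier stage. All dimensions, parameter values, and function names here are illustrative assumptions.

```python
import numpy as np

def simulate_diagonal_ssm(a, B, C, u, dt):
    """Zero-order-hold discretization of a diagonal real-valued SSM,
    followed by a rectifier (ReLU) readout.

    a: (N,) diagonal of A, all negative (stable first-order low-passes)
    B: (N, M) input matrix, C: (K, N) output matrix
    u: (T, M) input sequence sampled every dt seconds
    """
    # Exact ZOH discretization; for diagonal A this is element-wise:
    # x[k+1] = Ad * x[k] + Bd @ u[k]
    Ad = np.exp(a * dt)                          # (N,)
    Bd = ((Ad - 1.0) / a)[:, None] * B           # (N, M)
    x = np.zeros(a.shape[0])
    ys = []
    for uk in u:
        x = Ad * x + Bd @ uk
        ys.append(np.maximum(C @ x, 0.0))        # rectifier nonlinearity
    return np.array(ys)

# Toy example (illustrative values): 4 states, scalar input, unit step.
a = np.array([-50.0, -150.0, -400.0, -1000.0])   # cutoffs in rad/s
B = np.ones((4, 1))
C = np.ones((2, 4)) * 0.25
u = np.ones((100, 1))                            # unit-step input
y = simulate_diagonal_ssm(a, B, C, u, dt=1e-3)
print(y[-1])  # each output channel approaches its DC gain C @ (1/|a|)
```

Note that with a diagonal, real-valued A the discretization stays element-wise, which mirrors why the analog realization decomposes into independent first-order low-pass stages rather than a coupled higher-order filter.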