Pruning State Space Models

Pruning State Space Models with Model Order Reduction for Efficient Raw Audio Classification at EUSIPCO’25

Researchers from the Embedded Machine Learning CD-Lab at ICT introduce a pruning technique for State Space Models, specifically targeting the efficiency of the S-Edge layer with Model Order Reduction techniques stemming from linear control theory.

Link to preprint:
https://jantsch.se/AxelJantsch/papers/2025/MatthiasBittner-Eusipco.pdf
Link to conference:
https://eusipco2025.org/

At the forthcoming European Signal Processing Conference (EUSIPCO), researchers from TU Wien’s Christian Doppler Laboratory for Embedded Machine Learning will present a pruning technique for increasing the efficiency of State Space Models (SSMs). SSMs have shown good performance on long-sequence classification tasks such as raw audio classification, but for deployment on edge devices it is crucial to further improve their inference efficiency, and pruning techniques are not yet well explored for SSMs. The authors propose a layer-wise Model Order Reduction (MOR) technique based on balanced truncation, combined with an iterative pruning algorithm, to increase the efficiency of already-trained SSMs without the need for retraining. Specifically, they focus on S-Edge models, a class of hardware-friendly SSMs. Evaluated on the Google Speech Commands dataset, they prune models ranging from 141k to 8k parameters and from 94.9% to 90.0% test accuracy. Under an accuracy-loss constraint of 0.5 percentage points, the method finds models that reduce parameters by 36.1% for the largest model and by 5.8% for the smallest.
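To make the core MOR step concrete, below is a minimal Python sketch of balanced truncation applied to a single discrete-time linear state-space layer x[k+1] = A x[k] + B u[k], y[k] = C x[k] + D u[k]. It is not the paper's implementation: the toy dimensions, the retained rank r, and the helper name balanced_truncation are illustrative assumptions, and the paper's layer-wise application to S-Edge and its iterative pruning loop are not reproduced here. The Gramians are computed with SciPy's solve_discrete_lyapunov, which assumes the state matrix A is Schur-stable.

import numpy as np
from scipy.linalg import solve_discrete_lyapunov, cholesky

def balanced_truncation(A, B, C, D, r):
    """Reduce an n-state discrete-time LTI system (A, B, C, D) to r states.

    Assumes A is Schur-stable (spectral radius < 1) and the system is
    controllable and observable, so both Gramians are positive definite.
    NOTE: illustrative sketch, not the authors' code.
    """
    # Controllability Gramian P and observability Gramian Q:
    #   A P A^T - P + B B^T = 0,   A^T Q A - Q + C^T C = 0
    P = solve_discrete_lyapunov(A, B @ B.T)
    Q = solve_discrete_lyapunov(A.T, C.T @ C)

    # Cholesky factors P = Lp Lp^T, Q = Lq Lq^T
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)

    # SVD of Lq^T Lp; the singular values are the Hankel singular values,
    # which rank how much each balanced state contributes to the I/O map.
    U, hsv, Vt = np.linalg.svd(Lq.T @ Lp)

    # Balancing transformation, truncated to the r largest Hankel values
    s = hsv[:r] ** -0.5
    T = Lp @ Vt[:r].T * s                     # n x r
    Tinv = (s[:, None] * U[:, :r].T) @ Lq.T   # r x n

    return Tinv @ A @ T, Tinv @ B, C @ T, D, hsv

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, p = 16, 2, 2                        # toy layer: 16 states, 2-in, 2-out
    A = rng.standard_normal((n, n))
    A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))   # force Schur stability
    B = rng.standard_normal((n, m))
    C = rng.standard_normal((p, n))
    D = np.zeros((p, m))

    Ar, Br, Cr, Dr, hsv = balanced_truncation(A, B, C, D, r=6)

    # Compare impulse responses of the full and reduced systems
    def impulse(A, B, C, D, steps=50):
        x, ys = B.copy(), [D.copy()]
        for _ in range(steps):
            ys.append(C @ x)                  # y[k] = C A^(k-1) B for k >= 1
            x = A @ x
        return np.array(ys)

    err = np.max(np.abs(impulse(A, B, C, D) - impulse(Ar, Br, Cr, Dr)))
    print("Hankel singular values:", np.round(hsv, 4))
    print(f"max impulse-response error at r=6: {err:.2e}")

The discarded Hankel singular values indicate how little the dropped states matter to the layer's input-output behavior, which is why states can be truncated without retraining. In the paper's setting, a reduction of this kind would be applied per S-Edge layer, with the retained order chosen iteratively so that the overall accuracy drop stays within the 0.5 pp budget.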
 

The proposed method for pruning S-Edge based State Space Models.

© Matthias Bittner
