Mahum Naseer

Supervisor: Muhammad Shafique

Formal Analysis for Resilient Neural Networks in Embedded Systems

 

Artificial neural networks (ANNs), especially deep neural networks (DNNs), have shown remarkable improvement over the past several decades, hence becoming predominant in numerous real-world embedded applications such as healthcare, spam filtering, face recognition, and autonomous driving. However, the discovery of adversarial examples, both in the presence and absence of an adversary, calls the resilience of ANNs to such input perturbations into question. Adding to the challenge, other ANN vulnerabilities, such as backdoors and training bias, call for concrete behavioral guarantees regarding ANNs prior to their deployment in practical systems, particularly in safety-critical systems. Hence, recent years have seen a surge of formal verification efforts for ensuring safety and robustness properties of trained ANNs. However, these works are generally limited in scalability, timing efficiency, completeness, and the scope of the properties analyzed. To tackle these challenges, this research aims to formally analyze diverse ANN properties under the practical uncertainties of real-world applications. We intend to incorporate optimization approaches from both the formal verification (e.g., model abstraction) and machine learning (e.g., quantization and pruning) communities to improve scalability, while also ensuring the completeness of our results.