Guest Talk: Lijun Zhang: Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification

Tuesday, January 07, 2020, 10:30am

Location: RWTH Aachen University, Department of Computer Science - Ahornstr. 55, building E3, room 9220

Speaker: Lijun Zhang



Deep neural networks (DNNs) have been shown to lack robustness: their classification is vulnerable to small perturbations on the inputs. This has led to safety concerns about applying DNNs to safety-critical domains. Several verification approaches have been developed to automatically prove or disprove safety properties of DNNs.

However, these approaches suffer from either the scalability problem, i.e., only small DNNs can be handled, or the precision problem, i.e., the obtained bounds are loose. This talk improves on a recent proposal of analyzing DNNs through the classic abstract interpretation technique by means of a novel symbolic propagation technique. More specifically, the values of neurons are represented symbolically and propagated forward from the input layer to the output layer, on top of abstract domains. We show that our approach can achieve significantly higher precision and thus can prove more properties than using abstract domains alone. Moreover, we show that the bounds our approach derives on the hidden neurons, when supplied to a state-of-the-art SMT-based verification tool, can improve its performance. We implement our approach in a software tool and validate it on several DNNs trained on benchmark datasets such as MNIST.
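To give a flavor of the idea, here is a minimal sketch (not the speaker's tool) of symbolic forward propagation through a ReLU network. Each neuron is tracked as a linear expression over the network inputs plus an interval of non-symbolic slack; affine layers compose exactly in symbolic form, while a ReLU whose sign cannot be determined falls back to a concrete interval. The network shapes and the box input region are illustrative assumptions.

```python
import numpy as np

def bounds(coeffs, const, elo, ehi, x_lo, x_hi):
    """Neuron j lies in coeffs[j]. x + const[j] + [elo[j], ehi[j]] for x in the box."""
    lo = const + elo + np.where(coeffs > 0, coeffs * x_lo, coeffs * x_hi).sum(1)
    hi = const + ehi + np.where(coeffs > 0, coeffs * x_hi, coeffs * x_lo).sum(1)
    return lo, hi

def propagate(weights, biases, x_lo, x_hi):
    """Symbolic interval propagation through affine layers with ReLU in between."""
    n = len(x_lo)
    coeffs, const = np.eye(n), np.zeros(n)      # inputs start as symbols x_i
    elo, ehi = np.zeros(n), np.zeros(n)         # non-symbolic slack interval
    for i, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.clip(W, 0, None), np.clip(W, None, 0)
        coeffs, const = W @ coeffs, W @ const + b        # exact affine step
        elo, ehi = Wp @ elo + Wn @ ehi, Wp @ ehi + Wn @ elo  # interval slack
        lo, hi = bounds(coeffs, const, elo, ehi, x_lo, x_hi)
        if i < len(weights) - 1:                # ReLU on hidden layers only
            dead = hi <= 0                      # provably inactive: output 0
            unstable = (lo < 0) & (hi > 0)      # sign unknown: drop symbols
            for mask in (dead, unstable):
                coeffs[mask], const[mask], elo[mask] = 0.0, 0.0, 0.0
            ehi[dead] = 0.0
            ehi[unstable] = hi[unstable]        # neuron now in [0, hi]
    return lo, hi
```

Because symbols are kept across layers, correlated terms can cancel exactly (e.g., two identical hidden neurons subtracted from each other yield the bound [0, 0]), whereas plain interval propagation would only conclude a loose bound; this is the precision gain the abstract refers to.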

