Guest Lecture: Lijun Zhang: Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification
Tuesday, 07.01.2020, 10:30 am
Location: RWTH Aachen University, Informatikzentrum - Ahornstr. 55, Extension Building E3, Room 9220
Speaker: Lijun Zhang
Abstract:
Deep neural networks (DNNs) have been shown to lack robustness: their classifications are vulnerable to small perturbations on the inputs. This has led to safety concerns about applying DNNs in safety-critical domains. Several verification approaches have been developed to automatically prove or disprove safety properties of DNNs.
However, these approaches suffer from either the scalability problem, i.e., only small DNNs can be handled, or the precision problem, i.e., the obtained bounds are loose. This paper improves on a recent proposal of analyzing DNNs through the classic abstract interpretation technique by means of a novel symbolic propagation technique. More specifically, the values of neurons are represented symbolically and propagated forward from the input layer to the output layer, on top of abstract domains. We show that our approach achieves significantly higher precision and thus can prove more properties than using abstract domains alone. Moreover, we show that the bounds our approach derives for the hidden neurons, when supplied to a state-of-the-art SMT-based verification tool, can improve its performance. We implement our approach in a software tool and validate it on DNNs trained on benchmark datasets such as MNIST.
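To give an intuition for the idea sketched in the abstract, the following Python sketch propagates per-neuron lower/upper linear expressions in the input variables through a fully connected ReLU network over an input box, falling back to a concrete interval only at unstable ReLUs. It is a minimal illustration under simplified assumptions, not the authors' tool: the function names, the interval-style fallback at unstable neurons, and all weights, biases, and input bounds are hypothetical, whereas the talk's approach runs symbolic propagation on top of abstract domains in general.

import numpy as np

def concretize(coeffs, const, lo, hi, lower=True):
    # Tightest concrete bound of the linear form coeffs·x + const over the
    # input box lo <= x <= hi.
    pos, neg = np.maximum(coeffs, 0.0), np.minimum(coeffs, 0.0)
    if lower:
        return pos @ lo + neg @ hi + const
    return pos @ hi + neg @ lo + const

def symbolic_propagate(weights, biases, lo, hi):
    # Propagate symbolic lower/upper linear expressions (in the input
    # variables) through a fully connected ReLU network and return concrete
    # bounds on the output neurons.
    n = lo.size
    lc, uc = np.eye(n), np.eye(n)      # coefficient matrices of the expressions
    l0, u0 = np.zeros(n), np.zeros(n)  # constant terms of the expressions
    last = len(weights) - 1
    for k, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        # Affine layer: combine the incoming lower/upper expressions soundly.
        lc, l0, uc, u0 = (Wp @ lc + Wn @ uc, Wp @ l0 + Wn @ u0 + b,
                          Wp @ uc + Wn @ lc, Wp @ u0 + Wn @ l0 + b)
        if k == last:                  # no ReLU after the output layer
            break
        for j in range(W.shape[0]):    # ReLU, neuron by neuron
            l = concretize(lc[j], l0[j], lo, hi, lower=True)
            u = concretize(uc[j], u0[j], lo, hi, lower=False)
            if u <= 0:                 # provably inactive: output is exactly 0
                lc[j], l0[j], uc[j], u0[j] = 0.0, 0.0, 0.0, 0.0
            elif l < 0:                # unstable: fall back to interval [0, u]
                lc[j], l0[j] = 0.0, 0.0
                uc[j], u0[j] = 0.0, u
            # provably active (l >= 0): expressions pass through unchanged
    lows = np.array([concretize(c, c0, lo, hi, True) for c, c0 in zip(lc, l0)])
    ups = np.array([concretize(c, c0, lo, hi, False) for c, c0 in zip(uc, u0)])
    return lows, ups

# Toy usage on a hypothetical 2-2-1 network with inputs in [0, 1]^2.
W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)
print(symbolic_propagate([W1, W2], [b1, b2], np.zeros(2), np.ones(2)))

The point of keeping linear expressions through stable ReLUs is that cancellations between neurons are preserved across layers, which is relational information a plain interval analysis discards; this is, in simplified form, why symbolic propagation yields tighter bounds than using only the abstract domain.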