Information Sciences Seminar: Linear Approximation Approaches for Formal Verification of Deep Neural Networks and Their Tightness
Speaker: Min Zhang (East China Normal University)
Time: 2021-10-14, 10:00-12:00
Venue: Tencent Meeting (ID: 106 290 487)
Abstract: Linear approximation is an effective technique for the formal verification of deep neural networks (DNNs). By over-approximating the non-linear activation functions in a DNN, the verification problem can be transformed into a linear programming problem that is easier and more scalable to solve. The approximation may, however, sacrifice the completeness of the verification, so a tighter approximation is desired to improve the precision of the verification result. In this talk, we will introduce a fine-grained linear approximation approach for the robustness verification of Convolutional Neural Networks (CNNs). Under this approximation, the robustness verification of CNNs becomes tighter, i.e., the networks can be proved robust within a larger perturbation distance. The approximation approach is applicable to general sigmoid-like activation functions. We implement the resulting verification toolkit, DeepCert, and evaluate it on open-source benchmarks, including LeNet and models trained on MNIST and CIFAR. Experimental results show that DeepCert outperforms other state-of-the-art robustness verification tools, with up to a 286.3% improvement in the certified lower bound and up to a 1566.8x speedup on the same neural networks.
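To illustrate the general idea of linear over-approximation (not DeepCert's specific bounds), the sketch below computes a linear upper and lower bound for the sigmoid activation over a given input interval, using a chord and a tangent line on a region where sigmoid is concave. The function name linear_bounds and the choice of the midpoint tangent are illustrative assumptions, not the talk's actual algorithm.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def linear_bounds(l, u):
    """Linear bounds of sigmoid on [l, u], assuming 0 <= l < u.

    On [0, +inf) sigmoid is concave, so the chord through the endpoints
    is a sound lower bound and any tangent line is a sound upper bound:
        a_lo * x + b_lo <= sigmoid(x) <= a_up * x + b_up  for x in [l, u].
    """
    # Lower bound: chord (secant) through (l, sigmoid(l)) and (u, sigmoid(u)).
    a_lo = (sigmoid(u) - sigmoid(l)) / (u - l)
    b_lo = sigmoid(l) - a_lo * l
    # Upper bound: tangent line at the midpoint of the interval.
    m = 0.5 * (l + u)
    a_up = sigmoid_deriv(m)
    b_up = sigmoid(m) - a_up * m
    return a_up, b_up, a_lo, b_lo

if __name__ == "__main__":
    l, u = 0.5, 2.0
    a_up, b_up, a_lo, b_lo = linear_bounds(l, u)
    xs = np.linspace(l, u, 200)
    # Soundness check: the bounds enclose sigmoid on the whole interval.
    assert np.all(a_lo * xs + b_lo <= sigmoid(xs) + 1e-9)
    assert np.all(sigmoid(xs) <= a_up * xs + b_up + 1e-9)
    print("upper:", a_up, b_up, "lower:", a_lo, b_lo)
```

Replacing each activation with such bounds turns the layer-by-layer reachability computation into linear constraints; the tighter the bounds, the larger the perturbation distance that can be certified.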
Short bio: Min Zhang is a professor at the School of Software Engineering, East China Normal University, and head of the Department of Software Science and Technology. His main research interests include trustworthy computing and formal methods. His work has been published at flagship international conferences such as AAAI, RTSS, DAC, FSE, DATE, TACAS, and RE, and in leading journals including TCS, SCP, and TCAD. He received the sole Best Paper Award at the 26th IEEE Asia-Pacific Software Engineering Conference (APSEC) and Best Paper nominations at FSE and RE. His research is funded by the National Natural Science Foundation of China and a key R&D program of the Ministry of Science and Technology. He was selected for the China Scholarship Council - Campus France "Cai Yuanpei" exchange program, serves as the Chinese lead of the Sino-French Inria-ECNU joint team on trustworthy IoT, is a senior visiting professor at the University of Nice, France, and is a member of the CCF Technical Committee on Formal Methods and the Technical Committee on System Software.
Tencent Meeting link: https://meeting.tencent.com/dm/Tv8bXyt8urDS