
DiffChaser: Detecting Disagreements for Deep Neural Networks

2019-10-10
Abstract: Platform migration and customization have become an indispensable part of the deep neural network (DNN) development lifecycle. A high-precision but complex DNN, trained in the cloud on massive data and powerful GPUs, often goes through an optimization phase (e.g., quantization, compression) before deployment to a target device (e.g., a mobile device). A test set that effectively uncovers the disagreements between a DNN and its optimized variant provides valuable feedback for debugging and further improving the optimization procedure. However, the minor inconsistencies between a DNN and its optimized version are often hard to detect and easily bypass the original test set. This paper proposes DiffChaser, an automated black-box testing framework that detects untargeted/targeted disagreements between version variants of a DNN. We demonstrate 1) its effectiveness by comparing it with state-of-the-art techniques, and 2) its usefulness in real-world DNN product deployment involving quantization and optimization.
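
The abstract describes the core idea at a high level: search, in a black-box fashion (using only model outputs, no gradients or internals), for inputs on which the original DNN and its optimized variant predict different labels. The sketch below is a deliberately simplified illustration of that idea, not the paper's actual genetic-algorithm-based method: the toy linear "models", the quantize_weights helper, and the margin-guided hill-climbing search are all hypothetical stand-ins chosen so the example runs with NumPy alone.

    import numpy as np

    def quantize_weights(w, num_bits=4):
        # Uniform quantization of a weight matrix (illustrative stand-in
        # for the optimization step that produces the deployed variant).
        scale = (w.max() - w.min()) / (2 ** num_bits - 1)
        return np.round((w - w.min()) / scale) * scale + w.min()

    # Toy stand-ins for the original model and its quantized variant.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(784, 10))
    W_q = quantize_weights(W)

    def predict_original(x):
        return int(np.argmax(x @ W))

    def predict_quantized(x):
        return int(np.argmax(x @ W_q))

    def boundary_margin(x):
        # Gap between the top-2 output scores of the quantized variant.
        # A smaller gap means the input sits closer to a decision
        # boundary, where the two versions are more likely to disagree.
        top2 = np.sort(x @ W_q)[-2:]
        return top2[1] - top2[0]

    def find_disagreement(seed, max_iters=5000, step=0.05):
        # Mutation-based hill climb: keep perturbations that move the
        # input toward a decision boundary, stop once the two model
        # versions assign different top-1 labels. Only model outputs
        # are queried (black-box setting).
        x = seed.copy()
        best = boundary_margin(x)
        for _ in range(max_iters):
            cand = np.clip(x + rng.normal(scale=step, size=x.shape), 0.0, 1.0)
            if predict_original(cand) != predict_quantized(cand):
                return cand  # disagreement-triggering input found
            m = boundary_margin(cand)
            if m < best:
                x, best = cand, m
        return None

    seed = rng.random(784)
    result = find_disagreement(seed)
    print("disagreement found" if result is not None else "no disagreement found")

DiffChaser itself drives a genetic algorithm (crossover plus mutation over a population of seeds) toward low-confidence regions; the single-seed hill climb above only mirrors the underlying intuition that inputs near a decision boundary are the most likely to expose disagreements between a model and its optimized version.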

