Abstract
Platform migration and customization have become an indispensable part of the deep neural network (DNN) development lifecycle. A high-precision but complex DNN, trained in the cloud on massive data and powerful GPUs, often goes through an optimization phase (e.g., quantization, compression) before deployment to a target device (e.g., a mobile device). A test set that effectively uncovers disagreements between a DNN and its optimized variant provides valuable feedback for debugging and further enhancing the optimization procedure.
However, the minor inconsistencies between a DNN and its optimized version are often hard to detect and easily bypass the original test set. This paper proposes DiffChaser, an automated black-box testing framework that detects untargeted and targeted disagreements between version variants of a DNN. We demonstrate 1) its effectiveness by comparison with state-of-the-art techniques, and 2) its usefulness in real-world DNN product deployment involving quantization and optimization.
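
To make the notion of a disagreement concrete, the following minimal sketch (our illustration, not the DiffChaser implementation) compares a trained tf.keras classifier against its post-training-quantized TensorFlow Lite variant and collects inputs on which their top-1 predictions diverge; the model, input shapes, and the find_disagreements helper are assumptions for illustration only.

    # A minimal sketch, assuming TensorFlow is installed and `model` is a
    # trained tf.keras classifier; `find_disagreements` is a hypothetical
    # helper, not part of DiffChaser.
    import numpy as np
    import tensorflow as tf

    def find_disagreements(model, inputs):
        # Post-training dynamic-range quantization via the TFLite converter.
        converter = tf.lite.TFLiteConverter.from_keras_model(model)
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        interpreter = tf.lite.Interpreter(model_content=converter.convert())
        interpreter.allocate_tensors()
        inp = interpreter.get_input_details()[0]
        out = interpreter.get_output_details()[0]

        disagreements = []
        for x in inputs:
            x = x[np.newaxis].astype(np.float32)
            # Top-1 label of the original float model.
            y_float = int(np.argmax(model.predict(x, verbose=0)))
            # Top-1 label of the quantized variant on the same input.
            interpreter.set_tensor(inp["index"], x)
            interpreter.invoke()
            y_quant = int(np.argmax(interpreter.get_tensor(out["index"])))
            if y_float != y_quant:  # the two versions disagree on x
                disagreements.append(x[0])
        return disagreements

Automatically generating such disagreement-triggering inputs, rather than hoping the original test set happens to contain them, is precisely the gap DiffChaser targets.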