Learning Efficient Object-Detection Models With Knowledge Distillation
NeurIPS 2017 | Deep object detectors are too slow to process images for real-time applications. Model compression can learn compact models with fewer parameters, but at a significant cost in accuracy. In this work, we propose a new framework that uses knowledge distillation and hint learning to learn compact, fast object detection networks with improved accuracy. Our results show consistent improvements in the accuracy-speed trade-off across PASCAL, KITTI, ILSVRC, and MS-COCO.
Collaborators: Guobin Chen, Wongun Choi, Tony Han, Manmohan Chandraker
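The two ingredients named in the abstract can be illustrated with a minimal sketch. This is a generic formulation, not the paper's exact objective: the temperature `T`, the mixing weight `lam`, and the plain L2 hint term are illustrative assumptions. Knowledge distillation mixes a hard-label loss with a soft-label loss against the teacher's tempered output distribution, while hint learning matches an intermediate student feature to the teacher's.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, hard_label,
                      T=2.0, lam=0.5):
    # Hard term: ordinary cross-entropy on the ground-truth label.
    p = softmax(student_logits)
    hard = -math.log(p[hard_label])
    # Soft term: cross-entropy between the teacher's and student's
    # tempered distributions (T and lam are illustrative choices).
    q_teacher = softmax(teacher_logits, T)
    q_student = softmax(student_logits, T)
    soft = -sum(qt * math.log(qs) for qt, qs in zip(q_teacher, q_student))
    return (1 - lam) * hard + lam * soft

def hint_loss(student_feat, teacher_feat):
    # Hint learning: penalize the L2 distance between an intermediate
    # student feature and the corresponding teacher feature.
    return sum((s - t) ** 2
               for s, t in zip(student_feat, teacher_feat)) / len(student_feat)
```

In practice the total training objective would sum the detection losses with these distillation and hint terms; the exact weighting used in the paper is not reproduced here.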