Recognition is the process of identifying or acknowledging something or someone from characteristic features or patterns. Depending on context, it can refer to the ability to perceive, remember, or categorize objects, individuals, or concepts.


Deep Supervision with Intermediate Concepts (IEEE)

Recent data-driven approaches to scene interpretation predominantly pose inference as an end-to-end black-box mapping, commonly performed by a Convolutional Neural Network (CNN). However, decades of work on perceptual organization in both human and machine vision suggest that there are often intermediate representations intrinsic to an inference task, which provide essential structure that improves generalization. In this work, we explore an approach for injecting prior domain structure into neural network training by supervising hidden layers of a CNN with intermediate concepts that normally are not observed in practice. We formulate a probabilistic framework that formalizes these notions and predicts improved generalization via this deep supervision method. One advantage of this approach is that we can train only on synthetic CAD renderings of cluttered scenes, where concept values can be extracted, yet apply the results to real images. Our implementation achieves state-of-the-art performance on 2D/3D keypoint localization and image classification on real-image benchmarks including KITTI, PASCAL VOC, PASCAL3D+, IKEA, and CIFAR-100. We provide additional evidence that our approach outperforms alternative forms of supervision, such as multi-task networks.
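The training objective described above can be sketched as the main-task loss plus weighted auxiliary losses attached to hidden layers, each supervised by an intermediate concept extracted from synthetic renderings. This is a minimal illustrative sketch, not the paper's implementation; the loss values, concept names, and weights below are hypothetical placeholders.

```python
def deep_supervision_loss(main_loss, concept_losses, weights):
    """Total training loss: main-task term plus weighted intermediate-concept terms.

    main_loss      -- scalar loss at the final (main-task) layer
    concept_losses -- scalar losses at hidden layers supervised by concepts
    weights        -- one weighting hyperparameter per intermediate concept
    """
    if len(concept_losses) != len(weights):
        raise ValueError("expected one weight per intermediate concept")
    return main_loss + sum(w * l for w, l in zip(weights, concept_losses))

# Hypothetical example: 3D keypoints as the main task, with 2D-keypoint
# and visibility losses on earlier layers as intermediate concepts.
total = deep_supervision_loss(
    main_loss=1.0,               # final-layer 3D keypoint loss
    concept_losses=[0.5, 0.25],  # 2D keypoint loss, visibility loss
    weights=[0.4, 0.2],          # per-concept weights (tuned hyperparameters)
)
# total == 1.0 + 0.4 * 0.5 + 0.2 * 0.25 == 1.25
```

In a real network, each auxiliary loss would be computed from a small prediction head attached to a hidden layer; gradients from all terms then train the shared trunk jointly.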

Deep Supervision with Intermediate Concepts (BayLearn)

We introduce a novel technique for training convolutional neural networks (CNNs), namely deep supervision with intermediate concepts, which leads to improved generalization. Our approach draws inspiration from Deeply Supervised Nets (DSN) [5], which supervises each layer with the main task to accelerate training convergence. Our method differs from DSN in that we apply deep supervision with intermediate concepts, intrinsic to the ultimate task, to regularize the network for better generalization. We exploit this improved generalization to transfer knowledge from synthetic to real images.
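The contrast with DSN drawn above comes down to what target each supervised layer receives: DSN repeats the main task at every layer, whereas intermediate-concept supervision assigns each hidden layer its own intrinsic concept, ordered toward the final task. A small sketch under that reading, with illustrative (not paper-specified) target names:

```python
def layer_targets(main_task, concepts, scheme):
    """Per-layer supervision targets for a stack of supervised layers.

    main_task -- label of the ultimate task (supervises the final layer)
    concepts  -- labels of intermediate concepts, one per hidden layer
    scheme    -- "dsn" or "intermediate"
    """
    if scheme == "dsn":
        # Deeply Supervised Nets: the same main task supervises every layer.
        return [main_task] * (len(concepts) + 1)
    if scheme == "intermediate":
        # This work: each hidden layer gets an intrinsic concept; main task last.
        return list(concepts) + [main_task]
    raise ValueError(f"unknown scheme: {scheme}")

# Hypothetical target names for a pose-estimation pipeline:
dsn = layer_targets("3d_pose", ["2d_keypoints", "visibility"], "dsn")
ours = layer_targets("3d_pose", ["2d_keypoints", "visibility"], "intermediate")
# dsn  == ["3d_pose", "3d_pose", "3d_pose"]
# ours == ["2d_keypoints", "visibility", "3d_pose"]
```

The second scheme is what regularizes the network toward representations that transfer from synthetic to real images, since the concept values can be extracted from CAD renderings.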