A Convolutional Neural Network (CNN) is a type of artificial neural network designed specifically for tasks in computer vision, image processing, and pattern recognition. CNNs are highly effective at processing and analyzing visual data because they automatically learn hierarchical feature representations that capture spatial structure. They have been instrumental in achieving breakthroughs in computer vision tasks, including image classification, object detection, facial recognition, and medical image analysis. Their architecture is inspired by visual processing in the human brain and is optimized for extracting hierarchical features from visual data.
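To make the idea of learned spatial feature extraction concrete, here is a minimal numpy sketch of a single convolution → ReLU → max-pooling stage, the basic building block stacked to form the hierarchies described above. The edge-detecting kernel is hand-chosen for illustration; in a real CNN such kernels are learned from data.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; truncates edges that don't fit."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    x = x[:h, :w]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# One conv -> ReLU -> pool stage on a toy 8x8 "image":
# a vertical-edge kernel responds where intensity changes left-to-right.
image = np.zeros((8, 8))
image[:, 4:] = 1.0                       # right half bright
edge_kernel = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])    # responds to vertical edges
feature_map = max_pool(relu(conv2d(image, edge_kernel)))
print(feature_map.shape)  # (3, 3)
```

The pooled feature map is bright only in the column band containing the edge, illustrating how each stage summarizes *where* a local pattern occurs while shrinking spatial resolution.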


Beyond Communication: Telecom Fiber Networks for Rain Detection and Classification

We present a field trial of an innovative neural-network- and DAS (distributed acoustic sensing)-based technique that employs a pre-trained CNN fine-tuning strategy for effective rain detection and classification in two practical scenarios.
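The abstract's fine-tuning strategy amounts to reusing a fixed pre-trained feature extractor and training only a new classification head on the target task. The sketch below illustrates that pattern on synthetic data; the random-projection "backbone", the toy rain/no-rain labels, and all dimensions are stand-ins, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen pre-trained backbone: a fixed nonlinear
# projection mapping raw sensor traces to 16-dimensional feature vectors.
W_backbone = rng.normal(size=(64, 16)) / np.sqrt(64)
def backbone(x):
    return np.tanh(x @ W_backbone)   # frozen: never updated below

# Toy two-class data ("rain" vs "no rain"): class 1 has an added offset.
X = rng.normal(size=(200, 64))
y = (rng.random(200) < 0.5).astype(float)
X[y == 1] += 1.0

feats = backbone(X)                  # features from the frozen backbone

# Fine-tuning here = training only a new logistic-regression head.
w, b = np.zeros(16), 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # sigmoid
    grad = p - y                                  # dLoss/dlogit, logistic loss
    w -= lr * feats.T @ grad / len(y)
    b -= lr * grad.mean()

p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
acc = np.mean((p > 0.5) == (y == 1))
print(f"head-only fine-tuning accuracy: {acc:.2f}")
```

Because only the small head is trained, this style of fine-tuning needs far less labeled field data than training a CNN from scratch, which is what makes it attractive for deployment scenarios like the one described.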

Convolutional Transformer based Dual Discriminator Generative Adversarial Networks for Video Anomaly Detection

Detecting abnormal activities in real-world surveillance videos is an important yet challenging task, as prior knowledge about video anomalies is usually limited or unavailable. Although many approaches have been developed to address this problem, few of them can capture normal spatio-temporal patterns effectively and efficiently. Moreover, existing works seldom explicitly consider local consistency at the frame level and global coherence of temporal dynamics in video sequences. To this end, we propose Convolutional Transformer based Dual Discriminator Generative Adversarial Networks (CT-D2GAN) to perform unsupervised video anomaly detection. Specifically, we first present a convolutional transformer to perform future frame prediction. It contains three key components, i.e., a convolutional encoder to capture the spatial information of the input video clips, a temporal self-attention module to encode the temporal dynamics, and a convolutional decoder to integrate spatio-temporal features and predict the future frame. Next, a dual discriminator based adversarial training procedure, which jointly considers an image discriminator that can maintain local consistency at the frame level and a video discriminator that can enforce global coherence of temporal dynamics, is employed to enhance the future frame prediction. Finally, the prediction error is used to identify abnormal video frames. Thorough empirical studies on three public video anomaly detection datasets, i.e., UCSD Ped2, CUHK Avenue, and Shanghai Tech Campus, demonstrate the effectiveness of the proposed adversarial spatio-temporal modeling framework.
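The final step of the pipeline, scoring frames by prediction error, is common to prediction-based anomaly detectors and can be sketched simply. Below, per-frame mean squared error between predicted and actual frames is min-max normalized into an anomaly score; the frame the predictor fails on stands out. The synthetic frames and noise levels are illustrative, not drawn from the paper.

```python
import numpy as np

def anomaly_scores(predicted, actual):
    """Per-frame anomaly score = mean squared prediction error,
    min-max normalized over the sequence."""
    err = np.mean((predicted - actual) ** 2, axis=(1, 2))
    return (err - err.min()) / (err.max() - err.min() + 1e-8)

rng = np.random.default_rng(1)
T, H, W = 10, 16, 16
actual = rng.normal(size=(T, H, W))
predicted = actual + 0.05 * rng.normal(size=(T, H, W))  # good predictions
actual[7] += rng.normal(size=(H, W))   # frame 7 deviates: prediction fails

scores = anomaly_scores(predicted, actual)
print(np.argmax(scores))  # frame with the largest prediction error
```

The intuition matches the abstract: a model trained only on normal clips predicts normal frames well, so anomalous frames surface as large prediction errors.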

Wavelength Modulation Spectroscopy Enhanced by Machine Learning for Early Fire Detection

We proposed and demonstrated a new machine learning algorithm for wavelength modulation spectroscopy to enhance the accuracy of fire detection. The results show an accuracy improvement of more than 8% obtained by analyzing CO/CO2 2f signals.
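The 2f signals mentioned here are the second-harmonic components recovered from a wavelength-modulated absorption measurement, typically with a lock-in at twice the modulation frequency; those 2f traces are what a machine-learning classifier would consume. The sketch below is a generic digital lock-in demodulation on a simulated detector signal, not the paper's algorithm; all signal parameters are invented for illustration.

```python
import numpy as np

# Illustrative 2f extraction: a quadratic (line-center) absorption response
# to sinusoidal wavelength modulation generates a component at 2*f_mod.
fs, f_mod, T = 50_000.0, 1_000.0, 0.1       # sample rate, mod. freq, duration
t = np.arange(0, T, 1 / fs)

x = np.cos(2 * np.pi * f_mod * t)            # wavelength modulation
detector = (1.0 - 0.1 * x**2                 # x^2 -> DC + cos(2*w*t)/2 terms
            + 0.01 * np.random.default_rng(2).normal(size=t.size))

# Digital lock-in at 2f: multiply by quadrature references and average
# (the averaging acts as the low-pass filter).
ref_i = np.cos(2 * np.pi * 2 * f_mod * t)
ref_q = np.sin(2 * np.pi * 2 * f_mod * t)
amp_2f = 2 * np.hypot(np.mean(detector * ref_i), np.mean(detector * ref_q))
print(f"2f amplitude: {amp_2f:.3f}")
```

Since x² = (1 + cos(2ωt))/2, the 0.1 coefficient yields a 2f amplitude of about 0.05, which the lock-in recovers despite the added noise.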

Size and Alignment Independent Classification of the High-order Spatial Modes of a Light Beam Using a Convolutional Neural Network

The higher-order spatial modes of a light beam are receiving significant interest. They can be used to further increase the data rates of high-speed optical communication, and for novel optical sensing modalities. As such, the classification of higher-order spatial modes is ubiquitous. Canonical classification methods typically require the use of unconventional optical devices. However, in addition to prohibitive cost, complexity, and limited efficacy, such methods are dependent on the light beam’s size and alignment. In this work, a novel method to classify higher-order spatial modes is presented, where a convolutional neural network is applied to images of higher-order spatial modes that are taken with a conventional camera. In contrast to previous methods, by training the convolutional neural network with higher-order spatial modes of various alignments and sizes, this method is not dependent on the light beam’s size and alignment. As a proof of principle, images of 4 Hermite-Gaussian modes (HG00, HG01, HG10, and HG11) are numerically calculated via known solutions to the electromagnetic wave equation, and used to synthesize training examples. It is shown that, as compared to training the convolutional neural network with training examples that have the same sizes and alignments, a ~2× increase in accuracy can be achieved.
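The training images described here can be generated from the standard Hermite-Gaussian field expressions, with the beam waist and center offset randomized to provide the size/alignment augmentation. A minimal numpy sketch, with the grid extent, waist range, and offset range chosen arbitrarily for illustration:

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def hg_intensity(m, n, size=64, w=0.3, dx=0.0, dy=0.0):
    """Normalized intensity of Hermite-Gaussian mode HG_mn on a [-1,1]^2 grid,
    with beam waist w and center offset (dx, dy) for augmentation."""
    x = np.linspace(-1, 1, size) - dx
    y = np.linspace(-1, 1, size) - dy
    X, Y = np.meshgrid(x, y)
    cm = np.zeros(m + 1); cm[m] = 1.0       # coefficients selecting H_m
    cn = np.zeros(n + 1); cn[n] = 1.0
    field = (hermval(np.sqrt(2) * X / w, cm)    # H_m along x
             * hermval(np.sqrt(2) * Y / w, cn)  # H_n along y
             * np.exp(-(X**2 + Y**2) / w**2))   # Gaussian envelope
    I = field**2
    return I / I.max()

# One augmented training example with randomized size and alignment:
rng = np.random.default_rng(3)
img = hg_intensity(0, 1, w=rng.uniform(0.2, 0.4),
                   dx=rng.uniform(-0.1, 0.1), dy=rng.uniform(-0.1, 0.1))
print(img.shape)  # (64, 64)
```

HG00 is a single centered lobe, while HG01 has a nodal line through the beam center; randomizing w, dx, and dy over many such images is what lets the trained network ignore size and alignment.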

Learning Context-Sensitive Convolutional Filters for Text Processing

Convolutional neural networks (CNNs) have recently emerged as a popular building block for natural language processing (NLP). Despite their success, most existing CNN models employed in NLP share the same learned (and static) set of filters for all input sentences. In this paper, we propose using a small meta network to learn context-sensitive convolutional filters for text processing. The role of the meta network is to abstract the contextual information of a sentence or document into a set of input-sensitive filters. We further generalize this framework to model sentence pairs, where a bidirectional filter generation mechanism is introduced to encapsulate co-dependent sentence representations. In our benchmarks on four different tasks, including ontology classification, sentiment analysis, answer sentence selection, and paraphrase identification, our proposed model, a modified CNN with context-sensitive filters, consistently outperforms the standard CNN and attention-based CNN baselines. By visualizing the learned context-sensitive filters, we further validate and rationalize the effectiveness of the proposed framework.
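The core idea, filters generated per input rather than shared statically, can be sketched in a few lines. Below, a toy "meta network" (a single linear map, a deliberate simplification of the paper's architecture) turns a sentence's mean embedding into a bank of 1-D convolutional filters, which are then applied back to that same sentence. All dimensions and the context-summary choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
d, k, n_filters, seq_len = 8, 3, 4, 10    # embed dim, filter width, etc.

# Hypothetical meta network: maps a sentence's context vector to a set of
# convolutional filters, so filter weights are input-dependent, not static.
W_meta = rng.normal(size=(d, n_filters * k * d)) * 0.1

def generate_filters(embeddings):
    context = embeddings.mean(axis=0)                   # simple context summary
    return (context @ W_meta).reshape(n_filters, k, d)  # per-sentence filters

def conv1d(embeddings, filters):
    """Valid 1-D convolution of a (seq_len, d) sentence with (n, k, d) filters."""
    out = np.empty((len(embeddings) - k + 1, len(filters)))
    for t in range(out.shape[0]):
        window = embeddings[t:t + k]                    # (k, d) word window
        out[t] = np.tensordot(filters, window, axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)                         # ReLU feature map

sentence = rng.normal(size=(seq_len, d))                # toy word embeddings
feature_map = conv1d(sentence, generate_filters(sentence))
print(feature_map.shape)  # (8, 4)
```

The key property to notice is that two different sentences yield two different filter banks, whereas a standard CNN would convolve both with the same shared filters.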