Transfer Learning is a machine learning technique where a model trained on one task is adapted to perform a second related task. It leverages knowledge gained from the source task to improve learning on the target task, especially when labeled data for the target task is limited.
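As a minimal, hypothetical sketch of this idea (toy data, with a random projection standing in for a pretrained network body), one can freeze the source-task feature extractor and train only a small new head on the limited labeled target data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "source" body: a random projection standing in for weights
# learned on the source task; it is never updated during transfer.
W_src = rng.standard_normal((8, 16)) * 0.1
features = lambda x: np.tanh(x @ W_src)

# Small labeled target set (toy data standing in for the target task).
X = rng.standard_normal((40, 8))
y = (X[:, 0] > 0).astype(float)

# Train only a new linear head (logistic regression, gradient descent).
w, b = np.zeros(16), 0.0
H = features(X)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(H @ w + b)))
    w -= 0.5 * H.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((H @ w + b) > 0) == (y == 1))
```

Because only the 17 head parameters are learned, a few dozen target samples suffice, which is precisely the regime where transfer learning pays off.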


Why is the video analytics accuracy fluctuating, and what can we do about it?

It is common practice to treat a video as a sequence of images (frames) and to re-use deep neural network models trained only on images for similar analytics tasks on videos. In this paper, we show that this “leap of faith”, that deep learning models which work well on images will also work well on videos, is actually flawed. We show that even when a video camera is viewing a scene that is not changing in any human-perceptible way, and we control for external factors such as video compression and environment (lighting), the accuracy of video analytics applications fluctuates noticeably. These fluctuations occur because successive frames produced by the video camera may look similar visually but are perceived quite differently by the video analytics applications. We find that the root cause of these fluctuations is the dynamic camera parameter changes that a video camera automatically makes in order to capture and produce a visually pleasing video. The camera inadvertently acts as an “unintentional adversary” because these slight changes in pixel values across consecutive frames, as we show, have a noticeably adverse impact on the accuracy of insights from video analytics tasks that re-use image-trained deep learning models. To address this inadvertent adversarial effect from the camera, we explore transfer learning techniques that improve learning in video analytics tasks by transferring knowledge from learning on image analytics tasks. Our experiments with a number of different cameras and a variety of video analytics tasks show that the camera's inadvertent adversarial effect can be noticeably offset by quickly re-training the deep learning models using transfer learning. In particular, we show that our newly trained YOLOv5 model reduces fluctuation in object detection across frames, which leads to better tracking of objects (∼40% fewer tracking mistakes).
Our paper also provides new directions and techniques to mitigate the camera’s adversarial effect on deep learning models used for video analytics applications.
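The frame-to-frame detection fluctuation described above can be quantified with a simple IoU-based check. The sketch below uses hypothetical bounding boxes in [x1, y1, x2, y2] form, not the paper's actual evaluation code:

```python
def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def dropped(prev, curr, thr=0.5):
    # Objects detected in the previous frame that have no IoU >= thr
    # match in the current frame: one unit of detection fluctuation.
    return [p for p in prev if all(iou(p, c) < thr for c in curr)]

frame1 = [[10, 10, 50, 50], [60, 60, 90, 90]]   # hypothetical detections
frame2 = [[12, 11, 52, 51]]                     # second object vanished
missing = dropped(frame1, frame2)               # → [[60, 60, 90, 90]]
```

An object dropped between visually identical frames is exactly the kind of mistake that propagates into the tracking errors the paper measures.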

Multi-source Inductive Knowledge Graph Transfer

Large-scale information systems, such as knowledge graphs (KGs) and enterprise system networks, often exhibit dynamic and complex activities. Recent research has shown that formalizing these information systems as graphs can effectively characterize the entities (nodes) and their relationships (edges). Transferring knowledge from existing, well-curated source graphs can help construct the target graph of a newly deployed system faster and better, which in turn benefits downstream tasks such as link prediction and anomaly detection for the new system. However, current graph transfer methods are either based on a single source, which does not sufficiently exploit the multiple available sources, or do not learn selectively from these sources. In this paper, we propose MSGT-GNN, a graph knowledge transfer model for efficient graph link prediction from multiple source graphs. MSGT-GNN consists of two components: the Intra-Graph Encoder, which embeds latent graph features of system entities into vectors, and the graph transferor, which utilizes a graph attention mechanism to learn and optimize the embeddings of corresponding entities from multiple source graphs, at both the node level and the graph level. Experimental results on multiple real-world datasets from various domains show that MSGT-GNN outperforms baseline approaches in link prediction, demonstrating the merit of attentive graph knowledge transfer and the effectiveness of MSGT-GNN.
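The attentive combination at the heart of such multi-source transfer can be sketched in simplified form. This is an illustrative softmax-attention toy, not the actual MSGT-GNN architecture: embeddings of the same entity from several source graphs are scored against the target entity's embedding and combined by their attention weights, so similar sources contribute more:

```python
import numpy as np

def attend(target, sources):
    # Score each source-graph embedding of the entity against the
    # target-graph embedding, then combine with softmax weights.
    scores = np.array([target @ s for s in sources])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    combined = sum(wi * s for wi, s in zip(w, sources))
    return w, combined

t = np.array([1.0, 0.0])              # target entity embedding
srcs = [np.array([0.9, 0.1]),         # similar source: higher weight
        np.array([-0.5, 1.0]),        # dissimilar source: lower weight
        np.array([0.8, 0.2])]
weights, combined = attend(t, srcs)
```

This selectivity is the point of the paper's critique: a single-source or uniformly averaged transfer cannot down-weight a misleading source graph.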

Model transfer of QoT prediction in optical networks based on artificial neural networks

An artificial neural network (ANN) based transfer learning model is built for quality of transmission (QoT) prediction in optical systems operating with different modulation formats. Knowledge learned from one optical system can be transferred to a similar optical system by adjusting weights in the ANN hidden layers with a few additional training samples, so that highly related information from both systems is integrated and redundant information is discarded. Homogeneous and heterogeneous ANN structures are implemented to achieve accurate Q-factor-based QoT prediction with low root-mean-square error. The transfer learning accuracy under different modulation formats, transmission distances, and fiber types is evaluated. Using transfer learning, the number of retraining samples is reduced from 1000 to as low as 20, and the training time is reduced by up to a factor of four.
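The weight-reuse idea can be illustrated with a toy sketch (random data standing in for Q-factor samples; not the paper's model). This simplified variant carries the source system's hidden-layer weights over unchanged and refits only the output layer on 20 target samples, whereas the paper instead fine-tunes the hidden-layer weights themselves:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden-layer weights carried over from the "source" system's ANN
# (random values standing in for actually trained weights).
W_hidden = rng.standard_normal((4, 8)) * 0.5
hidden = lambda x: np.tanh(x @ W_hidden)

# Only 20 labeled samples from the "target" system.
X = rng.standard_normal((20, 4))
y = X @ np.array([0.3, -0.2, 0.1, 0.4])   # toy stand-in for Q-factors

# Refit just the output layer by least squares on the 20 samples.
H = hidden(X)
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
rmse = np.sqrt(np.mean((H @ w_out - y) ** 2))
```

With the hidden representation reused, 20 samples are enough to determine the remaining output weights, mirroring the 1000-to-20 reduction the abstract reports.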

ANN-Based Transfer Learning for QoT Prediction in Real-Time Mixed Line-Rate Systems

Quality of transmission prediction for real-time mixed line-rate systems is realized using artificial neural network based transfer learning with SDN orchestration. A prediction accuracy of 0.42 dB is achieved while reducing the number of training samples from 1000 to 20.