Rain Intensity Detection and Classification with Pre-existing Telecom Fiber Cables

For the first time, we demonstrate detection and classification of rain intensity using Distributed Acoustic Sensing (DAS). An artificial neural network applied to rain-intensity classification achieves a precision of over 96%.
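
As a rough, assumption-laden illustration of this kind of pipeline (not the paper's actual model), the sketch below extracts band-averaged power-spectral-density features from DAS records and trains a small neural-network classifier; the sampling rate, frequency bands, network size, and the toy data are all placeholders.

```python
# Hedged sketch: band-averaged PSD features per DAS record, fed to a
# small neural network. Sampling rate, bands, sizes, and data are
# placeholder assumptions, not the paper's configuration.
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

def psd_features(records, fs=1000.0, bands=((1, 20), (20, 100), (100, 400))):
    """records: (n_records, n_time) strain-rate traces from a DAS channel.
    Returns log band-averaged PSD features, one row per record."""
    feats = []
    for x in records:
        f, pxx = welch(x, fs=fs, nperseg=256)
        feats.append([pxx[(f >= lo) & (f < hi)].mean() for lo, hi in bands])
    return np.log10(np.asarray(feats) + 1e-12)  # compress dynamic range

rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 2048))          # toy stand-in for DAS records
y = rng.integers(0, 4, size=200)              # rain class: 0=none .. 3=heavy
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(psd_features(X_raw), y)               # precision evaluation omitted
```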

Evolution of Fiber Infrastructure – From Data Transmission to Network Sensing

We review multiple use cases over deployed networks, including co-existing sensing and data transmission, cable-cut prevention, and perimeter intrusion detection, showing that telecom infrastructure can serve as a sensing backbone rather than being limited to data transmission alone.

Field Tests of Impulsive Acoustic Event Detection, Localization, and Classification Over Telecom Fiber Networks

We report distributed-fiber-optic-sensing results on impulsive acoustic event localization and classification over telecom networks. A deep-learning-based model was trained to classify starter-gun and fireworks signatures, reaching accuracy above 99% with a fiber-based signal enhancer and above 97% with aerial coils.
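
For intuition on the localization half of such a system, here is a hedged sketch (not the authors' pipeline): per-channel onset times are picked from the signal envelope, and a simple propagation model t(z) = t0 + sqrt((z - z0)^2 + d^2)/c is fit to recover the source position along the cable; the onset detector, the acoustic speed c, and the model itself are assumptions.

```python
# Hedged sketch: onset picking plus a least-squares fit of a simple
# propagation model. The detector, speed of sound, and geometry are
# assumptions for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def onset_times(das, fs, thresh=5.0):
    """das: (n_channels, n_time). First sample where the envelope exceeds
    `thresh` times the per-channel noise floor, converted to seconds."""
    env = np.abs(das)
    noise = np.median(env, axis=1, keepdims=True) + 1e-12
    return np.argmax(env > thresh * noise, axis=1) / fs

def fit_source(z, t, c=340.0):
    """z: (n_channels,) positions along the cable; t: onset times.
    Fits t = t0 + sqrt((z - z0)^2 + d^2) / c for a source at (z0, d)."""
    model = lambda z, z0, d, t0: t0 + np.sqrt((z - z0) ** 2 + d ** 2) / c
    (z0, d, t0), _ = curve_fit(model, z, t, p0=[np.mean(z), 10.0, np.min(t)])
    return z0, d  # position along the cable and lateral offset
```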

Simultaneous Fiber Sensing and Communications

We review recent advances aimed at increasing the reach of distributed fiber-optic sensing with simultaneous data transmission, covering two methods: measurement of the accumulated phase on telecom signals, and chirp-pulsed DAS with inline amplification and frequency diversity.
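
To give a feel for why frequency diversity helps, the toy example below (our illustration, not either paper's method) combines phase estimates from several probe frequencies by amplitude-weighted phasor averaging, so channels driven into a fading null contribute little; the fading model and noise levels are made up.

```python
# Toy model: several probe frequencies see independent fading amplitudes;
# summing the complex phasors weights each channel by its amplitude, so
# faded (noisy) channels barely contribute to the recovered phase.
import numpy as np

rng = np.random.default_rng(1)
n_freq, true_phase = 8, 0.3                       # radians of signal phase
amp = rng.rayleigh(scale=1.0, size=n_freq)        # per-frequency fading
noise = rng.normal(scale=0.2 / np.maximum(amp, 1e-3))  # weak channel -> noisy
phasors = amp * np.exp(1j * (true_phase + noise))

single = np.angle(phasors[0])          # one frequency: fading-limited
combined = np.angle(phasors.sum())     # amplitude-weighted combination
print(f"single-frequency error:   {abs(single - true_phase):.3f} rad")
print(f"diversity-combined error: {abs(combined - true_phase):.3f} rad")
```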

Template Matching Method with Distributed Acoustic Sensing Data and Simulation Data

We propose a new method to detect acoustic signals by matching distributed acoustic sensing data against simulated data. The simulation of the dynamic strain on an optical fiber properly incorporates the fiber layout and the gauge length. Applying the proposed method to acoustic-source localization, we demonstrate that it localizes the source accurately even for layouts that include straight fiber at the sensing points combined with large gauge-length settings.
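
A minimal sketch of the template-matching idea, under our own simplifying assumptions (a straight fiber, a toy wavelet, and a gauge-length effect approximated by averaging the response at the two ends of each gauge section): simulate a template for each candidate source position and pick the candidate with the highest normalized cross-correlation against the measurement.

```python
# Hedged sketch: gauge-length-aware simulated templates, matched to the
# measurement by normalized cross-correlation. Wave model, geometry,
# and the gauge-length approximation are illustrative assumptions.
import numpy as np

def simulated_template(src, sensors, t, gauge=10.0, c=1500.0):
    """src: (position along fiber, lateral offset). The gauge-length
    effect is approximated by averaging the response at the two ends
    of each gauge section on a straight fiber."""
    rows = []
    for z in sensors:
        resp = np.zeros_like(t)
        for zend in (z - gauge / 2, z + gauge / 2):
            r = np.hypot(src[0] - zend, src[1])
            resp += np.sinc(20 * (t - r / c)) / max(r, 1.0)  # toy wavelet
        rows.append(resp / 2)
    return np.asarray(rows)

def localize(measured, candidates, sensors, t):
    def ncc(a, b):  # normalized cross-correlation of two arrays
        a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    scores = [ncc(measured, simulated_template(s, sensors, t)) for s in candidates]
    return candidates[int(np.argmax(scores))]
```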

Mosaic: Leveraging Diverse Reflector Geometries for Omnidirectional Around-Corner Automotive Radar

A large number of traffic collisions occur as a result of obstructed sight lines, such that even an advanced driver assistance system would be unable to prevent the crash. Recent work has proposed the use of around-the-corner radar systems to detect vehicles, pedestrians, and other road users in these occluded regions. Through comprehensive measurement, we show that these existing techniques cannot sense occluded moving objects in many important real-world scenarios. To solve this problem of limited coverage, we leverage multiple, curved reflectors to provide comprehensive coverage over the most important locations near an intersection. In scenarios where curved reflectors are insufficient, we evaluate the relative benefits of using additional flat planar surfaces. Using these techniques, we more than double the probability of detecting a vehicle near the intersection in three real urban locations and enable non-line-of-sight (NLoS) radar sensing using an entirely new class of reflectors.
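
For intuition (an illustrative geometry toy, not the paper's model): specular reflection follows r = d - 2(d·n)n, and on a curved reflector the surface normal n rotates along the arc, so a single curved surface redirects energy over a wide angular span; the arc extent and ray directions below are arbitrary.

```python
# Geometry toy: specular reflection r = d - 2(d.n)n. Along a circular
# arc the outward normal rotates, so reflected rays fan out over a wide
# angle -- the intuition for curved reflectors giving broad coverage.
import numpy as np

def reflect(d, n):
    n = n / np.linalg.norm(n)
    return d - 2 * np.dot(d, n) * n

incoming = np.array([0.0, -1.0])       # radar ray traveling in -y
arc = np.linspace(-0.4, 0.4, 9)        # surface angles along a circular arc
normals = np.stack([np.sin(arc), np.cos(arc)], axis=1)  # rotating normals
angles = [np.degrees(np.arctan2(*reflect(incoming, n)[::-1])) for n in normals]
print(f"reflected-ray spread: {max(angles) - min(angles):.1f} degrees")
# A flat reflector (fixed normal) would return a single reflected angle.
```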

StyleT2I: Towards Compositional and High-Fidelity Text-to-Image Synthesis

Although progress has been made for text-to-image synthesis, previous methods fall short of generalizing to unseen or underrepresented attribute compositions in the input text. Lacking compositionality could have severe implications for robustness and fairness, e.g., inability to synthesize the face images of underrepresented demographic groups. In this paper, we introduce a new framework, StyleT2I, to improve the compositionality of text-to-image synthesis. Specifically, we propose a CLIP-guided Contrastive Loss to better distinguish different compositions among different sentences. To further improve the compositionality, we design a novel Semantic Matching Loss and a Spatial Constraint to identify attributes' latent directions for intended spatial region manipulations, leading to better disentangled latent representations of attributes. Based on the identified latent directions of attributes, we propose Compositional Attribute Adjustment to adjust the latent code, resulting in better compositionality of image synthesis. In addition, we leverage the l2-norm regularization of identified latent directions (norm penalty) to strike a balance between image-text alignment and image fidelity. In the experiments, we devise a new dataset split and an evaluation metric to evaluate the compositionality of text-to-image synthesis models. The results show that StyleT2I outperforms previous approaches in terms of the consistency between the input text and synthesized images and achieves higher fidelity.
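
As a hedged sketch of the latent-direction idea (shapes, step sizes, and direction vectors are placeholders, not StyleT2I's actual components): shift a latent code along normalized per-attribute directions, with an l2-norm penalty on the shift to trade alignment against fidelity.

```python
# Hedged sketch: adjust a latent code along normalized attribute
# directions; an l2-norm penalty on the shift keeps edits small to
# preserve fidelity. Directions and step sizes are placeholders.
import torch

def adjust_latent(z, directions, alphas, lam=0.1):
    """z: (d,) latent code; directions: (k, d), one identified direction
    per attribute in the text; alphas: (k,) signed step sizes."""
    dirs = directions / directions.norm(dim=1, keepdim=True)
    shift = (alphas.unsqueeze(1) * dirs).sum(dim=0)
    penalty = lam * shift.norm() ** 2        # the norm penalty
    return z + shift, penalty

z = torch.randn(512)                          # stand-in latent code
directions = torch.randn(3, 512)              # e.g. "blond", "smiling", ...
z_adj, pen = adjust_latent(z, directions, torch.tensor([1.0, 0.5, -0.8]))
```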

Chimera: Context-Aware Splittable Deep Multitasking Models for Edge Intelligence

Design of multitasking deep learning models has mostly focused on improving the accuracy of the constituent tasks, but the challenge of efficiently deploying such models in a device-edge collaborative setup (common in 5G deployments) has not been investigated. Towards this end, in this paper, we propose an approach called Chimera for training (done offline) and deployment (done online) of multitasking deep learning models that are splittable across the device and edge. In the offline phase, we train our multitasking setup such that features from a pre-trained model for one of the tasks (called the Primary task) are extracted and task-specific sub-models are trained to generate the other (Secondary) tasks' outputs through a knowledge-distillation-like training strategy, mimicking the outputs of pre-trained models for those tasks. The task-specific sub-models are designed to be significantly more lightweight than the original pre-trained models for the Secondary tasks. Once the sub-models are trained, during deployment, for a given deployment context characterized by its configurations, we search for the optimal (in terms of both model performance and cost) deployment strategy for the generated multitasking model by finding one or more suitable layers at which to split the model, so that inference workloads are distributed between the device and the edge server and inference is done collaboratively. Extensive experiments on benchmark computer vision tasks demonstrate that Chimera generates splittable multitasking models that are at least ~3x more parameter-efficient than existing such models, and end-to-end device-edge collaborative inference becomes ~1.35x faster with our context-aware splitting decisions.
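
To illustrate the flavor of the online split-point search (with entirely made-up per-layer numbers, not Chimera's actual profiler or cost model): for each candidate layer boundary, sum device-side compute, activation-transfer time under the current bandwidth, and edge-side compute, then pick the cheapest split.

```python
# Toy split-point search with made-up per-layer profiles for one
# deployment context; a real system would profile these online.
device_ms = [4.0, 6.0, 9.0, 12.0, 15.0]   # device latency up to each boundary
edge_ms = [5.0, 3.5, 2.5, 1.5, 0.5]       # edge latency after each boundary
act_kb = [300, 150, 80, 40, 10]           # activation size at each boundary
bandwidth_kb_s = 2000                     # assumed uplink, kilobytes/second

def best_split(device_ms, edge_ms, act_kb, bw):
    costs = [d + 1000.0 * a / bw + e      # transfer time in milliseconds
             for d, e, a in zip(device_ms, edge_ms, act_kb)]
    i = min(range(len(costs)), key=costs.__getitem__)
    return i, costs[i]

split, total = best_split(device_ms, edge_ms, act_kb, bandwidth_kb_s)
print(f"split after layer {split}: ~{total:.1f} ms end-to-end")
```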

Controllable Dynamic Multi-Task Architectures

Multi-task learning commonly encounters competition for resources among tasks, specifically when model capacity is limited. This challenge motivates models that allow control over the relative importance of tasks and total compute cost at inference time. In this work, we propose such a controllable multi-task network that dynamically adjusts its architecture and weights to match the desired task preference as well as the resource constraints. In contrast to existing dynamic multi-task approaches that adjust only the weights within a fixed architecture, our approach affords the flexibility to dynamically control the total computational cost and better match the user-preferred task importance. We propose a disentangled training of two hypernetworks, exploiting task affinity and a novel branching regularized loss, to take input preferences and accordingly predict tree-structured models with adapted weights. Experiments on three multi-task benchmarks, namely PASCAL-Context, NYU-v2, and CIFAR-100, show the efficacy of our approach. The project page is available at https://www.nec-labs.com/-mas/DYMU.
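
A minimal hypernetwork toy, under our own assumptions (sizes, inputs, and the single generated head are placeholders; the paper predicts tree-structured models, which this sketch does not attempt): map a task-preference vector and a compute budget to the weights of a small task head, so inference-time preferences reshape the model without retraining.

```python
# Toy hypernetwork: preferences + budget in, task-head weights out.
# All sizes are placeholder assumptions.
import torch
import torch.nn as nn

class HyperHead(nn.Module):
    def __init__(self, n_tasks=3, feat=64, out=10):
        super().__init__()
        self.feat, self.out = feat, out
        self.hyper = nn.Sequential(          # input: preferences + budget
            nn.Linear(n_tasks + 1, 128), nn.ReLU(),
            nn.Linear(128, feat * out + out))

    def forward(self, x, pref, budget):
        theta = self.hyper(torch.cat([pref, budget.view(1)]))
        W = theta[: self.feat * self.out].view(self.out, self.feat)
        b = theta[self.feat * self.out:]
        return x @ W.t() + b                 # head weights set by preference

head = HyperHead()
feats = torch.randn(8, 64)                   # shared-backbone features
logits = head(feats, pref=torch.tensor([0.6, 0.3, 0.1]),
              budget=torch.tensor(0.5))
```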

Learning to Learn across Diverse Data Biases in Deep Face Recognition

Convolutional Neural Networks have achieved remarkable success in face recognition, in part due to the abundant availability of data. However, the data used for training CNNs is often imbalanced. Prior works largely focus on the long-tailed nature of face datasets in data volume per identity, or on a single bias variation. In this paper, we show that many bias variations, such as ethnicity, head pose, occlusion, and blur, can jointly affect accuracy significantly. We propose a sample-level weighting approach termed Multi-variation Cosine Margin (MvCoM) to simultaneously consider the multiple variation factors; it orthogonally enhances face recognition losses to incorporate the importance of training samples. Further, we leverage a learning-to-learn approach, guided by a held-out meta-learning set, and use additive modeling to predict the MvCoM. Extensive experiments on challenging face recognition benchmarks demonstrate the advantages of our method in jointly handling imbalances due to multiple variations.
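
Our reading of the core loss, sketched with assumed shapes (the margin values, scale, and data below are placeholders): subtract a sample-specific margin from the target-class cosine before the usual scaled softmax, where each margin would come from that sample's variation factors.

```python
# Sketch with assumed shapes: a per-sample margin m_i is subtracted from
# the target-class cosine before the scaled softmax; in the paper's
# setting m_i would be predicted from the sample's variation factors.
import torch
import torch.nn.functional as F

def per_sample_cos_margin_loss(cos_logits, labels, margins, scale=64.0):
    """cos_logits: (B, C) cosine similarities; margins: (B,), e.g. the
    output of a meta-learned predictor of sample difficulty."""
    adjusted = cos_logits.clone()
    idx = torch.arange(labels.size(0))
    adjusted[idx, labels] = cos_logits[idx, labels] - margins
    return F.cross_entropy(scale * adjusted, labels)

cos = F.normalize(torch.randn(4, 100), dim=1)   # stand-in cosine logits
loss = per_sample_cos_margin_loss(cos, torch.tensor([3, 7, 1, 42]),
                                  margins=torch.tensor([0.2, 0.35, 0.1, 0.5]))
```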