Nonlinear Impairment Compensation using Neural Networks

Neural networks are attractive for nonlinear impairment compensation in communication systems. In this paper, we present several approaches to reducing the computational complexity of neural-network-based compensation algorithms.

Static Weight Detection and Localization on Aerial Fiber Cables using Distributed Acoustic Sensing

We demonstrate, for the first time to our knowledge, the detection and localization of a static weight on an aerial cable using frequency-domain decomposition analysis of ambient vibrations detected by a φ-DAS system.
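The paper's full pipeline is not spelled out in the abstract, but the core of frequency-domain decomposition (FDD) can be sketched briefly: estimate the cross-spectral density matrix of multi-channel vibration data at each frequency bin, then track its largest singular value, whose peaks mark the structure's modal frequencies (a static weight shifts a cable's modal frequencies). Below is a minimal sketch on synthetic data; all signal parameters (the 12 Hz mode, channel gains, noise level) are made up for illustration and are not from the paper.

```python
import numpy as np

def fdd_first_singular_values(data, fs, nperseg=256):
    """Frequency-domain decomposition: average the per-bin cross-spectral
    density matrix over Hann-windowed segments, then return the largest
    singular value per frequency bin. Peaks indicate modal frequencies."""
    n_ch, n = data.shape
    n_seg = n // nperseg
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    window = np.hanning(nperseg)
    G = np.zeros((freqs.size, n_ch, n_ch), dtype=complex)
    for s in range(n_seg):
        seg = data[:, s * nperseg:(s + 1) * nperseg] * window
        X = np.fft.rfft(seg, axis=1)               # (n_ch, n_freq)
        G += np.einsum('ik,jk->kij', X, X.conj())  # outer product per bin
    G /= n_seg
    s1 = np.linalg.svd(G, compute_uv=False)[:, 0]  # largest singular value
    return freqs, s1

# Toy demo (hypothetical numbers): a 12 Hz cable mode seen on two channels.
fs = 256
t = np.arange(8 * fs) / fs
rng = np.random.default_rng(0)
mode = np.sin(2 * np.pi * 12.0 * t)
data = np.vstack([mode + 0.1 * rng.standard_normal(t.size),
                  0.8 * mode + 0.1 * rng.standard_normal(t.size)])
freqs, s1 = fdd_first_singular_values(data, fs)
peak_freq = freqs[np.argmax(s1)]  # near 12 Hz
```

In a weight-detection setting, one would compare `peak_freq` estimated before and after loading: added mass lowers the cable's natural frequency, and the affected fiber section localizes the weight.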

Vehicle Run-Off-Road Event Automatic Detection by Fiber Sensing Technology

We demonstrate a new application of fiber-optic sensing and machine-learning techniques for vehicle run-off-road event detection to enhance roadway safety and efficiency. The proposed approach achieves high accuracy in a testbed under various experimental conditions.

Automatic Fine-Grained Localization of Utility Pole Landmarks on Distributed Acoustic Sensing Traces Based on Bilinear Resnets

In distributed acoustic sensing (DAS) on aerial fiber-optic cables, utility pole localization is a prerequisite for any subsequent event detection. Currently, localizing the utility poles on DAS traces relies on human experts who manually label the poles’ locations by examining DAS signal patterns generated in response to hammer knocks on the poles. This process is inefficient, error-prone, and expensive, and thus impractical and non-scalable for industrial applications. In this paper, we propose two machine learning approaches to automate this procedure for large-scale implementation. In particular, we investigate both unsupervised and supervised methods for fine-grained pole localization. Our methods are tested on two real-world datasets from field trials, and estimate pole locations at the same level of accuracy as human experts while remaining strongly robust to label noise.
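The abstract does not detail the unsupervised variant; a simple baseline in that spirit is peak picking with non-maximum suppression on a per-position knock-energy trace, since each hammer knock concentrates energy at the pole's fiber position. Everything below (the synthetic trace, the threshold, the minimum spacing) is hypothetical and only illustrates the idea.

```python
import numpy as np

def localize_poles(energy, min_separation, threshold):
    """Greedy non-maximum suppression: pick the strongest fiber positions
    that exceed `threshold` and lie at least `min_separation` samples
    apart, strongest first."""
    picked = []
    for i in np.argsort(energy)[::-1]:      # positions by descending energy
        if energy[i] < threshold:
            break
        if all(abs(int(i) - p) >= min_separation for p in picked):
            picked.append(int(i))
    return sorted(picked)

# Toy demo (hypothetical trace): knock responses at positions 50, 120, 200.
x = np.arange(300)
trace = sum(np.exp(-((x - c) / 3.0) ** 2) for c in (50, 120, 200))
poles = localize_poles(trace, min_separation=20, threshold=0.5)
# poles == [50, 120, 200]
```

A supervised counterpart (as in the paper's bilinear-ResNet approach) would instead classify or regress pole positions from labeled DAS signal patches.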

Distributed Fiber Sensor Network using Telecom Cables as Sensing Media: Applications

Distributed fiber optic sensing (DFOS) systems allow deployed optical cables to monitor the ambient environment over a wide geographic area. We review recent field-trial results and show how DFOS can be made compatible with passive optical networks (PONs).

Field Trial of Vibration Detection and Localization using Coherent Telecom Transponders over 380-km Link

We demonstrate vibration detection and localization based on extracting optical phase from the DSP elements of a coherent receiver in bidirectional WDM transmission of 200-Gb/s DP-16QAM over 380 km of installed field fiber.

Optics and Biometrics

Forget passwords—identity verification can now be accomplished with the touch of a finger or in the blink of an eye as the biometrics field expands to encompass new techniques and application areas.

ECO: Edge-Cloud Optimization of 5G applications

Centralized cloud computing with 100+ milliseconds network latencies cannot meet the tens of milliseconds to sub-millisecond response times required for emerging 5G applications like autonomous driving, smart manufacturing, tactile internet, and augmented or virtual reality. We describe a new, dynamic runtime that enables such applications to make effective use of the 5G network, compute resources at the edge of this network, and resources in the centralized cloud, at all times. Our runtime continuously monitors the interaction among the microservices, estimates the data produced and exchanged among the microservices, and uses a novel graph min-cut algorithm to dynamically map the microservices to the edge or the cloud to satisfy application-specific response times. Our runtime also handles temporary network partitions, and maintains data consistency across the distributed fabric by using microservice proxies to reduce WAN bandwidth by an order of magnitude, all in an application-specific manner by leveraging knowledge about the application’s functions, latency-critical pipelines and intermediate data. We illustrate the use of our runtime by successfully mapping two complex, representative real-world video analytics applications to the AWS/Verizon Wavelength edge-cloud architecture, improving application response times by 2x compared with a static edge-cloud implementation.
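The min-cut formulation can be illustrated generically (this is not the paper's novel algorithm, just the textbook construction it builds on): model each microservice as a node; connect a source (edge) and sink (cloud) to every service with arcs whose capacities encode the cost of placing that service on the *other* side; weight inter-service arcs by the data they exchange; then a minimum s-t cut yields the cheapest placement. Below, a self-contained Edmonds-Karp max-flow on a two-service toy graph; all service names and costs are made up.

```python
from collections import deque, defaultdict

def min_cut_partition(cap, source, sink):
    """Edmonds-Karp max-flow; returns the set of nodes on the source side
    of a minimum s-t cut. `cap` maps (u, v) -> arc capacity."""
    residual = defaultdict(int)
    adj = defaultdict(set)
    for (u, v), c in cap.items():
        residual[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)                      # allow traversing residual arcs
    while True:
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:    # BFS for an augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            break
        path, v = [], sink                 # walk the path back to the source
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for u, v in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
    side, q = {source}, deque([source])    # residual-reachable = source side
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in side and residual[(u, v)] > 0:
                side.add(v)
                q.append(v)
    return side

# Hypothetical costs: EDGE->svc = penalty of running svc in the cloud,
# svc->CLOUD = cost of running svc at the edge, svc<->svc = data exchanged.
cap = {
    ("EDGE", "ingest"): 10, ("ingest", "CLOUD"): 1,    # ingest is data-heavy
    ("EDGE", "analytics"): 1, ("analytics", "CLOUD"): 5,
    ("ingest", "analytics"): 2, ("analytics", "ingest"): 2,
}
edge_side = min_cut_partition(cap, "EDGE", "CLOUD")
# edge_side == {"EDGE", "ingest"}: run ingest at the edge, analytics in cloud.
```

The cut value (here 4: ingest's edge cost 1, analytics' cloud cost 1, plus the 2 units of cross-placement traffic) is exactly the total placement cost the runtime would minimize; the paper's contribution lies in doing this dynamically from monitored traffic.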

Disentangled Recurrent Wasserstein Auto-Encoder

Learning disentangled representations leads to interpretable models and facilitates data generation with style transfer, which has been extensively studied on static data such as images in an unsupervised learning framework. However, only a few works have explored unsupervised disentangled sequential representation learning due to the challenges of generating sequential data. In this paper, we propose the recurrent Wasserstein Autoencoder (R-WAE), a new framework for generative modeling of sequential data. R-WAE disentangles the representation of an input sequence into static and dynamic factors (i.e., time-invariant and time-varying parts). Our theoretical analysis shows that R-WAE minimizes an upper bound of a penalized form of the Wasserstein distance between the model distribution and the sequential data distribution, while simultaneously maximizing the mutual information between the input data and each of the disentangled latent factors. This is superior to (recurrent) VAE, which does not explicitly enforce mutual information maximization between input data and disentangled latent representations. When the number of actions in sequential data is available as weak supervision, R-WAE is extended to learn a categorical latent representation of actions to improve its disentanglement. Experiments on a variety of datasets show that our models outperform other baselines with the same settings in terms of disentanglement and unconditional video generation, both quantitatively and qualitatively.

Hopper: Multi-hop Transformer for Spatio-Temporal Reasoning

This paper considers the problem of spatiotemporal object-centric reasoning in videos. Central to our approach is the notion of object permanence, i.e., the ability to reason about the location of objects as they move through the video while being occluded, contained, or carried by other objects. Existing deep learning based approaches often suffer from spatiotemporal biases when applied to video reasoning problems. We propose Hopper, which uses a Multi-hop Transformer to reason about object permanence in videos. Given a video and a localization query, Hopper reasons over image and object tracks to automatically hop over critical frames in an iterative fashion to predict the final position of the object of interest. We demonstrate the effectiveness of using a contrastive loss to reduce spatiotemporal biases. We evaluate on the CATER dataset and find that Hopper achieves 73.2% Top-1 accuracy at just 1 FPS by hopping through only a few critical frames. We also demonstrate that Hopper can perform long-term reasoning by building CATER-h, a dataset that requires multi-step reasoning to correctly localize objects of interest.