AE-StyleGAN: Improved Training of Style-Based Auto-Encoders

StyleGANs have shown impressive results on data generation and manipulation in recent years, thanks to their disentangled style latent space. Considerable effort has been devoted to inverting a pretrained generator, where an encoder is trained ad hoc after the generator in a two-stage fashion. In this paper, we focus on style-based generators and ask a scientific question: does forcing such a generator to reconstruct real data lead to a more disentangled latent space and make the inversion from image to latent space easier? We describe a new methodology for training a style-based autoencoder in which the encoder and generator are optimized end-to-end. We show that our proposed model consistently outperforms baselines in terms of image inversion and generation quality. Supplementary material, code, and pretrained models are available on the project website.
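
As an illustration only, the following is a minimal sketch of what an end-to-end encoder/generator update of this kind could look like in PyTorch; the module names, loss terms, and weights are assumptions for illustration, not the paper's actual training code.

```python
# Hedged sketch (not the paper's code): one possible end-to-end update for a
# style-based auto-encoder, where encoder E maps an image into the style latent
# space and generator G reconstructs it; losses and weights are assumptions.
import torch
import torch.nn.functional as F

def ae_stylegan_step(E, G, D, real, opt_eg, opt_d, recon_weight=1.0):
    # --- discriminator update on real vs. reconstructed images ---
    with torch.no_grad():
        w = E(real)                    # image -> style latent(s)
        recon = G(w)                   # latent(s) -> reconstructed image
    d_loss = F.softplus(D(recon)).mean() + F.softplus(-D(real)).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- joint encoder/generator update: adversarial + reconstruction terms ---
    w = E(real)
    recon = G(w)
    g_adv = F.softplus(-D(recon)).mean()       # non-saturating GAN loss
    g_rec = F.l1_loss(recon, real)             # pixel reconstruction term
    eg_loss = g_adv + recon_weight * g_rec
    opt_eg.zero_grad(); eg_loss.backward(); opt_eg.step()
    return d_loss.item(), eg_loss.item()
```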

SplitBrain: Hybrid Data and Model Parallel Deep Learning

The recent success of deep learning applications has coincided with the wide availability of powerful computational resources for training sophisticated machine learning models on huge datasets. Nonetheless, training large models such as convolutional neural networks with model parallelism (as opposed to data parallelism) is challenging, because the complex communication between model shards makes it difficult to partition the computation efficiently across multiple machines with an acceptable trade-off. This paper presents SplitBrain, a high-performance distributed deep learning framework supporting hybrid data and model parallelism. Specifically, SplitBrain provides layer-specific partitioning that co-locates compute-intensive convolutional layers while sharding memory-demanding layers. A novel scalable group communication scheme is proposed to further improve training throughput with reduced communication overhead. The results show that SplitBrain can achieve nearly linear speedup while saving up to 67% of memory consumption for data- and model-parallel VGG on CIFAR-10.
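
For illustration, a toy partitioning rule in the spirit of the layer-specific strategy described above might look as follows; the layer descriptors and the resulting plan are hypothetical, not SplitBrain's implementation.

```python
# Hedged sketch (assumption, not SplitBrain's code): replicate compute-heavy
# convolutional layers across workers (data parallelism) and shard
# parameter-heavy layers such as fully-connected layers (model parallelism).
def assign_parallelism(layers, n_workers):
    """layers: list of dicts like {"name": "fc6", "type": "fc", "params": 102760448}."""
    plan = {}
    for layer in layers:
        if layer["type"] == "conv":
            # compute-intensive, relatively few parameters: replicate
            plan[layer["name"]] = {"mode": "data-parallel", "replicas": n_workers}
        else:
            # memory-demanding: split parameters across workers
            shard = layer["params"] // n_workers
            plan[layer["name"]] = {"mode": "model-parallel", "params_per_worker": shard}
    return plan

# Example: a VGG-like layer list on 4 workers
vgg = [{"name": "conv1_1", "type": "conv", "params": 1_792},
       {"name": "fc6", "type": "fc", "params": 102_760_448}]
print(assign_parallelism(vgg, 4))
```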

Distributed Fiber Sensor Network Using Telecom Cables as Sensing Media: Technology Advancements and Applications

Distributed fiber optic sensing (DFOS) is a rapidly evolving field that allows the existing optical fiber infrastructure for telecommunications to be reused for wide-area sensing. Using the backscattering mechanisms of glass (Rayleigh, Brillouin, and Raman backscatter), it is possible to realize distributed vibration and temperature sensors with good sensitivity at every fiber position and a spatial resolution determined by the bandwidth of the interrogation signal. In this paper, we review the main technologies in currently deployed DFOS, along with the digital signal processing operations performed to extract the sensing parameters of interest. We report recent distributed vibration sensing, distributed acoustic sensing, and distributed temperature sensing field trial results over an existing network with reconfigurable add/drop multiplexers carrying live telecom traffic, showing that the network is capable of simultaneous traffic and temperature monitoring. We report Brillouin optical time-domain reflectometry experimental results for monitoring static strain on aerial fiber cables suspended on utility poles. Finally, we demonstrate an example of network modification that makes passive optical networks compatible with DFOS by adding reflective semiconductor optical amplifiers at optical network units.
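
As a back-of-the-envelope illustration of the bandwidth/resolution relationship mentioned above, the standard pulse-based relation Δz ≈ c·τ/(2n) can be evaluated as follows; the group index used here is a typical assumed value, not a figure from the paper.

```python
# Hedged illustration (textbook relation, not from the paper): in pulse-based
# interrogation, the two-point spatial resolution is roughly c * tau / (2 * n),
# i.e. it is set by the pulse duration tau (inverse of the signal bandwidth).
C = 299_792_458.0      # speed of light in vacuum, m/s
N_GROUP = 1.468        # typical group index of silica fiber (assumed value)

def spatial_resolution(pulse_duration_s):
    return C * pulse_duration_s / (2 * N_GROUP)

for tau_ns in (10, 50, 100):
    print(f"{tau_ns:>4} ns pulse -> ~{spatial_resolution(tau_ns * 1e-9):.1f} m resolution")
```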

A Deep Generative Model for Molecule Optimization via One Fragment Modification

Molecule optimization is a critical step in drug development, improving the desired properties of drug candidates through chemical modification. We have developed a novel deep generative model, Modof, over molecular graphs for molecule optimization. Modof modifies a given molecule by predicting a single site of disconnection in the molecule and removing and/or adding fragments at that site. A pipeline of multiple, identical Modof models, Modof-pipe, modifies an input molecule at multiple disconnection sites. Here we show that Modof-pipe is able to retain major molecular scaffolds, allow control over intermediate optimization steps, and better constrain molecule similarities. Modof-pipe outperforms state-of-the-art methods on benchmark datasets. Without molecular similarity constraints, Modof-pipe achieves an 81.2% improvement in the octanol–water partition coefficient penalized by synthetic accessibility and ring size, and improvements of 51.2%, 25.6%, and 9.2% when the optimized molecules must be at least 0.2, 0.4, and 0.6 similar to those before optimization, respectively. Modof-pipe is further enhanced into Modof-pipem, which modifies one molecule into multiple optimized ones. Modof-pipem achieves additional performance improvement, at least 17.8% better than Modof-pipe.
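
The following is a hedged sketch of how a pipeline of one-site edits with a similarity constraint could be wired together; modof_step is a hypothetical placeholder for the trained model, and the similarity check uses standard RDKit Morgan fingerprints rather than anything specific to the released Modof code.

```python
# Hedged sketch (not the released Modof code): pipelined one-site edits with a
# Tanimoto-similarity stopping rule, as described above.  `modof_step` is a
# hypothetical callable that returns a modified SMILES string.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def tanimoto(smiles_a, smiles_b):
    fa = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles_a), 2, nBits=2048)
    fb = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles_b), 2, nBits=2048)
    return DataStructs.TanimotoSimilarity(fa, fb)

def modof_pipe(smiles, modof_step, n_iters=5, min_sim=0.4):
    current = smiles
    for _ in range(n_iters):
        candidate = modof_step(current)          # one disconnection-site edit
        if tanimoto(smiles, candidate) < min_sim:
            break                                 # stop before drifting too far
        current = candidate
    return current
```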

Detection and Localization of Stationary Weights Hanging on Aerial Telecommunication Fibers using Distributed Acoustic Sensing

For the first time to our knowledge, a stationary weight hanging on an operational aerial telecommunication field fiber was detected and localized using only ambient data collected by a φ-DAS system. Although stationary weights do not create temporally varying signals, and hence cannot be observed directly in the DAS traces, the existence and location of the additional weights were revealed by operational modal analysis of the aerial fiber structure.
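
As a rough illustration of the general idea (an assumption about the approach, not the paper's method), ambient DAS data can be examined per fiber position in the frequency domain; a localized shift or dip in the dominant resonance peak along the cable would point to an added mass.

```python
# Hedged illustration (assumed workflow, not the paper's processing chain):
# estimate the dominant ambient-vibration frequency at each fiber channel; an
# added weight locally changes the structure's resonance, so an outlier along
# the cable flags the loaded span.
import numpy as np
from scipy.signal import welch

def dominant_frequency_per_channel(das, fs):
    """das: array of shape (n_channels, n_samples) of ambient strain data."""
    peaks = []
    for trace in das:
        f, pxx = welch(trace, fs=fs, nperseg=min(4096, len(trace)))
        peaks.append(f[np.argmax(pxx)])
    return np.asarray(peaks)
```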

AQuA: Analytical Quality Assessment for Optimizing Video Analytics Systems

Millions of cameras at the edge are being deployed to power a variety of deep learning applications. However, the frames captured by these cameras are not always pristine: they can be distorted by lighting issues, sensor noise, compression, etc. Such distortions not only deteriorate visual quality but also impact the accuracy of deep learning applications that process such video streams. In this work, we introduce AQuA, which protects application accuracy against distorted frames by scoring the level of distortion in each frame. It considers the analytical quality of frames rather than their visual quality by learning a novel metric, the classifier opinion score, and uses a lightweight, CNN-based, object-independent feature extractor. AQuA accurately scores distortion levels of frames and generalizes to multiple different deep learning applications. When used to filter poor-quality frames at the edge, it reduces high-confidence errors for analytics applications by 17%. Through filtering, and due to its low overhead (14 ms), AQuA can also reduce computation time and average bandwidth usage by 25%.
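
For illustration, here is a hedged sketch of how such a quality score could gate frames before an expensive analytics model runs at the edge; quality_model, analytics_model, and the threshold are hypothetical placeholders, not AQuA's interface.

```python
# Hedged sketch (hypothetical interface, not AQuA's implementation): use an
# analytical-quality score to drop distorted frames before analytics runs,
# saving compute and avoiding high-confidence errors on bad inputs.
def filter_and_analyze(frames, quality_model, analytics_model, min_score=0.6):
    results = []
    for frame in frames:
        score = quality_model(frame)          # distortion-aware quality score
        if score < min_score:
            continue                          # skip frames likely to cause errors
        results.append(analytics_model(frame))
    return results
```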

InfoGCL: Information-Aware Graph Contrastive Learning

Various graph contrastive learning models have been proposed in recent years to improve the performance of tasks on graph datasets. While effective and prevalent, these models are usually carefully customized. In particular, although all recent works create two contrastive views, they differ in their view augmentations, architectures, and objectives. It remains an open question how to build a graph contrastive learning model from scratch for a particular graph task and dataset. In this work, we aim to fill this gap by studying how graph information is transformed and transferred during the contrastive learning process, and we propose an information-aware graph contrastive learning framework called InfoGCL. The key to the success of the proposed framework is to follow the Information Bottleneck principle and reduce the mutual information between contrastive parts while keeping task-relevant information intact at the level of both the individual module and the entire framework, so that the information loss during graph representation learning is minimized. We show for the first time that all recent graph contrastive learning methods can be unified by our framework. Based on theoretical and empirical analysis on benchmark graph datasets, we show that InfoGCL achieves state-of-the-art performance on both graph classification and node classification tasks.
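
As a point of reference only, a standard InfoNCE contrastive loss between two views is sketched below; this is a common contrastive objective used in this literature, not necessarily InfoGCL's exact formulation.

```python
# Hedged sketch (standard InfoNCE between two views, not InfoGCL's exact
# objective): z1 and z2 are node/graph representations from two augmented
# views; the loss pulls matched pairs together and pushes other pairs apart.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.2):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```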

Edge-based fever screening system over private 5G

Edge computing and 5G have made it possible to perform analytics closer to the source of data and achieve super-low-latency response times, which is not possible with a centralized cloud deployment. In this paper, we present a novel fever screening system that uses edge machine learning techniques and leverages private 5G to accurately identify and screen individuals with fever in real time. In particular, we present novel deep-learning-based techniques for fusion and alignment of cross-spectral visual and thermal data streams at the edge. Our Cross-Spectral Generative Adversarial Network (CS-GAN) synthesizes visual images that have the key, representative object-level features required to uniquely associate objects across the visual and thermal spectra. Two key features of CS-GAN are a novel, feature-preserving loss function that results in high-quality pairing of corresponding cross-spectral objects, and dual bottleneck residual layers with skip connections (a new network enhancement) that not only accelerate real-time inference but also speed up convergence during model training at the edge. To the best of our knowledge, this is the first technique that leverages 5G networks and limited edge resources to enable real-time feature-level association of objects in visual and thermal streams (30 ms per full-HD frame on an Intel Core i7-8650 4-core, 1.9 GHz mobile processor). It is also the first system to achieve real-time operation, which has enabled fever screening of employees and guests in arenas, theme parks, airports, and other critical facilities. By leveraging edge computing and 5G, our fever screening system achieves 98.5% accuracy and can process ∼5X more people than a centralized cloud deployment.
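
For illustration, a generic bottleneck residual block with a skip connection is sketched below; the paper's dual bottleneck residual layers may differ, so this is an assumed shape rather than the CS-GAN architecture.

```python
# Hedged sketch (a generic bottleneck residual block with a skip connection,
# an assumed design, not the paper's exact layers): reducing channels before
# the 3x3 convolution cuts FLOPs, the kind of change that helps real-time
# inference on constrained edge hardware.
import torch.nn as nn

class BottleneckResidual(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        mid = channels // reduction
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, 1), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1),
        )

    def forward(self, x):
        return x + self.body(x)     # skip connection
```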

Dynamic Causal Discovery in Imitation Learning

Using deep reinforcement learning (DRL) to recover expert policies via imitation has proven promising in a wide range of applications. However, it remains difficult to interpret the control policy learned by the agent. The difficulty stems from two aspects: 1) agents in DRL are usually implemented as deep neural networks (DNNs), which are black-box models and lack interpretability; 2) the latent causal mechanism behind the agent's decisions may vary along the trajectory rather than staying static across time steps. To address these difficulties, we propose a self-explaining imitation framework that can expose the causal relations among state and action variables behind its decisions. Specifically, a dynamic causal discovery module extracts the causal graph based on the historical trajectory and current states at each time step, and a causality encoding module models the interactions among variables along the discovered causal edges. After encoding causality into variable embeddings, a prediction model performs imitation learning on top of the obtained representations. These three components are trained end-to-end, and the discovered causal edges provide interpretations of the rules captured by the agent. Comprehensive experiments are conducted on a simulation dataset to analyze the causal discovery capacity, and we further test the framework on the real-world medical dataset MIMIC-IV. Experimental results demonstrate its potential for providing explanations behind decisions.
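
A hedged, highly simplified sketch of the three-stage forward pass described above follows; every module shape and name here is an assumption for illustration, not the paper's implementation.

```python
# Hedged sketch (assumed shapes, not the paper's code): per-step causal graph
# discovery, causality-aware encoding of state/action variables, and an
# imitation policy head on top of the resulting representations.
import torch
import torch.nn as nn

class SelfExplainingPolicy(nn.Module):
    def __init__(self, n_vars, dim, n_actions):
        super().__init__()
        self.graph_net = nn.Linear(n_vars * dim, n_vars * n_vars)  # causal discovery
        self.encoder = nn.Linear(dim, dim)                          # causality encoding
        self.policy = nn.Linear(n_vars * dim, n_actions)            # imitation head

    def forward(self, x):                      # x: (batch, n_vars, dim)
        b, n, d = x.shape
        adj = torch.sigmoid(self.graph_net(x.reshape(b, -1))).reshape(b, n, n)
        h = adj @ self.encoder(x)              # propagate along discovered edges
        logits = self.policy(h.reshape(b, -1))
        return logits, adj                     # adj doubles as the explanation
```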

Shaping mmWave Wireless Channel via Multi-Beam Design using Reconfigurable Intelligent Surfaces

Millimeter-wave (mmWave) communication is considered a key enabler of next-generation wireless networks due to the abundance of available spectrum at mmWave frequencies. However, mmWave suffers from high free-space path loss and poor scattering, resulting in mostly line-of-sight (LoS) channels and, consequently, a lack of coverage. Reconfigurable intelligent surfaces (RIS), as a new paradigm, have the potential to fill these coverage holes by shaping the wireless channel. In this paper, we propose a novel approach for designing an RIS with elements arranged in a uniform planar array (UPA) structure. In what we refer to as multi-beamforming, we design the RIS such that the reflected beam comprises multiple disjoint lobes. Moreover, the beams have optimized gain within the desired angular coverage with fairly sharp edges, avoiding power leakage to other regions. We provide a closed-form, low-complexity solution for the multi-beamforming design and confirm our theoretical results by numerical analysis.
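
As an illustration of the multi-beam idea (a simple superposition heuristic under assumed half-wavelength spacing, not the paper's closed-form design), a phase-only RIS profile for several reflection directions can be formed by summing steering vectors and keeping only their phase.

```python
# Hedged sketch (superposition heuristic, not the paper's optimized design):
# for an nx-by-ny UPA of RIS elements at half-wavelength spacing, sum the
# steering vectors of the desired reflection directions and keep only the
# phase, since RIS elements apply phase-only weights.
import numpy as np

def multi_beam_phase_profile(nx, ny, directions):
    """directions: list of (azimuth, elevation-from-broadside) pairs in radians."""
    x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    total = np.zeros((nx, ny), dtype=complex)
    for az, el in directions:
        # linear phase gradient that steers a beam toward (az, el)
        phase = np.pi * (x * np.sin(el) * np.cos(az) + y * np.sin(el) * np.sin(az))
        total += np.exp(1j * phase)
    return np.angle(total)        # per-element phase shift in [-pi, pi]

profile = multi_beam_phase_profile(32, 32,
                                   [(0.0, np.deg2rad(20)),
                                    (np.deg2rad(60), np.deg2rad(35))])
```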