Field and lab experimental demonstration of nonlinear impairment compensation using neural networks

Fiber nonlinearity is one of the major limitations on the achievable capacity of long-distance fiber-optic transmission systems. Nonlinear impairments are determined by the signal pattern and the transmission system parameters. Deterministic algorithms that approximate the nonlinear Schrödinger equation, either through digital back propagation or through a single-step perturbation-based approach, have been demonstrated; however, their implementation demands excessive signal processing resources and accurate knowledge of the transmission system. A completely different approach uses machine learning algorithms that learn the nonlinear impairment from the received data itself. In this work, a single-step, system-agnostic nonlinearity compensation algorithm based on a neural network is proposed to pre-distort symbols at the transmitter side, demonstrating a ~0.6 dB Q improvement after 2800 km of standard single-mode fiber transmission with a 32 Gbaud signal. Without prior knowledge of the transmission system, the neural network tensor weights are learned from training data, using intra-channel cross-phase modulation and intra-channel four-wave mixing triplets as input features.
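The triplet input features can be sketched as follows. This is an illustrative reconstruction of the standard perturbation-triplet construction, not the authors' implementation; the window half-width M, the function name, and the QPSK test symbols are our own choices:

```python
import numpy as np

def triplet_features(symbols, M=2):
    """Gather intra-channel XPM/FWM perturbation triplets
    A[n+m] * A[n+k] * conj(A[n+m+k]) for each symbol index n.
    Real/imag parts of these complex triplets would feed the network.
    The window half-width M is illustrative."""
    N = len(symbols)
    feats = []
    for n in range(2 * M, N - 2 * M):      # keep all triplet indices in range
        row = [symbols[n + m] * symbols[n + k] * np.conj(symbols[n + m + k])
               for m in range(-M, M + 1)
               for k in range(-M, M + 1)]
        feats.append(row)
    return np.asarray(feats)               # shape: (N - 4M, (2M + 1)**2)

# Example: QPSK symbol sequence (synthetic)
rng = np.random.default_rng(0)
qpsk = (rng.choice([1, -1], size=20) + 1j * rng.choice([1, -1], size=20)) / np.sqrt(2)
F = triplet_features(qpsk, M=2)
```

Each row collects the (2M+1)² triplets centered on one symbol; these are the physics-motivated inputs from which the network learns its pre-distortion.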

Decentralized Transactive Energy Auctions with Bandit Learning

Power systems worldwide have been embracing the rapid growth of distributed energy resources. These resources typically exist at the distribution level, such as electric vehicles, rooftop photovoltaic panels, and home battery systems, and cannot be controlled by a centralized entity such as a utility. However, a large number of distributed energy resources have the potential to reshape the power generation landscape when their owners (prosumers) are allowed to send electricity back to the grid. Transactive energy paradigms are emerging to orchestrate the coordination of prosumers and consumers by enabling the exchange of energy among them. In this paper, we propose a transactive energy auction framework based on blockchain technology for creating trustworthy and transparent transactive environments in distribution networks, without relying on a centralized entity to clear transactions. Moreover, we propose intelligent decentralized decision-making strategies based on bandit learning that let market participants locally decide their energy prices in auctions. The bandit learning approach can provide market participants with more benefits under the blockchain framework than trading energy with a centralized entity, as supported by preliminary simulation results obtained on our blockchain-based platform.
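As an illustration of the bandit idea, a seller could tune its ask price with an epsilon-greedy learner. The candidate prices, the buyer model, and the class below are hypothetical stand-ins, not the paper's algorithm:

```python
import random

class EpsilonGreedyPricer:
    """Each arm is a candidate ask price ($/kWh, illustrative values);
    the reward is the revenue from one auction round."""
    def __init__(self, prices, epsilon=0.1):
        self.prices = prices
        self.epsilon = epsilon
        self.counts = [0] * len(prices)    # pulls per arm
        self.values = [0.0] * len(prices)  # running mean reward per arm

    def select(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.prices))   # explore
        return max(range(len(self.prices)),
                   key=lambda i: self.values[i])        # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        # incremental running-mean update
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Simulated auctions: a bid clears only at or below the buyer's (unknown) limit
random.seed(0)
pricer = EpsilonGreedyPricer([0.05, 0.10, 0.15, 0.20])
for _ in range(2000):
    arm = pricer.select()
    price = pricer.prices[arm]
    pricer.update(arm, price if price <= 0.18 else 0.0)
best = pricer.prices[max(range(4), key=lambda i: pricer.values[i])]
```

The learner settles on the highest price that still clears (0.15 here) without ever being told the buyer's limit, which is the appeal of bandit learning for decentralized market participants.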

Neural-Network-Based G-OSNR Estimation of Probabilistic-Shaped 144QAM Channels in DWDM Metro Network Field Trial

A two-stage neural network model is applied to captured PS-144QAM raw data to estimate channel G-OSNR in a metro network field trial. We obtained 0.27 dB RMSE with a first-stage CNN classifier and second-stage ANN regressors.
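A minimal numpy sketch of the two-stage structure: synthetic features stand in for raw captures, a nearest-centroid rule stands in for the CNN classifier, and per-bin least squares stands in for the ANN regressors. All thresholds and dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: per-channel feature vectors with true G-OSNR labels
X = rng.normal(size=(300, 10))
y = X @ rng.normal(size=10) + 20.0          # "G-OSNR" in dB (synthetic)

# ---- Stage 1: coarse classifier assigns each channel to a G-OSNR bin ----
bins = np.digitize(y, [18.0, 22.0])         # 3 coarse classes (thresholds illustrative)
centroids = np.stack([X[bins == c].mean(axis=0) for c in range(3)])
pred_bin = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)

# ---- Stage 2: a separate regressor per bin refines the estimate ----
est = np.empty_like(y)
for c in range(3):
    idx = pred_bin == c
    if not idx.any():
        continue
    Xi = np.c_[X[idx], np.ones(idx.sum())]  # add bias column
    w, *_ = np.linalg.lstsq(Xi, y[idx], rcond=None)
    est[idx] = Xi @ w
rmse = np.sqrt(np.mean((est - y) ** 2))
```

Splitting the range first lets each second-stage regressor specialize on a narrow G-OSNR band instead of fitting the whole range at once.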

Energy Predictive Models with Limited Data using Transfer Learning

In this paper, we consider the problem of developing predictive models with limited data for energy assets such as electricity loads, PV power generation, etc. We specifically investigate cases where the amount of historical data is not sufficient to effectively train the prediction model. We first develop an energy predictive model based on a convolutional neural network (CNN), which is well suited to capture the intraday, daily, and weekly cyclostationary patterns, trends, and seasonalities in energy asset time series. A transfer learning strategy is then proposed to address the challenge of limited training data. We demonstrate our approach on a use case of daily electricity demand forecasting, and show that applying the transfer learning strategy to the CNN model yields a significant improvement over existing forecasting methods.
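The freeze-and-fine-tune pattern at the heart of the transfer strategy can be sketched as follows; a linear feature map stands in for the pretrained CNN layers, and all data, dimensions, and the ridge penalty are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Feature extractor "pretrained" on a data-rich source asset
# (stand-in for the CNN's convolutional layers; a linear map for brevity)
W_feat = rng.normal(size=(24, 8)) * 0.3

# Target asset: only 20 days of history, 24 hourly loads each (synthetic)
X_tgt = rng.normal(size=(20, 24))
y_tgt = X_tgt @ rng.normal(size=24)

# Transfer: freeze W_feat and fit only a small head on the limited target data
Phi = X_tgt @ W_feat                               # frozen features
lam = 1e-2                                         # ridge penalty (illustrative)
w_head = np.linalg.solve(Phi.T @ Phi + lam * np.eye(8), Phi.T @ y_tgt)
pred = Phi @ w_head
mse = np.mean((pred - y_tgt) ** 2)
```

Only the 8 head weights are estimated from the 20 target samples; the 24×8 extractor stays fixed, which is the essence of transferring a model to an asset with scarce history.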

Clairvoyant Networks

We use the term clairvoyant to refer to networks that provide on-demand visibility for any flow at any time. Traditionally, network visibility is achieved by instrumenting and passively monitoring all flows in a network. SDN networks, by design endowed with full visibility, offer another alternative to network-wide flow monitoring. Both approaches incur significant capital and operational costs to make networks clairvoyant. In this paper, we argue that we can make any existing network clairvoyant by installing one or more SDN-enabled switches and a specialized controller to support on-demand visibility. We analyze the benefits and costs of such clairvoyant networks and provide a basic design by integrating two existing mechanisms for updating paths through legacy switches with SDN, telekinesis and magnet MACs. Our evaluation on a lab testbed and through extensive simulations shows that, even with a single SDN-enabled switch, operators can make any flow visible for monitoring within milliseconds, albeit at a 38% average increase in path length. With as few as 2% of legacy switches strategically replaced by SDN switches, clairvoyant networks achieve on-demand flow visibility with negligible overhead.

A Dataset for High-Level 3D Scene Understanding of Complex Road Scenes in the Top-View

We introduce a novel dataset for high-level 3D scene understanding of complex road scenes. Our annotations extend the existing datasets KITTI [5] and nuScenes [1] with semantically and geometrically meaningful attributes like the number of lanes or the existence of, and distance to, intersections, sidewalks and crosswalks. Our attributes are rich enough to build a meaningful representation of the scene in the top-view and provide a tangible interface to the real world for several practical applications.

Learning Structure-And-Motion-Aware Rolling Shutter Correction

An exact method of correcting the rolling shutter (RS) effect requires recovering the underlying geometry, i.e. the scene structures and the camera motions between scanlines or between views. However, the multiple-view geometry for RS cameras is much more complicated than its global shutter (GS) counterpart, with various degeneracies. In this paper, we first make a theoretical contribution by showing that RS two-view geometry is degenerate in the case of pure translational camera motion. In view of the complex RS geometry, we then propose a Convolutional Neural Network (CNN)-based method which learns the underlying geometry (camera motion and scene structure) from just a single RS image and performs RS image correction. We call our method structure-and-motion-aware RS correction because it reasons about the concealed motions between the scanlines as well as the scene structure. Our method learns from a large-scale dataset synthesized in a geometrically meaningful way, where the RS effect is generated in a manner consistent with the camera motion and scene structure. In extensive experiments, our method achieves superior performance compared to other state-of-the-art methods for single image RS correction and subsequent Structure from Motion (SfM) applications.

Gotta Adapt ’Em All: Joint Pixel and Feature-Level Domain Adaptation for Recognition in the Wild

Recent developments in deep domain adaptation have allowed knowledge transfer from a labeled source domain to an unlabeled target domain at the level of intermediate features or input pixels. We propose that advantages may be derived by combining them, in the form of different insights that lead to a novel design and complementary properties that result in better performance. At the feature level, inspired by insights from semi-supervised learning, we propose a classification-aware domain adversarial neural network that brings target examples into more classifiable regions of the source domain. Next, we posit that computer vision insights are more amenable to injection at the pixel level. In particular, we use 3D geometry and image synthesis based on a generalized appearance flow to preserve identity across pose transformations, while using an attribute-conditioned CycleGAN to translate a single source image into multiple target images that differ in lower-level properties such as lighting. Besides standard UDA benchmarks, we validate on a novel and apt problem of car recognition in unlabeled surveillance images using labeled images from the web, handling explicitly specified, nameable factors of variation through pixel-level adaptation and implicit, unspecified factors through feature-level adaptation.

Feature Transfer Learning for Face Recognition with Under-Represented Data

Despite the large volume of face recognition datasets, there is a significant portion of subjects whose samples are insufficient and thus under-represented. Ignoring such a significant portion results in insufficient training data, while training with under-represented data leads to biased classifiers in conventionally-trained deep networks. In this paper, we propose a center-based feature transfer framework to augment the feature space of under-represented subjects from the regular subjects that have sufficiently diverse samples. A Gaussian prior on the variance is assumed across all subjects, and the variance from regular subjects is transferred to the under-represented ones. This encourages the under-represented distribution to be closer to the regular distribution. Further, an alternating training regimen is proposed to simultaneously achieve less biased classifiers and a more discriminative feature representation. We conduct an ablative study that mimics under-represented datasets by varying the portion of under-represented classes on the MS-Celeb-1M dataset. Advantageous results on LFW, IJB-A and MS-Celeb-1M demonstrate the effectiveness of our feature transfer and training strategy, compared to both general baselines and state-of-the-art methods. Moreover, our feature transfer successfully presents smooth visual interpolation, demonstrating disentanglement that preserves the identity of a class while augmenting its feature space with non-identity variations such as pose and lighting.
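The center-based transfer step can be illustrated in a few lines; the dimensions, spreads, and synthetic feature vectors below are our own placeholders, not the paper's learned embeddings:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16
# Regular subject: many samples with rich intra-class variation
reg = rng.normal(size=d) + rng.normal(scale=0.5, size=(200, d))
# Under-represented subject: few, nearly identical samples
ur = rng.normal(size=d) + rng.normal(scale=0.05, size=(3, d))

# Transfer: move the regular class's deviations from its center
# onto the under-represented class's center (shared-variance assumption)
c_reg = reg.mean(axis=0)
c_ur = ur.mean(axis=0)
augmented = c_ur + (reg - c_reg)   # identity kept, variation borrowed
```

The augmented set keeps the under-represented subject's center (identity) while inheriting the regular class's diversity, giving the classifier enough variation to train on.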

A Parametric Top-View Representation of Complex Road Scenes

In this paper, we address the problem of inferring the layout of complex road scenes given a single camera as input. To achieve that, we first propose a novel parameterized model of road layouts in a top-view representation, which is not only intuitive for human visualization but also provides an interpretable interface for higher-level decision making. Moreover, the design of our top-view scene model allows for efficient sampling and thus generation of large-scale simulated data, which we leverage to train a deep neural network to infer our scene model’s parameters. Specifically, our proposed training procedure uses supervised domain-adaptation techniques to incorporate both simulated as well as manually annotated data. Finally, we design a Conditional Random Field (CRF) that enforces coherent predictions for a single frame and encourages temporal smoothness among video frames. Experiments on two public data sets show that: (1) Our parametric top-view model is representative enough to describe complex road scenes; (2) The proposed method outperforms baselines trained on manually-annotated or simulated data only, thus getting the best of both; (3) Our CRF is able to generate temporally smooth yet semantically meaningful results.