Memory Warps for Learning Long-Term Online Video Representations

This paper proposes a novel memory-based online video representation that is efficient, accurate and predictive. This is in contrast to prior works that often rely on computationally heavy 3D convolutions, ignore actual motion when aligning features over time, or operate in an off-line mode to utilize future frames. In particular, our memory (i) holds the feature representation, (ii) is spatially warped over time to compensate for observer and scene motion, (iii) can carry long-term information, and (iv) enables predicting feature representations in future frames. By exploring a variant that operates at multiple temporal scales, we efficiently learn across even longer time horizons. We apply our online framework to object detection in videos, obtaining a 2.3x speed-up while losing only 0.9% mAP on the ImageNet-VID dataset, compared to prior works, including those that use future frames. Finally, we demonstrate the predictive property of our representation in two novel detection setups, where features are propagated over time (i) to significantly enhance a real-time detector by more than 10% mAP in a multi-threaded online setup and (ii) to anticipate objects in future frames.
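The core memory update, warping the stored features with motion before blending in the current frame, can be sketched minimally as follows. This is an illustrative sketch under assumed conventions, not the paper's implementation: `flow` is a hypothetical backward flow field giving per-pixel (dy, dx) offsets, and the blending weight `alpha` is an assumed hyperparameter.

```python
import numpy as np

def warp_features(memory, flow):
    """Bilinearly warp a feature map `memory` (C, H, W) by a backward
    flow field `flow` (2, H, W) of per-pixel (dy, dx) offsets."""
    C, H, W = memory.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_y = np.clip(ys + flow[0], 0, H - 1)
    src_x = np.clip(xs + flow[1], 0, W - 1)
    y0, x0 = np.floor(src_y).astype(int), np.floor(src_x).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = src_y - y0, src_x - x0
    # Bilinear combination of the four neighboring feature vectors.
    return ((1 - wy) * (1 - wx) * memory[:, y0, x0]
            + (1 - wy) * wx * memory[:, y0, x1]
            + wy * (1 - wx) * memory[:, y1, x0]
            + wy * wx * memory[:, y1, x1])

def update_memory(memory, flow, features, alpha=0.5):
    # Warp the old memory to the current frame, then blend in new features.
    return (1 - alpha) * warp_features(memory, flow) + alpha * features
```

Because the memory is aligned to the current frame before blending, information can persist over long horizons without being smeared by camera or scene motion.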

Feature Transfer Learning for Deep Face Recognition with Long-Tail Data

Real-world face recognition datasets exhibit long-tail characteristics, which result in biased classifiers in conventionally trained deep neural networks, or insufficient data when long-tail classes are ignored. In this paper, we propose to handle long-tail classes in the training of a face recognition engine by augmenting their feature space under a center-based feature transfer framework. A Gaussian prior is assumed across all the head (regular) classes, and the variance from regular classes is transferred to the long-tail class representation. This encourages the long-tail distribution to be closer to the regular distribution, while enriching and balancing the limited training data. Further, an alternating training regimen is proposed to simultaneously achieve less biased decision boundaries and a more discriminative feature representation. We conduct empirical studies that mimic long-tail datasets by limiting the number of samples and the proportion of long-tail classes on the MS-Celeb-1M dataset. We compare our method with baselines not designed to handle long-tail classes and also with state-of-the-art methods on face recognition benchmarks. State-of-the-art results on LFW, IJB-A and MS-Celeb-1M datasets demonstrate the effectiveness of our feature transfer approach and training strategy. Finally, our feature transfer allows smooth visual interpolation, which demonstrates disentanglement that preserves the identity of a class while augmenting its feature space with non-identity variations.
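The center-based transfer idea can be illustrated with a minimal sketch (not the paper's implementation): intra-class offsets around a head class center, which capture non-identity variation, are re-centered on a long-tail class center to synthesize additional features for that class.

```python
import numpy as np

def transfer_variance(head_feats, head_center, tail_feats, tail_center):
    """Augment a long-tail class by transferring intra-class variation
    observed in a head (regular) class.

    head_feats: (N, D) features of a data-rich class
    tail_feats: (M, D) features of a long-tail class, M << N
    """
    offsets = head_feats - head_center    # non-identity variation
    synthetic = tail_center + offsets     # re-centered on the tail identity
    return np.concatenate([tail_feats, synthetic], axis=0)
```

The synthetic samples keep the tail class's center (identity) while borrowing the head class's spread, which is the balancing effect the abstract describes.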

Channel-Recurrent Autoencoding for Image Modeling

Despite recent successes in synthesizing faces and bedrooms, existing generative models struggle to capture more complex image types (Figure 1), potentially due to the oversimplification of their latent space constructions. To tackle this issue, building on Variational Autoencoders (VAEs), we integrate recurrent connections across channels into both the inference and generation steps, allowing high-level features to be captured in a global-to-local, coarse-to-fine manner. Combined with an adversarial loss, our channel-recurrent VAE-GAN (crVAE-GAN) outperforms VAE-GAN in generating a diverse spectrum of high resolution images while maintaining the same level of computational efficiency. Our model produces interpretable and expressive latent representations that benefit downstream tasks such as image completion. Moreover, we propose two novel regularizations, namely a KL objective weighting scheme over time steps and mutual information maximization between transformed latent variables and the outputs, to enhance the training.
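The channel-recurrence and the KL weighting scheme can both be sketched generically. This is an assumed simplification, not the paper's architecture: the latent code is split into T channel groups processed by a plain tanh RNN, so later (finer) groups are conditioned on earlier (coarser) ones, and the per-step KL terms are combined with per-step weights.

```python
import numpy as np

def channel_recurrent(latent, Wh, Wx, b):
    """Run a simple tanh RNN across the T channel groups of a latent
    code `latent` (T, D), producing one hidden state per group."""
    h = np.zeros(Wh.shape[0])
    outs = []
    for z_t in latent:                    # coarse-to-fine channel groups
        h = np.tanh(Wh @ h + Wx @ z_t + b)
        outs.append(h)
    return np.stack(outs)

def weighted_kl(kl_per_step, weights):
    """KL objective weighting over time steps: a weighted sum of the
    per-step KL divergence terms."""
    return float(np.dot(weights, kl_per_step))
```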

Universal Hybrid Probabilistic-geometric Shaping Based on Two-dimensional Distribution Matchers

We propose universal distribution matchers applicable to any two-dimensional signal constellation. We experimentally demonstrate that 32-ary QAM based on hybrid probabilistic-geometric shaping outperforms both probabilistically shaped 32QAM and regular 32QAM.
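As background, probabilistic shaping assigns higher probabilities to lower-energy constellation points. A standard way to do this is with a Maxwell-Boltzmann distribution; the sketch below shows that generic step, not the proposed two-dimensional distribution matcher.

```python
import numpy as np

def maxwell_boltzmann(points, lam):
    """Assign probabilities p_i proportional to exp(-lam * |x_i|^2)
    over the constellation `points` (complex or real amplitudes)."""
    energy = np.abs(points) ** 2
    p = np.exp(-lam * energy)
    return p / p.sum()

def entropy_bits(p):
    """Source entropy in bits/symbol; shaping trades entropy for
    reduced average symbol energy."""
    return float(-(p * np.log2(p)).sum())
```

Sweeping `lam` trades rate (entropy) against average power, which is what allows the flex-rate operation these shaping papers exploit.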

Flex-Rate Transmission using Hybrid Probabilistic and Geometric Shaped 32QAM

A novel algorithm is proposed to design geometric shaped 32QAM that works with probabilistic shaping, approaching the Shannon limit to within ~0.2 dB in SNR. The experimental results show a ~0.2 dB SNR advantage over 64-Gbaud PAS-64QAM, and flex-rate transmission demonstrates a >500 km reach improvement over 32QAM.

Evolution from 8QAM live traffic to PCS 64-QAM with Neural-Network Based Nonlinearity Compensation on 11000 km Open Subsea Cable

We report on the evolution of the longest segment of the FASTER cable, at 11,017 km, with 8QAM transponders in service at 4 b/s/Hz spectral efficiency (SE). With offline testing, 6 b/s/Hz is further demonstrated using probabilistically shaped 64QAM together with a novel, low-complexity nonlinearity compensation technique that builds a black-box model of the transmission by training an artificial neural network, resulting in the largest SE-distance product, 66,102 b/s/Hz-km, over a live-traffic-carrying cable.
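The black-box idea, learning the transmission's input-output behavior directly from data rather than from a physical model, can be sketched with a toy example. Everything here is an assumption for illustration: a memoryless cubic stand-in for the channel nonlinearity and a tiny one-hidden-layer network trained by full-batch gradient descent (the real model would span many neighboring symbols).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the channel: a cubic nonlinearity plus noise.
x = rng.uniform(-1, 1, size=(512, 1))
y = x + 0.2 * x**3 + 0.01 * rng.standard_normal((512, 1))

# One-hidden-layer MLP that learns to mimic the channel response.
W1 = 0.5 * rng.standard_normal((1, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(x)
loss0 = float(np.mean((pred0 - y) ** 2))   # loss before training

lr = 0.05
for _ in range(500):
    h, pred = forward(x)
    g = 2 * (pred - y) / len(x)            # dL/dpred (mean squared error)
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1 - h**2)           # backprop through tanh
    gW1, gb1 = x.T @ gh, gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
loss = float(np.mean((pred - y) ** 2))     # loss after training
```

Once such a model fits the channel, its learned distortion can be subtracted (or inverted) at the receiver, which is the compensation step the abstract refers to.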

ANN-Based Transfer Learning for QoT Prediction in Real-Time Mixed Line-Rate Systems

Quality-of-transmission prediction for real-time mixed line-rate systems is realized using artificial neural network based transfer learning with SDN orchestration. An accuracy of 0.42 dB is achieved while reducing the number of training samples from 1000 to 20.
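The transfer-learning step behind such a sample reduction can be sketched generically: reuse the representation learned on a data-rich source system and refit only a small output layer on the few target samples. This is an assumed simplification, not the paper's network; fixed random tanh features stand in for a pretrained hidden layer, and the tasks are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_head(H, y):
    """Refit only the linear output layer by ridge least squares."""
    A = H.T @ H + 1e-3 * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ y)

# Shared hidden layer (stand-in for one pretrained on the source system).
W_hid = rng.standard_normal((3, 16))
hidden = lambda X: np.tanh(X @ W_hid)

# Source task: abundant samples.
Xs = rng.uniform(-1, 1, (1000, 3))
ys = Xs @ np.array([1.0, -2.0, 0.5]) + 0.1 * Xs[:, 0] ** 2
w_src = fit_head(hidden(Xs), ys)

# Target task (e.g., a different line rate): only 20 samples.
# Reuse the hidden layer, refit just the head.
Xt = rng.uniform(-1, 1, (20, 3))
yt = Xt @ np.array([1.1, -1.8, 0.4]) + 0.1 * Xt[:, 0] ** 2
w_tgt = fit_head(hidden(Xt), yt)
```

The point of the design is that only the small head is re-estimated, so 20 target samples can suffice where training from scratch would need orders of magnitude more.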

41.5 Tb/s Data Transport over 549 km of Field Deployed Fiber Using Throughput Optimized Probabilistic-Shaped 144QAM to Support Metro Network Capacity Demands

41.5 Tb/s over 549 km of deployed SSMF in Verizon’s network is achieved using probabilistic-shaped 144QAM to optimize throughput at ultra-fine granularity. This is the highest C-band-only capacity and spectral efficiency reported in a metro field environment.

SVBRDF-Invariant Shape and Reflectance Estimation from a Light-Field Camera

Light-field cameras have recently emerged as a powerful tool for one-shot passive 3D shape capture. However, obtaining the shape of glossy objects like metals or plastics remains challenging, since standard Lambertian cues like photo-consistency cannot be easily applied. In this paper, we derive a spatially-varying (SV)BRDF-invariant theory for recovering 3D shape and reflectance from light-field cameras. Our key theoretical insight is a novel analysis of diffuse plus single-lobe SVBRDFs under a light-field setup. We show that, although direct shape recovery is not possible, an equation relating depths and normals can still be derived. Using this equation, we then propose a polynomial (quadratic) shape prior to resolve the shape ambiguity. Once shape is estimated, we also recover the reflectance. We present extensive experiments on synthetic data covering the entire MERL BRDF dataset, as well as a number of real examples, to validate the theory, where we simultaneously recover shape and BRDFs from a single image taken with a Lytro Illum camera.
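The generic step of imposing a quadratic shape prior can be sketched as a least-squares surface fit. This sketch is an assumption, not the paper's estimator: it simply fits z = a x^2 + b xy + c y^2 + d x + e y + f to depth samples, whereas the paper constrains the fit through its depth-normal equation.

```python
import numpy as np

def fit_quadratic_surface(x, y, z):
    """Least-squares fit of the quadratic patch
    z = a x^2 + b xy + c y^2 + d x + e y + f,
    returning the coefficients (a, b, c, d, e, f)."""
    A = np.stack([x**2, x * y, y**2, x, y, np.ones_like(x)], axis=1)
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef
```

A quadratic patch has only six degrees of freedom, which is what makes the otherwise under-constrained depth-normal relation solvable.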

Towards a Timely Causality Analysis for Enterprise Security

The increasingly sophisticated Advanced Persistent Threat (APT) attacks have become a serious challenge for enterprise IT security. Attack causality analysis, which tracks multi-hop causal relationships between files and processes to diagnose attack provenances and consequences, is the first step towards understanding APT attacks and taking appropriate responses. Since attack causality analysis is a time-critical mission, it is essential to design causality tracking systems that extract useful attack information in a timely manner. However, prior work is limited in serving this need. Existing approaches have largely focused on pruning causal dependencies that are totally irrelevant to the attack, but fail to differentiate and prioritize abnormal events among the numerous relevant, yet benign and complicated system operations, resulting in long investigation times and slow responses.
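The multi-hop tracking the abstract describes is, at its core, a graph traversal over a provenance graph. The sketch below shows that generic backward-tracing step (not this paper's prioritization scheme); the entity names are hypothetical.

```python
from collections import deque

def backward_trace(depends_on, poi):
    """Backward causality tracking: starting from a point-of-interest
    event `poi`, collect every entity that could have influenced it.
    `depends_on` maps each entity to its causal parents."""
    seen, frontier = {poi}, deque([poi])
    while frontier:
        node = frontier.popleft()
        for parent in depends_on.get(node, []):
            if parent not in seen:
                seen.add(parent)
                frontier.append(parent)
    return seen - {poi}
```

On real systems this ancestor set explodes to millions of benign events, which is exactly why the paper argues that pruning alone is insufficient and abnormal events must be prioritized.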