On Optimal Multi-user Beam Alignment in Millimeter Wave Wireless Systems

Directional transmission patterns (a.k.a. narrow beams) are the key to wireless communications in millimeter wave (mmWave) frequency bands, which suffer from high path loss and severe shadowing. In addition, the propagation channel at mmWave frequencies comprises only a few spatial clusters, requiring a procedure to align the corresponding narrow beams with the angles of departure (AoDs) of the channel clusters. The objective of this procedure, called beam alignment (BA), is to increase the beamforming gain for subsequent data communication. Several prior studies consider optimizing the BA procedure to achieve various objectives such as reducing the BA overhead, increasing throughput, and reducing power consumption. While these studies mostly provide optimized BA schemes for scenarios with a single active user, practical networks often serve multiple active users. Consequently, it is more efficient, in terms of BA overhead and delay, to design multi-user BA schemes that perform beam management for multiple users collectively. This paper considers a class of multi-user BA schemes in which the base station performs a one-shot scan of the angular domain to localize multiple users simultaneously. The objective is to minimize the average expected width of the uncertainty regions (URs) remaining on the AoDs after receiving the users' feedback. Fundamental bounds on the optimal performance are analyzed using information-theoretic tools. Furthermore, a BA optimization problem is formulated and a practical BA scheme is proposed that provides significant gains over the beam sweeping used in the 5G standard.
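
As a toy illustration of the one-shot trade-off (our sketch, not the paper's scheme): under a uniform AoD prior, the expected residual UR width depends only on how the scanning beams partition the angular domain into distinguishable feedback regions. The binary-coded beam pattern and all names below are illustrative assumptions.

```python
# Expected residual uncertainty-region (UR) width after a one-shot scan,
# for a uniform AoD prior. The angular domain is discretized into n_bins
# equal sub-intervals; each scanning beam covers a subset of them, and the
# user feeds back the set of beams it detected. Illustrative sketch only.
import numpy as np

def expected_ur_width(total_angle, n_bins, beams):
    """beams: list of sets of bin indices covered by each scanning beam."""
    bin_width = total_angle / n_bins
    # Feedback signature of a bin = which beams cover it; bins sharing a
    # signature are indistinguishable and form one residual UR.
    signatures = {}
    for b in range(n_bins):
        sig = frozenset(i for i, beam in enumerate(beams) if b in beam)
        signatures.setdefault(sig, []).append(b)
    # Uniform prior: P(landing in a UR) is width/total, so the expected
    # residual width is sum over URs of width**2 / total.
    return sum((len(bins) * bin_width) ** 2
               for bins in signatures.values()) / total_angle

# Example: 3 "binary-coded" beams give every one of 8 bins a unique feedback
# signature (toy noiseless model); 3 contiguous sectors leave wider URs.
coded = [{b for b in range(8) if (b >> i) & 1} for i in range(3)]
sectors = [set(range(0, 3)), set(range(3, 6)), set(range(6, 8))]
print(expected_ur_width(2 * np.pi, 8, coded))    # ~ 2*pi/8
print(expected_ur_width(2 * np.pi, 8, sectors))  # wider on average
```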

Peek-a-boo: Occlusion Reasoning in Indoor Scenes with Plane Representations

We address the challenging task of occlusion-aware indoor 3D scene understanding. We represent scenes by a set of planes, each defined by its normal, its offset, and two masks outlining (i) the extent of the visible part and (ii) the full region comprising both the visible and occluded parts of the plane. We infer these planes from a single input image with a novel neural network architecture. It consists of a two-branch, category-specific module that predicts the layout and the objects of the scene separately, so that different types of planes can be handled better. We also introduce a novel loss function based on plane warping that leverages multiple views at training time for improved occlusion-aware reasoning. To train and evaluate our occlusion-reasoning model, we use the ScanNet dataset and propose (i) a strategy to automatically extract ground truth for both visible and hidden regions and (ii) a new evaluation metric that specifically focuses on predictions in hidden regions. We empirically demonstrate that our approach achieves higher accuracy for occlusion reasoning than competitive baselines on the ScanNet dataset, e.g., a 42.65% relative improvement on hidden regions.
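
The plane-warping loss presumably builds on the standard plane-induced homography; the following minimal sketch (our illustration, not the authors' code) shows how pixels on a plane with normal n and offset d can be warped between two calibrated views, using the convention X2 = R X1 + t and n·X1 = d.

```python
# Plane-induced homography: pixels on the plane in view 1 map to view 2,
# which lets a warping loss compare the two views. Toy values throughout.
import numpy as np

def plane_homography(K1, K2, R, t, n, d):
    """Homography sending view-1 pixels on plane (n, d) into view 2.
    Convention: X2 = R @ X1 + t, plane satisfies n . X1 = d, n unit-norm."""
    return K2 @ (R + np.outer(t, n) / d) @ np.linalg.inv(K1)

def warp_points(H, pts):
    """Apply a 3x3 homography to Nx2 pixel coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R = np.eye(3)                          # identity rotation, toy example
t = np.array([0.1, 0.0, 0.0])          # small sideways translation
n, d = np.array([0.0, 0.0, 1.0]), 2.0  # fronto-parallel plane 2 m away
H = plane_homography(K, K, R, t, n, d)
print(warp_points(H, np.array([[320.0, 240.0]])))  # x shifts by f*tx/d = +25 px
```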

Private-kNN: Practical Differential Privacy for Computer Vision

With increasing ethical and legal concerns over privacy for deep models in visual recognition, differential privacy has emerged as a mechanism to disguise membership of sensitive data in training datasets. Recent methods such as Private Aggregation of Teacher Ensembles (PATE) leverage a large ensemble of teacher models, trained on disjoint subsets of private data, to transfer knowledge to a student model with privacy guarantees. However, labeled vision data is often expensive, and splitting a dataset into many disjoint training sets leads to significantly sub-optimal accuracy that can hardly sustain good privacy bounds. We propose a practical, data-efficient scheme based on the private release of k-nearest-neighbor (kNN) queries, which avoids splitting the training dataset altogether. Our approach allows privacy amplification by subsampling and iterative refinement of the kNN feature embedding. We rigorously analyze the theoretical properties of our method and demonstrate strong experimental performance on practical computer vision datasets for face attribute recognition and person re-identification. In particular, we achieve comparable or better accuracy than PATE while reducing the privacy loss by more than 90%, thereby providing the most practical method to date for private deep learning in computer vision.
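
A minimal sketch of the private kNN release as we read it (illustrative; the paper's exact mechanism and privacy accounting differ in detail): subsample the private set for privacy amplification, take the k nearest neighbors of a query in feature space, and release a noisy argmax over their label votes.

```python
# Privately answering a label query with subsampled kNN voting. The noise
# scale, subsampling rate, and feature space here are toy assumptions; real
# use requires a privacy accountant tracking the budget across queries.
import numpy as np

rng = np.random.default_rng(0)

def private_knn_label(query, feats, labels, n_classes, k=10,
                      subsample=0.2, noise_scale=1.0):
    # Poisson-style subsampling of the private dataset (amplification).
    keep = rng.random(len(feats)) < subsample
    f, y = feats[keep], labels[keep]
    # k nearest neighbors by Euclidean distance in the embedding space.
    nn = np.argsort(np.linalg.norm(f - query, axis=1))[:k]
    votes = np.bincount(y[nn], minlength=n_classes).astype(float)
    # Gaussian noise on the vote histogram; only the argmax is released.
    return int(np.argmax(votes + rng.normal(0, noise_scale, n_classes)))

feats = rng.normal(size=(1000, 16))
labels = (feats[:, 0] > 0).astype(int)   # toy "private" labels
print(private_knn_label(rng.normal(size=16), feats, labels, n_classes=2))
```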

Towards Universal Representation Learning for Deep Face Recognition

Recognizing faces in the wild is extremely hard, as they appear with all kinds of variations. Traditional methods either train with specifically annotated variation data from target domains or introduce unlabeled target variation data to adapt from the training data. Instead, we propose a universal representation learning framework that can handle larger variations unseen in the given training data without leveraging target-domain knowledge. We first synthesize training data with semantically meaningful variations, such as low resolution, occlusion, and head pose. However, directly training on the augmented data does not converge well, as the newly introduced samples are mostly hard examples. We therefore propose to split the feature embedding into multiple sub-embeddings and to associate a different confidence value with each sub-embedding to smooth the training procedure. The sub-embeddings are further decorrelated by regularizing different partitions of them with a variation classification loss and a variation adversarial loss. Experiments show that our method achieves top performance on general face recognition datasets such as LFW and MegaFace, while performing significantly better on extreme benchmarks such as TinyFace and IJB-S.
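
A minimal sketch, under our own assumptions, of how confidence-weighted sub-embeddings might be compared at test time: the embedding is split into M sub-embeddings, and pairwise similarity aggregates per-sub-embedding cosine similarities weighted by the joint confidences.

```python
# Confidence-weighted similarity over sub-embeddings. The split factor M,
# dimensions, and weighting rule are illustrative, not the paper's spec.
import numpy as np

def confidence_similarity(e1, c1, e2, c2, m=4):
    """e1, e2: (D,) embeddings; c1, c2: (M,) per-sub-embedding confidences."""
    s1, s2 = e1.reshape(m, -1), e2.reshape(m, -1)   # M sub-embeddings each
    cos = np.sum(s1 * s2, axis=1) / (
        np.linalg.norm(s1, axis=1) * np.linalg.norm(s2, axis=1))
    w = c1 * c2                       # joint confidence per sub-embedding
    return np.sum(w * cos) / np.sum(w)

rng = np.random.default_rng(1)
e1, e2 = rng.normal(size=64), rng.normal(size=64)
c1, c2 = rng.random(4), rng.random(4)   # e.g. predicted per-variation confidences
print(confidence_similarity(e1, c1, e2, c2))
```

Intuitively, a sub-embedding that is unreliable under a given variation (say, heavy occlusion) receives low confidence and contributes little to the match score.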

Understanding Road Layout from Videos as a Whole

In this paper, we address the problem of inferring the layout of complex road scenes from video sequences. To this end, we formulate it as a top-view road-attributes prediction problem, and our goal is to predict these attributes for each frame both accurately and consistently. In contrast to prior work, we exploit three novel aspects: leveraging camera motion in videos, including context cues, and incorporating long-term video information. Specifically, we introduce a model that enforces prediction consistency in videos. Our model consists of one LSTM and one Feature Transform Module (FTM). The former implicitly incorporates the consistency constraint with its hidden states, while the latter explicitly takes the camera motion into consideration when aggregating information along the video. Moreover, we propose to incorporate context information by introducing road participants, e.g., objects, into our model. When the entire video sequence is available, our model can also encode both local and global cues, e.g., information from both past and future frames. Experiments on two datasets show that: (1) incorporating either global or contextual cues improves prediction accuracy, and leveraging both gives the best performance; (2) introducing the LSTM and FTM modules improves prediction consistency in videos; (3) the proposed method outperforms the state of the art by a large margin.
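
A minimal sketch of the feature-transform idea (our reading, not the released FTM): warp the previous frame's top-view feature map by the known ego-motion so it aligns with the current frame before aggregation. The grid resolution and the scipy-based warp are illustrative choices.

```python
# Aligning a previous top-view (bird's-eye) feature map with the current
# frame using ego-motion (dx, dy in meters, yaw in radians). Sketch only.
import numpy as np
from scipy.ndimage import affine_transform

def warp_bev_features(feat, dx, dy, yaw, m_per_cell=0.5):
    """feat: (C, H, W) top-view features from the previous frame."""
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s], [s, c]])
    shift = np.array([dy / m_per_cell, dx / m_per_cell])
    # affine_transform uses the inverse convention: out[o] = in[rot @ o + shift].
    out = np.empty_like(feat)
    for ch in range(feat.shape[0]):   # same 2-D warp applied per channel
        out[ch] = affine_transform(feat[ch], rot, offset=shift, order=1)
    return out

prev = np.random.rand(8, 64, 64).astype(np.float32)
aligned = warp_bev_features(prev, dx=1.0, dy=0.0, yaw=0.05)
print(aligned.shape)  # (8, 64, 64), ready to fuse with current-frame features
```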

15 Keypoints Is All You Need

Pose tracking is an important problem that requires identifying unique human pose instances and matching them temporally across frames of a video. However, existing pose-tracking methods are unable to accurately model temporal relationships and require significant computation, often computing the tracks offline. We present KeyTrack, an efficient multi-person pose-tracking method that relies only on keypoint information, without using any RGB or optical flow, to locate and track human keypoints in real time. KeyTrack is a top-down approach that learns spatio-temporal pose relationships by modeling multi-person pose tracking as a novel Pose Entailment task using a Transformer-based architecture. Furthermore, KeyTrack uses a novel, parameter-free keypoint refinement technique that improves the keypoint estimates used by the Transformers. We achieve state-of-the-art results on the PoseTrack'17 and PoseTrack'18 benchmarks while using only a fraction of the computation most other methods spend on computing the tracking information.
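
To make the Pose Entailment framing concrete, here is a hypothetical tokenization sketch (the actual KeyTrack tokenization may differ): each of the 15 keypoints in two frames becomes one token carrying a quantized position, a keypoint-type id, and a frame/segment id, BERT-style, and a Transformer then classifies whether the second pose temporally follows the first.

```python
# Tokenizing a pose pair for a binary entailment classifier. Grid size and
# token layout are illustrative assumptions, not KeyTrack's actual format.
import numpy as np

N_KEYPOINTS, GRID = 15, 32   # quantize positions to a 32x32 grid

def tokenize_pair(pose_a, pose_b):
    """pose_*: (15, 2) keypoints normalized to [0, 1]. Returns (30, 3)
    tokens of (quantized cell id, keypoint type id, segment id)."""
    tokens = []
    for seg, pose in enumerate((pose_a, pose_b)):
        cells = np.minimum((pose * GRID).astype(int), GRID - 1)
        cell_ids = cells[:, 1] * GRID + cells[:, 0]
        for kp in range(N_KEYPOINTS):
            tokens.append((cell_ids[kp], kp, seg))
    return np.array(tokens)

rng = np.random.default_rng(2)
pair = tokenize_pair(rng.random((15, 2)), rng.random((15, 2)))
print(pair.shape)  # (30, 3) -- each row embedded and fed to a Transformer
```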

S3VAE: Self-Supervised Sequential VAE for Representation Disentanglement and Data Generation

We propose a sequential variational autoencoder that learns disentangled representations of sequential data (e.g., video and audio) under self-supervision. Specifically, we exploit readily accessible supervision signals from the input data itself or from off-the-shelf functional models, and we design auxiliary tasks that let our model utilize these signals. With this supervision, our model can easily disentangle the representation of an input sequence into static and dynamic factors (i.e., time-invariant and time-varying parts). Comprehensive experiments on video and audio data verify the effectiveness of our model for representation disentanglement and sequential data generation, and demonstrate that our self-supervised model performs comparably to, if not better than, a fully supervised model trained with ground-truth labels, and outperforms state-of-the-art unsupervised models by a large margin.
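
A minimal sketch (illustrative, not the paper's architecture) of the static/dynamic latent split in a sequential VAE: one time-invariant code z_s pooled over the clip and one time-varying code z_t per frame, each with its own reparameterized Gaussian posterior.

```python
# Encoder producing a per-clip static latent and per-frame dynamic latents.
# Layer sizes and the time-pooling choice are toy assumptions.
import torch
import torch.nn as nn

class SeqVAEEncoder(nn.Module):
    def __init__(self, feat_dim=128, zs_dim=16, zt_dim=16):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, 128, batch_first=True)
        self.static_head = nn.Linear(128, 2 * zs_dim)    # mean, logvar of z_s
        self.dynamic_head = nn.Linear(128, 2 * zt_dim)   # per-frame z_t

    def forward(self, frames):                 # frames: (B, T, feat_dim)
        h, _ = self.rnn(frames)                # (B, T, 128)
        zs_stats = self.static_head(h.mean(dim=1))   # pooled over time
        zt_stats = self.dynamic_head(h)              # kept per time step

        def sample(stats):                     # reparameterization trick
            mu, logvar = stats.chunk(2, dim=-1)
            return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

        return sample(zs_stats), sample(zt_stats)

enc = SeqVAEEncoder()
z_s, z_t = enc(torch.randn(4, 10, 128))
print(z_s.shape, z_t.shape)   # torch.Size([4, 16]) torch.Size([4, 10, 16])
```

The self-supervised auxiliary tasks would then attach to these two codes, e.g., encouraging z_s to be invariant to temporal shuffling while z_t tracks frame-level signals.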

At the Speed of Sound: Efficient Audio Scene Classification

Efficient audio scene classification is essential for smart sensing platforms such as robots, medical monitors, surveillance systems, and autonomous vehicles. We propose a retrieval-based scene classification architecture that combines recurrent neural networks and attention to compute embeddings for short audio segments. We train our framework with a custom audio loss function that captures both the relevance of audio segments within a scene and that of sound events within a segment. In experiments on real audio scenes, we show that we can discriminate audio scenes with high accuracy after listening for less than a second, preserving 93% of the detection accuracy obtained after hearing the entire scene.
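
A minimal sketch, under our own assumptions, of the retrieval-plus-attention idea: per-frame features are attention-pooled into a segment embedding, which is matched against stored scene embeddings by cosine similarity, so a decision is available after the first sub-second segment.

```python
# Attention pooling of frame features followed by nearest-prototype retrieval.
# The attention vector, prototypes, and labels are toy stand-ins.
import numpy as np

rng = np.random.default_rng(3)

def attention_pool(frame_feats, w):
    """frame_feats: (T, D) per-frame RNN outputs; w: (D,) attention vector."""
    scores = frame_feats @ w
    alphas = np.exp(scores - scores.max())
    alphas /= alphas.sum()
    return alphas @ frame_feats           # (D,) segment embedding

def classify_segment(seg_emb, prototypes, labels):
    sims = prototypes @ seg_emb / (
        np.linalg.norm(prototypes, axis=1) * np.linalg.norm(seg_emb))
    return labels[int(np.argmax(sims))]

frames = rng.normal(size=(20, 64))        # <1 s worth of frame features
emb = attention_pool(frames, rng.normal(size=64))
prototypes = rng.normal(size=(4, 64))     # one stored embedding per scene
print(classify_segment(emb, prototypes, ["street", "office", "park", "metro"]))
```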

RULENet: End-to-end Learning with the Dual-estimator for Remaining Useful Life Estimation

Remaining Useful Life (RUL) estimation is a key element of predictive maintenance. System-agnostic approaches, which use only sensor and operational time series, have gained popularity due to their ease of implementation. However, due to the nature of the measurements or of the degradation mechanisms, accurate RUL estimation is not always feasible. Existing methods assume that the RUL range over which estimation is feasible is given by upstream tasks or prior knowledge. In this work, we propose RULENet, a novel end-to-end learning framework for RUL estimation. RULENet simultaneously optimizes its dual estimator for RUL estimation and for estimation of the feasible range. Experimental results on the NASA C-MAPSS benchmark data show the superiority of the end-to-end framework.
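
A minimal sketch (our reading, not RULENet itself) of a dual-estimator objective: one head regresses RUL, another predicts whether the input lies in the feasible-estimation range, and the regression loss is gated by that soft mask, with a penalty that discourages masking everything out.

```python
# A gated dual-estimator loss: feasibility head modulates the RUL regression.
# The gating form and the lambda penalty are illustrative assumptions.
import numpy as np

def dual_estimator_loss(rul_pred, feas_logit, rul_true, lam=0.1):
    feas = 1.0 / (1.0 + np.exp(-feas_logit))   # soft feasibility mask in (0, 1)
    reg = feas * (rul_pred - rul_true) ** 2    # regression loss, gated
    # Penalize the trivial solution of declaring everything infeasible.
    return float(np.mean(reg) - lam * np.mean(np.log(feas + 1e-8)))

rng = np.random.default_rng(4)
print(dual_estimator_loss(rng.random(32) * 100,   # predicted RUL (cycles)
                          rng.normal(size=32),    # feasibility logits
                          rng.random(32) * 100))  # true RUL (cycles)
```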

Chemical profiling of red wines using surface-enhanced Raman spectroscopy (SERS)

In this study, we explored surface-enhanced Raman spectroscopy (SERS) for analyzing red wine through several facile sample preparations. These approaches involved the direct analysis of red wine with Raman spectroscopy and the direct incubation of red wine with silver nanoparticles (AgNPs) and with a reproducible SERS substrate, the AgNP mirror, previously developed by our group. However, as previously reported for red wine analysis, the signals obtained through these approaches were either due to interference from the fluorescence exhibited by pigments or mainly attributed to adenine, a DNA component. Therefore, an innovative approach using solvent extraction was developed to provide more characteristic information for wine chemical profiling and discrimination. Signature peaks in the wine extract spectra were found to match those of condensed tannins, resveratrol, anthocyanins, gallic acid, and catechin, indicating that SERS combined with extraction is an innovative method for profiling wine chemicals and overcoming well-known challenges in red wine analysis. Based on this approach, we successfully differentiated three red wines and demonstrated a possible relation between the overall intensity of the wine spectra and the wines' ratings. Since the wine chemical profile is closely related to the grape species, wine quality, and wine authentication, the SERS approach to obtaining rich spectral information from red wine could advance wine chemical analysis.
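
As a purely hypothetical illustration of the peak-matching step such profiling implies (the reference wavenumbers below are placeholders, not measured values): detected SERS peak positions can be scored against reference peaks of candidate compounds within a tolerance.

```python
# Scoring observed SERS peaks (cm^-1) against reference peaks of candidate
# compounds. All peak positions here are placeholders, not real Raman data.
import numpy as np

def match_compounds(observed_peaks, references, tol=8.0):
    """observed_peaks: 1-D array of wavenumbers; references: {name: peaks}."""
    hits = {}
    for name, ref in references.items():
        matched = sum(np.min(np.abs(observed_peaks - p)) <= tol for p in ref)
        hits[name] = matched / len(ref)   # fraction of reference peaks found
    return hits

observed = np.array([730.0, 1004.0, 1330.0, 1605.0])
refs = {"compound_A": [732, 1332], "compound_B": [1002, 1450]}  # placeholders
print(match_compounds(observed, refs))  # {'compound_A': 1.0, 'compound_B': 0.5}
```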