Fast Few-shot Debugging for NLU Test Suites

We study few-shot debugging of transformer-based natural language understanding models, using recently popularized test suites not just to diagnose but to correct a problem. Given a few debugging examples of a certain phenomenon, and a held-out test set of the same phenomenon, we aim to maximize accuracy on the phenomenon at minimal cost to accuracy on the original test set. We examine several methods that are faster than full-epoch retraining. We introduce a new fast method, which samples a few in-danger examples from the original training set. Compared to fast methods using parameter-distance constraints or Kullback-Leibler divergence, we achieve superior original accuracy for comparable debugging accuracy.
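
A minimal sketch of the idea (illustrative names and hyperparameters, not the paper's exact procedure): fine-tune briefly on the few debugging examples mixed with a handful of "in-danger" originals, i.e., training examples the current model classifies with the smallest margin.

```python
# Illustrative sketch: few-shot debugging with sampled "in-danger" examples.
# All names and hyperparameters here are assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def sample_in_danger(model, X_train, y_train, k=16):
    """Pick the k training examples with the smallest correct-class margin."""
    model.eval()
    with torch.no_grad():
        logits = model(X_train)
        correct = logits.gather(1, y_train.unsqueeze(1)).squeeze(1)
        runner_up = logits.scatter(1, y_train.unsqueeze(1), float("-inf")).max(1).values
        margin = correct - runner_up          # small margin = "in danger"
    idx = margin.argsort()[:k]
    return X_train[idx], y_train[idx]

def few_shot_debug(model, X_dbg, y_dbg, X_train, y_train, steps=20, lr=1e-4):
    """Brief fine-tuning on debug examples mixed with in-danger originals."""
    X_mem, y_mem = sample_in_danger(model, X_train, y_train)
    X, y = torch.cat([X_dbg, X_mem]), torch.cat([y_dbg, y_mem])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):                    # far cheaper than a full epoch
        opt.zero_grad()
        F.cross_entropy(model(X), y).backward()
        opt.step()
    return model
```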

Codebook Design for Hybrid Beamforming in 5G Systems

Massive MIMO and hybrid beamforming are among the key physical-layer technologies for next-generation wireless systems. In the last stage of hybrid beamforming, the goal is to generate a sharp beam with maximal and preferably uniform gain. We highlight the shortcomings of uniform linear arrays (ULAs) in generating such perfect beams, i.e., beams with maximal uniform gain and sharp edges, and propose a solution based on a novel antenna configuration, namely, the twin-ULA (TULA). Building on TULA, we propose two antenna configurations: Delta and Star. We pose the problem of finding the beamforming coefficients as a continuous optimization problem, for which we find an analytical closed-form solution by a quantization/aggregation method. Thanks to the derived closed-form solution, the beamforming coefficients can be obtained easily and with low complexity. Through numerical analysis, we illustrate the effectiveness of the proposed antenna structure and beamforming algorithm in reaching close-to-perfect beams.
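
As a reference point, one standard way to write the design target (our notation, not necessarily the paper's) is the ULA gain pattern together with the ideal "perfect beam" mask it should approximate:

```latex
% Gain pattern of an N-element ULA with element spacing d, wavelength \lambda,
% and beamforming coefficients w_n, evaluated at angle \theta:
\[
  G(\theta) \;=\; \Bigl|\sum_{n=0}^{N-1} w_n\, e^{\,j \frac{2\pi}{\lambda} n d \cos\theta}\Bigr|^2 .
\]
% The "perfect" beam over a desired angular interval [\theta_1, \theta_2]:
% maximal, uniform gain inside and zero gain (sharp edges) outside,
\[
  G_{\mathrm{ideal}}(\theta) \;=\;
  \begin{cases}
    G_{\max}, & \theta \in [\theta_1, \theta_2],\\
    0,        & \text{otherwise.}
  \end{cases}
\]
```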

Time Series Prediction and Classification using Silicon Photonic Neuron with Self-Connection

We experimentally demonstrate the real-time operation of a photonic neuron with a self-connection, a prerequisite for integrated recurrent neural networks (RNNs). After studying two applications, we propose a photonics-assisted platform for time series prediction and classification.
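
As a point of reference, the discrete-time behavior of such a self-connected neuron can be mimicked by a toy numerical model (a software illustration only; the demonstrated device is photonic and analog):

```python
# Toy discrete-time model of a neuron with a self-connection, the minimal
# building block of a recurrent network.
import numpy as np

def run_neuron(x, w_in=1.0, w_self=0.6, bias=0.0):
    y = np.zeros(len(x))
    for t in range(1, len(x)):
        # nonlinear response to the current input plus delayed self-feedback
        y[t] = np.tanh(w_in * x[t] + w_self * y[t - 1] + bias)
    return y

y = run_neuron(np.sin(np.linspace(0, 8 * np.pi, 400)))  # e.g., a time series
```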

Superclass-Conditional Gaussian Mixture Model for Coarse-To-Fine Few-Shot Learning

Learning fine-grained embeddings is essential for extending the generalizability of models pre-trained on “coarse” labels (e.g., animals). It is crucial in fields where fine-grained labeling (e.g., breeds of animals) is expensive but fine-grained prediction is desirable, such as medicine. This dilemma necessitates adapting a “coarsely” pre-trained model to new tasks with a few “finer-grained” training labels. However, coarsely supervised pre-training tends to suppress intra-class variation, which is vital for cross-granularity adaptation. In this paper, we develop a training framework built on a novel superclass-conditional Gaussian mixture model (SCGM). SCGM imitates the generative process of samples from hierarchies of classes through latent-variable modeling of the fine-grained subclasses. The framework is agnostic to the encoder and adds only a few distribution-related parameters; it is thus efficient and flexible across different domains. The model parameters are learned end-to-end by maximum-likelihood estimation via a principled Expectation-Maximization algorithm. Extensive experiments on benchmark datasets and a real-life medical dataset demonstrate the effectiveness of our method.
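
A minimal numerical sketch of one EM iteration under our simplified reading of the model (isotropic Gaussians; variable names are ours, not SCGM's): each coarse class c owns K latent subclasses with means mu[c, k] and mixing weights pi[c, k].

```python
# Simplified EM step for a superclass-conditional Gaussian mixture over
# embeddings Z (an illustration of the idea, not the paper's implementation).
import numpy as np

def em_step(Z, y, mu, pi, var=1.0):
    """Z: (N, D) embeddings; y: (N,) coarse labels; mu: (C, K, D); pi: (C, K)."""
    C, K, D = mu.shape
    for c in range(C):
        Zc = Z[y == c]
        if len(Zc) == 0:
            continue
        # E-step: responsibility of each latent subclass k for each sample
        logp = -((Zc[:, None, :] - mu[c][None]) ** 2).sum(-1) / (2 * var)
        logp += np.log(pi[c] + 1e-12)
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate subclass means and mixing weights
        mu[c] = (r.T @ Zc) / (r.sum(axis=0)[:, None] + 1e-12)
        pi[c] = r.sum(axis=0) / len(Zc)
    return mu, pi
```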

ROMA: Resource Orchestration for Microservices-based 5G Applications

With the growth of 5G, Internet of Things (IoT), edge computing and cloud computing technologies, the infrastructure (compute and network) available to emerging applications (AR/VR, autonomous driving, industry 4.0, etc.) has become quite complex. There are multiple tiers of computing (IoT devices, near edge, far edge, cloud, etc.) that are connected with different types of networking technologies (LAN, LTE, 5G, MAN, WAN, etc.). Deployment and management of applications in such an environment are quite challenging. In this paper, we propose ROMA, which performs resource orchestration for microservices-based 5G applications in a dynamic, heterogeneous, multi-tiered compute and network fabric. We assume that only application-level requirements are known, and the detailed requirements of the individual microservices in the application are not specified. As part of our solution, ROMA identifies and leverages the coupling relationship between compute and network usage for various microservices and solves an optimization problem in order to appropriately identify how each microservice should be deployed in the complex, multi-tiered compute and network fabric, so that the end-to-end application requirements are optimally met. We implemented two real-world 5G applications in the video surveillance and intelligent transportation system (ITS) domains. Through extensive experiments, we show that ROMA is able to save up to 90%, 55% and 44% compute and up to 80%, 95% and 75% network bandwidth for the surveillance (watchlist) and transportation (person and car detection) applications, respectively. This improvement is achieved while honoring the application performance requirements, relative to an alternative scheme that employs a static, overprovisioned resource allocation strategy and ignores the resource coupling relationships.
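
To make the flavor of the placement problem concrete, here is a deliberately tiny toy (not ROMA's actual solver, with made-up cost and latency numbers): exhaustively assign microservices to compute tiers, minimizing compute cost subject to an end-to-end latency budget.

```python
# Toy microservice placement across compute tiers (illustration only; ROMA
# additionally models compute/network coupling and dynamic conditions).
import itertools

TIERS = {"device": 1, "near_edge": 2, "far_edge": 4, "cloud": 8}      # hypothetical cost units
LINK_MS = {"device": 0, "near_edge": 5, "far_edge": 15, "cloud": 40}  # hypothetical RTTs (ms)

def place(microservices, latency_budget_ms):
    """microservices: list of relative CPU demands, one per microservice."""
    best = None
    for assignment in itertools.product(TIERS, repeat=len(microservices)):
        latency = sum(LINK_MS[t] for t in assignment)
        cost = sum(TIERS[t] * cpu for t, cpu in zip(assignment, microservices))
        if latency <= latency_budget_ms and (best is None or cost < best[0]):
            best = (cost, assignment)
    return best

# e.g., three microservices with relative CPU demands 2, 1 and 4:
print(place([2, 1, 4], latency_budget_ms=30))
```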

Learning Transferable Reward for Query Object Localization with Policy Adaptation

We propose a reinforcement learning-based approach to query object localization, in which an agent is trained to localize objects of interest specified by a small exemplary set. We learn a transferable reward signal formulated from the exemplary set via ordinal metric learning. Our proposed method enables test-time policy adaptation to new environments where reward signals are not readily available, and outperforms fine-tuning approaches that are limited to annotated images. In addition, the transferable reward allows repurposing the trained agent from one specific class to another class. Experiments on corrupted MNIST, CU-Birds, and COCO datasets demonstrate the effectiveness of our approach.
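
A minimal sketch of the reward idea under our simplification (hypothetical function and variable names): the agent is rewarded for moving its current crop closer, in the learned metric space, to the exemplar prototype, which requires no ground-truth boxes at test time.

```python
# Illustrative ordinal reward from an exemplar set (our simplification, not
# the paper's exact formulation). Features are assumed to come from a learned
# metric encoder applied to image crops.
import numpy as np

def transferable_reward(crop_feat, prev_feat, exemplar_feats):
    """+1 if the agent's current crop is closer to the exemplar prototype
    than its previous crop, else -1 (an ordinal, annotation-free signal)."""
    proto = exemplar_feats.mean(axis=0)        # prototype of the exemplary set
    d_now = np.linalg.norm(crop_feat - proto)
    d_prev = np.linalg.norm(prev_feat - proto)
    return 1.0 if d_now < d_prev else -1.0
```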

Provable Adaptation Across Multiway Domains via Representation Learning

This paper studies zero-shot domain adaptation where each domain is indexed by a cell of a multi-dimensional array, and we only have data from a small subset of domains. Our goal is to produce predictors that perform well on unseen domains. We propose a model which consists of a domain-invariant latent representation layer and a domain-specific linear prediction layer with a low-rank tensor structure. Theoretically, we present explicit sample complexity bounds to characterize the prediction error on unseen domains in terms of the number of domains with training data and the amount of data per domain. To our knowledge, this is the first finite-sample guarantee for zero-shot domain adaptation. In addition, we provide experiments on two-way MNIST and four-way fiber sensing datasets to demonstrate the effectiveness of our proposed model.
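
One way to formalize such a model (our notation, shown for the two-way case): a shared representation followed by domain-specific linear heads collected in a tensor with low CP rank, so heads for unseen domain combinations are reconstructed from the learned factors.

```latex
% Our notation for the two-way case: a shared (domain-invariant) encoder
% \phi(\cdot) and a tensor W \in \mathbb{R}^{d \times n_1 \times n_2} of
% domain-specific linear heads with CP rank r:
\[
  \hat{y}_{(i,j)}(x) \;=\; \bigl\langle W_{:,i,j},\, \phi(x) \bigr\rangle,
  \qquad
  W \;=\; \sum_{s=1}^{r} u_s \otimes a_s \otimes b_s ,
\]
% so the head for an unseen domain (i,j) is reconstructed from learned factors:
% W_{:,i,j} = \sum_{s=1}^{r} (a_s)_i\,(b_s)_j\, u_s .
```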

Opportunistic Temporal Fair Mode Selection and User Scheduling in Full-Duplex Systems

In-band full-duplex (FD) communication has emerged as one of the promising techniques to improve data rates in next-generation wireless systems. Typical FD scenarios considered in the literature assume FD base stations (BSs) and half-duplex (HD) users activated either in uplink (UL) or downlink (DL), where inter-user interference (IUI) is treated as noise at the DL user. This paper considers more general FD scenarios where an arbitrary fraction of the users are FD-capable and/or can perform successive interference cancellation (SIC) to mitigate IUI. Consequently, one user can be activated in either UL or DL (HD-UL and HD-DL modes), or simultaneously in both directions, requiring self-interference mitigation (SIM) at that user (FD-SIM mode). Furthermore, two users can be scheduled, one in UL and the other in DL (both operating in HD), where the DL user can treat IUI as noise (FD-IN mode) or perform SIC to mitigate IUI (FD-SIC mode). This paper studies opportunistic mode selection and user scheduling under long-term and short-term temporal fairness in single-carrier and multi-carrier (OFDM) FD systems, with the goal of maximizing system utility (e.g., sum-rate). First, the feasible region of temporal demands is characterized for both long-term and short-term fairness. Subsequently, optimal temporal fair schedulers as well as practical low-complexity online algorithms are devised. Simulation results demonstrate that using SIC to mitigate IUI, as well as having FD capability at users, can improve FD throughput gains significantly, especially when the user distribution is concentrated around a few hotspots.
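
As a flavor of how such an online scheduler can work, here is a generic virtual-credit heuristic (an illustration in the spirit of dual-based temporal-fair schedulers, not necessarily the paper's algorithm): at each slot, pick the user/mode pair whose instantaneous rate plus a fairness credit is largest; credits grow for users whose temporal share lags their demand.

```python
# Generic virtual-credit scheduler (hypothetical names and numbers).
def schedule_slot(rates, credit):
    """rates: {(user, mode): instantaneous rate}; credit: {user: fairness credit}.
    Pick the (user, mode) pair maximizing rate plus the user's fairness credit."""
    return max(rates, key=lambda um: rates[um] + credit[um[0]])

def update_credits(credit, chosen_user, demand, step=0.1):
    """Users lagging their temporal-share demand accumulate credit; the
    scheduled user pays some of it back."""
    for u in credit:
        credit[u] += step * (demand[u] - (1.0 if u == chosen_user else 0.0))
    return credit

# One slot with two users and three candidate modes:
rates = {("u1", "HD-DL"): 3.0, ("u2", "FD-SIM"): 4.5, ("u1", "FD-SIC"): 3.8}
credit = {"u1": 0.0, "u2": 0.0}
chosen = schedule_slot(rates, credit)                     # -> ("u2", "FD-SIM")
credit = update_credits(credit, chosen[0], demand={"u1": 0.5, "u2": 0.5})
```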

Codebook Design for Composite Beamforming in Next-generation mmWave Systems

In the pursuit of unused spectrum at higher frequencies, millimeter wave (mmWave) bands play a pivotal role. However, the high path loss and poor scattering associated with mmWave communications highlight the necessity of effective beamforming techniques. To efficiently search for the beam to serve a user, and to jointly serve multiple users, it is often necessary to use a composite beam consisting of multiple disjoint lobes. A composite beam covers multiple desired angular coverage intervals (ACIs) and ideally has maximal and uniform gain (smoothness) within each desired ACI, negligible gain (leakage) outside the desired ACIs, and sharp edges. We propose an algorithm for designing such an ideal composite codebook by providing an analytical closed-form solution with low computational complexity. There is a fundamental trade-off between the gain, leakage, and smoothness of the beams; our design achieves different points in this trade-off by varying the design parameters. We highlight the shortcomings of uniform linear arrays (ULAs) in building arbitrary composite beams and consequently use the recently introduced twin-ULA (TULA) antenna structure to effectively resolve these inefficiencies. Numerical results validate the theoretical findings.
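
A toy numerical illustration of the design target (a plain least-squares fit on a quantized angle grid, not the paper's closed-form quantization/aggregation solution): define an ideal two-lobe composite mask and fit ULA coefficients to it.

```python
# Toy least-squares composite-beam fit (illustration only; grid sizes and ACI
# boundaries below are made up).
import numpy as np

N, Q = 32, 512                         # antennas, quantized angle grid points
theta = np.linspace(0, np.pi, Q)
# Steering matrix of a half-wavelength-spaced ULA over the angle grid:
A = np.exp(1j * np.pi * np.outer(np.cos(theta), np.arange(N)))
# Ideal composite mask: unit gain on two disjoint ACIs, zero (no leakage) elsewhere:
mask = ((theta > 0.6) & (theta < 0.9)) | ((theta > 1.8) & (theta < 2.2))
g = mask.astype(float)
w, *_ = np.linalg.lstsq(A, g, rcond=None)   # fitted beamforming coefficients
gain = np.abs(A @ w) ** 2                   # achieved pattern vs. the ideal mask
```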

DataXe: A System for Application Self-optimization in Serverless Edge Computing Environments

A key barrier to building performant, remotely managed, and self-optimizing multi-sensor, distributed stream processing edge applications is high programming complexity. We recently proposed DataX [1], a novel platform that improves programmer productivity by enabling easy exchange, transformation, and fusion of data streams on virtualized edge computing infrastructure. This paper extends DataX with (a) serverless computing that automatically scales stateful and stateless analytics units (AUs) on virtualized edge environments, (b) novel communication mechanisms that efficiently move data among analytics units, and (c) new techniques that promote automatic reuse and sharing of analytics processing across multiple applications in a lights-out, serverless computing environment. Synthesizing these capabilities into a single platform makes it substantially more capable than any available stream processing system for the edge. We refer to this enhanced and efficient version of DataX as DataXe. To the best of our knowledge, this is the first serverless system for stream processing. For a real-world video analytics application, we observe that the DataXe implementation runs about 3X faster than a standalone implementation with custom, handcrafted communication, multiprocessing, and allocation of edge resources.
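
Purely illustrative (not the DataX/DataXe API): the stateless/stateful distinction that a serverless edge runtime must honor when auto-scaling analytics units, since stateless AUs can be replicated freely while stateful AUs carry state across invocations.

```python
# Hypothetical sketch of the two AU flavors a serverless runtime must scale
# differently (all names and logic here are stand-ins, not DataXe code).
class StatelessAU:
    """Safe to scale out by replication: output depends only on the input."""
    def process(self, frame):
        return [px for px in frame if px > 0.5]      # stand-in detection logic

class StatefulAU:
    """Needs state migration or sticky routing when scaled."""
    def __init__(self):
        self.count = 0                               # state carried across calls
    def process(self, detections):
        self.count += len(detections)                # e.g., a running tally
        return self.count

detector, tracker = StatelessAU(), StatefulAU()
print(tracker.process(detector.process([0.2, 0.7, 0.9])))  # -> 2
```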