Integrated Systems

Our Integrated Systems department innovates, designs, and prototypes high-performance, intelligent distributed systems, applications, and services on complex, large-scale communication networks such as 5G and beyond. We develop next-generation wireless technologies for sensing the world, localizing critical assets, and improving the capacity, coverage, and scalability of these networks.

New application needs have always sparked human innovation. A decade ago, cloud computing enabled high-value enterprise services with global reach and scale, but with delays of seconds to minutes. Large-scale services like enterprise resource planning (ERP) were a corner case, often designed as one-off systems. Today, applications like social networks, automated trading, and video streaming have made large-scale services the norm rather than the exception. In the future, advances in 5G networks and an explosion in smart devices, microservices, databases, networking, and computing tiers will make services so complex that humans will not be able to tune or manage them.

The sheer scale, dynamic nature, and concurrency in services on 5G slices will require them to be intelligent and autonomic. They will need to continuously self-assess, learn, and automatically adjust for resource needs, data quality, and service reliability. The need for increased efficiency and reduced latency between measurement and action drives our design of real-time distributed systems for feature extraction, computation, and machine learning on multimodal streaming data. We are conducting extensive research on creating end-to-end solutions using multimodal sensing technologies in the retail, public safety, and transportation domains.

Our 5G cellular network research encompasses the development of technologies on the Radio Access Network (RAN), the mobile edge, and the 5G LAN. Within the RAN, we are developing technologies that optimize massive MIMO/MU-MIMO deployments and millimeter-wave access (e.g., transmission at 28 GHz to nomadic/mobile users). At the mobile edge (MEC), we focus on virtualization, scalability, and cloud deployment of appropriate services. Our 5G LAN research extends the benefits of 5G slicing technology to enterprise LANs to position the enterprise as the new MEC.

Read news and publications from the world-class team of researchers in our Integrated Systems department.

Posts

RoVaR: Robust Multi-agent Tracking through Dual-layer Diversity in Visual and RF Sensor Fusion

The plethora of sensors in our commodity devices provides a rich substrate for sensor-fused tracking. Yet, today's solutions are unable to deliver robust, highly accurate tracking across multiple agents in practical, everyday environments, a capability central to the future of immersive and collaborative applications. This can be attributed to the limited scope of diversity leveraged by these fusion solutions, preventing them from simultaneously catering to the multiple dimensions of accuracy, robustness (diverse environmental conditions), and scalability (multiple agents). In this work, we take an important step towards this goal by introducing the notion of dual-layer diversity to the problem of sensor fusion in multi-agent tracking. We demonstrate that the fusion of complementary tracking modalities, passive/relative (e.g., visual odometry) and active/absolute (e.g., infrastructure-assisted RF localization), offers a key first layer of diversity that brings scalability, while the second layer of diversity lies in the methodology of fusion, where we bring together the complementary strengths of algorithmic (for robustness) and data-driven (for accuracy) approaches. RoVaR is an embodiment of such a dual-layer diversity approach that intelligently attends to cross-modal information using algorithmic and data-driven techniques that jointly share the burden of accurately tracking multiple agents in the wild. Extensive evaluations reveal RoVaR's multi-dimensional benefits in terms of tracking accuracy, scalability, and robustness, enabling practical multi-agent immersive applications in everyday environments.
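
As a rough illustration of the first layer of diversity described above, the sketch below fuses a drifting relative track (visual-odometry increments) with sparse absolute fixes (RF localization) using a simple constant-gain corrector. The function names, the gain value, and the constant-gain update are illustrative assumptions, not the RoVaR algorithm.

```python
# A minimal sketch (not the RoVaR implementation): fusing relative visual-odometry
# increments with sparse absolute RF position fixes via a constant-gain corrector.
import numpy as np

def fuse_tracks(odom_deltas, rf_fixes, gain=0.3):
    """odom_deltas: (T, 2) per-step displacement from visual odometry.
    rf_fixes: dict {t: (x, y)} of absolute RF localization fixes (may be sparse).
    Returns the fused (T, 2) trajectory."""
    pos = np.zeros(2)
    track = []
    for t, delta in enumerate(odom_deltas):
        pos = pos + delta                      # propagate with the relative modality
        if t in rf_fixes:                      # correct drift with the absolute modality
            pos = (1 - gain) * pos + gain * np.asarray(rf_fixes[t])
        track.append(pos.copy())
    return np.array(track)

# Example: straight-line motion with drifting odometry and an RF fix every 10 steps.
deltas = np.tile([1.0, 0.02], (50, 1))         # small systematic drift in y
fixes = {t: (float(t + 1), 0.0) for t in range(9, 50, 10)}
print(fuse_tracks(deltas, fixes)[-1])
```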

Application-specific, Dynamic Reservation of 5G Compute and Network Resources by using Reinforcement Learning

5G services and applications explicitly reserve compute and network resources in today's complex and dynamic infrastructure of multi-tiered computing and cellular networking to ensure application-specific service quality metrics, and infrastructure providers charge the 5G services for the reserved resources. A static, one-time reservation of resources at service deployment typically results in extended periods of under-utilization of the reserved resources during the lifetime of the service. This is due to a plethora of reasons, such as changes in the content from the IoT sensors (for example, a change in the number of people in the field of view of a camera) or a change in the environmental conditions around the IoT sensors (for example, time of day, rain, or fog can affect data acquisition by sensors). Under-utilization of a specific resource like compute can also be due to temporarily inadequate availability of another resource, like network bandwidth, in a dynamic 5G infrastructure. We propose a novel Reinforcement Learning-based online method to dynamically adjust an application's compute and network resource reservations so as to minimize under-utilization of requested resources while ensuring acceptable service quality metrics. We observe that a complex, application-specific coupling exists between the compute and network usage of an application. Our proposed method learns this coupling during the operation of the service and dynamically modulates the compute and network resource requests to minimize under-utilization of reserved resources. Through experimental evaluation using a real-world video analytics application, we show that our technique captures the complex compute-network coupling relationship in an online manner, i.e., while the application is running, and dynamically adapts to save up to 65% compute and 93% network resources on average (over multiple runs), without significantly impacting application accuracy.
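
As a hedged illustration of online reservation adjustment, the sketch below uses a simple epsilon-greedy bandit that picks a compute-reservation level each interval and is rewarded for high utilization while being penalized for missing the quality target. The reservation levels, the reward shape, and the synthetic demand model are assumptions made for illustration; the paper's actual RL formulation is more involved.

```python
# A minimal sketch, not the paper's method: an epsilon-greedy controller that picks a
# compute-reservation level each interval, rewarding high utilization while penalizing
# intervals where the service-quality target is missed. Names and reward are illustrative.
import random

levels = [2, 4, 6, 8]                  # candidate vCPU reservations
q = {lvl: 0.0 for lvl in levels}       # running value estimate per level
n = {lvl: 0 for lvl in levels}

def observe(lvl):
    """Hypothetical environment: utilization and SLA outcome for a reservation level."""
    demand = random.uniform(2.5, 5.0)  # true per-interval compute demand (unknown to agent)
    utilization = min(demand, lvl) / lvl
    sla_ok = lvl >= demand
    return utilization - (0.0 if sla_ok else 1.0)

random.seed(0)
for step in range(500):
    lvl = random.choice(levels) if random.random() < 0.1 else max(q, key=q.get)
    r = observe(lvl)
    n[lvl] += 1
    q[lvl] += (r - q[lvl]) / n[lvl]    # incremental mean update

print(max(q, key=q.get), q)            # reservation level the controller settles on
```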

Cosine Similarity based Few-Shot Video Classifier with Attention-based Aggregation

Meta-learning algorithms for few-shot video recognition use complex, episodic training, but they often fail to learn effective feature representations. In contrast, we propose a new and simpler few-shot video recognition method that does not use meta-learning, yet its performance compares well with the best meta-learning proposals. Our new few-shot video classification pipeline consists of two distinct phases. In the pre-training phase, we learn a good video feature extraction network that generates a feature vector for each video. After a sparse sampling strategy selects frames from the video, we generate a video feature vector from the sampled frames. Our proposed video feature extractor network, which consists of an image feature extraction network followed by a new Transformer encoder, is trained end-to-end by including a classifier head that uses a cosine similarity layer instead of the traditional linear layer to classify a corpus of labeled video examples. Unlike prior work in meta-learning, we do not use episodic training to learn the image feature vector. Also, unlike prior work that averages frame-level feature vectors into a single video feature vector, we combine individual frame-level feature vectors by using a new Transformer encoder that explicitly captures the key temporal properties in the sequence of sampled frames. End-to-end training of the video feature extractor ensures that the proposed Transformer encoder captures important temporal properties in the video, while the cosine similarity layer explicitly reduces the intra-class variance of videos that belong to the same class. Next, in the few-shot adaptation phase, we use the learned video feature extractor to train a new video classifier by using the few available examples from novel classes. Results on the SSV2-100 and Kinetics-100 benchmarks show that our proposed few-shot video classifier outperforms meta-learning-based methods and achieves state-of-the-art accuracy. We also show that our method can easily discern between actions and their inverse (for example, picking something up vs. putting something down), while prior art, which averages image feature vectors, is unable to do so.
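
The cosine-similarity classifier head mentioned above can be sketched in a few lines of PyTorch; the feature dimension, class count, and scale (temperature) below are illustrative placeholders, not the paper's settings.

```python
# A minimal sketch of a cosine-similarity classifier head of the kind described above
# (PyTorch; layer sizes and the scale factor are illustrative, not the paper's values).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    def __init__(self, feat_dim, num_classes, scale=10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale  # temperature applied to the cosine logits

    def forward(self, features):
        f = F.normalize(features, dim=-1)          # unit-norm video features
        w = F.normalize(self.weight, dim=-1)       # unit-norm class prototypes
        return self.scale * f @ w.t()              # cosine-similarity logits

# Frame features (e.g., 8 sampled frames, 512-d each) would be aggregated by the
# Transformer encoder into one video-level feature before reaching this head.
video_feat = torch.randn(4, 512)                   # batch of 4 video-level features
logits = CosineClassifier(512, 100)(video_feat)
print(logits.shape)                                # torch.Size([4, 100])
```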

Mosaic: Leveraging Diverse Reflector Geometries for Omnidirectional Around-Corner Automotive Radar

A large number of traffic collisions occur as a result of obstructed sight lines, such that even an advanced driver assistance system would be unable to prevent the crash. Recent work has proposed the use of around-the-corner radar systems to detect vehicles, pedestrians, and other road users in these occluded regions. Through comprehensive measurement, we show that these existing techniques cannot sense occluded moving objects in many important real-world scenarios. To solve this problem of limited coverage, we leverage multiple, curved reflectors to provide comprehensive coverage over the most important locations near an intersection. In scenarios where curved reflectors are insufficient, we evaluate the relative benefits of using additional flat planar surfaces. Using these techniques, we more than double the probability of detecting a vehicle near the intersection in three real urban locations and enable NLoS radar sensing using an entirely new class of reflectors.
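
As a loose illustration of why combining several reflectors increases coverage, the sketch below models each reflector's usable footprint as a disc over a grid of the occluded region and reports the fraction covered by their union. The disc geometry is a stand-in assumption, not Mosaic's radar propagation model.

```python
# A minimal sketch (illustrative, not the Mosaic system): estimating how much of an
# occluded region is covered by the union of several reflector "fields of view",
# each modeled here as a simple disc around the reflected beam's footprint.
import numpy as np

def coverage_fraction(region_pts, reflector_footprints, radius=5.0):
    """region_pts: (N, 2) grid points in the occluded area of interest.
    reflector_footprints: list of (x, y) centers reachable via each reflector."""
    covered = np.zeros(len(region_pts), dtype=bool)
    for cx, cy in reflector_footprints:
        d = np.hypot(region_pts[:, 0] - cx, region_pts[:, 1] - cy)
        covered |= d <= radius               # union of coverage over reflectors
    return covered.mean()

xs, ys = np.meshgrid(np.linspace(0, 30, 61), np.linspace(0, 30, 61))
grid = np.column_stack([xs.ravel(), ys.ravel()])
print(coverage_fraction(grid, [(8, 8)]))                      # single reflector
print(coverage_fraction(grid, [(8, 8), (22, 10), (15, 24)]))  # multiple reflectors
```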

Chimera: Context-Aware Splittable Deep Multitasking Models for Edge Intelligence

Design of multitasking deep learning models has mostly focused on improving the accuracy of the constituent tasks, but the challenges of efficiently deploying such models in a device-edge collaborative setup (common in 5G deployments) have not been investigated. Towards this end, in this paper, we propose an approach called Chimera for the training (done offline) and deployment (done online) of multitasking deep learning models that are splittable across the device and edge. In the offline phase, we train our multitasking setup such that features from a pre-trained model for one of the tasks (called the Primary task) are extracted, and task-specific sub-models are trained to generate the other (Secondary) tasks' outputs through a knowledge-distillation-like training strategy that mimics the outputs of pre-trained models for those tasks. The task-specific sub-models are designed to be significantly more lightweight than the original pre-trained models for the Secondary tasks. Once the sub-models are trained, during deployment, for a given deployment context characterized by its configurations, we search for the optimal (in terms of both model performance and cost) deployment strategy for the generated multitasking model by finding one or more suitable layers for splitting the model, so that inference workloads are distributed between the device and the edge server and inference is performed collaboratively. Extensive experiments on benchmark computer vision tasks demonstrate that Chimera generates splittable multitasking models that are at least ~3x more parameter-efficient than existing such models, and that end-to-end device-edge collaborative inference becomes ~1.35x faster with our context-aware splitting decisions.
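
The split-point search described above can be sketched with a simple cost model: for each candidate split layer, sum the device-side latency, the time to ship the intermediate activation, and the edge-side latency, then keep the cheapest feasible split. The per-layer timings, activation sizes, and bandwidth below are made-up numbers, and the cost model is a simplification of what Chimera actually optimizes.

```python
# A minimal sketch of a device-edge split-point search (illustrative numbers; the
# actual cost model and search in Chimera may differ). For each candidate split, the
# device runs layers up to the split, ships the intermediate tensor, and the edge
# runs the rest.
def best_split(device_ms, edge_ms, activation_kb, input_kb, bandwidth_kbps):
    """device_ms[i], edge_ms[i]: per-layer latency on device/edge.
    activation_kb[i]: size of the activation produced by layer i."""
    n = len(device_ms)
    best = None
    for split in range(n + 1):                       # split == 0: everything on the edge
        device = sum(device_ms[:split])
        shipped = activation_kb[split - 1] if split > 0 else input_kb
        transfer = shipped / bandwidth_kbps * 1000   # ms
        edge = sum(edge_ms[split:])
        total = device + transfer + edge
        if best is None or total < best[1]:
            best = (split, total)
    return best

split, latency = best_split(
    device_ms=[20, 30, 40, 50], edge_ms=[2, 3, 4, 5],
    activation_kb=[4000, 800, 200, 50], input_kb=8000, bandwidth_kbps=10000)
print(f"split after layer {split}, end-to-end ~{latency:.1f} ms")
```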

Codebook Design for Hybrid Beamforming in 5G Systems

Massive MIMO and hybrid beamforming are among the key physical layer technologies for next-generation wireless systems. In the last stage of hybrid beamforming, the goal is to generate a sharp beam with maximal and preferably uniform gain. We highlight the shortcomings of uniform linear arrays (ULAs) in generating such perfect beams, i.e., beams with maximal uniform gain and sharp edges, and propose a solution based on a novel antenna configuration, namely, the twin-ULA (TULA). Building on this, we propose two antenna configurations based on the TULA: Delta and Star. We pose the problem of finding the beamforming coefficients as a continuous optimization problem for which we find an analytical closed-form solution via a quantization/aggregation method. Thanks to the derived closed-form solution, the beamforming coefficients can be obtained easily and with low complexity. Through numerical analysis, we illustrate the effectiveness of the proposed antenna structure and beamforming algorithm in reaching close-to-perfect beams.
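
For context on why ULA beams fall short of the "perfect beam" described above, the sketch below computes a standard ULA array-factor pattern for phase-matched (steered) weights; it illustrates the kind of beam-shape evaluation involved, not the TULA construction or the paper's closed-form coefficients.

```python
# A minimal sketch of a standard uniform-linear-array (ULA) beam pattern, illustrating
# the kind of beam-shape evaluation discussed above (not the TULA design itself).
import numpy as np

def ula_pattern(weights, d_over_lambda, angles_deg):
    """Array-factor magnitude of a ULA for the given complex element weights."""
    n = np.arange(len(weights))
    gains = []
    for theta in np.deg2rad(angles_deg):
        steering = np.exp(1j * 2 * np.pi * d_over_lambda * n * np.sin(theta))
        gains.append(abs(np.vdot(weights, steering)))
    return np.array(gains)

N = 16
target = np.deg2rad(20.0)
# Phase-matched ("steered") weights pointing the main lobe at the target angle.
w = np.exp(1j * 2 * np.pi * 0.5 * np.arange(N) * np.sin(target)) / np.sqrt(N)
angles = np.linspace(-90, 90, 361)
pattern = ula_pattern(w, 0.5, angles)
print(f"peak gain {pattern.max():.2f} at {angles[pattern.argmax()]:.1f} degrees")
```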

ROMA: Resource Orchestration for Microservices-based 5G Applications

With the growth of 5G, Internet of Things (IoT), edge computing, and cloud computing technologies, the infrastructure (compute and network) available to emerging applications (AR/VR, autonomous driving, Industry 4.0, etc.) has become quite complex. There are multiple tiers of computing (IoT devices, near edge, far edge, cloud, etc.) that are connected with different types of networking technologies (LAN, LTE, 5G, MAN, WAN, etc.). Deployment and management of applications in such an environment is quite challenging. In this paper, we propose ROMA, which performs resource orchestration for microservices-based 5G applications in a dynamic, heterogeneous, multi-tiered compute and network fabric. We assume that only application-level requirements are known, and that the detailed requirements of the individual microservices in the application are not specified. As part of our solution, ROMA identifies and leverages the coupling relationship between compute and network usage for the various microservices and solves an optimization problem to determine how each microservice should be deployed in the complex, multi-tiered compute and network fabric, so that the end-to-end application requirements are optimally met. We implemented two real-world 5G applications in the video surveillance and intelligent transportation system (ITS) domains. Through extensive experiments, we show that ROMA is able to save up to 90%, 55%, and 44% compute and up to 80%, 95%, and 75% network bandwidth for the surveillance (watchlist) and transportation (person and car detection) applications, respectively. This improvement is achieved while honoring the application performance requirements, and is relative to an alternative scheme that employs a static, overprovisioned resource allocation strategy and ignores the resource coupling relationships.
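
As a toy illustration of the placement problem ROMA solves, the sketch below exhaustively assigns a three-service chain to device/edge/cloud tiers, minimizing compute cost subject to a latency budget. The tier costs, hop latencies, and the one-hop-per-tier latency model are illustrative assumptions and far simpler than ROMA's optimization.

```python
# A minimal sketch (illustrative, not ROMA's optimizer): exhaustively placing a small
# microservice chain across tiers so that compute cost is minimized while the
# end-to-end latency budget is honored. Tier numbers and costs are made up.
from itertools import product

tiers = {"device": {"compute_cost": 1.0, "hop_ms": 0},
         "edge":   {"compute_cost": 0.5, "hop_ms": 10},
         "cloud":  {"compute_cost": 0.2, "hop_ms": 60}}
services = {"decode": 2.0, "detect": 8.0, "track": 3.0}   # relative compute demand

def place(latency_budget_ms):
    best = None
    for assignment in product(tiers, repeat=len(services)):
        cost = sum(demand * tiers[t]["compute_cost"]
                   for demand, t in zip(services.values(), assignment))
        latency = sum(tiers[t]["hop_ms"] for t in set(assignment))  # one hop per tier used
        if latency <= latency_budget_ms and (best is None or cost < best[1]):
            best = (assignment, cost)
    return best

print(place(latency_budget_ms=15))    # tight budget keeps services near the device/edge
print(place(latency_budget_ms=100))   # loose budget lets them move to the cloud
```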

Opportunistic Temporal Fair Mode Selection and User Scheduling in Full-Duplex Systems

In-band full-duplex (FD) communication has emerged as one of the promising techniques to improve data rates in next-generation wireless systems. Typical FD scenarios considered in the literature assume FD base stations (BSs) and half-duplex (HD) users activated either in uplink (UL) or downlink (DL), where inter-user interference (IUI) is treated as noise at the DL user. This paper considers more general FD scenarios where an arbitrary fraction of the users are capable of FD and/or can perform successive interference cancellation (SIC) to mitigate IUI. Consequently, one user can be activated in either UL or DL (HD-UL and HD-DL modes), or simultaneously in both directions, requiring self-interference mitigation (SIM) at that user (FD-SIM mode). Furthermore, two users can be scheduled, one in UL and the other in DL (both operating in HD), where the DL user can treat IUI as noise (FD-IN mode) or perform SIC to mitigate IUI (FD-SIC mode). This paper studies opportunistic mode selection and user scheduling under long-term and short-term temporal fairness in single-carrier and multi-carrier (OFDM) FD systems, with the goal of maximizing system utility (e.g., sum-rate). First, the feasible region of temporal demands is characterized for both long-term and short-term fairness. Subsequently, optimal temporal fair schedulers as well as practical low-complexity online algorithms are devised. Simulation results demonstrate that using SIC to mitigate IUI, as well as having FD capability at users, can improve FD throughput gains significantly, especially when the user distribution is concentrated around a few hotspots.
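
The mode comparison above comes down to simple per-slot rate arithmetic; the sketch below evaluates HD, FD-IN, and FD-SIC sum-rates for one UL/DL user pair with illustrative SNRs, omitting the SIC decodability condition and the temporal-fairness bookkeeping that the actual schedulers handle.

```python
# A minimal sketch of the per-slot rate arithmetic behind the mode comparison above,
# with illustrative SNRs; it ignores the SIC decodability condition and fairness
# constraints that the paper's schedulers must enforce.
import math

def rate(sinr):
    """Shannon rate in bits/s/Hz."""
    return math.log2(1 + sinr)

snr_ul, snr_dl = 10.0, 20.0   # linear SNRs of the scheduled UL and DL links
iui = 5.0                     # inter-user interference seen by the DL user (linear)

hd     = max(rate(snr_ul), rate(snr_dl))            # HD: one direction per slot
fd_in  = rate(snr_ul) + rate(snr_dl / (1 + iui))    # FD-IN: IUI treated as noise
fd_sic = rate(snr_ul) + rate(snr_dl)                # FD-SIC: IUI cancelled before decoding

for name, r in [("HD", hd), ("FD-IN", fd_in), ("FD-SIC", fd_sic)]:
    print(f"{name:7s} sum-rate ~ {r:.2f} bits/s/Hz")
```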

DataXe: A System for Application Self-optimization in Serverless Edge Computing Environments

A key barrier to building performant, remotely managed, and self-optimizing multi-sensor, distributed stream processing edge applications is high programming complexity. We recently proposed DataX [1], a novel platform that improves programmer productivity by enabling easy exchange, transformation, and fusion of data streams on virtualized edge computing infrastructure. This paper extends DataX to include (a) serverless computing that automatically scales stateful and stateless analytics units (AUs) on virtualized edge environments, (b) novel communication mechanisms that efficiently exchange data among analytics units, and (c) new techniques to promote automatic reuse and sharing of analytics processing across multiple applications in a lights-out, serverless computing environment. Synthesizing these capabilities into a single platform yields a system that is substantially more capable than any available stream processing system for the edge. We refer to this enhanced and efficient version of DataX as DataXe. To the best of our knowledge, this is the first serverless system for stream processing. For a real-world video analytics application, we observed that the DataXe implementation of the analytics application is about 3x faster than a standalone implementation with custom, handcrafted communication, multiprocessing, and allocation of edge resources.
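
Purely as an illustration of composing analytics units into a stream pipeline, the sketch below chains a stateless decode AU, a stateless detect AU fanned out across threads, and a stateful counting AU over a synthetic stream. This is not the DataX/DataXe API; all names and the threading choice are assumptions.

```python
# A purely illustrative sketch (not the DataX/DataXe API) of chaining analytics units
# (AUs) as composable stream transforms, with a trivially "scaled" stateless AU.
from concurrent.futures import ThreadPoolExecutor

def decode_au(frames):                      # stateless AU
    for f in frames:
        yield {"frame": f}

def detect_au(items, workers=4):            # stateless AU, fanned out across threads
    with ThreadPoolExecutor(max_workers=workers) as pool:
        yield from pool.map(lambda it: {**it, "objects": it["frame"] % 3}, items)

def count_au(items):                        # stateful AU: keeps a running total
    total = 0
    for it in items:
        total += it["objects"]
        yield {**it, "running_total": total}

# Wire the AUs into a pipeline over a synthetic "stream" of 10 frames.
stream = count_au(detect_au(decode_au(range(10))))
print(list(stream)[-1])
```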

Multi-user Beam Alignment in Presence of Multi-path

To overcome the high path loss and intense shadowing in millimeter-wave (mmWave) communications, effective beamforming schemes are required that use narrow beams with high beamforming gains. The mmWave channel consists of a few spatial clusters, each associated with an angle of departure (AoD). The narrow beams must be aligned with the channel AoDs to increase the beamforming gain. This is achieved through a procedure called beam alignment (BA). Most BA schemes in the literature consider channels with a single dominant path, while in practice the channel has a few resolvable paths with different AoDs; hence, such BA schemes may not work correctly in the presence of multi-path, or at the very least do not exploit the multi-path to achieve diversity or increase robustness. In this paper, we propose an efficient BA scheme for the multi-path setting. The proposed scheme transmits probing packets using a set of scanning beams and receives the feedback for all the scanning beams from each user at the end of the probing phase. We formulate the BA problem as minimizing the expected value of the average transmission beamwidth under different policies, where a policy is a function from the set of received feedback sequences to the set of transmission beams (TBs). In order to maximize the number of possible feedback sequences, we prove that the set of scanning beams (SBs) has a special form, namely, the Tulip Design. Consequently, we rewrite the minimization problem with a set of linear constraints and a reduced number of variables, and solve it using an efficient greedy algorithm.
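
As a simplified illustration of mapping scanning-beam feedback to a transmission beam, the sketch below treats scanning beams as angular intervals and picks the narrowest contiguous interval covering every beam a user reported hearing. This is an assumption-laden stand-in, not the Tulip Design construction or the paper's greedy algorithm.

```python
# A minimal, illustrative sketch (not the Tulip-design construction): scanning beams are
# angular intervals, each user reports which scanning beams it heard, and the transmit
# beam is chosen as the narrowest contiguous interval covering the reported AoDs.
def choose_tx_beam(scanning_beams, feedback):
    """scanning_beams: list of (start_deg, end_deg) intervals.
    feedback: list of booleans, one per scanning beam."""
    heard = [b for b, ok in zip(scanning_beams, feedback) if ok]
    if not heard:
        return (0.0, 360.0)                      # no information: fall back to a wide beam
    start = min(b[0] for b in heard)
    end = max(b[1] for b in heard)
    return (start, end)

beams = [(0, 30), (30, 60), (60, 90), (90, 120)]
print(choose_tx_beam(beams, [True, False, True, False]))   # multi-path: AoDs in two beams
```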