
Srimat T. Chakradhar

Department Head

Integrated Systems

Posts

SmartSlice: Dynamic, Self-optimization of Application’s QoS requests to 5G networks

Applications can tailor a network slice by specifying a variety of QoS attributes related to application-specific performance, function or operation. However, some QoS attributes, like the guaranteed bandwidth required by the application, vary over time. For example, the network bandwidth needs of video streams from surveillance cameras can vary significantly depending on environmental conditions and the content of the video streams. In this paper, we propose a novel, dynamic QoS attribute prediction technique that assists any application in making optimal resource reservation requests at all times. Standard forecasting with traditional cost functions like MAE, MSE, RMSE, or MDA does not work well because these functions do not take into account the direction (whether the forecast reserves more or fewer resources than needed), magnitude (by how much, and in which direction, the forecast deviates), or frequency (how many times, and in which direction, the forecast deviates from actual needs). Direction, magnitude and frequency have a direct impact on the accuracy of the application's insights and on operational costs. We propose a new, parameterized cost function that takes all three into account and guides the design of a new prediction technique. To the best of our knowledge, this is the first work that considers time-varying application requirements and dynamically adjusts slice QoS requests to 5G networks in order to balance the application's accuracy and operational costs. In a real-world deployment of a surveillance video analytics application over 17 cameras, we show that our technique outperforms traditional forecasting methods and saves 34% of network bandwidth (over a ~24-hour period) when compared to a static, one-time reservation.
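To make the direction-, magnitude- and frequency-aware cost concrete, here is a minimal Python sketch of such a parameterized cost function. The parameter names (alpha, beta, gamma) and the exact functional form are assumptions for illustration, not the paper's published formula:

```python
import numpy as np

def slice_cost(forecast, actual, alpha=2.0, beta=1.0, gamma=0.1):
    """Illustrative parameterized forecast cost.

    alpha weighs under-provisioning (forecast < actual, which hurts
    application accuracy), beta weighs over-provisioning (forecast >
    actual, which wastes money), and gamma penalizes how often the
    forecast deviates in either direction.
    """
    forecast = np.asarray(forecast, dtype=float)
    actual = np.asarray(actual, dtype=float)
    under = np.clip(actual - forecast, 0, None)   # magnitude of shortfall
    over = np.clip(forecast - actual, 0, None)    # magnitude of excess
    # frequency term: count time steps that deviate in each direction
    freq = gamma * (np.count_nonzero(under) + np.count_nonzero(over))
    return alpha * under.sum() + beta * over.sum() + freq
```

Setting alpha > beta expresses the asymmetry the abstract describes: under-reserving bandwidth degrades insight accuracy and should cost more than an equal amount of over-reserving.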

Magic-Pipe: Self-optimizing video analytics pipelines

Microservices-based video analytics pipelines routinely use multiple deep convolutional neural networks. We observe that the best allocation of resources to the deep learning engines (or microservices) in a pipeline, and the best configuration of parameters for each engine, vary over time, often at a timescale of minutes or even seconds, based on the dynamic content in the video. We leverage these observations to develop Magic-Pipe, a self-optimizing video analytics pipeline that uses AI techniques to periodically optimize itself. First, we propose a new, adaptive resource allocation technique to dynamically balance the resource usage of different microservices based on dynamic video content. Then, we propose an adaptive microservice parameter tuning technique to balance the accuracy and performance of a microservice, also based on video content. Finally, we propose two approaches to reduce unnecessary computations caused by the unavoidable mismatch of independently designed, re-usable deep-learning engines: a deep learning approach that improves feature extractor performance by filtering out inputs for which no features can be extracted, and a low-overhead graph-theoretic approach that minimizes redundant computations across frames. Our evaluation of Magic-Pipe shows that pipelines augmented with self-optimizing capability exhibit application response times an order of magnitude better than the original pipelines, while using the same hardware resources and achieving similarly high accuracy.
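As a rough illustration of periodic, content-driven resource rebalancing, the sketch below shifts a worker from the most idle microservice to the most backlogged one. The queue-length heuristic and all names are placeholders, not Magic-Pipe's actual allocation policy:

```python
def rebalance(allocations, queue_lengths, step=1):
    """One hypothetical rebalancing pass over a pipeline.

    allocations:   {microservice: number of workers}
    queue_lengths: {microservice: pending items, a proxy for load}
    Moves `step` worker(s) from the most idle stage to the most
    backlogged stage, keeping total resource usage constant.
    """
    busiest = max(queue_lengths, key=queue_lengths.get)
    idlest = min(queue_lengths, key=queue_lengths.get)
    if busiest != idlest and allocations[idlest] > step:
        allocations[idlest] -= step
        allocations[busiest] += step
    return allocations

# Example: detection is the bottleneck for the current video content.
alloc = rebalance({"decode": 2, "detect": 4, "reid": 2},
                  {"decode": 0, "detect": 37, "reid": 3})
# -> {'decode': 1, 'detect': 5, 'reid': 2}
```

A real self-optimizer would run a pass like this on a timer (minutes or seconds, per the abstract) so the allocation tracks the video content.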

CamTuner: Reinforcement Learning based System for Camera Parameter Tuning to enhance Analytics

Video analytics systems critically rely on video cameras, which capture high-quality video frames, to achieve high analytics accuracy. Although modern video cameras often expose tens of configurable parameter settings that can be set by end users, surveillance camera deployments today typically use a fixed set of parameter settings because end users lack the skill or understanding to reconfigure them. In this paper, we first show that in a typical surveillance camera deployment, changes in environmental conditions can significantly affect the accuracy of analytics units such as person detection, face detection and face recognition, and that this adverse impact can be mitigated by dynamically adjusting camera settings. We then propose CamTuner, a framework that can be easily applied to an existing video analytics pipeline (VAP) to enable automatic and dynamic adaptation of complex camera settings to changing environmental conditions, and to autonomously optimize the accuracy of the analytics units (AUs) in the VAP. CamTuner is based on SARSA reinforcement learning (RL) and incorporates two novel components: a lightweight analytics quality estimator and a virtual camera. CamTuner is implemented in a system with AXIS surveillance cameras and several VAPs (with various AUs) that processed day-long customer videos captured at airport entrances. Our evaluations show that CamTuner adapts quickly to changing environments. We compared CamTuner with two alternatives: static camera settings, and a strawman approach in which camera settings were manually changed every hour (based on human perception of quality). For the face detection and person detection AUs, CamTuner achieves up to 13.8% and 9.2% higher accuracy, respectively, compared to the better of the two alternatives (an average improvement of 8% for both AUs).
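Since the abstract names SARSA, here is a minimal tabular SARSA loop specialized to camera tuning. The state/action encodings and the reward signal (an analytics-quality estimate) are placeholders; CamTuner's real state space, action space and quality estimator are richer than this sketch:

```python
import random
from collections import defaultdict

class SarsaTuner:
    """Tabular SARSA: Q(s,a) += alpha * (r + gamma*Q(s',a') - Q(s,a))."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)   # Q-values, default 0.0
        self.actions = actions        # e.g. ("brightness+", "brightness-", ...)
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        # epsilon-greedy action selection over camera-setting adjustments
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, reward, s2, a2):
        # on-policy update using the action actually taken next
        target = reward + self.gamma * self.q[(s2, a2)]
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])
```

In a CamTuner-like deployment, the reward would come from the lightweight analytics quality estimator rather than from ground-truth labels, which is what makes online adaptation practical.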

UAC: An Uncertainty-Aware Face Clustering Algorithm

We investigate ways to leverage uncertainty in face images to improve the quality of face clusters. We observe that popular clustering algorithms do not produce better-quality clusters when clustering probabilistic face representations that implicitly model uncertainty: these algorithms predict up to 9.6X more clusters than the ground truth for the IJB-A benchmark. We empirically analyze the causes of this unexpected behavior and identify excessive false-positives and false-negatives (when comparing face-pairs) as the main reasons for poor-quality clustering. Based on this insight, we propose an uncertainty-aware clustering algorithm, UAC, which explicitly leverages uncertainty information during clustering to decide when a pair of faces is similar or when a predicted cluster should be discarded. UAC (a) considers the uncertainty of faces in face-pairs, (b) bins face-pairs into different categories based on an uncertainty threshold, (c) intelligently varies the similarity threshold during clustering to reduce false-negatives and false-positives, and (d) discards predicted clusters that exhibit a high measure of uncertainty. Extensive experiments on several popular benchmarks and comparisons with state-of-the-art clustering methods show that UAC produces significantly better clusters by leveraging uncertainty in face images: the predicted number of clusters exceeds the ground truth by at most 0.18X for the IJB-A benchmark.
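The two central decisions the abstract lists, an uncertainty-dependent similarity test for face-pairs and a discard rule for uncertain clusters, can be sketched as follows. The bin edges, thresholds and the max/mean aggregations are invented for illustration and are not UAC's published values:

```python
def pair_is_similar(sim, unc_a, unc_b,
                    bins=((0.2, 0.80), (0.5, 0.85), (1.0, 0.90))):
    """Uncertainty-aware pair decision: the more uncertain the pair,
    the stricter the similarity threshold, reducing false-positives
    among low-quality faces. `bins` maps an uncertainty upper edge to
    the similarity threshold used for pairs in that bin."""
    pair_unc = max(unc_a, unc_b)          # pair is as uncertain as its worst face
    for upper_edge, threshold in bins:
        if pair_unc <= upper_edge:
            return sim >= threshold
    return False                          # too uncertain to trust any match

def keep_cluster(cluster_uncertainties, max_mean_unc=0.6):
    """Discard a predicted cluster whose average uncertainty is too high."""
    return sum(cluster_uncertainties) / len(cluster_uncertainties) <= max_mean_unc
```

The key idea is that a high raw similarity score between two blurry or occluded faces is weaker evidence than the same score between two sharp faces, so the decision boundary should move with the uncertainty.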

AppSlice: A system for application-centric design of 5G and edge computing applications

Applications that use edge computing and 5G to improve response times consume both compute and network resources. However, 5G networks manage only network resources, without considering the application's compute requirements, and container orchestration frameworks manage only compute resources, without considering the application's network requirements. We observe that there is a complex coupling between an application's compute and network usage, which can be leveraged to improve application performance and resource utilization. We propose a new, declarative abstraction called an app slice that jointly considers the application's compute and network requirements. The abstraction leverages container management systems to manage edge computing resources and 5G network stacks to manage network resources, while the coupling between compute and network usage is explicitly managed by a new runtime system that delivers the declarative semantics of the app slice. The runtime also automatically and jointly manages edge compute and network resource usage across different edge computing environments and 5G networks by using two adaptive algorithms. We implement a complex, real-world, real-time monitoring application using the proposed app slice abstraction and demonstrate, on a private 5G/LTE testbed, that the proposed runtime significantly improves application performance and resource usage when compared with the case where the coupling between compute and network resource usage is ignored.
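A declarative abstraction like the app slice could plausibly look like the dataclass below, with compute fields destined for the container orchestrator and network fields destined for the 5G slice request. All field names and values here are hypothetical; the paper's actual specification language may differ:

```python
from dataclasses import dataclass

@dataclass
class AppSlice:
    """Hypothetical joint compute+network declaration for one application."""
    name: str
    cpu_cores: float        # compute request, handed to container orchestration
    memory_mb: int          # compute request, handed to container orchestration
    uplink_mbps: float      # network request, handed to the 5G slice manager
    downlink_mbps: float    # network request, handed to the 5G slice manager
    max_latency_ms: float   # end-to-end bound the runtime tries to maintain

# Example declaration for a real-time monitoring workload.
monitor = AppSlice(name="video-monitor", cpu_cores=4, memory_mb=8192,
                   uplink_mbps=50, downlink_mbps=10, max_latency_ms=100)
```

The point of putting both resource families in one object is that a runtime can then trade them off jointly, for example lowering the uplink request when extra edge compute lets the application pre-process more data locally.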

DataX: A system for Data eXchange and transformation of streams

The exponential growth in smart sensors and rapid progress in 5G networks are creating a world awash with data streams. However, a key barrier to building performant multi-sensor, distributed stream processing applications is high programming complexity. We propose DataX, a novel platform that improves programmer productivity by enabling easy exchange, transformation, and fusion of data streams. The DataX abstraction simplifies the application's specification and exposes parallelism and dependencies among the application functions (microservices). The DataX runtime automatically sets up appropriate data communication mechanisms, enables effortless reuse of microservices and data streams across applications, and leverages serverless computing to transform, fuse, and auto-scale microservices. DataX makes it easy to write, deploy and reliably operate distributed applications at scale; synthesizing these capabilities into a single platform goes substantially beyond what any available stream processing system offers.
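One way such an abstraction might surface to the programmer is a declaration that names a function's input and output streams and leaves the wiring to the runtime. The decorator API, stream names and registry below are invented to illustrate the idea, not DataX's actual interface:

```python
# Hypothetical DataX-style microservice declaration.
REGISTRY = {}

def microservice(inputs, outputs):
    """Register a function as a stream transformer. A runtime could use
    this metadata to set up communication channels between microservices,
    reuse streams across applications, and auto-scale instances."""
    def wrap(fn):
        REGISTRY[fn.__name__] = {"fn": fn, "inputs": inputs, "outputs": outputs}
        return fn
    return wrap

@microservice(inputs=["camera.frames", "radar.tracks"],
              outputs=["fused.objects"])
def fuse(frame, track):
    # Placeholder fusion logic: tag the frame with the latest radar objects.
    return {"frame_id": frame["id"], "objects": track["objects"]}
```

Because the declaration separates *what* a microservice consumes and produces from *how* the data moves, the same function can be reused in another pipeline simply by rebinding the stream names.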

F3S: Free Flow Fever Screening

Identifying people with elevated body temperature can reduce or dramatically slow down the spread of infectious diseases like COVID-19. We present a novel fever-screening system, F3S, that uses edge machine learning techniques to accurately measure the core body temperatures of multiple individuals in a free-flow setting. F3S performs real-time sensor fusion of visual and thermal camera data streams to detect elevated body temperature, and it has several unique features: (a) because the visual and thermal streams represent very different modalities, we dynamically associate semantically equivalent regions across visual and thermal frames using a new, dynamic alignment technique that analyzes content and context in real time; (b) we track people through occlusions, identify the eye (inner canthus), forehead, face and head regions where possible, and provide an accurate temperature reading by using a prioritized refinement algorithm; and (c) we robustly detect elevated body temperature even in the presence of personal protective equipment like masks, sunglasses or hats, all of which can be affected by hot weather and lead to spurious temperature readings. F3S has been deployed at over a dozen large commercial establishments, providing contact-less, free-flow, real-time fever screening for thousands of employees and customers in indoor and outdoor settings.
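The prioritized refinement idea, preferring the most physiologically reliable facial region that was actually detected in a frame, can be sketched as below. The region ordering follows the abstract (inner canthus, then forehead, face, head); the calibration offsets are made-up numbers, since skin-to-core calibration depends on the thermal camera and deployment:

```python
def core_body_temperature(region_readings):
    """Pick the best available region and calibrate its reading.

    region_readings: {region_name: skin temperature in Celsius} for one
    tracked person in one frame; regions may be missing due to masks,
    hats, sunglasses or occlusion.
    """
    priority = ["inner_canthus", "forehead", "face", "head"]
    # Illustrative offsets: less reliable regions read further below core temp.
    offset = {"inner_canthus": 0.0, "forehead": 0.3, "face": 0.5, "head": 0.8}
    for region in priority:
        if region in region_readings:
            return region_readings[region] + offset[region]
    return None  # no usable region in this frame; try the next frame

# A person wearing sunglasses (no inner canthus) but with forehead visible:
temp = core_body_temperature({"forehead": 36.4, "head": 35.9})  # -> 36.7
```

Falling back gracefully across regions is what lets the system keep screening when PPE hides the most accurate measurement site.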

ECO: Edge-Cloud Optimization of 5G applications

Centralized cloud computing, with network latencies of 100+ milliseconds, cannot meet the tens-of-milliseconds to sub-millisecond response times required by emerging 5G applications like autonomous driving, smart manufacturing, tactile internet, and augmented or virtual reality. We describe a new, dynamic runtime that enables such applications to make effective use, at all times, of a 5G network, computing at the edge of this network, and resources in the centralized cloud. Our runtime continuously monitors the interactions among the microservices, estimates the data produced and exchanged among them, and uses a novel graph min-cut algorithm to dynamically map the microservices to the edge or the cloud so as to satisfy application-specific response times. The runtime also handles temporary network partitions and maintains data consistency across the distributed fabric by using microservice proxies that reduce WAN bandwidth by an order of magnitude, all in an application-specific manner by leveraging knowledge of the application's functions, latency-critical pipelines and intermediate data. We illustrate the use of our runtime by successfully mapping two complex, representative real-world video analytics applications to the AWS/Verizon Wavelength edge-cloud architecture, improving application response times by 2x when compared with a static edge-cloud implementation.
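Edge/cloud placement maps naturally onto an s-t minimum cut: microservices are nodes, inter-service data rates are edge capacities, and affinity edges to an EDGE source and a CLOUD sink encode the cost of placing a service away from its preferred side. The sketch below is this standard formulation using networkx, not ECO's exact algorithm, and the affinity inputs are assumed to be supplied by the monitoring layer:

```python
import networkx as nx

def place_microservices(data_rates, edge_affinity, cloud_affinity):
    """Partition microservices between edge and cloud via s-t min cut.

    data_rates:     {(a, b): MB/s exchanged between microservices a and b};
                    cutting such an edge means paying WAN transfer for it.
    edge_affinity:  {svc: penalty if svc is NOT placed at the edge}
    cloud_affinity: {svc: penalty if svc is NOT placed in the cloud}
    """
    g = nx.DiGraph()
    for (a, b), rate in data_rates.items():
        g.add_edge(a, b, capacity=rate)   # model traffic in both directions
        g.add_edge(b, a, capacity=rate)
    for svc, cost in edge_affinity.items():
        g.add_edge("EDGE", svc, capacity=cost)    # cut => svc leaves the edge
    for svc, cost in cloud_affinity.items():
        g.add_edge(svc, "CLOUD", capacity=cost)   # cut => svc leaves the cloud
    _, (edge_side, cloud_side) = nx.minimum_cut(g, "EDGE", "CLOUD")
    return edge_side - {"EDGE"}, cloud_side - {"CLOUD"}
```

Because the data rates are re-estimated continuously, re-running the cut periodically lets the placement follow the application's actual traffic instead of a static guess.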