
Our Integrated Systems department designs and prototypes high-performance, intelligent distributed systems, applications, and services on complex, large-scale communication networks such as 5G and beyond. We develop next-generation wireless technologies for sensing the world, localizing critical assets, and improving the capacity, coverage, and scalability of these networks.
New application needs have always sparked human innovation. A decade ago, cloud computing enabled high-value enterprise services with global reach and scale, but with delays of seconds or even minutes. Large-scale services like enterprise resource planning (ERP) were corner cases, often designed as one-off systems. Today, applications like social networks, automated trading, and video streaming have made large-scale services the norm rather than the exception. In the future, advances in 5G networks and an explosion in smart devices, microservices, databases, networking, and computing tiers will make services too complex for humans to tune or manage.
The sheer scale, dynamism, and concurrency of services on 5G slices will require them to be intelligent and autonomic: they will need to continuously self-assess, learn, and automatically adjust to resource needs, data quality, and service reliability. The need for increased efficiency and reduced latency between measurement and action drives our design of real-time distributed systems for feature extraction, computation, and machine learning on multimodal streaming data. We are conducting extensive research on creating end-to-end solutions using multimodal sensing technologies in the retail, public safety, and transportation domains.
Our 5G cellular network research spans the Radio Access Network (RAN), the mobile edge, and the 5G LAN. Within the RAN, we develop technologies that optimize massive MIMO/MU-MIMO deployments and millimeter-wave access (e.g., transmission at 28 GHz to nomadic and mobile users). At the mobile edge (MEC), we focus on virtualization, scalability, and cloud deployment of services appropriate to the edge. Our 5G LAN research extends the benefits of 5G slicing technology to enterprise LANs, positioning the enterprise as the new MEC.
Read the latest news and publications from the world-class researchers in our Integrated Systems department.
AQuA: Analytical Quality Assessment for Optimizing Video Analytics Systems (2021-12-15)

Millions of cameras at the edge are being deployed to power a variety of deep learning applications. However, the frames captured by these cameras are not always pristine: they can be distorted by lighting issues, sensor noise, compression, and so on. Such distortions not only degrade visual quality but also reduce the accuracy of the deep learning applications that process the video streams. In this work, we introduce AQuA, which protects application accuracy against distorted frames by scoring the level of distortion in each frame. AQuA assesses the analytical quality of frames rather than their visual quality, learning a novel metric, the classifier opinion score, with a lightweight, CNN-based, object-independent feature extractor. It accurately scores the distortion level of frames and generalizes across multiple deep learning applications. When used to filter out poor-quality frames at the edge, AQuA reduces high-confidence errors in analytics applications by 17%. Through this filtering, and thanks to its low overhead (14 ms), AQuA can also reduce computation time and average bandwidth usage by 25%.
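The core idea is a quality gate in front of the analytics: score each frame, and forward only frames that clear a threshold. Below is a minimal sketch of that gating pattern, with a stand-in scorer (Laplacian-variance sharpness) in place of AQuA's learned CNN-based classifier opinion score; the scorer, threshold, and function names are illustrative assumptions, not the paper's implementation.

```python
# Sketch of frame filtering at the edge: score each incoming frame for
# analytical quality and forward only frames whose score clears a threshold.
# The scorer below is a cheap stand-in (sharpness via Laplacian variance);
# AQuA itself learns a CNN-based "classifier opinion score" instead.
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Proxy for frame quality: variance of a 3x3 Laplacian response.
    Low values suggest blur or low detail; AQuA's learned score replaces this."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float32)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.float32)
    for i in range(3):          # valid 3x3 correlation, no dependencies
        for j in range(3):
            out += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())

def filter_frames(frames, threshold: float):
    """Yield only frames that clear the threshold, so the downstream
    analytics (and the network) never see the rest."""
    for frame in frames:
        if laplacian_variance(frame) >= threshold:
            yield frame

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sharp = rng.uniform(0, 255, (64, 64)).astype(np.float32)  # high detail
    flat = np.full((64, 64), 128, dtype=np.float32)           # no detail
    kept = list(filter_frames([sharp, flat], threshold=100.0))
    print(f"kept {len(kept)} of 2 frames")  # the flat frame is dropped
```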
Edge-based fever screening system over private 5G (2021-12-14)

Edge computing and 5G have made it possible to perform analytics close to the source of the data and achieve super-low-latency response times, which is not possible with a centralized cloud deployment. In this paper, we present a novel fever screening system that uses edge machine learning techniques and leverages a private 5G network to accurately identify and screen individuals with fever in real time. In particular, we present novel deep-learning-based techniques for fusing and aligning cross-spectral visual and thermal data streams at the edge. Our Cross-Spectral Generative Adversarial Network (CS-GAN) synthesizes visual images that have the key, representative object-level features required to uniquely associate objects across the visual and thermal spectra. Two key features of CS-GAN are a novel, feature-preserving loss function that results in high-quality pairing of corresponding cross-spectral objects, and dual bottleneck residual layers with skip connections (a new network enhancement) that both accelerate real-time inference and speed up convergence during model training at the edge. To the best of our knowledge, this is the first technique that leverages 5G networks and limited edge resources to enable real-time, feature-level association of objects in visual and thermal streams (30 ms per full-HD frame on an Intel Core i7-8650 4-core, 1.9 GHz mobile processor), and the first system to achieve such real-time operation, which has enabled fever screening of employees and guests in arenas, theme parks, airports, and other critical facilities. By leveraging edge computing and 5G, our fever screening system achieves 98.5% accuracy and can process ~5x more people than a centralized cloud deployment.
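As a point of reference for the network enhancement the abstract mentions, here is a generic bottleneck residual block with a skip connection in PyTorch. This sketches only the standard building block: the channel widths, reduction ratio, and layer arrangement are assumptions, not CS-GAN's actual "dual bottleneck" design.

```python
# A minimal bottleneck residual block with a skip connection. The 1x1
# squeeze/expand keeps inference cheap, and the identity skip speeds up
# convergence during training; values here are illustrative assumptions.
import torch
import torch.nn as nn

class BottleneckResidual(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        mid = channels // reduction
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1, bias=False),   # squeeze
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1, bias=False),   # expand
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The skip connection lets gradients bypass the body entirely,
        # which is what accelerates convergence at the edge.
        return self.act(self.body(x) + x)

# Usage: a feature map passes through with its shape unchanged.
block = BottleneckResidual(64)
y = block(torch.randn(1, 64, 120, 68))
print(y.shape)  # torch.Size([1, 64, 120, 68])
```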
Magic-Pipe: Self-optimizing video analytics pipelines (2021-12-06)

Microservices-based video analytics pipelines routinely use multiple deep convolutional neural networks. We observe that the best allocation of resources to the deep learning engines (or microservices) in a pipeline, and the best configuration of parameters for each engine, vary over time, often at a timescale of minutes or even seconds, driven by the dynamic content of the video. We leverage these observations in Magic-Pipe, a self-optimizing video analytics pipeline that uses AI techniques to periodically optimize itself. First, we propose a new adaptive resource allocation technique that dynamically balances the resource usage of the different microservices based on the video content. Second, we propose an adaptive microservice parameter tuning technique that balances the accuracy and performance of each microservice, also based on the video content. Finally, we propose two approaches to reducing the unnecessary computation caused by the unavoidable mismatch of independently designed, reusable deep learning engines: a deep learning approach that improves the feature extractor's performance by filtering out inputs from which no features can be extracted, and a low-overhead graph-theoretic approach that minimizes redundant computation across frames. Our evaluation shows that pipelines augmented with Magic-Pipe's self-optimizing capability achieve application response times an order of magnitude better than the original pipelines, while using the same hardware resources and achieving similarly high accuracy.
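To make the adaptive resource allocation idea concrete, here is a hypothetical sketch of one rebalancing step: a fixed resource budget is split across microservices in proportion to their recent backlog, with a guaranteed minimum share per stage. The proportional policy and the names `backlog` and `min_share` are illustrative assumptions, not Magic-Pipe's actual allocator.

```python
# Hypothetical periodic rebalancing step: stages that the current video
# content stresses (long backlog) receive a larger share of a fixed
# resource budget, while every stage keeps a minimum share so it never
# starves. Run this every few seconds as content changes.
from typing import Dict

def rebalance(backlog: Dict[str, float], budget: float = 1.0,
              min_share: float = 0.05) -> Dict[str, float]:
    """Split `budget` across microservices proportionally to backlog."""
    floor = min_share * len(backlog)
    assert floor <= budget, "minimum shares exceed the budget"
    total = sum(backlog.values())
    spare = budget - floor
    if total == 0:  # idle pipeline: split the budget evenly
        return {k: budget / len(backlog) for k in backlog}
    return {k: min_share + spare * v / total for k, v in backlog.items()}

# Usage: in a crowded scene the detector's queue grows, so its share
# grows at the expense of lightly loaded stages.
shares = rebalance({"decoder": 2.0, "detector": 10.0, "tracker": 1.0})
print(shares)  # detector receives the largest share
```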
SmartSlice: Dynamic, Self-optimization of Application's QoS requests to 5G networks (2021-12-06)

Applications can tailor a network slice by specifying a variety of QoS attributes related to application-specific performance, function, or operation. However, some QoS attributes, such as the guaranteed bandwidth an application requires, vary over time. For example, the network bandwidth needs of video streams from surveillance cameras can vary widely depending on environmental conditions and the content of the streams. In this paper, we propose a novel dynamic QoS attribute prediction technique that helps any application make optimal resource reservation requests at all times. Standard forecasting with traditional cost functions such as MAE, MSE, RMSE, and MDA does not work well because those functions ignore the direction of the forecast error (whether more or fewer resources are reserved than needed), its magnitude (by how much the forecast deviates, and in which direction), and its frequency (how many times the forecast deviates from actual needs, and in which direction). Direction, magnitude, and frequency directly affect both the accuracy of the application's insights and its operational costs. We propose a new parameterized cost function that accounts for all three and guides the design of a new prediction technique. To the best of our knowledge, this is the first work that considers time-varying application requirements and dynamically adjusts slice QoS requests to 5G networks to balance application accuracy against operational costs. In a real-world deployment of a surveillance video analytics application over 17 cameras, our technique outperforms traditional forecasting methods and saves 34% of network bandwidth (over a ~24-hour period) compared to a static, one-time reservation.
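The following sketch shows how a cost function can be made sensitive to direction, magnitude, and frequency at once, which symmetric metrics like MAE and MSE are not. The weights `alpha` (under-provisioning), `beta` (over-provisioning), and `gamma` (per-shortfall penalty) are illustrative assumptions; the paper's actual parameterization may differ.

```python
# Asymmetric, parameterized forecast cost: under-forecasting bandwidth
# hurts application accuracy (weight alpha), over-forecasting wastes money
# (weight beta), and each under-provisioned interval adds a fixed penalty
# (gamma) capturing how *often* the forecast falls short.
from typing import Sequence

def slice_cost(forecast: Sequence[float], actual: Sequence[float],
               alpha: float = 3.0, beta: float = 1.0,
               gamma: float = 0.5) -> float:
    cost = 0.0
    for f, a in zip(forecast, actual):
        if f < a:                    # direction: under-provisioned
            cost += alpha * (a - f)  # magnitude, weighted heavily
            cost += gamma            # frequency: one more violation
        else:                        # direction: over-provisioned
            cost += beta * (f - a)   # magnitude, weighted lightly
    return cost

# Usage: the same total absolute error costs more when it comes from
# repeated shortfalls than from a single over-reservation.
print(slice_cost([8, 8, 8], [10, 10, 10]))     # 19.5: three shortfalls
print(slice_cost([16, 10, 10], [10, 10, 10]))  # 6.0: one surplus
```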
CamTuner: Reinforcement Learning based System for Camera Parameter Tuning to enhance Analytics (2021-10-26)

Video analytics systems critically rely on video cameras that capture high-quality frames in order to achieve high analytics accuracy. Although modern video cameras often expose tens of configurable parameter settings, surveillance deployments today typically use a fixed set of values because end users lack the skill or understanding to reconfigure these parameters. In this paper, we first show that in a typical surveillance camera deployment, changes in environmental conditions can significantly affect the accuracy of analytics units such as person detection, face detection, and face recognition, and that this adverse impact can be mitigated by dynamically adjusting camera settings. We then propose CamTuner, a framework that can be easily applied to an existing video analytics pipeline (VAP) to enable automatic, dynamic adaptation of complex camera settings to changing environmental conditions and to autonomously optimize the accuracy of the analytics units (AUs) in the VAP. CamTuner is based on SARSA reinforcement learning (RL) and incorporates two novel components: a lightweight analytics quality estimator and a virtual camera. We implemented CamTuner in a system with AXIS surveillance cameras and several VAPs (with various AUs) that processed day-long customer videos captured at airport entrances. Our evaluations show that CamTuner adapts quickly to changing environments. We compared CamTuner with two alternatives: static camera settings, and a strawman approach in which camera settings were changed manually every hour based on human perception of quality. For the face detection and person detection AUs, CamTuner achieves up to 13.8% and 9.2% higher accuracy, respectively, than the better of the two alternatives (an average improvement of 8% for both AUs).
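For readers unfamiliar with SARSA, the sketch below shows the generic on-policy control loop such a system builds on: observe a (discretized) environment state, adjust a camera parameter, receive the estimated analytics quality as reward, and update Q-values using the action actually taken next. The state/action encodings and constants are placeholder assumptions, not CamTuner's actual design.

```python
# Generic tabular SARSA loop for camera-setting control. The state is a
# discretized environment snapshot, the action nudges a camera parameter,
# and the reward would come from an analytics-quality estimator (a
# constant is used below purely for illustration).
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = ["brightness+", "brightness-", "contrast+", "contrast-", "hold"]
Q = defaultdict(float)  # Q[(state, action)] -> expected analytics quality

def choose(state: str) -> str:
    """Epsilon-greedy policy over camera-setting adjustments."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def sarsa_step(s: str, a: str, reward: float, s_next: str) -> str:
    """On-policy update: bootstrap from the action actually taken next."""
    a_next = choose(s_next)
    Q[(s, a)] += ALPHA * (reward + GAMMA * Q[(s_next, a_next)] - Q[(s, a)])
    return a_next

# Usage: one control step with a placeholder reward.
a = choose("noon_glare")
a = sarsa_step("noon_glare", a, reward=0.8, s_next="noon_glare")
print(a in ACTIONS)  # True
```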