DataX Allocator: Dynamic resource management for stream analytics at the Edge

Serverless edge computing aims to deploy and manage applications so that developers are shielded from the challenges of dynamically managing, sharing, and maintaining the edge infrastructure. However, this is a non-trivial task because the resource usage of edge applications varies with the content of their input sensor data streams. We present a novel reinforcement-learning (RL) technique that maximizes the processing rates of applications by dynamically allocating resources (such as CPU cores or memory) to their microservices. We model applications as analytics pipelines consisting of several microservices, and a pipeline’s processing rate directly impacts the accuracy of insights from the application. In our unique problem formulation, neither the state space nor the action space of the RL model depends on the type of workload in the microservices, the number of microservices in a pipeline, or the number of pipelines. This lets us train the RL model once and reuse it many times to improve the accuracy of insights for a diverse set of AI/ML engines, such as action recognition or face recognition, and for applications with varying microservices. Our experiments with real-world applications, i.e., face recognition and action recognition, show that our approach outperforms other widely used alternatives and achieves up to a 2.5X improvement in the overall application processing rate. Furthermore, when we apply our RL model trained on a face recognition pipeline to a different and more complex action recognition pipeline, we obtain a 2X improvement in processing rate, demonstrating the versatility and robustness of our RL model to pipeline changes.
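To make the formulation concrete, a minimal sketch of such a workload-agnostic allocator is shown below: each microservice is summarized by a small, fixed-size state (bucketized CPU utilization and queue backlog), and the action only nudges that microservice's core allocation up or down, so the same policy applies regardless of pipeline size. The epsilon-greedy Q-learning loop, the state buckets, and the reward choice are illustrative assumptions, not the paper's exact design.

```python
import random
from collections import defaultdict

# Illustrative sketch: a per-microservice Q-learning allocator whose state and
# action spaces do not depend on the number of microservices or pipelines.
ACTIONS = (-1, 0, +1)  # remove a core, keep the allocation, add a core

class CoreAllocator:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)                  # Q[(state, action)] -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    @staticmethod
    def state(cpu_util, queue_len):
        # Fixed-size, workload-agnostic state: bucketized utilization and backlog.
        return (min(int(cpu_util * 10), 9), min(queue_len // 10, 9))

    def act(self, s):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(s, a)])

    def update(self, s, a, reward, s_next):
        best_next = max(self.q[(s_next, a2)] for a2 in ACTIONS)
        self.q[(s, a)] += self.alpha * (reward + self.gamma * best_next - self.q[(s, a)])

# The same agent can be reused for every microservice in every pipeline; for example,
# the reward could be the change in end-to-end pipeline processing rate after the action.
```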

APT: Adaptive Perceptual quality based camera Tuning using reinforcement learning

Cameras are increasingly being deployed in cities, enterprises and roads worldwide to enable many applications in public safety, intelligent transportation, retail, healthcare and manufacturing. Often, after the initial deployment of the cameras, the environmental conditions and the scenes around these cameras change, and our experiments show that these changes can adversely impact the accuracy of insights from video analytics. This is because the camera parameter settings, though optimal at deployment time, are no longer the best settings for good-quality video capture as the environmental conditions and scenes around a camera change during operation, and capturing poor-quality video degrades the accuracy of analytics. To mitigate this loss in accuracy, we propose APT, a novel reinforcement-learning based system that dynamically and remotely (over 5G networks) tunes the camera parameters to ensure high-quality video capture, thereby restoring the accuracy of insights when environmental conditions or scene content change. APT uses reinforcement learning with no-reference perceptual quality estimation as the reward function. We conducted extensive real-world experiments in which we deployed two cameras side-by-side overlooking an enterprise parking lot (one camera kept the manufacturer-suggested default settings, while the other was dynamically tuned by APT during operation). Our experiments demonstrated that, thanks to dynamic tuning by APT, the analytics insights are consistently better at all times of the day: the accuracy of an object-detection video analytics application improved on average by ~42%. Since our reward function is independent of any analytics task, APT can be readily used for different video analytics tasks.
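A minimal sketch of the analytics-agnostic reward idea follows: the reward for a parameter-tuning action is the change in a no-reference perceptual quality score of the captured frame, so no labeled analytics output is needed during operation. The camera and agent interfaces and the quality_score callable (for example, a BRISQUE-style estimator) are assumptions for illustration, not APT's actual implementation.

```python
# Illustrative sketch of an analytics-agnostic reward: the reward for a camera-parameter
# action is the change in a no-reference perceptual quality score of the captured frame.
# The quality_score() callable and the camera/agent interfaces below are assumptions.

def perceptual_reward(quality_score, frame_before, frame_after):
    """Positive reward when the new parameter setting yields a better-looking frame."""
    return quality_score(frame_after) - quality_score(frame_before)

def tuning_step(camera, agent, quality_score):
    frame_before = camera.capture()
    state = agent.observe(frame_before)
    action = agent.select_action(state)      # e.g., nudge brightness, contrast, or sharpness
    camera.apply(action)                     # applied remotely, e.g., over a 5G link
    frame_after = camera.capture()
    reward = perceptual_reward(quality_score, frame_before, frame_after)
    agent.update(state, action, reward, agent.observe(frame_after))
```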

NEC Labs America’s Time Series Data Research Drives Space Systems Innovation

With decreasing hardware costs and increasing demand for autonomic management, many of today’s physical systems are equipped with an extensive network of sensors, generating a considerable amount of time series data daily. A highly valuable source of information, time series data is used by businesses and governments to measure and analyze change over time in complex systems. Organizations must consolidate, integrate and organize a vast amount of time series data from multiple sources to generate insights and business value.

Next-Generation Computing Finally Sees Light

Moore’s law is dead, as we have squeezed all the innovation out of silicon. Fiber optics is the solution to meet the computing needs of tomorrow. Today, we can already use the light traveling inside fiber optic cables as sensors that measure vibrations, sound, temperature, light, and pressure changes. We’re now developing the means to take this to the next level with photonic computing at the speed of light, providing faster reaction times, reduced energy consumption and improved battery range.

NEC Labs America Heads to Stanford University’s SystemX Alliance Annual Fall Conference

NEC Labs America’s (NECLA) President Christopher White is attending Stanford University’s SystemX Alliance 2022 Fall Conference this week, where he is meeting with Ph.D. students, industry-leading researchers and business leaders presenting on a wide range of research topics. The annual conference will highlight exciting research in the areas of advanced materials, data analytics, energy and power management, 3D nanoprinting, and photonic and quantum computing, to name but a few!

Availability Analysis for Reliable Distributed Fiber Optic Sensors Placement

We perform an availability analysis of various reliable distributed fiber optic sensor placement schemes under multiple-failure conditions. The study can help network carriers select the optimal protection scheme for their network sensing services, considering both service availability and hardware cost.
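As a toy illustration of the arithmetic behind such an analysis, the sketch below computes the availability of an unprotected sensing route and of a 1+1-protected service under the assumption of independent component failures; the actual placement schemes and failure models studied in this work are not reproduced here.

```python
# Toy availability arithmetic, assuming independent component failures.

def series_availability(component_availabilities):
    """An unprotected sensing route is up only if every component on it is up."""
    a = 1.0
    for ai in component_availabilities:
        a *= ai
    return a

def parallel_availability(route_availabilities):
    """A protected service is up if at least one of its disjoint routes is up."""
    u = 1.0
    for ar in route_availabilities:
        u *= (1.0 - ar)
    return 1.0 - u

# Example: two disjoint fiber routes, each with three spans of 0.999 availability.
route = series_availability([0.999, 0.999, 0.999])
print(parallel_availability([route, route]))   # about 0.999991
```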

Distributed Optical Fiber Sensing Using Specialty Optical Fibers

Distributed fiber optic sensing systems use a long section of optical fiber as the sensing medium. Therefore, the fiber's characteristics determine the sensing capability and performance. In this presentation, various types of specialty optical fibers and their sensing applications will be introduced and discussed.

A Multi-sensor Feature Fusion Network Model for Bearings Grease Life Assessment in Accelerated Experiments

This paper presents a multi-sensor feature fusion (MSFF) neural network composed of two inception-layer-type multiple channel feature fusion (MCFF) networks for both inner-sensor and cross-sensor feature fusion, in conjunction with a deep residual neural network (ResNet), for accurate grease life assessment and bearing health monitoring. A single MCFF network is designed for low-level feature extraction and fusion of either vibration or acoustic emission signals at multiple scales. The concatenation of the MCFF networks serves as a cross-sensor feature fusion layer that combines the extracted features from both vibration and acoustic emission sources. A ResNet is developed for high-level feature extraction from the fused feature maps and for prediction. In addition, to handle the large volume of collected data, the original time-series data are transformed to the frequency domain with different sampling intervals and targeted ranges. The proposed MSFF network outperforms other models based on different fusion methods, fully connected network predictors and/or a single sensor source.
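A schematic PyTorch-style sketch of the described layout appears below: one multi-scale MCFF branch per sensor, cross-sensor concatenation of the extracted features, then a residual backbone and a prediction head operating on frequency-domain windows. All layer widths, kernel sizes, and input shapes are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MCFF(nn.Module):
    """Inception-style block: parallel 1-D convolutions at multiple scales."""
    def __init__(self, in_ch=1, out_ch=16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, out_ch, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)
        ])
        self.act = nn.ReLU()

    def forward(self, x):                        # x: (batch, 1, spectrum_len)
        return self.act(torch.cat([b(x) for b in self.branches], dim=1))

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv1d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv1d(ch, ch, 3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.conv2(self.act(self.conv1(x))))

class MSFF(nn.Module):
    def __init__(self):
        super().__init__()
        self.vib_branch = MCFF()                 # vibration spectrum branch
        self.ae_branch = MCFF()                  # acoustic-emission spectrum branch
        fused_ch = 2 * 3 * 16                    # two sensors x three scales x 16 channels
        self.backbone = nn.Sequential(ResBlock(fused_ch), ResBlock(fused_ch))
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(fused_ch, 1))

    def forward(self, vib, ae):
        fused = torch.cat([self.vib_branch(vib), self.ae_branch(ae)], dim=1)
        return self.head(self.backbone(fused))   # e.g., a remaining grease life estimate

# Example: a batch of 8 frequency-domain windows of length 512 from each sensor.
model = MSFF()
print(model(torch.randn(8, 1, 512), torch.randn(8, 1, 512)).shape)  # torch.Size([8, 1])
```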

Enhancing Video Analytics Accuracy via Real-time Automated Camera Parameter Tuning

In Video Analytics Pipelines (VAP), Analytics Units (AUs) such as object detection and face recognition running on remote servers critically rely on surveillance cameras to capture high-quality video streams in order to achieve high accuracy. Modern IP cameras come with a large number of camera parameters that directly affect the quality of the captured video stream. While a few of these parameters, e.g., exposure, focus and white balance, are automatically adjusted by the camera internally, the remaining ones are not. We denote such camera parameters as non-automated (NAUTO) parameters. In this paper, we first show that changes in environmental conditions can have a significant adverse effect on the accuracy of insights from the AUs, but that such adverse impact can potentially be mitigated by dynamically adjusting NAUTO camera parameters in response to changes in environmental conditions. We then present CamTuner, to our knowledge the first framework that dynamically adapts NAUTO camera parameters to optimize the accuracy of AUs in a VAP in response to adverse changes in environmental conditions. CamTuner is based on SARSA reinforcement learning and incorporates two novel components, a lightweight analytics quality estimator and a virtual camera, that drastically speed up offline RL training. Our controlled experiments and real-world VAP deployment show that, compared to a VAP using the default camera settings, CamTuner enhances VAP accuracy by detecting 15.9% additional persons and 2.6%–4.2% additional cars (without any false positives) in a large enterprise parking lot and 9.7% additional cars in a 5G smart traffic intersection scenario, which enables a new use case of accurate and reliable automatic vehicle collision prediction (AVCP). CamTuner opens doors for new ways to significantly enhance video analytics accuracy beyond incremental improvements from refining deep-learning models.
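Since the abstract names SARSA, a minimal on-policy SARSA update for a discretized camera-parameter tuning problem is sketched below; the state encoding, the action set, and the reward source (an analytics quality estimator driving a virtual camera during offline training) are illustrative assumptions, not CamTuner internals.

```python
import random
from collections import defaultdict

# Minimal on-policy SARSA sketch for tuning discretized NAUTO parameters
# (e.g., brightness/contrast/sharpness levels). State, actions, and reward
# source are illustrative assumptions.

Q = defaultdict(float)                 # Q[(state, action)] -> value
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def epsilon_greedy(state, actions):
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def sarsa_update(s, a, reward, s_next, a_next):
    # On-policy target: the action actually chosen in the next state, not the max.
    Q[(s, a)] += ALPHA * (reward + GAMMA * Q[(s_next, a_next)] - Q[(s, a)])
```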

Semi-supervised Identification and Mapping of Water Accumulation Extent using Street-level Monitoring Videos

Urban flooding is becoming a common and devastating hazard that causes loss of life and economic damage. Monitoring and understanding urban flooding at a highly localized scale is a challenging task due to the complicated urban landscape, the intricate hydraulic process, and the lack of high-quality, high-resolution data. Emerging smart-city technologies such as monitoring cameras provide an unprecedented opportunity to address the data issue. However, estimating water ponding extents on land surfaces from monitoring footage is unreliable with traditional segmentation techniques: the boundary of the water ponding, under the influence of varying weather, background, and illumination, is usually too fuzzy to identify, and the oblique angle and image distortion in the video monitoring data prevent georeferencing and object-based measurements. This paper presents a novel semi-supervised segmentation scheme for surface water extent recognition from the footage of an oblique monitoring camera. The semi-supervised segmentation algorithm was found suitable for determining the water boundary, and the monoplotting method was successfully applied to georeference the pixels of the monitoring video for virtual quantification of the local drainage process. The correlation and mechanism-based analyses demonstrate the value of the proposed method in advancing the understanding of local drainage hydraulics. The workflow and methods created in this study have great potential for studying other street-level and earth-surface processes.
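As a simplified stand-in for the georeferencing step, the sketch below maps pixels of a water mask to ground coordinates with a plane-to-plane homography estimated from surveyed control points and then integrates the ponded area. True monoplotting uses a camera model together with a terrain model; the flat-ground assumption and the control points here are purely illustrative.

```python
import numpy as np
import cv2

# Pixel coordinates of ground control points in the oblique camera view (illustrative) ...
pixels = np.float32([[210, 480], [930, 465], [1180, 700], [60, 720]])
# ... and their surveyed positions in a local metric coordinate system (meters).
ground = np.float32([[0.0, 0.0], [12.0, 0.0], [12.0, 8.0], [0.0, 8.0]])
H, _ = cv2.findHomography(pixels, ground)

def ponded_area_m2(water_mask, cell=0.1):
    """Georeference water pixels onto a `cell`-sized ground grid and sum the cell areas."""
    ys, xs = np.nonzero(water_mask)                       # pixels flagged as water
    if xs.size == 0:
        return 0.0
    pts = np.float32(np.stack([xs, ys], axis=1)).reshape(-1, 1, 2)
    xy = cv2.perspectiveTransform(pts, H).reshape(-1, 2)  # pixel -> ground (meters)
    cells = {(int(x // cell), int(y // cell)) for x, y in xy}
    return len(cells) * cell * cell
```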