DeepGAR: Deep Graph Learning for Analogical Reasoning

Analogical reasoning is the process of discovering and mapping correspondences from a target subject to a base subject. As the most well-known computational method of analogical reasoning, Structure-Mapping Theory (SMT) abstracts both the target and base subjects into relational graphs and frames the cognitive process of analogical reasoning as finding a corresponding subgraph (i.e., a correspondence) in the target graph that is aligned with the base graph. However, incorporating deep learning into SMT remains under-explored due to several obstacles: 1) the combinatorial complexity of searching for the correspondence in the target graph, and 2) the restriction of correspondence mining by various cognitive theory-driven constraints. To address both challenges, we propose a novel framework for Analogical Reasoning (DeepGAR) that identifies the correspondence between source and target domains while enforcing cognitive theory-driven constraints. Specifically, we design a geometric constraint embedding space that induces the subgraph relation from node embeddings for efficient subgraph search. Furthermore, we develop novel learning and optimization strategies that identify, end to end, correspondences strictly consistent with the constraints driven by the cognitive theory. Extensive experiments on synthetic and real-world datasets demonstrate the effectiveness of the proposed DeepGAR over existing methods. The code and data are available at: https://github.com/triplej0079/DeepGAR.
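
A common way to realize such a geometric constraint embedding space is an order embedding, where a query graph is predicted to be a subgraph of a target whenever its embedding is component-wise no greater than the target's. The PyTorch sketch below illustrates a max-margin order-embedding loss of this kind; it is a minimal sketch under that assumption, the specific loss and encoder used by DeepGAR may differ, and the random tensors stand in for GNN outputs.

import torch
import torch.nn.functional as F

def order_violation(z_query, z_target):
    # Zero iff z_query <= z_target in every coordinate, i.e. the query
    # is predicted to be a subgraph of the target.
    return torch.clamp(z_query - z_target, min=0).pow(2).sum(dim=-1)

def subgraph_margin_loss(z_q, z_pos, z_neg, margin=1.0):
    # Positive (true subgraph) pairs are pushed toward zero violation;
    # negative pairs are pushed to at least `margin` violation.
    pos = order_violation(z_q, z_pos)
    neg = F.relu(margin - order_violation(z_q, z_neg))
    return (pos + neg).mean()

# Toy usage with random vectors standing in for GNN node/graph embeddings.
z_q = torch.rand(8, 64)
z_pos = z_q + torch.rand(8, 64)   # dominates z_q, so its violation is ~0
z_neg = torch.rand(8, 64)
print(subgraph_margin_loss(z_q, z_pos, z_neg).item())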

Using Global Fiber Networks for Environmental Sensing

We review recent advances in distributed fiber optic sensing (DFOS) and their applications. The scattering mechanisms in glass exploited for reflectometry-based DFOS are Rayleigh, Brillouin, and Raman scattering. These mechanisms are sensitive to strain and/or temperature, allowing optical fiber cables to monitor their ambient environment in addition to their conventional role as a medium for telecommunications. Recently, DFOS has leveraged technologies developed for telecommunications, such as coherent detection, digital signal processing, coding, and spatial/frequency diversity, to achieve improved performance in terms of measurand resolution, reach, spatial resolution, and bandwidth. We review the theory and architecture of commonly used DFOS methods. We provide recent experimental and field-trial results where DFOS was used in wide-ranging applications, such as geohazard monitoring, seismic monitoring, traffic monitoring, and infrastructure health monitoring. Events of interest often have unique signatures in the spatial, temporal, frequency, or wavenumber domain. Based on the raw temperature and strain data obtained from DFOS, downstream postprocessing allows the detection, classification, and localization of events. By combining DFOS with machine learning methods, it is possible to realize complete sensor systems that are compact, low cost, and able to operate in harsh environments and difficult-to-access locations, facilitating increased public safety and smarter cities.
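
As a concrete illustration of the strain and temperature sensitivity mentioned above, the Brillouin frequency shift in standard single-mode fiber varies roughly linearly with temperature and strain. The short Python sketch below uses typical textbook coefficient values, not numbers from this review; in practice the coefficients are calibrated per fiber and cable.

# Typical Brillouin coefficients for standard single-mode fiber near 1550 nm
# (illustrative textbook values; real systems calibrate these per fiber).
C_T = 1.0e6     # Hz of Brillouin shift per degree Celsius  (~1 MHz/degC)
C_EPS = 0.05e6  # Hz of Brillouin shift per microstrain     (~0.05 MHz/ue)

def brillouin_shift(delta_temp_c, delta_strain_ue):
    # Forward model: change in Brillouin frequency (Hz) for a temperature
    # change (degC) and a strain change (microstrain).
    return C_T * delta_temp_c + C_EPS * delta_strain_ue

def temperature_from_shift(delta_nu_hz):
    # Inversion for a strain-isolated (loose-tube) cable, where the strain
    # term is negligible and the shift is attributed to temperature alone.
    return delta_nu_hz / C_T

# Example: 5 degC of warming plus 20 microstrain of stretch along the cable.
shift = brillouin_shift(5.0, 20.0)
print(f"Brillouin shift: {shift / 1e6:.2f} MHz")
print(f"Apparent temperature change if strain is ignored: "
      f"{temperature_from_shift(shift):.2f} degC")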

DataX Allocator: Dynamic resource management for stream analytics at the Edge

Serverless edge computing aims to deploy and manage applications so that developers are unaware of the challenges associated with the dynamic management, sharing, and maintenance of the edge infrastructure. However, this is a non-trivial task because the resource usage of edge applications varies with the content of their input sensor data streams. We present a novel reinforcement-learning (RL) technique that maximizes the processing rates of applications by dynamically allocating resources (such as CPU cores or memory) to the microservices in these applications. We model applications as analytics pipelines consisting of several microservices, and a pipeline’s processing rate directly impacts the accuracy of insights from the application. In our unique problem formulation, neither the state space nor the number of actions of the RL model depends on the type of workload in the microservices, the number of microservices in a pipeline, or the number of pipelines. This enables us to train the RL model once and reuse it to improve the accuracy of insights for a diverse set of AI/ML engines, such as action recognition or face recognition, and for applications with varying microservices. Our experiments with real-world applications, i.e., face recognition and action recognition, show that our approach outperforms other widely used alternatives and achieves up to a 2.5X improvement in the overall application processing rate. Furthermore, when we apply our RL model trained on a face recognition pipeline to a different and more complex action recognition pipeline, we obtain a 2X improvement in processing rate, demonstrating the versatility and robustness of our RL model to pipeline changes.
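
To make the workload-independence claim concrete, one way to keep the RL state and action spaces fixed is to let the agent act on one microservice at a time using only normalized, per-microservice features. The sketch below illustrates that idea; the feature names, actions, and pipeline contents are purely illustrative assumptions and are not taken from the DataX design.

import random

# Fixed-size action set, regardless of how many microservices or pipelines exist.
ACTIONS = ["increase_cpu", "decrease_cpu", "no_change"]

def microservice_state(ms):
    # Fixed-length, normalized state vector for a single microservice.
    return (
        ms["cpu_alloc"] / ms["cpu_limit"],    # fraction of CPU budget in use
        ms["queue_len"] / ms["queue_cap"],    # input backlog
        ms["proc_rate"] / ms["target_rate"],  # achieved vs. desired rate
    )

def random_policy(state):
    # Placeholder for the learned policy (e.g., a small value network).
    return random.choice(ACTIONS)

pipeline = [
    {"name": "decode", "cpu_alloc": 2, "cpu_limit": 8,
     "queue_len": 5, "queue_cap": 100, "proc_rate": 24, "target_rate": 30},
    {"name": "face_detect", "cpu_alloc": 4, "cpu_limit": 8,
     "queue_len": 60, "queue_cap": 100, "proc_rate": 12, "target_rate": 30},
]

# The same policy is applied to every microservice, however many there are.
for ms in pipeline:
    print(ms["name"], "->", random_policy(microservice_state(ms)))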

APT: Adaptive Perceptual quality based camera Tuning using reinforcement learning

Cameras are increasingly being deployed in cities, enterprises, and roads worldwide to enable many applications in public safety, intelligent transportation, retail, healthcare and manufacturing. Often, after the initial deployment of the cameras, the environmental conditions and the scenes around them change, and our experiments show that these changes can adversely impact the accuracy of insights from video analytics. This is because the camera parameter settings, though optimal at deployment time, are no longer the best settings for good-quality video capture once the environmental conditions and scenes around a camera change during operation, and capturing poor-quality video degrades the accuracy of analytics. To mitigate this loss in accuracy, we propose APT, a novel reinforcement-learning based system that dynamically and remotely (over 5G networks) tunes the camera parameters to ensure high-quality video capture, restoring the accuracy of insights when environmental conditions or scene content change. APT uses reinforcement learning with no-reference perceptual quality estimation as the reward function. We conducted extensive real-world experiments in which we deployed two cameras side by side overlooking an enterprise parking lot (one camera kept only the manufacturer-suggested default settings, while the other was dynamically tuned by APT during operation). Our experiments demonstrate that, due to dynamic tuning by APT, the analytics insights are consistently better at all times of the day: the accuracy of an object-detection video analytics application improved on average by ∼42%. Since our reward function is independent of any analytics task, APT can be readily used for different video analytics tasks.
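
The sketch below illustrates the general shape of such an analytics-independent reward loop: a no-reference perceptual quality score drives the choice of a camera-parameter adjustment. The quality estimator is a placeholder and the parameter names are illustrative assumptions; APT's actual estimator, action space, and RL algorithm are not specified here.

import random

CAMERA_PARAMS = ["brightness", "contrast", "sharpness", "color_saturation"]

def estimate_perceptual_quality(frame):
    # Placeholder for a no-reference perceptual quality estimator
    # (e.g., a BRISQUE-style model). Returns a score in [0, 1].
    return random.random()

def tune_step(settings, frame, policy):
    # One tuning step: score the current frame, let the policy pick a
    # (parameter, delta) action, and apply it to the camera settings.
    reward = estimate_perceptual_quality(frame)
    param, delta = policy(frame)
    settings[param] = min(100, max(0, settings[param] + delta))
    return settings, reward

# Toy usage with a random policy acting on a dummy frame.
settings = {p: 50 for p in CAMERA_PARAMS}
random_policy = lambda frame: (random.choice(CAMERA_PARAMS),
                               random.choice([-5, 5]))
settings, reward = tune_step(settings, frame=None, policy=random_policy)
print(settings, round(reward, 3))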

NEC Labs America’s Time Series Data Research Drives Space Systems Innovation

With decreasing hardware costs and increasing demand for autonomic management, many of today’s physical systems are equipped with an extensive network of sensors, generating a considerable amount of time series data daily. A highly valuable source of information, time series data is used by businesses and governments to measure and analyze change over time in complex systems. Organizations must consolidate, integrate and organize a vast amount of time series data from multiple sources to generate insights and business value.

Next-Generation Computing Finally Sees Light

Moore’s law is dead, as we have squeezed all the innovation out of silicon. Fiber optics is the solution to meet the computing needs of tomorrow. Today, we can already use the light traveling inside fiber optic cables as sensors that measure vibrations, sound, temperature, light, and pressure changes. We’re now developing the means to take this to the next level with photonic computing at the speed of light to provide faster reaction times, reduce energy consumption, and improve battery range.

NEC Labs America Heads to Stanford University’s SystemX Alliance Annual Fall Conference

NEC Labs America’s (NECLA) President Christopher White is attending Stanford University’s SystemX Alliance 2022 Fall Conference this week, where he is meeting with Ph.D. students, industry-leading researchers and business leaders presenting on a wide range of research topics. The annual conference will highlight exciting research in the areas of advanced materials, data analytics, energy and power management, 3D nanoprinting, and photonic and quantum computing, to name but a few!

Availability Analysis for Reliable Distributed Fiber Optic Sensors Placement

We perform an availability analysis for various reliable distributed fiber optic sensor placement schemes under multiple-failure scenarios. The study can help network carriers select the optimal protection scheme for their network sensing services, considering both service availability and hardware cost.

Distributed Optical Fiber Sensing Using Specialty Optical Fibers

Distributed fiber optic sensing systems use a long section of optical fiber as the sensing medium. Therefore, the fiber's characteristics determine the sensing capability and performance. In this presentation, various types of specialty optical fibers and their sensing applications will be introduced and discussed.

A Multi-sensor Feature Fusion Network Model for Bearings Grease Life Assessment in Accelerated Experiments

This paper presents a multi-sensor feature fusion (MSFF) neural network composed of two inception-layer-type multiple channel feature fusion (MCFF) networks, for both inner-sensor and cross-sensor feature fusion, in conjunction with a deep residual neural network (ResNet) for accurate grease life assessment and bearing health monitoring. Each MCFF network performs low-level feature extraction and fusion of either vibration or acoustic emission signals at multiple scales. The concatenation of the MCFF networks serves as a cross-sensor feature fusion layer that combines the extracted features from both the vibration and acoustic emission sources. A ResNet then performs high-level feature extraction from the fused feature maps and produces the prediction. In addition, to handle the large volume of collected data, the original time-series data are transformed to the frequency domain with different sampling intervals and targeted ranges. The proposed MSFF network outperforms other models based on different fusion methods, fully connected network predictors, and/or a single sensor source.
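
The sketch below gives a minimal PyTorch rendering of this architecture under stated assumptions: each MCFF block is approximated by parallel 1-D convolutions at several kernel sizes, cross-sensor fusion is a channel-wise concatenation, and the ResNet is reduced to two basic residual blocks with a regression head. All layer sizes are illustrative and do not reflect the paper's exact configuration.

import torch
import torch.nn as nn

class MCFF(nn.Module):
    # Inception-style multi-channel feature fusion for one sensor stream:
    # parallel 1-D convolutions at several kernel sizes (multi-scale),
    # fused by channel-wise concatenation.
    def __init__(self, in_ch=1, out_ch=16):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in (3, 5, 7)])
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(torch.cat([b(x) for b in self.branches], dim=1))

class ResBlock(nn.Module):
    # Basic 1-D residual block used for high-level feature extraction.
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv1d(ch, ch, 3, padding=1))
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))

class MSFF(nn.Module):
    # One MCFF per sensor (vibration, acoustic emission), concatenated as the
    # cross-sensor fusion layer, then residual blocks and a regression head.
    def __init__(self):
        super().__init__()
        self.vib_mcff = MCFF()
        self.ae_mcff = MCFF()
        fused_ch = 2 * 3 * 16  # two sensors x three scales x 16 channels
        self.backbone = nn.Sequential(ResBlock(fused_ch), ResBlock(fused_ch))
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                  nn.Linear(fused_ch, 1))

    def forward(self, vib, ae):
        fused = torch.cat([self.vib_mcff(vib), self.ae_mcff(ae)], dim=1)
        return self.head(self.backbone(fused))

# Toy forward pass on frequency-domain inputs (batch of 4, 256 bins each).
model = MSFF()
print(model(torch.rand(4, 1, 256), torch.rand(4, 1, 256)).shape)  # (4, 1)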