Field Trials of Manhole Localization and Condition Diagnostics by Using Ambient Noise and Temperature Data with AI in a Real-Time Integrated Fiber Sensing System

Field trials of ambient-noise-based automated methods for manhole localization and condition diagnostics using a real-time DAS/DTS integrated system were conducted. Cross-referencing multiple sensing data streams yielded a 94.7% detection rate and enhanced anomaly identification.

Field Tests of AI-Driven Road Deformation Detection Leveraging Ambient Noise over Deployed Fiber Networks

This study demonstrates an AI-driven method for detecting road deformations using Distributed Acoustic Sensing (DAS) over existing telecom fiber networks. Utilizing ambient traffic noise, it enables real-time, long-term, and scalable monitoring for road safety.

Enhancing EDFA Grey-Box Modeling in Optical Multiplex Sections Using Few-Shot Learning

We combine few-shot learning and grey-box modeling for EDFAs in optical lines, training a single EDFA model on 500 spectral loads and transferring it to other EDFAs using 4-8 samples, maintaining low OSNR prediction error.

Dual Privacy Protection for Distributed Fiber Sensing with Disaggregated Inference and Fine-tuning of Memory-Augmented Networks

We propose a memory-augmented model architecture with disaggregated computation infrastructure for fiber sensing event recognition. By leveraging geo-distributed computing resources in optical networks, this approach empowers end-users to customize models while ensuring dual privacy protection.

DiffOptics: A Conditional Diffusion Model for Fiber Optics Sensing Data Imputation

We present a generative AI framework based on a conditional diffusion model for distributed acoustic sensing (DAS) data imputation. The proposed DiffOptics model generates high-quality DAS data of various acoustic events using telecom fiber cables.

1.2 Tb/s/λ Real-Time Mode Division Multiplexing Free Space Optical Communication with Commercial 400G Open and Disaggregated Transponders

We experimentally demonstrate real-time mode division multiplexing free space optical communication with commercial 400G open and disaggregated transponders. As a proof of concept, using HG00, HG10, and HG01 modes, we transmit 1.2 Tb/s/λ (3×1λ×400 Gb/s) error-free.

Real-Time Network-Aware Roadside LiDAR Data Compression

LiDAR technology has emerged as a pivotal tool in Intelligent Transportation Systems (ITS), providing unique capabilities that have significantly transformed roadside traffic applications. However, this transformation comes with a distinct challenge: the immense volume of data generated by LiDAR sensors. These sensors produce vast amounts of data every second, which can overwhelm both private and public 5G networks that are used to connect intersections. This data volume makes it challenging to stream raw sensor data across multiple intersections effectively. This paper proposes an efficient real-time compression method for roadside LiDAR data. Our approach exploits a special characteristic of roadside LiDAR data: the background points are consistent across all frames. We detect these background points and send them to edge servers only once. For each subsequent frame, we filter out the background points and compress only the remaining data. This process achieves significant temporal compression by eliminating redundant background data and substantial spatial compression by focusing only on the filtered points. Our method is sensor-agnostic, exceptionally fast, memory-efficient, and adaptable to varying network conditions. It offers a 2.5x increase in compression rates and improves application-level accuracy by 40% compared to current state-of-the-art methods.
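The background-subtraction idea above can be sketched as follows. This is an illustrative voxel-occupancy heuristic, not the paper's exact algorithm: voxels occupied in nearly every frame are treated as static background, sent to the edge server once, and filtered out of subsequent frames before compression.

```python
import numpy as np

def build_background(frames, grid=0.2, min_occupancy=0.9):
    """Voxelize each frame and keep voxels occupied in nearly all
    frames; these are treated as static background points.
    (Assumed heuristic for illustration only.)"""
    counts = {}
    for pts in frames:
        keys = {tuple(k) for k in np.floor(pts / grid).astype(int)}
        for k in keys:
            counts[k] = counts.get(k, 0) + 1
    thresh = min_occupancy * len(frames)
    return {k for k, c in counts.items() if c >= thresh}

def foreground(points, background, grid=0.2):
    """Drop points that fall in background voxels; only the
    remaining (dynamic) points would be compressed and streamed."""
    keys = np.floor(points / grid).astype(int)
    mask = np.array([tuple(k) not in background for k in keys])
    return points[mask]
```

In a deployment, `build_background` would run once per sensor during a calibration window, after which each frame shrinks to its dynamic points, which is the source of the temporal and spatial compression described above.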

Top 10 Most Legendary College Pranks of All-Time for April Fools’ Day

At NEC Labs America, we celebrate innovation in all forms—even the brilliantly engineered college prank. From MIT’s police car on the Great Dome to Caltech hacking the Rose Bowl, these legendary stunts showcase next-level planning, stealth, and technical genius. Our Top 10 list honors the creativity behind pranks that made history (and headlines). This April Fools’ Day, we salute the hackers, makers, and mischief-makers who prove that brilliance can be hilarious.

A Smart Sensing Grid for Road Traffic Detection Using Terrestrial Optical Networks and Attention-Enhanced Bi-LSTM

We demonstrate the use of existing terrestrial optical networks as a smart sensing grid, employing a bidirectional long short-term memory (Bi-LSTM) model enhanced with an attention mechanism to detect road vehicles. The main idea of our approach is to deploy a fast, accurate, and reliable trained deep learning model in each network element that constantly monitors the state of polarization (SOP) of data signals traveling through the optical line system (OLS). This deployment approach enables the creation of a smart sensing grid that can continuously monitor wide areas and respond with notifications/alerts for road traffic situations. The model is trained on a synthetic dataset and tested on a real dataset obtained from a deployed metropolitan fiber cable in the city of Turin, achieving 99% accuracy on both datasets.
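The attention mechanism over Bi-LSTM outputs can be sketched as a weighted pooling step. The scoring function below (tanh projection against a learned vector `w`) is a common additive-attention form chosen for illustration; the abstract does not specify the paper's exact variant.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, w):
    """Attention pooling over a sequence of Bi-LSTM hidden states.

    H : (T, 2d) array of hidden states (forward + backward concat)
    w : (2d,) learned scoring vector (hypothetical parameter)

    Returns the attention-weighted context vector used for
    classification, plus the attention weights themselves.
    """
    scores = np.tanh(H) @ w          # one relevance score per timestep
    alpha = softmax(scores)          # attention weights, sum to 1
    return alpha @ H, alpha          # context (2d,), weights (T,)
```

The context vector replaces a plain last-hidden-state readout, letting the classifier emphasize the SOP timesteps most indicative of a passing vehicle.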

CAMTUNER: Adaptive Video Analytics Pipelines via Real-time Automated Camera Parameter Tuning

In Video Analytics Pipelines (VAP), Analytics Units (AUs) such as object detection and face recognition operating on remote servers rely heavily on surveillance cameras to capture high-quality video streams to achieve high accuracy. Modern network cameras offer an array of parameters that directly influence video quality. While a few such parameters (e.g., exposure, focus, and white balance) are automatically adjusted by the camera internally, the others are not. We denote such camera parameters as non-automated (NAUTO) parameters. In this work, we first show that in a typical surveillance camera deployment, environmental condition changes can have a significant adverse effect on the accuracy of insights from the AUs, but such adverse impact can potentially be mitigated by dynamically adjusting NAUTO camera parameters in response to changes in environmental conditions. Second, since most end-users lack the skill or understanding to appropriately configure these parameters and typically use a fixed parameter setting, we present CAMTUNER, to our knowledge the first framework that dynamically adapts NAUTO camera parameters to optimize the accuracy of AUs in a VAP in response to adverse changes in environmental conditions. CAMTUNER is based on SARSA reinforcement learning and incorporates two novel components: a lightweight analytics quality estimator and a virtual camera, which together drastically speed up offline RL training. Our controlled experiments and real-world VAP deployment show that compared to a VAP using the default camera setting, CAMTUNER enhances VAP accuracy by detecting 15.9% additional persons and 2.6%-4.2% additional cars (without any false positives) in a large enterprise parking lot. CAMTUNER opens up new avenues for elevating video analytics accuracy, transcending mere incremental enhancements achieved through refining deep-learning models.
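The SARSA core of the approach above can be sketched in a few lines. The mapping of states to environmental conditions, actions to NAUTO parameter changes, and rewards to estimated analytics quality follows the abstract; the tabular form and all names below are illustrative, not CAMTUNER's actual implementation.

```python
import numpy as np

def sarsa_update(Q, s, a, r, s2, a2, alpha=0.1, gamma=0.9):
    """One on-policy SARSA step: move Q(s, a) toward the target
    r + gamma * Q(s', a'), where a' is the action the current
    policy actually takes next. In CAMTUNER's setting, s would
    encode environmental conditions, a a NAUTO parameter change,
    and r the estimated analytics quality."""
    Q[s, a] += alpha * (r + gamma * Q[s2, a2] - Q[s, a])
    return Q

def epsilon_greedy(Q, s, eps, rng):
    """Explore a random parameter change with probability eps,
    otherwise exploit the current best-known action."""
    if rng.random() < eps:
        return int(rng.integers(Q.shape[1]))
    return int(np.argmax(Q[s]))
```

The quality estimator and virtual camera mentioned in the abstract make this loop trainable offline: the reward `r` comes from the estimator rather than a human label, and transitions come from the virtual camera rather than live hardware.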