Optical Line Physical Parameters Calibration in Presence of EDFA Total Power Monitors

A method is proposed to improve QoT-E by calibrating the physical model parameters of an optical link post-installation, using only the total power monitors integrated into the EDFAs and an OSA at the receiver.
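
As a rough illustration of this kind of calibration (an assumed formulation, not the paper's method), link-model parameters can be fitted by least squares so that modeled total powers reproduce the EDFA monitor readings; every quantity and name below is hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical example: calibrate loss parameters so modeled total powers
# match EDFA input-monitor readings after installation. All numbers are
# illustrative placeholders.
span_lengths_km = np.array([80.0, 75.0, 90.0])
launch_total_dbm = np.array([18.0, 18.0, 18.0])   # total power into each span
monitor_dbm = np.array([-0.9, 0.1, -2.6])         # EDFA input total power monitors

def residuals(params):
    # Shared fiber attenuation plus a lumped per-span connector loss.
    atten_db_km, lumped_loss_db = params
    modeled = launch_total_dbm - atten_db_km * span_lengths_km - lumped_loss_db
    return modeled - monitor_dbm

fit = least_squares(residuals, x0=[0.2, 0.5], bounds=([0.15, 0.0], [0.30, 3.0]))
print("calibrated: %.3f dB/km attenuation, %.2f dB lumped loss" % tuple(fit.x))
```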

Multi-Span Optical Power Spectrum Prediction using ML-based EDFA Models and Cascaded Learning

We implement a cascaded learning framework using component-level EDFA models for optical power spectrum prediction in multi-span networks, achieving a mean absolute error of 0.17 dB across 6 spans and 12 EDFAs with only a one-shot measurement.
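
A minimal sketch of the cascading idea (assumed structure, not the paper's implementation): each EDFA is a learned component model that maps an input power spectrum to an output spectrum, and predictions are chained span by span with the span loss applied in between. The stand-in model below is a placeholder for a trained ML model:

```python
import numpy as np

def make_edfa_model(gain_db):
    # Stand-in for a trained component-level EDFA model; here just a flat
    # gain plus a mild spectral tilt.
    def model(spectrum_dbm):
        tilt = np.linspace(-0.3, 0.3, spectrum_dbm.size)
        return spectrum_dbm + gain_db + tilt
    return model

def cascade(input_spectrum_dbm, edfa_models, span_losses_db):
    # Chain predictions: amplify with each EDFA model, then subtract the
    # following span's loss, feeding the result to the next model.
    spectrum = input_spectrum_dbm
    for edfa, loss in zip(edfa_models, span_losses_db):
        spectrum = edfa(spectrum) - loss
    return spectrum

channels = np.full(40, -2.0)   # 40 channels launched at -2 dBm each
models = [make_edfa_model(g) for g in (17.0, 18.0, 16.5)]
print(cascade(channels, models, span_losses_db=[16.0, 17.5, 16.2]))
```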

Modeling the Input Power Dependency in Transceiver BER-OSNR for QoT Estimation

We propose a method to estimate the input power dependency of the transceiver BER-OSNR characteristic. Experiments using commercial transceivers show that the estimation error in Q-factor is less than 0.2 dB.
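
For context, the quoted Q-factor error is conventionally measured via the standard Gaussian-noise relation between pre-FEC BER and Q-factor, Q_dB = 20 log10(sqrt(2) erfcinv(2 BER)); the snippet below just evaluates that relation and is not specific to the proposed estimator:

```python
import numpy as np
from scipy.special import erfcinv

def q_db_from_ber(ber):
    # Standard conversion from pre-FEC BER to Q-factor in dB.
    return 20.0 * np.log10(np.sqrt(2.0) * erfcinv(2.0 * ber))

for ber in (1e-2, 1e-3, 2e-2):
    print(f"BER={ber:.0e} -> Q={q_db_from_ber(ber):.2f} dB")
```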

Inline Fiber Type Identification using In-Service Brillouin Optical Time Domain Analysis

We propose the use of BOTDA as a monitoring tool to identify the fiber types present in deployed hybrid-span fiber cables, to assist in network planning, setting optimal launch powers, and selecting correct modulation formats.

Field Implementation of Fiber Cable Monitoring for Mesh Networks with Optimized Multi-Channel Sensor Placement

We develop a heuristic solution to effectively optimize the placement of multi-channel distributed fiber optic sensors in mesh optical fiber cable networks. The solution has been implemented in a field network to provide continuous monitoring.
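
One simple way such a placement heuristic could be structured (an assumption for illustration, not the paper's exact algorithm) is a greedy covering loop: at each step, pick the candidate sensor site whose channels cover the most still-unmonitored fiber links in the mesh. The site names and link sets below are hypothetical:

```python
def place_sensors(candidate_coverage, links):
    # Greedy set cover: repeatedly choose the site covering the most
    # uncovered links until everything reachable is monitored.
    uncovered, chosen = set(links), []
    while uncovered:
        site = max(candidate_coverage,
                   key=lambda s: len(candidate_coverage[s] & uncovered))
        gained = candidate_coverage[site] & uncovered
        if not gained:
            break                      # remaining links are unreachable
        chosen.append(site)
        uncovered -= gained
    return chosen, uncovered

coverage = {"A": {"l1", "l2"}, "B": {"l2", "l3", "l4"}, "C": {"l4", "l5"}}
sites, unreachable = place_sensors(coverage, ["l1", "l2", "l3", "l4", "l5"])
print(sites, unreachable)   # e.g. ['B', 'A', 'C'] and an empty set
```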

4D Optical Link Tomography: First Field Demonstration of Autonomous Transponder Capable of Distance, Time, Frequency, and Polarization-Resolved Monitoring

We report the first field demonstration of 4D link tomography using a commercial transponder, which offers distance, time, frequency, and polarization-resolved monitoring. This scheme enables autonomous transponders that identify locations of multiple QoT degradation causes.

LARA: Latency-Aware Resource Allocator for Stream Processing Applications

One of the key metrics of interest for stream processing applications is “latency”, which indicates the total time it takes for the application to process and generate insights from streaming input data. For mission-critical video analytics applications like surveillance and monitoring, it is of paramount importance to report an incident as soon as it occurs so that necessary actions can be taken right away. Stream processing applications are typically developed as a chain of microservices and are deployed on container orchestration platforms like Kubernetes. Allocation of system resources like “cpu” and “memory” to individual application microservices has a direct impact on “latency”. Kubernetes does provide ways to allocate these resources, e.g., through fixed resource allocation or through the vertical pod autoscaler (VPA); however, there is no straightforward way in Kubernetes to prioritize “latency” for an end-to-end application pipeline. In this paper, we present LARA, which is specifically designed to improve “latency” of stream processing application pipelines. LARA uses a regression-based technique for resource allocation to individual microservices. We implement four real-world video analytics application pipelines, i.e., license plate recognition, face recognition, human attributes detection and pose detection, and show that compared to fixed allocation, LARA is able to reduce latency by up to ~2.8X and is consistently better than VPA. While reducing latency, LARA is also able to deliver over 2X throughput compared to fixed allocation and is almost always better than VPA.
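
A minimal sketch of regression-guided allocation in this spirit (an assumed form; LARA's actual model and search are more involved): profile a microservice at a few (cpu, memory) settings, fit latency as a function of the resources, and pick the cheapest candidate whose predicted latency meets a target. All profiling numbers below are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Profiled (cpu cores, memory MiB) settings and measured latencies.
profiled = np.array([[0.5, 256], [1.0, 512], [2.0, 512], [2.0, 1024]])
latency_ms = np.array([410.0, 240.0, 150.0, 120.0])

# Fit latency as a function of the resource allocation.
model = LinearRegression().fit(profiled, latency_ms)

# Among candidate allocations, keep those predicted to meet a 250 ms
# target, then choose the one using the fewest cpu cores.
candidates = np.array([[0.5, 512], [1.0, 1024], [1.5, 512], [2.0, 2048]])
pred = model.predict(candidates)
feasible = candidates[pred <= 250.0]
best = feasible[np.argmin(feasible[:, 0])]
print(f"chosen allocation: {best[0]} cpu, {int(best[1])} MiB")
```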

Improving Real-time Data Streams Performance on Autonomous Surface Vehicles using DataX

In the evolving Artificial Intelligence (AI) era, the need for real-time algorithm processing in marine edge environments has become a crucial challenge. Data acquisition, analysis, and processing in complex marine situations require sophisticated and highly efficient platforms. This study optimizes real-time operations on a containerized distributed processing platform designed for Autonomous Surface Vehicles (ASV) to help safeguard the marine environment. The primary objective is to improve the efficiency and speed of data processing by adopting a microservice management system called DataX. DataX leverages containerization to break down operations into modular units, with resource coordination based on Kubernetes. This combination of technologies enables more efficient resource management and real-time operations optimization, contributing significantly to the success of marine missions. The platform was developed to address the unique challenges of managing data and running advanced algorithms in a marine context, which often involves limited connectivity, high latencies, and energy restrictions. Finally, as a proof of concept supporting this platform's evolution, experiments were carried out using a cluster of single-board computers equipped with GPUs, running an AI-based marine litter detection application and demonstrating the tangible benefits of this solution and its suitability for the needs of maritime missions.

NEC Laboratories Advances Material Design with AI-based MateriAI Platform

NEC Laboratories Europe and NEC Laboratories America have developed MateriAI, an AI-based material design platform that accelerates the development of new, environmentally friendly materials. The prototype platform was initially designed to overcome major hurdles in the creation of new synthetic, organic and bio-based polymers, such as rubber and plastics.

Dynamic Causal Discovery in Imitation Learning

Imitation learning, which learns agent policy by mimicking expert demonstrations, has shown promising results in many applications such as medical treatment regimes and self-driving vehicles. However, it remains a difficult task to interpret the control policies learned by the agent. The difficulties mainly come from two aspects: 1) agents in imitation learning are usually implemented as deep neural networks, which are black-box models and lack interpretability; 2) the latent causal mechanism behind agents’ decisions may vary along the trajectory, rather than staying static throughout time steps. To increase transparency and offer better interpretability of the neural agent, we propose to expose its captured knowledge in the form of a directed acyclic causal graph, with nodes being action and state variables and edges denoting the causal relations behind predictions. Furthermore, we design this causal discovery process to be state-dependent, enabling it to model the dynamics in latent causal graphs. Concretely, we conduct causal discovery from the perspective of Granger causality and propose a self-explainable imitation learning framework, CAIL. The proposed framework is composed of three parts: a dynamic causal discovery module, a causality encoding module, and a prediction module, and is trained in an end-to-end manner. After the model is learned, we can obtain the causal relations among state and action variables behind its decisions, exposing the policies it has learned. Experimental results on both synthetic and real-world datasets demonstrate the effectiveness of the proposed CAIL in learning dynamic causal graphs for understanding the decision-making of imitation learning while maintaining high prediction accuracy.
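
A minimal sketch of state-dependent causal masking in this spirit, under assumed shapes and module names (the published framework's three modules and training objective are more elaborate): a graph network maps the current state to per-variable edge strengths, and the policy acts only on the gated state, so a variable gated to zero cannot influence the predicted action, in the style of Granger causality.

```python
import torch
import torch.nn as nn

class DynamicCausalPolicy(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        # Causal discovery module: maps the current state to edge strengths
        # (one gate per state variable), so the latent graph can change
        # along the trajectory instead of staying static.
        self.graph_net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, state_dim)
        )
        # Prediction module: acts only on the gated (causally selected) state.
        self.policy = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, action_dim)
        )

    def forward(self, state):
        edges = torch.sigmoid(self.graph_net(state))  # soft state->action adjacency
        action = self.policy(state * edges)           # gated variables cannot influence
        return action, edges

model = DynamicCausalPolicy(state_dim=8, action_dim=2)
action, edges = model(torch.randn(4, 8))
print(action.shape, edges.shape)   # torch.Size([4, 2]) torch.Size([4, 8])
```

In a training loop, an L1 penalty on the edge gates is one plausible way to encourage sparse, readable graphs, though the paper's actual regularization is not specified here.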