Semi-Automatic Line-System Provisioning with Integrated Physical-Parameter-Aware Methodology: Field Verification and Operational Feasibility

We propose methods and an architecture to measure and optimize newly installed optical fiber line systems semi-automatically using integrated physics-aware technologies in a data center interconnection (DCI) transmission scenario. We demonstrate, for the first time, digital longitudinal monitoring (DLM) and optical line system (OLS) physical parameter calibration working together in real time to extract physical link parameters for transmission performance optimization. Our methodology has the following advantages over traditional design: a minimized footprint at the user site, accurate estimation of the necessary optical network characteristics via complementary telemetry technologies, and the ability to conduct all operational work remotely. The last feature is crucial, as remote operations personnel can implement network design settings for an immediate response to quality of transmission (QoT) degradation and revert them in case of unforeseen problems. We successfully completed semi-automatic line system provisioning over field fiber network facilities at Duke University, Durham, NC. The tasks of parameter retrieval, equipment setting optimization, and system setup/provisioning were completed within one hour. The field operation was supervised by on-duty personnel who accessed the system remotely from different time zones. By comparing Q-factor estimates calculated from the extracted link parameters with measured results from 400G transceivers, we confirmed that our methodology reduces QoT prediction errors compared with the existing design approach.
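The retrieve-optimize-apply-revert workflow described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the field system: the Link class, the toy optimize() rule, and all numeric values are stand-ins.

```python
# Hypothetical sketch of the semi-automatic provisioning workflow: retrieve
# link parameters, derive settings, apply them remotely, and revert to the
# previous settings if the measured QoT misses the target.

class Link:
    def __init__(self):
        self.settings = {"launch_power_dbm": 0.0}

    def measured_q_factor_db(self):
        # Stand-in for the Q-factor reported by the 400G transceivers.
        return 5.0 + self.settings["launch_power_dbm"]

def optimize(params):
    # Toy rule: higher span loss -> higher launch power.
    return params["span_loss_db"] - 16.0

def provision(link, q_target_db=6.0):
    baseline = dict(link.settings)                 # saved for rollback
    params = {"span_loss_db": 18.0}                # would come from DLM/OLS calibration
    link.settings["launch_power_dbm"] = optimize(params)
    if link.measured_q_factor_db() < q_target_db:  # unforeseen QoT degradation
        link.settings = baseline                   # revert remotely
        return False
    return True
```

The rollback path is what makes fully remote operation safe: a failed provisioning attempt leaves the link exactly as it was.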

4D Optical Link Tomography: First Field Demonstration of Autonomous Transponder Capable of Distance, Time, Frequency, and Polarization-Resolved Monitoring

We report the first field demonstration of 4D link tomography using a commercial transponder, which offers distance, time, frequency, and polarization-resolved monitoring. This scheme enables autonomous transponders that identify locations of multiple QoT degradation causes.

Field Implementation of Fiber Cable Monitoring for Mesh Networks with Optimized Multi-Channel Sensor Placement

We develop a heuristic solution to effectively optimize the placement of multi-channel distributed fiber-optic sensors in mesh optical fiber cable networks. The solution has been implemented in a field network to provide continuous monitoring.
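One simple heuristic of this flavor is greedy set cover, shown below as an illustration only (the paper's actual algorithm is not specified here): repeatedly pick the candidate site whose sensor channels monitor the most still-uncovered links.

```python
def place_sensors(links, coverage):
    """Greedy cover: `links` is a set of link ids; `coverage` maps each
    candidate sensor site to the set of links its channels can monitor."""
    uncovered, chosen = set(links), []
    while uncovered:
        # Pick the site covering the most still-unmonitored links.
        site = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        if not coverage[site] & uncovered:
            break                      # remaining links are unreachable
        chosen.append(site)
        uncovered -= coverage[site]
    return chosen
```

For example, with links {1, 2, 3, 4} and sites A covering {1, 2}, B covering {2, 3}, and C covering {3, 4}, the greedy choice is A then C, monitoring every link with two sensors.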

Inline Fiber Type Identification using In-Service Brillouin Optical Time Domain Analysis

We propose the use of BOTDA as a monitoring tool to identify the fiber types present in deployed hybrid-span fiber cables, assisting in network planning, setting optimal launch powers, and selecting appropriate modulation formats.

Modeling the Input Power Dependency in Transceiver BER-OSNR for QoT Estimation

We propose a method to estimate the input power dependency of the transceiver BER-OSNR characteristic. Experiments using commercial transceivers show that estimation error in Q-factor is less than 0.2 dB.
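The 0.2 dB figure refers to Q-factor expressed in dB. A standard conversion from pre-FEC BER to Q-factor under a Gaussian-noise assumption (a textbook formula, not specific to this work) is Q = √2·erfc⁻¹(2·BER), with Q_dB = 20·log₁₀(Q):

```python
import math
from statistics import NormalDist

def ber_to_q_db(ber):
    """Convert a pre-FEC BER to a Q-factor in dB (Gaussian-noise assumption)."""
    # -inv_cdf(ber) equals sqrt(2) * erfcinv(2 * ber) for ber < 0.5.
    q_linear = -NormalDist().inv_cdf(ber)
    return 20.0 * math.log10(q_linear)
```

For instance, a BER of 1e-3 corresponds to roughly 9.8 dB, so a 0.2 dB Q-factor error is a tight bound on the estimated BER-OSNR characteristic.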

Multi-Span Optical Power Spectrum Prediction using ML-based EDFA Models and Cascaded Learning

We implement a cascaded learning framework using component-level EDFA models for optical power spectrum prediction in multi-span networks, achieving a mean absolute error of 0.17 dB across 6 spans and 12 EDFAs with only one-shot measurement.
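The cascading idea can be shown with a toy sketch: each component-level EDFA model maps its input spectrum to an output spectrum, which then feeds the next span. The flat per-stage gain and loss values below are placeholders standing in for trained ML model inference.

```python
import numpy as np

def predict_spectrum(input_dbm, edfa_gains_db, span_losses_db):
    """Cascade component-level models span by span: apply each EDFA model's
    gain, then subtract the following fiber span's loss."""
    spectrum = np.asarray(input_dbm, dtype=float)
    for gain_db, loss_db in zip(edfa_gains_db, span_losses_db):
        spectrum = spectrum + gain_db   # a trained EDFA model would infer this
        spectrum = spectrum - loss_db   # fiber span attenuation
    return spectrum
```

In the real framework each stage's gain is wavelength-dependent and predicted by a learned model from the stage's input spectrum, which is why errors can accumulate across the cascade and why a 0.17 dB mean absolute error over 12 EDFAs is notable.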

Optical Line Physical Parameters Calibration in Presence of EDFA Total Power Monitors

We propose a method to improve QoT estimation (QoT-E) by calibrating the physical model parameters of an optical link post-installation, using only the total power monitors integrated into the EDFAs and an OSA at the receiver.

Optical Network Anomaly Detection and Localization Based on Forward Transmission Sensing and Route Optimization

We introduce a novel scheme to detect and localize optical network anomalies using forward transmission sensing, and develop a heuristic algorithm to optimize the route selection. The performance is verified via simulations and network experiments.

Improving Real-time Data Streams Performance on Autonomous Surface Vehicles using DataX

In the evolving Artificial Intelligence (AI) era, the need for real-time algorithm processing in marine edge environments has become a crucial challenge. Data acquisition, analysis, and processing in complex marine situations require sophisticated and highly efficient platforms. This study optimizes real-time operations on a containerized distributed processing platform designed for Autonomous Surface Vehicles (ASVs) to help safeguard the marine environment. The primary objective is to improve the efficiency and speed of data processing by adopting a microservice management system called DataX. DataX leverages containerization to break down operations into modular units, with resource coordination based on Kubernetes. This combination of technologies enables more efficient resource management and real-time operations optimization, contributing significantly to the success of marine missions. The platform was developed to address the unique challenges of managing data and running advanced algorithms in a marine context, which often involves limited connectivity, high latencies, and energy restrictions. Finally, as a proof of concept, experiments were carried out using a cluster of GPU-equipped single-board computers running an AI-based marine litter detection application, demonstrating the tangible benefits of this solution and its suitability for the needs of maritime missions.

LARA: Latency-Aware Resource Allocator for Stream Processing Applications

One of the key metrics of interest for stream processing applications is “latency”, which indicates the total time it takes for the application to process and generate insights from streaming input data. For mission-critical video analytics applications like surveillance and monitoring, it is of paramount importance to report an incident as soon as it occurs so that necessary actions can be taken right away. Stream processing applications are typically developed as a chain of microservices and are deployed on container orchestration platforms like Kubernetes. Allocation of system resources like “cpu” and “memory” to individual application microservices has a direct impact on “latency”. Kubernetes does provide ways to allocate these resources, e.g., through fixed resource allocation or through the vertical pod autoscaler (VPA); however, there is no straightforward way in Kubernetes to prioritize “latency” for an end-to-end application pipeline. In this paper, we present LARA, which is specifically designed to improve the “latency” of stream processing application pipelines. LARA uses a regression-based technique for resource allocation to individual microservices. We implement four real-world video analytics application pipelines, i.e., license plate recognition, face recognition, human attributes detection, and pose detection, and show that, compared to fixed allocation, LARA is able to reduce latency by up to ~2.8X and is consistently better than VPA. While reducing latency, LARA also delivers over 2X the throughput of fixed allocation and is almost always better than VPA.
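The regression-based allocation idea can be sketched as follows. This is an illustration in the spirit of LARA, not the paper's exact method: it fits a simple latency ≈ a/cpu + b curve per microservice from profiling samples, then greedily hands each CPU increment to the stage with the largest predicted latency reduction.

```python
def fit_inverse_model(samples):
    """Least-squares fit of latency ~ a / cpu + b from (cpu, latency) pairs,
    i.e., ordinary linear regression on x = 1/cpu."""
    xs = [1.0 / cpu for cpu, _ in samples]
    ys = [lat for _, lat in samples]
    n = len(samples)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def allocate(models, total_cpu, step=0.1, floor=0.1):
    """Greedily split a CPU budget across pipeline stages, where `models`
    holds the fitted (a, b) coefficients for each microservice."""
    cpus = [floor] * len(models)
    budget = total_cpu - floor * len(models)
    while budget >= step:
        # Marginal end-to-end latency drop from giving stage i one more step.
        gains = [a / c - a / (c + step) for (a, _), c in zip(models, cpus)]
        i = max(range(len(models)), key=gains.__getitem__)
        cpus[i] += step
        budget -= step
    return cpus
```

Because pipeline latency is the sum over chained microservices, maximizing the per-step marginal drop is a natural greedy objective; stages with steeper latency-vs-cpu curves (larger a) automatically receive more of the budget.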