Link Loss Analysis of Integrated Linear Weight Bank within Silicon Photonic Neural Network

Over the last decade, silicon photonic neural networks have demonstrated the possibility of photonic-enabled machine learning at the edge. These systems enable low-latency, ultra-wideband classification, channel estimation, and many other signal characterization tasks within wireless environments. While these proof-of-concept experiments have yielded promising results, poor device and architectural designs have resulted in sub-optimal bandwidth and noise performance. As a result, the application space of this technology has been limited to GHz bandwidths and high signal-to-noise-ratio input signals. By applying a microwave photonic perspective to these systems, the authors demonstrate high-bandwidth operation while optimizing for RF performance metrics: instantaneous bandwidth, link loss, noise figure, and dynamic range. They explore the extended capabilities enabled by these improved metrics, along with potential architectures for further optimization, and introduce novel architectures and RF analysis for RF-optimized neuromorphic photonic hardware.

NEC Labs America at OFC 2024 San Diego from March 24 – 28

The NEC Labs America team, including Yaowen Li, Andrea D’Amico, Yue-Kai Huang, Philip Ji, Giacomo Borraccini, Ming-Fang Huang, Ezra Ip, Ting Wang, and Yue Tian (not pictured: Fatih Yaman), has arrived in San Diego, CA for OFC24! Our team will be speaking and presenting throughout the event. Read on for an overview of our participation.

Optical Network Anomaly Detection and Localization Based on Forward Transmission Sensing and Route Optimization

We introduce a novel scheme to detect and localize optical network anomalies using forward transmission sensing, and develop a heuristic algorithm to optimize the route selection. The performance is verified via simulations and network experiments.

Optical Line Physical Parameters Calibration in Presence of EDFA Total Power Monitors

We propose a method to improve QoT estimation (QoT-E) by calibrating the physical model parameters of an optical link post-installation, using only the total power monitors integrated into the EDFAs and an OSA at the receiver.

Multi-Span Optical Power Spectrum Prediction using ML-based EDFA Models and Cascaded Learning

We implement a cascaded learning framework using component-level EDFA models for optical power spectrum prediction in multi-span networks, achieving a mean absolute error of 0.17 dB across 6 spans and 12 EDFAs with only one-shot measurement.
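Cascaded learning here chains per-component models so that each EDFA model's predicted output spectrum becomes the input to the next stage. The sketch below illustrates the idea under simplifying assumptions (a generic callable model interface, one EDFA per span, and flat fiber attenuation); it is not the paper's implementation:

```python
import numpy as np

def predict_cascade(input_spectrum_dbm, edfa_models, span_losses_db):
    """Propagate a channel power spectrum (dBm) through alternating
    EDFA models and fiber spans, feeding each prediction forward."""
    spectrum = np.asarray(input_spectrum_dbm, dtype=float)
    for edfa_model, loss_db in zip(edfa_models, span_losses_db):
        spectrum = edfa_model(spectrum)   # ML model: input -> output spectrum
        spectrum = spectrum - loss_db     # flat span attenuation (simplified)
    return spectrum

# toy stand-in for a trained EDFA model: 18 dB gain plus a gain ripple
def toy_edfa(spec):
    ripple = 0.1 * np.sin(np.linspace(0.0, np.pi, spec.size))
    return spec + 18.0 + ripple

out = predict_cascade(np.full(8, -20.0), [toy_edfa] * 2, [18.0] * 2)
```

In the cascaded framework, only the component-level models need training; the multi-span prediction emerges from composing them, which is why a single one-shot measurement at the link input can suffice.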

Modeling the Input Power Dependency in Transceiver BER-OSNR for QoT Estimation

We propose a method to estimate the input power dependency of the transceiver BER-OSNR characteristic. Experiments using commercial transceivers show that estimation error in Q-factor is less than 0.2 dB.

Inline Fiber Type Identification using In-Service Brillouin Optical Time Domain Analysis

We propose the use of BOTDA as a monitoring tool to identify fiber types present in deployed hybrid-span fiber cables, to assist in network planning, setting optimal launch powers, and selecting correct modulation formats.

Field Implementation of Fiber Cable Monitoring for Mesh Networks with Optimized Multi-Channel Sensor Placement

We develop a heuristic solution to effectively optimize the placement of multi-channel distributed fiber optic sensors in mesh optical fiber cable networks. The solution has been implemented in a field network to provide continuous monitoring.

4D Optical Link Tomography: First Field Demonstration of Autonomous Transponder Capable of Distance, Time, Frequency, and Polarization-Resolved Monitoring

We report the first field demonstration of 4D link tomography using a commercial transponder, which offers distance, time, frequency, and polarization-resolved monitoring. This scheme enables autonomous transponders that identify locations of multiple QoT degradation causes.

LARA: Latency-Aware Resource Allocator for Stream Processing Applications

One of the key metrics of interest for stream processing applications is “latency”, which indicates the total time it takes for the application to process and generate insights from streaming input data. For mission-critical video analytics applications like surveillance and monitoring, it is of paramount importance to report an incident as soon as it occurs so that necessary actions can be taken right away. Stream processing applications are typically developed as a chain of microservices and are deployed on container orchestration platforms like Kubernetes. Allocation of system resources like “cpu” and “memory” to individual application microservices has a direct impact on “latency”. Kubernetes does provide ways to allocate these resources, e.g., through fixed resource allocation or through the vertical pod autoscaler (VPA); however, there is no straightforward way in Kubernetes to prioritize “latency” for an end-to-end application pipeline. In this paper, we present LARA, which is specifically designed to improve “latency” of stream processing application pipelines. LARA uses a regression-based technique for resource allocation to individual microservices. We implement four real-world video analytics application pipelines, i.e., license plate recognition, face recognition, human attributes detection, and pose detection, and show that compared to fixed allocation, LARA is able to reduce latency by up to ~2.8X and is consistently better than VPA. While reducing latency, LARA is also able to deliver over 2X throughput compared to fixed allocation and is almost always better than VPA.
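The abstract does not specify LARA's regression model, but the core idea of regression-based allocation can be sketched as follows: profile a microservice at a few CPU allocations, fit a latency-vs-CPU curve, then pick the smallest allocation predicted to meet a latency budget. All function names, the inverse-CPU model form, and the sample numbers below are illustrative assumptions, not LARA's actual method:

```python
import numpy as np

def fit_latency_model(cpu_samples, latency_samples):
    """Fit latency ≈ a / cpu + b: processing time is assumed to scale
    inversely with the CPU share, plus a fixed overhead b."""
    x = 1.0 / np.asarray(cpu_samples, dtype=float)
    a, b = np.polyfit(x, np.asarray(latency_samples, dtype=float), 1)
    return a, b

def min_cpu_for_budget(a, b, latency_budget_ms):
    """Smallest CPU allocation whose predicted latency meets the budget."""
    if latency_budget_ms <= b:
        raise ValueError("budget unreachable under fitted model")
    return a / (latency_budget_ms - b)

# hypothetical profiling data for one microservice: (CPU cores, latency ms)
cpus = [0.5, 1.0, 2.0, 4.0]
lats = [210.0, 110.0, 60.0, 35.0]
a, b = fit_latency_model(cpus, lats)
cpu_needed = min_cpu_for_budget(a, b, latency_budget_ms=80.0)
```

Repeating this per microservice and writing the results back as Kubernetes resource requests would give a latency-aware alternative to fixed allocation or VPA's utilization-driven sizing.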