Optical Networking and Sensing

Read our Optical Networking and Sensing publications from our team of researchers. We are leading world-class research into the next generation of optical networks and sensing systems that will power ICT-based social solutions for years to come. We advance globally acknowledged innovation through visionary theoretical research, pioneering experiments, and leading technology field trials. Our work not only foresees the future but also transforms it into today’s reality.

Posts

Integrated Optical-to-Optical Gain in a Silicon Photonic Modulator Neuron

Silicon photonic neural networks can achieve higher throughputs and lower latencies than digital electronic alternatives. However, recently reported implementations of such networks have lacked integrated signal gain, instead utilizing off-chip amplifiers or co-processors to complete the signal processing pipeline. Photonic neural networks without gain face substantial limitations in network depth and inter-layer fan-out. Here, we demonstrate a fully integrated silicon photonic modulator neuron capable of up to 14.1 dB gain, achieved by modeling and addressing self-heating behavior in our output PN-junction micro-ring modulator. We use our experimental neuron to emulate a small network subject to high loss, achieving higher accuracy on an automated modulation classification benchmark than an optimal linear system. Our high-gain neuron can serve as a building block, vastly expanding the range of neural network architectures that can be implemented with silicon photonics.

Neuromorphic Photonics-Enabled Near-Field RF Sensing with Residual Signal Recovery and Classification

We present near-field radio-frequency (RF) sensing that uses a microwave photonic canceler (MPC) for residual signal recovery, together with a neuromorphic photonic recurrent neural network (PRNN) chip and FPGA hardware, to implement machine learning for high-bandwidth, low-latency classification.

Scalable Photonic Neurons for High-speed Automatic Modulation Classification

Automatic modulation classification (AMC) is becoming increasingly critical in the context of growing demands for ultra-wideband, low-latency signal intelligence in 5G/6G systems, with photonics addressing the bandwidth and real-time adaptability limitations faced by traditional radio-frequency (RF) electronics. This paper presents the first experimental photonic implementation of AMC, achieved through a fully functional photonic neural network built from scalable microring resonators that co-integrate electro-optic modulation and weighting. This work also represents a system-level deployment of such compact photonic neurons in a real photonic neural network, demonstrating the significant potential of photonic computing for large-scale, complex RF intelligence for next-generation wireless communication systems.

Sound Event Classification meets Data Assimilation with Distributed Fiber-Optic Sensing

Distributed Fiber-Optic Sensing (DFOS) is a promising technique for large-scale acoustic monitoring. However, its wide variation in installation environments and sensor characteristics causes spatial heterogeneity. This heterogeneity makes it difficult to collect representative training data. It also degrades the generalization ability of learning-based models, such as fine-tuning methods, under a limited amount of training data. To address this, we formulate Sound Event Classification (SEC) as data assimilation in an embedding space. Instead of training models, we infer sound event classes by combining pretrained audio embeddings with simulated DFOS signals. Simulated DFOS signals are generated by applying various frequency responses and noise patterns to microphone data, which allows for diverse prior modeling of DFOS conditions. Our method achieves out-of-domain (OOD) robust classification without requiring model training. The proposed method achieved accuracy improvements of 6.42, 14.11, and 3.47 percentage points compared with conventional zero-shot and two types of fine-tuning methods, respectively. By employing the simulator in the framework of data assimilation, the proposed method also enables precise estimation of physical parameters from observed DFOS signals.
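The simulation step described above, shaping microphone recordings with assumed DFOS frequency responses and noise patterns, can be sketched as follows. This is a minimal illustration only: the one-pole low-pass filter shape, the cutoff/SNR parameters, and the function names are assumptions, not the authors' implementation.

```python
import numpy as np

def simulate_dfos(mic_signal, sample_rate, cutoff_hz=800.0, snr_db=10.0, seed=0):
    """Crudely emulate a DFOS channel: a frequency response plus sensor noise.

    mic_signal: 1-D microphone waveform. The low-pass cutoff and SNR are
    hypothetical stand-ins for measured DFOS sensor characteristics.
    """
    rng = np.random.default_rng(seed)
    # Apply a frequency response in the Fourier domain (here: one-pole low-pass).
    freqs = np.fft.rfftfreq(len(mic_signal), d=1.0 / sample_rate)
    response = 1.0 / np.sqrt(1.0 + (freqs / cutoff_hz) ** 2)
    shaped = np.fft.irfft(np.fft.rfft(mic_signal) * response, n=len(mic_signal))
    # Add Gaussian noise at the requested signal-to-noise ratio.
    sig_power = np.mean(shaped ** 2)
    noise_power = sig_power / (10.0 ** (snr_db / 10.0))
    return shaped + rng.normal(0.0, np.sqrt(noise_power), size=len(mic_signal))

# Generate variants under different assumed channel conditions (diverse priors).
t = np.linspace(0.0, 1.0, 8000, endpoint=False)
mic = np.sin(2 * np.pi * 440.0 * t)
variants = [simulate_dfos(mic, 8000, cutoff_hz=c, snr_db=s, seed=i)
            for i, (c, s) in enumerate([(400.0, 5.0), (800.0, 10.0), (1600.0, 20.0)])]
```

Each variant plays the role of one prior sample of DFOS conditions; in the paper's framework these simulated signals are compared against pretrained audio embeddings rather than used to train a model.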

Emerging Integrated Photonic Technologies Leveraging Multimaterial Integration for AI and Datacenter Applications

Since the inception of integrated photonics, multimaterial integration has served as a primary avenue for new technology innovations. Now, with an ever-increasing demand for integrated photonics as a platform for both high-performance links from/within datacenters and AI acceleration, multimaterial integration has begun to play an even more critical role in pushing capabilities beyond their current limits. In this work, we review photonics for AI and datacenter applications, the current landscape of multimaterial integration in photonics, and the ways in which multimaterial integration techniques have been recently utilized to push the performance of modulators on silicon and chip-scale optical frequency combs.

THAT: Token-wise High-frequency Augmentation Transformer for Hyperspectral Pansharpening

Transformer-based methods have demonstrated strong potential in hyperspectral pansharpening by modeling long-range dependencies. However, their effectiveness is often limited by redundant token representations and a lack of multiscale feature modeling. Hyperspectral images exhibit intrinsic spectral priors (e.g., abundance sparsity) and spatial priors (e.g., non-local similarity), which are critical for accurate reconstruction. From a spectral–spatial perspective, Vision Transformers (ViTs) face two major limitations: they struggle to preserve high-frequency components, such as material edges and texture transitions, and suffer from attention dispersion across redundant tokens. These issues stem from the global self-attention mechanism, which tends to dilute high-frequency signals and overlook localized details. To address these challenges, we propose the Token-wise High-frequency Augmentation Transformer (THAT), a novel framework designed to enhance hyperspectral pansharpening through improved high-frequency feature representation and token selection. Specifically, THAT introduces: (1) Pivotal Token Selective Attention (PTSA) to prioritize informative tokens and suppress redundancy; (2) a Multi-level Variance-aware Feed-forward Network (MVFN) to enhance high-frequency detail learning. Experiments on standard benchmarks show that THAT achieves state-of-the-art performance with improved reconstruction quality and efficiency.
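The token-selection idea behind PTSA, keeping only the most informative tokens before attention, can be illustrated with a simple top-k selection. Per-token variance is used here as a hypothetical proxy for "informativeness"; the paper's actual scoring rule is learned and its attention design is not reproduced in this sketch.

```python
import numpy as np

def select_pivotal_tokens(tokens, keep_ratio=0.5):
    """Keep the top-scoring tokens and drop the rest.

    tokens: array of shape (num_tokens, dim). The variance-based score is an
    assumed stand-in; the real PTSA criterion differs from this sketch.
    """
    scores = tokens.var(axis=1)              # one score per token
    k = max(1, int(len(tokens) * keep_ratio))
    keep = np.sort(np.argsort(scores)[-k:])  # indices of top-k tokens, in order
    return tokens[keep], keep

rng = np.random.default_rng(1)
x = rng.normal(size=(16, 32))
x[3] *= 5.0                                  # make one token clearly high-variance
reduced, kept = select_pivotal_tokens(x, keep_ratio=0.25)
```

Attention computed over `reduced` instead of the full token set is cheaper and, per the paper's motivation, less prone to diluting high-frequency detail across redundant tokens.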

Leveraging Digital Twins for All-Photonics Networks-as-a-Service: Enabling Innovation and Efficiency

This tutorial presents an architecture and methods for all-photonics networks-as-a-service in distributed AI data center infrastructures. We discuss server-based coherent transceiver architectures, remote transponder control, rapid end-to-end lightpath provisioning, digital longitudinal monitoring, and line-system calibration, demonstrating their feasibility through field validations.

Computation Stability Tracking Using Data Anchors for Fiber Rayleigh-based Nonlinear Random Projection System

We introduce anchor vectors to monitor Rayleigh-backscattering variability in a fiber-optic computing system that performs nonlinear random projection for image classification. With a ~0.4-s calibration interval, system stability can be maintained with a linear decoder, achieving an average accuracy of 80%-90%.
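The anchor-vector idea, periodically projecting known reference inputs and recalibrating the linear decoder when their responses drift, can be sketched as below. This is a minimal illustration under stated assumptions: the fiber's Rayleigh-backscattering projection is replaced by a fixed random matrix with a tanh nonlinearity, and the similarity threshold is a made-up value.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_projection(x, matrix):
    """Stand-in for the fiber's nonlinear random projection (assumed form)."""
    return np.tanh(matrix @ x)

dim_in, dim_out = 64, 256
projection = rng.normal(size=(dim_out, dim_in))
anchors = rng.normal(size=(8, dim_in))   # known reference inputs ("anchor vectors")
baseline = np.stack([random_projection(a, projection) for a in anchors])

def needs_recalibration(current_projection, threshold=0.99):
    """Compare anchor responses against the baseline via cosine similarity.

    Returns True when any anchor's response has drifted past the (hypothetical)
    threshold, signaling that the linear decoder should be refit.
    """
    now = np.stack([random_projection(a, current_projection) for a in anchors])
    cos = np.sum(now * baseline, axis=1) / (
        np.linalg.norm(now, axis=1) * np.linalg.norm(baseline, axis=1))
    return bool(np.any(cos < threshold))

stable = needs_recalibration(projection)  # unchanged system: no drift
drifted = needs_recalibration(projection + rng.normal(size=projection.shape))
```

In the paper's setting this check would run at the reported ~0.4-s calibration interval, with the linear decoder refit whenever drift is flagged.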

Digital Twins Beyond C-band Using GNPy

GNPy advancements enable accurate and efficient modeling of multiband optical networks for digital twin applications. The developed solvers for Kerr nonlinearity and SRS have been validated through simulation and experimentally in C+L transmission, supporting real-world network planning, design, and performance optimization across disaggregated optical infrastructures.

End-to-End AI for Distributed Fiber Optics Sensing: Eliminating Intermediate Processing via Raw Data Learning

For the first time, we present an end-to-end AI framework for data analysis in distributed fiber optic sensing. The proposed model eliminates the need for optical phase computation and outperforms traditional data processing pipelines, achieving over 96% recognition accuracy on a diverse acoustic dataset.