Columbia University is an Ivy League institution in New York City, renowned for its research in medicine, data science, climate, and public policy. Its global network and research ecosystem advance solutions to complex societal challenges. NEC Labs America and Columbia University explore distributed AI, secure collaborative learning, and scalable machine learning infrastructure. Our research supports high-performance computing environments and real-time AI systems. Please read about our latest news and collaborative publications with Columbia University.

Posts

Toward Intelligent and Efficient Optical Networks: Performance Modeling, Co-existence, and Field Trials

Optical transmission networks require intelligent traffic adaptation and efficient spectrum usage. We present scalable machine learning (ML) methods for network performance modeling, and field trials of the coexistence of distributed fiber sensing and classic optical network traffic.

Field Verification of Fault Localization with Integrated Physical-Parameter-Aware Methodology

We report the first field verification of fault localization in an optical line system (OLS) by integrating digital longitudinal monitoring and OLS calibration, highlighting changes in physical metrics and parameters. The use cases shown are degradation of fiber span loss and of optical amplifier noise figure.

Inline Fiber Type Identification using In-Service Brillouin Optical Time Domain Analysis

We propose the use of Brillouin optical time domain analysis (BOTDA) as an in-service monitoring tool to identify the fiber types present in deployed hybrid-span fiber cables, assisting in network planning, setting optimal launch powers, and selecting correct modulation formats.

Modeling the Input Power Dependency in Transceiver BER-OSNR for QoT Estimation

We propose a method to estimate the input power dependency of the transceiver BER-OSNR characteristic. Experiments using commercial transceivers show that the estimation error in Q-factor is less than 0.2 dB.
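
The BER-OSNR characterization above relies on the standard conversion between a measured pre-FEC bit error rate and the Q-factor in dB. As a minimal sketch of that conversion (the function name is illustrative, not from the paper), assuming an ideal binary decision where BER = Φ(−Q):

```python
from statistics import NormalDist
import math

def ber_to_q_db(ber: float) -> float:
    """Convert a pre-FEC bit error rate to a Q-factor in dB.

    For an ideal binary decision, BER = Phi(-Q), so the linear
    Q-factor is the inverse normal CDF at (1 - BER), and
    Q_dB = 20 * log10(Q).
    """
    q_linear = NormalDist().inv_cdf(1.0 - ber)
    return 20.0 * math.log10(q_linear)

# A BER of 1e-3 corresponds to roughly 9.8 dB Q-factor.
print(round(ber_to_q_db(1e-3), 1))
```

A sub-0.2 dB estimation error, as reported above, is thus a tight tolerance on this dB scale.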

Field Trial of Coexistence and Simultaneous Switching of Real-Time Fiber Sensing and Coherent 400 GbE in a Dense Urban Environment

Recent advances in optical fiber sensing have enabled telecom network operators to monitor their fiber infrastructure while generating new revenue in various application scenarios, including data center interconnect, public safety, smart cities, and seismic monitoring. However, given the high utilization of fiber networks for data transmission, it is undesirable to allocate dedicated fiber strands solely for sensing purposes. Therefore, it is crucial to ensure the reliable coexistence of fiber sensing and communication signals that co-propagate on the same fiber. In this paper, we conduct field trials in a reconfigurable optical add-drop multiplexer (ROADM) network enabled by the PAWR COSMOS testbed, utilizing metro area fibers in Manhattan, New York City. We verify the coexistence of real-time constant-amplitude distributed acoustic sensing (DAS), coherent 400 GbE, and analog radio-over-fiber (ARoF) signals. Measurement results obtained from the field trial demonstrate that the quality of transmission (QoT) of the coherent 400 GbE signal remains unaffected during co-propagation with DAS and ARoF signals in adjacent dense wavelength-division multiplexing (DWDM) channels. In addition, we present a use case of this coexistence system supporting preemptive DAS-informed optical path switching before link failure.

Fast WDM Provisioning With Minimum Probe Signals: The First Field Experiments For DC Exchanges

There are increasing requirements for data center interconnection (DCI) services, which use fiber to connect DCs distributed across a metro area and quickly establish high-capacity optical paths between cloud services, mobile edge computing, and users. In such networks, coherent transceivers with various optical frequency ranges, modulators, and modulation formats installed at each connection point must be used to meet service requirements such as fast-varying traffic requests between user computing resources. This requires technology and architectures that enable users and DCI operators to cooperate to achieve fast provisioning of WDM links and flexible route switching in a short time, independent of the transceiver’s implementation and characteristics. We propose an approach to estimate the end-to-end (EtE) generalized signal-to-noise ratio (GSNR) accurately in a short time, not by measuring the GSNR at the operational route and wavelength for the EtE optical path, but by simply applying a quality-of-transmission probe channel link by link, at a wavelength/modulation format convenient for measurement. Assuming connections between transceivers of various frequency ranges, modulators, and modulation formats, we propose a device software architecture in which the DCI operator optimizes the transmission mode between user transceivers with high accuracy using only common parameters such as the bit error rate. In this paper, we first implement software libraries for fast WDM provisioning and experimentally build different routes to verify the accuracy of this approach. For the operational EtE GSNR measurements, the accuracy estimated from the sum of the measurements for each link was 0.6 dB, and the wavelength-dependent error was about 0.2 dB.
Then, using field fibers deployed in the NSF COSMOS testbed, a Linux-based transmission device software architecture, and transceivers with different optical frequency ranges, modulators, and modulation formats, the fast WDM provisioning of an optical path was completed within 6 min.
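
The link-by-link probing idea rests on the fact that noise contributions accumulate along the path, so per-link GSNR measurements can be combined into an end-to-end estimate by summing inverse linear GSNRs. A minimal sketch of that combination step (function names are illustrative, not the paper's software libraries):

```python
import math

def db_to_lin(x_db: float) -> float:
    return 10 ** (x_db / 10)

def lin_to_db(x: float) -> float:
    return 10 * math.log10(x)

def end_to_end_gsnr_db(per_link_gsnr_db):
    """Combine per-link GSNR probe measurements into an EtE estimate.

    Noise powers add along the path, so inverse linear GSNRs sum:
    1 / GSNR_EtE = sum_i (1 / GSNR_i).
    """
    inv_sum = sum(1.0 / db_to_lin(g) for g in per_link_gsnr_db)
    return lin_to_db(1.0 / inv_sum)

# Three links probed independently at a convenient wavelength:
print(round(end_to_end_gsnr_db([22.0, 24.5, 21.0]), 2))
```

The end-to-end value always falls below the weakest link's GSNR, which is why small per-link measurement errors (the ~0.2 dB wavelength-dependent error above) translate into a bounded EtE error.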

First Field Demonstration of Automatic WDM Optical Path Provisioning over Alien Access Links for Data Center Exchange

We demonstrated automatic provisioning of optical paths, in under six minutes, over field-deployed alien access links and WDM carrier links using commercial-grade ROADMs, whitebox mux-ponders, and multi-vendor transceivers. With channel probing, transfer learning, and a Gaussian noise model, we achieved an estimation error (Q-factor) below 0.7 dB.

Field Trial of Coexistence and Simultaneous Switching of Real-time Fiber Sensing and 400GbE Supporting DCI and 5G Mobile Services

The coexistence of real-time constant-amplitude distributed acoustic sensing (DAS) and 400GbE signals is verified by a field trial over metro fibers, demonstrating no QoT impact during co-propagation and supporting preemptive DAS-informed optical path switching before link failure.

Convolutional Transformer based Dual Discriminator Generative Adversarial Networks for Video Anomaly Detection

Detecting abnormal activities in real-world surveillance videos is an important yet challenging task, as prior knowledge about video anomalies is usually limited or unavailable. Although many approaches have been developed to resolve this problem, few of them can capture normal spatio-temporal patterns effectively and efficiently. Moreover, existing works seldom explicitly consider the local consistency at frame level and the global coherence of temporal dynamics in video sequences. To this end, we propose Convolutional Transformer based Dual Discriminator Generative Adversarial Networks (CT-D2GAN) to perform unsupervised video anomaly detection. Specifically, we first present a convolutional transformer to perform future frame prediction. It contains three key components, i.e., a convolutional encoder to capture the spatial information of the input video clips, a temporal self-attention module to encode the temporal dynamics, and a convolutional decoder to integrate spatio-temporal features and predict the future frame. Next, a dual discriminator based adversarial training procedure, which jointly considers an image discriminator that can maintain the local consistency at frame level and a video discriminator that can enforce the global coherence of temporal dynamics, is employed to enhance the future frame prediction. Finally, the prediction error is used to identify abnormal video frames. Thorough empirical studies on three public video anomaly detection datasets, i.e., UCSD Ped2, CUHK Avenue, and Shanghai Tech Campus, demonstrate the effectiveness of the proposed adversarial spatio-temporal modeling framework.
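
The final step described above, scoring frames by prediction error, can be sketched independently of the GAN and transformer machinery. A deliberately minimal illustration of that scoring rule only (frames reduced to flat pixel lists; no model is trained here):

```python
def frame_anomaly_score(predicted, actual):
    """Per-frame anomaly score from future-frame prediction error.

    Frames are flat lists of pixel intensities; the score is the mean
    squared error between the predicted and observed frame. A higher
    error means the frame deviates from the learned normal patterns.
    """
    assert len(predicted) == len(actual)
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)

# A well-predicted (normal) frame scores far lower than a poorly
# predicted (anomalous) one:
normal = frame_anomaly_score([0.5, 0.5, 0.5], [0.52, 0.49, 0.50])
abnormal = frame_anomaly_score([0.5, 0.5, 0.5], [0.90, 0.10, 0.80])
print(normal < abnormal)  # True
```

In practice the score is thresholded (often after per-video normalization) to flag abnormal frames.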

Countering Malicious Processes with Process-DNS Association

Modern malware and cyber attacks depend heavily on DNS services to make their campaigns reliable and difficult to track. Monitoring network DNS activities and blocking suspicious domains has proven to be an effective technique in countering such attacks. However, recent successful campaigns reveal that attackers adapt by using seemingly benign domains and public web storage services to hide malicious activity. Also, the recent support for encrypted DNS queries provides attackers with easier means to hide malicious traffic from network-based DNS monitoring. We propose PDNS, an end-point DNS monitoring system based on DNS sensors deployed at each host in a network, along with a centralized backend analysis server. To detect such attacks, PDNS expands the monitored DNS activity context and examines the process context that triggered that activity. Specifically, each deployed PDNS sensor matches the domain name and IP address related to the DNS query with the process ID, binary signature, loaded DLLs, and code signing information of the program that initiated it. We evaluate PDNS on a DNS activity dataset collected from 126 enterprise hosts and with data from multiple malware sources. Using ML classifiers including a deep neural network (DNN), our results outperform most previous works with high detection accuracy: a true positive rate of 98.55% and a low false positive rate of 0.03%.
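
The core idea, joining each DNS query with the context of the process that issued it, can be illustrated with a small sketch. The record fields and the toy detection rule below are assumptions for illustration, not PDNS's actual schema or classifier (which uses ML models over these features):

```python
from dataclasses import dataclass

@dataclass
class ProcessDnsEvent:
    """One sensor record associating a DNS query with the process
    that issued it. Field names are illustrative only."""
    domain: str
    resolved_ip: str
    pid: int
    binary_sha256: str
    signed: bool          # code-signing status of the initiating binary
    loaded_dlls: tuple    # DLLs loaded by the process at query time

def suspicious(event: ProcessDnsEvent, domain_allowlist: set) -> bool:
    # Toy stand-in for the ML classifier: flag an unsigned binary
    # querying a domain outside the allowlist.
    return (not event.signed) and (event.domain not in domain_allowlist)

evt = ProcessDnsEvent("updates.example.net", "203.0.113.7", 4242,
                      "ab" * 32, signed=False,
                      loaded_dlls=("ws2_32.dll", "dnsapi.dll"))
print(suspicious(evt, {"example.com"}))  # True
```

The value of the process-DNS association is exactly this extra context: a network-only monitor sees just the query, while the sensor can also weigh who asked and how trustworthy that binary is.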