NEC Labs America at OFC 2024 in San Diego, March 24–28

The NEC Labs America team of Yaowen Li, Andrea D’Amico, Yue-Kai Huang, Philip Ji, Giacomo Borraccini, Ming-Fang Huang, Ezra Ip, Ting Wang, and Yue Tian (not pictured: Fatih Yaman) has arrived in San Diego, CA for OFC24! Our team will be speaking and presenting throughout the event. Read on for an overview of our participation.

NEC Laboratories Advances Material Design with AI-based MateriAI Platform

NEC Laboratories Europe and NEC Laboratories America have developed MateriAI, an AI-based material design platform that accelerates the development of new, environmentally friendly materials. The prototype platform was initially designed to overcome major hurdles in the creation of new synthetic, organic, and bio-based polymers, such as rubber and plastics.

Distributed Fiber Optic Sensing for Fault Localization Caused by Fallen Tree Using Physics-informed ResNet

Falling trees or their limbs can cause power lines to break or sag, sometimes resulting in devastating wildfires. Conventional protections such as circuit breakers, overcurrent relays, and automatic circuit reclosers may clear short circuits caused by tree contact, but they may not detect cases where the conductors remain intact or the conducting path is insufficient to create a full short circuit. In this paper, we introduce a novel, non-intrusive monitoring technique that detects and locates fallen trees even when no short circuit is triggered. The method employs distributed fiber optic sensing (DFOS) to detect vibrations along power distribution lines where corresponding fiber cables are installed. A physics-informed ResNet model then interprets this information to accurately locate fallen trees, setting it apart from the black-box predictions of traditional machine learning algorithms. Our real-scale lab tests demonstrate highly accurate and reliable fallen tree detection and localization.
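The abstract does not spell out the network internals, but as a rough, hypothetical sketch of the idea, the model below maps a DFOS vibration "waterfall" (a time-by-position intensity map) to a discrete location bin along the fiber. All layer sizes and shapes are our assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        # Residual connection: learn a correction on top of the identity map
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

class FallenTreeLocator(nn.Module):
    def __init__(self, n_location_bins=100):
        super().__init__()
        self.stem = nn.Conv2d(1, 32, kernel_size=7, stride=2, padding=3)
        self.blocks = nn.Sequential(ResBlock(32), ResBlock(32))
        self.head = nn.Linear(32, n_location_bins)

    def forward(self, waterfall):  # waterfall: (batch, 1, time, position)
        h = self.blocks(self.stem(waterfall))
        return self.head(h.mean(dim=(2, 3)))  # logits over location bins

model = FallenTreeLocator()
logits = model(torch.randn(2, 1, 256, 512))  # two synthetic waterfall patches
print(logits.shape)  # torch.Size([2, 100])
```

A physics-informed variant would additionally penalize predictions inconsistent with known cable geometry and vibration propagation, which is what distinguishes the approach from a purely black-box classifier.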

Fast WDM Provisioning With Minimum Probe Signals: The First Field Experiments For DC Exchanges

There are increasing requirements for data center interconnection (DCI) services, which use fiber to connect DCs distributed across a metro area and quickly establish high-capacity optical paths between cloud services, mobile edge computing, and users. In such networks, coherent transceivers with various optical frequency ranges, modulators, and modulation formats installed at each connection point must be used to meet service requirements such as fast-varying traffic between user computing resources. This requires technology and architectures that enable users and DCI operators to cooperate in achieving fast provisioning of WDM links and flexible route switching in a short time, independent of the transceiver’s implementation and characteristics. We propose an approach to accurately estimate the end-to-end (EtE) generalized signal-to-noise ratio (GSNR) in a short time, not by measuring the GSNR at the operational route and wavelength of the EtE optical path, but by simply applying a quality-of-transmission probe channel link by link, at a wavelength and modulation format convenient for measurement. Assuming connections between transceivers of various frequency ranges, modulators, and modulation formats, we propose a device software architecture in which the DCI operator optimizes the transmission mode between user transceivers with high accuracy using only common parameters such as the bit error rate. In this paper, we first implement software libraries for fast WDM provisioning and experimentally build different routes to verify the accuracy of this approach. For the operational EtE GSNR measurements, the accuracy estimated from the sum of the per-link measurements was 0.6 dB, and the wavelength-dependent error was about 0.2 dB. Then, using field fibers deployed in the NSF COSMOS testbed, a Linux-based transmission device software architecture, and transceivers with different optical frequency ranges, modulators, and modulation formats, fast WDM provisioning of an optical path was completed within 6 minutes.
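The link-by-link estimate works because noise contributions accumulate approximately additively in the inverse-GSNR (linear) domain, so per-link probe measurements can be combined into an EtE figure. A minimal sketch of that bookkeeping (our illustration, not the paper's software libraries):

```python
import math

def ete_gsnr_db(link_gsnrs_db):
    """Combine per-link GSNR probe measurements (dB) into an end-to-end
    estimate using the standard inverse-GSNR accumulation rule."""
    inv_sum = sum(10 ** (-g / 10) for g in link_gsnrs_db)
    return -10 * math.log10(inv_sum)

# Hypothetical 3-link route: the EtE GSNR is dominated by the weakest links.
print(round(ete_gsnr_db([20.0, 18.5, 21.2]), 2))  # ~15.0 dB
```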

A system-on-chip microwave photonic processor solves dynamic RF interference in real time with femtosecond latency

Radio-frequency interference is a growing concern as wireless technology advances, with potentially life-threatening consequences like interference between radar altimeters and 5G cellular networks. Mobile transceivers mix signals with varying ratios over time, posing challenges for conventional digital signal processing (DSP) due to its high latency. These challenges will worsen as future wireless technologies adopt higher carrier frequencies and data rates. However, conventional DSPs, already on the brink of their clock frequency limit, are expected to offer only marginal speed advancements. This paper introduces a photonic processor to address dynamic interference through blind source separation (BSS). Our system-on-chip processor employs a fully integrated photonic signal pathway in the analogue domain, enabling rapid demixing of received mixtures and recovering the signal-of-interest in under 15 picoseconds. This reduction in latency surpasses electronic counterparts by more than three orders of magnitude. To complement the photonic processor, electronic peripherals based on a field-programmable gate array (FPGA) assess the effectiveness of demixing and continuously update the demixing weights at a rate of up to 305 Hz. The compact setup features precise dithering weight control, an impedance-controlled circuit board, and optical fibre packaging, making it suitable for handheld and mobile scenarios. We experimentally demonstrate the processor’s ability to suppress transmission errors and maintain signal-to-noise ratios in two scenarios, radar altimeters and mobile communications. This work pioneers the real-time adaptability of integrated silicon photonics, enabling online learning and weight adjustments and showcasing practical operational applications for photonic processing.
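For readers unfamiliar with BSS, the numerical toy below mimics what the demixing stage does, in software rather than photonics: a demixing weight vector is swept to separate a signal of interest from an interferer, standing in for the FPGA's dithering weight updates. All signals and the mixing matrix here are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
s = np.vstack([np.sign(rng.standard_normal(4000)),  # signal of interest (BPSK-like)
               rng.standard_normal(4000)])          # interferer
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                          # unknown mixing
x = A @ s                                           # two received mixtures

def excess_kurtosis(y):
    y = (y - y.mean()) / y.std()
    return np.mean(y ** 4) - 3.0                    # 0 for Gaussian signals

# Sweep demixing weights and keep the most non-Gaussian output: a crude
# software stand-in for the hardware's weight-dithering search.
angles = np.linspace(0.0, np.pi, 360)
best_w = max((np.array([np.cos(t), -np.sin(t)]) for t in angles),
             key=lambda w: abs(excess_kurtosis(w @ x)))
print("correlation with true signal:",
      abs(np.corrcoef(best_w @ x, s[0])[0, 1]))     # close to 1.0
```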

Apply for a Summer 2024 Internship

Our exciting internship opportunities for Summer 2024 are now available. We are looking for students pursuing advanced degrees in Computer Science and Electrical Engineering. Internships typically last three months. Working with us offers the opportunity to quickly become part of a project team applying cutting-edge technology to industry-leading concepts. We have opportunities in Data Science & System Security, Integrated Systems, Machine Learning, and Optical Networking & Sensing.

Enabling Cooperative Hybrid Beamforming in TDD-based Distributed MIMO Systems

Distributed massive MIMO networks are envisioned to realize cooperative multi-point transmission in next-generation wireless systems. For efficient cooperative hybrid beamforming (CHBF), the cluster of access points (APs) needs precise estimates of the uplink channel to perform reliable downlink precoding. However, due to the radio frequency (RF) impairments between the transceivers at the two endpoints of the wireless channel, full channel reciprocity does not hold, which degrades CHBF performance unless a suitable reciprocity calibration mechanism is in place. We propose a two-step approach to calibrate any two hybrid nodes in the distributed MIMO system. We then present and utilize the novel concept of reciprocal tandem to propose a low-complexity approach for jointly calibrating the cluster of APs and estimating the downlink channel. Finally, we validate our calibration technique’s effectiveness through numerical simulation.
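To see why calibration is needed at all, consider a toy model (our notation, not the paper's): the over-the-air channel G is reciprocal, but each antenna's transmit and receive RF chains add their own gains and phases, so the measured uplink and downlink channels differ by diagonal factors that calibration must estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ap, n_ue = 4, 2

def rf(n):  # random per-antenna RF gain/phase mismatch (illustrative values)
    return rng.uniform(0.8, 1.2, n) * np.exp(1j * rng.uniform(0, 0.5, n))

G = rng.standard_normal((n_ue, n_ap)) + 1j * rng.standard_normal((n_ue, n_ap))
t_a, r_a, t_u, r_u = rf(n_ap), rf(n_ap), rf(n_ue), rf(n_ue)

H_ul = np.diag(r_a) @ G.T @ np.diag(t_u)   # what the APs can estimate
H_dl = np.diag(r_u) @ G @ np.diag(t_a)     # what downlink precoding actually sees

c_a = t_a / r_a                            # AP-side calibration coefficients
H_cal = H_ul.T @ np.diag(c_a)              # calibrated uplink estimate
# H_dl and H_cal now differ only by a per-user diagonal (r_u / t_u), which
# does not affect the downlink precoding directions:
print(np.allclose(H_dl, np.diag(r_u / t_u) @ H_cal))  # True
```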

Differentiable JPEG: The Devil is in The Details

JPEG remains one of the most widespread lossy image coding methods. However, the non-differentiable nature of JPEG restricts its application in deep learning pipelines. Several differentiable approximations of JPEG have recently been proposed to address this issue. This paper conducts a comprehensive review of existing diff. JPEG approaches and identifies critical details that have been missed by previous methods. To this end, we propose a novel diff. JPEG approach that overcomes previous limitations. Our approach is differentiable w.r.t. the input image, the JPEG quality, the quantization tables, and the color conversion parameters. We evaluate the forward and backward performance of our diff. JPEG approach against existing methods. Additionally, extensive ablations are performed to evaluate crucial design choices. Our proposed diff. JPEG resembles the (non-diff.) reference implementation best, significantly surpassing the recent-best diff. approach by 3.47 dB (PSNR) on average. For strong compression rates, we can even improve PSNR by 9.51 dB. Our diff. JPEG also yields strong adversarial attack results, demonstrating effective gradient approximation. Our code is available at https://github.com/necla-ml/Diff-JPEG.
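The central non-differentiable step in JPEG is rounding the quantized DCT coefficients. One common workaround, shown below purely as an illustration (the paper's exact surrogate may differ), is a straight-through estimator: round in the forward pass, pass gradients through unchanged. This keeps the pipeline differentiable with respect to both the coefficients and the quantization table.

```python
import torch

def ste_round(x: torch.Tensor) -> torch.Tensor:
    # Forward: true rounding. Backward: identity gradient (straight-through).
    return x + (torch.round(x) - x).detach()

def quantize(dct_coeffs: torch.Tensor, q_table: torch.Tensor) -> torch.Tensor:
    # Differentiable stand-in for JPEG's quantize/dequantize step.
    return ste_round(dct_coeffs / q_table) * q_table

coeffs = (torch.randn(8, 8) * 50).requires_grad_()  # one synthetic DCT block
q = torch.full((8, 8), 16.0, requires_grad=True)    # toy quantization table
quantize(coeffs, q).sum().backward()
print(coeffs.grad.abs().sum() > 0, q.grad.abs().sum() > 0)  # gradients flow
```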

Improving Language-Based Object Detection by Explicit Generation of Negative Examples

The recent progress in language-based object detection with an open vocabulary can be largely attributed to finding better ways of leveraging large-scale data with free-form text annotations. Training from image captions with grounded bounding boxes (ground truth or pseudo-labeled) enables models to reason over an open vocabulary and understand object descriptions in free-form text. In this work, we investigate the role of negative captions in training such language-based object detectors. While the fixed label space in standard object detection datasets clearly defines the set of negative classes, the free-form text used for language-based detection makes the space of potential negatives virtually infinite. We propose to leverage external knowledge bases and large language models to automatically generate contradictions for each caption in the training dataset. Furthermore, we leverage image-generation tools to create negative images corresponding to the contradicting captions. Such automatically generated data constitute hard negative examples for language-based detection and improve the model when used in training. Our experiments demonstrate the benefits of the automatically generated training data on two complex benchmarks.
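As a hypothetical illustration of the data-generation step (the prompt wording and the `query_llm` interface are placeholders of ours, not from the paper), a contradiction generator might look like this:

```python
def query_llm(prompt: str) -> str:
    # Placeholder for whatever LLM client is used; not an API from the paper.
    raise NotImplementedError("plug in your LLM client here")

def make_negative_caption(caption: str) -> str:
    prompt = (
        "Rewrite the caption so that it contradicts the original image, "
        "changing exactly one object category or attribute while keeping "
        f"the sentence fluent.\nCaption: {caption}\nContradiction:"
    )
    return query_llm(prompt)

# e.g. "a red car parked next to a tree" -> "a blue truck parked next to a tree"
# The contradiction serves as a hard negative caption; an image-generation tool
# can additionally render it to obtain a matching negative image.
```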

Prompt-based Domain Discrimination for Multi-source Time Series Domain Adaptation

Time series domain adaptation stands as a pivotal and intricate challenge with diverse applications, including but not limited to human activity recognition, sleep stage classification, and machine fault diagnosis. Despite the numerous domain adaptation techniques proposed to tackle this complex problem, their primary focus has been on the common representations of time series data. This concentration might inadvertently lead to the oversight of valuable domain-specific information originating from different source domains. To bridge this gap, we introduce POND, a novel prompt-based deep learning model designed explicitly for multi-source time series domain adaptation. POND is tailored to address two significant challenges: 1) the unavailability of a quantitative relationship between meta-data information and time series distributions, and 2) the dearth of exploration into extracting domain-specific meta-data information. In this paper, we present an instance-level prompt generator and a fidelity loss mechanism to facilitate the faithful learning of meta-data information. Additionally, we propose a domain discrimination technique to discern domain-specific meta-data information from multiple source domains. Our approach involves a simple yet effective meta-learning algorithm to optimize the objective efficiently, and we further augment the model’s performance by incorporating the Mixture of Experts (MoE) technique. The efficacy and robustness of POND are extensively validated through experiments across 50 scenarios encompassing five datasets, demonstrating that POND outperforms state-of-the-art methods by up to 66% on the F1-score.
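The abstract leaves the architecture unspecified; as a rough sketch of what an instance-level prompt generator could look like (the GRU encoder and all sizes are our assumptions for illustration), the module below maps each input series to a short sequence of prompt vectors that can be prepended to the embedded series before the encoder:

```python
import torch
import torch.nn as nn

class InstancePromptGenerator(nn.Module):
    def __init__(self, in_channels, prompt_len=4, d_model=64):
        super().__init__()
        self.encoder = nn.GRU(in_channels, d_model, batch_first=True)
        self.to_prompt = nn.Linear(d_model, prompt_len * d_model)
        self.prompt_len, self.d_model = prompt_len, d_model

    def forward(self, x):                      # x: (batch, time, channels)
        _, h = self.encoder(x)                 # h: (1, batch, d_model)
        prompt = self.to_prompt(h[-1])         # one prompt per instance
        return prompt.view(-1, self.prompt_len, self.d_model)

gen = InstancePromptGenerator(in_channels=9)   # e.g. 9 IMU channels for HAR
prompts = gen(torch.randn(8, 128, 9))          # batch of 8 time-series windows
print(prompts.shape)                           # torch.Size([8, 4, 64])
```

Conditioning downstream layers on such per-instance prompts is what lets the model retain domain-specific meta-data information instead of collapsing everything into a single shared representation.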