Entries by NEC Labs America

zeta-QVAE: A Quantum Variational Autoencoder utilizing Regularized Mixed-state Latent Representations

A major challenge in near-term quantum computing is its application to large real-world datasets due to scarce quantum hardware resources. One approach to enabling tractable quantum models for such datasets involves compressing the original data to manageable dimensions while still representing essential information for downstream analysis. In classical machine learning, variational autoencoders (VAEs) facilitate efficient data compression, representation learning for subsequent tasks, and novel data generation. However, no model has been proposed that exactly captures all of these features for direct application to quantum data on quantum computers. Some existing quantum models for data compression lack regularization of latent representations, thus preventing direct use for generation and control of generalization. Others are hybrid models with only some internal quantum components, impeding direct training on quantum data. To bridge this gap, we present a fully quantum framework, ζ-QVAE, which encompasses all the capabilities of classical VAEs and can be directly applied for both classical and quantum data compression. Our model utilizes regularized mixed states to attain optimal latent representations. It accommodates various divergences for reconstruction and regularization. Furthermore, by accommodating mixed states at every stage, it can utilize the full-data density matrix and allow for a “global” training objective. Doing so, in turn, makes efficient optimization possible and has potential implications for private and federated learning. In addition to exploring the theoretical properties of ζ-QVAE, we demonstrate its performance on representative genomics and synthetic data. Our results consistently indicate that ζ-QVAE exhibits similar or better performance compared to matched classical models.
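
To make the mixed-state ingredients above concrete, here is a minimal numerical sketch, not the paper's quantum circuit or training procedure: it only illustrates a VAE-style objective built from density matrices, a reconstruction divergence (Uhlmann fidelity), and a regularizer pulling a latent mixed state toward a prior. The operator choices, dimensions, and the beta weighting are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def random_density_matrix(dim, rank, rng):
    """Sample a random mixed state (density matrix) of the given dimension and rank."""
    a = rng.normal(size=(dim, rank)) + 1j * rng.normal(size=(dim, rank))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

def fidelity(rho, sigma):
    """Uhlmann fidelity F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

def quantum_relative_entropy(rho, sigma):
    """S(rho || sigma) = Tr[rho (log rho - log sigma)], one possible regularizer."""
    return np.real(np.trace(rho @ (logm(rho) - logm(sigma))))

rng = np.random.default_rng(0)
data_state = random_density_matrix(4, 4, rng)      # input mixed state (2 qubits)
reconstruction = random_density_matrix(4, 4, rng)  # stand-in for a decoder output
latent = random_density_matrix(2, 2, rng)          # 1-qubit latent mixed state
prior = np.eye(2) / 2                              # maximally mixed latent prior

# VAE-style objective: reconstruction term plus weighted latent regularization.
beta = 0.1
loss = (1 - fidelity(data_state, reconstruction)) + beta * quantum_relative_entropy(latent, prior)
print(f"toy objective value: {loss:.4f}")
```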

DFA-RAG: Conversational Semantic Router for Large Language Model with Definite Finite Automaton

This paper introduces the retrieval-augmented large language model with Definite Finite Automaton (DFA-RAG), a novel framework designed to enhance the capabilities of conversational agents using large language models (LLMs). Traditional LLMs face challenges in generating regulated and compliant responses in special scenarios with predetermined response guidelines, like emotional support and customer service. Our framework addresses these challenges by embedding a Definite Finite Automaton (DFA), learned from training dialogues, within the LLM. This structured approach acts as a semantic router which enables the LLM to adhere to a deterministic response pathway. The routing is achieved by a retrieval-augmented generation (RAG) strategy, which carefully selects dialogue examples aligned with the current conversational context. The advantages of DFA-RAG include an interpretable structure through human-readable DFA, context-aware retrieval for responses in conversations, and plug-and-play compatibility with existing LLMs. Extensive benchmarks validate DFA-RAG’s effectiveness, indicating its potential as a valuable contribution to conversational agents.
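
A minimal sketch of the routing idea as described above: a DFA routes each turn to a dialogue state, and exemplar dialogues attached to that state are retrieved into the LLM prompt. The states, tag extractor, and exemplar store below are hypothetical placeholders, not the paper's learned automaton.

```python
from typing import Dict, List, Tuple

# DFA transitions learned (here: hand-written) over semantic tags: (state, tag) -> next state.
TRANSITIONS: Dict[Tuple[str, str], str] = {
    ("start", "billing_issue"): "billing",
    ("start", "technical_issue"): "tech_support",
    ("billing", "refund_request"): "refund",
}

# Exemplar dialogues associated with each DFA state (the retrieval pool).
EXEMPLARS: Dict[str, List[str]] = {
    "billing": ["User: I was double charged.\nAgent: I can check that invoice for you..."],
    "tech_support": ["User: The app crashes on launch.\nAgent: Let's try reinstalling..."],
    "refund": ["User: Please refund my last payment.\nAgent: I've started the refund..."],
}

def extract_tag(utterance: str) -> str:
    """Placeholder semantic tagger; the paper derives tags and states from training dialogues."""
    return "billing_issue" if "charge" in utterance.lower() else "technical_issue"

def route_and_build_prompt(state: str, utterance: str) -> Tuple[str, str]:
    """Advance the DFA, then assemble a prompt containing state-specific retrieved exemplars."""
    next_state = TRANSITIONS.get((state, extract_tag(utterance)), state)
    retrieved = "\n---\n".join(EXEMPLARS.get(next_state, []))
    prompt = (
        f"Follow the style of these in-domain dialogues:\n{retrieved}\n\n"
        f"Current user message: {utterance}\nAgent:"
    )
    return next_state, prompt

state, prompt = route_and_build_prompt("start", "I think I was charged twice this month")
print(state)        # routed DFA state, e.g. "billing"
print(prompt[:80])  # prompt handed to the LLM
```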

Low-Latency Passive Thermal Stabilization of a Silicon Micro-Ring Resonator with Self-Heating

Analog photonic information processing can be implemented with low chip area using wavelength-division multiplexed systems, which typically manipulate light using micro-ring resonators. Micro-rings are uniquely susceptible to thermal crosstalk, with negative system performance consequences if not addressed. Existing thermal sensitivity mitigation methods face drawbacks including high complexity, high latency, high digital and analog hardware requirements, and CMOS incompatibility. Here, we demonstrate a passive thermal desensitization mechanism for silicon micro-ring resonators exploiting self-heating resulting from optical absorption. We achieve a 49% reduction in thermal crosstalk sensitivity and 1 μs adaptation latency using a system with no specialized micro-ring engineering, no additional control hardware, and no additional calibration. Our theoretical model indicates the potential for significant further desensitization gains with optimized micro-ring designs. Self-heating desensitization can be combined with active thermal stabilization to achieve both responsiveness and accuracy or applied independently to thermally desensitize large photonic systems for signal processing or neural network inference.
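
A toy self-consistency calculation, not the device model from the paper, illustrating qualitatively why self-heating can offset crosstalk heating: when an ambient temperature rise detunes the ring from the laser, absorbed optical power drops, so the self-heating contribution shrinks and partially cancels the shift. All parameter values below are made-up round numbers.

```python
from scipy.optimize import brentq

DLAM_DT = 0.08     # resonance red-shift per kelvin [nm/K] (illustrative)
HALF_WIDTH = 0.02  # resonance half-width [nm]
P_IN = 1.0         # input optical power [mW]
ABSORB = 0.3       # fraction of on-resonance power absorbed in the ring
R_TH = 2.0         # thermal impedance from absorbed power to ring temperature [K/mW]

def absorbed_power(offset_nm):
    """Lorentzian roll-off of absorbed power as the ring detunes from the laser."""
    return P_IN * ABSORB / (1.0 + (offset_nm / HALF_WIDTH) ** 2)

def resonance_offset(delta_T_ambient, self_heating=True):
    """Steady-state resonance offset from the laser [nm] for a given ambient rise [K]."""
    def residual(offset):
        # self-heating is referenced to its on-resonance value, so only its change matters
        dp = (absorbed_power(offset) - absorbed_power(0.0)) if self_heating else 0.0
        return offset - DLAM_DT * (delta_T_ambient + R_TH * dp)
    return brentq(residual, -1.0, 1.0)

dT = 0.5  # ambient / crosstalk temperature rise [K]
passive = resonance_offset(dT, self_heating=False)
with_sh = resonance_offset(dT, self_heating=True)
print(f"shift without self-heating: {passive * 1e3:.1f} pm")
print(f"shift with self-heating:    {with_sh * 1e3:.1f} pm")
print(f"sensitivity reduction:      {100 * (1 - with_sh / passive):.0f}%")
```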

RIO-CPD: A Riemannian Geometric Method for Correlation-aware Online Change Point Detection

The objective of change point detection is to identify abrupt changes at potentially multiple points within a data sequence. This task is particularly challenging in the online setting where various types of changes can occur, including shifts in both the marginal and joint distributions of the data. This paper tackles these challenges by sequentially tracking correlation matrices on their Riemannian geometry, where the geodesic distances accurately capture the development of correlations. We propose Rio-CPD, a non-parametric correlation-aware online change point detection framework that combines the Riemannian geometry of the manifold of symmetric positive definite matrices with the cumulative sum (CUSUM) statistic for detecting change points. Rio-CPD enhances CUSUM by computing the geodesic distance from present observations to the Fréchet mean of previous observations. With a careful choice of metrics on the underlying Riemannian manifold, Rio-CPD is simple and computationally efficient. Experimental results on both synthetic and real-world datasets demonstrate that Rio-CPD outperforms existing methods in detection accuracy and efficiency.
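
A simplified sketch of the pipeline described above: correlation matrices from sliding windows, a geodesic distance on the SPD manifold from the current matrix to a mean of past matrices, and a CUSUM statistic over those distances. The window size, drift, threshold, and the use of the log-Euclidean mean as a cheap stand-in for the Fréchet mean are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.linalg import expm, inv, logm, sqrtm

def correlation_matrix(window, eps=1e-3):
    """Correlation matrix of a (T, d) window, nudged to be strictly positive definite."""
    c = np.corrcoef(window, rowvar=False)
    return c + eps * np.eye(c.shape[0])

def affine_invariant_distance(a, b):
    """Geodesic distance d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F between SPD matrices."""
    a_inv_sqrt = inv(sqrtm(a))
    return np.linalg.norm(logm(a_inv_sqrt @ b @ a_inv_sqrt), "fro")

def log_euclidean_mean(mats):
    """Cheap surrogate for the Fréchet mean: exp of the average matrix logarithm."""
    return expm(np.mean([logm(m) for m in mats], axis=0))

def cusum_change_points(series, window=50, drift=0.5, threshold=3.0):
    """Flag sample indices where the CUSUM of geodesic distances exceeds the threshold."""
    windows = [series[t - window:t] for t in range(window, len(series), window)]
    corrs = [correlation_matrix(w) for w in windows]
    s, alarms = 0.0, []
    for i in range(1, len(corrs)):
        d = affine_invariant_distance(log_euclidean_mean(corrs[:i]), corrs[i])
        s = max(0.0, s + d - drift)
        if s > threshold:
            alarms.append(i * window)
            s = 0.0
    return alarms

# Synthetic stream whose correlation structure changes halfway through.
rng = np.random.default_rng(0)
cov1, cov2 = np.eye(3), np.array([[1.0, 0.8, 0.0], [0.8, 1.0, 0.0], [0.0, 0.0, 1.0]])
x = np.vstack([rng.multivariate_normal(np.zeros(3), cov1, 500),
               rng.multivariate_normal(np.zeros(3), cov2, 500)])
print(cusum_change_points(x))
```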

Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews

We present an approach for estimating the fraction of text in a large corpus which is likely to be substantially modified or produced by a large language model (LLM). Our maximum likelihood model leverages expert-written and AI-generated reference texts to accurately and efficiently examine real-world LLM use at the corpus level. We apply this approach to a case study of scientific peer review in AI conferences that took place after the release of ChatGPT: ICLR 2024, NeurIPS 2023, CoRL 2023 and EMNLP 2023. Our results suggest that between 6.5% and 16.9% of text submitted as peer reviews to these conferences could have been substantially modified by LLMs, i.e. beyond spell-checking or minor writing updates. The circumstances in which generated text occurs offer insight into user behavior: the estimated fraction of LLM-generated text is higher in reviews that report lower confidence, are submitted close to the deadline, and come from reviewers who are less likely to respond to author rebuttals. We also observe corpus-level trends in generated text which may be too subtle to detect at the individual level, and discuss the implications of such trends for peer review. We call for future interdisciplinary work to examine how LLM use is changing our information and knowledge practices.
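
A minimal sketch of the corpus-level estimator idea: model observed token counts as draws from a mixture (1 − α)·P_human + α·P_AI, with the two reference distributions estimated from expert-written and AI-generated texts, and fit α by maximum likelihood. The tiny vocabulary and counts below are fabricated for illustration and do not reproduce the paper's estimator details.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Reference distributions over a tiny illustrative vocabulary.
vocab = ["commendable", "novel", "thorough", "unclear", "solid"]
p_human = np.array([0.02, 0.20, 0.28, 0.25, 0.25])
p_ai    = np.array([0.30, 0.35, 0.15, 0.10, 0.10])

# Token counts observed in the target corpus whose LLM-modified fraction we want.
corpus_counts = np.array([140, 260, 240, 190, 170])

def neg_log_likelihood(alpha):
    """Negative log-likelihood of the corpus counts under the two-component mixture."""
    mixture = (1 - alpha) * p_human + alpha * p_ai
    return -np.sum(corpus_counts * np.log(mixture))

result = minimize_scalar(neg_log_likelihood, bounds=(0.0, 1.0), method="bounded")
print(f"estimated LLM-modified fraction alpha = {result.x:.3f}")
```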

Multi-terminal Germanium Photodetector in a Commercial Silicon Photonics Platform

We report responsivity measurements of a multi-terminal photodetection device in a commercial silicon photonics platform. The ratio of measured responsivities is found to track the relative terminal lengths. This can serve as a highly compact optoelectronic tap/diplexer. More importantly, complex biasing conditions of similar devices are promising for on-chip reprogrammable optoelectronic responses in conventional silicon photonic platforms, with applications in reprogrammable photonics and neuromorphic photonics.

GNPy Experimental Validation in a C+L Multiband Optical Multiplex Section

The GNPy quality-of-transmission estimator has undergone improvements and rigorous experimental validation in a C+L multiband transmission scenario. This includes the incorporation of a disaggregated generalized Gaussian noise model, along with advanced modeling of amplifiers and transceivers. The recently proposed implementation demonstrates notable enhancements, offering highly accurate GSNR predictions on commercial C+L-band equipment while significantly reducing computation time.
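
To clarify what a GSNR prediction aggregates, here is a small sketch of the generalized SNR bookkeeping a quality-of-transmission estimator performs per channel: ASE noise and nonlinear interference (NLI) are treated as independent impairments and combined as inverse SNR contributions. This is a generic textbook formula, not GNPy's API, and the dB values are made up.

```python
import math

def db_to_lin(x_db: float) -> float:
    return 10 ** (x_db / 10)

def lin_to_db(x: float) -> float:
    return 10 * math.log10(x)

def gsnr_db(snr_ase_db: float, snr_nli_db: float) -> float:
    """Combine ASE-limited and NLI-limited SNRs: 1/GSNR = 1/SNR_ASE + 1/SNR_NLI."""
    inv = 1 / db_to_lin(snr_ase_db) + 1 / db_to_lin(snr_nli_db)
    return lin_to_db(1 / inv)

print(f"GSNR = {gsnr_db(snr_ase_db=24.0, snr_nli_db=27.0):.2f} dB")
```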

LLMs and MI Bring Innovation to Material Development Platforms

In this paper, we introduce efforts to apply large language models (LLMs) to the field of material development. NEC is advancing the development of a material development platform. By applying core technologies corresponding to two material development steps, namely investigation activities (Read paper/patent) and experimental planning (Design Experiment Plan), the platform organizes documents such as papers and reports as well as data such as experimental results, and then presents them to users in an interactive way. In addition, with techniques that reflect physical and chemical principles in machine learning models, AI can learn even with limited data and accurately predict material properties. Through this platform, we aim to achieve the seamless integration of materials informatics (MI) with a vast body of industry literature and knowledge, thereby bringing innovation to the material development process.

Foundational Vision-LLM for AI Linkage and Orchestration

We propose a vision-LLM framework for automating development and deployment of computer vision solutions for pre-defined or custom-defined tasks. A foundational layer is proposed with a code-LLM AI orchestrator self-trained with reinforcement learning to create Python code based on its understanding of a novel user-defined task, together with the APIs, documentation, and usage notes of existing task-specific AI models. Zero-shot abilities in specific domains are obtained through foundational vision-language models trained at a low compute expense leveraging existing computer vision models and datasets. An engine layer is proposed which comprises several task-specific vision-language engines that can be compositionally utilized. An application-specific layer is proposed to improve performance in customer-specific scenarios, using novel LLM-guided data augmentation and question decomposition, besides standard fine-tuning tools. We demonstrate a range of applications including visual AI assistance, visual conversation, law enforcement, mobility, medical image reasoning and remote sensing.
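
A schematic sketch of the orchestration idea in the foundational layer: the code-LLM receives the user's task plus documented APIs of available vision engines and emits Python glue code that composes them. The engine registry and `generate_code_with_llm` below are hypothetical placeholders standing in for the self-trained orchestrator, not NEC's implementation.

```python
# Documented APIs of hypothetical task-specific vision engines exposed to the orchestrator.
ENGINE_DOCS = {
    "detect_objects(image) -> list[dict]": "Returns labeled bounding boxes for common objects.",
    "read_text(image) -> str": "OCR over the full image.",
    "answer_question(image, question) -> str": "Visual question answering.",
}

def build_orchestrator_prompt(user_task: str) -> str:
    """Assemble the prompt: available engine APIs plus the novel user-defined task."""
    api_lines = "\n".join(f"- {sig}: {doc}" for sig, doc in ENGINE_DOCS.items())
    return (
        "You write Python that solves the user's vision task by calling only these APIs:\n"
        f"{api_lines}\n\nTask: {user_task}\n"
        "Return a function `solve(image)` and nothing else."
    )

def generate_code_with_llm(prompt: str) -> str:
    """Hypothetical call into a code-LLM; replaced here by a canned illustrative answer."""
    return (
        "def solve(image):\n"
        "    boxes = detect_objects(image)\n"
        "    return [b for b in boxes if b.get('label') == 'license_plate']\n"
    )

prompt = build_orchestrator_prompt("Find all license plates in the frame")
print(generate_code_with_llm(prompt))
```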