A Quantum Variational Autoencoder Utilizing Regularized Mixed-state Latent Representations

A major challenge in near-term quantum computing is its application to large real-world datasets due to scarce quantum hardware resources. One approach to enabling tractable quantum models for such datasets involves finding low-dimensional representations that preserve essential information for downstream analysis. In classical machine learning, variational autoencoders (VAEs) facilitate efficient data compression, representation learning for subsequent tasks, and novel data generation. However, no quantum model has been proposed that exactly captures all of these features for direct application to quantum data on quantum computers. Some existing quantum models for data compression lack regularization of latent representations, thus preventing direct use for generation and control of generalization. Others are hybrid models with only some internal quantum components, impeding direct training on quantum data. To address this, we present a fully quantum framework, ρ-QVAE, which encompasses all the capabilities of classical VAEs and can be directly applied to map both classical and quantum data to a lower-dimensional space, while effectively reconstructing much of the original state from it. Our model utilizes regularized mixed states to attain optimal latent representations. It accommodates various divergences for reconstruction and regularization. Furthermore, by accommodating mixed states at every stage, it can utilize the full data density matrix and allow for a training objective defined on probabilistic mixtures of input data. Doing so, in turn, makes efficient optimization possible and has potential implications for private and federated learning. In addition to exploring the theoretical properties of ρ-QVAE, we demonstrate its performance on representative genomics and synthetic data. Our results indicate that ρ-QVAE consistently learns representations that better utilize the capacity of the latent space and exhibits similar or better performance compared with matched classical models.
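To make the structure concrete, a VAE-style objective over density matrices can be sketched as a reconstruction divergence plus a regularization divergence against a prior latent state. This is an illustrative form only; the channel symbols $\mathcal{E}_\phi$, $\mathcal{D}_\theta$ and the weight $\beta$ are notation introduced here, not necessarily the paper's exact loss:

```latex
\mathcal{L}(\theta,\phi)
  = D_{\mathrm{rec}}\!\bigl(\rho_{\mathrm{in}},\;
      \mathcal{D}_\theta\bigl(\mathcal{E}_\phi(\rho_{\mathrm{in}})\bigr)\bigr)
  + \beta\, D_{\mathrm{reg}}\!\bigl(\mathcal{E}_\phi(\rho_{\mathrm{in}}),\;
      \rho_{\mathrm{prior}}\bigr)
```

Here $\mathcal{E}_\phi$ and $\mathcal{D}_\theta$ are parameterized encoding and decoding channels, and $D_{\mathrm{rec}}$, $D_{\mathrm{reg}}$ can be instantiated with different quantum divergences (e.g., quantum relative entropy or fidelity-based distances), consistent with the abstract's statement that the model accommodates various divergences.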

TSLA: Unified Time Series and Language Model

Real-world time series data often require analysis or interpretation from domain experts. Some tasks, like time series question answering, involve both time series and natural language questions, which single-modality language models struggle to handle jointly. To this end, we present TSLA (Time Series Language Model), a framework designed to equip a language model with an understanding of time series data for multi-modality tasks. TSLA comprises three key components. (1) The Time Series Tokenizer learns to encode time series data as discrete tokens, making them more manageable for language models. (2) Joint (Pre-)Training on task-agnostic time series and text data integrates time series tokens and text tokens to model the interplay between time series and language concepts. (3) Multi-task Instruction Tuning fine-tunes the pretrained TSLA for various downstream tasks relevant to user interests. For evaluation, we applied TSLA to time series data from human motions on four tasks: time series captioning, time series question answering, text-based time series synthesis, and text-based time series continuation. The results demonstrate TSLA’s effectiveness in handling multiple time series analysis tasks, pointing the way for future research.
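As a rough illustration of component (1), the sketch below quantizes fixed-length patches of a series against a codebook so the series becomes a sequence of discrete tokens a language model can consume. The codebook here is random and all names are illustrative; TSLA learns its tokenizer rather than using this scheme verbatim.

```python
import numpy as np

# Illustrative sketch of a time-series tokenizer: split a series into
# fixed-length patches and map each patch to the id of its nearest
# codebook entry, yielding discrete tokens. The codebook is random here;
# a learned tokenizer (as in TSLA) would train it from data.

PATCH, CODEBOOK_SIZE, DIM = 8, 256, 8
rng = np.random.default_rng(0)
codebook = rng.normal(size=(CODEBOOK_SIZE, DIM))  # stand-in for learned codes

def tokenize_series(series: np.ndarray) -> list[int]:
    """Split the series into patches and map each to its nearest code id."""
    n = len(series) // PATCH * PATCH
    patches = series[:n].reshape(-1, PATCH)
    dists = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1).tolist()

series = np.sin(np.linspace(0, 8 * np.pi, 128))
ts_tokens = [f"<ts_{i}>" for i in tokenize_series(series)]
# Interleaving with text tokens for joint (pre-)training, e.g.:
prompt = "Describe the motion: " + " ".join(ts_tokens)
print(prompt[:80], "...")
```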

Graph Neural Networks, Explained: Our Role in the Future of AI

NEC Laboratories America (NECLA) is advancing the frontier of Graph Neural Networks (GNNs), a transformative AI technology that processes complex, interconnected data. Through innovations like PTDNet for robust learning, novel frameworks for explainability, StrGNN for anomaly detection in dynamic graphs, and GERDQ for calibration with out-of-distribution nodes, NECLA is addressing critical challenges in GNN development. These breakthroughs have real-world implications in fields such as cybersecurity, bioinformatics, and recommendation systems, positioning NECLA as a leader in the evolution of graph-based AI.

Trainingless Adaptation of Pretrained Models for Environmental Sound Classification

Deep neural network (DNN)-based models for environmental sound classification are not robust to domains outside the training data, that is, out-of-distribution or unseen data. To adapt pretrained models to an unseen domain, methods such as fine-tuning and transfer learning are used, but they require rich computing resources, e.g., graphics processing units (GPUs). As state-of-the-art models become ever more computationally resource-intensive, it is increasingly difficult for those with limited computing resources to keep up with research trends. In this paper, we propose a trainingless adaptation method for pretrained environmental sound classification models. To introduce it, we first propose an operation that recovers time–frequency-ish (TF-ish) structures in intermediate layers of DNN models. We then propose a trainingless frequency filtering method for domain adaptation, which requires none of the widely used gradient-based optimization. Experiments conducted on the ESC-50 dataset show that the proposed adaptation method improves classification accuracy by 20.40 percentage points compared with the conventional method.
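The sketch below conveys the flavor of such trainingless adaptation: a per-frequency-bin gain, derived from source and target statistics, is applied to a frozen intermediate activation with no gradient step. The filter design, statistics, and shapes are assumptions for illustration, not the paper's exact TF-ish recovery operation.

```python
import numpy as np

# Illustrative trainingless frequency filtering: treat an intermediate
# activation map as having a (channels, frequency, time)-like layout and
# rescale frequency bins that differ between domains. The pretrained
# weights stay frozen; no gradient-based optimization is performed.

def frequency_filter(activation: np.ndarray, gain: np.ndarray) -> np.ndarray:
    """activation: (channels, freq_bins, time); gain: (freq_bins,)."""
    return activation * gain[None, :, None]

# Derive a per-bin gain from (stand-in) average source/target spectra.
src_spectrum = np.ones(64)                  # stand-in source-domain statistics
tgt_spectrum = np.linspace(0.5, 1.0, 64)    # stand-in target-domain statistics
gain = np.clip(src_spectrum / np.maximum(tgt_spectrum, 1e-6), 0.0, 4.0)

act = np.random.rand(32, 64, 100)           # fake intermediate activation
adapted = frequency_filter(act, gain)
print(adapted.shape)  # (32, 64, 100)
```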

Text-guided Device-realistic Sound Generation for Fiber-based Sound Event Classification

Recent advancements in unique acoustic sensing devices and large-scale audio recognition models have unlocked new possibilities for environmental sound monitoring and detection. However, applying pretrained models to non-conventional acoustic sensors results in performance degradation due to domain shifts caused by differences in frequency response and noise characteristics from the original training data. In this study, we introduce a text-guided framework for generating new datasets to efficiently retrain models for these non-conventional sensors. Our approach integrates text-conditional audio generative models with two additional steps: (1) selecting audio samples based on text input to match the desired sounds, and (2) applying domain-transfer techniques that use recorded impulse responses and background noise to simulate the characteristics of the sensors. We demonstrate this process by generating emulated signals for fiber-optic Distributed Acoustic Sensors (DAS), creating datasets similar to the recorded ESC-50 dataset. The generated signals are then used to train a classifier that outperforms few-shot learning approaches in environmental sound classification.
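A minimal sketch of the domain-transfer step (2) follows, assuming an impulse response and background noise have already been recorded from the target sensor. The SNR mixing rule and all array contents here are illustrative stand-ins.

```python
import numpy as np
from scipy.signal import fftconvolve

# Illustrative domain transfer: emulate how a non-conventional sensor
# (e.g., fiber-optic DAS) would record a generated clip by convolving it
# with a recorded impulse response and mixing in recorded background noise
# at a chosen SNR. The signals are faked here; in practice the IR and
# noise come from measurements of the target sensor.

def to_sensor_domain(audio, impulse_response, noise, snr_db=10.0):
    wet = fftconvolve(audio, impulse_response, mode="full")[: len(audio)]
    wet /= np.max(np.abs(wet)) + 1e-9
    sig_pow = np.mean(wet ** 2)
    noise = noise[: len(wet)]
    noise_pow = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(sig_pow / (noise_pow * 10 ** (snr_db / 10)))
    return wet + scale * noise

sr = 16000
audio = np.random.randn(sr)           # stand-in for a generated audio clip
ir = np.exp(-np.linspace(0, 8, 512))  # stand-in sensor impulse response
noise = 0.1 * np.random.randn(sr)     # stand-in recorded background noise
emulated = to_sensor_domain(audio, ir, noise)
print(emulated.shape)  # (16000,)
```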

CLAP-S: Support Set Based Adaptation for Downstream Fiber-optic Acoustic Recognition

Contrastive Language-Audio Pretraining (CLAP) models have demonstrated unprecedented performance in various acoustic signal recognition tasks. Fiber-optic acoustic recognition is one of the most important downstream tasks and plays a significant role in environmental sensing. Adapting CLAP for fiber-optic acoustic recognition has become an active research area. Because the fiber-optic sensor is a non-conventional acoustic sensor, fiber-optic acoustic recognition presents a challenging, domain-specific, low-shot deployment environment with significant domain shifts due to unique frequency-response and noise characteristics. To address these challenges, we propose a support-set-based adaptation method, CLAP-S, which linearly interpolates a CLAP Adapter with the Support Set, leveraging both implicit knowledge through fine-tuning and explicit knowledge retrieved from memory for cross-domain generalization. Experimental results show that our method delivers competitive performance on both laboratory-recorded fiber-optic ESC-50 datasets and a real-world fiber-optic gunshot-firework dataset. Our research also provides valuable insights for other downstream acoustic recognition tasks.
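The sketch below illustrates one plausible reading of the interpolation, in the spirit of cache-based adapters: logits from a fine-tuned adapter are blended with similarity-weighted logits retrieved from the labeled support set. The weight alpha, sharpness beta, and all shapes are assumptions, not CLAP-S's exact formulation.

```python
import numpy as np

# Illustrative support-set interpolation: combine adapter logits (implicit
# knowledge from fine-tuning) with retrieval-based logits built from cosine
# similarity to labeled support embeddings (explicit knowledge from memory).

def blended_logits(query_emb, adapter_logits, support_embs, support_labels,
                   num_classes, alpha=0.5, beta=5.0):
    q = query_emb / np.linalg.norm(query_emb)
    s = support_embs / np.linalg.norm(support_embs, axis=1, keepdims=True)
    sims = s @ q                                   # (num_support,)
    affinity = np.exp(beta * (sims - 1.0))         # sharpened affinities
    one_hot = np.eye(num_classes)[support_labels]  # (num_support, num_classes)
    support_logits = affinity @ one_hot            # retrieval-based logits
    return alpha * adapter_logits + (1 - alpha) * support_logits

rng = np.random.default_rng(1)
logits = blended_logits(rng.normal(size=512), rng.normal(size=10),
                        rng.normal(size=(50, 512)),
                        rng.integers(0, 10, size=50), num_classes=10)
print(logits.argmax())
```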

Shaping the Future with Responsible AI, Collaboration, and Disruption

Chris White, President of NEC Laboratories America, reflects on the lab’s mission to build responsible, human-centered technology—from AI to streetscape innovation—that tackles real-world challenges. In recent keynotes and interviews, he’s emphasized the power of collaboration, the importance of designing AI as a tool that empowers (not replaces), and the discipline required to scale truly disruptive ideas. He’s also shared thoughts on using digital tools for sustainability, such as optimizing global water systems, and the need for cooperative decision-making in complex environments like supply chains. Through it all, he reminds us: real innovation isn’t about flashy tech—it’s about solving meaningful problems, at scale, with intention and integrity.

LLM-based Distributed Code Generation and Cost-Efficient Execution in the Cloud

The advancement of Generative Artificial Intelligence (AI), particularly Large Language Models (LLMs), is reshaping the software industry by automating code generation. Many LLM-driven distributed processing systems rely on serial code generation constrained by predefined libraries, limiting flexibility and adaptability. While some approaches enhance performance through parallel execution or optimize edge-cloud distributed processing for specific domains, they often overlook the cost implications of deployment, restricting scalability and economic feasibility across diverse cloud environments. This paper presents DiCE-C, a system that eliminates these constraints by starting directly from a natural language query. DiCE-C dynamically identifies available tools at runtime, programmatically refines LLM prompts, and employs a stepwise approach—first generating serial code and then transforming it into distributed code. This adaptive methodology enables efficient distributed execution without dependence on specific libraries. By leveraging high-level parallelism at the Application Programming Interface (API) level and managing API execution as services within a Kubernetes-based runtime, DiCE-C reduces idle GPU time and facilitates the use of smaller, cost-effective GPU instances. Experiments with a vision-based insurance application demonstrate that DiCE-C reduces cloud operational costs by up to 72% when using smaller GPUs (A6000 and A4000 GPU machines vs. A100 GPU machine) and by 32% when using identical GPUs (A100 GPU machines). This flexible and cost-efficient approach makes DiCE-C a scalable solution for deploying LLM-generated vision applications in cloud environments.
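The sketch below shows the serial-to-distributed transformation in miniature: two independent remote calls, as a first generation pass would write them serially, are rewritten to run concurrently. The service stubs are hypothetical; DiCE-C discovers real tools at runtime and dispatches to services in a Kubernetes-based runtime rather than in-process coroutines.

```python
import asyncio

# Illustrative serial-to-distributed rewrite of LLM-generated code:
# independent API calls written one after another are transformed so they
# execute concurrently, keeping each backing service busy instead of idle.
# Both service functions below are hypothetical stand-ins.

async def detect_damage(image_id: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for a remote vision API call
    return f"damage({image_id})"

async def estimate_cost(image_id: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for another remote API call
    return f"cost({image_id})"

async def serial(image_id):       # what the first generation step emits
    return [await detect_damage(image_id), await estimate_cost(image_id)]

async def distributed(image_id):  # what the second step transforms it into
    return list(await asyncio.gather(detect_damage(image_id),
                                     estimate_cost(image_id)))

print(asyncio.run(distributed("claim_001")))
```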

Variable Temperature and Pump Power Semi-Analytical Gain Model for GFF-Embedded Single-Stage EDFAs

A simple and accurate semi-analytical model for predicting the gain of a single-stage erbium-doped fiber amplifier (EDFA) embedded with an unknown gain-flattening filter (GFF) is proposed, enabling the precise system equalization that is crucial for submarine systems.
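For context, a common semi-analytical starting point for single-stage EDFA gain is the standard average-inversion model below, with the embedded filter entering as a wavelength-dependent loss; this is a textbook form, not necessarily the exact parameterization proposed here:

```latex
G_{\mathrm{dB}}(\lambda)
  = L\,\bigl[\bar{n}\,\bigl(g^{*}(\lambda) + \alpha(\lambda)\bigr)
             - \alpha(\lambda)\bigr]
  - F_{\mathrm{GFF}}(\lambda)
```

Here $g^{*}(\lambda)$ and $\alpha(\lambda)$ are the fully inverted gain and absorption coefficients per unit length, $L$ is the doped-fiber length, $\bar{n}$ is the average inversion (through which temperature and pump power enter), and $F_{\mathrm{GFF}}(\lambda)$ is the unknown filter's loss.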

Underwater Acoustic OFDM Transmission over Optical Fiber with Distributed Acoustic Sensing

We demonstrate fiber-optic acoustic data transmission using distributed acoustic sensing technology in an underwater environment. An acoustic orthogonal frequency-division multiplexing (OFDM) signal was transmitted through a fiber-optic cable deployed in a standard 40-meter-scale underwater testbed.
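For reference, the sketch below generates one baseband OFDM symbol of the kind used in such a demonstration: QPSK-mapped subcarriers, an IFFT, and a cyclic prefix. The subcarrier count and prefix length are illustrative, not the experiment's actual parameters.

```python
import numpy as np

# Illustrative OFDM symbol generation: map bit pairs to QPSK subcarriers,
# take an IFFT to get the time-domain symbol, and prepend a cyclic prefix.

N_SC, CP = 64, 16
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * N_SC)
# QPSK: each pair of bits -> one complex symbol per subcarrier
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)
time_domain = np.fft.ifft(symbols) * np.sqrt(N_SC)
ofdm_symbol = np.concatenate([time_domain[-CP:], time_domain])  # cyclic prefix
# Real-valued waveform for an acoustic transducer (real part taken here;
# a real system would use Hermitian-symmetric subcarriers or upconversion).
waveform = ofdm_symbol.real
print(waveform.shape)  # (80,)
```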