The University of Connecticut (UConn), founded in 1881, is a national leader among public research universities, with a network of campuses across Connecticut. It is dedicated to fostering a culture of innovation, with more than 32,000 students exploring critical questions in classrooms, labs, and the community. We have collaborated with the University of Connecticut on research involving secure AI and anomaly detection. Our work has led to the development of improved models for identifying patterns in sensitive or imbalanced datasets across cybersecurity and system health applications. Please read about our latest news and collaborative publications with the University of Connecticut.

Posts

TimeXL: Explainable Multi-modal Time Series Prediction with LLM-in-the-Loop

Time series analysis provides essential insights into real-world system dynamics and informs downstream decision-making, yet most existing methods overlook the rich contextual signals present in auxiliary modalities. To bridge this gap, we introduce TimeXL, a multi-modal prediction framework that integrates a prototype-based time series encoder with three collaborating Large Language Models (LLMs) to deliver more accurate predictions and interpretable explanations. First, a multi-modal prototype-based encoder processes both time series and textual inputs to generate preliminary forecasts alongside case-based rationales. These outputs then feed into a prediction LLM, which refines the forecasts by reasoning over the encoder's predictions and explanations. Next, a reflection LLM compares the predicted values against the ground truth, identifying textual inconsistencies or noise. Guided by this feedback, a refinement LLM iteratively enhances text quality and triggers encoder retraining. This closed-loop workflow of prediction, reflection, and refinement continuously boosts the framework's performance and interpretability. Empirical evaluations on four real-world datasets demonstrate that TimeXL achieves up to an 8.9% improvement in AUC and produces human-centric, multi-modal explanations, highlighting the power of LLM-driven reasoning for time series prediction.
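
To make the closed-loop structure concrete, here is a minimal Python sketch of the predict-reflect-refine cycle. Every component is a hypothetical stand-in (a toy encoder and identity LLM calls); the actual encoder architecture and LLM prompting are described in the paper, not reproduced here.

```python
# Minimal sketch of TimeXL's predict-reflect-refine loop; all parts are stand-ins.

def prototype_encoder(series, text):
    """Stand-in multi-modal encoder: preliminary forecast + case-based rationale."""
    forecast = sum(series[-3:]) / 3  # toy forecast: mean of recent values
    rationale = f"closest prototype near {forecast:.2f}; context: {text[:40]}"
    return forecast, rationale

def prediction_llm(forecast, rationale):
    # Would prompt an LLM to refine the forecast using the rationale.
    return forecast  # identity stand-in

def reflection_llm(prediction, ground_truth):
    # Would ask an LLM to flag textual inconsistencies or noise given the error.
    error = abs(prediction - ground_truth)
    return f"error={error:.2f}; re-check context relevance" if error > 0.5 else None

def refinement_llm(text, critique):
    # Would ask an LLM to rewrite the context text guided by the critique.
    return text + " [refined]"

series, text, y_true = [1.0, 1.2, 1.1, 1.3], "maintenance notice issued", 2.0
for _ in range(3):  # closed loop: stop when the reflection step has no critique
    forecast, rationale = prototype_encoder(series, text)
    prediction = prediction_llm(forecast, rationale)
    critique = reflection_llm(prediction, y_true)
    if critique is None:
        break
    text = refinement_llm(text, critique)  # improved text would trigger retraining
print(prediction, text)
```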

Multi-Modal View Enhanced Large Vision Models for Long-Term Time Series Forecasting

Time series, typically represented as numerical sequences, can also be transformed into images and texts, offering multi-modal views (MMVs) of the same underlying signal. These MMVs can reveal complementary patterns and enable the use of powerful pre-trained large models, such as large vision models (LVMs), for long-term time series forecasting (LTSF). However, as we identify in this work, the state-of-the-art (SOTA) LVM-based forecaster exhibits an inductive bias toward "forecasting periods". To harness this bias, we propose DMMV, a decomposition-based multi-modal view framework that combines trend-seasonal decomposition with a novel backcast-residual-based adaptive decomposition to integrate MMVs for LTSF. Comparative evaluations against 14 SOTA models across diverse datasets show that DMMV outperforms single-view and existing multi-modal baselines, achieving the best mean squared error (MSE) on 6 out of 8 benchmark datasets. The code for this paper is available at: https://github.com/D2I-Group/dmmv.
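
As an illustration of the decomposition idea, the sketch below performs a simple moving-average trend-seasonal split and forecasts each component separately. The seasonal branch, which in DMMV would be an LVM operating on an image view of the series, is replaced here with naive last-period repetition; all sizes are toy assumptions.

```python
# Sketch of decomposition-based forecasting under simplified assumptions.
import numpy as np

def decompose(x, period):
    kernel = np.ones(period) / period
    trend = np.convolve(x, kernel, mode="same")  # moving-average trend
    seasonal = x - trend                         # residual seasonal part
    return trend, seasonal

t = np.arange(200)
x = 0.01 * t + np.sin(2 * np.pi * t / 24)        # toy series with period 24
trend, seasonal = decompose(x, period=24)

horizon = 24
slope = (trend[-1] - trend[-25]) / 24            # linear trend extrapolation
trend_fc = trend[-1] + slope * np.arange(1, horizon + 1)
seasonal_fc = seasonal[-24:]                     # stand-in for the LVM branch,
                                                 # which exploits forecasting periods
forecast = trend_fc + seasonal_fc
print(forecast[:5])
```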

Uni-LoRA: One Vector is All You Need

Low-Rank Adaptation (LoRA) has become the de facto parameter-efficient fine-tuning (PEFT) method for large language models (LLMs) by constraining weight updates to low-rank matrices. Recent works such as Tied-LoRA, VeRA, and VB-LoRA push efficiency further by introducing additional constraints to reduce the trainable parameter space. In this paper, we show that the parameter space reduction strategies employed by these LoRA variants can be formulated within a unified framework, Uni-LoRA, where the LoRA parameter space, flattened as a high-dimensional vector space ℝ^D, can be reconstructed through a projection from a subspace ℝ^d, with d ≪ D. We demonstrate that the fundamental difference among various LoRA methods lies in the choice of the projection matrix P ∈ ℝ^(D×d). Most existing LoRA variants rely on layer-wise or structure-specific projections that limit cross-layer parameter sharing, thereby compromising parameter efficiency. In light of this, we introduce an efficient and theoretically grounded projection matrix that is isometric, enabling global parameter sharing and reducing computation overhead. Furthermore, under the unified view of Uni-LoRA, this design requires only a single trainable vector to reconstruct LoRA parameters for the entire LLM – making Uni-LoRA both a unified framework and a "one-vector-only" solution. Extensive experiments on GLUE, mathematical reasoning, and instruction tuning benchmarks demonstrate that Uni-LoRA achieves state-of-the-art parameter efficiency while outperforming or matching prior approaches in predictive performance.
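
The core construction is easy to sketch: a fixed isometric projection P maps one trainable vector θ_d ∈ ℝ^d to the flattened LoRA parameter vector in ℝ^D. The toy sizes and the dense QR-based construction of P below are illustrative assumptions; the paper's actual projection is an efficient structured one.

```python
# Minimal sketch of the "one trainable vector" idea, with toy dimensions.
import torch

D, d = 10_000, 64                              # toy sizes, d << D
torch.manual_seed(0)
P, _ = torch.linalg.qr(torch.randn(D, d))      # fixed, isometric: P^T P = I_d
theta_d = torch.zeros(d, requires_grad=True)   # the single trainable vector

def lora_params():
    return P @ theta_d                         # reconstruct all D LoRA parameters

# Gradients flow only into theta_d; a real training step would slice
# lora_params() into the per-layer low-rank A/B matrices of the model.
loss = lora_params().pow(2).sum()
loss.backward()
print(theta_d.grad.shape)                      # torch.Size([64])
```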

NeurIPS 2025 in San Diego from November 30th to December 5th, 2025

NEC Laboratories America is heading to San Diego for NeurIPS 2025, where our researchers will present cutting-edge work spanning optimization, AI systems, language modeling, and trustworthy machine learning. This year’s lineup highlights breakthroughs in areas like multi-agent coordination, scalable training, efficient inference, and techniques for detecting LLM-generated text. Together, these contributions reflect our commitment to advancing fundamental science while building real-world solutions that strengthen industry and society. We’re excited to join the global AI community in San Diego from November 30 to December 5 to share our latest innovations.

Harnessing Vision Models for Time Series Analysis: A Survey

Time series analysis has witnessed inspiring development, from traditional autoregressive models and deep learning models to recent Transformers and Large Language Models (LLMs). Efforts to leverage vision models for time series analysis have also been made along the way but are less visible to the community, owing to the predominant research on sequence modeling in this domain. However, the discrepancy between continuous time series and the discrete token space of LLMs, together with the challenges of explicitly modeling the correlations among variates in multivariate time series, has shifted some research attention to the equally successful Large Vision Models (LVMs) and Vision Language Models (VLMs). To fill this gap in the existing literature, this survey discusses the advantages of vision models over LLMs in time series analysis. It provides a comprehensive and in-depth overview of existing methods, with dual views of a detailed taxonomy that answer the key research questions: how to encode time series as images, and how to model the imaged time series for various tasks. Additionally, we address the challenges in the pre- and post-processing steps involved in this framework and outline future directions to further advance time series analysis with vision models.
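
As one concrete example of encoding a time series as an image, the sketch below computes a Gramian Angular Summation Field (GASF), a commonly used imaging method in this line of work. It is offered as an illustration rather than as the survey's prescribed encoding; libraries such as pyts provide production implementations.

```python
# GASF encoding of a time series into a 2-D image, in plain NumPy.
import numpy as np

def gasf(x):
    x = np.asarray(x, dtype=float)
    # rescale to [-1, 1] so arccos is defined
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(x)                           # polar-angle representation
    return np.cos(phi[:, None] + phi[None, :])   # pairwise angular sums

series = np.sin(np.linspace(0, 4 * np.pi, 64))
image = gasf(series)                             # 64x64 image for a vision model
print(image.shape)
```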

Multi-modal Time Series Analysis: A Tutorial and Survey

Multi-modal time series analysis has recently emerged as a prominent research area, driven by the increasing availability of diverse data modalities, such as text, images, and structured tabular data, from real-world sources. However, effective analysis of multi-modal time series is hindered by data heterogeneity, modality gaps, misalignment, and inherent noise. Recent multi-modal time series methods have exploited the multi-modal context via cross-modal interactions based on deep learning, significantly enhancing various downstream tasks. In this tutorial and survey, we present a systematic and up-to-date overview of multi-modal time series datasets and methods. We first state the existing challenges of multi-modal time series analysis and our motivation, with a brief introduction to the preliminaries. Then, we summarize the general pipeline and categorize existing methods through a unified cross-modal interaction framework encompassing fusion, alignment, and transference at different levels (i.e., input, intermediate, output), highlighting key concepts and ideas. We also discuss real-world applications of multi-modal analysis for both standard and spatial time series, tailored to general and specific domains. Finally, we discuss future research directions to help practitioners explore and exploit multi-modal time series. Up-to-date resources are provided in the GitHub repository: https://github.com/UConn-DSIS/Multi-modal-Time-Series-Analysis.
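
To ground the cross-modal interaction framework, here is a minimal PyTorch sketch of intermediate-level fusion, in which time series tokens attend to text tokens via cross-attention. Dimensions and module choices are illustrative assumptions, not a specific method from the survey.

```python
# Intermediate-level cross-modal fusion via cross-attention (illustrative).
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, ts_emb, text_emb):
        # query: time series tokens; key/value: text tokens
        fused, _ = self.attn(ts_emb, text_emb, text_emb)
        return self.norm(ts_emb + fused)         # residual fusion

ts_emb = torch.randn(2, 96, 64)    # (batch, time steps, dim)
text_emb = torch.randn(2, 20, 64)  # (batch, text tokens, dim)
print(CrossModalFusion()(ts_emb, text_emb).shape)  # torch.Size([2, 96, 64])
```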

National Intern Day at NEC Laboratories America: Celebrating the Next Generation of Innovators

On National Intern Day, NEC Laboratories America celebrates the bright minds shaping tomorrow’s technology. Each summer, interns from top universities work side-by-side with our researchers on real-world challenges in AI, cybersecurity, data science, and more. From groundbreaking research to team-building events, our interns contribute fresh ideas and bold thinking that power NEC’s innovation engine.

Distributed Fiber Optic Sensing for Fault Localization Caused by Fallen Tree Using Physics-informed ResNet

Falling trees or their limbs can cause power lines to break or sag, sometimes resulting in devastating wildfires. Conventional protections such as circuit breakers, overcurrent relays, and automatic circuit reclosers may clear short circuits caused by tree contact, but they may not detect cases where the conductors remain intact or the conducting path is insufficient to create a full short circuit. In this paper, we introduce a novel, non-intrusive monitoring technique that detects and locates fallen trees even when no short circuit is triggered. This method employs distributed fiber optic sensing (DFOS) to detect vibrations along the power distribution line where corresponding fiber cables are installed. A physics-informed ResNet model then interprets this information to accurately locate fallen trees, setting it apart from the black-box predictions of traditional machine learning algorithms. Our real-scale lab tests demonstrate highly accurate and reliable fallen tree detection and localization.
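
For intuition, the sketch below shows a generic 1-D residual network that maps a DFOS vibration trace to a position along the fiber. This is not the paper's model: the physics-informed component, which would constrain the network with known vibration-propagation behavior, is not reproduced here, and all sizes are toy assumptions.

```python
# Generic 1-D ResNet for localization from a vibration trace (illustrative).
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch), nn.ReLU(),
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))      # residual connection

model = nn.Sequential(
    nn.Conv1d(1, 32, 7, padding=3), nn.ReLU(),
    ResBlock1D(32), ResBlock1D(32),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 1),                            # predicted position along fiber
)
trace = torch.randn(4, 1, 1024)                  # (batch, channel, samples)
print(model(trace).shape)                        # torch.Size([4, 1])
```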

FedSkill: Privacy Preserved Interpretable Skill Learning via Imitation

Imitation learning, which replicates experts' skills from their demonstrations, has shown significant success in various decision-making tasks. However, two critical challenges still hinder the deployment of imitation learning techniques in real-world application scenarios. First, existing methods lack the intrinsic interpretability to explicitly explain the underlying rationale of the learned skill, making the learned policy untrustworthy. Second, due to the scarcity of expert demonstrations from each end user (client), learning a policy across different data silos is necessary but challenging in privacy-sensitive applications such as finance and healthcare. To this end, we present FedSkill, a privacy-preserved interpretable skill learning framework that enables global policy learning to incorporate data from different sources and provides explainable interpretations to each local user without violating privacy or data sovereignty. Specifically, our interpretable skill learning model captures the varying patterns in the trajectories of expert demonstrations and extracts prototypical information as skills that provide implicit guidance for policy learning and explicit explanations in the reasoning process. Moreover, we design a novel aggregation mechanism, coupled with the prototype-based skill learning model, to preserve global information utilization and maintain local interpretability under the federated framework. Thorough experiments on three datasets and empirical studies demonstrate that our FedSkill framework not only outperforms state-of-the-art imitation learning methods but also exhibits good interpretability under a federated setting. FedSkill is the first attempt to bridge the gaps among federated learning, interpretable machine learning, and imitation learning.
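
To illustrate the federated aspect, the following sketch shows plain FedAvg-style weighted averaging of client policy parameters while raw demonstrations stay local. FedSkill's actual aggregation mechanism for prototype-based skills is more involved; this is only for intuition, and all names and sizes are hypothetical.

```python
# Plain FedAvg over per-client policy parameter vectors (illustrative).
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of parameter vectors, one per client."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical clients with different amounts of expert demonstrations.
rng = np.random.default_rng(0)
clients = [rng.normal(size=8) for _ in range(3)]   # local policy parameters
sizes = [120, 40, 200]                             # demonstrations per client
global_policy = fedavg(clients, sizes)
print(global_policy.round(3))
```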

A Temperature-Informed Data-Driven Approach for Behind-the-Meter Solar Disaggregation

The lack of visibility into behind-the-meter (BTM) photovoltaic (PV) systems poses many challenges for utilities. By constructing a dictionary of typical load patterns based on daily average temperatures and power consumption, this paper proposes a temperature-informed data-driven approach for disaggregating BTM PV generation. The approach takes advantage of the high correlation between outside temperature and electricity consumption, as well as the high similarity among PV generation profiles. First, in the offline stage, temperature-based fluctuation patterns are extracted from the load demands of customers without PV, for each specific temperature range, to build a temperature-based dictionary (TBD). The dictionary is then used to disaggregate BTM PV generation in real time. The resulting approach is practical and provides operators with useful guidance on using temperature in online operation. The proposed methodology has been verified using real smart meter data from London.
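
Here is a minimal sketch of the dictionary idea under simplified assumptions about binning and estimation: offline, average daily load shapes of non-PV customers are stored per temperature bin; online, the estimated BTM PV generation of a PV customer is the dictionary load for that day's temperature minus the observed net load.

```python
# Temperature-based-dictionary (TBD) disaggregation, heavily simplified.
import numpy as np

def build_tbd(non_pv_loads, day_temps, bins):
    """Offline: mean 24-h load profile per temperature bin."""
    idx = np.digitize(day_temps, bins)
    return {b: non_pv_loads[idx == b].mean(axis=0) for b in np.unique(idx)}

def disaggregate(net_load, temp, tbd, bins):
    """Online: PV generation ~ typical load minus observed net load."""
    b = np.digitize([temp], bins)[0]
    return np.clip(tbd[b] - net_load, 0, None)

rng = np.random.default_rng(1)
loads = rng.uniform(0.5, 2.0, size=(365, 24))      # toy non-PV daily profiles
temps = rng.uniform(-5, 30, size=365)              # daily average temperatures
bins = np.arange(-10, 40, 5)
tbd = build_tbd(loads, temps, bins)

net = rng.uniform(0.0, 1.5, size=24)               # a PV customer's net load
print(disaggregate(net, 18.0, tbd, bins).shape)    # (24,)
```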