Entries by NEC Labs America

Kunal Rao presents SlideCraft: Context-Aware Slides Generation Agent at PICom 2025 on October 21st

Kunal Rao (presenting virtually) will present “SlideCraft: Context-Aware Slides Generation Agent” at the IEEE International Conference on Pervasive Intelligence and Computing (#PICom2025) on Tuesday, Oct 21 (10:30am–12pm JST) | Monday, Oct 20 (9:30–11pm ET) in Hokkaido, Japan. SlideCraft uses AI to automatically generate presentation slides from research content, making technical communication faster and context-aware for scientists and professionals.

Sparsh Garg Presents Mapillary Vistas Validation for Fine-Grained Traffic Signs at DataCV 2025

Sparsh Garg, a Senior Associate Researcher in our Media Analytics Department, will present “Mapillary Vistas Validation for Fine-Grained Traffic Signs: A Benchmark Revealing Vision-Language Model Limitations” at the Data Computer Vision (DataCV) 2025 workshop, held as part of ICCV 2025 in Honolulu, Hawai’i, on Sunday, October 19th, from 11:15 to 11:25 am.

NECLA at ECOC 2025: Advancing Optical Communication and Distributed Sensing

NEC Laboratories America (NECLA) was proud to join the European Conference on Optical Communication (ECOC 2025) in Copenhagen, Denmark, from September 28 to October 2. Our researchers presented cutting-edge work in distributed acoustic sensing, AI-driven fiber optics, and optical networking. From generative models for event classification to digital twins and entomological observations using telecom fibers, these sessions highlighted NECLA’s role in shaping the future of intelligent and resilient communication systems. In addition, NECLA’s Fatih Yaman co-organized a workshop on emerging frontiers in optical communication.

Uncertainty Quantification and Reasoning for Reliable AI Seminar at Brigham Young University

Our researcher Xujiang Zhao will present “Uncertainty Quantification and Reasoning for Reliable AI” at Brigham Young University on Thursday, Sept. 25 at 11 a.m. in TMCB 1170. The seminar explores how statistical modeling and reasoning frameworks can strengthen trustworthy AI, making systems more robust and transparent in high-stakes applications like healthcare and autonomous systems. Attendees will gain insights into how uncertainty quantification is shaping the next generation of responsible AI.

DiscussLLM: Teaching Large Language Models When to Speak

Large Language Models (LLMs) have demonstrated remarkable capabilities in understanding and generating human-like text, yet they largely operate as reactive agents, responding only when directly prompted. This passivity creates an “awareness gap,” limiting their potential as truly collaborative partners in dynamic human discussions. We introduce DiscussLLM, a framework designed to bridge this gap by training models to proactively decide not just what to say but, critically, when to speak. Our primary contribution is a scalable two-stage data generation pipeline that synthesizes a large-scale dataset of realistic multi-turn human discussions. Each discussion is annotated with one of five intervention types (e.g., Factual Correction, Concept Definition) and contains an explicit conversational trigger where an AI intervention adds value. By training models to predict a special silent token when no intervention is needed, they learn to remain quiet until a helpful contribution can be made. We explore two architectural baselines: an integrated end-to-end model and a decoupled classifier-generator system optimized for low-latency inference. We evaluate these models on their ability to accurately time interventions and generate helpful responses, paving the way for more situationally aware and proactive conversational AI.
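The decoupled classifier-generator baseline can be pictured as a two-step loop: a lightweight classifier first predicts either the silent token or an intervention type, and the generator is invoked only for non-silent predictions. The sketch below is purely illustrative; `classify_turn`, `generate_reply`, and the label strings are stand-ins for the trained models and dataset labels, not code from the paper.

```python
# Hypothetical sketch of the decoupled classifier-generator flow described above.
# The "<silent>" token and intervention labels follow the paper's description;
# classify_turn() and generate_reply() are stubs standing in for trained models.

INTERVENTION_TYPES = [
    "<silent>",             # no intervention needed: the model stays quiet
    "factual_correction",
    "concept_definition",
    # ... the remaining intervention types from the dataset
]

def classify_turn(conversation: list[str]) -> str:
    """Low-latency classifier: decide whether (and how) to intervene.

    A real system would run a small fine-tuned classifier here; this stub
    only intervenes when it spots an obviously wrong arithmetic claim.
    """
    last_turn = conversation[-1].lower()
    if "2 + 2 = 5" in last_turn:
        return "factual_correction"
    return "<silent>"

def generate_reply(conversation: list[str], intervention: str) -> str:
    """Generator: invoked only when the classifier predicts a non-silent label."""
    return f"[{intervention}] Actually, 2 + 2 = 4."

def maybe_speak(conversation: list[str]) -> str | None:
    """Return a reply only when an intervention adds value; otherwise stay silent."""
    intervention = classify_turn(conversation)
    if intervention == "<silent>":
        return None
    return generate_reply(conversation, intervention)

if __name__ == "__main__":
    discussion = [
        "Alice: Let's split the bill evenly.",
        "Bob: Sure, 2 + 2 = 5, so we each owe five dollars.",
    ]
    print(maybe_speak(discussion))  # prints a factual-correction reply
```

Keeping the classifier separate from the generator means the expensive generation step runs only on the small fraction of turns where an intervention is predicted, which is the low-latency motivation stated in the abstract.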

Bifröst: Peer-to-peer Load-balancing for Function Execution in Agentic AI Systems

Agentic AI systems rely on Large Language Models (LLMs) to execute complex tasks by invoking external functions. The efficiency of these systems depends on how well function execution is managed, especially under heterogeneous and high-variance workloads, where function execution times can range from milliseconds to several seconds. Traditional load-balancing techniques, such as round-robin, least-loaded, and Peak-EWMA (used in Linkerd), struggle in such settings: round-robin ignores load imbalance, least-loaded reacts slowly to rapid workload shifts, and Peak-EWMA relies on latency tracking, which is ineffective for workloads with high execution time variability. In this paper, we introduce Bifröst, a peer-to-peer load-balancing mechanism that distributes function requests based on real-time active request count rather than latency estimates. Instead of relying on centralized load-balancers or client-side decisions, Bifröst enables function-serving pods to dynamically distribute load by comparing queue lengths and offloading requests accordingly. This avoids unnecessary overhead while ensuring better responsiveness under high-variance workloads. Our evaluation on open-vocabulary object detection, multi-modal understanding, and code generation workloads shows that Bifröst improves function completion time by up to 20% when processing 13,700 requests from 137 AI agents on a 32-node Kubernetes cluster, outperforming both OpenFaaS and OpenFaaS with Linkerd. In an AI-driven insurance claims processing workflow, Bifröst achieves up to 25% faster execution.
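As an illustration of the queue-length comparison described above, the sketch below lets a pod forward an incoming request to its least-loaded peer only when that peer's active-request count is lower than its own by a margin, and otherwise executes locally to avoid forwarding overhead. The `Pod` class, `OFFLOAD_MARGIN`, and `route()` are hypothetical names for exposition, not Bifröst's actual implementation.

```python
# Illustrative sketch (not NEC's implementation) of queue-length-based
# peer-to-peer offloading: compare real-time active request counts and
# offload only when a peer is clearly less busy.
from dataclasses import dataclass, field

OFFLOAD_MARGIN = 2  # hypothetical threshold before offloading is worthwhile

@dataclass
class Pod:
    name: str
    active_requests: int = 0
    peers: list["Pod"] = field(default_factory=list)

    def route(self, request_id: str) -> "Pod":
        """Pick where this request should run based on current queue lengths."""
        least_loaded = min(self.peers, key=lambda p: p.active_requests, default=self)
        # Offload only if the peer is meaningfully less busy; otherwise run
        # locally to avoid the overhead of forwarding the request.
        target = (
            least_loaded
            if least_loaded.active_requests + OFFLOAD_MARGIN <= self.active_requests
            else self
        )
        target.active_requests += 1
        print(f"{request_id}: {self.name} -> {target.name}")
        return target

if __name__ == "__main__":
    a = Pod("pod-a", active_requests=5)
    b = Pod("pod-b", active_requests=1)
    c = Pod("pod-c")
    a.peers, b.peers, c.peers = [b, c], [a, c], [a, b]
    a.route("req-1")  # forwarded to pod-c, whose queue is empty
    b.route("req-2")  # stays on pod-b, already among the least loaded
```

Because the decision uses active request counts rather than latency estimates, it remains meaningful even when individual function executions range from milliseconds to seconds, which is the failure mode the paper attributes to Peak-EWMA-style balancers.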

Summer Highlights at NEC Labs America: Teamwork, Innovation, and Fun

This summer at NEC Laboratories America was full of energy, teamwork, and connection. From volleyball games in San Jose and TopGolf with colleagues from Princeton to kayaking adventures, a campus picnic, and celebrating our incredible interns, our teams came together to learn, laugh, and grow. Here’s a look back at the highlights that made Summer 2025 so memorable.

Harnessing Vision Models for Time Series Analysis: A Survey

Time series analysis has witnessed an inspiring progression from traditional autoregressive models and deep learning models to recent Transformers and Large Language Models (LLMs). Efforts to leverage vision models for time series analysis have also been made along the way, but they remain less visible to the community because research in this domain has been dominated by sequence modeling. However, the discrepancy between continuous time series and the discrete token space of LLMs, together with the challenge of explicitly modeling correlations among variates in multivariate time series, has shifted some research attention to the equally successful Large Vision Models (LVMs) and Vision Language Models (VLMs). To fill this gap in the existing literature, this survey discusses the advantages of vision models over LLMs in time series analysis. It provides a comprehensive and in-depth overview of existing methods, organized by a detailed dual-view taxonomy that answers the key research questions: how to encode time series as images and how to model the imaged time series for various tasks. Additionally, we address the challenges in the pre- and post-processing steps involved in this framework and outline future directions to further advance time series analysis with vision models.
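To make the "encode time series as images" question concrete, here is a minimal sketch of one widely used encoding in this line of work, the Gramian Angular Summation Field (GASF). The function name and normalization details are illustrative assumptions; the survey covers this family of encodings among several alternatives.

```python
# Minimal sketch of a common time-series-to-image encoding (GASF).
# Choices such as resizing and channel handling vary across surveyed methods.
import numpy as np

def gasf(series: np.ndarray) -> np.ndarray:
    """Encode a 1-D series of length T as a T x T image via the GASF transform."""
    # Rescale to [-1, 1] so the values can be interpreted as cosines of angles.
    lo, hi = series.min(), series.max()
    x = 2.0 * (series - lo) / (hi - lo) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))  # polar-coordinate angles
    # GASF[i, j] = cos(phi_i + phi_j); the resulting matrix can be treated as
    # a single-channel image and fed to a vision model.
    return np.cos(phi[:, None] + phi[None, :])

if __name__ == "__main__":
    t = np.linspace(0, 4 * np.pi, 64)
    image = gasf(np.sin(t))
    print(image.shape)  # (64, 64)
```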