Entries by NEC Labs America

Kunal Rao presents SlideCraft: Context-Aware Slides Generation Agent at PICom 2025 on October 21st

Kunal Rao (presenting virtually) will present “SlideCraft: Context-Aware Slides Generation Agent” at the IEEE International Conference on Pervasive Intelligence and Computing (PICom 2025) on Tuesday, Oct 21 (10:30 am–12:00 pm JST) | Monday, Oct 20 (9:30–11:00 pm ET) in Hokkaido, Japan. SlideCraft uses AI to automatically generate presentation slides from research content, making technical communication faster and context-aware for scientists and professionals.

Sparsh Garg Presents Mapillary Vistas Validation for Fine-Grained Traffic Signs at DataCV 2025

Sparsh Garg, a Senior Associate Researcher in our Media Analytics Department, will present “Mapillary Vistas Validation for Fine-Grained Traffic Signs: A Benchmark Revealing Vision-Language Model Limitations” at the Data Computer Vision (DataCV) 2025 workshop, held as part of ICCV 2025 in Honolulu, Hawai’i, on Sunday, October 19th, from 11:15 to 11:25 am.

THAT: Token-wise High-frequency Augmentation Transformer for Hyperspectral Pansharpening

Transformer-based methods have demonstrated strong potential in hyperspectral pansharpening by modeling long-range dependencies. However, their effectiveness is often limited by redundant token representations and a lack of multiscale feature modeling. Hyperspectral images exhibit intrinsic spectral priors (e.g., abundance sparsity) and spatial priors (e.g., non-local similarity), which are critical for accurate reconstruction. From a spectral–spatial perspective, Vision Transformers (ViTs) face two major limitations: they struggle to preserve high-frequency components such as material edges and texture transitions, and they suffer from attention dispersion across redundant tokens. These issues stem from the global self-attention mechanism, which tends to dilute high-frequency signals and overlook localized details. To address these challenges, we propose the Token-wise High-frequency Augmentation Transformer (THAT), a novel framework designed to enhance hyperspectral pansharpening through improved high-frequency feature representation and token selection. Specifically, THAT introduces: (1) Pivotal Token Selective Attention (PTSA) to prioritize informative tokens and suppress redundancy; and (2) a Multi-level Variance-aware Feed-forward Network (MVFN) to enhance high-frequency detail learning. Experiments on standard benchmarks show that THAT achieves state-of-the-art performance with improved reconstruction quality and efficiency.
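
As a rough illustration of the two components named above, the sketch below implements a token-selective attention block and a variance-gated feed-forward in PyTorch. The module names, the top-k scoring head, and the variance gate are our own simplifications for illustration; they are not the authors' PTSA/MVFN implementation.

```python
# Minimal sketch, assuming a top-k token-selection scheme and a variance gate;
# not the paper's architecture.
import torch
import torch.nn as nn


class PivotalTokenSelectiveAttention(nn.Module):
    """Attention in which all tokens attend only to a top-k subset of 'pivotal' tokens."""

    def __init__(self, dim: int, num_heads: int = 4, keep_ratio: float = 0.5):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # per-token importance score (our assumption)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.keep_ratio = keep_ratio

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, tokens, channels)
        b, n, c = x.shape
        k = max(1, int(n * self.keep_ratio))
        scores = self.score(x).squeeze(-1)                        # (b, n)
        idx = scores.topk(k, dim=1).indices                       # indices of pivotal tokens
        pivotal = torch.gather(x, 1, idx.unsqueeze(-1).expand(b, k, c))
        # Every token queries, but only pivotal tokens act as keys/values,
        # which suppresses redundant tokens in the attention map.
        out, _ = self.attn(x, pivotal, pivotal)
        return x + out


class VarianceAwareFFN(nn.Module):
    """Feed-forward branch gated by per-token feature variance, a rough proxy for high-frequency content."""

    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = x.var(dim=-1, keepdim=True)  # high variance ~ edges / texture transitions
        return x + gate * self.net(x)


if __name__ == "__main__":
    tokens = torch.randn(2, 64, 32)  # (batch, tokens, channels) from a patch embedding
    block = nn.Sequential(PivotalTokenSelectiveAttention(32), VarianceAwareFFN(32))
    print(block(tokens).shape)       # torch.Size([2, 64, 32])
```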

Energy-based Generative Models for Distributed Acoustic Sensing Event Classification in Telecom Networks

Distributed fiber-optic sensing combined with machine learning enables continuous monitoring of telecom infrastructure. We employ generative modeling for event classification, supporting semi-supervised learning, uncertainty calibration, and noise resilience. Our approach offers a scalable, data-efficient solution for real-world deployment in complex environments.
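
For readers unfamiliar with the energy-based view, the minimal sketch below shows how class logits from a small 1-D CNN can be read as energies, with the free energy (negative log-sum-exp of the logits) serving as a score for rejecting noisy or out-of-distribution DAS events. The architecture, input format, and rejection threshold are illustrative assumptions and not the model described above.

```python
# Minimal sketch, assuming fixed-length 1-D DAS waveform windows as input;
# not the paper's model.
import torch
import torch.nn as nn


class DASEventEBM(nn.Module):
    """Tiny 1-D CNN whose class logits are read as negative energies E(x, y) = -f(x)[y]."""

    def __init__(self, in_channels: int = 1, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))                # logits f(x)

    def free_energy(self, x: torch.Tensor) -> torch.Tensor:
        # E(x) = -log sum_y exp(f(x)[y]); lower energy = better fit to the learned density.
        return -torch.logsumexp(self.forward(x), dim=-1)


if __name__ == "__main__":
    model = DASEventEBM()
    windows = torch.randn(8, 1, 1024)          # batch of waveform windows (assumed format)
    predictions = model(windows).argmax(dim=-1)  # classify by the lowest class energy
    energy = model.free_energy(windows)
    # Reject windows whose energy exceeds a calibrated threshold (e.g., noise or
    # unseen event types); the threshold used here is arbitrary for demonstration.
    accept = energy < energy.median()
    print(predictions.tolist(), accept.tolist())
```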

Digital Twins Beyond C-band Using GNPy

GNPy advancements enable accurate and efficient modeling of multiband optical networks for digital twin applications. The developed solvers for Kerr nonlinearity and SRS have been validated both in simulation and in C+L transmission experiments, supporting real-world network planning, design, and performance optimization across disaggregated optical infrastructures.
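
To give a flavor of the inter-band effects such a digital twin must capture, the toy script below integrates a textbook triangular-gain SRS power-evolution model over a C+L-like channel grid with SciPy. It is not GNPy's solver, it omits Kerr nonlinearity, and all fiber and launch parameters are assumed values chosen only to show the power tilt that SRS induces across bands.

```python
# Toy SRS power-evolution sketch for a multiband grid; illustrative only,
# not GNPy's implementation. All parameter values are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

ALPHA_DB_KM = 0.2                 # fiber loss (assumed)
CR_SLOPE = 0.4 / 14e12            # Raman gain slope: ~0.4 1/(W km) at the ~14 THz peak (assumed)
RAMAN_SHIFT_HZ = 14e12            # triangular gain profile up to this frequency offset

freqs = np.linspace(186.0e12, 196.0e12, 40)   # 40 channels spanning a C+L-like grid
p0 = np.full_like(freqs, 1e-3)                # 0 dBm launch power per channel
alpha = ALPHA_DB_KM * np.log(10) / 10         # convert dB/km to 1/km


def raman_gain(df):
    """Signed triangular Raman gain: power flows from higher to lower frequencies."""
    return np.sign(df) * CR_SLOPE * np.clip(np.abs(df), 0.0, RAMAN_SHIFT_HZ)


def dPdz(z, p):
    # For each channel i: fiber loss plus net SRS exchange with every other channel j.
    df = freqs[:, None] - freqs[None, :]                 # df[i, j] = f_i - f_j
    srs = (raman_gain(-df) * p[None, :]).sum(axis=1)     # gain from higher-frequency channels, depletion toward lower
    return -alpha * p + srs * p


sol = solve_ivp(dPdz, (0.0, 80.0), p0, t_eval=[80.0])    # one 80 km span
p_end_dbm = 10 * np.log10(sol.y[:, -1] / 1e-3)
print("end-of-span SRS tilt (dB):", round(float(p_end_dbm.max() - p_end_dbm.min()), 2))
```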