Yi-Wen Chen
NEC Labs America

Yi-Wen Chen is a Postdoctoral Scientist in the Media Analytics Department at NEC Laboratories America in San Jose, CA. She received her PhD in Computer Science from the University of California, Merced, where she studied multimodal machine learning and representation fusion for cross-sensor applications. She received her MS in Communication Engineering and her BS in Electrical Engineering from National Taiwan University.

At NEC, Dr. Chen’s research centers on vision–language models (VLMs), multimodal learning, and video understanding, with an emphasis on robustness, generalization, and interpretability for real-world deployments. Her recent work spans open-vocabulary/zero-shot recognition, entity grounding with LLM assistance, and text-conditioned image editing with localized control, pushing models to align fine-grained language cues with visual regions and to hold up under domain shift. She also studies temporal grounding in video, cross-modal attention and fusion for cross-sensor applications, and evaluation protocols that probe compositional reasoning and failure modes (e.g., caption drift, representation collapse, and shortcut learning). Collectively, these threads aim to make VLMs safer, more reliable, and more explainable, enabling human-centered AI for security, accessibility, and assistive scenarios.

Posts

Unseen Object Segmentation in Videos via Transferable Representations

To learn object segmentation models in videos, conventional methods require large amounts of pixel-wise ground-truth annotations. However, collecting such supervised data is time-consuming and labor-intensive. In this paper, we exploit existing annotations in source images and transfer such visual information to segment videos with unseen object categories. Without using any annotations in the target video, we propose a method to jointly mine useful segments and learn feature representations that better adapt to the target frames. The entire process is decomposed into two tasks: (1) solving a submodular function for selecting object-like segments, and (2) learning a CNN model with a transferable module for adapting seen categories in the source domain to the unseen target video. We present an iterative update scheme between the two tasks to self-learn the final solution for object segmentation. Experimental results on numerous benchmark datasets show that the proposed method performs favorably against state-of-the-art algorithms.
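The alternation described above can be sketched in miniature. The toy below is an illustrative stand-in, not the paper's implementation: segment "objectness" scores, a facility-location-style submodular coverage objective (a common choice for such selection problems), and a single prototype vector in place of the CNN's transferable module are all simplifying assumptions. It only shows the shape of the loop: greedily select object-like segments, refit the representation on the mined segments, and repeat.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between rows of a and b (b may be a single vector)."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T if b.ndim > 1 else a @ b

def greedy_submodular_select(weights, sims, k):
    """Task 1 stand-in: greedily maximize a facility-location objective,
    i.e. how well the selected set 'covers' all segments, weighted by
    per-segment objectness. Greedy selection gives the usual (1 - 1/e)
    approximation guarantee for monotone submodular functions."""
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(len(weights)):
            if i in selected:
                continue
            # each segment is covered by its most similar selected segment
            cover = sims[:, selected + [i]].max(axis=1)
            gain = float((weights * cover).sum())
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected

def self_learn(features, objectness, k=2, iters=3):
    """Iterate between segment mining (task 1) and representation update
    (task 2, reduced here to refitting a prototype on mined segments)."""
    proto = features.mean(axis=0)          # initial representation
    sims = cosine(features, features)      # pairwise segment similarities
    selected = []
    for _ in range(iters):
        # re-weight objectness by agreement with the current representation
        weights = objectness * cosine(features, proto)
        selected = greedy_submodular_select(weights, sims, k)
        proto = features[selected].mean(axis=0)
    return selected, proto

# Toy data: segments 0-2 are object-like, 3-4 are background-like.
feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.95, 0.05], [0.0, 1.0], [0.1, 0.9]])
obj = np.array([0.9, 0.8, 0.85, 0.7, 0.6])
sel, proto = self_learn(feats, obj, k=2, iters=3)
```

In the paper itself, task 2 trains a CNN with a transferable module rather than refitting a prototype, and the mined segments provide the self-supervision for the next round of selection.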