Martin Renqiang Min, NEC Labs America

Martin Renqiang Min is the Department Head of the Machine Learning Department at NEC Laboratories America. He holds a Ph.D. in Computer Science from the University of Toronto and completed postdoctoral research at Yale University, where he also taught courses on deep learning. His research has been published in top venues, including Nature, NeurIPS, ICML, ICLR, CVPR, and ACL, and his innovations have been recognized internationally, with features in Science and MIT Technology Review.

At NEC, Dr. Min directs a multidisciplinary research team at the forefront of foundational and applied artificial intelligence. His portfolio spans deep learning, natural language understanding, multimodal learning, visual reasoning, and the application of machine learning to biomedical and healthcare data. He has contributed to the design of scalable learning systems that power real-world applications, bridging cutting-edge theory with industry-scale deployment. He also co-chaired the NeurIPS Workshop on Machine Learning in Computational Biology, advancing the dialogue between AI and life sciences.

Under his leadership, NEC’s machine learning group drives innovation across multiple domains, including AI for precision medicine, next-generation language modeling, and interpretable multimodal systems. He is known for fostering interdisciplinary collaboration—both within NEC and with academic and industry partners—encouraging research that connects scientific breakthroughs with societal impact. His team’s contributions extend to core technologies used across telecommunications, enterprise solutions, and healthcare, positioning NEC at the leading edge of applied AI. Dr. Min is recognized for his ability to identify emerging trends in machine learning and translate them into long-term research roadmaps. His work continues to influence the global AI community, and his leadership ensures NEC remains a hub for transformative research that combines fundamental discovery with practical applications that improve people’s lives.

Posts

Video Generation From Text

Generating videos from text has proven to be a significant challenge for existing generative models. We tackle this problem by training a conditional generative model to extract both static and dynamic information from text. We implement this in a hybrid framework that employs a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN). The static features, called the "gist," are used to sketch text-conditioned background color and object layout structure. Dynamic features are incorporated by transforming the input text into an image filter. To obtain a large amount of data for training the deep-learning model, we develop a method to automatically create a matched text-video corpus from publicly available online videos. Experimental results show that the proposed framework generates plausible, diverse, and smooth short-duration videos while accurately reflecting the input text. It significantly outperforms baseline models that directly adapt text-to-image generation procedures to produce videos. Performance is evaluated both visually and by adapting the inception score used to evaluate image generation in GANs.
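The two pathways described above can be sketched at a toy scale: one map from a text embedding to a static "gist" image, and a second map from the same embedding to an image filter that injects text-dependent dynamics. The sketch below, in NumPy, is a minimal illustration of that decomposition only; all dimensions, the linear maps, and the function names are assumptions for illustration, not the paper's actual VAE/GAN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed, not from the paper)
TEXT_DIM, GIST_SIZE, K = 16, 8, 3  # text embedding dim, gist image size, filter size

def text_to_gist(text_emb, W_gist):
    """Map a text embedding to a static 'gist' image sketching
    background and coarse layout (hypothetical linear decoder)."""
    return np.tanh(W_gist @ text_emb).reshape(GIST_SIZE, GIST_SIZE)

def text_to_filter(text_emb, W_filt):
    """Map the same text embedding to a K x K image filter carrying
    the dynamic (motion-related) information."""
    f = (W_filt @ text_emb).reshape(K, K)
    return f / (np.abs(f).sum() + 1e-8)  # normalize for numerical stability

def apply_filter(gist, filt):
    """'Same'-padded 2D convolution of the gist with the text-derived filter."""
    pad = K // 2
    padded = np.pad(gist, pad)
    out = np.zeros_like(gist)
    for i in range(GIST_SIZE):
        for j in range(GIST_SIZE):
            out[i, j] = np.sum(padded[i:i + K, j:j + K] * filt)
    return out

text_emb = rng.normal(size=TEXT_DIM)              # stand-in for an encoded caption
W_gist = rng.normal(size=(GIST_SIZE ** 2, TEXT_DIM))
W_filt = rng.normal(size=(K * K, TEXT_DIM))

gist = text_to_gist(text_emb, W_gist)
frame = apply_filter(gist, text_to_filter(text_emb, W_filt))
print(gist.shape, frame.shape)  # (8, 8) (8, 8)
```

In the full model, the gist pathway would be trained as a VAE and the filtered features would feed a GAN generator producing a frame sequence; the sketch only shows how one text embedding can condition both a static canvas and a dynamic filter.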

Adaptive Feature Abstraction for Translating Video to Text

Previous models for video captioning often use the output from a specific layer of a Convolutional Neural Network (CNN) as video features. However, the variable, context-dependent semantics in a video may make it more appropriate to adaptively select features from multiple CNN layers. We propose a new approach to generating adaptive spatiotemporal representations of videos for the captioning task. A novel attention mechanism is developed, which adaptively and sequentially focuses on different layers of CNN features (levels of feature "abstraction"), as well as on local spatiotemporal regions of the feature maps at each layer. The proposed approach is evaluated on three benchmark datasets: YouTube2Text, M-VAD, and MSR-VTT. In addition to visualizations of the results and of how the model works, these experiments quantitatively demonstrate the effectiveness of the proposed adaptive spatiotemporal feature abstraction for translating videos into sentences with rich semantics.
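The two-level attention described above can be sketched as a small computation: attend over spatiotemporal regions within each CNN layer, then attend over the layers themselves (abstraction levels), both conditioned on the decoder's current hidden state. The NumPy sketch below is a minimal illustration under assumed dimensions; the dot-product scoring, the projection to a shared feature dimension, and all names are simplifying assumptions rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions (assumed): L CNN layers, each already projected to a
# common D-dim space over R spatiotemporal regions.
L, R, D = 3, 4, 8

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_attention(layer_feats, h):
    """Two-level attention conditioned on decoder state h:
    first over regions within each layer, then over layers."""
    layer_ctx = []
    for feats in layer_feats:        # feats: (R, D) features for one layer
        a = softmax(feats @ h)       # spatial attention weights over regions
        layer_ctx.append(a @ feats)  # (D,) per-layer context vector
    layer_ctx = np.stack(layer_ctx)  # (L, D)
    b = softmax(layer_ctx @ h)       # attention over abstraction levels
    return b @ layer_ctx             # final (D,) context fed to the decoder

layer_feats = [rng.normal(size=(R, D)) for _ in range(L)]
h = rng.normal(size=D)               # stand-in for the decoder's hidden state
ctx = adaptive_attention(layer_feats, h)
print(ctx.shape)  # (8,)
```

At each decoding step the caption generator would recompute this context with its updated hidden state, letting the model shift between low-level appearance features and high-level semantic features as the sentence unfolds.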