The University at Buffalo is a flagship campus of the State University of New York (SUNY) system and a major center for research and innovation in upstate New York, with particular strengths in engineering, AI, computing, biomedical sciences, medicine, and public health. NEC Laboratories America (NECLA) has partnered with the University at Buffalo on research into adversarial training for visual content generation in large-scale vision systems, data-efficient learning, and multimodal AI for healthcare. Through this collaboration, we contributed to the refinement of dual-projection GANs, improving the realism and diversity of synthesized images, with implications for both biometric security and creative applications. Please read about our latest news and collaborative publications with the State University of New York at Buffalo.

Posts

National Intern Day at NEC Laboratories America: Celebrating the Next Generation of Innovators

On National Intern Day, NEC Laboratories America celebrates the bright minds shaping tomorrow’s technology. Each summer, interns from top universities work side-by-side with our researchers on real-world challenges in AI, cybersecurity, data science, and more. From groundbreaking research to team-building events, our interns contribute fresh ideas and bold thinking that power NEC’s innovation engine.

CLAP-S: Support Set Based Adaptation for Downstream Fiber-optic Acoustic Recognition

Contrastive Language-Audio Pretraining (CLAP) models have demonstrated unprecedented performance in various acoustic signal recognition tasks. Fiber-optic-based acoustic recognition is one of the most important downstream tasks and plays a significant role in environmental sensing. Adapting CLAP for fiber-optic acoustic recognition has become an active research area. Because the optical fiber is a non-conventional acoustic sensor, fiber-optic acoustic recognition presents a challenging, domain-specific, low-shot deployment environment with significant domain shifts due to its unique frequency response and noise characteristics. To address these challenges, we propose a support-based adaptation method, CLAP-S, which linearly interpolates a CLAP Adapter with the Support Set, leveraging both implicit knowledge through fine-tuning and explicit knowledge retrieved from memory for cross-domain generalization. Experimental results show that our method delivers competitive performance on both laboratory-recorded fiber-optic ESC-50 datasets and a real-world fiber-optic gunshot-firework dataset. Our research also provides valuable insights for other downstream acoustic recognition tasks.
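The core idea is to blend implicit knowledge (a fine-tuned adapter matched against class text prompts) with explicit knowledge (retrieval from a labeled support set) by linear interpolation. The sketch below illustrates one way such an interpolation could be wired up; the function name, the adapter interface, and the hyperparameters `alpha` and `beta` are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of support-set interpolation in the spirit of CLAP-S,
# assuming precomputed CLAP audio/text embeddings. Names and hyperparameters
# (adapter, support_keys, alpha, beta) are illustrative, not from the paper.
import torch
import torch.nn.functional as F

def clap_s_logits(query_emb, text_emb, adapter, support_keys, support_labels,
                  num_classes, alpha=0.5, beta=5.0):
    """Blend fine-tuned adapter logits with logits retrieved from a support set.

    query_emb:      (B, D) CLAP audio embeddings of test clips
    text_emb:       (C, D) CLAP text embeddings of class prompts
    adapter:        a small fine-tuned module mapping (B, D) -> (B, D)
    support_keys:   (N, D) embeddings of labeled fiber-optic support clips
    support_labels: (N,) integer class labels of the support clips
    """
    # Implicit knowledge: adapted audio embedding matched against class prompts.
    q = F.normalize(adapter(query_emb), dim=-1)
    t = F.normalize(text_emb, dim=-1)
    adapter_logits = q @ t.T                                     # (B, C)

    # Explicit knowledge: similarity-weighted retrieval from the support set.
    k = F.normalize(support_keys, dim=-1)
    affinity = torch.exp(-beta * (1.0 - q @ k.T))                # (B, N)
    one_hot = F.one_hot(support_labels, num_classes).float()     # (N, C)
    support_logits = affinity @ one_hot                          # (B, C)

    # Linear interpolation of the two sources of evidence.
    return (1.0 - alpha) * adapter_logits + alpha * support_logits
```

In this kind of scheme, `alpha` controls how much weight the memory-based support set receives relative to the fine-tuned adapter, which is one way to trade off generalization against adaptation to the fiber-optic domain.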

3D Finger Vein Biometric Authentication with Photoacoustic Tomography

Biometric authentication is the recognition of human identity via unique anatomical features. The development of novel methods parallels widespread application in consumer devices, law enforcement, and access control. In particular, methods based on finger veins, as compared to face and fingerprints, obviate privacy concerns and degradation due to wear, age, and obscuration. However, they are two-dimensional (2D) and are fundamentally limited by conventional imaging and tissue-light scattering. In this work, for the first time to the best of our knowledge, we demonstrate a method of three-dimensional (3D) finger vein biometric authentication based on photoacoustic tomography. Using a compact photoacoustic tomography setup and a novel recognition algorithm, the advantages of 3D are demonstrated via biometric authentication of index finger vessels with false acceptance, false rejection, and equal error rates <1.23%, <9.27%, and <0.13%, respectively, when comparing one finger, a false acceptance rate improvement >10× when comparing multiple fingers, and <0.7% when rotating fingers ±30°.
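The reported metrics (false acceptance rate, false rejection rate, and equal error rate) are standard threshold-sweep statistics over genuine and impostor match scores. The sketch below shows how they are typically computed; the synthetic scores and threshold sweep are illustrative and are not taken from the paper's evaluation.

```python
# Minimal sketch of FAR/FRR/EER computation from match scores; the score
# distributions below are synthetic, not the paper's data.
import numpy as np

def far_frr_eer(genuine_scores, impostor_scores):
    """Sweep a decision threshold over similarity scores and report the EER.

    genuine_scores:  similarities for matching pairs (same identity)
    impostor_scores: similarities for non-matching pairs (different identities)
    """
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    far = np.array([(impostor_scores >= t).mean() for t in thresholds])  # accepted impostors
    frr = np.array([(genuine_scores < t).mean() for t in thresholds])    # rejected genuines
    idx = np.argmin(np.abs(far - frr))          # operating point where FAR ~= FRR
    eer = (far[idx] + frr[idx]) / 2.0
    return far, frr, eer

# Example with synthetic scores: genuine matches score higher on average.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)
impostor = rng.normal(0.4, 0.1, 1000)
_, _, eer = far_frr_eer(genuine, impostor)
print(f"Equal error rate: {eer:.4f}")
```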