Xujiang Zhao NEC Labs America

Xujiang Zhao is a researcher in the Data Science & System Security department at NEC Laboratories America, based in Princeton, New Jersey. He holds a B.S. in Civil Engineering from Chongqing University and an M.S. in Computer Science from the University of Science and Technology of China, and he earned his Ph.D. in Computer Science from the University of Texas at Dallas. This training in both the theoretical and applied sides of computing continues to shape his work at NEC.

At NEC Labs, Zhao’s research focuses on aligning large language models (LLMs) with human intent through techniques that enhance explainability, factual consistency, uncertainty estimation, and robustness. He develops methods that make LLMs more transparent and reliable, ensuring that they can be applied in sensitive, high-stakes environments. A key area of his work is building collaborative agent systems that integrate LLMs with domain-specific expertise and human feedback loops, enabling AI to work more effectively as a partner in decision-making.

Beyond language alignment, Zhao explores applications in image–text retrieval, synthetic media detection, and multi-agent reasoning, areas that are increasingly critical for enterprise knowledge management, misinformation defense, and the verification of AI-generated content. By combining fundamental advances in machine learning with applied research, his work pushes forward the responsible and practical use of foundation models across industries.

Posts

Boosting Cross-Lingual Transfer via Self-Learning with Uncertainty Estimation

Recent multilingual pre-trained language models have achieved remarkable zero-shot performance, where the model is fine-tuned on only one source language and directly evaluated on target languages. In this work, we propose a self-learning framework that further exploits unlabeled data in the target languages, using uncertainty estimation to select high-quality silver labels. Three different uncertainty measures are adapted and analyzed specifically for cross-lingual transfer: Language Heteroscedastic Uncertainty (LEU), Language Homoscedastic Uncertainty (LOU), and Evidential Uncertainty (EVI). We evaluate the framework on two cross-lingual tasks, Named Entity Recognition (NER) and Natural Language Inference (NLI), covering 40 languages in total; it outperforms the baselines significantly, by 10 F1 on average for NER and 2.5 accuracy points for NLI.
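To make the self-learning idea concrete, here is a minimal sketch of the silver-label selection step. It is a simplified illustration, not the paper's implementation: the LEU/LOU/EVI estimators are stood in for by plain predictive entropy, and the function name and threshold are hypothetical.

```python
# Minimal sketch of uncertainty-based silver-label selection in a
# self-learning round. Hypothetical simplified setup: the paper's
# LEU/LOU/EVI estimators are replaced here by predictive entropy.
import numpy as np

def select_silver_labels(probs, threshold=0.5):
    """Keep unlabeled examples whose predictive entropy is below
    `threshold`; return their indices and argmax (silver) labels."""
    probs = np.asarray(probs, dtype=float)
    # Shannon entropy per example; small epsilon avoids log(0).
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    keep = np.where(entropy < threshold)[0]
    return keep, probs[keep].argmax(axis=1)

# Example: model predictions on three unlabeled target-language inputs.
probs = [
    [0.97, 0.02, 0.01],  # confident -> low entropy, selected
    [0.40, 0.35, 0.25],  # uncertain -> high entropy, discarded
    [0.05, 0.90, 0.05],  # confident -> selected
]
idx, labels = select_silver_labels(probs, threshold=0.5)
print(idx, labels)  # -> [0 2] [0 1]
```

In a full self-learning loop, the selected examples and their silver labels would be added to the training set and the model fine-tuned again, repeating for several rounds; the uncertainty measure used for selection is what the paper's three estimators refine.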