Beyond One-Model-Fits-All: A Survey of Domain Specialization for Large Language Models
Publication Date: 6/9/2023
Event: arXiv
Reference: https://arxiv.org/abs/2305.18703
Authors: Chen Ling (NEC Laboratories America, Inc.; Emory University), Xujiang Zhao (NEC Laboratories America, Inc.), Jiaying Lu (Emory University), Chengyuan Deng (Rutgers University; NEC Laboratories America, Inc.), Can Zheng (NEC Laboratories America, Inc.; University of Pittsburgh), Junxiang Wang (NEC Laboratories America, Inc.), Tanmoy Chowdhury (George Mason University), Yun Li (George Mason University), Hejie Cui (Emory University), Tianjiao Zhao (BlackRock, Aladdin Financial Engineering Group), Amit Panalkar (BlackRock, Aladdin Financial Engineering Group), Wei Cheng (NEC Laboratories America, Inc.), Haoyu Wang (NEC Laboratories America, Inc.), Yanchi Liu (NEC Laboratories America, Inc.), Zhengzhang Chen (NEC Laboratories America, Inc.), Haifeng Chen (NEC Laboratories America, Inc.), Chris White (NEC Laboratories America, Inc.), Quanquan Gu (University of California, Los Angeles), Carl Yang (Emory University), Liang Zhao (Emory University)
Abstract: Large language models (LLMs) have significantly advanced the field of natural language processing (NLP), providing a highly useful, task-agnostic foundation for a wide range of applications. The great promise of LLMs as general task solvers has motivated people to extend their functionality far beyond that of a chatbot, using them as assistants or even replacements for domain experts and tools in specific domains such as healthcare, finance, and education. However, directly applying LLMs to solve sophisticated problems in specific domains meets many hurdles, caused by the heterogeneity of domain data, the sophistication of domain knowledge, the uniqueness of domain objectives, and the diversity of constraints (e.g., various social norms, cultural conformity, religious beliefs, and ethical standards in domain applications). To fill this gap, research and practice on the domain specialization of LLMs have grown explosively in recent years, which calls for a comprehensive and systematic review to better summarize and guide this promising field. In this survey paper, we first propose a systematic taxonomy that categorizes LLM domain specialization techniques based on the level of access to the LLM, and summarize the framework for each subcategory as well as their relations and differences. We also present a comprehensive taxonomy of critical application domains that can benefit from specialized LLMs, discussing their practical significance and open challenges. Furthermore, we offer insights into the current research status and future trends in this area.
Publication Link: https://arxiv.org/pdf/2305.18703.pdf