Exploring the Limits of ChatGPT for Query or Aspect-based Text Summarization

Publication Date: February 16, 2023

Event: arXiv

Reference: https://arxiv.org/pdf/2302.08081.pdf

Authors: Xianjun Yang (University of California, Santa Barbara); Yan Li (University of California, Santa Barbara); Xinlu Zhang (University of California, Santa Barbara); Wei Cheng (NEC Laboratories America, Inc.); Haifeng Chen (NEC Laboratories America, Inc.)

Abstract: Text summarization has been a crucial problem in natural language processing (NLP) for several decades. It aims to condense lengthy documents into shorter versions while retaining the most critical information. Various methods have been proposed for text summarization, including extractive and abstractive approaches. The emergence of large language models (LLMs) such as GPT-3 and ChatGPT has recently created significant interest in applying these models to text summarization tasks. Recent studies (Goyal et al., 2022; Zhang et al., 2023) have shown that LLM-generated news summaries are already on par with human-written ones. However, the performance of LLMs on more practical applications, such as aspect- or query-based summarization, remains underexplored. To fill this gap, we evaluated ChatGPT’s performance on four widely used benchmark datasets, encompassing diverse summaries from Reddit posts, news articles, dialogue meetings, and stories. Our experiments reveal that ChatGPT’s performance is comparable to that of traditional fine-tuning methods in terms of ROUGE scores. Moreover, we highlight some unique differences between ChatGPT-generated summaries and human references, providing valuable insights into the strengths of ChatGPT for diverse text summarization tasks. Our findings call for new directions in this area, and we plan to conduct further research to systematically examine the characteristics of ChatGPT-generated summaries through extensive human evaluation.
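The comparison described in the abstract hinges on ROUGE, which scores a candidate summary against a human reference by n-gram overlap (ROUGE-1, ROUGE-2) and longest common subsequence (ROUGE-L). Below is a minimal sketch of such a comparison using the open-source rouge-score package (pip install rouge-score); the sample texts and the package choice are illustrative assumptions, not the authors' actual evaluation pipeline.

```python
# Minimal sketch of a ROUGE comparison between a model-generated summary and
# a human reference, using Google's rouge-score package. The texts below are
# illustrative placeholders, not examples from the paper's benchmarks.
from rouge_score import rouge_scorer

# Hypothetical query-based summary produced by a model, plus a human reference.
candidate = "The council approved the new transit budget after a brief debate."
reference = "After a short debate, the city council passed the transit budget."

# ROUGE-1/2 measure unigram/bigram overlap; ROUGE-L uses the longest common
# subsequence. These are the standard metrics reported for summarization.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)

for metric, result in scores.items():
    print(f"{metric}: precision={result.precision:.3f} "
          f"recall={result.recall:.3f} f1={result.fmeasure:.3f}")
```

In a benchmark setting, the F1 values would typically be averaged over all test documents, which is the form in which ROUGE scores for fine-tuned baselines and ChatGPT outputs are usually compared.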
