Jiangbo Li, Jian Li, Shihang Niu, Wenhao Ouyang, Chunlin Jiang
This study introduces a novel approach to assessing the impact of scholarly monographs by using large language models as automated scoring tools. Based on the abstract texts of 2248 sociology monographs from the Dimensions database (2014–2023), we employ the ChatGPT-4o model to generate scores on 40 evaluation variables (e.g., Concise, Subjective, and Coherent). We then examine the correlations between the ChatGPT-generated scores and both academic impact indicators (e.g., citation counts) and social impact indicators (e.g., altmetrics scores). The findings indicate that, within this sociology sample, the ChatGPT-4o scores correlate more strongly with citation counts and altmetrics scores than a conventional evaluation metric, the readability index, does. While large language models are not yet capable of independently conducting comprehensive evaluations of scholarly monographs, this sociology-based study suggests that they hold considerable potential as auxiliary tools for improving the efficiency of academic assessment. The study highlights the role large language models can play in supplementing monograph evaluation, particularly in addressing the limitations of traditional assessment approaches in the humanities and social sciences.
Large language model; Academic monographs; ChatGPT-assisted; Altmetrics; Dimensions database
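The scoring-and-correlation workflow summarized in the abstract can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the study's actual pipeline: the prompt wording, the three-variable subset, the toy records, and the use of the OpenAI Python client and SciPy's Spearman correlation are all illustrative choices by the editor, standing in for the paper's full 40-variable protocol on Dimensions data.

```python
# Minimal sketch of the LLM-scoring + correlation workflow described in the
# abstract. Hypothetical and simplified: prompt wording, variable subset, and
# sample records are illustrative assumptions, not the study's protocol.
import json

from openai import OpenAI          # assumes the official openai client package
from scipy.stats import spearmanr  # rank correlation, robust to skewed citation counts

client = OpenAI()  # reads OPENAI_API_KEY from the environment

VARIABLES = ["Concise", "Subjective", "Coherent"]  # 3 of the study's 40 variables

def score_abstract(abstract: str) -> dict:
    """Ask GPT-4o to rate one monograph abstract (1-5) on each variable."""
    prompt = (
        "Rate the following monograph abstract on a 1-5 scale for each of "
        f"these qualities: {', '.join(VARIABLES)}. "
        'Reply with JSON only, e.g. {"Concise": 4, ...}.\n\n' + abstract
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep scoring as deterministic as the API allows
    )
    # A production pipeline would validate/repair the JSON; omitted here.
    return json.loads(resp.choices[0].message.content)

def correlate(monographs: list[dict]) -> None:
    """Spearman correlation of each LLM score with citation counts."""
    scores = [score_abstract(m["abstract"]) for m in monographs]
    citations = [m["citations"] for m in monographs]
    for var in VARIABLES:
        rho, p = spearmanr([s[var] for s in scores], citations)
        print(f"{var}: rho={rho:.3f}, p={p:.3g}")

if __name__ == "__main__":
    sample = [  # toy records standing in for a Dimensions export
        {"abstract": "A concise study of urban migration...", "citations": 12},
        {"abstract": "This sprawling volume surveys...", "citations": 3},
        {"abstract": "A coherent analysis of family networks...", "citations": 25},
    ]
    correlate(sample)
```

Spearman (rank) correlation is used in this sketch because citation counts are typically heavily skewed; the study's actual statistical choices may differ.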