Volume 1, Issue 1
Published: September 04, 2025
In this paper we explore ChatGPT's ability to produce a summary, a précis, and/or an essay on the basis of excerpts from a novel, The Solid Mandala, by the Nobel Prize-winning Australian writer Patrick White. We use a series of prompts to test functions related to narrative analysis from the point of view of the “sujet”, the “fable”, and the style. In the paper, we illustrate extensively the recurrent semantic mistakes and hallucinations that can badly harm the understanding of the contents of the novel, and we compile a list of 12 different types of mistakes and hallucinations made by GPT. We then tested Gemini against the same 12 types and found a marked improvement on all critical key issues. The conclusion for ChatGPT is mostly negative. As an underlying hypothesis for its worse performance, we point to the influence of vocabulary size, which in Gemini is seven times larger than in GPT.
ChatGPT Prompts; Narrative Theory; Semantic Theory; Modality and Factuality; Temporal Reordering
Rodolfo Delmonte, Department of Language Science, Ca’ Foscari University, Ca’ Bembo, 30123 Venezia, Italy.
Delmonte, R., Marchesini, G., & Busetto, N. (2025). How ChatGPT's Hallucinations (Compared to Gemini’s) Impact Text Summarization with Literary Text. Journal of Arts and Humanities, 1(1), 01-104.