A friend’s son was recently awarded an ESRC PhD scholarship. I pointed out that this is a Very Big Deal, that these awards are competitive and hard to get, and was greeted by surprised interest. Not surprising, really, given that the friend is not in academia, whose rules are at best opaque (the ESRC itself had been unknown to them).
Yes, let’s look up how rare this is, said I, turning to ChatGPT and asking what the success rates are for ESRC-funded PhD scholarships. I asked ChatGPT rather than my usual DuckDuckGo because I suspected there would likely be several answers depending on discipline and so on, and that a synthesised summary would be useful.
An immediate reply was provided: 14%, with a URL to a named report and even a page number, 48.
Before sending the link to my friend, I thought I would check the report and clicked on the URL. The link was broken, so I told ChatGPT that it did not work.
An immediate reply came with a correction: “I apologize for the confusion. Here is an updated link.”
Again, a broken link.
Another corrected reply: “I apologize for the error in my previous responses. Here is the correct link.”
Again, a broken link.
“I apologize for the continued issues with the link. Here is an alternative way to access the information.”
That does not work either. Never mind, I think: ChatGPT’s training data ends in 2021, so maybe that explains the dead links.
At least I have the report’s name.
So I search the old-fashioned way and eventually I find the report and turn to page 48.
No relevant data at all, let alone a figure of 14% or anything like it.
Next, I type into the box that I have found the report, that the information is not there, and that I need the relevant paragraph and an exact quotation.
The ChatGPT reply says “I apologize for any confusion my previous responses may have caused. Upon double-checking the relevant report, I have discovered that there is no information regarding the success rate for ESRC-funded PhD research.”
More apologies: “I apologize for any inconvenience or confusion my earlier responses may have caused. If you have any further questions or concerns, please don't hesitate to let me know.”
I won’t repeat the ongoing conversation, if that is what it was, because I boringly and doggedly pursued the question, intrigued to see whether a verifiable answer would ever emerge. More made-up answers and more apologies. And no, never a verifiable answer.
What to make of this?
Obviously, to check everything. We know this. I know this. But I almost didn’t, because it seemed such a sensible and likely source. Except it was entirely made up, as was eventually conceded (“Upon double-checking the relevant report, I have discovered that there is no information regarding the success rate for ESRC-funded PhD research.”).
It is intriguing that an answer was provided in the first place. I had also asked Perplexity, which responded immediately that it was not possible to provide the information.
The other striking aspect was the human voice and tone: the first-person “I” and the apologies. It really did feel like a real human exchange, but it really wasn’t one. Was it even a “real” exchange? What is “real”? Once again, I know this. And yet… Also, why invent an answer? This adds a further human dimension of wanting to please. Any answer is better than disappointing the questioner. That is weird!
Made up, and yet not lies exactly, I think, since lying implies intent. Rather, two “hallucinations” in one: generative AI pretending to be human, and authoritatively and confidently making up information.
As we say around here, Eish!
* Image: "Hallucinations" by Sergio Cerrato - Italia from Pixabay
I am a professor at the University of Cape Town in South Africa, interested in digitally-mediated changes in society, and specifically in higher education, largely through an inequality lens.