A friend’s son was recently awarded an ESRC PhD scholarship. I pointed out that this is a Very Big Deal, that these awards are competitive and hard to get, and was greeted by surprised interest; not surprising, really, given that the friend is not in academia, the rules of which are at best opaque (the ESRC itself had been unknown).
Yes, let’s look up how rare this is, said I, turning to ChatGPT and asking what the success rates are for ESRC-funded PhD scholarships. I asked ChatGPT rather than my usual DuckDuckGo because I suspected there would be several answers depending on discipline and so on, and that a synthesised summary would be useful. An immediate reply was provided: 14%, with a URL to a named report and even the page number, 48. Seriously impressive! Before sending the link to my friend I thought I would check the report and clicked on the URL. Broken link.

I told ChatGPT that the link did not work. Immediate reply with a correction: “I apologize for the confusion. Here is an updated link.” Again, a broken link. Another corrected reply: “I apologize for the error in my previous responses. Here is the correct link.” Again, a broken link. “I apologize for the continued issues with the link. Here is an alternative way to access the information.” That does not work either.

Never mind, I think, the ChatGPT database ends in 2021, so maybe that explains the dead links. At least I have the report’s name. So I search the old-fashioned way and eventually I find the report and turn to page 48. No relevant data at all, let alone a percentage of 14% or anything else. Next, I type into the box that I have found the report, that the information is not there, and that I need the relevant paragraph and an exact quotation. The ChatGPT reply says: “I apologize for any confusion my previous responses may have caused. Upon double-checking the relevant report, I have discovered that there is no information regarding the success rate for ESRC-funded PhD research.” More apologies: “I apologize for any inconvenience or confusion my earlier responses may have caused. If you have any further questions or concerns, please don't hesitate to let me know.”

I won’t repeat the ongoing conversation, if that is what it was, because I boringly and doggedly pursued the question, intrigued to see whether a verifiable answer ever emerged. Ongoing made-up answers and continuing apologies. And no, never a verifiable answer.

What to make of this? Obviously, to check everything. We know this. I know this. But I almost didn’t, because it was such a sensible and likely source. Except it was entirely made up. As was eventually conceded (“Upon double-checking the relevant report, I have discovered that there is no information regarding the success rate for ESRC-funded PhD research.”). It is intriguing that an answer was provided in the first place. I had also asked Perplexity and received an immediate response that it was not possible to provide the information.

The other striking aspect was the human voice and tone. The first person “I” and the apologies. It really did feel like a real human exchange, but it really wasn’t one. Was it even a “real” exchange? What is “real”? Once again, I know this. And yet….

Also, why invent an answer? This adds a further human dimension of wanting to please. Any answer is better than disappointing the questioner. That is weird! Made up, and yet not lies exactly, I think, since lies do imply intent. Rather, two “hallucinations” in one: generative AI pretending to be human and authoritatively and confidently making up information. As we say around here, Eish!

* Image: "Hallucinations" by Sergio Cerrato - Italia from Pixabay
4 Comments
10/5/2023 05:54:50 pm
Hi Laura. I understand the frustration you must have experienced. But I can't help but wonder if those kinds of interactions stem from an expectation that language models are like some kind of Oracle, or truth machine. They have no concept of truth, and so we can't hold them to that standard. If you think of ChatGPT as a well-read colleague who knows a lot of stuff about a lot of stuff, but who sometimes gets the details wrong, then it can be a powerful interlocutor.
Alan Levine
10/5/2023 09:31:38 pm
I've mostly refrained from asking ChatGPT for facts and information, having seen early on, as you and others here describe, that it produces text that statistically resembles a real citation or URL but often misses by critical amounts. The same goes for attempts to ask it to produce code: what I see looks structurally correct, but usually fails to execute properly.
12/5/2023 01:32:01 pm
Thanks so much for this, Laura. I found it quite by chance, but I am glad I did. (BTW, I have tried to subscribe to your newsletter but it didn't ask me for any details :) ) I have been doing a lot of thinking about "AI" - I'd like it to remain in sardonic quotes for a long time.
Caroline Kuhn
6/6/2023 04:35:27 pm
It's funny that the bot apologises so much; it might have learned it from Facebook's CEO, who solves all his unethical behaviours by apologising shamelessly!
Author: I am a professor at the University of Cape Town in South Africa, interested in the digitally-mediated changes in society and specifically in higher education, largely through an inequality lens.