My guess is that most academics have a Scholar Alert set on their names. It can be intriguing, or pleasing, or surprising, to see how one's work is referenced and for what purposes. Sometimes it is a stretch of the imagination to see the link, but at least the references are to actual papers we wrote ourselves and to work we have undertaken. That has been my experience to date. Until now...

...when a Google Alert pointed me to a paper I apparently wrote with Cheryl Brown quite recently. Which I did not remember us having written. Although we quite plausibly could have, as we have written many papers together (albeit not for a while), and this one is close enough to a topic we could have written about. A closer look showed more detail: not just a reference, but an elaboration of the study we purportedly undertook, together with methods and findings. It was not, as I momentarily thought, a "loskop" moment on my part (literally translated as "loose-headed"). There is no such paper, no such study, no such research.
We have been hallucinated! A surreal moment.

Many reliable and insightful scholars are exploring the implications of AI for research and education, so I won't write an essay. Just a few observations, largely that neatly apportioning "blame" is too straightforward a response, so my point here is not to out the journal (which exists) or the authors (whom I presume exist too!). Presumably the authors used an AI tool as a search engine, and the AI tool did what it is good at and plausibly inferred something useful. My own view is that AI tools for academics are terrible when used as search engines but useful in other ways. There is certainly no point saying that academic authors shouldn't use AI for their writing: it is there, and it can surely be valuable.

Academics are very unlikely to set out explicitly to present falsehoods. It is a fair assumption that all academics want to contribute to knowledge in meaningful ways. It is also a fair assumption that they are under huge pressure, with often unmanageable workloads, large classes to teach, grants to chase, contracts to be renewed, and actual research and writing to be squeezed in somewhere in between. Add to the mix that using AI tools for writing and research is deceptively complicated. Not an excuse, of course, but a reasonable explanation, I think.

What of the journal? In their current incarnation, journals are responsible for quality assurance, which would include catching the inclusion of AI-generated studies and references. Are journal editors realistically to be expected to check every reference? And if they are to play such a role, wouldn't this mean that only large, well-resourced journals could employ fact checkers? Most journal editors are not paid at all, so the work, done for interest, or prestige, or service, sits on top of everything else. Yet doing it badly is bad for reputations too. And no, not an excuse. Just some context.

What about peer reviewers? Any journal editor will tell you how much harder it has become to find peer reviewers at all when everyone is under so much pressure etc. (see above). And is it realistic to expect peer reviewers to recognise every reference, even within their own field, especially when it is a plausible reference? Of course not.

The consequences of these kinds of hallucinations are potentially serious, given that publishing is the lifeblood of academia and knowledge creation. In single instances like this, papers can be withdrawn. But if they are already being integrated into search engine catchments, isn't this a case of closing the stable door after the horse has bolted?

This Alert has left me, and Cheryl, with lots to think about. For now, I am putting it out there as a stark example of what is happening, ready or not.
6 Comments
Bernard
15/9/2024 12:22:42 pm
You've raised a good point about the role of editors and fact checkers in major journals. It's very strange for a Google Alert to suggest "work and wellbeing in digital universities" is something you published because neither Google nor Google Scholar finds it. I wonder what large language model generated the text plug in your screencap. It doesn't appear in the first two paid AIs I use.
Laura
2/10/2024 10:08:54 am
The alert was on my name, Bernard, not on the papers.
Alan
16/9/2024 04:33:37 pm
It's not surprising. It's as if we have this idea that LLMs are actually going to real sources, when all they are doing is statistically generating something -- even when an LLM gets a citation right, it is hallucinated in the same way it made up the one you describe. It's actually impressive that it can get correct citations! It's hard to wrap your head around it, but everything you get back can be considered hallucinated.
Laura
2/10/2024 10:10:35 am
Great advice, Alan. And thanks for the hilarious random research paper generators.
Adesuwa Vanessa Agbedahin
18/9/2024 11:43:23 pm
This is mind-boggling and scary! What an age to live and work in.
Laura
2/10/2024 10:11:13 am
Seriously. Being an educator just became exponentially more complex....
Author: I am a professor at the University of Cape Town in South Africa, interested in digitally-mediated changes in society and specifically in higher education, largely through an inequality lens.