Talk by Joe Dumit
"AI hallucinations are a feature of LLMs: Let’s use them!"

AI confabulations are integral to how large language models work. They are a feature, not a bug. LLMs are trained on texts, not truths. Each text bears traces of its context, including its genre, voice, audience and the history and local politics of its place of origin. A correct sentence in one scientific discipline might be inaccurate or nonsensical in another. Texts are ‘true’ only in the right context. LLMs learn from texts by picking up features — billions of detailed patterns that include discipline-specific micro-grammars and concepts. They operate in a space where meaning is constructed rather than retrieved from elsewhere. They confabulate and synthesize across disparate texts, and extrapolate from partial inputs. Their responses are inherently creative, context-specific and untrustworthy.
This talk explores how chat interfaces require active resistance to the notion that the LLM has “an answer” to a question (dialogues cause us to hallucinate!). Prompting should instead involve agentive stance-taking, point-of-view activation, and iterative co-creativity. The talk offers techniques for slow prompting, hypothesis generation, teaching with LLMs rather than against them, and collective querying.
Joseph Dumit is an anthropologist of passions and performance, brains and games, AI and computers, contact improvisation and slownesses, drugs and facts. He is Professor in Interdisciplinary Data Collaborations at Aarhus University, and Chair of Performance Studies and Professor of Science & Technology Studies and of Anthropology at the University of California, Davis. He is a core member of the Experiencing, Experimenting, Reflecting grant with Aarhus University and Studio Olafur Eliasson. His books include Picturing Personhood: Brain Scans and Biomedical Identity in America and Drugs for Life: How Pharmaceutical Companies Define Our Health. He likes lichen and speculation. https://dumit.net