TEXT, the Center for Contemporary Cultures of Text, was established to understand the impact of Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) on writing cultures at a pivotal moment in history: after more than 6,000 years of handcrafted text production, every aspect of how text is created and used is being altered. We are convinced that a research-based understanding of the role of text in this new technological environment is a precondition for keeping the production and use of text under human-centered control.
The center evaluates and develops language technology grounded in linguistic research, and investigates when the introduction of new practices and technologies contributes to a better text culture, and when something valuable is lost. Learn more about the researchers involved and the organization of our work packages.
TEXT is based at Aarhus University and funded by the Danish National Research Foundation. The center’s partners include It-vest, Danish Foundation Models, and Rhetor, with external participants from Cornell University, UC Berkeley, UC Davis, and the University of Oslo.
We are committed to conducting AI research that is aware of its environmental footprint. Training and running large-scale models contributes significantly to global greenhouse gas emissions. We aim to demonstrate that smaller, more efficient models can match larger ones on many textual tasks. By opting for model sufficiency over scale, we contribute to more sustainable AI practices.
We are committed to FAIR (Findable, Accessible, Interoperable, Reusable) principles in data use and stewardship. Our research acknowledges the risks of misinformation, cheating, copyright infringement, and biased evaluation.
We promote a culture of careful curation, annotation, and reflection—treating data not just as raw input, but as the shared basis of responsible knowledge creation.
We support transparent, explainable, and accessible AI. We prioritize open-source models and methods, while critically engaging with commercial systems. Unknown training data, censorship of outputs, and the sudden retraction of tools all challenge the reliability and democratic oversight of AI.
Our goal is to foster models that remain accountable to the public and usable in diverse research, educational, and cultural settings.
Our center upholds a strong code of conduct. We value open dialogue, inclusive collaboration, and mutual respect across disciplines and communities. We reject discrimination and exploitation in all forms.
Ethical reflection is part of our everyday practice—not a separate domain but a shared responsibility.