David E. Millard
Welcome to the post-document world. A place where our written messages are merely an ephemeral part of a machine translation from the intent of a sender to the preferences of a recipient [11]. This is a world where readers interact far less with fixed, discrete texts authored by individuals, and instead receive information through AI-mediated, dynamically generated content tailored to their needs. Here, documents lose their role as containers of an author’s singular perspective. Instead, meaning emerges through adaptive algorithms that synthesize and personalize a vast array of sources in real time. In the post-document world, reading becomes an on-demand, dialogic experience, with content customized according to context, expertise, and preference [20]. This reconfiguration, which now seems inevitable, will fundamentally change how we approach the consumption and creation of texts.
Historically, this move can be seen as both an extension and a rupture of what hypertext first set in motion. Hypertext challenged the tyranny of linearity, allowing readers to traverse documents along associative trails, shaping their own paths and weaving together fragments of information [22]. Early digital systems privileged non-linearity and navigational agency, but AI escalates this challenge by making not just the journey, but the text itself, fluid and contingent. Where hypertext allowed meaning to become multi-threaded, AI has the potential to dissolve the very boundaries of texts, recomposing insights on demand [20].
For everyone engaged in writing and reading, this raises critical stakes. For the Future of Text community, the journey into the post-document world requires a reimagining of agency and the practices that sustain our collective dialogue about knowledge and creativity.
The dissolution of document boundaries by AI-driven systems completely changes textual interaction: readers encounter not static artifacts, but immediate and perpetually malleable streams of information. This means that the role of authorship enters ambiguous terrain. Attribution becomes problematic; reuse and remixing are automatic, and the edges between original and synthetic text blur. For writers, recognition and compensation are jeopardized as traditional systems of reputational and financial reward, from citation counts to royalties, grow unreliable in a world of automated remixing and synthesis.
Reputation, ownership, and authority (once stabilized by the document) are now distributed and dynamic. Users may benefit from frictionless access and adaptive engagement, but they must navigate the challenges of provenance and trust. The question is no longer “Who wrote this?” but “How did this synthesis happen?” The stakes are high enough to prompt a wholesale reconsideration of legitimacy and value in textual work.
These shifts echo the provocations posed by early hypertext theorists, particularly the so-called “death of the author” [4, 6]. Where non-linearity once empowered readers to construct meaning by traversing links between texts, the AI era radically deepens this shift: it untethers the content itself from its origins, actively recomposing knowledge for every occasion.
There are, of course, advantages and opportunities. The most immediate benefit of a post-document world will be a leap in accessibility. When AI systems produce information on demand, tailored to each reader’s background, interests, and questions, barriers of expertise and style dissolve. Readers can access precisely what they need, whether a quick summary, detailed technical analysis, or something pitched at their level of understanding, resulting in information systems that are radically user-centred, opening complex domains to a wider and more diverse audience [25].
This relentless adaptability opens up new creative and scholarly possibilities, and not just for AI supported writing [14]. AI’s generative capacities could support the creation of multi-perspective “living texts” that evolve in response to user input, topical debate, or collaborative effort. Unlike the static canon, these new genres will be inherently inclusive: they invite commentary, remixing, and extension, fostering joint ownership of knowledge and shared understanding among communities. They could transform what are now sprawling community forums or wiki spaces into coherent, living texts that remain multifaceted and interrogatable.
Immersive and spatial interfaces propel this reimagining into new territory. In museums, classrooms, or public spaces, mixed reality overlays can already situate information directly within real-world contexts, turning passive panels into interactive experiences or narratives [12]. With AI this text could become fluid and responsive. Visitors may receive poetic reflections, scientific data, or historical interpretation linked to artefacts or locations, tailored to their interests or learning goals. The scale and flexibility of AI make such experiences sustainable at a mass level, making today’s discrete and granular locative texts seem clunky and unresponsive in comparison.
The net result of these changes is a more porous, participatory, and creative environment. As readers and writers move beyond static documents, the possibilities for textual engagement are dramatically expanded, and with them the challenge of ensuring that these opportunities serve deeper agency and meaning.
Unfortunately, the move to a post-document world also poses serious risks for both readers and writers. The most serious risk is the erosion of slow, critical, and skilful reading and writing practices. As AI-generated text displaces traditional drafting and synthesis, individuals may lose opportunities to engage in the deliberate work of marshalling arguments and developing nuanced interpretations through sustained writing. For writers, then, AI is in danger of becoming the Thief of Reason [20]. Recent studies suggest negative impacts for readers too, as the ease and fluency of AI outputs can encourage surface-level reading, with users less likely to scrutinize facts or construct arguments independently [9], leading to cognitive offloading and intellectual passivity [13].
Transparency, trust, and information verification become persistent challenges in this landscape. AI systems are designed to generate plausible, well-formed text, and in doing so, they often “homogenize” the surface cues that readers have historically relied upon to detect unreliability, bias, or error. As a result, subtle signals of uncertainty or dissent may be lost, and hallucinated content appears with the same confidence as established fact. Emerging studies show that the use of such systems in scholarly communications is already rising sharply [10, 15], and the risks to trust and verifiability in scholarly and journalistic contexts are grave, requiring new technical and ethical frameworks to address information provenance and integrity [23].
For knowledge-making communities, the threat to tradition is profound: the speed and volume of AI-mediated discourse could overwhelm the deliberate processes that peer review and editorial pacing preserve. The slow crafts of argumentation, citation, and critical exchange, essential for the refinement and validation of scholarship, are increasingly at odds with commercial incentives for faster production and wider visibility [5]. Commercial and cultural risks are equally stark. As remix culture and the ubiquitous reuse of AI-generated content blur the boundaries of creative ownership, attribution and compensation become increasingly precarious. The value chain that once rewarded careful writing and curated expertise faces disruption, raising urgent legal and institutional questions about recognition in collaborative, post-document ecosystems [24].
Finally, over-reliance on black-box AI tools can threaten both individual agency and the communal sense of meaning. When content creation and interpretation are outsourced to systems whose workings remain largely opaque, readers and writers risk becoming passive consumers. The challenge is not only technical, but existential: how to preserve the skills, values, and habits that give meaning to reading and writing in the face of seamless but inscrutable automation [7].
While AI makes some incarnations of hypertext redundant (for example, its value as a computational knowledge format [2]), other perspectives on hypertext have much to offer in this new post-document world. At its core, hypertext has always been rooted in a philosophy of human agency [22], an existentialist belief that people should have the autonomy to configure their information and knowledge spaces as they see fit. As we confront the challenge of AI-generated, post-document textuality, these foundational values offer a critical guide. The question is not whether AI can produce fluid, adaptive text (it clearly can) but rather how we build systems that preserve human autonomy, allowing people to retain genuine control over how information is processed, manipulated, and presented [7]. If hypertextual thinking is to remain relevant in the age of AI, it must focus less on the specific technologies of the 1970s, 80s, and 90s, and more on the timeless principles those technologies embodied.
Transparency stands as one of the most urgent requirements. Current large language models produce text, not hypertext [20]. They generate polished, linear prose with perhaps a few prompt suggestions appended at the end, but the text itself lacks the structural markers that hypertext affords. Hypertext, by contrast, makes relationships visible: links act as a form of punctuation, highlighting interesting words, concepts, or hotspots that carry particular weight. They also create a grammar of connection, showing how texts relate to one another and offering multiple entry points into meaning [19]. Imagine if AI systems could generate true hypertext outputs that allow readers to navigate mini hypertextual collections, expand sections, or reshape arguments dynamically. Instead of a sequence of interactions where the machine remembers context but the textual output is entirely rewritten each time, we might imagine a persistent hypertext that the user navigates and reshapes through prompts, building understanding incrementally and transparently.
Provenance is also essential. In a post-document world, the connection between text and author risks becoming a mere metadata tag, invisible to readers and algorithmically obscured. Hypertext thinking demands that we treat contributors as first-class entities, not afterthoughts [21]. Transparency must extend beyond simply listing sources; it should show which parts of which texts are being drawn upon, and crucially how and why they have been transformed. Current simplistic, linear interfaces are inadequate for this task. We need more faceted views of information, interfaces where users can see the remediated text they are engaging with, alongside the original sources in context, the relationships between them, and the transformations applied. We do not yet have a fully realized solution for such an interface, but the urgency of the challenge will drive considerable exploration in the coming years [3].
These flexible, spatial, and faceted interfaces offer a bridge between the document-centric past and the fluid, post-document future. Rather than surrendering to linear chatbot outputs, we can draw on the rich tradition of spatial hypertext, where information is organized not just sequentially but multidimensionally [18]. Spatial arrangements support tasks like information triage, where users rapidly assess, cluster, and navigate large volumes of material [17]. These interfaces need not be purely utilitarian; they can be poetic and aesthetically rich, enhancing both comprehension and engagement. By combining AI’s generative power with hypertext’s structural clarity and user-driven navigation, we can preserve human meaning-making even as the boundaries of documents dissolve.
The transformation ahead requires action on multiple fronts. Educational priorities must be revisited. Conceptualisations of what AI Literacy entails are still evolving [16] but it is clear that traditional literacy skills are more essential than ever in an AI-mediated world. Yet the traditional pathways through which we develop critical reading and writing (slow, deliberate engagement with texts, the marshalling of arguments, the assessment of sources) risk being short-circuited by tools that make complex writing appear effortless. If we stop teaching people to be good writers because writing seems devalued, we simultaneously fail to teach them the skills they need to be discerning readers [20]. In a post-document world, where everyone must captain their own ship through vast seas of AI-generated information, those critical skills are intellectual survival tools.
The design challenge is to move beyond plain text chatbot interfaces and linear outputs toward hypertextually rich environments that support post-document workflows. This means designing workspaces that are flexible and bounded, where fluid information can be scoped, grouped, and navigated in ways that match human cognition. Current tools are beginning to experiment with “spaces” and “projects” (cognitive scaffolds that allow users to manage context) but these are only first steps. The community must explore interfaces that generate true hypertext, with differentiated links, transparent transformations, and faceted views that show not only the remediated content but also its sources and relationships. The technical work ahead is substantial, but so too is the conceptual challenge: to reimagine what reading, writing, and navigation mean when documents dissolve.
We need mechanisms to ensure that AI development and deployment prioritize human autonomy, preserve the voices of diverse contributors, and resist the flattening tendencies of algorithmic consensus [8, 1]. This will require not only better tools but also new norms, expectations, and forms of accountability, ensuring that the systems we build remain in service of human meaning-making rather than its replacement.
We don’t need to abandon careful scholarship, attribution, and sustained dialogue, but we need to find new ways to uphold them. Alternative models of publication (living projects, micro-contributions) can coexist with traditional peer review if we design systems that make provenance visible and transformations traceable. The goal is not to freeze knowledge in static forms, but to create environments where fluidity serves deeper understanding and where readers and writers alike retain the agency to shape meaning in a world increasingly mediated by machines. The world after documents.
A Note on Production
This document was written in what I have called “Potter” mode [20]. An initial set of thoughts, ideas, and arguments was created by allowing a large language model to interview me about a previous paper (The Shadow of the Machine, focusing on the Dialogic Web and Post-Document World). This transcript was then iteratively edited and restructured by both myself and the AI into this essay format, before a final manual editing pass. This final review ensured that sources were used correctly and that my ideas and intent were properly reflected in the text.
References
Agarwal, D., Naaman, M., and Vashistha, A. AI Suggestions Homogenize Writing Toward Western Styles and Diminish Cultural Nuances. In Proceedings of CHI ’25, Association for Computing Machinery, pp. 1–21.
Anderson, M. W. R., and Millard, D. E. Seven Hypertexts. In Proceedings of the 34th ACM Conference on Hypertext and Social Media (New York, NY, USA, Sept. 2023), HT ’23, Association for Computing Machinery, pp. 1–15.
Atzenbeck, C. Unwinding AI’s Moral Maze: Hypertext’s Ethical Potential. In Proceedings of the 35th ACM Conference on Hypertext and Social Media (New York, NY, USA, Sept. 2024), HT ’24, Association for Computing Machinery, pp. 23–28.
Barthes, R., and Heath, S. Death of the author. In Image, Music, Text. Fontana Press, London, UK, 1977, pp. 142–148.
Berg, M., and Seeber, B. K. The Slow Professor: Challenging the Culture of Speed in the Academy. University of Toronto Press, Apr. 2016.
Brooker, S. Man proposes, god disposes: Re-assessing correspondences in hypertext and antiauthorist literary theory. In Proceedings of the 30th ACM conference on hypertext and social media (New York, NY, 2019), ACM, pp. 39–48.
Brooker, S. Computer, Enhance! Augmentation, Ideation, Hypertext. In Proceedings of the 35th ACM Conference on Hypertext and Social Media (New York, NY, USA, Sept. 2024), HT ’24, Association for Computing Machinery, pp. 193–196.
Doshi, A. R., and Hauser, O. P. Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances 10, 28 (July 2024). American Association for the Advancement of Science.
Gerlich, M. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies 15, 1 (Jan. 2025).
Gray, A. ChatGPT “contamination”: estimating the prevalence of LLMs in the scholarly literature, Mar. 2024. Preprint available from https://arxiv.org/abs/2403.16887v1
Hancock, J. T., Naaman, M., and Levy, K. AI-Mediated Communication: Definition, Research Agenda, and Ethical Considerations. Journal of Computer-Mediated Communication 25, 1 (Mar. 2020), 89–100.
Hargood, C., Weal, M. J., and Millard, D. E. The StoryPlaces platform: Building a web-based locative hypertext system. In Proceedings of the 29th on Hypertext and Social Media (Baltimore, MD, USA New York, NY, USA, 2018), ACM, pp. 128–135.
Hassen, M. Z. The Impact of AI on Students’ Reading, Critical Thinking, and Problem-Solving Skills. American Journal of Education and Information Technology 9, 2 (Sept. 2025), 82–90. Science Publishing Group.
Hutson, J. Human-AI Collaboration in Writing: A Multidimensional Framework for Creative and Intellectual Authorship. International Journal of Changes in Education (Feb. 2025).
Liang, W., Zhang, Y., Wu, Z., Lepp, H., Ji, W., Zhao, X., Cao, H., Liu, S., He, S., Huang, Z., Yang, D., Potts, C., Manning, C. D., and Zou, J. Y. Mapping the Increasing Use of LLMs in Scientific Papers, Apr. 2024. Preprint available from https://arxiv.org/abs/2404.01268v1
Lintner, T. A systematic review of AI literacy scales. npj Science of Learning 9, 1 (Aug. 2024), 50. Nature Publishing Group.
Marshall, C. C., Shipman, III, F. M., and Coombs, J. H. VIKI: spatial hypertext supporting emergent structure. In Proceedings of the 1994 ACM European Conference on Hypermedia Technology (ECHT ’94) (Sept. 1994), ACM Press, pp. 13–23.
Marshall, C. C., and Shipman, III, F. M. Spatial hypertext: Designing for change. Communications of the ACM (CACM) 38, 8 (1995), 88–97.
Mason, S., and Bernstein, M. On links: exercises in style. New Review of Hypermedia and Multimedia 27, 1-2 (Apr. 2021), 29–50. Taylor & Francis. https://doi.org/10.1080/13614568.2021.1889693
Millard, D. E. The Shadow of the Machine: Hypertext in the Age of Artificial Intelligence. In The Narrative and Hypertext Workshop (Held in Conjunction with ACM Hypertext 2025) (Chicago, IL, 2025).
Moreau, L. The Foundations for Provenance on the Web. Found. Trends Web Sci. 2, 2-3 (Oct. 2010), 99–241.
Nelson, T. H. Computer Lib/Dream Machines. Self-published, 1974.
Perkins, M., Roe, J., Postma, D., McGaughran, J., and Hickerson, D. Detection of GPT-4 Generated Text in Higher Education: Combining Academic Judgement and Software to Identify Generative AI Tool Misuse. Journal of Academic Ethics 22, 1 (Mar. 2024), 89–113.
Ryan, H., Abramov, D., Acker, S., and Elkins, S. Can AI Be a Co-Author?: How Generative AI Challenges the Boundaries of Authorship in a General Education Writing Class. Thresholds in Education 48, 1 (2025), 40–56. Academy for Educational Studies.
Vansh, R., Rank, D., Dasgupta, S., and Chakraborty, T. Accuracy is not enough: Evaluating Personalization in Summarizers. In Findings of the Association for Computational Linguistics: EMNLP 2023 (Singapore, Dec. 2023), H. Bouamor, J. Pino, and K. Bali, Eds., Association for Computational Linguistics, pp. 2582–2595.
