January 2026

Journal entry for January 2026. The following text was produced by ChatGPT 5.2 on the 27th of January, and fully verified and edited by host Frode Hegland.

Abstract

During the first month of meetings this year, following a two-year Alfred P. Sloan Foundation research project, the Future of Text group convened to explore what kinds of research projects might most productively advance the study of text, knowledge representation, and cognition in Extended Reality (XR). Discussions ranged from geopolitical concerns about information warfare and democratic resilience to concrete proposals for embedding machine-readable metadata into documents and rendering scholarly structures spatially. The group’s early work has coalesced around a central question: how might open, trustworthy, and spatially navigable documents augment human intellect in an era of AI-mediated reading?

Community Update Email

Tom Haymes’ NotebookLM of this month’s transcript

Songs: Two Different Ones

‘Room for the Mind to Grow’

a faster beat

AI co-orchestrated music based on this month’s transcripts.

Framing the Problem Space

The opening sessions positioned the group’s inquiry within a broad cultural and epistemic context. Participants expressed concern about large-scale informational instability, the vulnerability of public discourse, and the limits of current reading interfaces. This framing established a normative motivation for the work: improving how people interact with knowledge is not merely a technical challenge but a civic one.

XR technologies were introduced not as entertainment platforms but as potential cognitive instruments—environments in which documents, arguments, and citations could become spatial objects that users explore, manipulate, and interrogate.


Core Research Themes

Spatial Documents and Embodied Reading

A recurring focus was the idea that documents might be represented as spatial structures rather than flat pages. Participants explored how arguments, narrative arcs, and citation networks could be laid out in three dimensions, allowing readers to navigate scholarship physically and visually. This was framed as a continuation of earlier hypertext and electronic literature traditions rather than a break from them.

The group discussed how such spatial forms might support comprehension, memory, and critical reasoning, especially when combined with gesture, movement, and embodied interaction.


Open Metadata and “PDF+”

Another central strand concerned document infrastructure. The group proposed embedding rich, machine-readable metadata—such as BibTeX records and citation graphs—directly inside otherwise conventional PDFs, for example as appendices that do not interfere with human reading but remain accessible to software agents.

This approach was described as enabling:

• verifiable provenance
• long-term archival stability
• automated citation interaction
• cross-document mapping
• XR visualization of scholarly corpora

The concept of documents being simultaneously “frozen” for archival trust and “liquid” for computational exploration became a unifying metaphor, built around the notion of Visual-Meta.


Prototyping and Experimental Systems

Alongside conceptual work, participants shared early technical experiments: WebXR prototypes, visualization demos, repositories of Sloan-funded research code, and interactive sketches. These were used to test how spatial interfaces might render texts, references, and analytic structures.

The group stressed rapid, open prototyping as a methodological stance—building small systems to think with rather than waiting for full-scale platforms.


AI-Assisted Reading & Discovery

AI-supported tools for summarizing, mapping, and querying documents were discussed as both opportunities and risks. Participants emphasized that such systems should make their operations transparent and citation-aware, avoiding black-box interpretations.

NOTE: The group expressed deep reservations about letting AI take too prominent a role in cognition. AI should augment thought, not replace it, and the question of ‘who influences the models’ raises serious concerns about the control of knowledge in the future. The group’s focus is text in XR, not AI text.

In our last January meeting, Astral posted Yuval Noah Harari’s warning about some of these concerns regarding AI.


Humanities Perspectives & Preservation

Several discussions highlighted the importance of grounding XR experimentation in literary history, electronic literature, and archival practice. Questions of how spatial documents would be preserved, cited, and interpreted over decades were treated as central research challenges rather than afterthoughts.

This led to an emphasis on continuity with prior hypertext research and on designing systems that remain legible to scholars outside of technical communities.


Provisional Research Agenda

By the end of the first month, the group’s wide-ranging conversations had begun to converge into a coherent program:

Central Question

How can open, machine-readable documents be transformed into spatial, explorable knowledge environments that augment human reasoning while preserving scholarly trust and long-term access?

What should the initial, novice user be able to do in XR, and what should the expert, trained user be able to do?

Emerging Focus Areas

  1. Document Architecture
    Standards for embedding citation graphs, metadata, and structural models into PDFs and related formats.
  2. Spatial Knowledge Representation
    Techniques for laying out arguments, sources, and conceptual relationships in XR.
  3. Interaction Design
    Gestural, embodied, and navigational methods for reading and annotating in three dimensions.
  4. AI Integration
    Transparent AI systems that operate over open metadata rather than opaque text scraping.
  5. Preservation and Scholarly Practice
    Archival strategies, citation norms, and historiographic continuity.

Next Steps

Developing a small number of concrete demonstrators (currently being tested in Author and Reader for visionOS, from Frode Hegland’s company, The Augmented Text Company):

• a PDF+ document containing embedded citation metadata in the form of Visual-Meta
• an XR environment that renders a paper’s references as a navigable constellation
• AI tools that query the embedded metadata and expose their reasoning paths


Conclusion

The first month of meetings functioned as an exploratory phase in which the Future of Text group surveyed a broad intellectual terrain and tested potential research trajectories. While no single system architecture was finalized, a clear direction emerged: the group is moving toward a research program centered on spatially embodied documents, open metadata infrastructures, and transparent AI-assisted reading environments. The work ahead will focus on turning these ideas into demonstrable systems and empirical studies capable of informing the future of scholarly communication.

Further, the group will have a specific theme for each month: January was a Letter in XR, and February will be People/Contacts in XR.


Extracted Media

URLs mentioned in Chat

Raw Transcript
