March 16th

We will be talking about the Movement of Knowledge, particularly in and out of XR.

AI: Summary

This session of the Future Text Lab community centered on what host Frode Hegland called the “movement of knowledge” — the challenge of transporting and reshaping personal and scholarly knowledge from traditional 2D computing environments into XR. Frode demonstrated a working prototype that uses AI to extract defined concepts from student notes and spatialize them in Apple Vision Pro, provoking wide-ranging discussion about the nature of spatial cognition, the inadequacies of existing knowledge organization tools, the analogy of medium phase-shifts in media history, the neuroscience of memory, and what it would mean to both author and browse knowledge in three dimensions.


AI: Main Topic

The session opened with Frode presenting three slides on the “movement of knowledge” problem: that knowledge authored on a traditional framed display — with its specific affordances of keyboard shortcuts, trackpad, and linear framing — does not automatically suit XR, which offers a wholly different substrate. Frode then demonstrated a working prototype within his Author application for Vision Pro, in which student notes (around 8,500 words, mocked up with Claude) are parsed by an AI prompt to extract “defined concepts” — including persons, places, and events — which are then displayable as a spatial map inside the headset. The demo illustrated a workflow of write-on-desktop, extract-with-AI, view-and-manipulate-in-XR, and raised the central question of how to “sculpt” knowledge to genuinely fit both media rather than simply transposing from one to the other.
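The extract-with-AI step of this workflow can be sketched in a few lines. The sketch below is purely illustrative: the JSON schema, the field names (`name`, `kind`, `definition`), and the grouping logic are my assumptions, not the actual prompt or format used in Frode's Author prototype, which was not shown. It models the idea of turning an AI's reply into typed "defined concept" records that a spatial view could then lay out by kind.

```python
import json
from dataclasses import dataclass

# Hypothetical JSON an extraction prompt might be asked to return;
# this schema is an assumption, not the prototype's actual format.
LLM_RESPONSE = """
[
  {"name": "Long Term Potentiation", "kind": "concept",
   "definition": "Calcium-driven stabilisation of synaptic connections."},
  {"name": "Frode Hegland", "kind": "person",
   "definition": "Host of the Future Text Lab session."},
  {"name": "University of Oslo", "kind": "place",
   "definition": "Where LTP was first demonstrated."}
]
"""

@dataclass
class DefinedConcept:
    name: str
    kind: str        # e.g. "concept", "person", "place", "event"
    definition: str

def parse_concepts(raw: str) -> list[DefinedConcept]:
    """Turn the model's JSON reply into typed records for spatial layout."""
    return [DefinedConcept(**item) for item in json.loads(raw)]

def group_by_kind(concepts: list[DefinedConcept]) -> dict[str, list[str]]:
    """Bucket concepts by kind so each kind can occupy its own spatial region."""
    groups: dict[str, list[str]] = {}
    for c in concepts:
        groups.setdefault(c.kind, []).append(c.name)
    return groups

concepts = parse_concepts(LLM_RESPONSE)
print(group_by_kind(concepts))
```

The point of the typed intermediate layer is that the headset view never touches raw model output: a malformed reply fails at parse time, before anything is spatialized.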


AI: Highlights

Peter Dimitrios coined the phrase “from A4 to AR format” in the chat, which Frode immediately embraced as a clean encapsulation of the session’s core challenge.

Brandel Zachernuk offered a precise framing of the appropriate role of AI in the demonstrated workflow: AI is best trusted with low-stakes transpositions from one form of information to another, and when the mapping is not mission-critical, it is ideal. He positioned this as “the best kind of use case” and a better-grounded application than many of the larger claims currently being made for AI.

Tom Haymes identified the problem as McLuhan-esque, a framing Frode explicitly adopted, noting it is directly relevant to the research proposal he is writing.

Jonathan Finn raised what Frode called a “juicy” problem: authoring in spatial environments requires that spatial arrangements carry interpretable meaning, otherwise there is no pathway back to a linear document or communicable artifact.

Peter Wasilko asked in the chat: “What was the Japanese term for books you own for future consultation that you haven’t had time to read as yet?” The question was directed to the group, and Frode answered it in the chat using Claude, which supplied a full explanation of tsundoku (積ん読), a blend of tsunde (to stack) and doku (to read). Frode followed the answer with the note “(Claude…)” and Peter Wasilko reacted with a heart emoji. This was a direct use of me — the Assistant — as a live reference during the session.


AI: Insights

The most consistent conceptual tension in the session was between the promise of spatialization and the problem of meaning. Jonathan Finn crystallized this: if you arrange concepts in 3D space while authoring, the spatial arrangement must carry interpretable meaning or the translation back to a shareable document becomes arbitrary. Frode’s response — that malleability and gesture matter more than fixed positions, and that commands like “order by time” should be temporary views rather than permanent structures — gestures at a resolution, but the tension was not resolved.

Tom Haymes drew a sustained analogy between the film/theater transition and the text/XR transition. Early cinema simply filmed theater; television shot radio plays. The community risks doing the equivalent with XR — putting linear documents into a headset and calling it spatial. The phase shift requires a new expressive vocabulary, not just a new container.

Brandel Zachernuk introduced a subtler point about the unsung affordances of XR: not spectacle or gesture-drama, but the continuity of sensing — multiple degrees of freedom, sophistication of point of view, the ability to look somewhere because you choose to rather than because a screen forces you there. He implied that once users move past novelty (killing zombies), these subtleties become the real medium.

Tom Haymes articulated a bifurcation of reading emerging from AI: “McDonald’s” information books that are best interrogated by AI for their extractable content, versus books requiring slow, reflective reading on a couch. This distinction maps onto a distinction in XR use: spatialising factual information vs. supporting deep engagement with complex ideas. Peter Dimitrios echoed this in the chat with “fast food vs. haute cuisine info.”

A quietly significant realisation emerged around physical bookshelf organisation: Tom Haymes, Jonathan Finn, and others converged on the insight that the fundamental limitation of physical and even digital libraries is the forced assignment of a book to a single category. XR allows a book to exist simultaneously in every relevant category through virtual duplication — a structural impossibility in physical space that becomes trivially solvable in XR. This is not merely a convenience but a qualitative shift in how knowledge organisation can work.

Frode Hegland introduced a neuroscience grounding for spatialization. He read from a book on brain physiology — describing Long Term Potentiation, the process by which repeated activation causes calcium-driven structural changes that physically stabilise a synaptic connection. His interpretive leap: the brain already uses something structurally analogous to spatial anchoring to consolidate memory, which suggests that external spatial representations of knowledge may genuinely align with internal cognitive architecture, not merely as metaphor but as functional correspondence.

Jonathan Finn observed that tools like Tinderbox and Devonthink are powerful but the problems they address are so fundamental that they should be OS-level affordances, not the province of specialist applications — citing Spotlight (not Sherlock) as a precedent for what becomes standard infrastructure.

Tom Haymes noted that AI devalues content at the commodity level but raises the premium on curation and contextualisation — the two things that AI cannot easily supply. This reframes the role of the scholar, the teacher, and the reader: not as conduits of information but as curatorial and contextual intelligences.

The physical constraints of XR movement were discussed with some candor. The “gorilla arm” problem — made vivid by Tom Haymes quoting from Make it So about Tom Cruise needing continuous breaks during Minority Report filming — and Peter Wasilko’s description of sitting three feet from a bookcase together highlighted that full-room XR locomotion is a niche experience. Frode Hegland acknowledged this and distinguished AR (his primary focus) from VR, suggesting that overlaying knowledge onto real physical environments — including real bookcases — is a more tractable and contextually richer direction. The tension between seated micro-interaction and ambulatory spatial engagement was named but not resolved.

The content/context distinction continued to develop. Frode argued that any spatial knowledge environment needs a stable layer of context — known topics in known places — and a dynamic layer of current content that can be rearranged against it. Without this distinction, every new session produces a new mess.

Peter Wasilko’s description of combining Tinderbox and Devonthink — dynamic agents and concordances as background processes — sketched a model of knowledge organisation that is already partially spatial in its logic, even when expressed in 2D. His workflow revealed that the gap between current power-user tools and a genuinely spatial system may be narrower in concept than in interface.


AI: Resources Mentioned

Metropolitan Museum of Art 3D models release (140 objects including a large tomb), shared by Brandel Zachernuk: https://www.metmuseum.org/press-releases/3-d-models-announcement-2026

Frode Hegland’s Instagram reel (a song about the session’s themes): https://www.instagram.com/reel/DV8rHHHDGXT/

Frode Hegland’s YouTube version of the song: https://youtu.be/OFSRoTYP8JM

The Story of Film documentary series, shared by Tom Haymes as a model of medium phase-shift analysis: https://www.imdb.com/title/tt2044056/

Open Syllabus Galaxy — a visualisation mapping book co-occurrence across academic syllabi, shared by Peter Wasilko: https://galaxy.opensyllabus.org

Open Syllabus main site: https://www.opensyllabus.org

Open Syllabus dataset documentation: https://opensyllabus.github.io/osp-dataset-docs/index.html

Open Syllabus blog and policies: https://blog.opensyllabus.org/terms-and-policies/

Tom Haymes’ shared Gemini chat demonstrating AI bibliography generation from a bookshelf photo: https://gemini.google.com/share/8b451fc98153

Scribd link to Dan McClellan’s book The Bible Says So, shared by Frode Hegland, used to illustrate the constructive nature of meaning-making in reading: https://www.scribd.com/document/925739000/

Article on how “Sherlocking” became a term, shared by Peter Wasilko in reference to Jonathan Finn’s mention of Spotlight: https://applecorepod.com/sherlock-the-mysterious-case-of-how-sherlocking-became-a-thing/

Make it So — book on sci-fi interface design, quoted by Tom Haymes (Minority Report / gorilla arm problem)

Ulysses — note-taking application used by Jonathan Finn

Tinderbox — knowledge management application described by Peter Wasilko for persistent dynamic agent queries

Devonthink — document management application described by Peter Wasilko for concordance-based exploration and clustering

Scrivener — writing application mentioned by Peter Wasilko

Notebook LM (Google) — used by Tom Haymes for bounded-corpus student research queries

Ken Perlin — NYU researcher, mentioned by Frode as someone whose lab he hopes to visit in New York in May

Phil Gooch — community member mentioned by Frode as having discussed AI-generated music that week

Bob Horn — referenced by Frode in relation to earlier Brandel Zachernuk experiments with a large spatial mural in XR

ACM Hypertext Conference — mentioned by Frode as a possible anchor event for the September Future of Text gathering in Europe

Second Life — mentioned by Tom Haymes in his anecdote about a flamethrower presentation as a model of what is uniquely possible in a virtual medium

Fahrenheit 451 — referenced by Brandel Zachernuk in response to Frode’s poem about carrying knowledge (“people as books”)

Long Term Potentiation (LTP) — neuroscientific concept, first demonstrated at the University of Oslo, discussed by Frode as a physical basis for spatial memory anchoring


Song

This track is an AI-orchestrated piece inspired by the transcript of this meeting, meant as a fun provocation to further thought. (suno.com)
