AI: Summary
March 2026 was a month of deepening conceptual and practical work around a single core question: what does it mean to work with knowledge in space, and how do we move knowledge between flat, framed interfaces and XR environments without simply transplanting one medium’s habits into another? The community explored this through hands-on demos of Author visionOS, a field-study presentation on annotation practice, discussions of movement, gesture, and flow in spatial interfaces, and increasingly sharp distinctions between reading, writing, and a third, still-unnamed mode of knowledge navigation unique to XR. The month also produced a terminological breakthrough around the difference between context — the stable background of what you already know — and content — the active material you are working on — a distinction that proved generative across sessions on annotation, spatial organization, and the design of knowledge environments.
AI: Main Topic
The animating topic across all five sessions was the challenge of moving knowledge between media — from notebook to flat screen, from flat screen to XR, and potentially back again — while preserving or enhancing its cognitive utility at each stage. This was explored from multiple angles: Frode Hegland demonstrated a workflow using Author visionOS in which a student’s long-form notes are processed by an LLM to extract concepts, people, and events, then spatialized in a headset as an interactive node map. Jamie Blustein (March 9) grounded the discussion in empirical fieldwork, presenting a taxonomy of real annotation behavior across multiple universities — interpretive marks, place-marking, procedural directions, compound marks — arguing that annotation research has over-focused on technology and must return to first principles about why people annotate and what they need from it. Sam Brooker (March 2) brought the perspective of the ACM Hypertext 2026 conference, introducing questions about the grammar of spatial knowledge spaces and the tension between curated and open structures. Ken Perlin (March 2) shared his own exploratory demos using Wikipedia page hierarchies in mixed reality, and articulated a personal test for XR utility that the community returned to throughout the month: would you use this instead of your keyboard and trackpad if no one were watching?
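The notes-to-node-map workflow Frode demonstrated (long-form notes → LLM extraction of concepts, people, and events → spatialized interactive map) can be sketched in outline. Everything below is a hypothetical illustration, not the Author visionOS implementation: `extract_entities` is a naive keyword scan standing in for an LLM call, and `spatialize` is a trivial placeholder layout.

```python
from dataclasses import dataclass

@dataclass
class Node:
    label: str
    kind: str                            # "concept", "person", or "event"
    position: tuple = (0.0, 0.0, 0.0)    # placement in the XR scene

def extract_entities(notes: str) -> list[Node]:
    """Stand-in for an LLM extraction call: a naive keyword scan."""
    vocabulary = {"hypertext": "concept", "Engelbart": "person", "demo": "event"}
    return [Node(label=word, kind=kind)
            for word, kind in vocabulary.items()
            if word.lower() in notes.lower()]

def spatialize(nodes: list[Node], spacing: float = 1.5) -> list[Node]:
    """Placeholder layout: spread nodes along one axis, grouped by kind."""
    for i, node in enumerate(sorted(nodes, key=lambda n: n.kind)):
        node.position = (i * spacing, 0.0, 0.0)
    return nodes

notes = "Student notes on hypertext history, including Engelbart's 1968 demo."
node_map = spatialize(extract_entities(notes))
```

The point of the sketch is the pipeline shape: extraction and layout are separable stages, so the same extracted entities could be re-laid-out for a seated view, an ambulatory view, or a flat screen.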
AI: Highlights
Ken Perlin articulated a principled test for XR utility that resonated throughout the month: he does not yet find himself working in XR when no one is around. The question — not whether it is impressive in a demo but whether it replaces habitual tools in actual work — became a recurring benchmark.
Sam Brooker introduced the jazz idiom something fixed, something free as a way of framing the spatial grammar of knowledge environments. The academic paper is the fixed element; the connectivity, proximity, and spatial relationships between papers are the free element — currently ungoverned and ripe for new design.
Brandel Zachernuk observed (March 23) that what Frode is building is not reading — it is a third mode, something like reading a map rather than reading a book. This reframing was received as genuinely clarifying.
Peter Dimitrios coined the phrase from A4 to AR format, a compact description of the challenge of reshaping knowledge to fit a spatial medium rather than simply transposing flat documents.
Tom Haymes described the need for a sextant for knowledge — an instrument for orienting oneself within an information environment rather than just navigating it — a phrase that sparked extended discussion on March 9.
Peter Dimitrios (March 2, chat) wrote: We are writing a shared illuminated manuscript. The remark drew enthusiastic reactions and captures the collaborative, layered, historically resonant nature of what the community is trying to build.
Frode (March 16, song lyrics shared in chat): The ultimate display was always you, carrying the map of everything you know. This formulation — that the headset is not an environment to enter but an access mechanism for knowledge that is already yours — was presented as a breakthrough in how to articulate the work.
AI: Insights
XR is neither reading nor writing — it requires a new vocabulary.
Brandel Zachernuk made this explicit on March 23: what Frode is doing in XR is not reading. The word reading implies inscription on a substrate. What happens in a spatial node environment is something closer to topological comprehension — having a tangible appreciation for the spatiality represented in the text. The community lacks adequate language for this third mode, and the absence of vocabulary is itself a design constraint.
XR reduces visual complexity but this reduction can feel like triviality.
Brandel noted (March 23) that when you take a long document and conceptualize it into nodes in XR, it suddenly seems like there is almost nothing there — which paradoxically raises the question of why one would bother. The reduction in cognitive load reads visually as poverty of content. This is a design problem with no obvious solution yet.
The Perlin test reframes the entire research agenda.
Ken Perlin’s stated position — that he has not yet found himself using XR for actual work in private — functions as a litmus test for the field. It distinguishes demonstrations from tools and reframes the question away from capability toward necessity. His articulation of this position (March 2) shaped how Frode and others described their own work for the rest of the month.
Annotation is a first-order cognitive act, not a secondary markup.
Jamie Blustein’s fieldwork (March 9) showed that people annotate to make sense, not to record sense already made. The most common annotation types are interpretive engagement and place-marking — signs of where the mind met resistance or surprise. This argues against treating annotation as a layer added to a finished document and for treating it as a primary site of thought. Frode’s formulation — annotations as knowledge objects that can be lifted from their substrate — extends this into spatial environments.
The distinction between context and content is structurally generative.
The recurring distinction between context (stable, already-known background — people, canonical works, one’s own history) and content (the active material currently being worked on) proved useful across annotation, spatial layout, and interface design discussions. It resolves otherwise intractable problems: why some things in a spatial environment should be fixed and others fluid, why a headset room should feel like a home rather than a blank canvas, and how to prevent a new knowledge space from becoming a new mess.
Pre-conscious visual processing constrains all novel interface design.
Frode (March 30) surfaced research on the two speeds of visual cognition: the brain judges a page and registers unfamiliarity as threat before a single word is read. This has a direct implication for XR knowledge tools — novelty in layout and affordance triggers avoidance, not curiosity, in most users. The design problem is not to suppress novelty but to introduce it at a rate that does not trigger the threat response. Brandel framed this as the difference between design and raw capability: design is centuries of rounding off edges through accumulated practice.
Physical movement in XR is genuinely different — but the seated case must also be solved.
Jonathan Finn’s challenge (March 16) exposed a real tension: the unique value of XR may lie in physical movement through a knowledge space, but movement is inconvenient, space-constrained, and fatiguing. Frode’s response — that reading is best on paper, and there is a continuum of interaction rather than a binary — was productive but left the tension intact. The seated XR experience and the ambulatory XR experience are probably different tools for different cognitive tasks.
Authoring in XR requires spatial arrangement to carry objective meaning.
Jonathan Finn (March 16) noted that if you arrange concepts in space while composing an argument, and then need to translate that arrangement back into linear text, the spatial layout must mean something that survives translation. Without objective semantics for position, the spatial arrangement is only private and temporary. This points toward a need for what Sam Brooker called a grammar chair — someone responsible for the interpretive rules of a shared knowledge space.
The grammar of knowledge spaces is both shared and personal simultaneously.
Ken Perlin (March 2) articulated this as a cognitive universal: human brains always reconcile a shared hierarchy (what the tribe has decided to value) with a personal hierarchy (where I am and what I am doing with my body). Any spatial knowledge system that ignores either hierarchy will fail. This argues for shareable views — spatially saved configurations — as a core feature, not an accessory.
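A shareable view in this sense could be modeled as a personal placement layered over a shared baseline, with the personal hierarchy winning wherever the two disagree. The function and node names below are assumptions for illustration only, not anything proposed in the sessions.

```python
def resolve_layout(shared: dict, personal: dict) -> dict:
    """Merge a tribe-level layout with personal overrides.

    Both dicts map node labels to (x, y, z) positions; the personal
    hierarchy takes precedence wherever the two disagree.
    """
    return {**shared, **personal}

# Shared baseline: what the group has agreed to value and where it sits.
shared_view = {"Hypertext 2026": (0, 0, 0), "Author": (1, 0, 0)}
# Personal override: I keep my own tool close at hand.
personal_view = {"Author": (0, 1, 0)}

layout = resolve_layout(shared_view, personal_view)
```

The design choice the sketch encodes is Perlin’s point directly: neither hierarchy is discarded; the shared structure persists underneath every personal rearrangement, so a saved view can always be diffed against, or reset to, the tribe’s baseline.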
LLM output speed is incompatible with flow unless the model outputs dramatically less.
Brandel Zachernuk (March 23) raised the challenge that LLMs produce far more text than any user can process with cognitive continuity. Either users are not really reading the output, or they are constantly pulled out of flow to apprehend genuinely surprising content. Tom Haymes countered that skimming is a legitimate skill and that models can be instructed to be brief. Brandel’s underlying point — that the rate and volume of LLM output is not designed around human cognitive rhythms — remained unresolved.
The distinction between authored and AI-generated links is ethically and epistemically critical.
Peter Dimitrios (March 30, chat) flagged this directly: in a spatial knowledge environment where nodes can be automatically connected by AI, it becomes essential to distinguish links that a human deliberately constructed from links inferred by a system. This is not merely a UI problem — it is a question about what it means to understand a relationship.
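One minimal way to keep that distinction legible is to record provenance on every link and let the interface filter on it. This is a sketch of the idea, not a design from the sessions; all names and example links are illustrative.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass(frozen=True)
class Link:
    source: str
    target: str
    provenance: Literal["human", "ai"]   # who constructed the connection
    rationale: str = ""                  # why the link exists, if stated

links = [
    Link("annotation", "sense-making", "human", "deliberately drawn by a reader"),
    Link("annotation", "marginalia", "ai", "inferred from co-occurrence"),
]

def authored_only(links: list[Link]) -> list[Link]:
    """Show only connections a person deliberately made."""
    return [l for l in links if l.provenance == "human"]
```

Making provenance a first-class, immutable field (rather than a display style) is what keeps the epistemic question answerable later: any node map can be re-rendered with the inferred links stripped out.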
Reading on paper activates different cognitive processes than reading on screen — and XR is a third thing again.
Brandel (March 23) referenced emerging research on the cognitive specificity of paper reading, noting that it is wrong to use this as an argument against advancing into digital and spatial media. Instead, it should prompt precise identification of what paper does that must be preserved or deliberately reimplemented. XR spatial navigation has its own cognitive profile, distinct from both.
The flamethrower moment as the design target for XR.
Tom Haymes (March 16) recalled a Second Life presentation where a speaker set the audience on fire with a virtual flamethrower to make a point about learning management systems. The room suddenly understood what XR was for: not a worse version of something you could do in real life, but something categorically impossible outside the medium. This remains the design target — an affordance or experience that could only exist in XR and that changes how the user understands the content.
Knowledge tools should be thought of as homes, not vehicles.
Frode (March 23) articulated a shift in how he thinks about the headset: it is not something he takes to a coffee shop, it is something he uses at home, where the room is the knowledge and the headset gives access to it. This repositions XR from a portable device to a situated cognitive environment — closer to a library than a laptop.
