Introduction to the March project: how a conference could be presented in XR.
It is fair to say that XR is a ‘tool looking for an application’. This month we are looking at what might be useful in XR for orienting oneself at a conference, perhaps with scheduling and with the academic papers being published.
Future sessions to be planned: Interactions, Metadata, annotations & more…
AI: Summary
The primary thread was the relationship between XR interfaces and knowledge work, examined through two contrasting live demonstrations. Frode Hegland showed his Author system running in a headset, visualizing metadata from the ACM Hypertext 2023 conference — people, papers, locations — with selection, layout, and focus controls operated via a spatial toolbar.
Ken Perlin then demonstrated two prototypes: a hierarchical Wikipedia page viewer navigable with both controllers and hand gestures, and a semantic zoom demo using Robert Frost’s “Stopping by Woods on a Snowy Evening,” where walking closer to an object progressively reveals its title, full text, and analysis.
Sam Brooker introduced the ACM Hypertext 2026 conference (London, September 14–18), themed around “hypertext method” — hypertext understood not just as technology but as a way of thinking: non-linear, networked, dynamic. This framing set the stage for the group’s broader inquiry into how spatial computing might serve scholarly communities.
AI: Insights
Sam Brooker articulated a distinction that proved generative for the entire session: the difference between XR as navigation (moving between two pieces of important information) and XR as a site of knowledge creation (staying in the spatial interface to build meaning through relationships). He drew on Minority Report to illustrate this — all that gestural swiping is ultimately just transport between conventional video clips. The question he posed — whether the kinesthetic, spatial experience is interstitial or constitutive — became a recurring lens through which the group evaluated every demo shown.
Ken Perlin offered a bracing counterpoint to enthusiasm about XR knowledge tools by describing his “harsh test”: would he use any of this instead of his screen, trackpoint, and keyboard if no one were watching? His honest answer — universal failure so far — served not as pessimism but as a methodological standard. He framed his own prototyping as a study in “shared failure,” deliberately keeping AI out of his builds until he understands how bodily interaction with spatial information actually works. His emphasis that “technology is easy, good visual design is hard” recentered the conversation on interaction design rather than capability.
Ken Perlin’s semantic zoom demo — where proximity determines the level of detail revealed — crystallized an idea that several participants latched onto. The notion that an AI could dynamically generate summaries sized to your visual distance from an object in space married the group’s concerns about summarization with a concrete spatial mechanic. Peter Dimitrios called this “progressive disclosure by leaning in” and identified it as genuinely new to AR/XR.
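As a rough sketch of the mechanic described above, proximity-based progressive disclosure can be modelled as a mapping from viewer distance to a level of detail. The distance thresholds, class names, and sample text below are invented for illustration and are not taken from Ken Perlin's actual demo:

```python
# Hypothetical sketch: the closer the viewer stands to an object,
# the more detail is revealed. Thresholds are invented, not from the demo.
from dataclasses import dataclass


@dataclass
class Poem:
    title: str
    text: str
    analysis: str


def detail_for_distance(poem: Poem, distance_m: float) -> str:
    """Return the representation appropriate to the viewer's distance in metres."""
    if distance_m > 4.0:        # far away: title only
        return poem.title
    if distance_m > 1.5:        # mid-range: title plus full text
        return f"{poem.title}\n\n{poem.text}"
    # leaning in: title, text, and analysis
    return f"{poem.title}\n\n{poem.text}\n\n--- Analysis ---\n{poem.analysis}"


frost = Poem(
    title="Stopping by Woods on a Snowy Evening",
    text="Whose woods these are I think I know...",
    analysis="A meditation on obligation and rest.",
)

print(detail_for_distance(frost, 6.0))  # far away: prints the title only
```

An AI-generated variant, as discussed in the session, would replace the fixed `text` and `analysis` strings with summaries generated on demand at a length suited to the current distance.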
Jonathan Finn proposed a topic the group recognized as underexplored: the visual representation of meaning as distinct from text. He noted that while everyone touches on this implicitly, the community has never directly addressed how information extracted from text might be shown without using more text. Sam Brooker responded in the chat by calling text “a kind of latent skeuomorphic hangover,” and Jonathan clarified he was thinking of semantic graphs — objects with labelled arrows — as standard OS-level interface elements.
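A semantic graph of the kind Jonathan Finn described, objects connected by labelled arrows, reduces to a simple data structure. This is a minimal sketch under my own naming, not anything proposed in the session:

```python
# Minimal sketch of a semantic graph: nodes connected by labelled,
# directed arrows. Class and method names are invented for illustration.
from collections import defaultdict


class SemanticGraph:
    def __init__(self):
        # subject -> list of (arrow label, object) pairs
        self.edges = defaultdict(list)

    def relate(self, subject: str, label: str, obj: str) -> None:
        """Add a labelled arrow from subject to obj."""
        self.edges[subject].append((label, obj))

    def arrows_from(self, subject: str):
        """All labelled arrows leaving a node."""
        return self.edges[subject]


g = SemanticGraph()
g.relate("ACM Hypertext 2026", "located-in", "London")
g.relate("ACM Hypertext 2026", "themed-around", "hypertext method")
print(g.arrows_from("ACM Hypertext 2026"))
```

The interface question raised in the session is how to render such arrows visually, as spatial objects rather than as more text, if they were to become standard OS-level elements.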
Sam Brooker raised a tension around canonicity and curation in scholarly communities. Making it easy to surface, link, and layer commentary risks over-amplifying certain voices — particularly those who happen to write single quotable sentences rather than deeply important but hard-to-summarize work. He argued that the friction of manual pursuit has its own epistemic value, and that the goal should be to introduce complexity in a manageable way rather than removing it through summarization.
The chat log surfaced a parallel debate about AI authorship. Peter Dimitrios drew a line between AI as tool (spell-check, calculator) and AI as author, asking “where is ‘my’ voice?” Tom Haymes pushed back, questioning whether we fetishize writing itself without interrogating the function of the paper — if you read an AI-generated text and say “I approve,” does that make it yours? This tension between AI as augmentation and AI as displacement ran throughout both the spoken and written threads.
Frode Hegland and Ken Perlin agreed to exchange data — Frode will send a glossary derived from this meeting’s transcript to Ken, who will attempt to visualize it in his system. The intention is to look at the same meeting data through radically different spatial interfaces, which could reveal what is universal versus idiosyncratic in spatial knowledge representation.
AI: Resources Mentioned
ACM Hypertext 2026 conference — https://ht.acm.org/ht2026/ — introduced by Sam Brooker, taking place in London, September 14–18, 2026, themed “hypertext method”
Future Text Lab meeting page — https://futuretextlab.info/2026/02/19/march-2-2026/ — shared by Frode Hegland
Augmented Text Info — https://www.augmentedtext.info — shared by Frode Hegland
WikiNodes by Ken Perlin — https://cs.nyu.edu/perlin/wikinodes — shared by Ken Perlin, a tool for hierarchical Wikipedia browsing with linked page navigation
Project Cybersyn article (MIT Press) — https://thereader.mitpress.mit.edu/project-cybersyn-chiles-radical-experiment-in-cybernetic-socialism/ — shared by Peter Wasilko
Portland State University economics working paper — https://pdxscholar.library.pdx.edu/econ_workingpapers/67/ — shared by Peter Wasilko
Google Home web interface — https://home.google.com/u/0/ — shared by Brandel Zachernuk
Prompts.chat information gathering prompt — https://prompts.chat/prompts/cmm55hzfp0001l504s6rrl4hb_information-gathering-prompt — shared by Peter Wasilko
Last week’s meeting summary — https://futuretextlab.info/2026/02/19/23-feb-2026/ — shared by Frode Hegland
Gather (virtual workspace for remote teams) — mentioned by Ayaskant Panigrahi as an example of proximity-based audio, suggesting proximity-based controls for connections between text nodes
Minority Report — referenced by Sam Brooker and Frode Hegland as an example of gestural interfaces that are navigational rather than generative
Tron — referenced by Frode Hegland as a metaphor for entering a computer environment, contrasted with AR/XR as liberating knowledge into your hand
Diegetic prototyping in Hollywood — referenced by Ken Perlin, including Iron Man (Tony Stark and Jarvis) and John Wick, examined with his students as inspiration for intuitive spatial interaction with AI
Stuart Card — referenced by Sam Brooker in the chat, on communication between device and user as a stream of symbols
WordNet SynSet — mentioned by Peter Wasilko as a potential augmentation layer for every word
For analysis
Full transcript: https://www.dropbox.com/scl/fi/s9yl8gvrlubcvlsvjxenk/2-March-2026.rtf?rlkey=2ml9thrxlq322v06xqdd3ava0&dl=1
CLEANED transcript: https://www.dropbox.com/scl/fi/oa2w0drsthb1c1ae7g1kd/2-March-2026-Cleaned-Transcript.md?rlkey=v3p3y0b5btbbtpoh9p0z0zefg&dl=1
