Themed discussion with a presentation by Frode Hegland, moderated by Fabien Bénétou.
In XR, neither our foveal vision nor our field of view increases. What increases is our ability to perceive depth based on head movement, to move around three-dimensional knowledge objects, and to move around in knowledge spaces. Affordances change by letting us change and toggle our entire field of view in an instant, changing both the figure (the foreground knowledge object) and the ground (the background: the information structures and their contexts). How to take advantage of this and design interactions that truly augment the user in a new and partly alien environment, what we have learnt, and what issues we see, is the topic of the presentation and the following discussion.
What interactions excite you about XR? What limitations and opportunities do you see? Have you implemented something novel or do you know of any you’d like to share? Do you have questions around what working in XR can be like? Join us to discuss this on Monday.
Frode Hegland, Rob Swigart, Peter Dimitrios, Fabien Bénétou, Peter Wasilko, Jamie Blustein, Brandel Zachernuk, Mark Anderson, Ayaskant Panigrahi, Jimmy Sixdof
AI: Summary
This was a thematic discussion focused on designing textual knowledge interactions for Extended Reality (XR), led by Frode Hegland. The session explored the concept of a “knowledge cube” or “volume” – a 3D spatial environment where information can be organized and manipulated. Hegland presented an overview of lessons learned from their XR research, covering topics like the uniqueness of XR (depth perception, loss of tactile feedback), gesture interactions, spatial navigation, and the evolution of interfaces from early computing to current systems. The core discussion centered around a proposed cube/volume interface where users can place knowledge objects in 3D space, with “smart walls” that can be programmed to represent different organizational principles (concepts, timelines, people, etc.). Participants debated the balance between linear documents and more fluid information representations, discussed practical interaction design challenges, and explored how to make such systems usable while maintaining the benefits of traditional document structures.
AI: Speaker Summary
Frode Hegland served as the primary presenter and moderator, introducing the concept of knowledge cubes/volumes and leading most of the theoretical discussion. He emphasized the importance of augmenting human communication and thinking through better text interfaces, presented the evolution of computing interfaces, and proposed the “smart walls” concept where different sides of a cube could represent different organizational principles. He advocated for maintaining document structures while adding spatial capabilities and stressed the complexity of designing gestural interactions for XR environments.
Rob Swigart contributed philosophical perspectives on the nature of virtual reality and information representation. He distinguished between cubes (bounded) and volumes (unbounded), discussed how virtual reality should not pretend to be reality, and reflected on how human vision systems work by jumping between surfaces and adjusting focus. He also noted the historical flattening of information from three-dimensional cuneiform to two-dimensional text.
Peter Dimitrios primarily engaged in casual conversation about weather and location, contributing to the social atmosphere of the meeting but not substantially to the technical discussions.
Fabien Bénétou provided practical developer perspectives, emphasizing the need for concrete examples and specific use cases rather than abstract discussions. He stressed the importance of defining what functions the knowledge objects should have (splitting, merging, etc.) and advocated for focusing on actionable implementations with specific document types like research papers or Jupyter notebooks.
Peter Wasilko contributed ideas about privacy and security in shared XR spaces, suggesting that different users should see different representations based on authorization levels. He also proposed creative interaction ideas like having cubes with more faces than physically possible through rotation, and suggested using cube corners as functional affordances.
Jamie Blustein brought extensive hypertext research experience to the discussion, emphasizing the importance of navigation landmarks and the concept of “second reading” – how people interact differently with documents they’ve already read. She shared insights from her PhD research on converting scholarly text to hypertext and stressed the human factors challenges of spatial navigation, referencing classic hypertext problems like “lost in hyperspace.”
Brandel Zachernuk discussed privacy and security concerns in multi-user XR environments, shared experiences from Apple.com’s extremely long pages, and introduced the concept of “second reading” from parliamentary procedures. He emphasized the importance of intentional design and warned about the challenges of data ownership and user safety in fluid XR environments.
Mark Anderson challenged the group to move beyond traditional document-centric thinking, advocating for focusing on the underlying information rather than document representations. He emphasized that documents should be just one manifestation of information and encouraged thinking about what becomes possible when freed from linear document constraints. He also highlighted the importance of mutual comprehensibility when designing personalized information spaces.
Ayaskant Panigrahi provided technical insights about current XR hardware limitations, particularly noting that most headsets have fixed focus (everything at infinity) which affects how users can process multiple documents simultaneously. He also suggested exploring gesture-based methods for snapping and organizing windows in 3D space.
Jimmy Sixdof contributed concepts from immersive analytics, referencing research claiming that 15 pieces of information can be communicated simultaneously in 3D space through various visual properties (position, size, color, animation, etc.). He also discussed the concept of “exploded” vs “summarized” views of information objects and referenced Microsoft’s Photosynth technology for multi-scale navigation.
AI: Topics Discussed
What was discussed regarding WebXR? WebXR was mentioned as a platform that could provide users with control over their home space for information, with Hegland hoping it would be open enough for anyone to design their own information environment rather than being controlled by operating system vendors.
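As a rough, hedged illustration of what an "open" information environment means in practice: with the standard WebXR Device API, any web page can request its own immersive space rather than relying on an operating-system vendor. The sketch below is illustrative only and assumes the WebXR TypeScript declarations (e.g. @types/webxr) are available; it is not code from the project.

```typescript
// Minimal sketch: any web page can request its own immersive information
// environment through the WebXR Device API (assumption: @types/webxr present).
async function enterKnowledgeSpace(canvas: HTMLCanvasElement): Promise<XRSession | null> {
  if (!navigator.xr) return null;                         // browser has no WebXR support
  if (!(await navigator.xr.isSessionSupported('immersive-vr'))) return null;

  const session = await navigator.xr.requestSession('immersive-vr', {
    optionalFeatures: ['hand-tracking', 'local-floor'],   // gestures and room-scale reference
  });

  const gl = canvas.getContext('webgl2', { xrCompatible: true });
  if (!gl) return null;
  session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });
  return session;
}
```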
What was discussed regarding gestures? Extensive discussion covered gesture interactions in XR, including system vs. application gesture conflicts, the benefits of finger-to-finger touching for pinching, secondary hand interactions, hand menus, and the potential for learning from sign language research. The group discussed specific gestures like making a fist then pinching to move entire spaces, and explored how gestures could be used for selecting, moving, and organizing knowledge objects in 3D space.
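As a concrete sketch of the finger-to-finger pinch mentioned above: with WebXR hand tracking, a pinch can be approximated by measuring the distance between the thumb-tip and index-finger-tip joints each frame. This is a minimal sketch under assumptions; the 4 cm threshold is illustrative, not a value from the discussion.

```typescript
// Hedged sketch: approximate a finger-to-finger pinch from WebXR hand-tracking
// joints. The 0.04 m threshold is an illustrative assumption.
const PINCH_THRESHOLD_M = 0.04;

function isPinching(hand: XRHand, frame: XRFrame, refSpace: XRReferenceSpace): boolean {
  const thumbJoint = hand.get('thumb-tip');
  const indexJoint = hand.get('index-finger-tip');
  if (!thumbJoint || !indexJoint) return false;

  const thumbPose = frame.getJointPose(thumbJoint, refSpace);
  const indexPose = frame.getJointPose(indexJoint, refSpace);
  if (!thumbPose || !indexPose) return false;

  const a = thumbPose.transform.position;
  const b = indexPose.transform.position;
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z) < PINCH_THRESHOLD_M;
}
```

In principle the same joint data could distinguish the fist-then-pinch (move the whole space) from a plain pinch (move one object), which is where the system-versus-application gesture conflicts discussed above become acute.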
Were other topics discussed? The evolution of computing interfaces from Augment to modern systems, the uniqueness of XR (depth perception, loss of tactile feedback), spatial navigation and the need for landmarks, the concept of “smart walls” for organizing information, document vs. information representation debates, privacy and security in shared XR spaces, hardware limitations of current headsets, and semantic zooming for transitioning between overview and detailed views.
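Semantic zooming came up only as a topic, but a small sketch may make the idea concrete: the representation of a knowledge object changes with viewer distance, so far-away objects collapse to glanceable icons and nearby ones expand to readable text. The thresholds below are assumptions for illustration, not values from the session.

```typescript
// Hedged sketch of semantic zoom: choose a representation by viewer distance.
type Representation = 'icon' | 'summary' | 'fullText';

function representationFor(distanceMetres: number): Representation {
  if (distanceMetres > 5) return 'icon';      // far away: a glanceable marker
  if (distanceMetres > 1.5) return 'summary'; // mid-range: title and abstract
  return 'fullText';                          // within reach: the readable document
}
```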
Were there any interesting anecdotes? Jamie Blustein shared frustrations with ethics boards blocking her research on annotation behavior across multiple readings. Brandel Zachernuk described Apple.com’s 95,000-pixel-long pages and how usage data revealed people weren’t actually reading the massive amounts of content. Rob Swigart referenced a PBS documentary about DNA discovery where Rosalind Franklin recognized the beauty of the double helix structure.
Did anyone seem to change their position during the call? Frode Hegland noted that a week ago he would have agreed with Jamie about not being able to have everything accessible at once, but now completely disagreed, believing that XR should make all human knowledge feel immediately available. This suggests his thinking evolved recently on the scope and ambition of XR information systems.
What were the major outcomes of this session? The group established a framework for thinking about 3D knowledge organization using cubes/volumes with programmable “smart walls,” identified key interaction design challenges, agreed to focus on the Future of Text book series as a concrete use case, and planned to continue developing these concepts with practical implementation goals.
AI: Concepts Introduced
Knowledge Cube/Volume – Defined by Frode Hegland as a 3D spatial container for organizing information that can be tiny or room-sized, with programmable sides that can represent different organizational principles.
Smart Walls – Introduced by Frode Hegland as the concept that each side of the knowledge cube can be programmed to represent different things (concepts, timelines, people, computational relationships like references).
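One way to read the cube-with-smart-walls idea in data terms is sketched below: each wall carries an organizing principle, and the same set of knowledge objects is grouped differently depending on which wall the user turns to. Every name in the sketch is illustrative; it does not describe an existing implementation.

```typescript
// Hedged sketch of a knowledge cube whose six walls are programmed with
// different organizing principles. All names are illustrative assumptions.
type OrganizingPrinciple = 'concepts' | 'timeline' | 'people' | 'references';

interface KnowledgeObject {
  id: string;
  title: string;
  authors: string[];
  date: Date;
  concepts: string[];
  citedIds: string[];          // outgoing references, for a computational wall
}

interface SmartWall {
  face: 'north' | 'south' | 'east' | 'west' | 'floor' | 'ceiling';
  principle: OrganizingPrinciple;
}

interface KnowledgeCube {
  sizeMetres: number;          // "tiny or room-sized"
  walls: SmartWall[];          // six programmable sides
  objects: KnowledgeObject[];
}

// Group the cube's objects for whichever wall the user is currently facing.
function layoutForWall(cube: KnowledgeCube, wall: SmartWall): Map<string, KnowledgeObject[]> {
  const groups = new Map<string, KnowledgeObject[]>();
  for (const obj of cube.objects) {
    const keys =
      wall.principle === 'timeline' ? [String(obj.date.getFullYear())] :
      wall.principle === 'people'   ? obj.authors :
      wall.principle === 'concepts' ? obj.concepts :
      obj.citedIds;                                      // 'references'
    for (const key of keys) {
      const bucket = groups.get(key) ?? [];
      bucket.push(obj);
      groups.set(key, bucket);
    }
  }
  return groups;
}
```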
Second Reading – Discussed by Jamie Blustein and Brandel Zachernuk as the different way people interact with documents they’ve already read, referencing both hypertext research and parliamentary procedures.
Information Shape – Referenced by Jamie Blustein from Andrew Dillon’s work, describing how people develop a mental map of document structure after reading.
Display Reserve – Mentioned by Brandel Zachernuk from Halifax Maritime Museum, referring to collections that are accessible but not prominently displayed, representing a middle ground between hidden and featured content.
AI: People Mentioned
Ted Nelson (mentioned by Frode Hegland in context of 3D visualization work and transclusion concepts), Doug Engelbart (mentioned by Frode Hegland regarding concerns about unknown audience knowledge), Vint Cerf (mentioned by Frode Hegland regarding a presentation with spatial data visualization), Keith (mentioned by Frode Hegland as a friend who suggested spatial interaction methods), Adam (mentioned by Frode Hegland regarding secondary hand interactions), Dene/Danny (mentioned by Frode Hegland as project collaborator), Emily (mentioned by Frode Hegland as his wife), Edgar (mentioned by Frode Hegland as his son), Sue Dumais (mentioned by Jamie Blustein regarding semantic indexing research), Andrew Dillon (mentioned by Jamie Blustein regarding information shape concept), Jakob Nielsen (mentioned by Jamie Blustein in context of early hypertext conference), Doug Hofstadter (mentioned by Jamie Blustein regarding OCR predictions), Blaise Agüera y Arcas (mentioned by Jimmy Sixdof regarding Microsoft Photosynth technology), Rosalind Franklin (mentioned by Rob Swigart in DNA discovery anecdote)
AI: Product or Company Names Mentioned
Apple (mentioned by multiple speakers – Brandel Zachernuk regarding apple.com pages, Frode Hegland regarding Apple Maps), Google (mentioned by Frode Hegland regarding Google Maps, Jimmy Sixdof regarding Blaise Agüera y Arcas’s current employment), Microsoft (mentioned by Jimmy Sixdof regarding Photosynth technology, Frode Hegland regarding Word as unintentional), Mac/Macintosh (mentioned by Frode Hegland in interface evolution, Jamie Blustein regarding navigation preferences), Zoom (mentioned by Frode Hegland regarding interface changes), BBC (mentioned by Frode Hegland regarding radio show), Hornets (mentioned by Frode Hegland regarding Charlotte sports), Logitech (mentioned by Frode Hegland regarding upcoming XR pencil), TSMC, AMD, Nvidia (mentioned by Brandel Zachernuk in context of semiconductor restrictions), TechCrunch (mentioned by Brandel Zachernuk as secondary news source), Jupyter Notebook (mentioned by Mark Anderson and Fabien Bénétou as document example), Scrivener, Final Draft, Quip, Reader (mentioned by Brandel Zachernuk as intentional apps vs Word), Yahoo, Facebook (mentioned by Jamie Blustein as portal examples), Flow Immersive (mentioned by Jimmy Sixdof as immersive analytics company), Wired Magazine (mentioned by Jamie Blustein regarding early confusing design)
AI: Other
The meeting revealed an interesting tension between preserving familiar document structures and embracing radically new spatial information organization. There was a notable 10th anniversary reference to Hamilton (the musical), suggesting this research group has been meeting for a significant period. The discussion highlighted the challenge of designing for an unknown future – creating interaction paradigms for technologies and use cases that don’t yet exist at scale. The group appears to be working on actual implementation rather than pure theory, with plans to present their Future of Text book series in XR as a concrete deliverable.
Chat Log
16:33:46 From fabien : I can hear just fine but can’t easily text nor talk
16:33:57 From Mark Anderson : Sound fine, fixing camera
16:43:23 From jamie : I suspect that Ted’s point about not wanting XR is that he has his own way of perceiving the virtual space and the translation from XR overlays to what he is used to is too much.
There has been research suggesting that people with higher levels of ‘spatial ability’ do better with hypertext, but when there are interface elements that help people with lower levels, performance of people with higher levels suffer.
16:50:16 From Frode Hegland : HI Brandel
16:51:54 From Peter Dimitrios : how to have the XR equivalent of pushd / popd ?
16:52:21 From Ayaskant Panigrahi : https://ieeexplore.ieee.org/abstract/document/913781
TULIP menu – hand menu work
Love the idea of Mode Switching using the non-dominant hand
16:54:52 From Peter Wasilko : Replying to “https://ieeexplore.i…”
Use the non-dominant hand for Quasi-Modes as envisioned by Jef Raskin.
16:56:00 From jamie : Reacted to “Use the non-dominant…” with 👍
16:56:34 From Ayaskant Panigrahi : Icons – https://github.com/immersive-web/spatial-favicons Interesting connection
16:57:07 From Peter Dimitrios : Reacted to “Use the non-dominant…” with 👍
16:59:00 From Ayaskant Panigrahi : Some work related to time axis on maps from my lab VVISE
Comparing and Combining Virtual Hand and Virtual Ray Pointer Interactions for Data Manipulation in Immersive Analytics | VVISE Lab
Reimagining TaxiVis through an Immersive Space-Time Cube Metaphor and Reflecting on Potential Benefits of Immersive Analytics for Urban Data Exploration | VVISE Lab
17:00:09 From Jimmy Sixdof : Reacted to “Some work related …” with 🔬
17:04:43 From Peter Dimitrios : Thinking about how things like SecondLife evolved with ‘islands’ as scopes / loci / workspaces. Like visiting a website
17:04:52 From Frode Hegland : ‘Cyber-deck’ I like that
17:08:43 From Jimmy Sixdof : Flow Immersive (flow.gl) has a multi-spectator view mode and worked through several problems to get speaker/viewer controls right and who controls the view, perspective, scale etc. You can check their work
17:08:58 From Frode Hegland : Reacted to “Flow Immersive (flow…” with 👍
17:09:28 From Peter Wasilko : This story might be of interest: https://blooloop.com/immersive/news/felix-paul-break-ground-interstellar-arc-area15/ it sounds like they are using VR in a multi-user shared hyper-reality experience to simulate a cryogenic voyage to a nearby star system.
17:10:19 From Frode Hegland : ‘Document’ is also a verb of course
17:10:39 From Jimmy Sixdof : Reacted to “‘Document’ is al…” with 😂
17:10:59 From Ayaskant Panigrahi : 2D Document as one view, essence of info at the center 🤩
17:11:59 From jamie : This is fascinating but I don’t understand what it would really mean. What is the information without the representation? (Please be gentle with me)
17:12:02 From Frode Hegland : It’s also an issue around addressability.
17:13:08 From Frode Hegland : Floor!
17:13:13 From Frode Hegland : We have no floor, it’s multi spatial!
17:13:15 From Frode Hegland : 🙂
17:14:11 From Frode Hegland : Replying to “This is fascinating …”
Information must have presentation, I am all about that
17:15:53 From jamie : Rob: 👍
17:16:09 From Frode Hegland : Jimmy, were you ready to go?
17:16:55 From Ayaskant Panigrahi : Makes me think of throughput
17:18:06 From Rob Swigart : I would say all communication is a form of translation.
17:18:13 From Frode Hegland : Reacted to “I would say all comm…” with 👍
17:18:15 From jamie : Sorry @Jimmy Sixdof , when Frode asked you to speak I thought it was me who was being called upon
17:18:25 From Frode Hegland : Replying to “I would say all comm…”
Absolutely.
17:18:31 From Frode Hegland : Replying to “Sorry @Jimmy Sixdof …”
It’s all good
17:18:45 From Ayaskant Panigrahi : Reacted to “I would say all comm…” with 👍
17:20:06 From Ayaskant Panigrahi : Apple accessibility magnifier https://www.uploadvr.com/apple-vision-pro-passthrough-zoom-live-recognition-accessibility-coming/
17:20:23 From Frode Hegland : Linear and spatial grammar
17:22:52 From Ayaskant Panigrahi : Research documents – read PDF https://sioyek.info/ , some constraints of consuming PDFs in eink devices – this framing could help in thinking about XR. Affordances
17:23:04 From fabien : Reacted to “Research documents…” with 👍
17:23:29 From Frode Hegland : Reacted to “Research documents -…” with 🔥
17:24:01 From Peter Dimitrios : “document” vs. “wiki” – document has a presentation order, vs. just links to pages that user has to navigate
17:24:51 From Peter Wasilko : Reacted to “Research documents -…” with 🔥
17:25:39 From Peter Wasilko : Replying to “Research documents -…”
Available through Homebrew!
17:25:41 From Peter Dimitrios : folders / sub-cubes
17:25:41 From fabien : HTML vs CSS or JSON vs SPA, or responsive API?
17:26:03 From Peter Wasilko : Replying to “Research documents -…”
brew install sioyek
17:26:24 From Peter Dimitrios : Jupyter notebooks are ‘live’ but still have a linear flow because that is the way people ‘read’ them
17:27:29 From Jimmy Sixdof : this is what I was mentioning and apparently it’s 18 dimensions lol (and patented) https://www.immersionanalytics.com/products-technology/dimensional-engine/
17:27:33 From Peter Dimitrios : the flat sides of cube/polygon are like whiteboards in real world – place things on a plane because it is familiar
17:29:12 From Ayaskant Panigrahi : Pull out snippets from the flat document, then see that in different views like images, icons – it can help us look out of the box
17:29:56 From fabien : Geometric primitive
17:30:16 From fabien : European Parliament
17:30:20 From Ayaskant Panigrahi : Replying to “Pull out snippets fr…”
Like sticky notes and highlights, but see different views
17:30:52 From Peter Dimitrios : IMHO my primary use of ARXR today is to have many more flat screens that I can ‘walk’ or ‘zoom’ around quickly and some hyperlinks to navigate between them more easily
17:31:12 From Frode Hegland : Reacted to “IMHO my primary use …” with 🔥
17:31:46 From Peter Dimitrios : ‘screen’ can be whiteboard / powerpoint or CLI or browser with tabs or VSCode editor with something live
17:32:25 From Frode Hegland : Reacted to “Geometric primitive” with 🔥
17:32:27 From Frode Hegland : Reacted to “European Parliament” with 👍
17:32:35 From Frode Hegland : Replying to “European Parliament”
Thanks for the correction
17:32:40 From Peter Wasilko : https://onemillionscreenshots.com/?q=random
17:33:59 From Mark Anderson : Second reading is definitely useful in peer review (even if it is more effort).
17:34:12 From Rob Swigart : VR should free us from the flat document or whiteboard (in a cubic meeting room), which constrained free association and thinking (somewhat compensated for with post-its that could be moved around, but only in 2 dimensions). How can we expand consciousness into the third (or more) dimension?
17:34:44 From Jimmy Sixdof : my perspective is floating 2d screens is the horseless carriage phase of XR where the existing technology being replaced shapes the new thing because that is the known usage method
17:35:20 From Frode Hegland : Replying to “my perspective is fl…”
(We still have carriages though…) NOT that I think that’s all we should aim for!
17:35:55 From Peter Dimitrios : Baby steps – also, the comments above that some more ‘spatial’ thinkers (e.g. Ted Nelson) can feel thwarted by projections into real visual space.
17:38:14 From Ayaskant Panigrahi : Glanceability if you will
17:38:16 From Peter Dimitrios : Viewing large graphs as dots in 3-space can be useful to help intuition but at some point you highlight certain flows and clusters that become more ‘document’ like as you explain or think about things.
17:38:20 From Frode Hegland : Reacted to “Glanceability if you…” with 👍
17:39:13 From Peter Dimitrios : So I think in many cases, things get ‘flattened’ by the ways we want to think or focus on
17:39:52 From Frode Hegland : Shapes and knowledge – Active Sides…
17:40:58 From Peter Dimitrios : Sidecar documents – like illuminated manuscript with commentaries?
17:41:09 From Ayaskant Panigrahi : Infinite canvas like Miro – extend to XR – scale to different sizes to see different views. When zoomed out, we would see some different representation like an icon
17:41:44 From jamie : There were interesting experiments with tumbler-type visualisations in information retrieval in the early 1990s.
17:42:51 From Frode Hegland : (Question, who is up for a casual future of text social in London late September to continue this?)
17:43:38 From jamie : I’ll be teaching 3 days a week in September 🙁
I won’t be teaching for the rest of the year
17:43:45 From Frode Hegland : Reacted to “I’ll be teaching 3 d…” with ❤️
17:46:16 From Frode Hegland : Photosynthesis is amazing
17:48:06 From Ayaskant Panigrahi : Replying to “Photosynthesis is am…”
A New Spin for Photosynth – Microsoft Research
Photosynth returns as part of Microsoft Pix alongside new ‘Comix’ feature | Windows Central
Some links
17:49:14 From Jimmy Sixdof : Reacted to “A New Spin for Pho…” with 👏
17:49:35 From fabien : Peter: on sidecars I recently started on sidecar “filters” (as used in the Sloan project) namely to bring any (well known) document to 3D, thus XR affordance
17:50:08 From Peter Wasilko : Reacted to “Peter: on sidecars I…” with 👍
17:50:16 From fabien : I like the safety pool analogy
17:52:14 From Frode Hegland : So, we have a volume presented as a cube, as a human sized sculpture in front of you… let’s say it can display any thing, any object of knowledge. What would you like to have in there and how would you like to be able to interact with it? (Next week is how we want the book in XR).
17:54:10 From Peter Wasilko : LLM’s are really bad at identifying NPM packages, they will just latch onto a name that sounds plausible and hallucinate a whole API with examples when the actual package is utterly unrelated.
17:54:13 From Ayaskant Panigrahi : Spatial hyperlinks
17:55:20 From Mark Anderson : Buildings in the shared space is actually a deliberate nod to Andreas Dieberger’s ‘Information City’ concept from the mid-90s. See http://homepage.mac.com/juggle5/WORK/publications/thesis/ThesisPDF.html
17:55:29 From Peter Wasilko : I love when a series of books has spine art that slowly resolves into a picture when all the volumes are shelved in order.
17:55:46 From Frode Hegland : Reacted to “I love when a series…” with ❤️
17:56:09 From Rob Swigart : Walk into a document into a new environment with openings – doors, portals? – to different links?
17:56:27 From Frode Hegland : Replying to “Walk into a document…”
Exactly the questions to ask!
17:56:53 From Jimmy Sixdof : this but in webXR + hyperlinks: https://www.meta.com/en-gb/experiences/livro/6285574701462946/
17:57:14 From Mark Anderson : What facets of the FoT books articles do we have that aren’t obviously derived from the printed page? Exploring those might add to the status quo experience of a textual article.
17:57:17 From Frode Hegland : Replying to “this but in webXR + …”
Have a screenshot? I can’t load
17:58:28 From Mark Anderson : Available != immediately in front of us. It’s sort of a LoD problem.
17:59:37 From Mark Anderson : Tapestries have an interesting transclusional aspect combined with contextual presentation.
17:59:38 From fabien : Reacted to “What facets of the…” with 👀
18:00:12 From Ayaskant Panigrahi : Linking this LoD concept with infinite canvas /scaling / zooming – but how can we get a view like an icon when we zoom out?
18:01:06 From Mark Anderson : Finding the (un)interesting is a potential experiment for summarisation. Ideally we want to find the things we don’t yet know are interesting, lest we otherwise engage in confirmation bias, looking for the things we already know we will like.
18:01:11 From Brandel Zachernuk : I have to drop for a meeting, this has been excellent – thank you Frode and everyone!
18:02:28 From Frode Hegland : Reacted to “I have to drop for a…” with ❤️
18:02:52 From Peter Wasilko : Replying to “Tapestries have an i…”
I am trying to talk Bob Stein into making traversal paths first Class Objects in Tapestries so you can push your current context onto an implicit stack to pursue a side trail and then pop out to resume your previous path.
18:04:00 From fabien : For sure
18:04:19 From fabien : Trying genuinely new things is scary
18:05:26 From Mark Anderson : Do we have a word/term list of the existing corpus of FoT articles so we can experiment with/explore associations based on fact rather than imagination
18:07:13 From Peter Wasilko : Out of tea now.
18:07:49 From Jimmy Sixdof : Replying to “this but in webXR …”
just an immersive comic reader but they get some mechanics right
18:07:59 From fabien : Take care all