- The corpus we will work on is the set of articles authored for The Future of Text volume 6, written so they can be read in a traditionally bound edition of the book but primarily in XR as a richly interactive spatial volume.
- The general academic is our target user.
- We are focusing on VR, with AR as a stretch goal if the hardware can support it (as of the start of 2025, passthrough is not available in WebXR on the Vision Pro; see the feature-detection sketch after this list).
- The interaction we are primarily focused on is the spatial arrangement of knowledge, with typing/writing linear documents as a second priority.
- The visual style will primarily be minimalist, to maximize legibility and information space, with user options for decorations, including for the initial view.
- Over the last year we have found that Reading & Authoring are intricately connected, so we will work on them in concert.
- What authoring interactions should be possible when reading a document, and what environment should there be for them (annotations etc.)?
- How should we indicate such possible interactions when reading (context menu, main menu etc.)?
- Can we design ‘far out’ solutions, at least as concepts while we work on the main project?
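Since AR remains a stretch goal that depends on headset and browser support, it can be feature-detected at runtime rather than assumed. Below is a minimal sketch using the standard WebXR Device API call isSessionSupported(); the function name pickSessionMode is our own placeholder, and a real project would likely use WebXR type definitions (e.g. @types/webxr) instead of the `any` cast used here.

```ts
// Minimal sketch: prefer passthrough AR where the browser supports it,
// otherwise fall back to fully immersive VR. 'immersive-ar' and
// 'immersive-vr' are the session modes defined by the WebXR spec.
async function pickSessionMode(): Promise<'immersive-ar' | 'immersive-vr' | null> {
  const xr = (navigator as any).xr; // WebXR Device API; undefined in non-XR browsers
  if (!xr) return null;

  if (await xr.isSessionSupported('immersive-ar')) return 'immersive-ar'; // passthrough AR
  if (await xr.isSessionSupported('immersive-vr')) return 'immersive-vr'; // VR only
  return null; // no immersive session available
}
```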
XR Preferences
- Are we more interested in a seated or a walking experience, and in VR or AR? Both.
- We are interested in utilizing large spaces as well as seated individual spaces and the transition between them.
- How can we go beyond the expected spaces? For example, a ‘card’ having more than two sides.
The Experience of Being in XR
- What are we interested in exploring? HUD, hands, head etc.:
- Might a HUD be useful for access to frequently used commands? (If so, should it be visible at all times or summoned, and if summoned, how?)
- Can the hand menu be updated to allow for pointed touch?
- How might we distribute commands or storage on the body? (Wrist, chest, back etc.)
- Are there other aspects of being in XR we would like to explore?
When Thinking/Mapping
- What are the basic components the user should have to hand to work with, and how should the user be able to lay them out both manually and computationally? How should they connect to further Maps, and how should transitions be initiated and displayed (e.g. via buttons, and by changing vs. adding spaces)? (A data-structure sketch follows this list.)
- How important do we feel memory palaces will be?
- How should a user be able to draw components from published XR and traditional documents into their own maps?
- How should the user be able to hide/reveal components, follow connections, create connections and annotations, manage layouts, etc.?
- Are we interested in working with timelines? If so, to what end?
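To make the question of basic components and their connections more concrete, here is one possible, purely illustrative data shape for a Map; every name (MapNode, MapConnection, KnowledgeMap) and every field is an assumption for discussion, not settled project terminology.

```ts
// Illustrative sketch only: one way Map components, connections, and links to
// further Maps could be represented. All names and field choices are assumptions.
interface MapNode {
  id: string;
  kind: 'text' | 'citation' | 'excerpt' | 'image';   // assumed component kinds
  content: string;
  position: { x: number; y: number; z: number };     // manual spatial layout
  sourceDocument?: string;                            // provenance when pulled from a published document
}

interface MapConnection {
  id: string;
  from: string;       // MapNode id
  to: string;         // MapNode id, or the id of another Map
  label?: string;     // optional annotation on the connection itself
}

interface KnowledgeMap {
  id: string;
  title: string;
  nodes: MapNode[];
  connections: MapConnection[];
  linkedMaps: string[];   // further Maps this one transitions to
}
```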
When Composing Documents/Editing
- What should document composition/editing look like, and what affordances should we aim for? (Should it be based on paragraphs, sentences, or concepts? How and where should components be located in space while working?)
- How can information be structured? (What interactions and metadata should be available for what reason?)
- How might performance be included?
- How should we indicate possible interactions when composing with ‘floating’ units of elements on a page/screen?
- What shapes should the text be able to take, in what contexts, and for what uses?
- When the text is in connected rectangles, we should consider how it should look inside the rectangles and how the rectangles should be connected.
- When not using rectangles, we should consider what shapes to use and for what purposes.
In order to have environments in which to do the above, which workspaces should we develop, and how should the user transition between them? (A sketch of possible workspace identifiers and transitions follows the list below.)
- Library of the user’s academic papers, links to web resources, XR experiences and resources, as well as their own documents and Maps. (Categories for the different levels, and the terminology, remain to be defined.)
- Reading a single document deeply, reading many documents comparatively, reading groups of connected documents and following connections/citations from any of these
- Writing/typing through focused typing, pasting, or speaking: writing in the basic sense of getting thoughts into glyphs
- Authoring/Composing a document through the use of visually available materials such as sources and Maps to sculpt Volumes
- Thinking/Mapping space, similar to the previous, but connected to Maps outside of XR rather than to documents
- Meeting to read or discuss documents (do we even have capacity for this?)
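As a starting point for discussing transitions, the workspaces listed above could be named explicitly and each transition between them recorded; the identifiers and fields below are placeholders only, not decided behaviour.

```ts
// Hypothetical sketch: named workspaces and a transition record describing
// what travels with the user. Names and trigger options are assumptions.
type Workspace =
  | 'library'
  | 'reading'
  | 'writing'
  | 'authoring'
  | 'mapping'
  | 'meeting';

interface WorkspaceTransition {
  from: Workspace;
  to: Workspace;
  carried: string[];                          // ids of documents, Maps, or annotations brought along
  trigger: 'button' | 'gesture' | 'voice';    // assumed ways a transition might be initiated
}
```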
In Order to Support the Above, What Enabling Infrastructures Do We Need to Develop?
- How can data be (from the end user’s perspective) easily transferred between headset and desktop? (Continues our JSON journey; a sketch of a possible transfer format follows this list.)
- What interactions from standard digital workflow need to be carried across for our purposes? (Such as copy and paste, spell check etc.?)
- How much code and interaction design from year 1 should we reuse, and how much do we need to re-implement to have a reading environment?
- What tech do we have at the start of year 2? (Primarily a question for Fabien.)
- Should we drop PDF in XR altogether, on the premise that PDFs will be converted?
- What categories of information should we store and transmit? (Document, annotations, layouts, cuttings, etc.)
- Are we interested in AI with XR? (If so, for analysis, summaries, views, additional data, layouts etc.)
- An annotation and notes system for active reading, so that such components are available when authoring.
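As a concrete basis for the JSON and data-category questions above, here is a rough sketch of what a single transfer payload between headset and desktop could contain; all field names are assumptions for discussion, not a defined format.

```ts
// Rough sketch of a JSON envelope covering the categories listed above
// (document, annotations, layouts, cuttings). Every field name is an assumption.
interface TransferPayload {
  version: string;   // schema version, so headset and desktop clients can evolve independently
  document?: { id: string; title: string; body: string };
  annotations?: { id: string; target: string; note: string }[];
  layouts?: { id: string; workspace: string; positions: Record<string, [number, number, number]> }[];
  cuttings?: { id: string; sourceDocument: string; excerpt: string }[];
}

// JSON.stringify keeps the transfer human-inspectable and diffable on the desktop side.
const example: TransferPayload = { version: '0.1' };
const wire = JSON.stringify(example);
```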