Knowledge Sculpture in XR continues the work from the Annotated Bibliography, as demonstrated at the ACM Hypertext ’25 Conference.
Scenarios
There are many scenarios we’d like to support. Since one of the key aspects of working in XR is that the user should be able to change their entire view instantly, we are going to develop several simple scenarios:
- Grey space, as we have now, for the Annotated Bibliography.
- Cube-Based Exploration
- Knowledge exploration space for working through large numbers of documents (papers and others), which can be flexibly changed to different appearances and layouts.
- Library, made from geometry to look like a fine old library, where data is stored on shelves.
- Timeline, where the user has access to ways to find papers and their own notes.
- Writer’s room, where a user is composing an article or book, with multiple views of chapters, notes, concepts, sources and so on.
Interactions
All the interactions listed below are initial proposals, to be discussed with the community, with the expectation that any and all of them may change.
Select & Move
The user should be able to move single elements or selections of elements (selected by category or through multiple manual selections), and to change their layout configuration, appearance and annotation status. The user should further be able to hide elements (to be un-hidden through a menu system) and to fold/collapse selections of elements.
The user will be able to pinch and move elements using Fabien Positioning Billboarding (as opposed to traditional Continuous Billboarding). There are two ways the user can select elements in the knowledge space: by individually pinching, or by selecting by type using the Mana Menu. Actions can be performed through the Mana Menu plus the Ring Menu, or by selecting an element and using the Ring Menu (a sketch of the selection logic follows the list below):
- Multiple Manual Selections. The user can manually select multiple elements by pinching ‘in the air’ with their non-dominant hand and holding that pinch while pinching (and releasing) individual elements with their dominant hand. Each selection is confirmed visually. When finished, the user un-pinches their non-dominant hand and can then either move the full selection with a dominant-hand pinch and move, or issue a layout or view command which will affect all selected elements.
- Selection by type. The user can select by type using the Mana (hand) Menu.
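To make the two-handed gesture concrete, here is a minimal TypeScript sketch of the selection state it implies. The engine hooks (onPinchStart/onPinchEnd) and the SceneElement type are hypothetical placeholders; a real implementation would wire them to the headset’s hand-tracking events.

```ts
// Minimal sketch of the two-handed multi-select gesture (assumptions:
// the host engine reports pinch start/end per hand and hit-tested targets).

type Hand = "dominant" | "nonDominant";

interface SceneElement {
  id: string;
  highlight(on: boolean): void;
}

class SelectionState {
  private multiSelect = false;              // non-dominant pinch held?
  private selection = new Set<SceneElement>();

  onPinchStart(hand: Hand, target?: SceneElement) {
    if (hand === "nonDominant" && !target) {
      this.multiSelect = true;              // empty-air pinch opens multi-select
      return;
    }
    if (hand === "dominant" && target) {
      if (this.multiSelect) {
        // While the non-dominant pinch is held, each pinched element
        // toggles in or out of the selection, with visual feedback.
        if (this.selection.delete(target)) target.highlight(false);
        else { this.selection.add(target); target.highlight(true); }
      } else {
        // Plain single selection.
        this.selection.forEach(e => e.highlight(false));
        this.selection = new Set([target]);
        target.highlight(true);
      }
    }
  }

  onPinchEnd(hand: Hand) {
    // Releasing the non-dominant pinch confirms the selection; the user
    // can then drag the whole group or issue a layout/view command.
    if (hand === "nonDominant") this.multiSelect = false;
  }

  current(): SceneElement[] { return [...this.selection]; }
}
```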
Mana Menu
The Mana Menu appears on the user’s non-dominant hand when the index finger of their dominant hand approaches. The Mana Menu has two main functions: allowing the user to select elements by type, and executing commands on selected elements. The finger assignments are as follows (a sketch of the mapping follows the list):
- ‘Thumb’ Selection produces a Ring Menu for selecting by category:
- Person, place, concept, location, institution, event, etc.
- Annotation status (important, hidden, to be reviewed, etc.)
- ‘Index Finger’ Action produces a Ring Menu for actions:
- Annotate (important, hidden, to be reviewed, etc.)
- Hide
- Change display (size, shape, color)
- Layout (center vertical, horizontal, middle, etc.)
- Middle and ring fingers are un-assigned, to reduce complexity.
- ‘Little Finger’ Undo, to de-select (after a selection) or to undo (after an action or after following a link).
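As a rough illustration, the finger assignments above could be expressed as a simple dispatch table. This is a sketch only; openRingMenu and undoOrDeselect are hypothetical stubs, and fingertip-proximity detection is assumed to come from the hand-tracking layer.

```ts
// Hypothetical finger-to-function mapping for the Mana Menu.

type Finger = "thumb" | "index" | "middle" | "ring" | "little";

const MANA_MENU: Partial<Record<Finger, () => void>> = {
  thumb: () =>
    openRingMenu("select-by-category", [
      "person", "place", "concept", "location", "institution", "event",
      "annotation-status",
    ]),
  index: () =>
    openRingMenu("actions", ["annotate", "hide", "change-display", "layout"]),
  little: () => undoOrDeselect(),
  // middle and ring are intentionally unassigned to reduce complexity
};

function onFingerTouched(finger: Finger) {
  MANA_MENU[finger]?.();
}

// Stubs so the sketch is self-contained.
function openRingMenu(kind: string, items: string[]): void {
  console.log(`ring menu (${kind}):`, items);
}
function undoOrDeselect(): void {
  console.log("undo last action, or de-select current selection");
}
```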
Ring Menu
The Ring Menu is produced when the user selects an element, or text within an element (such as text in a document), and double-pinches. The user can also spawn the Ring Menu directly, without the Mana Menu, by selecting one or more elements and double-tapping (or by holding the secondary hand flat and open in a specific orientation); this produces the same Ring Menu as the index-finger Action does.
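For illustration, here is one way the Ring Menu items might be laid out around the selection point; the radius and the facing-plane convention are assumptions, not settled design values.

```ts
// Sketch: distribute N Ring Menu items evenly on a circle around a point.

type Vec3 = { x: number; y: number; z: number };

function ringMenuPositions(center: Vec3, count: number, radius = 0.12): Vec3[] {
  const positions: Vec3[] = [];
  for (let i = 0; i < count; i++) {
    // Even spacing, starting at the top of the ring.
    const angle = (i / count) * 2 * Math.PI - Math.PI / 2;
    positions.push({
      x: center.x + radius * Math.cos(angle),
      y: center.y - radius * Math.sin(angle),
      z: center.z, // kept in a plane facing the user for readability
    });
  }
  return positions;
}

// e.g. four action items around a selection at roughly eye height:
// ringMenuPositions({ x: 0, y: 1.5, z: -0.5 }, 4)
```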
Background
The user can access options for the environment from the wrist menu on their dominant hand, to change the background 3D model, color and so on. This is also where the user can choose how to export the Knowledge Sculpture.
Resulting ‘Volume’
The result should be to allow the author to publish their knowledge sculpture in a traditional, flat and frozen document, including PDF, as an appendix mentioned in the body text, much as a citation mentions a reference.
The subsequent reader can then open this document in the XR environment, and when they reach the section which mentions the sculpture they can unfurl it; it will appear with its original spatial relationships, complete with all metadata and interactions.
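The round trip this implies could look something like the sketch below: the sculpture is frozen into a JSON appendix on export, and the XR reader restores elements, links and metadata from it. The field names here are illustrative assumptions; the project’s actual export is the Visual-Meta-style JSON used in the links below.

```ts
// Sketch of freezing a sculpture into a document appendix and unfurling it.
// All field names are assumptions for illustration.

interface SculptureElement {
  id: string;
  text: string;
  position: { x: number; y: number; z: number };
  metadata?: Record<string, string>; // e.g. category, annotation status
}

interface SculptureAppendix {
  version: 1;
  elements: SculptureElement[];
  links: Array<{ from: string; to: string }>; // preserved connections
}

// Author side: freeze the live scene into the appendix.
function exportSculpture(
  elements: SculptureElement[],
  links: Array<{ from: string; to: string }>
): string {
  const appendix: SculptureAppendix = { version: 1, elements, links };
  return JSON.stringify(appendix, null, 2);
}

// Reader side: unfurl the appendix back into the XR space,
// with the original spatial relationships intact.
function importSculpture(json: string): SculptureAppendix {
  const appendix = JSON.parse(json) as SculptureAppendix;
  if (appendix.version !== 1) throw new Error("unsupported appendix version");
  return appendix;
}
```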
Current State
You can try a basic Knowledge Sculpture, uploaded from Author to XR, in your headset here:
A highly connected ‘mess’: https://companion.benetou.fr/index.html?username=q2_visualmetaexport_map_via_wordpress&showfile=https://futuretextlab.info/wp-content/uploads/Frode-dynamicviewvisualmetaexport.jsons_-56.zip
A more centralized mess: https://companion.benetou.fr/index.html?username=q2_visualmetaexport_map_via_wordpress&showfile=https://futuretextlab.info/wp-content/uploads/Frode-dynamicviewvisualmetaexport.jsons_-57.zip
Small concepts and one document link: https://companion.benetou.fr/index.html?username=q2_visualmetaexport_map_via_wordpress&showfile=https://futuretextlab.info/wp-content/uploads/Frode-dynamicviewvisualmetaexport.jsons_-58.zip
Large mess with little metadata: https://companion.benetou.fr/index.html?username=q2_visualmetaexport_map_via_wordpress&showfile=https://futuretextlab.info/wp-content/uploads/Frode-dynamicviewvisualmetaexport.jsons_-59.zip
Shaped, with some metadata: https://companion.benetou.fr/index.html?username=q2_visualmetaexport_map_via_wordpress&showfile=https://futuretextlab.info/wp-content/uploads/Frode-dynamicviewvisualmetaexport.jsons_-60.zip
Originally Column Layout: https://companion.benetou.fr/index.html?username=q2_visualmetaexport_map_via_wordpress&showfile=https://futuretextlab.info/wp-content/uploads/Frode-dynamicviewvisualmetaexport.jsons_-61.zip
Document Writing Experiment: https://companion.benetou.fr/index.html?username=q2_visualmetaexport_map_via_wordpress&showfile=https://futuretextlab.info/wp-content/uploads/Frode-dynamicviewvisualmetaexport.jsons_-62.zip
Design Options Experiment: https://companion.benetou.fr/index.html?username=q2_visualmetaexport_map_via_wordpress&showfile=https://futuretextlab.info/wp-content/uploads/Frode-dynamicviewvisualmetaexport.jsons_-63.zip
Inventing Interactions Experiment: https://companion.benetou.fr/index.html?username=q2_visualmetaexport_map_via_wordpress&showfile=https://futuretextlab.info/wp-content/uploads/Frode-dynamicviewvisualmetaexport.jsons_-64.zip
ACM Hypertext ’23 Papers: https://companion.benetou.fr/index.html?username=q2_visualmetaexport_map_via_wordpress&showfile=https://futuretextlab.info/wp-content/uploads/Frode-dynamicviewvisualmetaexport.jsons_-65.zip
ACM Hypertext ’23 Papers central, with some metadata (one active in XR): https://companion.benetou.fr/index.html?username=q2_visualmetaexport_map_via_wordpress&showfile=https://futuretextlab.info/wp-content/uploads/Frode-dynamicviewvisualmetaexport.jsons_-67.zip
Single Letter for Short, Long paper or Poster (no Metadata): https://companion.benetou.fr/index.html?username=q2_visualmetaexport_map_via_wordpress&showfile=https://futuretextlab.info/wp-content/uploads/Frode-dynamicviewvisualmetaexport.jsons_-69.zip
Potential Further Work
Create Notes
In addition to starting with a Map imported from outside XR, the user will also be able to add new elements inside the environment, starting with text elements which can have metadata/definitions attached. For simplicity we can call these Notes. We expect to support adding References directly into the space in the future, but to start with, these will have to be imported from an external document or Map.
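As a sketch of what a Note might carry, under the assumption that a Note is a positioned text element plus optional key-value metadata:

```ts
// Hypothetical Note structure: a text element with optional metadata.

interface Note {
  id: string;
  text: string;
  createdAt: string;                 // ISO-8601 timestamp
  position: { x: number; y: number; z: number };
  metadata?: Record<string, string>; // e.g. { type: "concept" }
}

function createNote(
  text: string,
  position: { x: number; y: number; z: number },
  metadata?: Record<string, string>
): Note {
  return {
    id: crypto.randomUUID(),         // available in browsers and recent Node
    text,
    createdAt: new Date().toISOString(),
    position,
    metadata,
  };
}
```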
Keyboard. To write text into the space, the user can tap their primary-hand wrist (a Cube) and a keyboard will appear, allowing them to type into the space. The keyboard can be overlaid on a physical table if the user wishes. The resulting text will appear on top of the keyboard, similar to typing on a mobile phone.
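A minimal sketch of that flow, assuming hypothetical spawnKeyboard and spawnText hooks from the engine:

```ts
// Sketch: wrist-tap spawns a keyboard; typed text accumulates above it.

type Vec3 = { x: number; y: number; z: number };

interface TextElement { setText(s: string): void; }

declare function spawnKeyboard(pos: Vec3, onKey: (ch: string) => void): void;
declare function spawnText(pos: Vec3): TextElement;

function onWristCubeTapped(wristPos: Vec3) {
  // Output text floats just above the keyboard, phone-style.
  const output = spawnText({ ...wristPos, y: wristPos.y + 0.15 });
  let buffer = "";
  // The keyboard can stay world-anchored here, or be snapped onto a
  // physical table if the user prefers.
  spawnKeyboard(wristPos, (ch) => {
    buffer = ch === "\b" ? buffer.slice(0, -1) : buffer + ch;
    output.setText(buffer);
  });
}
```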
Cards. A future method for creating ‘Cards’ (as part of a document) is still being explored.