Themed Discussion moderated by Frode Hegland, after a presentation preceded by a demo by Fabien Benetou and a demo by Peter Wasilko.
Draft Presentation
The discussion takes place in the context of the XR Sloan Project we are working on, so there are some decisions which have already been made, to help frame our discussion:
- Academic ‘Knowledge Object’ Focus. The user is an academic and the Knowledge Objects in the space will primarily be based on academic PDF papers.
- The Knowledge Objects will know what they are, and be able to display and connect accordingly: Author, Title, Date etc., as well as keywords and possibly themes, names and dates. How this data is gathered and stored is out of scope for today, so we can assume that it is available and can potentially be displayed and interacted with in the ways we come up with.
- The Interaction is split. There is one set of interactions for the volume of space the user is working in, with all the Knowledge Objects being available for placement anywhere (i.e. there are no ‘frames’ or ‘volumes’ within). The other is the authoring volume, which we have currently nicknamed the ‘Cube’, though its final form is indeterminate at this stage.
- The Workflow. The user opens up a set of Knowledge Objects, interacts with them to see them in various ways, annotates them (writes a summary on the object) and collates a set into their Cube as the work product, in the form of an Annotated Bibliography. This interaction is what we will be discussing.
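As a discussion aid, the workflow's end product might be sketched in code. Every name, field and the output format below is an illustrative assumption, not a project decision:

```typescript
// Sketch: annotated references collated into the 'Cube' as an
// Annotated Bibliography. All names here are assumptions for discussion.
interface AnnotatedReference {
  author: string;
  title: string;
  date: string;       // publication year or ISO date
  annotation: string; // the user's summary written on the object
}

// Collate the Cube's contents into a simple text bibliography,
// sorted alphabetically by author.
function annotatedBibliography(cube: AnnotatedReference[]): string {
  return [...cube]
    .sort((a, b) => a.author.localeCompare(b.author))
    .map(r => `${r.author} (${r.date}). ${r.title}.\n  ${r.annotation}`)
    .join("\n");
}

// Hypothetical example entries (not real references):
const cube: AnnotatedReference[] = [
  { author: "Roe", title: "Spatial Reading", date: "2022", annotation: "Summary B." },
  { author: "Doe", title: "XR Annotation", date: "2021", annotation: "Summary A." },
];
const bibliography = annotatedBibliography(cube); // Doe's entry sorts first
```

The point of the sketch is that the ‘Cube’ is, at minimum, an ordered collection plus a rendering rule; the open question is the interaction for getting objects into it.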
We Ask:
What kind of volume do we want to author? What kinds of knowledge objects should it contain? What kinds of interactions do we want it to support? And what interfaces should we have to compose it?
Should we use a wrist or hand menu? How about a HUD? Maybe floating commands? How about a ‘tool-belt’, and how about different ‘gloves’? What gestures can we support which do not clash with system gestures, and which are clear enough for the software to differentiate?
Organize thoughts & Author Knowledge Object
There are two major categories of objects in the space: the research, primarily references (which may be annotated), and the user’s manuscript, which we refer to as the ‘Cube’. We therefore need to ask what the interfaces should be:
- How can we summon and hide the ‘Cube’?
- How can we summon and hide the References and other Knowledge Objects we need to work with?
- How can we tell the system what layouts and operations we’d like to perform?
- How can we select objects individually?
- How can we select objects by category?
- & more
Knowledge Objects to Support
- Primary: References in a form which the author and reader can choose to interact with (including a timeline view, etc.), along with user Notes/Annotations.
- Media including images, video etc.
- User’s Notes in text form.
- Defined Concepts for Knowledge Mapping which may contain definitions, categories and connections to other objects.
- AI Agents
- Code to execute
- & more?
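For discussion purposes, the list above could be modelled as a tagged union of object kinds. Every name and field below is an assumption, not a settled schema:

```typescript
// Metadata the objects are assumed to carry (per the framing above).
interface ReferenceMeta {
  author: string;
  title: string;
  date: string; // e.g. "2021-03-01"
  keywords: string[];
  themes?: string[];
}

// The object kinds listed above, as a discriminated union.
// All names are illustrative assumptions.
type KnowledgeObject =
  | { kind: "reference"; meta: ReferenceMeta; annotation?: string }
  | { kind: "media"; mediaType: "image" | "video"; uri: string }
  | { kind: "note"; text: string }
  | { kind: "concept"; term: string; definition: string; linksTo: string[] }
  | { kind: "agent"; prompt: string }
  | { kind: "code"; source: string };

// One of the interface questions above: selecting objects by category.
function byKind(objects: KnowledgeObject[], kind: KnowledgeObject["kind"]): KnowledgeObject[] {
  return objects.filter(o => o.kind === kind);
}

// Hypothetical contents of the working space:
const space: KnowledgeObject[] = [
  { kind: "reference", meta: { author: "Doe", title: "XR Reading", date: "2021-03-01", keywords: ["xr"] } },
  { kind: "note", text: "Follow up on the timeline view." },
];
const references = byKind(space, "reference"); // the single reference above
```

A tagged union like this keeps ‘select by category’ trivial while leaving room for the "& more?" kinds still to be decided.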
Desired Views/Layouts/ViewSpecs
- Knowledge Map(s) viewable as 2D images and 3D volumes by author and reader, linked to the text, references and media.
- Narrative path through the information for the reader to follow the author’s argument, usually in the flow of the text, but it should also be possible to weave it into the other elements. It can also be audio/voice, avatar-based or structural.
- Reference Lists/Citation trees showing References in different ways.
- Timelines for References and time elements within objects.
- Linear Document in traditional form.
- & more?
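One way to frame these views is as declarative ‘ViewSpecs’ applied to the same underlying objects. The sketch below, with assumed names and fields throughout, shows a timeline view reducing to a filter plus a sort by date:

```typescript
// Hypothetical ViewSpec: a declarative request for a layout over the
// objects in the space. All names and fields are assumptions.
interface ViewSpec {
  layout: "knowledgeMap" | "narrative" | "citationTree" | "timeline" | "linear";
  filterKeyword?: string; // optionally restrict to objects with this keyword
}

interface Reference { title: string; date: string; keywords: string[] }

// A timeline view: select the matching references, then order them by date.
// ISO date strings sort chronologically under plain string comparison.
function timelineView(refs: Reference[], spec: ViewSpec): Reference[] {
  const selected = spec.filterKeyword
    ? refs.filter(r => r.keywords.includes(spec.filterKeyword as string))
    : refs;
  return [...selected].sort((a, b) => a.date.localeCompare(b.date));
}
```

The attraction of a declarative spec is that the same set of objects can be re-laid-out (map, timeline, linear document) without touching the objects themselves, which matches the split between the working volume and the ‘Cube’.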