The Use Case we are working on for this first year centres on an academic (including students) who works in traditional digital environments with their PDF documents and then chooses to don a headset to get a richer view of the information and its connections. Year two will incorporate authoring as well. The use case is primarily based on the following stages of reading:
- Skim/Overview to check interest. Open a new document, or a new set of documents (such as all the Proceedings from a Conference), and go through them to determine whether they are worth reading fully. This can incorporate skimming the visual surface of a PDF, reading an abstract, viewing an AI summary or analysis, seeing the document or author in context, and more.
- Deep Reading (comfort). In this phase the user opens a document with the intention of reading it through comfortably, while remaining in context with the rest of their knowledge environment.
- Add Notes/Annotate. Active reading includes the user noting things down, through handwriting, typing, highlighting, or ‘doodling’ on pages, as well as speaking annotations, to augment their understanding while reading and to help them find and see the work in context later.
- See Connections (references, links, etc.). This is probably the aspect of reading most immediately suited to XR environments, where what the user is reading is contextualised, as a document, snippet of text, or concept, within a knowledge space the user collaboratively builds.