Context/Frame

According to our Sloan Agreement, our objectives are to:

1) support dialogue about how to work with text in XR,
2) build XR software, and
3) develop metadata infrastructures to support software interaction that integrates with real-world workflows.

This project seeks to harness the potential of Virtual and Augmented Reality (VR/AR), collectively referred to as Extended Reality or ‘XR’, to expand academic communication by developing open-source software that makes it possible for users to read, manipulate, navigate, and create in three-dimensional space.

• Imagine working on a research project in your office with an entire wall of notes connected to source documents and to other notes.
• Imagine following connections like strings in a web throughout your library and beyond, making associative links between ideas.
• Imagine further the potential of constructing entirely new shapes of knowledge.

XR Software. Coding for XR will be a learning experience, and the knowledge gained will be made public through the Symposium and Book. Our end goal for software development is to allow a user to put on an XR headset, access their PDF library, read and interact with documents in both XR and traditional systems, and export their work in traditional and useful formats. This workflow will be possible because of integration with software we have already developed for macOS: www.augmentedtext.info.
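As a minimal sketch of how the first step, entering XR in the browser, could be bootstrapped, the following assumes three.js and its bundled VRButton helper; this is an assumption about tooling, not a committed implementation:

```ts
import * as THREE from 'three';
import { VRButton } from 'three/examples/jsm/webxr/VRButton.js';

// Bootstrap a WebXR session: one scene, one camera, and a renderer
// with XR enabled; VRButton adds the browser's "Enter VR" affordance.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 0.01, 50);
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.xr.enabled = true;
document.body.appendChild(renderer.domElement);
document.body.appendChild(VRButton.createButton(renderer));

// The loop must go through the renderer so it can hand frame timing
// to the XR device when a session is active.
renderer.setAnimationLoop(() => renderer.render(scene, camera));
```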
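Once a session is running, a document page could be brought into the scene by rasterizing it to a texture. The sketch below assumes pdfjs-dist for rendering (worker configuration omitted); the function name addPageToScene and all sizes are illustrative placeholders, not part of our existing software:

```ts
import * as THREE from 'three';
import * as pdfjsLib from 'pdfjs-dist';

// Render the first page of a PDF to an offscreen canvas, then place it
// in the scene as a textured plane so it can be read inside XR.
async function addPageToScene(scene: THREE.Scene, url: string): Promise<void> {
  const pdf = await pdfjsLib.getDocument(url).promise;
  const page = await pdf.getPage(1);
  const viewport = page.getViewport({ scale: 2 });

  const canvas = document.createElement('canvas');
  canvas.width = viewport.width;
  canvas.height = viewport.height;
  const ctx = canvas.getContext('2d')!;
  await page.render({ canvasContext: ctx, viewport }).promise;

  const aspect = viewport.width / viewport.height;
  const panel = new THREE.Mesh(
    new THREE.PlaneGeometry(0.6 * aspect, 0.6),    // ~60 cm tall page
    new THREE.MeshBasicMaterial({ map: new THREE.CanvasTexture(canvas) })
  );
  panel.position.set(0, 1.5, -1);                  // eye height, 1 m ahead
  scene.add(panel);
}
```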

XR software development entails programming for integration with Hegland’s Reader and Author applications, which will serve as testbeds within the XR environment.

Software Functionality

A user story outlining the specific implementation aims of the project follows. The target users are scholars, including university students, performing a literature review for a paper they are writing. The components involved will be file synchronization and WebXR software. Our aims for interactions are as follows:

Library

  • On first use, a view of the user’s Library (initially PDF only) will appear. Interactions and views for seeing documents, their contents, their authors, and their connections will be a major part of the research.
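One possible opening layout, sketched below under the assumption of a three.js scene, places the Library’s document covers on a gentle arc at a comfortable distance; layoutLibrary and all measurements are illustrative:

```ts
import * as THREE from 'three';

// Place n document covers on a gentle arc around the user,
// at a comfortable reading distance.
function layoutLibrary(scene: THREE.Scene, covers: THREE.Texture[]): void {
  const radius = 1.5;                       // metres from the user
  const arc = Math.PI / 2;                  // spread across 90 degrees
  covers.forEach((cover, i) => {
    const t = covers.length > 1 ? i / (covers.length - 1) : 0.5;
    const angle = -arc / 2 + t * arc;
    const card = new THREE.Mesh(
      new THREE.PlaneGeometry(0.21, 0.297), // A4 proportions, ~30 cm tall
      new THREE.MeshBasicMaterial({ map: cover })
    );
    card.position.set(radius * Math.sin(angle), 1.4, -radius * Math.cos(angle));
    card.lookAt(0, 1.4, 0);                 // face the user
    scene.add(card);
  });
}
```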

Document

  • Full document interactions: The user will be able to directly interact with a document to move it, scale it, and set a preferred reading angle. The user will further be able to lock the document to a surface such as a table, or to the headset so that it follows the user’s head (a sketch of world- versus head-locking follows this list). The user will be able to read a document as a single page, a two-page spread, a multiple-page spread, or as full pages in a large rectangle.
  • Document component interactions: The user will be able to interact with a document to place elements from it, including images, the table of contents, glossary, graphs, and references, at 3D spatial positions, either manually or at pre-determined locations.
  • Multi-Document interactions (Connections): The user will be able to interact with citations in one document and see how they connect to other documents in their Library and beyond (a sketch of drawing such connection lines follows this list).
  • External Document interactions: Documents not in the user’s Library will be presented as ‘tokens’ in citation trees and will be quick and easy to retrieve.
  • Headset/Traditional Computer Transition: The user will be able to take off their headset at any time. Because this approach to moving between headset and traditional computer uses Visual-Meta, any document presented in XR will feature an additional, temporary Appendix where full spatial information is recorded for use the next time the user chooses to interact with the document in XR (a sketch of what such a spatial record might contain follows this list).
  • Future/Advanced Interactions: Interactions with knowledge graphs will involve questions of how document knowledge graphs connect to or interact with the user’s knowledge graphs, how to hide and show nodes, how to nest graphs, and more, using the extended space without ending up with overwhelming clutter.
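A minimal sketch of the world- versus head-locking behaviour referenced in the full document interactions above, assuming a three.js scene graph; setHeadLocked and its offsets are assumptions for illustration:

```ts
import * as THREE from 'three';

// Toggle a document panel between world-locked (e.g. left on a table)
// and head-locked (following the user's view). Assumes the camera has
// been added to the scene so that its children are rendered.
function setHeadLocked(panel: THREE.Object3D, camera: THREE.Camera,
                       scene: THREE.Scene, locked: boolean): void {
  if (locked) {
    camera.add(panel);                  // reparent to the camera
    panel.position.set(0, -0.1, -0.8);  // ~80 cm ahead, slightly below gaze
    panel.rotation.set(0, 0, 0);
  } else {
    scene.attach(panel);                // attach() preserves world transform
  }
}
```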
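The citation connections described above could be rendered as simple lines between a citation anchor in one document and the cited document or its token; connectCitation below is an illustrative name, not part of our existing software:

```ts
import * as THREE from 'three';

// Draw a visible link from a citation anchor in one document to the
// cited document (or its placeholder token) elsewhere in the space.
function connectCitation(scene: THREE.Scene,
                         from: THREE.Vector3, to: THREE.Vector3): THREE.Line {
  const geometry = new THREE.BufferGeometry().setFromPoints([from, to]);
  const material = new THREE.LineBasicMaterial({ color: 0x4488ff });
  const line = new THREE.Line(geometry, material);
  scene.add(line);
  return line;
}
```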
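For the temporary spatial Appendix, the sketch below shows one hypothetical shape for what would need to be captured per placed element so a layout can be restored in the next XR session. The actual Visual-Meta appendix is a text format defined by that project; every field name here is an assumption:

```ts
// One hypothetical record of where a document element was placed.
interface SpatialRecord {
  elementId: string;                    // e.g. "figure-3", "page-12"
  position: [number, number, number];   // metres, scene coordinates
  rotation: [number, number, number];   // Euler angles, radians
  scale: number;
}

interface SpatialAppendix {
  version: string;
  savedAt: string;                      // ISO 8601 timestamp
  records: SpatialRecord[];
}

// Serialize the current layout so it can be appended to the document
// and restored the next time the user enters XR.
function saveLayout(records: SpatialRecord[]): string {
  const appendix: SpatialAppendix = {
    version: '0.1',
    savedAt: new Date().toISOString(),
    records,
  };
  return JSON.stringify(appendix, null, 2);
}
```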