XR

    

Accessing the XR Experience in headsets

When using an XR headset, such as a Meta Quest or Apple Vision Pro, the user goes to https://futuretextlab.info, navigates to ‘XR Experiments’ (the page you are now on), and then clicks on ‘Current XR Experiments for Testing’.

On this page the user can see all our experiments, with the current testing environment on top and previous work underneath. Following one of the links opens a page describing the experience, with a video preview and a link to the actual WebXR experience.

Clicking on the link to the WebXR experience loads a web page with a single sphere, with text explaining that the user should tap the sphere to enter the XR environment, and that the sphere will then sit on their arm to provide further controls.
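As a rough illustration of what such an entry page needs to do before offering the sphere, the TypeScript sketch below checks whether the browser can provide an immersive session at all. It assumes WebXR type definitions (e.g. @types/webxr) are available, and the element id ‘enter-xr’ is a hypothetical name, not taken from the project’s actual code.

```typescript
// Sketch: check whether the browser can offer an immersive session
// before showing the "tap the sphere to enter" prompt.
// The element id "enter-xr" is a hypothetical placeholder.
async function showEntrySphereIfSupported(): Promise<void> {
  const enterElement = document.getElementById("enter-xr");
  if (!enterElement) return;

  // navigator.xr only exists in WebXR-capable browsers
  // (on Vision Pro, only after the feature flags are enabled; see below).
  const xr = navigator.xr;
  if (!xr || !(await xr.isSessionSupported("immersive-vr"))) {
    enterElement.textContent = "WebXR is not available in this browser.";
    return;
  }
  enterElement.style.display = "block"; // reveal the entry sphere
}

showEntrySphereIfSupported();
```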

Future options

In the future the user will go to https://futuretextlab.info, click on ‘XR’, and be presented with one or more spheres, as we may develop different experiences for different uses.

Custom Library option

In the lower left corner there is a button the user can click to upload a catalog of their library in JSON. This is optional; the default is the ACM Hypertext Proceedings. If the user chooses to upload their own library catalog, it will need to conform to the standard we are developing.
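The catalog standard is still being developed, so the sketch below is purely illustrative: the field names are hypothetical placeholders, not the standard itself. It simply shows the kind of structure a JSON library catalog might carry.

```typescript
// Illustrative only: the catalog standard is still in development,
// so every field name here is a hypothetical placeholder.
interface CatalogEntry {
  title: string;
  authors: string[];
  year?: number;
  abstract?: string;
  url?: string; // where the full document could be fetched
}

interface LibraryCatalog {
  name: string; // e.g. "My Library"
  entries: CatalogEntry[];
}

const exampleCatalog: LibraryCatalog = {
  name: "My Library",
  entries: [
    {
      title: "An Example Paper",
      authors: ["A. Author"],
      year: 2024,
      abstract: "Placeholder abstract text.",
    },
  ],
};

// The upload button would accept this structure serialised as JSON.
const json = JSON.stringify(exampleCatalog, null, 2);
```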

Entering XR

Once the user taps the sphere, there may be dialog boxes asking the user to confirm access, depending on the headset they are using. They may also need to have enabled WebXR in the headset settings, as is currently required on the Vision Pro (in Settings go to Apps, then Safari, Advanced, Feature Flags, enable the WebXR Device API and WebXR Hand Input Module, then restart Safari).
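Under the hood, tapping the sphere starts an immersive WebXR session, which is what triggers the headset’s permission dialog. A minimal sketch of what that request might look like, assuming hand tracking is wanted as an optional feature (the function name and logging are illustrative, not the project’s actual code):

```typescript
// Sketch: request an immersive session when the user taps the sphere.
// Must be called from a user gesture (the tap satisfies this).
// "hand-tracking" is requested as an optional feature so the session
// still starts on headsets that only expose controllers.
async function enterXR(): Promise<XRSession | null> {
  if (!navigator.xr) return null;
  try {
    const session = await navigator.xr.requestSession("immersive-vr", {
      optionalFeatures: ["hand-tracking", "local-floor"],
    });
    session.addEventListener("end", () => {
      // Back to the ordinary web page when the user exits XR.
      console.log("XR session ended");
    });
    return session;
  } catch (err) {
    // The headset may show a permission dialog; the user can decline.
    console.warn("Could not start XR session", err);
    return null;
  }
}
```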

Interactions in XR

Once this is done, the user enters a grey environment with a Library Catalog displayed in front of them (by default the ACM Hypertext Proceedings). Initially there is only one such Library Catalog, but we aim to support more than one later.

The user will also have the sphere on their left wrist (we hope to support left-handedness later), which they can tap for further controls.

All their documents are shown in a vertical list, sorted by title. The user can point to any document to see more information about it, such as the abstract (more information will be shown as we decide what is necessary and available over time).

To interact with the text in the environment, the user makes a pointing gesture by folding in three fingers, starting from the little finger, roughly forming a ‘gun’ gesture. This produces a cursor dot where they are pointing, supported at first by a thin ‘laser pointer’ line. To interact with text, the user simply points to highlight it, and pinches their fingers to make an action happen.
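One common way to detect the confirming pinch with the WebXR Hand Input API is to measure the distance between the thumb-tip and index-finger-tip joints of a tracked hand (obtained from an input source’s hand property). The sketch below shows this approach; the 2 cm threshold is an arbitrary illustrative value, not a figure from the project.

```typescript
// Sketch: detect a pinch by measuring the distance between the
// thumb-tip and index-finger-tip joints of a tracked hand.
// The 2 cm threshold is an arbitrary illustrative value.
const PINCH_THRESHOLD_METRES = 0.02;

function isPinching(
  frame: XRFrame,
  hand: XRHand, // e.g. from session.inputSources[i].hand
  referenceSpace: XRReferenceSpace
): boolean {
  const thumbTip = hand.get("thumb-tip");
  const indexTip = hand.get("index-finger-tip");
  if (!thumbTip || !indexTip) return false;

  // getJointPose is only available when hand input is supported.
  const thumbPose = frame.getJointPose?.(thumbTip, referenceSpace);
  const indexPose = frame.getJointPose?.(indexTip, referenceSpace);
  if (!thumbPose || !indexPose) return false;

  const dx = thumbPose.transform.position.x - indexPose.transform.position.x;
  const dy = thumbPose.transform.position.y - indexPose.transform.position.y;
  const dz = thumbPose.transform.position.z - indexPose.transform.position.z;
  return Math.hypot(dx, dy, dz) < PINCH_THRESHOLD_METRES;
}
```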

Open document from Library to Reading environment & return

A primary option is to open a document to view it in the Reading environment, by tapping ‘Open’ below the document’s abstract.

To return to the Library from the Reading environment, the user can long-tap on the sphere on their arm, or tap on the sphere and then choose ‘Library’.

Further interactions

That is as far as we have come to date.

We are now looking at the mechanics of the standard interactions, such as opening a document, moving text around, and issuing commands. Design directions for how these should appear and function are under discussion.

Previous XR Experiences

Repository & Access

Contact the admin at frode@hegland.com for access to this GitHub repository and to post your work here to share.