User Stories

I want to be able to:

Interact with a PDF (with my hands) so I can comprehend the material more easily and intuitively than I can with a flat screen and a keyboard.

Synthesize other people’s material with my own impressions (even if just by highlighting) and restructure particles of content in a way that is more useful to me.

Have local, offline access to the XR space without having to connect to external services.

Easily capture various particles and compositions and export them out of XR into an external medium (e.g. send to email).

Access single documents, multiple documents, my own Library and external resources.

Interact with single documents, multiple documents, my own Library and external resources for everything from reading a single page, to seeing an entire document opened on a ‘wall’, to navigating deep connections between documents and people.

Interact with my information, whatever it is, in a richly visual and interactive way, such as with the GigaMapping approach: https://systemsorienteddesign.net/what-is-gigamapping/

Interact on my own, or in a shared virtual space.

Access multiple virtual rooms/contexts/libraries, each with its own characteristics, all synchronised.

Allow for open, rich metadata, connected to my traditional computer environment, to enable rich interactions.

Access the last stored state to remind me what I was doing in my previous session (see the data sketch at the end of this section).

Built on Alan Laidlaw’s notes: https://laidlaw.craft.me/futureoftext_user_story_sketches/b/3BC69A07-CFE5-4928-AA6C-188E2A051BA6/Note-on-technical-requirements
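
Taken together, several of these stories (capturing particles of content, attaching open metadata, exporting out of XR, and restoring the last session) imply a small shared data model. The TypeScript below is only a rough sketch of how that model might look; Particle, SessionState, exportAsEmailBody and the other names are hypothetical, not part of any existing system described above.

```typescript
// Hypothetical data model: the names here are illustrative, not from any existing system.

// A "particle" of content: a highlight, note, or excerpt lifted from a document.
interface Particle {
  id: string;
  sourceDocument: string;            // URI of the PDF or other document it came from
  kind: "highlight" | "note" | "excerpt";
  text: string;
  metadata: Record<string, string>;  // open key/value metadata, shareable with the desktop environment
  createdAt: string;                 // ISO 8601 timestamp
}

// The state to restore on return: which room was open, which documents, which particles.
interface SessionState {
  room: string;
  openDocuments: string[];
  particles: Particle[];
  savedAt: string;
}

// Export a set of particles to an external medium, e.g. as the body of an email.
function exportAsEmailBody(particles: Particle[]): string {
  return particles
    .map(p => `• ${p.text}\n  (from ${p.sourceDocument}, ${p.createdAt})`)
    .join("\n\n");
}

// Persist the last session state locally (a Map stands in for offline storage here).
function saveSession(state: SessionState, store: Map<string, string>): void {
  store.set("lastSession", JSON.stringify(state));
}

// Load the last session state, if one was saved.
function loadSession(store: Map<string, string>): SessionState | undefined {
  const raw = store.get("lastSession");
  return raw ? (JSON.parse(raw) as SessionState) : undefined;
}
```

Keeping the metadata as open key/value pairs and the stored state as plain JSON is one way the same particles could travel between the XR space, the traditional computer environment and external media such as email, as the stories above ask for.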