The story so far.
Or: How I’d prefer we go about this first book.
The story so far: a user opens the PDF version of The Future of Text book Volume 3 in a regular PDF reader and finds a link to open the book in VR/AR (I think I’ll just refer to XR from now on). The user clicks this link, futuretextlab.info/reader, and the book opens into an XR experience.
I use the term XR for this because the essence of this first foray into the space is a book artefact, not a full-on space. The core reading experience will be the book as it appears in PDF, augmented with Visual-Meta to extend the experience. This is our first venture, so I really don’t think we should extend further than we can manage. It is ‘the book in XR’. It is not something else.
Basic Appearance & Interactions
The book hovers at an initially pleasant reading distance, and the user can use both hands to grab the book, place it where they want and resize it to any size they want. Next page is a vertical palm swipe, right to left. To go back, swipe the opposite way.
One or two palms up skips to the next two-page spread with a level 1 heading (which indicates a new article). These spreads are animated in, coming up from the bottom, as shown here:
In my own testing, when going through a large volume of text, simply jumping to the next article or full section is important, maybe the most important navigation. It is also quite an easy interaction to teach and remember.
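Purely as an illustration, the gesture-to-navigation mapping described above could be sketched like this. All names here are my own, assuming a hypothetical web-based XR reader written in TypeScript; nothing about the actual implementation is implied.

```typescript
// Illustrative sketch only: mapping the palm gestures described above
// to reader navigation actions. Gesture detection itself is out of scope.
type Gesture = "swipe-right-to-left" | "swipe-left-to-right" | "palms-up";

type ReaderAction = "next-page" | "previous-page" | "next-article";

function gestureToAction(gesture: Gesture): ReaderAction {
  switch (gesture) {
    case "swipe-right-to-left":
      return "next-page";      // vertical palm swipe, right to left
    case "swipe-left-to-right":
      return "previous-page";  // the opposite swipe goes back
    case "palms-up":
      return "next-article";   // skip to the next spread with a level 1 heading
  }
}
```

The point of keeping the mapping this small is the same as in the text: a handful of gestures that are easy to teach and remember.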
It should also be possible to drag out elements, such as a picture/image, mural or even video, and have the ‘book’ become easily miniaturised, perhaps with a pinch, or not, depending on the user’s wishes when this takes place.
Ways to describe them and interact with them have been posted for comments at: https://visual-meta.info/2022/10/23/images/
How different elements, such as concept maps, murals and pictures, can be pulled out and interacted with is very much up for discussion.