Spatial Volumes of Knowledge

Dene, Fabien,

I think we might want to pivot to mostly looking at Bob Horn murals in XR for academic documents. Let me elaborate: after going through the user process at https://futuretextlab.info/authorship-process/, we decided that what we should focus on this year is the act of composing spatial volumes of knowledge, and I think we can narrow that focus further.

For this I suggest we support not only seated use, but also standing.

Proposal. My proposal extends Bob Horn’s work, adds the requirement that the work also be accessible in traditional media, and acknowledges that there will need to be some sort of a ‘binding’ mechanism. I feel we should focus our effort on allowing the user to construct what is essentially a mural, primarily in 2D but using the third dimension as it becomes useful, built out of knowledge objects (defined terms, citations, quotes etc.), following what Mark calls ‘structure in, structure out’. This can then be flattened into a vector graphic and added to a traditional paper as an illustration, with all the context data in JSON in the appendix, in Visual-Meta style.
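To make the flattening concrete, here is a minimal sketch of what such appended context data might look like. Every field name here (id, kind, source, rect, depth) is an assumption for illustration, not a settled schema:

```typescript
// Hypothetical appendix payload for one flattened mural (all names illustrative).
// Each knowledge object records what it is, where it sits in the flat image,
// and where it came from, so an XR reader can re-inflate it on request.
const muralAppendix = {
  mural: "example-argument-map",
  objects: [
    {
      id: "obj-1",
      kind: "citation",                            // defined term | citation | quote | media
      text: "Example extract from a source paper.",
      source: { doi: "10.1234/placeholder", page: 4 },  // placeholder reference
      rect: { x: 120, y: 340, w: 300, h: 60 },          // position in the 2D mural
      depth: 0                                           // third dimension, used as needed
    }
  ]
};
```

The same JSON could sit verbatim in a paper’s appendix, Visual-Meta style, and simply be ignored by any reader or tool that does not understand it.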

Background. An aspect of this reminds me of talking with a German professor some years ago who was interested in photographs of downed German aircraft from World War 2. I told him it might be interesting and useful to make image maps of the pictures, where the user could click on the sky to learn the weather that day, the ground to learn the location, the aircraft to learn about the aircraft, and so on (entirely without text as the main interface). Another aspect is Brandel’s early and brilliant Bob Horn mural as a single image in XR, navigated not by virtual walking but by pinching and moving the whole thing as though on a virtual string. This gives the user no queasiness, since their brain says they are not moving; the mural wall is. The instant ability to move around it was astounding. It is 2D and not interactive, but still powerful.

Binding. The issue around a ‘binding’ mechanism comes from thinking about the iPad: it seemed to have near-unlimited potential when launched, yet now we have digital books and PDFs and not much that takes advantage of its unique display and interactivity (just think of some of the early iPad animated comics and magazines). There will always be a ‘shape’ to what we visually interact with, and our job is not to enable a user to build a custom experience such as a 3D room for a book, as CD-ROM and DVD books did, but collections (volumes) of knowledge which the user can experience in conjunction with other volumes and contextual information.

Structure in, structure out. Yet another aspect of this is Visual-Meta, which is simply an approach of not hiding metadata, including when copying and pasting. In this scenario it would enable the user to extract text from source papers during active reading and retain all the connective citation information when they attach the extract to their knowledge volume/mural. We could extend this so that more metadata moves into such Spatial Volumes of Knowledge.
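A minimal sketch of what this could mean at copy time, assuming a hypothetical clipboard payload that pairs the extracted text with its citation (the type and function names are mine, not Visual-Meta’s):

```typescript
// Hypothetical shape for a copied extract that keeps its connective metadata.
interface CopiedExtract {
  text: string;            // the passage the user selected
  citation: {              // enough to cite the source, BibTeX-style fields
    author: string;
    title: string;
    year: number;
  };
  sourceLocation?: string; // e.g. page or section, if known
}

// When the extract is attached to the mural, the citation travels with it.
function attachToMural(mural: CopiedExtract[], extract: CopiedExtract): void {
  mural.push(extract); // nothing is stripped: structure in, structure out
}
```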

The technical framework. Essentially the logic of image maps, but in 3D, with elements carrying metadata for connection, interaction and exposition.
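In code, the image-map-in-3D idea might reduce to regions in space that carry metadata and respond to selection. This is a sketch under assumed names, not a committed design:

```typescript
// A 3D analogue of an HTML image map: a box-shaped region with attached metadata.
interface MapRegion {
  id: string;
  // Axis-aligned box in mural space: min/max corners.
  min: [number, number, number];
  max: [number, number, number];
  metadata: {
    connection?: string;   // what this region links to
    interaction?: string;  // what selecting it does
    exposition?: string;   // what is shown or explained on request
  };
}

// Return the region (if any) containing a 3D point, e.g. a pinch location.
function hitTest(
  regions: MapRegion[],
  p: [number, number, number]
): MapRegion | undefined {
  return regions.find(r =>
    p.every((v, i) => v >= r.min[i] && v <= r.max[i])
  );
}
```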

The academic framework. This would extend elements of both hypertext and spatial hypertext.

Reader experience. A reader can access the result as a PDF, HTML web page, JSON string or anything else, read the full text and see any augmented images as flat illustrations. When entering an XR system such as ours, which can parse the appended data, the augmented diagrams become interactive on request, by being pulled out of the document.
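A reading system that understands the appendix might need to do no more than this sketch suggests: find the appended block, try to parse it, and only then offer interactivity (function and field names assumed):

```typescript
// Hypothetical reader-side step: pull the appended JSON out of a document's
// appendix text and decide whether the diagrams can be made interactive.
function parseAppendedData(appendixText: string): { objects: unknown[] } | null {
  try {
    const data = JSON.parse(appendixText);
    return Array.isArray(data.objects) ? data : null;
  } catch {
    return null; // no parseable data: the reader simply sees the flat images
  }
}
```

The graceful-degradation property matters here: a reader without an XR system, or with a tool that cannot parse the data, loses nothing except the interactivity.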

Authoring experience. The user works in what looks much like a large map, not visually unlike what we have in Author today, where elements are primarily shown as plain text but encompass plain text, defined terms (whose definitions can be used for connections and layouts), linked documents and citations/extracts, as well as multimedia objects (pictures, video and 3D) which can also carry metadata such as definitions and connections. A sketch of this element model follows at the end of this section.
    Once we have such a system up and running, with the accompanying controls in the appropriate interface (view, selection and move controls etc.), we can look at the spatiality, including definitions of one or more ‘backgrounds’, and at how to do nesting (hiding/showing), external groupings and more.
    We can then also look at how to integrate this into the traditional framing document (as a manuscript then exported to PDF, for example), how such diagrams should appear in relation to each other, and how they might connect and potentially transmit data between pieces.
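The element types described above could be captured as a small discriminated union; again a sketch with assumed names, not a commitment:

```typescript
// Sketch of the authoring palette: every element is a knowledge object,
// and every kind can carry definitions and connections as metadata.
type KnowledgeObject =
  | { kind: "text"; body: string }
  | { kind: "definedTerm"; term: string; definition: string }  // definition usable for layout/connections
  | { kind: "linkedDocument"; title: string; url: string }
  | { kind: "citation"; extract: string; source: string }
  | { kind: "media"; mediaType: "picture" | "video" | "model3d"; uri: string };

interface MuralElement {
  object: KnowledgeObject;
  position: { x: number; y: number; z: number };  // primarily 2D; z as it becomes useful
  connections: string[];                          // ids of related elements
  group?: string;                                 // for nesting, hiding/showing, groupings
}
```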

Spatiality. I would further suggest we rethink our ‘seated’ position and maybe even mandate that this should primarily be a standing-by-a-wall experience, to really maximize the spatial aspect of XR. This may not always be practical, so we could consider the Brandel pinch-and-move-the-world gesture (non-dominant hand) or grab (as in SoftSpace), but we should at least consider getting away from primarily seated use, though some work should remain possible while seated.
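As a framework-agnostic sketch of the pinch-and-move-the-world gesture (types and names assumed): while the non-dominant hand holds a pinch, the hand’s displacement is applied to the mural’s transform rather than to the viewer, which is why the brain registers the wall as moving, not the self:

```typescript
type Vec3 = { x: number; y: number; z: number };

// While a pinch is held, move the mural by the hand's displacement since the
// pinch began. The viewer stays still, so there is no visual/vestibular mismatch.
function moveWorldOnPinch(
  positionAtPinchStart: Vec3, // mural position when the pinch began
  pinchStart: Vec3,           // hand position when the pinch began
  pinchNow: Vec3              // current hand position
): Vec3 {
  return {
    x: positionAtPinchStart.x + (pinchNow.x - pinchStart.x),
    y: positionAtPinchStart.y + (pinchNow.y - pinchStart.y),
    z: positionAtPinchStart.z + (pinchNow.z - pinchStart.z),
  };
}
```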

Experiments to Experience. Within this frame we can better map out the required interactions and how they should fit the available (and invented) interfaces, providing rich research opportunities.
