Interactions. Our research covers the end-user experience of working with text in traditional (analog and digital) environments as well as in VR, using both direct interactions and AI-augmented interactions.
We ask how far we can develop the potential of interactive text to expand our minds and increase the depth of our understanding.
Infrastructure. The work is underpinned by our research into the infrastructures which will be necessary to support such work. This includes how to store structural metadata (such as headings, the contents of images, glossaries, and layout in 2D and 3D in relation to the surrounding space and other context) and connective metadata (how to cite the document and what it cites) across these domains, and how such metadata can support both manual and AI interactions.
We ask what actions can be enabled; what connections and views can be followed, viewed, and created; what analysis can be made possible; and how information can be moved transparently between domains while still taking advantage of the unique characteristics of each, such as the multi-dimensional layout possibilities of VR spaces.
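To make the distinction concrete, the structural and connective metadata described above might be stored along the following lines. This is a minimal sketch under stated assumptions: every field name here is illustrative, not a defined standard, and the values are placeholders.

```python
import json

# Illustrative sketch only: field names and values are assumptions,
# not a specification the project has defined.

# Structural metadata: what is inside the document and how it is laid out,
# including placement in both 2D (page) and 3D (VR space) contexts.
structural_metadata = {
    "headings": ["Interactions", "Infrastructure", "Goal"],
    "images": [{"id": "fig1", "description": "diagram of a VR workspace"}],
    "glossary": {"VR": "virtual reality"},
    "layout": {
        "2d": {"page": 1, "column": 1},
        # 3D placement relative to the surrounding space, e.g. a VR room.
        "3d": {"position": [0.0, 1.5, -2.0], "room": "reading-room"},
    },
}

# Connective metadata: how to cite this document, and what it cites.
connective_metadata = {
    "citation": {"title": "Example Title", "authors": ["Example Author"]},
    "cites": ["doi:10.0000/example"],  # placeholder identifier
}

# Serializing to a plain interchange format is one way such metadata
# could travel between domains (e.g. in and out of VR).
payload = json.dumps(
    {"structural": structural_metadata, "connective": connective_metadata}
)
```

A plain serialization like this is only one possible carrier; the open research question is which representation preserves both kinds of metadata faithfully as a document moves between analog, digital, and VR contexts.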
Goal. The goal is to truly unleash the potential of richly interactive text for augmenting how we think and communicate.
What we do not know. There is much we do not know. We do not yet know how best to interact with text in VR to help us get to grips with our information in richer ways. We do not know how to transfer data, and particularly metadata, between VR rooms and in and out of VR. We also do not know how we can most powerfully employ AI to help us view and interact with our textual information in the most useful ways. And this is only the tip of the iceberg.