Welcome to the Future Text Lab, where we explore what it can be like to work with text in richly interactive and immersive augmented (VR/AR/XR) environments, using both direct interactions and AI-augmented interactions. We look at what can be implemented today, what may be realised in the more distant future, and what infrastructures will be necessary to make it happen.
We support dialogue on the subject by hosting an annual Symposium on The Future of Text and a Guest Speaker Series. We disseminate what we learn in our Journal and in our annual series of books on The Future of Text. We also produce software to put the research into practice, and we have developed an approach to enabling infrastructures.
We host an annual Symposium on the Future of Text. The previous Symposium was held online and in London on the 27th and 28th of September 2022, and the next is expected to take place online and in Bergen, Norway, in autumn 2023.
Guest Speaker Series
We feature a Guest Speaker every month in an open forum. The full recorded monthly Guest Speaker sessions, including Q&A and dialogue, are archived on video and published with full transcripts in our Journal.
‘The Future of Text’ Series of Books
We publish an annual ‘Future of Text’ series of books, which are available for download from ‘The Future of Text’, with volume 3 due to be published in late 2022, including the transcripts from the 2022 Symposium.
We feel strongly that as we move through increasingly large and complex information environments, we will need greater agility to thrive and not be overwhelmed, manipulated or disconnected. Interacting with your knowledge should be as immersive and engaging as playing a great game, dancing, painting or flying…
We feel the future will be multidimensional and multi-device, with much important work still to do for traditional interfaces, such as computers and smartphones, as well as in VR, which we believe will be a hugely important dimension of our work.
Interactions. Our research covers the end-user experience of working with text in traditional (analog & digital) environments as well as in VR domains, using direct interactions as well as AI-augmented interactions.
We ask how far we can develop the potential of interactive text to expand our minds and increase the depth of our understanding.
Infrastructure. The work is underpinned by our research into the infrastructures which will be necessary to support such work. This includes how to store structural metadata (including headings, what is inside images, glossaries, layout in 2D and 3D in relation to space and other context, etc.) and connective metadata (how to cite the document and what it cites) across domains, and how such metadata can support manual and AI interactions.
We ask what actions can be enabled; what connections and views can be followed, viewed and created; what analysis can be made possible; and how information can be moved transparently between domains while also taking advantage of the unique characteristics of each domain, such as the multidimensional layout possibilities of VR spaces.
Goal. The goal is to truly unleash the potential of richly interactive text for augmenting how we think and communicate.
What we do not know. There is much we do not know. We do not know yet how best to interact with text in VR to help us get to grips with our information in richer ways. We do not know how to transfer data, and particularly metadata, between VR rooms and in and out of VR. We also do not know how we can most powerfully employ AI to help us view and interact with our textual information in the most useful ways. And this is only the tip of the iceberg we are looking at.
Software. We have built a word processor, Author, and a PDF viewer, Reader, both available for macOS. Reader is free, while Author is sold at a modest cost to help support development. A free evaluation version is available, and students will be given free copies on request.
Infrastructure. We are working on enabling infrastructures, including what we call Visual-Meta, which enables even flat and frozen PDF documents to carry useful metadata into augmented environments.
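The idea behind Visual-Meta is that metadata travels as human-readable plain text appended to the document itself, so any software that can read the text can recover it. As a minimal illustrative sketch only, here is how a reader might extract BibTeX-style fields from such an appendix; the marker strings and field names below are assumptions for demonstration, not the official Visual-Meta specification:

```python
# Illustrative sketch: extract a Visual-Meta-style plain-text metadata
# block appended to a document. The marker strings and field names here
# are hypothetical, chosen for demonstration only.
import re

START = "@{visual-meta-start}"
END = "@{visual-meta-end}"

def extract_metadata(document_text: str) -> dict:
    """Return key/value pairs found between the start/end markers."""
    start = document_text.rfind(START)
    end = document_text.rfind(END)
    if start == -1 or end == -1 or end < start:
        return {}  # no metadata block present
    block = document_text[start + len(START):end]
    # Parse simple BibTeX-style fields of the form: key = {value}
    fields = re.findall(r"(\w+)\s*=\s*\{([^}]*)\}", block)
    return dict(fields)

doc = (
    "...document body...\n"
    "@{visual-meta-start}\n"
    "@article{example,\n"
    "  title = {The Future of Text},\n"
    "  author = {Frode Alexander Hegland},\n"
    "}\n"
    "@{visual-meta-end}\n"
)
print(extract_metadata(doc))
```

Because the block is plain text inside the document, it survives copying, printing and format conversion in a way that external metadata records often do not.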
The wider Future Text Lab community meets twice a week. We have a page with VR resources for the community, for easy access while in VR, as well as General Resources and a Chat Log of our recorded twice-weekly conversations, which are archived on YouTube. If you are interested in getting involved, please get in touch.
Frode Alexander Hegland
& The Future Text Lab Team