Future Text Lab
Welcome to the Future Text Lab, where we explore what it can be like to work with text in traditional digital environments, in richly interactive and immersive augmented environments, and when enhanced with AI. We look both at what can be implemented today and at what may be realised further in the future, as well as the infrastructures that will be necessary to make it happen.
Symposium
We have been hosting an annual Symposium on the Future of Text for over a decade. The previous Symposium was held online and in London on the 27th and 28th of September 2022. The next Symposium will be held online and in London, UK, on the 4th and 5th of October 2023.
Books
We publish an annual ‘Future of Text’ series of books, available for download from ‘The Future of Text’, with Volume 3 published in December 2022.
Guest Speaker Series
The full recorded Monthly Guest Speaker sessions, including Q&A/dialogue, are archived on video and published with full transcripts in our Journal.
Dreaming
We feel strongly that as we move through increasingly large and complex information environments we will need greater agility to thrive and not be overwhelmed, manipulated or disconnected. Interacting with your knowledge should be as immersive and engaging as playing a great game, dancing, painting or flying…
We feel the future will be multidimensional and multi-device, with much important work left to do for traditional interfaces, such as computers and smartphones, as well as in VR, which we believe will be a hugely important dimension of our work.
Research Questions
Interactions. Our research covers the end-user experience of working with text in traditional (analog and digital) environments as well as in VR domains, using direct interactions as well as AI-augmented interactions.
We ask: how far can we develop the potential of interactive text to expand our minds and increase the depth of our understanding?
Infrastructure. The work is underpinned by our research into the infrastructures which will be necessary to support such work. This includes how to store structural metadata (such as headings, what is inside images, glossary entries, and layout in 2D and in 3D in relation to space and other context) and connective metadata (how to cite the document and what it cites) across domains, and how such metadata can support both manual and AI interactions.
We ask what actions can be enabled, what connections and views can be followed and created, and what analysis can be made possible, as well as how the information can be transparently moved between domains while also taking advantage of the unique characteristics of each domain, such as the multi-dimensional layout possibilities of VR spaces.
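As a rough illustration of the kind of metadata we have in mind, the sketch below groups structural and connective fields into a single record that could travel with a document between domains. The field names and types are our own assumptions for illustration, not a finished schema.

```python
# Illustrative sketch only: field names and structure are assumptions,
# not a published Future Text Lab schema.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class StructuralMetadata:
    headings: List[str]                  # document outline
    image_descriptions: List[str]        # what is inside each image
    glossary: Dict[str, str]             # term -> definition
    layout_2d: Optional[str] = None      # e.g. page or column layout
    layout_3d: Optional[str] = None      # e.g. placement in a VR space

@dataclass
class ConnectiveMetadata:
    citation: str                                           # how to cite this document
    cited_works: List[str] = field(default_factory=list)    # what this document cites

@dataclass
class DocumentMetadata:
    structural: StructuralMetadata
    connective: ConnectiveMetadata

# A document carrying such a record could be re-laid-out in VR or queried by
# an AI assistant without losing its structure or its citation context.
```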
Goal. The goal is to truly unleash the potential of richly interactive text for augmenting how we think and communicate.
What we do not know. There is much we do not know. We do not know yet how best to interact with text in VR to help us get to grips with our information in richer ways. We do not know how to transfer data, and particularly metadata, between VR rooms and in and out of VR. We also do not know how we can most powerfully employ AI to help us view and interact with our textual information in the most useful ways. And this is only the tip of the iceberg we are looking at.
Implementations
Software. We have built a word processor, Author, and a PDF viewer, Reader, both available for macOS. Reader is free, while Author is sold at a modest cost to help support development. A free evaluation version of Author is available, and students will be given free copies on request.
Infrastructure. We are working on enabling infrastructures, including what we call Visual-Meta, which enables even flat, frozen PDF documents to carry useful metadata into augmented environments.
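As a rough sketch of the idea, the snippet below appends human-readable, BibTeX-style metadata as plain text at the end of a document, so that it survives even when the document is frozen as a flat PDF. The exact markers and fields shown are illustrative assumptions, not the full Visual-Meta specification.

```python
# Sketch of appending a Visual-Meta-style appendix to a document's text.
# Marker strings and field names are illustrative assumptions.
def build_visual_meta_appendix(title: str, author: str, year: str) -> str:
    return (
        "@{visual-meta-start}\n"
        "@article{\n"
        f"  title = {{{title}}},\n"
        f"  author = {{{author}}},\n"
        f"  year = {{{year}}},\n"
        "}\n"
        "@{visual-meta-end}\n"
    )

# Example: the appendix is plain, visible text, so any reader (human or
# software) can recover the metadata from the last page of the PDF.
print(build_visual_meta_appendix("An Example Document", "A. Author", "2022"))
```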
Community
The wider Future Text Lab community meets every week. We have a page with VR Resources for the community (for easy access while in VR), General Resources, and a Chat Log of our recorded twice-weekly conversations, which are archived on YouTube.
If you are interested in getting involved, please get in touch on Twitter or Mastodon.