These meetings are very much open and we invite anyone who has an interest in text and/or working in XR/VR/AR to join us. Please feel free to join as many or as few sessions as you like. We also understand that sometimes it’s not possible to be there at the start or at the end, and this is OK.
8:30-10 Pacific | 11:30-1:00 Eastern | 4:30-6 UK | 5:30-7 CEST
(in case of conflict, such as summer time changes being out of sync, US times are adhered to)
Welcome
As we enter 2026 we look to further extend our minds with text. We start with the premise that thinking and speaking are not the same thing, something which has become more important in the age of AI chatbots.
Aim
Our aim is simply and clearly to expand the discourse around what the future of text can and should be, with a particular (though by no means exclusive) focus on XR, and to support the building of tools and environments so that we can all ‘experiment to experience’ what working in XR can actually be.
We are trying to develop interchangeable, open, rich knowledge objects in spatial configurations – and looking at how we can best interact with them.
Approach
Fast & Slow In conversation, most of our thinking happens behind the scenes of our minds: thoughts are formed into statements quickly, drawing on the vast resources of our pasts.
With slow thinking† we examine new evidence with less of a draw on our past. When we allow ourselves to ‘spend a moment’ to think, we start to organize our thoughts in the thinking space of our working memory, on paper and on screens.
Extending Minds This takes mental effort, for which we rely on tools both to help us freeze the information we need, through scribbling, doodling and writing, and to organize what we write.
Text The written word is a powerful and rich extender of thought.
Potential Writing down sentences (through speech to text or typing) can be akin to reflexively speaking, while structuring text supports slower, more deliberate and reflective thought. Augmenting slow and deliberate thinking is the only real hope we have to escape the limitations of our past. Thus, an extended thinking space is shaped not only by who we are, but also by who we want to be. This is why we are working on text in space.
Tools Building on the research experience of the last two years, which was supported by the Alfred P. Sloan Foundation with PI Dene Grigar and Co-PI Frode Hegland, we are focusing on tool building to expand our ability to work in XR.
“Whom to Augment First”
When starting his work on augmenting human intellect, Doug Engelbart asked “Whom to augment first”†. Since he worked at a time when pretty much everything had to be developed from scratch, he chose programmers.
For the last two years we have been working to augment how academics read and write. As we complete these two years of Alfred P. Sloan Foundation supported research, which was specifically to augment academic work in XR, we can ask this question again: going forward, whom do we want to augment?
We can continue to work on academia, with the potential of great change and with the concern of great inertia and resistance to change. We can work on students and we can work on people who are independent thinkers. Personally I think augmenting students is the way to go, if we include in that category professional academics and freelance thinkers.
What this means practically is that we try to re-think thinking and communication using text, and do not worry about cosmetic academic tradition (formatting, styles, etc.) while adhering strongly to the intellectual academic traditions (clarity & connections). We will therefore spend 2026 re-inventing reading and writing for anyone who is interested in thinking with text, and provide the resulting work in academic formats where possible, as fantasy rooted in reality, to inspire by building tools that push the needle forward.
We will focus on connecting thought (through citations and structure) and shaping it (through spatial arrangements), on the interactions to enable this, and on the underlying data to make it possible.
We will consider a unit of shared knowledge to be the work of one person, such as a single article or paper, which can be connected through metadata to a wider body of knowledge: a ‘volume’ in the form of a Journal/Proceedings/Book. This will address questions of what comprises ‘volumes’ of knowledge in philosophical and practical terms.
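As a rough, hedged illustration (not a specification of any format we have settled on), the sketch below shows how such a unit and its connection to a volume might be expressed as data; all type and field names here are assumptions for discussion only.

// Hypothetical TypeScript sketch of a single-author knowledge unit and the
// 'volume' it belongs to. Names and fields are illustrative assumptions,
// not the project's actual metadata format.
interface KnowledgeUnit {
  id: string;            // stable identifier, e.g. a DOI or URL
  author: string;        // one person per unit, as described above
  title: string;
  published: string;     // ISO 8601 date
  citations: string[];   // identifiers of works this unit cites
  volumeId?: string;     // the Journal/Proceedings/Book it belongs to
}

interface Volume {
  id: string;
  kind: 'journal' | 'proceedings' | 'book';
  title: string;
  unitIds: string[];     // the units gathered into this volume
}

// Example: one article connected to a proceedings-style volume.
const article: KnowledgeUnit = {
  id: 'https://doi.org/10.0000/example',
  author: 'A. Author',
  title: 'Thinking with Text in XR',
  published: '2026-01-12',
  citations: ['https://doi.org/10.0000/earlier-work'],
  volumeId: 'ftl-volume-2026',
};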
XR Reality
Presence Over the last two years a few things have become apparent. XR space is different, but it is still ‘space’ and we still experience it from a distinct point of view. Furthermore, in my personal experience of reading on displays, from laptops to large computer monitors and in Reader on the Vision Pro (M2 & M5), the amount of text that is comfortable to track while reading, while still providing reasonable context, is about a two-page spread of our book in PDF form, or a large display of single-column scrolling text. In other words, too much text makes for too much eye and head movement. Whether presented in XR or 2D, the visual display of the text matters, in terms of the use of headings, layout and bolding/highlights, to balance clean, easy reading with quick navigation.
Scale When we stand up, the picture and interactions (literally) change. When we can walk around in our information we can embody it beyond what is possible from a single vantage point. In addition to augmenting how we interact with text in traditional forms for deep reading (reading on a rectangle is still more visually pleasant than text floating in a room), we also want to extend the utility of text in space, as knowledge objects in configurations of knowledge sculptures and interactive networks.
Connected Knowledge
Academic discourse needs citations to be connected, yet the practice of citing is both a manual chore (most systems require citations to be included as plain text, with no structure for software to understand them and thus let readers follow them instantly; few link to the papers themselves, only to download sites) and, at the same time, over-automated, with AI producing fake papers† which are then cited.
To deal with this, citations need to truly connect and be quick to create. This is something our project has experience with and will continue to work on.
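As one small, hedged illustration of what ‘truly connect’ could mean in practice (an assumption for discussion, not a description of our implementation), a citation that carries a resolvable identifier such as a DOI can be turned into a link that takes the reader straight to the cited work, rather than remaining inert plain text.

// Hypothetical sketch: a citation carrying a resolvable identifier (a DOI),
// so software can take a reader directly to the cited work instead of
// leaving the citation as unactionable plain text.
interface Citation {
  citingUnitId: string;  // the article making the citation
  citedDoi: string;      // e.g. '10.0000/earlier-work'
  context?: string;      // optional note on why it was cited
}

// The public DOI resolver turns the identifier into a followable link.
function citationLink(citation: Citation): string {
  return `https://doi.org/${citation.citedDoi}`;
}

console.log(citationLink({
  citingUnitId: 'https://doi.org/10.0000/example',
  citedDoi: '10.0000/earlier-work',
}));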
To Do is To Learn
The experience of the last two years has provided us with a deeper perspective of what working in XR can be like, as well as many more questions and ideas to test.
Challenge Going from ideas to implementations of knowledge in XR has proven to be a real challenge, far beyond what we expected. To some extent this is a reflection of the technical challenges of working in open WebXR and of developing within the confines of native operating system toolkits and APIs, such as Apple’s visionOS. It has also been harder than expected to imagine beyond the flat page.
Prototyping In 2025 we approached this with a series of prototypes for the community to experience, and we aim to continue with this approach in 2026, alongside more traditional app-based workflows and experiences. It must be clear that the extent of this work will partly depend on funding, which we are working to secure.
Dialog
The Open Office Hours meetings are hosted every Monday and are recorded (with pauses when requested for privacy) and posted on YouTube, as we have done for over 500 meetings so far, including last year’s. We aim to continue with themed discussions, invited presentations and regular open office hour sessions. There is no requirement for participants to attend a full meeting, nor to attend any meetings beyond what they are interested in and have time for.
Language We will be careful and specific with our language, while remaining casual and friendly. For example, we will state explicitly which aspects of reading, and which aspects of authoring, we are talking about. Similarly, when we discuss AI/LLMs we will make an effort to be clear about which aspect we mean, such as chatbots, image or sound production, research, analysis, process, etc., and not generalize. This is to reduce the chance of rabbit holes where we go down different holes in the same discussion, no longer looking at the same aspect of reading or writing. We will also aim not to talk too much about escaping the constraints of thinking in terms of paper or the digital legacy of paper, since we have been around that topic many times and dwelling on it may actually re-focus us on paper.
VR Those of us who have headsets are urged to have them available during our meetings for instances where someone would like to present XR work.
Diverse Perspectives We are a very diverse community and we aim to increase the diversity to fuel the dialog with as many interested perspectives as we can gather. If you are already part of this community, please feel free to suggest to others that they join us. If you have not been to one of our meetings yet, please feel free to stop by. You will be welcome. These sessions are open to anyone who has a genuine interest in any aspect of the future of text and require no prior specialist knowledge, just curiosity.
Record
A basic AI-generated overview will be included in the listing on the Future Text Lab website as well as in the YouTube description. Summaries will also be posted on social media. Any participant should feel free to email me with corrections if their contribution was not correctly recorded, transcribed or summarized, as well as to provide further links or information.
Summaries The transcripts are done via sonix.ai, where I need to label each speaker once per session so that the transcribed text is correctly assigned to the right speaker. The accuracy of this has improved greatly, but errors still occur; I look for these and will fix any that escape me once I am notified.
Sample Record: https://futuretextlab.info/2025/12/20/15-dec-2025/
Current Prompt: https://futuretextlab.info/2026-prompt/
Initial Themes for Discussion
Plan for 2026
Reading a Single Article
Navigating a Volume
Navigating Collections of Volumes
Authoring Textual Documents
Authoring Connections of Nodes
& more.
In ‘Other’ Words
The above began as this introductory statement, which was then turned into spoken poetry with ChatGPT assistance and orchestrated with AI (suno.com). This is an experiment with a different aspect of text, to see whether a more poetic, though machine-made, presentation of what was discussed can spur thought and dialog. Lyrics
Frode Hegland, PhD
London, Late 2025
TO ADD: Code of Conduct, Baby + Bathwater, personalities and interactions.
