Thoughts : The Immense, Immersive and Immediate Impact of VR/AR/XR

Article for possible discussion:

It is time to look far into the future, because the far future is nearer than we might realise. The Meta Quest sold more units than the Xbox in 2021† and the quality of VR/AR hardware is improving at a rapid pace. The basic premise of what is presented below is the belief that within something like five years, lightweight VR/AR headsets with tremendous visual processing power will be in widespread use. It is not that long ago that computer video was only possible in postage-stamp size at low frame rates; today we watch 4K video on our screens as a matter of course, whether at home or on a train, and video conferencing, until very recently thought of as sci-fi, has become the norm for many. Take this recent history into account and project forward based on current hardware, and it is clear that within a few years we will all own incredible headsets.
What is frequently called VR, AR or ‘metaverse’ is not a software application or even a platform. It is the acknowledgment that the way we interact with our knowledge happens at human scale and that the way we have been able to employ computer augmentation so far has been artificially constrained to small rectangles.
What richly immersive environments will unleash is an entirely new dimension of how we can interact with our knowledge and ourselves: it will be, quite literally, an escape from flatland.

A New Dimension of Opportunity

However, as we are about to enter this new dimension of opportunity, we need to decide on our priorities together, and not simply allow large commercial entities to build, and thus own, this new aspect of our humanity.
This year we expect the James Webb Space Telescope to send back images from the far reaches of time, and there is a real chance we will discover evidence of life elsewhere in the universe. We owe it to ourselves to get our act together before this happens, not simply react when it does. And if no life is found anywhere, then our responsibility becomes vastly bigger: we are the miracle which lets the universe experience itself. As we are about to unleash upon our brains an augmentation beyond anything our species has previously been able to muster, as we step into this ‘cyberspace’ fully formed, we must together decide who we want to become.
Soon, current ‘displays’ will be seen as a flattening of the ‘real’ information space, though such displays will remain useful, as will smart watches and other devices. We will (hopefully) not live in this space continually; instead it will be a place of augmented interactions and freedom, one which also gives us the freedom to reconnect with the physical nature of our beautiful blue and green planet.
This is bigger than hardware and software. This is bigger than markets or governments. This new world deserves the best of us, if it is to support us becoming what we truly can be.
Therefore we believe in the importance of threading the Internet into this environment to allow growth in a truly open evolutionary environment, a multidimensional Internet, not in a commercially owned and branded ‘metaverse’ exclusively. For this to happen it must be possible for common spaces to exist, not only commercial spaces. It must also be possible to create, co-create and share data objects. It is not enough to have an Internet between what we currently consider to be computers, nor is it enough to have an Internet of Things. We also need an Internet of multidimensional space.
The current state of affairs is that it is not easy to meet up in a virtual room and share data objects in that space. This will only become possible when companies invest in having their own environments support specific data objects and specific interactions.


The opportunity of VR is deeply interwoven with what AI will offer, from affordances such as speaking to the system and having it respond, whether for commands or simple labelling, through to advanced gesture and full-body analysis, and beyond. These richly immersive virtual worlds will not be static, nor simply renderings of the real world; they will offer a dynamism provided by interactions and advanced AI, which will literally move mountains to help the user comprehend and communicate better.

VR @ Work

What we will enter such environments for will include games, meditation, social experiences and work. So far, the experience of working in this environment has been limited to avatars around a table in exotic locales sharing whiteboards.
We feel that the potential is vastly more powerful than that, and that it is too important for individual companies simply to sell such tools to us as part of their corporate upgrade cycle. Nor do we feel this is something to be left to hobbyists. This is for all of us to be an active part of.

VR & Text

The history of text is the history of how text has increasingly augmented our ability to think and communicate. We think that in this new world text will not be made obsolete just because the full visual environment can be brought to bear. We think that text has the potential to be truly unleashed in such an environment. Truly multidimensional, richly interactive text can give us new perspectives on the world and ourselves, more than an illustration or image in isolation.

A Scenario

Picture this basic scenario: you sit down at your desk and work on your laptop. At some point you will have a meeting, and it will be with your headset on; this particular meeting will be in full VR, so you will be in a completely artificial environment. Your colleagues are all there and look the way you expect them to, with varying levels of visual fidelity to how they look in ‘real’ life. You still have your laptop open, and the laptop screen and keyboard are visible in VR, as is possible today.
What you cannot do today, however, is take what is on your laptop into the VR space to share as a ‘native’ VR object. But this is an ideal future, so you gesture with two hands, palms up, and the contents of your laptop screen, in this case a rich 3D graph of everyone’s projects, with nodes connected to sub-nodes, lift into the space around you. You close your laptop and continue your presentation to your colleagues with gestures. You move nodes around, add labels, and accept additional nodes from colleagues, who simply ‘throw’ them gently over to you. This network can stretch to room scale when needed and connect to other people’s graphs; in general it can be interacted with as though we had taken every sci-fi movie’s best holographic interfaces and made them real.
When you are done, you fold everything back into your laptop, and when you return to working on your laptop you have all the same data, though in a flattened, limited view. This cannot be done today, and will not be possible unless the environments we build are open.
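To make the scenario concrete, here is a minimal sketch of what an open, shareable ‘graph object’ might look like, one that can round-trip between a room-scale VR view and the flattened laptop view. All names and fields here are hypothetical illustrations, not an existing standard.

```python
# A hypothetical open "graph object" that round-trips between VR and a flat view.
import json

def make_node(node_id, label, position=None, children=None):
    """A node as it might exist natively in VR: a label plus a 3D position."""
    return {
        "id": node_id,
        "label": label,
        "position": position or [0.0, 0.0, 0.0],  # x, y, z in room space
        "children": children or [],
    }

def flatten(node):
    """Project the 3D graph into the 'flattened, limited view' of the laptop:
    drop the z axis but keep ids, labels and structure intact, so nothing
    essential is lost when folding the space back into the screen."""
    return {
        "id": node["id"],
        "label": node["label"],
        "position": node["position"][:2],  # keep only x, y
        "children": [flatten(c) for c in node["children"]],
    }

# A rich 3D graph of everyone's projects, with nodes connected to sub-nodes.
graph = make_node("p1", "Everyone's projects", [0.0, 1.5, 2.0], [
    make_node("p1a", "Sub-project A", [0.5, 1.2, 2.5]),
])
flat = flatten(graph)
print(json.dumps(flat, indent=2))
```

The point of the sketch is that the flattened view is a projection of the same data, not a separate copy, which is exactly the property an open interchange format would need.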

Imagine how far we can take our cognition when we augment our full intellect, when we set our minds free by using our full bodies, not just finger taps and small rectangles. Imagine swimming in knowledge and not fighting to stay afloat because we only have shallow vessels to work with.
Imagine threads of connections appearing as you need them; imagine our minds truly free, not because they are somehow removed from our bodies, but because our bodies can now be used in concert with our minds at a whole new level.

The Current Trajectory

When we take the time to enter VR as it exists today, play around and get a feel for it, then sit back and think of the potential, it is a bit shocking to look at the current development trajectory of the technology and its supporting infrastructure. It is pretty much a few large companies duking it out to see who will own this new world. It would be an absolute tragedy to have that play out: to live in a VR AOL rather than a richly connected VR Internet.

Hybrid Information

Something as simple as taking an item from a traditional environment into VR and back out again is not possible today. We need to develop the ability to work in hybrid environments, where we take our data and tools with us.
This is why we need truly open standards for information to be useful to us in both traditional and VR environments. We have been working on such an open standard, which we call Visual-Meta, since this method captures metadata in a visual way: it is even possible to print richly interactive documents, ones which are aware of their internal structure and of where they sit in citation and link chains, onto paper and lose none of this important metadata. This standard is open enough to carry multiple aspects of VR data in hybrid forms.
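The core idea, metadata written into the visible body of the document itself so it survives printing and flattening, can be sketched as follows. This does not reproduce the actual Visual-Meta syntax; the wrapper markers and field names below are simplified illustrations of the principle.

```python
# A simplified illustration of the Visual-Meta idea: metadata is appended to
# the document as plain, human-readable text, so printing or flattening the
# document loses none of it. The markers and field names are hypothetical.
def append_visible_metadata(document_text, metadata):
    """Append a visible, BibTeX-like metadata appendix to the document text."""
    lines = ["@metadata-start"]
    for key, value in metadata.items():
        lines.append(f"  {key} = {{{value}}}")
    lines.append("@metadata-end")
    return document_text + "\n\n" + "\n".join(lines)

doc = "The body of an article about VR and text."
out = append_visible_metadata(doc, {
    "title": "VR & Text",
    "author": "Future Text Lab",
})
print(out)
```

Because the appendix is ordinary text, any reader, human or software, can recover the metadata from a printout or a flat copy; nothing depends on a proprietary container format.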

Ownership Matters

In order to make this possible we have to look at who owns the spaces, the data objects within them, the possible interactions, and the renderings of such spaces and objects. The current state of affairs is that the experiences are wholly owned by the company which makes the environment, the objects, the interactions and the renderings. This is how the companies with early leads want it to be. But think how constrained it would be to work in an office where everything, from your chair to the room to the computing power, displays and software you run, is owned, and therefore innovated on, by one single company. That sounds like a joke, but in VR this is where we are headed. Make no mistake, such spaces should also be allowed to exist: if a company invests in making a powerful experience with powerful interactions and people find it useful, there is nothing wrong with building and entering such a space. But look at how we live today: single-focus spaces such as opticians, cafés and mechanics’ workshops exist, yet they are not where we spend all our time, and it should be possible to take the benefits of these spaces with us.


In order to enable such an open future, the Future Text Lab was formed to foster dialogue on these issues. We already work with traditional computing environments, with a track record of producing powerful thinking and authoring systems, and we are developing visions of the future of computing, unconstrained by current limitations, through dialogue and software.
