Cambrian Compatibility

In reply to, and in support of, Jack Park’s ‘Towards a Modern Cambrian Explosion Inspired by the late Douglas Engelbart’.
https://docs.google.com/document/d/1qwX7vgm2xWEoqNe9O3DfNRKRTDvSHpLCsLj3ygx7_5o/edit?usp=sharing

My dear friend, mentor, and inspiration, Doug Engelbart, invented much of the knowledge work environment we all use today. Since he was an early innovator, he had to invent much of the infrastructure as well as the interaction technologies, a mammoth task. Over time this became a liability to further growth, since it did not adapt rapidly enough to changes in the wider ecosystem, such as the Web, leaving his NLS/Augment more isolated than integrated.

Personally, I would like to continue to make ‘craft’ software, where my personal taste and experience are the leading inspiration rather than the result of market research. This may well be an arrogant perspective, but it is an honest one, and one aspect of it is my work on metadata. My main notion is simply this: information that cannot move is not communication. This is why making information open for users to move is crucial if we are to develop an environment of innovation where small developers like me, research institutions and corporations can all contribute to building what we feel is best for the market, where users choose what works best for them, and where they can change systems when their taste and needs change.

The Cambrian Explosion of half a billion years ago happened rapidly and tailed off once the ecosystem had adapted†, once the infrastructure for development/evolution became settled and ossified. Quite literally, the bones of the scaffold in which life can evolve had become more rigid, less flexible.

I would argue that we had a similar explosion in the 1960s and 70s with the start of directly interactive digital computer systems†, which resulted in the GUI we all use today in different variants, after which evolution slowed to a crawl, in what we might term the ‘Computer Cambrian’. Doug told many people the story of how someone walked into his office and called him “just a dreamer”, to which he angrily replied that “dreaming is hard work”. I firmly agree.

Dreaming, in the Engelbartian sense, is the effort to envision a better future which can be realised, as opposed to ‘fantasising’, which is dreaming without concern for what can be realised (hence ‘Science Fiction’ and ‘Fantasy’ being two distinct, though related, genres of literature). Dreaming therefore takes real mental effort, and it can be augmented through dialog, the dreaming of many minds. This is something we, as a community of computer users and developers of software, have seriously neglected over the last few decades.

A great promise of working with headsets/visors/augmented glasses/XR (VR/AR) is not only the potential of working in fully immersive information spaces; it is that we may reawaken our imagination and dream again. This is why I am investing so much time and effort into XR.

In order for dreaming in XR to be realised, the environment needs to be, to use an agricultural term (suitable, since we often refer to knowledge gardens), supportive and fertile. This refers to the environment of our individual and collective understanding; to social, political, academic and financial frameworks; and to the technical need for information to be robust and flexible, unlocked and viewable in myriad ways, stored and communicated with context, and recontextualisable.

We are working on the social, political and academic side of this with our series of Symposia and Books on The Future of Text, and on the information side through the work on what we call ‘Visual-Meta’, which is simply an approach that says: “if it’s important, write it down”. Our initial implementation is for metadata in PDF documents, where an appendix called Visual-Meta uses basic LaTeX-style formatting to provide the BibTeX for referencing the document and more, including structural (headings etc.) and connective (references, links) information. A current effort is working on transferring useful spatial and knowledge data between traditional 2D and XR 3D environments through JSON, using canvasJSON and other open approaches. We are thinking ‘XR first’, 2D second.
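To make this concrete, here is a minimal sketch of the two ideas above: a Visual-Meta-style appendix entry in BibTeX syntax, and a small JSON payload for carrying 2D spatial layout into an XR scene. The JSON field names (nodes, x, y, width, height) follow the open JSON Canvas format; the z coordinate is a hypothetical extension for 3D placement and not part of any published spec, and the specific entry fields are illustrative rather than the canonical Visual-Meta schema.

```python
import json

# Illustrative Visual-Meta-style appendix entry: plain BibTeX that a human
# can read on the page and a machine can parse back out of the document.
visual_meta = """@article{engelbart1962,
  author = {Douglas C. Engelbart},
  title = {Augmenting Human Intellect: A Conceptual Framework},
  year = {1962},
}"""

# A minimal JSON Canvas-style payload. "z" is a hypothetical depth field
# for XR placement, added here purely as an assumption for the sketch.
canvas = {
    "nodes": [
        {"id": "n1", "type": "text", "text": "Dream log",
         "x": 40, "y": 80, "width": 200, "height": 100,
         "z": 0},
    ],
    "edges": [],
}

payload = json.dumps(canvas)    # serialize for transfer to the XR client
restored = json.loads(payload)  # the receiving environment parses it back
print(restored["nodes"][0]["text"])  # prints: Dream log
```

The point of both fragments is the same: the metadata travels with the information itself, in an open, human-readable form, rather than living only inside one application’s private database.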
We are also working on how to thread dialog, such as the one we are engaged in here, without all of it going into an online server either owned by a third party (Slack, Google) or rented (DNS) for a private setup, while retaining the benefits of local documents and universal connections. This involves looking at flexible binding of documents, extensions of PDF and ePub, the use of HTML and Markdown, and more.

Further to this, we are eager to engage in working out what would be required, and what could be created and nurtured, for a true Cyberspace Cambrian Explosion.

https://visual-meta.info  https://futuretextlab.info
