Frode Alexander Hegland
When the substrate for text was still, we could not expect it to do anything but hold our thoughts. When the substrate for text became digital, the substrate became active and we got interactions with text beyond anything previously possible: copy and paste, spell check and clickable links. Simultaneously we got print-to-digital, where the aim was to freeze the text like in the good old days; this gave us PDF. We also got HTML, where the first pages were editable by the user, but over time commercial pressures and a lack of user engagement meant that we ended up with pages of complex display code, and very few pages of plain HTML. There is more to this, much more, but I am sticking to generalities of what knowledge workers use day to day to make my point. I hope you will indulge me.
I want more. I want text that does something. I want text to be able to make itself clear. Clear to human, machine and, of course, AI. As unambiguously as is feasible.
‘Author’
I will tell you about this from the perspective of how my software Author works, since writing is thinking, editing is considering and–when writing about software–implementation is experimenting to experience. I make no apologies for that, I am not here to sell Author. I am here to put my money where my mouth is by implementing as many of the ideas as I can, to see which actually work. I hope that is fine with you.
Author in Action to Active Text. When reviewing articles for this volume I sometimes needed to take plain-text citations, which make sense to a human reader, and turn them into active objects in Author, so that when we export this volume the citations are automatically numbered in the body of the text and added to the References. My process for doing this is to copy the information in the reference section, then run ‘Ask AI’ (the name of AI in Author, so that it is not tied to a single service) to have it formatted into BibTeX, which can then be pasted into Author as a Citation. Pasting a BibTeX-formatted citation into an Author document means it will appear correctly on export and automatically be included in the References Appendix.
Clear Text : Computational Text
Text in BibTeX form explicitly means something, the grammar is simple, for example:
title = {The Future of Text}, year = {2021},
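Because the grammar is this regular, even a few lines of code can extract the fields. The sketch below is a deliberately naive illustration of that point, not how Author itself parses BibTeX; a real parser (such as the `bibtexparser` library) handles nested braces and escapes.

```python
import re

# A minimal BibTeX entry, of the kind the 'Ask AI' step might produce.
entry = """@book{hegland2021,
  title = {The Future of Text},
  year = {2021},
}"""

def parse_bibtex_fields(text: str) -> dict:
    """Naive field extraction: matches simple `key = {value}` pairs."""
    return dict(re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", text))

fields = parse_bibtex_fields(entry)
print(fields["title"])  # The Future of Text
print(fields["year"])   # 2021
```

Once the text is in this explicit form, the software can do something with it: number the citation, build the References Appendix, or feed it to another tool.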
The consequence is that it can be used computationally. ‘Computational text’ is a phrase Vint Cerf used in The Future of Text II (Cerf 2021). It is a truly useful phrase.
URLs also mean something and can therefore be used computationally: they can be clicked in most places where they appear; this has become a social convention.
DOIs can mean something on paste, if the software supports it. In Author you can not only paste BibTeX entries and have them parsed; you can paste a DOI–as a bare number or a full URL–and Author will recognise what it is, query the CrossRef database for citation information and paste a full citation, the same as pasting BibTeX.
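The two steps involved can be sketched as follows. This is my own illustrative reconstruction, not Author's actual code: first recognise that the pasted text is a DOI (bare or as a doi.org URL), then ask the public CrossRef REST API for the metadata. The regex and function names are assumptions for the sketch.

```python
import json
import re
import urllib.request

# Matches a bare DOI ("10.xxxx/...") or a full doi.org URL.
DOI_PATTERN = re.compile(
    r"(?:https?://(?:dx\.)?doi\.org/)?(10\.\d{4,9}/\S+)", re.IGNORECASE
)

def extract_doi(pasted: str):
    """Return the bare DOI if the pasted text is a DOI or doi.org URL, else None."""
    m = DOI_PATTERN.fullmatch(pasted.strip())
    return m.group(1) if m else None

def fetch_citation(doi: str) -> dict:
    """Query the public CrossRef REST API for citation metadata.
    (Network call; how Author queries CrossRef may differ.)"""
    with urllib.request.urlopen(f"https://api.crossref.org/works/{doi}") as resp:
        return json.load(resp)["message"]

print(extract_doi("https://doi.org/10.1000/xyz123"))  # 10.1000/xyz123
```

The returned CrossRef record contains title, authors and year, which is enough to assemble the same citation object as a pasted BibTeX entry.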
Where ‘Something’ Happens : Computational Substrate
In all the cases above the text is presented visually; it can be plain text, styled or marked up. The action happens because of where the text is seen–the substrate–in other words, the software. This is key to the point I am making here. The software is what makes the ‘something’ happen, but the text must be clear and available in order for the software to do it.
‘Liquid’ is another one of my tools, developed quite a while ago and still active and available for macOS. It is named after my philosophy of Liquid Information, which I developed in dialogue with Sarah Walton. The idea here was ‘stolen’ from Doug Engelbart’s NLS, though I did not know it at the time, and of course I told him, to his great amusement. As a macOS utility it works by the user selecting text and performing a keyboard shortcut to access a series of options to interact with the text or to search based on it. Recently it has of course been updated to also use AI, based on the selected text and a chosen prompt. It works like this: select text in any software and press the keyboard shortcut (default is cmd-space); the text is lifted into the Liquid interface, which presents several options: Ask AI, Search, References, Convert, Translate, Copy and Share. The user can then click on these options to choose which command to execute, such as References and then Wikipedia. A more experienced user will likely use keyboard shortcuts instead, such as ‘R’ for References followed by ‘W’ for Wikipedia, which will instantly perform a Wikipedia search of the selected text.
Extended Substrate. I pointed out that Liquid is available for macOS. The architecture of mobile and XR operating systems does not allow for this level of utility. Liquid uses the ‘Services’ system in macOS to send the selected text to the Liquid interface. I am dreaming of a similar approach becoming possible in XR.
Copy and Paste as Actions for Change
In Author the action of copying can perform operations such as copying only the names in the selected text, and so on. This uses AI (Apple Intelligence or ChatGPT, depending on user preference and/or text selection size). The user can similarly choose to use AI on paste, where the default ‘Paste Processed’ first checks whether the text is in the user’s preferred language, then also checks whether it looks like it was copied from PDF or the Web and needs reformatting. Note, this is in addition to the default option for regular, unprocessed paste.
References & Spaces
We have [], () and superscript as conventions for how a reader can tell that a number or letter refers to an entry in the References Appendix at the end of a document. This is clear and useful. My third piece of software relevant to this discussion is ‘Reader’. It can parse this (in conjunction with Visual-Meta) so that the user can click on an occurrence of [] and see a pop-up with the full citation information, and can further click on that information to open a link either to the Web page of the source or, if the user has already downloaded the cited PDF, to the PDF itself, at the page cited.
Maybe we could expand this logic to other media? As we increasingly step into eXtended Reality with headsets, it could be useful to encode spatial information in this way. The way we have been looking at this is by separating three aspects of spatial text: Glossaries, Locations and Annotations. Annotations are, in this context, not author-created but reader-created†. A Location should be, in this view, as basic as possible, containing only a link to the entry in the Glossary via ID, plus XYZ information, nothing more. The Glossary would similarly be as brief as it could be while still being useful, with only the term (the ‘label’ shown on a knowledge map) being necessary, while a definition (plain text describing the term), tag (category of what the term is, such as a person or place) and link (as citation or resource) would be expected, with further fields allowed if the author requires them and the reading system can parse them.
This can then be stored in the user’s authoring system in flexible ways and exported as nodes in a Wiki or a knowledge graph as desired.
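To make the Glossary/Location split concrete, here is a minimal data-model sketch. The field names are illustrative assumptions, not a published schema; the point is only that a Location carries nothing but an ID reference and coordinates, and that joining the two lists yields nodes ready for export to a wiki or knowledge graph.

```python
# Glossary entries: 'term' is required; definition, tag and link are expected.
glossary = [
    {"id": "g1", "term": "Doug Engelbart",
     "definition": "Creator of NLS.", "tag": "person",
     "link": "https://www.dougengelbart.org"},
]

# A Location is deliberately minimal: a glossary ID plus XYZ coordinates.
locations = [
    {"glossary_id": "g1", "x": 0.5, "y": 1.2, "z": -0.3},
]

def to_graph_nodes(glossary, locations):
    """Join Locations to their Glossary entries, producing nodes
    suitable for export as a wiki page or knowledge-graph node."""
    by_id = {g["id"]: g for g in glossary}
    return [
        {**by_id[loc["glossary_id"]],
         "position": (loc["x"], loc["y"], loc["z"])}
        for loc in locations
    ]

nodes = to_graph_nodes(glossary, locations)
print(nodes[0]["term"], nodes[0]["position"])  # Doug Engelbart (0.5, 1.2, -0.3)
```

Keeping the Location record this small means the same Glossary can be laid out in many spaces, and each layout stays cheap to store and parse.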
This would be an implementation of clear text and computational substrate. The text should be clear both for a human reader and for extraction by software, including AI. A limitation of document storage (Word and PDF primarily, but in some cases also HTML) means that extraction becomes less robust the more complex the text is, for example when it is long enough to extend beyond a single line, since line breaks can break continuity.
We can imagine [] with letters, for example, referring to a Locations list (a document could have more than one Locations list and more than one Glossary). Clicking on this in XR would place the Glossary terms at their Locations in space around the user, as an interactive ‘knowledge sculpture’. Clicking on another [] could open another knowledge sculpture.
Metadata(?)
In conversation, Ted Nelson says there is no such thing as metadata. During the discussions around my PhD thesis there was a great deal of debate about whether an appendix (in the Visual-Meta† style) could be considered ‘meta’-data, since it was in the document and on the same ‘level’ as the ‘data’. Mark Anderson states in a text message that “Metadata no longer has a clear single meaning.” This is part of the reason for my exhortation on clear text. While the notion of metadata is useful in certain contexts–the items in a gallery, the phone tappings by governments–it may not be a useful way to look at text, particularly in documents.
Explicit Lyrics
This was about explicit lyrics. This was about text which clearly states what it says, and what it can do, with nuance and in concert with computation and with other media, including music.
