Office Hours 4

AI: Summary

This session centred on a guest presentation about hypertext infrastructure, which opened into a wide-ranging discussion about what it means to build shared foundations for knowledge work. The presentation reframed hypertext as an activity — the forming of relationships between things — rather than as any specific technology, and argued that the field’s deepest problems are infrastructural: fragmented application models, crude memory architectures, and programming languages unable to cope with changing assumptions. The concept of an “augmented clipboard” was introduced as a practical intervention point. The subsequent conversation wove together themes of addressability and immutability, the distinction between document-level and fragment-level sense-making, the possibility of machines constructing frame-building models from human annotation behaviour, and a recurring metaphor of the Cambrian explosion as both competition and cooperation.

The presentation proposed that hypertext should be understood not as a technology (nodes and links, recommender systems, machine learning) but as a process-oriented activity concerned with forming relationships between anything — people, digital artefacts, systems, components. Detaching hypertext from any single technology was framed as a survival strategy: if the field’s identity is tied to, say, AI, then the next AI winter threatens the community itself. This reframing shifts the centre of gravity from implementation to the deeper question of equivalence — determining what is the same and what is different according to some set of criteria across an exponentially growing body of research.

The “augmented clipboard” was presented as a concrete entry point into the infrastructure problem. Rather than attempting to rebuild computing from scratch, the idea is to identify hubs of activity in existing systems — places where information naturally passes through — and enrich them. The clipboard is one such nexus. The vision is a system that, upon receiving copied content, would surface relevant interconnections, infer references to publications, present related artefacts, and suggest annotations and links — all without requiring the user to leave their current context. The insight is architectural: the services that interpret clipboard contents should not be hardwired together but should participate in a propagation network where they discover one another dynamically.
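The propagation-network idea can be sketched in a few lines: services register with a shared hub rather than being wired to one another, so a new interpreter of clipboard content can join without any existing service changing. Everything here (the `ClipboardHub` name, the toy heuristics) is illustrative, not taken from any system described in the session.

```python
# A minimal sketch of the propagation-network idea: clipboard services
# register with a shared hub rather than being hardwired to each other.
# All names (ClipboardHub, the toy services) are illustrative.

from typing import Callable

class ClipboardHub:
    """Broadcasts copied content to whichever services have registered."""

    def __init__(self) -> None:
        self._services: list[Callable[[str], list[str]]] = []

    def register(self, service: Callable[[str], list[str]]) -> None:
        # Services discover the hub, not each other; adding a new
        # interpreter requires no change to existing ones.
        self._services.append(service)

    def on_copy(self, content: str) -> list[str]:
        # Every registered service gets a chance to enrich the content.
        suggestions: list[str] = []
        for service in self._services:
            suggestions.extend(service(content))
        return suggestions

def citation_guesser(content: str) -> list[str]:
    # Toy heuristic: flag text that looks like it references a publication.
    return [f"possible citation: {content!r}"] if "et al." in content else []

def link_suggester(content: str) -> list[str]:
    # Toy heuristic: title-cased words might be linkable entities.
    return [f"suggest linking {w!r}" for w in content.split() if w.istitle()]

hub = ClipboardHub()
hub.register(citation_guesser)
hub.register(link_suggester)
print(hub.on_copy("Engelbart et al. on augmentation"))
```

The design choice worth noting is that `on_copy` never names a service: the set of interpreters is open-ended, which is what lets the clipboard act as a nexus rather than a fixed pipeline.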

A significant tension emerged around the question of what level of granularity matters for sense-making. One participant described how current AI-assisted research tools can recommend relevant documents based on an analyst’s behaviour, but argued that sense-making does not happen at the document level — it happens at the fragment or idea level. Even after entity extraction surfaces thousands of instances of relevant concepts, the analyst is simply returned to an information overload problem at a different scale. The real challenge is pointing at specific ideas that are relevant to the ideas a person is currently working with, not pointing at containers that hold those ideas somewhere within them.

The concept of immutability emerged as foundational to the problem of addressability. For any system of pointing, linking, or referencing to be reliable, the thing being pointed at must not change underneath the pointer. The analogy was drawn between a street address (stable, absolute) and directions that reference a red car (temporary, relative). In Highlighter, imported documents are treated as immutable — a new version may be published, but any annotation or highlight remains anchored to the original. This was recognised as a key requirement for building interoperability between systems: if what you point at is what I see, then different tools can share a common ground of reference.
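One common way to make the "street address" analogy concrete is content addressing: if a document's address is derived from its bytes, any edit produces a new address, so a pointer to the old address is stable by construction. The sketch below assumes this hash-based scheme for illustration; the session does not specify Highlighter's actual mechanism.

```python
# A minimal sketch of content addressing: an annotation anchors to
# (address, span), and because the bytes at a given address never
# change, the anchored span cannot drift. Illustrative only.

import hashlib

store: dict[str, bytes] = {}

def address(content: bytes) -> str:
    # The "street address": a hash of the immutable bytes.
    return hashlib.sha256(content).hexdigest()[:12]

def publish(content: bytes) -> str:
    addr = address(content)
    store[addr] = content
    return addr

def anchor(addr: str, start: int, end: int) -> tuple[str, int, int]:
    # An annotation is a reference to an immutable address plus a span.
    return (addr, start, end)

v1 = publish(b"Hypertext is an activity, not a technology.")
v2 = publish(b"Hypertext is a process, not a technology.")  # a new version

highlight = anchor(v1, 0, 9)  # stays anchored to the original
assert store[highlight[0]][highlight[1]:highlight[2]] == b"Hypertext"
assert v1 != v2  # the new version gets a new address; old anchors survive
```

This is also what makes the interoperability claim work: two tools that resolve the same address see byte-identical content, so "what you point at is what I see" holds without coordination.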

The possibility was raised of machines constructing a “frame-building model” by observing how a human analyst arranges fragments into narrative coherence. The argument ran as follows: if a researcher spends hours rearranging 150 highlighted snippets into a storyline where each fragment must follow logically from the one before it, then the resulting sequence encodes something about the researcher’s emerging cognitive schema. Could a machine analyse these adjacency relationships and construct a model of the underlying framework — not to replace the human, but to help fill gaps, suggest alternative perspectives, or recommend not whole documents but specific sentences that would strengthen the narrative at a particular point? This was described as a new kind of data source for training machines to bridge human and machine cognition — and, provocatively, as a way to capture and share the frame-building model of a superior thinker within a domain.
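A toy version of mining those adjacency relationships: treat each adjacent pair in the analyst's ordering as a signal, score pairs with a crude word-overlap similarity, and flag the weakest transition as a place where a supporting sentence might be recommended. The Jaccard measure here is an illustrative stand-in; any real frame-building model would need something far richer.

```python
# A toy sketch: score each adjacent fragment pair in the analyst's
# storyline and flag the least coherent transition. The word-overlap
# similarity is an illustrative assumption, not the session's method.

def overlap(a: str, b: str) -> float:
    # Jaccard similarity on word sets: shared words / all words.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def weakest_transition(fragments: list[str]) -> int:
    # Index i where the step fragments[i] -> fragments[i + 1] is the
    # least coherent under the overlap score.
    scores = [overlap(fragments[i], fragments[i + 1])
              for i in range(len(fragments) - 1)]
    return scores.index(min(scores))

storyline = [
    "hypertext is forming relationships between things",
    "relationships require stable addresses",
    "stable addresses require immutable content",
    "the clipboard is a hub of activity",
]
i = weakest_transition(storyline)
print(f"weakest link between fragments {i} and {i + 1}")
```

A recommender built on this signal would look for a sentence, not a document, to bridge that specific gap — which is exactly the fragment-level assistance the discussion called for.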

The conversation surfaced a deep frustration with the fragmentation of computing environments. One participant described the experience of needing to work across too many applications, none of which compose with each other, and the resulting temptation to abandon the field of computing altogether. The desire was articulated for a system that provides the illusion of a coherent environment — not applications but processes that supply interfaces anticipating what the user needs. This frustration was recognised as pointing toward the same infrastructural gap that the session kept circling back to.

The Cambrian explosion was invoked as a metaphor for the current state of knowledge tools, but was immediately complicated. The explosion involved both competition and cooperation — organisms were eliminated, but their remnants were repurposed for entirely different functions, and cells learned to connect into larger structures. The implication for tool-building is that an ecosystem requires a shared medium (the “same ocean”) before diverse tools can proliferate and interact. Without shared infrastructure, there is only stagnancy — many tools doing similar things in isolation, unable to compose into something greater than themselves.

The Zipf distribution was mentioned as an example of how machine intelligence can assist sense-making in unexpected ways. English word frequencies follow a Zipfian pattern — a word’s frequency falls off roughly in inverse proportion to its rank — so there is a stable baseline for how often words normally appear; by comparing the frequency distribution of a specific document against that baseline, terms appearing at unusual frequencies can be identified as likely indicators of what the document is actually about. This was offered as evidence that some aspects of human cognition may be more tractable to machine assistance than assumed — not inscrutable, but patterned in ways that statistical methods can begin to exploit.
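The frequency-comparison idea can be demonstrated in a few lines: compute each term's rate in the document, compare it against its rate in a baseline corpus, and rank terms by the log-ratio. The tiny corpora and the add-one smoothing below are illustrative assumptions, not the tool discussed in the session.

```python
# A minimal sketch of frequency comparison: terms that occur far more
# often in a document than in a baseline corpus hint at its topic.
# Corpora and smoothing are illustrative.

from collections import Counter
import math

baseline = ("the the the the of of of and and a to in is it that "
            "system system user").split()
document = ("the clipboard is a hub the clipboard collects fragments "
            "and the clipboard links fragments to annotations").split()

base_counts, doc_counts = Counter(baseline), Counter(document)

def distinctiveness(term: str) -> float:
    # Log-ratio of in-document rate to baseline rate, with add-one
    # smoothing so terms unseen in the baseline do not divide by zero.
    doc_rate = doc_counts[term] / len(document)
    base_rate = (base_counts[term] + 1) / (len(baseline) + len(base_counts))
    return math.log(doc_rate / base_rate)

ranked = sorted(doc_counts, key=distinctiveness, reverse=True)
print(ranked[:3])  # unusually frequent terms surface the topic
```

Note how common words like "the" score near zero despite being the most frequent token: they are frequent in the baseline too, so the ratio, not the raw count, carries the signal.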

A distinction was drawn between instructivist and constructivist design philosophies in knowledge tools. One system was described as presenting all available information and affordances on screen simultaneously, while another was characterised as showing only what is needed at the moment — relying on the user’s internal model rather than external scaffolding. The recognition that too much visible information becomes a distraction rather than an aid prompted a desire to explore how a cleaner, more minimal interface philosophy might be applied without sacrificing capability.
