Patrick Lichty
Abstract
This essay examines the development of text in extended reality (XR) by tracing its conceptual lineage through historical electronic media research. While XR allows text to inhabit immersive three-dimensional spaces, this development builds on decades-old visions of electronic and spatial textuality and on experimental interactive tropes. Pioneers Ted Nelson, Steve Holtzman, and Danny Brown each envisioned text as dynamic, multidimensional, and architectonic, prefiguring XR's possibilities. By analyzing Nelson's intertwingled hypertext structures, Holtzman's perspectival "digital mosaics," and Brown's architectural lexial web experiments, the essay argues that XR writing systems can move beyond flat screens to become truly spatial, relational, and structural. Contemporary XR platforms often mimic print and screen-based layouts, but integrating insights from these earlier models highlights new affordances for text as a volumetric and interactive medium. The future of writing in XR, the essay concludes, lies in reviving and evolving these foundational ideas: transcending static panels in favor of spatial architectures of language, and thus enriching XR with a deeper, critically informed textual practice.
Introduction
Traditionally, the organization of text has been bound to material media—parchment, paper, stone, and, more recently, electronic screens. Each substrate not only presented text but shaped its conception, structure, and reading. The contemporary era is at a transformational juncture: extended reality (XR) systems (e.g., the Microsoft HoloLens, Meta's Quest headsets, and the Apple Vision Pro) invite us to rethink the ontology of text altogether. XR offers not merely a new display format but an entirely different spatial condition: text can now inhabit three-dimensional space, be positioned volumetrically, and embed itself within interactive environments (Interaction Design Foundation). In such an environment, words are no longer confined to flat pages or screens; they can float beside the reader, anchor to physical locations, be embedded in the structure of media, or change with the user's perspective and actions. This raises fundamental questions: What is the narratological nature of "text" when it is no longer flat or merely hyperlinked? More radically, what forms of writing and reading become possible when text gains depth, volume, and spatial relationships, both static and dynamic?
The future of text in XR is not being invented ex nihilo. It emerges from a long lineage of experiments in electronic and spatial writing that far predate today's XR platforms. Decades before head-mounted displays, theorists and artists imagined text as a dynamic, multidimensional, relational set of phenomena. The work of Ted Nelson, Steve Holtzman, and Danny Brown forms a fundamental genealogy of structural text in digital space. These earlier models of electronic text provide conceptual and structural insights that can guide the development of XR writing systems today (Wardrip-Fruin and Montfort 134–153). This writer has also asserted that digital media come with their own cultural contexts, cautioning that content "created for one milieu may not translate well to another" (Lichty, "Translation of Art"). Spatial writing envisioned for XR must therefore be informed by past insights to fully exploit the new medium, rather than simply porting over paradigms from print or screen and translating them to new visual regimes.
This essay explores how these precedents shape the present moment and argues that the future of text in XR depends as much on learning from legacy research as on adopting new hardware. By understanding the spatial and relational logic of Nelson's hypertext, Holtzman's perspectival mosaics, and Brown's architectural interfaces, we can begin to imagine XR as a radically novel architecture for writing and reading. Notably, while some futurists once predicted that immersive media might render the written word obsolete, reality has proven otherwise: as one analysis observed, "the text has not virtualized into hyperreality" and such post-text dreams "have failed for the time being" (Epstein and Lichty). Text remains an indispensable mode of expression; the task now is to reinvent it for the volumetric, interactive, and multimodal canvas that XR provides.
From Page to Space: Reimagining Textual Structure
When we speak of "text," we often unconsciously imagine a page: a two-dimensional surface filled with linear writing. The dominance of print culture ingrained a sequential logic that shaped everything from syntax to narrative form (Bolter 12–13). Even the early digital revolution of desktop publishing, word processing, and hypertext fiction remained tethered to metaphors of the page. Computer screens offered scrolling documents or linked hypertext pages, but the underlying conceptual model stayed fundamentally two-dimensional (Landow 23). The transition from print to screen preserved the paradigm of flat textual layouts and linear pathways. It was not until electronic literature pioneers such as Larsen and Moulthrop began to break these paradigms through hypertext theory (Moulthrop) that the groundwork for XR text was laid.
XR technologies undo this longstanding assumption. In an XR environment, text does not sit on a static surface; it exists in space, as posited by W. J. T. Mitchell (Mitchell 539–567). Words and paragraphs can be positioned behind, above, or beside the user; text can float midair, rotate, expand or contract in size, and even attach to or embed within 3D objects. Moreover, text in XR becomes contextual – its position, scale, or orientation can directly correspond to the surrounding environment or respond to the user's movements and gaze (Milgram and Kishino). Instead of a reader turning pages or scrolling, one might walk through a text or see it change form based on perspective. This was pioneered by artist Jeffrey Shaw in his VR work The Legible City (1988–91), in which an interactor rides through cities of text on a stationary bike, reading the city in a form of Situationist dérive (Shaw). Although this work is more than three decades old, the ontological shift it enacts forces us to ask: What is text when it is no longer flat? And what new forms of writing and literacy become possible when text gains spatial depth and behavior?
Such questions were anticipated by visionary researchers who imagined electronic writing beyond the confines of the page and screen. As early as the 1960s–1990s, thinkers proposed models of “electronic paper” and hypertext that broke free from linear print logic. Revisiting their work reveals the roots of many contemporary XR ideas, while also exposing conceptual gaps in current XR interfaces. Tellingly, most current XR reading applications still behave like floating computer screens—replicating the constraints of two-dimensional windows instead of harnessing the full possibilities of spatial computing. This is a symptom of what this writer describes as failing to account for the new milieu: XR’s potential remains underrealized when we simply transpose familiar text formats into immersive space. To design truly spatial writing, we must learn from earlier paradigms that treated text as flexible, connected and dimensional. The following sections trace a lineage of such paradigms through the work of Nelson, Holtzman, and Brown, whose ideas sketch the path not taken—and still available—for structuring text in electronic space.
Ted Nelson: Intertwingularity and the Architecture of Text
A study of electronic textuality begins with Theodor “Ted” Nelson. Nelson’s lifelong project Xanadu was not just a hypertext system proposal; it was a philosophical reimagining of how information could be structured in a digital medium. He declared that all knowledge is deeply “intertwingled,” interwoven in multidirectional ways that are obscured by the one-way linearity of print (Nelson, Literary Machines). In Nelson’s view, the traditional book or article – a single linear thread of text – artificially slices through a web of interconnected ideas. He sought to design a document system that would expose and embrace this inherent interconnection.
Nelson’s Xanadu as a Spatial System: Nelson did not work with XR technologies, but his ideas uncannily prefigure a spatial approach to text. Central to Xanadu was the concept of transclusion – the inclusion of content from one document directly within another by reference. Rather than duplicating text via copy-and-paste, transclusion would let multiple documents share the same content, always drawn from a single source. This mechanism anticipates a textual architecture based on a network of relationships, not pages. Transclusion creates a kind of virtual spatiality: a web of linked regions that a reader navigates by following connections rather than turning sequential pages. Nelson’s design documents for Xanadu described features such as multi-window displays showing different versions of a text side by side, graphical connectors or “transpointing arrows” illustrating the ties between documents, and the ability to “deep link” not just at the level of whole documents but down to individual paragraphs or sentences. Each such connection can be thought of as a corridor between pieces of text; each transcluded fragment becomes a doorway into another context. In essence, Nelson imagined text as a multidimensional structure long before technology could physically manifest it. The Xanadu vision gestures toward volumetric writing: every text fragment situated in a networked space where the reader can traverse information along many dimensions.
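The transclusion mechanism described above can be reduced to a minimal data structure. The Python sketch below is illustrative only; the class names and API are this essay's assumptions, not Xanadu's actual design. The key property is that documents hold live references into a single source rather than copies:

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    """A single canonical store of text; fragments are never copied out."""
    text: str

@dataclass
class Transclusion:
    """A live reference to a span of a source document."""
    source: Source
    start: int
    end: int

    def resolve(self) -> str:
        # Always read from the single source, per Nelson's model.
        return self.source.text[self.start:self.end]

@dataclass
class Document:
    """An ordered mix of literal text and transcluded fragments."""
    parts: list = field(default_factory=list)

    def render(self) -> str:
        return "".join(
            p.resolve() if isinstance(p, Transclusion) else p
            for p in self.parts
        )

# Two documents sharing one fragment drawn from the same source.
src = Source("All knowledge is deeply intertwingled.")
quote = Transclusion(src, 0, 37)
a = Document(["Nelson wrote: ", quote])
b = Document([quote, " -- a claim XR could make spatial."])
print(a.render())
```

Because both documents resolve the same span at render time, an edit to the source would propagate to every context that transcludes it — precisely the property that would let XR render citation trails as live spatial links rather than dead copies.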
Nelson’s Relevance for XR: Modern XR systems, however, have only superficially engaged Nelson’s principles. Today’s XR text is largely presented two-dimensionally as fixed billboards or panels – essentially virtual screens – rather than as spatially meaningful structures interlinked in three dimensions. Realizing Nelson’s ideas in XR would require rethinking text at a granular level. For instance, truly intertwingled XR text might allow any word or phrase to be a portal (a transclusion link) into another document or micro-space, with citation trails visible as spatial links connecting floating text nodes. One can imagine persistent volumetric documents whose very shapes or layouts in space reflect their relationships – clusters of quotes surrounding the source text that they transclude, or a “document genealogy” floating in the periphery, showing how a piece of text has been remixed across contexts. In short, Nelson provides not just history but a roadmap that XR developers have yet to fully follow. Embracing his vision would mean designing XR text interfaces where connection is a first-class property – making the “intertwingled” nature of knowledge literally visible and explorable in the reading space.
Steve Holtzman: Text as Dimensional Mosaic
If Nelson offered a theory of multidimensional text, Steve Holtzman provided a practical model for navigating information in multiple dimensions. In his book Digital Mosaics, Holtzman argued that meaning in digital media arises from contextual relationships, and he introduced the idea of “synthetic dimensionality” in information structures (Holtzman 85–109). Rather than treating text as a string of characters or pages, Holtzman suggested we consider it a mosaic of elements whose arrangement can shift with perspective and context.
Holtzman's experimental system, PerspectaView, developed in the 1990s, foreshadowed an XR approach to textual organization, even though it ran on a conventional screen. PerspectaView arranged textual and multimedia elements on a dynamic canvas of tiles. Users could zoom in and out, rearrange pieces, and view content from different vantage points. This interface implemented several principles that are essential for XR textuality: contextual grouping (related pieces of information clustering together visually), perspective shifting (meaning changes as you rotate or move through the information field), and layered information (multiple overlapping layers of text and data that can be revealed or hidden). In PerspectaView, information wasn't locked into a single hierarchy or sequence; it was arranged in a spatial mosaic that the user could navigate in a fluid, nonlinear way.
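Two of these principles — contextual grouping and layered information — can be loosely modeled in code. The sketch below is a schematic reconstruction, not PerspectaView's actual implementation; the tile tags, layer depths, and function names are assumptions made for illustration:

```python
from collections import defaultdict

# Each tile is (content, context_tag, layer_depth); shallower layers
# are revealed first as the viewer zooms in.
tiles = [
    ("thesis", "argument", 0),
    ("supporting quote", "argument", 1),
    ("counterpoint", "argument", 2),
    ("photo caption", "imagery", 1),
]

def contextual_groups(tiles):
    """Contextual grouping: related tiles cluster by shared tag."""
    groups = defaultdict(list)
    for content, tag, depth in tiles:
        groups[tag].append((depth, content))
    # Within each cluster, order tiles from shallowest to deepest layer.
    return {tag: [c for _, c in sorted(ms)] for tag, ms in groups.items()}

def visible_at(tiles, zoom):
    """Layered information: deeper layers appear only at higher zoom."""
    return [content for content, _, depth in tiles if depth <= zoom]

print(contextual_groups(tiles)["argument"])
print(visible_at(tiles, 1))
```

In an XR version, the zoom parameter would become physical approach: walking toward a cluster would play the role of increasing the zoom, so that bodily movement reveals deeper layers of the mosaic.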
The relevance of Holtzman’s vision to XR is clear. In a head-mounted display, one could inhabit a textual landscape or information cloud organized by context—imagine walking through an argument where each supporting quote or footnote hovers near the claim it supports, or a narrative whose subplots literally branch off the main storyline in space. An XR realization of PerspectaView might present paragraphs as rooms or islands, sentences as floating plaques or surfaces, and conceptual categories as layers stacked in depth, all navigable by the reader’s bodily movement. Holtzman’s core argument was that meaning emerges from the relations among units of information rather than the units in isolation (Holtzman 111-112). XR is perhaps the ultimate medium to express this: it can let the reader physically shift position or viewpoint to reveal new relationships, turning reading into a kind of exploratory traversal. In effect, Holtzman was an XR visionary avant la lettre, anticipating that the richest understanding of text might come from seeing and inhabiting its structure rather than only linearly reading it.
Danny Brown: Noodlebox and Architectural Texts
Nelson conceptualized structure and Holtzman modeled dimensional context; Danny Brown, by contrast, explored interfaces as a user-driven set of architectural elements, driven by informational typologies. In the late 1990s, Brown created Noodlebox (Boulton), an interactive interface paradigm for Roy Stringer's Amaze.co.uk, created entirely in Macromedia Director. Although Noodlebox was authored for the Web, pre-dating modern XR, it anticipated an XR poetics by creating user-defined "cities" of architectural, box-like structures based on various informational typologies. From these, interfaces accrete and perform based on the user's configuration of the space. Brown's experiments align with work in electronic literature that treats electronic lexia as sites of interaction and meaning-making, such as the JavaScript work of Jason Nelson.
In Noodlebox, the semantic-architectural building-block interface can be expanded into a lexial form of concrete semantics: based on typologies and criteria set by the user, boxes and structures ("buildings") could be configured into recursive, subordinate configurations, like sub-trees in a menu or the paragraph structure of a document. In many ways this is reminiscent of a more concretized form of Holtzman's digital mosaics. Lexial texts are not a passive process but an action, evoking meaning through construction, definition, and placement. The performative aspect of configuring one's entire information structure as a "city" speaks to XR's interactive capacities. In an XR setting, information elements might respond to the reader's presence: words could reconfigure in response to the user's position, or the configuration of "boxes" might produce certain responses when engaged. Such performative dimensions add a layer of meaning beyond the semantic content of the words themselves. As Brown's work suggests, XR could enable a Mitchellian spatial poetry or a Shawian narrative city where the how of text (its position in space) carries as much significance as the what (its linguistic message).
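Brown's recursive box logic can be suggested with a small sketch, assuming (hypothetically, since Noodlebox's internals are undocumented) that each box carries an informational typology and can nest children, so that a user's configuration of boxes yields a document-like hierarchy:

```python
from dataclasses import dataclass, field

@dataclass
class Box:
    """An architectural unit of text: a 'building' carrying a typology."""
    label: str
    typology: str            # e.g. "heading", "paragraph", "citation"
    children: list = field(default_factory=list)

    def add(self, child: "Box") -> "Box":
        # User-driven configuration: nesting boxes builds the structure.
        self.children.append(child)
        return self

    def outline(self, depth: int = 0) -> str:
        # Flatten the user's spatial configuration into a readable tree.
        lines = ["  " * depth + f"[{self.typology}] {self.label}"]
        for c in self.children:
            lines.append(c.outline(depth + 1))
        return "\n".join(lines)

# A reader assembles a small "city" of boxes into a document outline.
city = Box("Spatial Writing", "heading")
city.add(Box("Claim", "paragraph").add(Box("Supporting quote", "citation")))
city.add(Box("Digression", "paragraph"))
print(city.outline())
```

The point of the sketch is that the same nesting operation serves equally for menu-building and for paragraph structuring, which is the concretized interface semantics described above.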
Brown's explorations point toward several XR-specific textual possibilities that were not expanded upon, or not documented, in the Shockwave installation. For example, elements in XR can be responsive to user proximity or motion: a phrase might only resolve into legible form when one stands at a certain spot, underscoring the embodied nature of reading. Boxes could have conditional visibility, revealing or hiding layers of meaning within an element of the environment depending on angle or interaction. Media can even morph between linguistic and object forms, blurring the boundary between text and image; an environmental element could re-present itself as a media element, then settle back into readable text. Clusters might disperse into environmental phenomena (touching a box might turn it into a constellation of stars in the virtual sky) to symbolize ideas. This user configurability was only hinted at in early web experiments; those at Amaze explored radical notions of browser interaction that could become tangible in XR. In sum, Brown's experiments add an experiential dimension to the structural insights of Nelson and Holtzman: he shows that the semantic structure of an environment can itself convey meaning. Together, these three visionaries provide a vocabulary for thinking about text in XR as interconnected, dimensional, and alive.
Intermezzo: McKenna’s VR Fantasy
During the 1993 Cyberthon chronicled by Earwax Productions (Earwax Productions), philosopher and ethnobotanist Terence McKenna discussed what he termed his "VR fantasy." His vision for VR, and likely XR, was a synesthetic concretization of language. As he stated, one would translate phonemes (the "small mouth noises") or elementary parts of speech into typologies of shape, color, and reactive quality. In his (XR) scenario, as one spoke, a concrete proto-linguistic structure would form "over the shoulder" of the interactor. As time moved on, this manifold would expand, governed by the rules of grammar and syntax. McKenna argued that, from an existential perspective, if one could "see what I mean, you would, in essence, be me, as point of view is everything." While this quick fable diverges somewhat from our existing narrative, it offers another metaphor for concretized language, analogous to the pioneers in this essay, pointing toward media texts in XR.
XR Today: From Flat Panels to Spatial Experiences
Given these rich prior visions, one might expect contemporary XR applications to fully embrace spatial text. To date, however, most mainstream XR platforms have only cautiously expanded textual presentation. In current implementations, XR text often remains visually flat, typically presented on virtual "slates" or floating windows that emulate paper or screens. For example, the Microsoft HoloLens defaults to pinning 2D panels of text within the user's view, akin to hanging a computer monitor in space (Microsoft). Meta's Quest and other XR systems likewise offer what are essentially infinite virtual desktops, rather than deeply spatialized textual ecologies (Meta Platforms). The HoloLens and HoloLens 2's failure to find adoption much beyond the enterprise level does not signify a failure of the spatial computing paradigm as such; rather, it reflects the degree of break from pre-existing paradigms, and a price point, that hindered adoption. While hand tracking is now integrated into the Quest 3 headset, and many of the HoloLens's qualities are now part of the Apple Vision Pro, their use as a dominant set of affordances is not yet in place, analogous to the gestural swipe regime of the iPhone.
Even so, XR's first generation of interfaces transplants the familiar GUI paradigm into 3D: we have windows hovering in space, but they are largely still windows, with text boxes and browser panes that scroll as they would on a conventional screen. This approach provides continuity and usability, but it misses the opportunity to redefine text for immersive contexts.
Encouragingly, the landscape is evolving in incremental steps. Experiments in electronic literature, art, and archival practice point toward XR's unique textual affordances (Grigar and Moulthrop 42–56). For instance, the HoloLens app Type in Space (Park) allows lines of verse to float in the room and respond dynamically to the reader's movements; a poem can literally dance around, or with, the reader. The work of Grigar et al. at The NEXT lab ("Building 'The NEXT' Archive") uses WebXR to create spatial paradigms for archival artifacts, making it possible to interact with these "objects" in a more embodied form. These exploratory projects, while niche, illustrate XR's capacity to:
Situate lexia within physical environments: Words and other media lexia can attach to and augment real-world places and objects, creating mixed reality narrative layers.
Create spatial-linguistic relationships: Lexia can be arranged in space so that meaning is gleaned from where they are placed and how they relate spatially to one another (for example, important points literally surrounding the reader, digressions tucked into the periphery, or related concepts clustered together based on metatags attached to the text or objects).
Combine text with 3D media: In XR, textual elements can seamlessly integrate with images, 3D models, or sounds. A single word might trigger an accompanying holographic image or spoken narration, blurring the line between reading and viewing. Conversely, interacting with object-lexia can summon text and media.
Support embodied reading gestures: Readers in XR can use their bodies to navigate lexial/textual space: crouching to read a caption low to the ground, reaching out to grab a floating archival beach ball that then tells its story, or following a trail of words up a virtual staircase. The act of reading becomes an embodied performance.
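The embodied-reading affordance in the list above can be reduced to a simple rule: legibility as a function of the reader's distance. The names and scene coordinates below are hypothetical, a sketch rather than any shipping XR API:

```python
import math
from dataclasses import dataclass

@dataclass
class Lexia:
    """A textual element placed in the virtual environment."""
    text: str
    position: tuple          # (x, y, z) in scene coordinates
    radius: float            # distance within which the text resolves

    def visible_form(self, reader_pos: tuple) -> str:
        # Text only resolves into legible form when the reader is near,
        # making reading an embodied, positional act.
        dist = math.dist(self.position, reader_pos)
        return self.text if dist <= self.radius else "·" * len(self.text)

caption = Lexia("a story told at ground level", (0.0, 0.2, 3.0), 1.5)
print(caption.visible_form((0.0, 1.7, 8.0)))   # standing far away: unresolved glyphs
print(caption.visible_form((0.0, 0.4, 3.5)))   # crouching close: readable
```

The same distance test could equally gate audio narration or the unfolding of a media object, so that crouching, reaching, and approaching all become acts of reading.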
These innovations hint at what a genuinely spatial literature could entail. While this is not exactly Terence McKenna's "VR fantasy" of real-time concrete linguistics, mused upon in the radio program Virtual Paradise (Earwax Productions), it hints at the semiotic activation of extended space. Yet, as of now, such prototypes rarely tap into the deeper structural ideas offered by Nelson, Holtzman, or Brown. They often introduce novelty (e.g. a poem floating in midair) without fully rethinking the underlying semantic architecture of virtual space. The next step for XR lexial design is to marry these technical capabilities with these rich historical conceptual frameworks. The resulting paradigm shift pushes past virtual billboards of text toward environments where the text's placement, movement, and connections are intentionally designed to convey meaning.
Toward XR Textuality: Lexical Objects and Design Strategies
Truly reimagining lexial space in XR requires us to operate at a more granular level than documents or pages. One promising concept is the "lexial object": treating each word or phrase as a discrete interactive object in the virtual space. Nelson's transclusion hints at this by empowering fragments of text to carry their context with them; Holtzman's layers and Brown's architectural structures likewise imply text that functions as modular pieces, a kind of concrete interface semantics. In XR, a single word could become a portal or container: for example, selecting a word might open up a microspace around it, or an extrusion of the text's planar space, containing definitions, images, or linked references (much as a hyperlink does, but now spatially presented around the word). A phrase could be a node in a spatial grammar, connected via visible links to other related nodes across the environment, creating a literal network of language around the reader. Textual units could also be performative agents: imagine an important term in an article that gently pulses or changes color until the reader interacts with it, indicating "I have more to show you." This vision aligns with digital literature scholar Dene Grigar's argument that electronic text is inherently multimodal and flexible, capable of shifting form and medium (Grigar). XR makes such flexibility literal, allowing text to transition into image, sound, or interactive object and back again within a continuous space.
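A lexial object of the kind just described can be sketched as follows; the class and attachment names are invented for illustration and do not correspond to any existing XR toolkit:

```python
from dataclasses import dataclass, field

@dataclass
class LexialObject:
    """A word treated as a discrete interactive object in XR space."""
    word: str
    attachments: dict = field(default_factory=dict)   # media keyed by mode
    opened: bool = False

    def select(self) -> dict:
        # Selection opens a microspace: the word's definitions, images,
        # and linked references, presented spatially around it.
        self.opened = True
        return self.attachments

term = LexialObject(
    "intertwingled",
    attachments={
        "definition": "deeply interconnected, per Ted Nelson",
        "link": "Nelson, Literary Machines",
    },
)
microspace = term.select()
print(sorted(microspace))
```

The design choice worth noting is that the word itself, not the document, owns its attachments; any phrase can therefore become a portal wherever it is transcluded.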
The implications of lexial-level interaction are significant. XR enables seamless transitions among modes of representation: a word in an XR document might, upon focus, expand into a 3D model or data visualization; a citation might unfold as a floating video clip of the referenced scene or an audio recording of the quoted author; a metaphor might trigger a spatial animation that illustrates its meaning. This is not hypertext as mere link-following; it is hypertext as transformation – each textual element can invoke an experience in another medium, all within the spatial frame of the XR world. By embedding multimedia at the lexial level, XR text becomes a richly layered construct where reading, viewing, and interacting converge.
Designing the next generation of XR media systems, then, calls for synthesizing the contributions of Nelson, Holtzman, and Brown into concrete principles. From Nelson, we glean the importance of structural interconnection: XR text platforms should enable fine-grained linking and transclusion, making references and quotations tangible in space (for example, persistent "quote balloons" or trails that show where each piece of text originated). Citation networks and document genealogies could be rendered visually so that a reader literally sees knowledge as an interconnected web around them. From Holtzman, we take dimensional organization: XR systems should support contextual clusters of information, layers of content stacked in depth, and interfaces that let users change perspective to reveal new facets of meaning through individualized construction of meaning (e.g., building informatic spaces from sets of object-like building blocks). The spatial layout of text and lexia can encode relationships: perhaps important points appear in closer proximity and at larger scale, while tangents are smaller or tucked further away, creating a semantic depth map. From Brown, we incorporate an architectural poetics: XR text should have a personal architecture, representing the user's interests and proclivities as concrete semantic architecture. Building on the typological structure of Noodlebox, interactions can be added that allow for behaviors (texts that glow when "touched" by gaze, or that assemble themselves from sub-elements, as in a menu, as the user approaches) to convey tone and emphasis beyond static words. Generative and emergent forms of text could also play a role: imagine an XR narrative media experience whose arc literally grows out of the premises provided, arranging itself differently depending on the reader's travels through the architectural space.
By fusing these strategies, we begin to outline an XR-native approach to textuality. In such a future, reading and writing are no longer confined to scrolling through rectangles of text. Instead, they become an exercise in spatial architecture: authors would design information spaces for readers to explore, and readers would navigate ideas as if walking through a museum or interacting with an art installation. Notably, this vision does not discard the written word; instead, it elevates and transforms it. As Epstein and Lichty observed in the late 1990s, even amid multimedia and virtual reality, "the text is now only one of many cultural referents" and no longer enjoys the exclusive "hegemonic privilege of the book". XR embodies this pluralism of media. Yet, rather than diminishing text, the XR medium calls for texts that know how to coexist and intertwine with images, sounds, and interactive components in three-dimensional space.
Conclusion: The Past as Prologue for XR Text
XR hardware has advanced rapidly, but fully reconceptualizing text for spatial computing remains in its infancy. It is increasingly clear that the breakthroughs needed—greater dimensionality, relationality, and interactivity in our textual systems or even new paradigms of embodied gestural relations—were in fact anticipated decades ago by pioneers who have not received their due recognition. If the field of XR is to evolve beyond current regimes of interaction and representation, it must rediscover and build upon Nelson’s transclusive webs, Holtzman’s dimensional mosaics, and Brown’s architectural poetics. The conceptual foundations for truly spatial writing were laid long before XR technology could actualize them.
By drawing from this lineage, we can begin to craft new writing and reading environments that fully embrace XR's potential. The future of text is indeed spatial, immersive, and multimodal—but it is also continuous with the past. In a sense, the visions of Nelson, Holtzman, and Brown were waiting for XR to give them form. Now, XR developers and designers have the opportunity to implement these ideas at scale: to create volumetric documents that are dynamic, relational, structural, and embodied. Such documents would be "deeply intertwingled" and transclusive, to use Nelson's terms, embedding each fragment of text in a living network of connections and contexts. By integrating the wisdom of earlier electronic literature and new media theory into today's XR design, we can transform reading from a static act into an experience of exploration and engagement within true electronic spaces for writing and meaning-making. In this convergence of past insights and present technology, the long-held dream of structuring text in electronic space finds its realization.
Works Cited
Aarseth, Espen J. Cybertext: Perspectives on Ergodic Literature. Johns Hopkins University Press, 1997.
Bolter, J. David. Writing Space: Computers, Hypertext, and the Remediation of Print. 2nd ed., Lawrence Erlbaum Associates, 2001.
Boulton, Jim. “A Car Isn’t a Metaphor for a Horse and Carriage.” Digital Archaeology, 24 Sept. 2012, digital-archaeology.org/noodlebox/.
Brown, Danny. Noodlebox. Interactive Digital Textwork, c. 1999.
“Building ‘The NEXT’ Archive for Born-Digital Art and Literature.” College of Arts & Sciences, Washington State University, 3 Sept. 2025, https://cas.wsu.edu/2025/09/03/building-the-next-archive-for-born-digital-art-and-literature/.
Earwax Productions, and David Lawrence. Virtual Paradise — The Reality Tape. 1992–93. Radio documentary.
Epstein, Jonathon S., and Patrick Lichty. “Machine: Mapping the Multimedia Terrain of Postmodern Society.” Sociological Spectrum, vol. 17, no. 3, 1997, pp. 323–338.
Grigar, Dene. “The Text Is a Flexible Object: Multimodal Writing in Electronic Literature.” Electronic Book Review, 2015, pp. 12–18.
Grigar, Dene, and Stuart Moulthrop. Traversals: The Use of Preservation for Early Electronic Writing. MIT Press, 2017.
Holtzman, Steven. Digital Mosaics: The Aesthetics of Cyberspace. Simon & Schuster, 1997.
Interaction Design Foundation. “Extended Reality (XR).” Interaction Design Foundation, https://www.interaction-design.org/literature/topics/extended-reality-xr.
Landow, George P. Hypertext 3.0: Critical Theory and New Media in an Era of Globalization. Johns Hopkins University Press, 2006.
Lichty, Patrick. “The Aesthetics of Liminality: Augmentation as Artform.” Leonardo, vol. 47, no. 4, 2014, pp. 325–336.
—. “The Translation of Art in Virtual Worlds.” Leonardo Electronic Almanac, vol. 16, nos. 4–5, 2009, pp. 1–13.
McVeigh-Schultz, Joshua, and Katherine Isbister. “The Charismatic Avatar: Embodied Design Fiction for Future Virtual Workplaces.” Proceedings of the 2019 Designing Interactive Systems Conference (DIS ’19), Association for Computing Machinery, 2019.
Meta Platforms, Inc. Quest Pro Developer Guide. Meta, 2022.
Microsoft Corporation. Microsoft HoloLens 2: Developer Documentation. Microsoft, 2020.
Milgram, Paul, and Fumio Kishino. “A Taxonomy of Mixed Reality Visual Displays.” IEICE Transactions on Information Systems, vol. E77-D, no. 12, 1994, pp. 1321–1329.
Mitchell, W. J. T. “Spatial Form in Literature: Toward a General Theory.” Critical Inquiry, vol. 6, no. 3, Spring 1980, pp. 539–567.
Moulthrop, Stuart. “You Say You Want a Revolution? Hypertext and the Laws of Media.” Postmodern Culture, vol. 1, no. 3, May 1991.
Nelson, Theodor H. Computer Lib/Dream Machines. Self-published, 1974.
—. Literary Machines. Mindful Press, 1981.
Park, Yoon. “Type In Space for HoloLens 1 (2018) – Your Physical Environment Becomes a New Canvas for Typography.” Mixed Reality Now, 2018, https://mixedrealitynow.com/type-in-space-explore-spatial-typography-in-mixed-reality-with-hololens.
Shaw, Jeffrey. The Legible City. 1988–91, installation (interactive digital), bicycle interface, computer system, projector, variable dimensions, ZKM | Center for Art and Media, Karlsruhe.
Wardrip-Fruin, Noah, and Nick Montfort, editors. The New Media Reader. MIT Press, 2003.
