Dene Grigar
The NEXT (https://the-next.eliterature.org) is a virtual museum and library created by the Electronic Literature Lab at Washington State University Vancouver that holds thousands of born-digital games, art, and literature dating from the 1980s onward. Along with the digital files that comprise these works, The NEXT also holds physical artifacts––such as packaging, costumes and props used in performances and readings, artists’ notebooks, gallery cards, and other forms of ephemera––that accompany and contextualize the works.
The NEXT is unique among repositories of born-digital archives. Online sites that exhibit born-digital works, like Rhizome.org, do not hold the files of the works they show. Others, like The Interactive Fiction Archive, hold the digital files of games and make them accessible to the public but do not exhibit them. What makes The NEXT even more special is that it exhibits the physical archives associated with its born-digital holdings as interactive 3D models, both in the browser and in the Virtual Reality environment of its Visualization space (https://the-next.eliterature.org/visualizations/). For example, The NEXT holds the files for Sarah Smith’s science fiction game King of Space (https://dtc-wsuv.org/elo-repositorytestsite/works/2042/0/0/) and makes them accessible to visitors in The Sarah Smith Collection; also available from the Visualization space is the hand-made brooch Smith produced of Lady Nii (https://the-next.eliterature.org/visualizations/experience#16), which visualizes this main character for visitors and gives them a sense of her timelessness. This practice of exhibiting both digital and physical artifacts online to the public is not found at other memory institutions.
We began visualizing archives in 2020, at the start of the pandemic, with the idea that people could visit the space on the Web and manipulate the models in the browser: turn them around with their hands, read the words, and see the images––even the rips and smudges––across the entire artifact each model represents. This approach requires that the models be produced as faithfully as possible. We have experimented with keeping file sizes manageable so that the models load quickly while still maintaining the integrity of the artifact. Our goal was to give visitors the opportunity to interact with the artifacts.
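The essay does not go into implementation details, but the following is a minimal sketch of how a browser-viewable artifact model of this kind might be loaded and manipulated, assuming three.js and a Draco-compressed glTF file; the library choice, decoder path, and file name are illustrative assumptions rather than The NEXT’s actual setup.

```js
import * as THREE from 'three';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';
import { DRACOLoader } from 'three/addons/loaders/DRACOLoader.js';
import { OrbitControls } from 'three/addons/controls/OrbitControls.js';

// Basic scene, camera, and renderer for viewing a single artifact in the browser.
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1.0));

const camera = new THREE.PerspectiveCamera(
  50, window.innerWidth / window.innerHeight, 0.01, 20
);
camera.position.set(0, 0.2, 0.6);

// Draco decompression keeps the download small while preserving surface detail.
const draco = new DRACOLoader();
draco.setDecoderPath('https://www.gstatic.com/draco/versioned/decoders/1.5.6/');

const loader = new GLTFLoader();
loader.setDRACOLoader(draco);

// 'artifact.glb' is a placeholder file name, not an asset from The NEXT.
loader.load('artifact.glb', (gltf) => {
  scene.add(gltf.scene);
});

// OrbitControls lets visitors turn the artifact around with mouse or touch.
const controls = new OrbitControls(camera, renderer.domElement);
controls.target.set(0, 0.1, 0);

renderer.setAnimationLoop(() => {
  controls.update();
  renderer.render(scene, camera);
});
```

Compressing geometry in this way is one common route to the trade-off described above: smaller files load at a good rate without visibly degrading the artifact.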
But interaction wasn’t enough. Even though we could simulate the experience of handling the objects, we couldn’t build a sense of immersion, which offers a richer way to understand things. Immersion suggests being submerged in something, being inside it. Immersion helps us go beyond the border of reality and lose the feeling of separation from a thing by closing the gap of space between it and us, between the real and the unreal. It makes the experience feel immediate, helping us imagine a 3D model not merely as a reproduction of the artifact but as the artifact itself.
Immersion can be created in many ways, but what we chose for the artifacts in the Viz space was Virtual Reality, or “VR.” So, in January 2025, we began programming the Viz space so that anyone, anywhere, with a Quest 2, a Quest 3, or an Apple Vision Pro can put on their headset and immerse themselves in the space with the artifacts. Because immersion is predicated on a multisensory experience, we built in sound so that when visitors touch John McDaid’s “chocolate box of death,” which holds his sci-fi, mystery, hypermedia novel Uncle Buddy’s Phantom Funhouse, they receive sonic feedback resembling that of touching the physical box. Using their hands, visitors interact with the music cassettes packed in the box, turn the box around, read the words on the floppy disks, and inspect the wear and tear of the box itself, hearing the different sounds related to each of these objects.
All of the programming created to visualize the physical objects in the Viz space has been done with open Web languages, using web standards and the APIs related to WebXR.
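As a sketch of what that standards-based approach can look like, the snippet below uses the plain WebXR API to start an immersive session on a headset and read hand-joint poses; the function name and structure are assumptions made for illustration, not The NEXT’s code, and the actual drawing of the scene is omitted.

```js
// Minimal WebXR sketch: 'gl' is a WebGL context created with { xrCompatible: true }.
async function enterVR(gl) {
  if (!navigator.xr) return;
  if (!(await navigator.xr.isSessionSupported('immersive-vr'))) return;

  // Hand tracking is optional so the experience still works with controllers.
  const session = await navigator.xr.requestSession('immersive-vr', {
    requiredFeatures: ['local-floor'],
    optionalFeatures: ['hand-tracking'],
  });

  session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });
  const refSpace = await session.requestReferenceSpace('local-floor');

  session.requestAnimationFrame(function onFrame(time, frame) {
    session.requestAnimationFrame(onFrame);
    const pose = frame.getViewerPose(refSpace);
    if (!pose) return;

    // When hand tracking is granted, each hand exposes named joint spaces.
    for (const source of session.inputSources) {
      if (!source.hand) continue;
      const tip = source.hand.get('index-finger-tip');
      const tipPose = frame.getJointPose(tip, refSpace);
      // tipPose?.transform.position could be tested against an artifact's
      // bounding volume to decide when a "touch" has happened.
    }

    // ...render one view per pose.views entry into the session's framebuffer...
  });
}
```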
The VR space lends itself well to contextualizing the artifacts and leveraging the experiential quality of VR. For example, when visitors access Richard Holeton’s beach ball, used during his performances of Figurski at Findhorn on Acid, they can experience it as it was when it was a prop, but the space also contains an image of Holeton with the beach ball, a video of him talking about the work the beach ball is associated with, and an animation of the mechanical pig that figures as the motivation in the hypertext novel’s plot. To date, we have produced over 24 models and show 16 at any given time. Most recently, we have introduced sound into the environment. This means that when visitors touch the beach ball, they hear the sound of its plastic material; when it is tossed, they can hear it hit the surface.
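To illustrate how that kind of touch-triggered, spatialized feedback might be wired up, here is a hedged sketch using three.js positional audio; it assumes the scene and camera from the earlier browser sketch, and the mesh, sound file, and touch callback are placeholders rather than The NEXT’s assets or code.

```js
import * as THREE from 'three';

// The listener rides on the camera so loudness and panning follow the visitor.
const listener = new THREE.AudioListener();
camera.add(listener); // camera from the earlier scene setup

// Stand-in mesh for an artifact such as the beach ball.
const beachBall = new THREE.Mesh(
  new THREE.SphereGeometry(0.25, 32, 16),
  new THREE.MeshStandardMaterial({ color: 0xff5555 })
);
scene.add(beachBall);

// Positional audio is emitted from the artifact's location in the room.
const touchSound = new THREE.PositionalAudio(listener);
new THREE.AudioLoader().load('plastic-touch.mp3', (buffer) => {
  touchSound.setBuffer(buffer);
  touchSound.setRefDistance(0.5); // volume falls off naturally with distance
});
beachBall.add(touchSound);

// Call this when a hand joint or controller ray intersects the ball's bounds.
function onArtifactTouched() {
  if (touchSound.buffer && !touchSound.isPlaying) touchSound.play();
}
```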
With the process in place, we are now working to make more of our physical artifacts accessible at The NEXT. Coming next are the folios and removable media associated with the hypertext narratives and poetry published by Eastgate Systems, Inc., from the early 1990s to 2014. Rendering collections like this one in a form that visitors can experience anytime and from anywhere reflects an ethical commitment to supporting access to information and, more importantly, to human expression as it developed during this early digital age.
