Frode Hegland, Paul Smart, Dene Grigar, Mark Anderson, Tom Haymes, Tess Rafferty, Fabien Bénétou, Peter Dimitrios, Karl Arthur Smink, Brandel Zachernuk, Ayaskant Panigrahi, Jonathan Finn, Hrithik Tiwari, Nic Fair, Huang Ying, Jimmy Sixdof
Abstract: The Story of Your Life: Large Language Models and Personal Memory
This presentation explores how large language models (LLMs) are transforming the landscape of personal memory. Whereas traditional lifelogging technologies merely capture and store data, LLMs actively participate in remembering through interpretation, dialogue, and narrative generation.
Drawing on advances in multimodal modelling, retrieval-augmented generation, and tool use, I examine how these systems support new forms of encoding, elaboration, and retrieval—facilitating both factual recall and the co-creation of autobiographical narratives. In doing so, LLMs not only preserve lived experience but reshape it into evolving life stories, contributing to what has been described as narrative niche construction. More broadly, the talk considers how these systems are altering the ecology of text itself—shifting from static inscription to dynamic, co-authored narrative—and asks how our future memories, and perhaps our future selves, will be written in collaboration with machines.
AI: Large Language Models and Personal Memory in Extended Reality
AI: Summary
Paul Smart presented philosophical research on how large language models can support personal memory through augmented generation techniques, while the group discussed developing multi-user VR environments for collaborative document work and explored the relationships between memory accuracy, forgetting, narrative coherence, and the future of spatial text interaction.
The group emphasized that forgetting is a crucial function of civilization and memory, not just a failure – as Tom Haymes asked “How much of our civilization is based on forgetting?”
Mark Anderson stressed the importance of considering different user roles in collaborative spaces, distinguishing between active manipulation and passive enrichment of shared information.
Frode Hegland highlighted a critical challenge in authoring and XR work: figuring out what you have already written and where your ideas are stored, a problem that spatial memory systems could help address.
AI: Speaker Summary
Paul Smart presented his philosophical research on large language models and personal memory, explaining how augmented generation techniques enable LLMs to support personal memory without retraining models. He distinguished between parametric memory (encoded in neural network weights) and external non-parametric memory (separate data stores). He provocatively suggested reframing LLM “hallucinations” as “confabulation,” arguing that narrative coherence may be more psychologically beneficial than strict accuracy, since human memory naturally confabulates to preserve life story coherence. He described his work on digital replicas of philosophers like Andy Clark and Keith Frankish, and explored therapeutic applications for dementia patients and depression treatment. He emphasized that memory alignment with human experience may be more important than perfect accuracy, and discussed four stages of LLM-supported memory: encoding, elaboration, retrieval, and joint reminiscing.
Frode Hegland moderated the meeting and provided updates on developing multi-user VR functionality for their authoring system, where one person actively manipulates content while others view in stereoscopic VR without audio, using separate communication channels. He emphasized simplifying the current development to create working prototypes rather than perfect systems, and was particularly interested in how spatial memory systems could help authors locate their previous work. He asked probing questions about Paul Smart’s virtual Andy Clark project and how it relates to personal memory systems, and stressed the importance of the community and flexible participation.
Dene Grigar participated briefly, noting Mark Anderson’s schedule and confirming she would be interested in the multi-user VR developments being discussed, particularly around organizing symposiums and collaborative work.
Mark Anderson raised important technical and conceptual questions throughout, particularly about spatial coordinates, object properties, and the semantic meaning of layouts in VR spaces. He emphasized the value of having specialized roles in collaborative spaces rather than requiring one person to do everything, comparing this to naval operations requiring multiple people for complex tasks. He questioned whether large language models can “forget” or place guardrails around sensitive information, particularly for therapeutic applications dealing with trauma. He also noted the risk of “Collection Fallacy” in note-taking systems where merely collecting creates an illusion of value, and warned about data duplication issues in VR object management.
Tom Haymes engaged with both practical and philosophical aspects of the discussion, noting he already acts as a “digital representation” and emphasizing the importance of power dynamics in networked environments – distinguishing between hierarchical presentation spaces and collaborative discussion spaces. He uses NotebookLM as a “backup brain” with libraries on coding, pedagogy, and constructivism, and shared his observation that eyewitness reports are notoriously inaccurate. He questioned what constitutes a “hallucination” – whether it is a bug, a different reality, or a creative act. He increasingly digitizes notes from physical books so that he can find them again, and he found connections between Enlightenment philosophy and AI approaches.
Tess Rafferty asked practical questions about collaboration features, suggesting version control similar to document markup for multiple authors working simultaneously. She proposed the GROUP function to link objects that move together, and suggested indicators showing whether users have viewed objects in networked environments. She saw potential for immersive environments both for creative collaboration (like plotting scenes) and education (like visualizing historical settings). She also made the provocative suggestion of “LLM as marriage counselor” and noted how memory could be replayed neutrally to see events more objectively.
Fabien Bénétou provided extensive technical expertise on networking solutions, emphasizing that the group’s use case doesn’t require sub-millisecond latency or massive scale – likely just two to five users. He explained various networking approaches including WebDAV, WebRTC, peer-to-peer (P2P) connections, and CRDTs (for merging documents). He discussed Mozilla Hubs as an example but noted its complexity and Mozilla’s discontinuation of support. He emphasized “asymmetric collaboration” where difficult tasks like typing in VR can be handled by desktop users, trading technical complexity for better user experience. He shared a personal anecdote about rediscovering a wiki entry he had written 15 years earlier about software he thought was new, demonstrating the value of personal information management. He passionately encouraged everyone to journal using whatever method works for them, calling it an “emotional and intellectual safety net.”
Peter Dimitrios demonstrated a Lenovo Legion 2 headset – a virtual display rather than full VR – and is exploring gesture controls with laptop cameras instead of VR headsets as his retirement project. He maintains a 40-year personal “external brainpack” of files that he greps through and is working to republish them in Federated Wiki now that he is retired and free of IP restrictions. He raised questions about Paul Smart’s virtual Andy Clark project and shared a link to Paul’s paper “Predicting Me: The Route to Digital Immortality.”
Karl Arthur Smink raised critical questions about AI psychosis cases where models amplified psychotic symptoms, the exploration k-value injecting randomness that compounds over time, and user inaccuracy in self-assessing emotional states. He explained that machine learning systems balance Exploration (injecting randomness to escape local minima) and Exploitation (using learned information), making them inherently probabilistic and unpredictable. He warned that external sources of “truth” teach humans to mistrust their own judgment and become reliant on those sources. He mentioned keeping an Obsidian vault backed up on git with journals, code, books, and D&D materials, and noted that journaling is a “life-saver.”
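As a rough illustration of the exploration/exploitation balance Karl described, the sketch below shows an epsilon-greedy choice rule of the kind used in reinforcement learning; the function and parameter names are illustrative and not drawn from any system discussed in the meeting.

```python
import random

def choose_action(q_values, epsilon=0.1):
    """Epsilon-greedy selection: with probability epsilon, explore by picking
    a random action (injecting randomness to escape local optima); otherwise
    exploit by picking the action with the highest learned value."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                    # exploration
    return max(range(len(q_values)), key=lambda i: q_values[i])   # exploitation

# Example: learned values for three actions; most calls return index 2,
# but roughly one in ten returns a random index instead.
print(choose_action([0.2, 0.5, 0.9]))
```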
Brandel Zachernuk provided thoughtful commentary on social embodiment and the stacking of contexts in virtual environments, noting that unadorned speech should not be semantically significant, so that merely discussing actions does not trigger them. He emphasized that being co-present with people is often the primary context, even when working on documents together. He referenced Anil Seth’s characterization of perception as “Controlled Hallucination,” suggesting that attempting to avoid this might require “boiling the ocean.” He suggested using Wizard of Oz techniques with humans first, before implementing actual AI, to establish high-water marks. He noted that Galvanic Skin Response and other structured probabilistic streams can constitute “language” that LLMs can reason about, not just human language. He had to leave partway through to bike to work but planned to listen to the recording.
Ayaskant Panigrahi from Hyderabad, India, shared extensive technical knowledge about multiplayer VR implementations, having explored using phones as controllers in WebXR sessions. He mentioned Mozilla Hubs, Infinite Reality engine, Frame VR, WebRTC, and various multiplayer libraries as potential solutions, noting that passive viewing for non-active participants seems very doable. He shared links about Cloudflare Workers AI for RAG implementation, discussed baai/bge-base-en-v1.5 embedding models and meta/llama-3-8b-instruct, and referenced Brilliant.xyz’s Halo demo and Sesame AI’s organic text-to-speech. He works on XR (Unity and WebXR) and has recently moved into AI development, and he found the discussion about AI hallucination/confabulation and narrative coherence particularly interesting. His website is ayas.fyi.
Jonathan Finn contributed observations about Zettelkasten as a structured memory storage method (specifically the Zettlr app) and referenced Oliver Sacks’ books containing interesting examples of confabulation. He engaged with Karl’s technical question about exploration values in machine learning, comparing it to holding a string from one end while waving it randomly versus holding it from both ends.
Hrithik Tiwari from India introduced himself as an entrepreneur who has spent over four years building metaverses without crypto but with WebRTC, focusing on B2B applications. He mentioned being introduced to the group about 1.5 years ago and finding the future of text in XR fascinating. He is launching version 2 of his product and plans to demo it at the next meeting. He found Paul Smart’s presentation “very interesting.”
Nic Fair, a colleague of Paul Smart at Southampton, introduced himself as a senior knowledge engineer interested in knowledge representations and virtual reality. He supervises Huang Ying and has worked with Frode on funding bids. He raised an insightful closing question about whether static storage of memory can be reflective of reality given that memories alter and coalesce with every visit and external verbalization/visualization.
Huang Ying (also called Ying) introduced herself as a PhD researcher at University of Southampton studying Extended Reality for Health Education from educators’ perspectives and how to integrate AIGC (AI-Generated Content) for XR education content creation. She is supervised by Dr. Nic Fair and expressed excitement to join the discussion. She reacted positively to many points in the chat.
Jimmy Sixdof contributed via chat, suggesting the use of layers along the environment’s depth (z-axis) so that pages, topics, documents, and indexes can fit the context appropriately.
AI: Concepts
Retrieval Augmented Generation (RAG) was defined by Paul Smart as a technique where a large language model exploits external data stores by sending queries, retrieving relevant information as text, and assimilating it into the context window to provide personalized responses without retraining the model. This separates the core foundation model from the personal data repository.
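A minimal sketch of that retrieval step, assuming a small in-memory store of pre-embedded personal records and generic embed() and generate() placeholders rather than any particular vendor’s API:

```python
import numpy as np

def retrieve(query_vec, store, k=3):
    """Return the k records whose embeddings are most similar to the query
    (cosine similarity over (vector, text) pairs)."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(store, key=lambda pair: cosine(query_vec, pair[0]), reverse=True)
    return [text for _, text in ranked[:k]]

def rag_answer(question, store, embed, generate):
    """Retrieval-augmented generation: fetch relevant personal records, place
    them in the context window, and let the unmodified model answer."""
    memories = retrieve(embed(question), store)
    prompt = ("Relevant personal memories:\n" + "\n".join(memories)
              + "\n\nQuestion: " + question)
    return generate(prompt)   # the foundation model itself is never retrained
```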
Confabulation versus Hallucination was redefined by Paul Smart, explaining that within AI research there’s growing consensus that “confabulation” better describes LLM errors than “hallucination.” Confabulation preserves narrative coherence at the expense of accuracy and in human memory research is considered a feature rather than a flaw, playing a psychologically positive role in forming coherent life stories.
External or Non-Parametric Memory was explained by Paul Smart as a body of data that lies external to the large language model, separate from parametric memory which is encoded in the neural network’s weights. The LLM can query this external repository without retraining, enabling efficient personalization.
Asymmetric Collaboration was described by Fabien Bénétou as a design pattern where difficult tasks in VR (like typing) are handled by desktop users, trading some technical networking complexity for better user experience by bypassing current VR interface limitations.
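A minimal sketch of the kind of relay such a pattern might use, assuming the third-party Python websockets package; a desktop “driver” sends typed text and headset clients simply receive it. The protocol here is hypothetical, not the group’s actual implementation.

```python
import asyncio
import websockets   # assumed dependency: the third-party 'websockets' package

CONNECTED = set()   # one desktop "driver" plus any number of headset viewers

async def relay(websocket, path=None):
    """Broadcast whatever the desktop user sends (e.g. typed text) to every
    connected client, so nobody has to type inside the headset."""
    CONNECTED.add(websocket)
    try:
        async for message in websocket:
            websockets.broadcast(CONNECTED, message)
    finally:
        CONNECTED.remove(websocket)

async def main():
    async with websockets.serve(relay, "localhost", 8765):
        await asyncio.Future()   # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```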
Search Augmented Generation, Knowledge Augmented Generation, Tool Augmented Generation, Human Augmented Generation, and Code Augmented Generation were described by Paul Smart as different forms of augmented generation where LLMs interact with search engines, graph databases with ontologies, online computational services via APIs, human users for clarification, and code interpreters respectively.
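To make one member of this family concrete, a tool-augmented loop might look like the sketch below, with generate() and the toy calculator standing in for whatever model and external service are actually used:

```python
import json

# Toy "external service": a calculator that only evaluates arithmetic expressions.
TOOLS = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

def tool_augmented_answer(question, generate):
    """One round of tool-augmented generation: if the model replies with a
    JSON tool request, run the tool and feed its result back into the context."""
    reply = generate(question)
    try:
        request = json.loads(reply)      # e.g. {"tool": "calculator", "input": "6*7"}
    except json.JSONDecodeError:
        return reply                     # plain answer, no tool needed
    result = TOOLS[request["tool"]](request["input"])
    return generate(question + "\nTool result: " + result)
```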
The four stages of LLM-supported memory processing were defined by Paul Smart as: Encoding (acquiring information actively or passively), Elaboration (processing records offline for semantic tagging, summarization), Retrieval (recalling information via explicit requests or contextual cues), and Joint Reminiscing (conversational interactions about the past similar to social memory sharing).
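The four stages could be expressed as a simple pipeline; this is purely illustrative of the division of labour Paul described, with generate() again a placeholder for any LLM call:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    text: str                                   # stage 1: the encoded experience
    tags: list = field(default_factory=list)    # filled in during elaboration
    summary: str = ""

class PersonalMemory:
    def __init__(self, generate):
        self.records, self.generate = [], generate

    def encode(self, text):                     # acquisition, active or passive
        self.records.append(MemoryRecord(text))

    def elaborate(self):                        # offline tagging and summarisation
        for r in self.records:
            r.summary = self.generate("Summarise: " + r.text)
            r.tags = self.generate("List topic tags for: " + r.text).split(",")

    def retrieve(self, cue):                    # explicit request or contextual cue
        return [r for r in self.records
                if cue.lower() in (r.text + " " + r.summary).lower()]

    def reminisce(self, cue):                   # conversational joint reminiscing
        memories = "\n".join(r.text for r in self.retrieve(cue))
        return self.generate("Let's reminisce about these moments together:\n" + memories)
```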
Mana Menu was mentioned by Frode Hegland as a component of their VR system; the name is sometimes mis-transcribed as “Manna menu.”
Networked-Aframe was referenced by Fabien as the networking solution that Mozilla Hubs relies on, and which their project will likely use for multi-user collaboration.
Collection Fallacy was mentioned by Mark Anderson as the mistaken belief that merely collecting notes creates value (more notes equals more value), when often it just creates management problems without actual utility – commonly seen in “tools for thought” communities.
Chat Log URLs
http://paulsmart.cognosys.co.uk/pubs/2021/Predicting%20Me.pdf
Chat Log Summary
The chat log began with technical discussions about Lenovo Legion 2 display glasses that Peter Dimitrios was using, with Mark Anderson finding and sharing the product link. The conversation included casual exchanges about time zone confusion due to daylight saving changes, with Ayaskant Panigrahi apologizing for recent absence due to illness and Diwali celebrations.
Participants discussed networking complexity for the VR project, with Ayaskant questioning whether networking would make development harder and Karl Smink advising that implementing networking early is better than adding it later. Fabien shared technical links about asymmetric collaboration between desktop and VR, including his past work demonstrations. Brandel Zachernuk mentioned croquet.io and had to bike to work partway through, leading to jokes about texting while riding.
Technical discussions covered coordinate systems with references to quaternions to avoid gimbal lock with Euler rotations, and debates about whether a flattened cube is still a cube or becomes a square. Participants shared resources about various multiplayer frameworks including Mozilla Hubs documentation, Ethereal Engine, Frame VR, PeerJS, Colyseus, and Yjs.
Fabien shared extensive links to his personal information management system and noted using multiple networking prototypes (WebDAV, WebSockets, NetworkedAFrame, EventSource via ntfy). Ayaskant shared links about Cloudflare Workers AI with RAG tutorials, Brilliant’s Halo demo with organic text-to-speech, and his personal website.
During Paul Smart’s presentation, chat participants discussed philosophical questions about memory, hallucination, and perception. References included Andy Clark’s work on extended mind, the movie “Eternal Sunshine of the Spotless Mind,” and Oliver Sacks’ books on confabulation. Tom Haymes shared his article connecting Enlightenment philosophy with AI approaches.
The chat included discussions about personal information management tools, with mentions of Zettelkasten, Zettlr, Obsidian vaults backed up on git, Federated Wiki, and journaling practices. Peter Dimitrios mentioned maintaining 40 years of personal files and trying to publish them now that he’s retired. Karl emphasized that journaling is a “life-saver” and raised concerns about AI psychosis and the probabilistic nature of machine learning systems.
Hrithik Tiwari and Huang Ying introduced themselves, with Ayaskant discovering they’re both from India (Ayaskant in Hyderabad). Multiple participants reacted positively with emojis throughout the discussion, particularly to Paul Smart’s insights about memory, confabulation, and the therapeutic potential of LLM-based memory systems. The chat concluded with thanks to Paul and Frode, and Nic Fair’s question about whether static memory storage can reflect the dynamic reality of how memories change with each recall.
