Overall arc: Dave is working hands-on in the Map view, testing node selection, movement, layout, and spatial organization. The conversation moves from immediate UX feedback into a deeper discussion about how AI should be integrated into the spatial workspace.
Interaction design
Dave finds the “only selected nodes can move” toggle important: he wants to be able to casually nudge nodes out of the way without formal selection, especially during spatial triage. He also wants lasso-style multi-select (reach out and grab a cluster) and better grid/plane snapping so he doesn’t have to micromanage node placement. He frames this as “beneficial friction”: the system should handle the busywork (aligning, grouping) so the effort that remains is the cognitively valuable kind (choosing what goes together and why).
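A minimal TypeScript sketch of what the snapping and lasso mechanics could look like. All names (Vec3, snapToGrid, lassoSelect) and the grid cell size are illustrative assumptions, not Author’s actual API; the lasso is a standard ray-casting point-in-polygon test over nodes’ projected screen positions.

```typescript
// Hypothetical position type; names are illustrative, not Author's real API.
type Vec3 = { x: number; y: number; z: number };

// Snap a node's position to the nearest grid point, so the system handles
// alignment while the user decides what goes together.
function snapToGrid(p: Vec3, cellSize = 0.05): Vec3 {
  const snap = (v: number) => Math.round(v / cellSize) * cellSize;
  return { x: snap(p.x), y: snap(p.y), z: snap(p.z) };
}

// Lasso-style multi-select: keep every node whose projected screen position
// falls inside the polygon the user drew (ray-casting point-in-polygon test).
function lassoSelect<T extends { screen: { x: number; y: number } }>(
  nodes: T[],
  polygon: { x: number; y: number }[],
): T[] {
  const inside = (pt: { x: number; y: number }) => {
    let hit = false;
    for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
      const a = polygon[i], b = polygon[j];
      if (
        a.y > pt.y !== b.y > pt.y &&
        pt.x < ((b.x - a.x) * (pt.y - a.y)) / (b.y - a.y) + a.x
      ) {
        hit = !hit;
      }
    }
    return hit;
  };
  return nodes.filter((n) => inside(n.screen));
}
```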
Extending the space with AI
Dave’s strongest conceptual push is that he wants to be able to point at a node — say, “Don Norman” — and ask the system to tell him more, pull in related works, or expand that concept outward into the space. He draws a comparison to Microsoft Copilot’s grounding model: is the AI drawing from the current document, from your corpus, or from the open web? Making that provenance visible matters.
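One way that grounding distinction could surface in code, sketched as a TypeScript discriminated union. The three sources mirror the session’s Copilot comparison; every field and type name here is invented for illustration.

```typescript
// Illustrative provenance model, inspired by the Copilot grounding comparison.
type Grounding =
  | { source: "document"; nodeId: string }      // the node you pointed at
  | { source: "corpus"; documentIds: string[] } // your own library
  | { source: "web"; urls: string[] };          // open-web research

// A hypothetical "tell me more about this node" result, carrying its
// provenance so the UI can keep it visible alongside the answer.
interface ExpandResult {
  text: string;           // e.g. a short summary of "Don Norman"
  relatedNodes: string[]; // suggested nodes to spawn into the space
  grounding: Grounding;   // rendered in the UI, never hidden
}
```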
AI as a co-agent via MCP/API
This is the session’s biggest architectural suggestion. Dave proposes giving the AI the same agentic powers as the user: querying nodes, selecting by property (“show me all people based in the US”), issuing layout commands, moving filtered results to the periphery. He suggests MCP (Model Context Protocol) as the most natural way to do this: a structured wrapper that lets the language model understand and operate the Author interface. The AI wouldn’t just answer questions; it would manipulate the spatial workspace alongside you.
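A sketch of what that wrapper could look like using the MCP TypeScript SDK (@modelcontextprotocol/sdk). The server name, the two tools, the property-based selection semantics, and the core/periphery zones are all assumptions for illustration; only the SDK calls themselves are real.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical server; Author does not (yet) expose this surface.
const server = new McpServer({ name: "author-map", version: "0.1.0" });

// Let the model select nodes by property, e.g. "all people based in the US".
server.tool(
  "select_nodes",
  { property: z.string(), value: z.string() },
  async ({ property, value }) => ({
    content: [{ type: "text", text: `selected nodes where ${property} = ${value}` }],
  }),
);

// Let the model issue layout commands, e.g. move the current selection
// to the periphery of the workspace.
server.tool(
  "move_selection",
  { zone: z.enum(["core", "periphery"]) },
  async ({ zone }) => ({
    content: [{ type: "text", text: `moved selection to ${zone}` }],
  }),
);

await server.connect(new StdioServerTransport());
```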
Voice + gesture as the AI channel
You and Dave discuss using gesture to open a voice channel to the AI, and potentially using gesture to signal whether the AI should answer from its own knowledge or go do research. The pull-and-speak interaction (select something, pull it out, speak a prompt, release it where you want the result) is sketched as a natural spatial-AI pattern.
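Pull-and-speak reduces to a small state machine. This TypeScript sketch is one possible modeling, with invented phase and event names; it is a thinking aid, not a committed design.

```typescript
// Phases of the pull-and-speak interaction; all names are illustrative.
type PullAndSpeak =
  | { phase: "idle" }
  | { phase: "selected"; nodeId: string }                      // user grabbed a node
  | { phase: "listening"; nodeId: string; transcript: string } // pulled out, voice channel open
  | { phase: "placing"; nodeId: string; prompt: string };      // released where the result lands

type Event =
  | { kind: "grab"; nodeId: string }
  | { kind: "pull" }                 // the pull gesture opens the voice channel
  | { kind: "speech"; text: string } // incremental speech recognition
  | { kind: "release" };             // drop where the result should appear

function step(s: PullAndSpeak, e: Event): PullAndSpeak {
  switch (s.phase) {
    case "idle":
      return e.kind === "grab" ? { phase: "selected", nodeId: e.nodeId } : s;
    case "selected":
      return e.kind === "pull"
        ? { phase: "listening", nodeId: s.nodeId, transcript: "" }
        : s;
    case "listening":
      if (e.kind === "speech")
        return { ...s, transcript: (s.transcript + " " + e.text).trim() };
      if (e.kind === "release")
        return { phase: "placing", nodeId: s.nodeId, prompt: s.transcript };
      return s;
    case "placing":
      return { phase: "idle" }; // the AI result spawns at the release point
  }
}
```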
Core vs. context
Your concept of core writing space (A4 width, focal area) versus contextual surround maps well onto what Dave wants from the AI: results and peripheral information should appear in the contextual space, while the user’s focal attention stays on the core. Dave connects this to his idea that AI-filtered content could be moved to peripheral zones automatically.
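If the core really is an A4-width focal band, the routing rule Dave describes is small enough to state in code. Everything here (the band width, the margin, the zone names) is an illustrative assumption.

```typescript
// Classify a horizontal offset (meters from the focal center) against an
// A4-width core band; AI results and filtered content route to "context".
const CORE_HALF_WIDTH = 0.21 / 2; // A4 is 0.21 m wide; the band width is the assumption

function zoneFor(xOffset: number): "core" | "context" {
  return Math.abs(xOffset) <= CORE_HALF_WIDTH ? "core" : "context";
}

// Auto-placement for AI output: nudge it just outside the core band on the
// nearer side, so focal attention stays on the writing space.
function placeAiResult(xOffset: number): number {
  if (zoneFor(xOffset) === "context") return xOffset;
  const margin = 0.05; // illustrative gap beyond the band edge
  return Math.sign(xOffset || 1) * (CORE_HALF_WIDTH + margin);
}
```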
Spatial representation observations
Dave notes that when standing still, a spherical projection of nodes feels natural, but once you start moving around, flat card-like nodes encourage thinking in layers rather than objects. He also experiments with placing nodes behind him, on tables, and near real-world surfaces — reinforcing the value of surface anchoring and the “spatial hypertext parser” idea.
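The two placements Dave compared can be contrasted in a few lines. The radius, layer spacing, and viewer-centered coordinate frame (negative z forward) are all illustrative assumptions.

```typescript
type P3 = [x: number, y: number, z: number];

// Spherical shell: every node sits at the same distance r from the viewer,
// addressed by angles; natural while standing still.
function onSphere(yawRad: number, pitchRad: number, r = 1.5): P3 {
  return [
    r * Math.cos(pitchRad) * Math.sin(yawRad),
    r * Math.sin(pitchRad),
    -r * Math.cos(pitchRad) * Math.cos(yawRad),
  ];
}

// Flat layers: cards keep a consistent orientation and differ only by which
// depth plane they occupy, which is what invites thinking in layers.
function onLayer(x: number, y: number, layer: number, spacing = 0.4): P3 {
  return [x, y, -1.0 - layer * spacing];
}
```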
Narrative trails from maps
Dave describes a workflow he wants: identify ideas in the map, put them in order, describe the narrative structure, then have AI transform that map-of-content into a written output (blog post, paper section). This positions Author’s Map view as the upstream triage step before generative writing.
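A sketch of the trail-to-draft handoff. The TrailNode shape and buildDraftPrompt are hypothetical; the point is that the map supplies both the content and its order, and the AI only does the prose transformation downstream of that triage.

```typescript
// A node in the user's ordered trail through the map.
interface TrailNode {
  id: string;
  title: string;
  notes: string; // the user's own words, kept as grounding for the draft
}

// Assemble a generation prompt from the ordered trail plus the user's
// description of the narrative structure. Any LLM call would sit behind this.
function buildDraftPrompt(
  trail: TrailNode[],  // in the order the user arranged them
  structure: string,   // e.g. "problem, two examples, counterpoint, takeaway"
  outputKind: "blog post" | "paper section",
): string {
  const outline = trail
    .map((n, i) => `${i + 1}. ${n.title}: ${n.notes}`)
    .join("\n");
  return [
    `Write a ${outputKind} following this narrative structure: ${structure}.`,
    `Use only the ideas below, in this order:`,
    outline,
  ].join("\n\n");
}
```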
