January Project Workflow

Reading in XR. The user opens a PDF document in Reader on the Apple Vision Pro and reads through it. Alongside citations indicated with [1], the user will see links to spatial layouts (of which there can be several) and can tap on one, for example <Spatial Demonstration>. This causes nodes of further information to appear, laid out as the author arranged them. The user can then move the nodes wherever they like and tap on them to see more information.

Creation: The workflow is authoring in Author on both macOS and visionOS: the user lays out the document’s Map spatially in visionOS, and this layout is then included in the PDF export for viewing and interaction in Reader visionOS.

Author macOS

Author macOS will need to be able to export the Map data, including the Z dimension, in Visual-Meta for Reader visionOS to use. It will also need to support ‘anchors’ in the text that allow the user to spawn the Map views, similar to how citations are included in the text.

Implementation: This will be done through a new option in the Map in Author macOS, at the bottom right, under ‘Layout’: Copy Layout. The user can then use the context menu to Insert/Map Layout. If the layout does not have a name (see next item), the user will be prompted to give it one. Once done, the name will appear inside ‘<’ and ‘>’, so something as basic as <Spatial Demonstration> can appear in the text on insertion.
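As a trivial illustration of the inserted marker itself, the anchor is simply the layout name wrapped in angle brackets; the function name below is a hypothetical placeholder, not part of the spec.

// Hypothetical helper: forms the in-text anchor for a named spatial layout.
func anchorText(forLayoutNamed name: String) -> String {
    "<\(name)>"   // e.g. anchorText(forLayoutNamed: "Spatial Demonstration") == "<Spatial Demonstration>"
}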

Visual-Meta Spatial Map

The Visual-Meta will contain two JSON-formatted items for the Spatial Map: Layout and Glossary.

Layout

In the Layout, only ID and Position are mandatory; Rotation and Scale are optional, as are other parameters which can potentially be added later.

"Layout" : {
"nodePositions" : { "ID-GOES-HERE" : {
"position":"-.5 1.5 -.3",rotation”:”0, 0, 0","scale": “1, 1, 1"}

   
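A minimal sketch of how this Layout block might be decoded in Swift with Codable; the type and helper names (SpatialLayout, NodePlacement, vector(from:)) are placeholders rather than part of the spec, and the string-based position/rotation/scale format follows the example above.

import Foundation

// Mirrors the "Layout" object: node ID mapped to its placement in the Map.
struct SpatialLayout: Codable {
    let nodePositions: [String: NodePlacement]
}

struct NodePlacement: Codable {
    let position: String          // mandatory, e.g. "-.5 1.5 -.3"
    let rotation: String?         // optional, e.g. "0, 0, 0"
    let scale: String?            // optional, e.g. "1, 1, 1"

    // Parses a space- or comma-separated triple into a SIMD3<Float>; nil if malformed.
    static func vector(from text: String) -> SIMD3<Float>? {
        let parts = text
            .split(whereSeparator: { $0 == " " || $0 == "," })
            .compactMap { Float(String($0)) }
        guard parts.count == 3 else { return nil }
        return SIMD3<Float>(parts[0], parts[1], parts[2])
    }
}

Once the value under "Layout" has been pulled out of the Visual-Meta block, decoding is a single JSONDecoder().decode(SpatialLayout.self, from: data) call.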

Glossary

The Glossary’s role is to store all the relevant information about the node in space, which can be as little as a title/name or as much as a reference to an external 3D model. Current fields are as follows:

"Glossary" : {
"ID-GOES-HERE" : {
"identifier" : "document",
"description" : "The article explores…",
"documentPath" : "/Users\Shapes.liquid",
"phrase" : "Information Shapes",
"tag" : "article”,
"urls" : [

Please note that the actual line breaks will need to be experimented with, to balance use of space against robustness to line-break issues. I expect the example has issues, so comments from those who better understand the formatting concerns are appreciated.
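For reference, a matching Swift model for a single Glossary entry could look like the sketch below; the field types (and the GlossaryEntry name) are assumptions based on the example above, with everything beyond identifier treated as optional.

import Foundation

struct GlossaryEntry: Codable {
    let identifier: String        // e.g. "document"
    let description: String?
    let documentPath: String?
    let phrase: String?
    let tag: String?
    let urls: [String]?
}

// The Glossary is keyed by the same node IDs used in the Layout.
typealias Glossary = [String: GlossaryEntry]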

Reader visionOS

Reader visionOS needs to be able to display a Map in the same way as is possible in Author visionOS, based on reading the spatial data in a PDF, and it also needs to support adding annotations (in terms of layout, tags and notes). The user can click/tap/pinch on an anchor in the text and that spatial layout will then appear, the same as is currently possible in Author visionOS.
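As a rough sketch of the display side, assuming RealityKit and reusing the placeholder SpatialLayout/NodePlacement types from the Layout example above, spawning the stored nodes could look something like this (node content, rotation and interaction handling are omitted):

import RealityKit

// Creates one empty entity per node at its stored position and attaches it
// under a root entity supplied by the caller.
func spawnNodes(from layout: SpatialLayout, under root: Entity) {
    for (id, placement) in layout.nodePositions {
        guard let position = NodePlacement.vector(from: placement.position) else { continue }
        let node = Entity()
        node.name = id
        node.position = position
        if let scaleText = placement.scale,
           let scale = NodePlacement.vector(from: scaleText) {
            node.scale = scale
        }
        root.addChild(node)
    }
}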

Song

Musical exploration co-orchestrated with AI, which may inspire different thinking on this topic:
