The January Project

I have been invited to pitch to a prestigious organization, an opportunity to increase the credibility of this work and to grow our community and experiments.

As such, I’ll be sending a letter in PDF form to the person in question, and this letter will also be viewable in XR.

I’ll be spending January building this capability and we will use this opportunity as a ‘live brief’ to help spur specific discussions around how this should and can be done.

The aim of the letter is to convey:

  • The importance of investing time and energy to better understand what working in XR can, and should, be.
  • The importance of openly shareable, rich, and robust metadata for XR experiences and framed/traditional digital experiences.

Constraints

  • The experience will use the Visual-Meta approach. The specification for Visual-Meta can be updated/changed.
  • It uses Reader for visionOS. This is a constrained test.
  • The main focus is on reading the document in XR.
  • This experience will be in AR, meaning there will be full pass-through video of the environment, because Reader has so far been developed for AR.
  • Text only. There will be no images, graphs, video, etc. for this test; it will be pure text.

Elements

  • Knowledge Objects we will have available include:
    • Structure of the document for navigation & outline:
      • Headings
    • Plain text:
      • Full text of the document in plaintext Markdown
    • Spatial data (there will be more than one):
      • Sculptures/configurations of the material in XR
    • Annotations by the recipient:
      • Notes/tags, placed spatially and on-document, in forms yet to be designed
  • The letter will also include full citation information for others to cite the document, though this will not be relevant for our experience.
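As a rough sketch, the knowledge objects above could be serialized as JSON alongside the PDF (a JSON file already ships with the current document, linked below). All field names here are illustrative assumptions on my part, not part of the Visual-Meta specification:

```python
import json

# Illustrative sketch only: these field names are assumptions, not the
# Visual-Meta spec. It models the knowledge objects listed above.
letter = {
    "structure": {
        # Headings, for navigation & outline.
        "headings": ["The January Project", "Constraints", "Elements"],
    },
    # Full text of the document in plaintext Markdown.
    "plain_text": "# The January Project\n\nFull text of the letter...",
    # More than one sculpture/configuration of the material in XR.
    "spatial": [
        {"name": "wall-layout", "elements": []},
        {"name": "index-cards", "elements": []},
    ],
    # Notes/tags added by the recipient, spatially or on-document.
    "annotations": [],
    # Full citation information for others citing the document.
    "citation": {"author": "(author)", "title": "(title)", "year": "(year)"},
}

# Round-trip through JSON to confirm the structure serializes cleanly.
serialized = json.dumps(letter, indent=2)
restored = json.loads(serialized)
```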

Authoring Scenario

  • Composing the letter should be done both in XR and in framed digital media (traditional screen-based computing, which may or may not include 3D views), using both media to their advantage.
  • Specifically, we expect to go into XR to place elements spatially, copy the spatial coordinates, and embed them in the next version of the PDF as a viewSpec.
  • In other words, primarily typing in framed and arranging in spatial.
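The place-in-XR, copy-coordinates, embed-as-viewSpec round trip could be sketched as follows. This is a hypothetical illustration: the field names (target, position, rotation_deg) are my assumptions, not the actual viewSpec format, which remains to be designed:

```python
import json

def make_view_spec(placements):
    """Turn spatial placements captured in XR into a viewSpec block to
    embed in the next version of the PDF. Field names are hypothetical."""
    return {
        "viewSpec": [
            {
                "target": heading,       # which document element this places
                "position": list(pos),   # metres, in the room's coordinate frame
                "rotation_deg": list(rot),  # orientation of the element
            }
            for heading, pos, rot in placements
        ]
    }

# Example placements as they might be copied out of the XR session.
placements = [
    ("Constraints", (0.0, 1.5, -2.0), (0, 0, 0)),
    ("Reading Scenario", (1.2, 1.5, -2.0), (0, -15, 0)),
]
spec = make_view_spec(placements)
embedded = json.dumps(spec, indent=2)  # ready to paste into the document
```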

Reading Scenario

The recipient reads the letter by opening the PDF as a two-page spread in Reader on Apple Vision Pro.

  • The recipient can currently position the letter and choose to view it as a single page, a two-page spread, or all pages laid out horizontally.
  • The recipient can further select text and choose options from a context menu.
  • Interactions with elements are currently not implemented. What would we like the recipient to be able to do? 

Questions for Interactions

Views

  • The default view will be the PDF in a two-page layout. Then what?
  • What view(s) do we want the reader to be able to switch to? Do we want to enable timeline views, citation views, columns of data? If so, what might actually be useful for reading just one letter?

Interactions

  • What interactions do we want to enable? 
  • What should the reader be able to create? Spatial annotations?
  • & further questions as they arise

The ‘Document’ & Experiments

The current PDF, JSON, and original .liquid (Author) documents are available here for those who would like to tinker: https://www.dropbox.com/scl/fi/rxjvowzghv4hpptrp3i2t/jan-project-1.zip?rlkey=c4f54an3mmk2zaex36yuhm5a8&dl=1

Experiments

Horizontal layout as image: https://www.dropbox.com/scl/fi/rbge13t7hjwceyfw5xq7a/letter-horizontal-for-vr.png?rlkey=w4nsp7es0xwu0uv0nq75q5rvn&dl=1

Vertical layout as image: https://www.dropbox.com/scl/fi/jib3fm3s2nvvvtewg6v0l/letter-vertical-for-vr?rlkey=3r7xaosw7st2wwoh7d8kjjnrm&dl=1

Index cards: https://www.dropbox.com/scl/fo/8m91fcbpg9h6bgvmtanmf/ACpcKkJ-AkbF86Xgo-9r9w8?rlkey=c2rnu8pthn8yhlpdzyxvthtp2&dl=1

Visual-Meta Elements

