XR Workflow

This XR work aims to deliver a round trip: metadata import into XR, modification, export to a 2D document, and subsequent ‘unfurling’ back into spatial XR.

The user activity will combine data exploration and spatial thinking, with the user able to interact with text elements as well as data points, imported either as documents or as data sets.

1) Import. Initial data can be imported; we have several import filters.
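As a rough sketch of how such filters might hang together, assuming they all map raw input to a common knowledge-object record (all names here are illustrative, not the system’s current API):

```ts
// Minimal sketch of a shared import-filter interface. Names are
// assumptions for illustration, not the actual codebase.

interface KnowledgeObject {
  id: string;
  kind: "text" | "citation" | "document" | "dataset";
  content: string;                    // raw text, citation JSON, URL, ...
  metadata?: Record<string, string>;  // optional definitions etc.
}

interface ImportFilter {
  /** File extensions or MIME types this filter accepts. */
  accepts: string[];
  /** Parse raw input into knowledge objects ready to place in XR. */
  parse(input: string): KnowledgeObject[];
}

// Example: a trivial plain-text filter, one object per paragraph.
const plainTextFilter: ImportFilter = {
  accepts: [".txt", "text/plain"],
  parse: (input) =>
    input
      .split(/\n\s*\n/)
      .filter((p) => p.trim().length > 0)
      .map((p, i) => ({ id: `txt-${i}`, kind: "text", content: p.trim() })),
};
```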

2) Build. The user builds a configuration in XR. The interactions will aim to follow what we have described as ‘Knowledge Sculpture’. This configuration of knowledge objects can be composed of the following (a data-model sketch follows the list):

  • Plain text (at least), which can be typed in XR
  • Text with metadata/definitions, ideally revealed on double-tap
  • Citations from ACM JSON, Reader, or other sources
  • Documents of various types, and more
  • Data sets, from RSS and other formats
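A hypothetical data model for such a configuration: each object from the list above carries its content plus its spatial placement, so the sculpture can be rebuilt later. The field names are assumptions, not a fixed schema.

```ts
// Sketch of a 'Knowledge Sculpture' configuration; all names are assumed.

interface Vec3 { x: number; y: number; z: number; }

type ObjectKind =
  | "plainText"      // typed in XR
  | "annotatedText"  // text with metadata/definitions
  | "citation"       // from ACM JSON, Reader, ...
  | "document"
  | "dataset";       // e.g. from RSS

interface SculptureObject {
  kind: ObjectKind;
  content: string;                    // text, citation JSON, document URL, ...
  metadata?: Record<string, string>;  // definitions shown e.g. on double-tap
  position: Vec3;
  rotation?: Vec3;
}

interface KnowledgeSculpture {
  name: string;     // used to derive '[Name of Sculpture-Date]'
  created: string;  // ISO date, e.g. "2025-03-01"
  objects: SculptureObject[];
}
```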

3) Exit XR. The user exits with this configuration stored as JSON (emailed to the user, as can be done now, or via a server), with the default name ‘[Name of Sculpture-Date]’.
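A sketch of this step, assuming the KnowledgeSculpture shape above: serialise to JSON and derive the default ‘[Name of Sculpture-Date]’ name. The function names are hypothetical.

```ts
// Only the fields used here are typed, so the sketch stands alone.
interface SculptureHeader { name: string; created: string; }

function defaultName(s: SculptureHeader): string {
  return `[${s.name}-${s.created.slice(0, 10)}]`; // keep YYYY-MM-DD portion
}

function exportSculpture(s: SculptureHeader): { marker: string; json: string } {
  return { marker: defaultName(s), json: JSON.stringify(s, null, 2) };
}

// defaultName({ name: "My Sculpture", created: "2025-03-01" })
//   => "[My Sculpture-2025-03-01]"
```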

4) New 2D document. The user pastes the JSON into a new document (Author, Word, Pages, Notepad; it should not matter) as an XR Appendix. The user can also refer to it in the body text, e.g. as “[Name of Sculpture-Date]”, but that is optional.
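One way the pasted appendix might be assembled, again purely as a sketch: the heading text is an assumption, and any label that survives export to plain text would serve.

```ts
// Wrap the exported JSON as an 'XR Appendix' block to paste at the
// end of the 2D document.
function buildAppendix(marker: string, json: string): string {
  return `XR Appendix ${marker}\n\n${json}\n`;
}
```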

5) Return to XR. When this new document is exported (at least to PDF; it would be nice to also support Word, plain text, or whatever suits the system we have currently) and imported into XR, the user can tap on “[Name of Sculpture-Date]” to spawn the original knowledge sculpture configuration into space.

If there is no “[Name of Sculpture-Date]” in the document, the XR system can instead provide an icon for this interaction.
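A sketch of this last step, assuming the document’s text can be extracted on import: find the ‘[Name-Date]’ marker so it can be made tappable, and parse the appendix JSON back into a sculpture. The regex, the delimiter heuristic, and all names are assumptions.

```ts
// Marker like "[My Sculpture-2025-03-01]".
const MARKER = /\[([^\][]+)-(\d{4}-\d{2}-\d{2})\]/;

function findSculptureMarker(documentText: string): string | null {
  const m = MARKER.exec(documentText);
  return m ? m[0] : null; // null => fall back to the generic icon in XR
}

function unfurl(documentText: string): unknown {
  // Assume the appendix is the last top-level JSON object in the text.
  const start = documentText.lastIndexOf('{"name"');
  if (start < 0) return null;
  try {
    return JSON.parse(documentText.slice(start));
  } catch {
    return null; // malformed or truncated appendix
  }
}
```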


Progress

There is a list of known issues which we are working on.