Mark Anderson: Hello.
Brandel Zachernuk : Early.
Mark Anderson: Well, let me just oh,
Frode Hegland: A few things, so.
Mark Anderson: Well, the cameras are a bit washed out with the lighting in here.
Frode Hegland: Well, I can’t actually hear you.
Mark Anderson: Sorry. I was just saying that, yeah, I can never get the lighting right in this room.
Frode Hegland: Anyhow, so yeah, I saw your emails, and thank you. What I need, and it's probably very obvious, is a few things, probably from Brandel: something almost like code, or pseudocode, so that I can write that blue sky paper on how I take things in and out of the air.
Mark Anderson: There's kind of a chicken-and-egg problem with this. The elegance of the visual meta approach is that you can essentially link to anything you like. And there are several questions. How much do I take? At what point does something become bloat? Obviously, if I have it to hand, it's immediately defensible, because I know I've got it; I don't even have to look beyond the local copy, I definitely know it's there. Whereas if it's external, I might have to go and get it. But then we have to ask what counts as local. The other interesting thing, the more I thought about it, is that ownership itself is incredibly complicated. Not just to split hairs, but in the sense that things get edited. It's almost about the degree to which you can pull something apart and play with it. One of the things I'm finding, experimenting with data at the moment, is learning to be very, very careful that I don't do something that changes the master record, the original provenance. So in a sense: do I edit a copy, or do I edit the real thing? These all feed in.
Mark Anderson: I mean, again, these are all metadata; in other words, settings you make. So I'm ticking over these things, trying to figure out in my mind what's necessary metadata, because if we don't have it at the point where we want to use the thing we're talking about, then we're stuck, or forced into a bad choice, and what can be treated as merely nice to have. For instance, you might think of sharing almost in terms of privacy. I want to take this into the room so I can see it. I might want it to be seen only by me, or only by members of the current meeting; let's leave aside the whole world watching. The point is: is it a thing that only I want to see, or a thing that everybody can see? And from that level, that opens up the question of, well, if they can see it, can they do anything with it? Peter, we're just mulling over issues of privacy slash visibility when moving objects into the VR space.
Frode Hegland: Yes, because it's very, very easy to do a demo with all kinds of cool things floating about. But the question is: how do you get them? How do you hide them? How do you take them out? How do you share them? How do you use them? How do you re-edit them? And by ownership I mean partly intellectual ownership, but also: which system owns it? Is it the operating system? Is it the PDF document that owns it, via visual meta? Is it the application?
Mark Anderson: Well, you've got two sorts of ownership there, perhaps unintentionally, in the sense that we need to be careful: what we say "owns" it, in one sense, just means what manipulates it within the software. Whereas I understand you to also mean: if there's only one copy and we all walk away, who gets to walk away with it? You're thinking of ownership in those terms.
Frode Hegland: Not really, because of the example that was in the PDF earlier today. I've written many lists to you guys, starting with the first PDF document; we all do that, and it's quite useful. Transclusion is, of course, a conceptual model here. So the point I'm at is: I write the list and name it, in the document. Now I have to think about the interface in Author, but that's fun. I then share it with you guys as a PDF, and maybe one of you decides, and this is very much about the library discussion we had last time, where we talked about tables of contents and book covers, that you want to have a look at all the lists in our journal, for instance. That now becomes a trivial thing to display. One of the lists is a list of potential projects. You want to work on that, so you pull it out and put it to one side, somewhere in space. So now the question is: what is that now? Is it still part of the original document, or is it something the software of the room we're in owns, but with a visual meta citation to where it came from, et cetera? And then, Mark, you enter the room, because so far it was one person, and you see this thing. How do you then grab it? And how do we decide whether this is now also yours?
Mark Anderson: Well, just stepping back a second. When the thing arrives and you pull in a list: this list of lists wasn't there until you asked for it. At this point it's just a constructed thing; it literally only exists in the space because we've created it there. So that's important: it starts as nothing, an evanescent thing. And I see a second hand up. So we probably need some sort of definition, because if we end up with something like early Windows 2000, where it keeps asking "Do you want to do this?", people give up. So you might have to have a preemptive identification that you clarify later. I would say that in the VR space there's ownership, then the level of privacy, and then control. I think back to the days when I worked with controlled documents. Some people never knew they existed, because they never got to go to the place where you'd even know they were there. Once you got through the door, you might know they exist, but you couldn't necessarily look at them. And if you were allowed to look at them, then perhaps you could do things like change pages around. So that's perhaps one way to look at a model. Peter, you had a comment.
Peter Wasilko: Yeah. I think once you start to think about ownership, we have to step back and address the issue of authority control, and have a federated directory of all the people who have been creating data in this space. Once we do that, we can make it everyone's responsibility to update that federated directory with their current contact information. I actually took an unsuccessful run at this. There was a task force inside the United Nations addressing copyright issues, and at one of our law events we had a speaker from that group come and talk to us about the things they were doing. Afterwards, it struck me how incredibly brain-damaged our current system is, making people hunt down rights holders instead of making it incumbent upon rights holders to make themselves readily found. The original rights holder, if he cares about his rights, should have primary responsibility and an obligation to make it painless for people to find him, instead of anyone having to go tracking at all. Let's see now: he died, but we're within seventy-five years of the guy's death, so we've got to try to figure out who his estate is. Oh wait, the estate sold the rights off to some publishing house for some money. Where did the rights go now? They might have delegated them off to some other group, but only a piece of the rights, because one of the big things in law is that rights are not an atomic, monolithic thing. Rights can be infinitely portioned, broken up, and recombined a million different ways.
Peter Wasilko: And that's really great, because it makes it easy to monetize things that couldn't be monetized otherwise. For instance, Frode might decide that he wants to release the rights to his dissertation for conversion into manga, and all he would sell is the right to turn his dissertation into a manga; he'd reserve the right to turn it into an anime. And if the manga went really well and someone wanted to make an anime based upon the manga of his dissertation, they'd have to go and renegotiate the rights. So what I argued to the U.N. group was: guys, why don't you set up the master rights holders directory, and then everyone can register their things? And if you own an orphan item out there that no one knows the status of, you can go declare that you are the actual rights holder. How you do that can be ironed out in detail, whether you'd have to put up a bond or we'd just take your word for it. But at least put that mechanism in place now. If we're setting up a new space, it'd be much easier to build that in from square one. And once it's painless to get in contact with the right folks, it means that if I'm changing my email address, within so many days I will promptly update the system, and as soon as I update the system, it will resend me any communication that went to me during the last seven days, or whatever it was, before I updated it.
Mark Anderson: That was a slightly different issue than the one I was getting at. You're absolutely right, and so, broadly, we know who the contestants are. If I step to a more trivial level: not something like a book or a thesis, but, say, I'm showing you my shopping list. I'm deliberately picking something of no great intrinsic worth, so we don't fall into the whole value thing that can otherwise creep in; we can get to that later. If I've brought the shopping list in and I'm showing it to you, I'm just wondering whether physical space as it exists at the moment gives us any clues as to how we tease this apart. Because if I take a piece of paper out, it is self-evident, unless it's magic paper, that people in the room who are sighted can see it. That's the first level of revelation. But I may hold it facing towards me and not show you what it is, and not tell you what it is. That is kind of obvious. So we take something into 3D space.
Mark Anderson: I suppose the question is, given that we've talked about the fact that if I have a document in a 3D space, just because I'm stood somewhere, I'm literally at a different point of view in that space: why should I actually have to turn the object around to see the face that interests me? If that's the case, then as I take my document into this space, it is effectively being revealed unless I place some impediment in its way to say: there's a thing here, but you can't know what it is. I think we also have to accept, in a way, that we went through a painful stage in the early days of the web, when people said, well, I put my photograph online and someone took it. And the answer was: no, they probably didn't take it; it wasn't like they crept into your house and stole it off your mantelpiece. They saw there was a picture in front of them and thought, that's useful. There was nothing to say thou shalt not take a copy. So there's that thing we're going to have to accept.
Peter Wasilko: More like you put a billboard in front of your house and they took a photograph of the billboard while driving down the street.
Mark Anderson: Yeah, yeah. So I think there's that pragmatism built into it. So I guess it's the question of: you take something into a space, and you haven't preemptively set privacy on it because it's something very special and you don't want people to know what it is, but you might be happy for people to know that there's something there. Now, bear in mind that in the software space, I could choose to have a whole lot of things visible to me that aren't visible to you, which is something we can't do in real life. I can't hold this up and pretend that you can't see it.
Frode Hegland: Well, that's a very interesting point, Mark, because what you can do is take it out of frame. I think you probably have something similar in AR and VR, where, you know, with the Oculus we have the Guardian area, which defines inside and outside. You would have a public shared area for the meeting, but anything we put outside it, we have to manually bring in or point to.
Mark Anderson: That's a good point, because what that's actually saying is that there is a boundary, even if it's only conceptual, and when the object passes that boundary, it passes into a more public space than it has been in hitherto. And you still may have various privacy labels on some of the data within that object. So we were just discussing the fact that you can see this, but I can take it out of frame. That's one simple form of privacy, because we were chewing away at the gnarly issue of, well, ownership.
Frode Hegland: So I'm glad you're here. I have a very specific question for you that is probably over-trivial. For the ACM Hypertext series, visual meta was introduced two years ago, and this year I have three days left to submit for the blue skies track.
Mark Anderson: Well, for the abstract you’ve got, you’ve got a week after that to write it.
Frode Hegland: Yeah, that's true. So what I think I should do this year is write on visual meta in augmented reality environments, because I've introduced visual meta before. So the thing that's probably useful is to say: I have a document, it's opened up into the space, things are floating about. Now, on closing the document, I need to write a new appendix automatically, of course, with all the attributes of where everything is. If you can help me write a sample, or a dummy, even for one object, that would be really, really useful.
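One way to sketch the appendix Frode asks for, purely as an illustration: a small generator for a BibTeX-like visual meta entry recording one object's placement. The field names used here (source, position, rotation, scale) are invented assumptions, not part of any published visual meta specification.

```python
# Hypothetical sketch: serialize one extracted object's spatial state
# into a BibTeX-like visual meta appendix entry. All field names are
# invented for illustration, not taken from the visual meta spec.

def visual_meta_entry(obj_id, source_doc, position, rotation_deg, scale=1.0):
    """Render one object's placement as a visual-meta-style block."""
    fields = {
        "source": source_doc,  # the document the fragment came from
        "position": "{:.2f}, {:.2f}, {:.2f}".format(*position),      # metres, room space
        "rotation": "{:.1f}, {:.1f}, {:.1f}".format(*rotation_deg),  # Euler angles
        "scale": f"{scale:.2f}",
    }
    lines = [f"@{{vm-object-{obj_id},"]
    lines += [f"  {k} = {{{v}}}," for k, v in fields.items()]
    lines.append("}")
    return "\n".join(lines)

entry = visual_meta_entry("list-1", "journal-2022.pdf",
                          (0.40, 1.20, -0.75), (0.0, 15.0, 0.0))
print(entry)
```

On closing the document, one such entry per floating object could be appended, so the layout can be reconstructed on reopening.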
Brandel Zachernuk: In terms of the spatial and relational information that might pertain to the fragments?
Frode Hegland: Yes. And by the way, I didn't thank you enough for your email reply.
Brandel Zachernuk: It was pretty long, but I just thought: those are all the things we've discussed here, so I'd formalize the list of stuff in case any of the other folks are interested.
Frode Hegland: Well, there's nothing wrong with putting it into text, so it was greatly appreciated. It is now, of course, along with all your projects, on the website,
Brandel Zachernuk : Categorized and organized and all of that. Yeah, thank you. Thank you.
Frode Hegland: Yeah. Now, that's what we're thinking about. The workflow we're just discussing here is the following, and this is a little bit about the earlier PDF, so I'll make it brief, but I think it goes to the core of a lot of things. I'm sitting here writing a thing and I decide that what I need is a list, so I write a list of stuff; the example I used was potential data we will have in this environment. So that's just words. Then in my software there is some kind of interface interaction where I must give this list a title. That's all nice: it means that when I'm writing in Author, I can collapse and expand, I get all these affordances. But now I export it with visual meta. The visual meta knows the title of this list, and it knows the items in that list. Then we go into an augmented environment, reflecting on covers, tables of contents and all that stuff. Someone else, Mark, let's say, says: I want to go through everything in the journal that is a list. I want to see all the lists we've had. That's easy; the system has them. Here they are. One of them is interesting, so he pulls it out. I really need us to discuss what happens from that act, because to do it as a demo is of course super easy, but it has to have some kind of meaning and value when that is pulled out.
Frode Hegland: It is coming from a specific document, but that document could potentially disappear. So that's one thing: I would like it to be an object that has the visual meta attached, hidden of course, so that he can then put it into a new document if he wants to. But also, if the rest of us enter the room and he has a few of these things floating about in his public space, not hidden on a wall we can't see, we should be able to say, oh, I need to work on that list, and take it into our work. So: where is it stored, and how is it handled? I call it ownership, though that's not a very good term. That's the beginning of it. But then, one thing that I think would be amazing: Peter says, oh, hang on, I'm working on that bit. He takes it, goes home, works on it in a document, changes it, and publishes it as a PDF with visual meta again, marked as based on the original. Then we can have a citation tree over time showing who has worked on this list and who has changed it. Because lists are pretty important in many environments, right? Now that I've said too much: do you have a perspective on how this should be handled?
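The "citation tree over time" Frode describes could, hypothetically, be modelled as a chain of derivation records, each pointing at the record it was based on. The names here (Record, based_on, lineage) are invented for illustration.

```python
# Hypothetical sketch: each published copy of a list records which
# document it was derived from, giving a derivation chain over time.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Record:
    doc_id: str
    author: str
    based_on: Optional["Record"] = None  # provenance link to the source record

def lineage(record: Record) -> List[str]:
    """Walk the based-on chain back to the original document."""
    chain = []
    while record is not None:
        chain.append(f"{record.doc_id} ({record.author})")
        record = record.based_on
    return chain

original = Record("journal-2022.pdf", "Frode")
extracted = Record("list-projects", "Mark", based_on=original)
revised = Record("list-projects-v2.pdf", "Peter", based_on=extracted)
print(" <- ".join(lineage(revised)))
```

A full citation tree would allow multiple records to share one ancestor; this chain is the single-branch case.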
Brandel Zachernuk: So, the passing around of shared views of a thing, with somebody making a hard change to it. Nothing jumps out at me other than being able to see those links, but I'm also conscious that those links become spaghetti very quickly. I have heard that it's less the case in 3D, but that really only forestalls the inevitable.
Frode Hegland: Sorry, I was just going to say that, visually, those should be able to be turned on and off. We definitely need to have Ted Nelson-style visual connections soon; there's no question about that, and this is yet another one of those we can have. So, just in very practical terms: let's say you've decided to program this thing, and you have been given this example set of visual meta. You have this data to begin with. How would you run it through software to be able to do this kind of stuff? Talk to me like I'm literally two years old, please.
Brandel Zachernuk: Well, so. The document itself contains individual fragments, and the visual meta will relate either to the totality of the fragments or to specific fragments. The way you reconcile that is by looking for addresses and anchors and figuring out which parts they pertain to: whether there are paragraph numbers, or specific sigils that say this component of visual meta relates to that fragment. You'd construct those things based on the nodes that are identifiable and separable, according to whatever accepted markup exists in the document. Then you'd construct whatever visual annotations are appropriate, according to all of the visual meta tags. And then you'd position them. You can use automated layouts; they are more intuitive in 3D than in 2D. Things like Harel and Koren's fast multiscale algorithm for graph visualization. So you might apply some basic graph visualization, with the additional aspect of whatever constraints exist. If things have an explicit position, then that's awesome; but if not, you'd inform the layout based on the user or users of the system, their positions and vantage points, prioritizing certain clusters to be visible at those distances, and stuff like that.
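As a toy illustration of the automated layout step Brandel mentions, here is a naive 3D force-directed placement of fragment nodes. It is far simpler than the Harel-Koren multiscale method he refers to, and all constants and names are arbitrary assumptions.

```python
# Hypothetical sketch: naive 3-D force-directed layout. Nodes repel each
# other; edges act as springs. Far simpler than Harel-Koren, but shows
# the shape of the positioning step.

import math
import random

def layout_3d(nodes, edges, iterations=200, k=1.0, step=0.05):
    random.seed(0)
    pos = {n: [random.uniform(-1, 1) for _ in range(3)] for n in nodes}
    for _ in range(iterations):
        force = {n: [0.0, 0.0, 0.0] for n in nodes}
        # Repulsion between every pair of nodes.
        for a in nodes:
            for b in nodes:
                if a == b:
                    continue
                d = [pos[a][i] - pos[b][i] for i in range(3)]
                dist = max(math.sqrt(sum(c * c for c in d)), 1e-6)
                for i in range(3):
                    force[a][i] += (k * k / dist) * (d[i] / dist)
        # Spring attraction along edges.
        for a, b in edges:
            d = [pos[a][i] - pos[b][i] for i in range(3)]
            dist = max(math.sqrt(sum(c * c for c in d)), 1e-6)
            for i in range(3):
                pull = (dist * dist / k) * (d[i] / dist)
                force[a][i] -= pull
                force[b][i] += pull
        for n in nodes:
            for i in range(3):
                pos[n][i] += step * force[n][i]
    return pos

pos = layout_3d(["a", "b", "c"], [("a", "b"), ("b", "c")])
```

Explicit positions from visual meta would override the computed ones; the solver would only place the leftovers.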
Brandel Zachernuk: So, in terms of how people relate to it, that depends on what kinds of manipulation and input modalities they have. One of the challenges, if we're talking about co-present stuff, is that it's nice to have a canonical, persistent space, rather than having people hold vastly different views of that information, because otherwise they have less to relate to each other with. So one option is that when you say grow the data, or shrink the data, what you're actually doing is shrinking or growing yourself in the opposite direction, such that if you want to become a tiny pipsqueak and get right into the stuff, you can see a different perspective on those things. The fluidity of all that really depends on how comfortable people are with dissolving the notion of scale, because a lot of people believe that specific real-world scales ought to be obeyed, and other people think that the manipulability of those things, as in Tilt Brush, is too good to let go of, and they really enjoy that, as well as world-in-miniature as a mechanism for navigation. I don't know if that triggers any other things, but I'm conscious that other people have things they want to say.
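Brandel's inverse-scaling idea can be stated in one line: rather than scaling the canonical shared scene, scale the viewer by the reciprocal. A hypothetical sketch, with invented names:

```python
# Hypothetical sketch: keep the shared scene canonical and scale the
# viewer inversely, so "grow the data 2x" becomes "shrink yourself to
# half size" without perturbing what co-present users see.

def apparent_scale_transform(requested_data_scale, viewer_scale=1.0):
    """Return the viewer scale that makes the shared data appear scaled,
    leaving the canonical scene untouched for everyone else."""
    if requested_data_scale <= 0:
        raise ValueError("scale must be positive")
    return viewer_scale / requested_data_scale

# Wanting the data twice as big is the same as becoming half as tall.
print(apparent_scale_transform(2.0))  # → 0.5
```

The same reciprocal applies to the viewer's position relative to the scaling pivot, which a full implementation would also transform.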
Frode Hegland: Yeah, well, I'll think about that. Peter, please go ahead.
Peter Wasilko: Yes, a few thoughts. One is that when I bring data into the system, I would like to be prompted at that point for whether the original source of the data is me personally, an external source of unknown provenance, or an external source of known but unstated provenance, which of course implies that I'll have the ability to supply that provenance information later on if needed, but I'm choosing not to at the moment. So those are distinct. Now, at the point that I'm bringing something in with unstated provenance, ideally the system would prompt me to record it locally, so that if I do need to supply it later, I won't have forgotten it by that point in time. All of that would be local metadata attached to the item. Then I'm choosing what I want to reveal in-world, which would be that unknown provenance, unstated provenance, or personal. And then we could address the permissions issue at that point, too. One thing that I always wanted but never had was an automated ask-permission system. So instead of just putting something out for anybody to use, I could put it out basically "by personal leave", and then the system would automatically get in contact with me through the authorship directory I was alluding to before, and I would see how you want to use it.
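Peter's import-time provenance prompt might, hypothetically, look like this. The enum values and field names are invented for illustration, and the private local note is deliberately kept out of what is revealed in-world.

```python
# Hypothetical sketch of import-time provenance tagging: every object
# brought into the space carries a provenance category, plus an optional
# local note that is recorded privately and never shown in-world.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Provenance(Enum):
    PERSONAL = "personal"
    EXTERNAL_KNOWN = "external, provenance stated"
    EXTERNAL_UNSTATED = "external, provenance known but unstated"
    EXTERNAL_UNKNOWN = "external, provenance unknown"

@dataclass
class ImportedObject:
    name: str
    provenance: Provenance
    local_note: Optional[str] = None  # kept locally, never revealed in-world

    def revealed_label(self) -> str:
        """What other people in the room are allowed to see."""
        return f"{self.name} [{self.provenance.value}]"

item = ImportedObject("shopping-list", Provenance.EXTERNAL_UNSTATED,
                      local_note="clipping from a mailing list archive")
print(item.revealed_label())
```

The prompt itself would simply require one `Provenance` value before the import completes, and nag for a `local_note` whenever `EXTERNAL_UNSTATED` is chosen.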
Peter Wasilko: So, for instance, maybe someone wanted to make a commercial use of it, and depending upon the commercial use, Frode might or might not want that reference to visual meta to be there. If it's an Apple or IBM commercial giving it as an example of a kind of functionality they want to support in their system, it might be highly advantageous for him to have them make that commercial borrowing of some representation from a visual meta talk or something. If it was some sleazy operator of unknown background, he'd be highly suspicious and might choose not to. Now, there is a very well-developed security model already in place out there, called the object capability model, and that's worth having a look at. There are actually a couple of programming languages designed completely around it, for tracking really finely grained permissions and delegating only the kinds of permissions that you want. So if the system made use of that, and also pulled in a type-of-usage ontology that we could develop, all of that stuff could be smoothed out and automated. It would probably cost the legal community a fair amount of work, but what the heck, I'm all about making life better for authors.
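A minimal sketch of the object-capability pattern Peter names: access is granted by handing out objects that expose only the permitted operations, attenuated before delegation. Python attributes are not truly unforgeable, so this only illustrates the shape of the idea; all class and method names are invented.

```python
# Hypothetical sketch of the object-capability model: instead of checking
# an access-control list, you hand a collaborator a capability object
# that exposes only the operations you chose to delegate.

class Document:
    def __init__(self, text):
        self._text = text
    def read(self):
        return self._text
    def write(self, text):
        self._text = text

class ReadCap:
    """An attenuated capability: exposes reading only, no write."""
    def __init__(self, doc):
        self._read = doc.read  # capture only the method being delegated
    def read(self):
        return self._read()

doc = Document("draft of the blue skies paper")
cap = ReadCap(doc)  # hand this to a collaborator instead of the document
print(cap.read())
```

Languages built around this model (E, for example) make such capabilities genuinely unforgeable at the language level, which plain Python does not.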
Brandel Zachernuk: Yeah. Mark, go ahead.
Mark Anderson: I have a little thought experiment, by way of trying to keep things simple, because one thing has been needling me since Frode asked me earlier today about the meta just describing things. To a certain extent, if we think for a moment of textual data, there are two parts to this. Even in a VR space, the text is still effectively stored somewhere, as the electronic ones and zeros that represent that text. There is then the render: how it appears, if it appears at all, in the space. So if I get my shopping list out, I can tear it in half. I haven't changed the contents of the list; I've changed the visual representation of it significantly. Whereas if I just change something on the list, the visual part of it, well, I guess if I'm showing the letters, they change, but effectively I can edit the text without necessarily affecting the visualization. And as Peter talked about bringing something in and permissions, the thing that sprang into my head is the modal problem: "Did you pack this bag yourself?" Well, yes, sort of. No. But I didn't make the clock that I packed in it; that was made by somebody else, because I bought it in a shop.
Mark Anderson: So there are practical limitations as to what, in the moment, we can really know, because it's terribly easy just to abstract our way through all these questions. So at this stage I might experiment with putting things into a sort of liminal space, which is your private boarding area. And you might want a gesture that says: no, I know what this is, I just want to ram it straight through into the public space, handled with care. Whereas if I wanted to set controls which I hadn't preset, then I have the problem that I'm now out of the flow. That's the point I made about the modal thing. If the system says: right, you've made a list, you must now name it; well, now I've forgotten what the list is about, because I'm thinking about possible names for it. This is what's problematic in software, because sometimes you definitely need to put a label on something, but the moment when you need that label is sometimes not the best moment to ask for the answer. There's never an easy answer anyway; I just throw that in the mix in case it helps us get to the point. Because I do think, certainly if we talk about text as opposed to anything else you might put in VR space, that the text is going to be a data object associated with, or part of, the virtual object, so the two can be, in a sense, revealed or edited separately.
Brandel Zachernuk: Yeah, I like the recognition that those two sorts of manipulation have different implications. One thing that reminds me of: I have a pretty strong desire to retain all of the state changes for a document and graph. It's nice to be able to serialize all of those operations such that you have the ability to recover what the state of a document, or network of documents, is at any point along its timeline. I feel like that allows for context restoration, as well as potentially some other aspects of recoverability, from a technical perspective. So yeah, it would be my preference to have those things. Another thing that I'm really intrigued by, and I know it's not a completely free lunch: at least in the context of text, documents are really small, and disk space and transfer speeds are really high. So I'm intrigued by the possibility of leaning on document inclusion, where you are fairly unperturbed about the idea of things growing pretty large. Like Wikipedia-style transclusion: if you want a piece of something, you pull the whole thing in, so that you have the ability to use the rest if you need it. You still need some reasonable limit on what size things are, because, as we have at work, there will be Keynote files and Photoshop documents that span upward of nine gigabytes, and funny things happen to computers when that happens. But in text, the limits are probably a little further out. So I would be intrigued by the possibility of serialized systems that can retain the two distinct sorts of operations that you mentioned, Mark. And I think it would be interesting to err on the side of, in the event that you want a piece of something, taking all of it and storing it behind the scenes in case you want more.
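Brandel's "serialize all of those operations" point is essentially event sourcing: store every change as an event and rebuild any past state by replaying the log up to a point in the timeline. A hypothetical sketch with invented event shapes:

```python
# Hypothetical sketch of event sourcing for a document in the space:
# every change is appended to a log, and any past state is recovered
# by replaying the log up to a chosen point.

def replay(events, up_to=None):
    """Rebuild document state by applying events in order.
    up_to limits replay to the first `up_to` events (None = all)."""
    state = {"text": "", "position": (0.0, 0.0, 0.0)}
    for i, (kind, payload) in enumerate(events):
        if up_to is not None and i >= up_to:
            break
        if kind == "set_text":
            state["text"] = payload
        elif kind == "move":
            state["position"] = payload
    return state

log = [
    ("set_text", "potential projects"),
    ("move", (0.4, 1.2, -0.7)),
    ("set_text", "potential projects (revised)"),
]
print(replay(log))            # latest state
print(replay(log, up_to=1))   # state before it was moved or revised
```

Because the log is append-only, the master record Mark worries about is never overwritten; editing only adds events.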
Frode Hegland: I mean, we're talking about what an object is in augmented space, how we can refer to it, and when it loses its place. Because in one view, you could say that you take this list out, to keep the example simple, and data-wise it's still where it was originally, in the document; the only thing the VR environment knows is that it's now displayed differently, and when it's taken out of that, it goes poof. So the question is: when do we say that it is now a freestanding unit in this world, but at least one that knows where it comes from? I think what I'm trying to do is something much simpler. Peter is going really far into a very important area. I'm looking just at when I'm writing stuff for you guys, little think pieces or whatever. I would love it if one day Brandel just opened one up in some experimental workspace. But I need to know what it would be in that workspace. Because the thing is, we're sharing things in thin air; it's bizarre, and this is obviously the whole new world that's so exciting. It's not like I send you a JPEG or a Word document. I'm showing you the thing. How in the world do we? What is it, and how do we share it? What is its tangibility?
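Frode's question of when the pulled-out list becomes freestanding could be answered by copying the content into a fragment that keeps a citation back to its source, so it survives even if the source document disappears. The field names here are invented for illustration:

```python
# Hypothetical sketch: an extracted fragment owns a copy of its data and
# carries a citation back to the originating document, so it remains a
# freestanding unit that still knows where it came from.

from dataclasses import dataclass

@dataclass(frozen=True)
class Fragment:
    items: tuple
    source_doc: str     # citation back to the originating document
    source_anchor: str  # e.g. the list's title inside that document

def extract_list(doc_id, title, items):
    """Copy the list's content out of the document, keeping a citation.
    The fragment owns its data; the back-reference is informational."""
    return Fragment(items=tuple(items), source_doc=doc_id, source_anchor=title)

frag = extract_list("journal-2022.pdf", "Potential projects",
                    ["timeline", "library view", "shared lists"])
print(frag.source_doc, "->", frag.source_anchor)
```

Under Brandel's transclusion preference, the fragment could instead hold the whole source document behind the scenes; this sketch copies only the list itself.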
Brandel Zachernuk: Yeah. As with the list and the document that you mention sharing, I think when it changes should hinge on the implications for use. One thing I think I've mentioned I object to in the past is the issue where too many editing applications store insufficient state about view changes: the fact that you were zoomed in to this particular piece of the Photoshop document three days ago. Some tools are a little bit better at this, because they're cognizant that there is an importance to "I'm actually looking at the bottom-left leg on the wheel well of this car." Whereas Photoshop and Illustrator store very little of the sort, and most other word processing documents are, I'm sure, even worse. So we definitely have to have a lot more about not just the document, but what a perspective over the document might entail, and view specs obviously render that, in the sense that a lot of the time people think that mere zoom and pan actions over a document aren't a thing. I think VR blows all that away, in that it obliges people to be aware that there is some manipulation that has stateful implications, if not for the underlying data, then for the view that inevitably has to be cobbled together, because there's no single canonical view of the document. And... yeah, I lost my train of thought.
Mark Anderson: I have a quick thought listening to that. Because it's really difficult at this stage, you have the chicken-and-egg problem: you could spend an awful lot of time building something and then find it wasn't actually quite what you meant to build. While there are so many things in play, I'm wondering if one way to push into this is to start to write down a number of assumptions, with no intent that they're necessarily right, but almost deliberately putting them there so you trip over them if you need to falsify an assumption. So when we say we're taking something into the space, start saying what we think we mean, just in words, because that's quicker and easier to do, and it's a way of surfacing some of the background assumptions at this stage of the game. They're not even going to be near right, and they may actually be pitifully poor in their understanding of the thing, but I think it might help tease out the space. Because one of the problems of working totally in the abstract is that a list of jottings, something scribbled on paper, has as much relevance as a published book, because they're all just "things". So we need to tease some of these things apart, because if we begin to put an edge around them, even a slightly fuzzy one, I think it makes it easier to get a handle on them; otherwise you're having to deal with everything from the tiniest scrap up to the copy of the United States Constitution, against the bit of paper you found in the street, which clearly don't have the same controls. But I think you get the point.
Frode Hegland: OK, so jumping into this: the document I shared that you replied to, I can't even remember the name now, but isn't that document one of the projects I suggested? Because Mark kind of pressured me, in a good way, to suggest looking at projects, not just the other aspects. One is a timeline, which of course we've discussed myriads of times, and this comes into that: we need some sort of mechanism whereby these things can be communicated to the user's timeline. You know, my wife and I have a trainer at the moment, which is all very good, plus of course we have the Apple Watch; it would be so useful to get all the health stuff onto the same timeline. And when you're talking about editing documents: in Author, like a lot of modern software, I do a save and ask, what does that do? It tells Time Machine that there's a new version; that's basically what it does, right? So if we can take these changes you're talking about, Brandel, these edits, into that timeline, and also, significantly, when we give each other stuff. It's not necessarily a change of ownership, but when Brandel takes the thing that I put in space, that's probably significantly useful for a timeline, right?
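The single timeline Frode describes, where edits, shares, and hand-overs all land as events, could be sketched as an append-only log. The event kinds and field names below are assumptions for illustration, not any existing system's schema.

```python
# Sketch: one shared timeline of document events (edits, shares, take-overs).
from datetime import datetime, timezone

timeline = []  # append-only event log

def log_event(kind: str, document: str, actor: str, detail: str = "") -> dict:
    event = {
        "when": datetime.now(timezone.utc).isoformat(),
        "kind": kind,        # e.g. "edit", "share", "take"
        "document": document,
        "actor": actor,
        "detail": detail,
    }
    timeline.append(event)
    return event

# A new version is just another event, like Time Machine noticing a save:
log_event("edit", "think-piece.pdf", "Frode", "new version saved")
# Brandel taking an object Frode placed in the shared space is also an event:
log_event("take", "think-piece.pdf", "Brandel", "took glossary object")

assert [e["kind"] for e in timeline] == ["edit", "take"]
```

Health data, edits, and hand-overs could then all be merged by sorting on the `when` field.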
Brandel Zachernuk: Yeah, when I take, if not ownership, then viewership rights, exclusive or not exclusive, that kind of thing. Absolutely. Like I said about the state stuff: what it is should depend on what we want to do with it. So in the case of this document you're sharing, what is it that I need to do? If it's merely to understand what you mean, then it's pretty likely that it's a document that contains a linear flow of information and an introduction. And I don't read super well; I mean, I can finish some books, I'm fine, but my preference is conversation. I really like having things introduced to me, I like people talking to me about stuff, and I like ideas being performed. I'm not huge on failure, but that's separate. So I think what I would like from a document is something that is presented in that sort of explicitly, literally discursive way, where maybe words appear, maybe sounds are played through the timeline of the narrative that you're laying out. I think hand gestures would do a lot of work, too.
Frode Hegland: OK, OK, I have to jump in for a second, sorry, Peter. Right. Multimedia was rubbish because learning how to make a movie is a real skill, so you can't just...
Brandel Zachernuk: Authoring is onerous.
Frode Hegland: Yeah, authoring is onerous. However, Brandel, it's nice and positive of you to talk about your reading. We all read in different ways, and you're super intelligent and accomplished. So people in the future may listen to this, and in a similar situation may feel...
Peter Wasilko: It was not to be self-effacing.
Brandel Zachernuk: To be clear, I'm not. I'm comfortable with...
Frode Hegland: Oh no, no, no, no,
Brandel Zachernuk: And all these things; I just mean specifically the modality of reading. Yeah, yeah.
Frode Hegland: And in this community, of the future of text, that's important. I mean, I have similar issues myself with editing; I can't do it past a certain point, I have to call Mark and we have a fight, right? But one thing I'd push for really strongly, you know, I come from an artistic background, I went to Chelsea School of Art, all of that stuff. Of course we have to do the timeline, no question, but if we could also make a document and then not just do a voice-over, but do a presentation of that document, like the two I've written to you guys. Because in this sense, I think we're quite intimate and we can be open and relaxed with each other. A lot of it is flow-of-thought rubbish, but I don't want to delete it at this point; I don't have an academic thing I need to withhold. So I've kind of written at the beginning: this is actually so-and-so, read down to here; if you want to read the rest, it's up to you, but you don't have to. If I could say that, and maybe have my hands turn the pages and my voice be there, I think that would be a real and valuable contribution. So the issue you're talking about is real and important. And thank you.
Brandel Zachernuk: Thank you. Yeah, no problem. So to that point, and I'll be done in a bit, I promise, Peter: it's the performability of those authoring hints and implications. The only thing that's been on our minds so far is being able to type letters, put them in places, and to some extent format them, to half size, say; that's all we have. But if we have the ability to actually author gestures, point to things, and have other aspects of performance and unfolding be a little bit richer, then I think people will lean into it. That's one of the things that is in tension with applications like Keynote and PowerPoint: there's this temporality that we are abundantly aware has value, but we have such a dearth of mechanisms for assembling those things and then performing them. So there's a real tension between those modes that hasn't been resolved by the tools, in large part simply because of the input modes we have available, and I'm really optimistic that with something a little more... Yeah. The last thing I was going to say is that all software systems are formal: you can imagine doing all software with graph paper, and some people occasionally get giggles by encoding Twitter as pieces of paper, as notes that you would physically write out.
Brandel Zachernuk: The same thing goes for hand or motion-capture data: you have stick figures and you say, this one goes up and then that moves there. And what's interesting is thinking, yeah, that would be possible, but it's so absurdly dumb to approach the problem that way because of all of the obligations of input. To one of the points you made earlier about the provenance of specific documents and the ownership of those fragments: as much as you can get simply from context and from other pieces of information, like what people are doing, being able to make good guesses is always better than having people encode that information themselves. Because unless it's information they specifically care about, they're likely not to fill it in, and the quality of the information is likely to be bad. So that's the other aspect: because one has to understand and concede that authoring is onerous for everything but the very specific thing the person intends to offer at this moment, coming up with ways of backfilling that, or gently asking for it at the right time, is really essential.
Frode Hegland: So I've asked Peter to hold his thought, since he's noted it down, for a minute, because this is exploding into something. A couple of years ago I mocked up a thing, and it's funny, but it just didn't go anywhere, that was based on questions and answers. So imagine someone famous, a politician or actor or whatever, but also imagine someone on the other side of the world who you just don't know about. It works like this: there is a spoken or written question; the person answers; another question, another answer; so you build up a database of questions and answers. This would be audio, but of course transcribed, for the logic, right? Then in the future, let's say this is a teacher who is in demand: a student can ask a question and the avatar will answer, and it can then allow for many questions. Part of the problem with it is that most people don't know how to do video conferencing. So if it was video, first of all you're going to have different clothing on every time, and most people really don't understand basic lighting and stuff like that, so it would look absolute shit. In VR, however, that becomes very different. So let's take it a step further. Imagine this: I make a presentation to you guys of a document that I've written, similar to what we talked about earlier. You guys then ask me questions, and I answer those questions.
Frode Hegland: When done, those questions and answers become part of the visual-meta, or something like it. So the more times I present this work, the more Q&A is encoded in the document automatically. It builds up, and that is available to people in the future. That could be a new way where a document is just a starting point for something much bigger. You know, because I'm very much about PDF into this world; PDFs are very, very traditional, very conservative, but at the same time as we do that, we obviously have to try to invent completely new things. And the fact that we have our avatars, all of these things, could be insanely interesting. And since everyone's still quiet, one little thing: when I do the transcripts, the software also allows me to change the voice based on what's being said. We've seen lots of research, but it's in real use now. You can actually teach it the voice, so if the text changes a little bit, the voice will change what's being said. So as for the whole thing about video being a problem: we have avatars, and it could do the same with audio. Even if it's recorded in different situations, at maybe different levels of tiredness, it can be processed in the future so that it's a uniform dialogue; you feel like you're talking to one person. Right, Mark?
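The accumulating Q&A Frode describes could be sketched as a plain-text appendix that grows with each presentation. The wrapper markers below imitate visual-meta's start/end convention, but the block name and layout are guesses, not the actual visual-meta specification.

```python
# Sketch: questions and answers from presentations appended to a document's
# metadata block, so the record grows with each session.
qa_appendix = []  # accumulated Q&A pairs across presentations

def record_qa(question: str, asker: str, answer: str) -> None:
    qa_appendix.append({"asker": asker, "question": question, "answer": answer})

def render_appendix() -> str:
    # Emit a plain-text appendix another environment could parse.
    lines = ["@qa-appendix-start"]
    for i, qa in enumerate(qa_appendix, 1):
        lines.append(f"q{i} ({qa['asker']}): {qa['question']}")
        lines.append(f"a{i}: {qa['answer']}")
    lines.append("@qa-appendix-end")
    return "\n".join(lines)

record_qa("Where does the glossary live?", "Mark", "In the visual-meta appendix.")
print(render_appendix())
```

Because the appendix is self-delimiting plain text, a document carrying it stays readable even in tools that don't understand the markers.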
Mark Anderson: Yeah, I keep thinking here: the thing with visual-meta is that it's all just metadata. The problem is that that means essentially everything is possible, but the question is still which things we do. There is this disconnect, I think, between the movement of a presence between the actual and the virtual, or even augmented, space, and what we think of ourselves as doing when we move something into that space. Because to a certain extent, if I have a document on my computer, to move it into the virtual space I'm going to have to wrap stuff around it; well, not necessarily wrap, but I need to associate it with further information. And I'm just trying to noodle on that, because I'm totally with Brandel's point about the inclusion issue and having more state data, but I'm also conscious, you know, how big does my shopping list have to get before somebody points out, wait a minute, was it really worth all that effort? I'm being slightly flippant, but there is a point to it, because it's terribly easy to slide from one end to the other. That's what I'm noodling on. I'll just throw in one quick thing before it passes me by, and Brandel, I'm saying this on camera so my concerns about it are clear. Because I spent an awful lot of my spare time on it, I do actually have Ted Nelson's Literary Machines in e-book form, because I made it, and I've sort of got Ted to agree to let it out; we haven't got around to deciding where. I'm careful about letting something that I know is very close to his heart just disappear off into the ether because it's ones and zeros, but within the grouping here, I don't know whether that's something that you, Brandel, might find interesting. I don't know if I sent you a copy before; I thought I did, but if I didn't, I will.
Brandel Zachernuk: I don't recall receiving it, no. That sounds fun.
Mark Anderson: So it's just, you know, you'll know what he said. But I just wondered if that might be a nice point for getting us to some funky demo. The other difficult part would be asking Ted and getting him to quite cotton on to what we're doing, but I'm here on the record saying I don't want to be playing fast and loose with his copyrighted material. I do think it's quite an interesting source relative to what we're doing.
Brandel Zachernuk: Yeah. I also noticed, after Bob Horn mentioned his book, that it's on archive.org, which means it's there whether he likes it or not. I'd love to know whether he likes it, because if not, then I'd prefer not to read it there but to find a copy to buy. But it's interesting because it is explicitly attempting to mimic some plausible way of having hypertext. I would look at the other one, but I'm sort of forty pages through this one, looking at it; it's really neat. One of the things that's really cool is his enthusiastic discussion of virtual reality on page 244; it might as well be fifteen minutes of one of these recent meetings. So it's interesting to think, it's understandable what thirty-three years has done to his level of interest, but potentially this time it's different, and I mean that because of the technical changes that have occurred along the way.
Mark Anderson: Yeah, I feel it. Reading this, the thing I keep having to tell myself is to remember that the artwork was pretty much state of the art when it was written. So it probably looks terribly simplistic and old-fashioned, but you've got to look beyond the artwork; there are some really interesting ideas here. Oh, not at all.
Brandel Zachernuk: I think I've said before that I worked as a professional medical illustrator for nearly ten years, and that kind of thing does a very, very useful job. When you're talking about glistening guts and arteries and things like that, it's nice to get a little bit more shading on there, but you use the tools that you have. And that visual style strikes an excellent balance between being rich and lying within the constraints of the medium as it existed. So no, I wasn't poking fun at it at all.
Mark Anderson: That picks up an interesting point, and one I think we got slightly crossed about in our last-but-one talk, when we were talking about tables of contents and things. Because what I thought I was hearing talked about wasn't so much dead-tree libraries and that kind of thing, but more the metadata of the text, which in the digital sense is a living thing, and we're talking about moving into a space where we can actually interact with that structure. That's why I thought it was quite exciting. I know its lineage is in old tech, but we're in a transitional phase, and I was trying not to get too absolutist about what may or may not travel forward, because at this point we don't know. But even if you didn't call it a table of contents, having something that you could describe as a sort of tree of glosses, which is a perfectly fair description of it, allows you to basically crawl your way into the corpus.
Frode Hegland: Peter, sorry for the big diversion. And Mark, don't completely lose what you were talking about there, especially the whole Ted thing. But Peter, please go ahead.
Peter Wasilko: OK. Going really, really far back, I had a couple of security thoughts. One was that we might want to introduce a notion of eyes-only material that, at the client-side gateway to the system, would be blocked and replaced by a peer-to-peer negotiation with the intended recipient, to make sure that the data never gets into the system at all, in order to prevent any security holes that might be lurking in the system from exposing that data. If the data never gets entered into the system, we don't have to worry about, say, your list of credit card numbers. You might want, for some reason, to let Mark Anderson have a look at it; you trust him with that kind of information, but you certainly wouldn't want it to be in the system, where a bug in the system implementation could potentially expose it to the world. So for something like that, you'd want some sort of eyes-only marker, and the data would explicitly get routed directly over a secure pipe between the two endpoints that you want to have access to it. And the client could also have some general patterns as filters for that kind of really sensitive data. So you could have a regular expression looking for Social Security numbers, say, or credit card numbers, and if it sees that pattern in any data flowing into the system, it would just X them out and make some sort of a little note that you could explicitly get around using an eyes-only peer-to-peer transmission.
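Peter's client-side eyes-only filter could be sketched as a pass over outgoing text that X-es out sensitive patterns before anything leaves the client, so the shared system never holds them. The two patterns below are illustrative only; real detection of sensitive data would need far more care than a pair of regular expressions.

```python
# Sketch: redact sensitive patterns client-side, before data enters the system.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US Social Security number shape
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # credit-card-like digit runs
]

def redact(text: str) -> str:
    # Replace each match with a note pointing to the peer-to-peer channel.
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[EYES-ONLY: withheld, sent peer-to-peer]", text)
    return text

cleaned = redact("My SSN is 123-45-6789, card 4111 1111 1111 1111.")
assert "123-45-6789" not in cleaned
assert "4111" not in cleaned
```

The withheld material would then travel over the direct encrypted pipe between the two endpoints, as Peter describes, never touching the shared store.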
Peter Wasilko: Now, jumping forward to a completely unrelated thought: we might really want to start formally rethinking plagiarism in this world. There are certainly lots of times when I've had ideas and references of uncertain provenance, and I will consciously not use a particular idea; I will literally write around it, because I know that I don't know exactly where I originally found it, so I can't explicitly reference it without the potential of someone coming in with a big club down the road and saying, oh, Wasilko was sloppy, he was trying to steal Brandel's idea, when that really wasn't the case. It's simply that I couldn't access the memory of it being Brandel who gave me that idea, years ago, at a time when I didn't think it was going to be important to something I'd be writing in the future. So in that respect, we want an explicit notion of uncertain provenance, or unidentified provenance, so I can make it perfectly clear that it's not my idea. I think I mentioned this in a couple of previous chats, but I just want to circle back to it now to re-emphasize it: I'm not sure of the provenance, or maybe I am sure of the provenance but I might be wrong, and Mark Anderson reads that and thinks I might be off on the provenance, so he could make a little annotation. And then we need an agreed-upon community standard that, as long as we're adhering to the process of trying to progressively upgrade our citations and annotations, it won't be considered an act of plagiarism if someone's memory isn't one hundred percent crystal accurate.
Peter Wasilko: Also, I think we should start thinking about what we want fair use to be in VR, and, as a community, have some sort of a fair-use manifesto that we could adopt and get into the literature, so that when judges down the road are addressing the question of whether something is fair use, there'll be some documentation in the literature that a court could look at to tell them how people in this community were looking at these issues, as to what should be fair use, and ideally we'd even try to get rights holders on board to acknowledge what fair uses are. Because one horrible problem in the copyright-law area, and law in general, is that until something gets litigated, it's just a big, murky minefield that people avoid, because no one wants to be the test case for a novel question of law. Even if you feel that it's highly likely you would win, the pure mechanical costs of litigating the issue through to a conclusion are such that, unless you're Elsevier, you can't afford to get a court to finally reach the conclusion that anyone in their right mind would agree to. And sadly, Congress and the various legislative authorities around the world have never sat down and addressed fair use beyond general principles, like how much impact is it having on the financial market for the item, or was it done with a for-profit motive. You have a whole list of fuzzy factors, but everything is horribly fuzzy, and there's nothing you can hang your hat on and say, ah, here's the piece of legislation that defines fair use.
Peter Wasilko: Here's a safe harbor that I know I'm under, so I can go ahead and use it. So the more we can do to crystallize the academic community's views of fair use, the higher the odds that a court or a future legislature will iron this stuff out. Another beautiful example is orphan software and emulators of long-dead systems. As a practical matter, anyone in their right mind would say yes, it's fair use: if someone can get a copy of AUGMENT up and running, it would be perfectly fair to recreate that; if you can resurrect a copy of a famous hypertext system like NoteCards or an early HyperCard, that should be deemed fair use. But the legislatures have never gotten around to formally addressing that issue to give us a safe harbor and remove the ambiguity. And in the presence of the ambiguity, again, everyone's afraid of the cost of litigation and how it could wipe out an individual, even though he'd probably win; it doesn't matter that you'd win. This, of course, is what powers patent trolls and copyleft trolls: knowing that the costs of being proven correct are so much higher than the incidental costs of writing off your rights and not touching the thing, or of agreeing to essentially pay blackmail money to avoid the litigation.
Frode Hegland: Yeah, Peter, these are interesting issues, but I think probably for a while outside what we're working on here. The act of publishing something: when you publish something, you're saying it is for the public. Of course, you may publish within a specific community, and that's an interesting issue too. But I'm not sure we're ready to address that; it's walking before we're able to crawl, getting the document in and out of places. Unless it's a perspective piece.
Peter Wasilko: So mainly I wanted to get that on the record so that we can come back to it in two or three years. It would be a good open topic for someone looking to do a PhD in law related to IP to pick up. I'm just trying to get it into the record so that someone can come back to the issue when they're in a better position to address it.
Brandel Zachernuk: The performance issues aren't really from the lack of type safety; they're more from the fact that it needs to be so locked down, sandboxed, and fenced in with all of these different checks. For the most part, when you agree to make something for Sony or Apple or Microsoft or whoever, there's an entity, an agent, that people can negotiate with, and that entity can be punished, and that mostly keeps people honest; that doesn't exist on the web at all. So it's a much more untrusted environment. There are incredible capabilities you can have for annotation and mutual manipulation of documents and things like that in a context where you can trust the people involved. It's one of the things that I think Universal Scene Description is going to run into: right now it's all inside Pixar, and nobody's worrying about one division of Pixar making nefarious changes to another's work. Lighting isn't at war with Models at this point, although who knows what's going to happen in the future.
Brandel Zachernuk: And the third part of this is that I think limited fair use, per se, is at odds with the semantic web, in that if any given thirty-second clip of a movie can be considered fair use to include within a video, and it's possible to make linkages between all of the thirty-second clips, then you have the whole movie, and there's nothing that any one person did. It's the sort of death-of-a-thousand-cuts that Isaac Asimov really liked to write about, in terms of allowing robots to murder people without breaking the three laws. So between all of those things, yeah, the scaling issues of actually expanding these things up, in perpetuity, for a public web are terrifyingly thorny. They need to be kept in mind by the people building it from the beginning. But I agree that it's worth thinking about the uses it can have right now, with a more limited pool of people, because many a team has been driven mad by just one of those problems on its own.
Peter Wasilko: Great points, Brandel, thanks for sharing that.
Mark Anderson: Brandel, I sent you an email, which hopefully will turn up while we're here, with a link to the two bigger documents and some other ones. I've got a number of Ted's things, which I've actually taken from the Internet Archive, but what I've done is effectively retype them in digital text form, because his writing is like mine, not exactly the clearest, and sadly in places he's written off the edge of the page. And I've learned not to ask Ted, "what did you mean?", because he can't remember; it was years ago, quite rightly so. So there are gaps, but what I've tried to do, really for our future selves or researchers, is to say: OK, this is what was written. Frode, you were asking about the seven issues; I've put up another copy. I've got all that stuff in a Tinderbox, so I basically have it as a hypertext. The problem with the seven issues is that the terminology kept changing, and although it's presented sort of three times, the pieces weren't necessarily linked, so researchers might really find this useful; I don't think you'll find it anywhere else. That was something I did for my thesis, not that it's particularly erudite or anything. I did it because it's such a canonical thing, and I got fed up with people saying, oh, but the seven issues, and I'd say, well, the world has changed; some of the things here we've really moved on from. I think you look at it now and some of it is a bit dated, which doesn't make it any less worthwhile for its time, but I don't know how much input it has for the current day. Anyway, I'm happy to help; if people want it in a different form, I can do that.
Frode Hegland: Mark, all I meant when I referred to it was that it's a list, but a useful list that has been referred to, you know, a good one, in our community. Did you guys have a chance to look at the website? I have a link to The Economist doing an article on the metaverse in China. They are investing a huge amount; last year over a thousand new companies were founded with "metaverse" in the title. And they're investing so much in the hardware and software, and they're buying game companies too. A lot of this will be based on gaming ways of doing things, which is of course very interesting and very different from us, but we can benefit from it. I think the amazing amount of money being spent in China on this means a lot of the technologies will get a lot cheaper, a lot quicker, and that's just really good news.
Mark Anderson: Yeah, my first CD burner, I think, was $450 and burnt in real time. And the technology was gone in five years.
Frode Hegland: Don't you remember when we were all joking, going to the record shop: "Excuse me, do you have any blank CDs?" Ha ha, they don't exist. So, yes, anyway. Right. So in terms of the basics, one of the beginnings there is: what should I do for my blue-sky paper on having visual-meta in and out of thin air?
Mark Anderson: A very quick thing I suppose I ought to ask: I assume visual-meta is going to be used in the conference proceedings again this year, or was it a one-off test?
Frode Hegland: It was supposed to be used again, but the people involved in my world have not had the ability to communicate with the right people for this year. I'm just trying to leave as many keywords out of the transcript as possible.
Mark Anderson: I can fill in the gaps from that, and it doesn't surprise me at all, which is a great shame. But still, the point is it doesn't invalidate it; your paper can refer back to it. The paper was in last year, and I think it's a really useful starting-off point, because for some of the issues we've raised today, you basically have something that has already been posited, and at least trialled, as a way of effectively providing a backbone onto which to add all these necessary references. Because as we've discussed, we don't really know quite what you need, in what volume, what has to be local and what can sit at reference scope; but we sure as heck need something to put it into, and it makes sense to me to use something that was started with the intention of all its key parts being readable for a while yet. Obviously you're not going to want to store all the ones and zeros of some bits in the visual-meta when they can be easily referenced; the worst case would be to find that someone said, we'll do that, and what we'll do is hide it as a nested bit of information in some VR format, which I think would be a mistake.
Frode Hegland: But Brandel, what do you think: if we were going to do a test with this, what would we actually do?
Brandel Zachernuk: Well, one of the things that is good about VR is that you can put things at distances; you can have levels of granularity. In the context of a map, distance is its own level of detail, in the sense that you look at the Thames and its winding at this distance, versus being able to actually get in and see where the Flint River joins on, and stuff like that; text doesn't do that very well. So having the ability to encode multiple levels of detail, in the way that Bob discusses in structured writing, and having a visualization that allows people to fluidly transition between those levels: I think that's a useful representational job for visual-meta and a useful visualization job for virtual reality in that context. The aspect of being able to hint at provenance, in the manner of Ted Nelson-style visible links, is useful too. One of the objections I have to having visible links on all the time is that it's super distracting, and for the most part the quite linear tasks of reading are interrupted by these things that you may want; that's where the notion of view specs is essential.
Brandel Zachernuk : But again, just as authoring is onerous, so is overly detailed view-state manipulation, in a context where you don't have the ability to just make those things and where you have to commit to a long string of specific assertions about the way you want something to be. You know, Photoshop is much better in that you have the ability to zoom in and out, but even that, as you must know, can become incredibly tangled in terms of the specific configuration of layers you have on and off. So finding gestural and progressive ways through those problems is very valuable. And I think the combination of virtual reality, hand tracking, wrist tracking, all of those things, together with the richness of the data that can be leaned on to make those selections in such combinations, as you say — I think those things work well together, and that could be the locus of the job that visual-meta does in virtual reality.
Frode Hegland: Does that make sense? It does, to an extent, but — I think if we look at it the simplest way: let's say we're not in an AR environment but in VR; it's simpler. I produce a document that has visual-meta on it, and I decide that what I'm interested in is only the glossary. So I pull that out through some command action, and I now decide to move it about like on a map, making the lines up there by selecting — let's pretend it works exactly like in Author. So I select Mark Anderson, this is a line to Peter, and that kind of stuff. When I'm done, I want to be able to write that constellation into a new appendix of the visual-meta in that document. So the initial baby-steps question I have is: how would that actually look? Of course, just writing to PDF from a VR environment will, I'm sure, require all kinds of APIs and libraries and all kinds of good stuff. But how can we write it in a way that this same environment can open it to the same view — but also such that it's reasonable to expect someone developing a different environment to read it, figure it out easily, and support that same view?
Mark Anderson: It doesn't seem a problem to me. I mean, the thing is, as long as your glossary is structured in a way that, you know, isn't stored in some binary format no one can read, it's just a bit of information that gets passed around. What is it — one of the things you've got? Sorry.
Frode Hegland: It’s also where it is in space. That’s what I’m thinking about like.
Mark Anderson: Well, that's a different thing — if that wasn't clear from what you said. If you want the exact sort of, you know, thing like a node diagram, if you turned it into something like that, so you wanted the exact 3D position within a bounded area, that would be some more. But that's just — and I'm sorry to keep saying it — three-dimensional data, so it's a bit more data. But the turning of it into the data that goes into the PDF is not going to be happening on a machine inside the VR; it's a computer function. So I don't think that's something you have to concern yourself with. The more interesting thing is: okay, so what extra information would I take out in VR?
Frode Hegland: My job is visual-meta, so I do need to concern myself with that, right? And Brandel knows a lot of not just the practical details but the quality-of-life issues, having dealt with it in detail. So that's why I'm wondering if we can take just one word, have it in one place, and just write down what that looks like, in a BibTeX-slash-visual-meta style, if that makes sense. I could do it myself really easily, but I just know I would do it badly, in a way that doesn't suit the right kind of reading. So that's why.
Brandel Zachernuk : Sure. So, to Mark's point, what you would need is a six-degree-of-freedom reference for whatever node transformation you have. If you understand objects to be anchored at their center pivot, then you'd figure out what that center pivot is, and then you'd translate and rotate it according to — you'd probably use a matrix for it. So you have a four-by-four matrix of floating-point numbers encoded, which indicates its place in 3D space. If you have things with local coordinate reference frames, then you'd need to rebuild that from the top down, based on the transforms of all of the objects. You typically say: this one is parented to that one, but it's a little bit down; this one is parented to that one, but it's a little bit forward and tilted to the side. So, you know, it's the same way any visual scene graph exists — three.js is a pretty good model of it. If you look at other things like glTF, or at the way people discuss coordinate frames in applications like Unity3D or Blender, that's what you do.
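What Brandel describes — local transforms parented in a scene graph, rebuilt top-down into world positions by composing four-by-four matrices — can be sketched in a few lines. This is an illustrative toy, not any particular engine's API; the names and the translation-only example are invented for clarity.

```python
# Each object stores a local 4x4 transform relative to its parent;
# world placement is rebuilt top-down by composing matrices, as in
# any scene graph (three.js, glTF, Unity3D, Blender all work this way).

def mat_mul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    """A 4x4 translation matrix (column-vector convention)."""
    return [[1, 0, 0, x],
            [0, 1, 0, y],
            [0, 0, 1, z],
            [0, 0, 0, 1]]

class Node:
    def __init__(self, name, local, parent=None):
        self.name, self.local, self.parent = name, local, parent

    def world(self):
        """Compose transforms from the root down to this node."""
        if self.parent is None:
            return self.local
        return mat_mul(self.parent.world(), self.local)

# "This one is parented to that one, but it's a little bit forward":
table = Node("table", translation(0.0, 0.7, -1.0))
card = Node("glossary-card", translation(0.2, 0.1, 0.0), parent=table)

# The card's world position is the table's position plus its local offset.
wx, wy, wz = (card.world()[i][3] for i in range(3))
```

A rotation would simply be another 4x4 factor in the same product, which is why a single matrix per node is enough to carry a full six-degree-of-freedom reference.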
Brandel Zachernuk : Whether there are additional pieces of information about constraints — saying that this thing needs to be on a horizontal surface of this size — that's a determination that is only really now emerging, from people like Timoni West at Unity Labs. Unity is the application platform most people make most games on these days; the other one is Unreal, and they both have pretty similar sentiments. I'm not aware of the level of effort Unreal is putting into what they're doing with that, but certainly what Timoni's been doing is great, where she talks about being able to apply a constraint. So, to your point about the portability of these views, combined with the integrity of the relationships — one of the things that's really interesting is: does this need to be on a surface, on a horizontal surface? What are the appropriate bounds? Sorry, go ahead.
Frode Hegland: I was just going to say, can we co-write a short paper on this?
Brandel Zachernuk : Oh, sure, yeah — I mean, it's out there. If you watch some of Timoni's stuff, she's discussed it and described it. I haven't used the actual platform toolkit. I'm not sure if there are legal reasons for that; I just haven't gotten around to it, because I like to have more control than that, and I don't like the UI. But a lot of the thinking is really interesting, in terms of recognizing that this is augmented reality, where the expectation is that you're going to be in a space you have to defer to, to some extent, rather than having a completely pre-cleared area over which you have complete free rein. So, you know, there are benefits to that. But it also brings the richness and complexity of who left their toys on the ground again.
Frode Hegland: Peter has been helpful in writing the parsers and validators for visual-meta, which is really great. So, within this example we're talking about, what I think would be really useful is this new appendix. When you close the document, it gets written, and it has the following information. It says which bit of the document was moved out of the document. It specifies several different possible origin points, because the piece may be positioned relative to the document itself, relative to the viewer, or relative to a table — so, what you were talking about: it has to explicitly say these are the possible ways you can place it in space, right? And then what I'm thinking is we'll have a kind of cascading hierarchy of detail. So it must have the word — chocolate, say; that's what it is. It must be facing this direction — all of that basic stuff. And then maybe, literally a line below, it can say it was presented in this font and this color with this shader. So it keeps going into detail, but depending on the reader software, it can stop wherever is relevant. And if we can then somehow provide translators from this into Unity, or even Blender, or whatever — you know, that would be phenomenally useful. For the actual boring stuff, if we find someone we can actually pay to code it, that might be useful, because they would do it in a very different way than we would from our passion perspective, know what I mean?
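The "cascading hierarchy of detail" Frode sketches — required placement fields first, optional presentation fields below, with each reader stopping wherever is relevant — can be made concrete with a toy parser. Every field name here is invented for illustration; none of this is part of any published visual-meta specification.

```python
# A hypothetical appendix entry: required placement fields, then optional
# presentation detail that a simpler reader can silently ignore.

REQUIRED = {"content", "origin", "position", "facing"}

entry = {
    "content": "chocolate",          # the glossary term that was moved out
    "origin": "document",            # anchor: document, viewer, or table
    "position": (0.4, 1.2, -0.8),    # metres, relative to the origin
    "facing": (0.0, 0.0, 1.0),       # direction the text faces
    "font": "Helvetica",             # optional presentation detail
    "color": "#442200",              # optional
    "shader": "matte",               # optional
}

def read_entry(entry, supported):
    """Keep the required fields plus whatever optional detail this
    reader understands; drop the rest, as a minimal viewer would."""
    missing = REQUIRED - entry.keys()
    if missing:
        raise ValueError(f"entry incomplete: {missing}")
    return {k: v for k, v in entry.items()
            if k in REQUIRED or k in supported}

minimal_view = read_entry(entry, supported=set())          # placement only
rich_view = read_entry(entry, supported={"font", "color"})  # some styling
```

The design point is that both readers open the same appendix: one renders the word in the right place with default styling, the other also honours the font and colour, and neither breaks on fields it doesn't know.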
Mark Anderson: Yeah. So then how would you envisage the cut-off? I mean, I'm imagining storing that much data — well, I just know from taking the, I'd thought, not very much visual-meta in my thesis: adding it came to about 60 pages of text on the end of the PDF. So whether you put all the 3D information in verbatim or reference it as a separate thing is something to consider, because —
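Mark's size concern — inline everything and the appendix balloons; reference everything and nothing is self-contained — suggests a simple hybrid: embed small payloads verbatim, store large ones externally, and always keep a content hash so an external reference stays verifiable. The threshold, field names, and store are all invented for this sketch.

```python
# Decide inline vs. referenced storage for an appendix payload,
# keeping a SHA-256 digest either way so provenance can be checked.
import hashlib

INLINE_LIMIT = 4096  # bytes; an arbitrary cutoff for illustration

def package(payload: bytes, external_store: dict) -> dict:
    digest = hashlib.sha256(payload).hexdigest()
    if len(payload) <= INLINE_LIMIT:
        # Small: travels with the PDF itself.
        return {"kind": "inline", "sha256": digest, "data": payload}
    # Large: lives elsewhere (a sidecar file, a URL); only the
    # hash goes into the appendix, so a fetched copy can be verified.
    external_store[digest] = payload
    return {"kind": "reference", "sha256": digest}

store = {}
small = package(b"glossary: chocolate ...", store)
big = package(b"x" * 100_000, store)
```

This also dovetails with Peter's later stand-off idea: a hash of the official PDF is exactly this kind of verifiable external reference.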
Frode Hegland: That's the question, yeah, exactly. So that is the huge interaction section. I'm talking about a thing where I have a document, I pull stuff out, and then, you know, it is contained there. But as Mark and I were talking about briefly earlier today: when do you snap that connection? That's also another issue, but probably an issue for the space. That's why I would like it if the software — the space, whatever representational engine we're using — would know what this is and what its visual-meta origins are. But of course, this is where we're getting into real-world complexities that we probably have to try things to see. At the least, I should be able to explode a document — because if I can't explode it, use it, and keep the result, then who would want to do it?
Mark Anderson: An interesting thing we're not seeing right now — you describe how you codify the spatial structure: are you thinking that would be retained as a 3D structure? Or are you also wanting to try and map it back onto 2D? Because I suspect that would just add complexity that at this stage perhaps might be put to one side. I mean, in a sense you can take a projection through it and collapse it, but of course then you'll find things sit on top of one another, and you're back into all the sort of 2D design problems that we deal with.
Frode Hegland: Ah — that would be a detriment. Okay.
Brandel Zachernuk : Yeah, sure.
Frode Hegland: It's an extremely valuable point, Mark, because if you have a 2D and a 3D view, they shouldn't interfere with each other either. So the top of the hierarchy of notation probably should say explicitly: this is an augmented-reality view, or this is a virtual-reality view, depending on the background. Let me just open the door for the family — one second.
Mark Anderson: Because I've got one — a piece that's pertinent to this and touches on several things we've covered before. One of the tools I use has a way it will actually show you everything, where basically it plots the network across a spherical face. And in fact, half the time I don't want to see the links; I want to know that things are linked. What I really need to be able to do is drop links in and out visually without affecting the document, because what you're trying to do is understand the relationships — and that's why it works really neatly, in the threaded manner of authoring, not to have all the links in your face all the time. Which is why I think I always struggled with — I mean, I get what Ted was after, and it's a long way back, but it really only makes sense if you're doing what he was doing, which is looking at relative edits of the Gutenberg Bible. It's not going to help you work out which is your more recent shopping list, or is unlikely to. But the notion of the lines — I think the visual part made sense at its time, and now we have the technical affordances to do better. Now I think of it more as having the affordance of being able to show, or to compute, or to use even just the inference of the line — let alone whether it's actually been codified — to help you understand the relationship of objects. That's tremendously powerful, but it's not something many tools are doing. Peter, sorry, I cut you off again, OK?
Peter Wasilko: I have a couple of thoughts. One, going back to the possibility that they might not include visual-meta in this year's conference proceedings: maybe we should introduce the idea of stand-off visual-meta. We could publish our own separate visual-meta for the conference papers, using a hash of the official ACM PDF of each paper, and then attach the rest of our visual-meta to it, to make up for them not including it in this year's proceedings. Also, I think it would be really nice to introduce the notion of an existence proof as an element in our VR world and our bibliography. Specifically, I'm currently reading a book called Fair Management: The Story of a Century of Progress, A Guide for Future Fairs, which is a history of the 1933–1934 Chicago World's Fair, and a very detailed description of how the actual fair-management operations went. One of the elements buried in there is a reference to the fair management team having created a thirty-five-foot-long list of items that needed to be done to bring a full fair to execution. So we know that somewhere there is a list that runs thirty-five feet and includes every single element needed to create a new world's fair.
Peter Wasilko: Presumably some of them wouldn't be relevant in the future, but we know that this list exists. It has no formal citation in the text — merely that it's somewhere out there in the world — and we can hope that the people managing archives connected to that fair have preserved that thirty-five-foot list somewhere. It would be really nice if we could create a representation of the fact that we know this historical artifact existed at one point in time — and that if anyone can ever find it, we'd really like them to link back to us and tell us how we can get to it. The fact that there's an existence proof alone has value to the academic community, because now some scholar interested in the history of world's fairs, knowing that this thirty-five-foot list exists out there, might start researching the problem of how to find it and ask people the right questions — or maybe someone writing their memoirs would find out that there's a list out there that people are interested in.
Mark Anderson: It's sort of like an unpopulated endpoint. Well, it has a definition as an endpoint, but it doesn't attach to anything yet, because the thing has not yet been found to make that attachment — if I understand you correctly.
Frode Hegland: Yeah. On the first point, though, Peter — sorry — we don't need to have stand-off visual-meta, because in Reader, if you open a document that has a DOI on the first page, it uses Scholarcy and Crossref to try to find the BibTeX for that document, and appends it for you automatically if you approve it. So that part is very good — thank you, that's really, really easy. The only reason we did it officially was, you know, for the official flavor of it, but also hoping that the next stage would be other metadata, such as headings, which of course we cannot get through that method.
Mark Anderson: Just quickly — I recall when we were last talking and Bob Horn was here: Scholarcy was the name I was trying to remember. He'd been in a meeting where it was mentioned, because he was talking about summarization stuff. And it occurs to me that that was the name I couldn't remember — they're doing summarization.
Frode Hegland: Yeah, they do absolutely amazing stuff, and Phil is a really, really nice guy and very
Mark Anderson: So you might want to just point Bob to that, then. I mean, I'm happy to do it, but, you know — you could probably give a better description, and you actually know the guy who runs it.
Frode Hegland: Yeah, yeah. Let me give you the URL. I'm posting it in the chat there — but yeah, there it is.
Oh yeah, cool.
Brandel Zachernuk : Oh — one of the things that I was wanting to say about 2D versus 3D is that HTML hypertext, for all its flaws, has one thing that has been really useful: it actually doesn't have explicit encoding of the position of elements within it. People can hint and nudge, and you can turn on an absolute coordinate system, but the ability to create constraint-driven layouts — the so-called responsive web — depends on not having those things in there. So one of the challenges with coordinate frames is that once you start encoding positions, it starts to look like you mean more than you do by having those positions in space — because there are times when it doesn't matter where things are, and there are times when it does. And I don't think there are many domains in which people have a solid representation of when, for example, positions matter and when they don't. I think that's an interesting question in the context of spatialized information being manipulated and navigated in 3D, because I think there's a pretty clear case for both.
Brandel Zachernuk : Whereas: I very much want this to be here — versus: it just has to exist; it has to be in a place, but it's not that big a deal to me where. In 3D modeling, everything Pixar does, they really want to make sure the gravy is not just the right color but in the right place; there's no laissez-faire attitude about it. So, yeah, I think text is really interesting, and information is perhaps, if not unique, then at least the first to be in a position where positions may matter but may not. So there's this concept of potentially weighting when things matter like that and when they don't, and being able to configure views based on how much you want to accede to the expectations and demands of the document versus when you want to take it over.
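Brandel's distinction — "I very much want this to be here" versus "it just has to exist somewhere" — can be sketched as a layout resolver: items whose position was authored are pinned and kept; the rest reflow, the way a responsive web layout would. This is an illustrative toy, not any existing layout system; all names and the row-layout rule are invented.

```python
# Pinned items keep their authored position; unpinned items are laid
# out by a simple flow rule, so only the positions that were meant
# survive a re-layout.

def resolve(items, flow_origin=(0.0, 1.5, -1.0), spacing=0.3):
    """Return a name -> (x, y, z) placement for every item."""
    placed, cursor = {}, 0
    for item in items:
        if item.get("pinned"):
            # "I very much want this to be here."
            placed[item["name"]] = item["position"]
        else:
            # "It just has to exist" — flow it along a row.
            x0, y0, z0 = flow_origin
            placed[item["name"]] = (x0 + cursor * spacing, y0, z0)
            cursor += 1
    return placed

layout = resolve([
    {"name": "title", "pinned": True, "position": (0.0, 2.0, -1.0)},
    {"name": "note-a"},
    {"name": "note-b"},
])
```

Recording the pinned flag alongside the coordinates is one way an appendix could avoid "meaning more than you do": a reader knows which positions to honour and which it is free to recompute.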
Frode Hegland: So that goes directly to you walking and thinking in your garden. Let's say that in your garden you have a future text slab — a wall — and you want to put up relevant things from what different people have written in this conversation. That's a case where location really matters.
Brandel Zachernuk : Yes, exactly.
Frode Hegland: Yeah. So I guess in a case like that, it would be removed from the original document and it would be owned by the new environment — but it would have the citation information of where it came from, right?
Brandel Zachernuk : Yeah. I mean, I'd love to be able to share it with people — to say, I'm willing to dig up parts of my garden for them — but they'd have a hard time making sure the relationships are obeyed. So there are relative frames, where somebody may want to put down: where is your anchor for this, or your anchor for that, in order to be able to repopulate it.
Frode Hegland: Yeah. And also the ability to hide stuff — the more I think about it, and we talked about Nelson's links — hiding becomes more and more important, because it becomes so messy.
Mark Anderson: OK, well, it's funny — as you were talking I could sort of see you, in your large space, being able to put your hand into a pot of information, literally throw it against the wall, and out comes the thing you held plus all the things attached to it by links, which it would pull forth — but they wouldn't necessarily all be rendered in the observed space, in the view-spec sense. Then, with that same dataset, being able to cut it into, basically, a left-right timeline for the sake of argument, or to bend it, to move the people and things into different arrangements — which to me is very much a view-spec manipulation. But underneath you've still got your skein of information. Which says to me that the resting state, where it's not being manipulated by a view state, probably has to be recorded, even if it has no intrinsic worth. In other words, if you don't, the only thing you're ever left with is where the view spec thought everything was — which is fine, but probably not where you want to get back to. Otherwise you always end up in a render, which I think is unhelpful, because the render implies something your mind is now not thinking — whereas you want —
Mark Anderson: You want to be able to go back and think: no, no — my space, my layout. And the other thing is, we're living in the shadow of an awful lot of perhaps misused force-directed graphs, which is something we're going to have to fight through. I think it does two things. One is that it gives us mental ideas of how things might look which are based on no more than the algorithm used to render them — difficult enough if you understand how it works, even more difficult if it's still just magic. But the thing I think it really derails is the small-h hypertext view of things, with the explicit or implicit linkage between things and being able to view those relationships — rather than some gnarly, hard-edged algorithm saying: right, you will go and stand over in that corner because your weighting is less than this. And that's not really, I think, how we think.
Frode Hegland: Right. So these different spaces — whether it's about the thing or about the room — that's of course very important. And what I'm wondering is whether our recent discussion of tables of contents and covers comes into this, because maybe the visual-meta should only record stuff about itself. So if you take this out and move it to the side and do all of that, you're making a new shape of this digital thing, and it's completely legitimate to save that as visual-meta — so when you open the document again, it has all the things sticking out, or whatever it might be, right? And that should be different from putting it against the wall. So maybe we need an action: let's say we have a glossary term and you move it over there — that's one thing. But if you tear it off and slam it, so to speak, then it's owned by the environment. These kinds of thresholds will be interesting. Brandel, I'm sure you have perspectives on that.
Brandel Zachernuk : Yeah, there are some really interesting investigations of the way our expectations of inertia, mass, and threshold speeds work. There's a really big game for PC VR called Fantastic Contraption — sort of a play on the older game The Incredible Machine, which itself was sort of Rube Goldberg. I don't know what British people call Rube Goldberg devices — you know the thing I mean.
Mark Anderson: Yeah, I remember the PC game of the, you know, the incredible machine, right?
Brandel Zachernuk : So Fantastic Contraption was a Flash game in the aughts, and Colin Northway, the developer of it, was then, I guess, contracted or something to build a virtual reality version of it. And it's gorgeous — a wonderful experience. Unfortunately, I don't think it's been ported to Quest yet, so you need a PC to play it. But one of the things that was really neat about it is you move stuff around, and if you threw something, it would just go over the side of the island you're on and disappear out into the world. I never tried to explore the threshold between heaving stuff away versus positioning it — whether there was some velocity cutoff; I guess there could be. But yeah, I think the ability to ascribe certain semantic meanings to certain velocities, and to relationships from one space to the other and projections thereof — those are all really fertile pieces: what makes sense and what's natural, versus what's learnable and useful within the expressive constraints of the system. The other thing about all of these things is that a lot of people will swear black and blue that if it's not easy to do immediately, then it's not worth doing. But I just think about how unintuitive Photoshop is and how much value people get out of it. And if you're going to make a Photoshop for text, then maybe it's going to be a bit difficult.
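The velocity-cutoff idea Brandel raises — the speed of an object at release deciding whether the gesture meant "place it here" or "throw it away" — is easy to sketch. The threshold value here is invented purely for illustration; a real system would tune it against user testing.

```python
# Classify a hand-release gesture by the object's speed at release:
# gentle release -> "place" (keep the authored position);
# fast release   -> "throw" (hand the object over to the environment).
import math

THROW_SPEED = 1.5  # metres/second; an illustrative cutoff, not a standard

def classify_release(velocity):
    """velocity is an (x, y, z) tuple in m/s; returns 'place' or 'throw'."""
    speed = math.sqrt(sum(v * v for v in velocity))
    return "throw" if speed > THROW_SPEED else "place"

gentle = classify_release((0.1, 0.0, 0.2))  # setting a card down
slam = classify_release((2.0, 1.0, 0.0))    # slamming it at the wall
```

The same shape of rule could carry Frode's earlier ownership threshold: a "place" keeps the item in the document's visual-meta, while a "throw" transfers it to the room.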
Frode Hegland: Yeah — I can't actually comment on how difficult Photoshop is, because I started on Photoshop version two, and it wasn't very complicated then. Feature creep in Photoshop has been positive, because it actually gives us further capabilities. But your point, of course, stands very strongly, and I think this will have to be, in some way, one of our real experimental things. And Adam, I expect you're listening to this — some of the things you talked about, hands and gestures. You know, with the Ted Nelson links, there's no reason we can't, say, click here to get scissors and cut them if we want to get rid of them. We can play with really banal ways of doing it. And then maybe, you know, you stare at something for too long and it starts burning, or whatever — something useful will come out of that. But we have to define new closeness and distance in this space. And it's just insane what the four of us — I'm talking about the four in the room now — get to actually think about. Because if you go back 20 years, this was pure speculative science fiction, and now we get to look at actual implementations. It's just super wild.
Brandel Zachernuk : Hmm, yeah. That was the thing that struck me when I was looking through Bob Horn's book over the weekend and seeing the bits on virtual reality and the ways it might be usable — the fact that he says, well, wireframe is possible now, and higher-end systems are able to render things in technicolor. It's by no means trivial to produce something like that even now, but I did do it, and it was a matter of weeks, not months — and that's one person just sort of chipping away at it. So a spirited, concerted effort from a team of people, with a combination of tools and talent, would get a great deal more done. It's funny — I have yet to go through more of the ACM Hypertext proceedings, but it seems like — and forgive me — a lot of the appeal of what hypertext is has kind of fallen off for large swathes of the population just as these technical capabilities have actually started to appear; it's become unappealing maybe by virtue of the fact that it's now possible.
Frode Hegland: Yeah, that's absolutely what's going on — it's funny: when something becomes real, it's no longer magic. It's so easy to think and fantasize without repercussions; now the repercussions are looming so large. But one thing, Brandel — maybe we can get together during the week and write something, a reply or something, on how visual-meta in these environments would be constrained around the object itself, forgetting the rest of the room for now, because that probably isn't a visual-meta thing, it's other stuff. But then we combine the two things: making a cover like a sculpture — we take the table of contents out here, we do this, that and the other; it looks absurd as a 2D thing, but in the room you can really see what it is. And the other thing we talked about: how can we encode the author pointing, turning pages, speaking — all of that stuff? In what manner could that be encoded? Because even if the actual audio is stored on a server, if the transcribed audio, and a reference to where it is, is in the visual-meta, that means we can have incredibly rich representations where that's possible, and whatever else elsewhere. I don't know — does that sound like something interesting to you?
Brandel Zachernuk : Yeah, absolutely. It's very similar to — I don't know if you've seen Chalktalk? It's a temporal thing; it's what I asked Barbara about. Very, very interesting as a pedagogical tool, for the most part, with a fairly explicit classroom bent — it's for discovering, producing and relaying things, more or less along the lines of: you have stuff in your head and you're kind of conjuring aspects of it. But I think it would be interesting to think about what Chalktalk is for a book — what it is for being able to introduce those things.
Frode Hegland: Can you put it in the chat? Sure.
Brandel Zachernuk : Yeah, thanks — I'll find it. He's done a bunch of presentations of it; I mean, there's Chalktalk in augmented reality. But it has remained thin. Conceptually it's very appealing, very exciting, but in terms of the actual content you can get through with it, it has remained thin. And that's always dangerous, because it means that nobody's come up with a good enough reason to expand its use case into something that somebody really needs to do. Well, the thing I'm —
Frode Hegland: Sorry, please go on.
Brandel Zachernuk : Sorry — oh, it just worries me, because for the most part it's only when tools are good enough that people can use them for their own thing that they take off and do stuff. I've been comparing different declarative models for how people are trying to envision something a little bit more document-like, and less imperative-programming-language-like, for the 3D web. There was one called A-Frame — and I know the people well who make it — but aspects of it didn't feel right. And in terms of the stuff people have succeeded in making with it, it doesn't look great, which is suggestive that people who have tried found it doesn't solve their needs. And there's one that I like a lot less, based on React — the thing Facebook is powered by, for the most part — but it really does appear to be letting people make things that are good, so I may grudgingly have to accept they have a point. But yeah, Chalktalk is much more the former, in that it's very visually impressive, but I'm not aware of anybody other than them having succeeded in making a go of it for doing anything. For the most part it's largely in the Ivan Sutherland camp — I don't know if you've seen Ivan Sutherland talk about Sketchpad. He says the only useful things ever produced in it were the drawings for his PhD and some hexagons his mother wanted him to make. It was terrifying, as a maker of creative tools, to hear that. I mean, he's aware of, and I think has made peace with, the fact that his claim to fame is inventing computer graphics and virtual reality — and that's probably good enough — but it would have been nice for the tool to do something good.
Mark Anderson: It's "what a lovely drawing, darling" — straight onto the fridge, and then they go and see what they were doing earlier. I don't —
Brandel Zachernuk : I don't think he ever found out what the hexagons were for, no. But I do think —
Mark Anderson: One of the reasons I sent you The Home Computer Revolution, which is hardly seen these days, is that it occurred to me that a goodly number of things in it came to pass. So that might be an interesting, exploratory sort of demo-type thing. You know: here's the sage of Sausalito, so how did it go?
Frode Hegland: So the thing about this, sorry, I'm just bulldozing you and Adam here, Brandel: you like to do demos and you don't like to write, and that's absolutely fine. But look at Mark's Scottish trousers, they're hilarious. Sorry, that's just the comedy interlude; we need it. Anyway, right. I think what can happen here can be really, really useful for people in the future, and I don't want us to just have a few YouTube videos of VR. That's what you've been doing, Brandel, and that's fantastic, but I think more of what you do needs to be recorded for the long term. So I can imagine that you go in and you do a talk on what you're doing, starting from an initial document. The document could be as short as a one-paragraph summary of what it is. It could even be: I'm looking at this thing, I don't know what the heck it is. But then your interaction with it can be stored in a way that is quite frozen for the future, for people to go back to in different ways. That's why I think we should try to work out a specific way of putting it like that.
Mark Anderson: The good thing is, obviously, if you speak, then your words will forever be slightly badly machine-transcribed into text for you.
Brandel Zachernuk : Yes, no, I'm aware of the benefits and flaws of that. I don't know if you ever saw the auto-detect thing, but it's a very simple thing that is almost useful for me. It should still work. OK, so it's this thing here. It only works in Chrome; you can have that on Mac as well, but Safari hasn't yet exposed the speech API, or maybe doesn't intend to, I'm not sure. The idea is that if you click record and then say stuff, it comes up, and when you go below a threshold volume it stops recording and starts another section as it finds it. Then you can click on those sentence fragments and hear them, so if there are any transcription errors you can go in and fix them, because the text is contenteditable. I found that to be a useful halfway point between having the ability to speak and get something out, and completely losing the thread when things are mis-transcribed, because there are a lot of times when the transcription of one or two words obscures the entire meaning.
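[Editor's note: a minimal sketch of the segmentation idea described above. The real tool uses the browser's speech API (Chrome's webkitSpeechRecognition); this sketch only illustrates the volume-threshold grouping, and all names are invented, not the tool's actual API.]

```javascript
// Group transcribed word samples into clickable fragments: a new
// fragment starts whenever the measured volume drops below the
// threshold, i.e. the pause between utterances.
function segmentTranscript(samples, volumeThreshold = 0.1) {
  const fragments = [];
  let current = [];
  for (const { word, volume } of samples) {
    if (volume < volumeThreshold) {
      // Below-threshold volume ends the current fragment.
      if (current.length > 0) {
        fragments.push(current.join(' '));
        current = [];
      }
    } else {
      current.push(word);
    }
  }
  if (current.length > 0) fragments.push(current.join(' '));
  return fragments;
}
```

In the real tool, each returned fragment would be rendered as a contenteditable element with its audio span attached, so a mis-transcription can be replayed and corrected in place.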
Mark Anderson: Is amazing how little is needed. That’s what I found when I tried. I tried using it because I have a really slow typist. I tried using dictation software. And the trouble is, it lulls you into sense of security. And so you keep on talking without checking, and suddenly you’ve got two paragraphs of I can’t even figure out what it’s exactly.
Brandel Zachernuk : And so you're forced into one of two modes: devil-may-care, where you get what you get, and whether you can make use of it is up to the winds of time; or you work in a defensive mode where you're constantly verifying. Neither of those is particularly usable. So this is hopefully a hedge between them, where you have the specific sentence fragments that you can click on, and hopefully that's enough context restoration that you have some sense of what it might have been, even if it didn't record the specific thing itself. The reason I bring this up is that I want to put it into AR, obviously, my solution to everything. One of the next things that would go along with it is the ability to encode the gesture that accompanies the speech, as well as the ability to put that speech in places, move those things around, and have an accompanying gesture. Potentially, to take a leaf out of Perlin's book with Chalktalk, maybe draw things, but that's by no means a necessity for it to be worthwhile. That's one of the things I'm envisioning. I bring it up in the context of your asking about a book or a document, because Chalktalk starts from a blank slate for the most part, rather than having materials that he intends to relate to.
Brandel Zachernuk : There are materials that implicitly exist, insofar as he's got that sort of bird that he draws, and then it walks around and does cool stuff. But he doesn't have an explicit palette that he's drawing from. So if you're presenting a book or a document, a corpus with an explicit agenda, it might be possible to have some kind of representation of that, both from the perspective of user expectations and from the perspective of making it easier to alter an environment, so you can pull this out, pull that out, perform this at this time. Because the issue with Chalktalk as it currently exists is that he's got the bird, then he's got the time series, the pendulum. Sorry, you haven't seen it yet. But he doesn't have a million things; he doesn't even have a hundred things. So you need to be able to manage the scale and complexity as it grows beyond a toy into an arbitrary presentation device. Do you have specific, uniquely identifiable gestures for those things, or do you have to have an explicit palette to make use of them? So yeah, really interesting. We're really limited in terms of the explicit scope that it can address at this point, and I think there's something really interesting within the context of virtual presentation of a Visual-Meta-powered sort of component.
Frode Hegland: I just put a name in chat that I'd rather you don't speak. Sure. So this is someone who has been very supportive of the work over the years, and is in art, so that narrows it down a bit. Imagine this person with a new book, because he's very much into tech; with the first iPad books, he was one of the first to do something interesting there. Imagine having him in as an avatar going through his book, maybe even reading it for you, not just audio, but turning the pages, highlighting things, right? If we can manage to find a way to encode that, that can be amazing, right?
Brandel Zachernuk : And the authorship aspect of it, how onerous it is. You know, think of the middle-of-the-road and downright poor director's commentary that accompanies DVD features. I think if you had a more amenable context in which people could relay thoughts and anecdotes and create artifacts and pull them out, there are some really, really rich opportunities there. And I don't just mean in the commercial sense: the things that people can do based on the things that exist in their spaces, and the pieces that they have as a consequence of the other media they've created. So yeah, that would be really exciting.
Frode Hegland: And also commentaries, yes. I mean, just imagine a nature documentary, and, you know, a lot of people watch movies in Oculus now, being able to literally go behind it, right? By the way, Peter, before you speak, I have to tell you something. I bought the ultralight flight simulator on Oculus, which came out yesterday. I spent 15 pounds, something like 20 dollars, on it. It lasted 15 seconds: I got completely sick and had to take my headset off.
Mark Anderson: It jumped off the roof. Didn’t go well. Yes.
Frode Hegland: No. I mean, in VR, if I don't move, I don't want the world to move. So I think we're on a similar page. Anyway, Peter, what did you have to say?
Peter Wasilko: Yeah, another unpopulated-endpoint thought: maybe we could find some work done in the area of stage direction, from the theatrical community, that could be applied to describing how we want avatars to move around, in a complex and compact format. I'm assuming that there might actually be, somewhere out there, a taxonomy and standard phrases that directors use and embed in screenplays to describe the kinds of motions and gestures that you might want to apply to an avatar. And if we had that language, then we could map it to skeletal motions in a standard stick-figure VR model that could then be applied to any of our avatars. But I don't know if anything like that exists. If we knew someone who was involved in the staged theatrical community, we might be able to ask them, and they might say, oh, there's a standard reference book that talks about exactly that problem, here's what you need to go read. So it's an unpopulated endpoint, but if we could find someone who overlapped that area, we might be able to get a good pointer.
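[Editor's note: a sketch of what Peter's proposal might look like in data, assuming such a stage-direction taxonomy existed. Every direction phrase, joint name, and keyframe value below is invented for illustration; the real vocabulary would come from theatre practice, as he suggests.]

```javascript
// Hypothetical mapping from theatrical stage directions to skeletal
// gestures: each entry names the joints involved and rough keyframes
// (joint rotation/translation targets over normalized time 0..1) that
// a stick-figure rig could play back on any avatar.
const stageDirections = {
  'crosses downstage': {
    joints: ['hips'],
    keyframes: [{ t: 0, hips: { z: 0 } }, { t: 1, hips: { z: 1.5 } }],
  },
  'gestures broadly': {
    joints: ['rightShoulder', 'rightElbow'],
    keyframes: [
      { t: 0, rightShoulder: { pitch: 0 } },
      { t: 0.5, rightShoulder: { pitch: 1.2 } },
      { t: 1, rightShoulder: { pitch: 0.3 } },
    ],
  },
};

// Resolve a free-text direction from a screenplay to a playable
// gesture, or null if the taxonomy has no entry for it.
function resolveDirection(text) {
  const key = text.trim().toLowerCase();
  return stageDirections[key] ?? null;
}
```

The point of the compact format is exactly this indirection: a screenplay-style annotation stays human-readable, while the skeletal keyframes it maps to can be retargeted to any avatar rig.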
Brandel Zachernuk : Absolutely, yeah, that sounds really productive as an opportunity. Also, my brother was one of the lead visualization artists on a couple of small indie flicks: Lord of the Rings, and Batman, and Harry Potter, and Bridget Jones: The Edge of Reason, for some reason. So he has pretty good experience of all the different pieces of the way movies go together, and the kind of collateral that's produced there. I'm not aware of whether he's been involved in DVD productions of the sort of meta-information of a film, but there are a lot of opportunities there.
Frode Hegland: Just on that, that's very impressive. Many, many years ago, when the whole DVD thing was happening, I wrote a letter to the DVD standards organization in Japan, suggesting they add a thing whereby you could film a scene taking exactly the same amount of time, but from slightly different angles or with slightly different dialogue, and the player would randomly play one of them; you wouldn't have any control. It's based on the joke where two friends are watching a cowboy movie. One says, I bet you five dollars that horse is going to fall soon. Fine, I bet you five dollars. The horse falls, and he says, I'm sorry, I have to admit I've seen the movie. And the other guy says, yeah, me too; I didn't think the horse would be that stupid twice. Right? But that could happen. But, you know, to go to the thing about multimedia production, like the work with your brother, that's absolutely amazing, huge budget. If you're able to do it with just yourself, with just your gestures and your voice, like we're talking about here, that is literally transformative. You know, imagine some of the great teachers in history saying, here's my book on such-and-such, let me tell you about this. Oh, by the way, let's just put down the book and I'll talk it through while the book is here. It's amazing. I've got to cut off pretty soon too. But yeah, Brandel, maybe I try to write something and I send it to you.
Brandel Zachernuk : Yeah, and I'll read the thing that you sent through earlier as well. I think we're coming toward a little less of the talking in circles that you mentioned, I hope, and I feel like there are some really interesting directions that some of these things are pointing in. If you'd like, I can mock stuff up with some positions and things like that. But if all you're talking about is positions, then it's just Visual-Meta plus position. And it strikes me that that's actually simultaneously too much and too little information for a range of needs, and so a better enunciation of those needs is at least as useful as the solution that a specific representation would give. And thank you, Mark, for your discussion of credentials. I will say it's not that I'm worried; I have pretty good commercial credentials. It's just that the relevance of those credentials to the context of the discussions we have is the thing I find challenging. It's like: I worked for Coca-Cola, and I also think about text.
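[Editor's note: "Visual-Meta plus position" is easy to sketch as data, even though Brandel's point is that position alone is both too much and too little. Purely as an illustration; none of these field names are part of any actual Visual-Meta specification.]

```javascript
// Illustrative only: a cited fragment annotated with where it sat in a
// virtual room, so a future reader could restore the spatial layout.
// Field names are invented, not part of the Visual-Meta specification.
const annotatedFragment = {
  citation: { author: 'Hegland', title: 'The Future of Text', year: 2020 },
  spatial: {
    position: { x: 0.4, y: 1.3, z: -2.0 },   // metres, room-relative
    rotation: { yaw: 15, pitch: 0, roll: 0 }, // degrees
    scale: 1.0,
  },
};

// Serialize for embedding in a document appendix, Visual-Meta style.
const serialized = JSON.stringify(annotatedFragment);
```

What this omits, per the discussion, is everything position alone cannot express: the gesture, the speech, and the intent behind why the fragment was placed there.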
Mark Anderson: I'm just conscious also that two of us are wading through the nightmare of a thesis, and so it crops up only because it's what's in the hopper at the moment.
Brandel Zachernuk : Absolutely, absolutely. Yeah.
Peter Wasilko: Great session, guys, on Friday.
Frode Hegland: Yeah, thanks. Very good. Thanks, everyone. There are layers and layers to talk about. I think we may not be getting close to a project, but we're getting closer to understanding what the room is, the room where it happens.
Brandel Zachernuk : I'll share the link to Horn's book as well, so everybody has it, because I think it's great. Yeah, it's on the archive. I'd love to check with him on whether he has a preferred way of people getting it, because it seems to be out of print. And if he's OK with people pointing to it, I'd love to tweet about it and some of the observations from it. But I can email him directly.
Mark Anderson: You know, so many of my books come from university libraries, which says something in itself.
Frode Hegland: His book is right there, but that doesn't help this discussion. If it were virtual, we could have this discussion all over again. But right, exactly. Another really important thing: I think we need to make this a core focal point for what we're talking about, and I know you agree on that, but I think we also need to invite more people. We talked about Maggie last week, and it would be great to be in dialogue with her. But if there are other people who you feel are doing great work in this, just get them in. We'll put them on a pedestal and really listen to them in good dialogue, because in a year, if we have a bit of a real… I mean, I've always seen myself as an artist, a creator, and all of that stuff, but that's not what I can give the world right now. I have to just help get the right people in the room and record what they say, right? So we had this spreadsheet and we had lots of talks. But if there are any people for this, just feel free to invite them without running it past anyone else. Just make it happen.
Brandel Zachernuk : Yeah, sure. I appreciate it, and thank you for having the nerve to invite Lanier; I've never had the nerve to strike up conversations with people like that, but I think it would be fun if we did. There are some folks who have actually reached out to me explicitly about this sort of idea of text, and I'm going to chat with them one on one and see if they want to come along at some point as well, because that's very comforting.
Mark Anderson: Also, as it's beginning to come into a slightly tighter frame, in terms of something we might at least try to put some vision to: it's possibly easier to bring some people in and then focus it around that, and allow someone to either enjoy it, add to it, or even just pull it apart, because they've seen things we haven't. And that could be tremendously useful, too.
Brandel Zachernuk : Yeah. Yeah, yeah.
Frode Hegland: Wonderful. Got to go. This is wonderful. And let’s keep at it. Bye, guys.
Mark Anderson: Thanks very much. Thanks.