Chat log: https://futuretextlab.info/2022/01/28/chat-28-jan-2022/
Frode Hegland: Was just going to text you a playlist. We just have to hold the meeting like this. Yeah, hang on, I have to turn off the blur.
Mark Anderson: That’s your music, not mine. No, no, no, no. It is. I’m going to stop. That’s too much. I can never get the lighting to work in this house. Yeah, it’s interesting. I had a good session yesterday, actually. The one thing I can’t do is make it work with any of my glasses. There’s not room to fit them with the headset on. And funnily enough, if I hold the headset really close in, I can get it in focus. So I find I need to do it in small chunks. It’s not the thing Peter mentioned, it’s not the motion. It’s actually just that reading stuff that’s slightly out of focus gets a bit much.
Frode Hegland: I’ve had a problem with that, too. I ended up getting the click-in lens things.
Mark Anderson: All right.
Frode Hegland: And they cost £100, which was obviously a bit much. That was annoying. And it helps a bit, but it doesn’t help 100 percent, because it’s still a little bit off. Maybe it’s focus, and maybe not.
Mark Anderson: Same here. I basically wear what they call computer glasses, which are the sort of gateway version of varifocals. So there’s a slight difference: this and this are different. I mean, they’re not insurmountable problems, but I’m recording, logging these things as I go, because I think it’s a takeaway. I learnt from years of working on systems that were, you might say, incomplete designs: inside the beast was something that did what it was supposed to do, but did it in a most horrendously human-unfriendly fashion, a sharp bit sticking out. And what happens is, someone says, well, we’ll sort that out in Mark Two. And of course, they never do, because that’s now another department, who don’t care either. It’s one of those marvellously somebody-else’s tasks.
Frode Hegland: Yeah, this is, of course, what’s going to be a big deal with the Apple device, this stuff.
Mark Anderson: You see, I carry these around, you’ve seen them before. They’re French, and they do them in half steps from one to three, which covers most people. For those of us in late middle age, you know, with reading-distance degradation, you can get these out on the tube or something just to read a notice. That’s what they’re there for, or reading labels on the back of tins in the supermarket. You can get them in other shapes and things, but they’re about ten quid apiece. So, in a sense, why can’t you get cheap reading correction here? If you’re wildly astigmatic or something, that’s a whole different ball game. My mother, she’s plus or minus ten, and my wife’s eyesight is really bad; you wouldn’t expect most things to cope with that. But for a majority of people, basic reading-distance correction is something we’ve had cracked for years. So it’s a really interesting design oversight, because it wouldn’t have been difficult to build in a bridge to literally just drop in some corrections, which probably would have increased the potential audience by a significant proportion. But, you know, all public service.
Mark Anderson: Hi, Adam.
Frode Hegland: Hey, Adam, do you have perfect vision or something, Mr. Show-off with no glasses?
Adam Wern: Nope. I use lenses. Contact lenses.
Mark Anderson: So we were discussing the inability to get this to work with these, which on one level is first of all a problem, and, you know, this is early stage. But I was just musing on the fact that you can easily buy these marvellous French readers, which are even smaller than those, and they eventually break in the middle, but for a few euros. That means I can read the labels in the supermarket if I forget my real glasses, and I would have thought that would have been provided for, because they do between 0.5 and 3.5 in half-dioptre steps, and yes, there’s no allowance for different eyes or that kind of thing, but for a lot of people it’s good enough. Because I found, in the very limited use, I was getting eyestrain; I wasn’t feeling any of the nausea, though I can quite understand that some people do. But then again, I spent a lot of my youth at sea, so I probably passed through that stage. Actually, funnily enough and pertinent to this, I remember going to a Tinderbox weekend in Hanover, and someone kindly took us to a ship simulator where people come to learn how to basically drive supertankers and things. One of the programs they have is coming into Hamburg Harbour, which, like Rotterdam, is big, complicated and busy, and one of the programs is for tug drivers. So we were stood inside this thing with screens all the way around, and you stand on a little dais which is effectively the bridge, and they said, right, let’s turn the weather up a level. So we were in a snowstorm in Hamburg in failing light, and some of the people with me clearly were beginning to feel sick just watching the screens. So I totally get that there is a gradation of that effect.
Frode Hegland: Indeed.
Adam Wern: So did I get it correctly, Mark, that you have an Oculus and have tried it now?
Mark Anderson: Hmm. Yep, and I’ve got it all working. Another interesting thing: because both bits came together, I happened to open the box that was on top, which is the smaller one, the battery pack, and I very nearly ripped part of it off, because the instructions, well, no one had thought about that. The diagram of the back headpiece looks not dissimilar to the face mask, so basically I was trying to remove this when I should have been removing that, which was something in a different box. The person who had written the instructions hadn’t actually thought through the problem and hadn’t said, step one, get hold of this bit. That’s another interesting bit of poor industrial design.
Adam Wern: And I don’t get it. They spend so much money on engineers and on the engineering side, and this is kind of low-hanging fruit: a better package, involving good interaction designers. They should put just as much money into that. But for some reason, management is blind to that kind of thing.
Mark Anderson: Well, it’s a bit like, you know, if I was funding stuff now, I wouldn’t fund any group that didn’t have an ethicist, who wasn’t a coder, paid to be in the room, so that all the time there’s someone leaning over the shoulder saying, look, as long as you do that, as night follows day, something bad will happen, so fix it now. Don’t punt it down the line for somebody else. And in the same way, if you’re doing the industrial design, someone will wheel up to you and say, you know, those steps really aren’t going to work for many of your customers. You need that, and the tendency is there in us all, because we elide away the things that are inconvenient; I just want to get the thing done. And that’s an interesting part of this whole thing. One of the first things I did was actually cast the picture onto one of my screens, because I wanted my wife to look at it, because I knew she’d have problems; she’s got very poor eyesight.
Mark Anderson: That’s quite interesting. This is someone who’s been a signals officer and then a corporate lawyer all her life, sat on multinational pension boards and things, so not exactly a slouch. But she basically said, yeah, it’s like a fun toy, but what does it do? The only thing she did notice, because she did actually try it without her glasses, was that it was a little bit blurry. She liked the onboarding room you turn up in, so that was pretty cool, and I got to look at the boundary box, which I think is really nicely done. But it’s an interesting thing, because I’m thinking, well, what if I can’t communicate it to someone who is a perfectly educated grown-up? Bearing in mind, the gist of the conversation was: this isn’t games, this is technology we will be using, not necessarily in this form, but in the future. So I think I’ve got some explaining still to do, but that’s a nice challenge to have.
Adam Wern: And I’ve tried it with very iPad-literate kids who have never really been in 3D worlds, and not in the multiplayer thing, so it’s very interesting to see their first reactions to putting the headset on. They grasp it very, very quickly. Things like rays coming out from the hands: they grasp within seconds that the ray coming out of the hand, hitting something far away, is a kind of pointer. And it ties back to what Barbara talked about, that pointing is perhaps even a genetic thing, that we are born with the ability, or very quickly construct a model for pointing. So that was very interesting to me. But we also had a very interesting incident yesterday. Rebecca and I tried out Mozilla Hubs, the slightly more open metaverse thing that Mozilla has been producing. I was on my laptop and Rebecca took the headset, because it’s cooler with the headset, and then I had to leave the room. When I came back, Rebecca was standing on the chair, because we were in a world where a microphone in the virtual world was high up. So she climbed the chair while in VR to reach up and speak into the microphone, to see if it was working. I was so horrified by that, because if she falls, she will be in the virtual world, falling onto real desks.
Mark Anderson: Put a big virtual cushion underneath the chair.
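The ray-from-the-hand pointer Adam describes, a ray cast from the hand that selects whatever it hits far away, is mechanically just a ray-sphere intersection test. A minimal sketch in JavaScript; all names here are illustrative, not from any particular VR engine:

```javascript
// Minimal ray-pointer sketch: cast a ray from the hand and find the
// nearest target sphere it hits. All names are illustrative.
function normalize(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}

// Distance along the ray to the sphere's surface, or null on a miss.
function raySphere(origin, dir, center, radius) {
  const oc = [origin[0] - center[0], origin[1] - center[1], origin[2] - center[2]];
  const b = oc[0] * dir[0] + oc[1] * dir[1] + oc[2] * dir[2];
  const c = oc[0] ** 2 + oc[1] ** 2 + oc[2] ** 2 - radius * radius;
  const disc = b * b - c;
  if (disc < 0) return null;
  const t = -b - Math.sqrt(disc);
  return t >= 0 ? t : null;
}

// Pick the closest of several targets, like a laser pointer in VR.
function pick(handPos, handDir, targets) {
  const dir = normalize(handDir);
  let best = null;
  for (const target of targets) {
    const t = raySphere(handPos, dir, target.center, target.radius);
    if (t !== null && (best === null || t < best.t)) best = { t, target };
  }
  return best && best.target;
}
```

Real engines such as three.js expose this as a ready-made raycaster against the scene graph; the point is only that the interaction the kids grasped in seconds reduces to a few lines of vector math.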
Adam Wern: But it’s really, you know, you really get immersed in that world. When Frode and I did some experiments as well, and I came out of that world, we had been in a conference room, and I immediately went and washed my hands, because I felt that I’d been in a foreign environment. I’m so used to washing my hands when I come out of a conference room or somewhere with people.
Frode Hegland: So thank you for highlighting that it was not just because you were with me.
Adam Wern: Nope. And another thing: I pointed my pointer at something and it got stuck on this chair, and it felt really horrible. I didn’t know what was going to happen, whether I was going to be placed in her lap somewhere, and that would have triggered the awkwardness of real life; it would have been a very intimate thing that neither of us would have chosen.
Mark Anderson: So I made a note that we clearly need a hand basin when you come into your room as well, so you can virtually wash your hands on arrival.
Frode Hegland: This is actually part of the discussion I had with Vint yesterday, from the whole Mysore thing. Because he said, you know, who is who in a virtual environment? And Darren’s book, because he goes back to the 70s and 80s and 90s, saying before the internet, you would travel to a city and you didn’t know who was who. I guess it’ll evolve the same way in VR spaces. In some places, if it’s a meeting room for an office, there will be verification of who is who. So even though you may look like you’re wearing a funny suit, like today I’m wearing a hoodie, and I don’t usually wear a hoodie, that will be the same kind of thing. But then you’ll go to a club, and you can accidentally sit on someone and not know who they are. So social spaces will be very interesting.
Mark Anderson: There’s a degree of culture in that as well, isn’t there. I mean, some people are just happy to rock up and meet people they’ve never met, in the sense that, OK, there may be cues as to who has some sort of higher status in a certain group. But it’s interesting: I can imagine, if you worked in a broadly engineering or quite structured background, you acquire the need to know who is who, because otherwise your karma gets bent out of shape. But actually, it broadly doesn’t matter unless you’re having to interact with someone where one of the four people in the room has special skills and you don’t know which one it is.
Frode Hegland: Well, it’s funny, Mark, because I’ve never met him in real life, but we were in the other room where we walked around and could draw shapes in space, which was crazy. It was a very ugly space, fine, but at some point we ended up being really close, face to face, and it was not very nice, right? Which is hugely interesting; it is that sense of presence. But two other things before I forget. Number one, I’ve pasted in here a link to the Universal Control thing, which went live today. I think that’s hugely important, because I think Apple will leverage lots of different technologies together for VR; the glasses will not do everything on their own, which is very, very interesting. Also, I had one of those oh-my-gosh moments today about what VR is good for, in specifics. It’s not a question anymore. And I looked at two different things. One of them, well, actually, I’ll briefly share my screen, if you don’t mind. So I went into my software called Author, and I did this. I started mapping out us, you know, like Mark Anderson, the journal, the newsletter, that’s Alan here, that kind of stuff. Because the article that Peter sent was interesting, but the whole conclusion was, oh, what’s the point of doing 3D stuff when we can’t even work in 2D, which was really annoying, but fair enough.
Frode Hegland: I should be able to start doing something in 2D that I can imagine we then take into 3D. But what happened quite quickly was, let’s look at Brandel here. What’s the point of me just having him here? If we actually worked in the same company, with departments and stuff, I’m sure we’d both be in some kind of a box. What would be interesting here is if there was some kind of a feed, so if Brandel did something, this thing would update for me. But I think that’s slightly out of scope. So what I thought about then was, and please don’t read this, because it’s just my weird notes. So Adam managed to take the Future of Text book as a PDF into this Mozilla space, and when you went close enough, it was incredibly readable. You know, one of the things Vint said is people won’t go into VR to read. I think he’s completely wrong. I think people will, because one of the magical things that I only understood today is the very narrow field of view. When you move your head, it’s an almost infinite space, but when you do not move your head, it’s a concentration space, which for me, with my issues, is actually really, really good.
Frode Hegland: So that was one thing. But I thought, the key thing is that we all have to look at what kind of work we want to augment, and the kind of work I want to augment, if I’m honest with myself, which is hard to do, is the act of authoring and the act of reading. So therefore, and I really want to know what you guys think about this, what I think should be done is, first of all: currently in Author, concepts are per document. If you start a new document, you can copy the concepts over, but they’re not there by default. So first of all, make the concepts part of the system, not per document. Then add a few, very, very few, things, such as category: you click on that and you have person, location, institution, basically a few tags, but you try to keep them very few. You can allow people to add a tag manually, but make it a bit difficult so they don’t do it too much. And then finally, we have time: if it’s a famous person who is dead, you might as well put in the born and died dates. It could be useful. The point is, Notion and Roam, of course, all do this, and they’re all better than this, but let’s look at it in a really simple way.
Frode Hegland: If these things have categories, and then Adam takes the document, and let’s say he manages to get the visual-meta out, that means that we should soon be able to be reading a document and do things like this: see only the defined concepts, pull them out, and all that kind of stuff. I haven’t defined very much here, but because we know what’s in it, we can start doing really powerful views. So sorry for going on and on, but in closing: if we can help an author, in a normal environment, write and add stuff automatically, so that when they produce the document it is in a form that, taken into VR, you can read in a really flexible way, I think that would be really powerful and interesting. Oh, final thing I forgot: if things are defined concepts, when you point to them the cursor should change to a pointer, even though it’s not a link. We have to find a few affordances like that. But I’m really wondering what you guys have to say on the notion of that kind of document workflow. I’ll even mute myself for a minute.
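Frode’s proposal, concepts held at the system level, tagged with a small fixed set of categories, then surfaced per document, can be sketched roughly like this. The data shapes are assumptions for illustration, not Author’s actual format:

```javascript
// Sketch of a system-wide concept store with a small fixed category set,
// and a view that surfaces only the defined concepts found in a document.
// Data shapes are illustrative, not Author's real format.
const CATEGORIES = ['person', 'location', 'institution', 'concept'];

const concepts = new Map([
  ['Doug Engelbart', { category: 'person', born: 1925, died: 2013 }],
  ['hypertext', { category: 'concept' }],
]);

// Adding a tag manually is deliberately constrained to the fixed set.
function defineConcept(store, term, info) {
  if (!CATEGORIES.includes(info.category)) {
    throw new Error(`unknown category: ${info.category}`);
  }
  store.set(term, info);
}

// "See only the defined concepts": pull out the terms from the store
// that actually occur in a given document's text.
function definedConceptsIn(documentText, store) {
  const text = documentText.toLowerCase();
  return [...store.entries()]
    .filter(([term]) => text.includes(term.toLowerCase()))
    .map(([term, info]) => ({ term, ...info }));
}
```

The same filter also covers the reader-side view discussed below: run it against your own glossary instead of the author’s, over a document you didn’t write.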
Brandel Zachernuk: I think having the concepts defined per workspace rather than per document points to an interesting question, in the context of hypertext, of what constitutes a document and where its limits are. Because it’s a view, and the idea of viewspecs and any other sort of manifestations of it is that there are specific kinds of trails, and in a lot of ways there’s less integrity, and I don’t mean that in a pejorative way, to what is this document versus that document, when you have this loose collection of ideas that you want to put together. The challenge with that is that sometimes you want to say different things about different things. So having something that is a canonical reference to Doug, for example, is challenging unless it’s the kitchen sink. I definitely like the idea of being able to create documents and have a source of truth for things, and yeah, it’s making one’s own hypertext out of the PDF at some level. But at the same time, because of the specificity that one might desire at various times, that’s something to look at and think about the practical consequences of.
Frode Hegland: Just reflecting on that: it is really important that I consider these concepts and glossary terms the author’s point of view, not at all any kind of truth. And what becomes interesting over time, maybe, is you read my document that has this, but you also have your own glossary. So you should be able to say: this document that’s in front of me, that I didn’t author, show me only the glossary terms I have defined in here, because that’s what I’m interested in. And so what I’m looking for here, and I think we’re on the same wavelength, is that we give the system stuff to work with. All right, so here is my defined stuff, there is your defined stuff, we have citations and so on, and then we can build. Because over the last week I’ve been going a little bit crazy thinking, in this VR space we can do this and that and all of that. But as Mark keeps hammering on about, and he’s completely right: where is the data coming from? So how can we introduce data to this in a comfortable manner, where we feel like we’re nicely building when we write? Similarly, when we read. Right?
Adam Wern: And I think one approach that is fruitful here is to not work these concepts into a database or a kind of central store of truth, but to see it more as the sum of all documents, or a part or a slice of all documents. I’ve said it before, but I don’t want my tech references creeping into my theatre references, for example, or only very few of them, and I don’t want to see it the other way around either. For example, many of my theatre concepts, and the embodiment parts of them, really relate to VR, so things from theatre come into my tech world, but not the other way around. And also personal stuff: I want to separate that. So having, maybe that is what you call categories, slices of your document space to draw from, rather than a source of truth, more that they are options to draw from, would be a way to approach it that I think is fruitful and not explored enough in tech circles. There is this idea that you put everything into one big database, and you have been working on the idea of the document as canonical truth. I think your version is a bit more interesting, because it automatically has a sense of time as well: a document was made in a time period, and it may be relevant now, or it may not be. So slicing things by time and category is very useful. Yeah.
Frode Hegland: Over. Yeah, I see your hand, Mark, but just to respond to that: when I was doing this earlier on, I wanted to be able to let the user, if they were a high-school or university student, specify whether it’s for maths homework or music or whatever. Absolutely. But then I’m thinking more about the kind of thinking and sense-making and all of that stuff we’re talking about; that’s why it becomes more unified. But I really don’t mind having a little tag in the corner that says, you know, this is for such-and-such space. I think that’s absolutely a legitimate thing to do. Um, yeah, over. I’m making this map while I’m listening to you guys.
Mark Anderson: Yeah, I have written down about three things. I think my first reflection, probably coloured by what Brandel said, is that one of the problems, of course, is how these things scale outwards. And the truth is, we’ll never really have the time and attention to detail needed to classify everything, and that’s a problem with classifying systems. But I like the concept, and it shades across to what Adam said. It’s not unreasonable to say, well, look, within my personal sphere, which might be all your documents here and out there, but broadly, in some way, yours, this is my own constellation of facts and interesting things. And if they were structured in a way that they could be shared with people, that’s also useful. Because, as was rightly said, where is the canonical version? And to a certain extent, there doesn’t necessarily have to be one, because one of the problems with the all-encompassing database is that we all end up in some sort of religious war over what the ground truth is, when it doesn’t really matter.
Mark Anderson: You know, we won’t even agree to have two truths in the same wrapper, not that it really matters, but on a human level it’s clearly something we just don’t do very well, having made our choice; we’re a hodgepodge. But I think it’s good, and I like the hypertextual nature of it. I think it’s also interesting because it has an innate, or at least inferred, link structure, even if it’s not actually there, which is important if we want to effectively move around it. I don’t want to call it a network, and I don’t want to call it a graph, but if we want to move between interconnected things, to be able to sort of teleport along the lines of connection, that’s quite interesting. And that’s the kind of thing we might begin to be able to do in, for instance, a VR space, which is harder to do in a 2D presentation, though I may be wrong. Then, on data: the reason I bring it up is that, all too often, data at the moment is basically just the exhaust
Brandel Zachernuk : Of our activities
Mark Anderson: And we think, oh, there’s something interesting in it. There aren’t many people, you see, who actually sit down and plan their data, and I don’t mean something like an international organization planning metadata. But this is something I fell into. I ended up on a university project and I was staggered: there were 60 funded projects, all funded by a national funding body, and not one of them had a description of, really, their data. Their entire approach was, oh, we’re going to be making data, we’ll have lots of it, it’s probably important, so what we want to do is have intellectual property rights over it, but we don’t really know what’s in it yet. And, well, that’s when Wendy asked me to stop helping.
Mark Anderson: So that’s an inconvenient truth at this point, but I find it hard to walk away from, because it is one of the things we’re facing. And so a really interesting thing, in the context of our sort of high-level discussion about VR spaces, is: does it make us think about how we design, so that it will be helpful to us, for instance, to be able to translate from this flattened thing to a richer structure? It’s a bit like the problem of going from a complete hypertext to linearising a wiki. Well, how do you do that? It wasn’t built to do that. I mean, as an author you can choose, I will take this path, but it’s the same sort of thing. Which is one of the ideas I played with in Noda. And yeah, it’s fun, and it takes some getting used to. But what I really wanted to do, my next impulse, was: right, I want to take a data set that I have, and I want to dump it in here and see what happens. And yes, I’ll start with two thousand circles, but I’ll know some labels and I can begin to do stuff. But it’s interesting that I don’t think people are thinking like that.
Mark Anderson: And one of the problems, I think, that comes out of that is that you get over-fascinated with the interface. So there’ll be endless ways to have different colours or different shapes, but no thought as to what they represent, or why you might want them to be different shapes. It’s not that it’s not impressive work, and it’s all stuff that needs to be done, but that’s the implementation-level stuff that I think we want to skip over, if we can take it as granted that probably by the time we need it, that kind of stuff will be better. So what is it? How should we, say, take your constellations, your glossaries, and put them into a space so that the map you were just showing us in Author is something that is actually useful to us? Because in effect, if it’s just a 2D map in a 3D space, OK, we’ve moved it from one environment to the other, but we haven’t enriched it very much. And the last thing, another interesting aspect, because time was mentioned: this comes up often when I’m looking for a book, trying to find out whether a book that’s been recommended to me has been republished.
Mark Anderson: Has it been republished and altered? And I don’t just mean the typo type of little correction, because you’d sort of expect that, but did it put an extra chapter in? So, in a sense, am I actually talking about the same source or not? And that’s something that is very, very rarely picked up on. I’ve also put in the sidebar, and I think Frode’s read it, a book I’m really enjoying that on the face of it is supposed to be about translation, and it is. It’s called Is That a Fish in Your Ear? But in the course of talking about translation, he inevitably gets into the, well, you know, is there as much fixed in language as we think there is? Not really. Can you define what a word is? Well, you can tell a computer what you think a word is, in terms of things surrounded by spaces or punctuation. So it’s a really nice dive into the same ambiguities, but just from a different direction. It’s almost comforting to see that exactly the problems he’s bringing up are things that are not unfamiliar to me, having met them in other circumstances.
Frode Hegland: So I think we’re all kind of on the same thinking here. One thing I went through a couple of weeks ago, with this definition dialogue for the concept box, was to very easily and quickly import Wikidata, and it turns out not to be that hard. But then suddenly I think we’re inching very close to a universal truth rather than someone’s glossary. So what would be really interesting, from my perspective, is to build, you know, I do this side of things with your input, but then when it goes into VR, if you extract the visual-meta, of course, you also have access to lots of other data. So if you choose to say, here’s Doug Engelbart, to use that example again, you want to see the Wikidata about him, and if it’s not contradicting, you should be able to literally glue it together as one thing, and that will help you with the rest of the space. In other words, you start with a corpus of some sort, but you can expand and go here and there. I mean, I really think that if this were to happen, Ted Nelson would just start crying. Yes, Mark?
Mark Anderson: This keeps bringing me back to, I mean, I’ve consistently failed to get Chris to understand what I meant when he did his web sleuthing, because around the time I was writing my thesis, I was talking with Dave Millard, who was one of my supervisors, whom I think you’ve now seen in the group here. And I said, well, that’s all very well: I can go around the links on Wikipedia, and that tells me what people bothered to link. What I’m actually interested in is saying, well, if I jump into a subject that I know of but don’t know, so, I don’t know, economics, can I look at it and see where the holes are? I want to see the anti-patterns. I want to see where the dark spots are, where I know there ought to be something about the subject, but there isn’t. Or the ability to take from Wikipedia and say, no, I want this node, this node, this node, and then the rest can fade out at the edges; the links are also there, but I want to look at these objects and the interrelationships of those. Whereas what Chris was doing at the time was, in a more deterministic way, just following the structure that was there. One of the things I’ve been used to doing for years is basically forking the rubbish out of data: just saying, what’s there, and is what’s there what the person that gave it to me thought it was or said it was?
Mark Anderson: Does what they’ve given me actually represent what they want it to be, or think it to be? Very often the first answer to those two questions is no, and it’s not through anything bad; often people just don’t understand the nature of the information they have and how it works. So more often than not, that’s one of the reasons their information isn’t working: they don’t understand the nature of it. They think it does something, or that it’s produced to do something they want to do, and in fact it doesn’t do that at all, because of how it’s structured or interrelated. So this thing of being able to take information without necessarily shearing off all its links, for instance, if you grab a handful of Wikipedia, I don’t want it necessarily sheared off from the rest. It comes back to the Oculus boundary type thing: well, it’s outside, I don’t really care, and if I need to move my boundary out or bring something into the space I’m looking at, that’s fine. But otherwise, rather like Adam’s visualization, the rest can just gently recede into the background.
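Frode’s "glue it together as one thing if it doesn’t contradict" idea, merging an author’s glossary entry with fetched external data such as Wikidata, might look like this, with the network fetch omitted and the field names made up for illustration:

```javascript
// Sketch: merge a local glossary entry with fetched external data
// (e.g. from Wikidata), gluing the two together only where nothing
// contradicts. Field names are illustrative.
function mergeConcept(local, external) {
  const merged = { ...local };
  const conflicts = [];
  for (const [key, value] of Object.entries(external)) {
    if (key in local && local[key] !== value) {
      // Contradiction: keep the author's value, report the clash.
      conflicts.push({ key, local: local[key], external: value });
    } else {
      merged[key] = value;
    }
  }
  return { merged, conflicts };
}
```

Keeping the author’s value on conflict preserves the point made earlier in the discussion: the glossary is the author’s point of view, not a universal truth, so external data only fills gaps and flags disagreements rather than overwriting.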
Frode Hegland: I think that brings up the question of what do Adam and Brandel want to build, so we can support them with work and ideas and stuff? In this context, if anything. I mean, I would love to see an incredible reading space that's focused on one person but can easily incorporate more people. What do you guys actually want?
Brandel Zachernuk: Well, something that I built over the last couple of days is a multiplayer VR page that also supports multiplayer 2D views of being in the same environment.
Frode Hegland: Brandel, is that on Oculus? Because all four of us have Oculus now. If you send a link, we can go in.
Brandel Zachernuk: Yeah, I mean, the latency is probably not amazing; I've just been experimenting. It may very well be that something like Mozilla Hubs is a better basis. But what I wanted to do was understand it from the ground up, or as close to the ground as I can be bothered to go, so that I can stitch different signals and systems together. And so that's been fun, something that I want to do, but again, like I said, more at an infrastructure level: to have essentially a boilerplate of techniques and approaches to be able to explore these smaller things. And my intention is to be able to connect with two devices at once, because at this point, speech recognition isn't something that you have direct access to on Quest while you're in the web. One of the things that Nvidia does is it has a limit, because of the way that it's actually needing to make use of a paid service, so it needs to clip the ticket along the way in order to do that. Whereas Google Chrome actually provides a speech recognition system gratis, that you don't need to pay for. Google is probably listening to the speech, but other than that, it's something that you can make use of yourself. And so that's what I use in web VR, and you don't have the ability to do that on Quest. But if I have these headphones, and they're connected to my Mac or PC, then in there I'll be able to have both full hand-tracking and the ability to do speech detection and recognition.
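[Editor's sketch] The browser gap Brandel describes can be illustrated with a small feature-detection helper. `makeRecognizer` is a hypothetical name; the `SpeechRecognition`/`webkitSpeechRecognition` globals are the real Web Speech API, which Chrome exposes but the Quest browser does not.

```javascript
// Feature-detect browser speech recognition. Chrome exposes
// webkitSpeechRecognition; the Quest browser exposes neither global,
// which is the limitation described above.
function makeRecognizer(globalObj) {
  const SR = globalObj.SpeechRecognition || globalObj.webkitSpeechRecognition;
  if (!SR) return null; // e.g. Quest browser, or a non-browser runtime
  const recognizer = new SR();
  recognizer.continuous = true;     // keep listening across utterances
  recognizer.interimResults = true; // stream partial transcripts too
  return recognizer;
}

// In a page you might wire it up like this:
// const r = makeRecognizer(window);
// if (r) {
//   r.onresult = (e) => {
//     const latest = e.results[e.results.length - 1];
//     console.log(latest[0].transcript);
//   };
//   r.start();
// }
```

Passing the global object in (rather than touching `window` directly) is just so the detection logic can be exercised outside a browser.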
Brandel Zachernuk : So that’s something that I’m doing for that, for the benefit of being able to kind of think about what it is the text input is and the sort of the multi multiplicity of representations that text can have. I’m really interested in in where you put text. I think that, you know, that’s one thing that Noda does, but I feel like it’s actually under indexed like. Being able to more effortlessly place text in places and and also to be able to replay the sort of temporality of when text was put in places, there’s a really neat guy who sort of took over the department at UCSD. I think. Ed Hutchens and an HCI and cognitive science department, which is a really neat sort of mixture of things, and I watched his high lifetime achievement acceptance speech in Tokyo from 2015 recently. One of the things he talked about was the fact that computers are wonderful context destroyers, but they should also be wonderful context restorers as well. And so something that I’m really excited about is the possibility of being able to encode actions such that you can kind of get into the swing of and recognize what it is you might have been thinking about as you were doing stuff.
Brandel Zachernuk: And it comes back to something that Mark was saying earlier about what is useful data to encode. Somebody else in the VR community was recently asking, and I think I know what kind of kick they're on, whether it was Apple, or Google, all those places: how many apps record all of the temporality of your actions, in the sense of having something like an undo queue, but recognized and elevated as being of more constitutive significance for the purposes of understanding what a document actually is? You know, there are some things that are represented as a series of steps, serialized in the way that, for example, Houdini does. It's an application for doing production, film-effects-level computer graphics, and it applies a series of procedural steps so that you never lose them. But it's not the history per se, because it's not necessarily a reflection of the weight of the timing with which you undertook those actions. And GitHub does exist, but it's not actually a fully granular representation of those things in those orders. So, you know, one of the things that I'm interested in is asking, well, what are the attributes, or rather the native components, of the artifacts that we're typically interested in producing, and in what manner can they be represented in a context for either scrutinizing, sharing, or otherwise thinking about the things that we already expect? So to that end, I like the idea of recording, even in normal 2D apps, the sound and the speed with which you type, so you can hear somebody really smash it down.
Brandel Zachernuk: And also, the right angles are a really interesting sort of experiment. But you know, that's, while flippant, also representative of the kinds of aspects of performance and characteristics that might be recordable and made meaningful within a spatial environment as well. So I think that what I want to make is enough context, one, to give recognition to the fact that all of this stuff is pretty achievable and pretty easy, especially with the right boilerplate in place, and two, to speak to the larger point about what the data is. A lot of that data is stuff that we probably wouldn't have thought of as having the ability to be meaningful. And at such point as we track it, and recognize that it actually could be a useful thing for us to respect and understand, we'll have a pretty significant step change in our relationship to that data, and recognize that a lot of the things that we're putting in and calling a document are pretty important. So that's a lot, but that's an answer.
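[Editor's sketch] The "undo queue with timing" idea can be made concrete with a minimal action log. All names here are invented for illustration; the point is only that each action carries a timestamp, so the weight of the timing survives into any later replay rather than being flattened to a bare sequence.

```javascript
// Hypothetical action log: like an undo queue, but keeping the timing of
// each action, so a document's history can be replayed with its original
// rhythm, not just its final order.
class ActionLog {
  constructor(clock = Date.now) {
    this.clock = clock;   // injectable clock, for testing
    this.actions = [];
  }
  record(type, payload) {
    this.actions.push({ type, payload, at: this.clock() });
  }
  // Pauses between actions, in ms: the gaps are data in their own right.
  gaps() {
    return this.actions.slice(1).map((a, i) => a.at - this.actions[i].at);
  }
  // Replay in original order; a caller could also honor gaps() to
  // reproduce the original tempo of the edits.
  replay(apply) {
    for (const action of this.actions) apply(action);
  }
}
```

Usage: `log.record("insert", "h")` on each keystroke, then `log.replay(fn)` to reconstruct, or `log.gaps()` to inspect the pauses.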
Frode Hegland: Really cool. Yes. Yeah, the richer the stuff we record, the richer the stuff we can interact with. Absolutely.
Mark Anderson: My comment, briefly, on just one thing before we go to Adam's take on what he wants to build, just because it speaks directly to this. Just one thing on the time. One thing that we tend to overlook, because we measure the things that we can measure, which is quite a natural thing to do: what we don't actually measure, because we don't tend to record it, is intent. So the classic thing is, we'll type 100 characters, pause, two characters, pause, three characters, pause, full stop. So basically the third edit is the three typos you corrected. So what you actually want to read is not the 100 characters; the important bit is the three edits after the errors. Now, of course, that's not obvious if you're just looking at the character flow, because unless you actually have an understanding of what's being changed, you can't get into the language of it. Which says to me that one of the things we should have, a sort of skill we might wish to acquire, a new sort of literacy as it were, is to get in the habit of accepting that our input processes are not always as good as they should be, and our mind wanders and things.
Mark Anderson: It's sort of like recording in a separate vein: when we're having this conversation, you can always tap a button to say, important thing happened here. It's exactly the same, but in one's own work. We can record the flow of the characters going into a document, and we can know that, yeah, yesterday afternoon I wrote a long section, or I wrote a number of documents. But it only gets us that far. What it doesn't quite answer is, oh, did I actually write something of real insight then? Because to do that, I'd have to go to that time frame, find that thing, and get into the language of it. I don't have any answers on this, but I just sense this has come across my bow a number of times now. And the one thing, I suppose, is that because we don't record it in the moment, it seems self-evident to us when we look back at our own work. Maybe that's a form of data we should build a receptacle for. And I'll shut up, because we want to hear what Adam is interested in working on.
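[Editor's sketch] Mark's "100 characters, pause, two characters, pause" pattern amounts to segmenting a timestamped keystroke stream into bursts separated by pauses; the short bursts after a long one are often the corrections carrying the intent. The function name and the pause threshold below are arbitrary assumptions, not anything the group built.

```javascript
// Segment a timestamped keystroke stream into bursts separated by pauses.
// Each keystroke is { char, at } with `at` in milliseconds; the 2-second
// pause threshold is an arbitrary illustrative choice.
function segmentBursts(keystrokes, pauseMs = 2000) {
  const bursts = [];
  let current = [];
  let prevAt = null;
  for (const k of keystrokes) {
    if (prevAt !== null && k.at - prevAt > pauseMs && current.length > 0) {
      bursts.push(current); // a pause closes the current burst
      current = [];
    }
    current.push(k);
    prevAt = k.at;
  }
  if (current.length > 0) bursts.push(current);
  return bursts.map(b => b.map(k => k.char).join(""));
}
```

Given such segments, the short late bursts (the "three edits after the errors") become addressable objects in their own right, rather than being lost in the character flow.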
Adam Wern: You started so many interesting things that I want to discuss more here. But I've been playing with both Mozilla Hubs and, a little bit, the Horizon Workrooms thing, and especially Mozilla. And when dragging things in there, placing models and placing documents in there, it's obviously clear that there is no kind of greater collection of things you import: either you go to a whole world, or you have small, singular objects, but the collections are missing, in a way. Of course, you can import a room or something, but there is almost no more detailed information in there. And I've been looking at USD, the Pixar and also Apple format for models, the open format, and it's very, very 3D-centric, model-centric. I have been thinking about how to hang extra data onto it, and it looks like it's really hard, because there are just very small primitives. But we need some sort of bigger collections, like: this model, that part, this cube of this model, or this image represents this and has a hyperlink.
Adam Wern: And has this date stamp and time stamp and all the data we've been talking about today, and much more. We want to hang that onto the model, onto different parts of the model, and also relate different models together, so that when you bring two models in together and they happen to have a relationship, you can visualize or show that relationship. So we have some work to do there if we want a kind of side, a metadata format, that we can hang onto 3D models and do something a bit more useful with. And also the collection part: to have a document, a 3D document, whether it's a kind of montage, or even a space, a gallery thing, where you have some kind of substrate that you've put the knowledge objects on, and where it must be represented and encoded in a workable way. And I envision something that is, like XHTML, inspectable: something people can actually work with and really learn quite quickly.
Brandel Zachernuk: My hope would be that it would actually be... I think I agree, Adam, about the question of where the meaning is, and I don't feel like USD does it, in terms of qualifying relationships and stuff. And so either you add that to the USD, or you just say that meaning comes in as a sort of ground truth inside USD, and all of those relationships, constraints, and the kinds of things that define, essentially, interactivity, but certainly meaning, come through in another format. And frankly, it might as well, you know.
Adam Wern: It could be either. There is a question of whether we have a wrapper around it, where the model is part of it and we point into the model, or whether it's inside that file, and whether they can coexist within one specification or one file. But what we really need is deep addressability, so we can address everything with a good precision and describe it: every stroke, or every form, or every little triangle, or whatever it is, so we can address everything, or most things, and do a mapping between the visual representation and all the other things. Another side note here, another thing, is that when I walked into these Mozilla worlds, I felt that the future must involve some sort of running code as well. The objects must be living and do things. You bring in machines with a visual representation, and other, embodied representations, but also they can do things. So you're bringing in a tool that is both visual and that does things: it either produces new things, or converts things, or enlightens you in some way. So we must have running code. We can't just have lots of flat things, or 3D models of all forms; they must do something. Of course, we have the problem of security and sandboxing things in public spaces, but at least in private or high-trust spaces, we need to have powerful tools that are visually represented.
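[Editor's sketch] The wrapper-versus-inside question can be made concrete with a sidecar record that addresses into a model rather than extending the model format itself. Every name below is hypothetical; the `modelUri#partPath` addressing scheme is an illustrative assumption, not part of USD or any existing specification.

```javascript
// Sidecar metadata sketch: a record points into a model by URI plus a
// part path, and hangs extra data (time stamp, hyperlink, note) on that
// address, leaving the model file itself untouched.
function makeAnnotation(modelUri, partPath, data = {}) {
  return {
    target: `${modelUri}#${partPath}`, // address a part, not just a file
    created: data.created ?? null,     // time stamp, as discussed
    link: data.link ?? null,           // outbound hyperlink for the part
    note: data.note ?? null,
  };
}

// Look up everything hung on one part of one model.
function annotationsFor(annotations, modelUri, partPath) {
  const target = `${modelUri}#${partPath}`;
  return annotations.filter(a => a.target === target);
}
```

Because the address is just a string, the same scheme could relate parts of two different models to each other, which is the "relate different models together" case Adam raises.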
Frode Hegland: Yeah, absolutely, absolutely. And I think we need to start segmenting things, because yes, if I want to bring a propeller plane, like a toy propeller plane, into a room, I should be able to do so. You know, it has a simple engine; it doesn't do much, but that object is not static. So it's a representation of what you're talking about, and I think that's really important. And some of us this week have been talking about Apple's opened-up approach, where you have kind of a dead duck, but you put in the tools. That's obviously very related to this. But I think if we're going to try to build something together to demonstrate, we need to look at who wants to put in what effort, and for what. One thing I do not think we should do is try to build a system of universal knowledge-graph connections. That's something the nerd community seems to do a lot: you know, if I represent my world exactly, and you represent your world exactly, we plug them together, we understand each other. I can see from all the shaking of heads here that that's just silly. So I would very much hope that what we can do is bring in different data types and, through translators or whatever we might call them, have them talk to each other. So, for instance, a concept map from wherever, and then we have Wikidata here: does it connect or not? It should be almost a manual thing, because once you have too much automatic stuff, it goes too far outside of the brain. Not that machine learning and all of that shouldn't be used, but Dave Millard used the term intentional: the concept definitions in Author are intentional. You can't just tag something; you have to write what it is. So I think in this space, too, that's useful, especially because the power of this is immense, so we can very easily overwhelm ourselves. To bring in a million Wikipedia articles is soon not going to be a big thing. So, you know.
Mark Anderson: There is always this question, in the demo sense: what are we demonstrating that we can't already do? And not just that we're doing it in 3D, but why are we doing it in 3D? Because we can't do it in fewer dimensions than that?
Frode Hegland: Yeah, exactly. So I think we need to design that. You know, many people here will be doing many things in many worlds, but what will we do together? I think we need to design that. We have a few people in a room: what are the things they're supposed to be able to do?
Brandel Zachernuk: Yeah, well, one thing that I really like in user experience is the attention in some of the earlier work to the description of focus plus context, and the recognition that reading a newspaper... this was back in the 80s, and so most monitors were sort of 320 pixels or so. And so he likened reading a newspaper on a computer to having a very small cut-out through a piece of paper that you are then scanning over an entire broadsheet. And you know, it's perfectly adequate for a form of reading when you're actually looking at column inches, and you're scrutinizing a particular story or scanning through the classifieds. But it does nothing for the context picture of what the thing is that I want to lunge in and read at this moment. And you know, there were hard cliffs in terms of the capability one had to display any of that, by virtue of the low-fidelity displays; we simply didn't have the pixels to represent those things. But those things are going away. And yet most of the terms that we interact with computers on are still at that final level of zoom, for the most part. And I don't mean it in the literal sense; there are things like zooming user interfaces, but they've also failed to recognize that the concept of level of detail is not merely physical scale.
Brandel Zachernuk: It's not that something is bigger or smaller; it's that something is at different levels of granularity and fidelity based on the intention for representation: what is it that you intend to do with this document, and to that end, what are the levels that are important? Something I was playing with a while ago, before I got, not distracted, but pulled into virtual reality representations: the linear timeline stuff I did before I was playing with VR, and I thought, oh, this can work in VR, and it has different characteristics. And I think that's an important thing: not to discount the possibility that simply the added dimensionality really does confer benefits in terms of your manipulation of and interaction with something. But one of the things that I was really curious about at that time is the question Mark was raising about the thoughts that aren't there. But even simpler: if I've been to this page and this page and this page in Wikipedia, what are the other things that that means I should probably read? Can I get a reasonable summation of those links? One of the challenges with Wikipedia: I just looked at the economics article. It has an out-degree of some two thousand nine hundred links, and those are not all the same in quality. One benefit is that you typically only link a page once in a Wikipedia article.
Brandel Zachernuk: But that's not to say that the concept isn't reintroduced time and time again. So one of the things you can do, then, is look at the number of times the term in an outlink is reused, in order to qualify that link, to identify what sort of relative importance it has, because something can be peripheral or something can be central. If you're reading about economics, there may be only one link to Adam Smith, but Smith is probably important if you're talking about what economics is, versus, say, Alec Baldwin. I don't know if he's in the economics article, but he's probably fairly peripheral if you're talking about that. But yeah, so that's something I want to do. That's something I think is important to recognize as a job for virtual reality: being able to come up with these multiple layers of abstraction, and give visibility to the fact that there are these different ways of thinking about things like level of detail. One of the points that occurred to me while people were talking about the objects is that Pixar, and other people who are involved with creating objects in 3D like this, are, I think, pretty poorly positioned to recognize that what they're throwing around are a whole bunch of almost entirely empty signifiers; that there are things about those things that matter, and they matter to different people for different reasons; and absent an ability to imbue that information into those objects, those elements tell a story vastly thinner than most people would be expecting to tell with them. And so, you know, I think those are significant challenges and problems with spatial and dimensional representations. But I think they're also really interesting provocations that can lead to some really transformative perspectives on what data is, what navigation is, and the way that meaning can be manipulated to make these things useful and fun to play with. That's a lot. Sorry.
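[Editor's sketch] The link-weighting heuristic Brandel outlines, counting how often a linked term recurs in the article prose, can be sketched in a few lines. The function name is invented, and this naive substring count ignores word boundaries, redirects, and synonyms; it is only an illustration of the idea.

```javascript
// Rough centrality score for outlinks: Wikipedia typically links a term
// only once, but the term may recur in the prose, so counting recurrences
// of each linked term ranks the links by how central they seem.
function rankOutlinks(articleText, linkTerms) {
  const text = articleText.toLowerCase();
  return linkTerms
    .map(term => {
      const needle = term.toLowerCase();
      let count = 0;
      let pos = text.indexOf(needle);
      while (pos !== -1) {
        count += 1;
        pos = text.indexOf(needle, pos + needle.length);
      }
      return { term, count };
    })
    .sort((a, b) => b.count - a.count); // most-mentioned first
}
```

On an economics article, "Smith" would be expected to surface near the top and a one-off mention near the bottom, matching the peripheral-versus-central distinction above.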
Frode Hegland: That was great. So my tiny little initial point was: in some ways, reading a newspaper in here may be better than a broadsheet, because broadsheets take up a lot of space; you basically need a table for them. So that's the wrong kind of, excuse me, context: where it is on a page doesn't necessarily matter much. But the other thing you said about, let's say, Adam Smith: I really want to be in an environment where I'm reading Adam Smith, and because I'm doing something with economics, I select that, touch it, whatever. Then I do a Wikipedia search or whatever, and the results come up, and then I have an interaction to say: this is actually about that. I kind of mentioned it earlier, but that means that in the future, as I go through other things, all that information is now verified as being that. So, you know, seeing important people in time, all of that then comes automatically. To build the opportunity for that kind of connective space, or constellation, or whatever, I think is important, because I think what we're working on here is not a visual sculpture. What we're working on is much more meta than that: we're working to build the opportunity to constantly rearrange the sculpture.
Brandel Zachernuk: Yeah, something that I've been really conscious of, and I promise this is short, in the context of spatial computing, is that a lot of people are just woefully under-indexed on how much needs to be revised about what we think of as information in order to make the best use of the medium. That's why I listened to you and Vint talking the other day on that YouTube chat. And, you know, I don't think he sees it. And I think part of that is that if the value proposition of computing per se is accepted wholeheartedly as it is right now, then there aren't that many advantages, because we need to do a lot more infrastructural work first in order to get those things. So I'm glad everybody else here sees it.
Frode Hegland: People here completely agree. Guys, yeah. Mark, Adam?
Mark Anderson: A couple of quick things. One, just on papers: actually, when that came up, I thought, gosh, that's the last place I'd really think of. At the weekend, on Saturday, I read the Financial Times and the Times, but I probably read The Guardian and The Times online most days. And I always know I'm getting less than I get in the real paper, and that always bugs me. The worst of all is BBC News, which is now just all sort of, you know, lifestyle faff, and it used to be a real source, a world-trusted source of knowledge. So I'm not sure, actually, that it gives us the granularity that we want; mainly, I think, that's due to a probably fairly low-skilled layer of people doing the filtering on the news. Anyway, that's by the by. The thing I was going to mention, an interesting point, and I think it was probably Brandel speaking earlier, sorry, I lost track of which speaker: so we're sort of talking about annotation. So I'm looking at some things, I follow these links, and one of the things I'm doing is saying, oh, right, actually, this is pertinent. This came up as I spent a lot of time in Wikipedia, mainly trying to understand the difference between what people had actually written and what they'd intended.
Mark Anderson: Yes, and it is a technocracy, and that's one of its weaknesses. So everything's seen through: yes, I can measure it. But does it do anything useful? That's a question people tend not to ask, which is totally different from the humanist argument about whether they agree with what's in the written word at the end, which is a whole different ball game I don't want to get into. So in the sense of, OK, right, there are 3,000 outlinks here, but which are useful? Well, I could annotate them in the system, but that would probably set off a separate bit of edit warfare, because, as my research showed, people don't like sharing: you touch my shit and everything kicks off big time, despite the fact that notionally it's a commons, because basically people want ownership. So I'm thinking what one may be doing there is making trails, and this goes back to when I saw Chris's demo five years back, and I said, oh, that's what I want to do. I want to basically take things, and I was thinking, you know, in 2D space, but it could be more: I basically want to grab that, and grab that, and grab that, and put them in a clean space without cutting them off from everything else.
Mark Anderson: I just want those bits in the petri dish so I can look at them and understand the interrelation. And I think that speaks back to what you were both saying earlier. An interesting point about wiki links, and it didn't get mentioned, but it's another thing that happens here: when we look at things like other people's intentions, we're often guided by, again, our appalling habit of counting things. So it's a classic problem: if enough people follow the same bad link, it's the most promoted. Well, that clearly wasn't quite what we meant, but we haven't generated the systems, other than basically humans coming along and fixing it, saying, no, no, no, you're following the wrong thing. I mean, short of removing the link, it's really quite hard to do, and we haven't dealt with that issue either, which means that measuring links, or making use of links, is very often infelicitous and gets us to exactly the wrong outcome. It's the more subtle thing of being able to look at things. So you're looking at Adam Smith, and you're thinking about the things related to Adam Smith. I don't necessarily need to know if he had a collection of gnomes in his garden: interesting, but not pertinent to the fact that I'm trying to understand, maybe, his philosophical take on something, you know, or some aspect of his life, or how he relates to a subject.
Mark Anderson: And they're all in there; they'll all be in a corpus, be it Wikipedia or something similar, as a hypertext. But you can't see the wood for the trees. People don't write links with the sort of hypertextual intention that we're talking about; they write within the style guide of Wikipedia, which is not about writing well, and it's mainly done when people stop complaining: you know you've done it right until somebody else starts complaining. In a sense, it is that poor because it's a completely open commons, which makes it actually a very bad thing to study, I've come to realize, because there's so much unintentionally unusable human behavior: a skein of noise over what's a fascinating dataset. The trouble is, the dataset has been accumulated without necessarily a lot of careful thought, and it's just full of half-finished stuff. And there's no index to it, or rather, the index is nearly all brute-force, single-word matching. As for the categories: anyone can make a category. So you go to Wikipedia and say, I want everything on economics. It'll give you the economics category, but that is certainly not everything on economics. If you were to look at all the documents, it'll just be whatever somebody tagged. And these are really interesting, tricky things to deal with.
Frode Hegland: Let's not go too far into the plumbing of Wikipedia, but the issue of, no, it's
Mark Anderson: It's the measurement issues, the hypertext issues; Wikipedia's specifics, yeah, get lost in the subject matter. The important point is the lessons it shows in the infelicitous way in which it leads us away from the understanding, the sort of visualizations and ways of displaying information, we're talking about.
Brandel Zachernuk: Yeah, I mean, to that end, I would say that there are aspects of data that can be measured. For example, like I've mentioned, Google has the ability to measure bounces: when you visit a page and then immediately return back to the search results from whence you came, they know that's a bad sign, that it was a low-value resource and reference, and that it was probably elevated too high.
Adam Wern: But not necessarily bad business for them, that you come back to the ad page.
Brandel Zachernuk: That's correct, yeah. But say, at Apple, where I do have some level of dominion over that kind of thing, it's something I make use of: when people return off of pages, when we have bounces, we know that something about what they did is not in line with what we want to be able to give them. And I've heard a lot of people talk about these implicit behavioral measures. So, something people did on Amazon Mechanical Turk back in the day was look at various aspects of the way somebody was performing a task, in order to come up with a sort of automated assessment of the quality of the answer, based on the time it took to do it, and what kind of mouse movements they had, because they found, in broad strokes, that it was possible to make assessments from that. They're not 100 percent, obviously; for example, somebody using accessibility assistance is going to have a completely different usage profile in the way that they're making use of aspects of the site, and all that kind of stuff.
Brandel Zachernuk: But I think there is an incredibly rich plethora of signals that can be used in 2D, and I think that expands only ever outward in 3D. It pushes up against privacy and all kinds of other aspects. To the point about Wikipedia: what are valuable links? One of the signals is how many people follow a link, but also what their onward behavior is after that. If it's non-economics-ish, then you know that it's not necessarily useful. So there are ways of doing it. It depends on what kinds of capabilities you're willing to bake into a system with regard to the aggregation of data and the reporting of it, and what sort of privacy implications that has; things like differential privacy do some job toward providing the best of both worlds. But yeah, I think there's more we can know, and there's even more we can know in spatial, provided we're eyes-open, ethically, about what implications it has. And I want people to know that.
Adam Wern: So, I raised my hand because I want to save the statues. Frode wanted to sculpt the statues, and I want to be a proponent for actually saving the statues a bit more. So, when you do some knowledge work and get a search result, or bring in a few documents, or follow links, I want the actual search results to be objects in themselves, things you could scratch items off of, like a list; if you get a search result, you could remove things from it and save the actual result as an object. And I think 3D is, in a way, good for this, because a flat interface is a kind of nested doll of many different spaces. If we look at the operating system: many windows. And this Zoom window follows its own logic and physics and so on, and it has menus, and it has lots of icons, and all these things are different spaces. It's not one unified space with one logic. But 3D has more opportunity to be a bit more unified, in that the place you came from and the target you went to, the results of your search, could actually be placed inside the very same space. And that brings up very many new opportunities to unify the interfaces. No more menu bars or scroll bars, because you don't need a scroll bar; you can just look up and down and get the length of the document just by glancing. You don't need a tab bar, because you can look around the document to see the other documents in the same stack. The list goes on; this goes back to what Peter wrote in the email about that article. But I think 3D is actually better for many things, because on a screen we're so limited that we have to build these kinds of imaginary toolbars and other spaces just to navigate.
Adam Wern: And so we need to, or we are forced to, do things a bit more hypertextual, or more interactive than they need to be. But in 3D we have so much space that we can actually place objects and the sources, and show the history of where we went from and where we went to. And that forms a kind of sculpture, a knowledge sculpture, that I actually want to save. So I don't want to combine it; I'd rather have it so that the next time I come back to the same sculpture, I may take a copy of it and put it further away, at arm's length or a walking distance or a glanceable distance. And then, as designers do, I can copy the object in Photoshop and do a hundred different variations, or keep a sketchbook with a hundred different variations. To me, it's very important to see things side by side. The juxtaposition, the glanceable differences, is where I derive so much value; just having a timeline scrubber and the history is not enough for me. I really want to put different states beside each other, in juxtaposition. So I really need to see it. It could be because I'm very visual: I want to see things side by side. But I think it also ties into what Barbara Tversky talked about, that animations can be very bad, because actually laying the different steps out before you, a bit more visualized, can lead to greater understanding. I think it is like that for many people, and so I would like to have the different variations of the statues beside each other. Yeah, I hope.
Frode Hegland: Well, that was really nice, because you kind of said we disagree, but of course we don't. Mark was mentioning earlier how his wife really liked the basic room inside the Oculus. You know, I go to the Japanese room and it's really nice in there. It's really, really nice. So I put down a few notes there. Number one, about the sculptures: yes, we should be able to save them in many different ways. You know, I consider that publishing. That's why I really like documents. For me, what document means is an intentional "here's a thing that I've now framed." That's all it means: rather than a continuous time-stretching of modifications, at a certain point you say, OK, that's that. So that's what I mean by document. I'm not saying it has to be a rectangle. But we also need to look at where we should focus our brains, because there are so many aspects. And as a kind of real thing, we should start guessing what kind of APIs Apple will, quote unquote, give us. Right? That's a really important question, because I'm sure when Apple comes out with stuff, they'll provide incredible developer tools for people to be living in their world. You know, that's what they've done forever, and that's what they have to do. And that's all well and good. So obviously with Brandel's deep knowledge of web and other VR tools, and Adam, you're getting there too, we have to really look at what our opportunities are.
Frode Hegland: And just as a slight side issue, I could imagine, in a few years, putting on my whatever and being in this Japanese room where I can actually walk around. So imagine you have this one floor that is my office and meeting room. So I know I meet you guys there, my library is here, whatever: I lay it out like a normal space, because it's partly a mind palace. But then I can go up through the roof, and that's where we have kind of infinite space. It's like all of Wikipedia is there, all of this is there. It's like, whoa, you know, everything is here. But mentally we need to be able to be boxed and unboxed, obviously, right? If we just have the complete freedom of VR without structure, it's overwhelming. But imagine some kind of a structure: the crazy everything-space above, specific rooms on the floor below, and then a basement that's kind of like plumbing, the deep library of the stuff that we care about; it's almost like our settings and so on. Maybe we can start thinking about issues in some sort of a way like that, but also focus on: right now we're in a meeting, one of us has published a document, whatever that means, and the rest of us are able to go through it to try to understand it. Getting to a point of knowing what that might mean in our community would, I think, be really useful.
Adam Wern: I would like to add another space, and it's kind of the void: a white paper, or white room, or black room, or gray room, but a really, really empty space that you could bring things into to get that kind of focus. Of course, you can do it in your Japanese home, and it may serve that purpose for you, or for me as well. But the really blank white canvas is sometimes very, very fun to start with: just a word, or a sentence, or an image, or a model, or two things side by side and nothing else. Maybe your tools are there?
Frode Hegland: Yeah, yeah, absolutely. That sounds wonderful. Mark? Yeah, go ahead.
Brandel Zachernuk: Those were the first four environments that I made for virtual reality: a lakeside, the pass-through to the extent that it was available, and then a pure black and a pure white sort of space that you can just kind of exist within and observe. It was nice. And I think those are really valuable kinds of things, to be able to separate them and live in them for different reasons.
Mark Anderson: Yeah, I think blank spaces are good. You know, it came up actually in a separate sort of Tinderbox week last week, but I was just thinking on the fact of matter and dark matter. You know, dark knowledge certainly far exceeds defined knowledge; there's knowledge out there that we haven't discovered yet, but we constantly focus on the stuff that we think we know. And what that often makes difficult is to look beyond. So these blank spaces, I think, are really good, because they're extensible, and they help train people who tend to always think inwards to expand their horizons a bit. So I think that's tremendously useful. And I just wanted to pop in a slight side point. I watched Bret Victor's most recent talk, and one interesting thing I took away from it, which I'm beginning to see as a slight trope now, is: oh, if we did this, we could walk around everything I'm thinking. Have you ever been on a museum weekend where you say, let's do four museums over the weekend? I mean, I know here you're only walking in the mind, but there is a sort of slight misunderstanding there. I get what's meant behind it, i.e. that you can see this thing in a separate space. But it's interesting that at the moment we're using some, no pun intended, quite pedestrian terminology to describe the way in which we might pull these things apart into a separate environment. I suspect reality would come to you rather than you going to it.
Frode Hegland: So I'm raising my hand again, because I want to push you guys a little bit on: would you be interested in the Visual-Meta VR thing? What I'm asking specifically is: if somebody offers a document with Visual-Meta as a PDF and puts it somewhere, do you find it interesting to parse the Visual-Meta as part of how you can present that document in VR space? Because if you are interested in that, then you can, of course, invent what this would mean in terms of the Visual-Meta and the interactions.
Adam Wern: I mean, it's very unclear what the Visual-Meta is. You say that we can put anything in there, but then you're talking about metadata in general, and how that is useful if it's useful metadata. What kind of metadata are you actually talking about? Can you give an example that would be useful to you?
Frode Hegland: Yes. I couldn't imagine anything I'd rather do than tell you that. The different kinds of metadata that documents can contain, and that Visual-Meta supports, include structural metadata, which is essentially headings and page numbers, so that you can choose to divide the document up by headings, because PDFs usually don't have that. Then there is contextual metadata, which is what this document cites. So that's the references, but connected to where they are in the document and addressable. One second. Yes, real quick, I'm in a really big meeting.
Brandel Zachernuk : What?
Frode Hegland: I would like this
Adam Wern: One for people. Yes. Big ideas. Big, important here. Yes.
Frode Hegland: This one. Hello. Ok.
Adam Wern: We are the big meeting.
Frode Hegland: Where were we? OK. OK, cool. Right. So that's really core. But I'm realizing more and more that the defined terms slash glossary is really, really important too, because I really feel that as we massage our knowledge, let's use the term again, sculptures. Thank you, darling. You know, "to help us think, so-and-so is so-and-so": that's actually really useful for people trying to understand us, because it is very much about intention. This is what we intended to write. So that's why one of the views I already have in Reader is: only show me headings plus defined terms, because that's what the author cared about. That could be one of the views where, in VR, you could say, you know, put everything else in the background.
Mark Anderson: You know, there's a really interesting thing in doing that too, because if, in doing that, you're looking at, say, the headings and the defined terms, and you, as the person who authored the document, don't see the reality of the document in them, it argues one of two things: either you didn't do enough definition of terms, or we may need another sort of object, another strand of object that wouldn't sit well as a defined concept but exists as another bit of metadata that allows the story. I don't have an example in my mind, but I just know sometimes you may think: yeah, but this thing is not really a concept, or it's not a person or a thing I can really describe, but there's an important idea there.
Frode Hegland: Oh, and in the defined-concept dialogue, I'm looking at what kind of additional stuff to put in, like whether the type would be private and so on. This is what you guys mentioned earlier: the tag would be person, institution, that kind of stuff. So the question is, what would be useful here? Because the whole dream for me is: you have a rectangular document, and if you're actually reading it, there's probably no better format than a rectangle with beautiful type to really deep-read, as long as it's laid out nicely. But you can't read all the documents all the time, so you need to choose what elements to look at. So if you're reading one long document, you should be able to put it on a timeline, which this kind of thing can do. Or you should be able to say, only show me the bits that are about people, which this can do. And then, of course, if you are dealing with a corpus like the hypertext proceedings, and all the documents have this, then you can say: I want to see everything that refers to this time period, or only about institutions, only about that.
Frode Hegland: So you start moving the things around. And when I talk about PDF Visual-Meta, it doesn't have to be PDF. There's no real reason we couldn't export this as HTML as well, right? I would do a PDF as a backup, because it's plain and simple. But the point is: imagine. We all of us like to tinker. You know, we don't build model airplanes anymore, but if we had the time, we probably would, wouldn't we? Right? You put them together; it's really nice, and it just feels good making that shape happen. Imagine providing a software environment where it's pleasant to do that, because, you know, it's not just for your own brain: it'll go into an environment where other people reading it will have access to all your beautiful little details here and there, and they can thread them together. Does that answer the question a bit more, Adam, on what kind of metadata? Because it's not limited to that: if you decide that another type of metadata is useful, we should look at how to make that happen.
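As a concrete sketch of the parsing being discussed: Visual-Meta appends a BibTeX-style block, wrapped in plain-text markers, to the end of a document. The marker names below match my recollection of the format, but the entry and field names are illustrative assumptions rather than the official spec, so treat this as a reading aid, not an implementation of it.

```python
import re

# A hypothetical document tail carrying a Visual-Meta-style appendix.
SAMPLE = """
...last page of the document text...
@{visual-meta-start}
@article{hegland2022,
  author = {Frode Hegland},
  title = {The Future of Text},
  keywords = {VR, metadata, glossary},
}
@{visual-meta-end}
"""

def parse_visual_meta(text):
    """Find the block between the start/end markers, then collect
    'key = {value}' fields from the BibTeX-style entry inside it."""
    block = re.search(
        r"@\{visual-meta-start\}(.*?)@\{visual-meta-end\}", text, re.S
    )
    if not block:
        return {}
    return dict(re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", block.group(1)))

meta = parse_visual_meta(SAMPLE)
```

Because the metadata travels as visible plain text inside the document itself, a reader application that knows nothing about PDF internals can still recover it, which is the "bridging format" argument Frode makes later.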
Adam Wern: Yeah, I think I agree with your last question, where you build something with your small notes and your personal reflections and share it with the world, because I feel that I don't do that; I build it for myself so often. I have lots of things. I work with improvisation drama, and I've done a large number of exercises and refined them, and these exercises really need the commentary to be useful. You can't just do an exercise; you really need to know why you're doing it, what you're practicing, what you're looking for as a teacher, and so on. All these extra details are what make it useful. You can't just read a game and understand the beauty of it, but sometimes you can transfer the beauty and the things to look for. And I want to share that, and a book has never been the right form. There is no linear order to these exercises; they are more clustered, and I want another canvas for it. I've been thinking of doing big posters, but it's kind of hard: it would be a few metres wide, and it's hard to send a poster to people to read, and so on.
Frode Hegland: So yeah, it’s a little bit perfect for that.
Adam Wern: Yeah, I mean, it's a perfect playground for it. And also, I like a bit messy sketches, because that leaves room for the audience to think; perfect things are a bit closed. In good drama, the best lines are the ones you write in your own mind as an audience; an author can never match the lines you think. So good drama, I assume, uses silence, and the silences are where the audience writes the lines. So I like sketches, because they open up audience participation. And so I would like to have an object, or a way of transferring these sketches and all these ideas as small anchor points: chords that play a tune in the audience's mind. And I'm really against the knowledge graph, because knowledge is by definition inside us; knowing things is inside us, but people talk about knowledge as outside. I think it can only be inside us, and we can have chords or anchors to knowledge in our information systems; we can never have the knowledge itself. And I think there is a confusion: it's not good to speak about knowledge in that way, because it misses the smaller detail that you evoke things by those anchors. It's not the knowledge itself, and we need to be more precise with that language. But I want the instrument, or the document, the document sounds dry, but the document, to send to other people.
Adam Wern: And so that part I want to have. For the better part, I don't see the connection to the current PDF Visual-Meta. It's not clear, because we could go directly for that kind of rich knowledge object and not be limited to the very, very linear and kind of destructive nature of PDF. Working against the technology here doesn't feel useful to me, when HTML is already beating PDF in so many dimensions. Even now it's getting fixed layouts, and it has metadata you can hang directly onto text. So why hang it on a page, in the end, in a typographic format? It doesn't make sense outside that purpose. I really understand why it's good to print it, in the legacy world of PDF, but I don't think it's the data I want. I want to have the data in an undisturbed form, so I can avoid working against the technology and actually do the kind of representation we have been talking about. So I think you print to PDF, but you should not have that as a data source. A PDF is just like the models we have here; maybe we can attach a model with the metadata and the data, but I want something pure to project from.
Brandel Zachernuk: One of the things that I've been doing with this infrastructural multiplayer stuff recently is trying to build out a fruitful enough foundation to be able to build those applications on top of. And so, you know, it is definitely my intent to take a look at parsing Visual-Meta and thinking about what sorts of representations spring out to me as being interesting and useful, for the benefit of being able to navigate and parse the document. But based on where I come from, my relationship to text technologies and those kinds of things is much thinner. And so I don't have that same kind of basis on which to reason out of the box about it, until I get my hands on the ability to actually process it and think about what significance it might have.
Adam Wern: That could be my thing as well. I did my master's thesis 12 years ago or so, and I haven't looked at the diploma since then. I haven't done any writing, and I'm also a non-native English speaker, which puts up an extra layer, or block, here. I read a lot of English text, but I don't write it, and that's why I'm reluctant to write for The Future of Text as well, because it takes me five times as long, and I need an editor in some way, or a proofreader. And we don't have an abundance of proofreaders here, because everyone wants to write and no one wants to proofread, I think. Or more people want to author.
Mark Anderson: Yeah.
Frode Hegland: So I've got to jump in; I've got to put my sword in and fight on this one, because it's really, really important. First of all, linearity is really important, really, really important. You cannot have academia without linearity, because linearity is making an assertion, making a point. This is one of the first discussions Mark and Chris and I had when I started at Southampton. I was all about making it all hypertext, hypertext everything. But then you don't have an argument. This is something that Barbara also talked about: a graph or diagram is fine, but it doesn't tell a story. And this is why it's so bloody hard. Like with the thesis: the last couple of months I hardly wrote anything, but I couldn't think of anything else. It's really, really hard. And I'm not saying we should make those long documents for everything; that is not what I'm saying. I'm polarizing myself a little bit against what you said, because I very much agree with it, but just for balance: a sentence has to have grammar. All right, you can have the word "yes", "no", "fine", stuff like that, but for anything else, you need grammar. And that is the basic importance of text and speech. So if you have a longer piece, you need some sort of threading. Right? So the piece that I'm trying to write, the one that you hated, the kind of manifesto thing: it's really, really hard to write, especially in a community, but it has to have a little bit of an intro. This is important. But does it have to be always one linear, long thing? No, absolutely not. A lot of that can be moved around; a lot of it is arbitrary, no question. So the whole balance of how you make a linear statement and how you put stuff behind it is really, really crucial.
Frode Hegland: This is where I think we as a community really share a perspective, because we talked about newspapers earlier. I don't read newspapers very often, you know, The Economist and a few other things, partly because the way it's written is so shit. The story has to start with: John walked the dog outside and it was rainy, a bit of personal fluff, and then he saw it was the end of the world, right? So I'd like machine learning to get rid of that first bit; and then it's bad copywriting with lots of repetitions. This is not good linearity, but you still need to find out what happened. So if we can manage to get closer to a little bit of a statement, and connected: that's why I'm on and on and on about the concept stuff. I want to be able to write: "I had a meeting with Adam today and we talked about VR, we decided blah blah blah." If most of those words are defined, including "Adam", that means the reader can see: oh, it's Adam Wern, who is he, blah blah blah, right? This is really important, but I still have to write that one sentence. And going to the whole Visual-Meta thing: I do not think that Visual-Meta would be a very useful format inside VR. Absolutely not. But what I do think it is, is a bridging format, because it's ridiculously open. That's all I'm pushing for with it. Every kind of thing should be able to go into our VR rooms; we should be able to do the kind of drawing we did, and it'll have meaning in VR. Sorry, but Visual-Meta will be useless for that.
Adam Wern: Well, but isn't HTML even more open, in terms of tooling and the actual ease of getting the data in and out of it? It's so interesting: I feel, now when I start with VR, that I can hang things directly onto objects, and in HTML that I can hang data or metadata directly onto text or paragraphs, and even characters if I want. And the tooling for that is so much better than when I worked with the web 10 or 15 years ago. Now you can even make your own tags and it's suddenly fine, OK? It's a new world to me, coming back to computers from theater.
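Adam's point about hanging metadata directly onto a span of text in HTML can be sketched with a custom tag plus data-* attributes. The `<dfn-term>` element and its attributes below are invented for illustration only; the standard-library `html.parser` is used to show that such inline metadata is trivially machine-readable again.

```python
from html.parser import HTMLParser

# A fragment using a made-up custom tag: the metadata (type, id)
# rides directly on the text it describes, not in a separate file.
FRAGMENT = (
    '<p>Met with <dfn-term data-type="person" '
    'data-id="adam-wern">Adam</dfn-term> today.</p>'
)

class TermCollector(HTMLParser):
    """Collect (text, attrs) for every custom <dfn-term> element."""

    def __init__(self):
        super().__init__()
        self._in_term = False
        self._attrs = None
        self.terms = []

    def handle_starttag(self, tag, attrs):
        if tag == "dfn-term":
            self._in_term, self._attrs = True, dict(attrs)

    def handle_data(self, data):
        if self._in_term:
            self.terms.append((data, self._attrs))

    def handle_endtag(self, tag):
        if tag == "dfn-term":
            self._in_term = False

collector = TermCollector()
collector.feed(FRAGMENT)
```

In a browser the same fragment renders as ordinary prose, which is exactly the tension Frode raises next: the metadata is present but hidden behind the rendering.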
Frode Hegland: OK, let's fight over this one, because I think what you're saying is actually wrong, because it is hidden. Right? You don't show the HTML; that's hidden away. You show a rendering of it. That's where, to me, it gets kind of dangerous, and I guess that's why we have this markup stuff, which is kind of a hybrid. Because yes, you can put a lot of stuff into HTML that's really, really useful: you put the data there. But then over time, as tags and meanings and renderings change, that goes kind of away. That's just my concern.
Mark Anderson: But they won’t change, though. I mean, they mean what they mean. I’m not sure I buy that argument, actually.
Frode Hegland: Well, if you look at older pages, especially if they have some kind of multimedia stuff to do with them, even basic stuff: look at 9/11. A lot of the stuff that was made to communicate what happened on 9/11 is completely unviewable today.
Mark Anderson: But that's not to do with HTML per se; that's to do with an interim technical format. The fact that you can't watch a video that was shot in a format that is not supported now is actually nothing to do with HTML per se. That's a failure to preserve digital formats.
Frode Hegland: It's not just about that, but OK. So, the thing is... no, OK, let's not argue about this too much, because in principle,
Adam Wern: We do have EPUB, which is the HTML version of the document, so it's a fair fight, because I feel that you are talking about going back to the Wayback Machine and trying to browse a video. But if you have an EPUB, that is a fixed thing: a document you own, with a known, fairly compliant format; the data is still there. So the question is whether it's visually represented to the user in some way, like your last page with the Visual-Meta. But that is also rendered. So there's a question about being shown to the user, and that's where I'm with you: metadata is often hidden, and then we don't fill it in, because it's hidden and we don't care; it takes too much time to fill in forms for things that are hidden. But if the metadata had been the first page, the cover of your EPUB or thesis, you would make sure to fill in the fields, because that would be the first impression of your document. So I think it's more about actually showing metadata and making sure that people fill it in; and it's also an economic problem, or an organizational problem.
Mark Anderson: That chimes with the fact of an awful lot of data being exhaust data rather than actually intentional effort. Yeah. And the fact that it's hidden. I mean, this partly gets back into all sorts of cultural divides between the humanities and the technologies, and all sorts of things that are actually completely pointless
Brandel Zachernuk : Distinctions
Mark Anderson: To draw. I mean, basically, you know, the problem with some of this metadata is it involves something you're not used to doing, which to most of us means extra work, and most of us don't like extra work. And it's pretty much that; sometimes because it's not made easy to do.
Frode Hegland: Most of the metadata in Visual-Meta is free. It requires no effort at all.
Mark Anderson: No, no, no. I understand that.
Frode Hegland: It deliberately doesn't. There's no filling in. You fill in your name and the title; that's all.
Adam Wern: But some of the metadata in HTML, like headings and so on, is there by default, of course, and it has more pristine text now. It's not as bad as it was 20 years ago with character encodings. I have some special characters in Swedish that were always mishandled, and that is a Latin script, which is, yeah, far better supported than all the others in the world, except for English. So I think,
Frode Hegland: Yeah, OK, well, fine. Let's not waste too much time on this aspect of it, because, you know, Visual-Meta is slightly archival also. But for you guys: let's say you're talking to me, a software vendor; somebody is an author doing a thing, and now they want to have it in a VR space. Would you both prefer that it is rendered in HTML?
Brandel Zachernuk: So I don't have a preference. Like I said, I'm pretty promiscuous as to the data sources and the representation, so long as I can come up with a mechanism for parsing. I'm not concerned whether it's represented in Visual-Meta style or in anything else; as long as it's consistently parseable, it's something that I can make use of. I actually think that this sort of argument over HTML versus other formats is not a distraction, and is in fact central to some of the discussions that we ought to have here. One of the issues with HTML, as Adam pointed out, is that its job has changed over the years, and so what matters about what is represented within it has changed too. We're talking about a range of things, and one of the things I think you did very reasonably, Adam, is comparing an EPUB's HTML to another fixed document, because link rot is a separate question. Representational deterioration is the thing we're talking about, in terms of what ceases to make things relevant. James Bridle has a really good bit in New Dark Age, where he talks about the fact that the BBC produced a sort of millennium edition or something of the Domesday Book, all produced for the BBC Micro in the nineteen-eighties. And now, thirty-something years later, people have had to mount a digital archive rescue effort for the BBC Micro thing, while the Domesday Book itself is just as readable as it was a thousand years ago, or whenever it was written. Yeah. And so, you know, I think
Mark Anderson: That most of the early hypertext literature now can’t be read. Exactly. Flash is disappearing.
Mark Anderson: But, to try to get some threads here: first, I would say, Brandel, don't think you're not up with the hunt in terms of academic stuff. I mean, I think a PhD is merely a sort of log of attendance; it's just, can you get to the end, because, as Frode said, a lot of it is just thinking about this and maybe not thinking about anything else. One of the things I was thinking about, in relation to doing something with Visual-Meta: I think you're absolutely spot on. Visual-Meta at the moment has stuff written in BibTeX, because the initial use case we were doing related to academic citation, which is a side thing, but it's just something that happened. But it's not organized as a corpus such that you could just put a URL on everything, even if a URL didn't immediately die. So it has this... I mean, BibTeX is quasi-parseable; it's like it has several different religions that live within it that don't talk to one another, and so the parsing is a tad more complex than you want. But I remember the discussion at the time; it was a bit like the argument for doing things with PDF.
Mark Anderson: The point was, if you look at all the other formats we've got, it's the only one that is effectively, I guess, non-mutable. If you take a Word file or something, do you know that it won't get changed by a process? A PDF is broadly, for better or worse, baked in. That's the upside of it; pretty much everything else is potentially a downside. With Visual-Meta, the fact that it's using BibTeX is basically because we had to choose something. The biggest heavy lift at the time was doing the academic referencing, which argued for it, and it was basically a choice between that and, I think, RIS, which makes your eyeballs bleed, just in a different way to BibTeX, so it wasn't exactly a stellar choice. And I always maintained, from the get-go, to anybody prepared to ask or listen: it's there until we find something better. It's not there because it's good; it's the least worst of the available choices. Which gets to your point, and my thought in my involvement, where I tried to contribute, was exactly what you said: if it's parseable, at the end of the day, it really doesn't matter. The biggest error we could make is to produce something that just can't be parsed. If I have to look at it as a legacy device, and even if I have to go through several levels of parsing for the really old stuff, it's not lost, because we can track back, and we won't effectively need an emulator, which is a whole different kettle of fish.
Mark Anderson: So the interesting point about using Visual-Meta, the thing I think is interesting about it, is the in and out. Can I take Visual-Meta, put it in a virtual space, and get something useful from it? It's not that we want to look at the actual data; we will be wanting to look at a render of the data. But we might want to do something: we might, for instance, want to look at the glossary and its interrelations, you know, the sort of Author-like concept map. There is a 2D map, and it probably is going to be a sort of 2D-ish map even in 3D, but you'll be able to interact with it more easily, perhaps. But also, what can you take back out? Because an interesting question is: if you can't, then we've got more work to do on Visual-Meta. If it's a sort of roach motel, where the data goes in and doesn't come out, that is a problem to address.
Mark Anderson: A couple of other things to say in terms of Visual-Meta: well, Frode's thesis is effectively baked now, and mine's done, and if I fix the issue with the ligatures, we've got a couple of bits of large Visual-Meta metadata that we can just play around with. The point is, they're as complete as they can get; they're from known resources; you can talk to the author and say, what the hell did you mean by that? I know, for instance, I'm perfectly happy to take mine and put in more concept-map data, which I'd probably set manually, because I can't put the whole document into Author at this point. But it wouldn't be difficult for me to take the Visual-Meta that's attached to it and enrich it with some more stuff, such as might have come from Author, simply to allow you to look at it and say: here's the thing, what can I make of this? The answer may be nothing, but if that's the answer, that's actually really powerful; that's all pertinent to what we're doing here. Because part of this is: how do we take what we have and put it in this new space in a way that's useful? Which also makes me think about the scale and size, because people often necessarily start with really small representations, because, you know, you'll just try to make the damn thing work at the outset.
Mark Anderson: But I am thinking: how big should we go — should we be finding some bigger bits of data? That's an area where I'm happy to help, because it's a job in itself, and it detracts from the building. Just making the thing that you put into the thing that you make is a wearisome body of work in itself. So if I can usefully do that, I'm really happy to help. To that end, I still think some of the data within my overall citation dataset might be useful, because, for instance, we could play with that on a timeline, on a temporal scale, and get somewhere close to the psychology article I must have mentioned moons ago. It may be that even having done all that, it proves not to be useful — but I think the answer is we won't know until we've done it, because it gets back to this problem: there's something really exciting and interesting in having this wider space; what's unfortunately less clear is what's useful within it. That doesn't invalidate the fact that it's interesting, but it still makes it difficult to know what we're going to do gainfully within it. I'll shut up and let Frode speak.
Frode Hegland: I'm really appreciating everything being said here, and it's lovely to fight with Adam on this, because it's the good kind of fight — obviously we agree on the end goal. So I wrote a few notes, and maybe the top one in terms of VR is this: Visual-Meta is something to smuggle things past the gatekeepers — and there will be different kinds of gatekeepers, data gatekeepers and all kinds of things. So this jokey definition of Visual-Meta as just writing, at the end, what the document is — I think it's really, really important, and I'll just put my notes in there. It is far from perfect; it is missing a lot. All the kind of stuff Brandel was talking about earlier — capturing the key presses and the sounds and all of this — is absolutely fantastically useful. All of that has to happen, but it's a different thing for different uses, you know, to open that in VR from Author or whatever software. But just getting the basic stuff — here it is, I framed a little thing, and this is what I want the world to know about it; please, software, can you read this — is really, really important. And it's worth saying the obvious thing every once in a while: citations are different from links, in that they don't actually give you anything; they just tell you how to find it. So when we talk about different levels of referencing, a citation should ideally open something up for you immediately, but if that's missing, it contains enough information to help you find it elsewhere.
Mark Anderson: Well, it also tells you what it is — because citation predates links, back when there weren't any.
Frode Hegland: And it tells you the attributes of it so that you can go find it. But I think we should apply that same kind of thinking to other things — not necessarily something that's cited specifically, but, you know, kind-of-hard-to-describe things. Anyway, that's why I'm asking. There are many, many things we need to do, but I would really, really like a workflow of somebody sitting at a laptop writing some stuff, and then opening it up in VR. The way I mentally picture it now is: we have the fake VR laptop, and you take an instrument, hold it on top of it, and it scans that text — and suddenly you have the data. Just a way to make it work.
Mark Anderson: The data — how does it have the data?
Frode Hegland: But this is what I mean — imagine this; this is me being really dumb on purpose, right? So in Horizon Workrooms or whatever, you have your virtual screen, there's Visual-Meta on the screen, and you take a piece of software that is an OCR scanner. It touches the screen and reads that text, and that is how the data gets into that room.
Mark Anderson: So I think we're at cross purposes. No — the bit you've described is just translation; I'm not confused about that. I'm saying: once this data has got from your laptop file into the 3D space, then what do we do with it?
Frode Hegland: Oh, well, that's where it gets really fun. Because Adam, today, he puts in the book and it's just a rectangle, just flat. But then a lot of the stuff we talked about earlier can start to happen — and I'm sorry for going on and on about it, but the whole concept thing. If the author has defined certain things as having certain values that are connected, those can literally be taken out and put in the space. And if you have Wikipedia behind that, for instance, that can connect as well. So, you know: Doug Engelbart — oh, he lived in Atherton. Atherton, OK, it's a location — let's do a map thing. Where is it? You can go on and on. But these are the points where you don't have to go page by page; you have all these elements, you have the citations, so you can draw in everything it cites. If the concepts are actually people, you can choose to have icons for the people on the side. This is crazy stuff that I think we could spend a lot of time on in VR once we have this basic data to play with.
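[A minimal sketch of the kind of extraction Frode describes: lifting fields out of a BibTeX-style Visual-Meta block so that people, places, and concepts can drive a map or concept view. The field names and entry shape here are illustrative, not the Visual-Meta specification.]

```javascript
// Pull key/value fields out of a BibTeX-style Visual-Meta entry so
// concepts and people can be lifted into a 3D scene.
// Field names below are illustrative, not the spec.
function parseVisualMetaEntry(text) {
  const fields = {};
  // matches lines like:  author = {Doug Engelbart},
  const re = /(\w+)\s*=\s*\{([^}]*)\}/g;
  let m;
  while ((m = re.exec(text)) !== null) {
    fields[m[1]] = m[2];
  }
  return fields;
}

const entry = `@visual-meta{
  author = {Doug Engelbart},
  location = {Atherton},
  concepts = {augmentation; bootstrapping}
}`;

const meta = parseVisualMetaEntry(entry);
// meta.location can now drive a map view, meta.concepts a concept graph
console.log(meta.author);   // "Doug Engelbart"
console.log(meta.concepts.split(';').map(s => s.trim()));
```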
Brandel Zachernuk : Yeah.
Mark Anderson: Yes, to a degree. I mean, there's the difficult counterbalance, which is: yes, we can say, oh, I can see all the things geographically — but do we actually need to? You can theoretically build everything, but it has to be built. And part of it is asking at this stage, if you're thinking in practical terms, which aspects are worth getting past the barest bones, so that you can actually show a much wider audience there's something useful there. Because the slight danger is that people — especially probably the more educated end of the audience — say, that's fine, it's great, but look, I have a computer; it does all this stuff already. So that comes back to the point: when I say, what do we do with it? — having had a hand in helping make Visual-Meta, I understand what it's for and I have a strong belief in it. I'm just trying to think what tasks we should set ourselves. I'm also wondering about another useful thing for the data Adam has at the moment: David Lebow went through — and it may only be in his HyLighter system at the moment — and did a whole lot of keyword thematic tagging, if you will, of the documents. That would usefully be added back into whatever dataset Adam's got and was showing, because that's another interesting thematic stranding — exactly the sort of thing a more malleable display could use.
Frode Hegland: Mark, to address your question: what I referred to quite a bit earlier is that there are many things we can do in VR, and I think everybody here should express what they want to do and see if there's an overlap. But in this particular context, what I'm talking about is augmenting someone's ability to write a thing where they define the details — the whole concept thing — and then publish it in a frozen form, where someone in VR can read it linearly if they want to, but where all the extra highlighted bits of the Visual-Meta metadata are accessible for them to view in flexible ways. So they're reading something and they come across — and I think this is very Ted Nelson, visually — here is bracket one twenty-six; you click on that or whatever, and there's a line to that document; you see it floating in the background; you can choose to view the whole thing or not. So Visual-Meta actually gives us a lot of data to interact with. And that's all I'm saying — I think that flow is one of the things to do. One person reading one document written by one other person actually gives a huge amount of real interactions to demo. Because, as Doug used to say — well, actually, no, it was my teacher: don't tell them, show them. Doug only ever made progress with a demo. This is one opportunity we have for a demo. I'm willing to support many others, but this is what I was talking about. Yeah, Adam — Adam, you're on mute.
Adam Wern: The thing is that I'm in my VR honeymoon phase right now, so I get so excited by the things you can experiment with — especially hands and spatial audio. Now that I've tried spatial audio a bit more, it's really important to place sound; it opened a new thing for me. Just as with the hands — actually seeing your hands and pinching things and taking things and enlarging things with your hands, in the different apps and even in the browser. To me, that is fantastic, and not something I'd counted on.
Frode Hegland: Oh, I didn't realize it was visible yet. Sorry.
Adam Wern: Don't do graffiti while I speak — I'm easily distracted. The good thing here with VR is that there are so few fixed interactions in place when it comes to text, or even hand gestures and what they mean, and there is no overriding system telling you what to do. In every operating system, if you want to do something novel, or explore something better, you often have to fight — try to disable and work around the built-in functions, like text selection in the browser. If you want to do something different with text selection, you almost have to render every character yourself on a canvas; you really have to fight it. In VR, that is open, and we have the opportunity. Many things don't translate well — like text selection: in VR you have your hands and controllers, and it's not obvious how you do it. But if we give the technologists — I'm one of them, of course — too much time, they will translate everything from ordinary operating systems onto flat canvases, with awkward text selection, not using the high-fidelity hands, for example, or anything slightly more analog. So to me, it's much more urgent to get better interactions — new embodied interactions — into VR than to do the kind of bringing-in-Wikipedia work, because I think that's easier to do later on. It's much more urgent to find the user interfaces than to do the slightly more visual work.
Frode Hegland: Yeah. And I think Brandel agrees with you, because he's doing a lot of that. I think we should do that, but I also think it's important that it's somehow based on real data, because otherwise it becomes very removed from the world. That's why something like Visual-Meta might be part of it — at least, you know, some of the data's there anyway. Maybe, guys — I just saw this thing come out; we should all meet in it. It's one of those multiplayer run-around things, and it looks very polished, very game-ish. But we all have Oculus now. It's twenty-three pounds, so it's not cheap, but then we can do some swimming and jumping and spatial audio. Something to consider, anyway — that we meet in a few of these spaces and see how it feels, as long as we don't walk into each other at home.
Mark Anderson: I was thinking something that — go ahead.
Brandel Zachernuk: But that's something that that slow Mira demo I produced a while ago is for: being able to demonstrate to people, in a safe space, just how intrusive and how transgressive people are able to be — because until it actually happens to you, you basically wouldn't believe it. So yeah, it's neat to know that you've had that experience and can be aware of the way that concern needs to be brought to bear on how co-presence can occur within a co-located virtual space, because there are all sorts of real, crazy rules that we've set up.
Mark Anderson: You might have two boundaries, like the Oculus boundaries. One is sort of the outside of the room, into the room — and then, depending on how introverted or extroverted you are, you have another boundary you decide on around you, while still in the room.
Frode Hegland: Guys, I have to go — it's past the hour and I have to feed the family. But Brandel, can you please add some links on our blog to the real VR experiences you have, so we can go into Oculus and experience what you built? That would be really, really appreciated, because if you add them there, then once you're on that page you can just click straight in.
Brandel Zachernuk: Sure — that's a good point. OK, I can do that. I can send a link to the sneaky thing — this is the link for the Oculus experience, for co-presence. So if anybody jumps on there, you'll have the ability to move a thing around, and you'll see my camera and my ball. And if you do this on a Quest, then you'll have your camera — whoever is in there is blue right now — you'll presumably see my camera, and you'll also see my hand, and I'll be colored the same color as my ball. It's just a very, very basic first foray into making use of a web interface to transmit this data. It's not as fast, in terms of latency, as I would like. One of the things I'm looking at is making use of a binary socket or a peer connection, so that it doesn't need to come through a server and multicast to everybody.
Frode Hegland: Sorry — OK, then I'm tagging you. I just added it to our web page. What's it called — sneaky VR space?
Brandel Zachernuk: It's called various-something — I didn't choose the name. That's just something that comes as a consequence of using the free tier of the Glitch service.
Frode Hegland: So what do you call it, this thing?
Brandel Zachernuk: It's just a multiplayer boilerplate.
Mark Anderson: Okay, interesting.
Adam Wern: I'm very interested in that as a next step, because I said to Brandel that I was going to look into multiplayer, but I had nothing to multiplayer. That's why I'm doing something like textual drag-and-drop — actually getting text in there to multiplayer with was obviously a first step. So I'm very interested in getting the multiplayer into that massive-multiplayer thing, because you have a —
Brandel Zachernuk: If you have a Glitch account, then I can give you admin rights to the project so that you can pull it down and take a look at it. It's just using Node with Express, and then Socket.IO for the connections. But yeah, I wanted to have something simple enough to branch off and do more specific things.
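[The heart of the boilerplate Brandel describes — every client's pose relayed to everyone else — can be sketched without the Express/Socket.IO plumbing as a pure routing function. In the real server this logic would sit inside a Socket.IO `connection` handler; the names below are hypothetical.]

```javascript
// Sketch of the relay inside a multiplayer boilerplate: when one
// client sends a pose update, forward it to every other connected
// client, never echoing it back to the sender. Written as a pure
// function so the routing logic is easy to test in isolation.
function routePose(senderId, pose, connectedIds) {
  return connectedIds
    .filter(id => id !== senderId)               // don't echo to sender
    .map(id => ({ to: id, payload: { from: senderId, ...pose } }));
}

const deliveries = routePose('alice', { head: [0, 1.6, 0] }, ['alice', 'bob', 'carol']);
// deliveries holds one message each for bob and carol, tagged with from: 'alice'
```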
Adam Wern: That's super cool, and we should get some of that. I looked into WebRTC — yeah, I looked into that, and it's doable. It's just a lot of work, though not extreme work, so we should be able to do it. And I've been playing with an idea — I want to try hands, so let me run an idea by you quickly. It's kind of like a traditional typesetter with actual metal type, but you have words instead. Because in VR you can have many boxes — not just uppercase and lowercase boxes of type, but words — and you can have words in different layers as well. You don't have to respect the physical boundaries: you can have words in different layers and just go down deeper to get new words, or go through a letter to get combinations of words. So I want to play with the idea that you pick words and put them together with your hands, like old-school typography — and also that you could take a word and dip it into a kind of synonym bowl and get the synonyms for it. So it's really hands-on, not just clicking on a synonym; everything is done with the hands. You dip a word, and you also take words and rip them apart and put them together as a unit. So I looked at your — what was it called — the lead type editor? Even that, but going further: removing the boxes, making the words float, so you could even twist a word. I really liked your demo.
Mark Anderson: I'm now imagining Adam in a virtual letterpress shop.
Adam Wern: In a sweatshop, more like.
Mark Anderson: Right — and I used to make stuff on a letterpress.
Adam Wern: Yeah, in a sweatshop, we're making essays for Frode.
Mark Anderson: It's funny that you find you're in a virtual sweatshop working for somebody else.
Adam Wern: I find the idea of actually breaking sentences apart by pinching them at the break point and taking them apart very interesting — instead of cutting with a tool, actually ripping them apart with two pinches, or putting them back together. It's just an idea.
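[At the data level, Adam's rip-apart gesture reduces to splitting a sentence at the word boundary nearest the pinch point. A small sketch of that — snapping to the closest space is one arbitrary choice; the function name is hypothetical.]

```javascript
// Split a sentence at the word boundary nearest the pinch point
// (given as a character index): a data-level model of ripping
// text apart with two pinches.
function ripAtPinch(text, pinchIndex) {
  // find the space closest to the pinch point
  let best = 0, bestDist = Infinity;
  for (let i = 0; i < text.length; i++) {
    if (text[i] === ' ') {
      const d = Math.abs(i - pinchIndex);
      if (d < bestDist) { bestDist = d; best = i; }
    }
  }
  // return the two halves, dropping the boundary space itself
  return [text.slice(0, best), text.slice(best + 1)];
}

console.log(ripAtPinch('dip words in a synonym bowl', 10));
```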
Frode Hegland: I can just see your hands in that space you made, Brandel — listening to you guys in that space.
Brandel Zachernuk: Yes, now I can see you on my 2D thing; I might be able to jump in and see you as well. And I agree, Adam. One of the things I was really excited by — I don't know if you saw my smart triangle, but the idea of making a calculator with no buttons, actually representing those things visually, points to the fact that the mechanism through which you undertake a task doesn't need to be as tightly bound to the textual representations we make use of today. To that end, something I've played with a lot is: what is the Photoshop for text? So, having something like an opacity slider that dictates not the graphical opacity but the textual opacity — taking a list of synonyms from a thesaurus and replacing simpler words with more opaque ones. My father's been involved in government and NGOs and things like that, and they always say: why use a fifty-cent word when a two-dollar word will do? So being able to have an opacity slider you can actually manipulate to ramp things up or down — I think that's really interesting, as well as things like in-situ, thesaurus-style alternative recommendations. So gestural manipulation of what are considered to be textual things helps us recognize that writing per se is not the process of inscribing specific glyphs onto a thing, or punching keys, but the process of codifying thought in a way that can be represented and retrieved by other people at different times.
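[Brandel's "textual opacity" slider can be sketched as a pure function: the slider value chooses how far along a plain-to-opaque synonym chain each word is rendered. The tiny thesaurus here is a hypothetical toy, not a real thesaurus API.]

```javascript
// Hypothetical toy thesaurus: each entry orders synonyms from
// plainest (fifty-cent) to most opaque (two-dollar).
const thesaurus = {
  use:  ['use', 'employ', 'utilize', 'instrumentalize'],
  help: ['help', 'aid', 'facilitate', 'operationalize'],
};

// opacity in [0, 1]: 0 keeps the plain word, 1 picks the most
// opaque synonym, values in between step along the chain.
function applyTextualOpacity(text, opacity) {
  return text.split(/\s+/).map(word => {
    const chain = thesaurus[word.toLowerCase()];
    if (!chain) return word;                       // unknown words pass through
    const idx = Math.round(opacity * (chain.length - 1));
    return chain[idx];
  }).join(' ');
}

console.log(applyTextualOpacity('use help', 0));   // "use help"
console.log(applyTextualOpacity('use help', 1));   // "instrumentalize operationalize"
```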
Adam Wern: And so —
Brandel Zachernuk: I'm glad you like it. And so that means that writing — as well as being the most important thing humanity has ever done, which it will continue to be — we got it wrong. We're vastly too specific about what it actually is as a core activity. And this is an opportunity to say: no, we write by resolving thought, and we can resolve thought by dipping words in and recombining them. I love it.
Mark Anderson: It also sounds to me like maths — which eludes a vast amount of humanity past the most simple level — is something else that's ripe in that sense, for a closer interaction. So if you need to understand a bit of geometry or calculus or whatever by actually doing something — and in fairness, that's quite well touched on in Bret Victor's work; I thought he was spot on there.
Brandel Zachernuk: Well, he's much more confident in mathematical concepts than in education or information, or even in communication per se. I mean, he's no slouch in terms of public policy, but in terms of actual academic understanding of what it is to educate and to communicate — not quite the same. So yeah, I agree. Adam, if you want to send me — or if you're on Glitch already — I can give you access to the code base. Everybody can; it's not that it's private or anything, but you do, I believe, need to have a Glitch account.
Adam Wern: I want to get one. I'm not there yet, but I will, and I'll send you the email.
Brandel Zachernuk: Yeah, yeah. So whenever anybody gets those things, they can take a look at it — and you don't need to be a developer to run it, because you can also fork it and give it to other people. I'm not concerned about it; I just wanted some barebones thing to start being able to reason with these things and communicate. Another thing I'm really excited by is the fact that I realized pressure sensitivity is available in Chrome nowadays, so I can use my Wacom — I can use my Wacom and a virtual reality headset at the same time, pull all those things together, and have a really, really high-fidelity drawing environment to whatever extent I desire. So, you know, there are just so many incredible opportunities.
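[The pressure Brandel mentions arrives through the standard `PointerEvent.pressure` field (a value from 0 to 1) in Chrome. A minimal sketch of feeding it into a drawing loop — the width range and the `drawSegment` helper are arbitrary illustration choices, not a real API.]

```javascript
// Map PointerEvent.pressure (0..1) to a stroke width in pixels.
// minWidth and maxWidth are arbitrary illustration values.
function strokeWidth(pressure, minWidth = 0.5, maxWidth = 12) {
  const p = Math.min(1, Math.max(0, pressure));   // clamp to [0, 1]
  return minWidth + p * (maxWidth - minWidth);
}

// In the browser this would feed a canvas drawing loop, e.g.:
// canvas.addEventListener('pointermove', e => {
//   if (e.pointerType === 'pen') {
//     drawSegment(e.offsetX, e.offsetY, strokeWidth(e.pressure)); // hypothetical helper
//   }
// });
```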
Adam Wern: One thing, while I have you here: passthrough is currently not available, you know, but I wonder what the hacky solution would be. I wonder if one solution would be to stream a video feed from a Raspberry Pi Zero 2 with the camera module — I have one lying around here somewhere — duct-taped to the headset, to actually get passthrough through a web connection. That would be one way, because I want to see my iPad with the Pencil, which also has pressure sensitivity in some way. And also this keyboard — to get the keyboard in there when you need a discrete device.
Brandel Zachernuk: Well, the iPad actually also gives you three-degree-of-freedom orientation information, so you'd know not just the pressure sensitivity on the surface, but also the tilt.
Adam Wern: I've played around with that a lot. Yeah, it would be excellent — writing with it in the air, tied to your hand. OK, I have to go to my family as well now; it's Friday evening here and I've skipped out.
Mark Anderson: Very good. One passing thing, on a practical note: if there's anything in terms of parsing the Visual-Meta, by all means give me a prod. On the conceptual end I can consult, but if it's just, you know, "why is there a bracket here when there shouldn't be?" or something, I'm probably the better starting point, as I'm responsible for some of that. Yeah.
Adam Wern: Awesome. Yeah, I really will, Mark. I'm sure we will get both theses in there — I've parsed PDFs and made rectangles of them, but they're non-interactive. I can't really select text, because it's really hard to find out which character is which in practice; it's doable, but it would be a lot of work just to do that. And Mark, with your dataset, I'm sure we will get to visualize that hypertext dataset in some way in three dimensions — I think it's very suited for that. So we will do that. It's just that I'm so into the embodiment part, and that's the newish thing — not just a shiny tool, but getting your hands back; that's really important for me. At some point we may get the eyes and feet back as well. I think feet are completely forgotten — it's just the dancers of the world that love the feet. But that's for later.
Frode Hegland: OK, well, it's really, really important to hear you say that. I mean, I'm raising a child and he is a bit of a dancer, and I don't want that to be lost either. Of course, a lot of the work we can do sitting down — so, you know, that's a whole different thing, and then there are the issues with movement. Also, you said earlier "Frode's Visual-Meta", and it makes me feel a bit horrible when you say that, even though I know I kind of came up with it. It's an invention that is so obvious that I hope it's something we can really share — and it's only one communication medium for these things, so if you want to change what Visual-Meta is, let's do that together. Anyway.
Adam Wern: Yeah, but I see it as a thing at the end of a PDF, because metadata is not your invention — it's always been there, everywhere, ever since the first glyph stroked on a stone, or burned into leather, or a painting fifty thousand years ago; we had metadata in some way. And I've worked with metadata and RDF and so on, in databases. So I feel that Visual-Meta is your thing; metadata is everyone's thing. That's my distinction.
Frode Hegland: Yeah, that kind of makes sense. However, it's worse than that, because if you go back to Mesopotamia and look at the origins of cuneiform, what we today call the colophon — however you pronounce it — was actually from them: at the end of the document it would say, this was written by scribe so-and-so, working for so-and-so. So the very first —
Mark Anderson: And he didn’t pay me.
Frode Hegland: Yeah, exactly. There can be little comments like that — that's exactly right. And, you know, only in the 1700s and 1800s did it move to the front of the book. So all I'm trying to do is: if they can say what it is, we can say what it is. Anyway, the interactions in this space are important, and I hope to see you guys in this gamey thing after dinner. I'm probably going to go in there a little bit, just move around. It's just not my kind of world, but it's got great reviews, so maybe it's something we can do on Quest.
Adam Wern: Have you tried Hand Physics Lab?
Frode Hegland: No, but I will do that.
Adam Wern: Probably do that, because it shows what you can do with the hands, and to what fidelity. Remember to have good lighting, because Oculus has a camera-based hand-tracking thing, which means that for the camera and the computer to get good contrast to work with, you have to have good lighting. So don't sit in a dark closet when you do this.
Frode Hegland: No, no — I have noticed that's been an issue. It's actually told me the light is low, turn the lights on. OK, this was lovely, guys. Maybe see you in the VR space. Look forward to Monday.
Mark Anderson: Okay, take care. Bye.