Transcript: 14 Feb 2022

Video: https://youtu.be/xlOBxeRRZAM

Chat Log: https://futuretextlab.info/2022/02/14/14-feb-2022/

Frode Hegland: I thought Bob just joined on another email. I might just say, I just got an email request from one of the guys for the link, so let me send that once I can. Yeah, OK. Right. Good. OK, I am here. Yeah, so I'm in Norway now, as we just communicated back and forth, at my mother's, a kind of summer house. And it's definitely not summer outside. Either Fabien started meditating or he's frozen. And you're muted. Fabien is not meditating, he was frozen. Frozen just as you closed your eyes for a blink.

Fabien Benetou: Yeah, I’m not sure why I have some connection issue. Everything is wired. It shouldn’t, but internet weather?

Frode Hegland: Yeah, it's so bizarre. I'm sitting here in the countryside by a fjord in Norway, and the internet is super ridiculously fast. And then when I sit in London, in Wimbledon, sometimes there's just nothing. So yeah, it's a bit weird.

Mark Anderson: First world problems.

Frode Hegland: Yeah, indeed.

Bob Horn: Hey, Bob. Hello.

Frode Hegland: So Bob, having seen some of the messages going back and forth about your mural in VR, are you tempted to get yourself a headset to look at it for real?

Bob Horn: For virtual, real?

I'm not quite ready yet, but I'm interested. And one of the things, you know, that I'd like to do, if I could have maybe 10 or 15 minutes today, not for any kind of presentation, but to ask specific questions of some of the people in the group. I really would like to, because I think, by organizing a little bit, just partially organizing a little bit of our discourse, I could identify what some of the issues are, which may all have been discussed but haven't been summarized in any kind of easy way. And even though it's great to have the record that you make at the end of our discussion, it isn't organized, you know, it isn't organized into the near term and long term, for example. And I'd like to know what some of those are before I decide to commit my time to this, because my time is, at least to me, so valuable that I cannot spend it freely; I'm saying this about each of my projects. I've got on my agenda this afternoon the University of Melbourne national security project and so forth. But yes. Oh, and there's a really good, at least to me really important, question that I asked an ex-wife of mine, who spends a lot of time in Second Life, and she says that there...

Bob Horn: They have a bunch of libraries. I don't... it isn't her thing. But they have a lot of libraries in Second Life that are organized in some way. And she's willing to put me in touch with a librarian there. You know, a serious, I would say, first-life librarian who also works in Second Life and would be able to tell us at least about how they organize, in three dimensions, a whole bunch of different libraries. There are different universities teaching courses in this place, and there are different people, she says, who have brought their whole libraries in. And, you know, in the last conversation we had with the last group of people, we were only sort of waving our hands and saying, oh, wouldn't it be nice, and oh, you could do it in virtual reality. And hey, people have done it in three dimensions already. We ought to know about it. We ought to be able to maybe even bring that librarian in at some point, if the group is totally interested in it. I think that would make sense. That's what I've been up to over the weekend.

Frode Hegland: I think that would make for a very good monthly session, for sure. I mean, of course. Hang on, I'll just recap now that Brandel is here. Hi, Brandel. Bob was just saying a friend is suggesting someone she knows, who has worked a lot with libraries in Second Life, can maybe come and talk to us. And that would, of course, be very, very interesting. But obviously, I think 3D on a 2D screen is very, very different from what we are doing. But yes, we should absolutely use that experience. And Mark, you have your hand up politely, like me.

Mark Anderson: Yes, just a quick one. It's actually relevant, in a way, to what Bob was just saying, about putting some sort of concrete structure into things. I'm a bit late to the party, but I'm working with Frode just to actually sort of corral the journal. And I mean, Frode's done sterling work in terms of lots of recording and the transcriptions, and various people have put things together. One of the things I'm going to try and help with is just putting that into a slightly more findable form, so that our journal sort of becomes more of a real thing. You know, we've got lots of stuff; I think at the moment it's not particularly easy to find, and this isn't about me suddenly sort of forcing people to write stuff. It's more just making sure that, given that at the moment we're running out of what is effectively a blog, which doesn't actually lend itself to structure because it's supposed to be a sort of rolling series of short posts, we effectively just create some indexing and such, so that, for instance, if somebody wants to go and find something from three weeks ago, it isn't a hard thing to find.

Mark Anderson: And in the course of doing that, we'll be able to find some places to pick up the structural points that we need to discuss, which you're alluding to. So hopefully, I'm going to be traveling throughout the next week and a half, but when I get back, I should be able to put some bones onto that. And I think that will help, because once it's in place, it'll give us all something to lean against, because if nothing else, we can refer to it as well as the thing we're talking about. But also, I think it may help with the work that Rafael and Alan, who I know aren't here at the moment, are doing in the wider sort of community in terms of outreach about what we're doing, because they can again link into persistent URLs. So, sorry, it's an awful lot of small steps for all of us; it's lots of work to no immediate effect, but I think it will help a lot in terms of just putting some sort of firmness into place behind the rich discussions we're having.

Frode Hegland: Yeah. So Mark, on that point, because, you know, we've had some discussions by email on this as a community. So what has happened now, guys, is the journal is the Future of Text journal; that's where all the publishing is, and it's linked to from Future Text Lab. Future of Text Lab is now more about our transcripts and things like this, because of the reticence, which is fine, about putting content on there. I really don't want to force anyone to have to be listed anywhere. So if you go on the site now, there's a lot less there. So, you know, we'll keep having these dialogues, I'll keep adding to the journal, but nobody should feel that they have to post anything or put up any bios. If people don't want to be associated with it, they can still be part of the community here. So let's not be too...

Mark Anderson: The librarian-style pass that I'm sort of alluding to also means that, in a sense, possibly it will just be sort of getting URLs to already public things like, you know, tweets. It's just a bit of sweeping up and organization, because I'm very conscious that a lot of people are doing stuff, and it's enough work, for instance, to make a demo without doing a whole bucketload more work putting a reference to it absolutely everywhere. So to a certain extent, as long as something we've been discussing has got a URL, one of the things I want to try and do is just gently sweep those up, without forcing any structure, without editorializing it unduly. But just so that, if it turns up in general conversation, there's a much stronger chance that people can come back and find it. And it's all on the thing, so it can be edited and re-edited; it's not anything that, I hope, will make anyone feel awkward, I just think it'll be easier to find stuff.

Fabien Benetou: I mean, two quick remarks. I'm personally very interested in any librarian, but especially one who has experience with the web and especially VR, because honestly, I did ask quite a few librarians, or anybody curating research, or even museums. Most of them are interested but haven't touched it. They have the physical books, they have their own organization system, but nothing is spatialized. So personally, I'm very interested in that. I mean, everything like an extension of a mapping of the classification system to something else, multiple layers, all this; I'm really into that. One quick remark, though: if you haven't tried VR in the last couple of years, do try. It doesn't mean buying a headset, it doesn't mean spending an afternoon in it, but I think just half an hour to put the headset on and manipulate things is, in my opinion, really worth it. Just in the sense that talking about it without a first-person experience, I think, is infeasible or unrealistic. So again, I say this naively, as I don't know what you have tried recently, but I think it's really worth it, even going to, like, an arcade. It doesn't mean, again, spending much, but really, to get a sense of it, I think it's really worth it.

Frode Hegland: Yeah. You see, Bob, when we looked at the mural in VR over the weekend, it was actually usable. You know, we could actually walk along it, move back and forth. It was absolutely... I think you would have been very, very happy. And then, you know,

Bob Horn: I have seen it, I saw it. Brandel and I have been communicating, and he sent me that, and I even critiqued his presentation, which to me was so jiggly and awful.

Frode Hegland: Bob, Bob, Bob, God, please, Bob, please. That's the point. That is why Brandel is suggesting you look at it in VR, because the way you're sitting now, as a normal human, I can see your eyes are moving about, your head is moving about. If you were filming us right now and you put that on YouTube, it would be an awful presentation, because what you see with recorded VR is a human head moving about. And that is why you have to look at it through VR, because it is absolutely amazing in VR; it is rock solid and nice and sharp. So I think you would really enjoy it. If you have the opportunity, please do. Also, I bought a copy of Edge magazine today, which is kind of a computer games magazine. It's, you know, very artistic and very cool. And there they point out that the on-ramp for people buying VR headsets now is faster than the PC and faster than mobile phones. Last year, the Oculus VR headset outsold the Xbox. So you know this is the time to experiment. So that was the evangelizing from the group. And now we've got that part done.

Mark Anderson: Well, actually, interesting as we're talking about it, I just want to say a big thank you to Brandel for getting that up there, because, in a sense, this is, I think, a matrix; this is our sort of starting construct and we can start to think over it. And just an immediate reflection I had: with Frode, we sort of had a look at it together online, in the Oculus. And the interesting part of it is that the picture is actually rock solid, but current equipment is limited. The Fresnel lens sort of focuses things, so that, of course, when you're standing at the distance where it's in focus, it's not particularly in focus at the far end. And that's a technical limitation. So I don't take that as a negative, but I'm reporting it as something I hadn't expected, you know? And this to me is the interesting part of doing this, because what does that say? Does that mean that we might want to present it in a way that made it easier to sort of focus on more? Because of course, if I see that poster up on the wall, if I was standing in Bob's room at the moment looking at that wall, I can easily move my head, move my eye focus.

Mark Anderson: Whereas at the moment in the 3D space, or certainly using the Oculus, I sort of find I need to move to where the focus is. And again, it's not a fundamental critique; it's where we are. But I wonder if that offers us any ideas for making it more accessible whilst we have to live with those limitations. I don't know, for instance, if it was more wrapped around, because it's a long thing. And I'm very conscious I'm talking about something I can't do, I can't make it, but I'm just wondering how we can maybe play some of the limitations over depth of field to, in a sense, our advantage in the short term. Because if it becomes unnecessary later, then, because it's sort of code, we can move it back to a flat plane at a point where we have more control. Does that sort of strike a chord with anyone?

Bob Horn: I'm not quite sure I understand. Okay, sorry. Well, exactly what is the problem? I mean, you walk along the mural. You're annoyed that you have to walk along the mural to get to the end of it?

Mark Anderson: No, I'm not annoyed. I'm making a practical observation: when I'm standing in the Oculus VR space, yes, I can stand in front of the mural artwork, and when I'm at the correct distance that the sort of headset allows, because of the way it focuses, you know, the focal plane, it's all crisp and clear. But if I look right to the far end, in real life I would be able to read it, with my reading glasses on, at the normal distance. I can't at the moment in VR, again, because it's to do with... oh sorry, Brandel, you will correct me, no doubt.

Frode Hegland: Please. I think maybe Fabien had his hand up first. Right. Both of you can address this point.

Fabien Benetou: Yeah.

Brandel Zachernuk : I mean, we'll be able to address it as well. But what we're contending with here is the consequence of the vergence-accommodation conflict and the level of anisotropy that's applied to the texture surface at this point. So I can tell you more about what that means, but we can probably make it more readable for you standing at that sort of oblique angle, Mark.
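
The anisotropy fix Brandel points at can be sketched in a few lines of three.js. This is a minimal, hypothetical illustration, not code from the session; the texture file name and plane sizes are invented.

```js
// Raising a texture's anisotropic filtering level keeps text legible when the
// mural is viewed at a glancing angle. 'mural.png' is a made-up asset name.
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer({ antialias: true });

new THREE.TextureLoader().load('mural.png', (texture) => {
  // Default anisotropy is 1; the hardware maximum (often 16) sharpens
  // textures seen edge-on, at a modest performance cost.
  texture.anisotropy = renderer.capabilities.getMaxAnisotropy();
  texture.minFilter = THREE.LinearMipmapLinearFilter; // mipmapped minification

  const mural = new THREE.Mesh(
    new THREE.PlaneGeometry(10, 2), // long, mural-shaped quad
    new THREE.MeshBasicMaterial({ map: texture })
  );
  // scene.add(mural) once a scene exists.
});
```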

Mark Anderson: Yeah, but I just hasten to add it's not a complaint. This is absolutely part of the experience.

Bob Horn: But it's a limitation that you're talking about. And what I understand is that you want to be able to see in great detail, at a long distance, without walking over there.

Mark Anderson: Not necessarily. It's just that, as the technology is at the moment, there are some limitations, differences. It's sort of like anything; I guess it's the difference between reading an e-book and a paper book. There is a point where some things are just a bit different, especially to start with. But I want to stress that I don't want to think of them as negatives, or me being sort of upset or angry that it can't do something. I just think, OK, that was a surprise, is the way I'd put it. And some things may be a misunderstanding. And, as Frode rightly pointed out, you know, it's just a presentation. But by the same token, I don't want to be in the position of saying I won't use it if it doesn't do a certain thing, because I don't think that's where progress lies. It's a matter of getting in there, finding what's a failure of my perception and what's a limitation of the existing idea. Anyway, sorry, I'm...

Bob Horn: OK, and on the list that I'm making, I'm putting it down as maybe a limitation at present. Would that be all right? Yes. Okay. I think...

Frode Hegland: Fabien can address this point, because it's a little more subtle than that, I would say.

Fabien Benetou: But I would say, actually, be harsh, to be honest, in terms of what frustrates the usage. Because, for example, what I showed last time with the reMarkable is addressing a limitation, in the sense that reading text for a long time in VR sucks right now. It's just not good and you get a headache. But I think it's also at the point where there are better ways. So, based on the goal you have, if you go into the space to have the discussion about the poster and the meaning of it, based on how we're going to interact with it, what's the goal for the session or the work or whatever we do with it, it's perfectly fine to say: well, until the text is readable, I won't use it. It doesn't mean the whole stack has to be thrown away, because, for example, there are things you can't do in real life that you can do in VR. I imagine that you can have glasses, virtual glasses in VR, that focus on what you're looking at; you can have like a HUD, like augmented reality in VR, based on what you look at. And there are a lot of things like this, or tracking what you look at to compile it.

Fabien Benetou: Let's say, things that are just not conceivable otherwise that we can do. So I think it's very fine to be harsh. I think nobody will take it as an offence, or as a way to say, OK, now we throw the whole technology out, and that's fine. Just one last point: I think it's also really interesting to discuss how we do it. There are user-experience design recommendations. So, for example, you mentioned the curved screen; that's indeed something we see quite often. There is also how high or how low it is, based on your neck position, because if you're craning like this for 10 minutes, honestly, it's not that great; it's better for it to be right there. But it also depends on how many people use it, because if we make it curved and we're five people together, maybe it's awkward; we have to squeeze together, and, yeah, social norms and all that. So to me it's perfectly fine to criticize it, but it has to be regarding a specific usage or use case. Otherwise it's generic, you know, complaints that don't actually help to find a solution against those limits.

Mark Anderson: Well, an interesting point. And again, those with more experience can tell me, because I'm still getting used to it. It's just that basically my eyes work a different way in this technology at the moment to the way I normally use them. And so I'm thinking, well, what if you made the text much bigger but further away? But I'm not sure, because of the lens, how this works.

Frode Hegland: But Mark, I can address that. I wear varifocals that have three layers, right? Sure. This is for my watch, this is for the screen, this is for the distance. So already I've had to use my eyes in a different way. If I look at you guys like that, you're very blurry right now. So, you know, yes, it's a limitation. Yes, we all want 20/20 vision in all circumstances. But what Brandel was talking about earlier, there's two different things going on. One of them is the focal bit, because the display really isn't that big, but it's also a matter of the texture of the 3D. So even if you turn and you move, you know, basically the little dots and the texture and all these other technical things come into it. But I have to disagree with everyone on one small point, and that is: I feel that when I use Immersed, or one of the other ones, and I write on a virtual screen that is, you know, big, bigger than a normal screen, it's actually very pleasant. I wouldn't want to read a book that way, but I'm surprised how good it is if I decide it's just going to be there. You know, when Adam put The Future of Text into, what was it, Adam, that we were in?

Adam Wern: Yeah, I think...

Frode Hegland: And it was just an image, but at a certain distance it was flawless for reading, and that's why I wrote the piece, which I think I shared with everyone, on what I want in VR. And one of them is: I want to be able to, at my desk, define a reading box. Because in something like Immersed, we have the virtual screen; it's really fiddly to actually put it where you want, but once you can, you lock it. So if we could all have: this is the main reading bit, so that's like a locked magic space with gravity or whatever. Then we can go in and out of things, but we have that safety thing. It's really quite wonderful.

Brandel Zachernuk : Hmm. Yeah. So generally, the history of modern VR is the history of appropriating technologies from other places, where they had been improved, and trying to apply them to virtual reality, and, you know, getting the benefits but also suffering the consequences. So one example of that is that obviously the modern VR headset is a phone strapped to your face. The benefit of that is that people had, you know, billions of dollars of investment in making phones and phone display and micro-display technology. One of the downsides of it is the thermal dissipation properties: VR headsets get much hotter than phones typically do, and so they were much more thermally limited for a number of years, until people could figure out how to do it without strapping great big fans to your face as well. The other aspect of this is that video game technology has been a massive driver for graphics for the last 20 years, and so the concessions that are appropriate for doing textures within a video game have been the ones carried over for displaying these textures. So when you have that glancing-angle thing, that's to do with the concept of mipmapping and reducing the resolution in a certain way that is typically acceptable for being able to discriminate that this is that texture, but not necessarily for understanding that this contains that information. There are ways around it, but the typical contexts in which these things are deployed are pretty uninterested in the distinction between those things.

Brandel Zachernuk : It's taking time for people to come up with mechanisms to be able to kind of fight against that. And that's where technologies like the concept of signed distance fields for text come in, as Adam has been using with troika-text, and as Valve has talked about, and it's not unreasonable to assume that people will make use of them in the future. Yeah, lovely. Those are advancements that can and do come along as technologists, the sort of more hard-nosed technologists than myself at least, come along and recognize that these are actually pressing problems. You know, like I always say, Moore's law is not a force of nature; it's a force of economics, and a recognition that these things are valuable and important to be throwing people and money at. And one of the jobs that we have right now is, yes, to protect, but also to work within the bounds of what is working right now, in order to demonstrate that these questions are pertinent and important for the medium, in order to make sure that those technologists can then jump onto them and understand what they can do to improve it. And, you know, I don't know who watched the Super Bowl; I didn't. But what I did see was the Meta ad, the ad that Facebook had for the future of virtual reality yesterday, and it was simultaneously paper-thin as a vision and incredibly bleak. I strongly recommend everybody watch it. Maybe right now; it's only 55 seconds long. I would have cried if I wasn't swearing at the time.
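
The SDF text approach Brandel credits to Adam can be shown with troika-three-text, the library behind "troika-text". A small sketch, not code from the session; the text, sizes and positions are made up.

```js
// troika-three-text renders crisp text in three.js via signed distance
// fields instead of pre-rasterized glyph textures, so it stays sharp at
// oblique angles and varying distances.
import { Text } from 'troika-three-text';

const label = new Text();
label.text = 'Visual-meta stays readable at an angle';
label.fontSize = 0.15;            // world units (meters in VR)
label.color = 0xffffff;
label.maxWidth = 2;               // wrap width in world units
label.position.set(0, 1.6, -2);   // roughly eye height, 2m away
label.sync();                     // async layout; call again after changes
// scene.add(label) once a three.js scene exists.
```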

Frode Hegland: If you put a link in, maybe YouTube.

Brandel Zachernuk : Sure. Yeah, yeah.

Frode Hegland: Just to show you something on your earlier point, by the way, in terms of Moore's law being about money: flying from London to Norway today, sitting in the sky, at no cost, being online through satellites. So this issue that I showed earlier, the one with the piece on VR being taken up; it also has another article, it's really fascinating. You can't see it, but it's on games that have absolutely no visuals whatsoever, only sound, and they're still on a PlayStation or something powerful like that. They ask you to play with your eyes closed, and everything you do, even fighting, is based on hearing the space around you, because positional sound has become amazing. Really fascinating stuff.

Brandel Zachernuk : It's interesting that that's in this Edge magazine. I'll have to buy a copy. I was thinking about that. Which issue is it?

Frode Hegland: Three, six eight.

Brandel Zachernuk : Edition three, six, eight, thank you.

Frode Hegland: OK, let's all watch the Super Bowl ad, shall we? See who can find it.

Bob Horn: Where is it? Where are you going to show it? Are you going to show it on your desktop?

Frode Hegland: It's probably better if you watch it yourself because of the sound quality, but I'll give a link to it.

Brandel Zachernuk : I'll find it.

Bob Horn: I don’t have it.

Frode Hegland: Hang on. Yeah.

Brandel Zachernuk : There, I put it in the chat, the YouTube link. Is this the Super Bowl ad?

Frode Hegland: All right. Let’s mute and watch.

Bob Horn: You see, I have to turn my volume off.

Frode Hegland: That must be the saddest and dumbest piece of shit ever put together. How is it possible? How much money did that cost?

Brandel Zachernuk : I imagine at least as much as the spot, which would put it in the mid six to seven figures.

Bob Horn: Wow.

Brandel Zachernuk : So the reason why I brought this up and forced everybody to watch it right now is that that is the most prominent promotion from the most well-resourced company publicly pursuing virtual reality at the present time, and their attempt to get it to the widest possible audience. And that is why I am saying that the alarm for me, Frode, is not that we have six months or 12 months to work on this vision, but that we're working absolutely upstream, because the people who are technically in the best position to be able to communicate this vision simply have none whatsoever. They're like, you can maybe relive some glory days. Because Ernest Cline wrote a book that was a thinly veiled sort of lament about opioid addiction; Ready Player One is about people being irrelevant in the modern workforce because there is nothing for them to do, rather than merely being 1980s fan service. And so, from that context, I just think that if I saw that before I was in virtual reality, it would push me away. And it certainly doesn't tell me anything about any opportunities for solutions that people have within virtual reality. And it honestly makes my job harder. So I hope you all...

Frode Hegland: Fabien, what have you got to say on this?

Fabien Benetou: Well, I mean, I hate to admit it, but I'm really excited by VR, I think, as a medium. It is amazing. But I think overall escapism is very dangerous, regardless of the medium, regardless if it's TV or hard drugs or whatever, even just doomscrolling. I think escapism is really, really dangerous, and the worst part is it can be exploited. So when they put this out as, that's the best we can come up with as a value proposition, honestly, it's extremely bleak. Yeah, I'm speechless. And what's also scary to me is that Jaron Lanier, who worked in VR for a couple of decades, mentioned this kind of thing, like the Watson-Skinner box, where you poke at the person. And of course, it's easier to poke at the person and put them on the right path, for whatever it is, be it business or politics, when they are lost and confused. So putting people in VR who don't have a place in society anymore? That doesn't sound like a good idea for democracy or an open world. So, yeah, that's it.

Bob Horn: I just wanted to say thanks for reducing the Super Bowl to a one-minute clip for me. But now...

Frode Hegland: Well, that was, despite being horrible, a highlight. I don't quite understand people running around after a piece of leather and claiming victory because it lands in a certain spot; that's beyond my comprehension. Anyway, that's a different issue. So, Fabien, how was your journey looking into the reality of visual-meta, considering that what we have out there already is probably quite messy?

Fabien Benetou: So I actually have something to show. It might not make sense, but I said I would do something, so I didn't want to look like a liar. You might be disappointed, but at least it exists. And like I said, this is how I learn stuff. So let me try to share my screen. Can you see? Yes. OK, so. First, that's a bit unrelated, but it's something I did: since I moved the notes from the reMarkable into a virtual world, based on their position I can assign a tag or category, and what it does is it writes back to my wiki. That's where I put all my notes, basically. And then I can have this as a source of truth, basically, so that I can again position things and have a result. Yeah.

Frode Hegland: OK, that's really important right there. So you use a wiki to take things from the flat world into VR. Is that what you just said?

Fabien Benetou: I mean, honestly, I use it for everything, pretty much. I use it as a public file system, so that others can see whatever I'm working on, or whatever I find interesting, or some of the prototypes. So I try to keep it as a source of truth, let's say, so that if I do something somewhere, be it changing the light on top of my head or reading a book or whatnot, I put it there, and that's the active part, or the explicit part. But I also want, if I do something in VR, to be able to save it there and load it back from it. It looks like this, because I can edit it as text, normally; I can also have metadata, and then what I use for VR is this, a bit of the data itself. So I'm able to edit it pretty much any way I want and save it, like an API, basically. And to have a historical trace: here I have a data visualization of the last edits for every page modified. But that's not for visual-meta.

Fabien Benetou: But I did something for visual-meta, and that's why I bother you a bit. I took the example, so, no, I took, I think... yeah, I guess this one. Yes, I started with this, to be able to extract the metadata, and hit a bit of a hiccup, so I used this example instead. And what I do basically is I parse the last page, and then I parse the BibTeX of it to get the actual metadata. And then from it I generated, completely arbitrarily, a couple of pieces of information. So I took the level-one headings, for example, as one piece of information, and, just because I'm a little bit VR-centric, I think there was a part on, you know, maybe this one, yeah, something about the old VR. And then what I do is I pull them into Hubs, because, I think again, great platform, social and all. And then it loads the different images, and then you can either manipulate them manually or have them put in position. And for the information you have two colors to differentiate types: content in white and headers in gray. And that's...

Frode Hegland: You're very good at setting up expectations, because I think this is amazing. So much for all your nonsense talk in the beginning; this is really a very important step. So if I understand correctly: number one, you use the wiki to take your PDF into your environment, and then you parse the last page. And at this stage, you extract the headings, which can be visualized in different ways, right? Yeah. I think that's fantastic. Just from the aspect of what I'm looking at, I think this is a really, really important part of the journey. So, Vint Cerf and I are presenting, tonight... well, on Wednesday night, Thursday morning; it's 1:30 in the morning, Norway time. So it's nice to be able to say, and obviously I'm not going to give any details, but I can, because we are presenting only visual-meta, that's our entire slot, I can say that we have started doing experiments taking visual-meta into VR. And we'll also look at putting it back out, because what you showed on one of the pages there was the data for how it's laid out in that space, and that could quite literally just be written back as a new appendix, right?

Fabien Benetou: Yes. Yeah. Yes. But I didn't have time. No, no, but to be able to preserve it, to bring some permanence, the easiest for me is to save back to my wiki. But then, yes, you can generate back the BibTeX. Maybe you replace the last page, or append to it.
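
The pipeline Fabien describes, last page in, BibTeX-style block out, headings extracted, can be sketched roughly as below. This is an assumption-laden illustration, not his actual code; the @{visual-meta-start} / @{visual-meta-end} delimiters follow the published visual-meta convention, but the heading entry and field names here are guesses.

```js
// Take the extracted text of a PDF's last page, pull out the visual-meta
// block, and collect level-one headings to place in the scene.
function extractHeadings(lastPageText) {
  const block = lastPageText.match(
    /@\{visual-meta-start\}([\s\S]*?)@\{visual-meta-end\}/
  );
  if (!block) return [];

  // Visual-meta is BibTeX-like; scan for entries of the hypothetical form
  // @heading{ name = {...}, level = {1} }. A real parser would handle
  // nested braces properly; this regex is only a sketch.
  const headings = [];
  const entryRe = /@heading\{([\s\S]*?)\}\s*(?=@|$)/g;
  let m;
  while ((m = entryRe.exec(block[1])) !== null) {
    const name = m[1].match(/name\s*=\s*\{([^}]*)\}/);
    if (name) headings.push(name[1]);
  }
  return headings;
}
```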

Frode Hegland: Yeah, I think it's phenomenal, and a box of whiskey or chocolates will be sent when I finally get your physical address. So now for the fun of next steps; of course, we're working on many different things: murals, documents and so on. Adam and Brandel, if you were to work with this thing, let's call it a thing, that Fabien has made, how would you want to access it, to take it from his world into your world? Is that a reasonable question to ask the three of you to discuss? I think it would be really worthwhile to know how that might happen.

Brandel Zachernuk : Uh, one thing that I will probably have to do is investigate it; I haven't used it to implement any sort of additional features. Is that A-Frame driven?

Fabien Benetou: Again, yes, it's in Hubs, and Hubs uses A-Frame, so it is. OK, cool. Quick, sorry to interrupt, Brandel. I just want to say: any time Brandel and, I believe, Adam show something with text, they are legit, in the sense that they are manipulating text, while I'm cheating. I'm generating images, images that have text in them, but they are not manipulable; for example, mouse selection, I can't do this with my solution. So there are a bunch of limitations. It's to show, let's say, manipulating one piece of information. But just to say that this is not something I even know how to do.
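
The "image as text proxy" trick Fabien owns up to here can be illustrated in A-Frame, which Hubs is built on. A toy sketch, not his code; the image file, position and sizes are invented.

```js
// Each heading is pre-rendered to an image and added as an <a-image>
// primitive, which Hubs knows how to network, even though the text itself
// isn't selectable. Assumes A-Frame is loaded and an <a-scene> exists.
const panel = document.createElement('a-image');
panel.setAttribute('src', 'heading-1.png');   // pre-rendered heading image
panel.setAttribute('position', '0 1.6 -2');   // eye height, 2m in front
panel.setAttribute('width', '1.2');
panel.setAttribute('height', '0.3');
document.querySelector('a-scene').appendChild(panel);
```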

Frode Hegland: But that is exactly what is so important. I'm glad you mentioned that, because I'm not talking about Brandel starting to use A-Frame and Hubs. I'm talking about Brandel taking a PDF that has normal visual-meta, plus yours, plus the spatial stuff, going into his own world, where he may have actual text, or it's just rendered as an image. The point of visual-meta is that you can write this stuff at the end. And if you have certain affordances, Fabien, and Brandel has others, and Adam has others, that should be OK, because at this point we're not trying to be Meta or Microsoft and own the whole world. We're trying to make it possible to move the knowledge between environments. So it was really nice to hear you guys talking like that.
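
What "write this stuff at the end" might look like for spatial data can be sketched as below. This is hypothetical: the @{vr-layout-...} block name and its fields are invented for illustration and are not part of the published visual-meta spec.

```js
// Serialize where items were placed in VR as one more visual-meta-style
// appendix, so another tool can rebuild the layout from the document alone.
function vrLayoutAppendix(placements) {
  const entries = placements
    .map((p) => `@placement{ id = {${p.id}}, position = {${p.x} ${p.y} ${p.z}} }`)
    .join('\n');
  return `@{vr-layout-start}\n${entries}\n@{vr-layout-end}`;
}

// e.g. vrLayoutAppendix([{ id: 'heading-1', x: 0, y: 1.6, z: -2 }])
// could be appended as a new last page when exporting the PDF.
```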

Brandel Zachernuk : Yeah. The other thing that I would do, and I realized this as I was very gently maligning A-Frame at work the other day: you know, Fernando Serrano, he works with us at Apple, and he was one of the major contributors to A-Frame while he was at Mozilla. And I was just saying that I found it, in some places, to be a little bit misaligned with where I am, with what I tend to need to be solving. He was pretty gracious about it, but I do have access to Fernando to be able to talk about what might need to be done within the context of that A-Frame stuff. So I might make use of that in order to play more fluidly with it, as well as leverage the co-presence framework that's already there. The other thing that I would do is make use of what Adam put in for his augmentation of my mural presentation of Bob's work, where he was parsing the PDF. Because that means we now have the ability to open a PDF and pull down all of the text information in it, in order to be able to see that stuff and then work out what to do after that. I hadn't opened a PDF before in a page, and now I have found a library for that, to be able to process aspects of it. So we can do regular-expression searches to find the things that look like visual-meta and drop our interpretation of them in properly.
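
Brandel doesn't name the library, but the step he describes looks like what Mozilla's pdf.js does. A sketch under that assumption; the function names and the visual-meta regex are illustrative only.

```js
// Pull the text of a PDF's last page in the browser and regex-search it
// for a visual-meta block.
import * as pdfjsLib from 'pdfjs-dist';

// In a real page you would also point pdf.js at its worker script:
// pdfjsLib.GlobalWorkerOptions.workerSrc = '/pdf.worker.js';

async function lastPageText(url) {
  const pdf = await pdfjsLib.getDocument(url).promise;
  const page = await pdf.getPage(pdf.numPages);   // visual-meta sits at the end
  const content = await page.getTextContent();
  return content.items.map((item) => item.str).join(' ');
}

async function findVisualMeta(url) {
  const text = await lastPageText(url);
  const m = text.match(/@\{visual-meta-start\}([\s\S]*?)@\{visual-meta-end\}/);
  return m ? m[1] : null;                         // null if no block present
}
```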

Frode Hegland: That's wonderful. I saw your hand, Mark, but sorry, just one more thing. Bob, your mural, which previously was kind of the antithesis of what I'm doing, because it's a fixed thing, I think now is a very powerful force multiplier for this, because some things should be all jumble, and some things shouldn't necessarily be. And I could imagine having, let's say, the next issue of The Future of Text book have your mural at the back. You know, if you have it on paper, it'll be folded and it'll be very awkward; but if you then choose to view it in VR, put it up at wall size. And if you want to, have all the articles underneath it, so that they can be read too. In other words, having only the interactivity of yourself walking up to it, like you physically do, is phenomenal, and what Adam and Brandel have worked on is phenomenal. So to have that as a combination... And, you know, the thing that worries me technically about this VR, I'm not going to use the word, is that we need different tools for different tasks. So how do we go between them, right? Literally different rooms is one of the things we've talked about. So yeah, this was very exciting. And Bob, thank you for being a part in helping highlight that.

Bob Horn: I will be very happy to cooperate with anybody who can help do that. And I have, you know, even more interesting murals than the one I've got on the wall, including some that we could fly through a little bit as a demo, for example. Excellent.

Frode Hegland: Yes. Yeah. Mark, yeah, sorry.

Mark Anderson: Let me see... the reason I put my hand up was something from before, and then I'll come back to a reflection on what you just said. The first thing was, when I looked at Fabien's demo and stuff, I thought, oh, it's Ted's cut-and-paste example. But actually it's more. I mean, that may seem a trite observation, but actually, I've seen a number of tools over and over again trying to basically make easy something that, back in the typewriter or handwritten age, people literally wrote down, cut up into strips of paper, and moved around. And the fact that one may be doing it on a screen doesn't lessen it. It's the affordances of being able to do it, because, you know, OK, the text isn't selectable, but actually that's less important. You have a token, which here you've associated with something. So even if you're only working, say, at the level of a heading or something, I still find that tremendously powerful, because it allows us to expand our brain space. You know, some people are great at imagining big memory palaces; others, you know, focus in a different way. And it's not a matter of being judgmental about that. The point is, what this to me offers many people is a bigger exploration space.

Mark Anderson: And that brings me around to a reflection on what you just said about different tools and things, because I don't get so excited about necessarily walking around somewhere. One of the interesting things to me is more that, actually, we're bringing it to me. And the observation also, I think it was the Robert Scoble thing, basically saying, well, actually, probably the high-attention work is going to be happening sitting down. And I thought there's a lot of truth in that; whether it was a throwaway observation or a really deeply considered one, I can't say, but I think that's quite true. So what that says to me is that, nice as it is to have the option of moving within a virtual space, it may be that actually sometimes what you really want is the space moving rather than you, which is something that's actually not very easy for most of us to do in real life. You know, it's ringing a bell and getting someone to bring something. Whereas we can do that now; we can say: here I am, I don't want this stuff now, I want that stuff. And so you can sort of have this focus shift. So that, I think, is also very pertinent in our exploration.

Frode Hegland: Alan, so you just joined; Fabien has his hand up. I just want to say we talked about a lot of interesting things; please look at the video later. But Fabien has taken our journal into VR space. He has there found the headings, through visual-meta, and allowed them to be manipulable as separate objects. And we're talking about how that's very different from what Adam and Brandel do in terms of how they deal with text. So the exciting thing is, of course, how knowledge pieces can be moved around in different systems. And also, Bob Horn's mural really highlights to me that I shouldn't be too much of a snob: something that's big in itself is a huge improvement over what we have available on a computer. So, how we can move in different spaces. Hi, Alan. Fabien, please, please.

Fabien Benetou: Yeah. So I wanted to, I don't want to, how to put this, diminish, let's say, the image aspect. Number one, the image is a trick, in the sense that it's one of the primitives, let's say, of Mozilla Hubs, usable in order for other people to join the room and see it moving. As far as I know, first, you can't put text, and if you put text, it's not networked, so nobody else is going to see it, in the default settings. I'm not saying it shouldn't be done; I'm saying that, first, I have no idea how to do it. And to come back to what Mark just said: it is already powerful in itself that you have an item and that you can move it around and put categories on it and group it and all that. And we discussed briefly last time about zoomable interfaces; to me, it's going to be also that you have an image at first, and then, as you go to actually manipulate it, you switch to the text proper. So the image is a trick, let's say, and it sounds a bit vulgar, surely, in a group about the future of text, but it does provide, I think, some of the, quote, not necessarily core affordances: you have a reference to it already. And then that can bring interesting discussion, to actually go deeper.

Frode Hegland: I see hands up, but I have to ask, regarding that point, Adam: you put text on top of the Bob Horn mural. Could that be something that a user could put on there manually, like a post-it, within the way you're using it now?

Adam Wern: Yep, absolutely. And it would be quite trivial to implement as well. So the main question is where that information lives in the long term, I think. Because I mostly agree with you, Frode, in terms of capabilities; I just feel a bit reluctant to put things back into the back of PDFs, because it's a very destructive process, going back and forth and manually manipulating old PDFs with new software. Lots of things can happen, and it's not as accessible. So that's the main thing. But putting things on the screen in a fixed location, and persisting that location, is quite trivial. And that's the next step for us, in a way, I think: putting in extra commentary and bringing in other documents and other texts, and maybe Bob's comments on the poster, on the mural, and so on.
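
The persistence Adam calls "quite trivial" could look something like this. A hypothetical sketch, not his implementation; the storage key and record shape are made up.

```js
// Store each pinned comment with its fixed location on the mural, here
// simply in the browser's localStorage.
const KEY = 'mural-annotations';

function pinNote(text, x, y, z) {
  const notes = JSON.parse(localStorage.getItem(KEY) || '[]');
  notes.push({ text, x, y, z, added: Date.now() });
  localStorage.setItem(KEY, JSON.stringify(notes));
}

function loadNotes() {
  return JSON.parse(localStorage.getItem(KEY) || '[]');
}

// pinNote('Bob: this cluster needs a timeline layer', 1.2, 0.4, -3);
```

Where that data should live, in the document, in a wiki, or in browser storage, is exactly the open question Frode raises next.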

Fabien Benetou: Yeah, I mean,

Frode Hegland: So sorry, Peter and Bob, but just a little bit with Adam here. So, PDF: I think there are benefits and, of course, issues. I do agree with you. The benefit is that almost all academic knowledge is in PDF, so if we're going to address that crowd, we have to deal with it somehow. You know, Mark, this weekend we went through looking for other formats, and they just aren't used. And the other thing is, when you say it's destructive, I actually feel it isn't, because this is a completely new page; when you make that, you know, you save it as a new thing. So if someone wants to delete all the visual-meta, that should be trivial. But that is me wearing, of course, rose-tinted glasses. I do accept that any change to the document can bring issues with it, absolutely. And I'm not saying that it all has to be visual-meta, absolutely not. I think visual-meta is the lowest common denominator; hopefully, we will mature rapidly past it. But a key issue now is: let's say I go into the room and I spend some time with Bob's mural. And then, because we have been working on it, I have a copy of our journal in there, in that room, and I take out strands and I want to pin something. Now we have two different data sources. You know, where does that get stored, and who owns it? That was the first piece that I wrote for the journal, because it's kind of scary that we have to think about that, but also good, because it means we're not living in, you know, Microsoft Word and Photoshop space. It's a different dynamic. So, from the technical perspective, and I think we should let Peter and Bob speak, but if you guys can think about that, I would absolutely love to hear your thoughts on it. Yeah, Peter, Peter.

Peter Wasilko: OK. On a note of nostalgia inspired by that Super Bowl ad, I put a link in the sidebar to the EPCOT '82 project, and that was using Unreal Engine, I think, if I remember properly, to create a full, highly detailed 3D model of Epcot Center as it appeared on opening day. They have two versions of that. They have a full desktop VR version that theoretically should be viewable with an Oculus headset as a fully immersive environment. And they also used QuickTime VR to create a whole series of panoramas to simulate walking through; that was the second link that I put in the sidebar. So with that one, you get a little Epcot logo at each alternate camera viewpoint, and you can jump between them. And I actually see a huge use case for recreating lost architecture and environments in VR that I think is pretty strong; not the silly diner thing, but actual places that once existed that got horribly mangled with new color schemes destroying their original aesthetic. And, of course, reconstructions of archaeological sites. I know there were some people modeling some of the major cathedrals, like Notre Dame, in VR, and maybe we could get someone from one of those projects to come in and talk to the group. Moving on to an unrelated subject: has anyone taken a look at Sinespace VR? I'm putting a link to that in the sidebar. Sinespace is an interesting company; they're sort of billed as being a serious business alternative to Second Life. They have somewhat superior graphics but inferior avatars, so the avatars look more like mannequins; they're not as well developed as Second Life avatars. However, the world itself has a much better graphics engine; it supports vegetation that looks far more realistic and even has mirrors that can actually reflect the scene that you're in.

Peter Wasilko: They also have amazing support for third-party VR hardware, including an OpenVR system that will supposedly work with the Oculus headset. I know that it works with my 3D mouse, so I have full six-degree-of-freedom control of moving my viewpoint around within Sinespace. It's a much smaller company, UK-based, and I think that they would be much more open to working with a group like ours than Meta would, since they're trying to position themselves as a serious alternative, where the VR environment would be used for doing things other than gaming with apps from an app store. So that might be something worth considering. If you were interested, I might be able to try to reach out inside of Sinespace and see if someone from there would be interested in talking with us here. As far as getting things in and out: I was doing some searching of developer sites for Oculus, and I found one discussion that seemed to be expressing considerable difficulty in getting data in and out. It basically said they had been looking for good external support for a couple of years and that the Oculus people just didn't seem to be interested in that kind of a use case. So I'm afraid that Meta might be viewing their VR as a roach motel, where they want to get as much content in as possible and then make sure that that content can never migrate to a competing virtual world. With that consideration in place, the suggestion in those discussions was that anything that needed data transfer should be done on a desktop projection into the virtual world, so that you'd be using your PC screen to interact with data on a virtual screen rendered inside of the metaverse.

Peter Wasilko: But the actual data transfer would be happening on the projection of your original flat desktop screen, so that you'd be using the full internet-enabled desktop affordances of a regular web app for the data transfer. Another possibility would be to project QR codes into the virtual space; assuming that you are able to mirror the virtual space on your desktop, you could then use a phone or other device to zap the QR code that was being displayed from inside of the virtual world, and then use that, talking over the network, to try to pull the data. But the data would have to be living out somewhere on the wider internet, and not inside of the metaverse, because apparently the metaverse at the moment is a roach motel for data. Whether that's been alleviated in the latest build or not, I don't know; the developer post was a few months old, so it might have been fixed in the newest development kit, or it may not have been. I don't know. I think the Sinespace angle might be worth investigating, since it works with the headsets you guys are already using, and it's, again, a small company that has a serious business orientation to it. So working with their developer people might be considerably more efficacious than trying to interact with Meta, which doesn't really seem to know what it's doing at the moment. And so those are my current thoughts.
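
Peter's QR-bridge idea could be prototyped with an off-the-shelf QR library; he names none, so the npm "qrcode" package below is an assumption, and the URL is a placeholder.

```js
// Encode a URL pointing at data that lives on the open internet, render it
// as a texture in-world, and let a phone scan it out of the mirrored view.
// The data itself never enters the "roach motel"; only its address is
// carried through the headset as pixels.
import QRCode from 'qrcode';

async function qrTextureDataUrl(exportUrl) {
  return QRCode.toDataURL(exportUrl, { margin: 2, width: 512 });
}

// qrTextureDataUrl('https://example.org/session-export.json')
//   .then((dataUrl) => { /* apply dataUrl as an image/texture on a quad */ });
```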

Frode Hegland: Thank you, Peter.

Brandel Zachernuk : I'll definitely check that out. I mean, for various reasons, sorry, real quick: I'm reluctant to use anything but the web, because I feel like it's very difficult for people to have an implementation of the web that they don't keep open. So whatever ill intentions any organization may have, if they're actually making a claim to something being on the web, then what we lose in performance, potentially, but not always, we gain back in terms of interoperability. Like, if they implement the Clipboard API, which they really must, then we have access to the clipboard, and there's a limited amount by which they can ruin it for us. So I'll definitely check out what they're doing with Sinespace. But the reason why I use the web is not just because I know JavaScript better than other languages, but also because the concept of the web is much more amenable to the intrinsic openness that I want and need in order to be able to continue this experiment.
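
The escape hatch Brandel is describing is the standard Clipboard API, which any conforming browser, including one running inside a headset, has to expose. A minimal sketch; the strings are placeholders.

```js
// Round-trip text through the browser clipboard. Both calls require a
// secure context (https), and readText() prompts the user for permission.
async function copyOut(text) {
  await navigator.clipboard.writeText(text);
}

async function pasteIn() {
  return navigator.clipboard.readText();
}
```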

Frode Hegland: Yeah, thank you. That was... yeah, Bob.

Bob Horn: Well, I'm following up on something which Mark said 15 minutes ago, and it is that what I've found in actually working with teams, often in Zoom recently, but also in person, and also with a variety of other things on the internet, is that we need to structure the information that we have, so that the mural that you see on the wall is structured in a timeline fashion, with layers. And that enables people to organize their thoughts. This is not the only way to organize thoughts visually; there are 15 to 30 different structures that have helped the groups that I have helped. I will give you one more right now, because it's connected with nuclear waste disposal, and that is argumentation mapping, a different kind of structure that organizes huge numbers of pros and cons so that people can study them and come to decisions. And unless you're doing that with 100 or 200 different pieces of evidence, pieces of inference, and eight or ten sub-organizations, you can't think about this stuff. So we're really trying to improve thinking with teams of people in this situation. And this is where we are right now: I see that virtual reality could help us with this kind of thinking, and also, just enabling the implementation of these kinds of structures on our day-to-day 2D screens would also be of great help. So I offer to help, you know, anybody who wants to work on that. But I think that if you're really interested in improving the thought of all of humanity and science, we need to work off those structures for now. The structures are not perfect, they're not complete, there are many gaps and problems, but they're better than anything else we have for teams working on very messy, wicked, big, complex problems. So that's my message.

Brandel Zachernuk : I thought that was a great speech. Yes.

Frode Hegland: Yeah, just thank you very much. Lots to think about and to act on. Mark, Mark, Mark.

Mark Anderson: Yep, right. It's possible... I was thinking about VM, and I put a note in the side chat. It's not what we intended with VM, but, you know, it's worth bearing in mind that if, in some context, it's really, really problematic to have it in the document, there's nothing to stop us having it separately. The point being, a useful thing to consider with VM in the round is that, as long as it's got a link back to the document, it's not what we intended, it's not what we wanted, but nonetheless it's still there. Where it's become increasingly important to me is that I've spent a lot of time in the last month or so trying to find anything useful in PDFs, apart from the fact that they exist and we have to live with them. It's the case, for instance, that it is apparent that LaTeX cannot produce tagged PDF, in other words, one that has structural data, the very structure of the sort that Brandel was talking about needing. And if we're saying we'll use something like a web environment inside a virtual environment to handle our text, that's a major showstopper. It doesn't invalidate what we're doing with VM, but it also means that there's actually added salience in having the VM, because it basically can provide structure that PDF can't and probably never will, because I don't see who's going to spend the money on it. So I can see, basically, PDF is a thorny legacy problem we're going to have for a long while. But, I mean, I made a comment, I think rather badly, at the Tinderbox meetup yesterday, which was that, you know, I don't want to write PDF anymore.

Mark Anderson: I'm wasting my time if I'm not writing in something that's going into structured text; I'm wasting my time because I'm investing in dead technology. It's not that I don't like typography, which is where people tend to get hung up about things like PDF. But solving a 1980s problem about reproducible print is not what we need now. The next thing I need to move onto: we talk about taking things in and out. One of the things rattling around in my brain is a warning to myself not to get too skeuomorphic in approach. It's quite natural for us to say, right, to move this in and out of here, I'm going to do it like this, and we pick a metaphor that we know. And I'm sort of steeling myself, and hopefully others, to say: because we're talking about all these new forms that we don't have, let's not cut ourselves off from perhaps some really innovative new things simply because they don't fit our perception of how we transition between environments. And the last thing is just a very quick reference back to what Bob quite rightly said about having this temporal knowledge of things. I just make the note that that's an interesting problem in itself, because what time are we talking about? Is the time we're referring to the time of creation of the asset itself? These are all valid, I think. One thing we can say is, if you ask five people, you'll probably get ten answers as to what the time is. So there's work to be done there as well, but I won't try and dive down that rabbit hole now. Thank you.

Frode Hegland: Thanks, Mark. I see you have your hand up, Alan. You've hardly said anything today; I don't think you've said anything except typed it, but I'll give the mic to you. Just go ahead, because I have a specific thing to come back to afterwards. Please, go ahead.

Alan Laidlaw: All right. I'll try and be quick, and I'm sort of out of the loop, catching back up, because I've been at a workshop. But I'm going to tell a quick story, and the purpose of it is: is anybody working on what I'm about to suggest, or has thoughts around it? So I'm sitting at my desk right now. I've got my computer monitor and a secondary monitor. I've got this other computer here. And what I would like is, well, I'm not going to be leaving this situation for a while, right? My work, for the foreseeable future, is going to be on a laptop and external screen and whatnot. But if I had glasses, it would be nice to be able to work on these screens and then, either with a keyboard shortcut or something, move some text over into a virtual place over here. Maybe sticky notes, you know, maybe some highlights from a PDF that is on the screen, but I'm moving it over into an area to the right. And then, of course, if I have a meeting for work, I can just sort of dispatch them and replace it with what's better fitting for that context. That kind of interplay between the desktops, the situations that we have now, the flat screens, plus an AR layer that is a baby step into VR, plus even, you know, I've got the Stream Deck, which is a nice little version of IoT, where you have just a series of buttons that you can program if you want to perform functions on your computer without having to use the keyboard. Is there anybody working on the problem from that angle? And is that worthwhile, or shortsighted?

Frode Hegland: It's a key thing of what we've been talking about, and I'm just going to put my comments in first. The issue is that if you use something like Immersed, or whatever it might be, and you have your actual screen in VR, it's actually pretty good quality right now; I think it's shockingly good. And yes, you can have many virtual monitors, just millions of them. That's not a problem, and it's really cool. But to take a thing on that monitor out and into VR space is not a solved problem, unless somebody is going to shout at me that it is, which would be great, right? So that is one of the things we're talking about. And I'm being a bit pushy on that because of the presentation tonight. So this week there have been a few calls about HTML and Visual-Meta, and then everything gets really vague, for completely understandable reasons. With a PDF with Visual-Meta, you take it into VR, and logically it could add another Visual-Meta block with the VR information. Try to do that with HTML: you cannot, unless it's hosted in a wiki. You can't do it on a normally published page, because you can't write to it, right?

Brandel Zachernuk: Well, I don't think so,

Frode Hegland: Ok, fine. I see a lot of funny faces.

Brandel Zachernuk: Yeah. So I mean, the thing is that the format question is largely independent. One thing that HTML actually has, that PDF doesn't, is a kind of self-visible runtime, so that if you open a web page and it has JavaScript in it, then you have the ability to make that JavaScript save changes to that web page in and of itself. It's a self-contained program that runs within the browser, which you could use to change the contents of that thing. Most people don't do it that way, because most of the time web pages are served over the internet as these static, comparatively immutable things, much like PDF. But it is a capability that could be leveraged. It's just, as I was saying about the other things, outside the realm of imagination of what people think HTML is and is for at this point, because there's normally a relatively well-appointed server mechanism that people are making use of. The other reason for that is that it's typically the case that once you make a change to a file, you want to make sure that other people get the changes to that file.
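
To make that concrete, here is a minimal sketch of the kind of self-contained, self-saving page Brandel describes, assuming only standard browser APIs. Since a static page cannot rewrite its own server-side file, "saving" here means re-serializing the live DOM and offering it back as a download:

```ts
// Sketch: a web page whose own script can persist the page's current state.
// No server involved; "save" re-serializes the live DOM, edits included.

function saveSelf(filename: string = "page-snapshot.html"): void {
  const html = "<!DOCTYPE html>\n" + document.documentElement.outerHTML;
  const blob = new Blob([html], { type: "text/html" });
  const link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = filename;
  link.click();                       // triggers a download of the edited page
  URL.revokeObjectURL(link.href);
}

// Example: make every paragraph editable, then let the user snapshot the result.
document.querySelectorAll("p").forEach(p => {
  (p as HTMLElement).contentEditable = "true";
});
document.getElementById("save-button")?.addEventListener("click", () => saveSelf());
```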

Brandel Zachernuk: And what I'm proposing doesn't have that ability. But I would say that there are options, there are approaches. And I would also say that my hostility isn't to pursuing options with PDF. Like you said, there is an inordinate preference for PDF within places like Elsevier and the ACM and things like that, and to that end it's something that we're stuck with. And I also believe politics is the art of the possible: making sure that you can actually come up with a reasonable transition from the status quo and push it into the future is essential. So don't mistake my lack of enthusiasm for PDF as an unwillingness to participate within the ecosystem. But yeah, I think we have a number of options and opportunities that can work through PDF, but we can also work past it with our sort of vision for how to leverage a concept like Visual-Meta, which still retains its utility beyond the point where we have to work within the primary constraints of PDF.

Frode Hegland: Yeah, that's fantastic. Ok. It's nice to be smacked down like that; that was very useful. I do have the question, though. I would like to invite Tim Berners-Lee into our community, but that means HTML has to be part of it, and it can't be very loose. So in terms of trying to narrow it down a little bit, can we do one of the following: in the header of an HTML document, dump Visual-Meta as-is? And can we also, because Adam is very much about keeping the metadata around the data, which is totally fine, can we maybe do it CSS-style, so that the data and the page know what metadata is being used around them? I'm just trying to irritate you guys into saying something solid in return; if you disagree with me completely, that would be absolutely fine. And I'm just typing a note for the next bit, so don't stop thinking.
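
As one hypothetical shape for what Frode asks here: a metadata block parked in the HTML head. The script type and the field names below are illustrative assumptions, not a settled Visual-Meta serialization:

```ts
// Hypothetical: park a metadata block in an HTML <head> and read it back.
// Browsers ignore unknown script types, so this block never executes as code.
const meta = document.createElement("script");
meta.type = "application/visual-meta+json";          // assumed, non-standard type
meta.textContent = JSON.stringify({
  title: "Example Paper",
  author: "A. N. Author",
  date: "2022-02-14",
});
document.head.appendChild(meta);

// Any consumer, a VR client included, can recover it without rendering the page:
const block = document.querySelector('script[type="application/visual-meta+json"]');
const parsed = block?.textContent ? JSON.parse(block.textContent) : null;
console.log(parsed?.title); // "Example Paper"
```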

Mark Anderson: Sorry, Fabien, do you want to go first? I mean, I've got a comment on what Frode said, but fire away.

Fabien Benetou: Sure.

Frode Hegland: Oh sorry, I didn’t see your hand. Yeah. Please go ahead.

Fabien Benetou: Thank you. Just first, I apologize, I have some connection issues, I don't know why. So sometimes I don't see everything, I only hear voices, and I can't read lips like this, so it's a bit tricky. I'll be brief; I have two points. The first one is: the web is everything to me. I work for the web and on the web, and I live via the web. But it's not a religious belief; it just works, and everything we do, even though most of it is through apps on our phones, is still the web in the back end. And usually even the view itself is the web. So to me, the answer obviously is the web, and that's what I know. So I'm pretty pragmatic on that end. To go back on Alan's point about moving things around: this, I hope, is a bit like what I tried to show last time, which is each device being there, and each having different properties and different advantages. And if we go, let's say, through a process, if we go through Bob's poster, not everything has to be done,

Fabien Benetou: let's say, in front of it, or in different mediums, for each medium has its properties and advantages. Unfortunately I don't have a visual of this, but I stuck a little IoT device under my chair, and then, based on the rotation of the chair, it would do selection of a device. For example, I would have the reMarkable on the left of my screen, and then the desktop there, or my phone on the right. So that's one way, but that's just the interface to do this. In the end, you still have to have some way to communicate between the devices, using the web, for example. So I think in terms of interfaces it's pretty straightforward, actually. As long as you have a little source of truth that holds the different positions of what you have in front of you, you can manipulate that conveniently. I don't use it anymore, it was a proof of concept, but yeah, it's definitely feasible.

Brandel Zachernuk: Yeah, that's awesome, and I've actually gone the other way with that kind of thing in the past. I don't know if I've demonstrated the thing that I do with this projector, but when I have it attached to this phone, in a 3D-printed harness I put together, it can show, kind of like a flashlight, things on a wall where they should be, and then show different things as you turn it to different places. So that's fun. One of the things that I've got a friend to help me with is to write a REST interface to a microcontroller, to be able to drive, via the web, a stepper motor to do those angles as well. So then you can actually hit a website to change an angle. So in the same way that you have a web interface and data returned about the orientation of something, which you can also get from phones very easily, by the way, you can also drive those things via the web. And I think that, in general, that's been and continues to be my preference for how to glue things together: even if you're making use of capabilities that aren't natively possible on the web, what I tend to do is write a minimal interface to get it into the web and then interoperate between all of those things. So yeah, I agree. I think there's just a wealth of things that you can prototype and explore, making use of the web as an interoperability layer.
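
A sketch of the glue layer Brandel describes: a tiny HTTP endpoint in front of a stepper motor, so any web client can set the projector's angle. The endpoint path and setStepperAngle() are assumptions; the latter stands in for whatever serial or GPIO call the microcontroller actually exposes:

```ts
// Sketch: a minimal REST layer over a stepper motor, reachable from any client.
import http from "node:http";

function setStepperAngle(degrees: number): void {
  console.log(`(stub) would step motor to ${degrees} degrees`); // replace with real driver
}

http.createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  if (url.pathname === "/angle") {
    const degrees = Number(url.searchParams.get("deg"));
    if (Number.isFinite(degrees)) {
      setStepperAngle(degrees);
      res.end(`angle set to ${degrees}`);
      return;
    }
  }
  res.statusCode = 404;
  res.end("unknown request");
}).listen(8080);

// From any client, a headset browser included:
//   fetch("http://projector.local:8080/angle?deg=90")
```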

Frode Hegland: That's fantastic. I'm going to narrow things down a little bit, and I don't mind putting you on the spot. How do you want Visual-Meta in HTML? Let's say all we're talking about is title, author name and date. Let's just start with the simplest. Where do you want it?

Adam Wern: I would put it in multiple places. I would put a whole section with the metadata in one place, so you can get it all in one go, and then I would decorate the document, decorate the different HTML tags, with as much information as I can: an author on a paragraph or on a heading, for example. So whenever you copy that paragraph, if you're lucky and the client doesn't strip all the HTML, you will get the author name or author ID, or whatever you attach to it, together with that clipboard content. So you don't need a special clipboard for every page; rather, if the document has that data, it continues to travel with the data. I think that is the approach I would take. Of course, there are many details to it, all the RDF and microformats and so on, and where to actually put that data; that is a discussion. You could put it in several places. But I think the problem with microformats is that they really weave the presentation layer together with the data layer, which is supposed to be the beauty of it, but it's also the problem with it. I would rather have a separate projection as a view, and then attach the data in the background, hidden.
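
A sketch of this "decorate the tags" idea: per-paragraph data attributes, plus a copy handler so the decoration travels with the clipboard. The attribute names are assumptions, not an agreed vocabulary:

```ts
// At authoring/publish time, stamp each paragraph with provenance:
document.querySelectorAll("p").forEach(p => {
  p.setAttribute("data-author", "A. N. Author");   // or an author ID / URI
  p.setAttribute("data-source", location.href);
});

// At copy time, keep the decorated HTML on the clipboard alongside plain text:
document.addEventListener("copy", (e: ClipboardEvent) => {
  const sel = document.getSelection();
  if (!sel || sel.isCollapsed) return;
  const div = document.createElement("div");
  div.appendChild(sel.getRangeAt(0).cloneContents());
  // Note: a selection inside one paragraph clones only its text nodes, so
  // whole-element selections are what reliably carry the data-* attributes.
  e.clipboardData?.setData("text/html", div.innerHTML);
  e.clipboardData?.setData("text/plain", sel.toString());
  e.preventDefault(); // we supplied both flavors ourselves
});
```

Clients that strip HTML fall back to the plain-text flavor, which is exactly the failure mode Adam mentions.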

Frode Hegland: So if you love me, Adam, can you mark that up? Um, even if it’s almost nothing, because then we have something people can agree or disagree with.

Adam Wern: But I think we should have a discussion about it. We have lots of web people here that have been deep into this, very deep into this, and I have thousands of hours into what I'm describing now. But most people have been burned by that kind of metadata, and it has never taken off. We have seen so many versions of it, so we have to learn from the past mistakes. That's why I'm a bit harsher on Visual-Meta than the rest of the people here, because there have been a lot of metadata initiatives like that, and all of them have kind of fizzled out, basically. And HTML has lots of capabilities for metadata, the header tags and lots more, so it's quite complicated, and it's mostly a social problem, getting people to agree on things. So it's not so much technically hard as socially hard.

Frode Hegland: Oh, I completely agree with you. But the thing is, the reason we chose BibTeX: BibTeX has many issues, but it's just the academic default. That's what that world is, and it is super simple; you can just say this is this. So that's why. But also, with the kind of people we have available to us in this dialogue, if we do something and we have the preamble that you just said, we could literally take it from the transcript and put it in. They could say, oh, this part's great, this part is awful. But no one has to use it. And the thing is, you know, Edge magazine basically kind of invented Visual-Meta for me, that article with the crisis guy. And the way Vint Cerf keeps talking about Visual-Meta, because he's pushing it more than me, is really quite wonderful. He just says: if things know what they are and can communicate it, you can have a really interactive environment. So, you know, we have the right kind of people to approach this. And Tim Berners-Lee, I'm not going to pretend he knows me; he's probably never heard of me. I have had one very long discussion with him, and I was shocked by his genius. It's much more than the web; I really was blown away by that guy. But we work with the same people, you know, he knows Vint Cerf, so he is one of the big brains and names who could come in and say, yeah, metadata, here's one way of doing it, and throw a little bit of weight behind it. And then there is the other issue, which is really, really important, and this is what Alan brought up, and what I've written some pieces on in our journal, of course: how are we going to do it? You take your laptop and your VR headset. On your laptop there's a thing, let's call it a PDF for now. How can you take it out and put it on the wall? What's the mechanism, guys?

Brandel Zachernuk: There's something called the drag and drop API; in a technical sense, the way that you do that is by interpreting gestures and making sure that they're preserved within the context of the browser system. But don't worry, we'll do it.

Alan Laidlaw: No, but is that even the goal, though?

Frode Hegland: Hang on a second, let me really clarify, because it's such an important point. I'm now in, let's say, Immersed. It's a third-party company; they allow me to have my actual laptop screen shared in VR space, just to clarify what we're talking about, right? And I'm reading, in Reader, a PDF with Visual-Meta, and I now want to have the glossary over here. I want to have the table of contents here, spread out horizontally, like Fabien was doing earlier. What is the manner in which that data gets across? Because the VR environment just thinks the screen is a flat projection; it has no knowledge of what's in there. How can I do that? What's the technical way to have that moved across? And it's definitely one of the goals, Alan, because of the metadata that is in there. Let's say we want to take the glossary, with, you know, hundreds of different definitions, and put it in a constellation, right? The thing is, Adam and Brandel, who live in these things, are so far beyond people like me that I don't even know how to put something in those spaces, and I represent a normal user. So I'm doing a thing in this rectangle, and I want it everywhere.

Alan Laidlaw: Ok, so let me present a different scenario, just for the sake of understanding. Let's say I'm reading a PDF here, right? But I have the Apple glasses on, and as I'm reading the PDF, I see the glossary floating in space over here. And as I'm going through points and highlighting or whatnot, it's activating or highlighting over here, so I can see that the two are connected. If that was the case, if I'm extracting meaning in that way, do I actually need to take the PDF and put it into space?

Frode Hegland: Alan makes sense.

Mark Anderson: It’s an important point.

Frode Hegland: Your glasses, as far as I understood you now, because I was also typing back to Fabien: what is on your device and what is in space have absolutely no knowledge of each other.

Mark Anderson: Is that true?

Alan Laidlaw: That is trivial in the scheme of things; that's a trivial hurdle to overcome in the big picture. It's inevitable that they will connect, that there will be a universal,

Frode Hegland: So Wednesday night, Wednesday evening time for you guys, middle of the night for me, I'm doing a pre-made presentation for the National Information Standards Organization in America, and that's going to be key. And I want to be able to tell them that one of the things we're working on is: you take a newly done academic document, you're reading it on your laptop, your reMarkable, whatever it might be. You don your headset, and then you think, this document is too dense, there's too much stuff. Then you magically pull it out, you know, by headings. You have your glossary, you have your references, there are lines, all this stuff, not necessarily every single word. You pull that out and you have it in a space, and you can do amazing things with it. And one of them may be a mural, like Bob Horn's example: you pull that up. But right now, I cannot grasp anything there, as far as I know. So I really want to know from the technical guys in the room: am I wrong? Can we drag it? And if not, what do we suggest as a mechanism to take all that stuff that's here and give it to the room, so to speak?

Mark Anderson: Well, I think, if I may, that you're overemphasizing the physical aspect of the transition.

Frode Hegland: Mark, Mark,

Mark Anderson: Let me finish. Let me finish, please.

Bob Horn: Oh no.

Mark Anderson: You're overemphasizing the transitional change, because the physical thing in front of you is very real, you can touch it, and the other is, at the moment, almost an imagined thing. But I think Alan made the point quite correctly that it doesn't necessarily matter: it's all ones and zeros under the hood. So there isn't necessarily the hard division you're seeing, and we're making a bigger thing of that hurdle. There is an issue over what gesture, what literal action, we take to do that. But the transition between the two, ok, I'll leave it there.

Frode Hegland: So why don’t we just let the guys who build this address the point, whether it’s important or not? All right,

Mark Anderson: I'll shut up. I've also got another point.

Bob Horn: Please, please, please let the guys talk.

Brandel Zachernuk : I don’t mean to say stuff,

Frode Hegland: Mr. Technical. Yes, sure.

Brandel Zachernuk: So I mean, I'm by no means the only one, but: right now, absolutely at this instant, there's nothing you can do unless you build your own reader that has the ability to interpret the user inputs that are necessary to do this. That said, if we were to make some web page or other application where we have full access to hand gestures and whatever else is available, then it would be a matter of just interpreting the user gestures, a pluck, or a dwell, or a head dwell, whichever of those are relevant, and then identifying what the relevant things are. So I do believe that it's technically trivial once the metadata has been identified. But yeah, right now there's nothing that PDF or an Acrobat reader of any kind would permit you to do there. But I'm not concerned by that, and that's why I'm more interested in what the intended use and function is, in order to determine what would be the appropriate plumbing to facilitate that user gesture and action. I hope that answers the question reasonably.
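
As a sketch of that "interpret the user gesture" layer, under stated assumptions: inside a WebXR session, a squeeze near the page acts as a pluck that re-materializes an element in the room. The squeeze events are standard WebXR; pickElementUnderRay() and placeInRoom() are hypothetical hooks an application would have to supply:

```ts
let grabbed: Element | null = null;

function pickElementUnderRay(inputSource: unknown): Element | null {
  return null; // stub: ray-cast from the controller into the rendered page
}

function placeInRoom(el: Element): void {
  console.log("(stub) would lay out", el.tagName, "as a free-floating object");
}

async function startSession(): Promise<void> {
  // "immersive-vr" is the standard WebXR session type for headsets
  const session = await (navigator as any).xr.requestSession("immersive-vr");
  session.addEventListener("squeezestart", (event: any) => {
    grabbed = pickElementUnderRay(event.inputSource); // begin the pluck
  });
  session.addEventListener("squeezeend", () => {
    if (grabbed) placeInRoom(grabbed);                // release into the room
    grabbed = null;
  });
}
```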

Frode Hegland: I'm just writing a reply here, and it absolutely doesn't, and it does, all at once. You know, we do have an application called Reader that I own, meaning the community owns it. It is open for us to experiment on; it's a Mac application. And if you guys have a mechanism whereby you want the user to be able to do that, I'll try to make that happen. But of course, at some point you won't have your screen in this environment, it'll be a fully native VR component, and then this problem is gone; I completely understand that. But before we get to that point? You know, having spent some time in VR, there have been many shocking VR moments for me, and the mural was one of them, because it's completely un-interactive as an object, but the movement of walking around it, like it is on the wall, was hugely useful, such a surprise. So even just being able to take a normal PDF, forget Visual-Meta, just a normal PDF, and say: I'm watching it on here, I want to take it in there. Should we maybe have some kind of a browser plug-in or something that says, upload it to this person's wiki page? Or what is the actual way? Because until we can do this, we're dealing with only hypothetical data, right? Fabien has a hand up as well.

Bob Horn: I'll just answer on the mural part of it: you can structure each one of the elements on a mural by grouping them in Illustrator. And so there could be some kind of overlay in Illustrator, or some other visual program, but I prefer Illustrator, that enables you to grab any chunk: any of the two hundred or more textual chunks, which are separate and can be, you know, grabbed, or any of the visual chunks, or even the grouped textual chunks. I'll stop there.

Fabien Benetou: So I'll make two points. I'll go back, then, on the mural and the software used to edit it. In the end, it's the same principle as what I did for the reMarkable, or to extract the PDF with Visual-Meta: the challenge is that it's always custom. It's always for every single file format, for every single representation. We don't know what's important, we don't know how to edit it, unless we can understand that format and, for example, find the groups in the Illustrator format that make it, let's say, grabbable. For now, it's custom every time. Now, to step back on the question about the usefulness of having a virtual screen in VR and grabbing something from it: honestly, I think it's interesting, I think it's feasible, feasible today, but I don't think anybody is doing it. I think at most what you have is something like a video, basically. And from the video, well, I forget the name, but there was a researcher from the Media Lab whose core idea was basically that you take photos, just like you do in real life with your phone: you want to remember a moment, you take your phone, snap a photo, even if it's of an invoice, a friend, whatever.

Fabien Benetou: And then that makes a memory anchor you can come back to. And he was suggesting doing the same in VR, so that in VR, you see your screen, you take a photo, and then you can manipulate that photo of it. That's why I was asking in the chat: what do you want to preserve? Is a photo of it enough? That's what you can probably do today. In most cases, probably not enough, but it's a step. And otherwise, from the screen stream, you could probably have the pointer, let's say your controller becomes like a virtual mouse, that's it. And from it, if you see something like a browser tab that has a URL, then you can pull the content from that URL back in there, and then have a much better representation. And if it doesn't, then you're a bit stuck, because maybe the person doesn't want to share what's on the desktop beyond what's publicly accessible. But it can be conceptualized, it can be implemented. As far as I can tell, nobody has done it.
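
A sketch of that "pull the content from the URL" idea: rather than grabbing pixels off the shared screen, fetch the page behind the visible tab and lay its real content out in the scene. How the URL is spotted in the stream (OCR, or an agent on the desktop side) is left out here; assume we already have it:

```ts
// Sketch: given a URL seen on the virtual screen, pull the real document in.
async function pullIntoScene(url: string): Promise<void> {
  const res = await fetch(url);            // needs a CORS-friendly host
  const doc = new DOMParser().parseFromString(await res.text(), "text/html");
  console.log("title to lay out natively:", doc.title);
  // From here, walk doc.body and build proper 3D text panels,
  // instead of showing a blurry video of the same page.
}

pullIntoScene("https://example.com/article");
```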

Frode Hegland: It's phenomenally interesting, and it's so nice to hear that approach, Fabien, because again, I'm shocked by the usefulness of the damn thing. I'd say, and I would love this, Bob, if we could build a mechanism whereby anybody could take any PDF, let's forget anything other than the actual thing, and just put on some glasses and view it in VR, it would be amazing. That same thing, right?

Brandel Zachernuk: I want to jump in and actually dispute the non-interactivity. One of the things that's really important about paper, and one of the things that Mark Weiser was really onto, and if you're not familiar with Mark Weiser's discussion of ubiquitous computing, pads, tabs and, what was the other one, I can't remember, then it's absolutely essential for understanding what virtual reality has the ability to offer. That's why, as I've mentioned in the past, I'm not super interested in VR, or HMD VR, as much as I am in immersive computing, which encompasses all of these other concepts. But the point of the mural, as I'm sure Bob will attest, is that you can change your relationship to it, and that itself is interactivity that we don't typically afford ourselves when we're looking at things on a digital display, because displays have a specific relationship and orientation that they work at. You know, viewing-angle problems have abated somewhat with in-plane switching on LCDs, but there's still a viewing angle at which a lower-quality screen completely breaks down. So take it out of the realm of the merely imaginable: we can change our orientation in the way that somebody who's doing a drawing will actually move the document around in order to draw. And when we're reading something, and I don't know if people have done reasonable studies of it, I would be fascinated: when you read a paper, I suspect the viewing distance is proportionate to the level of difficulty you're having with it. Certainly I furrow my brows and pull a book in as I'm struggling with a concept. We don't have that ability when we're working with things in typical pixel space, but we start to unlock just a smidge of it once we get those things into virtual reality. And that's the proposition. It's also essential to remember that what's happening with the mural is interactivity per se: it's my acting and interacting with it as a space. So that's all.

Frode Hegland: I think there's huge agreement with that, and it was really lovely to hear that passionate speech, both for your interactivity as a body with the document and also for further interactivity. Absolutely, completely; I know I speak for everybody here. The only thing that I'm being very pushy on is: before you can have interactivity with the thing, you must have the thing. And that's what I'm saying. Just having it there, in the beginning, as a very, very, very first step, is so absolutely amazing.

Bob Horn: Well, you know, if anybody will help, we can do what you ask for, for volume three of The Future of Text, that is, have an appendix which would be a demo of not just one mural; I have several murals that were done together and which can be interlinked. And we can pull out some of the chunks of text that I mentioned and show them, and we can fly through them, zoom through them. It seems all possible from what I've been seeing, so we can

Brandel Zachernuk: Definitely keen to pursue that. I'd love to chat more about what we might do with the data that you've collated and have already sort of ready to be interlinked in that way. It's definitely something.

Bob Horn: We’ll set it up.

Frode Hegland: Mark, you've been very patient for a very long time.

Fabien Benetou: Well.

Mark Anderson: Well, I mean, it relates to what I was going to say earlier, in terms of a comment I put in the sidebar anyway. It's just, I think, one thing not to overlook: if you want to do stuff with the W3C, which is effectively Tim Berners-Lee, don't just think of HTML. Remember the Semantic Web, and I know it came to nothing, really, but Solid is now around, which I think is basically Semantic Web 2.0. So the most obvious question that will arise is: how does Visual-Meta fit with that? And as I sit here, I don't know, because the thought has only arisen since you mentioned it. But I think you need to know the answer to that before you go into the room. Because his work is, well, "protective" is perhaps an unfair way of putting it, but he's conscious, he's made an effort to look after the legacy of the W3C, which basically keeps the web on the straight and narrow so that the various protocols and things interact. I don't think they're in any way antithetical to one another. But it's just something to be aware of, because otherwise you get blindsided by it.

Frode Hegland: Absolutely. I'm not going to fire off any emails to him any time soon, but I think it's nice to have a persona in our minds as to who we want to present this to at some point. And of course, his understanding is much wider. Peter.

Peter Wasilko: Ok, I think that we should consider having our own custom protocol; I mentioned that in the sidebar. And then borrow Jef Raskin's idea of humane names as a way to get things moved around. When I'm moving between radically different systems, I don't want to have to remember full IP addresses; I don't want to be bound to the domain name service, even. But I could have a short little phrase that I could remember, that would represent a given document, like "giggly purple grouse". The idea is that three human-memorable words, put together into a grammatically correct phrase, are extremely easy to memorize. You could put that in your Oculus VR application, and then remember that exact same name and type it into a web browser. You don't even need a cut-and-paste clipboard mechanism: your brain can be the clipboard, if the identifier is simple enough and memorable enough. Then, assuming that both applications have access to the internet, you'd be able to go to some canonical resolving service to convert "giggly purple grouse" into a real web address that would contain the actual file in the file format. I think we should really just make a conscious effort to advance something as an alternative to PDF. Like I suggest in the sidebar, let's call it Augmented Document Format instead of Portable Document Format, and then make that format have several alternate representations. One representation can be highly optimized for a machine, and the other representation should be more verbose, and be designed so that an ordinary human could generate it and understand it by looking at it, so it's not going to be as compact. So instead of having, you know, "path m" some number, some number, where those single letters represent a command, you could instead have "path, start at point" and then give the actual coordinates, "move to point", give the actual coordinates. It would be a verbose form. You wouldn't be authoring it by hand necessarily; your tool could automatically convert SVG into that verbose format, but it would be something that would be intelligible to a person and easy to parse, and we can produce the grammars to provide that bidirectional translation between the compact representation of the data and a verbose, human-readable representation of the data. If you wanted to write it in the human-readable form, you could. The real advantage of the markup idea I was advancing at our last meeting is that if it doesn't look like any of the markup annotations, and actually just looks like plain freeform text, it will pass through any systems that are filtering, looking for Markdown or looking for JSON, unmolested, because none of those things will be able to parse it. Because, again, it's just freeform text, so systems that aren't designed to actually parse it will ignore it. That allows us to send the data through. You can email "giggly purple grouse" to someone; you can send an instant text message, "giggly purple grouse", and that simple little set of three words, out of all the permutations of three words from the dictionary, gives you a huge namespace for potential documents that isn't going to overlap. Worst case, you could qualify that with a second identifier as your personal identifier, so that we could tell that we're actually in Frode's namespace, and his "giggly purple grouse" means something different from somebody else's.
Peter Wasilko: And you wouldn't have to enter the qualifying identifier yourself, because your local system would know what your prefix is. So that would be the idea: using local nicknames as opposed to a globally unique one. You could add an extra identifier to represent you, given the vast space to play with, and the actual files could be something that would be really, really easy to produce. And the advantage there is that, instead of making someone who wants to support our format go out and find tools to parse PDF and generate PDF, they could work with our layer, and then we could provide one single canonical transformation into PDF space. Now, what functionality would we want for our format? The main thing that PDF has going for it is that it's used by typesetting systems in order to produce high-fidelity line breaks. So if we bring in the Knuth-Plass line-breaking algorithm and a couple of the core ideas of LaTeX, we can jettison TeX's horrible macro language. There's a huge LaTeX ecosystem that consists of macros layered on top of the TeX language, and the TeX language had a crappy model for how to do function calls and macros, because it was written before the development of all the technology that lets us do sane, rational macros. So forget all that: the core of the functionality provided by the underlying TeX engine is the ability to assign weights and demerits to different typographical configurations, so it can try a whole bunch and then do an overall weight minimization to pick which one is the best looking.
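
To make the humane-names scheme concrete, a small hedged sketch: three memorable words resolve, via some canonical service, to a real address. The resolver URL, the slug format, and the namespace prefixing are all assumptions for illustration:

```ts
async function resolveHumaneName(name: string, namespace?: string): Promise<string> {
  // "giggly purple grouse" (+ optional "frode") -> "frode/giggly-purple-grouse"
  const slug = (namespace ? namespace + "/" : "") +
    name.trim().toLowerCase().split(/\s+/).join("-");
  const res = await fetch(`https://resolver.example/lookup/${slug}`); // hypothetical service
  if (!res.ok) throw new Error(`nothing registered as "${name}"`);
  return (await res.json()).url; // canonical address of the actual document
}

// The phrase itself is the clipboard: type it in the headset, type it on the
// desktop, and land on the same document.
resolveHumaneName("giggly purple grouse", "frode").then(url => console.log(url));
```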

Frode Hegland: It's interesting, but going quite far down a rabbit hole. I have a question, though: it seems to me that this means there's got to be something on a server, right?

Peter Wasilko: You know, you could do a completely distributed approach with this,

Frode Hegland: But it would still

Brandel Zachernuk: Be. What it entails is a protocol. It means that you need to be responsible for more of the agreements between clients and systems, and potentially oblige them to supply additional stuff. I'm kind of with Bob: I'm not super interested in taking up that part of it, in that it obliges one to be responsible for more than the most interesting piece. As I've sort of mentioned in the past, I'm in a weird position in this group, because I appear to be a technologist; what I am, in technologist circles, is a conceptual designer working on the idea of things. And what in practice happens is that I speak tech well enough that, once technologists understand and identify a problem, they can say, oh, ok, it's this, and they can squirrel away and come up with incredibly deft solutions for that specific thing. What I think is incumbent on a group of people like this is to enunciate the problems in such a way as to make sure that they understand the characteristics, the operational characteristics, that are required for a solution. And as such, I think that while that work is also valuable, it has to be backed up and premised by a clear enunciation of, essentially, the user function: what it is that a person does, the way that we conceive of those documents in the first place. So I'm less concerned with that sort of technical plumbing, even at the individual user level, but certainly at a global level. In general, what I've found is that piggybacking through existing formats, like HTML, even like PDF, especially SVG, works. I made a really great pinball game using SVG once, where I used the RGB channels to represent the restitution, coefficient of friction, and density of all of the elements. It was really fun to say to the artist: look, we don't need to write an editor, we have the editor, Adobe Illustrator. Just make that thing magenta and it's going to bounce, right? So that's the kind of stuff that I think we should be looking for, until such time as people realize, well, actually, we don't need to call bouncing "magenta" anymore, we can make our own tool. That happens later on down the path, once that sort of requirement or value is clearly enough stated through the use of the tools; then that skeuomorphism can fall away, and we can take the conceptual space to the lower energy level, where we're not wrapping it in those other things. But at this point in the evolution of these concepts, I think we're better served by piggybacking inside and through things like PDF, like HTML, like SVG. I hope that makes sense.
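
A sketch of that SVG-pinball trick: read each shape's fill color and treat the channels as physics parameters, so Illustrator's color picker becomes the level editor. The channel-to-parameter mapping follows Brandel's description; the code itself is an assumption:

```ts
interface BodyParams { restitution: number; friction: number; density: number }

function paramsFromFill(el: SVGGraphicsElement): BodyParams {
  const fill = getComputedStyle(el).fill;               // e.g. "rgb(255, 0, 255)"
  const [r, g, b] = (fill.match(/\d+/g) ?? ["0", "0", "0"]).map(Number);
  return {
    restitution: r / 255,  // magenta (red and blue high) bounces hard
    friction: g / 255,
    density: b / 255,
  };
}

document.querySelectorAll<SVGGraphicsElement>("svg *").forEach(shape => {
  // hand the shape's geometry plus these params to the physics engine here
  console.log(shape.tagName, paramsFromFill(shape));
});
```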

Frode Hegland: It makes a lot of sense, and this is why I think we need to make smart knowledge objects, so the object itself contains enough information to be useful. The web and the internet are both great, but you shouldn't have to have connections for everything all the time. So I think we're on similar pages. But Peter, you have this habit of saying stuff that I don't understand, and it sounds really stupid, and then a few months later it's like, oh, and then I think it's my idea, and then I remember it was your idea. It happened with the hashing of the PDF documents. So this is one of those: I just don't get it, and I'm wary of the web effect, but I look forward to when I get it in the future. So I hope you understand that was a real compliment, even though I'm trying to

Bob Horn: Some sort of a little demo and that’ll make it all better. Ok. Yeah, no.

Brandel Zachernuk: I think, in principle, it is a valuable component, particularly if it turns out that there are aspects of this that we can't have access to through the web, or that we want a layer that has different latency profiles, for example. I think one of the least stupid things about blockchain is the concept of IPFS. But I haven't seen it proven out yet. And I think that it's definitely a very useful way to conceive of the approach, at such time as people have more of an awareness of the value proposition this kind of thing brings. But we have other work to do in front of that, in order to be able to bring it to everybody.

Bob Horn: So I'd like to share my screen with one other thing today, if that's possible.

Frode Hegland: Yeah, but not too much because we’re running out of time and we still have a very practical issue, but please go ahead before that.

Brandel Zachernuk : Definitely.

Bob Horn: Let's see, has that happened? Can you see it? You do, yes. All right. That's at the University of Illinois. This is what you can do in 2D right now. In fact, this was done six years ago at the University of Illinois, Chicago. And the students, whom you can see in the foreground, could put up Post-its. I could shift between 15 or 20 different murals that could pop up on the screen, just with a pointer, and move them around. I could zoom in and zoom out and focus. So all that's possible in 2D now. And so, you know, it's possible to include all that in a demo if we want to do that. Thank you for the time.

Frode Hegland: That was really nice to see, because that's very much how I felt when I saw your mural in VR this week. Right, so, as we're kind of getting close to closing, I need to know very specifically from you guys: I have a PDF, mural, text, whatever, doesn't matter, on my computer. I'm putting on my Oculus. I want to view it like what Adam and Brandel have been working on. What is the easiest solution I can have as a normal Mac user, a consumer, that we could potentially produce as, let's say, a free tool or whatever? How can it be done, so that we can call someone on the phone and say: you want to use the two together? Just click, click, click. What's the easiest we can do?

Brandel Zachernuk : Adam a website,

Adam Wern: A website with a drag-and-drop interface: you drop your PDF file on it. I loaded a PDF file directly into the small demo I showed some of you, so you can have a drag-and-drop website: you drag your PDF from your desktop to that site and you get it in VR. If we want to persist some data from it, we could potentially write it back into the PDF, but I don't suggest that way; I would rather have a small sidecar file that holds your kind of 3D positions and things. But that is quite doable, less than a day's work, or a day's work. So, yeah.
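
A sketch of that drag-and-drop site, using pdf.js (the pdfjs-dist package) to rasterize page one of a dropped PDF. addPageToScene() is a hypothetical hook into the WebXR/three.js side, and Adam's sidecar would then just be a small JSON of saved 3D positions alongside:

```ts
import * as pdfjsLib from "pdfjs-dist";

function addPageToScene(canvas: HTMLCanvasElement): void {
  console.log("(stub) would texture a quad in the scene:", canvas.width, canvas.height);
}

document.body.addEventListener("dragover", e => e.preventDefault());
document.body.addEventListener("drop", async e => {
  e.preventDefault();
  const file = e.dataTransfer?.files[0];
  if (!file || file.type !== "application/pdf") return;
  // parse the dropped file and render page one to a canvas
  const pdf = await pdfjsLib.getDocument({ data: await file.arrayBuffer() }).promise;
  const page = await pdf.getPage(1);
  const viewport = page.getViewport({ scale: 2 });
  const canvas = document.createElement("canvas");
  canvas.width = viewport.width;
  canvas.height = viewport.height;
  await page.render({ canvasContext: canvas.getContext("2d")!, viewport }).promise;
  addPageToScene(canvas);
});
```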

Frode Hegland: So does that mean that me as a developer, if I wanted to, I could add, in Reader, my PDF reader, the keyboard shortcut V, and it posts to a web page; I put on my Oculus, and if I am in that space that I've already told my browser about, it's there?

Brandel Zachernuk: Yeah, you'd need some more plumbing. So, for example, I shared that page on Glitch earlier, the naked-pipes one, and what it presented was the view of everybody's cameras and heads and hands to each other; I can link that again. That does require a server, but the server is apportioned to the action. What it does is give you a real-time conduit between all of the different clients that are interfacing with it, and what you do with that is up to you. That's essentially what all of these things, like VRChat and Second Life, are doing. Having the ability to run that means that if you have one client that is self-identified as a desktop computer, and you connect to it with another client that is self-identified as your headset, then you can interpret those commands differently, so that you can say: I want to pull this thing from the desktop client, and signal to the VR client that I want to get this thing. So then you can press a V and you can make that happen.
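
A sketch of that real-time conduit: a relay where clients self-identify as "desktop" or "headset", so a keypress on one can hand a document reference to the other. This uses the ws package for Node; the message shapes are assumptions:

```ts
import { WebSocketServer, WebSocket } from "ws";

const clients = new Map<string, WebSocket>(); // role -> socket

new WebSocketServer({ port: 8081 }).on("connection", socket => {
  socket.on("message", raw => {
    const msg = JSON.parse(raw.toString());
    if (msg.type === "hello") {
      // e.g. { type: "hello", role: "desktop" } or role: "headset"
      clients.set(msg.role, socket);
    } else if (msg.type === "send-to") {
      // desktop presses V: forward the document reference to the headset
      clients.get(msg.target)?.send(JSON.stringify(msg.payload));
    }
  });
});
```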

Frode Hegland: Ok, so even more concrete then: can we, on our Future Text website, add a page and a thing where one of us can just drag in a PDF, put on our Oculus, and, if we're on that page in VR, we'd see it there?

Fabien Benetou: Can I show it? You do have to pay for it, though. Yeah, yeah.

Frode Hegland: Yeah, please. But Brandel, what do you mean, I have to pay for it?

Brandel Zachernuk: You'd need to have a server that has the capability of responding to those things. You can't just be serving dumb files, because you need people to be able to interpret those things properly.

Frode Hegland: Right? Ok, let’s see if we can show. Thanks, Brandel.

Fabien Benetou: This is what I showed you earlier. That's a bit how it is in the back end: let's say the list of images, and the PDFs that are used to generate those that you know pretty much by now. And what I can do is, I can drag and drop it, and then I have my PDF in there. I can skip to the next page, and then I can also take a screenshot of that page and I can move it around. That's straight out of Mozilla Hubs, which is already open source. What's interesting also is, if I give you the link to that page now in the chat, you can join it. And here, I have that link. So you would see me moving around, and I'll just put it down. It's a cheap server, and as Brandel said, you still have to pay for it, so it's not free, so it might lag if you join me there. Also, if you join me there, do please put your microphone on mute, otherwise it's going to be total cacophony. But otherwise, PDFs can be dragged and dropped just like I did, and you can then pin them in space so that they are there when you come back tomorrow, whenever you want. And that's viewable in the headset as well. The other window there is because, if you ask a friend or colleague, hey, open that PDF, unfortunately what I often end up doing is doing it for them. And once it's a social space, and it doesn't mean it has to be social, but if you want to help someone or show a certain way, that definitely helps.

Frode Hegland: Well, that was wonderful. So then I have an additional question: would it be useful to have an infrastructure whereby, on our FTL website, each one of the technical people has a space where they can choose to put code, so that, for instance, Adam does a thing with a mural, or with a PDF, or whatever it might be, but we always know that it's on there? And here's the kicker: we can choose to drag our own thing in there, so that we can compare what Fabien and Adam and Brandel are doing, and so can anybody visiting, because we're doing different things with these documents.

Adam Wern: What do you mean by comparing? Is it opening different tabs, basically, with different projects, and looking at them at the same time? Or is it something else?

Frode Hegland: What I mean is: from what I remember over the weekend, Brandel started with the mural, and it would be amazing if we could have an interaction where I could drag any PDF in there and it would be centered in that spot. And then you added the numbers coming out. So then I go to the Adam page, or Adam tab, or whatever, on our site, and drag in that or something else. Because that would be an incredibly easily accessible way for all of us to look at everyone's work: all you need to do is send a tweet or whatever saying that you've updated it, all in the same place, and we do it with our own data. Or is that a big infrastructure that would actually impede your way of doing prototyping?

Brandel Zachernuk: So I can answer this, and I was just thinking about Bob's description of the structured abstract that the NIH, right, has mandated. That's essentially a file format, in the same way that we have protocols we've established for what constitutes a JPEG: they've invented a file format, but it's a file format powered by people and bureaucracy. I like to say, you know, that software is just bureaucracy sped up, and my wife retorted recently that bureaucracy is just software slowed down. And so, to that end, whatever processes, whatever formalisms you put in place, you can do that right now. To my mind, in terms of the work that I do, it would be burdensome to work within those confines unless the confines were relevant to the domain of my exploration. So Hubs is a really great place to be able to do stuff, because of the stuff that it carries along. I haven't used it before because, like I said, it provides opinions about things that I don't believe I need opinions on right now. But depending on how important co-presence is, it may be worthwhile to pull it in, so we can do that. It would carry a financial cost, not a big one, but, like, dollars a month, in order to run a server that is capable of doing that interop.

Frode Hegland: We have a server that that's all on. We can use it for whatever we want; it's a pretty powerful server. And Fabien says no, no, no, no. Ok, what's the no, no, no?

Fabien Benetou: No, no. Unfortunately, it's pretty specific. I wish Hubs could be installed on any server, but you can do it on AWS or DigitalOcean only. It's a bit annoying.

Brandel Zachernuk: Yeah. And you have to have the whole keys to the castle. I have a website on Neocities, but it specifically stipulates that it only does hosting of static files; you don't have the ability to program the back end to respond in certain ways. So that's why.

Frode Hegland: The server we have is a complete machine. It’s not like a WordPress thing. Everything on there we can do whatever we want.

Brandel Zachernuk: Okay. Well, in that case, it may be possible to make use of it, but I'm not sure what other sort of requirements and constraints apply to what people do with it. But yeah, you'd need the ability to run some kind of socket server, so that it is able to do the real-time media connections to all of the different devices.

Frode Hegland: Probably. Fabien?

Fabien Benetou: Yeah. That in itself is not super problematic to set up, and I've done it a couple of times; honestly, it takes me an hour. As Brandel said, it's a couple of dollars a month, so not expensive. What's tricky, and I think it goes back a little bit to last time, the discussion about collaborating, is, at least for me: any time I share a demo or video or a link here, I hesitate to give it to you. Not because it's hidden or private or secret; it's because some of you are going to try it now, and some of you might try it tomorrow, and I might break everything in the next five minutes, or in the next hour, whenever, I don't know. And I don't want to have to care, honestly. I think that's why, at least, I'm excited by prototyping: because I can move fast and break things, because I don't have, like, a whole social network behind me. And it's a daily struggle, specific to this: I want an environment where I can do whatever the heck I want and break everything, but I also want to capture it, or help others to tinker with it. I certainly don't have an answer. But just, you know, if it's not done right, it's going to hinder the pace at which it's possible to explore.

Adam Wern: It's very similar for me, and that's why I even post screenshots at times, just to show a snapshot of a moment in time when the prototype looks like that. It may not even look like that in ten minutes, because I do other things and move on. Keeping preserved copies for people to view along the way, and also having instructions for how to behave in that space, because there are no handrails anywhere and everything can break, that will take half a day's work, maybe, or many hours at least, to explain what you can do. So it's not out of secrecy or being mean that I show only screenshots, or very small things. I usually send code to other coders, because they know the feeling, and they know that everything is a mirage and scaffolding that doesn't hold up to inspection, and that the code looks like it does just because you have pasted it together from ten different earlier projects and so on. So things may look more finished than they are, and I want to share things much later, weeks later in the process. That's my sorry,

Brandel Zachernuk : The sad answer.

Frode Hegland: Well, all of this is really, really important, because we don't want to slow down innovation, but we don't want to lose what we learn along the way. So we've now been fighting, in the best possible way, around the blog and journal and all of that. And you guys don't seem to have a problem with me scraping Twitter to build journal articles. So I think, you know, Mark and I, our job is to record, and to help you test and write about things; your job is to build. So if you do the screenshot, there's nothing wrong with it. I obviously would like some working code as well, to be able to put in the journal. But also, I really think we need, not necessarily right now, but we need, a manner whereby something we have built, people can use their own data on. I think.

Brandel Zachernuk: Yeah. So that's what I did with the Gutenberg thing. That doesn't require a server, but it does require sort of instructions in terms of the drag and drop. And we can definitely make things like that; it's just a question of how that scales, of what level of friction you expect people to put up with in order to make it happen. You can obviate a back end if you are willing to say: download the file and drag it onto the screen. So yeah, definitely there's a whole continuum of options in terms of how much you're willing to put a user through versus how much you're willing to pave the way for them. But the more that you pave, the more opinionated it becomes, the more fixed it becomes in terms of the steps that you need to follow and what it's functionally there to do for people. And for a well-resolved concept, you know, I have no objection to putting something up, but it sort of crystallizes it in that particular form, for that particular question. And so, without a doubt, it would be relevant to do that for the symposium. But until we have that done, it's not a place where we would continue to iterate. That makes sense?

Frode Hegland: That's fine. So what I will say on Wednesday night, Thursday morning, is that when it comes to VR, we are messing about all over the place in our workshop, but we do feel we have a need to get two pieces together. One is a phenomenally deep augmentation, where you can do things we can't even dream about now, in order to learn, create, communicate. But also, it needs to be as easy to get into, not to use, to get into, as putting on your regular glasses. Right? So if you're in flatland on a screen, and going in through some mechanism, we will make it so that you literally click a button and it's all there. Lots more discussion, lots more experimentation. Because if VR becomes its own room, its own space, its own environment... I mean, this is one of the reasons I hate the metaverse, partly because it's a commercially owned name, but also because it isn't separate. I mean, Brandel, you have, you know, punched a hole in my skull to highlight the fact that we are living in AR and VR now; it's just that we tend to access it through flatland screens. So if I think of it in words like that, it makes sense. And that means I can talk about our intentions; I can talk about our ideas for data to be encapsulated, so that we use the network only when we have to, and all that good stuff. And then use the beautiful example of the mural: starting with something that in itself is not interactive at all, but is in an interactive environment. And then we can add layers to the mural, just like you were saying, Bob. And then we can take documents where the system understands the semantic structure of the document, work with that in the same space, et cetera. I feel much better after today's talk in terms of understanding the realities of our world. I'm glad. Ok, any final words from you guys? You should all wish me luck, because it's going to be stressful, but do you have any other words than that?

Bob Horn: Oh yes, a whole lot of luck on that, because I think it's really important to get Visual-Meta into the national medical system, and particularly the medical library. And that's also a possible entry point for research funding for this kind of project. So, much appreciated, and eager to hear what happens.

Frode Hegland: Yeah, thank you. I'll be attending, of course, a lot of the other sessions, so this is nice, the National Information Standards Organization. I've also been put in touch with the people who are in charge at the NIH, the National Institutes of Health in America, with this metadata approach. So, to be able to say that we are really properly looking at the future... it's kind of ironic, because half of us here are old white men; some of us are young. But you know, we do need a wider community, and it's really important for the perspectives on the future of text. I'm glad to say that, at the moment, we certainly don't behave like who we look like. At least I like to think so. Anyway, have a good week.

Bob Horn: I look forward to Friday. Good week, guys. Bye, guys. All right. Good. Take care. Bye. Bye bye.
