Chat Log: https://futuretextlab.info/2022/03/05/chat-4-march/
Frode Hegland: Ok. Yeah. Yeah, I think we made progress on there so we can fiddle with that later and just run it by the guys now see what they think. Yeah.
Mark Anderson: And what I’ll do is, I think in terms of the heading of the, you know, the how to rubric, I’ll write some thoughts that would just send you just that rubric without having to have the other 70 megs of information trained on IBM.
Frode Hegland: We’ve been working a little bit, Mark and I, on the journal today, arguing like crazy as one should, and the issue we have is to make it really easy for everyone: easy for contributors and readers. Mark has really been fighting in the corner for future readers, which is great. So, you know, we have at least three types of contents. We have articles, which are mostly by me now, but there’s a few people coming, which is great, as we’ve had for the books. We have stuff that I scraped, that you guys just post on Twitter, that I turn into an article. And then we have the guest presentations. So what we’ve done is for guest presentations, we write the name, colon, and guest presentation. So it always starts with the name. So here, you know, these are two articles I’ve written. But then when it’s a conversation, because it’s so many people, we just say conversation, and then we put that. In the metadata we still know who wrote it, because otherwise it may get a bit messy. What do you think, makes sense?
Mark Anderson: One of the things that we’ve just been batting about is, of course, that, I mean, in the ideal world everyone’s going to be reading this PDF with a sort of visual-meta capable reader. But today that’s probably a naive expectation. So what you’re talking about is we’ll need to have a little explanation at the beginning to say that the layout may look a bit odd to some people’s eyes, but this has always been carefully done so that, for instance, if you quote, if you copy from the document, it will pick up the citation information. So we have the metadata addressability. So to a certain extent, something has to give between, probably, a purely traditional type layout and what we can do. But I think it’s very important. You know, the key part of the process here is to have the clean visual-meta, which is hopefully going to enable all these other lovely things we want to do. So I don’t have any problem with the fact that we’re stepping away from some of the existing print conventions to get that part right.
Fabien Benetou: I think especially for new readers, if there is a right way, let’s say, to consume and to read this. What at least usually makes me go the extra step of installing or testing anything is, how do you say, short-term benefits: as soon as I get that, OK, if I read this with a quote-unquote right reader, then I’ll be able to do X, Y, Z, and that’s when I’m going to install it, to go a bit off the usual path.
Mark Anderson: That’s nice. I think you’ve expressed that very nicely. I think we can capture that.
Frode Hegland: We’re still working on that here. We also want to introduce the community. But one other thing to tell you, Fabien, before Brandel comes here, only because I have already talked to him about this: ironically enough, Brandel, being so brilliant in working at Apple, he’s not really interested in Apple systems. So when we had the meeting on Monday, and then another meeting, we looked at the package contents of a document from Author, which is called a .liquid file, and on the Mac, if you control-click on such a file, you get an option to show package contents. Actually, I’ll show you that really briefly because it’s very relevant to our discussion. That’s my desktop, I do apologize, trying to organize family pictures, right? So, Show Package Contents.
Fabien Benetou: Hello, Brandel.
Frode Hegland: Just introducing everyone to the Show Package Contents thing. So this is what, you know, Brandel, on Monday you were there, right? No. OK. Well, no, of course not. Monday was very quiet. So what Brandel did, though, was quite magical. He went through this package contents, and for instance, we have a glossary, so we open that and we edit it, whatever. And it’s quite organized because it is JSON data. In this case there’s nothing, but a lot of the stuff is here. So Brandel, are you in a position to show what you did? Because that would be really nice to see now.
Brandel Zachernuk: Yeah, yeah.
Frode Hegland: So while Brandel is doing that, for context, for Fabien, the thinking is that, yes, we will continue to try to do something with visual-meta. Of course, I think that’s important. But actually, I will do a screen share, but also because we have this format. And yes, my company owns it, but it is a completely open format, this .liquid, so we can do things like this. So, actually, now I’m going to open this. So what is it, the whole thing, Brandel, or is it only the
Brandel Zachernuk: Oh, I can. So let’s just get it lined up so I have what I need to have shown,
Frode Hegland: This is so cool, I’m actually going to physically sit on my hands while Brandel does this demo.
Brandel Zachernuk: So there’s a 3-D page. Sure, let’s do that. So there’s a 3-D page here, this is 3-D, but there’s just nothing on it, and here is an Author file. I can actually look at it. Here. And this is mostly about being able to get this map view up and running. So there’s this cross-linked system where text is a word that has a definition, and then both the words type and toy have a definition that depends on text. You can see that here: type is used for making text, toy sometimes includes text, and it also has a family with a type. And then this thing has a definition, but it doesn’t relate to anything. It’s not a useful definition.
Frode Hegland: And the reason it’s yellow, by the way, is that it’s not in the document itself.
Brandel Zachernuk: Oh right. That makes sense. So, yeah, there is a range of sort of things that are represented here in various ways. And when you go to the Author document here, you drag it into Chrome, you end up with this,
Frode Hegland: Hang on, you don’t even have to
Brandel Zachernuk: Zip it anymore. No, so in Chrome you can just drop the Author document in. In Safari, it’s not so much that you unzip it; in fact, you don’t unzip it. What you do is change the file extension, and then Safari is able to recognize it. So that’s something that I am going to talk to the team about, whether it’s a bug, because when you drag something that is a package, or whatever it is that sort of clarifies to macOS that something is a package, into Safari, it turns up as a single file, whereas as you can see from this, it’s actually quite a large list of multiple elements. And so, yeah, you get access to all of those other elements. But yeah, so then that means it’s possible to construct this link structure of saying that here we have those dependencies, those inward dependencies, where text doesn’t depend on those things. And this is all just done on hover. And this is not a device, this Mac, with VR attached to it, but you can see that there’s a page that would allow you to do that. So that’s good for being able to see this dynamic view. It’s also sort of an existence proof that it’s possible to pull all of these things out. You know, one of the things that would be positive: you can see this definition that doesn’t have any status at all, because it doesn’t have a definition associated with it. This one has its definition underneath that.
Brandel Zachernuk: And this isn’t using the text package troika, but just using the very naive approach of encoding those things onto textures. But yeah, like, you won’t be able to drag and drop this archive on Quest, because Quest doesn’t have a file browser that allows you to do that sort of drag and drop. You may be able to select a folder, but you would be able to potentially make a connection between a computer, a Mac or PC with access to the Author file, the .liquid file, and then send the processing of it over as a series of messages through a connection, or put these up online on a server where you could do that. And even though these files are large at times, so the proceedings file for the journal is 700 megabytes, the actual text files that are pertinent to this part of it are only something on the order of a few hundred. So, Bob, I was just saying, this is an Author file in Frode’s application, and it has these sort of cross-links based on the definitions. This word here has a definition which includes toy, and so you can see that it has a look in that direction, that kind of thing. And I was able to, I really like that, drag this file across and put it on here. And so this is something that happens sometimes, and I haven’t really managed to figure out why it sticks like that, pointing in the wrong direction. So I
Frode Hegland: Just.
Brandel Zachernuk: So I’m curious as to why that is happening. But yeah, the sort of structure of the document, it’s the same, and it means that we can ingest things that have these rich relationships. So yeah, it’s a useful first step. When you look inside one of these Author .liquid files, you’re able to see all of the contents and what’s going on in them, and so yeah, it’s pretty straightforward to sort of obtain information out of this dynamic view and turn it into what you see on the screen here, except hopefully without this angle dropping.
Frode Hegland: I just have to add two things. Number one, Brandel, thank you again, and secondly, for the rest of you, if you want to play with this format, as I said, it is completely open and simple. It’s just a bunch of lists and JSON and that kind of stuff. That means that if we build a way to get it from Author into Oculus, you can, if it has read-write access in both directions, actually move it about in 3D. And then when you get back to your normal computer, that will be encoded. But of course, you’ll still just see it in X, Y, but it’ll be there for next time. So it means that we finally have the opportunity to build a normal document in and out of VR. Because, to me, Mark and I were talking about visual things earlier today, and you know, I’m really hung up on visuals, you know, Chelsea School of Art background and all of that, and pixel-peeping, caring about megapixels. I mean, with Oculus, it does bother me that it isn’t sharp. But what is amazing, of course, is when you move your head, you know, it’s like you’re looking through a fence: you move your head and suddenly it’s clear. So I think we agree that one of the chief benefits of working in VR is seeing relationships and items, not just lists. So therefore, if we can have an authoring package, and again, anybody should be able to write to this kind of thing, we can do things in VR and take it out again. It’s amazing.
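The round trip Frode describes, moving a node in 3D and landing back on a flat X, Y layout, could look something like this. This is a speculative sketch: the field names ("nodes", "id", "x", "y") are assumptions rather than the real .liquid schema, and the depth coordinate is simply dropped on the way back:

```python
import json

def write_back_position(map_json: str, node_id: str,
                        position_3d: tuple[float, float, float]) -> str:
    """Project a node's VR position back into the 2-D map JSON.

    The VR session works in (x, y, z); the flat map view only stores
    (x, y), so depth is discarded when encoding back, matching the
    'you'll still just see it in X, Y' behavior described above.
    """
    data = json.loads(map_json)
    x, y, _z = position_3d
    for node in data["nodes"]:
        if node["id"] == node_id:
            node["x"], node["y"] = x, y
    return json.dumps(data)
```

The point of the sketch is that the write-back is no harder than the read: any tool that can parse the JSON can also be a VR editor for it.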
Bob Horn: Yes.
Frode Hegland: And, Bob, currently it is document based, but that doesn’t mean we can’t make the document be landscape, and suddenly we have a mural, because these are vectors, so the scale is still infinite. Yes. Fabien, what do you think?
Fabien Benetou: I did not try Reader and Author because I don’t have a Mac. So it’s pretty crazy, I know. And then when I saw the emails earlier, or during the week, about it, I was like, yeah, but to me that’s just remote, let’s say, I don’t have access to that. And even though it sounds a little bit, how do you say, not trivial but anecdotal: one of the reasons I don’t have a Mac is because I want to be able to tinker with anything on my computer. It doesn’t mean I’m able to do things like that at the kernel level, or whatever, maybe I never bothered to. But it’s quite important to me that whatever I use, if I want to at some point tinker with whatever code on my computer, I can do that. So at that point, I understand some of the features, but unfortunately, yeah, I can’t play with that. So I’m looking at that from the outside, let’s say.
Frode Hegland: Well, maybe from the outside, of course, initially from the outside. But the thing is, and I have to defensively reiterate again, yes, I do have a small, independent software development company; I still haven’t made enough money to actually take money out of it, so it is just trying to support the development, but all the formats are open. Now, having said that, the thinking here is that maybe you could have that package too, and what you’re interested in is just some of the adjacent files, in whatever environment you choose to have, on Linux or wherever you want it to be. So in a way, it’s free. So the fact that Author is there, for me it’s nice, and you know, we do have a free version of Author, but it means there is a thing you can take in and out of VR, because I look at this as the difference between publishing and manuscripts. Published is, of course, in this world, in my world, still a PDF with visual-meta. But this is so much easier to extract the data out of, because it’s just a bunch of files that has the JSON. So even if you don’t have the full authoring environment… You know, Brandel, do you want to talk maybe about how, if you only consider, let’s say, the glossary and the map, forgetting everything else, it’s still quite accessible?
Brandel Zachernuk: I mean, as I was kind of saying before we got to parsing it: if you have a readable sort of set of relationships, it’s not a particular challenge to process those into obtaining relevant positional data and things like that. And this is no exception. It’s JSON with nodes, with unique identifiers and X and Y positions. So, yeah, in terms of finding things, it’s just that most of the time when people produce documents at this point, unless they’re Illustrator documents or otherwise, there’s not a lot of relevant information to parse and process in that way, for something that constitutes a graph of things. So, yeah, it provides a useful starting point for something that people have the ability to author in a way that can make something that’s presentable in VR.
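That structure, as Brandel describes it, is simple enough that a few lines can pull the layout out. A hedged sketch, since the actual field names in the map JSON are not spelled out in the conversation and "nodes", "id", "x", "y" are assumed here:

```python
import json

def node_positions(map_json: str) -> dict[str, tuple[float, float]]:
    """Extract {node id: (x, y)} from a map-view JSON blob.

    This is all the positional data needed to re-create the dynamic
    view in another renderer, such as a WebXR scene.
    """
    data = json.loads(map_json)
    return {node["id"]: (node["x"], node["y"]) for node in data["nodes"]}
```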
Fabien Benetou: Quick, quick question, because again, I did not, I only skimmed through, but does it mean the JSON and glossary are separated from the PDF, and that the final file is the merge of the PDF and this metadata?
Frode Hegland: This has nothing to do with PDF, so this is the equivalent of a .docx rather than a .pdf.
Brandel Zachernuk: Yeah. The internal interchange format doesn’t contain PDF at all, but it does target PDF ultimately, and that’s where the visual-meta sort of components come in. One thing about the current implementation of the visual-meta is that it doesn’t include the positional data for the graph, because it’s not sort of identified as a relevant component for the purposes that have currently been kind of stood up for it. And so it would be possible for it to be dumped out, but it just isn’t right now. Yeah.
Frode Hegland: Mr Anderson, who was muted.
Mark Anderson: Just a quick one, because I think Brandel and Fabien will understand the import of this more than me, but I’ve just been looking through the package contents. So one thing that’s not exposed in that at the moment is effectively the DOM, if I understand it. In other words, the document structure,
Frode Hegland: It kind of is. It is in the RTFD bundle.
Mark Anderson: Well, it’s in the RTFD, yeah, but it’s not in the JSON as such. Well, I guess it’s not. So, I mean, yes, it’s possible, but it’s only in there; that’s fine. I mean, it doesn’t bother me, but I was just wondering if I understand that it’s in there. I didn’t think the RTFD as such had the structure; it’s just that it’s got the whole text, or from the text you can get the structure. Exactly, yes, that’s what I’m getting at, but I don’t want to go too far into the weeds anyway.
Frode Hegland: So there is no...
Brandel Zachernuk: Not sure. There is a very large list, particularly in the serious document, the proceedings document, and I don’t exactly know how to walk that list. I took a look at it in Xcode, and it has a large tree hierarchy of definitions that may pertain to that main document. And to that end, it may have semantically relevant sort of components to it, but I’m not aware of the way to read that right now. But given what’s present in the other things, the dynamic view and the definitions, the glossary, they are all there. How they are reconciled may be something where, one thing with the dynamic view: the relationship between the actual definitions is not codified in the JSON in the file. Those are constructed on the basis of the contents of the text, and so I just reproduced that dependency. It may be the case that the RTFD is actually the ground truth, but if so, then it shouldn’t be too hard, because Author doesn’t have free-form formatting abilities; if there’s a hard relationship between ‘if text is this size, it is this’, then we’ll be able to retrieve it from that,
Frode Hegland: Which is exactly the case. I mean, the reason I think this is a big breakthrough, that we have this piece of infrastructure we can maybe play with, is, for instance: I expect that Adam will not agree with this, because he’s very much off on a different tangent, and I think that’s a good thing. So what I would very much like is for us to work towards an opportunity for an environment where I write a document, and the document could be a sentence or two hundred pages, doesn’t matter, but let’s think of it as an index card. And I put it into our VR space: Adam, this is my proposal, and it’s there, and whatever metadata that we’ve just discussed is available. So you guys can literally pick it to pieces and say, well, look at this glossary, it doesn’t work with that. So all that is available; the main body I’m not really worried about, but all these things. And then Adam can say, well, actually, I think we should do this kind of a format, and put a card up here in space. The questions then become: how do these two know about each other? You know, do we write the relationship of one onto the other? Or is it that we have a new JSON to describe the relationship of the documents in the room? These are really important issues I think we can work out. But in terms of us as kind of a community becoming quite real: the simple demonstration of working in Author, clicking a button, and, since we already have the ability to post to WordPress, so it’s not that hard to interact with a web server, it puts that into the user’s registered VR space, whichever one you guys have designed, and you look at it. I can imagine sending that to Vint Cerf and saying, this is real, and it’s a thing that goes in and out; it’ll be much easier for him to help us network, get funding and all of that stuff. Yeah. Fabien, please.
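The second of Frode’s two options, a separate JSON describing the room rather than links written into either document, might be sketched like this. Everything here, the "documents" and "links" fields and the relation name, is invented for illustration, not an existing format:

```python
import json

def link_documents(room_json: str, source_id: str, target_id: str,
                   relation: str) -> str:
    """Record a relationship between two documents placed in a shared room.

    The room file stays separate from the documents themselves, so
    neither document has to be rewritten when a link is made in VR.
    """
    room = json.loads(room_json)
    room.setdefault("links", []).append(
        {"source": source_id, "target": target_id, "relation": relation})
    return json.dumps(room)
```

The design choice mirrors the question in the conversation: links live in the room description, so each document stays a self-contained card.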
Fabien Benetou: So, again, I’m still trying to put the pieces together. But one thing I wanted to mention about the not-having-a-Mac part: I don’t, but I do have a browser, and I guess all of us do, obviously. And the Quest also does have one. And because Author, in the end, if I understood it right, does output to PDF with more things: there are also PDF readers in the browser that are nowadays pretty powerful and give visually good results. I’m wondering at what point it could be interesting to consider a web-based version of Reader that would be able to have those features, but without anybody having to install anything, and even including some server-side API that would be able to get any update from, let’s say, usage in VR.
Frode Hegland: Well, that’s a really important question, I think that’s largely up to the guys in the community who want to develop, you know, you basically build what you want, but there is.
Brandel Zachernuk: I think what I’ve been suggesting is that you could actually offer an app, even a paid app, over the web; there’s very little about the way that the application works that would be dependent on acting as a native application on your local system. The benefit of that being that it would be visually and functionally similar, identical even, to the capabilities that you have within the downloadable app today, but with the added benefit that it would work on all platforms immediately, and also have the benefit of being able to kind of jump into virtual reality at that same instant.
Frode Hegland: Thank you for that translation, that was actually useful. I spent a lot of my and my family’s money on building Reader and Author, because the way that my brain works, I think, is the same with you guys, actually: imagination is limited, you know, I build to what I know, and then you get to that level: oh damn, we need to go down this alley. So to build it from scratch would not have been super expensive; to build it the way I did was quite expensive, so I can’t really afford to change platform now. But logically, I think that makes complete sense. Oh, all right, that’s my mother, it’s her birthday. I will tell her, I’ll call her later. I apologize. The other thing is performance, because Author is optimized now, finally, for the M1, and it makes a huge difference; even to generate a PDF takes quite a bit of time, because it has to do a lot of nonsense. But one thing that could be interesting is to also have a minimal version of Author on the web, so, you know, to do at least the basics. I think that’s maybe what you guys are pushing at. That would make sense.
Brandel Zachernuk: Yeah. Yeah.
Frode Hegland: And that would then get past a lot of the ‘how do you get the document up’. For those who still want to use a normal offline Author, they would still need a way to synchronize the documents. But I’m sure, at least in the Mac community, iCloud can be used, but you should be able to use the Author web independently of anything. Yeah, Mark,
Mark Anderson: No, I just was going to reflect on a really interesting point Brandel made when we just mentioned the DOM: the effect of having something to hold on to is actually not having text that has just abstract styling, which is something that a whole generation, or generations, of us have learnt to do fiddling around in Microsoft Word or something. So that, in a sense, both a limitation and a benefit in Author is you can’t color too far outside the lines in terms of what something means structurally. And if we want to have a good movement backwards and forwards, I actually see that as a good thing. We may need to change our attitudes to some things, but that’s fine, that’s all part of, you know, progress. But I really like this idea that it removes some of the ambiguity. So we capture that structure; there’s a lot less guesswork.
Frode Hegland: So, yeah, that’s a good point. So, guys, we’re having an argument now, an intellectual argument, which is good, so we’re kind of dogfooding ourselves. If we were now in a virtual reality space, we would probably want to put notes or documents or whatever you want to call it into this space and say, here’s the thing for that, yeah, but there’s the thing against that. What kind of thoughts do you all have on that? Because that seems to me to be an early and real use case. I say that, probably, and like you said, I will have a look at that also, but I’m just wondering: how would you want to do that?
Bob Horn: Would you repeat again what the question is? Yeah, I’m sorry, I was distracted. I tend to tune out when there’s code talk. So if there’s a switch, it’s a rapid switch; I have to pay attention again.
Frode Hegland: Yes, yeah. So the question is: we’re having now a discussion, argument, whatever you want to call it, about different approaches to something. So if we were now in a virtual, augmented, or whatever environment, how would we want to use that environment to help this discussion? We would probably want to put up, floating in space somewhere, an initial proposition, statements that are for and against it, and so on, right? To some extent. So my question is
Bob Horn: This is called an argumentation map in visualization. Yes, it’s one that a British philosopher named Stephen Toulmin originated. In the 1980s and 1990s, I took the lead to take his simple little overnight classroom exercise assignments and make an industrial-strength method out of argumentation mapping. It has now become a small subfield of philosophy and visualization and other things. That’s what I would use now.
Frode Hegland: Absolutely, that’s a good point, Bob. I’m just trying to be very non-leading with the question, because I think this is close to what the guys have been working on, so I’m wondering what thoughts they have of how they want to build such a thing, or is that not really on the radar?
Bob Horn: Well, there’s been plenty of recent academic research on the subject there. Am I not answering the question? I’m sorry if I got the wrong question.
Frode Hegland: No, no, no. What I mean is looking at it from a specifically VR point of view, which I think has some different wrinkles. That’s why I’m, you know, particularly wondering, for the guys who are the most experienced in VR, how, from a very primitive or initial point of view… because, OK, the context for the question is: it’s nice if we can build something together. But also I have to admit that if we did what I proposed earlier, once we have all the elements in VR space, I honestly really don’t know what to do with them. Which is kind of fascinating. So, yeah, Fabien, you have your hand up.
Fabien Benetou: So, moving things around, to me, I always think of Post-it notes, but it doesn’t really matter: like a set of small chunks of text, let’s say. To me, that’s trivial; it’s pretty much solved. And among us, we have shown maybe a dozen or more ways, and I’m sure there are even more. The challenge there is capturing new ideas on the spot, because we don’t have convenient access to a keyboard, or, I want to say, not a keyboard: an input method that allows that. And we already discussed it a bit before, but there are different trade-offs: like, OK, you can use a keyboard, but you actually need to find it, and then maybe eventually, like, sit down to find it. We can use speech recognition, but then if you use abbreviations or words that are less common, it’s harder; maybe there’s noise. I mean, there are a lot of different ways, but they are all problematic. And another one is a compromise where some of us are in VR and the others are not. But honestly, I imagine very few of us feel like, how do you say, doing secretarial work: we know it’s important, but at some point we want to be part of the creative action. That might be a solution, but I found that challenging: capturing at the speed of thought is hard, it usually doesn’t go at the same pace. So it’s not a solution, it’s more of a problem I have. I find manipulating, moving around and even saving useful; it’s not even hard, I think we know how to do that. But how to get that one new idea while being in the space, while keeping to the pace of thinking and problem solving: to me, that’s still an open question. I don’t really know how to do that.
Bob Horn: So, if I’m going to, I’m going to try to restate the question that I heard. And the question is: how would we, you know, the context is virtual reality, virtual reality advanced considerably farther than it is right now, by this group and others. And so within that context, how would we advance the ability of humanity to have very complex discussions about claims which they differ on. Is that the question?
Brandel Zachernuk: Yeah, I think I have actually a tentative answer. So one of the things that is really nice about speech like this is that it’s very easy for people to create and invoke things. Something that we do informally is add things to the chat, as sort of tabling sort of components that deserve to be a little bit more persistent. I think one of the things that I would be very, very interested in is a more continuous and sliding scale for the ability to create and invoke artifacts that we can kind of put in place. It reminds me of a conversation that I had with my friend last night. I was kind of sick of the fact that he does pretty pointless things with his electronics tinkering. And so rather than complaining about that, I said to him: what are the five most interesting things you find about fabrication and electronics? And so he listed those, and this is in a chat. And so I kind of put them up, sort of in my head, as specific points to look at, and then said: like, looking across the things that he’s done, what does he like most about those, and what are the challenges that he thinks are most interesting to address in terms of getting better at it? And so, to that end, having this capacity to have something that is something like conversation, but also something like diagramming as well, where you have the capacity for evanescent speech and other utterances, things that you say and then they go away.
Brandel Zachernuk: But then you also have, fairly seamlessly, the ability to literally capture speech and words as they come out, that you can then kind of position in a persistent space. And I think that would be very interesting from the perspective of being able to create these things, position them and link them. That would give you the ability to do the argumentation theory. I haven’t seen whether Noda provides a multiplayer mode, but essentially that would be the least worst start that I can imagine. I’m not aware of other platforms or applications that have the functionality of Noda. I haven’t actually used it for anything serious, because I haven’t bought the voice recognition component to be able to have unlimited speech input. Something that I have written for myself so far is the ability to do speech recording in Quest and position those recordings in space. I’m not sure if I’ve shared a link to that. What I need to do is make use of a system that goes between my computer and my Quest, such that it’s able to ferry over the speech recognition for free into that shared sort of application space. But at such time as you have the ability for one person to connect via their VR headset, their Quest, as well as, say, AirPods on their computer, then it shouldn’t be challenging to ferry that content to multiple participants and have multiple people doing those things, so that you have…
Brandel Zachernuk: You may have to have two computers per person: their virtual reality headset, and the thing that is responsible for doing speech recognition. That’s until I can convince the people who make the Quest browser to actually enable that speech recognition at that level in the browser, because I think that hopefully the value of it will become clear as people start to make more use of it. But yeah, so to that end, you know, having the ability to place things at the speed of, if not thought, then of performance, I think is a valuable thing; having the ability for those to become semantically relevant in the context of a conversation. So you may have an argumentation mapping mode, where you have shorthand for being able to create the linkages of the kind that Toulmin indicates. It looks like there’s a limited grammar of what things can connect to which other things, and as long as you have something that can conform to those, then you have the space to argue, and refer back to the components of that argument.
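The laptop-to-headset ferry Brandel describes reduces to a fan-out relay. This sketch deliberately leaves out the transport, which in practice would be something like a WebSocket or WebRTC data channel between the recognizing machine and each Quest, and just shows the one-publisher, many-subscribers shape; the class and method names are invented:

```python
import queue

class TranscriptRelay:
    """Fan recognized utterances out from one machine to many listeners.

    One participant's computer does the speech recognition and publishes;
    every headset in the shared space subscribes and receives a copy.
    """

    def __init__(self) -> None:
        self._subscribers: list["queue.Queue[str]"] = []

    def subscribe(self) -> "queue.Queue[str]":
        # Each listener gets its own queue so slow consumers don't
        # block the others.
        listener: "queue.Queue[str]" = queue.Queue()
        self._subscribers.append(listener)
        return listener

    def publish(self, utterance: str) -> None:
        for listener in self._subscribers:
            listener.put(utterance)
```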
Frode Hegland: Noda is pretty good. I have the full version of it, and I was surprised when I started labelling things just by saying them, so it does work quite well for a normal vocabulary. But my problem with it is that the notes are very shape and colour based; they're not really interesting information objects. So it's a great feeling walking around it, but also, on the notion of the speed of adding new ideas and interaction, I do think this is one of the reasons it's so important to have the hybrid of VR and normal work. Fabien, this is partly what you were talking about: I want to be able to sit down and do things on my laptop, press a button, and have that now in the VR space if I need to, to have that back and forth. Mark.
Mark Anderson: I just thought, listening to Brandel now: you were talking about the fact that, whilst there are limitations, broadly speaking speech capture and transcription is getting pretty good. And some things are evanescent and just roll away, essentially roll off the transcript. But if one was able to effectively pinch, you know, take something from the transcript, then the act of copying it from the transcript, which is otherwise lost after some time in a buffer, creates an object that you may then apply more information to, by whatever means, whether it's a sort of visual structuring or whether you just amend it somehow. The point is moving it from the evanescent stream to a persistent object that you may then refer to. Just a passing thought, and I've no idea how practical it is, but that's an echo I took from what was being described.
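Mark's pinch-to-persist idea could be modelled as a rolling buffer whose segments fall away unless they are pinned into persistent objects. A sketch only; the names and the arbitrary capacity are assumptions:

```javascript
// A rolling transcript buffer: old segments fall off, pinned ones persist.
class TranscriptBuffer {
  constructor(capacity = 50) {
    this.capacity = capacity;
    this.nextId = 0;
    this.segments = []; // the evanescent stream
    this.pinned = [];   // persistent objects promoted out of the stream
  }
  push(speaker, text) {
    const seg = { id: this.nextId++, speaker, text };
    this.segments.push(seg);
    if (this.segments.length > this.capacity) this.segments.shift(); // rolls away
    return seg.id;
  }
  // "Pinching" a segment copies it out of the buffer into an object that
  // can later carry extra structure: notes, links, a position in space.
  pin(id) {
    const seg = this.segments.find(s => s.id === id);
    if (!seg) return null; // already rolled off the buffer
    const obj = { ...seg, notes: [], links: [] };
    this.pinned.push(obj);
    return obj;
  }
}
```

The key property is the asymmetry: anything not pinned within the buffer window is gone, which matches the "evanescent stream versus persistent object" distinction.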
Frode Hegland: So, Brandel, go on, and then Fabien.
Brandel Zachernuk: Well, that's one of the things that I was most impressed by in terms of the function of it in Words Reality: the way that it had this visual word search automatically running at all times. And the neat thing about that was that it created that evanescent panel of image results that you could pluck from. So you knew it was there; it was actually out of one's own field of view, so it wasn't distracting, but present, such that if you had the desire to find something to encode and to represent, you could. That's a little trickier in multiplayer, in the sense that you may need to have text under people, and you also may want to, like, you personally may want to extract someone else's words and pull them out of the stream and say, hang on, this is something I want to refer to. And so, yeah.
Bob Horn: So what you're talking about, as I understand it without having actually seen it, is a kind of hyper-index that is dynamic, created second by second. Is that right?
Brandel Zachernuk: Yeah. So I guess, by virtue of us speaking here, this could be happening in a more immersive context, although it's not by necessity that it has to be virtual reality; I think these capabilities could occur here, given that the transcript is being automatically generated. If you are in an application like Illustrator, or, there's effectively a multiplayer Illustrator program called Figma, I'm not sure if you've seen that. It's neat: you get to see other people's cursors as well as your own, and you can sort of contribute to a document. I can send a link to Figma. But yeah, having the ability to identify the things that you want to hold on to and place into a document as a consequence of those things coming up. I mean, it's very similar to the concept that was powering, at least in my impression, Google Wave: the recognition that there isn't a substantial difference in kind, but only in degree, between chats and emails and documents. Does anybody know Google Wave, or does everybody know Google Wave? Yes, that's right, Mark. Google Wave was a very exciting concept that Google had in about 2008, 2009, where they were exploring a new concept for a way of storing, encoding information and communicating, where they said that there would be these waves.
Brandel Zachernuk: These things that were the same as web applications and chats and documents, all at the same time. People would be able to put functionality into them; they would be able to take basically a chat log and turn that into a document, or turn that into a page. Eventually it failed; for what reason, I'm not sure, in terms of the inside story of why it failed within Google. But my take on it is that it failed because it was too confusing to people what they were doing with the specific thing. There may have been technological aspects, but the reality is that we as human beings need conceptual anchors as to whether what we're doing here is chatting or producing a document, and essentially the Overton window of what was thinkable about what a document was wasn't there yet. And I presume there was some technical stuff, but that became the basis for the live editing which, as I understand it, was after that really the first honestly, authentically live real-time system for multiple people to edit a document. So the work conceptually didn't go away, but it came back in a much more obscure form, in support of metaphors that we were more comfortable and familiar with. Sorry about how long I talked just then.
Fabien Benetou: No, no worries. And it's funny: when you mentioned it, at first, no, I didn't know it, but yes, I do know it. And I don't know if it was a technical problem; I think maybe it's not a kind of tool for everyone. Not everyone is having a deep conversation, or a deep conversation in a group, or wants to put in the extra effort to do the capture and curate it and then iterate. I mean, it's pretty demanding. So it actually did pose some interesting questions. But what all this made me think of is also that right now we often talk about capturing, and bringing it in and out of VR, even in the journal. And I'm wondering, at one point I had the API for Zoom open on my desktop, because we rely on Zoom, and I don't know how usable that is. There are other platforms, like Jitsi, or plenty of other video platforms, that have different capabilities. And I'm wondering if that's also one of the bottlenecks: that we're relying on platforms that don't give us enough control to bridge directly to, maybe not necessarily a VR space, but a space we could use. It's one thing to be able to pop the headset on and off; now, with the Quest, it's relatively trivial, and we can switch to it quite easily, but it still requires us to prepare the space for the information. And I'm wondering how much the platform we choose to have the conversation on right now is limiting our ability to bridge and connect, to have better or similar conversations in VR.
Frode Hegland: I've got my hand up, right? So, Zoom is fine; we've had over a hundred meetings recorded since we started last year. It's not brilliant, absolutely not. But one of the key things, and welcome back, by the way, here's a bit of a review: first, yes, we can take a PDF with Visual-Meta into VR and extract from it. Fabien has built that; it's fantastic. Then I've looked with Brandel at taking a native Author document, the .liquid document, because in there we have lots of JSON for each individual type of metadata, already beautifully laid out. So Brandel has taken that into VR to do the Map in Author: you open it up in VR, and it's 3D, obviously it starts out flat, but all the relationships, everything, is there. So that was the origin of one discussion. Where we got to now is, well, that is kind of like my index card floating above everybody; it's my proposal. So then there are going to be pros and cons. So we're talking about how VR could help us resolve a direction forward. But I really do feel strongly, Fabien, that living only in VR is not going to work for the general public. I really think whatever is in VR has to go in and out, and I know that not everything can. But if we can keep using Zoom to record our audio, and if at some point we all end up having our Oculuses on in the meeting, looking at shared objects through the web, that would be a really great step forward. I just think we need a base. I don't care which; if everybody is OK with another platform, we'll just switch, it doesn't make any difference. But this is just audio with a couple of smiles. I think we really need to look at how we can put things into a VR environment properly.
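The .liquid format Frode mentions isn't documented here, so purely as an illustration: if its JSON metadata yields a list of named items, the map that "obviously starts out flat" might begin with a simple grid layout that a user then lifts into 3D. The function name and spacing below are assumptions:

```javascript
// Hypothetical: lay out named items on a flat grid as a starting point
// for a map that a user can then pull apart into 3D.
function flatLayout(names, spacing = 1.5) {
  const cols = Math.ceil(Math.sqrt(names.length)); // roughly square grid
  return names.map((name, i) => ({
    name,
    x: (i % cols) * spacing,
    y: 0,                              // flat: everything starts on one plane
    z: Math.floor(i / cols) * spacing,
  }));
}
```

The point of keeping `y` at zero is exactly the in-and-out requirement: a flat layout round-trips losslessly to a 2D document, while any height a user adds is extra state that would have to be stored alongside it.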
Fabien Benetou: To clarify, I did not mean we should spend all our time in VR, but rather questioning what the limits of the current workflow are, in order to be VR-ready if we want to be, to have a workspace that will carry those conversations to VR. Because otherwise, if I have to think about everything I have to prepare in order to hold a meeting in VR, then I want to give up, because it's going to be pretty tricky; I'm going to forget some of the limitations, and so on. And it's not a criticism; maybe right now having the journal is sufficient. I'm just saying that maybe one bridge here is that the videos are now available on YouTube after the fact, or wherever they're stored. Yes. Yeah. So that's one way: we already have a repository of video that is time-coded and can be watched in VR. I imagine the Zoom meeting could be live-streamed to VR, so that if some of us want to be in VR to watch and participate, that might be feasible. And eventually, side by side with the live stream, the past video recordings, and next to them the transcript, and next to that, visual links to where they are used in the journal. So that, without changing any of the quote-unquote infrastructure that is already working now, how do you have a workspace in VR that keeps on being updated? And again, not to say it's mandatory or even a good idea, but that it's a live platform that can get the content as we produce it.
Frode Hegland: Sorry, Alan, I'll give you the mic in just a second, on this specific point. I don't think we need to save everything, obviously. And the Google Wave issue, I think, is quite useful here, because we are operating in different temporal spaces, you know, whether it's email or meetings when we're together. My feeling, and I'm not answering it, but I just want to be really, really clear, is this: we all have different ways of working, but during the week I'll go, oh my God, this is going to solve everything, and I write it down, and then I share it with you guys. And some of you read it, some of you don't, which is absolutely fine; that's the way it is. I then want to be able to put that bit of a statement into a space that we can share and go in and out of. And, you know, we can build this real knowledge base, because it's easy to think of it in general terms, but you guys have been in there quite a lot and know how incredibly not real it is yet. And one of the key issues in this, which Brandel and I were talking about earlier in the week, is that if you just take a normal document into VR, break it up, and put things around, you then have to record, obviously, where those pieces are.
Frode Hegland: That's if you're going to go back to 2D and out again. But if you then, let's say, follow a reference, and you grab that thing and place the referenced document here, you're also going to have to store, somehow, that there is now a second document, and where that is. And then, of course, there's the issue of where it is in the room: is it on this side or on that side of me when I open it up? And that, as Brandel has explained to me, isn't known by VR yet, because the environment and the items have to be separate. So one of the great concerns here, for me at least, is this: I want to be able to focus on the laptop for one kind of work, for that kind of medium. Then I want to be able to see relationships with you guys, somehow shared; audio doesn't matter as much yet, because that can be via Zoom or whatever. But then, if you make a change to something, that somehow needs to be recorded in a way that we own, not just in terms of intellectual property or privacy, but so we can open it up again in the future, not have it exist only in that room. Right. Hi, Alan. Long time no see, Brooklyn.
Alan Laidlaw: Yeah, yeah. I trust you all recognize the much better camera quality I have now; I have a new computer, out of necessity, after the other one was freaking out non-stop. But happy to be back. And I do have a lot to catch up on based on what I just heard. I want to try and connect the dots from what I was understanding before, so humor me, if you will. The real struggle with the Zoom calls that we have is not that we have them, not that they're recorded, but that it's difficult to extract information from them; there are moments of relevance that either would be really nice to capture and put into the journal, or moments that deal specifically with some form of research or a new idea, and it's just very hard to unpack them. All right. So I can understand that being the motivation for doing this in VR. But that problem would still persist in VR. Just like you mentioned, Frode, it would still take all the effort to create artifacts and put them in spaces so we could interact with them. So my thought is: is there a way we can try and solve this in 2D first, even just for the small case of not solving the whole problem, but solving our problem of the Monday and Friday meetings? In my mind, a not perfect solution, but a step in the right direction, would be if we met in a Miro space or a Mural space or any of these other tools that are part of the way there. Then we could sort of live in, and have an embodied experience of, what it's like to move ideas around while we're talking about them. And I think that would transition over to what a VR experience could feel like, or it would inform it somehow. So those are my thoughts.
Frode Hegland: OK, so that's really useful. Mark and I had a meeting right before this, and we've met a few times talking about the journal. And I don't want to suddenly go back to oh, it's all about the journal, which can be really annoying. But the benefit of the journal is that it's a monthly statement of our productions. So, the parts of the journal, logically: one part is that every single meeting is transcribed and put up on the website, so if we ever need to scrape it, it is all there. The journal has human-transcribed guest presentations, and then it has articles; I've written a few, and you're all very welcome to write what you want. Plus, we're having more and more guests writing, just like with the Future of Text book. And then there's the complicated bit, which is when I see you guys say something useful on Twitter, for instance. Sorry, Mark, that's another point I should highlight for Alan. So, for instance, Brandel and Adam have been building stuff and posting it in our chat on Twitter, and I try to turn that into a journal article. It's a bit hokey, but it's OK. So, the term that came up in this discussion, I can't remember the term, but the whole point is that not everything needs to be recorded, not everything needs to be highlighted, right? But if we build the means, I call them documents, you can call them index cards, nodes, whatever, if we just have the means to create a thing, put it in a space, and discuss relationships, I think we're almost halfway there. But I do think that what we're after is something very, very different from 2D. I would have agreed with you a few months ago, Alan, you know, can we try to solve a lot of it in 2D? But it's so experientially different, I don't think it necessarily translates. So that's why. Yeah.
Bob Horn: "I don't think it necessarily translates." Who knows? You can't know; that's speculation.
Brandel Zachernuk: Well, no,
Frode Hegland: No, no, I don't mean it like that. What I mean, Bob, is that, you know, you write a book and you go see the movie: oh, it didn't live up to expectations. VR is a completely different medium from 2D. It really is the difference between a book and a movie. It isn't just 2D straightened out; it isn't. Second Life isn't VR. Yes, Brandel.
Brandel Zachernuk: So I don't think the claim is that you get everything. It sounds like I'm speaking for everybody at this point, but I don't think Alan is saying that it is the same, but that we can get something about the nature of that interaction; that we can get more than nothing by carrying on one of these meetings in the context of a Miro board or a Figma document or a Mural. These are all essentially just multiplayer Illustrator, with their emphasis on various aspects of it. Bob, Miro sort of prioritizes the idea of things that look like sticky notes, whereas Figma is a little bit more: it has a pen tool and the ability to draw lines and things like that. So they're all tools for more or less the same thing. What they don't have is the automatic creation and reconciliation of the chat stream into those artifacts, but that's still a much simpler thing to incorporate, by just identifying a phrase as it comes past and typing it down. So I think it would be very interesting to do that: to literally structure a conversation by not being in here, but having this provide the audio substrate, and then actually all meeting together in a Miro board next time, and see what we do, what we create, as a consequence of talking about different things and putting them into the record, effectively. While we were talking about this, I was also very intrigued by the idea of being able to consume something like the YouTube video of this and turning that spatial, you know, because you then have whatever ability for offline processing you need.
Brandel Zachernuk: And so you could take this grid of six windows, which is the way I'm looking at it right now, and peel those off into six panels in space, and have the transcript, which also exists by virtue of the YouTube video, underneath each of those people. By virtue of the green square and the time codes and the speech, you'd be able to put those things there. So it's a video player, but one which is opinionated about what you're expecting to get out of the video. You're able to pull words out and put them into your space, and then those are linked to those times as well as to that person. And then you might be able to create cross-links, by having those things relate times to other times and speech to other speech within it. I think that's really, really interesting, in terms of not just this interaction, this literal set of people and this venue, but the recognition that there is an increasing amount of discourse conducted in this way, and you can take that view of an opinionated video player. I love to ask, what is the Photoshop of writing? But this is: what is the Photoshop of watching YouTube, and what do you gain from it, and what can you intend to do with it? That would be fun.
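Brandel's "opinionated video player" hinges on transcript segments that carry a speaker, a time range, and text, much like YouTube caption tracks. A minimal sketch of pulling a quote out of the stream; the segment shape and helper name are assumed, not taken from any real API:

```javascript
// segments: [{ speaker, start, end, text }], times in seconds.
// Returns the quote playing at a given moment, still linked to its
// speaker and time code so a spatial player could jump back to it.
function quoteAt(segments, timeSec) {
  const seg = segments.find(s => timeSec >= s.start && timeSec < s.end);
  if (!seg) return null;
  return { text: seg.text, speaker: seg.speaker, start: seg.start };
}
```

Cross-links of the kind he describes would then just be pairs of these quote objects, relating one (speaker, time) to another.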
Frode Hegland: I think we're looking at very different things to augment. The discussion we had on Monday, Brandel, I thought was one of the best, because there were only three people, so it was very focused, and you said a lot of new stuff that I hadn't actually heard. So to me, that one is worth reading the transcript of. But most of this talk is just not interesting for anyone, ever, apart from historians in the future if we have a breakthrough. So that's why I think the kind of knowledge objects that we think are important, whether you call them documents or nodes or whatever, as I said earlier, are really, really key. It's not just capturing our interactions; I think that is a really important foundation. But, you know, in the chat here, Alan just told me what he's about to talk about, an idea for an implementation hack. That's a worthwhile thing; my waffle now isn't actually that interesting. So maybe it's because of my focus on text and so on. But if we just talk about the journal for a moment: currently, the journal is quite traditional. But if we look at the idea that every month we have to produce an artifact that is a statement, with links to the entire underlying dialogue, fine; but, you know, someone wrote an article, someone replied, and there was a dialogue back and forth. Maybe we get closer to an idea of how to carry on the conversation. But anyway, over to you.
Alan Laidlaw: Yeah, and the journal and newsletter are still something that I'm very interested in; I've got work on that to share, and so, yeah, I still want to be a part of that conversation. Regarding the implementation idea and the sort of Miro approach, and perhaps it's not Miro, perhaps it's something else: what's useful about this is scoping the value of the conversation. That's where hacks can open up, right? We can just have immediate quick fixes. So, Brandel, with the timestamp idea and the cross-referencing, I think that's going in the right direction. And what if we had a different Miro board for each week, right? So a week shares, if there are two meetings a week, one Miro board. Well, then it's a question of what happens when we refer to something from a previous Miro board or meeting. Or maybe it's three meetings, who knows; maybe it's all in one, but at some point that's going to get really messy, and it's nice to start over. So I was thinking a hack could be, along with the timestamps and visuals of the video going on, you could take a screenshot of a portion of the spatial area, be it in Miro or wherever, and just paste that screenshot into the new area, right? So it's dead, the links don't work, but you at least have a hacky, poor man's way of referencing: these are the things that we were thinking about at that time.
Brandel Zachernuk: And abstractly referencing that moment in time from before, those kinds of games seem like they could become possible, and that's exactly the spirit that I think could transition into VR or AR or, you know, all of those other spaces that still have some technical hurdles, as opposed to a flat, limited 2D version. Yeah. No, I buy that. I don't know if you've ever played the video game Fantastic Contraption. It was a play on The Incredible Machine; there was a Flash version, but they also made a VR version. One of the really, really fun things about that was that there's a level-select sort of underworld; it's made to be sort of like a Greek-underworld-ish sort of place. But one of the neat things it does is store all of your creations. So you create a construction, and it's human scale, so they'll often be at or above head height as you're trying to build a trebuchet or whatever, and there's an array of terrain, obstacles, and whatever else you need to have. So that's the size of, or larger than, the room that you're in as you're constructing them. The level select has each of those as sort of little things about the size of a gold ingot, you know, so that you can see the detail on them, but at a vastly, vastly different scale. And what I love about that is the way that it makes functional use of such...
Brandel Zachernuk: Astonishingly disparate scales for the representation of this same artifact that you yourself were a party to creating, so that it has that familiarity, but it's just intensely miniaturized, in the same way that an architectural miniature might be. And it's neat because it means it does the job of telling you what it is; and to some extent, you know, those details can be retained, maybe in small, or maybe in a simplified form, but it gives you a reference to the things so that you never have to guess. The other hack, I'll just jump this in real quick: the other opportunity is that, if we had rules for the space, right, the terrain for these meetings, then we could go in ahead of time and start to create the terrain before the meeting. Hey, here are some things I've been working on, or want to discuss, or just topics I've been thinking about. So when we show up, it's already been sort of pre-populated. It's like, someone messed up my space! But I bet we would see that it would guide the conversation and be pretty, pretty fun. Also, Fantastic Contraption reminds me of this canopy club that I've been tinkering with, and it's really beautiful and playful, and there's a lot there that is useful to discuss at some time. OK.
Bob Horn: Oh, thanks. Well, I'd support the Mural idea. I've certainly been part of several different Mural discussions. Some were better than others, because some of them were facilitated by experienced facilitators who knew what they were doing, and the group agreed that that facilitation was how it was going to be. I experience this group, for example, as very creative, but jumping around in all sorts of directions all the time, and as a result I don't particularly want to spend a lot of time in that kind of discussion; I don't have that kind of time in my life. I'm, you know, ecstatic about working with you on the stuff that starts out with some murals that I make in two-dimensional space. I'm ecstatic about working with that, because I think it has possibilities both in 2D and into virtual reality space. However, if we're jumping around all the time from one thing to another, we don't get very far. And so, you know, I woke up this morning thinking, well, I'm going to decide that there's an hour of the four hours of this meeting that I'll attend, if people want, you know. I am not involved in the coding, all that coding kind of thing.
Bob Horn: And I find, you know, as brilliantly as Brandel described that last thing, I've forgotten the name of it, I couldn't quite imagine it. And five minutes went by. So what good was that? I couldn't even catch the name of it, because it went by so fast. So there are, you know, so many limitations, which experienced facilitators know how to handle. And I'm not saying that I'm one, but I certainly have experience with a group. There's a group called visual facilitators, some of whom are really quite excellent, and they ask a group to focus on a particular topic for a particular amount of time, and gently try to exclude, or ask people to not, right, and so forth. And that's where I'm at right now with this.
Frode Hegland: Yeah, I see you have your hand up. But just really briefly on Bob's point: I understand that not all of this is interesting to you. It's not interesting to everyone else either. There are many different groups; people come and go, you know, that's just the way this is. This is not a research project to do one thing. I would also like us to settle on something. I would like to have documents in and out of VR space, and that's some of the discussion earlier today. There's also a recurring theme of how to capture our meetings, which we're trying to deal with in different ways. But that is the nature of what this is. If you want to set up different meetings with some of the people who are willing to build the mural and only talk about that, you're completely free to do that. It is difficult, with these kinds of volunteer communities, to keep it together; it's difficult to have a specific project; we have different priorities. I think in and out of VR is absolutely crucial. I also think that going further with murals is absolutely crucial. I think that capturing our own dialogue is quite low on my priority list, because the important bits we kind of write down anyway. But I'm very, very happy to support that work as well. Anyway, just as a comment. Fabien.
Bob Horn: Maybe I'll just comment back and say: you just summarized three important subjects. It would be really helpful to say, let's take these three subjects and devote the first half hour to one, the second half hour to the next, and the third half hour to the last, so that at least there was some more coherence to the discussion.
Frode Hegland: Yeah, I would like to do that. We have tried something similar in the past: we had Fridays be more on Visual-Meta and Mondays be more general. That was one phase we went through, and we've tried other versions of it. But don't forget that, when it comes to VR, we're dealing with a whole new world. There are so many things where you put on the headset and it's like, oh my God, this is possible, and then you find out things that don't even exist. The shocking thing to me, which I found out this week when Brandel told me, is that a piece of software that you have as an object in a VR space is not actually allowed to know what space it's in, which is absolutely crazy. So there are all these limitations that get in the way of how we think about it. So it is really difficult to chop it up, and, you know, we can try that going forward. Adam Wern didn't want to talk about the journal; he wanted only demos on Mondays and Fridays, but he hasn't been here for a couple of weeks, you know? And Brandel comes in when he can. Anyway, Fabien, please.
Fabien Benetou: I was wondering also, to be honest, to go back to those, let's say, three aspects: what's the best way? Is it to plan in advance, or should it be a maximum amount of time, permitting, or not at all? I don't have an answer. Sometimes there are things to show and discuss, sometimes not, so there is definitely something to be said for the natural, organic aspect of it. And, yeah, I think I'd be fine if somebody said, oh, I think next time we need to experiment with a whiteboard or Figma or something like this, and OK, that becomes the topic for the next one. And yes, not everybody would necessarily be interested or have something to contribute, but then they would be able to not come, and come to the following one. That makes sense to me. Well, I'll briefly share my screen.
Frode Hegland: Hang on, Fabien, before you share your screen, just really quickly, and also to you, Bob: we have actually started doing that, and I forgot to mention it. We started with one monthly presentation about a set topic; now we have two. The one we had last Friday, the presenter only had an hour, which turned out to be a bit of a blessing, because it goes exactly to what Fabien and Bob are saying: we had one hour of a commercial company selling us their stuff in VR, which was useful to an extent, and then the other hour they were not there. So, to try to get towards what you're both saying, which I completely, wholeheartedly agree with, we can do that via the chat now, or email, or whatever. But Fabien, please continue. And I'm glad you brought this up, Bob.
Fabien Benetou: Yeah. Well, tell me, please, when you can see my screen. Yeah. Thank you. So that's Hubs, and inside it you can see a screen. What's a little bit special about that screen is that it comes from a virtual machine, or container; it's not from my computer right here, it's from another one. And so the first question is: what for? What's the point? It goes back a bit to what we've said a couple of times: you put on the headset and you don't have access to all the information you might want or need, and you might also not have, on the headset, the power to do some of the processing you want. So the first step is being able to provide back the information, and eventually the computing power. And also because, funnily enough, I think bringing a 2D space into a 3D space, where you can move around with six degrees of freedom, makes sense; but the other way around, you collapse everything onto a plane. So that was that part. You can also see a window there. It's very small, but the mouse of the virtual machine is at the bottom right. And now I'm going to send a command here. There is a huge delay, I think ten seconds.
Fabien Benetou: It's something I know how to fix, but I haven't got to that part yet. But normally, if everything goes well, the mouse pointer is going to move to the top left of the virtual screen of the virtual machine. And that's just to illustrate that the communication goes both ways, not just from the display of the virtual machine. No, I made a typo, sorry about that. So I connect to the virtual machine and then I type the command. Of course, I'm just typing here as a demonstration; in VR you would not type, you would use the controllers. You can see the controller, and I hope the pointer has now moved to the top left corner. It's very small, but it shows that it does work. And the point is that instead of typing that command, you would take your controller, move it around, and punch through, let's say, the 2D screen in order to get whatever you want from the virtual machine. It could be a PDF with its metadata, it could be a glTF 3D model, and then you grab it out into the virtual space. So it's trying to make that connection between a flat 2D machine and the VR space.
Frode Hegland: Well, hang on, don't stop sharing. How did you actually do that? How would you send something from the screen into that space? I saw the back-and-forth communication, but, you know, let's say you're looking at a 2D version of a mural on that screen and now you want to extract it. How would you do that? How would the actual data of the mural get into the 3D space?
Fabien Benetou: One of the tools I use is called xdotool, and it's a way to control the pointer remotely. You can get information on the current window, or you can manipulate the windows on the desktop, either to move them around or to get information like: what's the title of that window? So there are some things that are easy to get; like I said, I can still get the size of a window. And the thing is, once you have the size of the window, you can take a screenshot and then crop it to the size of that window, so you would have a 2D representation of it. And for some applications, like the browser, what I can do is get the URL of the open tab. And if you have the URL of the open tab, then you can take the content from that URL and inject it back into your space. It also links back to, let's say, the Visual-Meta data extraction: if you have a Visual-Meta PDF with a publicly available URL, then you could say, OK, I get the content and the metadata from it.
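[Ed.: the window-geometry step Fabien describes can be sketched roughly as follows. This is a hedged illustration, not his actual code: it assumes a Linux desktop with xdotool installed, and the helper names are hypothetical.]

```python
import re
import subprocess

def parse_geometry(output: str) -> dict:
    """Parse the text that `xdotool getwindowgeometry` prints, e.g.:

        Window 62914566 (has no name)
          Position: 100,200 (screen: 0)
          Geometry: 800x600
    """
    pos = re.search(r"Position:\s*(\d+),(\d+)", output)
    geo = re.search(r"Geometry:\s*(\d+)x(\d+)", output)
    return {
        "x": int(pos.group(1)), "y": int(pos.group(2)),
        "width": int(geo.group(1)), "height": int(geo.group(2)),
    }

def active_window_geometry() -> dict:
    """Ask xdotool for the active window's position and size.

    Requires xdotool and a running X session.
    """
    win = subprocess.check_output(["xdotool", "getactivewindow"], text=True).strip()
    out = subprocess.check_output(["xdotool", "getwindowgeometry", win], text=True)
    return parse_geometry(out)
```

[With the geometry in hand, one could screenshot the desktop with a tool such as scrot or maim, crop to that rectangle, and use the result as the 2D texture shown inside the VR space, as in the demo.]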
Frode Hegland: Fantastic. Fantastic, thank you very much. Mark.
Mark Anderson: Just one thing: if anyone wants to respond or put a question to what Fabien was saying, I'm happy to go next; otherwise I've got a couple of general things to throw in. So, if there isn't any meat on that, a couple of things. I was away on leave for about a week and a half, so sorry for phasing out slightly, but I'm back now. I'm planning to get a Zenodo subscription so I can do uploads, and I'm planning to do some simple tests by taking part of the ACM data set, because it's a publicly available data set, putting the citation thing in there and, well, basically seeing: does it break the whole thing? Does it even look sensible? If anyone has anything they'd like me to do in that regard, or particular aspects of the data, let me know. The other thing is, it's lovely to see the work that Brandel has done with taking Bob's mural. If there's anything I can do to help on the data side: I mean, I've got no 3D programming smarts, but I am used to data mining. So if there's some grunt work to do, just to get some of the digital bits and bobs in the right place so that we can do the interesting stuff in 3D, again, let me know; I'm very happy to help with that. Oh, and I'm working on the journal with Frode, and I may actually correspond directly with Alan and Rafael, just as a fresh pair of eyes looking at whether it's informative enough for the outreach needs they've got. That's a bit far away from the VR aspects, but it's pertinent to the group as a whole. OK, thanks.
Bob Horn: I just wanted to say two things. One is that, apart from what I have said before about facilitation, I'm amazed and appreciative that Frode has been able to keep us together for ten years. That's an amazing accomplishment for any facilitator.
Frode Hegland: It's all of you. But thank you nevertheless.
Bob Horn: No, I really appreciate it, and I'm trying to learn from it. The second thing is, Mark, one of the questions, both for 2D and for 3D or virtual, whichever I call it, is what you just said: how to update vast amounts of data and then be able to compare. One of the most elementary mental functions is our ability to compare things; we need to be able to compare. And, you know, I've made a bunch of structures at particular times, mostly during the 2000s and 2010s, which could be and need to be updated. And they're vital in the sense that they have to do with, maybe you've heard about it, climate change; or maybe you've heard about it, sustainability; or maybe you've heard about it, nuclear weapons. So any help along those kinds of lines, either as a demo or as a real project that I might be working on in those areas, would be of great help. Thank you very much for the time.
Frode Hegland: I think I have something for us, and I appreciate that, and I appreciate your earlier urging; I'm just putting it in the chat as well. I think the journal can be important for us because it's a real thing. I'm not saying the rest of the world cares yet, but it's a real thing, right? So consider that the journal can be a mural: we should be able to fold it out, at least. And this is not in any way to…
Bob Horn: Well, I'd be very, very happy to have any mural put in the journal.
Frode Hegland: OK, so, I was just reading Brandel's comments; this is real quick. Imagine if we decide to focus on reading the journal in a VR space, while it's also a physical artifact, in that it can be printed or normal digital. We can then also have a timeline, right? Easily. So we just make it left to right: here is everything, and we can then split it up like the mural. Of course we'll rethink it a million times, but imagine the bottom strip is just videos that you click on to watch, whatever they might be. Then you have a higher level, which is the articles we published. So we just start with having a normal document that can be spread out in VR, and then we can add incredible interactions to that. How does that sound? I see some head-shaking in California, and in Europe. Fabien?
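[Ed.: the left-to-right timeline described here is, at its simplest, a mapping from each item's date to a horizontal coordinate; the rows Frode mentions (videos at the bottom, articles above) would then only vary the vertical coordinate. A minimal sketch, assuming a simple (name, date) item structure; this is not any existing journal code.]

```python
from datetime import date

def timeline_positions(items, width=10.0):
    """Map dated items onto x positions in [0, width] (e.g. metres in a VR scene).

    `items` is a list of (name, date) pairs; the earliest item lands at x=0,
    the latest at x=width, and everything else is placed proportionally.
    """
    dates = [d for _, d in items]
    lo, hi = min(dates), max(dates)
    span = (hi - lo).days or 1  # avoid division by zero when all dates coincide
    return {name: width * (d - lo).days / span for name, d in items}
```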
Fabien Benetou: I mean, what's that? I demand to see it. To go back quickly: that's why we need to try stuff. There are things that seem very evident and trivial and fun, and then in the end they turn out to be a bit overkill. So here I don't have an opinion until I can give it a go.
Frode Hegland: OK. So on that note, what Mark and I were doing earlier, with our closed loop of author and reader, was about what to put where, and… oh, are you going, Brandel?
Brandel Zachernuk: Yeah. There's a thing on in our second hour on Fridays that overlaps with something I've been meaning to get to, and I would like to be able to get to it. But yeah, it's not out of disinterest in what's going on here. Oh no, no, that's fine. That's fine.
Frode Hegland: I look forward to seeing you on Monday, and yeah, it's OK. Right, so the thing is, in Author documents, as you probably know, with Visual-Meta you can select text, copy it, go to Author, paste, and it pastes as a full citation. That's very relevant to this, because in the journal different people will have written different sections, not just the owner, the author or editor of the document. So what we do is, Fabien, for instance, with the bits that you have written, take the heading of your bit: the way it works is you can click it in Author, click See More, and then you can add an author name. So that means that section is now tagged with Fabien. When we then export to PDF, that is encoded in Visual-Meta, which means that if someone copies text below that heading and pastes it, to cite it, it is your name that appears as the author, not Mark or me, who are the editors, right? So we are encoding that kind of useful stuff. Now, if we're going to make the journal readable in VR, especially on a timeline, then we may want to add time data to it. We don't currently have that, and the question is: what would be useful? Additionally, all the transcripts (well, the ones with a guest speaker are done by a human; the others are machine transcriptions) are each in different journal documents. So we have all that, for you to have access to, and we have the speaker names; there will sometimes be an error, but they are at least there, so all that data is available.
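[Ed.: the attribution rule Frode describes, where a copied passage is credited to the author tagged on its section, falling back to the document's editor, could be modelled roughly like this. A hedged sketch, not Author's actual implementation; the data shape (sections as start-offset/author pairs) is an assumption.]

```python
def author_for_offset(sections, offset, editor):
    """Return the name to credit for text at a given character offset.

    `sections` is a list of (start_offset, author_or_None) pairs, sorted by
    start offset. A section explicitly tagged with an author overrides the
    document-level editor; an untagged section falls back to the editor.
    """
    credited = editor
    for start, author in sections:
        if start > offset:
            break  # this section begins after the copied text; stop looking
        credited = author if author is not None else editor
    return credited
```

For example, with sections starting at offsets 0 (untagged), 100 (tagged "Fabien Benetou"), and 300 (untagged), text copied at offset 150 would be cited under Fabien's name, while text at offset 50 or 350 would fall back to the editor.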
Bob Horn: Just to answer a little bit on that: it's not so much the fact that we have a linear set of sentences, that I said something after you said something and you said something after I said something, which is what a journal is. It's that we try, and you've done this from time to time in your summaries, to make some organization and structure: some clustering of items, some layering. Perhaps there's a coding layer, and there's an organization layer, and there's a presentation layer. There are a bunch of those kinds of things that I find help structure conversations and are essential for murals, or any kind of visualization. So that's the functionality that I find is missing in the just-sequential timeline discussion format. Although, using the timeline, for example: if we had a list of problems, right? Imagine this. We have a list, or a display, or a network of the problems that exist today in getting important structuring into virtual reality. Imagine what that would look like, if you can.
Bob Horn: How would we organize that? Because there might be a hundred problems, and a hundred problems with just lines connecting them doesn't help people very much. You have to be able to cluster them; you have to be able, maybe, to organize them occasionally by time, or maybe some of the clusters by time, and so forth. That's the elementary sort of stuff in visual organization, anyway, that I've found. And there could be a whole set of problems of how we take even what is among the best in visual murals, let's say, and what the problems are of actually bringing it into virtual reality: actually identifying them, actually naming them and giving examples of them. Once we have the names and examples, not the lot, not the discussion, not the long journal of it, but the actual network or list of those problems, identified and named, with examples, then we can do something with them and we can talk about them better. That's my experience in life, let's put it that way, at least in organizing and working with very, very complex subjects.
Frode Hegland: Yeah. Thanks, Bob. Mark, just a second; first to reply to Bob. The thing is, the problem is also the solution, because we have Fabien, Adam and Brandel: they are the three people who actually make things. So if they don't want to make things, things don't get made, right? Mark and I can make a journal, and we can certainly all of us, in different ways, contribute to the workload. But when it comes to the specifics of this, it is so important that they're happy, so I'm trying to shape the environment for that. I saw that Brandel is not too fussed about what the focus is, and I think I know a little bit more about Adam's focus. But for instance, Fabien, you know, you do have a day job; this is you hanging out, just wasting time with some people. So I think it is very important that we all understand each other's priorities. I think your priority and ours are not entirely aligned; not in terms of the result, but in terms of how to go about it, and I think that's absolutely fine. But do you want to talk a bit more about what would make you bother to continue to be part of this group, to put it as a basic question like that?
Fabien Benetou: What gets me going is to think better and freer. I want to be able to build more stuff, because that stuff is usually tools, and those tools help me to think, through a very utilitarian perspective, let's say. And I think I mentioned a couple of weeks ago why I usually also like it when there is content, let's say the murals or the journal: because it crystallizes the need, because otherwise it can be a bit abstract or generic. And even though that's intellectually pleasant, when there is too much freedom you can't do much; you just get lost in anything and it's not that productive. So what gets me going is something that I can see is going to be useful somehow for someone else, even if it's not immediate. And I remember we had a brief discussion about the more generic audience: make it easy to access, make it so anybody could use it. To me, that's not interesting, because that usually is at odds with being able to think further, where nobody has thought before.
Fabien Benetou: That's the desire. It doesn't mean it should never translate to that, and if it can help the most people, I of course prefer that. But to me, that comes at a second stage, at a later time: transforming early prototypes or proofs of concept into an actual tool for others to use. But yeah, to be very direct: as long as I have interesting discussions and can learn from ideas, criticism, feedback, or even documents to read, I'll come back and build more. From a personal viewpoint, I'd say there are two reasons that make me build stuff: I'm interested in it, and I bet that the outcome will reward the work, that using the tool that's going to be produced is going to bring some return on investment, intellectually speaking, and getting paid for it. But I also don't get paid to do stuff I don't believe in, so it's usually aligned.
Frode Hegland: That was very useful. And Mark, thank you for your patience.
Mark Anderson: I'd like to reflect on the fact, and sort of come back to it, that Frode and I had a full and frank discussion earlier today about completely unnecessary aspects of visual citing in PDFs. What I'm really reflecting on, from that and from this conversation now, is how constraints can actually help us. We're talking about: can we take the stuff of the journal and put it into a VR space? So I'm thinking, well, OK, the content of each individual piece almost becomes incidental; we sort of know how to write that bit of text. The fun part is: how do we provide the extra metadata bits and pieces so that Fabien and Brandel and Adam can do the cleverness that we're seeing in the 3D space, in the virtual space, but we can also track it back to more traditional means, which is where lots of people's head-spaces are at the moment, and indeed to publishing. And the fact is, yes, we're going to need to update stuff, even if that's only to link this issue with the previous one.
Mark Anderson: So essentially part of this is building the step we're standing on, which always means it gets exciting, because you're always missing a bit of the solution. But that's why I say I'm quite happy to get stuck into some of the mundane data stuff, because it's the abstraction layer that lies behind what you see on the screen: a PDF, or a piece of paper, or whatever we eventually render. Sadly, underneath it's all data, and I don't claim to have any special expertise there, but I guess I get less easily jaded than others in considering that. And interestingly, too, I come at it not from an engineering background but from an information background, so I'm less worried about the formal-standards, engineering side of it. Does it make sense? Does it get us to where we're going? So, you know, I'm very much down with the demo concept in that sense. What's important is: do you have a usable, understandable outcome? If it has to be done in a slightly strange way, that doesn't bother me. An element of pragmatism is always needed.
Frode Hegland: Yeah. I mean, the reason this journal issue is delayed is because the guy doing the transcription is in Poland, so he's been too busy with the Ukrainian situation. He's almost done, and of course that's important. One of my close friends left this morning with a van full of things that we all got together, to go across to Poland, both to drop things off and to pick up family members. So, you know, climate change is probably the most important thing in the world, with maybe the exception of Putin shelling nuclear power stations. But at some point we have to kind of set that aside and get on with other things. And of course, this community is based on one thing: we want to help people think and communicate. So I think we are at a point where we can do something useful, and I want to bridge the gap between demo and reality. That's what I really want to do. Because, OK, I come from an artistic background; I couldn't code or do serious maths to save my life, quite literally. So I come from that part of the world mentally, and my imagination is very good; it's the one thing I know is good. But I do know that even my imagination is very much limited by experience, because building castles in the clouds is completely bloody useless.
Frode Hegland: So what I feel here is this thing about imagination: virtual reality offers an augmentation dimension that is so far beyond what we can understand without the things on our heads. So I really, really want us to build a thing which I can show, first of all, to Vint Cerf, who is so supportive of this work in general, where he can say: I get VR now; we need this to work together. And, you know, we've been going in lots of circles with a lot of creative people in different ways, but I do agree with Bob coming in here as the grand senior, having been around the block many times, and saying, you know, it's time to do something useful. So we will, I guess, follow the advice on having different discussions. There's only four of us now; we should probably plan what to talk about on Monday, have an agreement on that, and maybe even on next Friday. And then, I really think… you know, I'm a seriously opinionated person. Of course, I'm trying to be a facilitator, but I have my opinions, as all of you do. But I am very willing to bend a lot for us to build a thing that can help other people understand why we think this is so important.
Bob Horn: Well, first of all, I was wondering how you got the transcription made for the journal, because, you know, somebody has to do that, rather than it being automatic.
Frode Hegland: It depends on the week. For the normal meetings, like we have today, I use a service called Sonix; I'll give you the address, it's S O N I X dot A I. It's quite good because it recognizes people's voices. But what's annoying is that when people come in late, the algorithm doesn't seem to understand them properly, so at the end those lines will be tagged as someone else's voice and need cleaning. For the specific ones with guest presenters, we have a wonderful human, Danny Lu, who goes through to make sure that they're well done.
Mark Anderson: Good. Well, Frode's put the Sonix link in the sidebar for you.
Bob Horn: OK. Right. Yeah. I will look for that in the chat, although I hardly ever read the chat, because I try to pay attention to what people are saying; or, as I say, if it's things that I'll never understand, I tune out. I do appreciate your offer to do this, and, you know, I'm happy to work a little bit with you on what kind of an agenda to have, and to ask, because it will require maybe doing something fairly uncomfortable, which is to ask people not to talk about certain things for a period of time.
Frode Hegland: That doesn't go down well. It doesn't go down well, you're right.
Bob Horn: But, you know, one of the things that I've noticed a skilled facilitator does is to say: that's a really important topic, and I've written it down over here on the list of topics that we're going to deal with in the future. Now let's get back to the topic that we're talking about, for example.
Frode Hegland: Exactly. So, a couple of things. First of all, a request to you, Bob: if you could please, either to me or to the group, write a sentence or a paragraph, within the kind of stuff we've talked about, on what you would actually want to work on. The key is to share that with Brandel and Adam, to see if they share the enthusiasm, and if they don't, maybe we do something different. But, you know, this is where we have to be, even though we haven't all met, open, like close friends, and say: I want this. And if that doesn't work, we try to modify on both sides, you know? Let's just be at least very, very honest, because we are trying to do the same thing in the end, right?
Bob Horn: I will do that, because I can say what I want. I think the next step, having to do with the murals, is to continue to build a demonstration of the connections that could be made. I believe the Vision 2050 mural is probably one of the better ones, because I've got quite a bunch of things that could be connected to it, thought about, even updated if Mark is interested, and so forth. Yeah, that's great; it could be the demo subject. And I'm willing to work with people, if people want to do that in virtual reality; though in my view we would probably have to do some of it in 2D right now, just to do sketches and ideas.
Frode Hegland: Yeah, thank you, Bob. Fabien?
Fabien Benetou: Yeah, a quick note: one of the beauties and perils of prototyping and proofs of concept and all that is that, if you're genuine about it, you can't project too far, because you really don't know if it's feasible or not, or if it's going to make sense or not. So I'm not trying to justify my lack of planning, or maybe, you know, acting like a bee going from one flower to another, or whatever it is; I genuinely can't plan, because I don't know what's going to work or not. Sometimes it looks trivial, and I think, oh, it's going to take an hour, and boom, it's done and I use it later. That's why I take notes, to be honest: because ahead of time, I don't want to say I have no idea, but it's really hard to predict. And sometimes it's the opposite: I thought, oh, it's going to take me months, and I don't know. So I'm still going to try to plan a bit, and define what I, at least, believe is interesting and should be explored. But I'm just saying, from that perspective, it's very different to produce software that exists or has been done in some way before, versus something that is at the fringe, or at the edge, or what's actually new. Because maybe it's also feasible but pointless, and then it's pointless to think of the steps ahead, because we're just not going to go there.
Mark Anderson: What is interesting to know, because it's something that you probably park in a different part of the brain when you're doing stuff, is a simple thing: the information you're working with. If it doesn't have the things you need, in the sense of "if only it had this structure in it, I could do more", that's really interesting to know. It's not a problem I think it's useful for you to solve, because you're doing other things, but in the round it's something we may be able to assist with. Or, you know, "if I just had something this long, structured in this way"; I well know that with these things you're lucky if you trip over them. One of the whole things, a bit like the ACM stuff which let Adam do his visualizations, was just sensing that. And also, real-world data is messy, and that messiness is really hard to recreate. If you set out to write test data, it's never that good a test, because it's just not messy enough. So taking some of the projects that we've been discussing here, and perhaps doing a bit of extra work, you know, metadata scaffolding, around them to enable you to do things, is really interesting in itself, because it leads us to understand what we need to do to our existing information to make it more useful. It's the boring, unsexy bit, but I think it's a really useful part of this whole process.
Frode Hegland: Yeah. And on the messiness, both in terms of real-world things… yeah, sorry. Yes?
Bob Horn: No, I forgot to lower my hand. I don't have anything.
Frode Hegland: Oh, OK, sorry. Yeah, so, what I want, and this is so important because we have Vint, is to give him a real-world demo of something. It doesn't have to be big; you know, the first iPhone did almost nothing, but it was real-world, and all the other stuff came later. He is sending the document I wrote about ownership to Tim Berners-Lee, because Berners-Lee did a presentation last week where he talked about VR, in quite a different way, to deal with ownership. And what I'm learning more and more about ownership and VR is kind of scary. Also, I've been thinking a lot about mapping lately; not intellectually, but because I'm probably going to be driving to Poland, and I've had to call Tesla for them to help me with the charging stations and all of that stuff. What that reminds me of is the lack of being able to put things in a VR space. You know, if we were doing a demo or a movie, instantly on this table I would have a map of Europe with all the charging stations lit up in different colors and all kinds of stuff; that would be trivial. And then I would be talking to you guys, and at the same time I would have a document over here. The fact that we cannot combine different kinds of knowledge things in a single space is absolutely crazy.
Frode Hegland: So I think that is definitely one of our challenges, and that's why the document together with the mural, as a multiple-data object, done in a very open way, can become very important. And I do think the map is the next thing: we have time in a mural, but we can also have maps, right? You know, when my Ukrainian friend was here on Sunday with his kids, I wanted to do this: to talk about the different areas he was showing me. His parents live in Donbas, all these horrible things. And I wanted to do it in augmented reality on the table, but it turned out we only had a large iPad, so we used that instead. You know, it's like this size; it wasn't very big. It's really, really crazy that these things aren't being solved; there's a bigger fundamental issue. There are all these spaces, but there isn't one single space. That's crazy.
Bob Horn: Well, there are several different structures for which I can provide the demo material, if people want to work with them. There are several documents that were created: the report that went along with the mural, which I wrote; and an article, a report that we published at Stanford, which was co-authored with two CEOs from the major companies, plus a professor, Paul Ehrlich at Stanford, who has been a critic his entire life. So I managed to put the whole four of us together in an article and got people to agree on it, and it's published. So there are different things, and I also have some PowerPoint presentations, again, about the mural.
Frode Hegland: I think we should definitely move ahead with…
Fabien Benetou: The mural work?
Frode Hegland: I think we should. OK, but we're running out of time today. I just also wanted to say, I think we also need to look at the creation aspect of it, because the mural is important, but it was created by a team and then given to the world as a published object, you know? We also need to look at what people can do with creating new ones.
Bob Horn: Yes, but the first step, I think, would be to say: how can we update the one that we've got, with the team? Because that's a step in the right direction. If we go too far out and actually try to create the demo from scratch, then we create an awful lot more work for people who are all part-time. I'm part-time, by the way; I've got several other projects, some of which are, you know, work.
Frode Hegland: Absolutely. Mark, in closing?
Mark Anderson: Yeah, just a very quick one. Deconstructing, in a sense, what's there in a finished artifact like the mural isn't necessarily antithetical to the question of how we make stuff, because part of the process is pulling all the pieces out and saying: well, if I had to make a new one, how would I? You know, "here's one I made earlier; let's take it to bits. Oh, crikey, I don't know how we do this in the way that we want to do it." It actually gives you quite a good, real-world framing, so it stops us rushing off into this thing where we could do anything. In that sense, funnily, I actually think it's helpful. It's a totally counterintuitive way to start the process of saying "how do we make one?", but in a sense, knowing that we're trying to make something that has been made before gives us some bounding. I'll just leave it at that.
Bob Horn: Yeah, that's right. It's based on a structure, and the structure is completely explainable. Yeah.
Frode Hegland: Before we go, with all these different things, Fabien: what do you want to build, if anything, for yourself, in this or whatever? What kind of thing? And how can we support you?
Fabien Benetou: I'll think about it more; I don't want to just answer on closing for the sake of saying something.
Frode Hegland: Sounds good; yeah, it's all good. OK, have a good weekend, everybody, as well as it can be. And if I do drive across Europe, I promise not to come to Luxembourg, so you can relax.
Fabien Benetou: It’s OK. I’m in Brussels.
Frode Hegland: Oh, of course, Brussels. I have driven through Luxembourg once; maybe that's why I said it. But yes, I promise not to come to Brussels unless
Fabien Benetou: …you're warmly invited. OK.
Frode Hegland: It could be fun. All right. Take care, everyone. Bye for now.