Transcript: 18 Feb 2022

Video: https://youtu.be/e0nsbJvSi4c

Chat Log: https://futuretextlab.info/2022/02/18/chat-18-feb-2021/

Frode Hegland: Hello. Maybe I should turn off my music.

Fabien Benetou: Hi, how are you? Hello, Peter. Good morning. Morning, morning. Hello, all.

Frode Hegland: So I’m not sure who’s coming today. I guess we’ll wait a minute. Suffice to say, from my end, the meeting with NISO, the conference, went very well. I’ll give you a few more details, and I’ve had some discussions with Vint and other people about where to go. And here’s Adam. For this particular aspect of the call, Fabien and Adam, with all respect to you, Peter, and everyone else, are the most pertinent. So I think I’ll just start recording anyway. This is a little bit in response to Adam’s very honest and positive tweets saying what he is or isn’t interested in, so can we spend the first five minutes on that to clarify? I’m going to try to make this even more of a dialogue and research effort. I think we’ve had a good month to waffle about on structure. So in summary, the structure is: the journal will be invisible except as a resource. If anyone in the team wants to contribute an article, as I do, they’re very welcome to. We’re also going to invite more people externally to contribute articles, and those, the external ones, not necessarily ours, we will decide that ourselves, will automatically become part of volume three of The Future of Text. The website has been scaled back because of people’s identities and what they want public. That’s fine. So Future of Text is a little bit of a scream because I like to do that, but it’s also got a few links on it, so it makes it easy to share each other’s work.

Frode Hegland: You just put it there in your VR headset, you click and you go through. Yeah, so the article that most of you have read, the one I’ve written for the ACM, that’s the one where I’m saying two things: VR has to be better for work, and information in VR is not automatically accessible; visual meta is one of the approaches to that. And that’s what we’re doing in this lab. Of course, it’s not all visual meta, but it is one of the things we’re looking at. So what I plan to do now, with Vint Cerf’s assistance, is try to get some money, initially primarily to hire programmers to do the boring stuff, so you guys can do the exciting stuff, and then see how that goes. Any comments on that? So, an update: the NISO thing was very, very interesting. I had two sessions there, one where I was a panelist and one after that where I wasn’t; they were both at 2:30 in the morning. The first one I was OK; the second one I actually felt drunk at the end of it. I’m not 16 doing all-nighters anymore. In the presentation that Vint Cerf and I did that night, and by the way, NISO is the American National Information Standards Organization, so it’s a pretty big deal, I talked about visual meta, because that’s what Vint is supporting me with. He’s there all the time for that. But I’ve spent a couple of weeks persuading Vint that the Metaverse will need open metadata access, so he has agreed that it’s OK for me to scream about the Metaverse and visual meta too.

Frode Hegland: He was a bit skeptical to begin with, and that’s fine. What I learned at the conference, or rather relearned, is that metadata and documents are messy. No surprise, but it was interesting to hear it again and again. What was really nice was the session I went to in the middle of the night, the next night, on semantic document structures and things like that. Visual meta was actually brought up twice by other speakers, partly as an “I wish it could be like this.” I think it was also implied that our solution is a bit naive, but at least it’s something that they see as positive. I also have a meeting with an academic from that environment next week, and I’ve been in an email dialogue with another one who says this is all very beautiful and simple, but it’s outside of what they do. And finally, somebody involved in American government records is excited about this, and I hope to get something useful there, because if they make it clear officially that they would love to import documents with this, that would be good for us. Ah, Brandel. A few seconds’ recap: number one, we’re going to try to keep Friday meetings for dialogue and demos. The journal is going to be kind of a record of that. If anyone wants to contribute articles, they’re welcome to. The NISO meeting went really well, and it’s starting some good dialogue, including about visual meta and the Metaverse. So there we go. That’s my report. How is everyone else?

Fabien Benetou: Pretty good, pretty good.

Frode Hegland: The link that I sent you to the journal, which will only be public next week when we have

Fabien Benetou: The

Frode Hegland: Next speaker on Monday and it now has a new section at the end that has a few links to books and resources and whatever. That’s just something that I will be scraping from whatever conversations we have to, you know, to put something in. If somebody feels I miss something. Please do tell. I will have to talk to Alan about how this will fit with the newsletter that he’s working on. But I thought it was worth mentioning, even though you think you know the newsletter, there’s a new bit at the end of it. One of the new bits is about Google’s efforts in the Metaverse, and Vint has been very, very positive this week about support. So when the piece that you all may have read that I wrote sent today, but we’re going to share that with Tim Berners-Lee and try to get his perspective on what we’re doing. And then I’m going to go to try to get some money, partly for coding of boring things and then whatever we can do as a group to further what we do. Yes, Peter.

Peter Wasilko: OK, I wanted to report that I’m nearing completion of a set of widgets using EMBA to handle the user interface for my website, and I’ll certainly share that with our group. The website will have a set of options so that you can have accordions, single-selection or multiple-selection tab panels, or multiple overlaid tabs, so that we can have more than one layer, like in a GIS system where you’d want multiple things in the same physical space getting composited together. I’m refactoring all of the code using some new features that were introduced only recently. Once it’s all done, I’m going to try to start combining that work with the code that Adam sent me using three.js for desktop VR. And then it’s a question of how desktop VR would play with the headsets. I think three.js can create a button that automatically moves the 3D WebVR scene into the headset world for people who have headsets. So that would let us use the EMBA user interface affordances in the broader VR world without my having to be dealing in headset land myself.
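
The single-selection versus multiple-selection behavior Peter describes can be sketched as a tiny state model. The names here are illustrative only, not his actual widget API:

```javascript
// Hypothetical sketch of the selection semantics Peter describes:
// one state object covers single-selection tabs, accordion panels,
// and multi-selection "overlaid" layers (the GIS case).
function createPanelGroup({ multiple = false } = {}) {
  const selected = new Set();
  return {
    toggle(id) {
      if (selected.has(id)) {
        selected.delete(id); // collapse / hide the layer
      } else {
        if (!multiple) selected.clear(); // single-selection: swap tabs
        selected.add(id);
      }
      return this.selection();
    },
    selection() {
      return [...selected].sort();
    },
  };
}

// Classic tabs: picking one panel deselects the previous one.
const tabs = createPanelGroup();
tabs.toggle('map');
tabs.toggle('timeline'); // → ['timeline']

// GIS-style overlays: several layers composited in the same space.
const layers = createPanelGroup({ multiple: true });
layers.toggle('roads');
layers.toggle('terrain'); // → ['roads', 'terrain']
```

The point of the shared model is that accordions, tabs, and overlays differ only in whether selecting a new panel clears the old one.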

Frode Hegland: That’s wonderful, Peter, and I look forward to seeing, you know, a paragraph or write-up or whatever, so we can put it in the journal. But please also buy a headset. The project will pay for it, because it literally cannot be described; you literally have to do it. Seeing it on a square screen is so far removed. You really deserve that experience for the amount of effort you put in. I think it’s absolutely right that the project can send you one, so I will keep bugging you by email if you don’t do it, and then I’ll have to find an investigator to find out where you live and just send you one. 2D and 3D are not the same.

Brandel Zachernuk: They’re not, you know: motion parallax and stereoscopy, and having a fixed vantage point similar to what you see with a monitor versus head tracking. I mean, there are ways of simulating aspects of that. One thing that was very, very interesting was using head-tracked stuff. So if you use a webcam and then make use of the eye tracking, for example, to move the stuff around, then that is pretty interesting. You still don’t get the stereoscopic cues that come with it. If you’re flat out not willing to use an Oculus system yet, as far as I’m aware it is still necessary to create a Facebook account; they have said that they’re going to drop that requirement. If you have a Windows machine, then you have the ability to use an open-access alternative, what’s called a Windows Mixed Reality headset. There may be Linux alternatives, but I’m not aware of any way of connecting a Mac to a headset at this point.

Fabien Benetou: Just briefly: I don’t have a Facebook account either, and I checked with Adam about this. You could get a second-hand Quest 1, and I think you can have an Oculus-only account, still owned by the company, but hopefully, I’m not a lawyer, they should not be, up to a certain point, blending all the data together from the two companies. So that might be a way in. Otherwise, for Linux, and it’s what I wanted to share, it’s another, how do you say, another order of magnitude in terms of price, but there is the Valve Index, which is made by a company that sells stuff. Of course it depends on your value system, but they are not selling advertisements; they are just trying to sell video games and hardware, and they distribute Linux. One thing that is annoying is that you have the cable, so you need a good computer, and indeed, as Brandel said, you need an operating system which is supported, which is either Windows or Linux. As far as I know, Mac is not an option. But I want to briefly show my screen on this specifically, just for a minute. Can you see my screen?

Frode Hegland: Absolutely no rush. So please take your time sharing.

Fabien Benetou: OK. This is a demo I made two years ago or less. It’s running on the Index on Linux, and I have a couple of different windows, because it’s running xrdesktop, which is an open-source window manager. The windows you see on your operating system, the ones you move around, are supported in VR, so you can move the windows themselves around. What’s also interesting is that on top of this you can have a VR application running, not just the operating system in the background: a WebXR application, so that you can paint and still have the windows of your desktop there. And because they are from your desktop, if you want to put up a PDF, you can do that. If you want to grab images from the PDF, you can do that. You can move your controller as a pointer. I wanted to show this because we also had a discussion last Monday about how to pull content from the screen. This would definitely be a high-resolution, high-quality way, but it requires a setup like this; it’s not standard. You need an actual Linux desktop with a powerful graphics card to do this. But that’s definitely an option to have, and it gives rather quick manipulation of windows and their content in the headset. Does it make sense?

Frode Hegland: And yeah, it makes a lot of sense. Just sorry, we’re talking a bit in the chat here.

Fabien Benetou: Well, not everything

Frode Hegland: You guys do is web based, so if we’re all going to share each other’s work, we need Oculus for everything. But some of it we can share by using web-based VR, right?

Brandel Zachernuk: So yes, I haven’t shown you any non-web-based stuff. For the most part, if you’re not doing things on the web, then you’re typically still using, say, OpenXR or whatever, and everything in Unity tends to be OpenXR as well. So they aren’t particularly opinionated. You have to build compile targets, and those might be specific. But if people have code, then for the most part nobody is targeting things that are specific enough to one platform at this point, because people haven’t differentiated their platforms much. So the Index may give you hand tracking, but no other headset besides the Quest will give you all that. Well, maybe the Varjo, the Varjo 3, will, but that is also a $10,000 headset, so not necessarily appropriate for your first one.
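
On the web side, the hand-tracking difference Brandel mentions is usually handled by asking for it as an optional feature and falling back to controllers. A minimal sketch, not any specific engine’s API (the helper name and feature strings beyond `'hand-tracking'` are illustrative):

```javascript
// In WebXR, 'hand-tracking' can be requested as an *optional* feature,
// so the session still starts on headsets that only offer controllers.
// This helper picks an input mode from whatever features were granted.
function pickInputMode(enabledFeatures) {
  // enabledFeatures stands in for the session's granted feature list,
  // strings such as 'hand-tracking' or 'local-floor'.
  if (enabledFeatures.includes('hand-tracking')) return 'hands';
  return 'controllers';
}

// A Quest (or an Index with the right runtime) might grant hand tracking:
pickInputMode(['local-floor', 'hand-tracking']); // → 'hands'
// Most other headsets fall back gracefully:
pickInputMode(['local-floor']); // → 'controllers'

// In the browser, the request itself would look roughly like:
//   navigator.xr.requestSession('immersive-vr', {
//     optionalFeatures: ['hand-tracking'],
//   });
```

Requesting rather than requiring the feature is what keeps one codebase working across the undifferentiated platforms Brandel describes.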

Frode Hegland: So another thing I wanted to mention: I’m here in Bergen, Norway, this week, and we went to an Exploratorium-style science thing for my son. That was all very nice and quiet because COVID restrictions have just been lifted. They had this bike on a wheel, a centrifugal thing. So I went on it, and I was able to cycle completely around, 360 degrees, five times in a row. The reason I’m saying it, Peter, is I didn’t feel queasy at all, because even though gravity was being faked by the centrifugal force, I could see everything, and as a human that meant I knew what was happening, so I didn’t feel queasy. But I still feel queasy when I move in VR. If my body is moving, if I’m walking, no problem. But if I sit down and pretend I’m on a plane or walking, I get queasy. So I think that’s what you’d experience. Most of what we’re all building here in VR is either sit down, or stand up, or walk about a little bit; none of it is get in a vehicle and fly around. Because the magical thing of the Bob Horn mural, and this is really bizarre actually, was not that you walk up to it: you pull it towards you. And I wonder if there’s something really deep there, because visually it’s the same thing, obviously, but because you’re using your hand, your body and brain, and particularly your stomach, isn’t fooled into getting sick. So these are the kinds of things we can only learn by being in this space, because before you are there it’s all very theoretical. Anyway, we will arrange something for that, Peter, and I’ll have to mute you for the rest of the discussion if you try to argue against it.

Fabien Benetou: Do you have a PC, Peter? Or do you have a Mac, a MacBook Pro? Because, well, that’s not enough.

Peter Wasilko: I do have Parallels, so I can run Windows underneath it. I don’t know whether that’s good enough, because I have Windows and

Fabien Benetou: Linux, both running in Parallels virtual machines. So there is a third solution. I don’t know if I mentioned it in the first meeting or not, but there is CloudXR by Nvidia, which does the rendering on a server. I actually tried it, and even for a Valve Index zombie game, where you definitely don’t want to be laggy and which is super demanding in terms of graphics, it worked pretty well. It’s a bit of a finicky setup, but it means you can also have a full-on desktop experience on a standalone headset, assuming you have a good connection. So that doesn’t prevent, let’s say, this kind of experience; even with a standalone it’s feasible.

Brandel Zachernuk: I thought we were just talking about the imperative of actually being able to inspect and interact with these environments in actual sort of head mounted display virtual reality.

Frode Hegland: That’s a funny sentence you were about to say there, Brandel: in real VR, in real virtual reality. Yeah, just briefly to catch you up, Bob: the NISO conference was very good this week, and we’re going to keep these Monday and Friday meetings about interactive text, especially in VR, and demos. How we’ll organize ourselves, do the journal, and also look for a bit of funding, we will do separately, maybe on Wednesdays. So today is all about that, and we were just, like Brandel was saying, telling Peter he’s got to get an Oculus, and he has been fully convinced. And now maybe we need to convince you, or not.

Peter Wasilko: Yes, one of these days. I’ve got a lot on my plate right now, but maybe in a few months I can do that.

Fabien Benetou: I have a visual argument to try to convince you.

Frode Hegland: Oh, well. More screen sharing.

Fabien Benetou: Yes.

Frode Hegland: Screen sharing is going to happen, everyone.

Brandel Zachernuk: Thanks, Heidi.

Fabien Benetou: So let me try to convince you visually. Uh, can you see my screen?

Frode Hegland: Yes.

Fabien Benetou: OK. So, I mean, I’m in my space, that’s it. And then I’ll pull up some data you might hopefully recognize, which is the poster you shared a couple of days ago. My offset is not perfect, but you can manipulate it and put it back. And I sliced it on what seemed like meaningful sections. Yeah, go ahead. Then, of course, the idea would be to be in the space. Ideally, I didn’t explore the inside of the PDF, so I sliced this manually, which is why it’s a little bit off, but I think there are zones, basically, in the poster that would allow doing this. This is semi-automatic, because it can of course be any poster, with whatever dimensions. And then, of course, you can make it whatever size you want, so that you can bring more people in at the right size, and that’s still, again, in Hubs. And I think the quality of the text is also still pretty good.
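
The semi-automatic slicing Fabien describes can be sketched as a small function: given a poster’s pixel size and a set of horizontal cut lines (the “meaningful sections”), it produces one rectangle per slice, which a viewer such as Hubs could texture onto separate movable panels. The cut positions below are hypothetical, not taken from his actual demo:

```javascript
// Split a poster of width x height into horizontal slices at the
// given y cut positions; each slice is a rectangle in poster pixels.
function slicePoster(width, height, cutsY) {
  const edges = [0, ...cutsY, height]; // top edge, cuts, bottom edge
  const slices = [];
  for (let i = 0; i < edges.length - 1; i++) {
    slices.push({ x: 0, y: edges[i], width, height: edges[i + 1] - edges[i] });
  }
  return slices;
}

// A 1000x1500 poster cut at y = 400 and y = 900 yields three panels:
slicePoster(1000, 1500, [400, 900]);
// → [ { x: 0, y: 0,   width: 1000, height: 400 },
//     { x: 0, y: 400, width: 1000, height: 500 },
//     { x: 0, y: 900, width: 1000, height: 600 } ]
```

Making it fully automatic would mean detecting the zone boundaries from the PDF or image itself rather than supplying `cutsY` by hand.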

Peter Wasilko: Yes, very good.

Fabien Benetou: It just has to be tried with the machine. And as Brandel was also saying, I think, there’s a virtue in being able to go around it, being able to make it more tangible; that’s a connection with the content.

Bob Horn: Mm-hmm. This is really inspiring me. For one thing, Frode, when I looked at the mural that he’s showing right now, the vision of a visual language for the next few years, I realized that what you inspired me to do was to see what a table of contents of such a mural would look like, and then, of course, how to make visual metadata for a mural like that. So that’s what I’m now thinking about. I haven’t made a lot of progress on it, but maybe I’ll be able to report in the next week or two.

Frode Hegland: That would be great. Thank you, Bob. One of the items I put in at the end of the journal, one of the resources, was a series of links I came across for design, as in traditional visual design, in the Metaverse. So I included a link to one on how a new generation of architects are just having fun making proper environments in VR, but without any physical constraints, obviously. So I think from our perspective it’s mostly a diversion, but it may also help free our thinking a little bit. It’s kind of nice that it’s being looked at as well.

Adam Wern: Those articles were fun, because I had talked about zoning: zoning laws or zoning regulations for VR. In real life it may be about how tall the building is, or whether you destroy the views for your neighbor. But in VR, if you’re in the same world, it can be about the weight of the site: how big the images and the models are, basically, so it loads quickly for everyone and it’s not choppy, and so on. So you will have different regulations for VR if you want to stay in the same world. It’s very interesting to see those social problems playing out together in the same environment. Hmm.

Brandel Zachernuk: You know, there’s an architecture sort of personality, a thinker, called Andreea Ion Cojocaru, and she’s an architect who’s got a lot to say about what virtual reality should learn from architecture in terms of the design of space for different people, purposes, and roles, which is really interesting. And similarly, I don’t know that he’s made the jump into thinking about it specifically in the context of virtual reality, but a fellow called David Kirsh, who’s a cognitive scientist at UCSD, has a great series of lectures, I may have mentioned them before: “Thinking with the Body” and other things. That’s specifically to do with embodied cognition, the idea that having heads and bodies and hands, and ultimately hopefully also feet and legs, is essential for a lot of the way that we do our thinking. It’s not this kind of Cartesian model of just sort of cogitating in the dark room of the skull; it’s an act of engagement with those things. So those are very useful reference points, because he also talks about the way that architects think with objects, and dancers too, in some really interesting and somewhat quantitative detail. So, really good reference points. I’ll dig up some useful links. That’s Andreea’s Twitter account, but I can find one of the more specific talks that she’s given to things like the University of Washington Reality Lab, because they’re really interesting in the context of what architecture means and what we need to pull on, given what we already know about designing spaces in other contexts.

Frode Hegland: Yeah, that’s really interesting. Thank you.

Fabien Benetou: To be honest, that’s one of my motivations for VR, my initial motivation for VR as a medium to manipulate information and text, because yes, our body basically is pretty smart. And our memory too. I’m always blown away by the memorization of paths, how you go from A to B. I can tell you how to get to the bathroom of a friend that I haven’t seen in 10 years. To me, that’s mind blowing. I maybe cannot remember much else, but somehow I know how to navigate through their place. I find that incredible. And to be honest, that’s also my hypothesis for the use of VR for manipulating abstract content: that the location, how you navigate through it, makes sense. Which is also why I put up a link to a call for papers for an architecture publication, due, I think, in two weeks; they are trying to see the links between VR and architecture and how one can learn from the other. And that’s also why I wanted to come back to the point about whether, with a headset like this, we sit, or stand, or can move around. Honestly, ideally, if we can move around, I think it’s better. If you can have an empty 10-by-10 room and move, of course it’s tricky to have, but it’s the best, or even 50 by 50 or whatever. That’s the best condition for quality VR. Sitting, not so much; just standing but not being able to walk is not as good. So I would argue for really physically moving in the room with the content.

Frode Hegland: I mean, one of the exciting things, of course, is how you can not only set up a Guardian area but also map things in the environment, so you make them virtual objects as well. Right now with Oculus you can do a desk; so far you can’t do a chair, but it’s coming along. So when you move about a bit, depending on the environment, the table is the table. Even that makes such a huge difference. So it’s going to be very interesting to see how physical architecture will change to accommodate virtual architecture, for exactly the reason you’re talking about. Maybe we’ll have different kinds of shelving systems, and different ways systems can get out of the way. It’ll be absolutely fascinating. And of course, having Bob in the room, thinking about what we do with the wall space: initially you can just put up murals on every wall if you want. But then what? How do you change to another one and find the old one? Do we go back to scrolls like we had at the Library of Alexandria? Or do we have other methods? I mean, of course we will, but it’ll be wonderful theater.

Peter Wasilko: I wonder if we’ll have a new era of end-user architecture. Right now it’s a highly licensed profession, because you have to make sure that physical buildings don’t collapse. But I see no reason why, if we got to a modular system where you could grab predefined components, snap them together, and restructure them, the computer couldn’t run all the structural analysis automatically. And you could sort of cut out the middleman, at least until you get to the final design stages where you’re actually thinking about building a real physical building. But certainly for all the conceptual-level work, you should be able to have the computer tell you: OK, no, you can’t cantilever it out that much, this thing will rip out of the ground and kill everyone, so you’d know that that’s a bad design. It could really alter how we perceive what needs to be regulated as far as the profession goes, and where the boundaries will be between people using augmentation systems to move into the space currently dominated by the profession, and what’s going to remain the province of people who’ve gone through rigorous certification and licensing.
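
The automatic check Peter imagines could look, in toy form, like the sketch below. The numbers and the rule itself are purely illustrative, not real structural engineering:

```javascript
// Toy version of an automated structural check for end-user
// architecture: flag a cantilevered slab whose overhang is too long
// relative to its anchored back span. The 1/3 ratio is invented for
// illustration, not an engineering standard.
function checkCantilever({ backSpan, overhang }) {
  const maxRatio = 1 / 3;
  return overhang <= backSpan * maxRatio
    ? { ok: true }
    : { ok: false, reason: 'overhang too long for its back span' };
}

checkCantilever({ backSpan: 9, overhang: 2 }); // → { ok: true }
checkCantilever({ backSpan: 9, overhang: 5 }).ok; // → false
```

A real system would of course run full structural analysis rather than a ratio rule, but the interaction Peter describes, snap components together and get an immediate verdict, only needs the verdict to be this cheap to query.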

Brandel Zachernuk: Right. Well, certainly in the context of people designing architecture for meatspace, I think there’s ample opportunity for people to do architecture that remains within a virtual and digital space as well. But to both ends, it’s true, there’s active work being done at all the kinds of places that do that kind of stuff. So Autodesk; and Trimble, a company that makes a product called SketchUp, they bought it from Google, and they make the software in general for building information management; a friend of mine built their computational geometry core. The idea is to make a meta-configurator, in the sense that you can configure a system that allows you to configure parameters for a range of, say, chairs or windows or stick-frame wood plans and things like that. So that’s definitely happening. But I’m also interested in what architecture for its own sake does in a virtual context. Yeah, to that end of how spaces change, I think it’s worthwhile remembering how much spaces have already changed. Mark Anderson linked to that piece that asserted that sitting in VR is the best for the present moment.

Brandel Zachernuk: And while I think that’s true, I think it says less than it sounds like it does, because we will have the necessity and the opportunity to change spaces for the benefit of this. You know, one of the things I like that is sort of less popularized within the Mother of All Demos, Doug’s demo, is when he talks about his partnership with a furniture company, Herman Miller, yes, of course, to design the workstation that he had, in order to get office furniture that was appropriate for those spaces. And we have to remember that the office has been transformed several times in the past century, as has the kitchen, as have many intentional spaces and coded spaces, for the benefit of accommodating the modalities that are most amenable to them. The intense time-and-motion studies totally refigured the shape of the kitchen in the 20th century; the shape of the living room has changed as a consequence of various constraints related to televisions; and they can, and they will, change again. So yes,

Frode Hegland: It makes me a little bit scared, but in a fun way, because, you know, when we had the desktop publishing revolution, there were some pretty awful desktop-published documents. So if we now have the home architecture revolution, we’ll have some pretty awful houses. Having done a full build here and a complete renovation in London, I can tell you that, of course, having an engineer present is important, but the architect’s basic experience is so important. You know, yes, you may want to have a thing there, but he or she has built 100 houses before, and so can explain that maybe if these rooms go together in a different way, it may work better. So I look forward to having better access to an amazingly talented architect to consult on your own build, rather than just doing it all by yourself. And maybe we can have similar situations within the VR environments themselves, not just the physical ones on the outside. Fabien and then Adam?

Fabien Benetou: Yeah, I’ll briefly show something interesting, at a very small scale, that’s happening. So, that’s a screenshot I took 10 seconds ago. Can you guess what it is?

Frode Hegland: Is it a 3-D printer?

Fabien Benetou: Yes, that’s the 3D printer in my basement, and this is a design I executed completely, just two days ago. I have a little timer in my kitchen and the nose of the penguin was broken. It broke my heart a little bit, and I spent, honestly, three minutes redesigning a new nose and sending it from my desktop. Then I went down, between tasks, to get some clean clothes, and there was the new nose, and I could stick it on. It’s not as large-scale, of course, as proper furniture, I really wish it was; for now it’s gadgets. But in a way, to me that’s mind blowing. I couldn’t have conceived of that five or ten years ago, and now I can conceive an object and then make it a physical, tangible object. And it’s funny, of course, with the naivety of someone who is not an architect and not a designer: whatever I make breaks every day, but it forces me to have the same sense of constraints, like how do you print it, how is it going to resist, what kind of materials, and all that. The failure I had on this specific project for the 3D printer is that my hope was to design it in the headset and pick it up after the session, but the timescale is completely different: the headset takes seconds or minutes, and the printer takes hours for something minuscule. But I do hope and think at some point we’re going to converge. You can also see, I don’t have an example here, that there are people who print for VR, to have better straps or better things to hang everything on. So, a different timescale, but still some ways to customize, though indeed requiring a ton of expertise that is not for everyone; but then you also have simulations, you have other tools to facilitate the steps of design.

Adam Wern: So this week I’ve been thinking about the mural, the discussion of the mural, and what it would be if we take it fully digital, where we’re in 3D and it’s hypertextual and interactive. There are different aspects. One is where to draw a good limit so it’s accessible through different mediums: if it’s a full VR experience, it’s hard to look at on a flat screen, for example. So where are the good lines to draw? But also, what could it be if that mural behind Bob there were a 3D thing? And it ties back to having architects or professional designers: that is professionally made, and most people wouldn’t be able to do such a nice diagram. You need lots of experience in how to place things and find good proportions. So I think we will get a new kind of typographer or graphic designer that works in three dimensions with knowledge objects, not just aesthetic objects or social spaces, but knowledge objects. And how would you go into them? Timelines, for example, are a bit easier because they have more of a fixed direction, so they’re easier to visualize.

Adam Wern: Of course it has many properties, but for an argumentation map, or a big diagram of some sort where you need to follow something complex, how would you go into it? Would it be a tunnel you go into, with backing material behind things? How can you lay out an argument map in three dimensions? And also, if you open supporting material, hypertextual material, things that are not shown from the beginning but could be interesting if you are interested: when you click or interact with something, where should it go? Should it go in front, or should it move to the side? Where could you put those things, and how would you interact with them? There are, of course, a thousand ways to do it, but what are the good ways? What are interesting ways to explore for that kind of mixed hypertextual content and argumentation map that has complex flows between different directions?
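
One hedged answer to Adam’s layout question, purely a sketch with invented spacing values: place the argument tree along a “tunnel”, each rebuttal level one step deeper on the z axis and siblings spread on x, so following a line of argument means walking forward:

```javascript
// Lay out an argument tree in 3D: depth along -z (the tunnel),
// siblings spread on x, everything at eye level on y.
function layoutArgumentMap(node, depth = 0, xOffset = 0) {
  const positions = [{ id: node.id, x: xOffset, y: 0, z: -depth * 2 }];
  (node.replies || []).forEach((child, i) => {
    const spread = i - (node.replies.length - 1) / 2; // center siblings
    positions.push(...layoutArgumentMap(child, depth + 1, xOffset + spread * 1.5));
  });
  return positions;
}

const map = layoutArgumentMap({
  id: 'claim',
  replies: [
    { id: 'support' },
    { id: 'rebuttal', replies: [{ id: 'counter' }] },
  ],
});
// 'claim' sits at the tunnel mouth (z = 0); 'counter' is two steps
// down the tunnel (z = -4), off to the rebuttal's side on x.
```

Supporting material that is hidden at first, as Adam suggests, could then appear at its node’s position plus a sideways offset, so opening it never blocks the path forward.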

Frode Hegland: A wonderful question. Peter? Unless someone has a specific response to Adam, of course — then obviously jump in.

Peter Wasilko: I do, too.

Fabien Benetou: Yeah, go ahead, Bob.

Bob Horn: I think one of the things that we can do is experiment with some of the maps and murals that we have, and I’m hoping that some of the folks in this group will be interested in doing that. I have sketches here right in front of me in my notebook of how to take apart and rearrange something like the mural behind me. And maybe we will be inventing some of the new functions — a command that says, show me just the history, for example, or show me what the key decisions are; I don’t even know what those functions will be. But they will come together with some of these different kinds of murals. As Adam said, I’ve worked a huge amount on argumentation mapping. I built the earliest one, with 800 moves in the argument — we can experiment with that. It also arranges the early history of the arguments about whether computers can think or not.

Adam Wern: Bob, what’s your experience with that? I’ve read about some argumentation mapping software, and some of them are quite interactive — you can follow a line of argument and open things up and close things down. How does that play with our sense of location, and the fixed mural, where you really get to know the material by place — the thing Fabien talked about, that you’re human?

Bob Horn: Yeah, human beings need some of that fixedness when they’re dealing with complex thinking and complex arguments. And — David, what’s his name? Oh, I forgot; I used to have lunch with him all the time in London when I would go over there — he took all of the 800 moves in the argumentation map that we made on Turing and put them in a much more flexible way. But I found it harder to trace through. It was a worthy experiment, though. We need these kinds of detailed, worthy experiments with different structurings, and I’m hoping to be able to do some of that with some of the people in this group, with the systems you’re already making. They’re really quite amazing.

Frode Hegland: Are you thinking of Simon Buckingham Shum, maybe?

Bob Horn: No — well, he was at the Open University, and yes, I’ve had contact with him. I lectured at the Open University for him back in, I don’t know, the early 2000s probably. He’s now in Australia, by the way. And he also has some thoughts here, although I don’t know what he’s doing with three-dimensional thinking. One of the other things I’ve learned about, relevant to what we talked about a few days ago, is that I have a friend who spends a lot of time in Second Life, and it turns out there are lots of libraries there. Oh, yes.

Frode Hegland: Yeah, I remember you mentioning that, Bob. That’s really interesting.

Bob Horn: And so now I’m curious — I haven’t gotten back to her yet, but I wonder if there are geographers in Second Life. It’s a huge, complex, three-dimensional space. Has anybody made maps of it? And if so, how are those maps made? Are they made three-dimensionally in any way? Those are the kinds of experiments that may already have been done, that I don’t need to do.

Fabien Benetou: I want to come back on the two points that Bob and Adam raised, because there is one word that sticks with me: layout. Either you have something simple enough — one item — and you don’t need a layout algorithm, you just put it there, that’s it. Or it’s complex enough that you need to find some way to lay it out, and then you mentioned a tube or something like this. You also mentioned that we could have rolls that unfold on our walls. To me, that’s the tension we have: on one end a digital medium, complete freedom, you can update it every millisecond; on the opposite end — OK, it’s not stone carving, but still, you’re not going to print a roll of 10 meters every five seconds. So you have that tension between the two mediums, printed versus digital. And the additional tension, to me — and that’s what we mentioned about body intelligence a couple of minutes ago — is that we can be in VR, our body and mind in VR, and then we start to move around and move objects and whatnot.

Fabien Benetou: And when you don’t have any stability — for example, I don’t know if you’re familiar with graph layouts where you see the whole graph moving at once, using a physics-based simulation. The little nodes of the graph move in every direction until the graph is stable enough, optimizing for space, for example. Then the graph stops moving, and it looks very sexy. But in practice, if you have to be inside that graph, physically moving around, it’s completely fucked up. So you basically have to resolve the tension of something that can be moved: if you have an argument map and a new argument shows up, how can you keep the map stable enough to still account for the novelty, but not change all the time? Because if you want to leverage that very good spatial memory we have of moving through places, it has to be stable. That, in my opinion, is one of the biggest design constraints we have when building for an abstract topic: however you want to represent it, you need stability.
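Fabien’s stability problem — a force-directed layout that reshuffles everything whenever a node is added — can be sketched in code. This is a minimal illustration, not anything from the lab’s actual systems, and all names are invented: pin the existing argument map in place and let only the newly added argument relax into position, so spatial memory of the rest of the map is preserved.

```python
import math

def settle_new_node(fixed, edges, new_id, iterations=200, rest=1.0, step=0.05):
    """Place `new_id` in a layout whose existing nodes stay pinned.

    fixed: dict of node -> (x, y) positions that must not move
    edges: list of (a, b) pairs; springs pull connected nodes to `rest` distance
    Returns the settled (x, y) for new_id; `fixed` is never modified.
    """
    # Start the new node at the centroid of its neighbours.
    neigh = [b if a == new_id else a for a, b in edges if new_id in (a, b)]
    x = sum(fixed[n][0] for n in neigh) / len(neigh)
    y = sum(fixed[n][1] for n in neigh) / len(neigh)
    for _ in range(iterations):
        fx = fy = 0.0
        # Spring forces toward connected, pinned neighbours.
        for n in neigh:
            dx, dy = fixed[n][0] - x, fixed[n][1] - y
            d = math.hypot(dx, dy) or 1e-9
            f = d - rest  # Hooke-like pull toward the rest length
            fx += f * dx / d
            fy += f * dy / d
        # Weak repulsion from every pinned node, to avoid overlaps.
        for px, py in fixed.values():
            dx, dy = x - px, y - py
            d2 = dx * dx + dy * dy or 1e-9
            fx += 0.2 * dx / d2
            fy += 0.2 * dy / d2
        x += step * fx
        y += step * fy
    return x, y

# A stable triangle of existing claims; a new argument attaches to two of them.
layout = {"claim": (0.0, 0.0), "pro": (1.0, 0.0), "con": (0.5, 1.0)}
pos = settle_new_node(layout, [("new", "pro"), ("new", "con")], "new")
```

The point of the sketch is the constraint, not the physics: the pinned nodes never move, so adding "new" costs nothing in terms of the reader’s memory of where "claim", "pro", and "con" live.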

Peter Wasilko: Very good points, excellent points.

Frode Hegland: Yeah, thanks, Fabian. I have so much to say, too, but Peter has been so patient.

Peter Wasilko: I posted a link in the sidebar to the Doug Engelbart mural showing his life’s work laid out along a timeline. I don’t know if you’ve ever seen that before, but it’s a nice little gem. I just wish we could find a copy with a little higher resolution — when you try to zoom in on some of the small print, it starts to break up.

Peter Wasilko: It didn’t feel like it was quite enough fidelity. But very fascinating.

Brandel Zachernuk: Yeah, yeah. Eileen Clegg presented that at the Future of Text in 2016 — I’m just seeing that from the Wikipedia article.

Frode Hegland: Yes.

Peter Wasilko: Also, I remember a very interesting game interface back in, oh, it must have been the early 1980s — some sort of driving or racing game. The visual metaphor was running through tubes, with tunnels branching off to the sides. That could map very nicely if you were trying to visualize some sort of data flow structure and were interested in looking at it from the perspective of a datum moving through that structure. There might have been some work a while back in the software visualization area that echoed that, but I can’t for the life of me remember the project — I just have a vague remnant of it: oh, that reminds me of a video game I saw in the 80s; wouldn’t it be nice if they could have replicated that interface for what they were doing?

Brandel Zachernuk: Yes. To that end, there’s a data visualization architect at Apple who used to be at Mozilla — Ali Almossawi, perhaps you know him — who did some really, really wonderful things for our information security group, where he showed all of the IP packets between all of the virtual network connections over a 24-hour period, in order to assess what might be fraudulent. The kinds of patterns that emerge from having meaningful spatial arrangements like that, and from seeing the transitions between things rather than static visualizations, were really quite wonderful. I’ll have to talk to him a little bit more about virtual reality and the opportunities it affords — he’s no longer in infosec, but the theory still remains relevant.

Frode Hegland: Brandel, yeah, that sounds absolutely perfect. I was just thinking of one of the pieces I put at the end of the new journal — I think Marc Andreessen also tweeted about it. You play chess against an AR system, and you have physical pieces. Your opponent doesn’t, but it scans your physical pieces, so it has a full picture of the whole board, and you can see how the opposing system moves just through overlaid video. So there are so many opportunities that come from your headset reading the environment, right? I could easily imagine using your normal printer to print out key things and putting them on the wall — move them however you want — and when you have your headset on, the headset is aware of every single thing you put on the wall, so you can then do extra things with connections and all kinds of good stuff. You take your headset off, you still have that artifact on the wall; when you put it on, you have all these extra levels of activity. Similarly, I could imagine having something associated with our phones, since that’s an item we usually carry with us: when you take your phone out and put your VR on, it has all kinds of sprouting bits. Maybe even clothing, or different rooms, so that it reads the environment more usefully and you can leave artifacts more usefully. There are so many opportunities there. Imagine even using paintings in your own home — you have a painting of a ship; OK, let’s have fun with that. We can do a lot of interesting things with that ship. Where is it going? What’s the history of the ship? Et cetera, et cetera. I could go on forever. Please take over.

Fabien Benetou: Yeah. So Brandel showed us last time the small video projector he has, and said he was looking for a stepper motor for it. I was thinking the alternative is a robotic arm — and ideally, if you have it spinning fast enough, you project back in 360, so that whatever you modify in VR gets displayed back into the world at the end of the session. Ideally a set of video projectors. In practice it’s still something where — I’m using the word again, what was it last time — you have to flatten it. It’s a trace, and it’s good to capture some of it as an invitation back. There are also laser video projectors, which consume a bit less, I believe. But the point is definitely to have some kind of permanence despite having left the medium.

Peter Wasilko: Yeah.

Brandel Zachernuk: Well, I think one of the things that I’ve been really excited by is the idea that you can have a continuum of representation — an HMD is a very high-fidelity array of lights, and a projector is another array of lights that you have the ability to configure.

Frode Hegland: Did you say HMD — head-mounted?

Brandel Zachernuk: Sorry — yes, HMD. And I put up a video — to interview for a job, actually — of what this looks like when you have it moving around. I’ll put the link in; it’s too bright in here to see it at the moment, but with a brighter projector it would work better. Another thing I’ve realized — I think I’ve said this before — is that LED arrays, as relatively coarse displays, can still code for things. So you can have representations in your space simply by scattering LEDs around. In fact, they don’t even need to be structured as you put them up, as long as you have a mechanism for identifying the spatial relationship between them, particularly from different vantage points. You can just throw a coil of LEDs into a room and then, by virtue of being able to observe it, turn it into a configurable, dynamic display. What that means is that while, as you were saying, we don’t have the ability to print things out five times a second, we can use lights to encode meaning that we have previous familiarity with — and we do have the ability to configure those quickly. Having that whole continuum of fidelity for representing and reminding us of certain artifacts, roles, and positions means we can create a space with persistent encoded meaning, which we can refer to at whatever level of fidelity we require at any given time — delving back into an HMD or projective stuff as it becomes necessary, but remembering and knowing what it means through coarser mechanisms for signaling, mechanisms that nevertheless remain dynamic.
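Brandel’s unstructured-LED idea can also be sketched. Assuming the LED positions have already been recovered by observing the coil from different vantage points (the camera calibration itself is out of scope here), turning it into a coarse display is just sampling a spatial field at each LED. A hypothetical sketch — function names and positions are invented:

```python
def field_to_leds(led_positions, field):
    """Turn an unstructured set of LEDs into a coarse display.

    led_positions: index -> (x, y, z), as recovered by observing the coil
    field: function (x, y, z) -> brightness in [0, 1]
    Returns index -> 8-bit brightness value to send to the strip.
    """
    return {i: round(255 * max(0.0, min(1.0, field(x, y, z))))
            for i, (x, y, z) in led_positions.items()}

# Hypothetical observed positions of a thrown coil of five LEDs.
leds = {0: (0.0, 0.0, 0.0), 1: (0.3, 0.1, 0.0), 2: (0.6, 0.4, 0.1),
        3: (0.9, 0.8, 0.2), 4: (1.2, 1.5, 0.3)}

# Encode "height means progress": brightness rises with y.
frame = field_to_leds(leds, lambda x, y, z: y / 1.5)
```

The LEDs never need to be laid out in a grid: the meaning lives in the field function, and the observed geometry does the addressing.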

Adam Wern: Oh, I think we shouldn’t forget that regular screens do quite good 3D as well. It’s easy to jump straight from flatland — the PDFs or documents — to the head-mounted display, and not give the screen its due for regular 3D. So if we do a kind of mural in 3D in VR, I think it’s very good if it’s viewable on a regular screen in 3D too, because we should really remember that hundreds of millions of kids every day — and older people as well, Frode — are playing 3D computer games with high-fidelity, wonderful graphics, and really enjoying those environments. But we haven’t got many information environments to really try out there, so we don’t know the limits of flat-screen 3D; games are pushing those limits. So the baseline, I think, should be 3D on the screen, which then jumps out — because it’s harder to do the flat-to-3D conversion; there we’re really missing a dimension. It’s easier to have 3D on the screen and then add all the wonderful peripheral vision and hands and all the other affordances of VR on top of that.

Peter Wasilko: I’m not quite sure what your idea is — I can’t quite visualize it. Are you saying that, starting with three-dimensional representations of knowledge in a VR environment, it might be extremely useful to be able to flatten them? Is that what you mean?

Adam Wern: Not flattened, no — displayed on a regular screen, but still really 3D. So you have some way of moving around in it, like a regular 3D game; even on a flat screen, people play 3D games all the time. I mean having the exact same 3D representation, which you fly around in and walk around. And my experience from playing with large amounts of text these last few weeks — and finally I’ve got my VR debugging setup going, thank you, everyone — is that I still prefer 2D for most things when I’m navigating large amounts of material, because the display resolution is there and the motion sickness is not. People play 3D games for long hours — I’ve also logged a few thousand hours in them without ever getting motion sick — but in VR I got motion sickness, or VR sickness, immediately. So there are strengths to the regular flat screen doing 3D, and that could be TVs or big monitors. I don’t think they will disappear immediately; maybe in the long term, but not in the next 15 years or so.

Frode Hegland: Yeah, that’s a big part of what I’m writing in my Blue Skies paper. Absolutely. My thinking now is that information is inherently multidimensional, so a metaverse-type thing is a natural aspect of it — but yes, it’s absolutely crucial that we should be able to take the slice we want out of it for different kinds of use. And 3D-in-3D and 3D-in-2D are completely different experiences. One key thing, coming from this older gamer you mentioned, Adam: when I do my run-around shoot-em-up in the evenings, every once in a while I’m interacting with things at a distance. Interacting with something nearby is actually really complicated — I kind of have to jump up and down. If I’m doing long movement, no problem, but if I have a knowledge thing right in front of me, it’s just messy. So there are positives and negatives to the different environments. But also, on another point — this is what I wrote in my Blue Skies paper; you haven’t all read the paper I sent today, right? OK — one really, really important point is that in the physical world you can print something out, go into someone’s office, wave the document around, say all kinds of things, underline and show it to people.

Frode Hegland: You can fold it into a paper airplane if you want to. But in a virtual environment, you can only do with the document what that room allows you to do. And that’s really important: if you go into a room with a document and the room doesn’t have annotation abilities, you can’t write anything on that document — it’s not possible. So the portability of data is really, really important, and that’s one of the reasons I keep bulldozing on the whole visual meta thing. Also, when you’re in a virtual environment and you have a computer screen — as far as the virtual environment is concerned, that screen is just a texture. It could be someone’s clothing; it doesn’t know and it doesn’t care. So to allow so many of these things to happen, we really need to look at mechanisms to move things around. One of the things that is so exciting about the mural is that it has time — it has an x-axis that is time — which means it can be used for so many different things. I would absolutely love to have a wall in my office dedicated to time and murals: if I’m working seriously, I can have something like what Bob has there; if my son comes in, just like you were saying, Adam, I can show the history of dinosaurs on it — but it’ll always be left to right.

Frode Hegland: Same place, overlaying whatever material I have available. But we need to work together to find out how we define time in this environment — what are the actual standards? How do we define layering and categorization? How do we save a view from this, not just technically but user-interface-wise too? Imagine Edgar: he learns, for instance, that the gap in time between the Stegosaurus and the Tyrannosaurus rex is actually longer than the gap between the T. rex and us. He goes, oh wow, right? How does he frame that and put it somewhere, with or without the elements he takes with him? Ideally, I want him to be able to put it into a flat space, so he can have a piece of paper on his lap, bring it to his teacher and say, look, I have learned this and that — and the teacher can take it back onto the school wall. I think that really goes to the heart of some of our noncommercial efforts, because, as I’ve said a million times, all the companies want to own everything. We’re trying to make sure that can’t happen. We’re trying to really make a new web here.

Brandel Zachernuk: Sorry — I wanted to get the dates extraction that Adam has in front of Bob; I don’t think you’ve seen it. Do you have it available, Adam, or do you want me to show it?

Frode Hegland: Yeah, I think it will feature in the journal, Bob — the one I sent you a link to today.

Bob Horn: Yeah, but I didn’t get up early enough to read it.

Frode Hegland: That’s fine. But it’s one of the articles there. It’s really cool. I hope Adam can show it.

Brandel Zachernuk: Yeah. One of the things I’m really excited about: there’s a PDF up online — the one I extracted the image of your mural from. What Adam has done is take the PDF content, identify the four-digit numbers, extract them, and put them off at 90 degrees, so there’s additional information you can see from a glancing angle. One issue, obviously, is that because the mural was never designed for additional dimensionality, it’s perhaps redundant. But I think it clearly indicates that if there were additional pieces of information layered into the document explicitly — kept away from the visual surface, but intended for deployment and display within a richer environment — then you’d have the opportunity for progressive disclosure: showing what works in 2D, but then having a more nuanced system able to rehydrate it with more positional, dimensional data. I’m not sure if you can follow that link, but —

Adam Wern: I just noticed it has a bit of a resolution bug, so it only works properly on a high-resolution or Retina screen. Does it completely fill the screen for you, or is it just one fourth? On my other computer it was one fourth. Brandel, could you show it?

Frode Hegland: It looks great there, by the way.

Adam Wern: Yeah, but you’re probably on a Retina something.

Brandel Zachernuk: Yeah, yeah. I’ll load the dates this time — I’ll be happy to. All right.

Adam Wern: So, yeah, it’s set in the right place as well. Yeah, go ahead — I will say something after.

Frode Hegland: No, no, you speak — I’m just moving around to illustrate what you’re talking about.

Adam Wern: I just extracted the actual dates. There are different kinds of times in the text, and I matched all the ones I could find — not the 1980s and so on, not the decades, but specific years; nothing in the far future, roughly the years 1900 to 2000. I matched them and just put them 90 degrees out. This way you could of course also put them on the floor, or in the air together with some text if you like — it’s just a way of bringing one type of information out. But look at the signs here: if you rotate, the actual text reads from the right side at both angles. There are many things we have to consider when we do labels — here I have had to do double labels, so they are transparent, but both read left to right, and if you rotate, the thing flips invisibly. There are small things we have to consider when we do things like this, because text in 3D always has a backside. How do we handle that backside? It’s really an interesting problem.
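Adam’s extraction step — matching specific years in the mural’s text, but not decade forms like "1980s" — can be sketched as a single regex pass. This is a minimal illustration with an invented sample sentence; the real pipeline works from the PDF’s text layer:

```python
import re

def extract_years(text, lo=1900, hi=2000):
    """Find specific four-digit years, skipping decade forms like '1980s'."""
    years = []
    # \b rejects '1980s' (digit followed by a letter has no word boundary).
    for m in re.finditer(r"\b(1[89]\d{2}|20\d{2})\b", text):
        y = int(m.group())
        if lo <= y <= hi:
            # Keep the character offset so each year can later be placed
            # in 3D at the position of its source text.
            years.append((y, m.start()))
    return years

sample = "Founded 1954, revised in the 1980s, argument mapped 1997 and 2001."
found = extract_years(sample, 1900, 2010)
```

The resulting (year, position) pairs are exactly what would then be rotated 90 degrees out of the mural’s plane, as in Adam’s demo.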

Frode Hegland: It’s bloody marvellous to put it in British.

Peter Wasilko: Wonderful.

Adam Wern: Yeah — just a small detail for you nerds here, but we have to consider all those things when we put up fixed labels. And of course this is not perfect: there are dates that appear in places that were never intended to carry anything — I guess that 1954 is hidden metadata brought forward.

Peter Wasilko: Well, I might say that you’re working off a JPEG in this case, I think.

Adam Wern: No, it’s a PDF, but it’s not. It’s rendered, so it’s not super sharp. We have to do additional things to get that extra sharp. But it is.

Bob Horn: I want to say that all of the murals I’ve made in the last 20 years — about 60 or so of them, of comparable detail, on many of the most important subjects in the world, at least to many people — are done in Illustrator. So any object in any of the murals could be taken out, at least as a kind of demonstration of what’s possible. Each one of the little newspaper-headline things I use, for example, can be an object; one can pull those out easily and change them — each one is changeable in Illustrator. So I offer any and all of the 60 or so murals I’ve made, plus their background material, to this group to experiment with any way you want to. And if providing the Illustrator parts is useful to you, I’m happy to do that as well.

Adam Wern: Cool. And Bob, this is the kind of 3D I was talking about: here you have a 3D representation, not a flat object, because the text is rotated out in 3D. So this is a 3D scene, but we’re looking at it on a regular flat screen. I really think we should preserve the 3D on flat screens — have that as the baseline for virtual things — so that worst case we could show it on a screen or a projector or in other forms. It’s not —

Frode Hegland: It’s not a worst case, and I —

Adam Wern: Agreed — just one case. Case one. Absolutely.

Peter Wasilko: Yeah, I wouldn’t.

Brandel Zachernuk: That reminds me. Sorry, go ahead.

Bob Horn: It would be very interesting to see the dates you have extracted not perpendicular to the mural, but pulled out onto another plane, and to see what lines connect them. That might be quite an important function in mural and timeline making over the longer run.

Adam Wern: When you say lines connect them — in what sense? Just chronologically, or…?

Bob Horn: Just chronologically, because I see, for example, two 2001 dates. Are there multiple dates that are the same? There could be. And so one begins to ask: well, OK, what does that mean? What is the meaning of different chunks of information with the same date on them, in different places on the mural? Looking at my own mural, I can’t answer that question, but it intrigues me. Do you see that 2001 right where you’re showing it now? Yep — one, two, three: three different 2001 dates that I can see on the screen right now. How are they connected? What’s going on there? And they’re in different places on the mural. So perhaps some sort of lines that connect those — I don’t know. I’m just speculating.

Adam Wern: We should try 100 things.

Frode Hegland: Have any of you seen the Netflix series Snowpiercer? I’m not going off topic, even though it sounds like it.

Brandel Zachernuk: I saw the film; I haven’t seen the series.

Frode Hegland: There’s a TV series now called Snowpiercer. It’s about global cooling — a thousand-car train going around the world with a perpetual engine; if it slows down too much, people freeze to death. It’s pathetically silly sci-fi, but it’s really, really engaging. The reason I mention it: in the intro we see the train tracks — obviously there’s not just one track — as a red drawing on the globe. We see that in the main character’s office space, but in two different environments they have also done it with strings, like a murder wall: a little index card for each key point, and string strung around wherever they want. So I’m wondering if maybe we need to start looking at not only the environment reading what a thing is, but also something like QR codes. Because I can imagine, if things are coded, you can have that string either physically or virtually, and you can start moving things around. I could even imagine having something downloaded to my Apple Watch, and if I then go into the Wallet — let me see if I can actually do it — yeah, a QR code in the Wallet that can be seen by the environment. At some point I’m pretty sure Apple will probably do that. How amazing would it be to draw it out of your watch, right? Or whatever it might be. Just lots of different ways to get away from the thinking of a static demo of one single mural — doing exactly what Bob is talking about: what are the mural components we can build?

Brandel Zachernuk: Yeah. I would also like to submit — I can share the screen for it — a really wonderful video made by Jon Bois; I’m not sure if anybody’s heard of him. OK, so I’ll load it up and then share my screen. He’s the creative director at a place called SB Nation. So this is about sports — not something particularly dear to my heart, but nevertheless pretty interesting. It’s a video called "The Bob Emergency", and it’s a case study of athletes named Bob over time. The thing Bois does is build a large, persistent, three-dimensional data sculpture of all these tracks of information about a thing — he’s done stuff about the wins and losses of various teams, but this is an entire object that represents the whole story. You can see here that he rotates around it and talks about the number of Bobs over the years. Then this object casts a shadow as a consequence of the type, and he turns the shadow into this cross-section of all the different leagues and organizations the Bobs reside in, and then creates little components whose visual area is proportionate and related to their specific events. So this person is, I guess, a boxer, and these are his things, and then he’ll show the boxer’s wins and losses and so on.

Brandel Zachernuk: One of the things that’s very interesting about it is that it makes use of a very large and arbitrary scale. Here he’s talking about the history of the term "battle royale" and its racist origins, for those applying "battle royale" in the modern day. But being able to pull back and then recontextualize all of these Bobs — all of their sporting achievements and their relationship to fame and whatever circumstances — is fascinating. Probably more fascinating if you actually like sports, but really amazing as a case study. I think I’ve reached out to him before about the possibility of putting some of these things in virtual reality, but he hasn’t replied. Still, as an exemplar of what spatial information might be when somebody really cares about something as much as this fellow obviously cares about sports, I think it’s really interesting. So yeah, definitely give it a watch. It’s an hour and a half, so it’s an entertaining but long watch — really, really interesting in terms of the breadth and scope of the visual techniques Bois has developed, and really provocative in terms of what it might mean once you actually get it into virtual reality.

Peter Wasilko: Wow.

Frode Hegland: This is clearly, clearly fantastic. But a very specific thing from yesterday’s NISO meeting was a discussion of what is metadata and what is data — of course, Ted Nelson famously says there’s only data. For anything like this, if it were to be done live, what I expect we would need is a mechanism to say: lift this data, and put it in this form. So getting it into a CSV or a spreadsheet or whatever has to be some part of the magic, right? Because you have to be able to extract it into some neutral format — of course it will be imperfect. But considering all of you, with so much experience: if you were to do that now — take data from, let’s say, a map view to a graph view — how would you do it?

Peter Wasilko: Whoa.

Adam Wern: It’s a thing we do all the time when we program for the web. Whenever we have a document, or when I put all the dates from the PDF on the side, I extract things. They are not as organized as in a table there, the dates, it’s not so structured, but it’s still data that I put there and render. And sometimes it’s destructive rendering, where I don’t attach the actual data to the visual object or its representation. A PDF is kind of that as well: it’s a rendering of a manuscript that is a bit destructive along the way, and many things are like that. But to what you’re asking: I really think we should attach more data to the objects we put in the representations, so the data stays there. That has many, many real benefits, because we can do things very locally, and we can take an object and just extract that object without pulling the whole data source with it. Sometimes we need to do that as well.
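A sketch of the non-destructive rendering Adam is describing: the source record travels with its visual object, so extraction later needs no re-parsing. The object shape and field names are assumptions for illustration (the same idea appears in three.js as the `userData` convention):

```javascript
// Sketch: keep the source record attached to the visual object
// instead of discarding it once the text is drawn.

function render(record) {
  return {
    text: String(record.date),   // what the user sees
    x: 0, y: 0,                  // layout, filled in elsewhere
    data: record,                // the source record travels along
  };
}

function extract(visualObjects) {
  // Pull the attached records back out, e.g. for export to a table.
  return visualObjects.map((v) => v.data);
}

const dates = [{ date: "2022-02-18", source: "transcript.pdf" }];
const scene = dates.map(render);
const roundTripped = extract(scene);
```

Because only the reference is attached, a single object can be lifted out without dragging the whole data source with it, which is exactly the local extraction Adam wants.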

Frode Hegland: Absolutely. But the thing is, the problem with Fabien and Adam and Brandel, and actually also you, Peter, just in a slightly different domain, is that you know too much. You can always pull stuff into some kind of environment, write stuff around it, change some of the things and then put it somewhere else. Most end users can’t do that, either because, like me, they don’t have the skills, or there’s a mental block. So maybe we need something like a data palette, and I misspelt that in the chat, I’m sure. But you know, when you paint, you have all your paints here. Maybe we should have something that gives you basically a spreadsheet, which looks like a spreadsheet with as few controls as possible, and all you can do is make sure that at least the X and Y axes are correct. So it’s like a really strong visual. If we have Bob’s mural and you say, I want to copy from this, then it may copy everything, but when you paste, you may want to specify what aspects you’re pasting. And if you’re going to do that live, there has to be a method whereby you can specify which things you want to paste, right?

Peter Wasilko: Yes. Yeah, it would be very interesting to take a mural and just make lists of the verbal elements in it. All of the chunks of information that I generally have are divided into headlines that are boldfaced in some way, and others, so one could extract all the boldfaced ones and list them in some sort of fashion. It would be imperfect, but all indexes of all books are imperfect, having had to make a few of them myself and supervised some others. They’re nevertheless very useful. So that kind of an extraction function would be very helpful to people.

Frode Hegland: Yeah, some sort of a way of letting the end user while doing things with gestures is absolutely magical. Also have access to quote unquote under the hood type stuff.

Brandel Zachernuk: Yeah, and I’m glad that you mentioned spreadsheets. I think that’s the basic metaphor that should be followed. I’m not sure if you’re familiar with the researcher Felienne Hermans; she says that the biggest group of programmers in the world is Excel users. There are maybe 50 million people who are nominally software engineers or programmers in some sense, which sounds like a lot, but it’s not. But there are closer to a billion, maybe a billion and a half, people who have used Office and Excel and those sorts of things. And without even realizing it, people have been able to implement things like neural-network back propagation and incredibly elaborate algorithms, sometimes unwittingly, sometimes software engineers showing off. But you can implement incredibly elaborate functions and systems for managing and manipulating data within something like Excel. And so, as I’ve been working on what is information in VR, I’m also conscious that what is Excel in VR is just as important. And I think that the meaning of the information needs some kind of representation that’s more related to being able to layer in that multivariate stuff, in maybe tabular form. But having that representation, that flexibility of representation, that view-specs component of it, is essential.

Adam Wern: And also, one tool that I really miss for end users is the equivalent of Microsoft Access, the very light database that you put your recipes in. Not everything goes well into a spreadsheet, into two dimensions; sometimes you need a bit more. Yeah, objects of objects: you want a few pictures, ingredients, and so on. It could be HyperCard. But HyperCard and Microsoft Access are two things that we have dropped along the way that I think could really have a comeback today. Now it’s usually pushed into a web service somewhere, people do web services, but we could have something very, very light that is personal and sensible, more like a document than an online database.

Fabien Benetou: I’ll briefly show you something, it’s a bit of a joke.

Frode Hegland: You’re not allowed to say briefly again, take the time you need.

Fabien Benetou: I’ll take... it’s going to take too long, you’ll see. Can you see my screen? Yes. So it’s Excel in VR, because in the end, that’s the metaphor: the biggest group of programmers. Like, my better half is not a programmer, but she can use Excel and she can start to combine information, and it’s reactive. It actually has some pretty good programming principles, well, not Excel but spreadsheets in general. That being said, this didn’t really become popular, so there is something there that might be interesting, maybe as a transition and maybe as a way to help. And yet in practice, I’m not so sure. So that was the joke part. The slightly more serious part is, for me, that’s not the interesting part, and that’s why I do prototyping for a living. What gets me going, and it’s going to sound a bit grandiose, is to be where nobody has been before in terms of thinking, in terms of tools for thought. I don’t want to sound elitist, but I don’t get excited by popularity. It’s really because it’s new, because it allows me to see further, that I’m going to put tremendous effort into something. And there are people that are much better at user experience and design and making sure onboarding is excellent, so that anybody can come and it becomes intuitive; that’s definitely not me. So there’s a little bit of a gap there, in the sense that I know that the success, the popularity, of the tool is going to be linked to how easy it’s going to be for a random person. But I’m not that random person, and I don’t want to be, because, yeah, I also invested a lot of time and resources in learning programming and learning different stacks and programming languages and whatnot, and it’s to be able to do whatever I want, to go further. So I just wanted to highlight that.

Frode Hegland: That’s phenomenal, absolutely phenomenal on so many levels. Bob Frankston is someone I know, not very well, I should say; he’s in the first Future of Text book. Wonderful guy, I have met him. He is the co-creator of VisiCalc. Maybe worth talking to him, to go back to one of the early systems, to see if there are any lessons from routes we haven’t gone down. But also just to emphasize, I think today’s chat is absolutely wonderful, I’m sure you all agree. But also, our mission: I mean, look at Meta. They spend 10 billion per year on VR. I’m sure Apple is spending a good amount of money too, and Google, et cetera. We’re spending our time, and in a way we’re competing with them, because we’re competing for attention, right? So if we’re going to do that, I think one of our absolutely unique focuses has to be, of course, novel interactions, because we can think without having a senior manager above us telling us what to do, which is incredibly valuable. But also the thing of how to move information about. You know, the spreadsheet in VR that you just showed us: to be able to take something from a Bob Horn mural, put it in that spreadsheet through an interaction and a technical system, then spit it out somewhere else and maybe combine it with some Wikidata. This is the kind of magic that is not just a demo. So that was just wonderful to see.

Peter Wasilko: Yes.

Frode Hegland: Your hand is raised, please.

Fabien Benetou: Yes, it’s to show something. It’s not going to be very visual, but I’m rather convinced you’re going to appreciate it. And I don’t know how I can share it. So that was the other thing: on Monday, I was doing the text extraction from visual meta of a couple of properties, let’s say, and being able to, for example, pull them back up. I’m not going to do that now because I moved on to other things, but that still works. What was missing was to be able to save back, and ideally to the file itself, into the PDF itself, despite some of the risks that we mentioned, like maybe losing some data in the conversion. But that’s the result here, so let me show you.

Frode Hegland: Please zoom in.

Fabien Benetou: So that’s the document that you know. But then if I go to the very bottom and zoom quite a bit more, you can see that some of the presentation is gone, but I believe it’s still, not spectacular let’s say, but readable by a human and also by machine. And maybe more interestingly, here I added also some of my own content, which is as silly as it can get, but just the proof, let’s say, that I’ve added it. One has to imagine, instead of this test here, a position in space, for example, or a rotation or some other property: the result of being manipulated in VR. So I had a couple of technical challenges, the usual themes of servers and security and whatnot, but it’s pretty much there. So it’s modifying the original PDF, with some assumptions: for example, I just assume that all the visual meta fits on one page, which is the last page. I imagine that’s correct up to a point, but maybe not all the way; if you have 20 pages of metadata, I don’t think it’s a good idea, to be honest, but not impossible. So that’s one way to manipulate in the VR environment and, server-side, save it back to the original file.
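The round trip Fabien describes can be sketched on plain text: serialize properties into a visual-meta style appendix at the end of the document, read them back, add a property manipulated in VR, and write the block out again. The marker strings follow the Visual-Meta convention of start and end delimiters, but the JSON payload is a simplification (real Visual-Meta uses BibTeX-style entries), and a real implementation would rewrite the last page of the PDF itself:

```javascript
// Sketch: append, read, and update a visual-meta style appendix
// at the end of a document's text.

const START = "@{visual-meta-start}";
const END = "@{visual-meta-end}";

function appendMeta(docText, props) {
  return `${docText}\n${START}\n${JSON.stringify(props)}\n${END}\n`;
}

function readMeta(docText) {
  const s = docText.lastIndexOf(START);
  const e = docText.lastIndexOf(END);
  if (s === -1 || e === -1 || e < s) return null;
  return JSON.parse(docText.slice(s + START.length, e).trim());
}

// Save a position manipulated in VR back into the document:
let doc = appendMeta("...document body...", { title: "Demo" });
const meta = readMeta(doc);
meta.position = { x: 1, y: 2, z: 0 };   // result of VR manipulation
doc = appendMeta(doc.slice(0, doc.lastIndexOf(START)), meta);
```

The risk Fabien flags is visible here too: the old block is cut out and rewritten wholesale, so anything the parser did not understand in the original appendix would be lost in the conversion.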

Frode Hegland: It’s no surprise that I think that’s absolutely fantastically amazing. Of course, if you’re starting with a normal visual-meta document, you shouldn’t have to rewrite the basics; you should only add an additional appendix for the spatial 3D world, something like that. But now I get to a question that is very close to Adam’s way of thinking, I expect, which is, with all these things: how can we have a thing that knows what it is? What kind of knowledge object can we have in this thing we’re talking about, right? It is so crucial because, you know, since the 90s I’ve been fighting against information ghettoes, which is what CD-ROMs were, right? And now we’re moving into this environment that completely envelops us. So today on the Mac, using Author and Reader, when you copy from Reader and paste into Author, you’re copying the selected text plus the entire visual meta. That’s why, when you paste it, it is a known smart unit or whatever it might be. What might the equivalent be in the world we’re talking about now? And finally, an addendum to that question: should all of this be possible in VR initially? Or do we maybe need to do some of it in VR, go out into Flatland to do some of this data stuff, and then go back, if we’re going between environments? I’d love to hear what you guys have to say about that.

Fabien Benetou: Well, to jump in on the last point: we don’t have to do everything in VR. Most of the stuff I don’t do in VR, and that’s perfectly fine. Again, I go back to the same point: the remarkable part is what we’re going to be good at in VR, and that’s fine. So combining, I think, is more interesting, let’s say, through different tools or processes. And I’m going to show something to illustrate that. This is a bit of what I’ve done this afternoon: I have Hubs again on the left, which I was showing in a lot of the demos. And on the right is a 2D/3D VR painting tool on the web, and it saves the data on the server, and in Hubs I can refresh and see the modified outcome. So it’s two different websites that don’t know each other, don’t care about each other, but they have one glTF file format that they both understand, and then they can communicate. And it could be here for recording, it could be in 3D, or behind my screen. So it definitely doesn’t have to be the same way of interacting, as long as they communicate.
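Interchange through one shared format, as in Fabien’s Hubs and painting-tool demo, can be sketched with glTF: the glTF 2.0 spec allows application-specific data under an `extras` property, so two tools that know nothing about each other can still round-trip through the file. The `paintingStrokes` name and stroke shape are assumptions for illustration:

```javascript
// Sketch: two independent tools communicating through one glTF file.

function exportScene(strokes) {
  // Tool A writes a minimal glTF asset carrying its data in extras.
  return JSON.stringify({
    asset: { version: "2.0" },
    scenes: [{ nodes: [] }],
    extras: { paintingStrokes: strokes },
  });
}

function importScene(gltfJSON) {
  // Tool B only looks for the parts it understands and ignores the rest.
  const gltf = JSON.parse(gltfJSON);
  return (gltf.extras && gltf.extras.paintingStrokes) || [];
}

const file = exportScene([
  { color: "#f00", points: [[0, 0, 0], [1, 1, 0]] },
]);
const strokes = importScene(file);
```

The design point is the one Fabien makes: the tools never need to know each other, only the shared format, and each reads only the subset it cares about.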

Frode Hegland: Thank you very much. Bob?

Bob Horn: Yes. And your question, as I understand it, is: how do we have a thing that knows what it is? And we have to look at the question a little bit more, I think: how does anything know what it is? Well, it knows it’s alive and knows its context; that’s one possible, maybe limited, but at least one possible answer to that question. So my general answer is contexts. And what do we mean by context? Well, that’s a whole question of how to represent relevant contexts to the thing, because one has to limit them; otherwise, context is infinite, and hence not all that useful, because we become overwhelmed by trying to figure it out. I’m told, I haven’t read them, but Alfred North Whitehead, the philosopher of the first half of the 20th century, used to say that any one thing is connected to everything, and one has to be able to specify that. I think that’s an overwhelming kind of thing. But I would suggest that there are maybe a limited number of mappings that could be visualized in murals or other kinds of knowledge maps that would give many, many useful structures, maybe more useful structures than any individual might use at any one time. I’ll stop there.

Brandel Zachernuk: I think that’s a really important point. I definitely take Whitehead’s point that things are irreducibly connected if you’re talking about something in an essential sense. But I’m also reminded of the statistician George Box, who says all models are wrong, but some are useful. And from that I take it to mean that every time you make a representation of something, it’s opportunistic and it’s intentional toward a specific end. And so, to that end, only the features of it that matter for the specific mode of analysis and the contents are the relevant components. And that’s one of the things that people struggle with when they hear about things like racism in the MP3 format. The fact is that the reference materials MP3 used for encoding certain frequency ranges were biased toward a very specific kind of musical performance, and it was worse at other things. So there are biases baked into the format. That doesn’t mean that the format is racist, but it means that it’s prioritized toward a specific thing. And the same goes for photographic content: anybody familiar with the Shirley cards and the way the color curves treated beige skin versus darker skin is aware that not only is it possible for a medium to be biased, but it’s almost inevitable that it’s biased toward something and against something else.

Brandel Zachernuk: And so, you know, that’s not to say that we need to reinvent those formats to be more inclusive. I mean, it wouldn’t hurt, but what it means is that what we’re doing with stuff depends on what we mean to do. And so what we represent inside a virtual space, inside an information-mapping relation, doesn’t need to reinvent the universe. You know, the initial refrain for a poorly supervised honours or master’s student is to have to define every single term under the sun as part of their thesis, and that’s not necessary, because you’re in a discipline and you’re in a domain. So yeah, I agree we don’t need to do everything; we just need to think about what are the things that matter. But we do have an opportunity to be more holistic with it. We have an opportunity to be more inclusive. And that’s where I’m really excited by the idea that your murals could have things that are truly, just absurdly, too small to read if they were at a fixed scale. But given that we have the opportunity to potentially scale those things up, much in the same way as Jon Bois’s Bob Emergency sort of data-sculpture work, we have the opportunity and the ability to inspect them. And that’s not just in a token way, like, well, technically the information’s there; we actually have a real opportunity to be able to inspect it. So yeah.

Bob Horn: Well, the context of this mural: the name of the nuclear mural is Nuclear Waste in the United Kingdom. But still, the possible contexts are many. For example, it doesn’t say anything about the context of nuclear waste around the world, and that could be a mural. There’s an agency in Australia that keeps track of all the nuclear waste kinds of things and the nuclear plants that are being made and are running, so their organisation and what they know is a possible mapping of a context. The same thing with drilling down into it: there are places in the UK where the nuclear waste actually is, and the towns nearby know that the nuclear waste is there, and so forth. Similarly, there’s context for different disciplines. I mean, part of what I was engaged to represent was that the agency in charge of nuclear waste in the UK had different researchers, geologists, radiologists, chemists, even social scientists, all having to do with their own context of the nuclear waste. This is not an infinite number of things for practical purposes, but one had to, if one wanted to, integrate as many of their perspectives as possible. So that’s why we did this. That’s just a decent example: if there were a list of these, that would begin to answer this question, and then the mural would begin to know what it is. Right? And a list is just the beginning; you have to maybe represent the list in different ways and so forth, at different levels of detail and whatever.

Frode Hegland: Yeah. Thank you, Bob.

Fabien Benetou: Yeah, it’s to actually echo the initial remark and Brandel’s comment on top. I don’t know if you’re familiar with Donald Hoffman, the professor of cognitive science at UC Irvine; he wrote a book called The Case Against Reality, basically saying there is no such thing as reality. And he gave a couple of examples, some of which I find particularly interesting, like evolutionary simulations where you run some organisms against others as abstractions. And basically, the ones that survive are not the ones that have an absolutely perfect view of the world, but rather something usable. Because if, for example, you look at the wall and you see all the different shades of white and you can’t look away because it’s so interesting, you just need more information, you’re going to starve to death. So you just need to have a good enough representation of that white wall: it’s a white wall, I don’t want to bump into it, there is no grizzly or food or whatever, I move on from it. And I find this perspective pragmatic, basically. I don’t know if it’s correct, but pragmatically it makes sense to me: whatever is useful for the goal, whatever is useful for that context.

Frode Hegland: You’re going to give me a headache, shaking my head when you say things like that, absolutely. I mean, this is why research was done on trying to create a kind of color chart of smells, of scents: you know, if you have a molecule like this and you change it like this, it’s going to become more and more like charcoal, or whatever it might be. It turned out it couldn’t be done, because the human olfactory system isn’t gradient-based like vision and hearing. We smell things that were good for us and bad for us. So there are thousands and thousands, and of course combinations, but you can’t smoothly go between them. So yes. And this is also in the blue-skies thing: in here there’s neuron stuff going on, and with VR we’re going to start reconfiguring this as well as that, absolutely, and that which is useful is what’s going to happen. And the concept of affordances is so exciting: you know, that thing over there is a chair, that’s the key thing, not the fact that it’s blue. Those are aspects, and the whole thing of non-objective human perception is going to come into this so much. And of course, I know I’m preaching to the choir, absolutely; it was just wonderful to hear that. But to the point, going back a little bit: the time stuff. You guys have worked on different aspects, but if we can start on the interfaces for this, to generalize, it would be really, really wonderful.

Frode Hegland: You know, X/Y is easy: keep X as time most of the time, obviously. But if we then build layers on top of this, one layer can be, arbitrarily, the history of America, whatever, you know, using Wikidata. And then on top of that we can put Bob Horn’s data from an entire mural, but it happens to be on the same page. And then we can pinch in to go into details and out again. If we can manage to get even that completely smooth, and have the data be able to move about, then, for instance, our own data: we’ve had over 100 of these meetings, and Brandel has already done visualization on some of the texts. Imagine if we can take the transcripts for all our meetings and do different analyses, which would then be color-coded, with shapes moving in and out. And then, you know what I mean? I’m just going through saying: I want to see this for real. I want us to propose standards, because who else is going to do it? Others are going to do it for their own things. We need to do it in the open way.

Adam Wern: But then we need to narrow down a bit: what is that useful for? What do you learn from that, or what do you gain from seeing all our conversations? There must be something of value that is pushed forward through seeing it, not just the value of doing it.

Frode Hegland: So I’m going to use you against yourself, and in agreement with yourself, because we can talk about these things forever. We need to test, but we also need to know what we’re doing. So I think what we’re getting at with these different things is really good tests and also good issues of plumbing, while also looking at what’s useful. My mother-in-law, who unfortunately passed away a few years ago, taught me one incredibly important thing. I had built a globe on the iPad and iPhone which had no information at all; it was just terrain. The key was you touch something and it then draws the outline of the country or the body of water that you’re in, with the name. If you zoomed in enough, you could see the city, and getting city border data was actually really difficult. But the key thing is, even though I developed this and was very proud of it, my mother-in-law didn’t care about any of that. She immediately went to the place in Japan where she came from, and she wanted to show her village. So that’s why I’ve been hammering on about real data. So I think that if we do a little bit of plumbing for this, then we can actually start trying to find out the useful cases. Because if we have access, if we have an X/Y, and if I can say to you, I want this stuff as time data, and you guys help us find how that should be formatted, I can start putting in things and saying, oh my gosh, I didn’t know that Peter and I had an overlap in Syracuse, or whatever it might be. I think we have to go back and forth between use case and implementation and infrastructure. The infrastructure has to be there. I’m not arguing against a single point you made; I think you agree with all of that, Adam.

Adam Wern: And I don’t think it’s your thing alone to come up with that; it’s mine as well. And I have the sense that, of course, timelines contain useful things. I think we just need to articulate those things, because that will inform the whole representation and the interaction with it. And it’s sometimes hard to start the other way, to just throw data in, find the usefulness, and then do a different thing. We can work that way, but it would be nice to have a bit more, if we come up with something: a useful thing to put to action, not just serendipity that we were born on the same day.

Frode Hegland: Well, I see you have your hand up, Fabien, but just to get a little thing in there: the way that I’m looking at it now, my personal interest is documents with visual meta, and timelines. Of course there’s an overlap, but those are like two kinds of projects, right? And when it comes to a document, I am shocked, and you were there the first time I could read in VR: it’s actually pretty good, it’s not bad at all. When you have it in front of you and it’s all in the right focus, it’s actually completely readable. I want to be able to take some of our documents in and start playing around with them, and then I can start finding out my pain points and my opportunities, because lying in bed with my eyes closed and dreaming is really good, but very limiting. So if we just start with our own reading, and I think one of our user groups is kind of academics, you know, students doing research, right? That’s why we’re getting those kinds of documents. But for the timeline, I think in the beginning we need to be broader and more open, because that will allow us to get to the use cases, to find out. So let’s say we start with just Wikipedia or Wikidata, zoom in and out, and then we can play with visualizations. Let’s say we agree, not necessarily today, but in a future session, that we lay certain things up and down on the Y axis, and then we choose to have the Z axis for this and that, I don’t know. But that kind of thing, in dialogue, and then we play with it, we realize things, and maybe that’s the axis for connections, maybe lines go there, or something. Sorry, that went on and on. Fabien, and then Bob.

Fabien Benetou: Yeah. So first of all, if we talk about genuinely handling data, even spatially, there is something called the grammar of graphics. We don’t need to reinvent how data is going to be laid out or spatialized; it’s there, and it’s still ongoing research, so it’s pretty active. But to tie it back to what we discussed for the last 15 minutes: we talked about being useful, about having an interface that is useful, and about the difficulty of projecting how it’s going to be used without a goal. And we seem to all agree on this. I’m going to sound a bit harsh, but I don’t play with visual meta to please someone, and I don’t tinker with the poster to please someone else. It’s because they exist. They are not perfect, no offense, but they exist, and that allows me to have a ground to play with, tinker with, and also be challenged by. And I would follow something similar: being able to run a workshop, or whatever you want to call it, however you lead it, where not one of us but someone else would use the tool, with that data, to achieve a goal. That would be a proper evaluation of whatever work we can put out. Otherwise, I do enjoy, how do you say, philosophical discussions, but I think they only go to a point. So running a workshop, ideally with someone else, based on the data structure or the metadata or the interface we can come up with, and with a goal, I think would be a lot more productive and useful for building anything, really.
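The grammar-of-graphics idea Fabien points to can be sketched very small: instead of hand-coding each layout, a declarative spec maps data fields to spatial channels, and one generic function applies it. The spec shape below is a simplification in the spirit of systems like Vega-Lite; the field names and data are hypothetical:

```javascript
// Sketch: a tiny grammar-of-graphics style encoder that turns a
// declarative spec plus data into positioned marks.

function applyEncoding(data, spec) {
  return data.map((d) => {
    const mark = { type: spec.mark };
    for (const [channel, field] of Object.entries(spec.encoding)) {
      mark[channel] = d[field];   // e.g. x <- year, y <- count
    }
    return mark;
  });
}

const meetings = [
  { year: 2021, count: 40 },
  { year: 2022, count: 8 },
];
const spec = { mark: "bar", encoding: { x: "year", y: "count" } };
const marks = applyEncoding(meetings, spec);
```

The same data with a different spec gives a different view, which is exactly the point: the layout is described, not reinvented per visualization, and a z channel for a spatial version is just one more entry in the encoding.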

Frode Hegland: I think you’re actually reflecting what Adam has said earlier in different aspects of this conversation, and I think that’s very good. But you said something different from what we’ve thought about, in that you’re talking about someone presenting rather than someone learning. We’ve been very much about thinking and learning. But in terms of a demo, maybe what would make sense is to try to create a learning, a teaching environment, where someone has access to two things: a timeline, because almost everything you teach has some aspect of time, and academic documents. How we interact with them, individually and in combination, could be hugely spectacular. Just the idea of being able to drag out 1970 as a thing, and then you can choose: do you want to see everything related to the timeline on that, or the document, or whatever it might be? The opportunities are endless. So if we go at it from teaching rather than thinking, maybe that’s useful. What does everyone think? But please, Bob first.

Bob Horn: Well, I just wanted to say that I and several other people have sort of routinely squished together time, visually, represented by some dates. And I’ve never had a discussion with anybody who does similar things where viewers have objected to this. Viewers routinely accept that I represent twelve thousand years with this much space, whereas on the same mural I represent 50 years with about the same amount of space. So people are happy to do that, or at least do not object, and, I will say, in general make correct inferences from it. So that’s another aspect of it.
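The nonuniform timescale Bob describes, twelve thousand years and fifty years each getting the same visual span, can be sketched as a piecewise scale: each era maps its year range onto an equal slice of the axis. The segment boundaries below are hypothetical:

```javascript
// Sketch: a piecewise timescale where every segment, however many
// years it covers, occupies an equal slice of the axis.

function makePiecewiseScale(segments, axisLength) {
  const slice = axisLength / segments.length;
  return function scale(year) {
    for (let i = 0; i < segments.length; i++) {
      const [start, end] = segments[i];
      if (year >= start && year <= end) {
        return i * slice + ((year - start) / (end - start)) * slice;
      }
    }
    return null; // outside every segment
  };
}

// 12,000 years and 50 years each get half of a 100-unit axis:
const scale = makePiecewiseScale([[-10000, 1970], [1970, 2020]], 100);
```

So `scale(1995)` lands three quarters of the way along the axis even though 1995 is only 25 years into 12,000-plus years of coverage, which is the compression viewers turn out to accept and read correctly.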

Frode Hegland: That’s a great point, Bob. It leads further into the question I’m asking the group, because imagine you have the timeline and you do some sort of a gesture and say: this slice, I want to pull it out, put it somewhere else so I can zoom in there, but they can stay connected visually when you want them to be, or not. So what’s the general feeling about considering, for the demo we maybe want to do in November, December, a teacher, but teaching to a live audience where they can ask questions and so on?

Brandel Zachernuk: Maybe I can build on that idea. So, I’m not sure if anybody has actually used Timeline VR, but it has the ability to load multiple parallel timelines of Wikipedia articles. So in the video that I showed, it shows the Internet and hypertext and Ted Nelson. I was also considering, for that demo, to show the timelines and cultural history of 20th-century American hand-held foods: pizza and hot dogs and hamburgers, because they follow very similar patterns, where, if you cross-reference those against things like the New Deal and various aspects of American political history, you start to see some interesting patterns of where demographics flow in and what that means for the cultural norms of what people consider to be everyday food. So yeah, I’d love that. One thing is that when I wrote Timeline VR, I didn’t have hand tracking, so there wasn’t the same level of nuance and manipulability. But I’m happy to do a rebuild from scratch; it’s not that much work. In terms of exploring and considering a case study, it may be fruitful to look at and think about how those articles, that data, can form a basis, what sort of needs people discover as they’re thinking about manipulating it, and what else might be fruitful to try to visualize.

Brandel Zachernuk: Fabien, sure, but it broke before I was able to get a capture of it. It was pretty macabre anyway. I took a map of the United States and turned it into a large, room-scale 3D thing that you could walk all the way across. And then I built these spiraling spires for the COVID cases in the early days of the pandemic. So you could stand and look at what was happening to Washington, but then to see New York, you had to look all the way up because of how much higher it was at that time. The scales would need to be updated for it to have any meaningful information at all. But yeah, there are other things, numerical and quantitative data, that can be scraped and retrieved from Wikipedia as well that can form and constitute some pretty interesting stuff.
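
The scale problem described here, where early-pandemic spire heights became meaningless as counts grew, is the usual argument for renormalizing against the current maximum instead of baking in a fixed scale. A minimal sketch, assuming a log scale so early and late values both stay readable; the ceiling height and state figures are arbitrary:

```javascript
// Map case counts to spire heights, renormalized against the current
// maximum so the visualization stays meaningful as the data grows.
function spireHeights(countsByState, maxHeightMeters = 10) {
  const counts = Object.values(countsByState);
  const maxLog = Math.log10(1 + Math.max(...counts));
  const heights = {};
  for (const [state, n] of Object.entries(countsByState)) {
    // log10(1 + n) keeps zero counts at height 0 and compresses spikes.
    heights[state] = maxHeightMeters * Math.log10(1 + n) / maxLog;
  }
  return heights;
}

const h = spireHeights({ WA: 99, NY: 9999 });
console.log(h.NY);        // tallest spire reaches the ceiling: 10
console.log(h.WA < h.NY); // true
```

Re-running this on each data refresh means you never end up craning your neck at a New York spire drawn against a January scale.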

Frode Hegland: I was just going to say that I wasn’t able to run it. I would absolutely love to be able to view it properly and to have a discussion around that. Adam.

Adam Wern: I think the mural is interesting for me because it has many of the properties of hypertext; it's textual, or it feels like it will become hypertextual. It feels like many of them have a timeline aspect, or can have one. I'm really interested in argumentation maps, or mapping arguments, because I think it's really hard for people to grasp bigger arguments. Everyone has their own limits, but I think it's very useful to have that kind of information instead of linearised text. But also, if we're talking teaching, I found the things Brandel has done with the speaking voice and the floating hands really striking. I could imagine we could have Bob inside that mural as a teacher, close to the board. It could be a virtual teacher: not having too much presence as a body hiding the board, but being a kind of VR ghost, doing demonstrations and pointing. Often we use things like laser pointers or big fonts to highlight, but here you would have real hands doing gestures right in front of the information and talking through it, kind of like a museum guide, where you have different stations depending on where you are. So not a linear walk through the whole thing by default; it could be linear, but it would also be interesting to have it more like a museum guide, where you could just go to a place and get a short guiding for that place, with hands and voice and gestures and so on. So that's a teaching thing that ties together all of these things, the timeline, yeah.
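
The museum-guide idea, short located explanations that trigger near a station rather than one linear narration, boils down to a nearest-station-within-radius query on the visitor's position. A hedged sketch; the station names, coordinates, and trigger radius are all made up:

```javascript
// Return the guide station the visitor is currently "at", if any:
// the nearest station whose trigger radius contains the visitor.
function activeStation(visitor, stations) {
  let best = null;
  let bestDist = Infinity;
  for (const s of stations) {
    const d = Math.hypot(s.x - visitor.x, s.y - visitor.y);
    if (d <= s.radius && d < bestDist) {
      best = s;
      bestDist = d;
    }
  }
  return best; // null when no station is in range, so the guide stays silent
}

const stations = [
  { name: "Origins", x: 0, y: 0, radius: 1.5 },
  { name: "New Deal", x: 5, y: 0, radius: 1.5 },
];
console.log(activeStation({ x: 0.5, y: 0 }, stations).name); // "Origins"
console.log(activeStation({ x: 3, y: 0 }, stations));        // null
```

In a headset this check would run against the camera rig's position each frame, and entering a station's radius would cue the ghost-teacher's hands and voice for that spot.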

Brandel Zachernuk: Oh, yeah. Has everybody seen Ken Perlin's Chalktalk? Barbara Tversky mentioned it in her speech, but it's a really interesting and valuable point of reference. He's a researcher, a professor at New York University, and one of the things he was playing with, one of the things he has in this video... So has anybody not seen it, then? Are you familiar? Okay, so Perlin is a...

Peter Wasilko: I can't type it, and you talk so fast; I can't understand who you're talking about. Oh, I'm sorry. Ken Perlin, at New York University?

Brandel Zachernuk: That's right. I'll find it; the main talk is from 2017. What he does is draw, and similar to Ivan Sutherland in '62 or '63, it turns into meaningful marks. So if you draw a sort of an L, that turns into the opportunity to plug it in as a time series. And if he draws something that looks like a pendulum, it turns into a pendulum. If he sketches onto that, it turns into the sinusoidal graph of the pendulum moving. The only issue with it is that the domain of its applicability is limited by what gestures you determine to be relevant for demonstration purposes. But as a long-time teacher, he knows what he wants to talk about, and I imagine you do, too. So thinking about what kinds of gestures give you this sort of latent palette of representations and modes, so that they can nevertheless be performed at the speed of conversation; you know, we talk about working with a tool at the speed of thought, but performing at the speed of conversation is just as relevant from a pedagogical perspective. Then I think that would be a really interesting thing to pursue. What do you tell people about this? What would you need to be able to do? I'm actually in a conversation: somebody who is fairly senior within virtual reality has started trying to put forward the idea of an XR guild, for AR and VR, and the...
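
The Chalktalk pattern described here, sketched strokes recognized against a small gesture vocabulary, can be caricatured with a template matcher: resample each stroke to a fixed number of points, normalize into the unit square, and pick the closest template. This is a toy in the spirit of the well-known $1 unistroke recognizer, not Chalktalk's actual method, and the templates are invented:

```javascript
// Normalize a stroke into the unit square at a fixed point count.
function normalize(points, n = 16) {
  const xs = points.map(p => p[0]), ys = points.map(p => p[1]);
  const minX = Math.min(...xs), maxX = Math.max(...xs);
  const minY = Math.min(...ys), maxY = Math.max(...ys);
  const w = maxX - minX || 1, h = maxY - minY || 1;
  const out = [];
  for (let i = 0; i < n; i++) {
    // Resample by index (a real recognizer resamples by arc length).
    const p = points[Math.round(i * (points.length - 1) / (n - 1))];
    out.push([(p[0] - minX) / w, (p[1] - minY) / h]);
  }
  return out;
}

// Score the stroke against each template by summed point distance.
function recognize(stroke, templates) {
  const s = normalize(stroke);
  let best = null, bestScore = Infinity;
  for (const [name, pts] of Object.entries(templates)) {
    const t = normalize(pts);
    const score = s.reduce(
      (sum, p, i) => sum + Math.hypot(p[0] - t[i][0], p[1] - t[i][1]), 0);
    if (score < bestScore) { best = name; bestScore = score; }
  }
  return best;
}

// An "L" shape (Perlin's axes gesture) vs a plain diagonal line.
const templates = {
  axes: [[0, 0], [0, 1], [0, 2], [1, 2], [2, 2]],
  line: [[0, 0], [1, 1], [2, 2]],
};
console.log(recognize([[0, 0], [0, 5], [0, 10], [5, 10], [10, 10]], templates)); // "axes"
```

Brandel's caveat lands exactly here: the recognizer is only as broad as the template set, so the vocabulary has to be designed around what the teacher actually wants to demonstrate.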

Peter Wasilko: Oh.

Frode Hegland: We lost him. He's just being very thoughtful.

Adam Wern: What’s the secret guild that took him?

Frode Hegland: Yes, the guild took him. I don't know if you've seen that.

Adam Wern: Oh, you're back. Brandel, we lost you for a good while after you told us about the guild.

Frode Hegland: It sounded like they took you out.

Fabien Benetou: Yeah, I fear the guild.

Brandel Zachernuk: Sorry. So the guild is there to present principles, and one of the things that I've suggested for the website is for those principles to be performed rather than merely stated. And I think that this is very much in line with that, too.

Bob Horn: I agree, and I would add that many of the murals I had contracts for, from international organizations or governments, were for decision making, not only for teaching. People walked around and gathered around them and did a lot of pointing and chattering to each other about the different aspects of the murals. Do we have recordings of that? No, unfortunately, I have none of those. Wish we had, but we could make some, you know, in the future.

Brandel Zachernuk: Yes, I think observation of the way it's actually used would be a really interesting method of investigation into what interactivity might add to those abilities to use it.

Bob Horn: I made a mural, which I'll send to people next time, on the next 40 years of sustainability. It was constructed for the CEOs of the top two hundred companies in the world and called Vision 2050. It's on my website if you don't want to wait for me to send it. I asked the senior strategist of one of the companies in America, the Weyerhaeuser forest company, how did you use the mural? He says, it's in our boardroom. And I said, well, what do they do? Have you ever been there in it? And he says, yeah. He says, you know, every so often in a meeting, in a discussion of the board, somebody will wave at the mural or point at it and say, yeah, but what about such and such? Because the mural was a set of something like eight hundred, maybe I'm exaggerating here, four to eight hundred requirements for the next 40 years. And I said, well, why is the mural in the boardroom? He says, we're a forest company; our product takes 40 years to make. And so what's relevant in the context of the next 40 years, and what may be required in terms of climate change and sustainability and energy and manufacturing and so forth, was relevant to those people. Yeah.

Frode Hegland: Yeah, that's really brilliant, Bob. And we have to finish up, by the way; I'm afraid at least I have to go soon.

Fabien Benetou: I had to go four minutes ago, so I'll be very, very quick. The first thing is, honestly, learning versus teaching to me doesn't really matter as long as there is a goal. I'm very happy to help a learner or a teacher; honestly, I feel they are about the same, or somebody in a company who has something to optimize, or whatever. As long as there is at least one person and they have at least one goal that is sufficiently explicit, then I feel I can help somehow. But if it's too generic, that's my challenge. Learning, teaching, having a presentation specific to that mural, that's all fine to me. And on a much lower level: yes, Chalktalk from Ken Perlin is really amazing, but a word of warning, it is tightly coupled with that person, and that person is a brilliant programmer and teacher and a lot of other things, and extremely creative. I tried to use it, and I'm not Ken Perlin, and it didn't go well. The potential is amazing, how it's combinatorial and recognizes gestures and all that, but I think it's very hard to reproduce for someone else, unfortunately. The potential is still there, though. And Adam's example of presenting with Brandel's tool, being able to have the gestures and yet go back in time because, for example, maybe that was a bit too quick for me: I think the potential is amazing.

Frode Hegland: I'm completely on board with that, 100 percent. But also, a little bit of context: we are competing with billion-dollar companies in this sense, so we should absolutely do it. But I think our magic sauce needs to be that we are not Apple, Google or Meta, so we're not going to own everything. I know I'm a bit repetitive here, but the data can be moved around, through visual-meta or other means; we should definitely not be stuck in one way. It's so important. I could imagine a teacher doing a thing in this environment very much as described, but it then actually ends up in a normal textbook, and then it can go back into a completely different software vendor's environment and still produce something useful. But I think we're beginning to settle on a few things; we're doing wonderful demos and thinking. So unless anyone has something else, I look forward to Monday, when we're going to have our guest present, which will be fun. That'll be our second monthly guest presentation. And also, I think we should up the speed and have one every two weeks rather than every four weeks, because there are just too many interesting people in the community. And the fact that the full transcript goes into our journal hopefully will have value for them as well. See you all on Monday.

Fabien Benetou: Thank you.

Frode Hegland: Have a great weekend. Have a relaxing weekend, everyone. Bye for now.
