18 March 2024

Speaker1: Anyone who opposes him from Palestine definitely has a learning curve. I want to tell you my tips and tricks I’ve picked up over the last six months, and tell you our one simple change will drastically improve your performance. I’m guessing.

Frode Hegland: Good morning.

Speaker1: Two reasons why.

Andrew Thompson: Good morning.

Frode Hegland: I can turn off my Lumix autofocus video. How are you? How was your weekend?

Andrew Thompson: Weekend was nice. The weather’s good over here. Finally got out of the rain for a little while. It’s coming back tomorrow, but we had a sunny weekend.

Frode Hegland: Yeah, that’s very good. Rob is coming.

Andrew Thompson: Right. I noticed that I have less to contribute in Monday meetings, so I’ll probably just be developing on the side while I’m in the meeting, if that’s cool. Okay.

Frode Hegland: So I’m just energizing myself with some chocolate here. Yeah. No, absolutely. Now, this meeting will be a bit about the September thing. I’m just going to text people and see who’s coming. I see. There we go. They are, at least partly. And I see Rob is there?

Rob Swigart: Yep. Hi. I gotta go make coffee.

Frode Hegland: Not a bad idea. I’m just going to send a few texts out, because of the time difference the Europeans are a bit confused, maybe. Boom sense. Yeah. Hi, William. Good to see you again. I’ll introduce you when there are a few more people in the room. Okay. Yes. Hello. Good to see you too. I’m just arriving here at the lab, so excuse me while I do things like put on my indoor glasses and things.

Frode Hegland: Mark is here. Excellent. Morning, Mark.

Mark Anderson: Hello, all.

Frode Hegland: Hello? Hello. Just going through our messages here. Yeah, I saw that she couldn’t make it today, which is fine, because she has already told us she won’t be here for many Monday meetings, so that was not unexpected. But of course, it’s nice whenever she has the opportunity to join us. Fabien is here. Excellent. You’re the key person for this meeting, Fabien. Wow. No pressure. Sorry to, you know, put all the social pressure on you. I’m getting messages here. Something buzzed.

William Waites: Yeah.

Frode Hegland: I no doubt have a delivery. Please excuse me for a minute.

Peter Wasilko: Good morning. Brunching in sunny New York.

Frode Hegland: It’s funny with deliveries, isn’t it? It used to be that kids would ring doorbells and run away. Now adults ring doorbells and run away, but leave nice packages. So anyway, that’s at the door. Right. So we have enough people here to start. I’m going to introduce William in a second, but first: the next Future of Text. It’ll be nice. It’ll be good. Also, we have the September hypertext conference, which is really, really important for us. I really think we should spend a little time in Monday meetings on how to make, let’s just call it one event, kind of amazing. It’s easy to look at the past for inspiration; maybe a little cheap. And there’s also the question of who are we, and all of that stuff. But I really do think we need to aim to make the experience for someone at the hypertext conference a real big moment. And that’s not going to happen with one headset being put on one person’s head; that’s going to be a real group effort. And that’s why, Fabien, it’s so important you’re part of this dialogue, because your time opportunity isn’t really here right now for that main, quote unquote, work. But it’s not going to be one thing. It’s easy to mention Doug Engelbart’s ’68 demo, but one of the key things about his demo was that he didn’t show off one thing: he showed a huge amount of connected things. So that’s why we also need to decide now who to invite for the book and the symposium. Something we all need to put in a spreadsheet.

Frode Hegland: I have one to start. Obviously this is something very important for me to go through, but it’s the whole community that comes up with suggestions; that’s really important. But I really urge you to think really, really big about this, because, you know, we can easily say, who the heck are we? And everybody has said that through the ages. But you know, this is the first year ever of a modern headset being available to people, even though it’s expensive and there are all kinds of problems. So that’s why I’m hoping we can do a lot of things outside. So, Fabien, the work that you’re doing: if we could find a way to integrate it. And please remember Dene’s words; she says it much better than me: the research and the effort that we all do is all properly credited. And that’s not about ego; that’s about the ongoing academic opportunities that we will have. And on that note, I was in Southampton on Friday, went to see the famed and fabled Dave Millard, who Mark keeps referring to just to make me ridiculously jealous. He is the only advisor I have that actually makes an effort, because he makes the time. And I was then very happy to talk to Chris Gutteridge as well. Chris was a bit busy because he was going to see William for lunch, but they kindly invited me. So I met William, who put the headset on. He has extensive experience in this field and nearby fields, so I think he would be a very appropriate addition. William, if you could spend a few minutes introducing yourself.

William Waites: Thank you. Yeah, sure I can. It was a pleasure meeting you, whenever that was; that must have been Friday, as you said. A complete sort of coincidence. So I spent some time in the late 90s at the University of Toronto, working in wearable computing with Steve Mann and doing all sorts of things. That’s always been really interesting, although I kind of left it to the side. So this was really nice, to meet and to rekindle my interest in this, and to start looking at what is actually available now after, you know, 25 years or so have passed, and to try out the Apple goggles. Those had some really interesting features. I guess one of the things that I have often wanted to be able to do... I mean, it all centers around text. I’m an old, old school Unix person, really: I use a terminal and a web browser, and that’s about it. So: working with text as I’m moving around. How can we do that? Can we do that yet? One of the things that struck me about the Apple thing is, as compelling a visual experience as it was, there’s no good way to do input. How do we solve that? I’d like to be able to solve that, actually. Anyway, this is just really super interesting, and it’s stuff I haven’t thought about for a while, and I’m now inspired to think about it again.

William Waites: So thank you for inviting me. I should say I’m also, at the moment, sitting in the Edinburgh Hacklab. I can give you a bit of a tour if you’d like. Yeah. Okay. Sure. I’m going to try to do something that I’ve never done: my computer here has been threatening to use my phone as a camera for the past while, and I keep saying, no, don’t do that. And, oh, well, look, it actually sort of works, right? Okay, so here we are. I’m going to wander around. That’s Colin there. Sorry, Colin, but you can see the sort of group there. Colin also has been very interested in wearable computing and that sort of thing for a while. And you weren’t in Southampton, but you could have been, and if you were, it would have been a great conversation. Yeah. So this is the hackerspace. This is the main room. Oh, and that’s Costa; Costa’s just arrived there. All right. In the main room we have a social area and a bit of a kitchen, and then we have, you know, microelectronics and radio and all that sort of thing. We’ve got more than that, though. I’m not totally sure how this is communicating with my laptop, so I’m going to carry it around as well, and maybe I’ll even be able to see what’s going on. So we’ve got a door here that’s going to open.

Fabien Bentou: Foothills.

William Waites: If we open this one, you go through to a workshop for, you know, the things that go on in a workshop: lathes and saws and all these sorts of things. We’ve got... oh, this is a lovely artwork. I don’t know, can you see this? This lovely artwork is a generator at the bottom, and some capacitors and rectifier diodes, and a switch that lets you choose whether you want an LED or an incandescent bulb. What you can do with this is get an experiential sense of the difference in energy it takes to light up an LED, or a bunch of LEDs, compared to an incandescent bulb. It’s hard to transmit that over the internet, but if you come up here and visit, you can try it. And another room with, you know, the usual sorts of stuff: a couple of laser cutters and some computer controlled milling machines and these sorts of things. So this place is really nice because, well, first of all because it’s off campus, which means there’s a lot of freedom of access to the equipment and doing things, because you don’t need quite so...

Frode Hegland: So now we know about the Wi-Fi reach in the lab. Effective demonstration. So, when it comes back... William, can you hear us?

William Waites: Different sewing machines now. So sewing fabric seems to be a thing.

Frode Hegland: You know, the wifi comes and goes. I think it’s probably safest if you’re back at your desk.

William Waites: Yes, yes, going back into the main room now. Anyways, that is the... let me change this.

William Waites: Back. That’s the very quick tour.

William Waites: So the whole point of that is, you know, making prototypes of things is something that we can very easily do here, and for all kinds of things, which is really nice.

Fabien Bentou: And did you hear that?

Frode Hegland: I suppose prototypes. The P word. Excellent. So that’s really, really good and really nice, William. Thank you. I think we should take that as an excuse to have 30-second introductions of everybody, starting with you, Fabien, because you mentioned prototypes. Yes.

Fabien Bentou: Yeah. Thanks. Well, this is what I do: prototyping, mostly motivated by learning. I feel like I’m not a real developer; I’m too lazy for this. But if I can still develop tiny things that help me learn, then I keep on getting excited. So, yeah, that’s my excuse not to do QA and that kind of thing. I mostly do it for the European Parliament, and I was finishing a project for Mozilla recently. My interest is mostly in webXR, so VR and AR solely on the web, but then I find ways to intertwine this with, like, robotics, IoT and whatever. And to connect a bit more there with the group: my interest is in doing better prototyping. My motivation for doing VR is because I have a bunch of notes, in any shape and form actually, and I think these kinds of devices, being able to have an entire constructed world around you that fits in your backpack, help to organize all those notes and thus do better, or at least more, prototypes. So that’s my motivation for being here every week for the last year or so.

Frode Hegland: Perfect. And, just randomly on my screen: Peter.

Peter Wasilko: Okay. I’m formally trained in law, mostly IP law, but I found that the IP was much more interesting than the law side. And I’m also a hybrid with programming; I’ve been doing that since the 1980s. Interested in information science, innovation management, a whole range of topics. Basically a polymath at heart. And I do a lot of prototyping, very interested in end user programming. So I’m always playing with building new parsers and things like that.

Frode Hegland: Wow, that was brief and perfect. Rob. Hello?

Rob Swigart: Okay. I am somebody who works with text; that is to say, I’m a writer. I did computer games, I write novels. And I have a Vision Pro and I’m very interested in what I can do with it. So far not much, except watch very enjoyable movies. But that’s not really what we want to do here. So I’m trying to see if I can contribute to this from the more journalistic side of things. I did a lot of computer journalism and technical writing in the 80s. So that’s my background.

Frode Hegland: Yeah, very good.

Rob Swigart: We’ll see. The future as it comes.

Frode Hegland: That’s a great quote. Andrew.

Andrew Thompson: Oh, boy. It’s my turn. Yeah. I’m a newcomer to the lab. Essentially, I got brought in with the Sloan grant. And I’m basically the webXR prototyper. So every week the group gets to do all this really in depth planning and thinking, and then they toss me the bone, and I come back next week showing some of it working in the headset. And then we do it again. And that’s kind of been the rotation for the last couple months. I started in January.

Frode Hegland: So before I come to you, Mark: just a mention of Andrew, who is probably, ‘probably’ in quotes because obviously he is, the youngest member of the group, and is also the only professional member of the group, because, thank goodness, currently this is part of his job, to actually work on this. And so that’s nice. But what I think is really brilliant: I’ve worked with a lot of programmers of different skill levels, and technically I can’t judge any of you, because I’m just not good enough. But what I can say is that even the best programmers very often tend to do exactly what you ask them to do, which is obviously really bad when there are limitations. And every single week, Andrew, what you show us is your deep listening skills: not just reading some spec bits, but actually paying attention to what we actually mean. So you should probably go into real estate at some point; you know, the same skill of what house do they really want, not what they say. So: very, very grateful to have you here, Andrew. Hi, Brandel. We’re doing brief introductions; you will do a ten second introduction after we’re done. So it’s just you and Mark left. So, Mark.

Mark Anderson: Oh, hi. I’m Mark Anderson, speaking from actually-now-not-sunny southern England. My connection here originally was that Frode and I were both in the WAIS department, that’s Web and Internet Science, not waste, despite the branding, which has been a sort of center for hypertext research for quite a long time; that’s what brought me there, because I’m only recently in academe. I was basically a naval signals officer, got involved in sort of information and documentation, and then went on to basically work on metadata and information things, effectively as an emergency plumber, fixing broken systems after all the expensive people had left. So I’m well versed in real world rather than theoretical systems. Then I did a PhD in hypertext at Southampton, since when I’ve basically stayed on, and I’m an independent researcher in hypertext. And my interest here is effectively in the deconstruction and the evolution of documents for the post paper age.

Frode Hegland: Thank you, Mark. You should have been there on Friday when William and I were hanging out; Paul Smart was there too. It was really quite a special day, and we had excellent coffees with the brownies. And yes, you should all come to Southampton. We have coffee and brownies. Can you believe it? Brandel, please, a brief introduction from you as well, and then William will reintroduce himself a little bit.

Brandel Zachernuk: Great. So I am from the commercial world; I work in Silicon Valley at Apple, on, well, not what text is, but what the web should be in space, on Vision Pro, but also with the rest of the web standards. So I represent Apple on the World Wide Web Consortium, on the Immersive Web there. For the last ten years I’ve been fixating, somewhat reluctantly at times, on what virtual reality could do for ideas like word processing and information navigation, building VR prototypes that are a mixture of functional and evocative, of how to write and how to read and consume things in space, and what can be encoded by space. Really inspired by folks like Andy Clark and Lisa Feldman Barrett and David Kirsh, and the concept of thinking with the body: the extended mind thesis being something that I think is very productive to pursue for how to use space as an additional component of actually being able to construct texts that include it, rather than being limited to document sizes, the folio dimensions that paper has existed on hitherto. So, yeah, that’s what I do and why I think it’s fun. I came to this community a few years ago, which had been interesting and focused on the future of text but hadn’t yet considered virtual reality, and made them do it. So I’m very glad for having been able to make them make that jump.

Frode Hegland: Yeah. William, before you reintroduce yourself, now that Brandel is here: he’s on the Pacific coast of America, so it’s early there, along with Rob and Andrew. Just wanted to mention to you, Brandel, I think I told you already: Paul Smart, on Friday; he has built an LLM of Andy Clark. There are some papers on that that he sent me, and I’d like to forward them to you guys. However, they completely, utterly lack metadata. It isn’t even written in the document. Paul is a close friend, so it’s not like I’m backtalking him, but the papers themselves, as normal PDF exports, don’t even have the date of publication, so I can’t put them into my library. Which of course is great for me, you know, a win for Visual-Meta or whatever. But anyway, once I get the papers, I’ll distribute them to all of you, because what he’s doing is just really, really cool. And also, the reason I met Paul again, my old friend, was because of William. So the connectedness here is very hypertextual. William, please, a brief intro for Brandel.

William Waites: I think the lack of metadata is kind of on purpose, but maybe I’ll explain what I mean after, instead of jumping immediately off topic. So, the more standard sort of intro, other than that I met Frode on Friday, used to work in AR, and am now excited to be thinking about this again: I guess I’m a research engineer in the Electronics and Computer Science department at Southampton, physically based in Edinburgh. Most of what I’ve been working on in the past several years is actually mathematical models of things in biology. And my main connection to text, I guess, is that I read a lot of it, and I write some of it. And I know something about metadata, and its absence, and maybe why it’s absent.

Frode Hegland: But thank you, thank you, thank you. Fabien. Fabien. Fabien.

Fabien Bentou: Just also to maybe connect some dots here. This weekend, on Saturday, I went to a metalworking workshop, to cut rods of metal and bend them and whatnot. Because I like software, but it has its limitations: it doesn’t have a body. For some stuff it’s really good, like iterating really fast, and for some stuff, like putting your feet on it, yeah, it doesn’t work. And why do I mention this, with William, but with the rest of the group? Because of one of my temptations. I tried to stay in the workshop, just listening and doing whatever the teacher was saying, but I couldn’t help but think: okay, what if I had all my gadgets with me? What if the workshop was augmented? Because a lot of the information, I was storing it in my mind and then replaying it at the moment of using the new tools. And some tools are simple, but I was literally surrounded by hundreds of different tools, so it’s overwhelming. I have some tools at home too, in my basement and in my home office and in my office, but when you discover a new workshop, not having contextual information while doing the physical task was a problem. Like, I’m convinced it could be done better. I’m not going to pretend I know how, because there are a bunch of limitations, especially when I discover such places, which I don’t know, and I don’t know the risks and whatnot. But this weekend, on Saturday, was a quite convincing moment. First, I’m going to go back without any AR or XR or whatnot, just to build more stuff with metal; I’m going again in two weeks. But I’m rather convinced that it’s a very interesting intersection. I don’t want to disconnect from, let’s say, the traditional paper, or less traditional. But thinking in context and learning in context, in a physical space with complex tools: I can’t help but see there is a lot to do there.

Frode Hegland: Yeah, that’s perfect. Leon, we have William here, who is in Edinburgh, working with Southampton, and has worked on AR and biology and coding and numbers and magical things for a long time. We randomly met up on Friday. Your 30-second introduction, please.

Leon van Kammen: I’m Leon and I’m working on a daily basis with webXR, specifically with addressability, using URLs in webXR. I’m running xrfragment.org, which is an exploration in using URLs in webXR experiences. And I think that’s my 30-second introduction. I’m a big fan of everybody here in this room, and I’m here almost every week, whenever I can.

Frode Hegland: And we’re so grateful for that. The people here are so complementary; we are not cookie cutters, that’s for sure. So there’s a couple of funny things. Fabien was talking about kind of a theoretical use case. On Thursday I went to the London Book Fair; a friend of mine is a big boss at Audible, so it was all kinds of fun, but I couldn’t really network. Even though a book fair is all about text, right? It should have been perfect for us. But, you know, what do you do, go up and say hello? No. So I went to have a coffee, and I sat on a little bar stool at a little bar table. And of course I put on the headset, the Vision Pro, just to see what the reaction of people would be. Nobody actually talked to me, which was fine. There were a couple of looks, but nothing big. So this has already broken into the culture; it wasn’t like, what are you doing, you monster. Nothing like that. But what was extremely interesting for me is: despite Andrew’s work being in a grey space, despite Dene talking about the need for a full 360 workspace, which I fully agree with, when I was sitting there, this was at Olympia, it was a really comfortable area, but it had a lot of open space in front of me.

Leon van Kammen: Yeah.

Frode Hegland: And that completely changed how I used the headset, because I could open up a huge Author window to write. Very often I have something in front of me, a physical thing; but because here there was open space, even though digitally it makes no difference, it made it really pleasant. So it’s the first time I’ve actually worked in the headset productively and pleasantly. That was a really, really big surprise, how that played into it. And last week, I don’t know if it was Monday or Wednesday, one of the things we discussed was how, when people put on the Vision Pro, the biggest wow we always get is when they look at the environments, whether it’s Yosemite or the moon or whatever. Always. Which is kind of annoying in a sense, because it’s literally the background. But it shows that if we are in an AR environment, the environment is still so important. For me it was maybe obvious, but also a bit of a wake up call. I wonder if anyone has any perspectives on something like that. Okay, Fabien.

Fabien Bentou: I think most people miss the flexibility of being able to choose the background of your world. It’s not just the background of your desktop: you can adjust it, you can reshape it. Okay, if it looks good, it’s better; but even if it doesn’t, just being able to selectively fade away the rest of the world, to pay attention when you have to: I think that in itself, from my own experience of this, is really powerful. It’s a little bit like headphones, but visually. And it’s really hard to do if you have a laptop; you don’t usually walk around with curtains that you would put around the thing. I know that, at least for me, managing my attention is a constant challenge. So again, being able to selectively manage your attention is in itself very valuable. The beauty of it and the aesthetics: yes, okay, people do indeed tend to highlight this a bit much, but I do enjoy it quite a bit.

Frode Hegland: It makes sense. By the way, guys, we have such an incredible community, and everybody’s really respectful with timing. But if there is just nobody talking, please do talk; otherwise it can be a bit formal. So: Leon, please, please go on.

Leon van Kammen: Yeah. So I totally agree with the importance of the real background. And this also reminded me of a conversation I had with Fabien about prototyping: AR and VR can be incredibly different scenarios. I discovered, or actually Fabien told me, that if you want to prototype something quickly, then VR can be quite convenient, because you don’t really have to think about all the myriads and multitudes of real world AR settings. You can just pick a scene, and that’s basically it; then you can adjust whatever your application in that space is, and it’s very easy to control. AR, on the other hand, can make complete sense in one room but no sense in a completely different environment. So it’s a big topic, a very big topic. And I also think that the term mixed reality is easier said than done.

Frode Hegland: Yeah. Perfect. Absolutely. Brandel.

Brandel Zachernuk: Yeah. We finally had our launch party for the device; we rented out Levi’s Stadium, and everybody came and hung out for a while. And I was talking to the folks who are doing SwiftUI and saying: I’m so grateful that you’re taking on this very, very difficult problem of being able to construct layout in space. Because one of the big differences between other systems and this one is that this one is hopefully going to be easier to program for. It may come as a surprise, but one of the reasons why it’s successful is that you don’t need to be very good to make something with it, because it has so heavily invested in being able to build things that construct user interface systems. But the thing is that continuing that is going to be exponentially more difficult, adding not just a third dimension but, at some point, presumably a larger space, because mixed reality is so complicated to configure: to understand the logical constraints that are imposed on an interface system by there being a cat tree next to me, or too much stuff on the ground for me to walk over and reach a certain part of the interface that I might otherwise wish to compose in that area, or if there are conceptual implications for things pointing to the west versus the east, maybe to do with the direction of the rising sun.

Brandel Zachernuk: What’s going to land on that floor? Just that I don’t think that color is very nice over there; I’m in a rental and the color is not very nice anywhere. All of these things are very, very complicated. So mixed reality, depending on how seriously you take that term, can be as difficult a problem as you want. Whereas with virtual reality, most of the time people say: we’ve carved out this region of space, you can do what you want with it. Which doesn’t make it easy, but it does make it a lot easier than having to have a full throated integration with the environment around you. Which, long term, I think everybody agrees is an awesome, very exciting thing to be able to do. But it’s going to take a while for there to be proper CSS media queries for where your couch is.

Frode Hegland: So this leads really nicely into the initial question that I asked at the beginning of the meeting today, or provocation rather. And that is: for the hypertext conference in September, we should do something massive. The centerpiece will be what Andrew’s building, but, you know, Brandel’s talking about AR and many, many layers. I wondered: if we had all the time in the world, all the money in the world and everything, what would we want people to experience on that day? With the explicit, stated goal that an academic, either a professional or, like Brandel, someone who works in a company but really, you know, cares about what they read and write, interacts with the system and quite simply thinks: oh yes, more work needs to be done, but I can really think and work in there.

Fabien Bentou: I mean, to me, the ideal in such a demo would be a mini eureka moment, a small realization on whatever topic they’re working on. To give a super naive and short example: it’s like, oh, I have a review of a dozen papers. I toss them on the floor or on the wall during the demo, they stick to that surface or whatever, I organize them, I do my mental model or whatever. And then I remove the headset and: I never thought about those papers that way. And I can’t wait to go home, or to the office, and write about the outcome. And I don’t even care about XR. I don’t care about the headset, I don’t care about how it’s been done, because I don’t have time for it, because I need to write about this new idea I got. To me, that’s the ideal scenario. It doesn’t explain at all how to do this, but that’s how everybody, me included, I believe, would be convinced by this set of technology.

Frode Hegland: That’s really beautiful and central, and we should absolutely hold on to that. I heard once that in sales, if you’re doing a face to face pitch to someone, the sale isn’t made when they’re listening or asking questions; the sale is made when their eyes glaze over, because they’re thinking about what they can do with what you’re selling. So, yeah. That’s wonderful, Fabien. Leon, please.

Leon van Kammen: Yeah, I was... you asked the same question last week, and that was sort of a bombshell which ended in silence, because it’s such a huge question. Sorry, my mic.

Frode Hegland: You’re scratching up.

Leon van Kammen: Oh, am I back?

Fabien Bentou: I don’t think it’s a connection problem. I don’t think it’s latency. It’s just a super noisy microphone. And it worked better during your intro, so it might be software.

Leon van Kammen: How about this one?

Fabien Bentou: Much better.

Leon van Kammen: Okay. So, apologies, I have too many microphones here connected to this browser. Anyways. So, yeah, you asked the same question last week, and I was a bit flabbergasted by the huge question, but I did think a bit about it. Without being too specific: I think at a hypertext conference, and also in general, people are really impressed by navigating infinite amounts of connections or information. For example, it’s one thing to have a knowledge graph or a personal library; that’s also impressive, of course. But when we’re talking about networks, I think what really fascinates people is when they see a certain infinite aspect in there. So I think infinite navigation... maybe I’m a bit biased, because this is also why I started XR Fragments. I would love to see an infinite 3D modelverse, sort of. This infinite browsing, I think, at least for me personally, would be really something impressive.
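
A minimal sketch of what that URL-driven navigation could look like in code follows. The fragment grammar here (#pos=x,y,z) is an illustrative assumption in the spirit of Leon’s XR Fragments work rather than the published spec, and loadScene and setViewerPosition are hypothetical callbacks supplied by the host application.

```typescript
// Sketch: treat an ordinary link as an address into a 3D world, so that
// "browsing" between worlds is just following URLs. The #pos=x,y,z grammar
// is an illustrative assumption, not the published XR Fragments spec.

type Vec3 = { x: number; y: number; z: number };

// Parse "#pos=1,0,-5" from an absolute URL into a spawn position.
function parsePosFragment(url: string): Vec3 {
  const hash = new URL(url).hash.replace(/^#/, "");
  const pos = new URLSearchParams(hash).get("pos");
  if (!pos) return { x: 0, y: 0, z: 0 }; // no fragment: spawn at the origin
  const [x, y, z] = pos.split(",").map(Number);
  return { x, y, z };
}

// Following a link: load the world named by the path, then seat the
// visitor at the coordinates carried in the fragment.
function navigate(
  url: string,
  loadScene: (path: string) => void,   // hypothetical host callback
  setViewerPosition: (p: Vec3) => void // hypothetical host callback
): void {
  loadScene(new URL(url).pathname);    // e.g. "/museum.glb" (invented path)
  setViewerPosition(parsePosFragment(url));
}
```

The appeal of the design is that one shareable string carries both the world and a place in it, which is what lets the browsing feel infinite: every scene can link onward to any other.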

Frode Hegland: I think everyone’s nodding, either visibly or in their own heads, on that one. I think a dream would be to do that. For me, part of it would include things connected in a graph because they have Visual-Meta attached, but other things wouldn’t need that at all. Right. But if we could provide an environment that was literally as large as the venue, and bigger, you know, seeing in different ways and huge, that would be amazing. William, please.

William Waites: Yeah, I was just thinking, you know... so I used to be a network engineer. If you imagine something that is a network, or a graph, it has vertices, or nodes, and it has edges that connect them. And then you imagine what it means to navigate on this graph, which is something that we kind of conceptually had even with hypertext from the start, or whatever. What it really means is situating yourself first at this vertex: here I am. And I imagine you could give a sense of being at a vertex, at a node, and then navigating means following an edge, maybe with the physical impression of actually following an edge, you know? Now here I am with a different perspective, at a different vertex. And the act of navigation is the act of changing perspective as you move around. I think if we could somehow give a sense of this, that would be really very compelling. But yeah, that’s just the thought I had.
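
A toy sketch of that model, with all type and function names invented for illustration: the viewer is always at a vertex, and following an edge re-seats them at the far end while remembering where they came from, so each hop is literally a change of perspective.

```typescript
// Toy model of graph navigation as embodied perspective change: you are
// always *at* a node, and travel means following an edge, not teleporting.

interface Vertex { id: string; position: [number, number, number]; }
interface Edge { from: string; to: string; }

interface Graph {
  vertices: Map<string, Vertex>;
  edges: Edge[];
}

// The viewer's perspective: where they stand, and where they just came from,
// which gives the "looking back at where I was" orientation described above.
interface Viewpoint { at: Vertex; cameFrom: Vertex | null; }

function followEdge(g: Graph, view: Viewpoint, edge: Edge): Viewpoint {
  if (edge.from !== view.at.id) throw new Error("edge does not start here");
  const destination = g.vertices.get(edge.to);
  if (!destination) throw new Error("edge leads nowhere");
  return { at: destination, cameFrom: view.at };
}
```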

Frode Hegland: Yeah. I wish we had better ways to note this down, because there are so many things coming out. Please keep that in mind and we’ll keep talking. And please, Brandel.

Brandel Zachernuk: I like that. I like the notion of embodiment, that notion of perspective. One of the things that I think a lot of people have been taken aback by and confused by is the way that virtual reality always entails a recentering on the viewer, the sense that people exist at a point. And I really love the Stanford lab here: the Virtual Human Interaction Lab studies virtual reality primarily from a psychology department sort of perspective, rather than computer science or cognitive science or anything like that; the perceptual impacts, like what happens to you from doing it this way. And they find that, you know, a lot of the time media is intrinsically more voyeuristic than VR might be, but that in VR you are a thing, you are somewhere. And so there’s not stuff below your feet, and, like, why is that? I think those are really interesting necessities to attend to. But I think there are also really powerful opportunities for leveraging our understandable but inevitable egocentrism: well, now I’m here; what’s that got to do with here?

Brandel Zachernuk: There was a really good book, Seeing Like a Rover, written by a sociologist who was attached to the Spirit and Opportunity program back in ’07 to ’09. And she said that sometimes the people there would try to resolve problems by standing like they imagined the rover to be, trying to encode the different parts of the very non-anthropomorphic shape of the Spirit and Opportunity rovers as though they were their own body, in order to figure things out. And in fact, it was kind of easier for them to conceptualize it by drawing a little human face on it, in order to frame themselves. A similar thing happens with being able to assess the chirality of complicated shapes. So reconciling oneself with the network, I think, is a really apt and really fruitful thing. I also wanted to say something about what Fabien was saying, about being able to drop the whole thing and come away with a conclusion that’s kind of independent.

Brandel Zachernuk: Sorry, it’s loud in the background; there’s a kettle boiling, so that’s probably not my headphones. I’m not sure; sometimes they’re broken. Anyway. The thing I wanted to say was that there’s this guy working on RF engineering, radio frequency: charging, radio modems, you know, wireless communication, various kinds of power transfer or whatever else. He’s been building this five dimensional latent space, essentially on CodePen, I think. And he was just able to understand the dependencies of the convex hulls of these properties, and their tendency to overlap, by attenuating different dimensions, without having to attend to the numerical values, because they’re gesturally available. He was just moving his hands and pulling these sliders, and was able to understand, at a so much deeper level than he’s been able to in the past, how these volumes intersect and what their tendencies are. And it was just wonderful, because it’s exactly what I had hoped would happen. But it’s also proof positive that somebody who’s not deeply, deeply invested, white knuckled and gritted, in trying to make virtual reality work for them is finding these things. So: being able to tell things about values and relationships that are very, very important to people, where it’s not especially important that it happened in virtual reality, just that it helped. I was just thrilled that that had happened once already. And, you know, I’ve got a couple more folks waiting in the wings like that; I’m looking forward to trying to get discoveries like that inside and around the company.

Frode Hegland: Okay. So this is very, very exciting, with all these perspectives. One of the things on Friday... first of all, Fabien, what you just showed is really kind of amazing. Everybody, if you haven’t clicked on Fabien’s link, do please. Right. So as I was driving down to Southampton, I was thinking about the notion Doug Engelbart had of a handbook: a handbook is the state of the art of something in the world. No one in reality contributes to a handbook in that way; you need an editor, and it becomes an Encyclopedia Britannica, and it becomes, you know... anyway. But imagine if we had a way where we could contribute knowledge, but in such a way... I’m imagining the hand part of it, Fabien, and I hope you’re not going away. Because imagine if we can start building these proper shapes of knowledge that we’ve really been talking about today, and the interaction is not a controller, it’s not your eyes and pinch: it is actually your hands. Hence the term handbook, in this new way. Right? I think that would be absolutely, really, really fascinating. And you’ve all seen what Fabien posted, right? Okay. Fabien, you’re here, right? I kind of need you, because I’m going to show you something, okay? As long as you’re here, as long as you can see us. Because here’s the thing. We had this discussion because Paul Smart... sorry.

Frode Hegland: Also, I noticed last Wednesday I turned my camera off a lot. That was not on purpose; I think I was getting chocolate or something, but yeah, that was a bit more than I usually do. Not that it’s a problem if anyone else does it. Right. So, one of the things: Paul Smart has made an LLM of Andy Clark, who is a philosopher. It’s really cool; you can interact with it. So all of us were just bouncing ideas around, and one of them is: imagine you’re in this office space, A or B or whatever. In one corner you have the Andy Clark LLM, represented like a glowing orb, just for fun, right, a little bit of color. You’re reading a document and you come across something, and that orb speaks to you, maybe through voice, saying: you know, what you’re reading right now relates to this other thing. You know, having fun with different kinds of AIs. And I’m not talking about random AIs; this Andy Clark one is made with Andy Clark’s explicit permission and contributions. I could imagine another one, as we used the example on Friday, of a Mark Anderson one, that he has built, that specializes in checking citations, which is one of Mark’s many specialties. Right? So you have this room full of these orbs that represent perspectives. They can present things visually, aurally, whatever. And here’s the fun thing.

Frode Hegland: This is what I wanted you to see, Fabien. I could imagine a heated argument with someone sitting in the same room. Some of them have headsets on, and one of them, relatively angrily, just says: but you’re not looking at it properly. So they pinch their hand together, and all the orbs in the room go into their hand, and they throw them to the other person as a collection, so that person can then choose to get this environment built for them, and see the information presented as the other person did. And also, when you leave the room, you take the orbs and you put them in your top pocket. That was the Fabien thing: you know, you have things on your arm, but why not a front shirt pocket, right? So you can put your views here. If we start embodying whatever things we come up with, different places where you put different things, it could be really crazy. And then the last thing, just to throw out, since I’m talking a bit too much here: all of these ideas, every single one of them, is implementable; there’s no question about that. But we all have different time, inspirations, opportunities. After Mark, who’s next, talks, I really would like to hear more about how we can have an event, a day, a presentation where lots of these things work together. Over. Mark.

Mark Anderson: Sorry, yes; my hand was in a sense raised for your earlier point, about what might sort of wow people. And I think it’s blending the perspective that’s just been touched on, this sort of moving around, with stepping outside the fixity of graphs that are beloved of the people who make them. Because a lot of people in the hypertext field are thinking about basically the elicitation of knowledge, you know? In other words, it isn’t yet in a graph, because the links haven’t been made. So I think a real wow thing is something we’re already looking at in the Sloan work: in a sense, trying to break down the boundaries of things like documents. A document, in a sense, is just a notion that we have because for several hundred years we had them on rectangular bits of paper, and we’ve sort of carried that forward. But actually we now have the means to take information from things that interest us and connect it together in different ways, or putatively connect it together in different ways, based on essentially extra information: metadata, if you will, around it. I mean, the chicken and egg here is having what we need to do that. But I think the thing that goes beyond another graph, which isn’t exactly new to folks in hypertext, is stepping outside the fixity of a calculated graph and asking, well, okay: how can I bridge from here to there? And at the same time, the thing that’s already been raised, about being able to stand within this framework and literally assume different perspectives, is also useful in that. The two notions don’t work against one another at all. Brandel.

Brandel Zachernuk: Yes. I think that there’s a lot about the totality of our literacy, in the context of these graph views and things like that, that is going to take a while. I had a really strong and tangible sense of that; I think the first time I heard about it was in the book 1700: Scenes from London Life, but other people have reinforced it. Prior to mass literacy: people will sometimes portray what a city street might look like in 1400 or 1500, and one of the things that people consistently get wrong is that there’s too much writing everywhere. The world wasn’t constantly suffused with writing as an annotation layer over every single thing everywhere. But then we got it. Literacy itself is quite a bit older than that, and so there were some people who could read, and so there was writing. But a lot of it, you know, things like ancient Egyptian hieroglyphs: they are there as spectacle, as monuments. They’re not there for informational purposes for all but the few elite who have the ability to read them.

Brandel Zachernuk: And most of the stuff that those few read is not the stuff that goes on those great big columns; that’s very much for showing off, and only incidentally legible as informational stuff. But it made me think that that annotation, that mass illiteracy, that lack of inclusion within the visual lexicon of how we navigate the world and see things: we did that once, and actually not that long ago. We had cities, we had spaces that were completely unannotated, without the expectation of literacy, mass literacy, as this kind of common substrate upon which we can all rely. And now the city teaches you to read. I mean, people still end up... and especially various governments around the world succeed in making sure that there are, you know, broken, lost generations in terms of their capacity to navigate this literate landscape. But for the most part the city does teach you; it helps, you know, kids get to see street signs and stores and fliers and whatever else.

Brandel Zachernuk: And we get to impose that on us all again, by including not just literacy but another form of it. But at the same time, we don’t have it: we are illiterate in whatever this new literacy is, even us. So one of the things that I’m really interested in is trying to find the lowest hanging fruit: the people who have the strongest and most self-evident reasons for navigating these things in space, as an early indicator of what the tendencies, the broad shape, of that literacy look like; what they do, how they feel about it. So that we can capture, not necessarily that everybody’s going to be these expert RF engineers, but what that feels like, and how we can transpose that feeling of what they’re doing with it into other expertise domains. In the same way that bookkeeping and accounting have privileged so much of our knowledge work and knowledge representations, you know, starting with cuneiform, but also with graphs; I think balance of trade graphs were some of the first actual non-topographical or non-topological kinds of maps of the world. If we find people for whom this is an easier jump, we can use them, both by entertaining their needs and hopefully making them satisfied, but also by then looking at what it does to their work and how they feel about it, to broaden and deepen its impact for other people elsewhere.

Frode Hegland: Yeah. I mean, we are fairly illiterate in this. By the way, the images I uploaded: I’ve been running them in the background through Magnific, and it’s made a mess of them on purpose. I just thought it was a bit amusing, because we definitely need to start playing with AI in this community. But we need to be playing with it with the respect that we have discussed, not just jumping in. But: William.

William Waites: Yeah. When Brandel was talking, I was just wondering: what do we have for mapping tools? Like, if I wander around, and there are things in the world, and I have one of these things, and I want to put an annotation on something here so that you can see it the next time you come in. Can we do that? Do we have tools for that?

Frode Hegland: Oh, you mean in this meeting?

William Waites: Well, I mean, just sort of generally. Like, conceptually, sure, it’s really easy. But suppose we wanted to actually do that. Can we? Does the software exist? I mean, it could be written; it’s implementable. But I mean.

Frode Hegland: Hang on, are you talking about watching the video again and annotating?

William Waites: No, no, no. I’m talking about... so, I’m going to leave a note for you here, so that when you come and visit and I’m not here, you see my note, and it tells you where to go, or what to do, or what this thing is, or, you know.

Speaker11: Yeah, yeah.

Frode Hegland: So that’s what Douglas Adams called virtual graffiti.

Speaker11: Of course. Yes.

William Waites: Yeah. Yeah, sure. Exactly. Yeah.

Frode Hegland: Brandel, please.

Brandel Zachernuk: Fabien is probably just as good at answering, but I will jump in, because by my count I was in first. So, in the context of web standards for immersive content, there is something called webXR, and one of the things that has been proposed, I don’t know if it’s been ratified fully, is the idea of webXR anchors. Those are things that can be persisted between sessions. It’s currently only honored, to my knowledge, on other platforms, and on HoloLens, obviously; I think they may actually have proposed it. So you have the ability to identify real world features and then pick them back up between sessions. The standard itself only goes as far as saying this “website”, and that’s got a lot of scare quotes around it, is entitled to know that this thing is what that thing was last week; and then it’s up to the website to decide what to do with that. So you would need to have, you know, storage elsewhere and things like that to be able to put that voice note. You can use local storage; there are limits to the amount, and, especially on Apple platforms, there are time limits to how long you can keep it. So obviously that still only takes you somewhat closer to implementable, but there are some components of it that are implemented, and to that end there are some experiences that have been able to leverage it. If you have a native application, then things are clearer in terms of the capacity for local storage and the ability to send things around. You still need to have network infrastructure for it, but you can share over iCloud.

Brandel Zachernuk: But in terms of the sophistication of anybody having made stuff with that, I don’t think that they have got the memo that perhaps you’re sort of reading from. But it would be nice if they could; I would look forward to that very much.
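For a concrete sense of the shape of that API, here is a minimal sketch of leaving a note pinned to a real-world spot and restoring it in a later session. createAnchor is from the WebXR Anchors module; requestPersistentHandle, persistentAnchors and restorePersistentAnchor are from the persistence proposal Brandel describes, implemented on some headset browsers but not ratified; and keeping the note text in localStorage is an illustrative choice, exactly the “storage elsewhere” that the standard leaves to the website.

```typescript
// Sketch of William's "leave a note for the next visitor", assuming a browser
// that implements the proposed WebXR Anchors module plus persistent anchors.
// The "as any" casts reflect that these calls are proposals, absent from the
// standard TypeScript WebXR typings.

async function leaveNote(
  frame: XRFrame,
  pose: XRRigidTransform,
  space: XRSpace,
  text: string
): Promise<void> {
  const anchor = await frame.createAnchor!(pose, space); // pin the real spot
  const uuid: string = await (anchor as any).requestPersistentHandle();
  localStorage.setItem(uuid, text); // site-side storage; the spec only gives an ID
}

async function readNotes(session: XRSession): Promise<void> {
  // persistentAnchors enumerates handles saved in earlier sessions.
  for (const uuid of (session as any).persistentAnchors ?? []) {
    const anchor = await (session as any).restorePersistentAnchor(uuid);
    const note = localStorage.getItem(uuid);
    if (note) console.log("note pinned at restored anchor:", note, anchor);
  }
}
```

Note how the division of labor matches what Brandel describes: the platform only re-identifies the place; the note itself, and where it lives, is entirely the website’s problem.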

Speaker11: And so. Sorry.

Fabien Bentou: I’ll just jump in, because I had my hand up and need to make it formal. So sorry for...

Frode Hegland: Running over you twice.

Speaker11: No worries.

Frode Hegland: It’s a classic.

Fabien Bentou: As the video recording shows. Anyway. So, it’s definitely feasible, technically speaking, up to a certain extent, in the sense that if, say, this space is an experience that I designed and that you visit, so I have control of it, and then you drop a flag there and send me the URL or an ID or something: sure, not a problem. But if it’s an experience that Mark designed and that Andrew visits, and they did not have that feature implemented in there yet, there is no, for example, operating-system-level way to say: oh, in this virtual space I can create an annotation. I wish there were, conceptually speaking, of course. Though it has a lot of side effects, like literally doing graffiti outside in the real world: you don’t want people to do this everywhere; that’s problematic. And I put a link in the chat about federated social VR spaces: how you can go from one world to another but still keep some of your data, for example your name or your avatar. But of course, each of the experiences you visit in such a way must support this kind of protocol, basically. And in that sense, as soon as you implement it, it works, and it’s not particularly complex. But yes, it means that for the annotation to be done, the solution must support it. I really like Immers Space, which I shared there. It’s not perfect, it’s not popular, but that’s exactly the kind of use case that would be feasible with it. So if you do social VR; it doesn’t even have to be social, just social to the point of having an ID, that would be enough. But yeah, there is no operating system or platform way to do annotation across content, as far as I know.

Frode Hegland: So that is a very worthwhile discussion, as to the space of the world; and of course it relates to the space of knowledge. Not that there is an inherent space of knowledge, but how would we go about it? Obviously there are some standards in the world. How would we go about having someone work in, let’s say, my software, do all kinds of thinking and blah, blah, blah, and then decide: this just doesn’t give me the right view. So they open it in Fabien’s software. You know, are there good standards for that kind of knowledge map? And does it have the right kind of metadata to know where it’s from and how to go back? I mean, one thing I would really, really like, and I’m sure you’ve all had the same thought: I have this graph of stuff, of my work, and this bit I really would like Brandel to deal with. So I do a thing, and Brandel gets this thing, and he can then choose to action it, reply to it, ignore it. But it stays my thing? Well, no; provided he is given the access. I mean, some of this is basic HTTP, sending things back and forth, but are there things we can use so that, during the September hypertext event for instance, there can be two or three separate demos going on, but the user can send their thoughts, their stuff, around? Brandel, please.

Brandel Zachernuk: Yeah. So on the one hand, you can say that something like HTML, or docx, or some kind of document format, will be able to do this job for you. I also don’t find that tremendously satisfying: I think that .html is not a format for editing or for updating. You could think of it as an amalgam of something like HTML or markdown with something like git, so that it’s not so much the file per se that is doing the work of being mediated through these different places, but the fact that the file exists and there’s a git repository backing it. That might not be tremendously satisfying either, I will say. And, while it’s not for text and information, this is exactly what Universal Scene Description, USD, is for. It is not yet specified, so it’s just whatever Pixar says it is; but that is now a committee of Pixar and us and Adobe and Autodesk. The idea with Universal Scene Description is that it has this kind of CSS-like overriding hierarchy, so that you can do nondestructive editing on it and things like that, as well as being able to send, you know, all of the open ended data. Things that do open USD are supposed to respect things they don’t understand: they’re supposed to leave them where they are, to pass through things that are not relevant for their views.

Brandel Zachernuk: And that’s useful. Now, USD is primarily for things like making movies like Moana or Big Hero 6, and in fact there is a single USD file hierarchy for the big island in Moana; it happens to be 25GB. That’s with all of the geometry and instancing and constraints of behavior, so woe betide anybody trying to actually open it. But it is technically, you know, an existence proof that this does that. So I actually kind of like the idea that markdown or HTML plus git is that file. But USD is also an interesting one to look at. It doesn’t have the idea of text with much sophistication. Apple jammed a format in that I don’t love; it’s called preliminary text in USD, and I don’t think it would make it through committee. Autodesk is trying to propose the MText format, which is what they have there; I also don’t love it, for similar reasons: it’s just what they like, rather than what text is. And, you know, I harbor the suspicion that the committee will find it overall much more difficult to come up with a tolerable implementation of what text constitutes than any of them really understands at this point. But when they do, we can use that.
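To unpack the “CSS-like overriding hierarchy”: USD composes a stage from layers, and a stronger layer states only the properties it wants to change, letting everything else show through from weaker layers. The toy merge below illustrates just that sparse, nondestructive override idea; real USD composition is far richer, and the prim path and properties are invented for the example.

```typescript
// Toy illustration of USD-style sparse layering: each layer carries only the
// opinions it wants to express, and composition leaves everything else alone.
// This is a cartoon of the idea, not USD's actual composition algorithm.

type Prim = { [property: string]: unknown };
type Layer = Map<string, Prim>; // prim path -> sparse property overrides

// Compose from weakest to strongest, like sublayers on a USD stage:
// stronger opinions win per property; unmentioned properties survive intact.
function composeStage(layers: Layer[]): Layer {
  const stage: Layer = new Map();
  for (const layer of layers) {
    for (const [path, overrides] of layer) {
      stage.set(path, { ...(stage.get(path) ?? {}), ...overrides });
    }
  }
  return stage;
}

// A base layer defines the prim; a session layer overrides one property,
// the way an "over" in a stronger USD layer would, nondestructively.
const base: Layer = new Map([["/Sign", { text: "Jamestown Base", size: 12 }]]);
const session: Layer = new Map([["/Sign", { size: 24 }]]);
console.log(composeStage([base, session]).get("/Sign"));
// -> { text: "Jamestown Base", size: 24 }
```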

Frode Hegland: So we can, of course, cobble together our own standard based on this, which is interesting. Mark, you know much more about that than me, please. Oh, the quiet voice of Mark Anderson.

Mark Anderson: I was thinking of the xkcd: we need another standard. But I knew that wasn't what was being meant. The reason I stuck my hand up was that I was wondering if you could elaborate slightly, in terms of these different sorts of text, on what sort of axes they differ along. Because an interesting thing came out yesterday at the weekly Tinderbox meetup. One of the interesting reflections I had there is watching how people who are doing deliberate writing tend not to use things like word processors. Those are fine for doing office stuff and writing to the council and things, but if we're actually trying to write, you don't need all that. And going back to your point about way back when — when you were there on your stool in the monastery, you didn't have, well, maybe you had an italic brush and a bold brush, but essentially you just wrote. And to a certain extent, maybe we're beginning to row back — we're detoxing from WYSIWYG, back to what we're actually doing. So that's my question: how do these various forms of text differ, in ways that would make people like or not like them? If that's a reasonable question to ask.

Brandel Zachernuk: Yeah. So —

Frode Hegland: No. No, please.

Brandel Zachernuk: So, Autodesk is coming to it from the perspective that they make AutoCAD and all of the other applications where you have fancy, expensive models, and you want intermittent annotations that serve the purpose of being able to say, like, "don't drill here," or whatever it is that engineers have to say to each other. I know it's more sophisticated than that, but it's very much seen through that lens, to the point where they don't really understand what else text might need to be. Similarly, the text that's in USD is there to provide — has anybody seen the USD files for For All Mankind? It's the, you know, the pro-space propaganda thing asking: what if the Russians had won the initial space race to the Moon, and the space race was kept alive to the present day out of a sense of personal vendetta? Which, you know, is fun for me. Apple made a couple of models as part of the promotion of it back in season two, and there's a picture of Jamestown Base, which is the name of the base on the Moon, and it's got a bunch of text labels on it. And they need to be a certain size, they need to be bold, they need to be able to be oriented toward the view of the person holding the iPhone, which was the then-current thing.

Brandel Zachernuk: It does work in Vision Pro. If you download it from the Apple AR Quick Look website at developer.apple.com, then you can see it in mixed reality and spatial computing as well. But that's the extent of what they think text is. It doesn't include some of the web standards: being able to understand and honor ligatures; understanding that there's a difference between letter spacing and line spacing, word spacing and character spacing. And people may want to switch between different fonts within the same piece of copy. They might want to change font weights. All of these things are superficial to some people and absolutely life and death for others. If you're doing cross-linguistic scholarship, then you need to be able to switch between RTL and LTR languages, and to deal with the glyph rendering for Asian scripts, and ruby, and all of these markup things. So I'm deeply concerned that either you do the absolute worst job of text, or you end up with at least the full web, if not more, as a set of requirements — and everybody saying, oh, but we can have a little bold, you know, as a treat; we can put a little drop shadow on it, or whatever else. I just think that dragons are broadly roosting in that direction. So yeah, hopefully that answers your question.

Frode Hegland: Maybe. Okay, let me just ask: we have a couple of ways we can go. Texts in general; how we can present amazing stuff to persuade people this is a place they can think and get ideas; how we can represent it; how we can share it. What would be the preferred topic of the group for our remaining time? Leon, please.

Leon van Kammen: Yeah. I would like to ask specifically for some thoughts from Brandel on, you know, where the overlap is between textual documents — or more intelligent textual documents like HTML — and, for example, 3D file formats like USDZ, which can also contain text. I find this a bit of a puzzling overlap, because in a way a 3D file or model is a graph, a graph structure of meshes. But now, with USDZ, text can also be part of this graph. It used to be text or images, text or sound, and now it seems to be all converging into some kind of super-document. I was just curious whether there are some thoughts on that.

Brandel Zachernuk: Yeah. So it's — not personally, but professionally — my job, and not really anyone else's, to propose the HTML model element to the HTML specification. That's the idea that, inline inside a page, you will have the ability to see a 3D model. We do that kind of thing a lot today with something called WebGL. It's a graphics API that uses the 3D acceleration capability to compute pixels. But what that means is that the web page only sees the pixels. It doesn't see the model; it's not responsible for actually making interpretations and understanding it. I mean — did you want to jump in, or are you just — okay. So, superficially, there's not a significant difference between model and what people are doing with WebGL, and you can see stuff like that on Apple.com. But the kinds of benefits that you would glean from the web page being the one responsible for that — and moreover the underlying operating system — are that, for one, it's allowed to know more about where you are and what you are, and things like that. You know, you can't build a virtual reality headset that doesn't know exactly where your head is, but you don't have to have that everywhere. When you do WebXR — Vision Pro is down there — you have to know exactly where your head is, ninety frames a second. And that's cool for the right thing, but maybe not for every single thing where you may still derive some benefit from seeing objects exist in 3D space. If you're just looking at a web page that's advertising an iPhone, I don't actually need to know that much about you if I just want to give you the real view of how beautiful the glint of the aluminum is this year, because of how we managed to get the lighting and the art directors' whims catered to — I love them, they're lovely people, but they are very picky. Anyway. William, did you want to jump in? You had something to say?
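The model element described here is still a proposal, so any syntax below is provisional. A minimal sketch of feature-detecting it, with a WebGL canvas as the fallback Brandel contrasts it with:

```ts
// Hedged sketch: the HTML <model> element is a proposal, not settled API.
function embedModel(container: HTMLElement, src: string): void {
  if ("HTMLModelElement" in globalThis) {
    // Proposed inline 3D: the page (and OS) see the model, not just pixels.
    const model = document.createElement("model");
    const source = document.createElement("source");
    source.setAttribute("src", src); // e.g. a .usdz asset
    model.appendChild(source);
    container.appendChild(model);
  } else {
    // Today's common path: render pixels yourself via WebGL.
    const canvas = document.createElement("canvas");
    const gl = canvas.getContext("webgl");
    if (!gl) throw new Error("No 3D rendering path available");
    container.appendChild(canvas);
    // ...load and draw the asset with a library such as three.js...
  }
}
```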

William Waites: Yeah. Well, sort of that, actually. So I spend a fair amount of time reading mathematics papers, right? And mathematics papers are nice. They have, you know, a bunch of text explaining things, and some formulas and stuff. And often what they're explaining is hard to visualize, because what's being talked about is in more than two dimensions. If it's a two-dimensional thing, we can draw a picture of it on a flat thing, and that's fine. If it's a three-dimensional thing, it's much harder to draw. And it sounds like we can almost do this with what you're describing, with the model element: I have a mathematics paper, there's a figure in it, and I should be able to somehow take that figure out and have it become a three-dimensional thing that I can look at from different angles and rotate, and sort of get a feel for how it is, in a way that I wouldn't be able to do with flat paper. That could be really helpful. When you go beyond three dimensions, it starts becoming hard again, but at least three is better than two.

Brandel Zachernuk: Yes, absolutely. And one of the great benefits of an interactive three dimensions is that you can encode composite dimensions, or navigate through them by using time or gesture — hands have 27 degrees of freedom. If you have the ability to intervene on a space with multimodal, multi-dimensional input, then even if you can't take a snapshot view of what that higher-dimensional space is, based on your three-dimensional — or dual two-dimensional — perception of it, you nevertheless have the ability to develop a sense of it, as a consequence of peeking through the composite of your perception and your actions on these things.

Frode Hegland: Fabian, please. Unless there’s more on that topic.

Brandel Zachernuk: So I guess, just to wrap up, because I hadn't actually gotten to my point, my apologies: model, in that context, has the ability to do that. It also has the ability to portray it in 3D space, so that your head can be wrapped around it and things like that, in a way that is just much more efficient, but also satisfying in terms of what the job of a 3D element can be within the context of a page. And to that end, I would expect that you would have less text in it and more text for it, or around it. That's not to say it's a solved problem, but that's the goal: to have a page — and God knows what the page then has to become. But you know, that's a problem for next year, or, you know, some indefinite future where we have the time and resources to do it. There's nothing specific about next year. It's just that once we get model, then we need to think about what a page needs to be with a model in it.

Frode Hegland: Indeed. Fabien.

Fabien Bentou: Yeah. Two quick things. First, I mentioned ShiftSpace before — I'm not sure if people are aware of it, but that was 2007. It was an annotation layer on top of the web, so that if all of us, or some of us, visit a web page, we can say: oh, that paragraph is interesting. The data isn't stored on the server that hosts the web page; it's on the ShiftSpace server. It doesn't matter exactly how, but the point is it wasn't on the target. So it's an overlay on top of the web, and if the person who hosts the server doesn't like it, because we criticize a paragraph of their paper, it's still there and we can engage in a conversation about it. Honestly, I think that's amazing, and them closing is a big loss. But I don't think the idea — annotation, and social annotation — should be given up on, and that's why I was glad to use the Web Archive to link back, for people who did not use it back then, to understand a bit what it means, and even visualize it a tiny bit; just seeing the two layers, I think, helps a bit. What triggered me to insist a little on that part was Leon's question to Brandel about file formats. Yes — Hypothesis, indeed, is the same kind of principle. And I think having such an annotation layer means that, for example, when we visit one of our pieces of content, it's easy to imagine a client that goes looking for annotations and displays them in the right position in space.

Fabien Bentou: And why do I still insist on this? I'm going to implement something like this — I'm going to try it. For example, I'm giving a workshop to kids on Saturday, and they're going to have ideas, and they should be able to annotate without making a mess, so that if it's something silly, the next kids don't see it. And before the next workshop I do with other kids, maybe I'll have some time to remove some of the annotations the first kids made, because they will have actually been implemented: their suggestions will exist in the experience itself. I mentioned kids, but for us — let's say for the demo in September — I think that's also one of the most valuable things. I mentioned the example where people go there and have an idea afterwards, because they find structure in a set of documents for their own research. But for us, who provide the demo, the actual value is: "oh, you guys should do it this way, and this is better, and I need this or that." Not just: "oh, wow, everything is amazing." Okay, cool, but not productive for us — we always look for the new idea to actually implement. So if in such a demo in September there were a way to annotate, so that participants can give some kind of feedback through the content, and they can also get that annotation layer in flat 2D when they go back to their office, I think that would be valuable for both parties.
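The overlay Fabien describes maps closely onto the W3C Web Annotation Data Model, which is the shape Hypothesis uses. A sketch of one annotation anchored by a text quote; the spatial `position` field is an invented extension, not part of the standard:

```ts
// One annotation in the W3C Web Annotation Data Model.
const annotation = {
  "@context": "http://www.w3.org/ns/anno.jsonld",
  type: "Annotation",
  creator: "https://example.org/users/fabien",      // illustrative URL
  body: { type: "TextualBody", value: "That paragraph is interesting." },
  target: {
    source: "https://example.org/paper.html",        // the annotated page
    selector: {
      type: "TextQuoteSelector",                     // anchors by quoted text
      exact: "the paragraph we are discussing",
    },
  },
  // Hypothetical extension: where a spatial client might display it.
  "ext:position": { x: 0.5, y: 1.4, z: -2.0 },
};

// Stored on an annotation server, not the target host, so the overlay
// survives even if the page's owner dislikes the commentary.
console.log(JSON.stringify(annotation, null, 2));
```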

Speaker11: Yes.

Frode Hegland: So I'm going to ask some very hard questions — hard as in firm, not difficult. Can we make a way to pin to a surface in AR where, when the user chooses to pin, if they haven't already defined what the surface is, they have the choice to do so, and that is entirely up to the user? So I may choose to pin a knowledge thing, whatever it is, over to that wall, and I've defined that as my front wall, or main wall, or blue wall. It doesn't matter; it means something to me, at least. And the system then stores that, plus other things, like whether it is a vertical or horizontal surface, that kind of stuff. And then isn't there a way we can share that with other people when we share our knowledge? I mean, in the real world we have geographic space, which is great. But there's got to be a way where — you know, look at what Fabien has built, where you move things with your hands — it's possible. I've heard some really good mentions of USD and all these other things, but isn't there something simple and open? You don't need to worry about a server; you literally, you know, throw it to someone. Couldn't we make that? Because here's the thing: we shouldn't make the perfect the enemy of the good, right? If we define, let's say, a volume — to use the spatial computing language — that has x, y, z as a scaling thing on the outside, and nothing goes outside it, there's got to be a way we can hand that over from one person to the next, no?

Frode Hegland: Now, before I hand over to Peter, I just wanted to add to that. We're talking about addressability and actuality — what actually is in the world. One of the things that I've been working on really hard — in the sense of frustration hard — with Visual-Meta is that you can instantly connect documents, you can instantly see them, because we just happen to use the name of the document as one of the citation means, one of the addressability ways, right? It's not super robust: someone can change the name of the document and the whole thing breaks. But it is a real-world thing, and it works if we are not trying to make it nuclear-bunker-proof. If we're talking about something that is decent, that works within a community of trusted sources, an academic community — isn't there some amazing way that we can just, let's call it, mock it up, prototype it? Something like that. I'm not giving away the mic until someone says yes and tells me how, because this has to be possible, guys.
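A minimal sketch of the named-surface idea, with entirely invented field names: only relative placement travels, and the recipient maps the sender's surface names onto their own room.

```ts
// Invented shapes for what Frode describes: user-named surfaces, pins stored
// against them, shareable without the sender's room geometry.
interface NamedSurface {
  name: string;                 // "blue wall" — meaningful only to the owner
  orientation: "vertical" | "horizontal";
}

interface PinnedItem {
  surface: string;              // refers to a NamedSurface by name
  u: number;                    // position on the surface, normalized 0..1
  v: number;
  content: string;              // URL or inline payload for the knowledge item
}

interface SharedSpace {
  author: string;
  surfaces: NamedSurface[];
  pins: PinnedItem[];
}

// The recipient maps "blue wall" onto whichever of their own surfaces they
// choose; only the relative layout is shared.
const example: SharedSpace = {
  author: "frode@example.org",
  surfaces: [{ name: "blue wall", orientation: "vertical" }],
  pins: [{ surface: "blue wall", u: 0.5, v: 0.3, content: "https://example.org/map.json" }],
};
```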

Brandel Zachernuk: Well, it depends on what you mean by possible. In terms of the underlying technical capabilities, the hardware that most people are talking about certainly has that capability, but it really depends on whose ecosystem and constraints you're living with. In WebXR it would be possible to have something that's close, but you would be beholden to somebody to supply some of the infrastructure. On Apple-like ecosystems, web apps are the only virtual reality, so there's no meaningful way to relate it back to those spaces. But also, the spatial computing modality and metaphors are largely constrained to saying: you have carved out a space that is entirely unencumbered by physical objects right now, so I get to play with this two-by-two-by-two-meter cubic space, and you have the ability, by virtue of SharePlay, to interact with other people with and through it. But there's no expectation that it's the same two by two by two meters — it's conceptually coordinated, not geospatially bound to the same location. Two people who share it are expected to be doing so remotely. So it's conceptually coordinated, but not physically, in terms of the underlying space.

William Waites: But you can do an HTTP request, right?

Speaker11: Oh sure. Yeah.

William Waites: Yeah. And so I can encode, in either GET parameters or some convention on URIs, the geometry that I'm interested in, in an HTTP request, and I can get back some data. Right? I mean, we can implement that in an afternoon. And then you can query it and I can query it — we can all have an index that we can query. And then what we put in the index, well, that's the next question, right?

Speaker11: Yes. Absolutely.
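William's afternoon version might look something like this; the endpoint and parameter names are invented, and the index format is, as he says, the next question.

```ts
// Encode the region of interest in GET parameters; get spatial items back.
interface SpatialHit {
  id: string;
  position: [number, number, number];
  content: string;
}

async function queryIndex(
  origin: [number, number, number],
  radius: number
): Promise<SpatialHit[]> {
  const params = new URLSearchParams({
    x: String(origin[0]),
    y: String(origin[1]),
    z: String(origin[2]),
    r: String(radius),
  });
  // Any HTTP server the group agrees on; hypothetical host and route.
  const response = await fetch(`https://index.example.org/query?${params}`);
  if (!response.ok) throw new Error(`Index query failed: ${response.status}`);
  return response.json() as Promise<SpatialHit[]>;
}

// Usage: everyone can query the same index.
queryIndex([0, 1.5, 0], 2).then(hits => console.log(hits.length, "items nearby"));
```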

Frode Hegland: So the exciting thing, then: you know, Andrew is building something really, really incredible, as most of you have seen — you all have access to it. Yeah, I know Peter has an agenda; we just have to get past this bit. I just want to see the realities here, because the reality is that if we are in the same room and want to share that space, there are a lot of added complications. But say our knowledge thing is in a cube, right, and we're at the hypertext conference, and someone has built it in Fabien's space. They want to be able to — you know, I just love gestures — grab that thing, make it small, and throw it somehow to another user. Not necessarily throw it, but send it to another user in a completely different system. If we have the coordinate space, if we have the means of sending it, all the little nodes will have to be written as being something — one of them could be a 3D model or a video hosted somewhere else, of course it could, or it could be just text with other stuff. But isn't this really within the realm of something we can build? Make it really simple, use the open strategies of what people do, so that when the other person receives it, it's kind of floating as a small thing, and the user gets to decide the size. So we reduce a lot of the external context, so to speak. Because if in Fabien's space it's, you know, ten meters, and then Mark Anderson wants to use it in one meter, that should be okay, right? It should be possible to scale the whole thing. Can we solve this as a community? Do we want to solve this as a community?

Frode Hegland: Leon? You're supposed to say yes — a lot of what you're doing relates to exactly that. Okay.

Leon van Kammen: Well, I want to say that I think it's not that impressive — I don't want to insult you, but I think it's not that impressive, because from the audience's perspective it's not really clear what is actually happening. We could make or trigger an animation where this or that happens, but the whole transport layer — you know, sending it to another user — is completely invisible. And that is the hardest part of the problem, I think.

Frode Hegland: But if it can be encapsulated, it can be sent in any way. Let's say it happens to be on Apple devices: it could be a text message. If it's between different platforms, it could be via a server. But the point is, it will be so powerful if we all, in a nice way, compete for these knowledge bases. You know, Andrew is building phenomenal stuff, but it would be amazing if the user in the demo on the day can make a modification, do some sort of interaction, and then go over to Fabien's booth, so to speak, and it's the same thing, with that change — and then maybe even on a flat thing. I think that is what's key: not just having one thing, because that can feel almost like a canned CD-ROM demo, you know what I mean? Peter, is it on the same topic, or different? Just want to check — I really don't want to keep you hanging so long. I'm sorry.

Peter Wasilko: A couple semi-related things that have been queued up.

Frode Hegland: Okay. Let’s just see if anybody has anything else on this. And then we go straight over to you. Leon, were you talking? Did I interrupt you? I’m sorry.

Leon van Kammen: And so I'm still thinking it's a bit like — if we substitute the conversation with copy-paste, and we're talking about a networked clipboard, basically clipboard over the internet or whatever — then I'm still having trouble seeing the full package. Because if it's only the visual representation of a copy-paste from one person to another, then, yeah, everybody understands that. But the audience is going to think: how is this working? So the "what" part is easy, but the "how" part is such a can of worms that I think, you know, that is the part which impresses people or not. Because, as Brandel also said, the underlying ecosystem is pretty important. Whatever the user is authenticated to is going to be pretty important to a lot of people. Just a simple HTTP request as a demo is not going to impress people; and on the other hand, just a visual presentation is also not going to impress people. This full package you're talking about will inevitably involve a certain transport layer, and the choice of what that's going to be — I don't know. I don't know if there is an open standard for a shared clipboard or that kind of stuff, or whether there have already been some experiments in the W3C space.

Frode Hegland: On that — and after this, I promise, Peter, it's your turn. As you all know, Visual-Meta is partly clipboard as well as written, right? So the thing is, what we could do — we could even copy this space, and copy it in multiple formats, as you can do with the clipboard. If you go to Microsoft Word, let's say, you paste it and it pastes all kinds of stuff — JSON style, whatever, it doesn't matter — but it says: this is a special environment. And we send that to someone else, open it up in that environment, it gets read, and it comes back out. So of course it would be nice to have proper digital sending, so to speak, like Zoom, but it could also be stored like that. I just think that the problem of describing a knowledge environment and sharing it will be really, really huge. Just a very naive thought — I am a very naive guy. Peter, please. And then I'm clicking on your links, Fabien. Well — Peter.
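The multiple-payload idea maps onto the web's asynchronous Clipboard API, with the caveat that browsers generally accept only a few MIME types from pages (text/plain, text/html, image/png), so a structured payload would have to ride inside one of them. A sketch:

```ts
// Write two flavors: readable text for Word-like targets, and the structured
// payload tucked inside the HTML flavor for environments that understand it.
async function copySpace(spaceAsJson: object, plainFallback: string): Promise<void> {
  const json = JSON.stringify(spaceAsJson);
  const html = `<div data-space='${json.replace(/'/g, "&#39;")}'>${plainFallback}</div>`;
  const item = new ClipboardItem({
    "text/plain": new Blob([plainFallback], { type: "text/plain" }),
    "text/html": new Blob([html], { type: "text/html" }),
  });
  await navigator.clipboard.write([item]); // requires a user gesture
}

// Word sees readable text; a spatial environment digs the JSON back out of
// the HTML flavor and rebuilds the space.
copySpace({ volume: [1, 1, 1], nodes: [] }, "Workspace snapshot, 0 items");
```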

Peter Wasilko: Okay, a few thoughts. First, for the demo, I think it would be highly desirable if we could have sample data from more than one academic discipline. So not just one example drawn from, say, literature — the works of Shakespeare, or all the Sherlock Holmes stories ever written. Maybe Sherlock Holmes would be really good, because that tickles people's fancy, and we could have visualizations of the locations where stories took place, so you get that geospatial link in. We could have different plot elements that recur — just look at the different tropes that appeared in them. That's one possibility for a literature-grounded kind of person. Then some sort of timeline example — a history of the Reformation or something like that — where we get sample data with a temporal layout, and, you know, more than one stream of events juxtaposed: we could have an architecture band, a political band, a works-of-literature-coming-out-at-the-time band, that sort of thing. That would get people with a historical orientation. Then we should definitely have some sort of bioinformatics choice, which is really hot with people these days. I don't know if we have anyone in the group with that kind of bio background, but I'm sure we could find someone who would come in and suggest how they'd like to have some of their data visualized.

Peter Wasilko: One example that comes to mind there was the work that the Human-Computer Interaction Lab at Maryland did about normalizing event sequences for medical studies. What they found was that you'd have a bunch of people coming in at different times presenting with symptoms, and what really mattered was what events happened relative to the onset of a particular point in the series of typical events. So you wouldn't really care that one guy came in last Tuesday and another came in the next Wednesday; that's really irrelevant from the perspective of analyzing all of these medical records. What you really want to do is temporally align everything: fever reaches a hundred-and-whatever degrees Fahrenheit — okay, we normalize all of our temperature records around that point. So you might want some sort of 3D thing like that, and you get somebody with a medical perspective. And software engineering definitely would have to fit into this: trying to visualize our own system in 3D, say. How could you represent that? You have projects, you have sub-projects, so you have a natural hierarchy there automatically. How can we play with that spatially? Cyberspace: First Steps even alluded to that, suggesting plotting things based upon the size of a code base, edit frequency, all those kinds of telemetry data that we can get out of studying a Git repository's history over time.

Peter Wasilko: So we could have a booth for each one of those disciplines, and then, across those disciplines, try to find a set of interactions and affordances that would be applicable in all of them. That's where you start getting into the notion of your clipboard: do you want to grab a hunk of data? Name a hunk of data? Describe a filter on a hunk of data? Name that filter, so that I can take the filter and give it to Leon, and Leon can drop it into a completely different visualization — but the underlying filter, of how to select stuff from your knowledge base, is relevant there. Then I kind of like the idea, from games, of having a tableau of cards, so we could describe some logical organizations of our three-dimensional spaces. Frode might have one way that he likes to represent things. And again, going at a logical level, abstracting out what the underlying data is: he has something that's his primary, number-one display; then he might have a number two — whatever it is he's concerned with, he might want his number-two thing arranged as a treemap.

Peter Wasilko: He might want his number-three most important thing done as a perspective wall. And we had stuff like perspective walls and cone trees — those visualizations from the Xerox PARC work, dating back decades — so we could find some of those stock ways of visualizing things in 3D and bring them forward as named entities. So you get the core elements: your set of data, your filter on your set of data, your abstract arrangement of things based upon importance. And then you could assign different things to the different importance slots. So then Frode could say that he wants to take William's data and display it on his secondary visualization, and he wants to apply this visualization strategy. And the composite of those elements should also be able to be factored out, so that you can say: capture what I'm looking at now, give that a name, and hand that off to someone else in the group. So we're getting multiple levels of abstraction. You have some core visualizations and some core services, but try to think in terms of a composite toolbox with a bunch of different pluggable elements — a reasonable number of them — and then let all the different permutations get played with at the demo day.
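Peter's composable toolbox — named data sets, filters, arrangements, and captured composites — could be sketched as a handful of interfaces. All names here are invented:

```ts
// Filters and arrangements as named, shareable entities.
interface NamedFilter<T> {
  name: string;
  predicate: (item: T) => boolean;
}

interface Arrangement {
  name: string;                            // "treemap", "perspective wall", ...
  layout: (count: number) => [number, number, number][];
}

interface CompositeView<T> {
  name: string;                            // "capture what I'm looking at now"
  data: T[];
  filter: NamedFilter<T>;
  arrangement: Arrangement;
}

// Leon can drop Frode's filter into a completely different visualization:
// the selection logic travels, the layout is swapped.
function applyView<T>(view: CompositeView<T>): [T, [number, number, number]][] {
  const selected = view.data.filter(view.filter.predicate);
  const positions = view.arrangement.layout(selected.length);
  return selected.map((item, i) => [item, positions[i]]);
}
```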

Frode Hegland: That was a lot. Thank you. Yeah. Anyone have specific comments?

Brandel Zachernuk: Yeah, I mean, I think those are all reasonable candidate domains. My only concern is, you know, having the time and expertise to treat each of them well enough that it's helpful to propose things in that way. And as I've said before, I'm afraid I'm not capable of providing much substantive support to that end. But definitely, I think being able to draw people in with at least the provocations of those things is useful. One of the things that people do in places like Apple, but also elsewhere, is draw a lot of pictures, ideally with the help of expert practitioners. You get people to come along and presume things about how difficult it will be, but with at least enough consultation to be able to say: if somebody built such a system, then it would be useful — doing a lot of faking and fakery. And you would be surprised how little people outside of the actual commercial R&D world do that. I was at the User Interface Software and Technology conference in San Francisco last year, and a lot of people who were doing really cool things, at the poster sessions especially, had interesting interventions — specifically interventions on real-world physical objects and various things. But there was only one group who had actually rendered what mundane life would be like with them: you know, what would an apartment be if it had these things integrated? So it's not easy, it's not trivial, but it's at least easier than building the full working prototype. So if people have an interest, it would be great to actually contract artists and designers to sketch these things out, as well as to do the technical proof-of-concept work in some hopefully finished places, to prove it out.

Frode Hegland: Mark, please.

Mark Anderson: Yes. I'm sort of looping back to the start of that, on the timelines. I managed to find something, because I remember Ben Shneiderman did some stuff, and there was a guy at Southampton, at one of the hospitals, who was using, I think, one of the tools that grew from that. And it was this amazing thing: if you start to plot someone's interactions with the medical system, and you even just color-code what the intervention was, people could almost tell by the shape of the timeline: oh, that person's got this condition. Not quite that simple, but in a sense that was the underlying thing being hinted at. And something maybe closer to us — because I was totally caught by that last point: the real difficulty here is actually building necessary and sufficient data to make it look like something that might be good when it's finished. There have been more people turning up at the hypertext conference from digital humanities, and I thought quite an interesting area would be — because some of them are looking at past literary material, which has an innate time element to it — the work people are doing with what they call standoff metadata. This is basically about the really rich linking you need where you can't do basic markup, because all the markup needs to sit on top of one another, you know? So it's not like doing HTML tags or something; it's messier than that. You're tying things to characters within a probably textual underpinning. That might be something quite interesting to look at as a demonstrator, to make it believable that something could be done.

Mark Anderson: And I was just thinking, for instance: we have a timeline in Future of Text that's got lots of stuff in it. We don't have to use the whole timeline — I know it goes back to "first the Earth cooled" sort of thing — but we could take most or some of it and think about it. So that's almost the simple bit: you've essentially got an axis in your XR space that is time. The question is: okay, so what do we usefully gloss onto it? How do we meaningfully decompose what otherwise just sits on a line? I don't know the answer to that, but it's a provocation that's been there for me for quite a while. I think that's an interesting thing to do. And in terms of people being drawn in or impressed by it, I think it offers some tractability, because it's inherently doing something you can't do even on a piece of paper — that's 2D, and you can put more things on a screen, but eventually it's just noise, because it's too much; you can't see the wood for the trees. So I think there's something we could potentially do there. We can't do all subjects, but something that steps us outside the computery world — the tech-focused thing — something basically involving time, might be interesting. Great.

Frode Hegland: Yeah, there's a lot there. And I think, you know, we only have two meetings a week, and one of them is on the specific project, but there are so many levels to this. I do have a question about the sharing; I just really think that is important. I really think we need to make something where a developer can also come in, try our system, and just say: oh yeah, I want to do something better — and they can take the same data and do something better. I really think that's important. So let me just present this, as most of you will completely understand already, because I kind of said it before. Imagine you're in a space and you perform a copy command. Then you go to another space and you perform a paste command. That paste could be plain text, because almost anything can be described in text, at least to an extent. Right? And you get two things. Of course, if the system is optimized for it, you wouldn't need two things, but even in a non-optimized system you get two things. One is the thing that you paste in the body of a document, so it will render as maybe something like "spacecraft number one." And then you have something that goes in the appendix when you put it in this thing — Word, HTML, PDF, whatever. When that's opened up in a WebXR environment that understands this stuff, the user will be able to, with their hands or whatever, literally extract that spatial thing and anything on it. Some of it's going to be included if it's just text; there can be other things. Is that something we can do, right, guys? Yeah. And is that something we should do? I really, really think we should, because what I saw earlier today, that Fabien has with his environment, is completely different from what Andrew is doing, but it still is knowledge in space. Over, over, over.
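A sketch, with invented markers and field names, of the two-part paste described here: a readable line for the document body plus a plain-text appendix block that a spatially aware reader could later extract back into 3D, loosely in the spirit of Visual-Meta.

```ts
// Two payloads from one copy: inline body text plus an appendix block.
interface SpatialNode {
  label: string;
  position: [number, number, number]; // normalized to the enclosing volume
}

function pasteAsText(
  title: string,
  nodes: SpatialNode[]
): { body: string; appendix: string } {
  return {
    body: `[spatial map: ${title}]`, // renders inline, e.g. "spacecraft number one"
    appendix: [
      "@spatial-map-start", // hypothetical delimiters a reader would scan for
      JSON.stringify({ title, version: 1, nodes }),
      "@spatial-map-end",
    ].join("\n"),
  };
}

const { body, appendix } = pasteAsText("spacecraft number one", [
  { label: "engine", position: [0.2, 0.5, 0.5] },
]);
console.log(body, "\n\n", appendix);
```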

Brandel Zachernuk: So — William, if you wanted to jump in, feel free. But I can answer a little bit more about this.

William Waites: No, no, go ahead. It's a thought that would take us in a different direction. Please go.

Brandel Zachernuk: Cool — well, I'll be brief. So, in a sense, yes, but the details of which regime these high-level capabilities exist under matter a great deal. Copy-paste is, on the one hand, just an abstract philosophical concept related to the idea that you can hold on to some data and opaquely retain it for the purposes of duplicating it somewhere else. That's "copy" with scare quotes. And then there's copy in an operating system, and in order to actually copy, you need a trusted user gesture, which means the operating system understands and respects that you're going to be able to do that. So, for example, one of the neat things that you've managed to achieve with Visual-Meta is that you have the ability to use the actual OS copy to do those things — but it also carries some constraints in terms of where and how you'll be able to do that. For example, WebXR doesn't have the ability to construct safe user gestures such that you're able to retain that. Which is not to say, though, that you can't have some conceptual equivalent of copy-paste that you can lean on within the context of a WebXR regime.

Brandel Zachernuk: What it means, though, is that it's only relevant from within something that either is in that WebXR context or is related to it. And I say this not to rail against, you know, the hegemony or the tyranny of the specific boxes that Apple may or may not want to put you in, but just that the specifics of where these things are achievable, and what the related constraints are, kind of matter. I mean, if you're inside the company and prototyping, then Bob's your uncle — there are all sorts of secret cheat codes, but they can't go into published apps, either by us or anybody else, and the same would go for any kind of research. So in a research setting, academic or corporate: yes, absolutely. In something safe for the public, to distribute to the millions: still yes, but you need to start piling on provisos and technicalities and additional caveats as to where and how, and what kinds of things people have to have installed, and other bits and pieces like that. And that's not to nitpick; it's just that it matters where, and how, and how much you expect this to be true.

Speaker11: Right.

Frode Hegland: Okay, that is important and valuable. Now, as an analogy, the way it works in Author: when you copy from Reader, you have different payloads in the clipboard. So when you paste in Author, it does all kinds of magic; but if you paste in Mail or Word, it pastes in a different way — it pastes the text, with quotes and stuff underneath, and so on. So I hear what you're saying, and I absolutely agree. I could imagine that what is pasted is also multi-layered. If it's just plain text, it could even include an auto-generated JPEG of what it looked like when it was made; it could just be a list, absolutely. But the thing is — going through yet another revision of corrections for my thesis, what comes up again and again is that the two things we really have to look at for interactions are views and connections. This discussion seems to really relate to that, and I think we're going to do some amazing things together. Those are really nice lenses indeed, William. So I think it's really useful to have this discussion — not just having someone else implement it, but us doing something that's good enough, and hopefully others will jump in. Leon, please.

Leon van Kammen: Yeah, I think you're totally right. I was also thinking — something I recently kept repeating to myself is that sometimes 2D is king. Whatever beautiful 3D solution you have in front of you, it also matters to what degree you want to piggyback on existing stuff. I could imagine, if you just accept the fact that the fastest way to copy-paste to another user right now is 2D — and since, for example, Vision Pro and the rest are going to embed more of our 2D digital world into VR — perhaps that imaginary demo could also be a copy-paste from a 2D application. Then you don't really have to worry about the whole transport and security thing; your starting point is basically: I'm receiving a copy-paste from somebody in a program, and we go from there. Then we're skipping a lot of critical comments. Yeah, that's basically my thought.

Frode Hegland: I like the notion of cheating. I like the notion of graceful degradation of what is being transferred. Absolutely.

Leon van Kammen: Yeah. And just to add: the most important thing is that somebody can copy-paste something to you, and if that is a string of visual metadata which you, the receiver, can sort of explode back into something spatial, that would be pretty impressive — a sort of next-level projection. Because right now we are projecting HTML text into some kind of web experience, a web page. It would be impressive to see a piece of text — whether it's copy-pasted or just served somewhere on a server — accepted by the user and projected spatially after they approve it, maybe with some kind of pop-up box: do you want to explode this into your world, or import this into your world?

Frode Hegland: Yeah. Thank you. That's thoughtful. Absolutely. Brandel, please.

Brandel Zachernuk: On the subject of a string of projected spatial data — William, do the lights behind you tell you anything? Do they mean anything, or are they just cycling?

William Waites: Oh, at the moment they're just cycling; it's just decoration. But the entire space is programmable. There's an MQTT server; you can tell this light to do that, or whatever you like. You can write a program. So at the moment, not really — but they could.
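Driving such a space over MQTT is a few lines with the mqtt.js client; the broker address and topic layout below are invented, since the details of William's setup aren't public.

```ts
// Publish a color to one fixture — spatial data living in the room itself.
import mqtt from "mqtt";

const client = mqtt.connect("mqtt://lights.example.org"); // hypothetical broker

client.on("connect", () => {
  // Topic naming is a guess; payload is a simple RGB triple as JSON.
  client.publish("space/ceiling/ring/3", JSON.stringify({ r: 0, g: 128, b: 255 }));
  client.end();
});
```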

Brandel Zachernuk: Yeah. So, I mean, it was flippant, but it's also true: these are the kinds of things that spatial data can come from, and represent, and live in. And I think that's really important. The multiplicity of the destination spaces that we, and the information, reside within means that we need an intrinsic flexibility of display, to be able to have things do things for us in different places, for different reasons.

William Waites: Oh, I have a whole notional plan — one day maybe I'll get around to it — for a navigation system for use down below in a sailboat, that would give you quite a lot of information about what's going on just by glancing at a ring of these around the ceiling. It would be great.

Frode Hegland: Yeah, that would be super, super cool — and super related to what we're talking about here.

Frode Hegland: You know, we are talking about many ways to have things in the room now. I may have to log off in a minute — hang on, let me just check. I'm so sorry.

Speaker11: Okay. I have to jump into my day pretty soon.

Frode Hegland: Yeah, I have to go to a meeting — it's Edgar's teacher thing, and it starts on the hour, when we stop. So, next time: Wednesday is obviously about the project; on Monday we'll move on to other things. But please think about two things, if you have the chance. In addition to interactions, please think a little bit about this sharing stuff, and also about who we should invite for the book and the symposium. If you have any ideas, just email them or whatever, and then we'll go through them as a group and decide where we want to go. Of course, you're all extremely invited to both the symposium and the book — I thought I'd just mention it for the sake of it. Any final questions or comments?

Brandel Zachernuk: I'll be out until April. I have a combination of things: I'm going to a symposium in Markham on Wednesday, and then I'll be in Seattle for the W3C Immersive Web discussions that I've been pointing to today. But I look forward to hearing everything that comes out of the discussions in the next week and a bit.

Frode Hegland: Yeah, thanks for saying that. I'm going to Asia on the 4th of April, so I'll be able to connect at all times of day. But if anyone has suggestions for what I should see in Beijing — I'm there for three days, and then Tokyo — please do, whether you know people or not; I'll be very happy to. I'm just looking at the comments here. Yes. All right guys, thank you very, very much for today. I look forward to next week. Brandel, have an educational and good time with your other events. Bye, guys.

Speaker11: Bye bye.

Peter Wasilko: See you Wednesday.
