25 March 2024

Frode Hegland: Okay, so I’m two minutes early to work.

Fabien Bentou: I don’t know.

Frode Hegland: Hey, man, how are you? Oh my God, you crept up on me. Here I am, working peacefully in Author, and you come and ruin my Monday morning, afternoon. Anyway, I’ll go then. Bye bye. Do not do that, Fabien. Oh my God. Anyway, how are the remaining two visually present people? Oh, good. You’re visual again. You know, Fabien, I want to come see you in Paris soon. Are you going to be in Paris?

Fabien Bentou: Depends. When. Yeah, well.

Frode Hegland: That can be negotiated.

Fabien Bentou: I’m just not going to be there during the whatever games happening soon-ish.

Frode Hegland: Oh yeah. The throwing things around and running and stuff, aren’t they?

Fabien Bentou: I think it’s in the summer, around early August. I’m not sure. But yeah, that’s when I plan not to be there.

Frode Hegland: Yeah. No. Cool. You’ll be in London. We love that. I’m just sick and tired of not seeing you guys. It’s not the best. So Deeney won’t be here, she told us. But there is still a Deeney kind of project-related thing to look at, which she’s working on. I just wanted to have a few minutes with you guys on that, and that is just stepping back, in fact. And so, stepping back, I’m going to share my screen briefly, because the right people are in the room. Oh come on, no one is going to finish that sentence? What happens? What kind of Hamiltonian writes anyway? So this has been a bit stressful for me. You know, I have said, and I do mean, that the work we’re doing for Sloan (and don’t worry, this is not going to turn into a Sloan day) is going really well. And we’ll be fine in terms of what we show them, because we’ll show them options. But at the same time, just to spend a few minutes on, to paraphrase Doug’s question: what the hell do we want to augment, really? Right. And then me, I don’t know about you, I’ve been putting everything into knowledge graphs and maps and stuff. It doesn’t work for everything. Very often the list is the best way, right? So I started doing this kind of list thing that we all do every once in a while. And what I want to show you and ask you a question about is this very last sentence: organizing and finding books, articles and notes. Isn’t that what we ourselves are most interested in? In addition to, of course, that when you read something, this thing should help you find out if it’s nonsense or not. But isn’t that a key thing? Oops, that was the wrong asterisk. Not a rhetorical question.

Fabien Bentou: I would argue it’s only an intermediary step. Like, I don’t think we’re supposed to organize for the pleasure of organizing. Yes, I also have a bunch of notes, and I’m both very frustrated and happy with my wiki. But what really gets me excited is when I rely on it to do something: if I find some information I stored months or even years ago, and I use it. It can be for a project, or even just to make me feel good. Like, I think we discussed whatever topic, however long ago, and just being able to verify that the information indeed was exchanged makes me feel safer, I guess, and probably better. But it is not just the organization itself. The organization is part of a broader process, which I guess is different for everyone. But I think it goes beyond this, I would say.

Frode Hegland: I completely agree. By the way, Peter and Leon, we were just spending a minute stepping back a bit to find out: forgetting all the other things, what do we want to augment? And having written lists and all kinds of nonsense, my sentence currently is pretty much (where did I put it?) organizing and finding books, articles and notes, basically kind of knowledge management. And the reason this is happening to me now, having been all, my gosh, kind of a thing, is... Good morning, Andrew, by the way. Andrew, I’m so impressed: being a programmer and an artist, to be up at work at 8:00 in the morning is going above and beyond. I couldn’t always do that. Right. So here’s the thing with the Vision Pro, which unfortunately so far Andrew’s used a bit and Fabien has used. Andrew, you’ve used it a few times now, right?

Brandel Zachernuk: Okay.

Frode Hegland: So the thing is, this is what I feared for the two years before it was released. Apple has now defined some things for us in WebXR. We are missing the background; we won’t be able to use the room. And according to what Brandel says, that won’t be possible for a long time, for privacy reasons. And that’s fair enough. So that means that if we want to use memory palaces, something Peter has talked about a lot, which I’m finding to be more and more important... I had this obvious idea the other day: coming from my software, of course, you’re reading a document and you choose to do the map view. But of course it’s a cube, a three-dimensional graph. Okay, we’ve talked about it a billion times. Now, in and of itself you could scale that, and it could be nice. But you then need your other things in the room, right? So the problem with the Vision Pro is that if you place one of these over there, and one over there, and over there, and then you restart your Vision or close the programs, when you bring them up, they’re gone.

Frode Hegland: It doesn’t have persistence. I’m sure they will have window management or space management at some point, but they don’t have it now. The same goes for the way you can connect these things. Also, the maximum size of a volume is two meters by two meters by two meters. Right. So they have all these things in reality. So even if we went commercial, it would be difficult. But the frustration that is so huge is this: I’ve now been showing it to people who are more like us in the community, not just friends and parents. They really, really lock on to the fact that you can place things around the room. And I had a really nice discussion with Mark the other day where we were talking about this issue. You know, we were talking about: are we really talking two-and-a-half-D or really 3D? If you’re sitting at a desk, is 3D that amazing? It’s good, but it’s not that amazing. What is amazing is if you can get up and go to the other side of the room, because that’s where you have that stuff.

Brandel Zachernuk: Okay.

Frode Hegland: And that isn’t possible now, because if you do it in WebXR, you’re going to trip over a chair or a table or whatever. It’s very inelegant. And you can’t really do it in native. So I’m really wondering what the two yellow hands will comment on that. You first, Mark, and then Fabien.

Mark Anderson: Okay. This is sort of partly responding to your earlier question about, well, what are we augmenting? I mean, I struggle with the question only because it seems too literal. I don’t see myself organizing books. I see myself organizing knowledge in my thought. Whether the object I’m dealing with is a proxy for a string of text, or a table of numbers, or a picture of something is completely by the by. The intellectual work, the augmentation I’m doing, is about the structure, which is broadly hypertextual. It’s creating the connections and the understanding of the whole. To the extent that I actually find putting literal labels on it makes it harder to work with, because you get lost in the detail of what that thing is, which is to miss the point of what you’re really doing. And it’s been interesting, for instance, to look at our experiments thus far in this work, in terms of what we started doing: deconstructing documents. So we’re basically taking them almost into their internal hypertext and relations, both within the document and beyond it. And this is where the malleable nature of XR is interesting. I can’t get excited about 3D per se. It has all sorts of uses, but for knowledge and tools for thought, which is really where I come from, it’s the plasticity of the thinking space which is the real augmentation. The rest of it is as aesthetic as you want to make it, and that will vary from person to person. But the real benefit it’s giving, which isn’t necessarily available elsewhere, is this ability to take things that in most of our other engagements are somehow fixed or solid or not all in one place, and to be able to draw them together into an interrelation that we find meaningful for us in the moment.

Frode Hegland: But talking of that... yeah, talking of which: can you just show me the possible textbook? And point made: you just used your spatial memory. That was a trap, a fun one, just to tie into what we’re talking about. No, I’m not arguing against you; I just thought it was fun. There are different kinds of hiding and finding, and with a concept you wouldn’t necessarily do that. I just thought of it because of Peter.

Mark Anderson: I totally agree in that sense, in that, you know, I can’t find my music without the right album art. It’s just not known to me. Right? Plain text search ain’t going to help, because that’s not how I store it. So no, I get that. And I wasn’t trying to be provocative in my earlier comment. It was just to make the point that there’s an unintentional sort of misstep, I think, in thinking too literally about whether something is a book or something else. Nor does that argue against the really powerful things like using memory palaces, but those tend to be very, very individual. In the broad sense of sensemaking, I think it’s just the objects. I think about Fabien’s work and Leon’s work, and they are quite abstract, and in a sense all the nicer for that, because I’m not being seduced by the fact that this one was a book and that one’s a paper or something. Because actually, in the moment, it doesn’t really matter. What is interesting is the affordances they might offer me and the interrelations that I might make between them. Because essentially, certainly in a sort of thinking or knowledge sense, that’s really what you’re trying to do. You’re trying to draw associations, especially ones that aren’t yet explicit in any form, because you’re effectively trying to surface what is presently hidden.

Frode Hegland: Yeah. Except for one thing. When you want to cite the paper, you need to find the paper, not the idea. So that’s why this is such an interesting and useful discussion: because of the different needs for the different pieces of information.

Mark Anderson: Yeah. No, indeed. But the way you’re going to find the paper is search. I mean, that hasn’t changed as a task, so that’s quite a defined thing to do.

Frode Hegland: Yeah. No. Absolutely. Fabien, please. Sorry.

Fabien Bentou: Well, actually, I’ll jump on that last sentence: search. I think the idea of spatial computing, where you would have your virtual library as something that is not listed (let’s say a list of ISBNs, of ideas, etc.) but rather things in space, is to piggyback on, or hijack, our sense of moving in space to remember a path and whatnot, which is good for most of us, but not all of us. It is kind of trying to redefine search, though it doesn’t have to be exclusive. It’s not a coincidence, but I was working on search within my notes just before the meeting happened, and it was not a spatial process. So one can also imagine, like here, a row of books or papers and everything, and then physically grabbing them, but still having search, and let’s say novel ways to do search and retrieval and all this. It’s not necessarily only one way or another, and maybe finding which one is the best in which context or in which process might be the most interesting question. Like I was saying, I’m not giving up on good old search, even just keywords, or eventually semantic search with embeddings or whatever is going to be the new thing, because yes, it’s still a way to do it. Now, what I wanted to say initially was two things. One, on a non-technical aspect: in terms of prototyping, you have to give up being frustrated at not everything being perfect and ready yet. Because yes, it’s true: now that we finally have hardware, reading text doesn’t have to be the size of a billboard; it can be more or less an A4 document, without getting a headache.
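The contrast Fabien draws between plain keyword search and semantic search with embeddings can be sketched minimally. The toy version below ranks notes by cosine similarity over bag-of-words count vectors; a real system would swap the `embed` function for learned sentence embeddings from a model, and all names and example notes here are purely illustrative, not any particular API:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words count vector.
    # A real semantic search would use learned sentence embeddings here.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(notes, query, k=2):
    # Rank notes by similarity to the query vector, best first.
    q = embed(query)
    ranked = sorted(notes, key=lambda n: cosine(embed(n), q), reverse=True)
    return ranked[:k]

notes = [
    "memory palace layouts in the Vision Pro",
    "grocery list for monday",
    "webxr window positioning experiments",
]
print(search(notes, "vision pro memory", k=1))
```

The same `search` interface could sit behind either a flat list view or a spatial library, which is the coexistence Fabien is arguing for.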

Fabien Bentou: We get frustrated by other things, and it’s an ongoing like it never stops like it’s once you finally can search amongst your millions of nodes. Well, you start to have billions of nodes. It’s like So yes, we need you can be critical, but it should not stop us from trying things, is my overall point. And then for this to, to finish is that’s, I hope, kind of what I tried to express last time, namely that true with the Vision Pro, which, by the way, is not the only target, it’s just hardware wise, the most exciting headset at the moment, but nothing more than this. It’s namely when I say this is going to be replaced by another nicer headset soon, I hope. I don’t know when, but it’s a couple of years, I guess, or hopefully less than this. Anyway, that headset still allows us to place both tabs and windows of the browser. I mean, only Safari, as far as I know. I installed Firefox on it, and I could synchronize my tabs from desktop, which were actually very convenient. And I might show something about this actually later. But. So we can have different browsers from Safari, different browser tabs, windows from Safari. We can have Firefox at the same time with also multiple tabs, and they’re all spatially positioned. And we still see the rest of the room if we want to. So that’s that’s already feasible today. The problem and Randall is not there, but I’m relatively confident about this is that we can’t position those windows arbitrarily.

Fabien Bentou: Let’s say if Brandel is here, then he will correct me if I’m wrong. Perfect timing. Hi, Brandel. So now the situation is positioning those windows spatially, manually. I use my hand and I put them there. But if I say those ten Safari windows need to be on a line or a circle, I can’t do that programmatically. Same for, let’s say, Firefox tabs. So we put them there, and that’s where they are. And they also don’t know where they are. If I could position them on a line, that would mean I’d maybe also be able to know where they are relative to each other. For example, I put all the Safari tabs or windows related to a specific workload on my left and all the others on my right. Right now that can’t be done, which is one of the reasons why WebXR, or just being able to programmatically manage windows, is interesting. But right now that’s not feasible programmatically on that device, as far as I know. And actually on any device, except by building your own custom browser, basically, where you would be managing your own tabs, using Wolvic, for example. So it’s very frustrating; it feels limited. At the same time, in the prototyping process, we still have other things we must try, and maybe we can hope that a couple of months down the line, either that will be feasible from the OS or vendors, or, yeah, building another browser, if that’s the only way, is also something to consider.
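The layouts Fabien wants (ten windows on a line or a circle) are simple geometry; what is missing, per the discussion, is any OS or browser API to apply the result. A sketch of just the geometry, with all parameters (radius, eye height, spacing) picked as illustrative assumptions:

```python
import math

def circle_layout(n, radius=1.5, y=1.6):
    """(x, y, z) positions for n windows on a circle around the user.

    Purely illustrative: visionOS currently exposes no API to place
    windows programmatically, so these points have nowhere to go yet.
    """
    points = []
    for i in range(n):
        theta = 2 * math.pi * i / n
        # z is negative in front of the viewer, as in typical XR conventions.
        points.append((radius * math.sin(theta), y, -radius * math.cos(theta)))
    return points

def line_layout(n, spacing=0.6, y=1.6, z=-2.0):
    # Windows on a straight line in front of the user, centered on x = 0.
    x0 = -spacing * (n - 1) / 2
    return [(x0 + i * spacing, y, z) for i in range(n)]

print(circle_layout(4))
print(line_layout(3))
```

Knowing these coordinates would also give exactly what Fabien asks for next: windows that know where they are relative to each other (work tabs on the left, everything else on the right).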

Frode Hegland: Yeah, absolutely. Hi, Rob. Hi, Brandel. We are doing a little question round; I pasted it in here. The last line is: what are we really trying to augment? What are the interests in the room? This is independent of Sloan, which is relatively well defined, but still stressful. And one of the questions is organizing and finding stuff; finding our own stuff might be one. So, Brandel, big question for you. Am I correct in understanding that WebXR cannot show the room under any circumstances, not even as a non-computational background?

Brandel Zachernuk: WebXR is a system for both augmented and virtual reality, so anything that supports WebXR supports VR by definition. Anything that supports WebXR AR has the ability to use pass-through. So the Quest 3 has the ability to do WebXR immersive AR mode; I believe some Android phones do it as well. That means that you can position things in the world with them, and you can have the view of the world as well. You don’t, in those circumstances, get full capability to read every single attribute of the world. That would be something more like raw camera access. As much as companies like Niantic sort of agitate for it, most of the people who are providing systems and services like WebXR are...

Frode Hegland: I’m having problems hearing you.

Brandel Zachernuk: And.

Frode Hegland: I’m having problems hearing you. Do you know if it’s the mic setting or the bandwidth or something? Is everyone hearing well or not?

Brandel Zachernuk: I’m on my phone. I don’t know about the Wi-Fi in this hotel.

Frode Hegland: Oh, okay. Right. Yeah, I heard most of that. So will the Vision Pro also allow for WebXR pass-through, so to speak?

Brandel Zachernuk: That’s future planning, and Apple doesn’t talk about future plans.

Frode Hegland: Okay, no, that’s fine. It’s just that the experience I’ve had over the last few days has been very frustrating in an educational way. Andrew’s space is amazing, but because it is constrained, I can’t really get up and walk, because it doesn’t know the room so well. And then I have the Vision, you know: I’m using Author here, I’ve got this there, and so on. The whole memory palace thing is absolutely fantastic, as we say very often. The problem, of course, with the memory palace kind of thing in Vision is that every time you close the app, all the layouts go. I’m sure that’s something they’re going to fix in the future, you know, a little bit of window management, but that’s why we’re talking about these issues now. Peter, please.

Peter Wasilko: Yeah. My biggest concern is Apple being worried about privacy and security issues. As a device owner, I’d want to be able to override that and say: I trust Andrew’s code, let Andrew’s code have access to everything. He’s doing some interesting stuff. Maybe give me a programmer switch like the original Mac had, that I can stick into the side of the headset to tell it: override your paranoia. You’re not dealing with an average consumer who needs to be protected and bundled and wrapped and guarded. You’re dealing with a sophisticated end user who wants to have full access to the capabilities of the device. You just have to have some way to get around that. Don’t nerf the product just so that you can protect the lowest common denominator of end user; worry about the power users a little bit more. In recent years, Apple seems to be just sort of casting power users to the side, and, you know, they’re going for mass market entertainment. Whoosh, viewing content, wonderful: I can have a great theater. But if I can’t access the full capabilities of the device for real serious work, the device isn’t worth getting. So maybe just pass that sentiment on to Apple if you can. I know you can’t tell us whether you’re able to tell them, and they don’t let you tell us anything on that end. But, you know, just sort of push the feedback in; let them know that our users are deeply troubled. And I’ll probably wait around for the next one to come out at this point, until we see what’s happening with it.

Brandel Zachernuk: Yeah. So, I mean, the issue, if you can hear me, is that everybody is a novice user sometimes. Even power users need those fences and gates for the times when they’re not being power users, and everybody can be convinced, through various means, to flip all kinds of switches, just because it’ll get them free stuff or whatever. So it’s a very challenging line to walk.

Peter Wasilko: Yeah. I feel that I should have the power to make stupid decisions if I want to. You know, I don’t want to be protected.

Brandel Zachernuk: That is not an ecosystem, ultimately, because... well, so the issue is that you...

Peter Wasilko: Have me type in three passwords, sign a disclosure agreement, get it notarized and send it to Apple: I will not sue you under any circumstances if my making full use of your hardware winds up shooting me in the foot. Give me the option to shoot myself in the foot, and I’m willing to suffer the consequences.

Frode Hegland: You sound like a gun nut, Peter.

Peter Wasilko: Yes, we need a Second amendment for computer hardware.

Brandel Zachernuk: There are herd immunity questions. Yeah, there are herd immunity questions for computer hardware too.

Frode Hegland: Yeah. I mean, it is a fascinating new world, because privacy has already kind of gone. In the real world, you can have your picture taken walking down the street, and then it’s digitally uploaded to something like Facebook, someone else tags you in the background, and you’re screwed, because you were supposed to be somewhere else that day. So you can imagine, even here as a power user, someone else may be impacted in a completely new, weird way. So it’s a fascinating and dangerous new world. Yeah. Mark.

Mark Anderson: I’m happy to wait, because I’m going to go back to what we were discussing before. But if anyone wants to speak on this issue of privacy, I’ll wait; otherwise I can catch up. Looking... there’s seemingly no one. Okay. I was just thinking about what we were discussing earlier, and I’m wondering, listening to the comment about "I can’t put things where I want", whether there’s, to a degree, a disconnect. Leaving aside the rationale (this isn’t good, bad, somebody made a mistake), probably, in our mind’s eye, we’re much more plastic in our sense that that thing over there is this thing; I don’t really need to know whether it’s a table or a book at this point. It’s this thing. And the moment I’ve decided that this actually is a book, well, in my mind, it’s a book. Now that, of course, in implementation terms within the environment, comes with quite a lot of consequence, because all the various affordances that we expect that object to have have to be created. And if we didn’t say in advance they needed to be, then there might be none available. So I’m just wondering if that’s one of the, in a sense, challenges in this space: that, unsurprisingly perhaps, our tools can’t quite mode-shift as fast as we can in terms of, you know, this thing is now a different thing. We do that in an instant, in our mind’s eye, and perhaps at the moment we imagine that the tools we have at our disposal can do that too. And perhaps we’re being overly optimistic at this point, because we don’t yet fully understand the implications of how that is effected at speed. Frode.

Frode Hegland: Yeah. I mean, you’re very eager to jump out of the rectangle prison, which I agree with. But it’s also interesting: I really think the notion of publishing is really, really important. Publishing can mean, quite simply, sharing with someone. It doesn’t necessarily mean going through a journal, but in academia, of course, it does. Because at that point, whether we call it a book or a paper doesn’t matter, but it is a self-contained unit, because you kind of have to have a self-contained unit to cite it. And also, if you want to have a really fluid environment, which of course I desperately want, you need a few things to hold on to. So that’s why I think I want to have frames for books. I want to have PDFs, but only as a holding point, you know, even if a PDF refers to entirely floating data somewhere else. So I think we’re full-spectrum people on this. And, before I forget, I was lucky enough to have two interesting conversations last week; if I’ve already told you, please tell me to shush. One of them was with the inventor of CSS, who turns out to be a friend of a friend. He’s Norwegian, so that was fortuitous. It was really lovely talking to him about maybe using CSS to stretch into XR dimensions, something that some people have kind of done; he hasn’t really looked at it. I would desperately like him to be part of our community, because he’s so visually driven. While some of you are very code- and data-driven, he is about: what is it going to look like? And Brandel, you’re slap in the middle, I understand. So he’s going to join us for at least one session after Easter.

Brandel Zachernuk: That’s nice. Yeah. So, CSS. One of the issues that we’re facing with the WebXR we’re doing today is that WebXR has very little to do with text, layout, information, all of these things. And, as a consequence, not the browser but what’s called the user agent (the thing that the website, and consequently all of the advertisers, talk to) has the ability to know everything about your head motion. That is why we released transient-pointer: it is a more privacy-preserving capability to provide user input within WebXR. But that sort of attribute of WebXR provides a floor below which privacy cannot go; you can’t get the disclosure out of it. So the reason you can’t hear me well...

Frode Hegland: We can’t hear you.

Brandel Zachernuk: Well, you can’t hear this at all.

Frode Hegland: It’s really garbled. Can you try without headphones, just the phone, for a moment, just to see? Because you really are worth getting every word right. I was getting messages from others asking what’s going on. So let’s see.

Brandel Zachernuk: Is this sounding better?

Frode Hegland: Oh my god, yes.

Brandel Zachernuk: That’s better.

Peter Wasilko: Oh, it’s like you’re in the room with us.

Frode Hegland: It’s like being on a concert stage versus being outside. Yes.

Brandel Zachernuk: Or like you’re in a fishbowl. So, this is relevant with regard to CSS. What I was saying is that WebXR provides a floor to how much information disclosure is required. And that is why, as much as it is fun for, you know, leet hackers like me and Leon and Fabien and Andrew to use, it’s not a really sustainable mode for the future. My goal is to make the web spatial. That’s my team’s job; it’s in the name: it’s not the web team, it’s the spatial web team. And that’s why I am in Seattle tomorrow, proposing, or extending my proposals for, model. And, you know, one of the things that we need to understand from this is what happens to CSS in 3D. I also heard recently that SwiftUI was heavily inspired by some of the modalities of CSS, and SwiftUI obviously works in 3D on visionOS. So I’m really excited to talk to somebody who is conceptually responsible for the birth of CSS. Obviously, there are a bunch of folks in Apple who understand CSS at an implementation and detailed level very well, but I think it’ll be incredibly exciting to talk to somebody who helped give voice to that initial requirement, and to understand the more philosophical aspects of what constraints entail and things like that. So thank you. I definitely look forward to that.

Frode Hegland: Yeah, I’m glad to hear that. I thought it would suit you very well. Would you like an intro before then? Very happy to just intro you guys. Oh no, you’ve gone back to headphones. Okay. Yeah, just communicate with thumbs.

Speaker6: I can use the microphone from my Mac and use my headphones; it’s just so that other people can’t hear. You can still hear me fine, right?

Frode Hegland: Yeah, absolutely. The microphone from the Mac is really good. If you’d like an intro before then, I’m happy to do an email. I’ve only talked to the guy once, but he’s super friendly and super involved in the same problems.

Speaker6: Well, now, I think I’m okay. You know, I know people who know CSS at Apple as well. I mean, obviously I know CSS; I’ve been writing web pages for 25 years. But I know the designers of CSS, the other people who are part of standards deliberation. But yeah, I think it’ll be really cool to meet and speak in the room like that.

Frode Hegland: Okay. Perfect. I also just wanted to mention the second conversation, with one of the guys from Tana; it’s a Norwegian kind of knowledge management company. Please have a look. We met in London, had lunch, had coffee; lots to talk about. He’ll also see how he can connect with our community. And I have a full Tana account, so I’m going to look at it a little bit more. He’s also very eager to work on our interchange format thing, which Leon and I have simplified to be almost nothing, which he was very appreciative of, actually. Leon, would you mind spending a minute or so summarizing our conversation from recently? Because I think you had some really cool conceptual breakthroughs.

Leon van Kammen: Can you hear me? Okay, so I will try.

Frode Hegland: It was rather a creative and not super precise brainstorm.

Leon van Kammen: But I will try to, in real time, get back into that state and recall what we were describing.

Leon van Kammen: I think it started with you basically sending out an email asking: is there a graph language which can be used for sending a knowledge graph to somebody else? I think that’s how it started, by email. There were various answers to that. Some answers gave some pointers; some other replies were almost warning against it, that this is an adventure which has existed for many, many years, and there’s almost a sort of outlandish feeling to it, you know, some heaven or paradise which only exists in our brains. And I think we started to talk about the fact that it’s very easy to get stuck thinking about file formats, or even designing your own file format to compress that particular data you want to transfer, and to convince yourself that a new file format is needed because everything else doesn’t do exactly what you want. But then we realized that actually all the file formats, if you look at 3D file formats, for example, have an incredible overlap. The differences are really small; they add some new features here and there. But in essence they are just node graphs. And we were actually realizing that for an LLM, it doesn’t really make a lot of difference.

Leon van Kammen: It doesn’t matter what kind of node data it is, what the specifics of the node data are, because the moment you start to enter some sentences into an LLM, it can immediately detect it as node data. So if I paste a bit of CSS into an LLM and I tell him that here’s a snippet of some node-like data, he will start to see... I’m saying "he", but let’s say the software will be able to detect nodes based on some syntactic characteristics. It doesn’t really matter if I paste CSS or BibTeX or HTML; it will start to realize what the nodes are. And it can also detect nested nodes, like with XML, for example. And that basically drove us in a certain thinking direction, like: hey, that’s very interesting. So if we paste textual node-like data into an LLM, and we also tell this LLM, can you make this into this sort of output, let’s say JSON, then it will basically do this whole translation. This translation layer, with a bit of tweaking, could be a sort of flexible translator from one format to the other. And, in a nutshell (sorry for the long story), we realized that this whole business of, you know, sitting down, analyzing an input format of node data, writing some code and designing some kind of format so that it can produce the same data in another format, this whole manual development process is in a way, well, not challenged...

Leon van Kammen: It's still very useful, of course, but it's also interesting to think in terms of: what if we can come up with specs which can be consumed by LLMs, which then translate the input to the output that way? One small note I want to put into the chat — it's actually called struct. I don't know if anybody knows this, but basically it's a library of file formats, parsers and generators, all generated from specs. And we were just brainstorming a bit: what if you just feed LLMs these specs of how to parse an input file format and how to generate a binary output format? That would save us from manually writing a lot and a lot of input-output translation code. So that's a whole mouthful; I hope you could make a bit of sense out of that. Thank you.
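[Editor's note: a toy illustration of the spec-driven idea — describe each format once as a mapping and derive the translation, instead of hand-coding every format pair. The spec library Leon mentions (possibly Kaitai Struct) uses far richer specs covering binary layouts and types; this shows only the shape of the idea, with invented field names.]

```python
# Instead of a bespoke translator per format pair, each format is described
# once; a generic function applies the spec. Here the "spec" is just a
# {source_field: target_field} mapping over flat records.

def translate(record: dict, spec: dict) -> dict:
    """Rename the fields of one flat record according to the spec,
    dropping fields the spec doesn't know about."""
    return {spec[key]: value for key, value in record.items() if key in spec}

# Hypothetical spec mapping a BibTeX-ish record onto a generic node record.
bibtex_to_node = {"title": "label", "year": "date", "author": "creator"}

node = translate({"title": "Future of Text", "year": "2022"}, bibtex_to_node)
# → {"label": "Future of Text", "date": "2022"}
```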

Fabien Bentou: I don't know if somebody was trying to speak, but it's too late now; I'll be super quick. Kaitai — indeed, this kind of formalism is interesting. It's always a little bit tricky. I remember it from the canvas file format, and for example potentially bringing the reMarkable, which has its own file format, to canvas. It's another level of indirection, but if you have the right tooling you can juggle file formats more efficiently. Definitely quite useful. So it's indeed always in the sense of bringing any content from any device to any other device, with any other kind of content — let's say embedding or including anything — to be able to make it manipulable, having affordances, basically, for any format. Definitely valuable. That's it.

Frode Hegland: Thank you for jumping in there, Fabien, perfect. Peter: clearly a parser is better when you have one, when it exists between the two things. The kind of shocking thing for me with LLMs, though, is that they gosh darn work. So, you know, I give it all kinds of stuff. And just as a recap — I'm not sure if you were here when I mentioned it — I ran a document that happened to have Visual-Meta through an LLM and asked: what metadata can you find? And it said: based on the Visual-Meta, I found this. Because the Visual-Meta has a description paragraph in tiny writing before the data, I realized that description actually became a secondary prompt. That was a bit of a wow for me. Is Brandel leaving? Just pausing the Leon thing here — okay? Yeah, see you later, Brandel. Thanks for coming by; have a good time in Seattle. So the idea is just — you know, we're all relatively old, obviously not all of you, but in the sense that we have all been trained on older ways of thinking before this stuff, and we're in a much more fluid data situation now, which is not always great.

Frode Hegland: And we should hold on to the hard stuff when we have it. But, you know, we're painting in a very different way. So that's why Leon and I were talking about whether we really need massive data structures to do something. I mean, one thing we'll probably still need is x, y, z coordinates, right? But the really important thing in a knowledge space very often is just the stuff itself — not necessarily where it is while it is being transferred; when it's in your own space, of course you want to have that. So one of the interesting things from discussions, including with Stian from Tana last week, was that the timeline stuff that comes up every few months may now be possible. Because let's say you have a timeline program that handles the basics. If you can then say "go through this document and find anything to do with the timeline", you get a format, you can review it and see if there's rubbish, and boom — there it is. You don't have to import specifically correct things all the time. That's where it's beginning to get quite exciting. So it's not something for all the time, it's not perfect, but it might allow for rich interoperability. Mark.

Mark Anderson: No, I find that very interesting, and I'm not surprised, given the way these things are coming on, at the degree to which some of the translation can be done for us in terms of display and layout. What still interests me is how you capture intentionality. I can lay out some information, say in a virtual space, and maybe I can just say to the AI: hey, send this to Leon, reassemble it at the far end, you sort it out for me. That doesn't surprise me — it can probably be done, and if it can't be done today, it can probably be done fairly soon. The interesting thing is the degree to which the intention behind how I laid out the graph can travel. Or whether the most we can hope for, at this point, is that the graph at least turns up with Leon in the shape I intended, and then I have to do the last mile myself, explaining it — which is arguably no worse than where we are at the moment. But it begs the question, because I don't know the answer, of how we capture that aspect of insight, because it doesn't lend itself to codification. And if there's one thing the last few years have taught us, it's that trying to build graphs of arguments is broadly an exercise in the pointless, because people don't disagree with us because our graph isn't good enough. They disagree with us because they disagree with us.

Frode Hegland: Yeah. I mean, the spatial structure is another layer on top of this; this is primarily the data, so we would have to figure out something there. But Andrew, you're still here, right? You don't have to turn your video back on, but I have a question for you regarding this — I put it in the chat as well. Actually, I'm sure Leon and Fabian would know too; Andrew, obviously. It's just a question regarding some of this, and that is: okay, I'm really trying to connect different environments, sorry for harping on about it. One of the things I want to be able to do is, from an external thing, click a thing, and that specific knowledge thing is suddenly in the WebXR world. And what I'm finding more and more is that I don't necessarily want the whole thing. So imagine I have a PDF — yes, a PDF — and it has a button that says "open just the reference section", or concepts: one thing, in this environment. Could that then generate a URL with all the data encoded — the entire thing, not a reference to a file somewhere else — and send it and open it in our WebXR system, where it'll see all this extra stuff and know what to do with it?

Andrew Thompson: So we'd have to look into how much text you can append to a link, because you would end up with a massive block of information there. If you're trying to basically quote an entire citation and throw that in, it would work much better just to reference the document. So if you have, say, the PDF online somewhere, you could send that link, because that's a reference directly to the document. But then again, because we're developing fully inside a web page, it doesn't have a way to communicate across. So you would have to create a separate plugin that does that and then sends the information.

Frode Hegland: Fabien — Three Body Problem, the new Netflix series, it's amazing, you should watch it. Just saw the comment here. Andrew, yeah, that makes a lot of sense. But Fabien, do you have any comments on this as well, before I blabber on?

Fabien Bentou: Yes, I did read the book and I watched the first episode — but he's talking about a headset, so it might be different. What do you mean, the actual headset from the show? Yeah, then I've seen them, but it's science fiction, so we can move on. Now, on the URL: yes, there is a maximum length. I don't know exactly what it is, but it's not that big — it's like thousands of characters or something. So you could put, like, a mini PDF, or a couple of paragraphs, or not even that. It's long for most purposes — sending hashes, IDs, another URL for redirection — but it's not long enough to put entire documents in. You can have data URIs, but at some point, for a URL you can send to another browser so that it will open, you're not going to fit a one-megabyte PDF in there, as far as I know. Once again, though, if you make your own custom browser — you customize an open-source browser like Wolvic — you could do this, but it's a big endeavor, and of course anybody who doesn't have it won't benefit. Well, you can imagine some fallback: if the URL can't be processed, at the very least there's a redirection URL within that data, and that could be enough. But overall, technically speaking, no, you can't put big documents in just the URL.
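[Editor's note: a sketch of the constraint Fabien describes. There is no single hard limit in the URL standards; in practice browsers and servers start failing somewhere in the low thousands of characters, so ~2,000 is a common conservative ceiling. The limit constant and fragment scheme here are illustrative assumptions.]

```python
# Small payloads can ride along in the URL fragment (which is never sent to
# the server); documents cannot. PRACTICAL_URL_LIMIT is a conservative
# assumption, not a value from any spec.
import base64
import json

PRACTICAL_URL_LIMIT = 2000

def encode_in_fragment(base_url: str, payload: dict) -> str:
    """Pack a small JSON payload into the URL fragment as URL-safe base64."""
    blob = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    return f"{base_url}#data={blob}"

def fits_in_url(url: str) -> bool:
    return len(url) <= PRACTICAL_URL_LIMIT

url = encode_in_fragment("https://example.org/xr", {"refs": ["hegland2022"]})
# A handful of citation keys fit comfortably; a one-megabyte PDF would blow
# past the limit by orders of magnitude, hence Fabien's fallback: put a
# redirection URL in the data instead of the document itself.
```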

Frode Hegland: Thank you, Fabien. And Leon, please go on — I just wanted to say that it seems we need to solve specific problems, obviously, and do it with shared data. The reMarkable, for instance, is a great thing, but it doesn't always integrate for everybody; it's not part of the flow. So, you know, we need to build something people can go in and out of. Leon, please.

Brandel Zachernuk: So. To.

Frode Hegland: And you've muted yourself, not unmuted yourself. You're already scared.

Leon van Kammen: That's pretty weird — I was screaming "no". This is a problem close to my heart; I have built various applications around this, and indeed there is a max. So either you have to save something somewhere on a server temporarily — and then you get into the information-ghetto trap, almost. But luckily there is a relatively new protocol called remoteStorage, which is basically a response to Google Drive and Dropbox becoming information ghettos, so to speak. It's an open protocol; I will paste it here. And

Brandel Zachernuk: I will.

Leon van Kammen: It basically allows anybody to create a small directory linked to a web application. I will show you an example — this is my screen here. I use, for example, Joybox. Oh wait, let me launch Firefox here because it's a bit easier to show. One second. I hope — yeah, let me see, I'm moving my — I hope you can see this. So this is my Joy

Brandel Zachernuk: Box.

Leon van Kammen: Here. I put all my — yeah. Can you see this, by the way?

Frode Hegland: Yes, yes. Yeah.

Leon van Kammen: Okay, so this is basically just a list of links. I send links to this thing, and later I can just click on one and watch it. But the fun thing is, this is not stored on the server of this web app. As you can see here, I'm online, and you can basically connect to a remoteStorage provider. Here you can see "coderofsalvation" at 5apps.com — that's one such remoteStorage provider. It's sort of like a Dropbox, but run by nonprofits, and it's completely encrypted; they have no idea what's on there, but you can use it. And what this allows you to do is not only storage. For example, if I select some things here — this is what I like, and it could potentially be some kind of graph, or even include a PDF — I can then share this. On my remoteStorage, a file is generated — or these files are generated — as a sort of snapshot, and I can share it: I'm getting a link here. So if I share this, this is the link; there's a little bit of information in the link as well, but there's a file connected, and this is a bit hidden. If you log into your storage, you can see these files being generated. This whole remoteStorage thing is basically a solution for letting apps have user-specific storage — basically, bring your own storage. So this is a potential solution to the problem of not being able to embed everything inside the URL.
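[Editor's note: a protocol-level sketch of what a remoteStorage write looks like. Per the remoteStorage specification (draft-dejong-remotestorage), the user's storage base URL is discovered via WebFinger, and documents are read and written with plain HTTP GET/PUT/DELETE plus an OAuth Bearer token. The storage root, path, and token below are made up; the code only builds the request, it does not send anything.]

```python
# Storing a document in the user's own bucket is a single authenticated PUT.
# Any HTTP client can send the tuple this returns; sharing then amounts to
# handing someone a (public) GET URL for the same path.

def build_put_request(storage_root: str, path: str, token: str, body: bytes):
    """Return (method, url, headers, body) for a remoteStorage document write."""
    url = f"{storage_root.rstrip('/')}/{path.lstrip('/')}"
    headers = {
        "Authorization": f"Bearer {token}",   # OAuth token from "connect your drive"
        "Content-Type": "application/json",
    }
    return ("PUT", url, headers, body)

method, url, headers, body = build_put_request(
    "https://storage.5apps.example/frode",   # hypothetical storage root
    "futuretext/workspace.json",
    "dummy-oauth-token",
    b'{"nodes": []}',
)
```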

Frode Hegland: This is very exciting.

Leon van Kammen: Because my explanation is very mediocre.

Frode Hegland: So, yeah — question. Well, can you turn off screen sharing? Because otherwise I'm just looking at myself, which is not ideal. Right. Does this mean that, for instance, if I were to implement this in my own software, I would refer to a Dropbox? Or does it mean that I can actually send it live, by making my own software a server for the time needed? Because the key thing would be: I have a thing in my software, and now I want to press a button and have it available in the headset, inside Andrew's software.

Leon van Kammen: Well, I think the answer is probably yes. Just look at it like this: imagine that people launch the application, and then, if they want to save or share something, that's the moment a button pops up saying "connect your drive". You can either select Dropbox, or you can select 5apps, where you can also have this kind of storage. From that moment on, your storage is basically happening there: all the data the application stores goes to your own bucket. It only requires you to enter the specific address for that server, or your Dropbox address, and if you're already authenticated, that's it — you don't have to do anything. No registration, no nothing. It's just connecting your storage, and you're ready to go.

Frode Hegland: What about in the case of iCloud, which is what most Mac users use for such things anyway? Would it be easy to tell Andrew: hey, there's an iCloud document, open it now, please?

Brandel Zachernuk: I don’t know.

Leon van Kammen: I don't know if iCloud has some kind of API like Google has for Google Drive, where you can generate links, or public links with certain permissions. I don't know. I do know that remoteStorage is an open answer to the closedness of such APIs. You have to imagine that a lot of businesses have been annoyed by not being able to do everything with these drives — that's how such an open remote-storage protocol was born. So yeah, you have to be lucky if iCloud allows something like that, and there are zero guarantees such features will keep working. Usually with these APIs, once the usage gets above a certain level, the security lights start to flash and features get disabled.

Frode Hegland: A fair point. Thank you very much, Leon. Mark.

Mark Anderson: Yes — this was sparked by, I think, the last thing Andrew said, but we seem to have got to the same place via a different route, so maybe this is turning it back. What seems to be emerging is that we actually want more of our links to end in — I don't want to use the word "document", I wish I had a better word for it — but you get something back. The reason I say this is, I'm old enough to remember "send a stamped addressed envelope, allow 3 to 4 weeks for delivery", and that's where a lot of our referencing and addressing comes down from. Because if you link to, say, something about a book, well, you might get a PDF of the book — but the point is, if it's a literal, physical thing, it's going to have to be delivered. So it made sense, stepping back 20 years or so, that you linked to the place where the thing was known. For instance, if you link to a paper now, if you have a DOI or something, it will actually take you to basically a database query page that tells you a bit about the thing you want. It doesn't take you to what you want; it takes you to a place where you might find the thing you want. And if I read correctly what Andrew is alluding to, maybe we want to change perspective on that. In other words, we want the end of the link, more usefully, to actually deliver the thing rather than a reference to it. I don't know.

Frode Hegland: Yeah. Yes, 100%. And we're busily changing stuff. So, Andrew, I'm asking in the chat here — I know you're working, and I can't see the messages, so sorry for interrupting you. I'm just wondering if you have a perspective on how you would like to receive the data. Because one of our dreams is really this: say I'm working in something that, let's say, I made, and what Fabien is doing — I want to see it in there. Well, that's cool, but I'm doing a different task; I want to look at it in Andrew's stuff. It can't be so unbelievably difficult to make such a thing happen in this new environment. Back in the days of Microsoft Office owning things, fair enough, there was a problem. But right now — can't there be an easy way for us to move about?

Andrew Thompson: Yeah. So you’re correct.

Frode Hegland: It’s Andrew.

Andrew Thompson: That’s each one of them.

Brandel Zachernuk: Okay, now.

Andrew Thompson: We need the asterisks. Which is Yes. All of these things on their own are absolutely possible with today’s environment. The problem is we are building the entire thing in our own space. So we have to rebuild everything we want to implement. So right now, I’m trying to get sharing to work. As in, like, you can export your layout and send it to someone else. Right now, it’s just going to be like a text file export. Fundamentally, it’ll always be a text file export, but that can be stored somewhere in a cloud and then linked to. Or it can be sent and you physically send the file like that. That can change, but the data itself will be the same. Trying to get that to work right now because I believe that’s a big step to this. And also what you were pushing for last week being able to share workspaces. And that does tie into this. As for receiving data, I assume you’re talking about the document itself. And we can receive that in any form we want. Right now, we’ve built it off of the assumption that it comes in as an HTML file, like a converted PDF which has the only benefit there is that it has tags. So the different pieces of the document are already tagged for us. That’s just why we’re doing that. Assuming we continue down that route which, you know, we can change. But if we go that route so far it can be either once again, an actual document that you upload or because references work so nicely and let you have local adjustments. You can just straight up reference the URL. And that’s something very easy to append onto a link.

Frode Hegland: Yeah, that's quite literally music to my ears, and I'm sure to many others'. The reason I wrote this list this morning about what we actually want to augment — it's relevant, so I'll just quickly skim through it. There are different kinds of views: we have lists and graphs, obviously. We also have — and this is kind of key — different kinds of explicitly selectable data: for example, plain text, concepts, notes, references, images and 3D. These are explicitly selectable. Then we have the processes of reading, thinking, researching and writing, and all of that stuff. I'm bringing that up now because, like with photography, you don't use one app for absolutely everything unless you just want to, quote unquote, develop the picture — and the same goes for a lot of other things. You wouldn't necessarily want to send your data to the same app all the time. This is not, you know, Doug Engelbart's NLS; we're not trying to own everything. So in terms of what you're doing now, Andrew, you're focusing on a really cool visualization and interaction for the reference section. One thing that would be very useful is just to be able to send you that. An HTML form is fine for now. Fabien, please.

Fabien Bentou: I'm going to speak without thinking, specifically on purpose this time. I would always give a reference to the document. I would use a URL. I would not make assumptions about how the content of that URL is accessible; I would just assume by default that it works, because that's the default state, let's say. It's not the only state, and all of that is already handled by protocols we — I want to say know and love, but at least know — like HTTP. There are so many types of error just for getting the document this way. Then do whatever you want with the document. If it's a known type, leave it as is — text, PDF, whatever, no need for anything more. If it's something different, like glTF, then again, if it's properly defined, no need for anything more. If it's more than that — for example spatial-first — then I would extend glTF or USDZ. Again, I would pick whichever of the two formats is the most popular, because they exist. Pick one: glTF, that's the one I know. So glTF with some extension — it's like the escape hatch, the same way you can have comments in most other formats, and it means you can piggyback on a lot of reading and writing libraries.

Fabien Bentou: So that's, I would argue, the simplest way — or at least the most direct. And then the rest would come from the specification, in terms of what the user wants. The user wants to exchange whatever — is it a graph? Is it a 3D model that embeds code? This is really a per-usage question, and everything else, in my opinion, is theoretical: "oh, here's how it could be done", sure, with a lot of edge cases. But in terms of prototyping and trying things, you need to remove all those edge cases, because it's the 20% left taking 80% of the time, and more. You want the core of the idea. And if it means assuming that the document will be available, then sure — that's not a crazy assumption. But yeah, it goes back to: what does the user want? When we say we exchange from one experience to another, what do we exchange? How is the new user going to use whatever that document is? And then eventually, I imagine, share it back or loop it back. The rest are just technical questions about how to do it, which in my opinion are much easier.

Frode Hegland: Yeah, wonderful. Just writing notes for our chat here. So, of course, there's the question of what happens if there is no URL. But we need a way into what Andrew is building — and please think of a name; we need a name for it, I have some ideas. Anyway, Andrew, what I would like is a way for you to receive data, with a way for the system to ask: what is this? It could be a full glTF spatial document, it could be reference information, it could be many things. And I'm wondering what the simplest way is for the system to receive the data. Because sometimes you want a whole new space; sometimes you just want the bit of additional data. That's why the notion of having a timeline is important: while you're in a timeline, you may want to receive more timeline stuff. So maybe that needs some sort of a

Brandel Zachernuk: Protocol.

Andrew Thompson: So this would all have to be done in steps. Right now I just have it so that when you load the page, before you enter the full XR experience, you see the browser — if you remember from a previous version I released, you had a URL bar inside the web page that you could change to get a different document. Right now, that's how it's receiving data. Once we get that stuff working, we can give it a more elegant interface. For the export and import of workspaces — which I'm still working on, because I'm having lots of trouble with it — it's going to, for now, just be a text file or a JSON file; I'm trying to decide between the two. You upload or download it, and that'll be simple enough for testing. It won't support any other file types right now. But if we get to the point where we want to add glTF files, that would be additional functionality we add on top — I would then update the code for the same sort of upload feature, whatever that looks like. But I think, in general, the way we'll be adding information will be before you enter the XR mode of the web page. You have the headset on, but you haven't yet entered XR, and that's where you would add the data, because that's where you get the keyboard and can copy and paste information and upload files and things like that.
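[Editor's note: a minimal sketch of the JSON workspace export/import Andrew describes. The layout is just data — document references plus positions — so export and import reduce to a JSON round trip. All field names here are invented for illustration; the real project may settle on a different schema.]

```python
# A workspace stores references to documents (per Fabien's "always give a
# reference" suggestion) plus spatial x, y, z placement, not the documents
# themselves.
import json

workspace = {
    "version": 1,
    "items": [
        {
            "doc": "https://example.org/paper.html",  # reference, not content
            "kind": "references",
            "position": {"x": 0.4, "y": 1.2, "z": -0.8},
        },
    ],
}

def export_workspace(ws: dict) -> str:
    """Serialize the workspace to the text/JSON file the user downloads."""
    return json.dumps(ws, indent=2)

def import_workspace(text: str) -> dict:
    """Parse an uploaded workspace file, rejecting unknown versions."""
    ws = json.loads(text)
    if ws.get("version") != 1:
        raise ValueError("unsupported workspace version")
    return ws

assert import_workspace(export_workspace(workspace)) == workspace
```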

Frode Hegland: So this is very exciting. I think JSON is probably fine, right? Everybody would think so. And maybe we have the notion of sending the JSON, or whatever it is, with a prompt in front — if you can identify what the data is, you don't need the prompt, Andrew. But if the data is sent with a prompt from a system, and it was maybe LLM-generated, it may not be flawless, so then maybe we go through a different process. Now, just as an aside before I forget: I'm going away to Asia soon, so on the dates I just put in the text I won't be able to do the normal times. I'm wondering if it suits you guys to have 1 p.m. Pacific, where Andrew will be properly awake, and then 8 p.m. UK — that will be 5 a.m. for me in Japan, which is doable. Does that suit any of you? And if not, maybe we'll take a bit of a holiday that time, I don't know. Any strong comments?
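[Editor's note: a sketch of the "identify what the data is" idea discussed above — wrap any payload in an envelope with an explicit type field so the receiving system can dispatch without guessing, falling back to the prompt route only for unknown types. The type names and handlers are invented for illustration.]

```python
# A typed envelope lets the receiver route known payloads deterministically;
# only unlabelled or unknown data needs the slower "send it with a prompt"
# path Frode mentions.
import json

HANDLERS = {
    "references": lambda p: f"loaded {len(p)} references",
    "timeline":   lambda p: f"merged {len(p)} timeline events",
    "workspace":  lambda p: "opened new space",
}

def receive(envelope_json: str) -> str:
    """Dispatch an incoming envelope on its declared type."""
    envelope = json.loads(envelope_json)
    handler = HANDLERS.get(envelope.get("type"))
    if handler is None:
        return "unknown payload: ask the sender (or an LLM) to classify it"
    return handler(envelope["payload"])

msg = json.dumps({"type": "timeline",
                  "payload": [{"year": 1968, "event": "NLS demo"}]})
result = receive(msg)
# → "merged 1 timeline events"
```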

Andrew Thompson: I’m currently checking my calendar, so I’m not leaving you unread. Here. Hold on.

Brandel Zachernuk: Yeah. Cool.

Peter Wasilko: Sounds doable. At least for the East coast.

Frode Hegland: Yeah, that would be great — unless there are any major problems. Because, yeah, it'll be the most amazing trip. It turned out to be cheaper to fly via Beijing, so we have three days in Beijing with Edgar, and I just cannot imagine anything more exciting. Right — I'm also trying to do something in App Store Connect which is not working, and that I do need. I know this is kind of Wednesday talk, but since it is a community thing: does anybody have any suggestions for what we should call what Andrew's building? Initially we were just going to use Reader and Author, but I don't believe we will be producing two different pieces of software for this; it will probably be an integrated environment. So if you have any thoughts — actually, hang on, I have to step away for two seconds.

Leon van Kammen: Okay, since everybody is silent, I will throw in a ridiculous one to get us started: Andrew's Thinking Machine, or Thinking Hat?

Mark Anderson: I'm just wondering — I hadn't really thought of the things we're looking at as particularly being an app, but more an interaction that could be achieved by basically any number of tools in the space. So I guess what I'm asking is: what part of this actually demands that there is one or more applications that work with it? In other words, couldn't it be more like any sort of web tech, where you just point the appropriate sort of browser — in the looser sense of the word — at it, and it does that thing for you, as opposed to necessarily needing a specific app? I say that with no preconceived notion of which it should be; I'm just wondering out loud.

Andrew Thompson: I would just say that since it will be online, it will have a title in the tab, so we should put something there. It doesn't have to be the most elegant thing — if we come up with an elegant thing, why not? Right now it's just alphas, because it's in development. I assume by the time the conference rolls around we'll have something else to put there, but.

Fabien Bentou: So I would think.

Andrew Thompson: That’s less important.

Fabien Bentou: Sorry. I would ask: who are we trying to get to click on or read that title? And I would say it's knowledge workers and academics who are slightly enthusiastic about XR.

Frode Hegland: What are we talking about? What was the question about the aspect of the name, Mark? I'm sorry, I missed that.

Mark Anderson: Well, I was following on from what you said — what should we call this thing that Andrew is making? And Andrew clarified for us that it's essentially: what do we call the tab that shows the thing? That makes a lot of sense, and I would accord with what Fabian said. The two communities for whom it is probably of interest will be people doing knowledge work — knowledge work, tools for thought, all that crowd — whether they're in the academic-slash-research camp or more widely around it. I think those are the people who will have the most interest in such a thing at this point, because they'll be immersed enough in some of the potentials and the limits to make sense of what they see. Because to everyone else, it's just literally what you see.

Frode Hegland: You used the word "immersed", which is nice, because as most of you know, my philosophy is based on the notion of liquid information. And I'm thinking this is very liquid, what we're talking about: making information flow how you want, be viewed how you want. That would be a preference for me, but I am very personally wrapped up in that one.

Fabien Bentou: Again, if the audience is academics, I would keep it extremely factual and nearly boring. "Boring" is a bit — I'm being provocative here — but no branding; the branding will come after, once the value is kind of proven. We can just throw keywords at it — like "extended reality knowledge interaction laboratory" — a bunch of different experiments. And if we need to, we can make it shorter or sexier. But in my opinion, something like a soup of real words that actually describe what we're trying to do. Branding will follow.

Frode Hegland: I think you said something very useful there as well: lab. You know, we are a lab. And even if this turns out to be a useful product, calling it some kind of a lab is not a bad idea, because academics work in a lab. So maybe we just keep calling it the Future Text Lab. And there's a lot of free—

Mark Anderson: It frees us from the overhead of having to think up one more thing — no sooner have we found something really good than we find it's either in use or somehow has a meaning we didn't really want. So I would certainly accord with the notion of not trying to think up names for it now. And I'd also agree with Fabian's point: a broadly academic crowd doesn't get excited about brand names, they really don't. They're more interested.

Frode Hegland: It was very much just a moment to see if the naming would help us think about what we're thinking about. Sure, sure, sure. So have we gotten a bit further in thinking about how to do the interchange of stuff? We've gotten a little bit further, right? I mean, what I would like to see very soon, Fabien, is for you to take something in your environment and send it to Andrew, by whatever means necessary, and for Andrew to either use it or tell you how to change it. That would be amazing. But we have Leon, hands up first.

Leon van Kammen: Yeah, thanks. I switched the microphone — or actually switched to my phone. So, to close the last topic: my boring suggestion would be Text Lab or something, because it's boring and it's going to be about text anyway, more and more. Second, the interchange thing. I think — and maybe I'm restating what Andrew said — it's a bit too early, perhaps, to decide on that particular exchange, because it might depend on the progress Andrew is still going to make, and that will perhaps decide how easy or difficult it will be to shoehorn a certain graph in there or out of there. So yeah, I don't really have a good answer to that.

Frode Hegland: On our page, I'll just update it here in the link. For now we're going to call it Future Text Lab System. That's neutral, and we're happy, right? It covers what we're doing. Anyway, the funny thing is, I've started using the Vision Pro for more work-related things. I'm forcing myself to use it more, because a lot of it is absolute rubbish. For instance, just in Author, if I select text with my eyes and pinch, it gives me one context menu design. If I use the trackpad, select, and control-click, it gives me another context menu design. So clearly there's a lot of stuff Apple hasn't cleaned up yet. But it becomes very, very clear that we do not want to build one big thing, you know, regardless of what I'm working on. I want to be able to go into the Future Text Lab and look at how the references work, because I'm crap with references, but they're so important, which is what Andrew is optimizing and optimizing. I want to go in there and back again. So I'm not saying we should interchange everything, but it will be very useful to have at least a subset of the data. Yes, Mark.

Mark Anderson: I was just thinking, I don't know if you noticed, but I uploaded some stuff to the Basecamp. Sorry, I know this is for those who have access to that.

Frode Hegland: Please send a link while you’re talking. Which is?

Mark Anderson: Everyone is here. Oh, crikey. I won't be able to find it while I'm speaking, but this came from a discussion earlier about, you know, things we could find in a document. And I was just thinking, okay, well, if we have a document, is it actually useful pulling the pictures out if they're photographs of places or artifacts or something? Maybe, but if it's just a graph, the graph is not much use without the thing that describes it, because there may be a one-liner sort of caption, but that's normally really standing as proxy for some text which links to it. And so what I'd done, and I apologize for using an article I had to write, but I just used one I knew well enough, and one we had in HTML form. So I took my 2022 paper that's in the corpus we have and just did some screen grabs of, effectively, a graph. And luckily in that article the text was actually in the same bit, so I could literally just take it from the web page. In reality, I don't presume that they would necessarily be like that, because one of the vagaries of the way the whole print process tends to work is that your graph gets shoved somewhere else in the document for layout purposes, as opposed to next to the text that describes it. My point being that, as an entity to pull out of the document and do something with, having a pictorial element with the textual context that really gives it a meaning would seem more useful than simply listing pictures.

Mark Anderson: That can be useful for sort of visual recall and things, so I'm not suggesting it is without use. But just pulling the pictures, or just pulling the tables, is essentially a list-making process that doesn't necessarily move us further forward, because the next thing you have to do is, okay, you go and look at the list of pictures and now you say, well, what does this picture mean? So you've innately got another level of drill-down there. So the pictures I put up, and I'll go and try to find the link in a moment, were just that. I did some snapshots of some snippets. We have the document in question in the library; it's one of the ACM 2022 papers in HTML. And it's really a provocation for people to go and look at other ones and think: okay, as we try, in our text lab, when we're looking at an article and trying to make sense of it, what are the meaningful levels at which we begin to pull it apart? Because I sense there is a form of internal hypertext, if I can call it that, to the document. If you think of it as not living on a white rectangle, it's just a collection of text, and therefore we can have this richer interaction with it. So, for instance, we might want to be able to drill in, go to a table, see what the table is about, and perhaps even reach through into some data behind it.

Mark Anderson: Unfortunately, in the article I chose, the tables are just basically, you know, small graphs, so there isn't an exciting table behind them. But it could be something where there's quite a large data table, where we might actually want to think about: well, how would that be if, although there was a static picture in the document, that was actually a proxy for an actual interactive chart, where you could perhaps change the visualization somehow, even if only to make it a different set of colors because, you know, it might not fit with your vision or something. There are all sorts of things there. And I'm thinking of these, and trying to come back to this, by doing things we can practically do. Because if it is the case that we probably aren't going to be writing as a primary act in this space just yet, because the tools aren't quite there, there are a lot of other things we can do, and the thing we can do, I think, is to interact with the parts of the document, although the documents as we have them may not necessarily today be presented to us quite in the form we want. So there's this interplay: what could we do, and is it believable that I could have the document in a slightly different form that would enable me to deconstruct it in this way? That's where my mind's going. And I'll go and look up this link now. Oh, somebody got a package.

Frode Hegland: I've added the links here, Mark, and thank you very, very much. On exactly that point, Mark, an example I think would be extremely useful is: you're reading, you come across an image in a document and it's a bit too detailed, and in the description there is a link; you click it, and it opens up that thing in Brandel's Bob Horn mural environment. That's why I think sending the data will be absolutely amazing. But it does require either sending the image or having the image accessible.

Brandel Zachernuk: What would you.

Mark Anderson: See in that setting?

Frode Hegland: Oh, just to remind you, in that setting Bob Horn had a mural with lots of detail, hugely horizontal, and all you can do, it's in a flat room, is look and pinch to move the whole mural. So even though it's a huge timeline or whatever, you have instant access to any scale anywhere; it's absolutely insane. So it's an example of a super simple, super focused, quote-unquote application that one item of knowledge, or one category of knowledge, can really be useful for. In this sense, the wall.

Mark Anderson: Was a timeline. In design terms, it was a multi-stranded timeline. So is it your desire that you want to see the picture where it sits on the timeline of the document?

Frode Hegland: No, no, no, I mean even simpler; forget what that content is. It could actually be even a picture, a panoramic picture of a jungle, let's say. So no semantic knowledge needed, necessarily. It's just a cool thing. So when you're viewing that, a click opens it up and you get to see it at absolutely ridiculous size with no distraction. When you're done, you just go back. It could be lots of numbers, it could be the timeline. But it's the idea of normal reading, then jumping to an exploded view.

Mark Anderson: Because it seems to me that you get more out of things like tables and graphs than you do from pictures, because you basically just get a bigger photo, which could be good, I mean, if there is a photograph of that quality and depth behind it. I think the thing that certainly is interesting is where you have a table or a graph or chart or what you will of something, and although there is, in a sense, a static presentation of it, because the canonical representation is in the static form, you could actually either view the data table behind it or, to some degree, control the visualization. So you might be able to show different binning of the numbers or that kind of thing. That, I think, could be very interesting.

Frode Hegland: We're talking about exactly the same thing, so I'm very happy to hear that. The point being that when you're viewing something, you have options to view it externally with completely different systems, but in a specific way. So yes, 100%, that is exactly why I want to be able to send things about. And Fabian, you're the one with the least time and opportunity to be part of our Wednesday thing, and today it's been a very mixed discussion. I'm wondering if you want to talk a little bit about what you are really interested in at the moment, so that we have a useful overlap in the future and we don't ping-pong too much. I didn't intend today to be so much about this, but I really want to draw you in, at least by you doing your own thing. And then we'll see what happens.

Fabien Bentou: So, a quick thing then. I have a bunch of, which side is it? Headsets behind me, Quest Ones, and I have a bunch in my backpack. It was kind of funny because this weekend, on Saturday, I went to give, not a class, but to make some kids discover XR and programming in a school in Brussels. And it was kind of annoying because I prepped the thing: I have three different Raspberry Pis, some with those Lego kits with the motors, I had some IoT to turn lights on and off, and what else? Like a bunch of things. And I prepped all the headsets, because if one kid puts it on and they go to another spot, you basically need to redo the room setup; it becomes a mess so quickly. But I had a scheme: if they go in this spot, it's the black one, it's numbered, everything. And in the end we sat down, and I asked the kids what they were interested in and whatnot, and one of them asked me: if we imagine somebody that jumps off a bridge and they're dead, what's after death? Like, okay, well, that was not my plan. But it's indeed a very important question; it's a deep one. So I shared a bit of my perspective, and we then started to try to actually prototype it. How would we do such an experience with VR? What would be something interesting, I don't want to say legitimate, but not just a crazy thought? So yeah. And in the end, instead of doing one each or going with all the headsets together, we just sat down, or rather stood up, and the kids went one after the other in the same VR headset.

Fabien Bentou: And not even the Quest 2, just the Quest 1, because they could look at each other trying it. They could see the wow moment when they step from the passthrough to the VR thing. So, just to say that my initial plan of flooding them with tech and showing them so many things went out the window; we just used one headset and discussed life and not-life. So that was quite interesting. But to bring this very pragmatically to this context: my goal, I don't think, has changed so much. Meaning: can I answer that question when kids ask, can we do something like a VR experiment about life after death? Namely, more prototyping. Namely, when we have this discussion, can we have it with whatever hardware we have: Vision Pro, Quest 3, Quest 1, multiple Quest Ones, a laptop, no laptop, a mix of all this, and legitimately try it. Namely, not saying, oh, life after death is not like this or is like this, but mixing those different opinions, the kind of music jam where we have different talents, different perspectives, and trying it. And, sorry if it feels a bit long, but it was a legitimate surprise. I did not have this topic planned. It is a very good topic. We had it just as a discussion, like, what could it be? But ideally it would be: let's build it, let's try it, in 20 minutes, 30 minutes, an hour, a day, a week, whatever, but not having to just talk about it. So that would be what I would need, as per usual: faster and more diverse prototyping in XR, thanks to XR.

Brandel Zachernuk: Perfect.

Frode Hegland: So I have a question. Then I'm just going to Mr. Wasilko there. So I have a question based on that, because you're talking about kids, kids, kids. And I'm not thinking kids, kids, kids, but of course I'm thinking Edgar, Edgar. So of course I am. And one of the things that Stian from Tana was excited about was that, you know, we're sitting at the bar at the Groucho Club having a chat. So I'm talking to you now about timelines. I'm just wondering if timelines should be our first interchange testing thing, because I think kids would really, really work with that. You know: I want to see the dinosaurs; no, I want to see this. Right? The very, very demanding clients. So if we have the ways to easily get the stuff in and easily do the scales, you know, that's something that is so adjacent to what Andrew is doing. Because even with references, of course, being able to see them based on publication time is actually quite useful, but it's even more useful if we can then say: I managed to scrape some data about the social history of something in the United States, can you overlay it, please? Ta-da.

Mark Anderson: Well, just one quick thing. One of my thoughts when I posted that stuff earlier with the pictures was really to give the likes of Fabian or Andrew an opportunity to come back and say: that's great, but it would be more useful if the contextual information I could get for this was X. So what I'm thinking about is this: we know that in the case I used, it was taken from an HTML document. But that doesn't mean the information in the HTML document is semantically structured in a way that makes it easy to remove that stuff. So we can fake that part. But part of my thinking was a thought experiment: say, okay, well, if we started down the path of saying we knew we would want to do this with a document that had charts or figures in it, what sort of extra, basically semantic, structure would be useful to have? Because that pushes the other way down the pipe, in terms of the production tools that are generating this. If they're able to do that, then we can provide the thing that will be effectively fluidly usable in the sort of design spaces that, you know, the people here building prototypes can use.

Frode Hegland: Indeed. And also, Fabian, the link to the animation. Yeah, I mean, that's beautiful, and a non-quote-unquote-dumb timeline, so to speak. Yeah. I mean, it really is so stressful, the lack of knowledge sharing. You know, some of you are not always happy when I use my theatrical language, but I do think that for us to help ourselves think wider and deeper, we need to be better at how we share knowledge. We need to share knowledge more dimensionally. This is a really, really important task, because there is a lot of disagreement in the world, obviously, and if we have amazing thinking environments but we can't share them, it's a huge bottleneck. You know, I used to say, to everybody's annoyance, that the web browser was the biggest bottleneck of knowledge in human history, because compared to what you can find through the web, the little web browser is a serious bottleneck, right? So now that we're moving into WebXR, we have an opportunity to do more, but it's still either a single-user experience or a shared-at-the-time experience, rather than: I build a multidimensional world, why don't you step into it? And I don't want that to be Meta- or Apple- or Google-owned, obviously; I know we all agree on that. So that's why, maybe, we literally choose one dimension, time, and do a few things where we can easily use LLMs and coding, all kinds of stuff.

Mark Anderson: Have we have the we have the oh sorry, I so just briefly we have the future of text timeline in JSON. It’s a bit of a rough cut because that turns out that the labels we used in the, in, in our writing didn’t lend themselves to being, you know, the sort of length we wanted, but I, I, I definitely did that for Adam. So it’s around. So we, we’ve got quite a bit of the data there so we can, we can jump hopefully more quickly to doing something with it and then sort of, you know, backfill how we’d imagine on a more repetitive basis generating the information. So, you know, if I wrote a document that without without too much extra effort, I would be able to to seed such data into a document because there are sort of two parts. We’re broadly dealing with the the experience end. But most of these things are all predicated on having documents of a type that don’t really exist at the moment. You know? So we’re doing all sorts of things like shimming with visual matter and things for perfectly good reason. But implicit in what we’re doing is, is, in a sense, I think, a, a request to those who are going off to build new tools is, is to, to rethink something that’s beyond a sort of digital version of paper and pen.

Frode Hegland: It doesn't really address the life-after-death issue, Fabian. But it's a thing. Yeah.

Peter Wasilko: As long as there’s reading after death.

Brandel Zachernuk: Yes.

Frode Hegland: I was going to be a bit morbid when Fabian used that brilliant example and say, you know, oh, that's easy to do as a visualization: just put the headset on and make sure the battery is not charged. That's my view anyway, unfortunately. But yeah. I'm glad, Fabian, that you have access to use the Vision, and Andrew, that you'll hopefully be using it more as well, because it does go more into both actualities and also constraints. So, Leon, you posted something here.

Brandel Zachernuk: Oh. Cool.

Leon van Kammen: Yeah, allow me to elaborate, Frode. I have an unpopular request: I think maybe you should prioritize between PDF and knowledge sharing. And don't get upset right now, I know this is not easy to hear, but I'm showing you the tool Loopy. When you said knowledge sharing, it triggered this tool, Loopy. It's actually a tool which is just a web page, and you basically share graphs, and the graph is actually embedded in the URL. So no storage; it's extremely simple. And I was actually thinking, when you were talking about sharing graphs and how people should share graphs more, first I was thinking people should use Loopy more, but actually there should be a Loopy for WebXR. In the sense that, in exactly the same way, the fastest way to get there, if you really focus on sharing knowledge more and faster, is for it to just be the graph. It should not be so much about the PDF; the graph should be the central point. And yeah, you maybe can link a PDF onto a node of a graph, but I would really like you to play around with this Loopy a bit and realize that this is just a URL, basically, being shared with people, which renders a graph, and people can basically press play, and the graph can actually also move; certain connections have weights. And these are all extras. But yeah, I was just thinking that maybe the knowledge graph should be more in the foreground, and the annotating of PDFs should perhaps be more of the cream on the cake, as metadata of a node, rather than the central point of the whole knowledge-sharing goal.
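
[Editor's note: Leon's point, that Loopy keeps the whole graph in the URL itself with no server-side storage, can be sketched in a few lines. This is a hypothetical illustration of the general technique (JSON serialized, then base64url-encoded into the URL fragment), not Loopy's actual encoding, and the viewer URL is made up.]

```python
import base64
import json

def graph_to_url(graph, base="https://example.org/viewer"):
    """Serialize a small graph to JSON and pack it into a URL fragment,
    so the link itself carries the whole state (no server storage)."""
    payload = json.dumps(graph, separators=(",", ":")).encode("utf-8")
    token = base64.urlsafe_b64encode(payload).decode("ascii").rstrip("=")
    return f"{base}#g={token}"

def graph_from_url(url):
    """Recover the graph dictionary from the URL fragment."""
    token = url.split("#g=", 1)[1]
    padding = "=" * (-len(token) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(token + padding))

graph = {
    "nodes": ["PDF", "notes", "references"],
    "edges": [["PDF", "references"], ["notes", "PDF"]],
}
url = graph_to_url(graph)
assert graph_from_url(url) == graph  # round-trips losslessly
```

[Because the state lives in the fragment (after `#`), it never even reaches the server; any page, including a WebXR one, could decode it client-side, which is the "Loopy for WebXR" idea.]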

Frode Hegland: I will let Fabian speak, but first, really briefly: I agree. First of all, I see PDF as a very simple medium which will probably always be around, and most of our data is in text form; that's why I shove it at the end of a PDF. Mind you, PDF is only important in academia right now. I want to escape it as well, except for an archival version. So in our Monday forum, rather than the Wednesday forum, which is more constrained, I would absolutely agree and absolutely love to talk, and Mark, as always, has been saying this for a while, about focusing on knowledge and concepts and knowledge sharing rather than the three-letter acronym. So yeah, absolutely, sailing in the same boat on that, 100%. Fabian, please.

Fabien Bentou: So my interest also is different from a lot of people's, in the sense that my added value, compared to, let's say, developers, is to show something new. And it means I'm literally always looking for something that hasn't been done before, and Loopy, or other solutions that share a state or a graph or whatever by URL, already exist. And I've done it even with WebXR. I couldn't find the link there, but, like, sharing, I think it was a pre-configuration of either a room or part of a room or whatnot. So to me, it's an interesting solution, but it's not a novel one in terms of prototyping. And it can be done, as discussed: you can share part of the state, as long as the target solution can interpret that data. So I don't see a need, to be honest, to elaborate on that a lot further. It's a good method, but that's it. In terms of extracting what is a novel interaction for knowledge management, or having a eureka moment through either research or designing prototypes, I would be surprised if that's something that unlocks, how do you say, a lot of new possibilities. Because, again, it's been done before. Again, it's a good solution, but it's not opening a lot of new doors, in my opinion.

Frode Hegland: I'm very happy to hear that. And, you know, the only way to move forward in an intelligent manner is not to go in one direction, because, you know, then you will be stuck at your local minimum; you won't be able to climb the mountain to go further. So I really think metadata, metadata, metadata makes things happen. So the thing I've talked a bit about, the Visual-Meta copy-with-magic notion: all it is, is that when something is copied, it's got a ton of amazing stuff with it. So in this context, I could so easily imagine we write, like, a prompt thing together for our environment. And with what you're doing, Fabian, you should be able to go from a PDF, or from this amazing Loopy thing, or from whatever, and put it in your new thing, and it lands there so well contextualized. So imagine, you know, with children, for instance: you put something in a new world and they've done something else. It's like, how did it know this and that, you know? How did it know that I made it five minutes earlier? That's a bit basic, but how did it know that kind of stuff, to do it richly? You know, because the issue of PDF came up again, and goodness gracious me, if we just copy black and white and move black-and-white information, that is absolutely horrible. Sorry for using that word again; I'm not trying to be funny. But we need to do it richly, dimensionally, and for you to do amazing new stuff, Fabian, you should be pulling us along with you, saying: in order to do this entirely new thing, what kind of stuff can I get in there? Mark, and then Leon, and I will attempt to shut up for a minute.

Mark Anderson: I was just thinking of something that perhaps pulls together Leon's and Fabian's comments, which is, in a sense, exploring what we don't know rather than what we do know. So we often talk about: if I take all this, how do I show it? And you saw in some of the email feedback to you, which perhaps misunderstood the thing you sent out, people basically saying that, yes, just another graph doesn't necessarily help us much. Which isn't to disparage the graphs per se, but it's about whether they actually yield more understanding. And I think it ties into what Fabian is doing, because in a sense, if you give children a bunch of something, you know, you can give them a bucket of Lego bricks, they probably won't build a house. They'll build, well, they'll build a house that couldn't possibly exist, because no one's told them the house has to be a certain way, or that a door has to be a door; it can be a portal to another dimension or something. So I suppose it's partly exploring that. And what that says to me, in a practical way and in terms of what's being done in the Sloan stuff, is asking: what does this new environment give us that we can't do at the moment? And the biggest thing is definitely the ability to reshape things, basically to represent them in some unusual ways. And I think that's been there for a while. I think we tend to get in a loop over which end to start from: whether we engineer better information so we can make a better thing, or try to imagine an outcome and then get lost as to how on earth we populate it with the information it would need. Leon.

Brandel Zachernuk: Yeah, those.

Leon van Kammen: Are really, really good points. To really dive a bit deeper into the learning moments out of all of this: what I'm thinking about right now is that I just read in my browser here about Graphviz, which is a 33-year-old graph language. It's a textual graph language which allows people to render a graph. So this urge to see something, to be able to describe it in a simple way and render it in different ways, is very old. And I also see that in the VRML period, you know, when VRML was all the rage, and this is a term I actually learned from your wife, Frode, Graphviz actually received a VRML export. So basically you could write Graphviz, you could sort of shape your knowledge graph in text, and then it would render to VRML. And I'm just thinking: what can we learn from that? First of all, we know that VRML was a bit ahead of its time. The headsets were not good enough, there was no hand input, there was not much, basically. So I'm thinking that this urge to bring a textual graph to a better or a different resolution, either in pixels or in 3D, is something which is possible now, and maybe in a two-way kind of way: you can import this Graphviz text and render it to a graph in 3D, and it doesn't have to be Graphviz. Frode and I brainstormed about it; it can be anything, it can even be CSS code or HTML code, and you can just render it as a graph, via LLMs or not. But I think perhaps this is a sort of moment in time which might be educative: like, why did that not work, this VRML period, and, you know, why would things perhaps work better now, concerning this urge to bring graphs into VR?

Frode Hegland: Double hand. Triple hand. Okay, that's very exciting. I also saw what Fabian wrote here, and I'm jokingly saying: maybe just add a Z dimension to that language. I'm sure it would be more complicated, but even sending a graph easily between systems, whereby people can choose to dimensionalize it later, is interesting. And here's the thing, a question for Andrew and Fabian: have you both tried to look at the stereoscopic video in the Vision Pro headset?

Fabien Bentou: Yes, the two five-minute videos, the dinosaur and the, not paragliding, the walking-on-a-line one. I think.

Frode Hegland: I'm talking about the ones you can make yourself.

Fabien Bentou: Every time I tried with the device itself, it says the light was wrong. And then.

Frode Hegland: Oh, that's very strange. Andrew, have you had a chance to look at that?

Andrew Thompson: Not recording video? No.

Frode Hegland: It's okay. It's really important to this part of the topic; I'm not changing topic. I generally use an iPhone 15; I think the 14 also works. It is stupidly good. The motion is not brilliant, but when I'm at some family thing, I just sit back and film. You shouldn't move the camera too much, just film a thing, and when you view it on the headset, it isn't rendered super big, but it's big enough. It is as though you are viewing it live. The added dimension of depth is absolutely freaking incredible.

Brandel Zachernuk: So.

Frode Hegland: When we're talking about graphs and visualizations, I'm not saying knowledge would necessarily be better in 3D, unless you can get away from your desk, which you can't always do, but it was just something. It doesn't work for your eyes, Fabien? Why not?

Fabien Bentou: One of my two eyes is too lazy, apparently. So stereoscopic images do nothing for me. That's why I'm excited about VR, about WebXR: because I physically move my body, even just a centimeter, and then I see depth. But if you give me stereoscopic images, yeah, it's the same as a 2D, a normal image, to me. So unfortunately, it just doesn't do anything for me.

Frode Hegland: Okay. So it would need to be properly 3D. Yeah, that makes sense. So this Graphviz: is that what the software that Peter, you gave us a link to earlier, is based on? Are the format and the software by the same name the same thing?

Peter Wasilko: Well, there's a DOT file format; .dot is the extension for the Graphviz files. And it basically describes directed graphs or undirected graphs.

Brandel Zachernuk: And they’re nice.

Peter Wasilko: Integrations with the LaTeX typesetting system, so that you can pour them into beautifully typeset documents.
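
[Editor's note: Peter describes DOT as a plain-text format for directed and undirected graphs. A minimal sketch of generating DOT source programmatically follows; `to_dot` is a hypothetical helper written for this transcript, not part of Graphviz itself.]

```python
def to_dot(name, edges, directed=True):
    """Render an edge list as Graphviz DOT source text.

    Directed graphs use the `digraph` keyword and `->` edges;
    undirected graphs use `graph` and `--`.
    """
    kind, arrow = ("digraph", "->") if directed else ("graph", "--")
    lines = [f"{kind} {name} {{"]
    lines += [f'    "{a}" {arrow} "{b}";' for a, b in edges]
    lines.append("}")
    return "\n".join(lines)

dot = to_dot("refs", [("paper", "figure1"), ("figure1", "data")])
print(dot)
```

[The output of the sketch is valid DOT text that the `dot` command-line tool could lay out; this is the "shape your knowledge graph in text, then render it" workflow Leon describes above.]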

Frode Hegland: So have a look at the link I just gave; thank you, Peter, because of you I have it. It shows the grammar, and that's it: very readable, very Visual-Meta appropriate, so to speak. We should be able to throw our own stuff in there, shouldn't we? In our own system, you know, like: show this, and then plus these other things.

Peter Wasilko: Oh, definitely.

Frode Hegland: Mark.

Mark Anderson: It's interesting, because one of the things here that I saw at play is that we tend to associate nice-looking things with having meaning. And yet the other half of me, who spends a lot of time in tools for thought, is constantly telling people: stop tidying it up, you're making it worse, because you're making an effort to give something a form that it doesn't yet have. And so, I know I've said in the past, way back when Fabian was sharing some of the prototypes, yeah, but it's all a bit blocky; but I do at the same time hold on to the fact that too much polish too early begs the question of what one is making better. And how this links across to the graph thing: I remember using it a while back, and I realized after I'd been playing with it a while that I was falling down the rabbit hole of thinking that, if only I could have the right input, then magically I would get something pleasing that would explain life, the universe and everything, which of course wasn't really a feasible outcome. And also that, although I can graph some things out, they're never going to end up in a state where any one presentation explains all parts of it. So I think there's a dynamic there to investigate as well: how we work ourselves around the problem that we're looking for the dopamine hit of getting something pretty and informative, but that's probably not what we were after. Fabian.

Frode Hegland: Just a really brief one on that, Mark. That is why it's so important to have different kinds of ways of viewing it, because often doing too much is not a good idea. Absolutely agree. Fabian, please.

Fabien Bentou: So I used Graphviz for a while. It’s not perfect, but it helps already, so yeah, I recommend it. It’s trivial to export tweaks, or to get just a 2D image, not a problem, probably even a PNG with nice transparency and all. I don’t think it’s a big challenge. And then I clicked on Peter’s link about the editor. My initial reaction was, I can’t see it being useful, because the power, in my opinion, of Graphviz is that you give it text, you can copy-paste it, you can basically generate it according to your patterns, say all nodes of this type have a color or whatnot. But one thing here that’s quite interesting, and that might be usable in XR, is the one affordance I see in that editor: you can click on a node or an edge, and it highlights the code related to it. Of course, when you have three items, two nodes and an edge, it’s pointless, it’s overkill, because you know which node does what even if you don’t know the language. But when you have quite a bit, being able to jump right to the right piece of text is, I think, a very interesting affordance. It’s probably something that could be interesting in XR, because instead of clicking on it, you grab the 3D node for it. So yeah, it gives me a little bit of food for thought for interactivity in this kind of graph, because when it grows in complexity, that’s also probably where it’s useful. If it’s super simple, you don’t need to graph it, you don’t need to show it, you get it. But if it’s complex enough? So yeah, it’s interesting to consider this editor and this kind of complex process in XR, with a new affordance, arguably.
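The click-to-highlight affordance Fabien describes, selecting a node in the rendered graph and jumping to the DOT source that defines it, comes down to a mapping from node identifiers to character spans in the source text. The sketch below is only an illustration of that idea; the function name, the regex-based scan, and the assumption that identifiers are quoted strings are ours, not the editor’s actual implementation.

```python
import re

def node_spans(dot_source):
    """Map each quoted identifier in DOT source to the (start, end)
    character span of its first occurrence, so a UI could highlight
    the defining text when the rendered node is clicked.

    Simplified: treats every quoted string as an identifier
    (including attribute values) and records only the first hit.
    """
    spans = {}
    for match in re.finditer(r'"([^"]+)"', dot_source):
        spans.setdefault(match.group(1), (match.start(), match.end()))
    return spans
```

A click handler would then look up the clicked node’s id in this table and select that span in the text pane, which is exactly the jump-to-source behaviour described above.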

Frode Hegland: That’s fantastic, Fabian. That is really, really interesting, particularly the thing of clicking. Having now really experimented with Author in Vision and, you know, building it, I obviously have a different user experience. First of all, the eye selection often interferes, even when I use a dedicated trackpad; hopefully in software we should be able to say, I’m in Author, don’t let my eyes suddenly select things. But that’s kind of a detail. The bigger issue here, in what Andrew is developing, and from what you’re saying and what Mark is saying, is what I just wrote down in the comments: the importance is in the metadata, as I repeat, obviously. But what I mean is how things can potentially connect, not just how they are manually presented. I find in Author, when I write my own stuff, that I go through the defined concepts dialog more than I look at the map, for instance. Just two different views of the same thing. So if we can, in our community, develop ways where Deeney, for example, being our prime user, can select stuff, pull it aside, make a note and whatever, but all of these things are addressable, then you can choose to have a map-style view and it refers back. By the way, if you have a chance, you should look in the headset at what Andrew has done, because already you have a list of references; you can pull one out, click on it, and it’ll show you the sentence where it appears in the PDF. You know, this is where it’s becoming very tangible and exciting. Thank you, Andrew. Look forward to Wednesday. Yeah. That’s it. Fabian, please.

Fabien Bentou: Two things quickly, I forgot exactly. The first one is, I discussed this briefly with Leon: Meta is now potentially proposing to give a browser window to applications, including in WebXR. So you would have your 3D experience, and then in the middle of it, not very configurable and all, but you would have a browser window, which would be high resolution, high quality. It would be a single window, not positioned right in your face. I think the use cases are basically things like small payments. I will try to share the link in the chat, but yeah, limited, but still a lot more than nothing. And for example, you can imagine browsing a wiki, or reading probably even a PDF or an HTML page, probably getting information in and out, because you need to open the right page and get some data out, for example that the payment has been processed. So that would definitely be something to explore. I don’t know how that would translate to other platforms besides Meta’s Quests, but something to check. And then the other quick thing was, I don’t think Brandel actually even mentioned it, but the article he published on eye tracking in WebXR is out. So basically you look at a target, for example a primitive, a sphere, a torus, whatever, and you pinch with your fingers, and then you can move it wherever, or move it around, let’s say; and you can look at another object, pinch again, select it, move it around. So another way to interact, because eye tracking until last week was not usable in WebXR, which is annoying because of course that’s one of the new features of the Vision Pro. But yeah, it’s now available. It still doesn’t help for augmented reality, and it’s limited, it needs a bit of unpacking also in terms of interaction, but at least eye tracking can be experimented with in WebXR on the Vision Pro.

Frode Hegland: Thank you. That’s cool. And please remember, Meta named their company after Visual-Meta. Just, you know, not copying; I mean, time-wise, afterwards. So a question then, with this DOT thing, as I put in the comments here: is that something we should just adopt and integrate with, you know, Visual-Meta? Please remember, all, it’s a box; I’m not at all dictating what’s inside it. We just need to get to the point of some kind of sharing happening. You know, just having the time dimension, for one thing, could allow us to test specific things between environments. You still have your hand up, Fabian. Please go.

Mark Anderson: I seem to recall from making DOT graphs, you know, basically just to see if I could do it, that it was something of a faff. And I do wonder about, in a sense, investing time in DOT, which is a long-lived but simple format from the time it came from, whereas you can probably generate something like DOT from a JSON file or something. I’m just thinking in terms of making life easy for ourselves. And I think today JSON and JSON-LD, those sorts of things, seem to be about where data transfer is at, and I yield to others in the room who have more practical experience than me on that. So we don’t necessarily need to go writing stuff in DOT; that may just be making more work for ourselves. Yeah.

Leon van Kammen: Yeah, that’s a very good topic you’re touching on, because this graph data is sort of, yeah. On one hand, I completely agree with you. I think we should not get too focused on this beauty and simplicity of graphs, especially since it’s a snapshot of a certain time, and a lot of things evolved towards this RDF and JSON-LD kind of stuff. On the other hand, I also have a feeling that that is a sort of, you know, a trap as well. And I realized that also when Frode started to make me enthusiastic about BibTeX again: it’s really easy to get trapped. Maybe both sides are a trap. It’s easy to get trapped into this simple, beautiful Graphviz, which is probably oversimplifying a lot of things, and it’s also easy to get trapped into this super technical, complicated RDF, JSON-LD kind of thinking direction. So yeah, I wanted to talk you more into Graphviz, but now, while I’m reasoning out loud, no, both sides are a bit of a trap.
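Mark’s suggestion, keeping the node/edge data in JSON and producing DOT only when a picture is wanted, can be sketched roughly as below. The JSON shape (`{"nodes": ..., "edges": ...}`), the attribute handling, and the function name are illustrative assumptions; nothing here is a format the group actually agreed on.

```python
import json

def json_to_dot(graph_json):
    """Render a minimal JSON node/edge description as Graphviz DOT text.

    Assumed input shape:
      {"nodes": [{"id": "...", "attrs": {...}}, ...],
       "edges": [{"from": "...", "to": "..."}, ...]}
    """
    graph = json.loads(graph_json)
    lines = ["digraph G {"]
    for node in graph.get("nodes", []):
        attrs = node.get("attrs", {})
        attr_text = ""
        if attrs:
            inner = ", ".join(f'{key}="{value}"' for key, value in attrs.items())
            attr_text = f" [{inner}]"
        lines.append(f'  "{node["id"]}"{attr_text};')
    for edge in graph.get("edges", []):
        lines.append(f'  "{edge["from"]}" -> "{edge["to"]}";')
    lines.append("}")
    return "\n".join(lines)
```

Keeping the canonical data in JSON would make DOT just one rendering target among several; the same structure could in principle feed a 3D scene directly, which speaks to Leon’s point that neither format needs to be the trap you commit to.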

Brandel Zachernuk: I’m sorry.

Frode Hegland: No, no. Go on.

Mark Anderson: I just want to, I mean, if I recall, DOT is basically node and edge information. You can write that text in lots of ways, but pare it down to the minimum: you’ve got nodes, you’ve got edges, and then you’ve got properties on both of those. And you can write that many ways, but it’s also textual. I suppose what Graphviz was doing was the layout algorithms, and of course those have got bigger and sexier in a big way since; look at all the wonders you can do in things like D3.js and whatnot. So I suppose the question is, what in that is meaningful? I think the seductive part is that we put stuff in and we get a picture out, and we’re happy with a picture. We put text in and we get a picture out, especially if we can’t draw well ourselves. And that’s, I think, where we get seduced by the prettiness, because what we don’t tend to ask is: have I made something that’s useful and meaningful? For instance, I looked at Fabian’s wiki just now, one of the links he put up, and in the sidebar it’s got a dynamically generated little graph. And you could look at that two ways. You can say, well, it’s just a sort of messy spider’s web; but if you hover over the nodes, if you interact with it more meaningfully, rather than just mashing the keypad and pushing all the knobs, you can potentially walk around that space.

Mark Anderson: And if what is represented by the edges in that graph is meaningful, then you should get somewhere interesting. So there’s an interesting reflection in that, because in a sense, if you’re walking around the graph and you’re not getting anywhere interesting, you conceptually have a problem: not in the way you’ve recorded the information, but in how you’ve captured the links, whether you made those links yourself explicitly or some algorithm you’re working with has made them for you. And that’s a bit we’re all exploring in a way at the moment. I just don’t think there’s been enough time across everyone, and I don’t just mean the people in this room, but generally; I just don’t think enough has been put into that to yet know quite what it means. I think a lot of the effort has gone into making things that look pretty so far, and that’s not a bad thing. It’s entirely natural. But the actual deeper meaning of it, I think, is yet to emerge.

Frode Hegland: I think, Mark, you have a different way of looking than I do. When we’re talking about looking pretty, I think we mean very different things. I’m a very visual thinker; I think you’re a more conceptual thinker. So clearly we need different ways. Anyway, I know you don’t agree with a lot of visualizations, but I just wanted to show one thing to Fabian, one thing to you in particular, and for everyone. So this is a screenshot from inside Reader in Vision, and these are the views we currently have. Obviously this is my mock-up, but it has been implemented. One of them is all horizontal and one is all wall. That’s 100% what you’ve been experimenting with, getting all the pages up there. Normal PDF, nothing special, but just that. And then there is the issue of this kind of stuff. This is how most of us deal with communicating design things to people: we have an image and we put arrows on it to show things. And so Mark now is going to help me with my thesis, you know, God help us both, because he’s just a tad better than me. I can imagine giving him something like this rather than something like this. So after this call, I’m going to clean up this little bit here.

Frode Hegland: But there will be things that I want to communicate to him and back. We all know annotations, obviously, from Word and so on, but it’s crazy how constrained they are. So I really could imagine having a PDF as just a main type of thing, and having all kinds of stuff coming out of it, you know, the timelines and all the things we’re talking about. But that can only happen once we properly allow the data to flow. Now, I don’t think I have a screen video here, Fabian, of that Reader review, but I have started recording my sessions, especially out and about, like this one in the British Museum. A bunch of German girls sitting to my left, some kind of school outing, and a few glances. When finally I took it off, they asked me, is that the Apple headset? It hasn’t been released in Europe yet, and they know about it, right? So we’re not living in a what-is-VR world; we’re living in a what-brand-is-happening world. That was anecdotal, but kind of interesting. Now you have to go in a minute. Yeah. We should all have gone before. But

Frode Hegland: These Monday discussions go all over the place. It would be nice if they were slightly oriented towards a thing. And I guess Fabian gave us a talk earlier about how it’s got to be a new thing. This is the Future of Text, after all. So I think we should try to really encourage throwing all kinds of nonsense out there and see what happens. And Mark, your hand is up.

Mark Anderson: Very quickly: when you were showing those few graphs earlier, what I thought when you showed the picture of the text with the, you know, the side annotations, is this reflection on the fact that what you’re really seeing is the central canonical narrative, but contextualized. Because one of the things you might be trying to signal to the reader is: okay, this is the path I’m trying to thread through these things, which may not be present immediately, literally, in the text, in the narrative. But I’m trying to indicate to you that they’re there, because someone reading it might say, oh right, I get what you’re meaning. And even if it’s not in the narrative, it’s absolutely there in the surrounding things. That was the point. Fabian?

Speaker9: Yeah.

Fabien Bentou: Very quickly, because I really must run. I’m wondering also at which point it’s pure novelty, the kind of Red Queen in Alice in Wonderland thing about always new, new, new: we need to run faster in order to go at the same speed. So people saying, oh, it’s the Apple headset: it’s also because they’ve seen a lot of headsets. And when you’ve worn one headset, if the next isn’t, I don’t know, I guess 20% better in every dimension if you’re an expert, or maybe 50% or 100% better if you’re not an expert, to make a significant gap, then yeah, it’s just another whatnot. So I think we need to be a little bit cautious on that front: yes, the whole set of technologies and tools is evolving fast, but that doesn’t also mean they’re useful. And the interest from people who are not actually doing anything with it is superficial at best.

Frode Hegland: Absolutely. But this is the Future of Text, so we should be following your prototypes; this is not the current best of text that we need, you know, blah blah blah. So you’re absolutely right on the ball. Very often old-fashioned solutions are the best, and we have no problem with that. We’ve mostly read paper books, all of us, you know; we often use 2D screens. That’s fine.

Brandel Zachernuk: Oh, yeah.

Frode Hegland: So I’ll see some of you on Wednesday. Fabian and Leon, I know you’re both quite busy. If you suddenly have a little bit of time on Wednesday, come in; there’s no commitment. I know you generally can’t be there; that’s fine. And also, crazy thoughts for the future are very much encouraged. I just think that we should have some means to throw these balls of knowledge around in this space. That’s it. So, yeah, thank you all. Any other last comments, questions, or... Oh, by the way, please at least have a look at this. This is Deeney’s book that she released on Friday with Mario. It would be really good if we at least have a look at it.

Fabien Bentou: I’d be happy if she wants to do a presentation or introduction or something. I’d love to hear it.

Frode Hegland: Okay, good idea. I will definitely talk to her about that. And yeah, thanks, everyone. Oh, one more thing: please think of people to invite to the symposium and the book next week. I’m going to try to do a bit of listing stuff together. If you think of someone, please send it to me through whatever medium you choose. All right, have a good week. Bye, guys.

Fabien Bentou: Take care. Bye bye.
