11 March 2024

Peter Wasilko: Are there.

Frode Hegland: Hello. Sorry I’m late. I’m going to turn on some heat in this room.

Peter Wasilko: I thought I was off by an hour.

Frode Hegland: You were? But we had to follow America, right? That’s the way of the world, I think.

Peter Wasilko: Well, I’m brunching, so I’ll chew softly.

Frode Hegland: Yeah, I’ll carry a big stick. Hello, everyone. So Andrew, I guess Dene is not coming today. I think that’s what she said. But then she also talked about the timing, so I’m not 100% sure.

Andrew Thompson: Yeah, I think in general she can’t make Mondays for a while. Just because she has so much going on with classes and grants and everything else. Yeah. She’ll be here on Wednesdays. That was my understanding. I assume we’ll see her sometimes.

Frode Hegland: Yeah, I think you’re right. So I’m just going to plug in my charger here while we’re talking. Yeah. No, that doesn’t sound too dissimilar from my impression.

Speaker4: So what’s going on here?

Frode Hegland: That’s so nice. The EU is so good to us. USB-C. It’s nice to have that standardized, not lots of different plugs. Right. So. Yeah. Let’s see who else is coming in. Anybody have anything beforehand?

Speaker4: But could you explain what you meant by the request you made about another meeting?

Frode Hegland: Yes, I can, and Dene is about to come on.

Speaker4: So many buttons to beat.

Frode Hegland: She made it anyway. Morning.

Dene Grigar: How is everybody?

Speaker4: Morning. Oh. We’re fine.

Frode Hegland: You know, Europeans just changing everything to suit the Americans like it’s been for 100 years.

Frode Hegland: It’s good to start early.

Dene Grigar: We can do that. Or we can go home. I’m happy not to be here today if that’s going to be a problem.

Frode Hegland: I wouldn’t put it that way.

Speaker4: We are the greatest nation that ever existed.

Frode Hegland: Well, at least according to Alexander Hamilton. You are the greatest experiment, that I will agree with. Right.

Speaker4: So we have to declare it a failure.

Frode Hegland: Yeah. Rob, you asked about the thought of having another meeting. The thought is to make things more flexible and to have a meeting where all we talk about is interaction. We don’t talk about workflow, data formats, academia. We just talk about, hey, wasn’t that a neat button? Or hey, did you see what I dreaded? That kind of stuff.

Speaker4: Yeah, I think that would be interesting.

Frode Hegland: So, Dene, I’m super happy you’re here today, despite you not being able to be here on many Mondays. I want to make a kind of big challenge to everyone, something I just thought of rushing here; that’s why I’m late. And that is: the interactions in XR to date are not very good.

Frode Hegland: It’s my feeling. I brought my Vision Pro with a dedicated keyboard and trackpad into town today and I can’t... Okay, so I don’t have the App Store installed, so I’m missing things. But even in Author, my own dear Author, I can’t really get that much work done. It is nice to get the laptop screen bigger. But then, as most of you have, I also have a big monitor on my main desk. So right now there is no compelling workflow that I know of for the Vision. It just doesn’t exist. So that’s really worrying. And equally worrying is how incredibly slowly the general population is looking at this stuff. You know, now it’s just an overpriced toy and all of that stuff. There seems to be very, very little appreciation that in at least 3 to 5 years we’re going to have the Apple Vision here, or something equivalent, and in 5 to 10 years it’s going to be a trivial cost item. All these things we talked about, it’s coming. It’s completely obvious. Let’s wait for Fabien. I’ll try not to repeat the whole thing and put you to sleep. I am late, but I do have my coffee, so do not worry, anyone. Where I was staying with Dene in Washington, there were all these wonderful coffee places by the bed and breakfast.

Frode Hegland: It was lovely. Right. So, Fabien, just to recap the last ten seconds: in my experience now, I find hardly any reason to work in XR, even on the Vision, even with the apps, even with the keyboard and trackpad. It just isn’t there. That’s half of what I want to say today. The other half of the complaint is that there seems to be very little imagination in the general public that in a few years these devices will be quite cheap and readily available. So we’ve got to do what we’ve been talking about for the last two years. So on that note, I’m thinking not so much about the Future of Text, because that’s smaller, but thinking about the hypertext conference. I don’t want to go there, and Dene, I hope you’re holding my hand under the table on this, I don’t want to go there and just be in another room, putting the headset on a few people, showing them a little demo. That’s just not enough. Of course we’re going to be writing a paper, but what I really want to do is have our collective effort be to copy Doug’s demo in our way, not copy his workflow or anything, but try to make it that big. I’m not saying we can do it. I’m not saying we have the ego or the money for it, but in terms of framework and color, I think we should really try to do something absolutely massive.

Frode Hegland: And oh, we lost Fabien. And you’re in with no mic; your mic doesn’t work. That’s not a detriment yet, until you speak. Okay. There. You know. Yeah. So what I’m thinking is: what Andrew is doing is phenomenal, and for quite a while now he’s going to be just tweaking what we already have. It’s going to take a lot of experimenting to get that done. But guys, what do you all think about the idea of, first of all, as we talked about many times, having our own book available to read in XR? That should be, on the base level, relatively trivial. But of course we can elevate it. So how about, instead of just trying to do one thing... sorry, I’m getting ahead of myself. We have one URL. When people load that in their headset, there isn’t just one thing to enter from that location, right? There can also be community efforts, clearly labeled as community efforts. So that’s one big thing. And the other thing is, I would really, really like it if the same information or documents can go through the different communities. What do you think? Starting with Fabien.

Fabien Benetou : Maybe Fabian last because I built a bit on top of it, but I stray away.

Speaker4: Yeah. What?

Fabien Benetou : It’s not exactly on point, so I’d rather if others could reply first and then I can open up.

Frode Hegland: I mean, I’m having problems with my darling thesis and visual-meta and all of that stuff, and that’s making me think about a lot of these aspects. But even in addition to that, I do think we need to aim higher, and one thing Dene and I have started doing is working on the invitation for more people for the book and symposium. So, you know, once we get a little further on that, we all need to think of who to invite.

Frode Hegland: Yeah, I see your point there, Mark. But any comment on that first, on just shooting higher?

Dene Grigar: We have Fabien. And then I’d like to say something after Mark.

Frode Hegland: Well, no, you first, Dene, because Fabien said he would like to wait a bit.

Dene Grigar: Well, you and I talked about it in the grant. We talked to the Sloan Foundation about using the conference as the opportunity for usability testing. So we do have something to do that’s very important. And we can line up people to come in in increments during the day and have them go through it.

Speaker4: Yeah, absolutely.

Dene Grigar: So I mean, we have something powerful we’re doing. We’re not just going, oh, look what we’ve got here. There is a real focus to our participation besides the paper. Now that said, we need to have something for them to experience. So if we think about what we’re trying to accomplish, that will help define what we bring to the table. And we need to set up the tool: what are we asking them to do, what do we want to learn? And I think the points you’re bringing up are important. And I will say that people aren’t talking about productivity with the Apple Vision Pro because they’re not thinking productivity. And the point of our grant, and the reason why we got it, is because we’re thinking about it. We’re going to tell people how it’s productive. And so that’s what the usability testing is all about. That’s what the case studies are all about. So I think we’re on the right track.

Speaker4: Oh no question.

Frode Hegland: Absolutely. 100%.

Dene Grigar: And I guess I’ll finally say I’m not surprised that we haven’t seen more done and more talked about, because this is how things unfold. I mean, hypertext was coined in 68. We didn’t frigging get storage space until the 80s, right? It takes a while for things to catch on. People have to get it in their brains and understand and process and think about what it means to be hypertextual. And now it’s so ubiquitous we don’t even talk about hypertext anymore, right? So I think there’s a process for this to happen. We’re at the forefront of this, and that’s cool. I don’t want everybody talking about it right now.

Frode Hegland: Yeah. No, I agree on absolutely all points. Mark. And then Fabian, we can deviate a bit.

Mark Anderson: It’s funny, you picking up on Dene’s point. I mean, half the problem for hypertext is it’s overshot the target. So now nobody talks about it and they still don’t understand it. They just assume it’s something to do with the web, which is not it; it’s something deeper and richer. And it’s actually very pertinent here because, even though I don’t have the kit to do some of the things you’re doing, I actually find it quite interesting what can be done in there. Because part of it to me is about manipulation of objects, even if they’re abstract objects, for things I’m thinking about, in a way that is much harder, certainly for most people, to do on the 2D plane of a desktop or a piece of paper. I quite get that it might be difficult to type something, but is that everyone’s need and want? I think one of the things here is to be mindful that every time we do something ourselves, it’s really hard to sit outside our own experience. So, for instance, it might be difficult to have a meaningful writing experience sitting in a train. But is that the primary use case this early in the field? The really interesting thing that has come out from the experiments already done here is watching how hard it is to move from conceiving of a document as hypertextual, actually having all these linked parts, to actually being able to do that in reality, because, it turns out, the gestures are less easily signaled than we’d imagined.

Mark Anderson: We just use our hands and we wave them around; it’s all quite natural. It turns out that, as with so many other things, that has to be computed; we have to be unambiguous in our signals. And so that’s another challenge. But funnily enough, for once I come out of the gate a bit more positive on this. I think there’s a long way to go, and I don’t think things will happen as fast as we’d like. And I think it’s laudable but overreaching to think we’re going to do something like the mother of all demos at this point; it’s too early for this. From what I’ve seen in what we’ve done so far, and in the timescale we’ve got until September, I just don’t think enough bits are there. And bearing in mind that the wider audience knows even less about it than the group of people here assembled, I think it’s actually a really big ask.

Mark Anderson: I think the biggest scope for tripping up and failure here is to just assume everyone else is going to think, wow, whatever we show them. The reality is it’s going to take an awful lot to make people go wow. It’s quite hard, I think, to get people to even engage with the idea of it, because as far as they’re concerned, it’s something they just don’t need. So the stuff we’re doing is actually really useful, for instance the academic reading, because I can see that doing information triage of papers and things is something that we can potentially start to do. It’s not ideal yet, but we can actually start to do that. It’s already showing up problems in the way we store our documents. And again, that’s a challenge in the short term, but these things get overcome. But the things that are probably useful and tractable in the short term, I don’t think, are going to be ‘oh, wow’ for the massive audience. Because we’re not trying to do entertainment, we’re trying to do meaningful work. And the wow will come out of doing meaningful work rather than it looking sexy.

Speaker4: I agree.

Frode Hegland: I’m not sure if that’s the whole package. Fabien, are you okay if I respond to that? Okay. Thank you. So my main observation was that I cannot currently do real work in the Vision. That was about the current state; it was not saying it can’t happen. And what’s happening with Andrew is a very good road in that direction, but even that is really, really baby steps. So what I’m proposing, having used the Vision for a while (and Rob, Dene and Fabien, tell me if you agree, and also you, Andrew), is based on the actual Vision experience rather than XR in general. First of all, the frames for native apps are really clunky. They have huge rounded corners and huge space at the side, so when you have something up, your field of view is pretty much immediately obstructed. It’s an incredible demo to be able to put lots of screens everywhere, but it doesn’t take a long time before it’s basically a mess, right? I’ve also noticed, and I think this is very, very important, over the last few days I put the Vision on a few friends who are technically knowledgeable but haven’t used it before, and the thing that always makes them go ‘oh wow, this is amazing’ is the environments: being on the moon, or Yellowstone, or that kind of stuff. So that’s really, really interesting. So I think we should start thinking about stuff we talked about quite a long time ago: building proper environments in this, so you have walls to put things on. Because so far, what we’ve got with Andrew’s brilliance is a lack of space in a way, or a lack of spatiality. And that is not a complaint. All I’m trying to bring up today is a call for more suggestions of more interactions in XR.

Frode Hegland: I’m not sure. I haven’t shown anyone our own stuff, Dene. I’m talking about the general notion of being just in the headset.

Dene Grigar: Well, you already asked me what I thought about having an environment while I’m trying to work, and I said I don’t want one, but I think you can also have it on or off. Let me finish. When I’m working in the computer environment here on the desktop, I don’t have a background going while I’m working. I have the whole pages in front of me. I don’t have something moving or any kind of images back there. In fact, that distracts me. I want to focus right on the text. So that environment is great if you’re in entertainment, but not for academic use.

Frode Hegland: I’m not sure if we’re talking about environment in the same sense. So far people have done a big wow because of the environments that we know about. But what I’m talking about is more like an office. So either you can use your own office, the office that you have, to put things on a wall, because it knows where a wall is and the wall can be overlaid. And I’m also talking about maybe making a virtual office. Sure, I strongly have the same feeling as you when I’m reading: I don’t want stuff going on. I absolutely agree with you. But there are also ideas like, over there I have that kind of information, and it is in a space, not just floating away. Those things can be discussed, in lots of different versions. I’m just saying that it may be a good way to now step a little back and get more perspectives, to maybe build more.

Dene Grigar: But by more perspectives, though, we need to stay in the academic realm, because that’s who we’re building for. We’re not building for our best friends. I don’t really care what my neighbors say; what I care about is what my colleagues say when they experience this, when they’re making videos or doing whatever they’re doing, writing about their work, doing annual reviews for each other, peer reviewing work. What do they want? That’s who we’re building for. We’ll let Apple build for Jane and Joe Doe. And I think what Andrew says here in the chat is really important: ‘We did try some simple environment tests in person. It was generally something we agreed to move against. However, if we want to revisit it, I can implement that. I believe we will need to try some new things though, rather than test the same things we’ve done already.’ Yeah.

Speaker4: So yeah, absolutely.

Dene Grigar: But I think it should be an on-or-off aspect, because there’ll be people like me that do not want to be tainted by some sort of background. I don’t want an office. I’ve got an office, I’ve got this. Okay, nice.

Frode Hegland: Okay, okay, then I understand. I’m not saying we should replicate the moon like Apple has. I’m not saying we should replicate being in a forest like we have tested. What I mean is that when you’re working on something that doesn’t have a substrate (text naturally has a substrate), it can become quite messy. So we should allow all users the kind of space they want; maybe a desk might be useful, to put something down on. Okay. Right. Mark, over to you.

Mark Anderson: I sense two things are coming together quite naturally. The thing is, the things that people will go wow about are sort of visual effects, but they’re not necessarily useful. So they’ll go ‘that was amazing’ and walk out. But actually, I accord with Dene here in terms of things that are practically useful. The problem comes first. So if I build myself a virtual office and I end up worrying about what the virtual wall is and why I stick something on this wall, then a better thing is to ask the question: why does it need to be on a wall? What is the actual case, apart from the fact that, oh, if it was there, I could...? I’m not saying we know the answer to that, but I don’t think it’s a given that we need to have these things. And it’s not a binary thing, because, as Fabien has rightly shown with some of his experiments, you may want to put structures in place because you want to have some spatial sense of where they are, which alludes more to the sort of memory-palace thing. But certainly in terms of the use case that we’ve taken on board for the grant work, I think it’s a long way from that. I mean, the really interesting thing is, without us intentionally doing so, where we are almost at the moment is we’re sort of reimagining reference managers. And actually, that’s a really good thing, because I think we generally feel that these are not great tools; they’ve grown like Topsy over time, so in their various ways they’re all quite difficult to work with and not ideal. But with the experiments being done at the moment, we’re beginning to have some tractability on the problem. And in an information space, if you’re trying to keep situational awareness, you want unnecessary prompts at a minimum because, unless they’re there for a deliberate sort of anchorage purpose, they actually create more cognitive noise.

Frode Hegland: You love using the term noise when we’re talking about visual display, Mark.

Mark Anderson: It is noise because you’ve got to process it. And it doesn’t do...

Frode Hegland: You’re making an assumption that it’s noise. So the question to Mark and Dene is: how do you organize the files on your computer? Do you use list view, or do you use icon view, where the documents are everywhere? And how do you organize them?

Dene Grigar: Repeat that, please. I was responding about noise.

Frode Hegland: On your computer, when you organize your files, do you organize them in lists, or do you organize them as icons loosely in a folder?

Dene Grigar: Both. Depending on situation.

Speaker4: Right.

Frode Hegland: So in some cases you both agree that it is useful to have the information in a specific structure, right?

Dene Grigar: Well, I think information is structured. I mean, even if I have it.

Mark Anderson: I’m not sure the analogy holds. I mean...

Frode Hegland: Mark, I’m not saying we should have like a fake wooden-wall background room with pictures on it. No, no.

Mark Anderson: Sorry, I was answering your earlier point. So I use Finder in most of its layouts. Probably the one I use more than anything is the cascading column thing, because it’s just a very quick way for traversal. But it’s partly echoed in here, so it’s broadly spatial. But no, I change around all the time. So, to turn the question around: do I work in one fixed way? No, because it’s entirely an affordance to help me do my work. So I arrange that tool to help me do the task I’m doing at the moment, aware that there are effectively multiple view specs. So I use the view spec that’s pertinent to what I’m doing, which is why I said it’s hard to answer ‘do you use this or that’: I think it’s asking the wrong question. I just use the one that’s pertinent at the time. A question you could ask is: is there a view spec that you don’t have? And I’d have to go away and think about that. There probably is. And to a certain extent, one of the things we’re doing with the current experiment is very much exploring a new sort of view spec, where you have this malleable information space and you can begin to look inside; you can almost see the hypertext within the document, as opposed to treating it as a single solid box.

Frode Hegland: Okay. I didn’t think this would be a contentious issue. I apologize. I’ll drop it. I won’t talk about it.

Dene Grigar: We’re not being contentious. We’re having a debate.

Frode Hegland: No, you’re very strongly saying no. You don’t want it. You don’t want any background? Absolutely not, I’m an academic, I don’t want it. That is not a debate; that is very clear. And also, Mark, when you’re talking about noise and things: at some point you’re not just going to want text in a space all the time. I am not trying to confine anybody to one thing. What I said at the beginning of this was really that we should probably look at more ways to visualize this while Andrew is working on what he’s doing. And so, you know, I feel I’ve got a very strong kind of pushback on the notion of things in the space, and I apologize.

Dene Grigar: We went through this already in the lab. We spent a week on this, discussing it. And also, when you introduced this topic, you talked about your friends and the wow factor. The wow factor shouldn’t be the Disney experience. It should be, look what I can do with this, this is so incredible. I mean, this is how I felt about the Macintosh when I put my fingers on it. It was the fact that I could hit ‘n’ and type an ‘n’, instead of the three keys I had to type to get an ‘n’ in the program before WordPerfect. It’s like, wow. And that is a different wow than, oh look, it’s so cute. So I want us to take an academic approach to this, and I also want us not to be sidetracked by our friends who, as much as I adore them, don’t have an academic background and don’t use computers in the way we use them. And I do want to add, it’s interesting you ask the question about structure, because I can’t function without structure.

Dene Grigar: My whole life is structured, right? But when I use my icons on my desktop, we have different views. I keep it sorted by most recently modified, so I know what’s at the top. Then I can switch to alpha order or date begun. I mean, I go back and forth all the time to find things I need, but things sit on my desktop. It drives John crazy because it looks messy, but to me it’s where everything I need at the moment is, and I can find it like that. And it’s across all these devices that I’m using right now, at the same time. I use lists when I go inside of a folder. So I think it’s different, and it’s different for different people. And I’m not arguing with you; we’re having a debate, and it’s a spirited debate, and that’s a good thing. And yes, I’m pushing back. That’s my job. I’m an academic, right?

Frode Hegland: Yeah, that’s true. That’s true. Okay, Fabien, we’ve been waiting for you. And then Peter.

Fabien Benetou : Yes. So, something a lot less... well, a lot easier or simpler, let’s say. Well, first, a little point about the testing with users. I’ve never in my professional life felt not ashamed while doing a demo. When I’ve built something, I always think about what I’ve done and then the 50 things I want to do next. And every time I show something, I’m like, wow, it’s not ready for prime time. So I just want to say it’s a normal feeling. But it’s important then to be able to get the perspective of others who did not yet have the privilege to try this, and also the privilege of a fresh eye able to honestly criticize what’s been built. But again, every time I’m like, shit, I don’t want to show this. So I assume that’s, yeah, a normal feeling, especially if you’re on the edge; it might even not make sense, but I think it’s pretty humbling yet precious. Now for my own experience: those last days have been a bit more first-person playing and tinkering with the Vision Pro. It’s interesting because my expertise is in WebXR, and my own interest is in everything volumetric; I don’t want flat interfaces in space. Yet at the same time, I might argue here, in this specific context... although in terms of the big picture for the project, if I understood correctly, it should not only target the Vision Pro, so something that is interesting on just that device is not good enough, basically. But I found myself reading in the device and not wanting to puke or tear my eyes out.

Fabien Benetou : And that’s a new feeling, because so far, reading on such headsets has never been good enough, basically. So that’s actually new. And also, now I have multiple windows, not just tabs of the browser, but multiple windows from the browser, and that’s also new. Because, for example, on the Quest 3 I could have windows in the browser, but they would be aligned on a kind of semi-sphere, and I think I could have only a couple of them. I would argue they were not spatial yet; they were basically on a line. Whereas here on the Vision Pro you can take a window from the browser and move it basically wherever you want, and that’s new. So that means a 2D interface to a document, in a resolution high enough to be pleasantly read, or code to be modified or whatnot, works today. And I would argue that that wasn’t the case a couple of weeks or months ago. All this to say, it might not be spatial in the sense of putting more than flat documents in front of you and moving them around, but it works today. And in that specific context of focusing on research documents, arranging them and sorting them, and not necessarily reading the full thing if one doesn’t want to, it’s now possible to do this. So maybe doing some prototypes that are not even in WebXR, but just on the 2D web, because of what the operating system of the Vision Pro allows, might still be interesting enough. There are, I think, just with this, some new use cases and usages to explore, in my opinion.
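(A minimal sketch of the fallback Fabien describes, assuming a WebXR-capable browser; `isSessionSupported` and the `'immersive-vr'` mode string are standard WebXR API, while the function and mode labels are illustrative, not project code:)

```typescript
// Sketch only: decide between an immersive WebXR scene and plain 2D
// browser windows, which visionOS can itself place anywhere in space.
// In a real project the WebXR types would come from @types/webxr;
// the `any` cast just keeps this snippet self-contained.
async function chooseMode(): Promise<'immersive' | 'flat'> {
  const xr = (navigator as any).xr; // absent entirely on non-XR browsers
  if (xr && await xr.isSessionSupported('immersive-vr')) {
    return 'immersive'; // render a full WebXR scene
  }
  return 'flat'; // fall back to ordinary, movable 2D document windows
}

chooseMode().then(mode => console.log(`running in ${mode} mode`));
```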

Frode Hegland: Yeah. Thank you. Peter.

Peter Wasilko: Okay. I dropped a couple of images in the sidebar. Thinking about whether I want to have an environment in VR: yes, I would. Partially because my real desk is a monstrosity, with piles of information feeds, books, and things, to the point that it becomes a distraction. And I’d love to have, say, Dillinger’s desk from Tron simulated as an immersive environment while I’m working, so I have a nice, clean, flat surface on which I could plop things, and the virtual surface would be a natural place to spatially bind them as I’m working in VR, as opposed to just having the full, flexible freedom of everything out at any angle, in any direction. So, creating that nice classic virtual desktop surface, kind of like a couple of videos that I posted last night, where they were using the desktop as well as things in the environment around them. And then the other thing that I would really like, if I’m creating my virtual environment to be doing academic work, would be a series of ambient displays. No, I don’t need the moon. No, I don’t need a lake with birds flying and ripples, unless the birds flying were actually mapped to something that I cared about for academic reasons in the real world. For instance, the number of birds flying by in the environment could be synthetically generated based upon Twitter feeds that I was following, so that I’d know that that activity of birds actually represents a corresponding conversation happening on Twitter.

Peter Wasilko: If I chose to focus on one, I could then bring it into range. I really, really love the Chat Circles transcript interface, with multiple parallel timelines, horizontal bars reflecting message length and frequency, and the ability to play with that dimensionally. Now imagine a bunch of those floating out as part of my environment. So I have my virtual desk, I have documents on my virtual desk, plus I see these sort of poles arrayed around me in the distance, and each pole would represent a timeline of a different source of information. A couple of them could be the Toro Group feeds, which could show new bibliographies being posted on a topic that I care about on those timelines. Another one could be email messages that are coming in, in different threads that I’ve been following. But the point is, they wouldn’t be realized with full detail, to the point that it becomes a distraction. So it wouldn’t be like having Apple Mail sitting in the background where I see all the posts with all the text.

Peter Wasilko: It would be a very simplified, reified view just indicating the activity sort of abstracted out. Almost. You could look at a derivative feed, even to show the rate at which activity in one is increasing relative to activity in another. So you could have some sort of abstract pulsing things in the background. It would be subtle, it would be a placid, peaceful environment, but the visual elements in the environment would sort of keep my attention. It would keep me grounded in what’s happening. Again, with all these multiple information feeds that I care about. Without becoming a distraction. When I’m focused on an individual document, then I’d be able to reach out and grab one of those and bring that into my primary focus context as I’m going and get back to what I was doing. So I like the idea of the environment with the clean, elegant, sparse, neat desk that I could be arranging things on in immersive thing as opposed to dealing with the clutter of the real world office, which is kind of unavoidable. Okay, so that’s where I’m thinking in terms of environments. I’ll sit down now and let you talk about it.

Dene Grigar: But can I say something? Peter, that was great. Thank you. One of the things that I was thinking about, too, is that when I’m working on my computers here, I’ve got Google giving me little updates, a meeting coming up in ten minutes; I’ve got Twitter telling me that there’s a new post that I’ve been mentioned in. I’ve got all these things happening, and I like that when I’m just functioning, right? But when I’m writing, when I’m doing something that requires me to think, I don’t want that, and I don’t want to have to turn everything off and then turn everything back on, right? So it’s a constant battle with staying organized. And so what I really want is an environment where, once I get in it, I can just think, or just think and function. Then, when I move back to my regular 2D web environment, I have all these alerts and things that tell me what I need to be doing: meeting reminders and stuff like that. So I think there are two ways to think about it. One is this busy, busy environment that we’re currently involved in. The other one is the one that we really want to function in, in terms of writing and thinking. So there is a dichotomy for some of us. But that’s not the conversation.

Fabien Benetou : To me, that’s also why I do some retreats, like physical retreats, with or without stuff. Sometimes I even get excited because I’ll be stuck on a plane, because I know I’ll not be able to get a connection, and then all those distractions are gone. Because usually, if it’s something that’s actually challenging, I feel uncomfortable. Like, if I need to write text or code or whatever that is not easy, I’m going to postpone it just a bit; I’m in a kind of tricky place between excitement and being uncomfortable. And if I have any excuse, like, oh, a notification, and it’s kind of work-related and kind of important, then I go away, but it’s not going to actually help me. So yeah, that’s also why, more or less at this time last year, a bit later, I was taking a bunch of my gadgets and connecting them together, but still offline. Like, I want to have all my tools, but not the distractions that come from other information that’s not related to the task at hand.

Speaker4: Okay, Mark.

Mark Anderson: So one thing I was thinking, listening to this, is that what we’re alluding to is getting into the state some people like to call flow, where you’re sufficiently declutched from the scenario; in other words, most of your concentration is going into the job at hand. And that’s, I suppose, what I was alluding to when I used the word noise earlier. It wasn’t a sort of casual pejorative. In my early youth, I worked in some really high-intensity information environments, so, in a sense, I’ve encountered worse. And trying to maintain flow and an awareness of where you are in things can be quite hard. One of the things I learned back then was to get rid of anything that wasn’t helping. So putting something into an environment that didn’t need to be there is my sense of what I mean by noise. I’m not saying it’s something that’s in a sensory way unpleasant, or that it’s not to my taste; it’s nothing to do with that. It’s simply: does it need to be there? Is it helping the task at hand? Because, and you rightly brought this up in the opening, okay, it turns out the tools aren’t quite where we thought they were yet, and the kit’s got a way to go, and all that sort of thing. But I just go back to the most interesting things I’ve seen from what we’ve been doing so far: the things we’re beginning to do that we couldn’t do before. And the thing that stands out from that is our ability to deconstruct what in any other form of our work would be physical things that we just can’t take apart, or digital things that we cannot easily dismantle.

Mark Anderson: Just in the way that our first examples, looking at the reading experience of an academic document, have led us into asking, okay, what is the inner structure of the document? What is the inner hypertext of this? What are all the bits, and what do they do? And what can we do with that? Do I actually have to see them in a bounded rectangle, or a circle for that matter? Do they have to be like that? Can I actually pull them apart? Because what am I trying to do? So for instance, well, we’ve been looking at references in the last design session. If I’m looking at the references, do I need to care about the text? I know it’s there. I know it can be summoned when I need it, but do I absolutely need it there? No; it’s noise, whether it’s off at the side or whether it’s completely hidden. But these are the things to explore. And this is really quite different from the way we’ve interacted with things before. And that, to me, is the excitement of this new space.

Dene Grigar: Here we get back to what you were really talking about at the very beginning, which I think has a lot of interesting, fruitful discussion in it, and that is interactions in VR and XR. I mean, that is definitely a problem right now. Like, what can you do? And I think that’s something we can address. It’s a fruitful area.

Speaker4: Frode. That’s you.

Frode Hegland: I mean, the point of what I said in the beginning here was that we need to look bigger and step back a little bit. That was really my only point. Because, also, a little bit for you, Brandel, since you went there: I’m really stressed about my PhD, because my examiners don’t accept that there’s any value in visual-meta whatsoever. It’s been so many years, you know, arguing the corrections. I don’t actually think they understand what it is. And this is not pejorative; these are very intelligent people, obviously. But we all have different perspectives on things, and they come back with corrections stating things that are already answered. So in terms of knowledge flow, I’m having a bit of a difficult time. So what I did, and I sent it to some of you, is I ran my thesis through Claude. I first asked it in a positive way, is this the most brilliant thesis? Just joking, it wasn’t that positive. But, you know, is this fine? And Claude came back and said yes. I didn’t ask about the whole thesis; I asked about the research questions, because I wanted to see the interaction, and it was very supportive, but I did heavily hint that it was good. So then later on, which I may not have shared with everyone, I asked, what are the problems with this thesis? And it came back with some interesting problems. So all of that was interesting. But what I’ve done that I haven’t shared with anyone is I asked it today if it could take Dave Millard and Mark Anderson’s paper from last year’s hypertext conference proceedings, extract the references out of it, and format them in visual-meta formatting. And it did.

Frode Hegland: I didn’t teach it what visual-meta is, but in Claude you can upload five documents, so I also had a normal document with visual-meta at the end of it. So the thing that we’ve thought about for many years, that at some point in the future the AI will be able to extract metadata, is now. I’m not going to waste our time on how good the AI’s analysis of my research questions is; that’s beyond the scope of what I’m highlighting. But what I am saying is that extracting entities, extracting references (I’m sure it’ll do headings as well; I haven’t asked it yet) is a done deal. It takes a while, it’s a bit clunky, but it’s just a matter of time for these things to get faster. So that’s a positive and a negative. But when I go into town with my beautiful Vision, put it on with my paired keyboard and trackpad, and I sit down to work, there’s so little stuff I can work in. And of course, that’s a positive for us as a research community. Even Author, which is now submitted for review, should be available soon. It’s fine, but it’s not a full work environment, right? And that’s why I thought that we as a community, now that Andrew is working on something really good and has a lot of tweaking to do to just make that stuff work, really should be extremely pretentious and say: can we do a mother-of-all-demos type thing in September? How massive can we make it? And I’m not...

Dene Grigar: I don’t know. I’m just negative. I’m just saying I think there’s enough stress on us already, and there’s not enough time. Can we make a demo? Yes. Can it be like Engelbart’s mother of all demos? No, I do not think I want to do that. I don’t want that stress on me during the summer, and I don’t want that on Andrew.

Frode Hegland: Let me qualify a little bit, please. In Doug’s day there was nothing, so everything was created from scratch, which is a lot harder and also easier. In our world, there are lots of different components: infrastructures, APIs, systems, software, all of that stuff. So the notion that I really have is that we make it clearer, so you have one URL that you load when you’ve got your headset on, whatever the brand. And in there we don’t only show the Andrew one; we also have a better way to show some of the community work, maybe from Fabien, maybe from Adam. That’s what I mean. And not necessarily coherent; some of it could be completely random. If Adam has some weird text interaction thing, sure, we should provide access to that. But I’m wondering if we can do something slightly more coherent, so when someone experiences this...

Frode Hegland: ...there is more of a sense of a workflow. I mean, don’t forget, Doug didn’t have things like AI or speech recognition. There is an amazing wealth of technologies we have now that we can maybe use together. Peter, and then Andrew, please.

Peter Wasilko: Okay. It occurs to me that we could always fake it until we can make it. What if we just develop some semi-random generators of secondary information flows, and we focus on the visualization? So, basically, assume that there’s an API and that the magic substantive stuff to produce the information is happening in the back, and instead we generate mock data on those feeds and put that in the environment. Then we don’t have to worry about incorporating the Toro feeds and WordPress feeds and everything else. We simply assume that there’s an arbitrary set of APIs, and we describe a format for the data coming out of them, which could be messages of arbitrary length, and timestamps, and some...

Frode Hegland: Peter, can I just stop you there to say I agree with you: we can absolutely fake the data.

Peter Wasilko: Okay, good. Then we just think about how we could have some ambient displays making use of the fake data, and not get hung up on the actual implementation for demo purposes.
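(A minimal sketch in the spirit of Peter’s suggestion: assume an arbitrary feed API exists and fake its output. The field names and the generator are invented for illustration; exponential inter-arrival times just give plausibly bursty activity:)

```typescript
interface FeedMessage { feed: string; timestamp: number; length: number }

// Endless stream of fake messages for one named feed.
function* mockFeed(feed: string, meanGapMs = 30_000): Generator<FeedMessage> {
  let t = Date.now();
  for (;;) {
    t += -Math.log(1 - Math.random()) * meanGapMs; // exponential gap
    yield { feed, timestamp: t, length: 20 + Math.floor(Math.random() * 500) };
  }
}

// Usage: pull a handful of fake messages for an ambient display.
const gen = mockFeed('bibliography-feed'); // hypothetical feed name
const sample = Array.from({ length: 5 }, () => gen.next().value);
console.log(sample);
```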

Frode Hegland: Yeah, that’s a fair perspective. Andrew.

Andrew Thompson: So this is not in response to Peter, because that’s a whole different beast to tackle. But on the whole demoing thing, if you want to have multiple different demos available, there’s nothing strange about just having multiple tabs open in the headset browser. I don’t see anything wrong with that. I don’t know why we have to make a custom launch page for every single demo.

Frode Hegland: No. What I mean is you have this dot, you know, to launch in. I mean, Fabien has these amazing environments where you can click on a link, and so that’s what I’m talking about. You’re just in VR; you don’t need to go in and out through a browser. You have like doorways, or an office, it really doesn’t matter. It could even be a list where you just click on it, and then you’re in another XR environment, rather than kind of coming out. Right.

Andrew Thompson: So there’s a big problem with that, right? If you want us all to be inside of WebXR, so you’re only on one browser page and you never change pages, that means that all of these different tests have to be built on the same base code, which they are not. They’re all tests; they’re all programmed very differently. They all use WebXR, but the way that they run is all custom, because none of us have, like, a syntax that we’ve agreed on; we’ve all just been doing our own thing. So if you want all of them to run in the same browser without ever going to a different page, we can’t use any of the current tests; we’d have to remake them. Which, you know, is a discussion worth having. We can do that. It’s just we’ve got to decide if that’s worth the effort just to have them not have to load in and out. They’d keep the headset on the whole time either way.
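(A sketch of the cheapest version of the single-URL idea under Andrew’s constraint: each demo stays a separate WebXR page, and one plain launcher page links out to them. The titles and URLs are placeholders, not the project’s real addresses; navigating still ends the current immersive session, as Andrew notes:)

```typescript
interface Demo { title: string; url: string; community?: boolean }

// Placeholder entries; each URL would be an independent WebXR test page.
const demos: Demo[] = [
  { title: 'Reading prototype', url: '/demos/reading/' },
  { title: 'Volumetric sketch', url: '/demos/volumetric/', community: true },
];

// Build a simple list of links, labelling community efforts as such.
const list = document.createElement('ul');
for (const d of demos) {
  const a = document.createElement('a');
  a.href = d.url;
  a.textContent = d.community ? `${d.title} (community effort)` : d.title;
  const li = document.createElement('li');
  li.append(a);
  list.append(li);
}
document.body.append(list);
```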

Frode Hegland: Yeah. I mean, this point that I’m making about being in an environment and clicking on a link is not a hill I’m willing to die on, or even get an injury on. It’s not that important. What I’m more trying to advocate for is... So, I have an okay amount of time that I can set aside to dream about these things: literally do nothing, go for a walk. And because it is related to other work that I do, we all think about this a lot. But because I’ve developed software in this field, I naturally compare. And the thing that seems to really be necessary to speed up academic workflow is how you handle connections, right?

Frode Hegland: And one of them is the simple thing of coming across a citation in a document. Currently it’s rubbish. Mark Anderson, as you all know, suggested that in Reader you can click on one and get more information. And we’ve taken that further, so that if you have the document, you can actually open it immediately. That is really, really valuable when doing research: to be able to instantly get stuff. And it also addresses the speed issue; all the AIs now are really, really slow. So I was just hoping we could get more of a coherent kind of workflow. I’m waffling now a little bit. Mark.
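(A minimal sketch of the click-a-citation idea, under stated assumptions: a local library indexed by DOI, and a lookup that either opens the stored copy or falls back to showing metadata. None of these names come from Reader or Author; they are illustrative only:)

```typescript
interface LibraryEntry { doi: string; title: string; path: string }

const library: LibraryEntry[] = []; // assume this is loaded from the user's documents

// If the cited paper is already local, open it instantly; otherwise the UI
// could fall back to displaying the citation's metadata or a web lookup.
function resolveCitation(doi: string): LibraryEntry | undefined {
  return library.find(e => e.doi === doi);
}

const hit = resolveCitation('10.1145/0000000'); // placeholder DOI
console.log(hit ? `open ${hit.path}` : 'not in library; show metadata instead');
```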

Dene Grigar: Mark I think you’re muted.

Speaker4: My love, I do apologize.

Mark Anderson: Right. So just quickly, about references: I think we’re missing the bigger point. The thing that’s really bad about them is that no one checks them, and you can’t easily check them yourself; that’s unseen time, something academics spend a lot of. If we could make that better, just make the check magically happen somewhere... but it only works if what lies underneath is correct. Which is a nice segue back to what I was going to say: one of the challenges here is, well, we all know what excitement looks like when we explore, what it feels like when we experience it. But engineering excitement is kind of difficult, especially if we need to make something that actually works. And I’ve spent 25 years making up information for things, and I haven’t got any better at it. So whilst I’m absolutely down with the idea of the fake-it-to-make-it bit, it only gets you so far, because it’s really, really hard to fake that last part of the experience; made-up data is invariably brittle. And if you want to take real data, then you’ve got to do a ton of cleaning, which nobody wants to do, and which is actually one thing we’re not building tools for. One of the things we’re doing, and it’s been a really interesting experiment, is I think we’re beginning to dig into that. There’s too much fluff at the moment around AI and what you get in and out.

Mark Anderson: So if the AI can’t find references in a document, well, we shouldn’t be spending money on it at all. That’s about the easiest thing you can give it to do, because it’s highly structured, certainly if you take a good example. And it doesn’t mean that that’s in any way a trivial amount of work; it’s very impressive that it can be done. But it doesn’t tell you why something was there. The real work that you’re doing as an academic is building new knowledge; you’re trying to do the synthesis and understanding. And so, when we look at the reading and the work in XR, just making things look nice isn’t going to help us a lot. It’s basically allowing us to offload the bits that we don’t need to keep right at the front of consciousness, so we can interact with them easily. And this is where the deconstruction element comes into play. But I don’t think there’s any particular wow factor in how we present references per se. The real problem there is not the presentation layer; it’s actually the endemic lack of quality across the informational network. And the only thing we can do there is find ways for good to push out bad. But if we concentrate on just what things look like... I always worry when someone says, I can press a link and get this. Well, yeah, but if you don’t look at what arrives and you don’t validate it, then you’re not doing the job.

Frode Hegland: Yeah. Okay. So we are making the assumption that we do have good metadata for this project. I mean, one of the three things for this is visual-meta, right? And when you write a document and export it with visual-meta, who you are, the title, all of that is literally burnt into the page. So we’re not talking about the olden days; we’re not talking about what people may have done before. This is firsthand data. Now, when I did the test to extract references from your paper and put them in visual-meta, it did a very good job. I can imagine relatively easily that we could have a prompt whereby it analyzes a paper and goes to check that the cited papers actually exist, that some of these basic things are done. That could be an absolutely worthwhile part of what we’re doing. But the basic thing here is: you copy from something, you paste into something, you export, and that is a flawless citation. So we’re not trying to change the whole world, but we are trying to make a more solid workflow.
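(For readers unfamiliar with the format: visual-meta is BibTeX-style metadata printed as plain text on the document’s final pages. The shape below is an illustration rather than the normative spec at visual-meta.info; the field values are invented:)

```bibtex
@{visual-meta-start}
@article{hegland2024example,
  author = {Frode Hegland},
  title = {Example Document Title},
  year = {2024},
}
@{visual-meta-end}
```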

Mark Anderson: Yes. I mean, it’s great to have a flawless citation, but it’s very small beer. The sad fact is it’s something that’s really badly done, but it’s so low down the stack. It’s there because, when you get to the point where you need to know why something was said or where an idea came from, you need to dereference it. Up until that point, it actually has little bearing. Yeah.

Speaker4: But Mark.

Mark Anderson: So we’re just giving a false level of appearance to that. And the other thing to bear in mind with visual-meta: essentially what we’re doing is printing a sidecar of information into the back of a document, which is fine, but it’s just data. It doesn’t do anything of itself.

Frode Hegland: Okay. What should it do in and of itself?

Mark Anderson: No, it doesn’t. I mean...

Frode Hegland: What should it do in and of itself?

Mark Anderson: Well, all I’m saying is we keep bringing up visual-meta, but it doesn’t actually change the price of fish in terms of what we’re trying to do and the exploratory...

Frode Hegland: Mark, if you write a document, I read it and I cite you, and then Brandel reads what I wrote and comes across the citation I made of your document, it flows through 100%. That’s what I’m talking about. I’m not talking about the past. If we natively start this, like with our own book and so on, there is no chance of an error creeping into the document. And when Brandel reads my paper, it can automatically check if he already has that document, or where it is. These are tractable things. What we’re saying with the Sloan Foundation grant is that if people have good workflows for metadata, and visual-meta is the one that I suggest, but of course I’m open to other approaches, things will get better, right? I’m not talking about stuff that’s already done. Dene, please.

Dene Grigar: There are a lot of things to untangle here, so I want to start with the AI, and just mention that I firmly believe we should be teaching it and using it. Absolutely. I’m not against AI, and I’m hoping some of the issues that academics are concerned about get addressed. Like, it’s not peer review; it doesn’t substitute for peer review. It’s just a way to get started, a way to think about things. But you don’t build on it, right? There’s no foundation there. We don’t know who in the hell it’s pulling from, so there are a lot of credibility issues. It could be pulling from Donald Trump for all I know, and that would be a nightmare. So where is it getting this information? A lot of it’s from Wikipedia, from what I understand. Okay. So that’s the first thing. So when we run something through AI, as I do, the important thing is what questions we ask and what kind of knowledge we want to get out of it. It’s not going to tell me how to do what it is I want to do; it’s just going to provide some information that might take me in a different direction. Now, going back to the notion of what we’re talking about in terms of metadata and visual-meta: I’ve not read your thesis, and I have no idea how you presented it. You’ve shown me research questions that you’ve been running through AI, and I find that interesting. And I guess the way I would imagine working with people in an area that they’re not necessarily familiar with... I will say that my own dissertation was something my professors had no interest in at all.

Dene Grigar: Right. So how do I hook them into what I’m doing and get them to see my point without, number one, getting them mad at me, or, number two, getting them to drop me as a student? I mean, there are a lot of things that are fraught with academics. And I was using technology as my methodology, when they were barely using computers. I was using MOOs and MUDs and Gopher; I was doing this research at a time when the internet was just starting to really permeate through the university, and I had to show them the reality of what was happening. That’s number one. Number two, my thesis itself upset them, and still upsets one of them, my dissertation chair. So what you have to do is seduce them into your topic. You can’t say, visual-meta exists, here it is. You walk them through the issues of metadata, and you lead them through to the very end, where you hook them and they see that what you’re talking about is valid. They know what metadata is; they’re familiar with it. And Claus especially is using metadata for his Mother project, and he’s doing exactly what you’re talking about. I mean, I’ve seen demos of Mother. So what he’s doing requires metadata, visual-meta.

Frode Hegland: Well, I mean, thank you for going into that aspect. But the question of what metadata is is also contentious in this, because at least one of the examiners has said that if the metadata is on the same level as the data, it's no longer metadata, it's just data. To which my answer is: who cares? It's data about the data. But it's a big thing. But, you know, that's my personal little stress. What I'm talking about here, in the bigger picture, is.

Speaker4: It was. You know, we.

Frode Hegland: Got to find a way to link some of these together, right? We've got to do something bigger. And the reason I talked about environments for a little while, in terms of visual environments and VR, is that I feel the same way: when I need to concentrate, you know, I go full screen on my laptop and I don't want anything in my room. So there are different modes and different times for how we work with our information, no question about that. But, oh yeah, we forgot to mention: just to tell Hussein, hi Hussein, we started an hour ago; we're on America time now. I sent an email, but I forgot to mention it everywhere. I apologize. For once, America is ahead of the rest of the world. Haha. Anyway, so I guess what we're talking about is that, well, Mondays are also going to be a little bit about the book and the symposium.

Speaker4: We got to.

Frode Hegland: Do a few more resets, I think, and go a bit bigger. Yeah, I see your comment here. Mark, I mean.

Mark Anderson: Now, I hear you, because I can see that at some deep engineering level there might be a point, but broadly that distinction is used incorrectly in virtually every place I meet it. It's normally used to score some ridiculous point that doesn't move things forward. So it's all just, you know, data about the data; when this bit of data becomes something else's metadata is neither here nor there. I mean, the interesting observation out of that is actually how you abstract upwards with metadata, because you can't actually search all of it at once. Well, people like to say they can, but it turns out that there's always a degree of cheating somewhere behind the scenes, because people don't want to address that aspect, mainly because they don't want to do the abstraction, because the abstraction is hard. And we don't have an automatic tool for it yet. And this is one of the things, I think, that gets people grinding their teeth about AI, because that's the kind of thing we want it to be doing, and it can't, because it can't yet understand our language in the way that we mean it. But the point I was trying to make, and maybe I misspoke earlier, about what we can or can't do with visual meta: visual meta is just a wrap around whatever is inside it. What's important is actually the information in there. And that's the thing that, in all the projects I'm involved in, nobody wants to talk about, because everyone assumes somebody else is going to do it, because what they need is the endpoint. So visual meta is really useful.

Frode Hegland: I know, I know, I'm interrupting you, but Mark, the point about it being someone else's problem, and the data not being trustworthy, I've heard it quite a few times. The notion here is that it starts that way, right? That's why the example isn't necessarily citing something old. It is: I cite Dene, Brandel reads it, then he cites somebody, and so on. It starts with this method.

Speaker4: So. So therefore the.

Frode Hegland: Original document has to exist. Okay. That's what I'm saying. So, sorry, I agree with you. Just, you know.

Mark Anderson: Yeah, I'm not quite sure how that meshes, but that is also true. But anyway, yes, it's a perfectly fair point.

Frode Hegland: Okay, before Brandel gets the mic: Mark, we may have an important misunderstanding here. So let me just walk you through something, and you stop me where there is an issue. Somebody writes an original document in our community, right, using basic visual meta, exported with it. Someone else copies a citation to it into a new document and exports. A third person then reads that document and comes across the citation. That citation then will link to the original, right?

Mark Anderson: Yeah. But the first thing you do, before you reuse it, before you do anything with it, is you actually check that that works, because if you don't do that, the whole thing is broken. And that's the thing that often doesn't happen. And that's an area where perhaps some automation could help us. So it's not about truth, as in truth or lie. It's just: is this what it says it is? Things get broken along the way, and it's not anyone's intent. But this is part of the problem with everything just being made easier for us to do, because it's a link and it's all automatically handled: nowhere, if you open the box, is there anyone actually looking at it and checking if it's correct. And we're all guilty in this, because we all like to not waste our time doing jobs we don't like. And I'm not being trite about somebody else; I'm just trying to surface the notion that it's there in all of us. There are things that we can and will do, and things that we definitely don't want to do at all, because clearly they need to be done by somebody else. Now, sometimes that's really true, because it needs skills we don't have, but other times it's just make-work that has to be done. So one of the problems with the metadata is less how we package it up. Visual meta is a very nice way of doing that; it puts a useful sticking plaster onto some of the problems that exist in PDF. It's the data that's within it. Yeah.

Speaker4: Are you.

Frode Hegland: Talking about someone just copying a reference section and using it in a different paper?

Mark Anderson: Sorry.

Frode Hegland: I don't understand the problem. Are you talking about a student just going into the reference section, copying some references and sticking them in their paper?

Mark Anderson: Yeah, because that's what you're describing. So, you know, I think you said it goes to the other person, and the other person takes what was passed on as being right. And the first thing you do, or try and teach people to do, is to look at that and find it. You don't have to read it then, but you validate that it is what you think it is, and you at least resolve it to a level of your own trust. Because until you do that, you are complicit in the failure to maintain, effectively, the custody of the citation.

Speaker4: Exactly.

Frode Hegland: Which is the exact process that this supports and which current systems do not, because you can click on it, open it, and instantly verify what it is.

Speaker4: Okay.

Mark Anderson: Okay. So one of the things, then, that that supposes, and I'll choose my words carefully here because I know it's sensitive, and I'm not saying otherwise: that assumes that everything that arrives from our visual meta is complete and correct.

Speaker4: Yeah, this is.

Frode Hegland: Because if it is copied using visual meta.

Speaker4: If. No, no.

Mark Anderson: All right, then, let me know; it's important. The subtle distinction here is that visual meta will copy what it is given correctly and put it into something. That doesn't mean that what it is given to copy is correct. There is no oversight.

Speaker4: No, no, that’s the hope.

Frode Hegland: But Mark, that is the whole point. The point is that if you write a document with your name, title and all of that, exported as a PDF, and I open it and copy a bit of text, or just copy it to cite it, and paste it into my document, then it will be correct based on that original document, because there are no opportunities for errors to creep in.
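A minimal sketch of the copy-through flow being described here, assuming visual meta is a BibTeX-style block delimited by @{visual-meta-start} and @{visual-meta-end} markers at the end of a document's text; the marker handling and function names are illustrative, not a specification:

function extractVisualMeta(documentText: string): string | null {
  // Find the visual meta block appended to the document text, if any.
  const match = documentText.match(
    /@\{visual-meta-start\}([\s\S]*?)@\{visual-meta-end\}/
  );
  return match ? match[1].trim() : null;
}

function citeDocument(sourceText: string, targetMeta: string[]): string[] {
  // Copy the source's self-identifying metadata verbatim into the citing
  // document's own metadata block, so no field is ever retyped by hand.
  const meta = extractVisualMeta(sourceText);
  if (meta === null) {
    throw new Error("no visual meta: the citation would have to be typed in");
  }
  return [...targetMeta, meta]; // byte-for-byte copy: nothing to mistype
}

Because the entry travels as a verbatim copy, the third reader's software can resolve the citation back to the original without any retyping step in between.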

Speaker4: Right? Right. But that's not a guarantee of the format.

Brandel Zachernuk: That's a guarantee of the input modality and of making use of your tools. So in the context of a trustless internet, where people can act in bad faith, it's not a guarantee in the same way as having some kind of centralized thing, or some kind of Web3, distributed, on-chain system of validation. In a trustless environment it's not a guarantee; it's just that it is easy to make it so that it is true, not that it definitely will be. That's not what I wanted to say, but.

Speaker4: Oh, no.

Frode Hegland: Hang on. No, go on to what you wanted to say. But just briefly: imagine if this paper that we're talking about was downloaded from the ACM and it had all this visual meta stuff on it. Then you do have both of those attributes: you have the place it was downloaded from, and you have an attribution that is clear. And, and this is where I very much agree with Mark, we can build processes to help with the checking of these things on top of that. But yeah.

Speaker4: And that’s, that’s.

Brandel Zachernuk: That's the thing that I did want to talk about: that, per se, the presence of some visual metadata doesn't guarantee

Speaker4: It doesn’t guarantee.

Brandel Zachernuk: Correctness of the contents of that metadata. But what it does do is render a much more straightforward mechanism by which it could be possible that it is verifiably true. So the simple presence of metadata says: here's how you would check if it's true. And then there's that requisite check. One of the important things, I think, about metadata, and maybe specifically visual meta, and this sounds over the top for a moment, is the way it interrogates the closure over what a document constitutes. Which means that, you know, we're used to a paper existing at exactly the boundaries of what a paper is right now: references present like this, titles and citations exist like this, and data exists in an addendum or an appendix like that. And one of the things that is actually critical about this present moment is that, because

Speaker4: GPTs and other.

Brandel Zachernuk: Things have been trained on papers in the exact form they have, they're exceptionally good at faking those things, or rather predicting what a plausible one looks like. That's not a pejorative way of putting it; that is the objective, that's what they are doing: they're predicting a plausible-looking paper. But one of the benefits that something like visual meta has is that even if somebody gets up and hallucinates some visual meta, it contains inside it its own recipe for how you would tell if it's true. And that test will immediately fail in the context of a visual meta based one, because it'll have plausible-looking visual meta that says here are the references on the other side, and if they are not there, then you can simply go: well, no, it's not actual. Because, like, one of the things somebody said earlier was that somebody writes an original piece of writing, and I was going to dispute that, because nobody ever does; there's nothing new under the sun. And one of the things that's useful about our capacity to use space and to use dynamism is to be able to make

Brandel Zachernuk: Links, maybe not visible in Nelson's sort of particular way, but having the ability to have relevance to the ongoing display of these things that we have, so that we understand how to explicitly situate this writing amid everything else that actually lives. And so visual meta is, you know, a sort of folder for a relatively open domain of additional data, and a relatively automated or automatable way of checking and validating those things. And that is, in and of itself, essential for the present moment, because of hallucination, because of GPT. But then the next thing is that it's an invitation: because it doesn't, in and of itself, have a preferred mode of presentation, it is functionally a mandate for view specs of one kind or another, because you have a bunch of things you could do and there's no single canonical way to do it. It means that there's more data there than, you know... Like, it's pretty hard to rehydrate a PDF. You can try to make the headings bigger, or you can try to, you know, but, for the

Speaker4: All that it has.

Brandel Zachernuk: Is what's there. When you have more than what's there in terms of its visual presentation, then that's an immediate invitation to consider it. So yeah, it's on that basis that I would argue about the validity of the imperative nature of visual meta.

Speaker4: So that sounds.

Frode Hegland: Nice. Sorry, I was trying to export something for you guys and my computer went funny. Just one second. Hang on. Oh, here it runs. I was going to send you the document, folks. I was supposed to meet last week, but something happened; I'm meeting with the ACM tomorrow, and I am going to say two things. First: please, please use visual meta. And then I'm going to ask: in a dream world, how do you want this to work? As a completely separate question, as an interview, to understand from them how they would best want to deliver data and metadata. So that will be interesting, and it'll inform what we're doing.

Speaker4: No way.

Dene Grigar: To finish that: that's a great question, but let's finish it with, for what methods, what mediums, and what outputs? Like their proceedings, the means by which they're communicating. So I think just add that on.

Frode Hegland: Well, that's a very good point, Dene. Thank you. I was primarily talking about the proceedings, but yes, also in general; I will make that clear. And one of the things, and I'm glad that particularly Brandel, our Mr. 3D, is here, and also you, Fabien, of course: I went through and asked Claude about the benefits of visual meta. At first I got a really nice list, because I asked in a nice way, like I said at the very beginning of this call; later on it got a bit more practical. There seem to be two things. One is speed, right? If all the metadata is there, the AI doesn't have to do anything; you can go through documents really fast. That's important. The second thing is the kind of embeddings we've been talking about, guys, but that doesn't seem to be standardized anywhere. So you have a block of data in an appendix at the end of the document that says: on page four, so far down, there's a 3D model, or there's an interaction, or whatever. That's the kind of stuff we now truly have a chance to fix. And if in the future visual meta disappears and something else replaces it, I couldn't give a monkey's tail about it. It's just not important. But what I think we need to do for September is to present an environment where documents that can go under the radar and work anywhere now can suddenly have explosively useful interactions. And then we've got to do something that's different. So that's why, as we've talked about a few times, I really want to start embedding incredibly rich vector-type shit in our documents. And our own book can be a test bed.

Peter Wasilko: Okay, just a couple of nuances on the visual meta. I think we can consider its self-identifying information as being ground truth. And we can say with certitude that citations in it to other works are ground truth with respect to the person producing the visual meta having believed that those are accurate citations. But there's still the open question as to whether following those citation links will lead to documents that are indeed reflective of the understanding captured in the paper when it was signed, because you always have the chance that an author is going to cite a paper and get its underlying proposition dead wrong. And there's also the possibility of a typographical error leaking into a citation. The problem with hallucinations is really irritating. I had the chat system hallucinate a paper by Marc Bornstein that I knew did not exist, and it was very irksome; it looked very convincing, great description, citations and all. Now, we can head that off if we incorporate into visual meta an internal checksum that wouldn't be readily hallucinatable. So maybe the checksum could be made up of what characters appear at a certain number of offsets, then do a quick CRC-32 and combine that together into some sort of check. We can come up with a little algorithm so that a hallucinated checksum would almost certainly fail right out, and you could tell fake visual meta from real visual meta. Of course, if we're going to do that,

Peter Wasilko: We have to have a universally available tool to generate the checksum, so that people aren't trying to compute it themselves. So we could have a checksum service, and maybe a checksum npm library or something, that would give people a little command-line tool: vm-checksum, feed the document in, and it generates the checksum number for you. Beyond that, we do want to have a hook for correcting mistakes in visual meta in some sort of canonical form. We talked about that a year or so back: how we could provide some sort of update mechanism. And it would be really good if, by our demo, we have that hook fleshed out a little bit, so that people see there's a mechanism for correcting errors that creep into visual meta, in a way that can be tied back to the original author of the original document, so that we know it's a legitimate correction versus a third-party correction. Have the system be able to indicate that, you know, here's a correction that Frode made himself to his document, and here's a correction that was recommended by Mark Anderson based upon a typo that he detected in Frode's document. Then the mechanism should also allow Frode to indicate that he verified Mark Anderson's correction and is indeed validating it as the correct correction, so that someone using a high-level API would be able to get the final conformed version of the paper versus the original version.
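A minimal sketch of Peter's checksum idea, assuming the check is a CRC-32 over characters sampled at evenly spaced offsets in the document body; the sampling scheme and the vm-checksum output format are illustrative:

// Standard CRC-32 lookup table (reflected polynomial 0xEDB88320).
const CRC_TABLE = new Uint32Array(256).map((_, n) => {
  let c = n;
  for (let k = 0; k < 8; k++) c = c & 1 ? 0xedb88320 ^ (c >>> 1) : c >>> 1;
  return c >>> 0;
});

function crc32(input: string): number {
  let crc = 0xffffffff;
  for (const byte of new TextEncoder().encode(input)) {
    crc = CRC_TABLE[(crc ^ byte) & 0xff] ^ (crc >>> 8);
  }
  return (crc ^ 0xffffffff) >>> 0;
}

// Sample the body at evenly spaced offsets and checksum the sample together
// with the length; a hallucinated document will almost certainly disagree at
// some offset and fail the check immediately.
function vmChecksum(body: string, samples = 64): string {
  const step = Math.max(1, Math.floor(body.length / samples));
  let sampled = "";
  for (let i = 0; i < body.length; i += step) sampled += body[i];
  return crc32(sampled + body.length).toString(16).padStart(8, "0");
}

The same routine would sit behind the checksum service and the command-line tool, so nobody has to compute it by hand.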

Speaker4: This.

Frode Hegland: There's nothing wrong with having different levels, like linked data having different levels of implementation. Sorry, a question there: at what scope, Mark, do you mean? Oh, the library question. Okay, so what I mean by that is

Speaker4: If.

Frode Hegland: Library. By "library" now I just mean a list of documents, so it's probably the worst possible term; I do apologize. But what I mean is that, you know, you can see in the screenshot here the result I got from taking an ACM paper, it happened to be Mark and Dave's, and asking the AI to reformat it into visual meta. And it did it. Pardon my French: fucking great job, right? Both scary and brilliant. This is not analysis; this is just moving things about. So in terms of libraries: if, for instance, you, Mark, because you are a librarian in this field, amongst other things, publish a document that just says, here are documents that I consider to be important and legit, I should be able to import that document into my system, and any documents matching it that come up in references are, you know, cool, Mark-approved. And that should work in parallel with the reverse: any one of you finds a document that is either redacted, retracted, or that you think is absolutely ridiculous, and I agree with you, I should be able to import that into my system too, and if it comes up in the references of a new document, it just says, what the hell is this?

Speaker4: Right.

Frode Hegland: It's a human thing. So if we can have humans that we choose to agree with color things, I think that would be really useful. And I can see your face frowning there in thought, because obviously this, I think, leans a lot into the work that you're doing: how you connect these things.
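A minimal sketch of this human trust-list idea, assuming a curator's list is simply a document mapping identifiers such as DOIs to a verdict; all names here are illustrative:

type Verdict = "endorsed" | "retracted" | "disputed";

interface TrustList {
  curator: string;                // e.g. a librarian you choose to follow
  verdicts: Map<string, Verdict>; // identifier (say, a DOI) -> verdict
}

// Colour a reference according to the trust lists the reader has imported:
// "endorsed by someone I follow" or "no verdict" at a glance.
function annotateReference(doi: string, lists: TrustList[]): string {
  for (const list of lists) {
    const verdict = list.verdicts.get(doi);
    if (verdict) return `${doi}: ${verdict} by ${list.curator}`;
  }
  return `${doi}: no verdict from anyone you follow`;
}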

Speaker4: I mean.

Frode Hegland: To put it to you bluntly, right? Okay, so Mark and I had a ferocious argument the other day, as we often do, on the notion of where to go beyond PDF, and he very intelligently looked in further directions than just a new substrate, while I was more on the substrate. Right? At some point, you know, in the next whatever period of time, PDF will disappear. Of course it will, right? Nothing lasts forever. But if we can just now, in September, have different demos, different things, and at the end of the documents, through AI or whatever, it just writes what is in the document or how it connects: how useful could that be?

Mark Anderson: An interesting thing from your screenshot is that what's been lost in the extraction is all the segmentation of names, because one thing BibTeX does do usefully, well, there are two ways you can write names, and if you use the sensible one, it actually handles prefixes and suffixes, because human names, like dates and everything else, turn out to be much more complicated than we would know or like. In the ideal world everyone would have one first name, a middle initial and a last name, but we don't. Some people have very long names, some people transpose things, and so there's just endless complexity. I mean, ideally, if we'd started out maybe 100 years back and just said, oh, stuff it, let's just have a UUID for any paper that's made, we wouldn't have this problem now. But we don't, and we have information scattered across a lot of independent commercial fiefdoms, and this is part of the problem: no one wants to blink first. And so you have this awful thing where BibTeX is sort of the only thing that's common. I mean, CES is older, so there's less use for that, and it's not maintained. BibTeX dates from pre-web, so it doesn't really understand the web and all that sort of stuff; it doesn't really think digital, so that's also bodged on the end. There are two implementations of BibTeX in the TeX world, which is where it's mainly used, which, suffice to say, don't agree, and no one works on them. So there are all these things to be worked through, and in the middle of that it's really difficult to force errors out. For instance, in the case of the ACM, there is no one at the ACM whose job it is to accept and verify even well-intentioned corrections sent in. Just stop to consider that.

Speaker4: That’s why we have you. Well.

Mark Anderson: But yeah, the scary part of that is: even if you don't have a correction mechanism, we can do our level best to force the bad out. And, to wit, one of the things I've done this year is produce a really clean set of 34 years of hypertext conference metadata, which is a tiny drop, a tiny, tiny drop in the ocean of stuff, so that at least anyone who wants it can get something without typos in the names and things like that, which really shouldn't be around in this day and age. And that was mainly because I got a sort of reviewer-two comment that said, well, why are you bothering with this, it's all on dblp. And of course the mistakes are in dblp too, because no one checks that either, because it's all just washed around, because everyone wants somebody else to do the work of checking stuff.

Frode Hegland: But even in the paper olden days, it would be... No. Leon, what are you doing? Will you join us on Wednesday? Okay. Just think big magical thoughts. See you later. Hope to see you on Wednesday or Monday.

Frode Hegland: I think that, to a large degree, and this is a real question for the community, we will expect that what we're building will work on reliable data. Right? And the methods through which we will be told something is not reliable will be human. It will be, quite literally, finding someone like Mark, someone in our field we trust, and if they have a document that says this is rubbish, that can then be used. Having an authority do it, I don't think, is a realistic situation, even though it would be nice. Brandel as well.

Speaker4: I

Brandel Zachernuk: I don't know about that. I think that "authoritative" and all of those other things are kind of objective stances that resist deeper scrutiny, and what we should be thinking about is processes and actions that people have to take based on intention. Yes. And so having a system that gives you clearer steps, or takes some of those steps, based on what you want to do and what you want to know: how would you tell? That's the thing I think is the most interesting thing about visual meta for me. Is this true? How would you tell? Here are some of the steps. Because otherwise you kind of need to start on that internet sleuthing yourself. Like, there was the State of the Union address Joe Biden made, and then the Republican response, where last week Katie Britt lied about an instance of some horrendous human rights abuse, where it was and when it happened, and it took just some guy to pick up the thread. And then it became relatively straightforward and clear and almost impossible to dispute that what he was saying about what she was saying was true, and that what she said was false. But the thing is that there was no opener for that.

Brandel Zachernuk: You know, it was this guy who was annoyed at it, and so he started to do some sleuthing. What I think visual meta is, is an invitation to say: here is the beginning of a thread. And I don't think it needs to be more than that. And not everybody needs to follow all of those threads all of the time; it's to do with intention and action over a specific set of objects and interests within a paper, because you don't need to follow all of those things. You know, that's to Mark's point about visual noise. Ted Nelson-style visual links rendered persistently are noise 95% of the time, and only signal for that brief moment when you really do care about that particular link. So, conceptually, that is absolutely valid, but esthetically and sensorially it's a massive, massive distraction when what you're doing is looking for the links. And so my view of visual meta is not that it be objective or that things be guaranteed, but that it readily helps you answer the questions: how would you tell, and what do you want to do with this? That's the level I would pitch it at, and the sort of central premise of its value.

Frode Hegland: Thank you. Hang on, I'm going through my notes here. I wrote down a quote from you there. Not that I'm going to quote you, but it inspired me: this is an invitation and an enabler of interaction. It is an invitation, absolutely, and that's the key. Now, I'm grateful for you guys taking the time to argue the whole visual meta thing; I'm very grateful. I could imagine an early version of our book this year: a PDF with visual meta. Fabien takes it into his environment and renders it as he sees fit.

Speaker4: Right.

Frode Hegland: It can destroy the whole PDF, I don't care: extract the data, use it, or use the metadata. Brandel maybe does a reader, Adam does a crazy thing. That's the whole point, right? To really try new ways of doing this; I don't think we can do it with the current things we have now. And I gave you all, in the chat here, the AI and visual meta discussion, which is really fascinating; compared to other means, it does very well. But what is the feeling in the room about the notion that when we do this demo, it's the same basic documents, but they can do crazy new things? And hopefully it'll either be adopted or, even better, people will say: this is absolutely stupid the way you guys did it, here's a better way. That would be a win too, right?

Speaker4: But Mark. Yeah.

Mark Anderson: I was just looking at, well, one of the things, sort of circling back towards the experiment we're doing: I'm not arguing against the notion of assuming that some of the stuff we're using is true and correct. I think a really useful part of the demo that we're doing, the reading thing actually, is to unpick part of the story. So, okay, you've been passed a document that has lots of supporting information. What do we know about it? If it has a link, does something exist at the end of that link? If so, color code it. Show me something; give me an indication. What's a book that's been cited with no page number in it? Am I really expected to go read 690 pages of something in a language I don't understand? In other words, this is a low-quality link, and if you just use it and pass it on, you might want to check before you do, or else you're just punting the work to somebody else, or maybe not use it at all. And that's what you're having to do, basically, in your mind's eye when you're reading a document, especially if you think you might be reusing something.

Mark Anderson: So that's something demonstrably we could do, sort of trying to think of practical things we can do in our experiments. In other words, once we've received this information and got it into our paper, perhaps there are some things that we can test. Certainly, for anything that's got a DOI, you can actually check that there's something at the end of the DOI. Now, I know there are ways in which that can fail, but it's something you can demonstrably do, okay, if you're online, and that's actually quite useful. Because to a certain extent, if you know that 75% of the things in the document at least have a DOI, that's reduced somewhat the load: if you need to go check through those references, just to have some understanding of what they are, to do this quick check, well, okay, that means that 75% of them aren't some obscure book that I may have real difficulty tracking down. So that kind of thing is actually really, really useful. It seems small beer, but it's something that we can do: we can build, we can hint with color or objects or whatever in the XR environment. And that's something that demonstrably will be harder to do in the 2D space, because there it's all got to be rendered on top of something.

Mark Anderson: It's all got to be pre-flattened. Whereas the thing that I find most interesting about the XR space is not so much that it's 3D as that it's sort of two-plus-D. So I can have things that are related to the document but, in other words, I can see that this icon is actually not literally rendered into the thing behind it; it's an affordance on top of it. And that's where I think the XR space, for this kind of work, potentially really shines. And in that sense, actually, having information that's not particularly good is really interesting, because one of the things the space can help us with is winnowing the wheat from the chaff. Because as an academic, if you've got to engage with that document, you actually have got to do it, unless you think it's of no relevance and it's something you put to one side. But if it's something that you really need to consume, because it's in the center of your field or something, you have got to engage with it, because you don't know, until you know it's not relevant or wrong, that it isn't something you need to do. So there's, I think, something positive we can take from this.
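A minimal sketch of the DOI check Mark describes, using the public doi.org resolver; a HEAD request is usually enough to see whether anything exists at the end of the link, though some publishers reject HEAD, so a GET fallback may be needed:

async function doiResolves(doi: string): Promise<boolean | null> {
  try {
    const res = await fetch(`https://doi.org/${encodeURI(doi)}`, {
      method: "HEAD",
      redirect: "follow",
    });
    return res.ok;
  } catch {
    return null; // offline or blocked: cannot say either way
  }
}

// Report what fraction of a reference list can be machine-checked at all,
// e.g. "75% of these have a DOI that resolves" before any manual chasing.
async function referenceCoverage(dois: string[]): Promise<number> {
  const results = await Promise.all(dois.map(doiResolves));
  const good = results.filter((r) => r === true).length;
  return dois.length ? good / dois.length : 0;
}

The result of each check is exactly the kind of signal that could be hinted with colour or objects in the XR view.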

Frode Hegland: Yeah. Mark, thank you. And, Brandel, just one second here. Peter, your notion of separate temporary appendices for visual meta is very powerful. And, Fabien, so think about this, Fabien: imagine a PDF. Yes, PDF or Word or RDF, it doesn't matter. But a document that has no content, only metadata, right? It happens to be wrapped in visual meta, but that's not really the point. The point is that we get two additional cool things. One of them is that you can have a document that is only a reference section, as we kind of discussed before. So if you put that into your environment, it opens up very much like Andrew's 100 references right now. And imagine also if we have the spatial data we've talked about before: forget the rest of the document, this is no longer an academic PDF, it is just Fabien's room dot PDF. But would you find it useful, with the kind of work you are doing, to be able to import such a thing from a user? Leading question.

Fabien Benetou : I'm not sure. I mean, technically it's feasible, but to me a document with only metadata is like a pointer, something that I would basically have to unpack to get the actual data. So it would be an intermediary step; I don't think it could stop there, basically.

Frode Hegland: So, Peter put up a few links earlier here, and he's also, as we all know, pointed in Slack to some of the Bret Victor real-world things, right? So imagine that you have built something, Fabien, like a timeline, which is an easy example, right? If the timeline data was stored in such a way, or if it was part of a document and you needed to use it. What I'm thinking about is, we've talked about the headsets, all of them, being thinking caps, right? So imagine you're reading the document, and Fabien's environment is the best for, let's say, timelines. So you open it in your view and it ignores everything except the thing that your software is best at. Does that make more sense?

Fabien Benetou : Yes, I think so. So I select whatever is useful within my environment.

Frode Hegland: Yeah. So that like.

Speaker4: You know.

Frode Hegland: If we try to make something really, really rich, we shouldn't be ego people and expect one piece of software to do all the interactions, obviously. So we may even be more fine-grained and say, for example, that Andrew's system right now only looks at references; it doesn't even know what the rest of the paper is. For the sake of argument, maybe you open the same document in Fabien's view and it does a world map of it, because that's what it's focused on. So we choose the software as a tool for different, very different views of the document. Does that make sense?

Speaker4: He looks so.

Frode Hegland: Enthusiastic. I’ll hand it over to Brandel and we’ll keep this going.
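A minimal sketch of the environment-dispatch idea just discussed, assuming each environment declares which kinds of blocks in a document it knows how to render and simply ignores the rest; all names are illustrative:

interface Viewer {
  name: string;
  handles: (blockType: string) => boolean; // e.g. "references", "timeline"
  render: (payload: string) => void;
}

// Hand each block of a parsed document only to a viewer that wants it; a
// timeline environment renders the timeline data and skips everything else.
function openIn(
  viewer: Viewer,
  blocks: { type: string; payload: string }[]
): void {
  for (const block of blocks) {
    if (viewer.handles(block.type)) viewer.render(block.payload);
  }
}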

Brandel Zachernuk: I wanted to calibrate in terms of how high the sights have to be, how high one's sights have to be, in order to provide value to the general population, and, I would argue, even to the academic population. As a reminder: there's a capability in all modern web browsers, Firefox, Chrome, Safari, called find-on-page, obtained by Control-F or Command-F, and yet, even on a desktop computer or a laptop, 90% of people do not know it exists. If you show it to them, they'll be like, what is that? And I would suspect, I'm not sure, Dene, Mark, anybody else, if you're familiar, whether that's better among the academic population, but I would assume probably not as much better as you would hope, given the necessity for actual research. And so one of the things that is really important is just bringing those basic tools, those capacities, more ready to hand. And so having the ability to recognize that there is another way to look at something, or that there's a view, a little bit through the looking glass, into the rest of the constellation of documents and information and knowledge that the case currently under investigation represents: that is actually a relatively easy thing to do better than currently exists.

Brandel Zachernuk: People don't know that a link has the ability to be followed without losing context, you know? And it's a challenge, because something we've done at Apple a lot is actually remove what we consider to be extraneous links from a web page, because we know that people get distracted and they don't come back; so if you have a job that you want them to do, then you want to make sure that they can drive toward it. So there are tensions there. But what it means is that, actually, the sights needn't be especially high in terms of how much needs to be done, and in terms of what kind of value people can get from getting them to do just that little bit more: to be aware of that little bit more of the richness of the environment in which a piece of information, a piece of writing, happens to live.

Frode Hegland: Absolutely, Brandel. This is the discussion we've been having on Wednesdays; you've been part of most of them. But even the idea that currently, to move a document, up and down is this, sideways is this: it does require an education of the user. And also, pointing to select is moving these three fingers, which is not what you'd think, but, as you know very well, that's what experiment shows works. So, yeah, it does look a bit devilish; there will definitely be a level of education. We just have to make the education seem worth the effort, right? Remember, Steve Jobs called the Mac the bicycle for the mind, and a bicycle does take a bit of training to use. That's why I think a little bit of theater, a little bit of flair when people enter this, even some of the original non-interactive stuff, to give people a feeling that, wow, here we can do something, may be nice; but we certainly do not want to go for flimflam over content. I put in our chat here a control-click menu, which I think is magic. A lot of Windows people do it, left click or right click or whatever they call it. Often there's too much stuff in those menus, of course, but some people even go there to copy, so some things can be snuck in there. I think we have to go a bit further, though. Now, the thing about

Speaker4: Are.

Frode Hegland: The premise of what we're doing is openness, of augmentation, of course. This is why it's so important to be able to take information from one place to another, and this is why it's so exciting, I think. I mean, imagine the thing I just talked about with Fabien: a document is opened in there and stuff is done to it in that specialized environment. When it comes out, there really should be an option: augmentations were done in Fabien's environment, do you want to keep them? So that becomes a Peter Wasilko-style additional appendix. If you open it in another environment, it may or may not know how to parse that. But when you go back to Fabien.
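A minimal sketch of the round-trip appendix idea, assuming each environment appends its own named block that other tools can safely skip; the markers are illustrative, echoing the visual meta wrapper style:

interface Appendix {
  environment: string; // who wrote it, e.g. a timeline environment
  payload: string;     // whatever that environment needs to rehydrate later
}

// On export, offer to keep environment-specific augmentations as appendices.
function exportWithAppendices(body: string, appendices: Appendix[]): string {
  const blocks = appendices.map(
    (a) => `@{appendix-start ${a.environment}}\n${a.payload}\n@{appendix-end}`
  );
  return [body, ...blocks].join("\n\n");
}

// A tool that does not understand an appendix simply skips it; the one that
// wrote it finds its own block again and restores the augmentations.
// (Assumes simple environment names with no regex metacharacters.)
function findAppendix(text: string, environment: string): string | null {
  const re = new RegExp(
    `@\\{appendix-start ${environment}\\}\\n([\\s\\S]*?)\\n@\\{appendix-end\\}`
  );
  const m = text.match(re);
  return m ? m[1] : null;
}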

Speaker4: Great. Robert.

Frode Hegland: Sorry. Mark, I’m done anyway, so you might as well go off mute. It’s cool.

Mark Anderson: Sorry. Noises off.

Speaker4: Don’t worry.

Mark Anderson: You know, I was just thinking, talking about find-on-page: the other thing is basically the number of people who don't actually use command-click, in other words, to branch when you're actually exploring, essentially, a network of things. Because gone are the days when browsers crashed if you opened more than about three tabs. I'm always amazed at how many tabs some people keep; some people shoot out the other end of the bracket with hundreds, but, you know, it's your own business. And I'll admit, often part of my working desktop is maybe 20 or 30 tabs that I've got open, because effectively that is the working knowledge branch I'm on at the moment, and the only thing that connects them is the links on the page. And that makes me think, in terms of XR, of things we could do, drawing a few threads together. So if I was looking at a document, I might want to say, oh, okay, I want to reuse, let's take reference documents, because that's public domain information anyway, so we're not stealing anyone's words at this point. But you might say, right, I want to take this, this, this, and I just want to literally cast them onto another object in my thing, which might be a paper I'm starting to write.

Mark Anderson: So, an object that represents another document, or even just something that can hold what I'm going to pass to it. That's quite interesting, because it fits entirely within the notion you've been pointing at: something like visual meta as a wrapper, where we may have doubts about what goes in, but the point is, once it's in there, it's broadly not going to be messed with anymore, and we can take it from there and put it somewhere else, and the one thing we won't have done in doing that is add to any corruption that's in it. So that's quite an interesting thing to experiment with. Now, whether you are putting that information out into, say, your reference manager... I know you're not a great fan of reference managers, but they are here, as PDFs are, so I don't think we can pretend they aren't there. And it's probably what most academics are using, albeit an amazing range of tools that are only bounded by the term "reference manager".

Mark Anderson: And if you put them all together, they look decidedly different, partly because some people weave their annotation space into their reference managers while others keep it separate, which causes a bit of further noise and difficulty in seeing what's the same. But one of the big skeins, I think, at the center of, well, not just academic work, to be honest. You know, we've got a lawyer here in the room. In the professions as well, where you may need to refer to, say, standards or legal constraints on what your organization is doing, you actually need to make a citation of a form. You may not think of it as what an academic would call a citation, but you're doing the same thing: you are absolutely stating that this thing I'm saying is underpinned by, or done in cognizance of, the thing at the other end of, essentially, this link. So I think there's something interesting we can play with there in the time we've got before September.

Frode Hegland: I have a big question in the chat here. But Brandel, please go first.

Speaker10: Thank you. So something Mark.

Brandel Zachernuk: Said that really set me off, which was, I think, to search one document with another document, or to use your tab groups as a tool to impose on another set of things. And that made me think about... you know, like a lot of people, I have been drawn into hardware and metalwork and woodwork YouTube. I haven't done that kind of stuff, I don't really have a handy bone in my body, but it's fascinating. One of the things that's fascinating is to see the inordinate amount of time that's spent on building tools, building jigs, constructing things to do future work. And in fact, that's a relatively common refrain within professional-grade Photoshop and Maya use as well; sometimes people literally... there's a tool called Box Cutter where you create jigs for cutting things into other shapes. And it's not just skeuomorphic, oh, isn't this funny, because it's what carpenters do; it's legitimately one of the most useful things you can do to intervene on polygonal surfaces and stuff like that. And that made me think: what are the jigs, what are the tools that we can construct to do knowledge work? What's a knowledge jig? I think that's a really, really interesting metaphor and lens through which to look at all of these things: to construct things to do things. Because a lot of the time we don't really have an ability to see that we have frames that we use; they're all sort of locked up safely in here.

Brandel Zachernuk: And to externalize them, to materialize the ways in which those lenses can be built, means that people don't have to. Yeah. Like, I'll search for a keyword, but if I have the ability to search for a whole set of keywords, you know, search for a dozen keywords at once rather than just one, then that would give me an inordinately better capacity to read over or make an analysis of the text. So that's something that I'm going to be carrying for a while. I think that's essentially what you were saying, Mark, but whether it was or not, I think it's brilliant.
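A minimal sketch of that jig: searching one document with a whole set of keywords at once, rather than repeating find-on-page a dozen times:

// Return every match position for each keyword, ready to be rendered as a
// heat map, a side rail, or colour-coded highlights.
function keywordLens(text: string, keywords: string[]): Map<string, number[]> {
  const hits = new Map<string, number[]>();
  for (const kw of keywords) {
    const escaped = kw.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    const positions: number[] = [];
    for (const m of text.matchAll(new RegExp(escaped, "gi"))) {
      if (m.index !== undefined) positions.push(m.index);
    }
    hits.set(kw, positions);
  }
  return hits;
}

// e.g. keywordLens(paperText, ["metadata", "citation", "DOI", "checksum"])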

Speaker4: So. I don’t know.

Frode Hegland: I'm going to feel free to go over a little bit today, because we started earlier UK time. So it's entirely up to you if you want to finish at five or go on a bit; but, you know, I'm spontaneous, I'm not a very organized guy. So I'm going to read out the question that I put in here, and this is to all of you: what might be the most impressive experience we could provide in September, given no real constraints? And you may reply in whatever terms you like, whatever aspect you like. What could be something you'd be so proud of having been part of, and something so clear to other people?

Frode Hegland: Imagine the room full of nerds, academics, hypertext, people usually relatively casual. Some people walk in late. There’s a projector. The headset is available in the hallway. The lights go down. What do we say?

Dene Grigar: May I ask a question? Why are we talking about this right now? Because I don't think we're at a place where we can know what we're going to say. We're still in the experimental, exploratory stage. I think we need a little bit more time to figure out what the opportunities are. We've just been at it a few weeks, right? Six, eight weeks.

Frode Hegland: I think you're completely right, Dene. The way that I like to design software, though, is that I write the ad first, then try to do the user guide, and then look at the spec. So it's just an exercise to do every couple of months. This is absolutely not an attempt at finding a final anything. It's just an attempt to put us back in the room for a little bit and fantasize. Maybe something useful will come out of it, most likely not, but it'll be nice to see in a few months how it might have changed. Because maybe we're a bit tunnel-visioned now. Maybe not. I really don't know.

Dene Grigar: Well, I'm not meaning to push back, but just to say that the difference between this project and software development is that we're not putting something on the market. We've said this a thousand times to other people that have tried to get us to think about this as a product, but it's not. This is an exploratory research project, funded by a grant that allows us the opportunity and the funding to play and to think and to imagine. So I think that is an important thing to keep remembering, so that we don't put out an ad first.

Speaker4: Okay.

Dene Grigar: I'm trying not to be difficult. I'm just saying.

Speaker4: A better.

Dene Grigar: Question might be.

Speaker4: I don’t.

Frode Hegland: Consider you difficult. I’m not saying we should do exactly the same. I’m not talking about an ad, but also this is not in terms of shaping one piece of software. It may very well be that the presentation will involve things we haven’t built.

Speaker4: So it.

Frode Hegland: Is loose. So what other question would you suggest then? I would really love to. Well, I think.

Dene Grigar: That's what we should come up with: what is the target, the proposed targets, that we can think about? What kinds of uses, what possibilities are there for academics to employ this headset environment, this amazing technology, to think about how we actually can improve the way we work? It goes back to the research question that I proposed last Wednesday to everybody for the case studies. You know, what can we get done in time to talk about this and show people? Maybe it's a video, but it's also where we're leading people for the usability testing. I'm just one case study. We want to see what other people think. It'd be really great to have someone like, I don't know, Mario as our test person, to say, well, I can use it like you're saying, but I can see this, this and this, and we write that down and then ask: how do you imagine that occurring? What is the process for that?

Frode Hegland: So I don’t see those questions being in competition of priority at all. I’m just meaning to say if there are any

Dene Grigar: And I know Doug was a wonderful person, and I teach that video every semester; I teach 375 and 201, and I've been teaching it for years. Right. The Mother of All Demos. Fantastic. Important. But I'm not, you're not, we're not Doug Engelbart. We're not introducing the mouse.

Speaker4: Well, I don’t know about that.

Frode Hegland: I don't know about that. And also, Doug wasn't meaning to introduce the mouse there; that, as of course you know, wasn't his key thing. And one thing Doug talked about a lot in his later years, and finally admitted, was that if he spoke up, people would say, well, who are you? Right. I'm not saying that I, or we, have massive egos and are going to change the world that day. Absolutely not. But I do think we have a responsibility, since we're focused here, of at least accepting that we're working towards that. That's all I mean. So we should absolutely do the scenario, absolutely do everything you said, no question about that. I'm simply asking whether anybody here has specific ideas for what might be impressive. And Mark has his hand up.

Mark Anderson: I do. And I'm sorry, I can't take credit; this is something Brandel said. But when you talked about jigs, I thought, oh, right, yes, this is interesting, actually. One of the challenges with exploratory knowledge work is that you're having to deal with the imagined as well as the real. So you can't do the shtick of being a network graph engineer and say, all I've got to do is connect all the nodes up in a sexy fashion. No, it's more complicated than that, because there are things that sort of exist, things that you imagine might exist, or linkages that you think might exist, and you have got to explore these. And if you're being open-minded, most of the time you don't know at the outset which will work and which won't, and it's nothing to do with them being necessarily good or bad. It's just, rather as we've discovered with some of the interesting things here, who'd have thought gestures would be difficult in XR? So there is all this to go through. So I think that's something genuinely in the center of what we're doing that it would be impressive to make an impact on, not least because it's actually quite hard to do in the sort of 2D working space we have at the moment. I mean, generally, I think most of us do that looser stuff either on a piece of paper or in our bonce, you know, because what else are you going to use?

Speaker4: The

Frode Hegland: The question Leon asks there in the text is: is there a sense of the top annoyances reported by academics? When I did my mini scoping survey for my thesis, the thing that came up as the biggest annoyance for academics was seeing connections, in many different ways. And that seems so aligned with what we're working on here. And I just also wanted to say: the thing in Doug's demo that was amazing, I think, wasn't one thing. It was the sequence of many things that were possible, including, you know, when you saw the map of driving to buy milk and so on. See you later, Dene; see you Wednesday.

Speaker4: You.

Frode Hegland: Know, so it wasn't the one thing. And that's why I'm excited about the idea of us presenting not one thing, but maybe a main thing and then some community things, all related. Brandel, Apple came back saying they need information for Author for Vision. They're asking how the Mac version and the Vision version work together. So that's nice. I can just say iCloud.

Speaker4: That’ll be easy. But let me just.

Frode Hegland: Prod you guys, then. Okay, I'll start with Mark, since he's the most academically academic at the moment. Imagine you're sitting there, because maybe it's Dene presenting. You're sitting in the audience, you worked on this, and there's something coming up that you are really excited about, the rest of the people seeing you sitting there like: this is so cool. What might that be?

Mark Anderson: Well, I just explained it and we've talked right past it, so I don't know what else to suggest. Okay: actually being able to tease things apart and do this exploratory work, which absolutely aligns with a sense of... What that implies is that you've got to have something to hold it, which is why this notion of the jig, I think, is relevant. It's challenging, because you're building a jig for something that doesn't necessarily exist in any structural form you understand, but that you have some sense of. So it's getting in and actually exploring, rather than trying to imagine the outcome: actually exploring what it means to tease something apart. Okay, you can pull it to bits; now you've got lots of bits. Are those bits actually meaningful? We don't know till we've tried. If we can make bits, are some bits more meaningful than others? If so, how do we identify what those bits are? And, in keeping with what Dene said about what the grant is doing, it might be that nothing we have at the moment arrives in a form that allows it to be constructed and deconstructed in the manner we'd like to show. But hopefully we're making a case to say: one of the things we need to do is start making things in this form. We'll have a problem with legacy stuff, but that should not stop us; otherwise we get into rear-view-mirror thinking and we're always doing the slightly faster horse and cart, which is something to which we're all prone.

Mark Anderson: But I do think, and I think this also maps into, and this is probably why there was confusion early in this discussion about whether one does or doesn't want a background for something: if the background isn't part of what you're trying to think about, it can be in the way. Whereas the counter-example that sprang to mind was Brandel saying, okay, that's nice, but I'm going to go and lie on top of the hill and read the book. That's a really good example where actually having that extra environment is really useful, because it's feeding into your mood and your concentration in a different way; it's working in a positive way rather than against you. So there's no one-size-fits-all. But I absolutely think we should give a little more consideration to the deconstruction, to the extent of saying: okay, what does that deconstruction look like? And are there elements within the source, for want of a better word, the source documents we have, that we could use and are not thinking about? In other words, are we just looking at the easy low-hanging fruit, like, oh, there's a section at the end called references, well, that's obviously a part? I just don't know; I'm not sure we've prodded at it hard enough. But being able to do that, and building ourselves tools by which we can effectively transfer thoughts, or proxies for thoughts, from one locus, one object, to another, I think is quite beguiling.

Frode Hegland: That's really perfect. I'm very happy with you and Brandel talking about the deconstruction, and you got to the nub of it at the end there: what does it look like? I think we need to really, really investigate that. Brandel, please continue.

Speaker4: Yeah.

Brandel Zachernuk: So I've been still on jigs, still thinking about what that sort of intermediary step is, you know, between people having the ability to build an entire Ryobi drill. There is a spectrum that people are involved with in digital stuff. It's mostly not in text, obviously; it's in Photoshop. Well, no, there are macros and actions and things like that in Word, and more often in Excel, because it has slightly lower-hanging fruit in the form of highly automated, well-formed kinds of structures that people can run repetitive actions over. But yeah, the concept of a jig; the thing that was burning a hole in my mind the other week, month, whatever, the fact that a browser is nominally known as a user agent; and also the idea of a browser extension: the combination of all of those thoughts swirling together is the idea of being able to build ad hoc macros or actions that provide lenses through which you can understand things, and promote and represent things. That's a really, really interesting one. Having the ability to intervene in these ways, and maybe having to do so via AI, or by inference, repetitive inference over a couple of documents, saying: I did this to this, and I do that to that.

Brandel Zachernuk: Can you see what that is across everything, please? Yeah. I think that there's something really powerful there. I would say it's not essential for that to be in VR, but it is vastly enhanced by having the space to do so. And that's something that, as I was deconstructing reading and writing for the last ten years, I came back to a number of times: none of these things is essential to be done in VR per se, but they're all elevated by having the space, and the sort of gestural richness of being able to put these things in a place where they're ready to hand, but not necessarily front and center of view. Because we still need to see the way we see, and we just need to know that there are alternatives that exist within reach.
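
To make the "lens" idea concrete: here is a minimal sketch, in TypeScript, of what such a user-agent-side macro might look like over an ordinary web document. This is not anything the group has built; the selector, the highlight rule, and all names are illustrative assumptions.

```typescript
// A "lens" is a named, user-defined transformation applied over a document,
// run from the user's side (e.g. a bookmarklet or extension content script).
type Lens = {
  name: string;
  // Decide which elements this lens cares about.
  select: (root: Document) => HTMLElement[];
  // Re-present each match without destroying the original text.
  apply: (el: HTMLElement) => void;
};

// Example lens: surface every outbound citation-like link so it can be
// scanned at a glance — one trivial instance of "promote and represent".
const citationLens: Lens = {
  name: "highlight-citations",
  select: (root) =>
    Array.from(root.querySelectorAll<HTMLAnchorElement>("a[href^='http']")),
  apply: (el) => {
    el.style.outline = "2px solid orange"; // crude, but visible
    el.title = `[${citationLens.name}] ${el.getAttribute("href")}`;
  },
};

function runLens(lens: Lens, doc: Document = document): number {
  const matches = lens.select(doc);
  matches.forEach(lens.apply);
  return matches.length; // how many elements the lens touched
}

// Usage: runLens(citationLens);
```

The point of the sketch is only the shape: a lens is small, composable, and belongs to the reader rather than to the page.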

Frode Hegland: The notion of a jig is so lovely, because this whole discussion today started a bit roughly on the notion of environments. And there you go: a jig is an environment, right? Sometimes you may choose to work on the moon. Fine, that should be your choice. But sometimes you may want to have things there, now. So I can imagine, building on this, that we get on stage and there is a huge projection of what's seen in the workspace. Someone opens an ACM document and it explodes into an analysis that that user has chosen, using jigs to, one, go through all the references to find out something or other, and all these different things. And then you can modify and move that around a lot. But the point being that you don't have to start by pulling every single thing out: you have this special opening view. So maybe, and I know this is not exactly new for us, but just for context here, maybe you say, I am now opening a document that I'm reading for my field, or, I am now opening a student document, and it'll do different things, right? And, I am now trying to communicate this thing. These are probably some of the workflows we should consider looking at. Mark.
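
A hedged sketch of that "opening with a purpose" idea: a registry of jigs keyed by the reader's declared purpose, so the same document explodes differently depending on why it was opened. Every type and name below is hypothetical, written only to fix the shape of the idea.

```typescript
// Hypothetical: a jig registry keyed by the reader's declared purpose.
type Purpose = "field-reading" | "student-marking" | "presenting";

interface ParsedDocument { title: string; sections: string[]; references: string[]; }
interface Panel { label: string; content: string[]; }

interface Jig {
  // Given a parsed document, return the parts to lay out, in order.
  explode: (doc: ParsedDocument) => Panel[];
}

const jigs: Record<Purpose, Jig> = {
  // Reading in my field: references first, to situate the work.
  "field-reading": {
    explode: (doc) => [
      { label: "References", content: doc.references },
      { label: "Body", content: doc.sections },
    ],
  },
  // Marking a student document: body first, references as a side check.
  "student-marking": {
    explode: (doc) => [
      { label: "Body", content: doc.sections },
      { label: "References", content: doc.references },
    ],
  },
  // Presenting: title and sections only, pared down for an audience.
  presenting: {
    explode: (doc) => [{ label: doc.title, content: doc.sections }],
  },
};

function openWith(purpose: Purpose, doc: ParsedDocument): Panel[] {
  return jigs[purpose].explode(doc); // "it'll do different things"
}
```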

Mark Anderson: Yes. One other thing that occurred to me that we haven't really thought forward on: I'm reminded how annotation, that word, or highlighting, are so deeply rooted in the notion of you basically drawing on a bit of paper. So what is digital annotation, if I free it from the stricture that it has to be about drawing a box around a bit of text? Which is not to say that that's wrong or not useful, but if you want to dream and look ahead, let's think about it. How might I do this? What would I be doing? The thing that is the annotation, this atom of an idea: how does it exist in the space? How does it relate to the thing that it's talking to? Because a useful thing is that I might, for instance, have an idea, and the crystal of that idea is, say, drawing a connection between five different points in a narrative. Well, I guess you could highlight five different parts of the narrative and, you know, put a yellow box around them and then write some text on the side that relates them. But I'm just wondering if we are trying to look more flexibly at using the space to do things that we cannot do on paper.

Mark Anderson: I think that's another quite interesting part. And I'd just pick up an interesting thought that came in from what Frode just said, which is that expanding for a purpose is almost a jig in itself. I think one of the challenges I see in this is that we can all hear one of our number saying, right, we'll explode this, but I'm just wondering what that actually means when you take it from where we are, which essentially is text, in whatever form, in a linear narrative, to something that we are positing has no single linear form. So the exploding is the difficult bit. Imagining what you might do with it, dare I say, is comparatively easy, because we can imagine that bit. The hard thinking bit, and the bit for me which will be the wow moment, is doing that exploding and being able to show how and why it was done. It doesn't mean it can't be made better in the future, but that's something you definitely can't do now. If we could show how, or even begin to show how, that might be done, and what can be built off the back of it, because, you know, it takes time to build these things, I think that would be tremendously powerful, because it's demonstrably something that doesn't exist at the moment.
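
One way to pin down Mark's question: an annotation freed from the box-around-text model might be an object whose body is a single idea and whose anchors are many. A hedged TypeScript sketch, with every field name invented for illustration:

```typescript
// Hypothetical shape for a multi-anchor annotation: one idea, many loci.
interface Anchor {
  documentId: string;   // which source document
  start: number;        // character offset into the document text
  end: number;
  role?: string;        // e.g. "evidence", "counterexample"
}

interface SpatialAnnotation {
  id: string;
  body: string;         // the atom of the idea itself
  anchors: Anchor[];    // five points in a narrative, or fifty
  position?: [number, number, number]; // where it lives in the 3D space
}

// The annotation connecting five points Mark describes is then just:
const fiveWayLink: SpatialAnnotation = {
  id: "ann-1",
  body: "These five passages form one argument about deconstruction.",
  anchors: [1, 2, 3, 4, 5].map((n) => ({
    documentId: "doc-A",
    start: n * 1000,    // placeholder offsets
    end: n * 1000 + 80,
  })),
  position: [0.4, 1.2, -0.6], // floats to the reader's left, say
};
```

Nothing here requires VR; the spatial position is just one more property, which is rather the point of the preceding exchange.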

Frode Hegland: Yeah, absolutely. Brandel, please.

Brandel Zachernuk: Yeah. That reminds me of something that's a common refrain in places that variously make and critique future vision videos: a lot of the time, you know, Microsoft's best depictions are tantamount to screenshots or video captures of the people in the high-powered department drawing circles around things. And that could be true, but it's not definitely true that that's an important thing these very, very wealthy-looking people ought to be able to do. But it comes back to Mark's point: what does that circle mean? Why is it important? Why does it require the brilliance of that person to put the circle around that and not that? What is it about this performance, functionally, that has semantic relevance to the individual and to the organization in which they're situated? And it's that intent, what you want to do with it, what is going to come out of this action, that I think is missing in a lot of those visions. And the reason is that they don't know. It's a shorthand, functionally, for what would be a useful epistemic action to undertake with the information they're presented with. But there needs to be a better answer than: this is a stand-in, a functional encoding of some kind of relevant intellectual activity over some kind of future substrate. The way we get to that is with intention, with goals and visions of what are reasonable and important things to be able to do. That's not an answer, but it's just absent in so many parts of that vision.

Mark Anderson: I mean, do you know of an area where people... The truth is, I don't know where you'd start to look for the people who must be out there thinking on this as a concept, if you see what I mean. Sitting at the front of it saying, okay, the way you deconstruct this is the following. I think it's really hard for us to step back from the physicality of the world. We're dealing with a non-physical environment we're creating, and it's just really, really hard to imagine that not in terms of physical objects.

Speaker4: Yeah, well.

Brandel Zachernuk: There are places where people are having to arbitrarily move their views around, and 3D modeling is a pretty apt example. There's ample video footage online, in the same way that there are YouTube restoration videos and lathe videos and all of those other things, of people using Blender for various purposes: for sculpting, for hard-surface modeling, things like that. And one of the things that is really interesting is to watch not necessarily the object being made, but the way it's being made, and the views that they're obliged to go into. Very occasionally people have to go into what is functionally an almost fully capable version of Excel inside Blender, to deal with the spreadsheet of all those numbers. It's pretty bewildering and pretty buried. But there are a lot of different ways that people look at stuff, and a lot of different composite views that they construct in order to be able to read things. There's this one about how to make wrinkles on a worm, or on a face; in order to do that, they also need to create these compression maps, where you see that it's blue in one direction and red in the other, depending on the relative size of the triangles of that part of the mesh compared to its rest pose, which is how much skin should actually be on it.

Brandel Zachernuk: And yeah, it's amazing, because these are like what Kirsh talks about with epistemic versus pragmatic action. They're indistinguishable in places, because at some level it's all epistemic, in order to be able to construct these views. So I think 3D is a good one. Coding doesn't have much of it, not that I've seen, in terms of what people are doing to augment their views of it. I would be interested in finding more as well. But there are a couple of videos that jump out at me as being interesting, and I'll drop them into the Slack.
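
The blue/red maps Brandel describes are, at heart, a per-triangle comparison of deformed area against rest area. A minimal sketch of that computation; the color convention (red for stretched, blue for compressed) is an assumption, since tools differ.

```typescript
// Per-triangle stretch factor: area in the deformed pose divided by area
// in the rest pose. > 1 means stretched, < 1 means compressed.
type Vec3 = [number, number, number];

function triangleArea(a: Vec3, b: Vec3, c: Vec3): number {
  const u: Vec3 = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
  const v: Vec3 = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
  // Half the magnitude of the cross product u × v.
  const cx = u[1] * v[2] - u[2] * v[1];
  const cy = u[2] * v[0] - u[0] * v[2];
  const cz = u[0] * v[1] - u[1] * v[0];
  return 0.5 * Math.hypot(cx, cy, cz);
}

function stretchFactor(
  rest: [Vec3, Vec3, Vec3],
  deformed: [Vec3, Vec3, Vec3]
): number {
  return triangleArea(...deformed) / triangleArea(...rest);
}

// Map the factor to a color — an assumed convention, not Blender's API.
function stretchColor(factor: number): string {
  return factor > 1 ? "red" : factor < 1 ? "blue" : "white";
}
```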

Frode Hegland: But Brandel, it would be really cool if you did something with your headphone microphone, because your sound is actually quite a bit worse than the rest of the group's, weirdly enough. Do you know if it's the noise reduction setting or something you're using, or is there something else?

Brandel Zachernuk: Sometimes there’s a heater on. It’s not on at this instant.

Frode Hegland: No, but I mean, are you using voice isolation or standard?

Brandel Zachernuk: For the... I don't know what those things are.

Frode Hegland: If you go up on your Mac, under the green Zoom icon, and go down, there is a mic mode option there.

Brandel Zachernuk: I'm on standard now. Is that better?

Frode Hegland: It's okay, but... yeah.

Speaker4: Your words are so important.

Frode Hegland: If we want to hear them well.

Mark Anderson: This feels like the conversation where the guy asks what settings were used for that picture, and the chap says, I don't know, it could have been mountains or big people, I'm not sure.

Frode Hegland: Something like that. So I think we made some progress in this talk, going from the notion of jigs and deconstruction to what might that actually look like and interact like. We've talked about maybe having a day where we only discuss interactions; I don't know what the appetite in the group is for that. We'd get a lot of the politics out of the way, so to speak, and it's just, hey, look at this interaction or that interaction. What do you all feel?

Speaker4: What?

Frode Hegland: You, Peter, and Fabien specifically, since you're the quietest at the moment: what do you feel about having an extra day just to talk about interactions and XR?

Fabien Benetou : If I have an unlimited number of days, yes. Otherwise, if it could be now... I mean, not today, but yes. I was thinking, in fact, of the demo that somebody did with Unity, which we shared in Slack and which I installed on my Quest a couple of days ago. He posted another update, I think, on Twitter, and Adam shared it back. And I think, yes, that's the kind of thing that we should try and discuss, but for the added value of both the project and our own shared interest: like, okay, we tried it, it is good, not good, but could it be useful for knowledge work? I think that would be an interesting intersection, where we can not just chat, but do something about it based on our interest and competency here. So yes.

Frode Hegland: Okay, we will consider talking about that. But for today I think we’re done.

Speaker11: This meeting is being recorded.

Frode Hegland: So yeah, we have another few minutes. And I asked Fabien about testing and releasing demos, and he said more frequently. Is that what you said, Fabien?

Fabien Benetou : Yes, basically. More frequently, and also with the slightest amount of documentation, just enough so that there will be some interest. Even I, who am quite interested in the topic, need a little bit of excitement, something to spark my curiosity. It can be a single image with a single line of text, just enough to see: oh, is this something that I've never done before? And then, ideally, what is actually being tested. Because I think in a couple of months, for September, for example, people will be pressed for time. They will be curious; they will have a different amount of knowledge, namely more about research and less about XR, probably; but they will also have other things to do. So I think being able to train not just on how to build a demo, but on how to deliver the demo, would be useful. And again, I think it would help to grow the community. Maybe I would find it good or not good; it doesn't matter much. But then other people will go, oh, I want to try this, and I want to see how it's being built. And they will also have a position about whether to use hands or not use hands, have a background or not have a background visible, etc.

Fabien Benetou : And I think that's useful feedback, because what we probably don't want is, in September, somebody repeating the same thing that someone else here said three months ago. The whole point, in my opinion, is for us to learn, and there's absolutely no need to wait. So that would be my recommendation. A good example, to finish: on Slack I shared, I think, or maybe it wasn't me, I forget, to be honest, somebody doing Unity demos with different interactions. The last one, I think, was reshared recently, with shooting arrows at buttons, basically. And I think he's doing a good job: he has an open repository on GitHub, but he doesn't have just a list of APKs; he has a single image or a GIF for every demo, and it takes a couple of minutes to try, and then we can see right away whether it is interesting or not. I think we should be able to do this in our own context. Yeah, that would be my suggestion.
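
Fabien's release recipe reduces to one tiny, uniform record per demo. A sketch of what such an index entry might hold; all field names and URLs are illustrative placeholders, not an existing format.

```typescript
// Hypothetical shape of one entry on a demo index page: the "slightest
// amount of documentation" described above, and nothing more.
interface DemoEntry {
  title: string;        // one line of text
  image: string;        // URL of a single image or GIF
  apk: string;          // direct link to the build
  whatIsTested: string; // e.g. "selection at a distance"
  date: string;         // ISO date of the build
}

const example: DemoEntry = {
  title: "Shooting arrows at buttons",
  image: "https://example.org/demos/arrows.gif", // placeholder URL
  apk: "https://example.org/demos/arrows.apk",   // placeholder URL
  whatIsTested: "projectile-style selection at a distance",
  date: "2024-03-11",
};
```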

Frode Hegland: Yeah, that makes perfect sense. I just put a link in here to our testing page. Every Wednesday morning there's a new build. Well, mostly; so far there has been. So, yeah, absolutely. Okay.

Mark Anderson: One quick one. I think what Fabien was saying there is really interesting and useful, and it's something I've sort of taken on board in trying out the weekly builds. I hesitate to call it documentation; it's just the fact that when people take it cold, they will, sure enough, put it on standing up when they should be sitting down, or they'll be facing the wrong way, or something we didn't think of. So it's us getting used to what is the minimum amount of stuff, bearing in mind that most people will not be interested enough to listen to what you say, that we absolutely need them to understand. Just so they don't put it on, find nothing seems to be happening, say, well, it's obviously rubbish, and take it off again, and the moment passes. It's a weird thing, because it's definitely not slabs of documentation, but it's just enough: you should expect to see this, or the environment expects you to be in a particular position, sitting or standing, whatever. Not too many things. But I suspect that's a practice we can get well honed, certainly by September, because it does need repetition to work out what's not necessary and what we're leaving out.

Frode Hegland: Yeah, absolutely. By the way, Fabien, if you have a chance, look at that page when you're in the headset; that links straight to our current demo. So that's absolutely useful.

Fabien Benetou : Quick word also on this. I think it would be interesting if, for some of those demos, people would accept a call, like now, to try them and comment live about how they feel. As a germ of user testing, it's probably the second best thing after being right next to the person.

Speaker4: Yeah.

Frode Hegland: No, absolutely. It's important to put it on academics' heads especially; that's absolutely right. But, you know, one thing that I think was so important about Doug's demo was that it wasn't one thing, it was a sequence of things. So I think we can break the demo into this, this, and this, Andrew. Oh, and also, ideally with the same data, the same document: this, this, and this; Fabien, as you know, into his world, in a limited or full set, and so on. I think that's very important. Peter.

Peter Wasilko: Also, I think people have lost the thread of the idea that people have to be trained, to a certain extent, to make the best use of tools. As we got into the "there's an app for that" era, the effort to dumb down the end user has been kind of staggering, and we should try to push back against that, to get to the notion that you might not be able to instantaneously understand everything that's going on when you're watching this demo, but know that if you invest a certain amount of time in mastering these concepts, you get a rich set of tools that can be leveraged and combined in interesting ways. So it's worth it, and you should be willing to invest that time and consider it a powerful tool that you need some training on, as opposed to just a push-button for a final result.

Frode Hegland: Yeah, absolutely. I do agree with that.

Speaker4: So.

Fabien Benetou : Yeah.

Frode Hegland: Just looking at the document you sent. Right, anything else for today?

Fabien Benetou : Very quickly: I went to the demo page, and it's great, because there are videos, descriptions, etc. I clicked on the thing about the code, because I wanted to see what the code was, and it did not work; I couldn't reach the repository. Which is fine, it's not a problem. Having access to the repository obviously would be nice, though, and I think I understand that's the expectation. It also helps with, maybe, opening issues: to say, oh, it works on my device, it doesn't work on my device, I have ideas, etc., because it cannot always be synchronous. Also, what I had in mind: I just shared with somebody one of my repositories, for basically being able to browse OpenStreetMap in XR, and I was able to select specifically the line I wanted to share with that person and say, this is how I did it; it's done this way specifically. And I think here it would be useful to be able to extract: okay, this is the new function, this is the way to add a citation, etc., and the code is done this way. Because here, very specifically, it's about learning together through building. And if we can extract the novelty of how it's done, even though it probably cannot just be copy-pasted into another environment, it basically increases the ability to test faster and to combine. I would highlight the novel code specifically.
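
The line-level sharing Fabien describes maps onto GitHub's permalink convention: pin a file to a commit and append a line-range fragment (`#L10-L20`). A small helper to build such links; the repository, commit, and file names in the example are placeholders.

```typescript
// Build a GitHub permalink to specific lines, pinned to a commit so the
// link keeps pointing at the same code as the repository evolves.
function lineLink(
  repo: string,    // "owner/name"
  commit: string,  // full or abbreviated commit SHA
  file: string,    // path within the repository
  from: number,
  to?: number
): string {
  const range = to && to !== from ? `L${from}-L${to}` : `L${from}`;
  return `https://github.com/${repo}/blob/${commit}/${file}#${range}`;
}

// e.g. lineLink("someone/xr-osm", "abc1234", "src/map.js", 10, 20)
// -> "https://github.com/someone/xr-osm/blob/abc1234/src/map.js#L10-L20"
```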

Speaker4: Yeah.

Frode Hegland: You should have access, indeed. He asked on Slack if we could give our usernames for GitHub or whatever, so please reply to Dene and also Andrew on Slack saying, give me access, please, and you'll get it. Adam didn't think he had it, and then he was surprised, a few days ago, to find out that he did. So that's a very reasonable thing. So, Peter, in our closing minutes.

Peter Wasilko: Yes, final thought. I'm still dealing with tooling automation, and it's a bit of a slog, but I found a new tool that looks like it's going to be very useful. It's called Moon Repo, and it's a monorepo manager and support system; I just dropped the link in the sidebar. I'm basically trying to bring all of my separate projects into one monorepo, organize things, and automate them, so that you'll just be able to use the Moon control panel in VS Code to click one button to automatically run M4 against everything that has M4 macros in it, and another button to regenerate parsers from grammar source files. And it's very nice because it helps you keep track of task dependencies and automatically fires off all of the prerequisites before running things. So it seems to be a nice, elegant solution that so far is working. I don't know if it'll blow up on me yet or not, but if I can get everything working, I'll share it with the group and let you know how it turned out.
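
The "automatically fire off all of the prerequisites" behavior Peter describes is, underneath, a dependency-ordered walk of a task graph. A generic sketch of that idea; this is not Moon Repo's actual API, just the mechanism such runners share.

```typescript
// Generic sketch of dependency-ordered task running. Names illustrative.
interface Task {
  name: string;
  deps: string[];      // names of tasks that must run first
  run: () => void;
}

function runWithPrerequisites(tasks: Map<string, Task>, target: string): void {
  const done = new Set<string>();      // tasks already run
  const visiting = new Set<string>();  // cycle detection

  const visit = (name: string): void => {
    if (done.has(name)) return;        // run each task at most once
    if (visiting.has(name)) throw new Error(`dependency cycle at ${name}`);
    const task = tasks.get(name);
    if (!task) throw new Error(`unknown task ${name}`);
    visiting.add(name);
    task.deps.forEach(visit);          // prerequisites first
    visiting.delete(name);
    task.run();
    done.add(name);
  };

  visit(target);
}

// e.g. a "generate-parsers" task depending on "run-m4" runs M4 first, once.
```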

Frode Hegland: Yeah, thank you, Peter. Any other thoughts or comments? So if anybody has more specific ideas for the demo, on whatever level, do not keep them secret.

Frode Hegland: So, see some of you on Wednesday, some of you on Monday. Well, hopefully all of you.

Speaker4: At least take care. Bye bye.

Peter Wasilko: Have a good one, guys. All right.
